Label Leakage and Protection in Two-party Split Learning
1 INTRODUCTION. With increasing concerns over data privacy in machine learning, federated learning (FL) (McMahan et al., 2017) has become a promising direction of study. Based on how sensitive data are distributed among parties, FL can be classified into different categories, notable among which are horizontal FL and vertical FL (Yang et al., 2019). In contrast to horizontal FL, where the data are partitioned by examples, vertical FL considers data partitioned by features (including labels). As a canonical example of vertical FL, consider an online media platform A which displays advertisements from company B to its users and charges B for each conversion (e.g., a user clicking the ad and buying the product). In this case, the two parties hold different features for each user: A has features on the user's media viewing records, while B has the user's conversion labels. B's labels are not available to A because each user's purchase behavior happens entirely on B's website/app. If both parties want to jointly learn a model to predict conversion without data sharing, split learning (Gupta & Raskar, 2018; Vepakomma et al., 2018) can be used to split the execution of a deep network between the parties on a layer-wise basis. In vanilla split learning, before training begins, both parties use Private Set Intersection (PSI) protocols (Kolesnikov et al., 2016; Pinkas et al., 2018) to find the intersection of their data records and achieve an example ID alignment. This alignment paves the way for the split training phase. During training (Figure 1), the party without labels (the non-label party) sends the intermediate-layer (cut layer) outputs rather than the raw data to the party with labels (the label party), and the label party completes the rest of the forward computation to obtain the training loss.
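A toy sketch of the *outcome* of the PSI step described above. Real PSI protocols (e.g., Kolesnikov et al., 2016) compute the intersection cryptographically without exposing non-intersecting IDs; the user IDs below are purely hypothetical, and the plain set intersection stands in for the protocol's result:

```python
# Hypothetical user-ID sets; a real deployment would run a cryptographic PSI
# protocol rather than exchanging these sets in the clear.
party_a_ids = {"u1", "u2", "u3", "u5"}   # non-label party's users
party_b_ids = {"u2", "u3", "u4", "u5"}   # label party's users

# The agreed-upon, consistently ordered example ID alignment both parties
# use for the split training phase.
aligned = sorted(party_a_ids & party_b_ids)
print(aligned)
```

It is this shared ordering that later lets the non-label party match each returned gradient row to a specific example.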
To compute the gradients with respect to model parameters, the label party initiates backpropagation from its training loss and computes its own parameters' gradients. To allow the non-label party to also compute gradients of its parameters, the label party additionally computes the gradients with respect to the cut layer outputs and communicates this information back to the non-label party. As a result of the ID alignment, despite not knowing the label party's raw label data, the non-label party can identify the gradient value returned by the label party for each example. At first glance, the process of split learning appears privacy-preserving because only the intermediate computations of the cut layer (rather than raw features or labels) are communicated between the two parties. However, such "gradient sharing" schemes have been shown to be vulnerable to privacy leakage in horizontal FL settings (e.g., Zhu et al., 2019). In vertical FL (and specifically split learning), it remains unclear whether the raw data can similarly be leaked during communication. In particular, as the raw labels often contain highly sensitive information, e.g., what a user has purchased (in online advertising) or whether a user has a disease (in disease prediction) (Vepakomma et al., 2018), developing a rigorous understanding of the threat of label leakage and its protection is particularly important. Towards this goal, we make the following contributions: 1. We formalize a threat model for label leakage in two-party split learning in the context of binary classification (Section 3.1), and propose specific privacy quantification metrics to measure the severity of such threats (Section 3.2). 2. We identify two simple and realistic methods within this threat model which can accurately recover the label party's private label information (Section 3.3). 3.
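The forward and backward communication pattern described above can be sketched with a minimal linear f and h (all shapes, parameters, and data here are hypothetical stand-ins; the real f and h are deep networks):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
X = rng.normal(size=(8, 10))          # non-label party's raw features (hypothetical)
y = rng.integers(0, 2, size=8)        # label party's private labels
Wf = rng.normal(size=(10, d)) * 0.1   # f's parameters (non-label party side)
wh = rng.normal(size=d) * 0.1         # h's parameters (label party side)

# --- forward: only the cut-layer output crosses the party boundary ---
Z = X @ Wf                            # f(X), shape (B, d)
logit = Z @ wh                        # label party finishes the forward pass
p1 = 1.0 / (1.0 + np.exp(-logit))     # predicted positive probability

# --- backward: label party returns per-example cut-layer gradients ---
dL_dlogit = p1 - y                    # dL/dlogit = p̃1 − y
g = dL_dlogit[:, None] * wh[None, :]  # g = ∇_{f(X)} L, one row per aligned example
grad_Wf = X.T @ g                     # non-label party backprops through f locally
```

Note that with a linear h, every row of g is a scalar multiple of wh, which already hints at the gradient-direction structure the attacks discussed later exploit.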
We propose several random perturbation techniques to limit the label-stealing ability of the non-label party (Section 4). Among them, our principled approach Marvell directly searches for the optimal random perturbation noise structure to minimize label leakage (as measured via our quantification metric) against a worst-case adversarial non-label party. 4. We experimentally demonstrate the effectiveness of our protection techniques and Marvell's improved privacy-utility tradeoffs compared to other protection baselines (Section 5). 2 RELATED WORK. Privacy leakage in split learning. Although raw data is not shared in federated learning, sensitive information may still be leaked when gradients and/or model parameters are communicated between parties. In horizontal FL, Zhu et al. (2019) showed that an honest-but-curious server can uncover the raw features and labels of a device by knowing the model architecture, parameters, and communicated gradient of the loss on the device's data. Building on their techniques, Zhao et al. (2020) showed that the ground-truth label of an example can be extracted by exploiting the directions of the gradients of the weights connected to the logits of different classes. Here we study a different setting: two-party split learning (in vertical FL) (Yang et al., 2019), where no party has access to the model architecture or model parameters of the other party. In this setting, Vepakomma et al. (2019) studied how the forward communication of feature representations can leak the non-label party's raw data to the label party. We instead study whether label information may be leaked from the label party to the non-label party during the backward communication. Despite the importance of maintaining the privacy of these labels, we are unaware of prior work that has studied this problem. Privacy protection and quantification.
Techniques to protect communication privacy in FL generally fall into three categories: 1) cryptographic methods such as secure multi-party computation (e.g., Bonawitz et al., 2017); 2) system-based methods including trusted execution environments (Subramanyan et al., 2017); and 3) perturbation methods that shuffle or modify the communicated messages (e.g., Abadi et al., 2016; McMahan et al., 2018; Erlingsson et al., 2019; Cheu et al., 2019; Zhu et al., 2019). Our protection techniques belong to the third category, as we add random perturbations to the gradients to protect the labels. Many randomness-based protection methods have been proposed in the domain of horizontal FL. In this case, differential privacy (DP) (Dwork, 2006; Dwork et al., 2014) is commonly used to measure the proposed random mechanisms' ability to anonymize the identity of any single participating example in the model iterates. However, in split learning, after PSI, both parties know exactly which example has participated in a given gradient update. As we explain in Section 3.1, the object we aim to protect (the communicated cut layer gradients), unlike the model iterates, is not an aggregate function of all the examples but is instead example-specific. As a result, DP and its variants (e.g., label DP (Chaudhuri & Hsu, 2011; Ghazi et al., 2021)) are not directly applicable metrics in our setting, and we instead propose a different metric (discussed in Section 3.2). 3 LABEL LEAKAGE IN SPLIT LEARNING. We first introduce the two-party split learning problem for binary classification, and then formally describe our threat model and privacy quantification metrics with two concrete attack examples. 3.1 TWO-PARTY SPLIT LEARNING IN BINARY CLASSIFICATION. Problem setup. Consider two parties learning a composition model h ∘ f jointly for a binary classification problem over the domain X × {0, 1} (Figure 1).
The non-label party owns the representation function f : X → R^d and each example's raw feature X ∈ X, while the label party owns the logit function h : R^d → R and each example's label y ∈ {0, 1}.¹ Let ℓ = h(f(X)) be the logit of the positive class, whose predicted probability is given through the sigmoid function: p̃1 = 1/(1 + exp(−ℓ)). We measure the loss of such a prediction through the cross-entropy loss L = log(1 + exp(−ℓ)) + (1 − y)ℓ. During model inference, the non-label party computes f(X) and sends it to the label party, who then executes the rest of the forward computation in Figure 1. Model training (Figure 1: backward gradient computation). To train the model using gradient descent, the label party starts by computing the gradient of the loss L with respect to the logit: dL/dℓ = p̃1 − y. Using the chain rule, the label party can then compute the gradient of L with respect to its function h's parameters and perform the gradient updates. To also allow the non-label party to learn its function f, the label party needs to additionally compute the gradient with respect to the cut layer feature f(X) and communicate it to the non-label party. We denote this gradient by g := ∇_{f(X)} L = (p̃1 − y) ∇_z h(z)|_{z=f(X)} ∈ R^d (by the chain rule). After receiving g, the non-label party continues the backpropagation towards f's parameters and also performs the gradient updates. Why not differential privacy? Note that for a given iteration, the non-label party randomly chooses B example IDs to form a batch. Therefore, the identity of which examples are used is known to the non-label party by default. In addition, the communicated features f(X) and returned gradients g will both be matrices in R^{B×d}, with each row belonging to a specific example in the batch.
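As a sanity check on the derivative stated above, the identity dL/dℓ = p̃1 − y for the loss L = log(1 + exp(−ℓ)) + (1 − y)ℓ can be verified against central finite differences (the probe logits are arbitrary):

```python
import math

def loss(logit, y):
    # Cross-entropy exactly as written in the text:
    # L = log(1 + exp(-logit)) + (1 - y) * logit
    return math.log(1.0 + math.exp(-logit)) + (1.0 - y) * logit

def analytic_grad(logit, y):
    p1 = 1.0 / (1.0 + math.exp(-logit))  # sigmoid probability p̃1
    return p1 - y                         # claimed dL/dlogit

eps = 1e-6
errs = [abs((loss(l + eps, y) - loss(l - eps, y)) / (2 * eps) - analytic_grad(l, y))
        for l in (-2.0, 0.3, 1.7) for y in (0, 1)]
print(max(errs))  # central finite differences agree with the closed form
```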
The different gradients (rows of the matrix) are not with respect to the same model parameters, but are instead with respect to different examples' cut-layer features; thus, no averaging over or shuffling of the rows of the gradient matrix can be done prior to communication while still ensuring correct computation of f's parameter gradients on the non-label party side. This example-aware and example-specific nature of the communicated gradient matrix makes differential privacy (which focuses on anonymizing an example's participation in an aggregate function) inapplicable for this problem (see also Section 2). 3.2 THREAT MODEL AND PRIVACY QUANTIFICATION. Below we specify several key aspects of our threat model, including the adversary's objective and capabilities, our metric for quantifying privacy loss, and the possible inclusion of side information. Adversary's objective. At a given moment in time during training (with f and h fixed), since the communicated cut layer gradient g is a deterministic function of f(X) and y (see Section 3.1), we consider an adversarial non-label party whose objective is to recover the label party's hidden label y based on the information contained in g for every training example. Adversary's capability. We consider an honest-but-curious non-label party which cannot tamper with training by selecting which examples to include in a batch or by sending incorrect features f(X); instead, we assume that the adversary follows the agreed-upon split training procedure while trying to guess the label y. This can be viewed as a binary classification problem where the (input, output) distribution is the induced distribution of (g, y). We allow the adversary to use any binary classifier q : R^d → {0, 1} to guess the labels.
This classifier can be represented by a (scoring function r, threshold t) tuple, where r : R^d → R maps an example's cut layer gradient to a real-valued score, and the threshold t ∈ R determines a cut-off so that q(g) = 1 if r(g) > t and q(g) = 0 if r(g) ≤ t. Moving forward, we use this tuple representation to describe adversarial non-label party classifiers. Privacy loss quantification. As we consider binary classification, a natural metric to quantify the performance of an adversary's scoring function r is the AUC of its ROC curve. Denote the unperturbed class-conditional distributions of the cut-layer gradients by P(1) and P(0) for the positive and negative class, respectively. (¹To simplify notation, we assume no additional features in the label party to compute the logit. The data leakage problem still holds true for other, more complicated settings; see the WDL experiment setting in Section 5.) The ROC curve of a scoring function r is a parametric curve t ↦ (FPR_r(t), TPR_r(t)) ∈ [0, 1]² which maps a threshold value t ∈ R to the corresponding (False Positive Rate, True Positive Rate) tuple of the classifier represented by (r, t), with FPR_r(t) := P(0)({g : r(g) > t}) and TPR_r(t) := P(1)({g : r(g) > t}). The AUC of the ROC curve of a scoring function r (denoted AUC(r)) can be expressed as an integral: AUC(r) = ∫_{t=+∞}^{−∞} TPR_r(t) dFPR_r(t) ∈ [0, 1] (Leak AUC) (for more details on this expression, see Appendix A.1). We use this value as the privacy loss quantification metric for a specific adversary scoring function r and refer to it as the leak AUC. This metric summarizes the predictive performance of all classifiers that can be constructed through all threshold values t and removes the need to tune this classifier-specific hyperparameter.
A leak AUC close to 1 implies that the corresponding scoring function r can very accurately recover the private label, whereas a value of around 0.5 means r is non-informative in predicting the labels. In practice, during batch training, the leak AUC of r can be estimated at every gradient update iteration using the minibatch of cut-layer gradients together with their labels. Side information. Among all the scoring functions within our threat model, it is conceivable that only some would recover the hidden labels accurately. Picking such effective ones would require the non-label party to have population-level side information, specifically regarding the properties of (and distinction between) the positive and negative classes' cut-layer gradient distributions. Since we allow the adversary to pick any specific (measurable) scoring function, we implicitly allow such population-level side information for the adversary. However, we assume the non-label party has no example-level side information that differs example by example. Thus we also do not use local DP for privacy quantification (detailed explanation in Appendix A.8). Next, we provide two example scoring functions which use population-level side information to effectively recover the label.
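To make the leak AUC concrete, the sketch below simulates cut-layer gradients of the form g = (p̃1 − y)∇h and scores each example by the gradient norm, one of the scoring functions discussed in this paper. The gradient distribution and the assumption that the model currently predicts p̃1 ≈ 0.1 for every example are hypothetical; `leak_auc` is the standard rank-based (Mann-Whitney) AUC estimator, so no per-threshold tuning is needed:

```python
import numpy as np

def leak_auc(scores, labels):
    """Rank-based AUC estimate: probability that a random positive example
    outscores a random negative one, with ties counted as 1/2."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n, d = 1000, 8
y = rng.integers(0, 2, size=n)            # private labels
p1 = np.full(n, 0.1)                      # hypothetical: model predicts p̃1 ≈ 0.1 everywhere
h_grad = rng.normal(size=(n, d))          # hypothetical ∇_z h, roughly label-independent
g = (p1 - y)[:, None] * h_grad            # cut-layer gradients, one row per example

norm_scores = np.linalg.norm(g, axis=1)   # gradient-norm scoring function
print(round(leak_auc(norm_scores, y), 3))
```

Because |p̃1 − y| is about 0.9 for positives and 0.1 for negatives in this setup, the norm score separates the classes almost perfectly, illustrating how an unperturbed g can leak y.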
The authors consider an attack in which, during split training for binary classification, the party that does not hold the labels attempts to steal the private label information of the training data. Specifically, the attack methods are based on differences in gradient information between positive and negative examples, and the defense method Marvell is based on an optimal random perturbation obtained by solving an optimization problem. Experimental results suggest that the proposed method can effectively defend against the label-stealing attacks.
## Summary of Contributions

This paper studies the two-party setting where the first party (aka the non-label party) holds the feature vectors $X_i$ of user $i$ whereas the second party (aka the label party) holds the corresponding labels $y_i \in \{0, 1\}$. Together they wish to jointly train a model in such a way that the non-label party does not learn the users' labels. This is especially relevant in online advertising (where the label party corresponds to the advertiser, who knows whether each user converts, and the non-label party is e.g. the publisher who shows the ads) and in medical applications (where the label party is the hospital). The paper focuses on *split training*, a setting where the model is divided between the two parties: the first layers (denoted as a function $f$) are held by the first party and the remaining layers (denoted as a function $h$) are held by the second party. Here the training is performed as follows. First, the non-label party computes $f(X)$ (referred to as the "cut layer") and sends it to the label party. The label party then computes the gradient w.r.t. $h$ and updates its parameters; furthermore, it computes the gradient $g$ w.r.t. the cut layer $f(X)$ and sends it back to the non-label party. The non-label party can then use backpropagation starting from $g$ to update its own parameters. The main concern tackled in this paper is that, in such a scheme, the gradient $g$ sent back to the non-label party can leak information about the label. Specifically, the authors observe and experimentally show that the following two attacks are very effective at predicting the labels given the intermediate gradient $g$:

- Consider the norm $\\|g\\|_2$ and make a prediction based on whether it exceeds a certain threshold.
- Consider the cosine similarity between $g$ and another (fixed) gradient, and then threshold.

In fact, the two above attacks achieve near-perfect predictions in some examples.
The exact measure they use to evaluate the attacks is the area under the (false positive rate, true positive rate) curve (AUC). The authors then propose to add noise in order to mitigate such attacks. The baseline here would be to add isotropic Gaussian noise (with a certain scale). The authors then formulate an optimization problem aiming to achieve a better privacy-utility tradeoff. Roughly speaking, this is an optimization problem of the form "minimize AUC" subject to "utility is at least ...". It is hard to make this precise (and it is probably inefficient to solve directly), so the authors use certain proxies for both the objective and the constraint. For the objective, the authors show that the AUC is upper bounded by a certain quantity involving the KL divergence between the gradient distributions of the two labels. For the utility constraint, the authors instead restrict the amount of noise added by imposing an upper bound on the trace of its covariance matrix. Furthermore, the authors show that when the (unperturbed) gradient distributions are assumed to be Gaussian and only Gaussian noise is considered, the optimization problem simplifies greatly to a single constant-size optimization problem (Theorem 2). This final form constitutes their so-called Marvell algorithm. Finally, the paper concludes with several experimental results showing that Marvell is very effective at protecting against the two aforementioned attacks on real-world datasets and models, while preserving a reasonable level of utility.
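The isotropic Gaussian baseline mentioned above can be illustrated on synthetic gradients: adding zero-mean noise of increasing scale shrinks the norm attack's leak AUC, at the cost of noisier updates. All distributions and noise scales here are hypothetical, and this is only the baseline, not Marvell's optimized noise structure:

```python
import numpy as np

def leak_auc(scores, labels):
    # Rank-based AUC: P(score of random positive > score of random negative).
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
n, d = 2000, 8
y = rng.integers(0, 2, size=n)
g = (0.1 - y)[:, None] * rng.normal(size=(n, d))  # synthetic cut-layer gradients

aucs = {}
for sigma in (0.0, 0.5, 2.0):                     # hypothetical noise scales
    g_noisy = g + sigma * rng.normal(size=g.shape)  # isotropic Gaussian perturbation
    aucs[sigma] = leak_auc(np.linalg.norm(g_noisy, axis=1), y)
    print(f"sigma={sigma}: norm-attack leak AUC = {aucs[sigma]:.3f}")
```

The tradeoff Marvell optimizes is visible even in this toy setup: larger sigma pushes the leak AUC toward the uninformative 0.5 while perturbing the training signal more.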
NeuPL: Neural Population Learning
Learning in strategy games (e.g. StarCraft, poker) requires the discovery of diverse policies. This is often achieved by iteratively training new policies against existing ones, growing a policy population that is robust to exploit. This iterative approach suffers from two issues in real-world games: a) under a finite budget, approximate best-response operators at each iteration need truncating, resulting in under-trained good-responses populating the population; b) repeated learning of basic skills at each iteration is wasteful and becomes intractable in the presence of increasingly strong opponents. In this work, we propose Neural Population Learning (NeuPL) as a solution to both issues. NeuPL offers convergence guarantees to a population of best-responses under mild assumptions. By representing a population of policies within a single conditional model, NeuPL enables transfer learning across policies. Empirically, we show the generality, improved performance and efficiency of NeuPL across several test domains¹. Most interestingly, we show that novel strategies become more accessible, not less, as the neural population expands. The need for learning not one, but a population of strategies is rooted in classical game theory. Consider the purely cyclical game of rock-paper-scissors: the performance of individual strategies is meaningless, as improving against one entails losing to another. By contrast, performance can be meaningfully examined between populations. A population consisting of the pure strategies {rock, paper} does well against the singleton population {scissors} because, in the meta-game where both populations are revealed, a player picking strategies from the former can always beat a player choosing from the latter².
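The {rock, paper} vs. {scissors} claim can be checked mechanically: in the revealed meta-game it suffices that, for every strategy available to the second player, the first player's population contains a winning reply. A tiny pure-Python check, with payoffs written from the first player's perspective:

```python
# Payoff for player 1 in rock-paper-scissors (1 = win, -1 = loss, 0 = draw).
PAYOFF = {
    ("rock", "rock"): 0, ("rock", "paper"): -1, ("rock", "scissors"): 1,
    ("paper", "rock"): 1, ("paper", "paper"): 0, ("paper", "scissors"): -1,
    ("scissors", "rock"): -1, ("scissors", "paper"): 1, ("scissors", "scissors"): 0,
}

def dominates(pop_a, pop_b):
    """True if, for every choice in pop_b, some strategy in pop_a wins outright."""
    return all(max(PAYOFF[(a, b)] for a in pop_a) > 0 for b in pop_b)

beats_scissors = dominates({"rock", "paper"}, {"scissors"})    # former wins
beats_rock_paper = dominates({"scissors"}, {"rock", "paper"})  # reverse fails
```

The asymmetry (`dominates` holds one way but not the other) is what makes population-level performance comparisons meaningful even in a purely cyclical game.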
This observation underpins the unifying population learning framework of Policy-Space Response Oracles (PSRO), where a new policy is trained to best-respond to a mixture over previous policies at each iteration, following a meta-strategy solver (Lanctot et al., 2017). Most impressively, Vinyals et al. (2019) explored the strategy game of StarCraft with a league of policies, using a practical variation of PSRO. The league counted close to a thousand sophisticated deep RL agents as the population collectively became robust to exploits. Unfortunately, such empirical successes often come at considerable costs. Population learning algorithms with theoretical guarantees are traditionally studied in normal-form games (Brown, 1951; McMahan et al., 2003), where best-responses can be solved exactly. This is in stark contrast to real-world Games of Skill (Czarnecki et al., 2020): such games are often temporal in nature, where best-responses can only be approximated with computationally intensive methods (e.g. deep RL). This has two implications. First, for a given opponent, one cannot efficiently tell apart good-responses that temporarily plateaued at local optima from globally optimal best-responses. As a result, approximate best-response operators are often truncated prematurely, according to hand-crafted schedules (Lanctot et al., 2017; McAleer et al., 2020). Second, real-world games often afford strategy-agnostic transitive skills that are prerequisite to strategic reasoning. Learning such skills from scratch at each iteration, in the presence of ever more skillful opponents, quickly becomes intractable beyond a few iterations.

∗Currently at Reality Labs; work carried out while at DeepMind. †Work carried out while at DeepMind. ¹See https://neupl.github.io/demo/ for supplementary illustrations. ²This is formally quantified by Relative Population Performance; see Definition A.1 (Balduzzi et al., 2019).
This iterative and isolated approach is fundamentally at odds with human learning. For humans, mastering diverse strategies often facilitates incremental strategic innovation, and learning about new strategies does not stop us from revisiting and improving upon known ones (Caruana, 1997; Krakauer et al., 2006). In this work, we make progress towards endowing artificial agents with a similar capability by extending population learning to real-world games. Specifically, we propose NeuPL, an efficient and general framework that learns and represents diverse policies in symmetric zero-sum games within a single conditional network, using the computational infrastructure of simple self-play (Section 1.2). Theoretically, we show that NeuPL converges to a sequence of iterative best-responses under certain conditions (Section 1.3). Empirically, we illustrate the generality of NeuPL by replicating known results of population learning algorithms on the classical domain of rock-paper-scissors as well as its partially-observed, spatiotemporal counterpart running-with-scissors (Vezhnevets et al., 2020) (Section 2.1). Most interestingly, we show that NeuPL enables transfer learning across policies, discovering exploiters to strong opponents that would have been inaccessible to comparable baselines (Section 2.2). Finally, we show the appeal of NeuPL in the challenge domain of MuJoCo Football (Liu et al., 2019), where players must continuously refine their movement skills in order to coordinate as a team. In this highly transitive game, NeuPL naturally represents a short sequence of best-responses without the need for a carefully chosen truncation criterion (Section 2.4).

1 METHODS

Our method is designed with two desiderata in mind. First, at convergence, the resulting population of policies should represent a sequence of iterative best-responses under reasonable conditions.
Second, transfer learning should be able to occur across policies throughout training. In this section, we define the problem setting of interest as well as the necessary terminology. We then describe NeuPL, our main conceptual algorithm, as well as its theoretical properties. To make it concrete, we further consider deep RL specifically and offer two practical implementations of NeuPL for real-world games.

1.1 PRELIMINARIES

Approximate Best-Response (ABR) in Stochastic Games. We consider a symmetric zero-sum Stochastic Game (Shapley, 1953) defined by $(S, O, X, A, P, R, p_0)$, with $S$ the state space, $O$ the observation space, and $X: S \to O \times O$ the observation function defining the (partial) views of the state for both players. Given joint actions $(a_t, a'_t) \in A \times A$, the state follows the transition distribution $P: S \times A \times A \to \Pr(S)$. The reward function $R: S \to \mathbb{R} \times \mathbb{R}$ defines the rewards for both players in state $s_t$, denoted $R(s_t) = (r_t, -r_t)$. The initial state of the environment follows the distribution $p_0$. In a given state $s_t$, players act according to policies $(\pi(\cdot|o_{\le t}), \pi'(\cdot|o'_{\le t}))$. Player $\pi$ achieves an expected return of $J(\pi, \pi') = \mathbb{E}_{\pi, \pi'}[\sum_t r_t]$ against $\pi'$. Policy $\pi^*$ is a best response to $\pi'$ if $\forall \pi,\ J(\pi^*, \pi') \ge J(\pi, \pi')$. We define $\hat{\pi} \leftarrow \mathrm{ABR}(\pi, \pi')$ with $J(\hat{\pi}, \pi') \ge J(\pi, \pi')$. In other words, an ABR operator yields a policy $\hat{\pi}$ that does no worse than $\pi$ in the presence of an opponent $\pi'$.

Meta-game Strategies in Population Learning. Given a symmetric zero-sum game and a set of $N$ policies $\Pi := \{\pi_i\}_{i=1}^N$, we define a normal-form meta-game where a player's $i$-th action corresponds to executing policy $\pi_i$ for one episode. A meta-game strategy $\sigma$ thus defines a probability assignment, or an action profile, over $\Pi$. Within $\Pi$, we define $U \in \mathbb{R}^{N \times N} \leftarrow \mathrm{EVAL}(\Pi)$ to be the expected payoffs between pure strategies of this meta-game, or equivalently, $U_{ij} := J(\pi_i, \pi_j)$ in the underlying game.
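For intuition, $\mathrm{EVAL}$ can be computed exactly when policies are mixed strategies in a matrix game: $U_{ij} = J(\pi_i, \pi_j) = \pi_i^\top A \pi_j$ with $A$ the underlying payoff matrix. The sketch below (a toy stand-in, not the paper's Monte-Carlo evaluation) builds $U$ for a few rock-paper-scissors policies and checks the antisymmetry $U = -U^\top$ implied by the symmetric zero-sum structure.

```python
# Underlying RPS payoff matrix for player 1 (rows/columns: rock, paper, scissors).
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def J(pi, pj):
    """Expected return of mixed policy pi against pj in the matrix game A."""
    return sum(pi[a] * pj[b] * A[a][b] for a in range(3) for b in range(3))

def EVAL(policies):
    """Meta-game payoff matrix U with U[i][j] = J(pi_i, pi_j)."""
    return [[J(p, q) for q in policies] for p in policies]

# A small population: the three pure strategies plus the uniform mixture.
population = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1/3, 1/3, 1/3]]
U = EVAL(population)
```

Note that the uniform mixture draws (payoff 0) against every other policy, which is exactly why individual performance is uninformative in this cyclic game while population-level comparisons are not.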
We further extend the ABR operator of the underlying game to mixture policies represented by $\sigma$, such that $\hat{\pi} \leftarrow \mathrm{ABR}(\pi, \sigma, \Pi)$ with $\mathbb{E}_{\pi' \sim P(\sigma)}[J(\hat{\pi}, \pi')] \ge \mathbb{E}_{\pi' \sim P(\sigma)}[J(\pi, \pi')]$. Finally, we define $f: \mathbb{R}^{|\Pi| \times |\Pi|} \to \mathbb{R}^{|\Pi|}$ to be a meta-strategy solver (MSS) with $\sigma \leftarrow f(U)$, and $F: \mathbb{R}^{N \times N} \to \mathbb{R}^{N \times N}$ a meta-graph solver (MGS) with $\Sigma \leftarrow F(U)$. The former formulation is designed for iterative optimization of approximate best-responses as in Lanctot et al. (2017), whereas the latter is motivated by concurrent optimization over a set of population-level objectives as in Garnelo et al. (2021). In particular, $\Sigma \in \mathbb{R}^{N \times N} := \{\sigma_i\}_{i=1}^N$ defines $N$ population-level objectives, with $\pi_i$ optimized against the mixture policy represented by $\sigma_i$ and $\Pi$. As such, $\Sigma$ corresponds to the adjacency matrix of an interaction graph. Figure 1 illustrates several commonly used population learning algorithms defined by $\Sigma$ or, equivalently, their interaction graphs.

1.2 NEURAL POPULATION LEARNING

We now present NeuPL and contrast it with Policy-Space Response Oracles (PSRO, Lanctot et al. (2017)), which similarly focuses on population learning with approximate best-responses by RL.

Algorithm 1: Neural Population Learning (Ours)
1: $\Pi_\theta(\cdot|s, \sigma)$ ▷ Conditional neural population net.
2: $\Sigma := \{\sigma_i\}_{i=1}^N$ ▷ Initial interaction graph.
3: $F: \mathbb{R}^{N \times N} \to \mathbb{R}^{N \times N}$ ▷ Meta-graph solver.
4: while true do
5:   $\Pi_\theta^\Sigma \leftarrow \{\Pi_\theta(\cdot|s, \sigma_i)\}_{i=1}^N$ ▷ Neural population.
6:   for $\sigma_i \in \mathrm{UNIQUE}(\Sigma)$ do
7:     $\Pi_\theta^{\sigma_i} \leftarrow \Pi_\theta(\cdot|s, \sigma_i)$
8:     $\Pi_\theta^{\sigma_i} \leftarrow \mathrm{ABR}(\Pi_\theta^{\sigma_i}, \sigma_i, \Pi_\theta^\Sigma)$ ▷ Self-play.
9:   $U \leftarrow \mathrm{EVAL}(\Pi_\theta^\Sigma)$ ▷ (Optional) if $F$ adaptive.
10:  $\Sigma \leftarrow F(U)$ ▷ (Optional) if $F$ adaptive.
11: return $\Pi_\theta, \Sigma$

Algorithm 2: PSRO (Lanctot et al., 2017)
1: $\Pi := \{\pi_0\}$ ▷ Initial policy population.
2: $\sigma \leftarrow \mathrm{UNIF}(\Pi)$ ▷ Initial meta-game strategy.
3: $f: \mathbb{R}^{|\Pi| \times |\Pi|} \to \mathbb{R}^{|\Pi|}$ ▷ Meta-strategy solver.
4:
5: for $i \in [[N]]$ do ▷ N-step ABR.
6:   Initialize $\pi_{\theta_i}$.
7:   $\pi_{\theta_i} \leftarrow \mathrm{ABR}(\pi_{\theta_i}, \sigma, \Pi)$
8:   $\Pi \leftarrow \Pi \cup \{\pi_{\theta_i}\}$
9:   $U \leftarrow \mathrm{EVAL}(\Pi)$ ▷ Empirical payoffs.
10:  $\sigma \leftarrow f(U)$
11: return $\Pi$

NeuPL deviates from PSRO in two important ways. First, NeuPL suggests concurrent and continued training of all unique policies, such that no good-response features in the population prematurely due to early truncation. Second, NeuPL represents an entire population of policies via a shared conditional network $\Pi_\theta(\cdot|s, \sigma)$, with each policy $\Pi_\theta(\cdot|s, \sigma_i)$ conditioned on and optimised against a meta-game mixture strategy $\sigma_i$, enabling transfer learning across policies. This representation also makes NeuPL general: it delegates the choice of effective population size $|\mathrm{UNIQUE}(\Sigma)| \le |\Sigma| = N$ to the meta-graph solver $F$, since $\sigma_i = \sigma_j$ implies $\Pi_\theta(\cdot|s, \sigma_i) \equiv \Pi_\theta(\cdot|s, \sigma_j)$ (cf. Section 2.1). Finally, NeuPL allows for cyclic interaction graphs, beyond the scope of PSRO. We discuss the generality of NeuPL in the context of prior works in further detail in Appendix D.

N-step Best-Responses via Lower-Triangular Graphs. A popular class of population learning algorithms seeks to converge to a sequence of $N$ iterative best-responses, where each policy $\pi_i$ is a best-response to an opponent meta-game strategy $\sigma_i$ with support over a subset of the policy population $\Pi_{<i} = \{\pi_j\}_{j<i}$. In NeuPL, this class of algorithms is implemented with meta-graph solvers that return lower-triangular adjacency matrices $\Sigma$ with $\Sigma_{i \le j} = 0$. Under this constraint, $\sigma_0$ becomes a zero vector, implying that $\Pi_\theta(\cdot|s, \sigma_0)$ does not seek to best-respond to any policies. Similar to the role of the initial policy $\{\pi_0\}$ in PSRO (Algorithm 2), $\Pi_\theta(\cdot|s, \sigma_0)$ serves as a starting point for the sequence of N-step best-responses, and any fixed policy can be used. We note that this property further allows for incorporating pre-trained policies in NeuPL, as we discuss in Appendix D.1.

Algorithm 3: A meta-graph solver implementing PSRO-NASH.
1: function $F_{\mathrm{PSRO\text{-}N}}(U)$ ▷ $U \in \mathbb{R}^{N \times N}$ the empirical payoff matrix.
2:   Initialize meta-game strategies $\Sigma \in \mathbb{R}^{N \times N}$ with zeros.
3:   for $i \in \{1, \ldots, N-1\}$ do
4:     $\Sigma_{i+1, 1:i} \leftarrow \mathrm{SOLVE\text{-}NASH}(U_{1:i, 1:i})$ ▷ LP Nash solver, see Shoham & Leyton-Brown (2008).
5:   return $\Sigma$

One prominent example is PSRO-NASH, where $\pi_i$ is optimized to best-respond to the Nash mixture policy over $\Pi_{<i}$. This particular meta-graph solver is shown in Algorithm 3.
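The PSRO-NASH meta-graph solver is straightforward to sketch in code. The version below is a toy stand-in: instead of the LP-based SOLVE-NASH referenced in the text, it approximates the Nash mixture by fictitious play (whose empirical play frequencies converge to an equilibrium in zero-sum games), and fills row $i+1$ of a lower-triangular $\Sigma$ with the approximate Nash of the first $i$ policies' sub-game.

```python
def solve_nash_fp(U, iters=20000):
    """Approximate a maximin (Nash) mixture of the zero-sum game U by
    fictitious play: repeatedly best-respond to the opponent's empirical mix."""
    n = len(U)
    counts = [1.0] * n  # empirical play counts, initialized uniformly
    for _ in range(iters):
        payoffs = [sum(U[i][j] * counts[j] for j in range(n)) for i in range(n)]
        counts[max(range(n), key=payoffs.__getitem__)] += 1.0
    total = sum(counts)
    return [c / total for c in counts]

def psro_nash_metagraph(U):
    """Toy F_PSRO-N: Sigma[i] carries the Nash mixture over policies 0..i-1."""
    n = len(U)
    sigma = [[0.0] * n for _ in range(n)]  # lower-triangular interaction graph
    for i in range(1, n):
        sub = [row[:i] for row in U[:i]]   # payoffs among earlier policies only
        for j, p in enumerate(solve_nash_fp(sub)):
            sigma[i][j] = p
    return sigma

# Meta-game payoffs for three policies forming an RPS-like cycle.
U = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
Sigma = psro_nash_metagraph(U)
```

Row 0 stays all-zero (the fixed starting policy best-responds to nothing), row 1 puts all mass on policy 0, and row 2 concentrates on policy 1, which dominates policy 0 in their two-policy sub-game; this matches the lower-triangular structure described above.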
The paper proposes Neural Population Learning, which I think extends PSRO in two aspects. First, it avoids admitting prematurely truncated *good*-responses into the population. Second, it uses a conditional network to represent the population of policies, so as to enable skill transfer across them. NeuPL also offers convergence guarantees under some assumptions. NeuPL is empirically verified on *rock-paper-scissors*, *running-with-scissors*, and MuJoCo Football.
Population-based training algorithms progressively grow a set of policies by adding best-responses to mixtures over the existing population. When RL is used as the best-response method, the new policy is generally not a best-response but instead a good-response. Moreover, the good-response is initialized tabula rasa, potentially relearning responses to policies already responded to in previous iterations. To address these concerns, this work proposes NeuPL, where a single conditional network is trained to represent the entire population by conditioning on the opponent mixture. Moreover, the proposed algorithm introduces interaction graphs as a means to codify match-making in a continuously training population. NeuPL is restricted to symmetric zero-sum games with a fixed number of players.
SP:b9f4ba88299bd3c2ea77354ceff3bfbb788b7a4b
NeuPL: Neural Population Learning
Learning in strategy games ( e.g . StarCraft , poker ) requires the discovery of diverse policies . This is often achieved by iteratively training new policies against existing ones , growing a policy population that is robust to exploit . This iterative approach suffers from two issues in real-world games : a ) under finite budget , approximate best-response operators at each iteration needs truncating , resulting in under-trained good-responses populating the population ; b ) repeated learning of basic skills at each iteration is wasteful and becomes intractable in the presence of increasingly strong opponents . In this work , we propose Neural Population Learning ( NeuPL ) as a solution to both issues . NeuPL offers convergence guarantees to a population of best-responses under mild assumptions . By representing a population of policies within a single conditional model , NeuPL enables transfer learning across policies . Empirically , we show the generality , improved performance and efficiency of NeuPL across several test domains1 . Most interestingly , we show that novel strategies become more accessible , not less , as the neural population expands . The need for learning not one , but a population of strategies is rooted in classical game theory . Consider the purely cyclical game of rock-paper-scissors , the performance of individual strategies is meaningless as improving against one entails losing to another . By contrast , performance can be meaningfully examined between populations . A population consisting of pure strategies { rock , paper } does well against a singleton population of { scissors } because in the meta-game where both populations are revealed , a player picking strategies from the former can always beat a player choosing from the latter2 . 
This observation underpins the unifying population learning framework of Policy Space Response Oracle ( PSRO ) where a new policy is trained to best-respond to a mixture over previous policies at each iteration , following a meta-strategy solver ( Lanctot et al. , 2017 ) . Most impressively , Vinyals et al . ( 2019 ) explored the strategy game of StarCraft with a league of policies , using a practical variation of PSRO . The league counted close to a thousand sophisticated deep RL agents as the population collectively became robust to exploits . Unfortunately , such empirical successes often come at considerable costs . Population learning algorithms with theoretical guarantees are traditionally studied in normal-form games ( Brown , 1951 ; McMahan et al. , 2003 ) where best-responses can be solved exactly . This is in stark contrast to real-world Game-of-Skills ( Czarnecki et al. , 2020 ) — such games are often temporal in nature , where best-responses can only be approximated with computationally intensive methods ( e.g . deep RL ) . This has two implications . First , for a given opponent , one can not efficiently tell apart good-responses that temporarily plateaued at local optima from globally optimal best-responses . As a result , approximate best-response operators are often truncated prematurely , according to hand-crafted schedules ( Lanctot et al. , 2017 ; Mcaleer et al. , 2020 ) . Second , real-world games often afford strategy-agnostic transitive ∗Currently at Reality Labs , work carried out while at DeepMind . †Work carried out while at DeepMind . 1See https : //neupl.github.io/demo/ for supplementary illustrations . 2This is formally quantified by Relative Population Performance , see Definition A.1 ( Balduzzi et al. , 2019 ) . skills that are pre-requisite to strategic reasoning . Learning such skills from scratch at each iteration in the presence of evermore skillful opponents quickly becomes intractable beyond a few iterations . 
This iterative and isolated approach is fundamentally at odds with human learning . For humans , mastering diverse strategies often facilitates incremental strategic innovation and learning about new strategies does not stop us from revisiting and improving upon known ones ( Caruana , 1997 ; Krakauer et al. , 2006 ) . In this work , we make progress towards endowing artificial agents with similar capability by extending population learning to real-world games . Specifically , we propose NeuPL , an efficient and general framework that learns and represents diverse policies in symmetric zero-sum games within a single conditional network , using the computational infrastructure of simple self-play ( Section 1.2 ) . Theoretically , we show that NeuPL converges to a sequence of iterative best-responses under certain conditions ( Section 1.3 ) . Empirically , we illustrate the generality of NeuPL by replicating known results of population learning algorithms on the classical domain of rockpaper-scissors as well as its partially-observed , spatiotemporal counterpart running-with-scissors ( Vezhnevets et al. , 2020 ) ( Section 2.1 ) . Most interestingly , we show that NeuPL enables transfer learning across policies , discovering exploiters to strong opponents that would have been inaccessible to comparable baselines ( Section 2.2 ) . Finally , we show the appeal of NeuPL in the challenge domain of MuJoCo Football ( Liu et al. , 2019 ) where players must continuously refine their movement skills in order to coordinate as a team . In this highly transitive game , NeuPL naturally represents a short sequence of best-responses without the need for a carefully chosen truncation criteria ( Section 2.4 ) . 1 METHODS . Our method is designed with two desiderata in mind . First , at convergence , the resulting population of policies should represent a sequence of iterative best-responses under reasonable conditions . 
Second , transfer learning can occur across policies throughout training . In this section , we define the problem setting of interests as well as necessary terminologies . We then describe NeuPL , our main conceptual algorithm as well as its theoretical properties . To make it concrete , we further consider deep RL specifically and offer two practical implementations of NeuPL for real-world games . 1.1 PRELIMINARIES . Approximate Best-Response ( ABR ) in Stochastic Games We consider a symmetric zero-sum Stochastic Game ( Shapley , 1953 ) defined by ( S , O , X , A , P , R , p0 ) with S the state space , O the observation space and X : S → O × O the observation function defining the ( partial ) views of the state for both players . Given joint actions ( at , a′t ) ∈ A × A , the state follows the transition distribution P : S ×A×A → Pr ( S ) . The reward functionR : S → R× R defines the rewards for both players in state st , denotedR ( st ) = ( rt , −rt ) . The initial state of the environment follows the distribution p0 . In a given state st , players act according to policies ( π ( ·|o≤t ) , π′ ( ·|o′≤t ) ) . Player π achieves an expected return of J ( π , π′ ) = Eπ , π′ [ ∑ t rt ] against π ′ . Policy π∗ is a best response to π′ if ∀π , J ( π∗ , π′ ) ≥ J ( π , π′ ) . We define π̂ ← ABR ( π , π′ ) with J ( π̂ , π′ ) ≥ J ( π , π′ ) . In other words , an ABR operator yields a policy π̂ that does no worse than π , in the presence of an opponent π′ . Meta-game Strategies in Population Learning Given a symmetric zero-sum game and a set of N policies Π : = { πi } Ni=1 , we define a normal-form meta-game where players ’ i-th action corresponds to executing policy πi for one episode . A meta-game strategy σ thus defines a probability assignment , or an action profile , over Π . Within Π , we define U ∈ RN×N ← EVAL ( Π ) to be the expected payoffs between pure strategies of this meta-game or equivalently , Uij : = J ( πi , πj ) in the underlying game . 
We further extend the ABR operator of the underlying game to mixture policies represented by σ, such that π̂ ← ABR(π, σ, Π) with E_{π′∼P(σ)}[J(π̂, π′)] ≥ E_{π′∼P(σ)}[J(π, π′)]. Finally, we define f : ℝ^{|Π|×|Π|} → ℝ^{|Π|} to be a meta-strategy solver (MSS) with σ ← f(U), and F : ℝ^{N×N} → ℝ^{N×N} a meta-graph solver (MGS) with Σ ← F(U). The former formulation is designed for iterative optimization of approximate best-responses as in Lanctot et al. (2017), whereas the latter is motivated by concurrent optimization over a set of population-level objectives as in Garnelo et al. (2021). In particular, Σ := {σi}_{i=1}^N ∈ ℝ^{N×N} defines N population-level objectives, with πi optimized against the mixture policy represented by σi and Π. As such, Σ corresponds to the adjacency matrix of an interaction graph. Figure 1 illustrates several commonly used population learning algorithms defined by Σ or, equivalently, their interaction graphs.

1.2 NEURAL POPULATION LEARNING

We now present NeuPL and contrast it with Policy-Space Response Oracles (PSRO, Lanctot et al. (2017)), which similarly focuses on population learning with approximate best-responses by RL.

Algorithm 1 Neural Population Learning (Ours)
 1: Πθ(·|s, σ) ▷ Conditional neural population net.
 2: Σ := {σi}_{i=1}^N ▷ Initial interaction graph.
 3: F : ℝ^{N×N} → ℝ^{N×N} ▷ Meta-graph solver.
 4: while true do
 5:   Π^Σ_θ ← {Πθ(·|s, σi)}_{i=1}^N ▷ Neural population.
 6:   for σi ∈ UNIQUE(Σ) do
 7:     Π^{σi}_θ ← Πθ(·|s, σi)
 8:     Π^{σi}_θ ← ABR(Π^{σi}_θ, σi, Π^Σ_θ) ▷ Self-play.
 9:   U ← EVAL(Π^Σ_θ) ▷ (Optional) if F adaptive.
10:   Σ ← F(U) ▷ (Optional) if F adaptive.
11: return Πθ, Σ

Algorithm 2 PSRO (Lanctot et al., 2017)
 1: Π := {π0} ▷ Initial policy population.
 2: σ ← UNIF(Π) ▷ Initial meta-game strategy.
 3: f : ℝ^{|Π|×|Π|} → ℝ^{|Π|} ▷ Meta-strategy solver.
 4:
 5: for i ∈ [[N]] do ▷ N-step ABR.
 6:   Initialize πθi.
 7:   πθi ← ABR(πθi, σ, Π)
 8:   Π ← Π ∪ {πθi}
 9:   U ← EVAL(Π) ▷ Empirical payoffs.
10:   σ ← f(U)
11: return Π

NeuPL deviates from PSRO in two important ways. First, NeuPL prescribes concurrent and continued training of all unique policies, so that no best-response is frozen into the population prematurely due to early truncation. Second, NeuPL represents an entire population of policies via a shared conditional network Πθ(·|s, σ), with each policy Πθ(·|s, σi) conditioned on and optimised against a meta-game mixture strategy σi, enabling transfer learning across policies. This representation also makes NeuPL general: it delegates the choice of effective population size |UNIQUE(Σ)| ≤ |Σ| = N to the meta-graph solver F, as σi = σj implies Πθ(·|s, σi) ≡ Πθ(·|s, σj) (cf. Section 2.1). Finally, NeuPL allows for cyclic interaction graphs, beyond the scope of PSRO. We discuss the generality of NeuPL in the context of prior works in further detail in Appendix D.

N-step Best-Responses via Lower-Triangular Graphs. A popular class of population learning algorithms seeks to converge to a sequence of N iterative best-responses, where each policy πi is a best-response to an opponent meta-game strategy σi with support over a subset of the policy population Π<i = {πj}_{j<i}. In NeuPL, this class of algorithms is implemented with meta-graph solvers that return lower-triangular adjacency matrices Σ with Σij = 0 for i ≤ j. Under this constraint, σ0 becomes a zero vector, implying that Πθ(·|s, σ0) does not seek to best-respond to any policy. Similar to the role of the initial policies {π0} in PSRO (Algorithm 2), Πθ(·|s, σ0) serves as a starting point for the sequence of N-step best-responses, and any fixed policy can be used. We note that this property further allows for incorporating pre-trained policies in NeuPL, as we discuss in Appendix D.1. One prominent example is PSRO-NASH, where πi is optimized to best-respond to the Nash mixture policy over Π<i. This particular meta-graph solver is shown in Algorithm 3.

Algorithm 3 A meta-graph solver implementing PSRO-NASH
1: function FPSRO-N(U) ▷ U ∈ ℝ^{N×N} the empirical payoff matrix.
2:   Initialize meta-game strategies Σ ∈ ℝ^{N×N} with zeros.
3:   for i ∈ {1, . . . , N − 1} do
4:     Σ_{i+1,1:i} ← SOLVE-NASH(U_{1:i,1:i}) ▷ LP Nash solver, see Shoham & Leyton-Brown (2008).
5:   return Σ
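As a concrete illustration of this kind of meta-graph solver, the following Python sketch builds a strictly lower-triangular interaction graph Σ from an empirical payoff matrix. It is an assumption-laden stand-in for the authors' implementation: SOLVE-NASH is approximated here by fictitious play (which converges for zero-sum matrix games) rather than the LP solver referenced above, and all function names are illustrative:

```python
import numpy as np

def solve_nash_fictitious_play(U, iters=2000):
    """Approximate the Nash mixture of a symmetric zero-sum matrix game U
    via fictitious play (a stand-in for the LP Nash solver)."""
    n = U.shape[0]
    counts = np.zeros(n)
    counts[0] = 1.0                      # seed the empirical play counts
    for _ in range(iters):
        sigma = counts / counts.sum()    # empirical opponent mixture so far
        br = int(np.argmax(U @ sigma))   # best response to that mixture
        counts[br] += 1.0
    return counts / counts.sum()

def psro_nash_meta_graph(U):
    """Row i+1 of Sigma holds the Nash mixture over the first i policies,
    so policy pi_{i+1} is trained against the Nash of its predecessors."""
    N = U.shape[0]
    Sigma = np.zeros((N, N))
    for i in range(1, N):
        Sigma[i, :i] = solve_nash_fictitious_play(U[:i, :i])
    return Sigma   # lower-triangular: Sigma[i, j] = 0 for j >= i

# Rock-paper-scissors payoffs as a toy EVAL output:
U = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
Sigma = psro_nash_meta_graph(U)
assert np.allclose(np.triu(Sigma), 0.0)         # strictly lower-triangular
assert np.allclose(Sigma.sum(axis=1)[1:], 1.0)  # each later row is a distribution
```

Note how the first row of Σ stays all-zero: the corresponding policy best-responds to nothing, matching the role of the fixed starting policy Πθ(·|s, σ0) discussed above.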
This submission provides an integrated and versatile framework, NeuPL, that improves the performance and convergence speed of population-based training algorithms by representing the entire population of policies within a single conditional policy network. NeuPL is very general: commonly used population-based training algorithms, such as self-play, fictitious play, and PSRO, can be regarded as special cases of it obtained by changing the meta-graph solver. The authors conduct extensive ablation experiments on small games to validate the effectiveness of NeuPL from different perspectives. The results on the large-scale football environment also demonstrate NeuPL's generalization ability.
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
Variational inequalities in general, and saddle point problems in particular, are increasingly relevant in machine learning applications, including adversarial learning, GANs, transport and robust optimization. With the increasing data and problem sizes necessary to train high-performing models across various applications, we need to rely on parallel and distributed computing. However, in distributed training, communication among the compute nodes is a key bottleneck, and this problem is exacerbated for high-dimensional and over-parameterized models. Due to these considerations, it is important to equip existing methods with strategies that reduce the volume of transmitted information during training while obtaining a model of comparable quality. In this paper, we present the first theoretically grounded distributed methods for solving variational inequalities and saddle point problems using compressed communication: MASHA1 and MASHA2. Our theory and methods allow for the use of both unbiased (such as Randk; MASHA1) and contractive (such as Topk; MASHA2) compressors. We empirically validate our conclusions using two experimental setups: a standard bilinear min-max problem, and large-scale distributed adversarial training of transformers.

1 INTRODUCTION

1.1 THE EXPRESSIVE POWER OF VARIATIONAL INEQUALITIES

Due to their abstract mathematical nature and the associated flexibility they offer in modeling various practical problems of interest, variational inequalities (VIs) have been an active area of research in applied mathematics for more than half a century (Korpelevich, 1976; Harker & Pang, 1990; Facchinei & Pang, 2003). It is well known that VIs can be used to formulate and study convex optimization problems, convex-concave saddle point problems and games, for example, in an elegant unifying mathematical framework (Korpelevich, 1976; Bauschke & Combettes, 2017). Recently, Gidel et al.
(2019) pointed out that multi-player games can be cast as VIs, and proposed to study mini-max or non-zero-sum game formulations of GANs (Goodfellow et al., 2014) in this fashion. This allowed them to successfully transfer established insights and well-known techniques from the vast literature on VIs to the study of GANs. In particular, the oscillatory behavior of optimization methods (such as SGD) not originally designed to solve VI problems is well understood in the VI literature, and established tools, such as averaging and extrapolation, can be successfully applied to the training of GANs. Besides their usefulness in studying GANs and alternative adversarial learning models (Madry et al., 2018), VIs have recently attracted considerable attention from the machine learning community due to their ability to model other situations where the minimization of a single loss function does not suffice, such as auction theory (Syrgkanis et al., 2015) and robust and multi-agent reinforcement learning (Pinto et al., 2017). In summary, VIs have recently become a potent tool enabling new advances in practical machine learning situations reaching beyond supervised learning, where optimization problems and techniques, which can be seen as special instances of VIs and methods for solving them, reign supreme.

1.2 TRAINING OF SUPERVISED MODELS VIA DISTRIBUTED OPTIMIZATION

On the other hand, in the domain of classical, and hence also much better understood, supervised machine learning, characterized by the fact that standard optimization techniques apply and work well, researchers and practitioners face other challenges that are currently beyond the reach of existing VI methods. Indeed, the training of modern supervised machine learning models in general, and deep neural networks in particular, is still extremely challenging.
Due to their desire to improve the generalization of deployed models, machine learning engineers need to rely on training datasets of ever increasing sizes and on elaborate over-parametrized models (Arora et al., 2018). Supporting workloads of such unprecedented magnitudes would be impossible without combining the latest advances in hardware acceleration, distributed systems and distributed algorithm design (Verbraeken et al., 2019). When training such modern supervised models in a distributed fashion, communication cost is often the bottleneck of the training system, and for this reason, a lot of effort was recently targeted at the design of communication-efficient distributed optimization methods (Konečný et al., 2016; Smith et al., 2018; Ghosh et al., 2020; Gorbunov et al., 2021). A particularly successful technique for improving the communication efficiency of distributed first-order optimization methods is communication compression. The idea behind this technique is rooted in the observation that in practical implementations it is often advantageous to communicate messages compressed via (often randomized) lossy compression techniques instead of communicating the full messages (Seide et al., 2014; Alistarh et al., 2017). If the number of parallel workers is large enough, the noise introduced by compression is reduced, and training with compressed communication will often lead to comparable test error while reducing the number of communicated bits, which results in faster training, both in theory and practice (Mishchenko et al., 2019; Gorbunov et al., 2021).

1.3 TWO CLASSES OF COMPRESSION OPERATORS

We say that a (possibly) stochastic mapping Q : ℝ^d → ℝ^d is an unbiased compression operator if there exists a constant q ≥ 1 such that

E[Q(z)] = z,  E‖Q(z)‖² ≤ q‖z‖²,  ∀z ∈ ℝ^d.  (1)

Further, we say that a stochastic mapping C : ℝ^d → ℝ^d is a contractive compression operator if there exists a constant δ ≥ 1 such that

E‖C(z) − z‖² ≤ (1 − 1/δ)‖z‖²,  ∀z ∈ ℝ^d.  (2)

If b is the number of bits needed to represent a single float (e.g., b = 32 or b = 64), then the number of bits needed to represent a generic vector z ∈ ℝ^d is ‖z‖_bits := bd. To describe how much a compression operator reduces its input vector on average, we define the notion of expected density via ρ := (1/(bd)) E‖Q(z)‖_bits, where ‖Q(z)‖_bits denotes the number of bits needed to represent the quantized vector Q(z). Note that ρ ≤ 1. For the Randk operator (Alistarh et al., 2018; Beznosikov et al., 2020), we have q = d/k and ρ = k/d.

1.4 TOWARDS COMMUNICATION-EFFICIENT DISTRIBUTED METHODS FOR VIS

While classical VI algorithms, such as the extragradient method originally proposed by Korpelevich (1976) and later studied by many authors (Nemirovski, 2004; Juditsky et al., 2008), were not designed to work in a distributed environment, virtually all methods that were (Yuan et al., 2014; Hou et al., 2021; Deng & Mahdavi, 2021; Beznosikov et al., 2021b;c) do not consider the general VI problem, but tackle the special case of saddle point problems only. Moreover, none of these distributed methods support communication compression, with the exception of the work of Yuan et al. (2014), which relies on rounding to the nearest integer multiple of a certain quantity. This compression mechanism does not offer theoretical benefits and does not even lead to convergence to the solution, because the errors introduced through rounding persist and prevent the method from solving the problem.

2 SUMMARY OF CONTRIBUTIONS

In this paper, we investigate whether it is possible to design communication-efficient algorithms for solving distributed VI problems by borrowing generic communication compression techniques (1) and (2) from the optimization literature (Seide et al., 2014; Alistarh et al.
, 2017; Mishchenko et al., 2019; Gorbunov et al., 2021; Richtárik et al., 2021) and adapting and embedding them into established and efficient methods for solving VIs (Korpelevich, 1976; Nemirovski, 2004; Juditsky et al., 2008; Alacaoglu & Malitsky, 2021). Whether or not this is possible is an open problem. In summary, we design the first provably communication-efficient algorithms for solving general distributed VI problems (see Section 3, Equation 3) in the deterministic (see (4)) and stochastic (see (5)) regimes, supporting both unbiased (MASHA1 = Algorithm 1) and contractive (MASHA2 = Algorithm 2) compressors. Our methods are explicitly designed to be variance reduced to achieve better theoretical properties and better practical performance. In Table 1 we give a high-level overview of existing methods for VIs, and contrast them with our methods and results. We now elaborate a bit more:

2.1 TWO DISTRIBUTED PROBLEMS: DETERMINISTIC AND STOCHASTIC

We study two distributed VI problems: i) deterministic, where the monotone operator F : ℝ^d → ℝ^d featured in the VI is the average of M operators {Fm}_{m=1}^M, where M is the number of devices/machines, each of which can be evaluated in each communication round; and ii) stochastic, where each monotone operator Fm : ℝ^d → ℝ^d has a finite-sum structure of its own, and only a single operator in the sum can be evaluated in each iteration. In contrast to previous works, we study general constrained VIs in the distributed setup (see Section 3), and not merely saddle point problems.

2.2 TWO NEW METHODS WITH COMPRESSED COMMUNICATION: MASHA1 AND MASHA2

We develop two extensions of the extragradient/extrastep method of Korpelevich (1976) to distributed VIs, depending on whether we use unbiased (1) or contractive (2) compressors, since each type of compressor demands a different algorithmic design and a different analysis.
In particular, contractive compressors are notoriously hard to analyze even for optimization problems (Karimireddy et al., 2019; Richtárik et al., 2021). Our method based on unbiased compressors is called MASHA1 (Algorithm 1), and our method based on contractive compressors is called MASHA2 (Algorithm 2). Both are designed to handle the deterministic as well as the stochastic setting, and both are enhanced with bespoke variance-reduction techniques for better theoretical and practical performance. Due to space restrictions, we only describe MASHA1 in the main body of the paper, and relegate MASHA2 and the associated theory to Appendix B.

2.3 THEORETICAL COMPLEXITY RESULTS

We establish a number of theoretical complexity results for our methods, which we summarize in Table 2. We consider the strongly convex - strongly concave regime as well as the more general convex - concave regime. In the first case we obtain linear convergence results (O(log 1/ε)) in terms of the distance to the solution, and in the latter we obtain fast sublinear convergence results (O(1/ε)) in terms of the gap function. To get an estimate of the amount of information transmitted, one needs to multiply the estimates from Table 1 by 1/q and 1/ρ, respectively. We then get that, from the point of view of the transmitted information, MASHA1 is better by a factor of √(1/q + 1/M) in comparison with the original extragradient method.

3 PROBLEM FORMULATION AND ASSUMPTIONS

Let us first introduce basic notation. We write ⟨x, y⟩ := Σ_{i=1}^d x_i y_i to denote the standard Euclidean inner product of vectors x, y ∈ ℝ^d. This induces the ℓ2-norm in ℝ^d as usual: ‖x‖ := √⟨x, x⟩. We also introduce the proximal operator, defined as prox_g(z) := argmin_{u∈Z} { g(u) + (1/2)‖u − z‖² }, which is well defined for proper lower semicontinuous convex functions g : ℝ^d → ℝ ∪ {+∞}.
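The Randk and Topk compressors mentioned in Section 1.3 can be sketched in a few lines; the implementation below is illustrative (names and details are assumptions, not the authors' code), with an empirical check of the unbiasedness property (1) and the contraction property (2):

```python
import numpy as np

def rand_k(z, k, rng):
    """Unbiased Rand-k compressor: keep k random coordinates, scale by d/k
    so that E[Q(z)] = z (property (1), with q = d/k)."""
    d = z.size
    out = np.zeros_like(z)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = z[idx] * (d / k)
    return out

def top_k(z, k):
    """Contractive Top-k compressor: keep the k largest-magnitude coordinates
    (property (2), with delta = d/k)."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

rng = np.random.default_rng(0)
z = np.array([3.0, -1.0, 0.5, 2.0])

# Empirical check of unbiasedness: the average of many Rand-k outputs approaches z.
mean_q = np.mean([rand_k(z, k=2, rng=rng) for _ in range(100_000)], axis=0)
assert np.allclose(mean_q, z, atol=0.05)

# Top-k contracts the error: ||C(z) - z||^2 <= (1 - k/d) ||z||^2.
c = top_k(z, k=2)
assert np.linalg.norm(c - z)**2 <= (1 - 2/4) * np.linalg.norm(z)**2
```

Note the design asymmetry: Randk rescales so that it is unbiased but has large variance, while Topk keeps the most informative coordinates at the cost of bias, which is exactly why the two compressor classes demand different algorithmic treatments (MASHA1 vs MASHA2).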
The paper studies the communication needed in order for a group of distributed players to collectively solve a variational inequality problem. The paper provides two algorithms, MASHA1 and MASHA2, which handle both the deterministic and the stochastic case. The authors also provide experimental results applying their techniques to a bilinear saddle point problem and to adversarial training of transformers.
The paper develops a distributed algorithm for solving variational inequalities with a certain structure motivated by machine learning applications. The key innovation is the utilization of compression (i.e., quantization) for communicating operator evaluations and their aggregates between a set of devices and a centralized server node, towards solving the inequalities using extragradient methods. The paper provides theoretical convergence guarantees that account for the quantization errors, and reports the trade-off between communication complexity and convergence.
SP:3ca5d9c117190de8a3ef3b3eb3c495e403b9efd2
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
Variational inequalities in general and saddle point problems in particular are increasingly relevant in machine learning applications , including adversarial learning , GANs , transport and robust optimization . With increasing data and problem sizes necessary to train high performing models across various applications , we need to rely on parallel and distributed computing . However , in distributed training , communication among the compute nodes is a key bottleneck during training , and this problem is exacerbated for high dimensional and over-parameterized models . Due to these considerations , it is important to equip existing methods with strategies that would allow to reduce the volume of transmitted information during training while obtaining a model of comparable quality . In this paper , we present the first theoretically grounded distributed methods for solving variational inequalities and saddle point problems using compressed communication : MASHA1 and MASHA2 . Our theory and methods allow for the use of both unbiased ( such as Randk ; MASHA1 ) and contractive ( such as Topk ; MASHA2 ) compressors . We empirically validate our conclusions using two experimental setups : a standard bilinear min-max problem , and large-scale distributed adversarial training of transformers . 1 INTRODUCTION . 1.1 THE EXPRESSIVE POWER OF VARIATIONAL INEQUALITIES . Due to their abstract mathematical nature and the associated flexibility they offer in modeling various practical problems of interests , variational inequalities ( VI ) have been an active area of research in applied mathematics for more than half a century ( Korpelevich , 1976 ; Harker & Pang. , 1990 ; Facchinei & Pang , 2003 ) . It is well known that VIs can be used to formulate and study convex optimization problems , convex-concave saddle point problems and games , for example , in an elegant unifying mathematical framework ( Korpelevich , 1976 ; Bauschke & Combettes , 2017 ) . Recently , Gidel et al . 
( 2019 ) pointed out that multi-player games can be cast as VIs , and proposed to study mini-max or non-zero-sum games formulations of GANs ( Goodfellow et al. , 2014 ) in this fashion . This allowed them to successfully transfer established insights and well-known techniques from the vast literature on VIs to the study of GANs . In particular , oscillatory behavior of optimization methods ( such as SGD ) not originally designed to solve VI problems is well understood in the VI literature , and established tools , such as averaging and extrapolation , can be successfully applied to the training of GANs . Besides their usefulness in studying GANs and alternative adversarial learning models ( Madry et al. , 2018 ) , VIs have recently attracted considerable attention of the machine learning community due to their ability to model other situations where the minimization of a single loss function does not suffice , such as auction theory ( Syrgkanis et al. , 2015 ) and robust and multi-agent reinforcement learning ( Pinto et al. , 2017 ) . In summary , VIs have recently become a potent tool enabling new advances in practical machine learning situations reaching beyond supervised learning where optimization problems and techniques , which can be seen as special instances of VIs and methods for solving them , reign supreme . 1.2 TRAINING OF SUPERVISED MODELS VIA DISTRIBUTED OPTIMIZATION . On the other hand , in the domain of classical , and hence also much better understood , supervised machine learning characterized by the fact that standard optimization techniques apply and work well , researchers and practitioners face other challenges that are currently beyond the reach of existing VI methods . Indeed , the training of modern supervised machine learning models in general , and deep neural networks in particular , is still extremely challenging . 
To improve the generalization of deployed models, machine learning engineers need to rely on training datasets of ever increasing sizes and on elaborate over-parametrized models (Arora et al., 2018). Supporting workloads of such unprecedented magnitudes would be impossible without combining the latest advances in hardware acceleration, distributed systems and distributed algorithm design (Verbraeken et al., 2019). When training such modern supervised models in a distributed fashion, communication cost is often the bottleneck of the training system, and for this reason, a lot of effort was recently targeted at the design of communication-efficient distributed optimization methods (Konečný et al., 2016; Smith et al., 2018; Ghosh et al., 2020; Gorbunov et al., 2021). A particularly successful technique for improving the communication efficiency of distributed first-order optimization methods is communication compression. The idea behind this technique is rooted in the observation that in practical implementations it is often advantageous to communicate messages compressed via (often randomized) lossy compression techniques instead of communicating the full messages (Seide et al., 2014; Alistarh et al., 2017). If the number of parallel workers is large enough, the noise introduced by compression is reduced, and training with compressed communication will often lead to comparable test error while reducing the number of communicated bits, which results in faster training, both in theory and in practice (Mishchenko et al., 2019; Gorbunov et al., 2021). 1.3 TWO CLASSES OF COMPRESSION OPERATORS . We say that a (possibly stochastic) mapping Q : ℝ^d → ℝ^d is an unbiased compression operator if there exists a constant q ≥ 1 such that E[Q(z)] = z, E‖Q(z)‖² ≤ q‖z‖², ∀z ∈ ℝ^d. (1) Further, we say that a stochastic mapping C : ℝ^d → ℝ^d is a contractive compression operator if there exists a constant δ ≥ 1 such that E‖C(z) − z‖² ≤ (1 − 1/δ)‖z‖², ∀z ∈ ℝ^d. (2) If b is the number of bits needed to represent a single float (e.g., b = 32 or b = 64), then the number of bits needed to represent a generic vector z ∈ ℝ^d is ‖z‖_bits := bd. To describe how much a compression operator reduces its input vector on average, we define the notion of expected density via β := (1/(bd)) E‖Q(z)‖_bits, where ‖Q(z)‖_bits denotes the number of bits needed to represent the compressed vector Q(z). Note that β ≤ 1. For the Randk operator (Alistarh et al., 2018; Beznosikov et al., 2020) we have q = d/k and β = k/d. 1.4 TOWARDS COMMUNICATION-EFFICIENT DISTRIBUTED METHODS FOR VIS . While classical VI algorithms, such as the extragradient method originally proposed by Korpelevich (1976) and later studied by many authors (Nemirovski, 2004; Juditsky et al., 2008), were not designed to work in a distributed environment, virtually all methods that were (Yuan et al., 2014; Hou et al., 2021; Deng & Mahdavi, 2021; Beznosikov et al., 2021b;c) do not consider the general VI problem, but tackle the special case of saddle point problems only. Moreover, none of these distributed methods support communication compression, with the exception of the work of Yuan et al. (2014), which relies on rounding to the nearest integer multiple of a certain quantity. This compression mechanism does not offer theoretical benefits and does not even lead to convergence to the solution, because the errors introduced through rounding persist and prevent the method from solving the problem. 2 SUMMARY OF CONTRIBUTIONS . In this paper, we investigate whether it is possible to design communication-efficient algorithms for solving distributed VI problems by borrowing generic communication compression techniques (1) and (2) from the optimization literature (Seide et al., 2014; Alistarh et al.
, 2017; Mishchenko et al., 2019; Gorbunov et al., 2021; Richtárik et al., 2021) and adapting and embedding them into established and efficient methods for solving VIs (Korpelevich, 1976; Nemirovski, 2004; Juditsky et al., 2008; Alacaoglu & Malitsky, 2021). Whether or not this is possible is an open problem. In summary, we design the first provably communication-efficient algorithms for solving general distributed VI problems (see Section 3, Equation 3) in the deterministic (see (4)) and stochastic (see (5)) regimes, supporting both unbiased (MASHA1 = Algorithm 1) and contractive (MASHA2 = Algorithm 2) compressors. Our methods are explicitly designed to be variance-reduced to achieve better theoretical properties and better practical performance. In Table 1 we give a high-level overview of existing methods for VIs, and contrast them with our methods and results. We now elaborate a bit more: 2.1 TWO DISTRIBUTED PROBLEMS : DETERMINISTIC AND STOCHASTIC . We study two distributed VI problems: i) deterministic, where the monotone operator F : ℝ^d → ℝ^d featured in the VI is the average of M operators {F_m}_{m=1}^M, where M is the number of devices/machines, which can be evaluated in each communication round, and ii) stochastic, where each monotone operator F_m : ℝ^d → ℝ^d has a finite-sum structure on its own, and only a single operator in the sum can be evaluated in each iteration. In contrast to previous works, we study general constrained VIs in the distributed setup (see Section 3), and not merely saddle point problems. 2.2 TWO NEW METHODS WITH COMPRESSED COMMUNICATION : MASHA1 AND MASHA2 . We develop two extensions of the extragradient / extrastep method of Korpelevich (1976) to distributed VIs, depending on whether we use unbiased (1) or contractive (2) compressors, since each type of compressor demands a different algorithmic design and a different analysis.
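To make the two compressor classes concrete, the following is a minimal NumPy sketch of the canonical representatives named above: Randk (unbiased, with q = d/k) and Topk (contractive, with δ = d/k). The checks mirror the defining properties (1) and (2); this is an illustrative sketch, not code from the paper, and for simplicity the density check counts only transmitted values, ignoring the bits needed for coordinate indices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(z, k):
    """Unbiased Rand-k: keep k random coordinates, scale by d/k so E[Q(z)] = z."""
    d = z.size
    out = np.zeros_like(z)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = z[idx] * (d / k)  # scaling gives unbiasedness; variance constant q = d/k
    return out

def top_k(z, k):
    """Contractive Top-k: keep the k largest-magnitude coordinates, unscaled."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]  # dropped entries are the smallest, so ||C(z)-z||^2 <= (1-k/d)||z||^2
    return out

d, k = 100, 10
z = rng.standard_normal(d)

# Unbiasedness of Rand-k, checked empirically over many independent draws
mean_q = np.mean([rand_k(z, k) for _ in range(20000)], axis=0)
assert np.linalg.norm(mean_q - z) < 0.5

# Contraction property of Top-k (deterministic, so no expectation is needed)
c = top_k(z, k)
assert np.sum((c - z) ** 2) <= (1 - k / d) * np.sum(z ** 2)

# Expected density: only k of the d entries are transmitted, i.e. beta = k/d
assert np.count_nonzero(c) == k
```

Note the asymmetry: Randk must be rescaled by d/k to be unbiased (at the price of higher variance), whereas Topk is biased but loses the least energy among all k-sparse approximations.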
In particular, contractive compressors are notoriously hard to analyze even for optimization problems (Karimireddy et al., 2019; Richtárik et al., 2021). Our method based on unbiased compressors is called MASHA1 (Algorithm 1), and our method based on contractive compressors is called MASHA2 (Algorithm 2). Both are designed to handle the deterministic as well as the stochastic setting, and both are enhanced with bespoke variance-reduction techniques for better theoretical and practical performance. Due to space restrictions, we only describe MASHA1 in the main body of the paper, and relegate MASHA2 and the associated theory to Appendix B. 2.3 THEORETICAL COMPLEXITY RESULTS . We establish a number of theoretical complexity results for our methods, which we summarize in Table 2. We consider the strongly convex - strongly concave regime as well as the more general convex - concave regime. In the first case we obtain linear convergence results (O(log(1/ε))) in terms of the distance to the solution, and in the latter we obtain fast sublinear convergence results (O(1/ε)) in terms of the gap function. To get an estimate of the amount of information transmitted, one needs to multiply the estimates from Table 1 by 1/q and 1/δ, respectively. Then, from the point of view of the transmitted information, MASHA1 is better by a factor of √(1/q + 1/M) in comparison with the original extragradient method. 3 PROBLEM FORMULATION AND ASSUMPTIONS . Let us first introduce basic notation. We write ⟨x, y⟩ := Σ_{i=1}^d x_i y_i to denote the standard Euclidean inner product of vectors x, y ∈ ℝ^d. This induces the ℓ₂-norm in ℝ^d as usual: ‖x‖ := √⟨x, x⟩. We also introduce the proximal operator, defined as prox_g(z) := argmin_{u∈Z} { g(u) + (1/2)‖u − z‖² }, which is well defined for proper lower semicontinuous convex functions g : ℝ^d → ℝ ∪ {+∞}.
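As a baseline for the methods discussed above, here is a hedged sketch of the classical (uncompressed) extragradient method in the deterministic distributed setting, where the operator F is the average of M local operators. The synthetic data A_m, the regularization μ, and the step size 1/(2L) are illustrative assumptions; the MASHA algorithms themselves additionally compress communication and apply variance reduction.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, mu = 5, 4, 1.0
# Each device m holds a regularized bilinear saddle point problem
#   min_x max_y (mu/2)||x||^2 + x^T A_m y - (mu/2)||y||^2   (synthetic data),
# whose operator is F_m(z) = (mu*x + A_m y, mu*y - A_m^T x) for z = (x, y).
As = [rng.standard_normal((d, d)) for _ in range(M)]

def F_m(z, A):
    x, y = z[:d], z[d:]
    return np.concatenate([mu * x + A @ y, mu * y - A.T @ x])

def F(z):
    """Deterministic distributed operator: F(z) = (1/M) sum_m F_m(z)."""
    return sum(F_m(z, A) for A in As) / M

A_avg = sum(As) / M
L = np.sqrt(mu ** 2 + np.linalg.norm(A_avg, 2) ** 2)  # Lipschitz constant of F
gamma = 1.0 / (2.0 * L)                               # classical extragradient step size

z = rng.standard_normal(2 * d)
for _ in range(500):
    z_half = z - gamma * F(z)   # extrapolation step
    z = z - gamma * F(z_half)   # update using the extrapolated operator value

# This operator is strongly monotone (modulus mu), so the unique solution is z* = 0
assert np.linalg.norm(z) < 1e-6
```

The extrapolation step is exactly what distinguishes extragradient from plain gradient descent-ascent, which would oscillate or diverge on the bilinear part of this problem.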
This paper considers compression methods for solving variational inequalities (and saddle point problems in particular) in the distributed setting. Both unbiased (MASHA1) and contractive (MASHA2) compression methods are proposed. Theoretical analysis is provided to show that the proposed methods converge.
Generalized Sampling Method for Few Shot Learning
1 INTRODUCTION . Few-shot learning (FSL) refers to the problem of learning from datasets where only a limited number of examples (typically, one to tens per class or per problem in general) are available for training a machine learning model. FSL has gained importance over the years since obtaining large labelled datasets requires a significant investment of time and resources. There is a rich history of research into various methodologies for FSL, two primary ones being model development approaches and representation learning (Wang et al., 2020). Model development approaches aim at capturing the data distribution so that new points can be sampled to improve few-shot classification accuracy (Park et al., 2020; Wang et al., 2019; Chen et al., 2019b; Zhang et al., 2019). They have typically relied on complex models and loss functions to understand data from only a few examples, which limits the interpretability of the model and hampers generalization. Representation learning, on the other hand, aims at identifying feature transformations which allow simple statistical techniques such as nearest-neighbor and Bayesian classification to generalize on few-shot tasks for novel classes (Chen et al., 2020; Xue & Wang, 2020). Starting from early works (Miller et al., 2000) and building to recent methodologies (Yang et al., 2021; Zhang et al., 2021b), representation learning strategies have relied on simple statistical assumptions that may not hold across diverse datasets. Recently, Yang et al. (2021) showed that classes which are semantically similar in meaning are also correlated in the means and covariances of their feature distributions. They used the statistics of classes with plentiful datapoints (called base classes) to learn the distribution of classes with a few datapoints (called novel classes).
Despite some limiting assumptions, their method, called Distribution Calibration, outperformed much more complex models that relied on non-parametric optimization and generative models (Park et al., 2020; Wang et al., 2019; Chen et al., 2019b; Zhang et al., 2019). Building on this idea and inspired by traditional statistics, we present a rigorous generalized sampling method for few-shot learning, called DC+, that outperforms existing state-of-the-art classification accuracy without introducing additional statistical assumptions or complex generative models. Our main contributions are: 1. Introducing a principled approach to estimate novel class mean and covariance from the moments of a random variable weighted by the distance between the novel and the base classes; 2. Incorporating the statistical technique of variance shrinkage, which not only helps increase the accuracy but also stabilizes covariance estimation in cases where the feature extractor is over-parametrized (a common occurrence in modern deep learning models); 3. Extending the applicability of the statistical sampling approach to arbitrary feature extractors by introducing general Gaussianization transformations; and 4. Presenting a single scaling parameter in Euclidean distance weighting that mitigates the need to search among Euclidean, Mahalanobis, and generalized distances for novel class estimation. Combined, our contributions put the statistical sampling approach on a sound foundation, close open questions in earlier research, and demonstrate a 3% to 5% improvement over competitive state-of-the-art methods for 5way-1shot and 5way-5shot tasks on miniImageNet (Ravi & Larochelle, 2017), CUB (Welinder et al., 2010), StanfordDogs (Khosla et al., 2011), and the highest-level classes of tieredImagenet (Ren et al., 2018), and a 1% improvement on the cross-domain 5way-1shot task of miniImagenet −→ CUB. 2 RELATED WORKS .
Learning good features, or manipulating features to help generalize few-shot tasks, is an active research area (Hou et al., 2019; Hao et al., 2019; Li et al., 2019a). Miller et al. (2000) proposed a congealing process which learned a joint distribution among all available classes. This distribution could then be used as prior knowledge for constructing efficient few-shot classifiers. Chen et al. (2020) showed that by pre-training first on the entire set of base classes, few-shot classification accuracies on novel classes could be improved with simple meta-learning on a nearest-centroid-based classification algorithm. Their main focus was on pretraining methods which could improve a cosine-based classifier on the centroids of the extracted features. DC+ can be applied on top of these feature extractor techniques to further explore improvements. To correct bias in centroid or prototype estimations, several rectification methods were proposed: RestoreNet (Xue & Wang, 2020) transforms the feature space to move the images closer to their true centroids. Our proposed method approximates this transformation with scaled Euclidean distances and weighted random variables for novel classes, without any additional learnable parameters. Zhang et al. (2021a) proposed a 4-step method consisting of learning base class details as priors and then using these priors for correcting bias in novel prototype estimation. They then jointly fine-tuned the feature extractor and bias corrector. Our method does not have this multi-step process and works with off-the-shelf feature extractors. Liu et al. (2020c) attempted to reduce the bias in distance estimation by reducing intra-class bias through label propagation and cross-class bias through shifting features. Their method works in the transductive setting, where the entire dataset, including the query set, is consumed without label information.
Our method does not need additional unlabelled data from the query set. Moreover, their method improves upon Prototypical Networks (Snell et al., 2017), whereas we show that our method can be applied with any feature extractor. DC+ can be broadly categorized as a data augmentation method. Several methods have been proposed earlier in this space. Antoniou & Storkey (2019) learn few-shot tasks by randomly labelling a subset of images and then augmenting this subset with different techniques like random crops, flips, etc. Park et al. (2020) tried to transfer variance between different classes in order to simulate the query examples that can be encountered at test time. Other works, like Wang et al. (2019) and Liu et al. (2020b), utilized the intra-class variance to perform augmentation. These methods leverage complex neural networks with a large number of learnable parameters to generate new examples. Our method does not require any additional learnable parameters and uses simple statistical techniques while still outperforming all previous deep learning methods. Closest to DC+, Yang et al. (2021) estimated the novel class distributions based on their similarity with the base classes. Their method implicitly assumed that the base classes were semantically independent of each other when constructing covariance estimates, did not consider the similarity strength between the base and novel classes when estimating novel class statistics, and could not be applied to arbitrary off-the-shelf feature extractors (with activation functions different from ReLU) or to large feature dimensions, which often produce ill-defined covariances. Our method does not make independence assumptions, leverages similarity information in the base classes, and can be applied to any off-the-shelf feature extractor. 3 PROPOSED APPROACH . 3.1 PROBLEM DEFINITION . Few-shot classification problems are defined as Nway-Kshot classification tasks T (Vinyals et al.
, 2016), where, given a small support set S of features x̃ and labels y, S = {(x̃_i, y_i)}_{i=1}^{N×K}, x̃_i ∈ ℝ^d, y_i ∈ C, consisting of K points from each of N classes, the model should correctly classify a query set Q = {(x̃_i, y_i)}_{i=N×K+1}^{N×K+N×q} with q points from each of the N classes in the support set. The entire dataset D is divided into C_b base, C_v validation and C_n novel classes such that C_b ∩ C_v ∩ C_n = ∅ and C_b ∪ C_v ∪ C_n = C. The goal is to train a model with tasks T sampled from C_b and use C_v for hyperparameter tuning, where each task T is an Nway-Kshot classification problem on N unique classes of the set under consideration (for example, the base set C_b or validation set C_v here). The performance of few-shot learning algorithms is reported as the average accuracy on the query set Q of tasks T sampled from C_n. 3.2 ALGORITHM . Our proposed methodology, DC+, is outlined in Algorithm 1. In the following subsections, we incrementally go through the steps of DC+ in detail. 3.2.1 GAUSSIANIZATION OF THE DATA . Following Yang et al. (2021), our sampling methodology assumes that the input features follow a multivariate normal distribution. There are many methods of data Gaussianization, such as Tukey's Ladder of Powers (Tukey, 1977), the Yeo-Johnson transformation (Weisberg, 2001), and Iterative Gaussianization (Chen & Gopinath, 2000). In our experiments, we observed that Tukey's Ladder of Powers outperformed the other methods, but its fractional powers and log transform can only be applied to non-negative features (i.e., feature extractors which have ReLU and equivalent activation functions in their final layers).
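The two transforms just mentioned can be sketched in a few lines. The definitions below follow the standard textbook forms of Tukey's Ladder and the Yeo-Johnson transformation, with an illustrative λ = 0.5 (the paper's exact definitions are given in its Appendix A, and the parameter value and function names here are assumptions for illustration, not the paper's code).

```python
import numpy as np

def tukey(x, lam=0.5):
    """Tukey's Ladder of Powers; valid only for non-negative inputs."""
    x = np.asarray(x, dtype=float)
    return np.power(x, lam) if lam != 0 else np.log(x)

def yeo_johnson(x, lam=0.5):
    """Yeo-Johnson transform; handles negative inputs as well."""
    x = np.asarray(x, dtype=float)
    pos = x >= 0
    out = np.empty_like(x)
    if lam != 0:
        out[pos] = ((x[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(x[pos])
    if lam != 2:
        out[~pos] = -(((-x[~pos] + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    else:
        out[~pos] = -np.log1p(-x[~pos])
    return out

def gaussianize(x, lam=0.5):
    """Use Tukey when every feature is non-negative, else fall back to Yeo-Johnson."""
    x = np.asarray(x, dtype=float)
    return tukey(x, lam) if np.all(x >= 0) else yeo_johnson(x, lam)

relu_feats = np.array([0.0, 1.0, 4.0, 9.0])   # e.g., ReLU-activated features
mixed_feats = np.array([-3.0, 0.0, 3.0])      # e.g., tanh/GELU-style features
assert np.allclose(gaussianize(relu_feats), [0.0, 1.0, 2.0, 3.0])  # sqrt for lam=0.5
assert np.all(np.isfinite(gaussianize(mixed_feats)))
```

For non-negative inputs and λ near 0.5, the two transforms behave similarly, which is why falling back to Yeo-Johnson only when negative values appear is a natural way to extend the method to arbitrary activation functions.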
Expanding the applicability of the sampling method to arbitrary feature extractors and activation functions in deep learning models, we choose the transformation of the input features (denoted by the random variable x) as per equation 1 below: x̂ = tukey(x) if x ≥ 0 always, and x̂ = yeo_johnson(x) otherwise. (1) We give the definitions of tukey and yeo_johnson in Appendix A. 3.2.2 PROPOSED RANDOM VARIABLE . We extrapolate the distribution of a given novel class as a weighted average of the distributions of the k closest base classes. Formally, if x̃ ∈ ℝ^d is a d-dimensional support point from a novel class, and X_i ∈ ℝ^d is a random variable denoting points of base class i, then we compose a random variable X′ representing our estimate of that novel class as X′ = (x̃ + Σ_{i∈S_k} w_i X_i) / (1 + Σ_{i∈S_k} w_i), (2) where w_i are the weights assigned to the closest base classes in S_k. The associated mean and covariance of this random variable are (since x̃ is a constant vector, it does not affect the covariance Σ′) μ′ = (x̃ + Σ_{i∈S_k} w_i μ_i) / (1 + Σ_{i∈S_k} w_i), Σ′ = cov(X′), (3) where μ_i = E[X_i]. There are many ways of estimating w_i. One of the simplest is to look at the distance of the novel point from the base classes. In particular, we find the k closest base classes to x̃: d_i = ‖x̃ − μ_i‖₂, i ∈ C_b, (4) S_k = { i | −d_i ∈ topk(−d_i), i ∈ C_b }. (5) Based on d_i (for alternative distance formulations, see Appendix B), we construct the weight w_i of each base class in S_k as w_i = 1 / (1 + d_i^m), i ∈ S_k, (6) where m is a hyperparameter that controls how quickly w_i decays as a function of d_i and gives us control over the relative weights of the classes in S_k. This form of weighted variable estimation is reminiscent of (though not the same as) inverse distance weighting, which is widely used to estimate unknown functions at interpolated points (Shepard, 1968; Łukaszyk, 2004). It is worth noting that lim_{d_i→0} w_i = 1.
Hence, as the base class i moves closer to the support point x̃, it gains increasing weight until μ_i overlaps with x̃, at which point the weight is 1, the same as in the DC method.
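The distance-based estimation of Eqs. (2)-(6) reduces to a few lines of NumPy. The sketch below uses synthetic base-class means and computes only the novel-class mean μ′ of Eq. (3); the covariance Σ′ additionally needs base-class samples or covariances and is omitted here, and all data and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_base, k, m = 8, 20, 3, 2     # m is the decay hyperparameter from Eq. (6)
base_means = rng.standard_normal((n_base, d))   # mu_i for each base class
x_tilde = rng.standard_normal(d)                # one support point of a novel class

dists = np.linalg.norm(x_tilde - base_means, axis=1)   # Eq. (4): d_i = ||x_tilde - mu_i||_2
S_k = np.argsort(dists)[:k]                            # Eq. (5): indices of k closest base classes
w = 1.0 / (1.0 + dists[S_k] ** m)                      # Eq. (6): w_i = 1 / (1 + d_i^m)
mu_prime = (x_tilde + (w[:, None] * base_means[S_k]).sum(axis=0)) / (1.0 + w.sum())  # Eq. (3)

assert mu_prime.shape == (d,)
# Sanity-check the limit noted in the text: w_i -> 1 as d_i -> 0
assert abs(1.0 / (1.0 + 0.0 ** m) - 1.0) < 1e-12
# mu_prime is the normalized weighted combination of x_tilde and the selected base means
assert np.allclose(mu_prime * (1.0 + w.sum()),
                   x_tilde + (w[:, None] * base_means[S_k]).sum(axis=0))
```

Raising m sharpens the weighting toward the single closest base class, while m → 0 flattens it toward a plain average over S_k, which is the role of the scaling parameter highlighted in the contributions.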
This paper suggests a data augmentation method for few-shot learning. The authors generalize a similar method, DC (Yang et al., 2021), introducing a couple of hyperparameters to tune. Like DC, without introducing any learnable parameters, they suggest a way to augment few-shot data by extrapolating its distribution based on weighted empirical distributions of base classes. In addition, for the case when the number of data points is small, they suggest a covariance shrinkage technique. The suggested method is empirically supported with experiments in which it outperforms other baselines. (Yang et al., 2021) Shuo Yang, Lu Liu, and Min Xu. Free lunch for few-shot learning: Distribution calibration. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=JWOiYxMG92s
This paper presents a statistical sampling method that augments the few-shot samples of each novel class with extrapolated features, constructing an extrapolated distribution from related base classes in order to train a more robust few-shot classifier. It is an improved version of the DC sampling method for few-shot learning (Yang et al., 2021), addressing two issues: the strength of semantic similarity between classes, and the Gaussianization of features produced by activation functions with negative outputs (i.e., other than relu). The first issue is handled with a weighting scheme that takes a weighted average of base-class features together with the novel-class features, from which the mean and covariance of each novel class are estimated. The second issue is handled with the Yeo-Johnson transformation for negative values, while keeping the Tukey transformation for non-negative features. In addition, covariance shrinkage is adopted to deal with singular covariance matrices arising from ill-conditioned feature spaces, where the feature dimension exceeds the number of samples per class. Finally, additional features are sampled from this per-class extrapolated distribution to mitigate the low-data problem, a simple logistic regression is trained, and classification results are reported on query samples.
Generalized Sampling Method for Few Shot Learning
1 INTRODUCTION.

Few-shot learning (FSL) refers to the problem of learning from datasets where only a limited number of examples (typically one to tens per class, or for a problem in general) are available for training a machine learning model. FSL has gained importance over the years since obtaining large labelled datasets requires a significant investment of time and resources. There is a rich history of research into various methodologies for FSL, two primary ones being model development approaches and representation learning (Wang et al., 2020). Model development approaches aim at capturing the data distribution so that new points can be sampled to improve few-shot classification accuracy (Park et al., 2020), (Wang et al., 2019), (Chen et al., 2019b), (Zhang et al., 2019). They have typically relied on complex models and loss functions to understand data from only a few examples, which limits the interpretability of the model and hampers generalization. Representation learning, on the other hand, aims at identifying feature transformations which allow simple statistical techniques like nearest-neighbor and Bayesian classification to generalize on few-shot tasks for novel classes (Chen et al., 2020), (Xue & Wang, 2020). Starting from early works (Miller et al., 2000) and building to recent methodologies (Yang et al., 2021), (Zhang et al., 2021b), representation learning strategies have relied on simple statistical assumptions that may not hold across diverse datasets. Recently, Yang et al. (2021) showed that classes which are semantically similar in meaning are also correlated in the means and covariances of their feature distributions. They used the statistics of classes with plentiful datapoints (called base classes) to learn the distribution of classes with a few datapoints (called novel classes).
Despite some limiting assumptions, their method, called Distribution Calibration, outperformed much more complex models that relied on non-parametric optimization and generative models (Park et al., 2020), (Wang et al., 2019), (Chen et al., 2019b), (Zhang et al., 2019). Building on this idea and inspired by traditional statistics, we present a rigorous generalized sampling method for few-shot learning, called DC+, that outperforms existing state-of-the-art classification accuracy without introducing additional statistical assumptions or complex generative models. Our main contributions are:

1. Introducing a principled approach to estimate novel-class mean and covariance from the moments of a random variable weighted by the distance between the novel and the base classes,
2. Incorporating the statistical technique of variance shrinkage, which not only helps increase the accuracy but also stabilizes covariance estimation in cases where the feature extractor is over-parametrized (a common occurrence in modern deep learning models),
3. Extending the applicability of the statistical sampling approach to arbitrary feature extractors by introducing general Gaussianization transformations, and
4. Presenting a single scaling parameter in Euclidean distance weighting that mitigates the need to search among Euclidean, Mahalanobis, and generalized distances for novel-class estimation.

Combined, our contributions put the statistical sampling approach on a sound foundation, close open questions in earlier research, and demonstrate 3% to 5% improvement over competitive state-of-the-art for 5way-1shot and 5way-5shot tasks on miniImageNet (Ravi & Larochelle, 2017), CUB (Welinder et al., 2010), StanfordDogs (Khosla et al., 2011), and the highest-level classes of tieredImagenet (Ren et al., 2018), as well as a 1% improvement on the cross-domain 5way-1shot task miniImagenet −→ CUB.

2 RELATED WORKS.
Learning good features, or manipulating features to help generalize to few-shot tasks, is an active research area (Hou et al., 2019), (Hao et al., 2019), (Li et al., 2019a). Miller et al. (2000) proposed a congealing process which learned a joint distribution among all available classes. This distribution could then be used as prior knowledge for constructing efficient few-shot classifiers. Chen et al. (2020) showed that by pre-training first on the entire set of base classes, few-shot classification accuracy on novel classes could be improved with simple meta-learning on a nearest-centroid classification algorithm. Their main focus was on pre-training methods which could improve a cosine-based classifier on the centroids of the extracted features. DC+ can be applied on top of these feature-extraction techniques to further explore improvements. To correct bias in centroid or prototype estimation, several rectification methods were proposed: RestoreNet (Xue & Wang, 2020) transforms the feature space to move the images closer to their true centroids. Our proposed method approximates this transformation with scaled Euclidean distances and weighted random variables for novel classes, without any additional learnable parameters. Zhang et al. (2021a) proposed a 4-step method consisting of learning base-class details as priors and then using these priors to correct bias in novel prototype estimation. They then jointly fine-tuned the feature extractor and bias corrector. Our method does not have this multi-step process and works with off-the-shelf feature extractors. Liu et al. (2020c) attempted to reduce the bias in distance estimation by reducing intra-class bias through label propagation and cross-class bias through shifting features. Their method works in the transductive setting, where the entire data, including the query set, is consumed without label information.
Our method does not need additional unlabelled data from the query set. Moreover, their method improves upon Prototypical Networks (Snell et al., 2017), whereas we show that our method can be applied with any feature extractor. DC+ can be broadly categorized as a data augmentation method. Several methods have been proposed earlier in this space. Antoniou & Storkey (2019) learn few-shot tasks by randomly labelling a subset of images and then augmenting this subset with different techniques like random crops, flips, etc. Park et al. (2020) tried to transfer variance between different classes in order to simulate the query examples that can be encountered during testing. Other works like Wang et al. (2019) and Liu et al. (2020b) utilized the intra-class variance to perform augmentation. These methods leverage complex neural networks with a large number of learnable parameters to generate new examples. Our method does not require any additional learnable parameters and uses simple statistical techniques while still outperforming all previous deep learning methods. Closest to DC+, Yang et al. (2021) estimated the novel-class distributions based on their similarity with the base classes. Their method implicitly assumed that the base classes were semantically independent of each other when constructing covariance estimates, did not consider the similarity strength between the base and novel classes when estimating novel-class statistics, and could not be applied to arbitrary off-the-shelf feature extractors (with activation functions different from relu) and large feature dimensions, which often produce ill-defined covariances. Our method does not make independence assumptions, leverages similarity information in the base classes, and can be applied to any off-the-shelf feature extractor.

3 PROPOSED APPROACH.

3.1 PROBLEM DEFINITION.

Few-shot classification problems are defined as Nway-Kshot classification tasks T (Vinyals et al.
, 2016), where given a small support set S of features x̃ and labels y, S = {(x̃_i, y_i)}_{i=1}^{N×K}, x̃_i ∈ R^d, y_i ∈ C, consisting of K points from each of N classes, the model should correctly classify a query set Q = {(x̃_i, y_i)}_{i=N×K+1}^{N×K+N×q} with q points from each of the N classes in the support set. The entire dataset D is divided into C_b base, C_v validation and C_n novel classes such that C_b ∩ C_v ∩ C_n = ∅ and C_b ∪ C_v ∪ C_n = C. The goal is to train a model with tasks T sampled from C_b and use C_v for hyperparameter tuning, where each task T is an Nway-Kshot classification problem on N unique classes of the set under consideration (for example, the base or validation set C_b, C_v here). The performance of few-shot learning algorithms is reported as the average accuracy on the query set Q of tasks T sampled from C_n.

3.2 ALGORITHM.

Our proposed methodology, DC+, is outlined in Algorithm 1. In the following subsections, we incrementally go through the steps of DC+ in detail.

3.2.1 GAUSSIANIZATION OF THE DATA.

Following Yang et al. (2021), our sampling methodology assumes that the input features follow a multivariate normal distribution. There are many methods of data Gaussianization, like Tukey's Ladder of Powers (Tukey, 1977), the Yeo-Johnson Transformation (Weisberg, 2001), and Iterative Gaussianization (Chen & Gopinath, 2000). In our experiments, we observed that Tukey's Ladder of Powers outperformed the other methods, but its fractional powers and log transform can only be applied to non-negative features (i.e., feature extractors which have relu and equivalent activation functions in their final layers).
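To make the non-negativity restriction concrete, here is a minimal sketch of Tukey's Ladder of Powers; the choice λ = 0.5 and the explicit error handling are our own illustrative assumptions, not the paper's settings:

```python
import numpy as np

def tukey(x, lam=0.5):
    """Tukey's Ladder of Powers: x**lam for lam != 0, log(x) for lam == 0.
    With fractional lam it is only defined for non-negative inputs."""
    x = np.asarray(x, dtype=float)
    if np.any(x < 0):
        raise ValueError("Tukey transform requires non-negative features")
    if lam == 0:
        return np.log(x)
    return np.power(x, lam)

# Works for relu-style (non-negative) features ...
print(tukey([0.0, 1.0, 4.0], lam=0.5))   # [0. 1. 2.]

# ... but fails for activations that can be negative (e.g. tanh outputs).
try:
    tukey([-0.3, 0.7], lam=0.5)
except ValueError as e:
    print(e)
```

This is exactly the gap the combined transform in the next paragraph closes by falling back to the Yeo-Johnson transformation for features that can be negative.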
Expanding the applicability of the sampling method to arbitrary feature extractors and activation functions in deep learning models, we choose the transformation of input features, denoted by random variable x, as per equation 1 below:

x̂ = { tukey(x)        if x ≥ 0 always,
      yeo_johnson(x)   otherwise.    (1)

We give the definitions of tukey and yeo_johnson in Appendix A.

3.2.2 PROPOSED RANDOM VARIABLE.

We extrapolate the distribution of a given novel class as a weighted average of the distributions of the k closest base classes. Formally, if x̃ ∈ R^d is a d-dimensional support point from a novel class, and X_i ∈ R^d is a random variable denoting points of base class i, then we compose a random variable X′ representing our estimate of that novel class as

X′ = (x̃ + Σ_{i∈S_k} w_i X_i) / (1 + Σ_{i∈S_k} w_i),    (2)

where w_i are the weights assigned to the closest base classes in S_k. The associated mean and covariance of this random variable are (since x̃ is a constant vector, it does not affect the covariance Σ′):

µ′ = (x̃ + Σ_{i∈S_k} w_i µ_i) / (1 + Σ_{i∈S_k} w_i),    Σ′ = cov(X′),    (3)

where µ_i = E[X_i]. There are many ways of estimating w_i. One of the simplest is to look at the distance of the novel point from the base classes. In particular, we find the k closest base classes to x̃:

d_i = ||x̃ − µ_i||_2, i ∈ C_b,    (4)

S_k = { i | −d_i ∈ topk(−d_i), i ∈ C_b }.    (5)

Based on d_i (for alternative distance formulations, see Appendix B), we construct the weight w_i of each base class in S_k as

w_i = 1 / (1 + d_i^m), i ∈ S_k,    (6)

where m is a hyperparameter that controls how quickly w_i decays as a function of d_i, and thus the relative weights of the classes in S_k. This form of weighted variable estimation is reminiscent of (though not the same as) inverse distance weighting, which is widely used to estimate unknown functions at interpolated points (Shepard, 1968), (Łukaszyk, 2004). It is worth noting that lim_{d_i→0} w_i = 1.
Hence, as base class i moves closer to the support point x̃, it gains increasing weight until µ_i overlaps with x̃, at which point the weight is 1, the same as in the DC method.
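As a minimal sketch of the estimation above (equations 4-6 and the mean in equation 3), assuming precomputed base-class means stored as NumPy arrays; the values of k and m here are illustrative placeholders, not the paper's tuned settings:

```python
import numpy as np

def estimate_novel_mean(x_tilde, base_means, k=2, m=2):
    """Estimate a novel-class mean from a single support point x_tilde
    and the means of the k closest base classes (eqs. 4-6 and 3).

    base_means: array of shape (num_base_classes, d).
    Returns the weighted mean mu' and the selected class indices S_k.
    """
    x_tilde = np.asarray(x_tilde, dtype=float)
    base_means = np.asarray(base_means, dtype=float)

    # Eq. 4: Euclidean distance to each base-class mean.
    d = np.linalg.norm(base_means - x_tilde, axis=1)

    # Eq. 5: indices of the k closest base classes.
    Sk = np.argsort(d)[:k]

    # Eq. 6: weights decay with distance; w_i -> 1 as d_i -> 0.
    w = 1.0 / (1.0 + d[Sk] ** m)

    # Eq. 3 (mean part): weighted average of x_tilde and base means.
    mu = (x_tilde + (w[:, None] * base_means[Sk]).sum(axis=0)) / (1.0 + w.sum())
    return mu, Sk

# Toy usage: a support point coinciding with a base mean gives that class
# weight exactly 1, matching the limit noted in the text.
base = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 0.0]])
mu, Sk = estimate_novel_mean([0.0, 0.0], base, k=1, m=2)
print(mu)  # [0. 0.]
```

The covariance Σ′ would be estimated analogously over the selected base-class covariances; shrinkage (discussed later) stabilizes it when d exceeds the per-class sample count.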
The authors propose a statistical method for estimating the data distribution for few-shot learning. The method is built on a previous work (DC) with minor fixes, but has non-trivial performance improvements. Specifically, the main differences lie in the transformation function, the weighted random variables, and the covariance shrinkage.
Truth Table Deep Convolutional Neural Network, A New SAT-Encodable Architecture - Application To Complete Robustness
1 INTRODUCTION.

Deep Neural Network (DNN) systems offer exceptional performance in a variety of difficult domains (Goodfellow et al., 2016), and today these results far outstrip our ability to secure and analyze those DNNs. As DNNs become widely integrated into a variety of applications, several concerns have emerged: a lack of robustness emphasized by a lack of explainability, the difficulty of integrating human knowledge in postprocessing, and the impossibility of formally verifying their behavior due to their large complexity. Under these circumstances, and especially when these systems are deployed in applications where safety and security are issues, the formal verification of DNN systems and the field of eXplainable AI (XAI) are under intense research efforts. For example, Tesla has recently filed a patent on porting DNNs to a platform incorporating a component dedicated to formal verification (Driscoll, 2020). Also, the European Union's general data protection regulation includes a provision on the explainability of AIs (Regulation, 2016). DNN formal verification methods are mainly based either on Satisfiability Modulo Theory (SMT) (Katz et al., 2017) or Mixed-Integer Programming (MIP) (Xiao et al., 2018), which are not yet scalable to real-valued DNNs. Some recent publications (Jia & Rinard (2020); Narodytska et al. (2019b); Narodytska et al. (2018)) approach the problem of complete verification from the well-known boolean SATisfiability (SAT) (Biere et al., 2009) point of view, where Binary Neural Networks (BNNs, Hubara et al. (2016)) are first converted into SAT formulas and then formally verified using SAT or MaxSAT solvers. This pipeline is computationally efficient (Jia & Rinard, 2020), enables security verifications (Baluta et al.
, 2019), and more generally can answer a vast range of questions, such as how many adversarial attacks exist for a given DNN, image, and noise level (Narodytska et al., 2019a). Besides, this approach is faster than SMT or MIP robustness verification methods (Jia & Rinard, 2020). However, to date, only BNNs can be transformed into a SAT formula, and their strong binary constraints limit their natural accuracy. Moreover, the corresponding SAT conversion method intrinsically leads to formulas with a large number of variables and clauses, impeding interpretability as well as formal verification scalability. Finally, only a few studies (Ignatiev et al. (2019a); Ignatiev et al. (2019b)) investigated the relationship between formal DNN verification and XAI.

Our contributions. In this work, we offer three main contributions. (1) First, we define a new family of real-valued Deep Convolutional Neural Networks (DCNN) that can be encoded into SAT formulas: the Truth Table Deep Convolutional Neural Network (TT-DCNN). Our TT-DCNN leverages its model formulation in the form of a truth table to allow weights and certain intermediate values to be real. To the best of our knowledge, this is the first method to encode a real-valued DCNN into SAT. For the first time, we can extract all the logic classification rules from a subfamily of DCNNs, which builds a bridge between XAI and formal verification, while achieving sufficient natural accuracy for practical use. Indeed, the nature of the SAT conversion for TT-DCNN and BNN is intrinsically different: our method relies upon giving one SAT expression per 2D-CNN filter instead of one SAT expression per neuron. This global interpretability method is in sharp contrast with the previous limited BNN and local DNN explainability. (2) TT-DCNNs offer two main valuable conversion properties over BNNs.
(2-a: Post-tuning) The first property allows us to integrate human knowledge in post-processing: we can now interpret the model inference with simple concepts, which enables us to manually modify the 2D-CNN filter activation towards a desired goal. For example, we decided to focus on reducing overfitting; to this end, we characterize TT-DCNN logic rules resulting from overfitting and propose a filtering approach, which increases the verifiable accuracy without decreasing the natural accuracy (cf. Appendix A.1). (2-b: Tractability) The second property is the possibility to compute all possible model inputs/outputs prior to deployment in production. In an adversarial setting, we can assess whether the input noise will propagate to the output. We can therefore disregard filters with no impact on the output. This leads to a lower number of clauses and variables in the SAT formulas compared to BNNs, which allows using generic SAT solvers and exact model counting solvers. (3) We apply our model to complete robustness verification (cf. Appendix A.1). TT-DCNNs offer a good tradeoff between the state-of-the-art BNN/SAT method (Jia & Rinard, 2020) and real-valued DNN/MILP complete robustness verification methods (Xiao et al. (2018); Tjeng et al. (2017)). This is expected, as our network is both real-weighted and SAT-convertible. Our TT-DCNN model improves the verifiable accuracy by more than 2.5% for high-noise MNIST and by 0.5% for high-noise CIFAR10 when compared to the BNN/SAT method, while decreasing the verification time by a factor of 9 for MNIST and 150 for high-noise CIFAR10 when compared to DNN/MILP methods. Finally, our SAT formulas are 5 and 9 times more compact in terms of the number of clauses for high-noise MNIST and CIFAR10, respectively, compared to the BNN/SAT method.

Outline. Section 2 introduces the notations and the related work. Section 3 presents our new TT-DCNN model and its two main properties.
Section 4 details the complete robustness verification set-up and reports the evaluation results. Finally, we conclude this work in Section 5.

2 BACKGROUND & RELATED WORK.

Boolean SATisfiability (SAT). The boolean SATisfiability problem (SAT) (Biere et al., 2009) is the problem of deciding whether there exists a variable assignment that satisfies a given boolean expression Φ. We can consider a boolean expression in Conjunctive Normal Form (CNF) or in Disjunctive Normal Form (DNF). Both are defined over a set of boolean variables (x_1, ..., x_n). A literal l_i is defined as a variable x_i or its complement x̄_i. A CNF is a conjunction of a set of clauses, Φ = (c_1 ∧ ... ∧ c_m), where each clause c_j is a disjunction of literals, c_j = l_{j1} ∨ ... ∨ l_{jr}. A DNF is a disjunction of a set of clauses, Φ = (c_1 ∨ ... ∨ c_m), where each clause c_j is a conjunction of literals, c_j = l_{j1} ∧ ... ∧ l_{jr}. A pseudo-boolean constraint is a constraint of the form Σ_{p=1}^{N} a_p l_p ◦ b, where a_p ∈ Z, b ∈ Z and ◦ ∈ {≤, =, ≥}, which can be mapped to a SAT formula (Roussel & Manquinho, 2009). However, the output SAT formula contains a tremendous number of clauses and literals compared to the number of variables in the pseudo-boolean constraint, making it very hard to understand (cf. example in Appendix A.2). A boolean function has the form {0,1}^n → {0,1}, and its corresponding truth table gives the outputs for all possible input combinations.

Two-dimensional Convolutional Neural Networks (2D-CNNs). We consider the 2D-CNN as a function Φ_f which, for a given filter f, takes n = k²c inputs at position (i, j), with k the kernel size and c the number of input channels. The outputs can be written y_f^(i,j) = Φ_f(x_1^(i,j), ..., x_n^(i,j)). Note that in the binary case, a truth table between inputs and outputs with 2^n entries can easily be set up (if n is not too large).
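The truth-table construction mentioned above can be sketched in a few lines: treat a binarized filter as a boolean function of its n inputs and enumerate all 2^n combinations. The majority-style filter below is a hypothetical stand-in for a trained one:

```python
from itertools import product

def truth_table(f, n):
    """Enumerate all 2**n boolean inputs of f and record its output."""
    return {bits: f(bits) for bits in product((0, 1), repeat=n)}

# Hypothetical binary filter over a 2x2 single-channel patch
# (n = k*k*c = 4): fires when at least two of its inputs are on.
def example_filter(bits):
    return int(sum(bits) >= 2)

table = truth_table(example_filter, 4)
print(len(table))            # 16 entries, one per input combination
print(table[(1, 1, 0, 0)])   # 1
print(table[(1, 0, 0, 0)])   # 0
```

The exponential table size is why this only stays tractable for the small per-filter input counts n considered here.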
If we now consider a multi-layer network with s convolution layers, a similar truth table can be constructed, except that now the kernel size k needs to be replaced by a patch function P(s) (also referred to as the size of a receptive field in the literature). We have P(1) = k, and P(s+1) = P(s) if and only if the kernel size of the layer is 1. We denote by P(s, i, j) the receptive field after the s-th layer at position (i, j). We denote the vector obtained after the flatten operation and before the final classifier layer as the vector of features V. If there are a total of L layers in the DNN, each element of V is a non-linear function over a patch (P(L), P(L)) of the input.

SAT encoding of neural networks. The sole published method converting a DNN into a SAT formula is limited to BNNs (Narodytska et al. (2018); Cheng et al. (2018)) and involves recomposing a block formed of a 2D-CNN layer, a batch normalization layer, and a step function into an inequality, in order to apply the pseudo-boolean constraint (Roussel & Manquinho, 2009). This approach has been further refined using a different training method and a specific SAT solver, resulting in a significantly reduced verification resolution time (Jia & Rinard, 2020). Although the proposed inequality rewriting is elegant, the corresponding SAT formula contains a tremendous number of clauses and literals compared to the number of variables in the pseudo-boolean constraint (cf. example in Appendix A.2). This prevents both the interpretability and the tractability of those SAT/BNN formulas. Hence, our goal is to find a real-valued DCNN with good performance that is coincidentally convertible to SAT with fully interpretable inference.

Interpretability by global rule extraction. Machine learning interpretability analyses fall into four main categories: either local (input-dependent) or global methods, and either exact or non-exact methods.
The most famous techniques for local non-exact interpretability are the LIME and ANCHOR explainers (Ribeiro et al. (2016); Ribeiro et al. (2018)). PI-explanations (Shih et al., 2018) and SHAP (Lundberg & Lee, 2017) are also popular techniques for local exact and global non-exact methods, respectively. The only scalable method for global exact interpretability was proposed in (Granmo et al., 2019). Our work aims to extend the studies of the latter strategy with the use of truth tables in order to obtain the equivalent conjectures. Using truth tables in machine learning has been documented for hardware optimisation (Soga & Nakahara (2020), Wang et al. (2019)) and recently in an attempt at global non-exact interpretation of BNNs (Burkhardt et al., 2021). Our work pioneers the use of truth tables to create a new architecture enabling the extraction of all the global exact logic rules as well as increased model robustness for formal verification.
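To make the rule-extraction idea concrete, a boolean function given by its truth table can always be written as a DNF with one conjunctive clause per satisfying assignment (a sum of minterms); this sketch omits any minimization such as Quine-McCluskey, and the XOR example is our own illustration:

```python
from itertools import product

def table_to_dnf(f, n):
    """Return a DNF string for boolean function f over n variables:
    one clause (conjunction of literals) per input where f is 1."""
    clauses = []
    for bits in product((0, 1), repeat=n):
        if f(bits):
            lits = [f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)]
            clauses.append("(" + " & ".join(lits) + ")")
    return " | ".join(clauses) if clauses else "False"

# A 2-input rule: XOR has exactly two satisfying assignments.
xor = lambda bits: bits[0] ^ bits[1]
print(table_to_dnf(xor, 2))  # (~x0 & x1) | (x0 & ~x1)
```

Each clause is a globally valid, exact rule for when the function fires, which is the flavor of interpretability the truth-table architecture aims to expose per filter.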
This paper proposes Truth Table Deep Convolutional Neural Networks (TT-DCNNs), which _distills_ real-valued feature vectors (i.e., small convolutional layers) into truth tables (also called masks), represented as boolean formulas in disjunctive normal form (DNF). These masks offer a degree of interpretability, e.g., the set of variables that actually contributed to the truth value. Some manual intervention or post-processing, like removing overfitting masks, can help improve model accuracy. The experimental evaluation shows that model accuracy on the MNIST and CIFAR10 datasets is comparable with or slightly better than other state-of-the-art approaches.
Truth Table Deep Convolutional Neural Network, A New SAT-Encodable Architecture - Application To Complete Robustness
1 INTRODUCTION . Deep Neural Network ( DNN ) systems offer exceptional performance in a variety of difficult domains ( Goodfellow et al. , 2016 ) and today these results far outstrip our ability to secure and analyze those DNNs . As DNNs are becoming widely integrated in a variety of applications , several concerns have emerged : lack of robustness emphasized by a lack of explainability , difficulty of integrating human knowledge in postprocessing and impossibility to formally verify their behavior due to their large complexity . Under these circumstances and especially when these systems are deployed in applications where safety and security are issues , the formal verification of DNN systems and the field of eXplainable AI ( XAI ) are under intense research efforts . For example , Tesla has recently filed a patent on the DNNs ’ portability on a platform incorporating a component dedicated to formal verification ( Driscoll , 2020 ) . Also , the European Union ’ s general data protection regulation includes a provision on the explainability of AIs ( Regulation , 2016 ) . DNNs formal verification methods are mainly based either on Satisfiability Modulo Theory ( SMT ) ( Katz et al. , 2017 ) or Mixed-Integer Programming ( MIP ) ( Xiao et al. , 2018 ) which are not yet scalable to real-valued DNNs . Some recent publications ( Jia & Rinard ( 2020 ) ; Narodytska et al . ( 2019b ) ; Narodytska et al . ( 2018 ) ) approach the problem of complete verification from the well-known boolean SATisfiability ( SAT ) ( Biere et al. , 2009 ) point of view where Binary Neural Networks ( BNNs , Hubara et al . ( 2016 ) ) are first converted into SAT formulas and then formally verified using SAT or MaxSAT solvers . This pipeline is computationally efficient ( Jia & Rinard , 2020 ) , enables security verifications ( Baluta et al. 
, 2019 ) and more generally can answer a vast range of question such as how many adversarial attacks exist for a given DNN , image and noise level ( Narodytska et al. , 2019a ) . Besides , this approach is faster than SMT or MIP robustness verification method ( Jia & Rinard , 2020 ) . However , to date , only BNNs can be transformed into a SAT formula and their strong binary constraints limit their natural accuracy . Moreover , the corresponding SAT conversion method intrinsically leads to formulas with a large number of variables and clauses , impeding interpretability as well as formal verification scalability . Finally , only few studies ( Ignatiev et al . ( 2019a ) ; Ignatiev et al . ( 2019b ) ) investigated the relationship between formal DNNs ’ verification and XAI . Our contributions . In this work , we offer three main contributions . ( 1 ) First , we define a new family of real-valued Deep Convolutional Neural Networks ( DCNN ) that can be encoded into SAT formulas : Truth Table Deep Convolutional Neural Network ( TT-DCNN ) . Our TT-DCNN leverages its model formulation in the form of a truth table to allow weights and certain intermediate values to be real . To the best of our knowledge , this is the first method to encode a real-valued DCNN into SAT . For the first time , we can extract all the logic classification rules from a subfamily of DCNNs which allows a bridge between XAI and formal verification , while achieving sufficient natural accuracy for practical use . Indeed , the nature of the SAT conversion between TT-DCNN and BNN is intrinsically different : our method relies upon giving one SAT expression per 2D-CNN filter instead of one SAT expression per neuron . This global interpretability method is in sharp contrast with the previous limited BNNs and local DNNs explainability . ( 2 ) TT-DCNNs offer two main valuable conversion properties over BNNs . 
( 2-a : Post-tuning ) The first one allows us to integrate human knowledge in the post-processing : we can now interpret the model inference with simple concepts , which enables to manually modify the 2D-CNN filter activation towards a desired goal . For example , we decided to focus on reducing overfitting and , to this end , we characterize TT-DCNN logic rules resulting from overfitting and propose a filtering approach , which increases the verifiable accuracy without decreasing the natural accuracy ( cf . Appendix A.1 ) . ( 2-b : Tractability ) The second property is the possibility to compute all possible model inputs/outputs prior to deployment in production . In an adversarial setting , we can assess whether the input noise will propagate to the output . We can therefore disregard filters with no impact on the output . This leads to a lower number of clauses and variables in the SAT formulas compared to BNNs which allows using generic SAT solvers and exact model counting solvers . ( 3 ) We apply our model to complete robustness verification ( cf . Appendix A.1 ) . TT-DCNNs offer a good tradeoff between the state-of-the-art of BNN/SAT method ( Jia & Rinard , 2020 ) and of real-valued DNN/MILP complete robustness verification methods ( Xiao et al . ( 2018 ) ; Tjeng et al . ( 2017 ) ) . This is expected as our network is both real-weighted and SAT-convertible . Our TT-DCNN model improves the verifiable accuracy by more than 2.5 % for high noise MNIST and by 0.5 % for the high noise of CIFAR10 when compared to BNN/SAT method while decreasing the verification time by a factor of 9 for MNIST and 150 for CIFAR10 high noise when compared to DNN/MILP methods . Finally , our SAT formulas are 5 and 9 times more compact in term of number of clauses for high noise MNIST and CIFAR10 respectively compared to the BNN/SAT method . Outline . Section 2 introduces the notations and the related work . Section 3 presents our new TT-DCNN model and its two main properties . 
Section 4 details the complete robustness verification set-up and reports the evaluation results . Finally , we conclude this work in Section 5 . 2 BACKGROUND & RELATED WORK . Boolean SATisfiability ( SAT ) . The boolean SATisfiability problem ( SAT ) ( Biere et al. , 2009 ) is the problem of deciding whether there exists a variable assignment to satisfy a given boolean expression Φ . We can consider a boolean expression in a Conjunctive Normal Form ( CNF ) or in a Disjunctive Normal Form ( DNF ) . They are both defined over a set of boolean variables ( x1 , · · · , xn ) . A literal li is defined as a variable xi or its complement xi . A CNF is a conjunction of a set of clauses : Φ = ( c1 ∧ · · · ∧ cm ) , where each clause cj is a disjunction of some literals cj = lj1∨· · ·∨ljr . A DNF is a disjunction of a set of clauses : Φ = ( c1∨· · ·∨cm ) , where each clause cj is a conjunction of some literals cj = lj1 ∧ · · · ∧ ljr . A pseudo-boolean constraint is a constraint of the form : N∑ p=1 aplp ◦ b , where ap ∈ Z , b ∈ Z and ◦ ∈ { ≤ , = , ≥ } , which can be mapped to a SAT formula ( Roussel & Manquinho , 2009 ) . However , the output SAT formula contains a tremendous number of clauses and literals compared to the number of variables in the pseudo-boolean constraint making it very hard to understand ( cf . example in Appendix A.2 ) . A boolean function has the form { 0 , 1 } n → { 0 , 1 } and its corresponding truth table gives outputs for all possible inputs combinations . Two-dimensional Convolutional Neural Networks ( 2D-CNNs ) . We consider the 2D-CNN as a function Φf , which , for a given filter f , takes n = k2c inputs at position ( i , j ) with k the kernel size and c the number of input channels . The outputs can be written y ( i , j ) f = Φf ( x ( i , j ) 1 , · · · , x ( i , j ) n ) . Note that in the binary case , a truth table between inputs and outputs for 2n entries can be easily set up ( if n is not too large ) . 
If we now consider a multi-layer network with s convolution layers , a similar truth table can be constructed , except now the kernel size k needs to be replaced by a patch function P ( s ) ( also referred to as the size of a receptive field in the literature ) . We have P ( 1 ) = k and P ( s + 1 ) = P ( s ) if and only if the kernel size of the layer is 1 . We denote by P ( s , i , j ) the receptive field after the s-th layer at position ( i , j ) . We denote the vector obtained after the flatten operation and before the final classifier layer as the vector of features V . If there are a total of L layers in the DNN , each element of V is a non-linear function over a patch ( P ( L ) , P ( L ) ) of the input . SAT encoding of neural networks . The sole published method converting a DNN into a SAT formula is limited to BNNs ( Narodytska et al . ( 2018 ) ; Cheng et al . ( 2018 ) ) and involves recomposing a block formed of a 2D-CNN layer , a batch normalization layer and a step function into an inequality in order to apply the pseudo-boolean constraint ( Roussel & Manquinho , 2009 ) . This approach has been further refined using a different training method and a specific SAT solver , resulting in a significantly reduced verification resolution time ( Jia & Rinard , 2020 ) . Although the proposed inequality rewriting is elegant , the corresponding SAT formula contains a tremendous number of clauses and literals compared to the number of variables in the pseudo-boolean constraint ( cf . example Appendix A.2 ) . This prevents both the interpretability and the tractability of those SAT/BNN formulas . Hence , our goal is to find a real-valued DCNN with good performance that is also convertible to SAT with fully interpretable inference . Interpretability by global rule extraction . Machine learning interpretability analyses fall into four main categories : either local ( input-dependent ) or global methods , and either exact or non-exact methods . 
The most famous techniques for local non-exact interpretability are the LIME and ANCHOR explainers ( Ribeiro et al . ( 2016 ) ; Ribeiro et al . ( 2018 ) ) . PI-explanations ( Shih et al. , 2018 ) and SHAP ( Lundberg & Lee , 2017 ) are also popular techniques for local exact and global non-exact methods respectively . The only scalable method for global exact interpretability was proposed in ( Granmo et al. , 2019 ) . Our work aims to extend the latter strategy with the use of truth tables in order to obtain the equivalent conjectures . Using truth tables in machine learning has been documented for hardware optimisation ( Soga & Nakahara ( 2020 ) ; Wang et al . ( 2019 ) ) and recently in an attempt at global non-exact interpretation of BNNs ( Burkhardt et al. , 2021 ) . Our work pioneers the use of truth tables to create a new architecture enabling the extraction of all the global exact logic rules as well as an increase in model robustness for formal verification .
This paper proposes a new kind of SAT-encodable neural network which has real-valued weights and binary activations. Since the activations are binary, this work constructs a truth table to map the inputs of a CNN block to its outputs. This helps to deduce the output of the CNN block as a boolean function of the input variables while allowing intermediate values to be real. This formula also gives a set of masks that identify the conditions under which a given neuron is active. Tuning this set of masks post-training helps increase the generalisability of the network (Sec 4.1). Furthermore, this makes the encoded formula considerably simpler than the other existing methods for verifying BNNs and thus helps keep things scalable for the SAT solver. The authors have conducted experiments to show that this method achieves better performance in terms of verifiable accuracy and runtime than the existing methods (Table 1).
SP:a12ed5ea62f4a7e8a8f709aa7b425784d77eb84c
Truth Table Deep Convolutional Neural Network, A New SAT-Encodable Architecture - Application To Complete Robustness
1 INTRODUCTION . Deep Neural Network ( DNN ) systems offer exceptional performance in a variety of difficult domains ( Goodfellow et al. , 2016 ) and today these results far outstrip our ability to secure and analyze those DNNs . As DNNs are becoming widely integrated in a variety of applications , several concerns have emerged : a lack of robustness compounded by a lack of explainability , the difficulty of integrating human knowledge in post-processing , and the impossibility of formally verifying their behavior due to their large complexity . Under these circumstances , and especially when these systems are deployed in applications where safety and security are at stake , the formal verification of DNN systems and the field of eXplainable AI ( XAI ) are under intense research efforts . For example , Tesla has recently filed a patent on DNNs ’ portability to a platform incorporating a component dedicated to formal verification ( Driscoll , 2020 ) . Also , the European Union ’ s general data protection regulation includes a provision on the explainability of AIs ( Regulation , 2016 ) . DNN formal verification methods are mainly based either on Satisfiability Modulo Theory ( SMT ) ( Katz et al. , 2017 ) or Mixed-Integer Programming ( MIP ) ( Xiao et al. , 2018 ) , which are not yet scalable to real-valued DNNs . Some recent publications ( Jia & Rinard ( 2020 ) ; Narodytska et al . ( 2019b ) ; Narodytska et al . ( 2018 ) ) approach the problem of complete verification from the well-known boolean SATisfiability ( SAT ) ( Biere et al. , 2009 ) point of view , where Binary Neural Networks ( BNNs , Hubara et al . ( 2016 ) ) are first converted into SAT formulas and then formally verified using SAT or MaxSAT solvers . This pipeline is computationally efficient ( Jia & Rinard , 2020 ) , enables security verifications ( Baluta et al. 
, 2019 ) and , more generally , can answer a vast range of questions , such as how many adversarial attacks exist for a given DNN , image and noise level ( Narodytska et al. , 2019a ) . Besides , this approach is faster than SMT or MIP robustness verification methods ( Jia & Rinard , 2020 ) . However , to date , only BNNs can be transformed into a SAT formula , and their strong binary constraints limit their natural accuracy . Moreover , the corresponding SAT conversion method intrinsically leads to formulas with a large number of variables and clauses , impeding interpretability as well as formal verification scalability . Finally , only a few studies ( Ignatiev et al . ( 2019a ) ; Ignatiev et al . ( 2019b ) ) investigated the relationship between formal DNN verification and XAI . Our contributions . In this work , we offer three main contributions . ( 1 ) First , we define a new family of real-valued Deep Convolutional Neural Networks ( DCNN ) that can be encoded into SAT formulas : Truth Table Deep Convolutional Neural Networks ( TT-DCNN ) . Our TT-DCNN leverages its model formulation in the form of a truth table to allow weights and certain intermediate values to be real . To the best of our knowledge , this is the first method to encode a real-valued DCNN into SAT . For the first time , we can extract all the logic classification rules from a subfamily of DCNNs , which builds a bridge between XAI and formal verification while achieving sufficient natural accuracy for practical use . Indeed , the nature of the SAT conversion between TT-DCNN and BNN is intrinsically different : our method relies upon giving one SAT expression per 2D-CNN filter instead of one SAT expression per neuron . This global interpretability method is in sharp contrast with previous work , which was limited to BNNs or to local DNN explainability . ( 2 ) TT-DCNNs offer two main valuable conversion properties over BNNs . 
( 2-a : Post-tuning ) The first one allows us to integrate human knowledge in the post-processing : we can now interpret the model inference with simple concepts , which enables us to manually modify the 2D-CNN filter activation towards a desired goal . For example , we decided to focus on reducing overfitting and , to this end , we characterize TT-DCNN logic rules resulting from overfitting and propose a filtering approach , which increases the verifiable accuracy without decreasing the natural accuracy ( cf . Appendix A.1 ) . ( 2-b : Tractability ) The second property is the possibility to compute all possible model inputs/outputs prior to deployment in production . In an adversarial setting , we can assess whether the input noise will propagate to the output . We can therefore disregard filters with no impact on the output . This leads to a lower number of clauses and variables in the SAT formulas compared to BNNs , which allows using generic SAT solvers and exact model counting solvers . ( 3 ) We apply our model to complete robustness verification ( cf . Appendix A.1 ) . TT-DCNNs offer a good tradeoff between the state-of-the-art BNN/SAT method ( Jia & Rinard , 2020 ) and real-valued DNN/MILP complete robustness verification methods ( Xiao et al . ( 2018 ) ; Tjeng et al . ( 2017 ) ) . This is expected as our network is both real-weighted and SAT-convertible . Our TT-DCNN model improves the verifiable accuracy by more than 2.5 % for high-noise MNIST and by 0.5 % for high-noise CIFAR10 when compared to the BNN/SAT method , while decreasing the verification time by a factor of 9 for MNIST and 150 for high-noise CIFAR10 when compared to DNN/MILP methods . Finally , our SAT formulas are 5 and 9 times more compact in terms of the number of clauses for high-noise MNIST and CIFAR10 respectively , compared to the BNN/SAT method . Outline . Section 2 introduces the notations and the related work . Section 3 presents our new TT-DCNN model and its two main properties . 
Section 4 details the complete robustness verification set-up and reports the evaluation results . Finally , we conclude this work in Section 5 . 2 BACKGROUND & RELATED WORK . Boolean SATisfiability ( SAT ) . The boolean SATisfiability problem ( SAT ) ( Biere et al. , 2009 ) is the problem of deciding whether there exists a variable assignment that satisfies a given boolean expression Φ . We can consider a boolean expression in a Conjunctive Normal Form ( CNF ) or in a Disjunctive Normal Form ( DNF ) . Both are defined over a set of boolean variables ( x_1 , · · · , x_n ) . A literal l_i is defined as a variable x_i or its complement ¬x_i . A CNF is a conjunction of a set of clauses : Φ = ( c_1 ∧ · · · ∧ c_m ) , where each clause c_j is a disjunction of literals c_j = l_j1 ∨ · · · ∨ l_jr . A DNF is a disjunction of a set of clauses : Φ = ( c_1 ∨ · · · ∨ c_m ) , where each clause c_j is a conjunction of literals c_j = l_j1 ∧ · · · ∧ l_jr . A pseudo-boolean constraint is a constraint of the form ∑_{p=1}^{N} a_p l_p ◦ b , where a_p ∈ Z , b ∈ Z and ◦ ∈ { ≤ , = , ≥ } , which can be mapped to a SAT formula ( Roussel & Manquinho , 2009 ) . However , the resulting SAT formula contains a tremendous number of clauses and literals compared to the number of variables in the pseudo-boolean constraint , making it very hard to understand ( cf . example in Appendix A.2 ) . A boolean function has the form { 0 , 1 }^n → { 0 , 1 } and its corresponding truth table gives the output for every possible input combination . Two-dimensional Convolutional Neural Networks ( 2D-CNNs ) . We consider the 2D-CNN as a function Φ_f which , for a given filter f , takes n = k^2 c inputs at position ( i , j ) , with k the kernel size and c the number of input channels . The outputs can be written y_f^( i , j ) = Φ_f ( x_1^( i , j ) , · · · , x_n^( i , j ) ) . Note that in the binary case , a truth table between inputs and outputs with 2^n entries can easily be set up ( if n is not too large ) . 
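To make the filter-as-boolean-function idea concrete, here is a minimal sketch that enumerates the truth table of a single binarized filter. The weights and bias here are hypothetical hand-picked values, not a trained TT-DCNN filter, and the step threshold is assumed to be 0.

```python
from itertools import product

def filter_truth_table(weights, bias):
    """Enumerate the truth table of one binarized filter.

    Weights and bias are real-valued; only the inputs and the
    step-function output are binary.  With n inputs the table has
    2**n rows, so n = k^2 * c must stay small.
    """
    n = len(weights)
    table = {}
    for bits in product([0, 1], repeat=n):
        pre_activation = sum(w * b for w, b in zip(weights, bits)) + bias
        table[bits] = int(pre_activation > 0)  # binary step activation
    return table

# hypothetical 2x2 single-channel filter: n = 4 inputs -> 16 rows
table = filter_truth_table([0.7, -0.3, 0.5, -0.9], bias=-0.1)
print(len(table))           # -> 16
print(table[(1, 0, 1, 0)])  # -> 1  (0.7 + 0.5 - 0.1 > 0)
```

A DNF expression for the filter can then be read off from the rows whose output is 1, which is what allows one SAT expression per filter rather than per neuron.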
If we now consider a multi-layer network with s convolution layers , a similar truth table can be constructed , except now the kernel size k needs to be replaced by a patch function P ( s ) ( also referred to as the size of a receptive field in the literature ) . We have P ( 1 ) = k and P ( s + 1 ) = P ( s ) if and only if the kernel size of the layer is 1 . We denote by P ( s , i , j ) the receptive field after the s-th layer at position ( i , j ) . We denote the vector obtained after the flatten operation and before the final classifier layer as the vector of features V . If there are a total of L layers in the DNN , each element of V is a non-linear function over a patch ( P ( L ) , P ( L ) ) of the input . SAT encoding of neural networks . The sole published method converting a DNN into a SAT formula is limited to BNNs ( Narodytska et al . ( 2018 ) ; Cheng et al . ( 2018 ) ) and involves recomposing a block formed of a 2D-CNN layer , a batch normalization layer and a step function into an inequality in order to apply the pseudo-boolean constraint ( Roussel & Manquinho , 2009 ) . This approach has been further refined using a different training method and a specific SAT solver , resulting in a significantly reduced verification resolution time ( Jia & Rinard , 2020 ) . Although the proposed inequality rewriting is elegant , the corresponding SAT formula contains a tremendous number of clauses and literals compared to the number of variables in the pseudo-boolean constraint ( cf . example Appendix A.2 ) . This prevents both the interpretability and the tractability of those SAT/BNN formulas . Hence , our goal is to find a real-valued DCNN with good performance that is also convertible to SAT with fully interpretable inference . Interpretability by global rule extraction . Machine learning interpretability analyses fall into four main categories : either local ( input-dependent ) or global methods , and either exact or non-exact methods . 
The most famous techniques for local non-exact interpretability are the LIME and ANCHOR explainers ( Ribeiro et al . ( 2016 ) ; Ribeiro et al . ( 2018 ) ) . PI-explanations ( Shih et al. , 2018 ) and SHAP ( Lundberg & Lee , 2017 ) are also popular techniques for local exact and global non-exact methods respectively . The only scalable method for global exact interpretability was proposed in ( Granmo et al. , 2019 ) . Our work aims to extend the latter strategy with the use of truth tables in order to obtain the equivalent conjectures . Using truth tables in machine learning has been documented for hardware optimisation ( Soga & Nakahara ( 2020 ) ; Wang et al . ( 2019 ) ) and recently in an attempt at global non-exact interpretation of BNNs ( Burkhardt et al. , 2021 ) . Our work pioneers the use of truth tables to create a new architecture enabling the extraction of all the global exact logic rules as well as an increase in model robustness for formal verification .
This article proposes a novel convolutional architecture, dubbed Truth Table DCNNs (TT-DCNNs). Compared to Binary Neural Networks (BNNs), the proposed architecture admits a much more compact propositional logic encoding while improving accuracy. To the best of my understanding, the core idea is using real-weighted convolutions and aggregations that are then binarized with step functions. These convolutions can be fully represented with truth tables, as long as the number of inputs to the filter is small. Furthermore, the authors claim that the proposed architecture is intrinsically more interpretable than BNNs and that it can be post-tuned by integrating human knowledge into the truth tables. Experiments show that TT-DCNNs are more easily verified while retaining practical predictive performance.
SP:a12ed5ea62f4a7e8a8f709aa7b425784d77eb84c
Where is the bottleneck in long-tailed classification?
1 INTRODUCTION . Long-tailed distributions are those in which samples from a small number of head ( majority ) classes vastly outnumber the samples for a large number of tail ( minority ) classes . Such distributions can occur naturally , such as rare diseases in medical contexts or minority ethnic groups in face recognition . Learning from long-tailed data is challenging because it combines three problems : imbalanced learning across the head/body/tail , few-shot learning on the tail classes , and a label distribution shift at test time ( long-tailed training set , balanced test set ) . In practice , most research focuses on the data imbalance or label distribution shift problem , which reflects a communal belief that the bottleneck in long-tailed classification lies in the classifier rather than the representation — the representation is thought to be ” good enough ” . We question this belief and provide evidence against it through a series of experiments . First , we show that the representation is not good enough ( §1.1 ) and may be a larger bottleneck than the classifier . We show that the long-tailed representations and ” normal ” representations have substantial differences in their second moments ( §2.1 ) , that long-tailed representations have significantly worse inter-class separation and intra-class compactness ( §2.2 ) , and that they poorly localize tail classes in feature space ( §8 ) . Finally , we explain why data augmentation boosts performance in long-tailed learning ( Zhong et al . ; Zhang et al. , 2021b ) ( §1.2 ) despite having no effect on the imbalance factor , showing that it significantly improves localization of classes ( §8 ) and confers robustness to distribution shift by improving inter-class separation and intra-class compactness ( §2.2 ) . 1.1 IS THE REPRESENTATION OR CLASSIFIER THE BOTTLENECK ? . 
There is a tension between representation learning and classifier learning in long-tailed classification , due to their opposite locations in the bias-variance trade-off spectrum . Representation learning is thought to suffer more from data scarcity than from data imbalance ( Yang & Xu , 2020 ) , while classifier learning is thought to suffer more from data imbalance . Decoupled training ( Zhong et al . ; Kang et al. , 2020 ) methods attempt to address this tension by training the classifier and representation separately ( left side of Fig 1 ) . It is a common belief that the representations are ” good enough ” and the bottleneck is the classifier ( Zhang et al. , 2021a ; Yang & Xu , 2020 ; Kang et al. , 2020 ) , thus many works focus on interventions in the classifier learning phase ( Menon et al. , 2020 ) . We test this belief by carrying out an altered form of decoupled training ( right side of Fig 1 ) . The three most commonly used long-tailed datasets — CIFAR-10/100-LT and ImageNet-LT ( Liu et al. , 2019 ) — all have larger corresponding balanced datasets , namely CIFAR-10/100 ( Krizhevsky ) and ImageNet-1k ( Deng et al. , 2009 ) . Let DLT denote the long-tailed version of a dataset , and D ? denote the normal , balanced version . To assess the relative impact of the representation and classifier , we apply decoupled training without resampling . Instead of resampling , we swap between the DLT and D ? versions of the dataset between Stage 1 and Stage 2 . If the classifier is truly the bottleneck , we expect that a representation trained on D ? with a classifier retrained on DLT should perform worse on the balanced test set than a representation trained on DLT with a classifier retrained on D ? . We find that this is not true ( Table 1 ) . As expected , the D ? → D ? model ( with a representation and classifier trained on the true , balanced dataset ) performs the best . However , across all three datasets , the D ? 
→ DLT model outperforms the DLT → D ? model by a substantial margin . This indicates that , contrary to popular belief , the bias in the representation dominates the bias in the classifier , and not the other way around . 1.2 WHY DOES DATA AUGMENTATION HELP ? . At first glance , it is unclear why class-agnostic input data augmentation should improve long-tailed classification . Data augmentation is incapable of addressing data imbalance — it leaves the imbalance factor of a dataset unchanged . Nevertheless , input data augmentations such as MixUp ( Zhang et al. , 2018 ) , Manifold Mixup ( Verma et al. , 2019 ) and AutoAug ( Cubuk et al. , 2019 ) have proven to be highly effective for reducing bias in long-tailed classification ( Zhong et al . ; Zhang et al. , 2021b ; Tan et al. , 2020 ) . When combined with decoupled training ( Kang et al. , 2020 ) , data augmentation results in even larger increases ( Fig 3 ) . However , the mechanism by which data augmentation improves performance and reduces bias is not known . It can not address data imbalance , so data augmentation must be affecting another quality of the representation . 1.3 WHY ISN ’ T REBALANCING ENOUGH ? . Resampling and reweighting strategies have been an essential part of long-tailed classification . However , as first observed by Zhang et al . ( 2021a ) , rebalancing does not seem to be enough . Following the methodology of Zhang et al . ( 2021a ) , we construct an empirical classifier bound by first training a representation on the long-tailed DLT dataset , in this case CIFAR-10 with an imbalance factor of 100 . Next , we follow the standard decoupled training strategy for cRT ( Kang et al. , 2020 ) and retrain the classifier on a version of DLT that has been balanced by class-aware resampling ( Kang et al. , 2020 ) after freezing the representation . 
To obtain the empirical classifier performance bound , we then take the frozen representations from the first stage and train a new classifier on the true dataset D ? . The performance of the classifier trained on D ? can be viewed as an empirical upper bound on the performance obtainable through any resampling strategy , and is the ideal performance to aim for . There is a consistent gap between the class-aware resampling strategy and the upper bound ( Fig 3 ) , suggesting that the resampling strategy alone leaves performance on the table . Moreover , applying data augmentation again in the second stage boosts the performance even further , which is surprising in light of the fact that the classifier is primarily thought to suffer from imbalance — so why should applying data augmentation in the classifier learning phase have such a big impact on performance ? 2 EXPERIMENTS . In the previous section , we showed results that provoke the following questions . 1 . Why are the representations learned from the long-tailed data so much poorer than the representations learned from the true dataset ? 2 . Why does data augmentation help in the representation learning phase of decoupled training for long-tailed classification ? We provide answers to these questions by analyzing the differences between the representations learned from the long-tailed data , the representations learned from the true dataset , and representations learned from the long-tailed data with augmentation . 2.1 REPRESENTATIONS LEARNED FROM LONG-TAILED DATA HAVE DIFFERENT 2ND MOMENTS . We train a ResNet-32 on a true , balanced dataset D ? and a ResNet-32 on a long-tailed dataset DLT . We then extract features from both models for the entire dataset ( D ? ) , to reflect the true data distribution . We use CIFAR10/100-LT with an imbalance factor of 100 . The representations learned from long-tailed data differ from the representations learned from the true dataset in two significant ways . 
First , there are notable differences in the variance and covariance of the classes that belonged to the head or tail in the long-tailed dataset within the DLT representation , but not in the D ? representations ( Fig 4 ) . Specifically , the variance of the features belonging to the tail and body classes ( the L1-norm of the covariance matrix diagonal ) diverges substantially from the variance of the features belonging to the head classes . This is not only true of the variance , but of the covariance . The Frobenius norm of the sample covariance matrix computed from the features of a class is also different between the D ? and DLT learners . For the learner trained on D ? , there are no significant differences in variance or covariance between the head , tail , and body , and differences among the classes are dominated by noise . For the learner trained on DLT , the variance of the head differs systematically from that of the tail and body , and this is also true of the covariance . Next , we examine the proportion of variance explained by the first principal component of the data matrix for each class . Concretely , we take all feature vectors belonging to the class from the complete dataset D ? , and compute the first principal component of the data matrix of features for the class . We then plot the proportion of variance explained by the first principal component of each class ( Fig . 5 ) , thus obtaining an estimate of the relative complexity of the subspace each class resides in . Across all three datasets , we see the same result : the first principal component explains more of the variance in the representations learned from DLT than in the representations learned from D ? . Intuitively , these results suggest that the representations learned from long-tailed data are substantially different from the representations learned from the true , balanced dataset . The differences are most marked for the tail and body classes , and least for the head classes . 
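The per-class first-principal-component statistic described above can be sketched as follows. This is a minimal illustration on synthetic feature matrices, not the paper's ResNet-32 features; the two toy classes are constructed so one is isotropic and one collapses onto a single direction.

```python
import numpy as np

def first_pc_explained_variance(features):
    """Fraction of variance captured by the first principal component.

    `features` is an (n_samples, dim) matrix of feature vectors for a
    single class; a value near 1 means the class collapses onto a line.
    """
    centered = features - features.mean(axis=0)
    # singular values give per-component variances: var_i ~ s_i ** 2
    s = np.linalg.svd(centered, compute_uv=False)
    return float(s[0] ** 2 / np.sum(s ** 2))

rng = np.random.default_rng(0)
isotropic = rng.normal(size=(500, 64))               # variance spread over many axes
collapsed = rng.normal(size=(500, 1)) * np.ones(64)  # one dominant direction
print(first_pc_explained_variance(isotropic))  # small, close to 1/64
print(first_pc_explained_variance(collapsed))  # -> 1.0 (up to float rounding)
```

In the paper's terms, classes whose statistic is closer to 1 occupy a less complex subspace, which is what the DLT-learned representations exhibit.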
Geometrically , the long-tailed representations appear to occupy a less complex subspace than the D ? representations , yet exhibit greater variance , suggesting that the feature space is biased — fewer dimensions in the long-tailed feature space are relevant to all classes , and thus the variance concentrates in fewer dimensions of the feature space . 2.2 REPRESENTATIONS LEARNED FROM LONG-TAILED DATA ARE MORE DIFFUSE AND LESS SEPARABLE . We now examine the representations from the angular perspective . Recall that the score of the class k logit in a standard linear classifier for a feature vector x is wk · x = ||wk|| × ||x|| × cos θ ( 1 ) . Differences in intra-class compactness between representations are a function of the angle θ between x and wk when ||wk|| = ||x|| . In practice , the norm ||wk|| of the weight vector of a class k is strongly correlated with the cardinality of the class ( Menon et al. , 2020 ; Kang et al. , 2020 ; Zhong et al . ) — tail classes have smaller norms and vice versa , reflecting the class priors . However , decoupled training roughly equalizes the logit norms ( Kang et al. , 2020 ) , allowing us to ignore the ||wk|| × ||x|| term and focus only on the angle θ . In this setting , the classifier reduces to an angular classifier , and a sample x will be assigned to the logit wk to which it has the smallest angular distance . Thus , we proceed to an examination of the angular distribution created by the features of the long-tailed representation learner and the D ? representation learner . The representations of the tail classes learned by the long-tailed representation learner are significantly less compact than the representations learned by the D ? representation learner with respect to the true data distribution ( Fig 6 ) . While the long-tailed per-class representations are compact w.r.t. the long-tailed distribution , when the unseen samples making up the true data distribution of D ? 
are added , the angular spread significantly increases for the tail classes . This means that any classifier boundaries learned using only the samples available in the DLT distribution will fail to generalize , as the majority of unseen samples for a tail class will escape the classification boundaries . Interestingly , augmentation appears to significantly reduce the compactness gap between the long-tailed and true data distributions for the tail classes . This may partly explain the success of data augmentation in long-tailed classification . However , data augmentation ( MixUp ) also significantly increases the angular spread of all classes , in contrast to the representation trained on D ? , which addresses the distribution shift without compromising the angular spread of the classes . The representations of the tail classes learned by the long-tailed representation learner also display worse inter-class separation with respect to the true data distribution ( Fig 7 ) . The angular distance to the nearest incorrect class center significantly shrinks when the unseen samples from the true data distribution are considered . Again , augmentation makes a significant difference : the angular distribution shift of representations trained on the long-tailed dataset with augmentation is far smaller than that of representations trained on the long-tailed dataset without augmentation . Thus , two major differences between the representations learned from long-tailed data and representations learned from the D ? data are the effects of distribution shift on intra-class compactness and inter-class separation . When unseen samples from the true data distribution are added , the angular distributions of the tail classes balloon for the representation trained on long-tailed data . 
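The angular measurements above, intra-class spread around a class center following the cosine decomposition in Eq. (1), can be sketched on toy features. The class centers and feature clouds here are synthetic stand-ins for the extracted ResNet features.

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two vectors, from w . x = ||w|| ||x|| cos(theta)."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def intra_class_spread(features, center):
    """Mean angular distance of a class's features to its class center;
    a larger value means a more diffuse (less compact) class."""
    return float(np.mean([angle_deg(f, center) for f in features]))

# toy classes around the same center direction: one tight, one diffuse
rng = np.random.default_rng(0)
center = np.ones(16)
tight = center + 0.05 * rng.normal(size=(200, 16))
diffuse = center + 1.0 * rng.normal(size=(200, 16))
print(intra_class_spread(tight, center) < intra_class_spread(diffuse, center))  # -> True
```

Inter-class separation can be measured the same way, as the angle from a class center to its nearest other-class center.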
The poor intra-class compactness also leads to poor inter-class separation w.r.t. the true data distribution , as the expansion of angular distributions results in each class eating into the margin between it and its nearest neighbor . This means that classifier boundaries drawn on the long-tailed distribution will likely be optimistic and fail to generalize , due to unseen samples from the true distribution mostly escaping the convex polytope of the long-tailed classifier boundaries . However , data augmentation seems to help , because the representations learned from long-tailed data with data augmentation exhibit much better inter-class separation and intra-class compactness under distribution shift . Differences in inter-class separation between representations
This paper tries to prove that there is a bottleneck in feature learning for long-tailed classification and that data augmentation can help relieve the issues in the long-tailed feature space. Three major experiments were done to prove that the long-tailed feature space 1) is more biased than a balanced feature space, 2) is more diffuse and less compact than a balanced feature space, and 3) is less localized in terms of feature centroids. Data augmentation can help alleviate all three issues.
SP:03a10ca1c5c7b5d974354c2d9936bf95f348e3e9
Where is the bottleneck in long-tailed classification?
1 INTRODUCTION . Long-tailed distributions are those in which samples from a small number of head ( majority ) classes vastly outnumber the samples for a large number of tail ( minority ) classes . Such distributions can occur naturally , such as rare diseases in medical contexts or minority ethnic groups in face recognition . Learning from long-tailed data is challenging because it combines three problems : imbalanced learning across the head/body/tail , few-shot learning on the tail classes , and a label distribution shift at test time ( long-tailed training set , balanced test set ) . In practice , most research focuses on the data imbalance or label distribution shift problem , which reflects a communal belief that the bottleneck in long-tailed classification lies in the classifier rather than the representation — the representation is thought to be ” good enough ” . We question this belief and provide evidence against it through a series of experiments . First , we show that the representation is not good enough ( §1.1 ) and may be a larger bottleneck than the classifier . We show that the long-tailed representations and ” normal ” representations have substantial differences in their second moments ( §2.1 ) , that long-tailed representations have significantly worse inter-class separation and intra-class compactness ( §2.2 ) , and that they poorly localize tail classes in feature space ( §8 ) . Finally , we explain why data augmentation boosts performance in long-tailed learning ( Zhong et al . ; Zhang et al. , 2021b ) ( §1.2 ) despite having no effect on the imbalance factor , showing that it significantly improves localization of classes ( §8 ) and confers robustness to distribution shift by improving inter-class separation and intra-class compactness ( §2.2 ) . 1.1 IS THE REPRESENTATION OR CLASSIFIER THE BOTTLENECK ? . 
There is a tension between representation learning and classifier learning in long-tailed classification, due to their opposite locations on the bias-variance trade-off spectrum. Representation learning is thought to suffer more from data scarcity than from data imbalance (Yang & Xu, 2020), while classifier learning is thought to suffer more from data imbalance. Decoupled training methods (Zhong et al.; Kang et al., 2020) attempt to address this tension by training the classifier and representation separately (left side of Fig 1). It is a common belief that the representations are "good enough" and the bottleneck is the classifier (Zhang et al., 2021a; Yang & Xu, 2020; Kang et al., 2020), thus many works focus on interventions in the classifier learning phase (Menon et al., 2020). We test this belief by carrying out an altered form of decoupled training (right side of Fig 1). The three most commonly used long-tailed datasets, CIFAR-10/100 LT and ImageNet-LT (Liu et al., 2019), all have larger corresponding balanced datasets, namely CIFAR-10/100 (Krizhevsky) and ImageNet-1k (Deng et al., 2009). Let DLT denote the long-tailed version of a dataset, and D? denote the normal, balanced version. To assess the relative impact of the representation and classifier, we apply decoupled training without resampling. Instead of resampling, we swap between the DLT and D? versions of the dataset between Stage 1 and Stage 2. If the classifier is truly the bottleneck, we expect that a representation trained on D? with a classifier retrained on DLT should perform worse on the balanced test set than a representation trained on DLT with a classifier retrained on D?. We find that this is not true (Table 1). As expected, the D? → D? model (with a representation and classifier trained on the true, balanced dataset) performs the best. However, across all three datasets, the D?
→ DLT model outperforms the DLT → D? model by a substantial margin. This indicates that, contrary to popular belief, the bias in the representation dominates the bias in the classifier, and not the other way around. 1.2 WHY DOES DATA AUGMENTATION HELP? At first glance, it is unclear why class-agnostic input data augmentation should improve long-tailed classification. Data augmentation is incapable of addressing data imbalance: it leaves the imbalance factor of a dataset unchanged. Nevertheless, input data augmentations such as MixUp (Zhang et al., 2018), Manifold Mixup (Verma et al., 2019) and AutoAug (Cubuk et al., 2019) have proven to be highly effective for reducing bias in long-tailed classification (Zhong et al.; Zhang et al., 2021b; Tan et al., 2020). When combined with decoupled training (Kang et al., 2020), data augmentation yields even larger gains (Fig 3). However, the mechanism by which data augmentation improves performance and reduces bias is not known. It cannot address data imbalance, so it must be affecting another quality of the representation. 1.3 WHY ISN'T REBALANCING ENOUGH? Resampling and reweighting strategies have been an essential part of long-tailed classification. However, as first observed by Zhang et al. (2021a), rebalancing does not seem to be enough. Following the methodology of Zhang et al. (2021a), we construct an empirical classifier bound by first training a representation on the long-tailed dataset DLT, in this case CIFAR-10 with an imbalance factor of 100. Next, we follow the standard decoupled training strategy for cRT (Kang et al., 2020): after freezing the representation, we retrain the classifier on a version of DLT that has been balanced by class-aware resampling (Kang et al., 2020).
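The two-stage recipe above (train a representation, freeze it, then retrain only the classifier, possibly on a different version of the dataset) can be sketched with a toy numpy example. Everything here is illustrative and not the paper's code: the "representation" is a fixed random projection, and all shapes, names, and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a frozen "representation" (random projection + ReLU)
# and a linear softmax classifier trained by gradient descent.
def featurize(X, W_rep):
    return np.maximum(X @ W_rep, 0.0)  # Stage-1 features, kept frozen

def train_classifier(feats, y, n_cls, lr=0.5, steps=200):
    W = np.zeros((feats.shape[1], n_cls))
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0  # softmax cross-entropy gradient
        W -= lr * feats.T @ p / len(y)
    return W

# Stage 1: the representation is fixed here; in the experiments it would
# be trained on either the D_LT or the D? version of the dataset.
W_rep = rng.normal(size=(10, 16))
X, y = rng.normal(size=(200, 10)), rng.integers(0, 3, size=200)

# Stage 2: freeze the representation and retrain only the classifier;
# swapping the dataset version here gives the D_LT <-> D? comparisons.
W_cls = train_classifier(featurize(X, W_rep), y, n_cls=3)
acc = ((featurize(X, W_rep) @ W_cls).argmax(1) == y).mean()
```

Swapping which dataset version feeds each stage, while holding everything else fixed, isolates the contribution of the representation from that of the classifier.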
To obtain the empirical classifier performance bound, we then take the frozen representations from the first stage and train a new classifier on the true dataset D?. The performance of the classifier trained on D? can be viewed as an empirical upper bound on the performance obtainable through any resampling strategy, and is the ideal performance to aim for. There is a consistent gap between the class-aware resampling strategy and the upper bound (Fig 3), suggesting that the resampling strategy alone leaves performance on the table. Moreover, applying data augmentation again in the second stage boosts performance even further, which is surprising in light of the fact that the classifier is primarily thought to suffer from imbalance: so why should applying data augmentation in the classifier learning phase have such a big impact on performance? 2 EXPERIMENTS. In the previous section, we showed results that provoke three questions. 1. Why are the representations learned from long-tailed data so much poorer than the representations learned from the true dataset? 2. Why does data augmentation help in the representation learning phase of decoupled training for long-tailed classification? 3. Why isn't rebalancing alone enough to close the gap to the empirical upper bound? We provide answers to these questions by analyzing the differences between the representations learned from the long-tailed data, the representations learned from the true dataset, and representations learned from the long-tailed data with augmentation. 2.1 REPRESENTATIONS LEARNED FROM LONG-TAILED DATA HAVE DIFFERENT 2ND MOMENTS. We train a ResNet-32 on a true, balanced dataset D? and a ResNet-32 on a long-tailed dataset DLT. We then extract features from both models for the entire dataset (D?), to reflect the true data distribution. We use CIFAR10/100-LT with an imbalance factor of 100. The representations learned from long-tailed data differ from the representations learned from the true dataset in two significant ways.
First, there are notable differences in the variance and covariance of the classes that belonged to the head or tail of the long-tailed dataset within the DLT representation, but not within the D? representation (Fig 4). Specifically, the variance of the features belonging to the tail and body classes (the L1-norm of the covariance matrix diagonal) diverges substantially from the variance of the features belonging to the head classes. This is true not only of the variance, but of the covariance: the Frobenius norm of the sample covariance matrix computed from the features of a class also differs between the D? and DLT learners. For the learner trained on D?, there are no significant differences in variance or covariance between the head, body, and tail, and differences among the classes are dominated by noise. For the learner trained on DLT, the variance of the head classes is systematically lower than that of the tail and body classes, and this is also true of the covariance. Next, we examine the proportion of variance explained by the first principal component of the data matrix for each class. Concretely, we take all feature vectors belonging to the class from the complete dataset D?, and compute the first principal component of the data matrix of features for the class. We then plot the proportion of variance explained by the first principal component of each class (Fig. 5), thus obtaining an estimate of the relative complexity of the subspace each class resides in. Across all three datasets, we see the same result: the first principal component explains more of the variance in the representations learned from DLT than in the representations learned from D?. Intuitively, these results suggest that the representations learned from long-tailed data are substantially different from the representations learned from the true, balanced dataset. The differences are most marked for the tail and body classes, and least for the head classes.
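The per-class second-moment statistics described above can be computed directly from extracted features. A minimal numpy sketch (the function name and toy data are ours, purely illustrative):

```python
import numpy as np

def second_moment_stats(feats):
    """Second-moment statistics for one class's feature matrix:
    L1 norm of the covariance diagonal (total variance), Frobenius
    norm of the covariance matrix, and the fraction of variance
    explained by the first principal component."""
    cov = np.cov(feats, rowvar=False)
    total_var = np.abs(np.diag(cov)).sum()   # L1 norm of the diagonal
    frob = np.linalg.norm(cov, ord="fro")    # Frobenius norm
    evals = np.linalg.eigvalsh(cov)          # eigenvalues, ascending
    pc1_ratio = evals[-1] / evals.sum()      # PC1 explained-variance ratio
    return total_var, frob, pc1_ratio

# Toy illustration: an elongated Gaussian concentrates its variance in
# one direction, so PC1 explains more of it than for an isotropic cloud.
rng = np.random.default_rng(0)
iso = rng.normal(size=(500, 8))
elong = iso * np.array([5.0, 1, 1, 1, 1, 1, 1, 1])
print(second_moment_stats(iso)[2], second_moment_stats(elong)[2])
```

Running this function over the features of each class, grouped into head, body, and tail, reproduces the kind of comparison plotted in Figs 4 and 5.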
Geometrically, the long-tailed representations appear to occupy a less complex subspace than the D? representations, yet exhibit greater variance, suggesting that the feature space is biased: fewer dimensions of the long-tailed feature space are relevant to all classes, and thus the variance concentrates in fewer dimensions of the feature space. 2.2 REPRESENTATIONS LEARNED FROM LONG-TAILED DATA ARE MORE DIFFUSE AND LESS SEPARABLE. We now examine the representations from the angular perspective. Recall the score of the class-k logit in a standard linear classifier for a feature vector x: w_k · x = ||w_k|| ||x|| cos θ. (1) When ||w_k|| = ||x||, differences in intra-class compactness between representations are a function only of the angle θ between x and w_k. In practice, the norm ||w_k|| of the logit of a class k is strongly correlated with the cardinality of the class (Menon et al., 2020; Kang et al., 2020; Zhong et al.): tail classes have smaller norms and vice versa, reflecting the class priors. However, decoupled training roughly equalizes the logit norms (Kang et al., 2020), allowing us to ignore the ||w_k|| ||x|| term and focus only on the angle θ. In this setting, the classifier reduces to an angular classifier, and a sample x is assigned to the logit w_k to which it has the smallest angular distance. Thus, we proceed to an examination of the angular distributions created by the features of the long-tailed representation learner and the D? representation learner. The representations of the tail classes learned by the long-tailed representation learner are significantly less compact than those learned by the D? representation learner with respect to the true data distribution (Fig 6). While the long-tailed per-class representations are compact w.r.t. the long-tailed distribution, when the unseen samples making up the true data distribution of D?
are added, the angular spread increases significantly for the tail classes. This means that any classifier boundaries learned using only the samples available in the DLT distribution will fail to generalize, as the majority of unseen samples for a tail class will escape the classification boundaries. Interestingly, augmentation appears to significantly reduce the compactness gap between the long-tailed and true data distributions for the tail classes. This may partly explain the success of data augmentation in long-tailed classification. However, data augmentation (MixUp) also significantly increases the angular spread of all classes, in contrast to the representation trained on D?, which addresses the distribution shift without compromising the angular spread of the classes. The representations of the tail classes learned by the long-tailed representation learner also display worse inter-class separation with respect to the true data distribution (Fig 7). The angular distance to the nearest incorrect class center shrinks significantly when the unseen samples from the true data distribution are considered. Again, augmentation makes a significant difference: the angular distribution shift of representations trained on the long-tailed dataset with augmentation is far smaller than that of representations trained on the long-tailed dataset without augmentation. Thus, two major differences between the representations learned from long-tailed data and those learned from the D? data are the effects of distribution shift on intra-class compactness and inter-class separation. When unseen samples from the true data distribution are added, the angular distributions of the tail classes balloon for the representation trained on long-tailed data.
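The angular quantities behind Figs 6 and 7, intra-class spread around a class centroid and angular distance to the nearest incorrect centroid, can be estimated as follows. This is our own illustrative numpy sketch of those metrics, not the authors' code:

```python
import numpy as np

def angular_stats(feats, labels):
    """Intra-class spread: angle (degrees) from each sample to its own
    class centroid. Inter-class separation: angle from each centroid to
    the nearest *other* centroid."""
    classes = np.unique(labels)
    cents = np.stack([feats[labels == c].mean(0) for c in classes])
    cents_n = cents / np.linalg.norm(cents, axis=1, keepdims=True)
    feats_n = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    own = cents_n[np.searchsorted(classes, labels)]
    intra = np.degrees(np.arccos(np.clip((feats_n * own).sum(1), -1, 1)))
    cos = np.clip(cents_n @ cents_n.T, -1, 1)
    np.fill_diagonal(cos, -1.0)  # exclude each centroid's angle to itself
    inter = np.degrees(np.arccos(cos.max(1)))
    return intra, inter

# Toy check: two tight clusters along orthogonal axes should show small
# intra-class angles and roughly 90 degrees of inter-class separation.
rng = np.random.default_rng(0)
a = rng.normal(0, 0.1, size=(100, 8)); a[:, 0] += 10.0
b = rng.normal(0, 0.1, size=(100, 8)); b[:, 1] += 10.0
feats = np.vstack([a, b])
labels = np.array([0] * 100 + [1] * 100)
intra, inter = angular_stats(feats, labels)
print(intra.mean(), inter)
```

Comparing these statistics computed on the DLT samples alone versus on all of D? quantifies how much the angular distributions "balloon" under distribution shift.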
The poor intra-class compactness also leads to poor inter-class separation w.r.t. the true data distribution, as the expansion of the angular distributions causes each class to eat into the margin between it and its nearest neighbor. This means that classifier boundaries drawn on the long-tailed distribution will likely be optimistic and fail to generalize, because unseen samples from the true distribution mostly escape the convex polytope of the long-tailed classifier boundaries. However, data augmentation seems to help: the representations learned from long-tailed data with data augmentation exhibit much better inter-class separation and intra-class compactness under distribution shift. Differences in inter-class separation between representations
This paper poses an interesting and important question: where is the bottleneck in long-tailed classification? The authors use empirical experiments to support two observations: (1) the representation is more critical than the classifier, and (2) data augmentation is helpful. Three datasets (CIFAR-10 LT, CIFAR-100 LT and ImageNet-LT) are used with ResNet-32 and ResNet-10 models to demonstrate these observations.
This paper seeks to identify the bottleneck in long-tailed learning. Based on extensive experiments, the authors propose that representation learning is the bottleneck in long-tailed classification. The paper also analyzes representation learning from the perspectives of intra-class compactness and inter-class separation, as well as the influence of MixUp on long-tailed representation learning.
Model Compression via Symmetries of the Parameter Space
1 INTRODUCTION. Recent work has shown that representation theory, the formal study of symmetry, provides the foundation for various innovative techniques in deep learning (Cohen & Welling, 2016; Kondor & Trivedi, 2018; Ravanbakhsh et al., 2017; Cohen & Welling, 2017). Much of this previous work considers symmetries inherent to the input and output spaces, as well as distributions and functions that respect these symmetries. By contrast, in this paper, we expose a broad class of symmetries intrinsic to the parameter space of the neural networks themselves. We use these parameter space symmetries to devise a model compression algorithm that reduces the widths of the hidden layers, and hence the number of parameters. Unlike representation-theoretic techniques in the setting of equivariant neural networks, our methods are applicable to deep learning models with non-symmetric domains and non-equivariant functions, and hence pertain to some degree to all neural networks. Specifically, we formulate a theoretical framework for neural networks inspired by quiver representation theory, a mathematical field with connections to symplectic geometry and Lie theory (Kirillov Jr, 2016; Nakajima et al., 1998). This approach builds on that of Armenta & Jodoin (2020) and of Wood & Shawe-Taylor (1996), but is arguably simpler and encapsulates larger symmetry groups. Formally, a quiver is another name for a finite directed graph, and a representation of a quiver is the assignment of a vector space to each vertex and a compatible linear map to each edge. Our starting point is to regard the vector space of parameters for a neural network as a representation of a particular quiver, namely the neural quiver (Figure 1). The advantage of this viewpoint is that representations of quivers carry rich symmetry groups via change-of-basis transformations; such operations can be viewed as symmetries of the neural network parameter space.
Moreover, these symmetries may be factored out without affecting the feedforward function, making our method a lossless model compression algorithm (Serra et al., 2020). Model compression has become critically important as models have grown to billions of parameters; with compression, enormous models may be reduced and run on smaller systems with faster inference (Buciluǎ et al., 2006; Cheng et al., 2017; Frankle & Carbin, 2018; Zhang et al., 2018). Whereas many previous approaches to model compression are based on weight pruning, quantization, matrix factorization, or knowledge distillation, our approach, similar to that of Sourek et al. (2020), exploits symmetries of neural network parameter spaces. The size of the parameter space symmetry group is determined by properties of the activation functions. We focus on radial activation functions, as in Weiler & Cesa (2019); Sabour et al. (2017); Weiler et al. (2018a;b); these interact favorably with certain QR decompositions, and, consequently, the model compression is significant compared to the more common pointwise (or 'local') activations. We refer to neural networks with radial activation functions as radial neural networks. Given a radial neural network, our results produce a new network with fewer neurons in each hidden layer and the same feedforward function, and hence the same loss for any batch of training data. Moreover, the value of the loss function after a step of gradient descent applied to the compressed model is the same as the value of the loss function after a step of projected gradient descent applied to the original model. As we explain below, in projected gradient descent, one subtracts a truncation of the gradient rather than the full gradient. When the compression is significant enough, the compressed model takes less time per epoch to train and reaches local minima faster.
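For concreteness, a radial activation rescales each feature vector by a scalar function of its norm only, which makes it equivariant to orthogonal changes of basis; that is the symmetry the QR-based reduction exploits. A small numpy sketch, where the specific choice of ρ is ours and purely illustrative:

```python
import numpy as np

def radial_step(x, eps=1e-12):
    """h(x) = rho(||x||) * x / ||x|| with the illustrative choice
    rho(r) = r / (1 + r); any scalar function rho gives a radial
    activation."""
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return (r / (1.0 + r)) * x / np.maximum(r, eps)

# Equivariance check: for any orthogonal Q, h(x Q^T) == h(x) Q^T,
# because ||x Q^T|| = ||x||. Pointwise activations like ReLU do not
# have this property, which is why radial networks admit a larger
# symmetry group.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 6))
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
print(np.allclose(radial_step(x @ Q.T), radial_step(x) @ Q.T))  # True
```

It is exactly this orthogonal-equivariance that allows change-of-basis symmetries of the parameter space to be factored out losslessly.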
To state these results slightly more precisely, recall that the parameters of a neural network with layer widths (n_0, n_1, ..., n_L) consist of an n_i × (n_{i-1} + 1) matrix W_i of weights for each layer i, where we include the bias as an extra column. These are grouped into a tuple W = (W_i ∈ R^{n_i × (n_{i-1}+1)})_i. We define the reduced widths recursively as n_i^red = min(n_i, n_{i-1}^red + 1) for i = 1, ..., L-1, with n_0^red = n_0 and n_L^red = n_L. Note that n_i^red ≤ n_i for all i. Theorem 1.1 (Informal version of Theorems 4.3 and 4.7). Suppose a neural network has L layers with widths (n_0, ..., n_L), parameters W, and radial activation functions. Let f_W : R^{n_0} → R^{n_L} be the feedforward function of the network. 1. There exists an algorithm to produce a reduced radial neural network with layer widths (n_0, n_1^red, ..., n_{L-1}^red, n_L), parameters R, and the same feedforward function f_R = f_W. 2. Training f_R with gradient descent is an equivalent optimization problem to training f_W with projected gradient descent. This theorem can be interpreted as a model compression result: the reduced (or compressed) neural network R has the same accuracy as the original neural network W, and there is an explicit relationship between the gradient descent optimization problems for the two neural networks. This result is not just theoretical; it emerges from a practical and efficient algorithm based on successive QR decompositions. We describe this procedure below (Algorithm 1) and implement it in Python. To summarize, our contributions are as follows: 1. A theoretical framework for neural networks based on quiver representation theory; 2. A QR decomposition for radial neural networks; 3. An implementation of a lossless model compression algorithm for radial neural networks; 4. A theorem relating gradient descent optimization of the original and compressed networks.
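The reduced-width recursion from Theorem 1.1 is easy to compute directly; the helper below is our own sketch of that arithmetic, not the paper's implementation:

```python
def reduced_widths(widths):
    """Reduced layer widths per Theorem 1.1:
    n_0^red = n_0, n_L^red = n_L, and
    n_i^red = min(n_i, n_{i-1}^red + 1) for 1 <= i <= L-1."""
    red = [widths[0]]                 # input width is unchanged
    for n in widths[1:-1]:            # hidden layers
        red.append(min(n, red[-1] + 1))
    red.append(widths[-1])            # output width is unchanged
    return red

# A narrow input forces narrow hidden layers: each hidden width can
# exceed its predecessor's reduced width by at most 1 (the bias column).
print(reduced_widths([2, 128, 128, 10]))  # [2, 3, 4, 10]
```

Note how aggressive the compression can be when early layers are narrow: the hidden widths collapse from 128 to 3 and 4 while, per the theorem, the feedforward function is preserved exactly.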
We view this work as a step in the direction of improving learning algorithms by exploiting symmetry inherent to neural network parameter spaces . As such , we expect our framework and results to generalize in several ways , including : ( 1 ) further reductions of the hidden widths , ( 2 ) incorporating certain non-radial activation functions , ( 3 ) encapsulating networks beyond MLPs , such as convolutional , recurrent , and graph neural networks , ( 4 ) integration of regularization techniques . We include a detailed discussion of the limitations of our results , as well as future directions , in Section 6 . 2 RELATED WORK . Quiver Representation Theory and Neural Networks . Armenta & Jodoin ( 2020 ) give an approach to understanding neural networks in terms of quiver representations . Our work generalizes their approach as it ( 1 ) accommodates both pointwise and non-pointwise activation functions , ( 2 ) taps into larger symmetry groups , and ( 3 ) connects more naturally to gradient descent . Jeffreys & Lau ( 2021 ) and Manin & Marcolli ( 2020 ) also place quivers in the context of neural networks ; our approach is inspired by similar algebro-geometric and categorical perspectives , but differs in our emphasis on practical consequences for optimization techniques at the core of machine learning . These works have a number of precursors . One is the study of the “ non-negative homogeneity ” ( also known as “ positive scaling invariance ” ) property of ReLU activation functions ( Dinh et al. , 2017 ; Neyshabur et al. , 2015 ; Meng et al. , 2019 ) , which is a special case of the symmetry studied in this paper . Wood & Shawe-Taylor ( 1996 ) regard layers in a neural network as representations of finite groups and consider only pointwise activation functions ; by contrast , our framework captures Lie groups as well as non-pointwise activations . 
Our quiver approach to neural networks shares parallels with the “algebraic neural networks” of Parada-Mayorga & Ribeiro; special cases of their formalism amount to representations of quivers over base rings beyond R, such as the ring of polynomials. In a somewhat different data-analytic context, Chindris & Kline (2021) use quiver representation theory to untangle point clouds, though they do not use neural networks. Equivariant Neural Networks. Representation theory has previously been used to design neural networks that incorporate symmetry as an inductive bias. A variety of architectures, such as G-convolutions, steerable CNNs, and Clebsch-Gordan networks, are constrained by various weight-sharing schemes to be equivariant or invariant to various symmetry groups (Cohen et al., 2019; Weiler & Cesa, 2019; Cohen & Welling, 2016; Chidester et al., 2018; Kondor & Trivedi, 2018; Bao & Song, 2019; Worrall et al., 2017; Cohen & Welling, 2017; Weiler et al., 2018b; Dieleman et al., 2016; Lang & Weiler, 2021; Ravanbakhsh et al., 2017). Our approach, in contrast, does not rely on symmetry of the input domain, output space, or mapping. Rather, our method exploits symmetry of the parameter space and thus applies more generally to domains with no obvious symmetry. From the point of view of model compression, equivariant networks do achieve a reduction in the number of trainable parameters through weight-sharing for fixed hidden-layer widths; however, in practice they may use larger layer widths and consequently have larger memory requirements than non-equivariant models. Sampling or summing over large symmetry groups may also make equivariant models computationally slow (Finzi et al., 2020; Kondor & Trivedi, 2018). Model Compression and Weight Pruning.
A major goal in machine learning is to find methods for compressing models in order to reduce the number of trainable parameters , decrease memory usage , or accelerate inference and training ( Cheng et al. , 2017 ) . Our approach toward this goal differs significantly from most existing methods in that it is based on the inherent symmetry of neural network parameter spaces . One prior model compression method is weight pruning , which removes redundant , small , or unnecessary weights from a network with little loss in accuracy ( Han et al. , 2015 ; Blalock et al. , 2020 ; Karnin , 1990 ) . Recent work has shown pruning can be done during training by identifying and removing weights of less relevance and reverting other weights to earlier values ( Frankle & Carbin , 2018 ) . Related work shows effective pruning may be done at initialization ( Lee et al. , 2019 ; Wang et al. , 2020 ) . Gradient-based pruning identifies low saliency weights by estimating the increase in loss resulting from their removal ( LeCun et al. , 1990 ; Hassibi & Stork , 1993 ; Dong et al. , 2017 ; Molchanov et al. , 2016 ) . A complementary approach is quantization , in which the bit depth of weights is decreased ( Wu et al. , 2016 ; Howard et al. , 2017 ; Gong et al. , 2014 ) . Knowledge distillation works by training a small model to mimic the performance of a larger model or ensemble of models ( Buciluǎ et al. , 2006 ; Hinton et al. , 2015 ; Ba & Caruana , 2013 ) . Matrix Factorization methods replace fully connected layers with lower rank or sparse factored tensors ( Cheng et al. , 2015a ; b ; Tai et al. , 2015 ; Lebedev et al. , 2014 ; Rigamonti et al. , 2013 ; Lu et al. , 2017 ) and can often be applied before training . 
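As a point of contrast with the lossless approach of this paper, the magnitude-based weight pruning surveyed above can be sketched in a few lines (a generic illustration of the idea, not any specific cited method; unlike the QR-based reduction, it is lossy in general):

```python
import numpy as np

def magnitude_prune(W, sparsity=0.5):
    """Zero out the smallest-magnitude entries of W.
    A generic sketch of magnitude-based weight pruning."""
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    # k-th smallest absolute value over the flattened matrix
    thresh = np.partition(np.abs(W), k - 1, axis=None)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

W = np.array([[0.9, -0.05], [0.02, -1.3]])
print(magnitude_prune(W, sparsity=0.5))  # keeps only 0.9 and -1.3
```

Such pruning changes the network's function slightly and typically requires fine-tuning to recover accuracy, whereas the compression in this paper leaves the feedforward function exactly unchanged.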
Our method involves a generalized QR decomposition, which is a type of matrix factorization; however, rather than aiming for a rank reduction of linear layers, we leverage this decomposition to reduce hidden layer widths via change-of-basis operations on the hidden representations. Closest to our method are lossless compression methods. Serra et al. (2021; 2020) identify and remove stable neurons in ReLU networks. Sourek et al. (2020) also exploit symmetry in parameter space to remove redundant neurons. However, their symmetries are induced by permutation equivariance; ours follow from the symmetries of the radial activation.
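Assuming a radial activation of the form v ↦ (λ(‖v‖)/‖v‖)·v, the width reduction via a single QR step can be checked numerically for one hidden layer: the Q factor has orthonormal columns, so it commutes with the radial activation and can be absorbed into the weight part of the next layer. All names below are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def radial(v, lam=np.tanh):
    # Assumed radial activation: rescale the whole vector by a function of its norm.
    r = np.linalg.norm(v)
    return (lam(r) / r) * v if r > 0 else v

# Original network: widths (2, 8, 1); bias stored as an extra weight column.
n0, n1, n2 = 2, 8, 1
W1 = rng.standard_normal((n1, n0 + 1))
W2 = rng.standard_normal((n2, n1 + 1))

def forward(x, A1, A2):
    h = radial(A1 @ np.append(x, 1.0))
    return A2 @ np.append(h, 1.0)

# Reduced QR: Q is (8, 3) with orthonormal columns, R is (3, 3).
# radial(Q z) = Q radial(z), so Q moves into the next layer's weights.
Q, R = np.linalg.qr(W1)
W2_red = np.hstack([W2[:, :n1] @ Q, W2[:, n1:]])  # shape (1, 4), bias kept

x = rng.standard_normal(n0)
assert np.allclose(forward(x, W1, W2), forward(x, R, W2_red))
```

The hidden width drops from 8 to 3 = n0 + 1, matching the reduced-width recursion, while the feedforward function is preserved exactly.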
The authors make use of a theoretical framework for viewing neural networks as representations of quivers to introduce a reparametrization strategy for neural networks with radial activation functions. This reparametrization is based on a QR decomposition and leads to a lossless compression of the number of parameters by factoring out redundant symmetries of the model. A corresponding gradient descent algorithm on the compressed parameter space is derived, equivalent to projected gradient descent in the original space.
SP:bed075c80d6a879ab7960c2cf4b4c44a53e917ce
Model Compression via Symmetries of the Parameter Space
1 INTRODUCTION . Recent work has shown that representation theory , the formal study of symmetry , provides the foundation for various innovative techniques in deep learning ( Cohen & Welling , 2016 ; Kondor & Trivedi , 2018 ; Ravanbakhsh et al. , 2017 ; Cohen & Welling , 2017 ) . Much of this previous work considers symmetries inherent to the input and output spaces , as well as distributions and functions that respect these symmetries . By contrast , in this paper , we expose a broad class of symmetries intrinsic to the parameter space of the neural networks themselves . We use these parameter space symmetries to devise a model compression algorithm that reduces the widths of the hidden layers , and hence the number of parameters . Unlike representation-theoretic techniques in the setting of equivariant neural networks , our methods are applicable to deep learning models with non-symmetric domains and non-equivariant functions , and hence pertain to some degree to all neural networks . Specifically , we formulate a theoretical framework for neural networks inspired by quiver representation theory , a mathematical field with connections to symplectic geometry and Lie theory ( Kirillov Jr , 2016 ; Nakajima et al. , 1998 ) . This approach builds on that of Armenta & Jodoin ( 2020 ) and of Wood & Shawe-Taylor ( 1996 ) , but is arguably simpler and encapsulates larger symmetry groups . Formally , a quiver is another name for a finite directed graph , and a representation of a quiver is the assignment of a vector space to each vertex and a compatible linear map to each edge . Our starting point is to regard the vector space of parameters for a neural network as a representation of a particular quiver , namely , the neural quiver ( Figure 1 ) . The advantage of this viewpoint is that representations of quivers carry rich symmetry groups via change-of-basis transformations ; such operations can be viewed as symmetries of the neural network parameter space . 
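For concreteness, a minimal encoding of a chain-quiver representation and its change-of-basis symmetry might look as follows (the data layout is our illustration, not the paper's implementation). A change of basis at a hidden vertex rescales the incoming edge map on the left and the outgoing edge map on the right by inverse factors, leaving the composite map unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)

# A three-vertex chain quiver v0 -> v1 -> v2, like the linear part of an MLP:
dims = {"v0": 3, "v1": 5, "v2": 2}
edges = {
    ("v0", "v1"): rng.standard_normal((dims["v1"], dims["v0"])),
    ("v1", "v2"): rng.standard_normal((dims["v2"], dims["v1"])),
}

# Change of basis g at hidden vertex v1: W01 -> g W01, W12 -> W12 g^{-1}.
g = rng.standard_normal((dims["v1"], dims["v1"]))  # generic, hence invertible
transformed = {
    ("v0", "v1"): g @ edges[("v0", "v1")],
    ("v1", "v2"): edges[("v1", "v2")] @ np.linalg.inv(g),
}

# The composite linear map v0 -> v2 is invariant under this symmetry:
orig = edges[("v1", "v2")] @ edges[("v0", "v1")]
new = transformed[("v1", "v2")] @ transformed[("v0", "v1")]
assert np.allclose(orig, new)
```

With nonlinearities inserted at the hidden vertex, only the subgroup commuting with the activation survives: permutations and positive scalings for pointwise ReLU, but whole orthogonal groups for radial activations, which is why the latter admit stronger compression.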
This paper provides a framework for studying compression of neural network models via analysis of the symmetry properties of their parameter spaces. For lossless model compression, an algorithm employing a QR decomposition for parameter compression is proposed. The theoretical analysis studies the decomposition of the proposed neural quiver. The proposed methods are experimentally analyzed on synthetic datasets.
This paper presents a dimensionality reduction technique that preserves the function of a neural network by leveraging radial symmetry. Consequently, when the required conditions apply, the authors also show that training the compressed model is equivalent to training the original model under a particular projection of the parameter space. The first two sections of the paper are very clear and well written, although some definitions would have helped (more on that later), but the sudden change of pace starting at Section 3 makes it difficult to follow without the same background. I believe I have captured the gist of the results, which seem intuitively correct, but I cannot comment on how they were derived. Hence, what follows is a description, in my own words and from my own perspective, of what the authors did. They present a method that reduces the width of each layer of the neural network to the smallest width of any preceding layer, including the input layer. That is possible in radial neural networks because (1) the activation function involves the entire layer, as opposed to each neuron individually, and (2) the activation function is symmetric along every direction.
SP:bed075c80d6a879ab7960c2cf4b4c44a53e917ce
Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference
1 INTRODUCTION. Continual learning (CL) is a learning scenario where a model learns from a continuous and online stream of data, and is regarded as a more realistic and practical learning setup than offline learning on a fixed dataset (He et al., 2020). However, many CL methods still focus on the offline setup (Kirkpatrick et al., 2017; Rebuffi et al., 2017; Saha et al., 2021) instead of the more realistic online setup. These methods assume access to a large storage, storing the entire data of the current task and iterating on it multiple times. In contrast, we focus on the more realistic online setup where only a small memory is allowed as storage. Meanwhile, even for the online CL methods, we argue that there is room for more practical and realistic improvements concerning multiple crucial aspects. These aspects include the class distributions, such as the disjoint (Rebuffi et al., 2017) or the blurry (Aljundi et al., 2019c) splits, and evaluation metrics that focus only on task accuracy, such as average task accuracy (Aavg). The two main assumptions on the class distributions in existing CL setups, i.e., the disjoint and blurry splits, are less realistic for the following reasons. The disjoint split assumes no classes overlap over different tasks; already observed classes will never appear again. This assumption is not plausible because already observed classes can still appear later on in real-world scenarios (see Fig. 2 of Bang et al. (2021)). On the other hand, in the blurry split (Aljundi et al., 2019c) no new classes appear after the first task, even though the split assumes overlapping classes over tasks. This is also not plausible, as observing new classes is common in real-world scenarios. The typical evaluation metric such as Aavg, in which accuracy is measured only at task transitions, is also less realistic.
It implicitly assumes that no inference queries occur in the middle of a task. However, in real-world scenarios, inference queries can occur at any time. Moreover, there is no explicit task transition boundary in most real-world scenarios. Thus, it is desirable for CL models to provide good inference results at any time. To accurately evaluate whether a CL model is effective at such 'any-time' inference, we need a new metric for CL models. In order to address the issues of the current CL setups, we propose a new CL setup that is more realistic and practical by considering the following criteria. First, the class distribution combines the advantages of both the blurry and disjoint splits. That is, we assume that the model continuously encounters new classes as tasks continue, i.e., class-incremental, and that classes overlap across tasks, i.e., blurry task boundaries, while not suffering from the restrictions of either split. Second, the model is evaluated throughout training and inference so that it can be evaluated for any-time inference. We call this new continual learning setup 'i-Blurry'. For the i-Blurry setup, we first propose a plausible baseline that employs experience replay (ER) with reservoir sampling and a learning rate schedule tuned for the online and task-free CL setting. While existing online CL methods are applicable to the i-Blurry setup, they perform only marginally better than our baseline, or often worse. To better handle the i-Blurry setup, we propose a novel continual learning method which improves the baseline in three aspects. We design a new memory management scheme that discards samples using a per-sample importance score reflecting how useful a sample is for training. We then propose to draw training samples only from the memory, instead of drawing them from both memory and the online stream as is done in ER.
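The baseline's memory update, ER with reservoir sampling, can be sketched as follows. Reservoir sampling guarantees that, at any point in the stream, the memory is a uniform random subset of everything seen so far; names here are illustrative, not the authors' code.

```python
import random

def reservoir_update(memory, sample, n_seen, capacity):
    """Reservoir sampling: keep a uniform random subset of the stream.

    memory   : list holding at most `capacity` stored samples
    sample   : the newly streamed sample
    n_seen   : number of samples streamed before this one
    capacity : maximum memory size
    """
    if len(memory) < capacity:
        memory.append(sample)
    else:
        j = random.randint(0, n_seen)   # uniform over [0, n_seen], inclusive
        if j < capacity:
            memory[j] = sample          # replace a stored sample at random

# toy stream of 1000 samples into a memory of size 50
random.seed(0)
memory = []
for t, x in enumerate(range(1000)):
    reservoir_update(memory, x, t, capacity=50)
```

After the loop, each of the 1000 streamed samples has the same 50/1000 probability of residing in memory, which is what makes ER's replay batches unbiased with respect to the stream.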
Finally, we propose a new learning rate schedule that adaptively decides whether to increase or decrease the learning rate based on the loss trajectory, i.e., in a data-driven manner. To evaluate algorithms in the new setup, we use conventional metrics and further define a new metric called 'area under the curve of accuracy' (AAUC), which measures the model's accuracy throughout training. We summarize our contributions as follows:
• Proposing a new CL setup called i-Blurry, which addresses a more realistic setting that is online, task-free, class-incremental, of blurry task boundaries, and subject to any-time inference.
• Proposing a novel online and task-free CL method with a new memory management, memory usage, and learning rate scheduling strategy.
• Outperforming existing CL models by large margins on multiple datasets and settings.
• Proposing a new metric to better measure a CL model's capability for the desirable any-time inference.
2 RELATED WORK. Continual learning setups. Many CL setups have been proposed to reflect the real-world scenario of training a learning model from a stream of data (Prabhu et al., 2020). We categorize them along the following axes for brevity. First, we categorize them into (1) task-incremental (task-IL) and (2) class-incremental learning (class-IL), depending on whether the task ID is given at test time. Task-IL, also called the multi-head setup, assumes that the task ID is given at test time (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2018; Chaudhry et al., 2019). In contrast, in class-IL, or the single-head setup, the task ID is not given at test time and has to be inferred (Rebuffi et al., 2017; Wu et al., 2019; Aljundi et al., 2019a). Class-IL is more challenging than task-IL, but is also more realistic since the task ID will likely not be given in real-world scenarios (Prabhu et al., 2020).
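A minimal sketch of an area-under-the-accuracy-curve metric in the spirit of the AAUC defined above: accuracy is evaluated throughout the stream and integrated, so a model that is accurate at any time scores higher than one that only recovers near task boundaries. The exact normalization in the paper may differ; this is one plausible form.

```python
def accuracy_auc(samples_seen, accuracies):
    """Area under the accuracy-vs-#samples curve, normalized by the stream
    length.  `samples_seen[i]` is the number of streamed samples at the i-th
    evaluation, `accuracies[i]` the test accuracy measured at that point."""
    area = 0.0
    for i in range(1, len(samples_seen)):
        dx = samples_seen[i] - samples_seen[i - 1]
        area += dx * (accuracies[i] + accuracies[i - 1]) / 2   # trapezoid rule
    return area / (samples_seen[-1] - samples_seen[0])

print(accuracy_auc([0, 100, 200], [0.8, 0.8, 0.8]))   # constant accuracy -> 0.8
```

A model evaluated only at task transitions would interpolate over the large gaps in between, hiding any mid-task accuracy dips, which is exactly what frequent evaluation avoids.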
Most CL works assume that the task ID is provided at training time, allowing CL methods to use it to save model parameters at task boundaries (Kirkpatrick et al., 2017; Chaudhry et al., 2018b) for later use. However, this assumption is impractical (Lee et al., 2019) since real-world data usually do not have clear task boundaries. To address this issue, a task-free setup (Aljundi et al., 2019b), where the task ID is not available at training time, has been proposed. We focus extensively on the task-free setup, as it is challenging and has been actively investigated recently (Kim et al., 2020; Lee et al., 2019; Aljundi et al., 2019c). We next categorize CL setups into the disjoint and blurry setups by how the data split is configured. In the disjoint task setup, each task consists of a set of classes disjoint from all other tasks. But the disjoint setup is less realistic, as classes in the real world can appear at any time, not only in a disjoint manner. Recently, to make the setup more realistic, a blurry task setup has been proposed and investigated (Aljundi et al., 2019c; Prabhu et al., 2020; Bang et al., 2021), where (100 − M)% of the samples are from the dominant classes of the task and M% of the samples are from all classes, where M is the blurry level (Aljundi et al., 2019c). However, the blurry setup assumes no class is added in new tasks, i.e., it is not class-incremental, which makes the setup still not quite realistic. Finally, depending on how many samples are streamed at a time, we categorize CL setups into online (Rolnick et al., 2018; Aljundi et al., 2019a; Chaudhry et al., 2019) and offline (Wu et al., 2019; Rebuffi et al., 2017; Chaudhry et al., 2018b; Castro et al., 2018). In the offline setup, all data from the current task can be used an unlimited number of times. This is impractical since it requires additional memory of size equal to the current task's data.
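The blurry-M split described above can be sketched as follows. This is a plausible reconstruction for illustration; the exact construction in Aljundi et al. (2019c) may differ in detail, and all names are illustrative.

```python
import random

def blurry_tasks(class_samples, task_of_class, n_tasks, M, seed=0):
    """Blurry-M split sketch: each class keeps (100 - M)% of its samples in
    its dominant task and scatters the remaining M% across all tasks."""
    rng = random.Random(seed)
    tasks = [[] for _ in range(n_tasks)]
    for c, samples in class_samples.items():
        samples = list(samples)
        rng.shuffle(samples)
        k = int(len(samples) * (100 - M) / 100)          # dominant-task share
        tasks[task_of_class[c]].extend((c, s) for s in samples[:k])
        for s in samples[k:]:                            # blurry share
            tasks[rng.randrange(n_tasks)].append((c, s))
    return tasks

# two classes, ten samples each, blurry level M = 20
tasks = blurry_tasks({0: range(10), 1: range(10, 20)}, {0: 0, 1: 1},
                     n_tasks=2, M=20)
```

Note that every class is assigned a dominant task up front, so no new classes ever appear after the split is fixed; this is the non-class-incremental restriction the i-Blurry setup removes.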
For the online setup, there are many notions of 'online' that differ across the literature. Prabhu et al. (2020) and Bang et al. (2021) use 'online' for a setup that uses each streamed sample only once to train a model, while Aljundi et al. (2019c;a) use it for a setup where only one or a few samples are streamed at a time. We follow the latter, as the former allows storing the whole task's data, which is similar to offline and less realistic than using a sample more than a few times. Incorporating the pros and cons of existing setups, we propose a novel CL setup that is online, task-free, class-incremental, of blurry task boundaries, and subject to any-time inference, as the most realistic setup for continual learning. Continual learning methods. Given that neural networks suffer from catastrophic forgetting (McCloskey & Cohen, 1989; Ratcliff, 1990), the online nature of streaming data in continual learning generally aggravates the issue. To alleviate forgetting, there are various proposals to store previous task information: (1) regularization, (2) replay, and (3) parameter isolation. (1) Regularization methods (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017b; Ebrahimi et al., 2020) store previous task information in the form of model priors and use it to regularize the neural network currently being trained. (2) Replay methods store a subset of the samples from previous tasks in an episodic memory (Rebuffi et al., 2017; Castro et al., 2018; Chaudhry et al., 2019; Wu et al., 2019) or keep a generative model trained to generate previous task samples (Shin et al., 2017; Wu et al., 2018; Hu et al., 2019; Cong et al., 2020). The sampled or generated exemplars are replayed on future tasks and used for distillation, constrained training, or joint training. (3) Parameter isolation methods augment the networks (Rusu et al., 2016; Lee et al.
, 2017a; Aljundi et al., 2017) or decompose the network into subnetworks for each task (Mallya & Lazebnik, 2018; Cheung et al., 2019; Yoon et al., 2020). Since (1), (2), and (3) all utilize different ways of storing information, which incur parameter storage costs, episodic memory requirements, and increases in network size respectively, a fair comparison among the methods is not straightforward. We mostly compare our method with episodic memory-based methods (Wu et al., 2019; Aljundi et al., 2019a; Bang et al., 2021), as they perform the best in various CL setups, but also with methods that use both regularization and episodic memory (Chaudhry et al., 2018b; Wu et al., 2019). Online continual learning. Despite being more realistic (Losing et al., 2018; He et al., 2020), online CL setups have not been popular (Prabhu et al., 2020) due to the difficulty and subtle differences of the setups in the published literature. ER (Rolnick et al., 2018) is a simple yet strong episodic memory-based online CL method. It employs reservoir sampling for memory management and jointly trains a model with half of the batch sampled from memory. Many online CL methods are based on ER (Aljundi et al., 2019c;a). GSS (Aljundi et al., 2019c) selects samples using a score based on the cosine similarity of gradients. MIR (Aljundi et al., 2019a) retrieves maximally interfering samples from memory to use for training. Different from ER, A-GEM (Chaudhry et al., 2019) uses the memory to enforce constraints on the loss trajectory of the stored samples. GDumb (Prabhu et al., 2020) only updates the memory during the training phase and trains from scratch at test time using only the memory. Unlike these methods, the recently proposed RM (Bang et al.
, 2021) uses uncertainty-based memory sampling and a two-stage training scheme where the model is trained for one epoch on the streamed samples and then trains extensively using only the memory at the end of each task, effectively delaying most of the learning to the end of the tasks. Note that the uncertainty-based memory sampling cannot be implemented in the online CL setup, and the two-stage training performs particularly poorly in our i-Blurry setup. Our method outperforms all other online CL methods introduced in this section while strictly adhering to the online and task-free restrictions.
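Among the replay constraints surveyed above, A-GEM's has a particularly compact form: when the gradient on the current batch conflicts with the gradient computed on a batch drawn from episodic memory, the current gradient is projected so that the memory loss does not increase to first order. A sketch, with illustrative names:

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM-style projection: g is the gradient on the current batch,
    g_ref the gradient on a memory batch.  If they conflict (negative dot
    product), remove the component of g that increases the memory loss."""
    dot = g @ g_ref
    if dot >= 0:
        return g                                   # no conflict: use g as-is
    return g - (dot / (g_ref @ g_ref)) * g_ref     # project out the conflict

g_proj = agem_project(np.array([1.0, -1.0]), np.array([0.0, 1.0]))
print(g_proj)   # [1. 0.] -- the conflicting component along g_ref is removed
```

After projection, g_proj @ g_ref = 0, so a small step along g_proj leaves the memory loss unchanged to first order while still reducing the current-batch loss.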
The paper proposes a more realistic setting (called *i-Blurry*) for continual learning (CL) that generalizes the *blurry* and *disjoint* settings proposed in prior work. The disjoint setting assumes that no class appears in multiple tasks, and the blurry setting assumes that no new classes are seen after the first task. In the *i-Blurry* setting, one can have overlapping classes across tasks as well as new classes appearing in each task. This setting is also **online**, and thus the paper is interested in continuous model evaluation, i.e., any-time inference, too. It proposes a metric for this purpose which calculates the area under the accuracy curve during training. For the new configuration, the paper proposes a strong baseline and a new algorithm called CLIB. CLIB is a memory-based CL method that refines its memory by throwing out the *least important* samples and updating it with more important ones. The sample importance is calculated as the expected decrease in training loss when the sample is used for training. CLIB is also equipped with a data-driven adaptive learning rate scheduling scheme which provides some additional performance benefits. The results showcase that CLIB is able to outperform other online CL methods and the baseline by large margins for various instantiations of their i-Blurry setting, *including the disjoint and blurry settings*, on the CIFAR-10, CIFAR-100 and TinyImageNet datasets. An ablation study shows the main benefits of the proposed method come from the sample-importance-based memory scheme.
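The importance-based memory refinement described above can be sketched as a replace-the-least-important policy. The importance score itself (the expected decrease in training loss when the sample is used) is assumed to be given here; estimating it is the method's contribution, and all names are illustrative.

```python
def update_memory(memory, scores, sample, score, capacity):
    """Importance-based memory management sketch: when the memory is full,
    evict the lowest-scoring stored sample if the new sample scores higher.
    Returns True if the new sample was admitted."""
    if len(memory) < capacity:
        memory.append(sample)
        scores.append(score)
        return True
    i = min(range(len(scores)), key=scores.__getitem__)   # least important
    if score > scores[i]:
        memory[i], scores[i] = sample, score              # replace it
        return True
    return False          # new sample is less useful than everything stored

memory, scores = [], []
update_memory(memory, scores, 'a', 1.0, capacity=2)
update_memory(memory, scores, 'b', 2.0, capacity=2)
update_memory(memory, scores, 'c', 0.5, capacity=2)   # rejected: too unimportant
update_memory(memory, scores, 'd', 3.0, capacity=2)   # evicts 'a'
```

Unlike reservoir sampling, which keeps a uniform subset of the stream, this policy deliberately biases the memory toward the samples that help training most.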
SP:8955f90191ee97eeb451e79dc12cd7921f6c3fd5
In this paper, the authors propose a new benchmark protocol (i-Blurry) for continual learning. In this protocol, the class distribution is class-incremental with blurry task boundaries, and the training is online. They also propose a new method, CLIB. This method contains three important components: "sample importance memory", "memory-only training", and "adaptive LR scheduling." Extensive experimental results are provided to show the effectiveness of the proposed method.
SP:8955f90191ee97eeb451e79dc12cd7921f6c3fd5
Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference
1 INTRODUCTION . Continual learning ( CL ) is a learning scenario where a model learns from a continuous and online stream of data and is regarded as a more realistic and practical learning setup than offline learning on a fixed dataset ( He et al. , 2020 ) . However , many CL methods still focus on the offline setup ( Kirkpatrick et al. , 2017 ; Rebuffi et al. , 2017 ; Saha et al. , 2021 ) instead of the more realistic online setup . These methods assume access to a large storage , storing the entire data of the current task and iterating on it multiple times . On the other hand , we are interested extensively in the more realistic online setup where only a small memory is allowed as storage . Meanwhile , even for the online CL methods , we argue they have room for more practical and realistic improvements concerning multiple crucial aspects . The aspects include the class distributions such as the disjoint ( Rebuffi et al. , 2017 ) or the blurry ( Aljundi et al. , 2019c ) splits and the evaluation metric that focuses only on the task accuracy such as average task accuracy ( Aavg ) . The two main assumptions on the class distributions in existing CL setups , i.e. , the disjoint and blurry splits , are less realistic for the following reasons . The disjoint split assumes no classes overlap over different tasks ; already observed classes will never appear again . This assumption is not plausible because already observed classes can still appear later on in real-world scenarios ( see Fig . 2 of ( Bang et al. , 2021 ) ) . On the other hand , in the blurry split ( Aljundi et al. , 2019c ) no new classes appear after the first task even though the split assumes overlapping classes over tasks . This is also not plausible as observing new classes is common in real-world scenarios . The typical evaluation metric such as Aavg in which the accuracy is measured only at the task transition is also less realistic . 
It implicitly assumes that no inference queries occur in the middle of a task . However , in real-world scenarios , inference queries can occur at any-time . Moreover , there is no explicit task transition boundary in most real-world scenarios . Thus , it is desirable for CL models to provide good inference results at any time . To accurately evaluate whether a CL model is effective at such ‘ any-time ’ inference , we need a new metric for CL models . In order to address the issues of the current CL setups , we propose a new CL setup that is more realistic and practical by considering the following criteria : First , the class distribution is comprised of the advantages from both blurry and disjoint . That is , we assume that the model continuously encounters new classes as tasks continue , i.e. , class-incremental and that classes overlap across tasks , i.e. , blurry task boundaries , while not suffering from the restrictions of blurry and disjoint . Second , the model is evaluated throughout training and inference such that it can be evaluated for any-time inference . We call this new continual learning setup ‘ i-Blurry ’ . For the i-Blurry setup , we first propose a plausible baseline that employs experience replay ( ER ) with reservoir sampling and a learning rate scheduling tuned for the online and task-free CL setting . While existing online CL methods are applicable to the i-Blurry setup , they perform only marginally better than our baseline or often worse . To better handle the i-Blurry setup , we propose a novel continual learning method , which improves the baseline in three aspects . We design a new memory management scheme to discard samples using a per-sample importance score that reflects how useful a sample is for training . We then propose to draw training samples only from the memory instead of drawing them from both memory and the online stream as is done in ER . 
Finally , we propose a new learning rate scheduling to adaptively decide whether to increase or decrease the learning rate based on the loss trajectory , i.e . a data-driven manner . To evaluate the algorithms in the new setup , we evaluate methods by conventional metrics , and further define a new metric called ‘ area under the curve of accuracy ’ ( AAUC ) which measures the model ’ s accuracy throughout training . We summarize our contributions as follows : • Proposing a new CL setup called i-Blurry , which addresses a more realistic setting that is online , task-free , class-incremental , of blurry task boundaries , and subject to any-time inference . • Proposing a novel online and task-free CL method by a new memory management , memory usage , and learning rate scheduling strategy . • Outperforming existing CL models by large margins on multiple datasets and settings . • Proposing a new metric to better measure a CL model ’ s capability for the desirable any-time inference . 2 RELATED WORK . Continual learning setups . There are many CL setups that have been proposed to reflect the realworld scenario of training a learning model from a stream of data ( Prabhu et al. , 2020 ) . We categorize them in the following aspects for brevity . First , we categorize them into ( 1 ) task-incremental ( task-IL ) and ( 2 ) class-incremental learning ( class-IL ) , depending on whether the task-ID is given at test time . Task-IL , also called multi-head setup , assumes that task-ID is given at test time ( Lopez-Paz & Ranzato , 2017 ; Aljundi et al. , 2018 ; Chaudhry et al. , 2019 ) . In contrast , in class-IL , or single-head setup , task-ID is not given at test time and has to be inferred ( Rebuffi et al. , 2017 ; Wu et al. , 2019 ; Aljundi et al. , 2019a ) . Class-IL is more challenging than task-IL , but is also more realistic since task-ID will not likely be given in the real-world scenario ( Prabhu et al. , 2020 ) . 
Most CL works assume that the task-ID is provided at training time, allowing CL methods to use it to save model parameters at task boundaries (Kirkpatrick et al., 2017; Chaudhry et al., 2018b) for later use. However, this assumption is impractical (Lee et al., 2019), since real-world data usually do not have clear task boundaries. To address this issue, a task-free setup (Aljundi et al., 2019b), where the task-ID is not available at training time, has been proposed. We focus on the task-free setup, as it is challenging and has been actively investigated recently (Kim et al., 2020; Lee et al., 2019; Aljundi et al., 2019c). We next categorize CL setups into disjoint and blurry, by how the data split is configured. In the disjoint task setup, each task consists of a set of classes disjoint from all other tasks. But the disjoint setup is less realistic, as classes in the real world can appear at any time, not only in a disjoint manner. Recently, to make the setup more realistic, a blurry task setup has been proposed and investigated (Aljundi et al., 2019c; Prabhu et al., 2020; Bang et al., 2021), where (100 − M)% of the samples are from the dominant classes of the task and M% of the samples are from all classes, where M is the blurry level (Aljundi et al., 2019c). However, the blurry setup assumes no class is added in new tasks, i.e., it is not class-incremental, which makes the setup still not quite realistic. Finally, depending on how many samples are streamed at a time, we categorize CL setups into online (Rolnick et al., 2018; Aljundi et al., 2019a; Chaudhry et al., 2019) and offline (Wu et al., 2019; Rebuffi et al., 2017; Chaudhry et al., 2018b; Castro et al., 2018). In the offline setup, all data from the current task can be used an unlimited number of times. This is impractical, since it requires additional memory equal in size to the current task's data.
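The blurry split with blurry level M, as described above, can be sketched as follows. This is a toy construction; the pool layout and sampling details are our assumptions, not the original implementation.

```python
import random

def blurry_task(dominant_pool, all_pools, task_size, M, rng):
    """Blurry task split (after Aljundi et al., 2019c): (100 - M)% of a
    task's samples come from its dominant classes, M% from all classes.
    M is the 'blurry level'; pools are lists of (x, class) pairs."""
    n_minor = task_size * M // 100          # samples drawn from all classes
    n_major = task_size - n_minor           # samples from the dominant classes
    task = rng.sample(dominant_pool, n_major)
    task += rng.sample([s for pool in all_pools for s in pool], n_minor)
    rng.shuffle(task)
    return task

rng = random.Random(0)
# Five classes, 100 toy samples each; class 0 dominates task 0.
pools = {c: [(f"x{c}_{i}", c) for i in range(100)] for c in range(5)}
task0 = blurry_task(pools[0], list(pools.values()), task_size=50, M=10, rng=rng)
```

With M = 10 and 50 samples, at least 45 samples of task 0 belong to its dominant class, while the remaining 5 may come from any class, blurring the task boundary.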
For the online setup, there are many notions of 'online' that differ across the literature. Prabhu et al. (2020) and Bang et al. (2021) use 'online' to refer to a setup that uses each streamed sample only once to train a model, while Aljundi et al. (2019c;a) use it to refer to a setup where only one or a few samples are streamed at a time. We follow the latter, as the former allows storing the whole task's data, which is similar to offline and less realistic than using each sample more than a few times. Incorporating the pros and cons of existing setups, we propose a novel CL setup that is online, task-free, class-incremental, of blurry task boundaries, and subject to any-time inference, as the most realistic setup for continual learning. Continual learning methods. Since neural networks suffer from catastrophic forgetting (McCloskey & Cohen, 1989; Ratcliff, 1990), the online nature of streaming data in continual learning generally aggravates the issue. To alleviate forgetting, there are various proposals for storing previous task information: (1) regularization, (2) replay, and (3) parameter isolation. (1) Regularization methods (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017b; Ebrahimi et al., 2020) store previous task information in the form of model priors and use it to regularize the neural network currently being trained. (2) Replay methods store a subset of samples from previous tasks in an episodic memory (Rebuffi et al., 2017; Castro et al., 2018; Chaudhry et al., 2019; Wu et al., 2019) or keep a generative model trained to generate previous task samples (Shin et al., 2017; Wu et al., 2018; Hu et al., 2019; Cong et al., 2020). The sampled or generated exemplars are replayed in future tasks and used for distillation, constrained training, or joint training. (3) Parameter isolation methods augment the networks (Rusu et al., 2016; Lee et al.
, 2017a; Aljundi et al., 2017) or decompose the network into subnetworks for each task (Mallya & Lazebnik, 2018; Cheung et al., 2019; Yoon et al., 2020). Since (1), (2), and (3) all store information in different ways, incurring parameter storage costs, episodic memory requirements, and increases in network size, respectively, a fair comparison among the methods is not straightforward. We mostly compare our method with episodic memory-based methods (Wu et al., 2019; Aljundi et al., 2019a; Bang et al., 2021), as they perform best in various CL setups, but also with methods that use both regularization and episodic memory (Chaudhry et al., 2018b; Wu et al., 2019). Online continual learning. Despite being more realistic (Losing et al., 2018; He et al., 2020), online CL setups have not been popular (Prabhu et al., 2020) due to their difficulty and the subtle differences among the setups in the published literature. ER (Rolnick et al., 2018) is a simple yet strong episodic memory-based online CL method. It employs reservoir sampling for memory management and jointly trains a model with half of each batch sampled from memory. Many online CL methods are based on ER (Aljundi et al., 2019c;a). GSS (Aljundi et al., 2019c) selects samples using a score based on the cosine similarity of gradients. MIR (Aljundi et al., 2019a) retrieves maximally interfering samples from memory to use for training. Different from ER, A-GEM (Chaudhry et al., 2019) uses the memory to enforce constraints on the loss trajectory of the stored samples. GDumb (Prabhu et al., 2020) only updates the memory during the training phase and trains a model from scratch at test time using only the memory. Unlike these methods, the recently proposed RM (Bang et al.
, 2021) uses uncertainty-based memory sampling and a two-stage training scheme, where the model is trained for one epoch on the streamed samples and then trains extensively using only the memory at the end of each task, effectively delaying most of the learning to the end of each task. Note that the uncertainty-based memory sampling cannot be implemented in the online CL setup, and the two-stage training performs particularly poorly in our i-Blurry setup. Our method outperforms all other online CL methods introduced in this section while strictly adhering to the online and task-free restrictions.
The paper proposes a new problem setup in continual learning. As the title suggests, it focuses on online, task-free, class-incremental learning with blurry task boundaries and any-time inference. The authors also propose new baselines and importance-based memory management, and empirically test their methods in the proposed problem setup.
Active Learning over Multiple Domains in Natural Language Tasks
1 INTRODUCTION. New natural language problems, outside the watershed of core NLP, are often strictly limited by a dearth of labeled data. While unlabeled data is frequently available, it is not always from the same source as the target distribution. This is particularly prevalent for tasks characterized by (i) significant distribution shift over time, (ii) personalization for user subgroups, or (iii) different collection mediums (see examples in Section A). A widely-used solution to this problem is to bootstrap a larger training set using active learning (AL): a method to decide which unlabeled training examples should be labeled on a fixed annotation budget (Cohn et al., 1996; Settles, 2012). However, most active learning literature in NLP assumes the unlabeled source data is drawn from the same distribution as the target data (Dor et al., 2020). This simplifying assumption avoids the frequent challenges faced by practitioners in multi-domain active learning. In this realistic setting, there are multiple sources of data (i.e., domains) to consider. In this case, it is unclear whether to optimize for homogeneity or heterogeneity of the selected examples. Secondly, is it more effective to allocate an example budget per domain, or to treat the examples as a single unlabeled pool? Where active learning baselines traditionally select examples the model is least confident on (Settles, 2009), in this setting that could lead to distracting examples from very dissimilar distributions. In this work we empirically examine four separate families of methods (uncertainty-based, H-Divergence, reverse classification accuracy, and semantic similarity detection) over several question answering and sentiment analysis datasets, following Lowell et al. (2019) and Elsahar & Gallé (2019b), to provide actionable insights to practitioners facing this challenging variant of active learning for natural language.
We address the following questions: 1. What families of methods are effective for multi-domain active learning? 2. What properties of the example and domain selection yield strong results? While previous work has investigated similar settings (Saha et al., 2011; Liu et al., 2015; Zhao et al., 2021; Kirsch et al., 2021), we contribute, to our knowledge, the first rigorous formalization and broad survey of methods within NLP. We find that many families of techniques for active learning and domain shift detection fail to reliably beat random baselines in this challenging variant of active learning, but certain H-Divergence methods are consistently strong. Our analysis identifies stark dissimilarities in these methods' example selection, and suggests domain diversity is an important factor in achieving strong results. These results may serve as a guide to practitioners facing this problem, suggesting particular methods that are generally effective and properties of strategies that increase performance. 2 RELATED WORK. Active Learning in NLP. Lowell et al. (2019) show how inconsistent active learning methods are in NLP, even under regular conditions. However, Dor et al. (2020) and Siddhant & Lipton (2018) survey active learning methods in NLP and find notable gains over random baselines. Kouw & Loog (2019) survey domain adaptation without target labels, similar to our setting, but for non-language tasks. We reference more active learning techniques in Section 4. Domain Shift Detection. Elsahar & Gallé (2019b) attempt to predict accuracy drops due to domain shifts, and Rabanser et al. (2018) survey different domain shift detection methods. Arora et al. (2021) examine calibration and density estimation for textual OOD detection. Active Learning under Distribution Shift. A few previous works have investigated active learning under distribution shifts, though mainly in image classification, with single source and target domains.
Kirsch et al. (2021) find that BALD, which is often considered the state of the art for unshifted domain settings, can get stuck on irrelevant source domain or junk data. Zhao et al. (2021) investigate label shift, proposing a combination of predicted-class-balanced subsampling and importance weighting. Saha et al. (2011), whose approach corrects joint distribution shift, rely on the covariate shift assumption. However, in practical settings, there may be general distributional shifts where neither the covariate shift nor the label shift assumption holds. Transfer Learning from Multiple Domains. Attempts to better understand how to handle shifted domains for better generalization or target performance have motivated work in question answering (Talmor & Berant, 2019; Fisch et al., 2019; Longpre et al., 2019; Kamath et al., 2020) and classification tasks (Ruder & Plank, 2018; Sheoran et al., 2020). Ruder & Plank (2017) show the benefits of both data similarity and diversity in transfer learning. Rücklé et al. (2020) find that sampling from a wide variety of source domains (data scale) outperforms sampling similar domains in question answering. He et al. (2021) investigate a version of multi-domain active learning where models are trained and evaluated on examples from all domains, focusing on robustness across domains. 3 MULTI-DOMAIN ACTIVE LEARNING. Suppose we have multiple domains D_1, D_2, ..., D_k.¹ Let one of the k domains be the target set D_T, and let the other k − 1 domains comprise the source set D_S = ∪_{i≠T} D_i. Given: • Target: small samples of labeled data points (x, y) from the target domain: D_T^train, D_T^dev, D_T^test ∼ D_T.² • Source: a large sample of unlabeled points (x) from the source domains: D_S = ∪_{i≠T} D_i. Task: 1. Choose n samples from D_S to label.
D_S^chosen ⊂ D_S, |D_S^chosen| = n, selected by argmax_{x∈D_S} A_f(x), where A_f is an acquisition function: a policy to select unlabeled examples from D_S for labeling. 2. Train a model M on D^{final-train} = D_T^train ∪ D_S^chosen, validating on D_T^dev. 3. Evaluate M on D_T^test, giving score s. ¹We define a domain as a dataset collected independently of the others. ²|D_T^train| = 2000, to simulate a small but reasonable quantity of labeled, in-domain training data for active learning scenarios. For Step 1, the practitioner chooses the n samples with the highest scores according to their acquisition function A_f. M is fine-tuned on these n samples, then evaluated on D_T^test to demonstrate A_f's ability to choose relevant out-of-distribution training examples. 4 METHODS. We identify four families of methods relevant to active learning over multiple shifted domains. Uncertainty methods are common in standard active learning for measuring example uncertainty or familiarity to a model; H-Divergence techniques train classifiers for domain shift detection; Semantic Similarity Detection finds data points similar to points in the target domain; and Reverse Classification Accuracy approximates the benefit of training on a dataset. A limitation of our work is that we do not cover all method families, such as domain adaptation, just those we consider most applicable. We derive ∼18 active learning variants, comprising the most prevalent and effective from prior work, and novel extensions/variants of existing paradigms for the multi-domain active learning setting (see KNN, R̃CA, and DAL-E). Furthermore, we split the families into two acquisition strategies: Single Pool Strategy and Domain Budget Allocation. Single Pool Strategy, comprising the first three families of methods, treats all examples as coming from one single unlabeled pool.
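Steps 1-3 of the task reduce to ranking the source pool by an acquisition function and keeping the top n. A minimal sketch of the selection step (the toy scoring function is ours, standing in for any A_f):

```python
def select_for_labeling(source_pool, acquire, n):
    """Rank the unlabeled source pool by acquisition score and return
    the n highest-scoring examples (argmax over A_f)."""
    ranked = sorted(source_pool, key=acquire, reverse=True)
    return ranked[:n]

# Toy pool of (id, feature-vector) pairs and a toy acquisition function
# that prefers examples with a larger feature norm.
pool = [(i, [i * 0.1, -i * 0.05]) for i in range(20)]
score = lambda ex: sum(v * v for v in ex[1])
chosen = select_for_labeling(pool, score, n=5)
```

The chosen examples would then be labeled and concatenated with the labeled target training set before fine-tuning M.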
Domain Budget Allocation, consisting of the Reverse Classification Accuracy methods, simply allocates an example budget for each domain. We enumerate the acquisition methods A_f below. Each method produces a full ranking of the examples in the source set D_S. To rank examples, most acquisition methods train an acquisition model, M_A, using the same model architecture as M. M_A is trained on all samples from D_T^train, except for DAL and KNN, which split D_T^train into two equal segments, one for training M_A and one for an internal model. Some methods have both ascending and descending orders of these rankings (denoted by ↑ and ↓ respectively, in the method abbreviations), to test whether similar or distant examples are preferred in a multi-domain setting. Certain methods use vector representations of candidate examples. We benchmark with both task-agnostic and task-specific encoders. The task-agnostic embeddings are taken from the last layer's CLS token in Reimers & Gurevych (2019)'s sentence encoder (see the Appendix for details). The task-specific embeddings are taken from the last layer's CLS token in the trained model M_A. The motivation for the task-specific variant is that each example's representation will capture task-relevant differences between examples while ignoring irrelevant differences.³ The versions of DAL and KNN methods that use task-specific vectors are denoted with "∗" in their abbreviation. Otherwise, they use task-agnostic vectors. 4.1 UNCERTAINTY METHODS. These methods measure the uncertainty of a trained model on a new example. Uncertainty can reflect either aleatoric uncertainty, due to ambiguity inherent in the example, or epistemic uncertainty, due to limitations of the model (Kendall & Gal, 2017). For the following methods, let Y be the set of all possible labels produced from the model M(x), and let l_y be the logit value for y ∈ Y.
Confidence (CONF). A model's confidence P(y|x) in its prediction y estimates the difficulty or unfamiliarity of an example (Guo et al., 2017; Elsahar & Gallé, 2019a): A_CONF(x, M_A) = −max_y P(y|x). Entropy (ENTR). Entropy applies Shannon entropy (Shannon, 1948) to the full distribution of class probabilities for each example: A_ENTR(x, M_A) = −Σ_{i=1}^{|Y|} P(y_i|x) · log P(y_i|x). Energy-based Out-of-Distribution Detection (ENG). Liu et al. (2020) use an energy-based score to distinguish between in- and out-of-distribution examples: A_ENG(x, M_A) = −log Σ_{y∈Y} e^{l_y}. They demonstrate this method is less susceptible to the overconfidence issues of softmax approaches. Bayesian Active Learning by Disagreement (BALD). Gal & Ghahramani (2016) introduce estimating uncertainty by measuring prediction disagreement over multiple inference passes, each with a distinct dropout mask. BALD isolates epistemic uncertainty, as the model would theoretically produce stable predictions over inference passes given sufficient capacity. We conduct T = 20 forward passes on x. Let ŷ_t = argmax_i P(y_i|x)_t be the predicted class on the t-th model pass on x; then A_BALD(x, M_A) = 1 − count(mode_{t∈T}(ŷ_t)) / T. Following Lowell et al. (2019), ties are broken by taking the mean label entropy over all T runs. ³For instance, consider that in one domain every example is prefixed with "Text:" while the other is not — telling the difference is trivial, but the examples could be near-identical with respect to the task. 4.2 H-DIVERGENCE METHODS. Ben-David et al. (2006; 2010) formalize the divergence between two domains as the H-Divergence, which they approximate as the difficulty for a discriminator to differentiate between the two.⁴ Discriminative Active Learning (DAL) applies this concept to the active learning setting (Gissin & Shalev-Shwartz, 2019).
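The uncertainty scores A_CONF, A_ENTR, and A_ENG above can be computed directly from a model's logits. A minimal sketch in plain Python (assuming a softmax output layer; function names are ours):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def a_conf(logits):
    """A_CONF(x) = -max_y P(y|x): low-confidence examples score high."""
    return -max(softmax(logits))

def a_entr(logits):
    """A_ENTR(x) = -sum_i P(y_i|x) log P(y_i|x): Shannon entropy."""
    return -sum(p * math.log(p) for p in softmax(logits) if p > 0)

def a_eng(logits):
    """A_ENG(x) = -log sum_y exp(l_y): negative free energy (Liu et al., 2020),
    computed with the log-sum-exp trick."""
    m = max(logits)
    return -(m + math.log(sum(math.exp(l - m) for l in logits)))
```

On a uniform logit vector all three scores are higher (more "uncertain") than on a sharply peaked one, which is the behavior the acquisition ranking relies on.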
We explore variants of DAL, using an XGBoost decision tree (Chen & Guestrin, 2016) as the discriminator model g.⁵ For the following methods, let D_T^{train-B} be the 1k examples from D_T^train that were not used to train M_A. Let E be an encoder function, which can be task-specific or task-agnostic as described above. We use samples from both D_T^{train-B} and D_S to train the discriminator. We assign samples origin labels l, which depend on the DAL variant. Samples from D_S with discriminator predictions closest to 1 are selected for labeling. The acquisition scoring function for each DAL method and the training set definition, respectively, are: A_DAL(x, g, E) = g(E(x)), and { (E(x), l) | x ∈ D_T^{train-B} ∪ D_S }. Discriminative Active Learning — Target (DAL-T). DAL-T trains a discriminator g to distinguish between target examples in D_T^{train-B} and out-of-distribution examples from D_S. For DAL-T, l = 1_{D_T^{train-B}}(x). Discriminative Active Learning — Error (DAL-E). DAL-E is a novel variant of DAL. DAL-E's approach is to find examples that are similar to those target-domain examples that M_A misclassified. We partition D_T^{train-B} further into erroneous samples D_T^err and correct samples D_T^corr, where D_T^{train-B} = D_T^err ∪ D_T^corr. For DAL-E, l = 1_{D_T^err}(x).
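The DAL-T selection logic above can be sketched as follows. This is an illustration only: a toy nearest-centroid scorer over 1-D "embeddings" stands in for the paper's XGBoost discriminator, and all names are ours.

```python
def dal_select(target_held_out, source_pool, fit_discriminator, n):
    """DAL-T sketch: label held-out target examples 1 and source examples 0,
    fit a discriminator g, then pick the n source examples whose predicted
    'target-ness' is highest (closest to 1)."""
    X = target_held_out + source_pool
    y = [1] * len(target_held_out) + [0] * len(source_pool)
    g = fit_discriminator(X, y)
    return sorted(source_pool, key=g, reverse=True)[:n]

def fit_centroid(X, y):
    """Toy discriminator: score by distance to the positive vs. negative
    class centroid (higher = more target-like)."""
    pos = [x for x, lab in zip(X, y) if lab == 1]
    neg = [x for x, lab in zip(X, y) if lab == 0]
    cp, cn = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: -abs(x - cp) + abs(x - cn)

target = [0.0, 0.1, 0.2]          # held-out target "embeddings"
source = [0.15, 0.9, 1.0, 0.05]   # unlabeled source "embeddings"
picked = dal_select(target, source, fit_centroid, n=2)
```

DAL-E follows the same pipeline but assigns label 1 only to the misclassified target examples, so the discriminator's positive class is "target-like and hard for M_A".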
This paper compares techniques from active learning (AL), domain shift detection, and multi-domain sampling for combining data from multiple sources. The experiments are conducted on datasets from question answering and sentiment analysis. The paper is well organized and easy to follow. However, the contribution of this paper is not clear. Specifically, I would expect the authors to provide more detailed recommendations for AL, domain shift detection, and multi-domain sampling in terms of sampling techniques and the population of different sources for a given application. Another concern is that the motivation of the experimental design is not clear. Why do the authors consider question answering and sentiment analysis as the applications? The paper requires more analysis of the experimental results, such as Figure 1 and the tables in Section D.
This paper surveys a broad range of techniques for active learning in the multi-domain setting applied to text: specifically, given a small labeled dataset from a target domain and a large amount of unlabeled data from a collection of source domains, how should examples be picked from the source domains for labeling to maximize performance on the target domain? The main finding of the work is that selecting examples based on H-divergence measures performs much better than most active learning approaches. In particular, the authors propose a method termed DAL-E, where examples are selected from the source domains based on similarity to *misclassified* target-domain examples, and show that it outperforms standard "discriminative active learning" on most of their experiments. Their analysis reveals that selecting from diverse domains helps, and that the proposed DAL-E helps avoid selecting bad examples.
Active Learning over Multiple Domains in Natural Language Tasks
1 INTRODUCTION. New natural language problems, outside the watershed of core NLP, are often strictly limited by a dearth of labeled data. While unlabeled data is frequently available, it is not always from the same source as the target distribution. This is particularly prevalent for tasks characterized by (i) significant distribution shift over time, (ii) personalization for user subgroups, or (iii) different collection mediums (see examples in Section A). A widely-used solution to this problem is to bootstrap a larger training set using active learning (AL): a method to decide which unlabeled training examples should be labeled on a fixed annotation budget (Cohn et al., 1996; Settles, 2012). However, most active learning literature in NLP assumes the unlabeled source data is drawn from the same distribution as the target data (Dor et al., 2020). This simplifying assumption sidesteps the challenges frequently faced by practitioners in multi-domain active learning. In this realistic setting, there are multiple sources of data (i.e., domains) to consider. First, it is unclear whether to optimize for homogeneity or heterogeneity of the selected examples. Secondly, is it more effective to allocate an example budget per domain, or to treat the examples as a single unlabeled pool? Whereas active learning baselines traditionally select the examples the model is least confident on (Settles, 2009), in this setting that could lead to distracting examples from very dissimilar distributions. In this work we empirically examine four separate families of methods (uncertainty-based, H-Divergence, reverse classification accuracy, and semantic similarity detection) over several question answering and sentiment analysis datasets, following Lowell et al. (2019) and Elsahar & Gallé (2019b), to provide actionable insights to practitioners facing this challenging variant of active learning for natural language.
We address the following questions: 1. Which families of methods are effective for multi-domain active learning? 2. What properties of the example and domain selection yield strong results? While previous work has investigated similar settings (Saha et al., 2011; Liu et al., 2015; Zhao et al., 2021; Kirsch et al., 2021), we contribute, to our knowledge, the first rigorous formalization and broad survey of methods within NLP. We find that many families of techniques for active learning and domain shift detection fail to reliably beat random baselines in this challenging variant of active learning, but certain H-Divergence methods are consistently strong. Our analysis identifies stark dissimilarities in these methods' example selections, and suggests domain diversity is an important factor in achieving strong results. These results may serve as a guide for practitioners facing this problem, suggesting particular methods that are generally effective and properties of strategies that increase performance.

2 RELATED WORK. Active Learning in NLP: Lowell et al. (2019) show how inconsistent active learning methods are in NLP, even under regular conditions. However, Dor et al. (2020) and Siddhant & Lipton (2018) survey active learning methods in NLP and find notable gains over random baselines. Kouw & Loog (2019) survey domain adaptation without target labels, similar to our setting, but for non-language tasks. We reference more active learning techniques in Section 4. Domain Shift Detection: Elsahar & Gallé (2019b) attempt to predict accuracy drops due to domain shifts, and Rabanser et al. (2018) survey different domain shift detection methods. Arora et al. (2021) examine calibration and density estimation for textual OOD detection. Active Learning under Distribution Shift: A few previous works have investigated active learning under distribution shift, though mainly in image classification, with single source and target domains.
Kirsch et al. (2021) find that BALD, which is often considered the state of the art for unshifted domain settings, can get stuck on irrelevant source-domain or junk data. Zhao et al. (2021) investigate label shift, proposing a combination of predicted-class balanced subsampling and importance weighting. Saha et al. (2011), whose approach corrects joint distribution shift, rely on the covariate shift assumption. However, in practical settings, there may be general distributional shifts where neither the covariate shift nor label shift assumptions hold. Transfer Learning from Multiple Domains: Attempts to better understand how to handle shifted domains for better generalization or target performance have motivated work in question answering (Talmor & Berant, 2019; Fisch et al., 2019; Longpre et al., 2019; Kamath et al., 2020) and classification tasks (Ruder & Plank, 2018; Sheoran et al., 2020). Ruder & Plank (2017) show the benefits of both data similarity and diversity in transfer learning. Rücklé et al. (2020) find that sampling from a wide variety of source domains (data scale) outperforms sampling similar domains in question answering. He et al. (2021) investigate a version of multi-domain active learning where models are trained and evaluated on examples from all domains, focusing on robustness across domains.

3 MULTI-DOMAIN ACTIVE LEARNING. Suppose we have multiple domains D_1, D_2, ..., D_k.¹ Let one of the k domains be the target set D_T, and let the other k − 1 domains comprise the source set D_S = ⋃_{i≠T} D_i.
Given:
• Target: small samples of labeled data points (x, y) from the target domain: D_T^train, D_T^dev, D_T^test ∼ D_T.²
• Source: a large sample of unlabeled points (x) from the source domains: D_S = ⋃_{i≠T} D_i.
Task:
1. Choose n samples from D_S to label: D_S^chosen ⊂ D_S, |D_S^chosen| = n, selected by argmax_{x∈D_S} A_f(x), where A_f is an acquisition function: a policy to select unlabeled examples from D_S for labeling.
2. Train a model M on D^final-train = D_T^train ∪ D_S^chosen, validating on D_T^dev.
3. Evaluate M on D_T^test, giving score s.
¹We define a domain as a dataset collected independently of the others.
²|D_T^train| = 2000, to simulate a small but reasonable quantity of labeled, in-domain training data for active learning scenarios.
For Step 1, the practitioner chooses the n samples with the highest scores according to their acquisition function A_f. M is fine-tuned on these n samples, then evaluated on D_T^test to demonstrate A_f's ability to choose relevant out-of-distribution training examples.

4 METHODS. We identify four families of methods relevant to active learning over multiple shifted domains. Uncertainty methods are common in standard active learning for measuring example uncertainty or familiarity to a model; H-Divergence techniques train classifiers for domain shift detection; Semantic Similarity Detection finds data points similar to points in the target domain; and Reverse Classification Accuracy approximates the benefit of training on a dataset. A limitation of our work is that we do not cover all method families, such as domain adaptation, just those we consider most applicable. We derive ~18 active learning variants, comprising the most prevalent and effective from prior work, plus novel extensions/variants of existing paradigms for the multi-domain active learning setting (see KNN, R̃CA and DAL-E). Furthermore, we split the families into two acquisition strategies: Single Pool Strategy and Domain Budget Allocation. Single Pool Strategy, comprising the first three families of methods, treats all examples as coming from one single unlabeled pool.
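Steps 1-3 of the task can be sketched as a generic selection-and-retraining loop. Everything here (the callables, the toy scoring) is an illustrative placeholder, not the paper's code:

```python
import numpy as np

def run_multidomain_al(acquisition_fn, source_pool, target_train, n, train_fn, eval_fn):
    """Sketch of the task: score pooled source examples with A_f, take the
    top-n for labeling, fine-tune M on D_T^train plus the chosen examples,
    then evaluate on the held-out target test set."""
    scores = np.array([acquisition_fn(x) for x in source_pool])
    chosen_idx = np.argsort(-scores)[:n]             # argmax_x A_f(x), top-n
    chosen = [source_pool[i] for i in chosen_idx]
    model = train_fn(target_train + chosen)          # D^final-train
    return eval_fn(model), sorted(chosen_idx.tolist())

# Toy usage: A_f prefers larger values; "training" just sums the data.
score, picked = run_multidomain_al(
    acquisition_fn=lambda x: x,
    source_pool=[3, 1, 2],
    target_train=[10],
    n=2,
    train_fn=sum,
    eval_fn=lambda m: m,
)
```

Every acquisition method in Section 4 is one choice of `acquisition_fn`; the Domain Budget Allocation strategy would instead run this selection per domain with a per-domain n.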
Domain Budget Allocation, consisting of the Reverse Classification Accuracy methods, simply allocates an example budget for each domain. We enumerate the acquisition methods A_f below. Each method produces a full ranking of examples in the source set D_S. To rank examples, most acquisition methods train an acquisition model, M_A, using the same model architecture as M. M_A is trained on all samples from D_T^train, except for DAL and KNN, which split D_T^train into two equal segments, one for training M_A and one for an internal model. Some methods have both ascending and descending orders of these rankings (denoted by ↑ and ↓ respectively in the method abbreviations), to test whether similar or distant examples are preferred in a multi-domain setting. Certain methods use vector representations of candidate examples. We benchmark with both task-agnostic and task-specific encoders. The task-agnostic embeddings are taken from the last layer's CLS token of Reimers & Gurevych (2019)'s sentence encoder (see Appendix for details). The task-specific embeddings are taken from the last layer's CLS token of the trained model M_A. The motivation for the task-specific variant is that each example's representation will capture task-relevant differences between examples while ignoring irrelevant differences.³ The versions of DAL and KNN methods that use task-specific vectors are denoted with "∗" in their abbreviation; otherwise, they use task-agnostic vectors.

4.1 UNCERTAINTY METHODS. These methods measure the uncertainty of a trained model on a new example. Uncertainty can reflect either aleatoric uncertainty, due to ambiguity inherent in the example, or epistemic uncertainty, due to limitations of the model (Kendall & Gal, 2017). For the following methods, let Y be the set of all possible labels produced by the model M(x), and let l_y be the logit value for y ∈ Y.
Confidence (CONF): A model's confidence P(y|x) in its prediction y estimates the difficulty or unfamiliarity of an example (Guo et al., 2017; Elsahar & Gallé, 2019a).
Entropy (ENTR): Entropy applies Shannon entropy (Shannon, 1948) to the full distribution of class probabilities for each example, formalized as A_ENTR:

A_CONF(x, M_A) = −max_y P(y|x)
A_ENTR(x, M_A) = −∑_{i=1}^{|Y|} P(y_i|x) · log P(y_i|x)

Energy-based Out-of-Distribution Detection (ENG): Liu et al. (2020) use an energy-based score to distinguish between in- and out-of-distribution examples. They demonstrate this method is less susceptible to the overconfidence issues of softmax approaches.
Bayesian Active Learning by Disagreement (BALD): Gal & Ghahramani (2016) introduce estimating uncertainty by measuring prediction disagreement over multiple inference passes, each with a distinct dropout mask. BALD isolates epistemic uncertainty, as the model would theoretically produce stable predictions over inference passes given sufficient capacity. We conduct T = 20 forward passes on x. Let ŷ_t = argmax_i P(y_i|x)_t be the predicted class on the t-th model pass on x. Following Lowell et al. (2019), ties are broken by taking the mean label entropy over all T runs.

A_ENG(x, M_A) = −log ∑_{y∈Y} e^{l_y}
A_BALD(x, M_A) = 1 − count(mode_{t∈T}(ŷ_t)) / T

³For instance, suppose in one domain every example is prefixed with "Text:" while in the other it is not: telling the domains apart is trivial, but the examples could be near-identical with respect to the task.

4.2 H-DIVERGENCE METHODS. Ben-David et al. (2006; 2010) formalize the divergence between two domains as the H-Divergence, which they approximate as the difficulty for a discriminator to differentiate between the two.⁴ Discriminative Active Learning (DAL) applies this concept to the active learning setting (Gissin & Shalev-Shwartz, 2019).
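The four uncertainty scores of §4.1 reduce to a few lines of numpy over raw logits and per-pass predictions. This is a hedged sketch, not the authors' implementation:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def a_conf(logits):
    # A_CONF = -max_y P(y|x): least-confident examples score highest
    return -softmax(logits).max()

def a_entr(logits):
    # A_ENTR = -sum_i P(y_i|x) log P(y_i|x): Shannon entropy of the prediction
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum())

def a_eng(logits):
    # A_ENG = -log sum_y exp(l_y): negative log-sum-exp energy score
    return float(-np.log(np.exp(logits).sum()))

def a_bald(preds):
    # A_BALD = 1 - count(mode(y_hat_t)) / T over T stochastic forward passes
    _, counts = np.unique(np.asarray(preds), return_counts=True)
    return 1.0 - counts.max() / len(preds)
```

Uniform logits give A_ENTR = log|Y|, the maximum-uncertainty case, and A_BALD is 0 when all T dropout passes agree.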
We explore variants of DAL, using an XGBoost decision tree (Chen & Guestrin, 2016) as the discriminator model g.⁵ For the following methods, let D_T^{train-B} be the 1k examples from D_T^train that were not used to train M_A. Let E be an encoder function, which can be task-specific or task-agnostic as described above. We use samples from both D_T^{train-B} and D_S to train the discriminator. We assign samples origin labels l, which depend on the DAL variant. Samples from D_S with discriminator predictions closest to 1 are selected for labeling. The acquisition scoring function for each DAL method and the discriminator training set are, respectively:

A_DAL(x, g, E) = g(E(x))
{ (E(x), l) | x ∈ D_T^{train-B} ∪ D_S }

Discriminative Active Learning — Target (DAL-T): DAL-T trains a discriminator g to distinguish between target examples in D_T^{train-B} and out-of-distribution examples from D_S. For DAL-T, l = 1_{D_T^{train-B}}(x).
Discriminative Active Learning — Error (DAL-E): DAL-E is a novel variant of DAL. DAL-E's approach is to find examples that are similar to those target-domain examples that M_A misclassified. We partition D_T^{train-B} further into erroneous samples D_T^err and correct samples D_T^corr, where D_T^{train-B} = D_T^err ∪ D_T^corr. For DAL-E, l = 1_{D_T^err}(x).
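A minimal sketch of the DAL selection pipeline follows. The paper's discriminator g is an XGBoost tree; here a tiny hand-rolled logistic regression stands in for it, and the encoder E is assumed to have already produced fixed vectors:

```python
import numpy as np

def fit_discriminator(X, l, epochs=500, lr=0.5):
    """Logistic-regression stand-in for the paper's XGBoost discriminator g.
    X: (n, d) encoded examples E(x); l: (n,) origin labels per DAL variant."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - l                       # gradient of the logistic loss
        w -= lr * X.T @ grad / len(l)
        b -= lr * grad.mean()
    return lambda Z: 1.0 / (1.0 + np.exp(-(Z @ w + b)))

def dal_select(E_target, E_source, n, err_mask=None):
    """DAL-T when err_mask is None (positives = all target examples);
    DAL-E otherwise (positives = misclassified target examples only).
    Source examples whose discriminator score is closest to 1 are chosen."""
    pos = np.ones(len(E_target)) if err_mask is None else err_mask.astype(float)
    l = np.concatenate([pos, np.zeros(len(E_source))])
    g = fit_discriminator(np.vstack([E_target, E_source]), l)
    return np.argsort(-g(E_source))[:n]
```

With target embeddings clustered near the origin, DAL-T picks the source points that fall inside that cluster, which is exactly the "indistinguishable from target" behaviour the H-Divergence argument predicts.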
The authors investigate the efficacy of several active-learning-related techniques under domain shift with multiple source domains. These include uncertainty methods, H-divergence methods, reverse classification accuracy methods, and nearest-neighbor methods. They construct 18 methods with various combinations and settings and carry out experiments on two task types: question answering and sentiment analysis. They report several findings; the most interesting one to me is that uncertainty in different models manifests itself in different ways, and the candidate data points selected are not necessarily the same across models.
Contrastive Clustering to Mine Pseudo Parallel Data for Unsupervised Translation
Modern unsupervised machine translation systems mostly train their models by generating synthetic parallel training data from large unlabeled monolingual corpora of different languages through various means, such as iterative back-translation. However, there may exist a small amount of actual parallel data hidden in the sea of unlabeled data, which has not been exploited. We develop a new fine-tuning objective, called the Language-Agnostic Constraint for SwAV loss, or LAgSwAV, which enables a pre-trained model to extract such pseudo-parallel data from the monolingual corpora in a fully unsupervised manner. We then propose an effective strategy to utilize the obtained synthetic data to augment unsupervised machine translation. Our method achieves the state of the art in the WMT'14 English-French, WMT'16 German-English and English-Romanian bilingual unsupervised translation tasks, with 40.2, 36.8, and 37.0 BLEU, respectively. We also achieve substantial improvements in the FLoRes low-resource English-Nepali and English-Sinhala unsupervised tasks, with 5.3 and 5.4 BLEU, respectively.

1 INTRODUCTION. The quest to build a fully unsupervised machine translation (UMT) system, where only unlabeled monolingual corpora are available, has received increasing attention in recent years. Notably, Lample et al. (2018a;c) introduced the general principles for UMT, which include cross-lingual initialization, language modeling and iterative back-translation. Although various UMT variants (Lample et al., 2018a;c; Conneau & Lample, 2019; Song et al., 2019; Liu et al., 2020; Nguyen et al., 2021) apply these principles differently, they ultimately train their models on noisy synthetic parallel data generated by the models themselves or by some randomization process, which may cause harm, as the generated synthetic data is often noisy and low-quality, especially at the beginning of training.
However, these methods may have missed the potential that some parallel (or comparable) data may exist in the monolingual corpora, which can be effectively mined to augment UMT training. This paper is motivated to explore the mining of pseudo-parallel data for UMT tasks. While there has been limited research on unsupervised mining of such data (Wu et al., 2019a;b), there have been several studies on bitext mining from sentence embeddings in semi-supervised translation or zero-shot transfer setups (Zweigenbaum et al., 2018; Schwenk, 2018; Artetxe & Schwenk, 2019a;b). However, these methods require the models to be pre-trained on massive multilingual parallel data, which renders them inapplicable and incomparable to the fully unsupervised setup. Furthermore, such models may not behave well off-the-shelf on low-resource languages that were not in the pre-training data, such as Nepali and Sinhala (Guzmán et al., 2019). Meanwhile, applying these mining algorithms directly to sentence embeddings produced by fully self-supervised models (Conneau & Lample, 2019) leads to a significant amount of misalignments, which hurts the UMT models, as shown later in our experiments (§4). ∗Most of this work was done during an internship at Meta AI. In this paper, we propose a novel modification to the SwAV loss (Caron et al., 2020), called the Language-Agnostic Constraint for SwAV (LAgSwAV)¹, which is used to fine-tune existing pre-trained self-supervised models (Conneau & Lample, 2019; Liu et al., 2020) so that they become semantically contrastive and language-agnostic. Specifically, semantic contrastiveness is defined, in this paper, as the model's ability to cluster sentences of similar semantic content into similar groups. Meanwhile, the language-agnostic property means that the embedding spaces of sentences from different languages are indistinguishably mixed together. Figure 1 further illustrates the two properties.
We argue that these two properties combined can help the model mine more accurate pseudo-parallel data, since language-agnosticism reduces the distances between cross-lingual embeddings while semantic contrastiveness brings sentences of similar content closer together even if they are in different languages. To achieve the semantically contrastive property, we adopt the SwAV loss (Caron et al., 2020), which is designed to classify input data into different clusters that reflect certain semantic characteristics. However, since SwAV is not designed to be language-agnostic, we introduce a new language-agnostic constraint for it to enable the latter property. Analysis experiments in §4 show that our constraint significantly outperforms the vanilla SwAV loss as well as other baselines in mining pseudo-parallel data from monolingual corpora. After fine-tuning pre-trained self-supervised models with the LAgSwAV loss, we sample sentences from both languages and pass them to the model to obtain the SwAV cluster assignment probabilities, which are treated as sentence embeddings. The soft cluster assignments are then matched by a margin-based algorithm (Artetxe & Schwenk, 2019b) with additional filtering criteria (§3.2). To augment UMT training with the obtained pseudo-parallel data, we propose a simple ranking-based weighted cross-entropy loss to train on this data along with the standard iterative back-translation (§3.3). We also apply a dynamic weight to the augmentation loss to alleviate the overfitting effect due to the limited augmentation data. We tested our method on the standard bilingual WMT'14 English-French, WMT'16 German-English and English-Romanian unsupervised translation tasks and achieved the state of the art with 40.2, 36.8, and 37.0 BLEU, respectively, a gain of up to 2 BLEU over the baselines. In the FLoRes low-resource unsupervised setups (Guzmán et al.
, 2019), our method also outperforms the mBART baseline (Liu et al., 2020) by up to 2.8 BLEU in the English-Nepali and English-Sinhala tasks. We conducted a series of analyses to demonstrate the superior ability of LAgSwAV in mining pseudo-parallel data. The method is also found to outperform the supervised LASER (Artetxe & Schwenk, 2019a), which can be attributed to LASER's lack of language specifiers.

2 BACKGROUND.

2.1 UNSUPERVISED MACHINE TRANSLATION. Lample et al. (2018a) and Artetxe et al. (2018) were among the first to employ iterative back-translation in fully unsupervised machine translation (UMT). Lample et al. (2018c) later formulated the three principles of UMT: initialization, language modeling and iterative back-translation (Sennrich et al., 2016a). Initialization can take the form of a MUSE bilingual dictionary (Lample et al., 2018b;a), a cross-lingual masked language model (XLM) (Conneau & Lample, 2019) or a Seq2Seq model (Song et al., 2019). Language modeling often takes the form of denoising autoencoding (Lample et al., 2018a) or is omitted entirely (Song et al., 2019). Apart from these principles, Nguyen et al. (2021) use two distinct UMT teachers to distill a student in a two-stage back-translation process. Despite their differences, most existing methods employ the same iterative back-translation process. Multilinguality has also proved useful, as Liu et al. (2020) and Garcia et al. (2021) utilize extra training data from more than two languages. These methods, however, do not leverage the potential pseudo-parallel data that may exist in the monolingual data to improve UMT.
¹Code: https://github.com/nxphi47/fairseq/tree/swav umt

2.2 PSEUDO-PARALLEL DATA MINING. Parallel data mining for machine translation has been an active field of research (Zweigenbaum et al., 2018; Schwenk, 2018; Chinea-Ríos et al., 2017).
Artetxe & Schwenk (2019b) suggested an effective margin-based algorithm to mine parallel data using sentence embeddings, which we also use in our method. More recently, LASER (Artetxe & Schwenk, 2019a) was trained on massive multilingual parallel data from 93 languages. Nonetheless, most of this effort has been invested largely in a supervised setup where the training data is parallel. In terms of UMT, Wu et al. (2019a) proposed an extract-edit approach to mine relevant data and edit it for use, while Wu et al. (2019b) mine weakly-paired documents. Tran et al. (2020) proposed training on massive multilingual data from 25 languages and mining data from 180 translation directions, which may not be applicable in a bilingual setup, where monolingual data only for the two languages under consideration is available.

2.3 SWAPPING ASSIGNMENTS BETWEEN VIEWS (SWAV LOSS). Caron et al. (2020) introduced the Swapping Assignments between Views (SwAV) loss as a means to conduct self-supervised training in computer vision, because the masking-based training methods used in language modeling are intractable for images. Although SwAV is a type of contrastive learning, it differs from related contrastive methods (Chen et al., 2020; He et al., 2020; Wu et al., 2018) in that it is an online clustering-based method. This means the model is trained to assign cluster labels, or "code" vectors q, to an input image with encoded latent features z. In particular, the SwAV loss seeks to enforce consistency between the codes of different augmentations (or views) of the same image. In other words, different augmentations of the same image should have almost the same semantic content as the original, and thus their cluster assignments should also be consistently the same.
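As a concrete reference for the margin criterion of Artetxe & Schwenk (2019b) used in §2.2, the score of a candidate pair is its cosine similarity relative to the average similarity of each side's k nearest neighbours (the "ratio" margin). This is a hedged numpy sketch over arbitrary pre-computed sentence embeddings; in this paper's method the SwAV cluster-assignment probabilities play that role:

```python
import numpy as np

def ratio_margin(S, T, k=1):
    """S: (n_s, d) embeddings of one language; T: (n_t, d) of the other.
    Returns an (n_s, n_t) matrix of margin scores:
      margin(x, y) = cos(x, y) / mean(kNN-cos of x in T, kNN-cos of y in S)."""
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    sim = S @ T.T
    knn_s = np.sort(sim, axis=1)[:, -k:].mean(axis=1)   # each x's k-NN in T
    knn_t = np.sort(sim, axis=0)[-k:, :].mean(axis=0)   # each y's k-NN in S
    return sim / ((knn_s[:, None] + knn_t[None, :]) / 2.0)
```

Pairs whose similarity clearly exceeds their neighbourhood average get margin scores above 1 and survive thresholding; globally "hub" sentences that are close to everything do not.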
Formally, let z_1, z_2 ∈ R^d be the latent features of two distinct views X_1, X_2 of the same image X after passing them through an encoder f_θ. We compute the corresponding codes q_1, q_2 ∈ [0, 1]^K by passing z_1, z_2 into the prototype layer, which consists of prototype weights C = [c_1, ..., c_K] with c_i ∈ R^d. We then swap the codes q_1, q_2 and use them as labels for z_2, z_1, respectively:

L_SwAV(z_1, z_2) = l(z_1, q_2) + l(z_2, q_1)    (1)

The loss function l(z, q) in Equation (1) measures the similarity distance between features z and code q, and is defined as:

l(z, q) = −∑_k q_k log p_k, with p_k = exp((1/τ) zᵀc_k) / ∑_{k′} exp((1/τ) zᵀc_{k′})    (2)

where τ is a temperature hyper-parameter and k is the index of row c_k in C. Figure 2a illustrates how the SwAV loss works. While the above formulation involves two different augmentations X_1, X_2 of the same image for brevity, multiple augmentations are used in practice. We refer the reader to Caron et al. (2020) for further details on the SwAV loss.

3 METHOD. In this section, we present our three main contributions: the language-agnostic constraint for SwAV loss, or LAgSwAV (§3.1); the pseudo-parallel data mining with a filter suite (§3.2); and finally the rank-based cross-entropy loss for UMT training with augmented data (§3.3).

3.1 LANGUAGE-AGNOSTIC CONSTRAINT FOR SWAV LOSS IN A LANGUAGE MODEL. The SwAV loss was developed to conduct self-supervised training in vision due to the infeasibility of applying masked language modeling (MLM) to images (Caron et al., 2020). Thus, for language model pre-training, we stick to MLM and use our proposed LAgSwAV to fine-tune a pre-trained LM to achieve semantic contrastiveness and language agnosticism. Specifically, in the presence of sentences from two languages, let B^s = [X_1, ..., X_B] and B^t = [Y_1, ..., Y_B] be batches of B sentences from languages s and t, respectively.
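Equations (1)-(2) can be written directly in numpy. A minimal sketch with a fixed prototype matrix C and no training loop (all names here are illustrative):

```python
import numpy as np

def prototype_probs(z, C, tau=0.1):
    # p_k = exp(z . c_k / tau) / sum_k' exp(z . c_k' / tau)   (Eq. 2)
    s = z @ C.T / tau
    e = np.exp(s - s.max())          # stable softmax over prototypes
    return e / e.sum()

def swav_loss(z1, z2, q1, q2, C, tau=0.1):
    # L_SwAV(z1, z2) = l(z1, q2) + l(z2, q1),
    # with l(z, q) = -sum_k q_k log p_k   (Eq. 1)
    l = lambda z, q: float(-(q * np.log(prototype_probs(z, C, tau) + 1e-12)).sum())
    return l(z1, q2) + l(z2, q1)
```

Two views whose swapped codes agree with their prototype assignments yield a small loss; codes pointing at the wrong prototype yield a larger one, which is what drives the consistency described above.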
We augment them into B^{s,1} = [X_1^1, ..., X_B^1] and B^{s,2} = [X_1^2, ..., X_B^2], and B^{t,1} = [Y_1^1, ..., Y_B^1] and B^{t,2} = [Y_1^2, ..., Y_B^2], by various noising techniques, such as token swapping, masking and deletion. Next, we denote by F_θ the Transformer encoder (Conneau & Lample, 2019), which outputs a sentence embedding z given an input sentence. We compute the respective latent features Z^{s,1}, Z^{s,2}, Z^{t,1}, Z^{t,2} ∈ R^{B×d} for B^{s,1}, B^{s,2}, B^{t,1}, B^{t,2} using F_θ, as described in §2.3. Subsequently, we compute the code cluster assignments Q^{s,1}, Q^{s,2}, Q^{t,1}, Q^{t,2} ∈ [0, 1]^{B×K} as:

[Q^{s,1}; Q^{s,2}; Q^{t,1}; Q^{t,2}] = sinkhorn(exp((1/ε) [Z^{s,1}; Z^{s,2}; Z^{t,1}; Z^{t,2}] Cᵀ))    (3)

where [•; •] is the row-wise concatenation function, sinkhorn(•) is the iterative Sinkhorn-Knopp algorithm (Cuturi, 2013), a batch-wise renormalization process to compute the cluster assignment probabilities, and ε is a hyper-parameter to control the smoothness of the distribution. The above formulation is in fact no different from the vanilla SwAV loss (§2.3). However, intuitively, if the model F_θ is not language-agnostic (i.e., s and t are distributionally separate), the resulting cluster assignments Q^{s,j} and Q^{t,j} are likely to be distributed unevenly across s and t. In other words, some clusters c_i ⊂ C will be frequently assigned to the sentences of language s but rarely to the sentences of t, and vice versa. Figure 2b illustrates this imbalance phenomenon. We argue that it is possible to enforce the language-agnostic property indirectly by fixing this imbalance. Therefore, the objective of our proposed LAgSwAV is to measure the stochastic imbalance in the cluster assignments Q^{s,j}, Q^{t,j} and rebalance them into V^{s,j}, V^{t,j} ∈ [0, 1]^{B×K}, such that the clusters in V^{s,j}, V^{t,j} are stochastically assigned evenly regardless of the language of the sentences.
Then, we compute the total LAgSwAV loss as usual for both s and t:

L^s_{AgSwAV,i} = l(z_i^{s,1}, v_i^{s,2}) + l(z_i^{s,2}, v_i^{s,1});  L^t_{AgSwAV,i} = l(z_i^{t,1}, v_i^{t,2}) + l(z_i^{t,2}, v_i^{t,1})    (4)

L_LAgSwAV = ∑_i L^s_{AgSwAV,i} + ∑_i L^t_{AgSwAV,i}    (5)

We formalize the transformation from Q to V using the language-agnostic rebalancing function Ω, which operates at the batch level to capture statistics of the cluster assignments. Specifically, given the Q matrices specified above, we compute the per-language rebalance weight vectors w^s, w^t ∈ R^K, such that when multiplied with the prototype outputs they suppress the magnitudes of frequently-assigned clusters and promote those of rarely-assigned clusters, resulting in more balanced cluster assignments. More formally, the weight vectors are computed as:

f^s = ∑_{i=1}^{2B} onehot([Q^{s,1}; Q^{s,2}])_i;  f^t = ∑_{i=1}^{2B} onehot([Q^{t,1}; Q^{t,2}])_i    (6)

w^s = 1.0 − (f^s / ∑_{j=1}^{K} f^s_j);  w^t = 1.0 − (f^t / ∑_{j=1}^{K} f^t_j)    (7)

where onehot(•) is a row-wise one-hot function which returns a one-hot vector by turning the maximum entry in each row to 1.0 and the rest to 0.0. We then use the weight vectors w^s, w^t to compute the rebalanced cluster assignments V^{s,j}, V^{t,j} ∈ R^{B×K} by modifying Equation (3) as:

[V^{s,1}; V^{s,2}; V^{t,1}; V^{t,2}] = sinkhorn(exp((1/ε) [w^s ⊙ [Z^{s,1}; Z^{s,2}]; w^t ⊙ [Z^{t,1}; Z^{t,2}]] Cᵀ))    (8)

where a ⊙ B is the element-wise multiplication of vector a applied to each row of B, and the rows of V^{s,j}, V^{t,j} are the v_i^{s,j}, v_i^{t,j} vectors in Equation (4). Note that there is no gradient flow through the computation of w^s and w^t, nor through the cluster assignments Q and V.
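The rebalancing machinery of Equations (6)-(8) is small enough to sketch. The Sinkhorn routine here is a simplified stand-in for the one in Caron et al. (2020), the weights are applied to the prototype outputs Z Cᵀ as the prose above describes, and all shapes are toy-sized:

```python
import numpy as np

def sinkhorn(scores, n_iters=3):
    """Simplified Sinkhorn-Knopp: alternately normalise columns (so clusters
    are used evenly across the batch) and rows (so each row is a distribution)."""
    Q = scores.astype(float).copy()
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True)
        Q /= Q.sum(axis=1, keepdims=True)
    return Q

def rebalance_weights(Q):
    # Eqs. (6)-(7): f_k counts hard assignments; w = 1 - f / sum_j f_j, so
    # over-used clusters are suppressed and unused clusters promoted.
    onehot = np.zeros_like(Q)
    onehot[np.arange(len(Q)), Q.argmax(axis=1)] = 1.0
    f = onehot.sum(axis=0)
    return 1.0 - f / f.sum()

def rebalanced_codes(Z, C, w, eps=0.05):
    # Eq. (8), interpreted per the text: scale the prototype outputs Z @ C.T
    # by the per-cluster weights w, then renormalise with Sinkhorn.
    return sinkhorn(np.exp(w * (Z @ C.T) / eps))
```

For a batch whose hard assignments pile onto one cluster, `rebalance_weights` down-weights that cluster and gives unused clusters weight 1.0, which is exactly the suppression/promotion behaviour motivating Ω.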
This work provides a nice extension to the SwAV loss of Caron et al. Specifically, it proposes a variant that can be used to build meaningful cluster assignments for different languages at the same time in an unsupervised manner. The key to their modification of SwAV is the so-called language-agnostic rebalancing function, a simple modification that encourages the cluster assignments of sentences in different languages to be balanced. Experiments show that their modification improves mining quality (i.e., results in a higher BLEU score) compared to using plain SwAV. Despite that, the work shows that their model alone is not enough: extra steps are needed to make the mined data usable (i.e., a filter suite to filter the data and a rank-based cross-entropy loss to train the model with better use of the data).
Contrastive Clustering to Mine Pseudo Parallel Data for Unsupervised Translation
Modern unsupervised machine translation systems mostly train their models by generating synthetic parallel training data from large unlabeled monolingual corpora of different languages through various means, such as iterative back-translation. However, there may exist a small amount of actual parallel data hidden in the sea of unlabeled data, which has not been exploited. We develop a new fine-tuning objective, called Language-Agnostic Constraint for SwAV loss, or LAgSwAV, which enables a pre-trained model to extract such pseudo-parallel data from the monolingual corpora in a fully unsupervised manner. We then propose an effective strategy to utilize the obtained synthetic data to augment unsupervised machine translation. Our method achieves the state of the art in the WMT'14 English-French, WMT'16 German-English and English-Romanian bilingual unsupervised translation tasks, with 40.2, 36.8, and 37.0 BLEU, respectively. We also achieve substantial improvements in the FLoRes low-resource English-Nepali and English-Sinhala unsupervised tasks, with 5.3 and 5.4 BLEU, respectively.

1 INTRODUCTION

The quest to build a fully unsupervised machine translation (UMT) system, where only unlabeled monolingual corpora are available, has received increasing attention in recent years. Notably, Lample et al. (2018a;c) introduced the general principles for UMT, which include cross-lingual initialization, language modeling and iterative back-translation. Although various UMT variants (Lample et al., 2018a;c; Conneau & Lample, 2019; Song et al., 2019; Liu et al., 2020; Nguyen et al., 2021) apply these principles differently, they ultimately train their models on synthetic parallel data generated by the models themselves or by some randomization process, which may cause harm, as the generated synthetic data is often noisy and low-quality, especially at the beginning of training.
However, these methods may have overlooked the possibility that some parallel (or comparable) data exist in the monolingual corpora, which can be effectively mined to augment UMT training. This paper is motivated to explore mining of pseudo-parallel data for UMT tasks. While there has been limited research on unsupervised mining of such data (Wu et al., 2019a;b), there have been several studies on bitext mining from sentence embeddings in semi-supervised translation or zero-shot transfer setups (Zweigenbaum et al., 2018; Schwenk, 2018; Artetxe & Schwenk, 2019a;b). However, these methods require the models to be pre-trained on massive multilingual parallel data, which renders them inapplicable and incomparable to the fully unsupervised setup. Furthermore, such models may not behave well off-the-shelf on low-resource languages that were not in the pre-training data, such as Nepali and Sinhala (Guzmán et al., 2019). Meanwhile, applying these mining algorithms directly to sentence embeddings produced by fully self-supervised models (Conneau & Lample, 2019) leads to a significant amount of misalignments, which hurts the UMT models, as shown later in our experiments (§4).

*Most of this work was done during an internship at Meta AI.

In this paper, we propose a novel modification to the SwAV loss (Caron et al., 2020), called Language-Agnostic Constraint for SwAV (LAgSwAV)¹, which is used to fine-tune existing pre-trained self-supervised models (Conneau & Lample, 2019; Liu et al., 2020) so that they become semantically contrastive and language-agnostic. Specifically, semantic contrastiveness is defined, in this paper, as the model's ability to cluster sentences of similar semantic content into similar groups. Meanwhile, the language-agnostic property means that the embedding spaces of sentences from different languages are indistinguishably mixed together. Figure 1 further illustrates the two properties.
We argue that these two properties combined can help the model mine more accurate pseudo-parallel data, since language-agnosticism reduces the distances between cross-lingual embeddings, while semantic contrastiveness brings sentences of similar content closer together even if they are in different languages. To achieve the semantically contrastive property, we adopt the SwAV loss (Caron et al., 2020), which is designed to classify input data into different clusters that reflect certain semantic characteristics. However, since SwAV is not designed to be language-agnostic, we introduce a new language-agnostic constraint for it to enable the latter property. Analysis experiments in §4 show that our constraint significantly outperforms the vanilla SwAV loss as well as other baselines in mining pseudo-parallel data from monolingual corpora. After fine-tuning pre-trained self-supervised models with the LAgSwAV loss, we sample sentences from both languages and pass them to the model to obtain the SwAV cluster assignment probabilities, which are treated as sentence embeddings. The soft cluster assignments are then matched by a margin-based algorithm (Artetxe & Schwenk, 2019b) with additional filtering criteria (§3.2). To augment UMT training with the obtained pseudo-parallel data, we propose a simple ranking-based weighted cross-entropy loss to train on this data along with the standard iterative back-translation (§3.3). We also apply a dynamic weight to the augmentation loss to alleviate the overfitting effect due to the limited augmentation data. We tested our method on the standard bilingual WMT'14 English-French, WMT'16 German-English and English-Romanian unsupervised translation tasks and achieved the state of the art with 40.2, 36.8, and 37.0 BLEU, respectively, which is up to a 2 BLEU gain over the baselines. In the FLoRes low-resource unsupervised setups (Guzmán et al.
, 2019), our method also outperforms the mBART baseline (Liu et al., 2020) by up to 2.8 BLEU in the English-Nepali and English-Sinhala tasks. We conducted a series of analyses to demonstrate the superior ability of LAgSwAV in mining pseudo-parallel data. The method is also found to outperform the supervised LASER (Artetxe & Schwenk, 2019a), which can be attributed to LASER's lack of language specifiers.

2 BACKGROUND

2.1 UNSUPERVISED MACHINE TRANSLATION

Lample et al. (2018a) and Artetxe et al. (2018) were among the first to employ iterative back-translation in fully unsupervised machine translation (UMT). Lample et al. (2018c) later formulated the three principles of UMT: initialization, language modeling and iterative back-translation (Sennrich et al., 2016a). Initialization can take the form of a MUSE bilingual dictionary (Lample et al., 2018b;a), a cross-lingual masked language model (XLM) (Conneau & Lample, 2019) or a Seq2Seq model (Song et al., 2019). Language modeling often takes the form of denoising autoencoding (Lample et al., 2018a) or is omitted entirely (Song et al., 2019). Apart from these principles, Nguyen et al. (2021) use two distinct UMT teachers to distill the student in a two-stage back-translation process. Despite their differences, most existing methods employ the same iterative back-translation process. Multilinguality has also proved useful, as Liu et al. (2020) and Garcia et al. (2021) utilize extra training data from more than two languages. These methods, however, do not leverage the potential pseudo-parallel data that may exist in the monolingual data to improve UMT.

¹Code: https://github.com/nxphi47/fairseq/tree/swav_umt

2.2 PSEUDO-PARALLEL DATA MINING

Parallel data mining for machine translation has been an active field of research (Zweigenbaum et al., 2018; Schwenk, 2018; Chinea-Ríos et al., 2017).
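To make the shared back-translation principle concrete, here is a minimal toy sketch. It is purely illustrative: real systems use neural seq2seq models, while the word-level dictionary `model_ts` and the `translate` helper here are hypothetical stand-ins.

```python
def translate(model, sentence):
    """Toy word-by-word 'translation': look each word up in a dictionary,
    copying it unchanged when absent (a stand-in for a real seq2seq model)."""
    return " ".join(model.get(w, w) for w in sentence.split())

def make_synthetic_pairs(model_ts, mono_tgt):
    """One back-translation step for the s->t direction: translate real
    target-language sentences into the source language with the t->s model,
    yielding (synthetic source, real target) pairs to train the s->t model."""
    return [(translate(model_ts, y), y) for y in mono_tgt]

# Iterating this in both directions, retraining each model after every round,
# gives the iterative back-translation loop that the methods above share.
```

The key design point the sketch illustrates is that the source side of each training pair is synthetic (and possibly noisy) while the target side is always real monolingual text, so the decoder only ever learns from genuine sentences.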
Artetxe & Schwenk (2019b) proposed an effective margin-based algorithm to mine parallel data using sentence embeddings, which we also use in our method. More recently, LASER (Artetxe & Schwenk, 2019a) was trained on massive multilingual parallel data from 93 languages. Nonetheless, most of the effort has been invested in a supervised setup where the training data is parallel. In terms of UMT, Wu et al. (2019a) proposed an extract-edit approach to mine relevant data and edit it for use, while Wu et al. (2019b) mine weakly-paired documents. Tran et al. (2020) suggested training on massive multilingual data from 25 languages and mining data from 180 translation directions, which may not be applicable in the bilingual setup, where monolingual data only for the two languages under consideration are available.

2.3 SWAPPING ASSIGNMENTS BETWEEN VIEWS (SWAV LOSS)

Caron et al. (2020) introduced the Swapping Assignments between Views (SwAV) loss as a means to conduct self-supervised training in computer vision, because the masking-based training methods applied in language modeling are intractable for images. Although SwAV is a type of contrastive learning, it differs from its counterparts (Chen et al., 2020; He et al., 2020; Wu et al., 2018) in that it is an online clustering-based method. This means the model is trained to assign cluster labels, or a "code" vector q, to an input image with encoded latent features z. In particular, the SwAV loss seeks to enforce consistency between the codes of different augmentations (or views) of the same image. In other words, different augmentations of the same image should have almost the same semantic content as the original, and thus their cluster assignments should also be consistently the same.
Formally, let z_1, z_2 ∈ R^d be the latent features of two distinct views X_1, X_2 of the same image X after passing them through an encoder f_θ. We compute the corresponding codes q_1, q_2 ∈ [0, 1]^K by passing z_1, z_2 into the prototype layer, which consists of prototype weights C = [c_1, ..., c_K] with c_i ∈ R^d. We then swap the codes q_1, q_2 and use them as labels for z_2, z_1, respectively:

\mathcal{L}_{\mathrm{SwAV}}(z_1, z_2) = l(z_1, q_2) + l(z_2, q_1)    (1)

The loss function l(z, q) in Equation (1) measures the similarity between features z and code q, and is defined as:

l(z, q) = -\sum_k q_k \log p_k, \quad \text{with} \quad p_k = \frac{\exp\left(\frac{1}{\tau} z^T c_k\right)}{\sum_{k'} \exp\left(\frac{1}{\tau} z^T c_{k'}\right)}    (2)

where τ is a temperature hyper-parameter and k is the index of row c_k in C. Figure 2a illustrates how the SwAV loss works. While the above formulation involves only two augmentations X_1, X_2 of the same image for brevity, multiple augmentations are used in practice. We refer the reader to Caron et al. (2020) for further details on the SwAV loss.

3 METHOD

In this section, we present our three main contributions: the language-agnostic constraint for the SwAV loss, or LAgSwAV (§3.1); the pseudo-parallel data mining with a filter suite (§3.2); and finally the rank-based cross-entropy loss for UMT training with augmented data (§3.3).

3.1 LANGUAGE-AGNOSTIC CONSTRAINT FOR SWAV LOSS IN A LANGUAGE MODEL

The SwAV loss was developed to conduct self-supervised training in vision due to the infeasibility of applying masked language modeling (MLM) to images (Caron et al., 2020). Thus, for language model pre-training, we stick to MLM and use our proposed LAgSwAV to fine-tune a pre-trained LM to achieve semantic contrastiveness and language agnosticism. Specifically, in the presence of sentences from two languages, let B^s = [X_1, ..., X_B] and B^t = [Y_1, ..., Y_B] be batches of B sentences from languages s and t, respectively.
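As a concrete illustration, here is a minimal NumPy sketch of Equations (1)-(2). It assumes the feature vectors and the prototype matrix C are given; in the original method the codes q come from the Sinkhorn procedure rather than being supplied directly.

```python
import numpy as np

def softmax(s):
    # numerically stable softmax over a score vector
    e = np.exp(s - s.max())
    return e / e.sum()

def l(z, q, C, tau=0.1):
    # Eq. (2): l(z, q) = -sum_k q_k log p_k with p = softmax(C z / tau)
    p = softmax(C @ z / tau)
    return -np.sum(q * np.log(p + 1e-12))

def swav_loss(z1, z2, q1, q2, C, tau=0.1):
    # Eq. (1), swapped prediction: q2 supervises z1 and q1 supervises z2
    return l(z1, q2, C, tau) + l(z2, q1, C, tau)
```

The swap is the essential design choice: each view is asked to predict the code computed from the *other* view, which forces the encoder to produce view-invariant features.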
We proceed to augment them into B^{s,1} = [X^1_1, ..., X^1_B] and B^{s,2} = [X^2_1, ..., X^2_B], and B^{t,1} = [Y^1_1, ..., Y^1_B] and B^{t,2} = [Y^2_1, ..., Y^2_B], by various noising techniques such as token swapping, masking and deletion. Next, we denote by F_θ the Transformer encoder (Conneau & Lample, 2019), which outputs the sentence embedding z given an input sentence. We compute the respective latent features Z^{s,1}, Z^{s,2}, Z^{t,1}, Z^{t,2} ∈ R^{B×d} for B^{s,1}, B^{s,2}, B^{t,1}, B^{t,2} using F_θ, as described in §2.3. Subsequently, we compute the code cluster assignments Q^{s,1}, Q^{s,2}, Q^{t,1}, Q^{t,2} ∈ [0, 1]^{B×K} as:

[Q^{s,1}; Q^{s,2}; Q^{t,1}; Q^{t,2}] = \mathrm{sinkhorn}\left(\exp\left(\tfrac{1}{\varepsilon}\,[Z^{s,1}; Z^{s,2}; Z^{t,1}; Z^{t,2}]\, C^T\right)\right)    (3)

where [•;•] is the row-wise concatenation function, sinkhorn(•) is the iterative Sinkhorn-Knopp algorithm (Cuturi, 2013), a batch-wise renormalization process that computes the cluster assignment probabilities, and ε is a hyper-parameter that controls the smoothness of the distribution. The above formulation is in fact no different from the vanilla SwAV loss (§2.3). However, intuitively, if the model F_θ is not language-agnostic (i.e., s and t are distributionally separate), the resulting cluster assignments Q^{s,j} and Q^{t,j} are likely to be distributed unevenly across s and t. In other words, some clusters c_i ∈ C will be frequently assigned to the sentences of language s but rarely to the sentences of t, and vice versa. Figure 2b illustrates this imbalance phenomenon. We argue that it is possible to enforce the language-agnostic property indirectly by fixing this imbalance. Therefore, the objective of our proposed LAgSwAV is to measure the stochastic imbalance in the cluster assignments Q^{s,j}, Q^{t,j} and rebalance them into V^{s,j}, V^{t,j} ∈ [0, 1]^{B×K}, such that the clusters in V^{s,j}, V^{t,j} are stochastically assigned evenly regardless of the language of the sentences.
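A short sketch of the sinkhorn(·) step in Equation (3), following the alternating row/column normalization of the Sinkhorn-Knopp algorithm as used in SwAV-style methods. The 3-iteration default and the ε value are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def sinkhorn(scores, eps=0.05, n_iters=3):
    """Turn raw prototype scores (rows: examples, cols: K clusters) into
    soft assignments whose mass is balanced across clusters."""
    Q = np.exp(scores / eps)
    Q /= Q.sum()
    B, K = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True)  # normalize columns...
        Q /= K                             # ...so each cluster holds 1/K of the mass
        Q /= Q.sum(axis=1, keepdims=True)  # normalize rows...
        Q /= B                             # ...so each example holds 1/B of the mass
    return Q * B  # rescale so each row is a probability distribution
```

The column normalization is what prevents the degenerate solution in which every example collapses onto a single cluster, which is exactly the property the rebalancing function later exploits per language.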
Then, we compute the total LAgSwAV loss as usual for both s and t as:

\mathcal{L}^s_{\mathrm{AgSwAV},i} = l(z^{s,1}_i, v^{s,2}_i) + l(z^{s,2}_i, v^{s,1}_i); \quad \mathcal{L}^t_{\mathrm{AgSwAV},i} = l(z^{t,1}_i, v^{t,2}_i) + l(z^{t,2}_i, v^{t,1}_i)    (4)

\mathcal{L}_{\mathrm{LAgSwAV}} = \sum_i \mathcal{L}^s_{\mathrm{AgSwAV},i} + \sum_i \mathcal{L}^t_{\mathrm{AgSwAV},i}    (5)

We formalize the transformation from Q to V using the language-agnostic rebalancing function Ω, which operates at the batch level to capture the statistics of the cluster assignments. Specifically, given the Q matrices specified above, we compute the per-language rebalance weight vectors w^s, w^t ∈ R^K such that, when multiplied with the prototype outputs, they suppress the magnitudes of frequently-assigned clusters and promote those of rarely-assigned clusters, resulting in more balanced cluster assignments. More formally, the weight vectors are computed as:

f^s = \sum_{i=1}^{2B} \mathrm{onehot}([Q^{s,1}; Q^{s,2}])_i; \quad f^t = \sum_{i=1}^{2B} \mathrm{onehot}([Q^{t,1}; Q^{t,2}])_i    (6)

w^s = 1.0 - f^s \Big/ \sum_{j=1}^{K} f^s_j; \quad w^t = 1.0 - f^t \Big/ \sum_{j=1}^{K} f^t_j    (7)

where onehot(•) is a row-wise one-hot function which returns a one-hot vector by setting the maximum entry in the row to 1.0 and the rest to 0.0. We then use the weight vectors w^s, w^t to compute the rebalanced cluster assignments V^{s,j}, V^{t,j} ∈ R^{B×K} by modifying Equation (3) as:

[V^{s,1}; V^{s,2}; V^{t,1}; V^{t,2}] = \mathrm{sinkhorn}\left(\exp\left(\tfrac{1}{\varepsilon}\left[w^s \odot [Z^{s,1}; Z^{s,2}];\; w^t \odot [Z^{t,1}; Z^{t,2}]\right] C^T\right)\right)    (8)

where a ⊙ B is an element-wise multiplication operation of vector a applied to each row of B, and the rows of V^{s,j}, V^{t,j} represent the v^{s,j}_i, v^{t,j}_i vectors in Equation (4). Note that there is no gradient flow through the computation of w^s and w^t, nor through the cluster assignments Q and V.
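Equations (6)-(8) can be sketched as follows. This is a minimal NumPy illustration; following the prose ("multiplying them with prototype outputs"), the weights here scale the K-dimensional prototype scores before the sinkhorn step, and the ε value is an illustrative assumption.

```python
import numpy as np

def rebalance_weights(Q):
    """Eqs. (6)-(7): Q is the (2B, K) assignment matrix of one language.
    f counts how often each cluster is the row-wise argmax (the onehot sum);
    frequently-assigned clusters get small weights, rare ones weights near 1."""
    f = np.zeros(Q.shape[1])
    np.add.at(f, Q.argmax(axis=1), 1.0)  # column sums of the row-wise one-hots
    return 1.0 - f / f.sum()

def rebalanced_prototype_scores(Z, w, C, eps=0.05):
    """Toward Eq. (8), for one language: compute prototype scores Z C^T and
    suppress the columns of over-used clusters with w before applying
    sinkhorn (not shown) to obtain the rebalanced assignments V."""
    return np.exp((Z @ C.T) * w / eps)
```

Note the stop-gradient mentioned in the text: in a real implementation both functions would run under a no-gradient context, since w and the assignments only act as targets.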
The authors propose a language-agnostic SwAV loss function to mine pseudo-parallel data from unlabeled monolingual corpora in various languages. They start with a pretrained masked language model and then fine-tune it using their proposed language-agnostic SwAV. The self-supervised model predicts cluster codes from latent-space representations, swapping the predicted codes between different views of the same sentence. Furthermore, they extend the proposed loss with a weighting term to re-balance predictions coming from different embedding spaces, resulting in a homogeneous space between the source and target languages. They experiment on both the WMT'14 dataset and the low-resource FLoRes dataset, achieving state-of-the-art BLEU scores compared to existing unsupervised machine translation systems.
This paper presents improvements to unsupervised machine translation (UMT). It introduces a modified SwAV loss to cluster sentences from monolingual data, which are then exploited during UMT training. Evaluation is performed on standard UMT tasks and an in-depth analysis is provided.
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training
1 INTRODUCTION

The existence of adversarial examples (Szegedy et al., 2014) causes serious safety concerns when deploying modern deep learning models. For example, in classification tasks, imperceptible perturbations of the input instance can fool state-of-the-art classifiers. Many strategies to obtain models that are robust against adversarial attacks have been proposed (Buckman et al., 2018; Dhillon et al., 2018; Ma et al., 2018; Pang et al., 2020; 2019; Samangouei et al., 2018; Xiao et al., 2020), but most of them have been found to be ineffective in the presence of adaptive attacks (Athalye et al., 2018; Croce & Hein, 2020b; Tramer et al., 2020). Ultimately, this leaves adversarial training (Madry et al., 2018) and its variants (Alayrac et al., 2019; Carmon et al., 2019; Gowal et al., 2020; Hendrycks et al., 2019; Kumari et al., 2019; Wu et al., 2020; Zhang et al., 2019a) as the most effective and popular approach to constructing robust models. Unfortunately, adversarial training yields much worse performance on the test data than vanilla training. In particular, it strongly suffers from overfitting (Rice et al., 2020), with the model's performance decaying significantly on the test set in the later phase of adversarial training. While this can be mitigated by early stopping (Rice et al., 2020) or model smoothing (Chen et al., 2021b), the reason behind the overfitting of adversarial training remains poorly understood. In this paper, we study this phenomenon from the perspective of training instances, i.e., training input-target pairs. We introduce a quantitative metric to measure the difficulty of the instances and analyze the model's behavior, such as its loss and intermediate activations, on training instances of different difficulty levels.
This lets us discover that the model's generalization performance decays significantly when it fits the hard adversarial instances in the later training phase. To study this phenomenon more rigorously, we then perform theoretical analyses on both linear and nonlinear models. For linear models, we study logistic regression on a Gaussian mixture model, for which we can calculate the analytical expression of the model parameters upon convergence, and thus the robust test accuracy. Our theorem demonstrates that adversarial training on harder instances leads to larger generalization gaps. We further prove that the gap in robust test accuracy between models trained on hard instances and ones trained on easy instances increases with the size of the adversarial budget. In the case of nonlinear models, we derive a lower bound on the model's Lipschitz constant when the model is well fit to the adversarial training examples. This bound increases with the difficulty level of the training instances and the size of the adversarial budget. Since a larger Lipschitz constant indicates a higher adversarial vulnerability (Ruan et al., 2018; Weng et al., 2018a;b), our theoretical analysis confirms our empirical observations. Our findings can be broadly applied to obtain robust models. To evidence this, in addition to the standard adversarial training setting of (Madry et al., 2018), we study the following two scenarios: fast adversarial training and adversarial fine-tuning with additional training data. Our proposed method assigns adaptive targets or adaptive weights to training instances to avoid fitting hard input-target pairs. We show it can mitigate adversarial overfitting and improve models' performance. For fast adversarial training, our results are better than those of other accelerated adversarial training methods available on RobustBench (Croce et al., 2020).
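The adaptive-weighting idea can be sketched in a few lines. The weighting function below, which scales each instance's adversarial loss by its difficulty score so that hard instances contribute little, is a hypothetical illustration and not the paper's exact scheme:

```python
# Sketch of adaptive instance weighting (hypothetical illustration,
# not the paper's exact scheme). Difficulty scores lie in [0, 1]:
# close to 0 for the hardest instances, close to 1 for the easiest.

def weighted_adversarial_loss(adv_losses, difficulties):
    """Weighted average of per-instance adversarial losses, so that
    easy instances (difficulty near 1) dominate the training objective
    while hard instances (difficulty near 0) are down-weighted."""
    assert len(adv_losses) == len(difficulties)
    total = sum(w * l for w, l in zip(difficulties, adv_losses))
    return total / sum(difficulties)

# The hard instance (difficulty 0.1) barely moves the objective,
# even though its adversarial loss (2.0) is the largest.
loss = weighted_adversarial_loss([2.0, 0.5, 0.3], [0.1, 0.7, 0.9])
```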
For adversarial fine-tuning with additional training data, we show improved performance over the methods in (Alayrac et al., 2019; Carmon et al., 2019). Contributions. In summary, our contributions are as follows: 1) Based on a quantitative metric of instance difficulty, we show that fitting hard adversarial instances leads to degraded generalization performance in adversarial training. 2) We conduct a rigorous theoretical analysis on both linear and nonlinear models. For linear models, we show analytically that models trained on harder instances have a larger robust test error than the ones trained on easy instances; the gap increases with the size of the adversarial budget. For nonlinear models, we derive a lower bound on the model's Lipschitz constant. This lower bound increases with the difficulty of the training instances and the size of the adversarial budget, indicating that both factors make adversarial overfitting more severe. 3) We show that the adaptive use of the easy and hard training instances can improve the performance in fast adversarial training and in adversarial fine-tuning with additional training data. Notation and terminology. In this paper, x and x′ are the clean input and its adversarial counterpart. We use fw to represent a model parameterized by w and omit the subscript w unless ambiguous. o = fw(x) and o′ = fw(x′) are the model's outputs for the clean input and the adversarial input. Lw(x, y) and Lw(x′, y) represent the loss of the clean and adversarial instances, respectively, in which we sometimes omit w and y for notational simplicity. We use ‖w‖ and ‖X‖ to represent the l2 norm of the vector w and the largest singular value of the matrix X, respectively. sign is an elementwise function which returns +1 for positive elements, −1 for negative elements, and 0 for 0. 1y is the one-hot vector with only the y-th dimension being 1.
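The sign and one-hot notation just defined can be made concrete with a minimal sketch (plain Python, for illustration only):

```python
def sign(v):
    """Elementwise sign as defined in the paper:
    +1 for positive elements, -1 for negative elements, 0 for 0."""
    return [(x > 0) - (x < 0) for x in v]

def one_hot(y, num_classes):
    """1_y: the one-hot vector with only the y-th dimension being 1."""
    return [1 if i == y else 0 for i in range(num_classes)]

assert sign([3.0, -0.5, 0.0]) == [1, -1, 0]
assert one_hot(2, 4) == [0, 0, 1, 0]
```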
The term adversarial budget refers to the allowable perturbations applied to the input instance. It is characterized by an lp norm and a size ε as the set S(p)(ε) = {∆ : ‖∆‖p ≤ ε}, with ε defining the budget size. Therefore, given the training set D, the robust learning problem can be formulated as the min-max optimization min_w E_{x∼D} max_{∆∈S(p)(ε)} Lw(x + ∆). A notation table is provided in Appendix A. In this paper, vanilla training refers to training on the clean inputs, and vanilla adversarial training to the adversarial training method in (Madry et al., 2018). RN18 and WRN34 are the 18-layer ResNet (He et al., 2016) and the 34-layer WideResNet (Zagoruyko & Komodakis, 2016) used in (Madry et al., 2018) and (Wong et al., 2020), respectively. To avoid confusion with the general term overfitting, which denotes the gap between the training and test accuracy, we employ the term adversarial overfitting to indicate the phenomenon that robust accuracy on the test set decreases significantly in the later phase of vanilla adversarial training. This phenomenon was pointed out in (Rice et al., 2020) and does not occur in vanilla training. Our code is submitted on Google Drive anonymously1.
1 https://drive.google.com/file/d/1vb6ehNMkBeNIM3dLr_igKMUcCgBd9ZmK/view?usp=sharing
2 RELATED WORK. We concentrate on white-box attacks, where the attacker has access to the model parameters. Such attacks are usually based on first-order information and are stronger than black-box attacks (Andriushchenko et al., 2020; Dong et al., 2018). For example, the fast gradient sign method (FGSM) (Goodfellow et al., 2014) perturbs the input based on its gradient's sign, i.e., ∆ = ε sign(∇x Lw(x)). The iterative fast gradient sign method (IFGSM) (Kurakin et al., 2016) iteratively runs FGSM using a smaller step size and projects the perturbation back to the adversarial budget after each iteration. On top of IFGSM, projected gradient descent (PGD) (Madry et al., 2018) uses random initial perturbations and restarts to boost the strength of the attack. Many methods have been proposed to defend a model against adversarial attacks (Buckman et al., 2018; Dhillon et al., 2018; Ma et al., 2018; Pang et al., 2020; 2019; Samangouei et al., 2018; Xiao et al., 2020). However, most of them were shown to utilize obfuscated gradients (Athalye et al., 2018; Croce & Hein, 2020b; Tramer et al., 2020), that is, training the model to tackle some specific types of attacks instead of achieving true robustness. This makes these falsely robust models vulnerable to stronger adaptive attacks. By contrast, several works have designed training algorithms to obtain provably robust models (Cohen et al., 2019; Gowal et al., 2019; Raghunathan et al., 2018; Salman et al., 2019; Wong & Kolter, 2018). Unfortunately, these methods either do not generalize to modern network architectures or have a prohibitively large computational complexity. As a consequence, adversarial training (Madry et al., 2018) and its variants (Alayrac et al., 2019; Carmon et al., 2019; Hendrycks et al., 2019; Kumari et al., 2019; Wu et al., 2020; Zhang et al., 2019a) have become the de facto approach to obtain robust models in practice. In essence, these methods generate adversarial examples, usually using PGD, and use them to optimize the model parameters. While effective, adversarial training is more challenging than vanilla training. It was shown to require larger models (Xie & Yuille, 2020) and to exhibit a poor convergence behavior (Liu et al., 2020). Furthermore, as observed in (Rice et al., 2020), it suffers from adversarial overfitting: the robust accuracy on the test set significantly decreases in the late adversarial training phase. (Rice et al.
, 2020) thus proposed to perform early stopping based on a separate validation set to improve the generalization performance in adversarial training. Furthermore, (Chen et al., 2021b) introduced logit smoothing and weight smoothing strategies to reduce adversarial overfitting. In parallel to this, several techniques to improve the model's robust test accuracy were proposed (Wang et al., 2020; Wu et al., 2020; Zhang et al., 2021), but without solving the adversarial overfitting issue. By contrast, other works (Balaji et al., 2019; Huang et al., 2020) were empirically shown to mitigate adversarial overfitting, but without providing any explanation as to how this phenomenon was addressed. In this paper, we study the causes of adversarial overfitting from both an empirical and a theoretical point of view. We also identify the reasons why prior attempts (Balaji et al., 2019; Chen et al., 2021a; Huang et al., 2020) successfully mitigate it.
3 A METRIC FOR INSTANCE DIFFICULTY. Parametric models are trained to minimize a loss objective over a set of input-target pairs called the training set, and are then evaluated on a held-out set called the test set. By comparing the loss value of each instance, we can understand which ones, in either the training or the test set, are more difficult for the model to fit. In this section, we introduce a metric for instance difficulty, which mainly depends on the data and on the perturbations applied to the instances. Let L(x) denote the average loss of x's corresponding perturbed input across all training epochs. We define the difficulty of an instance x within a finite set D as
d(x) = P(L(x) < L(x̃) | x̃ ∼ U(D)) + (1/2) P(L(x) = L(x̃) | x̃ ∼ U(D)),   (1)
where x̃ ∼ U(D) indicates that x̃ is uniformly sampled from the finite set D. d(x) is a bounded function, close to 0 for the hardest instances and 1 for the easiest ones.
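Since D is finite, Eq. (1) is a tie-adjusted rank statistic over the per-instance average losses, and can be computed directly; a minimal sketch (the function name is ours):

```python
def difficulty(losses):
    """Compute d(x) from Eq. (1) for every instance in a finite set D,
    given each instance's average loss L(x) across training epochs:
    d(x) = P(L(x) < L(x~)) + 0.5 * P(L(x) = L(x~)),  x~ ~ U(D).
    High-loss (hard) instances get scores near 0, low-loss (easy)
    instances get scores near 1."""
    n = len(losses)
    scores = []
    for lx in losses:
        greater = sum(l > lx for l in losses)  # x~ with a larger loss
        ties = sum(l == lx for l in losses)    # includes x itself
        scores.append((greater + 0.5 * ties) / n)
    return scores

# The instance with the highest loss (3.0) is the hardest: d close to 0.
d = difficulty([3.0, 1.0, 2.0])  # [0.5/3, 2.5/3, 0.5]
```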
We discuss the motivation and properties of d(x) in Appendix D.1, and show that it mainly depends on the data and on the perturbation applied; the model architecture and the training duration hardly affect the difficulty function. That is, d(x) represents the difficulty of x within a set under a specific type of perturbation.
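For concreteness, the PGD attack described in Section 2 (random start inside the budget, signed-gradient ascent steps, projection back to the l∞ ball after each step) can be sketched on a toy differentiable loss. This is a minimal illustration under our own toy setup, not the implementation used in the paper:

```python
import random

def pgd_linf(x, grad_fn, eps, step, iters, seed=0):
    """Sketch of l_inf PGD maximizing a scalar loss: start from a random
    perturbation inside the budget, take signed-gradient ascent steps,
    and clip each coordinate of the perturbation back to [-eps, eps]
    after every step."""
    rng = random.Random(seed)
    delta = [rng.uniform(-eps, eps) for _ in x]
    for _ in range(iters):
        g = grad_fn([xi + di for xi, di in zip(x, delta)])
        delta = [max(-eps, min(eps, di + step * ((gi > 0) - (gi < 0))))
                 for di, gi in zip(delta, g)]
    return [xi + di for xi, di in zip(x, delta)]

# Toy loss L(x) = sum(x_i^2) with gradient 2x: PGD pushes each
# coordinate of the perturbed input to the boundary of the budget,
# so [1.0, -1.0] becomes [1.3, -1.3] with eps = 0.3.
x_adv = pgd_linf([1.0, -1.0], lambda x: [2 * xi for xi in x],
                 eps=0.3, step=0.1, iters=10)
```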
This paper proposes a metric to measure the difficulty of training examples and finds that hard training examples harm the generalization of adversarial training and cause adversarial overfitting. The authors also provide a theoretical analysis. To mitigate the issue caused by hard training examples, they propose to assign different weights to training examples based on their loss values. The evaluation results show that the proposed training methods improve robustness in fast adversarial training and in adversarial fine-tuning with additional data.
This paper begins by analyzing the influence of hard adversarial examples and finds that they may be the major cause of robust overfitting. Based on these observations and analyses, the authors further introduce a fast adversarial training scheme that achieves competitive results compared to baselines.
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training
1 INTRODUCTION . The existence of adversarial examples ( Szegedy et al. , 2014 ) causes serious safety concerns when deploying modern deep learning models . For example , for classification tasks , imperceptible perturbations of the input instance can fool state-of-the-art classifiers . Many strategies to obtain models that are robust against adversarial attacks have been proposed ( Buckman et al. , 2018 ; Dhillon et al. , 2018 ; Ma et al. , 2018 ; Pang et al. , 2020 ; 2019 ; Samangouei et al. , 2018 ; Xiao et al. , 2020 ) , but most of them have been found to be ineffective in the presence of adaptive attacks ( Athalye et al. , 2018 ; Croce & Hein , 2020b ; Tramer et al. , 2020 ) . Ultimately , this leaves adversarial training ( Madry et al. , 2018 ) and its variants ( Alayrac et al. , 2019 ; Carmon et al. , 2019 ; Gowal et al. , 2020 ; Hendrycks et al. , 2019 ; Kumari et al. , 2019 ; Wu et al. , 2020 ; Zhang et al. , 2019a ) as the most effective and popular approach to construct robust models . Unfortunately , adversarial training yields much worse performance on the test data than vanilla training . In particular , it strongly suffers from overfitting ( Rice et al. , 2020 ) , with the model ’ s performance decaying significantly on the test set in the later phase of adversarial training . While this can be mitigated by early stopping ( Rice et al. , 2020 ) or model smoothing ( Chen et al. , 2021b ) , the reason behind the overfitting of adversarial training remains poorly understood . In this paper , we study this phenomenon from the perspective of training instances , i.e. , training input-target pairs . We introduce a quantitative metric to measure the difficulty of the instances and analyze the model ’ s behavior , such as its loss and intermediate activations , on training instances of different difficulty levels . 
This lets us discover that the model ’ s generalization performance decays significantly when it fits the hard adversarial instances in the later training phase . To more rigorously study this phenomenon , we then perform theoretical analyses on both linear and nonlinear models . For linear models , we study logistic regression on a Gaussian mixture model , in which we can calculate the analytical expression of the model parameters upon convergence and thus the robust test accuracy . Our theorem demonstrates that adversarial training on harder instances leads to larger generalization gaps . We further prove that the gap in robust test accuracy between models trained by hard instances and ones trained by easy instances increases with the size of the adversarial budget . In the case of nonlinear models , we derive the lower bound of the model ’ s Lipschitz constant when the model is well fit to the adversarial training examples . This bound increases with the difficulty level of the training instances and the size of the adversarial budget . Since a larger Lipschitz constant indicates a higher adversarial vulnerability ( Ruan et al. , 2018 ; Weng et al. , 2018a ; b ) , our theoretical analysis confirms our empirical observations . Our findings can be broadly applied to obtain robust models . To evidence this , in addition to the standard adversarial training settings of ( Madry et al. , 2018 ) , we study the following two scenarios : fast adversarial training and adversarial fine-tuning with additional training data . Our proposed method assigns adaptive targets or adaptive weights on training instances to avoid fitting hard inputtarget pairs . We show it can mitigate adversarial overfitting and improve models ’ performance . For fast adversarial training , our results are better than other accelerated adversarial training methods available on RobustBench ( Croce et al. , 2020 ) . 
For adversarial fine-tuning with additional training data , we show improved performance over the methods in ( Alayrac et al. , 2019 ; Carmon et al. , 2019 ) . Contributions . In summary , our contributions are as follows : 1 ) Based on a quantitative metric of instance difficulty , we show that fitting hard adversarial instances leads to degraded generalization performance in adversarial training . 2 ) We conduct a rigorous theoretical analysis on both linear and nonlinear models . For linear models , we show analytically that models trained on harder instances have larger robust test error than the ones trained on easy instances ; the gap increases with the size of the adversarial budget . For nonlinear models , we derive a lower bound of the model ’ s Lipschitz constant . The lower bound increases with the difficulty of the training instances and the size of the adversarial budget , indicating both factors make adversarial overfitting more severe . 3 ) We show that the adaptive use of the easy and hard training instances can improve the performance in fast adversarial training and adversarial fine-tuning with additional training data . Notation and terminology . In this paper , x and x′ are the clean input and its adversarial counterpart . We use fw to represent a model parameterized by w and omit the subscript w unless ambiguous . o = fw ( x ) and o′ = fw ( x′ ) are the model ’ s output of the clean input and the adversarial input . Lw ( x , y ) and Lw ( x′ , y ) represent the loss of the clean and adversarial instances , receptively , in which we sometimes omit w and y for notation simplicity . We use ‖w‖ and ‖X‖ to represent the l2 norm of the vector w and the largest singular value of the matrix X , respectively . sign is an elementwise function which returns +1 for positive elements , −1 for negative elements and 0 for 0 . 1y is the one-hot vector with only the y-th dimension being 1 . 
The term adversarial budget refers to the allowable perturbations applied to the input instance . It is characterized by lp norm and the size as a set S ( p ) ( ) = { ∆|‖∆‖p ≤ } , with defining the budget size . Therefore , given the training set D , the robust learning problem can be formulated as the min-max optimization minw Ex∼Dmax∆∈S ( p ) ( ) Lw ( x+ ∆ ) . A notation table is provided in Appendix A . In this paper , vanilla training refers to training on the clean inputs , and vanilla adversarial training to the adversarial training method in ( Madry et al. , 2018 ) . RN18 and WRN34 are the 18-layer ResNet ( He et al. , 2016 ) and the 34-layer WideResNet ( Zagoruyko & Komodakis , 2016 ) used in ( Madry et al. , 2018 ) and ( Wong et al. , 2020 ) , respectively . To avoid confusion with the general term overfitting , which denotes the gap between the training and test accuracy , we employ the term adversarial overfitting to indicate the phenomenon that robust accuracy on the test set decreases significantly in the later phase of vanilla adversarial training . This phenomenon was pointed out in ( Rice et al. , 2020 ) and does not occur in vanilla training . Our code is submitted on GoogleDrive anonymously1 . 2 RELATED WORK . We concentrate on white-box attacks , where the attacker has access to the model parameters . Such attacks are usually based on first-order information and stronger than black-box attacks ( Andriushchenko et al. , 2020 ; Dong et al. , 2018 ) . For example , the fast gradient sign method ( FGSM ) ( Goodfellow et al. , 2014 ) perturbs the input based on its gradient ’ s sign , i.e. , ∆ = sign ( OxLw ( x ) ) . The iterative fast gradient sign method ( IFGSM ) ( Kurakin et al. , 2016 ) iteratively runs FGSM using a smaller step size and projects the perturbation back to the adversarial budget after each iteration . On top of IFGSM , projected gradient descent ( PGD ) ( Madry et al. 
, 2018 ) use random initial perturbations and restarts to boost the strength of the attack . Many methods have been proposed to defend a model against adversarial attacks ( Buckman et al. , 2018 ; Dhillon et al. , 2018 ; Ma et al. , 2018 ; Pang et al. , 2020 ; 2019 ; Samangouei et al. , 2018 ; Xiao 1https : //drive.google.com/file/d/1vb6ehNMkBeNIM3dLr_igKMUcCgBd9ZmK/view ? usp=sharing et al. , 2020 ) . However , most of them were shown to utilize obfuscated gradients ( Athalye et al. , 2018 ; Croce & Hein , 2020b ; Tramer et al. , 2020 ) , that is , training the model to tackle some specific types of attacks instead of achieving true robustness . This makes these falsely robust models vulnerable to stronger adaptive attacks . By contrast , several works have designed training algorithms to obtain provably robust models ( Cohen et al. , 2019 ; Gowal et al. , 2019 ; Raghunathan et al. , 2018 ; Salman et al. , 2019 ; Wong & Kolter , 2018 ) . Unfortunately , these methods either do not generalize to modern network architectures or have a prohibitively large computational complexity . As a consequence , adversarial training ( Madry et al. , 2018 ) and its variants ( Alayrac et al. , 2019 ; Carmon et al. , 2019 ; Hendrycks et al. , 2019 ; Kumari et al. , 2019 ; Wu et al. , 2020 ; Zhang et al. , 2019a ) have become the de facto approach to obtain robust models in practice . In essence , these methods generate adversarial examples , usually using PGD , and use them to optimize the model parameters . While effective , adversarial training is more challenging than vanilla training . It was shown to require larger models ( Xie & Yuille , 2020 ) and to exhibit a poor convergence behavior ( Liu et al. , 2020 ) . Furthermore , as observed in ( Rice et al. , 2020 ) , it suffers from adversarial overfitting : the robust accuracy on the test set significantly decreases in the late adversarial training phase . ( Rice et al. 
, 2020 ) thus proposed to perform early stopping based on a separate validation set to improve the generalization performance of adversarial training . Furthermore , ( Chen et al. , 2021b ) introduced logit smoothing and weight smoothing strategies to reduce adversarial overfitting . In parallel to this , several techniques to improve the model ’ s robust test accuracy were proposed ( Wang et al. , 2020 ; Wu et al. , 2020 ; Zhang et al. , 2021 ) , but without solving the adversarial overfitting issue . By contrast , other works ( Balaji et al. , 2019 ; Huang et al. , 2020 ) were empirically shown to mitigate adversarial overfitting , but without providing any explanation as to how this phenomenon was addressed . In this paper , we study the causes of adversarial overfitting from both an empirical and a theoretical point of view . We also identify the reasons why prior attempts ( Balaji et al. , 2019 ; Chen et al. , 2021a ; Huang et al. , 2020 ) successfully mitigate it . 3 A METRIC FOR INSTANCE DIFFICULTY . Parametric models are trained to minimize a loss objective over a set of input-target pairs called the training set , and are then evaluated on a held-out set called the test set . By comparing the loss value of each instance , we can understand which ones , in either the training or the test set , are more difficult for the model to fit . In this section , we introduce a metric for instance difficulty , which mainly depends on the data and on the perturbations applied to the instances . Let L ( x ) denote the average loss of x ’ s corresponding perturbed input across all training epochs . We define the difficulty of an instance x within a finite set D as d ( x ) = P ( L ( x ) < L ( x̃ ) | x̃ ∼ U ( D ) ) + (1/2) P ( L ( x ) = L ( x̃ ) | x̃ ∼ U ( D ) ) , ( 1 ) where x̃ ∼ U ( D ) indicates that x̃ is uniformly sampled from the finite set D . d ( x ) is a bounded function , close to 0 for the hardest instances , and 1 for the easiest ones .
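To make the definition concrete, d ( x ) in Eq. ( 1 ) can be computed directly from the per-instance average losses. Below is a minimal NumPy sketch; the function name and the toy loss values are ours, for illustration only:

```python
import numpy as np

def difficulty(avg_losses):
    """Compute d(x) for every instance of a finite set D.

    avg_losses[i] plays the role of L(x_i): the loss of instance i's
    perturbed input, averaged over all training epochs.
    d(x) = P(L(x) < L(x~)) + 0.5 * P(L(x) = L(x~)), with x~ ~ U(D),
    so d is close to 0 for the hardest instances and to 1 for the easiest.
    """
    L = np.asarray(avg_losses, dtype=float)
    n = len(L)
    # For each x: count instances with strictly larger loss (L(x) < L(x~)),
    # and ties (each instance ties at least with itself).
    greater = (L[None, :] > L[:, None]).sum(axis=1)
    equal = (L[None, :] == L[:, None]).sum(axis=1)
    return (greater + 0.5 * equal) / n

d = difficulty([0.1, 0.5, 0.5, 2.0])
# d -> [0.875, 0.5, 0.5, 0.125]: the instance with the largest average
# loss is the hardest (d near 0), the smallest loss the easiest.
```

Note that sampling x̃ uniformly from D includes x itself, which always contributes a tie; this is what keeps d ( x ) strictly between 0 and 1 for a finite set.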
We discuss the motivation and properties of d ( x ) in Appendix D.1 , and show that it mainly depends on the data and on the perturbation applied ; the model architecture and the training duration hardly affect the difficulty function . That is , d ( x ) can represent the difficulty of x within a set under a specific type of perturbation .
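For reference, the FGSM and PGD attacks discussed in Section 2 reduce to a few lines given a gradient oracle. This is a minimal ℓ∞ sketch on a toy quadratic loss; the step size, iteration count and toy gradient are illustrative choices, not the paper's configuration:

```python
import numpy as np

def pgd_attack(grad_fn, x, eps, alpha=None, steps=10, rng=None):
    """l_inf PGD (Madry et al., 2018) given grad_fn(x) = dL/dx.

    FGSM is the one-step special case: delta = eps * sign(grad_fn(x)).
    PGD starts from a random point of the budget S(eps), takes repeated
    signed gradient ascent steps of size alpha, and projects the
    perturbation back into {delta : ||delta||_inf <= eps} after each step.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    alpha = eps / 4 if alpha is None else alpha
    delta = rng.uniform(-eps, eps, size=x.shape)             # random init
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad_fn(x + delta))  # ascent step
        delta = np.clip(delta, -eps, eps)                    # projection
    return x + delta

# Toy loss L(x) = 0.5 * ||x||^2, whose gradient is simply x: the attack
# pushes every coordinate outward until it saturates the budget.
x = np.array([0.2, -0.1])
x_adv = pgd_attack(lambda z: z, x, eps=0.05)
# x_adv -> [0.25, -0.15]; the perturbation stays inside the eps-ball
```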
This paper proposes a metric for measuring the difficulty of instances. The authors empirically show that hard instances lead to the issue of robust overfitting. Theoretical analysis on logistic regression and on a general nonlinear model indicates that adversarial vulnerability grows as instance difficulty increases. Finally, the paper proposes a reweighting strategy and an adaptive-label trick for improving adversarial robustness.
SP:5f2dd029e11b05daf0eb2a07c33aad57bb8fe982
Defending Against Image Corruptions Through Adversarial Augmentations
1 INTRODUCTION . By following a process known as Empirical Risk Minimization ( ERM ) ( Vapnik , 1998 ) , neural networks are trained to minimize the average error on a training set . ERM has enabled breakthroughs in a wide variety of fields and applications ( Goodfellow et al. , 2016 ; Krizhevsky et al. , 2012 ; Hinton et al. , 2012 ) , ranging from ranking content on the web ( Covington et al. , 2016 ) to autonomous driving ( Bojarski et al. , 2016 ) via medical diagnostics ( De Fauw et al. , 2018 ) . ERM is based on the principle that the data used during training is independently drawn from the same distribution as the one encountered during deployment . In practice , however , training and deployment data may differ and models can fail catastrophically . Such occurrences are commonplace , as training data is often collected through a biased process that highlights confounding factors and spurious correlations ( Torralba et al. , 2011 ; Kuehlkamp et al. , 2017 ) , which can lead to undesirable consequences ( e.g. , http://gendershades.org ) . As such , it has become increasingly important to ensure that deployed models are robust and generalize to various input corruptions . Unfortunately , even small corruptions can significantly affect the performance of existing classifiers . For example , Recht et al . ( 2019 ) ; Hendrycks et al . ( 2019 ) show that the accuracy of IMAGENET models is severely impacted by changes in the data collection process , while imperceptible deviations to the input , called adversarial perturbations , can cause neural networks to make incorrect predictions with high confidence ( Carlini & Wagner , 2017a ; b ; Goodfellow et al. , 2015 ; Kurakin et al. , 2016 ; Szegedy et al. , 2014 ) . Methods to counteract such effects , which mainly consist of using random or adversarially-chosen data augmentations , struggle .
Training against corrupted data only forces the memorization of such corruptions and , as a result , these models fail to generalize to new corruptions ( Vasiljevic et al. , 2016 ; Geirhos et al. , 2018 ) . Recent work from Hendrycks et al . ( 2020b ) ( also known as AugMix ) argues that basic pre-defined corruptions can be composed to improve the robustness of models to common corruptions . Another line of work , DeepAugment ( Hendrycks et al. , 2020a ) , corrupts images by passing them through two specific image-to-image models while distorting the models ’ parameters and activations using an extensive range of manually defined heuristic operations . While both methods perform well on average on the common corruptions present in CIFAR-10-C and IMAGENET-C , they generalize poorly to the adversarial setting . Most recently , Laidlaw et al . ( 2021 ) proposed an adversarial training method based on bounding a neural perceptual distance ( i.e. , an approximation of the true perceptual distance ) , under the acronym of PAT for Perceptual Adversarial Training . Their method performs well against five diverse adversarial attacks , but , as it specifically addresses robustness to pixel-level attacks that directly manipulate image pixels , it performs worse than AugMix on common corruptions . In this work , we address this gap . We focus on training models that are robust to adversarially-chosen corruptions that preserve semantic content . We go beyond conventional random data augmentation schemes ( exemplified by Hendrycks et al. , 2020b ; a ) and adversarial training ( exemplified by Madry et al. , 2018 ; Gowal et al. , 2019 ; Laidlaw et al. , 2021 ) by leveraging image-to-image models that can produce a wide range of semantically-preserving corruptions ; in contrast to related works , our method does not require the manual creation of heuristic transformations . 
Our contributions are as follows :
• We formulate an adversarial training procedure , named AdversarialAugment ( or AdA for short ) , which finds adversarial examples by optimizing over the weights of any pre-trained image-to-image model ( i.e. , over the weights of arbitrary autoencoders ) .
• We give sufficient conditions for the consistency of idealized versions of our method and DeepAugment , and provide PAC-Bayesian performance guarantees , following Neyshabur et al . ( 2017 ) . Our theoretical considerations highlight the potential advantages of AdA over previous work ( DeepAugment ) , as well as the combination of the two . We also establish links to Invariant Risk Minimization ( IRM ) ( Arjovsky et al. , 2020 ) , Adversarial Mixing ( AdvMix ) ( Gowal et al. , 2019 ) and Perceptual Adversarial Training ( Laidlaw et al. , 2021 ) .
• We improve upon the known state-of-the-art on CIFAR-10-C by achieving a mean corruption error ( mCE ) of 7.83 % when using our method in conjunction with others ( vs. 23.51 % for Perceptual Adversarial Training ( PAT ) , 10.90 % for AugMix and 8.11 % for DeepAugment ) . On IMAGENET we show that our method can leverage 4 pre-trained image-to-image models simultaneously ( VQ-VAE ( van den Oord et al. , 2017 ) , U-Net ( Ronneberger et al. , 2015 ) , EDSR ( Lim et al. , 2017 ) & CAE ( Theis et al. , 2017 ) ) to yield the largest increase in robustness to common image corruptions , among all evaluated models .
• On ℓ2 and ℓ∞ norm-bounded perturbations we significantly improve upon previous work ( DeepAugment & AugMix ) using AdA ( EDSR ) , while slightly improving generalization performance on both IMAGENET-V2 and CIFAR-10.1 .
2 RELATED WORK . Data augmentation . Data augmentation has been shown to reduce the generalization error of standard ( non-robust ) training . For image classification tasks , random flips , rotations and crops are commonly used ( He et al. , 2016a ) .
More sophisticated techniques such as Cutout of DeVries & Taylor ( 2017 ) ( which produces random occlusions ) , CutMix of Yun et al . ( 2019 ) ( which replaces parts of an image with another ) and mixup of Zhang et al . ( 2018a ) ; Tokozume et al . ( 2018 ) ( which linearly interpolates between two images ) all demonstrate extremely compelling results . Guo et al . ( 2019 ) improved upon mixup by proposing an adaptive mixing policy . Works , such as AutoAugment ( Cubuk et al. , 2019 ) and the related RandAugment ( Cubuk et al. , 2020 ) , learn augmentation policies from data directly . These methods are tuned to improve standard classification accuracy and have been shown to work well on CIFAR-10 , CIFAR-100 , SVHN and IMAGENET . However , these approaches do not necessarily generalize well to larger data shifts and perform poorly on benign corruptions such as blur or speckle noise ( Taori et al. , 2020 ) . Robustness to synthetic and natural data shift . Several works argue that training against corrupted data only forces the memorization of such corruptions and , as a result , models fail to generalize to new corruptions ( Vasiljevic et al. , 2016 ; Geirhos et al. , 2018 ) . This has not prevented Geirhos et al . ( 2019 ) ; Yin et al . ( 2019 ) ; Hendrycks et al . ( 2020b ) ; Lopes et al . ( 2019 ) ; Hendrycks et al . ( 2020a ) from demonstrating that some forms of data augmentation can improve the robustness of models on IMAGENET-C , despite not being directly trained on these common corruptions . Most works on the topic focus on training models that perform well in expectation . Unfortunately , these models remain vulnerable to more drastic adversarial shifts ( Taori et al. , 2020 ) . Robustness to adversarial data shift . Adversarial data shift has been extensively studied ( Goodfellow et al. , 2015 ; Kurakin et al. , 2016 ; Szegedy et al. , 2014 ; Moosavi-Dezfooli et al. , 2019 ; Papernot et al. , 2016 ; Madry et al. , 2018 ) . 
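As a concrete example of the interpolation-based augmentations above, mixup ( Zhang et al. , 2018a ) is simply a convex combination of two examples and of their one-hot labels. A minimal sketch follows; the function signature and toy arrays are ours, for illustration:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """mixup: draw lam ~ Beta(alpha, alpha) and linearly interpolate
    both the inputs and the one-hot labels with the same coefficient."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# Two toy "images" (constant 0s and 1s) from different classes
x_mix, y_mix = mixup(np.zeros(4), np.array([1.0, 0.0]),
                     np.ones(4), np.array([0.0, 1.0]),
                     rng=np.random.default_rng(0))
# y_mix is a soft label summing to 1; x_mix lies between the two inputs
```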
Most works focus on the robustness of classifiers to ℓp-norm bounded perturbations . In particular , it is expected that a robust classifier should be invariant to small perturbations in the pixel space ( as defined by the ℓp-norm ) . Goodfellow et al . ( 2015 ) and Madry et al . ( 2018 ) laid down foundational principles to train robust networks , and recent works ( Zhang et al. , 2019 ; Qin et al. , 2019 ; Rice et al. , 2020 ; Wu et al. , 2020 ; Gowal et al. , 2020 ) continue to find novel approaches to enhance adversarial robustness . However , approaches focused on ℓp-norm bounded perturbations often sacrifice accuracy on non-adversarial images ( Raghunathan et al. , 2019 ) . Several works ( Baluja & Fischer , 2017 ; Song et al. , 2018 ; Xiao et al. , 2018 ; Qiu et al. , 2019 ; Wong & Kolter , 2021b ; Laidlaw et al. , 2021 ) go beyond these analytically defined perturbations and demonstrate that it is not only possible to maintain accuracy on non-adversarial images but also to reduce the effect of spurious correlations and reduce bias ( Gowal et al. , 2019 ) . Unfortunately , most aforementioned approaches perform poorly on CIFAR-10-C and IMAGENET-C . 3 DEFENSE AGAINST ADVERSARIAL CORRUPTIONS . In this section , we introduce AdA , our approach for training models robust to image corruptions through the use of adversarial augmentations while leveraging pre-trained autoencoders . In Appendix A we detail how our work relates to AugMix ( Hendrycks et al. , 2020b ) , DeepAugment ( Hendrycks et al. , 2020a ) , Invariant Risk Minimization ( Arjovsky et al. , 2020 ) , Adversarial Mixing ( Gowal et al. , 2019 ) and Perceptual Adversarial Training ( Laidlaw et al. , 2021 ) . Corrupted adversarial risk . We consider a model fθ : X → Y parametrized by θ .
Given a dataset D ⊂ X × Y over pairs of examples x and corresponding labels y , we would like to find the parameters θ which minimize the corrupted adversarial risk : E(x,y)∼D [ maxx′∈C(x) L ( fθ ( x′ ) , y ) ] , ( 1 ) where L is a suitable loss function , such as the 0-1 loss for classification , and C : X → 2^X outputs a corruption set for a given example x . For example , in the case of an image x , a plausible corruption set C ( x ) could contain blurred , pixelized and noisy variants of x . In other words , we seek the optimal parameters θ∗ which minimize the corrupted adversarial risk so that fθ∗ is invariant to corruptions ; that is , fθ∗ ( x′ ) = fθ∗ ( x ) for all x′ ∈ C ( x ) . For example , if x is an image classified as a horse by fθ∗ , then this prediction should not be affected by the image being slightly corrupted by camera blur , Poisson noise or JPEG compression artifacts . AdversarialAugment ( AdA ) . Our method , AdA , uses image-to-image models to generate adversarially corrupted images . At a high level , this is similar to how DeepAugment works : DeepAugment perturbs the parameters of two specific image-to-image models using heuristic operators , which are manually defined for each model . Our method , instead , is more general and optimizes directly over perturbations to the parameters of any pre-trained image-to-image model . We denote these image-to-image models as corruption networks . We experiment with four corruption networks : a vector-quantised variational autoencoder ( VQ-VAE ) ( van den Oord et al. , 2017 ) ; a convolutional U-Net ( Ronneberger et al. , 2015 ) trained for image completion ( U-Net ) ; a super-resolution model ( EDSR ) ( Lim et al. , 2017 ) ; and a compressive autoencoder ( CAE ) ( Theis et al. , 2017 ) . The latter two models are used in DeepAugment as well . Additional details about the corruption networks are provided in Appendix C .
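The inner maximization in ( 1 ) is easy to make concrete when C ( x ) is a finite set of corruption functions. A minimal sketch follows; the toy model, 0-1 loss and corruption functions below are illustrative stand-ins, not the paper's code:

```python
import numpy as np

def corrupted_adversarial_loss(model, loss, corruption_set, x, y):
    """Inner maximization of the corrupted adversarial risk:
    max over x' in C(x) of L(model(x'), y), for a finite set C(x)."""
    return max(loss(model(c(x)), y) for c in corruption_set)

# Toy setup: a "model" predicting class 1 when the mean pixel exceeds 0.5,
# a 0-1 loss, and three simple corruptions standing in for C(x).
model = lambda x: int(x.mean() > 0.5)
zero_one = lambda pred, y: float(pred != y)
corruptions = [
    lambda x: x,                        # identity
    lambda x: np.clip(x + 0.2, 0, 1),   # brightening
    lambda x: np.clip(x - 0.2, 0, 1),   # darkening
]
x, y = np.full(4, 0.6), 1
worst = corrupted_adversarial_loss(model, zero_one, corruptions, x, y)
# darkening drives the mean to 0.4, flipping the prediction: worst = 1.0
```

Minimizing this worst-case quantity over θ , rather than the average loss on clean inputs , is exactly what distinguishes the corrupted adversarial risk from ERM .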
Formally , let cφ : X → X be a corruption network with parameters φ = { φ1 , ... , φK } which , when its parameters are perturbed , acts upon clean examples by corrupting them . Here each φi corresponds to the vector of parameters in the i-th layer , and K is the number of layers . Let δ = { δ1 , ... , δK } be a weight perturbation set , so that a corrupted variant of x can be generated by c{φi+δi}( x ) . With a slight abuse of notation , we shorten this to cφ+δ . Clearly , using unconstrained perturbations can result in exceedingly corrupted images which have lost all discriminative information and are not useful for training . For example , if cφ is a multi-layer perceptron , trivially setting δi = −φi would yield fully zero , uninformative outputs . Hence , we restrict the corruption sets by defining a maximum relative perturbation radius ν > 0 , and define the corruption set of AdA as C ( x ) = { cφ+δ ( x ) | ‖δ‖2,φ ≤ ν } , where the norm ‖ · ‖2,φ is defined as ‖δ‖2,φ = maxi∈{ 1 , ... , K } ‖δi‖2 / ‖φi‖2 . Finding adversarial corruptions . For a clean image x with label y , a corrupted adversarial example within a bounded corruption distance ν is a corrupted image x′ = cφ+δ ( x ) generated by the corruption network c with bounded parameter offsets ‖δ‖2,φ ≤ ν which causes fθ to misclassify x : fθ ( x′ ) ≠ y . Similarly to Madry et al . ( 2018 ) , we find an adversarial corruption by maximizing a surrogate loss L̃ to L , for example , the cross-entropy loss between the predicted logits of the corrupted image and its clean label . We optimize over the perturbation δ to c ’ s parameters φ : max‖δ‖2,φ≤ν L̃ ( fθ ( cφ+δ ( x ) ) , y ) . ( 2 ) In practice , we solve this optimization problem ( approximately ) using projected gradient ascent to enforce that perturbations δ lie within the feasible set ‖δ‖2,φ ≤ ν . Examples of corrupted images obtained by AdA are shown in Figure 1 . Adversarial training .
Given the model f parameterized by θ , minimizing the corrupted adversarial risk from ( 1 ) results in parameters θ∗ obtained by solving the following optimization problem : θ∗ = arg minθ E(x,y)∼D [ max‖δ‖2,φ≤ν L̃ ( fθ ( cφ+δ ( x ) ) , y ) ] . ( 3 ) We also provide a full algorithm listing of our method in Appendix B . Meaningful corruptions . A crucial element of AdA is setting the perturbation radius ν to ensure that corruptions are varied enough to constitute a strong defense against common corruptions , while still being meaningful ( i.e. , without destroying semantics ) . We measure the extent of corruption induced by a given ν through the structural similarity index measure ( SSIM ) ( Wang et al. , 2004 ) between clean and corrupted images ( details on how SSIM is computed can be found in Appendix E ) . We plot the distributions of SSIM over various perturbation radii in Figure 2 for corrupted images produced by AdA using two backbones ( EDSR and CAE ) on CIFAR-10 . We find that a relative perturbation radius of ν = 0.015 yields enough variety in the corruptions for both EDSR and CAE . This is demonstrated for EDSR by its large SSIM variance compared to , e.g. , ν = 0.009375 , without destroying semantic meaning ( retaining a high mean SSIM ) . We guard against unlikely but too severe corruptions ( i.e. , with too low SSIM ) using an efficient approximate line-search procedure ( details can be found in Appendix E ) . A similar approach for restricting the SSIM values of samples during adversarial training was used by Hameed ( 2020 ) ; Hameed & György ( 2021 ) .
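The feasible set ‖δ‖2,φ ≤ ν used in ( 2 ) and ( 3 ) admits a simple layer-wise projection, applied after each gradient ascent step. A minimal NumPy sketch follows; the two-layer parameters are hypothetical toy values, not a real corruption network:

```python
import numpy as np

def project_rel(delta, phi, nu):
    """Project a per-layer weight perturbation onto ||delta||_{2,phi} <= nu.

    Since ||delta||_{2,phi} = max_i ||delta_i||_2 / ||phi_i||_2, each
    layer's perturbation can be rescaled independently whenever its l2
    norm exceeds nu * ||phi_i||_2. This is the projection step of the
    projected gradient ascent over the corruption network's parameters.
    """
    projected = []
    for d, p in zip(delta, phi):
        limit = nu * np.linalg.norm(p)
        n = np.linalg.norm(d)
        projected.append(d if n <= limit else d * (limit / n))
    return projected

# Hypothetical 2-layer parameters and a candidate perturbation
phi = [np.array([3.0, 4.0]), np.array([1.0, 0.0])]      # norms 5 and 1
delta = [np.array([0.3, 0.4]), np.array([0.0, 0.005])]  # norms 0.5, 0.005
proj = project_rel(delta, phi, nu=0.015)
# layer 1 exceeds 0.015 * 5 = 0.075 and is rescaled; layer 2 is unchanged
```

Because the constraint is relative to each layer's own norm, larger layers tolerate proportionally larger perturbations, which is what makes a single radius ν usable across very different corruption networks.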
The authors propose a new data augmentation technique, called AdversarialAugment, to increase the robustness of image classification models. The proposed method optimizes perturbations to the parameters of image-to-image models to generate adversarially corrupted images, and the authors also give sufficient conditions for consistency in an idealized setting. They empirically show that AdversarialAugment improves common-corruption robustness on CIFAR-10-C as well as worst-case performance against lp-norm bounded perturbations on CIFAR-10 and ImageNet.
SP:32b52a8f3f6cfbcd0a863d6ea84dc691f504a3b2
The main contribution of the paper is the proposed AdversarialAugment (AdA) method. AdA generates augmented versions of an input by passing the input through a corruption network (such as a pretrained image-to-image model) while adding a worst-case perturbation to the _weights_ of this pretrained network. The paper thus aims at using worst-case perturbations to increase average-case out-of-distribution generalization, such as robustness to common corruptions. Tuning the weight perturbation radius of the corruption network is done by controlling the SSIM between the augmented and the clean input. The paper provides theoretical considerations that state assumptions under which AdA is well-behaved (converges) and how it relates to prior work such as DeepAugment. The paper presents an extensive evaluation on common corruption benchmarks (CIFAR-10-C and ImageNet-C), domain shift (ImageNet-R), resampled test sets (CIFAR-10.1 and ImageNet-v2), and worst-case robustness against $\ell_p$ perturbations.
SP:32b52a8f3f6cfbcd0a863d6ea84dc691f504a3b2
Defending Against Image Corruptions Through Adversarial Augmentations
1 INTRODUCTION. By following a process known as Empirical Risk Minimization (ERM) (Vapnik, 1998), neural networks are trained to minimize the average error on a training set. ERM has enabled breakthroughs in a wide variety of fields and applications (Goodfellow et al., 2016; Krizhevsky et al., 2012; Hinton et al., 2012), ranging from ranking content on the web (Covington et al., 2016) to autonomous driving (Bojarski et al., 2016) via medical diagnostics (De Fauw et al., 2018). ERM is based on the principle that the data used during training is independently drawn from the same distribution as the one encountered during deployment. In practice, however, training and deployment data may differ and models can fail catastrophically. Such occurrences are commonplace, as training data is often collected through a biased process that highlights confounding factors and spurious correlations (Torralba et al., 2011; Kuehlkamp et al., 2017), which can lead to undesirable consequences (e.g., http://gendershades.org). As such, it has become increasingly important to ensure that deployed models are robust and generalize to various input corruptions. Unfortunately, even small corruptions can significantly affect the performance of existing classifiers. For example, Recht et al. (2019); Hendrycks et al. (2019) show that the accuracy of IMAGENET models is severely impacted by changes in the data collection process, while imperceptible deviations to the input, called adversarial perturbations, can cause neural networks to make incorrect predictions with high confidence (Carlini & Wagner, 2017a;b; Goodfellow et al., 2015; Kurakin et al., 2016; Szegedy et al., 2014). Methods to counteract such effects, which mainly consist of using random or adversarially-chosen data augmentations, struggle.
Training against corrupted data only forces the memorization of such corruptions and , as a result , these models fail to generalize to new corruptions ( Vasiljevic et al. , 2016 ; Geirhos et al. , 2018 ) . Recent work from Hendrycks et al . ( 2020b ) ( also known as AugMix ) argues that basic pre-defined corruptions can be composed to improve the robustness of models to common corruptions . Another line of work , DeepAugment ( Hendrycks et al. , 2020a ) , corrupts images by passing them through two specific image-to-image models while distorting the models ’ parameters and activations using an extensive range of manually defined heuristic operations . While both methods perform well on average on the common corruptions present in CIFAR-10-C and IMAGENET-C , they generalize poorly to the adversarial setting . Most recently , Laidlaw et al . ( 2021 ) proposed an adversarial training method based on bounding a neural perceptual distance ( i.e. , an approximation of the true perceptual distance ) , under the acronym of PAT for Perceptual Adversarial Training . Their method performs well against five diverse adversarial attacks , but , as it specifically addresses robustness to pixel-level attacks that directly manipulate image pixels , it performs worse than AugMix on common corruptions . In this work , we address this gap . We focus on training models that are robust to adversarially-chosen corruptions that preserve semantic content . We go beyond conventional random data augmentation schemes ( exemplified by Hendrycks et al. , 2020b ; a ) and adversarial training ( exemplified by Madry et al. , 2018 ; Gowal et al. , 2019 ; Laidlaw et al. , 2021 ) by leveraging image-to-image models that can produce a wide range of semantically-preserving corruptions ; in contrast to related works , our method does not require the manual creation of heuristic transformations . 
Our contributions are as follows: • We formulate an adversarial training procedure, named AdversarialAugment (or AdA for short), which finds adversarial examples by optimizing over the weights of any pre-trained image-to-image model (i.e., over the weights of arbitrary autoencoders). • We give sufficient conditions for the consistency of idealized versions of our method and DeepAugment, and provide PAC-Bayesian performance guarantees, following Neyshabur et al. (2017). Our theoretical considerations highlight the potential advantages of AdA over previous work (DeepAugment), as well as of the combination of the two. We also establish links to Invariant Risk Minimization (IRM) (Arjovsky et al., 2020), Adversarial Mixing (AdvMix) (Gowal et al., 2019) and Perceptual Adversarial Training (Laidlaw et al., 2021). • We improve upon the known state-of-the-art on CIFAR-10-C by achieving a mean corruption error (mCE) of 7.83% when using our method in conjunction with others (vs. 23.51% for Perceptual Adversarial Training (PAT), 10.90% for AugMix and 8.11% for DeepAugment). On IMAGENET we show that our method can leverage 4 pre-trained image-to-image models simultaneously (VQ-VAE (van den Oord et al., 2017), U-Net (Ronneberger et al., 2015), EDSR (Lim et al., 2017) & CAE (Theis et al., 2017)) to yield the largest increase in robustness to common image corruptions among all evaluated models. • On $\ell_2$ and $\ell_\infty$ norm-bounded perturbations we significantly improve upon previous work (DeepAugment & AugMix) using AdA (EDSR), while slightly improving generalization performance on both IMAGENET-V2 and CIFAR-10.1. 2 RELATED WORK. Data augmentation. Data augmentation has been shown to reduce the generalization error of standard (non-robust) training. For image classification tasks, random flips, rotations and crops are commonly used (He et al., 2016a).
More sophisticated techniques such as Cutout of DeVries & Taylor ( 2017 ) ( which produces random occlusions ) , CutMix of Yun et al . ( 2019 ) ( which replaces parts of an image with another ) and mixup of Zhang et al . ( 2018a ) ; Tokozume et al . ( 2018 ) ( which linearly interpolates between two images ) all demonstrate extremely compelling results . Guo et al . ( 2019 ) improved upon mixup by proposing an adaptive mixing policy . Works , such as AutoAugment ( Cubuk et al. , 2019 ) and the related RandAugment ( Cubuk et al. , 2020 ) , learn augmentation policies from data directly . These methods are tuned to improve standard classification accuracy and have been shown to work well on CIFAR-10 , CIFAR-100 , SVHN and IMAGENET . However , these approaches do not necessarily generalize well to larger data shifts and perform poorly on benign corruptions such as blur or speckle noise ( Taori et al. , 2020 ) . Robustness to synthetic and natural data shift . Several works argue that training against corrupted data only forces the memorization of such corruptions and , as a result , models fail to generalize to new corruptions ( Vasiljevic et al. , 2016 ; Geirhos et al. , 2018 ) . This has not prevented Geirhos et al . ( 2019 ) ; Yin et al . ( 2019 ) ; Hendrycks et al . ( 2020b ) ; Lopes et al . ( 2019 ) ; Hendrycks et al . ( 2020a ) from demonstrating that some forms of data augmentation can improve the robustness of models on IMAGENET-C , despite not being directly trained on these common corruptions . Most works on the topic focus on training models that perform well in expectation . Unfortunately , these models remain vulnerable to more drastic adversarial shifts ( Taori et al. , 2020 ) . Robustness to adversarial data shift . Adversarial data shift has been extensively studied ( Goodfellow et al. , 2015 ; Kurakin et al. , 2016 ; Szegedy et al. , 2014 ; Moosavi-Dezfooli et al. , 2019 ; Papernot et al. , 2016 ; Madry et al. , 2018 ) . 
Most works focus on the robustness of classifiers to $\ell_p$-norm bounded perturbations. In particular, it is expected that a robust classifier should be invariant to small perturbations in the pixel space (as defined by the $\ell_p$-norm). Goodfellow et al. (2015) and Madry et al. (2018) laid down foundational principles to train robust networks, and recent works (Zhang et al., 2019; Qin et al., 2019; Rice et al., 2020; Wu et al., 2020; Gowal et al., 2020) continue to find novel approaches to enhance adversarial robustness. However, approaches focused on $\ell_p$-norm bounded perturbations often sacrifice accuracy on non-adversarial images (Raghunathan et al., 2019). Several works (Baluja & Fischer, 2017; Song et al., 2018; Xiao et al., 2018; Qiu et al., 2019; Wong & Kolter, 2021b; Laidlaw et al., 2021) go beyond these analytically defined perturbations and demonstrate that it is not only possible to maintain accuracy on non-adversarial images but also to reduce the effect of spurious correlations and reduce bias (Gowal et al., 2019). Unfortunately, most aforementioned approaches perform poorly on CIFAR-10-C and IMAGENET-C. 3 DEFENSE AGAINST ADVERSARIAL CORRUPTIONS. In this section, we introduce AdA, our approach for training models robust to image corruptions through the use of adversarial augmentations while leveraging pre-trained autoencoders. In Appendix A we detail how our work relates to AugMix (Hendrycks et al., 2020b), DeepAugment (Hendrycks et al., 2020a), Invariant Risk Minimization (Arjovsky et al., 2020), Adversarial Mixing (Gowal et al., 2019) and Perceptual Adversarial Training (Laidlaw et al., 2021). Corrupted adversarial risk. We consider a model $f_\theta : \mathcal{X} \to \mathcal{Y}$ parametrized by $\theta$.
Given a dataset $D \subset \mathcal{X} \times \mathcal{Y}$ over pairs of examples $x$ and corresponding labels $y$, we would like to find the parameters $\theta$ which minimize the corrupted adversarial risk:

$$\mathbb{E}_{(x,y)\sim D}\Big[\max_{x' \in C(x)} L(f_\theta(x'), y)\Big], \qquad (1)$$

where $L$ is a suitable loss function, such as the 0-1 loss for classification, and $C : \mathcal{X} \to 2^{\mathcal{X}}$ outputs a corruption set for a given example $x$. For example, in the case of an image $x$, a plausible corruption set $C(x)$ could contain blurred, pixelized and noisy variants of $x$. In other words, we seek the optimal parameters $\theta^*$ which minimize the corrupted adversarial risk so that $f_{\theta^*}$ is invariant to corruptions; that is, $f_{\theta^*}(x') = f_{\theta^*}(x)$ for all $x' \in C(x)$. For example, if $x$ is an image classified to be a horse by $f_{\theta^*}$, then this prediction should not be affected by the image being slightly corrupted by camera blur, Poisson noise or JPEG compression artifacts. AdversarialAugment (AdA). Our method, AdA, uses image-to-image models to generate adversarially corrupted images. At a high level, this is similar to how DeepAugment works: DeepAugment perturbs the parameters of two specific image-to-image models using heuristic operators, which are manually defined for each model. Our method, instead, is more general and optimizes directly over perturbations to the parameters of any pre-trained image-to-image model. We denote these image-to-image models as corruption networks. We experiment with four corruption networks: a vector-quantised variational autoencoder (VQ-VAE) (van den Oord et al., 2017); a convolutional U-Net (Ronneberger et al., 2015) trained for image completion (U-Net), a super-resolution model (EDSR) (Lim et al., 2017) and a compressive autoencoder (CAE) (Theis et al., 2017). The latter two models are used in DeepAugment as well. Additional details about the corruption networks are provided in Appendix C.
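The corrupted adversarial risk in (1) can be made concrete when the corruption set is finite. The following minimal numpy sketch (function and variable names are ours, not the paper's) evaluates the empirical risk under the 0-1 loss, taking the worst case over a small list of candidate corruption functions standing in for $C(x)$:

```python
import numpy as np

def corrupted_adversarial_risk(predict, corruptions, data):
    """Empirical version of Eq. (1) with a finite corruption set:
    for each example, take the worst-case 0-1 loss over its corruptions."""
    losses = []
    for x, y in data:
        # inner max over the corruption set C(x)
        worst = max(float(predict(c(x)) != y) for c in corruptions)
        losses.append(worst)
    # outer expectation over the dataset
    return float(np.mean(losses))

# Toy setup: a sign-based classifier and two corruption operators.
predict = lambda x: int(x.sum() > 0)
identity = lambda x: x
negate = lambda x: -x   # a crude stand-in for a severe corruption
data = [(np.ones(2), 1), (-np.ones(2), 0)]
```

On this toy setup the clean risk (corruption set containing only the identity) is 0, while adding the sign-flipping corruption drives the worst-case risk to 1, illustrating why minimizing (1) is strictly harder than ERM.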
Formally, let $c_\phi : \mathcal{X} \to \mathcal{X}$ be a corruption network with parameters $\phi = \{\phi_i\}_{i=1}^{K}$ which, when its parameters are perturbed, acts upon clean examples by corrupting them. Here each $\phi_i$ corresponds to the vector of parameters in the $i$-th layer, and $K$ is the number of layers. Let $\delta = \{\delta_i\}_{i=1}^{K}$ be a weight perturbation set, so that a corrupted variant of $x$ can be generated by $c_{\{\phi_i + \delta_i\}_{i=1}^{K}}(x)$. With a slight abuse of notation, we shorten $c_{\{\phi_i + \delta_i\}_{i=1}^{K}}$ to $c_{\phi+\delta}$. Clearly, using unconstrained perturbations can result in exceedingly corrupted images which have lost all discriminative information and are not useful for training. For example, if $c_\phi$ is a multi-layer perceptron, trivially setting $\delta_i = -\phi_i$ would yield fully zero, uninformative outputs. Hence, we restrict the corruption sets by defining a maximum relative perturbation radius $\nu > 0$, and define the corruption set of AdA as $C(x) = \{c_{\phi+\delta}(x) \mid \|\delta\|_{2,\phi} \le \nu\}$, where the norm $\|\cdot\|_{2,\phi}$ is defined as $\|\delta\|_{2,\phi} = \max_{i \in \{1,\dots,K\}} \|\delta_i\|_2 / \|\phi_i\|_2$. Finding adversarial corruptions. For a clean image $x$ with label $y$, a corrupted adversarial example within a bounded corruption distance $\nu$ is a corrupted image $x' = c_{\phi+\delta}(x)$ generated by the corruption network $c$ with bounded parameter offsets $\|\delta\|_{2,\phi} \le \nu$ which causes $f_\theta$ to misclassify $x$: $f_\theta(x') \ne y$. Similarly to Madry et al. (2018), we find an adversarial corruption by maximizing a surrogate loss $\tilde{L}$ to $L$, for example, the cross-entropy loss between the predicted logits of the corrupted image and its clean label. We optimize over the perturbation $\delta$ to $c$'s parameters $\phi$:

$$\max_{\|\delta\|_{2,\phi} \le \nu} \tilde{L}(f_\theta(c_{\phi+\delta}(x)), y). \qquad (2)$$

In practice, we solve this optimization problem (approximately) using projected gradient ascent to enforce that perturbations $\delta$ lie within the feasible set $\|\delta\|_{2,\phi} \le \nu$. Examples of corrupted images obtained by AdA are shown in Figure 1. Adversarial training.
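The feasible set $\|\delta\|_{2,\phi} \le \nu$ used by the projected gradient ascent above amounts to a per-layer relative $\ell_2$ ball. A minimal numpy sketch of the norm and the projection step (the full attack would alternate gradient ascent steps on $\delta$ with this projection; helper names are ours, not the paper's):

```python
import numpy as np

def relative_norm(delta, phi):
    """||delta||_{2,phi} = max_i ||delta_i||_2 / ||phi_i||_2."""
    return max(np.linalg.norm(d) / np.linalg.norm(p) for d, p in zip(delta, phi))

def project_relative(delta, phi, nu):
    """Project layer-wise perturbations onto {delta : ||delta||_{2,phi} <= nu},
    i.e. rescale any layer whose offset exceeds nu * ||phi_i||_2."""
    projected = []
    for d_i, p_i in zip(delta, phi):
        limit = nu * np.linalg.norm(p_i)
        norm = np.linalg.norm(d_i)
        if norm > limit:
            d_i = d_i * (limit / norm)
        projected.append(d_i)
    return projected
```

Layers already inside the ball are left untouched; only violating layers are rescaled onto the boundary, which is the standard Euclidean projection applied per layer.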
Given the model $f$ parameterized by $\theta$, minimizing the corrupted adversarial risk from (1) results in parameters $\theta^*$ obtained by solving the following optimization problem:

$$\theta^* = \arg\min_{\theta} \mathbb{E}_{(x,y)\sim D}\left[\max_{\|\delta\|_{2,\phi} \le \nu} \tilde{L}(f_\theta(c_{\phi+\delta}(x)), y)\right]. \qquad (3)$$

We also provide a full algorithm listing of our method in Appendix B. Meaningful corruptions. A crucial element of AdA is setting the perturbation radius $\nu$ to ensure that corruptions are varied enough to constitute a strong defense against common corruptions, while still being meaningful (i.e., without destroying semantics). We measure the extent of corruption induced by a given $\nu$ through the structural similarity index measure (SSIM) (Wang et al., 2004) between clean and corrupted images (details on how SSIM is computed can be found in Appendix E). We plot the distributions of SSIM over various perturbation radii in Figure 2 for corrupted images produced by AdA using two backbones (EDSR and CAE) on CIFAR-10. We find that a relative perturbation radius of $\nu = 0.015$ yields enough variety in the corruptions for both EDSR and CAE. This is demonstrated for EDSR by a large SSIM variance compared to, e.g., $\nu = 0.009375$, without destroying semantic meaning (retaining a high mean SSIM). We guard against unlikely but too severe corruptions (i.e., with too low SSIM) using an efficient approximate line-search procedure (details can be found in Appendix E). A similar approach for restricting the SSIM values of samples during adversarial training was used by Hameed (2020); Hameed & György (2021).
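As a rough illustration of the SSIM gating described above, here is a single-window SSIM in numpy. The reference measure of Wang et al. (2004) averages the index over local Gaussian windows; this simplified global version is only meant to convey the formula, using the common default stabilizing constants $c_1 = (0.01 L)^2$ and $c_2 = (0.03 L)^2$ for dynamic range $L$:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between two images with values in [0, data_range].
    SSIM = ((2*mx*my + c1)(2*cov + c2)) / ((mx^2 + my^2 + c1)(vx + vy + c2))."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1, and any luminance or contrast distortion pulls the score below 1, which is the property the line-search gating relies on.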
This paper provides a data augmentation method that augments samples by perturbing the parameters of a generative model. The perturbations are found via an adversarial loss and are constrained based on a perceptual similarity distance to guard against outliers. In addition to thorough empirical evaluations, the paper formalizes its method and a closely related one (DeepAugment), adding theoretical insight, and provides interesting convergence properties and a PAC-Bayesian analysis.
SP:32b52a8f3f6cfbcd0a863d6ea84dc691f504a3b2
Fishr: Invariant Gradient Variances for Out-of-distribution Generalization
1 INTRODUCTION . The success of deep neural networks in supervised learning ( Krizhevsky et al. , 2012 ) relies on the crucial assumption that the train and test data distributions are identical . In particular , the tendency of networks to rely on simple features ( Kalimeris et al. , 2019 ; Valle-Perez et al. , 2019 ; Geirhos et al. , 2020 ) is generally a desirable behavior reflecting Occam ’ s razor . However , in case of distribution shift , this simplicity bias deteriorates performance since more complex features are needed ( Tenenbaum , 2018 ; Shah et al. , 2020 ) . For example , in the fight against Covid-19 , most of the deep learning methods developed to detect coronavirus from chest scans were shown useless for clinical use ( DeGrave et al. , 2021 ; Roberts et al. , 2021 ) : indeed , networks exploited simple bias in the training datasets such as patients ’ age or body position rather than ‘ truly ’ analyzing medical pathologies . To better generalize under distribution shifts , most works such as Blanchard et al . ( 2011 ) or Muandet et al . ( 2013 ) assume that the training data is divided into different training domains in which there is a constant underlying causal mechanism ( Peters et al. , 2016 ) . To remove the domain-dependent explanations , different invariance criteria across those training domains have been proposed . ( Ganin et al. , 2016 ; Sun et al. , 2016 ; Sun & Saenko , 2016 ) enforce similar feature distributions , others ( Arjovsky et al. , 2019 ; Krueger et al. , 2021 ) force the classifier to be simultaneously optimal across all domains . Yet , despite the popularity of this research topic , none of these methods perform significantly better than the classical Empirical Risk Minimization ( ERM ) when applied with controlled model selection and restricted hyperparameter search ( Gulrajani & Lopez-Paz , 2021 ; Ye et al. , 2021 ) . These failures motivate the need for new ideas . 
To foster the emergence of a shared mechanism with consistent generalization properties , our intuition is that learning should progress consistently and similarly across domains . Besides , the learning procedure of deep neural networks is dictated by the distribution of the gradients with respect to the network weights ( Yin et al. , 2018 ; Sankararaman et al. , 2020 ) — usually backpropagated in the network during gradient descent . Thus , we seek distributional invariance across domains in the gradient space : domain-level gradients should be similar , not only in average direction , but most importantly in statistics such as covariance and dispersion . In this paper , we propose the Fishr regularization for out-of-distribution generalization in classification — summarized in Fig . 1 . We match the domain-level gradient variances , i.e. , the second moment of the gradient distributions . In contrast , previous gradient-based works such as Fish ( Shi et al. , 2021 ) only match the domain-level gradients means , i.e. , the first moment . Moreover , our strategy is also motivated by the close relations between the gradient variance , the Fisher Information ( Fisher , 1922 ) and the Hessian . This explains the name of our work , Fishr , using gradients as in Fish and related to the Fisher Information Matrix . Notably , we will study how Fishr forces the model to have similar domain-level Hessians and promotes consistent explanations — by generalizing the inconsistency formalism introduced in Parascandolo et al . ( 2021 ) . To reduce the computational cost , we justify an approximation that only considers the gradients in the classifier . This is simple to implement with the BackPACK ( Dangel et al. , 2020 ) package . We summarize our contributions as follows : • We introduce the Fishr regularization that brings closer the domain-level gradient variances . 
• Based on the relation between the gradient covariance, the Fisher Information and the Hessian, we show that Fishr matches domain-level Hessians and improves generalization by reducing inconsistencies across domains. • We justify a simple and scalable implementation. Empirically, we first validate that Fishr tackles distribution shifts on the synthetic Colored MNIST (Arjovsky et al., 2019). Then, we show that Fishr performs best on the DomainBed benchmark (Gulrajani & Lopez-Paz, 2021) with the 'Oracle' model selection method and third with the 'Training' model selection when compared with state-of-the-art counterparts. Critically, Fishr is the only method to perform better (on VLCS, OfficeHome, TerraIncognita and DomainNet) or similarly (on PACS) than ERM with both selection methods on all 'real' datasets. 2 CONTEXT AND RELATED WORK. We first describe our task and provide the notations used along our paper. Then we remind some important related works to understand how our Fishr stands in a rich literature. Problem definition and notations. We study out-of-distribution (OOD) generalization for classification. Our model is a deep neural network (DNN) $f_\theta$ (parametrized by $\theta$) made of a deep feature extractor $\Phi_\phi$ on which we plug a dense linear classifier $w_\omega$: $f_\theta = w_\omega \circ \Phi_\phi$ and $\theta = (\phi, \omega)$. In training, we have access to different domains $\mathcal{E}$: for each domain $e \in \mathcal{E}$, the dataset $D_e = \{(x_e^i, y_e^i)\}_{i=1}^{n_e}$ contains $n_e$ i.i.d. (input, label) samples drawn from a domain-dependent probability distribution. Combined together, the datasets $\{D_e\}_{e \in \mathcal{E}}$ are of size $n = \sum_{e \in \mathcal{E}} n_e$. Our goal is to learn weights $\theta$ so that $f_\theta$ predicts well on a new test domain, unseen in training. As described in Koh et al. (2020) and Ye et al.
( 2021 ) , most common distribution shifts are diversity shifts — where the training and test distributions comprise data from related but distinct domains — or correlation shifts — where the distribution of the covariates at test time differs from the one during training . To generalize well despite these distribution shifts , fθ should ideally capture an invariant mechanism across training domains . Following standard notations , ‖M‖2F denotes the Frobenius norm of matrix M ; ‖v‖ 2 2 denotes the euclidean norm of vector v ; 1 is a column vector with all elements equal to 1 . The standard Expected Risk Minimization ( ERM ) ( Vapnik , 1999 ) framework simply minimizes the average empirical risk over all training domains , i.e. , 1|E| ∑ e∈E Re ( θ ) where Re ( θ ) = 1ne ∑ne i=1 ` ( fθ ( xie ) , yie ) and ` is the loss , usually the negative log-likelihood . Many approaches try to exploit some external source of knowledge ( Xie et al. , 2021 ) , in particular the domain information . As a side note , these partitions may be inferred if not provided ( Creager et al. , 2021 ) . Some works explore data augmentations to mix samples from different domains ( Wang et al. , 2020 ; Wu et al. , 2020 ) , some re-weight the training samples to favor underrepresented groups ( Sagawa et al. , 2020a ; b ; Zhang et al. , 2021 ) and others include domain-dependent weights ( Ding & Fu , 2017 ; Mancini et al. , 2018 ) . Yet , most recent works promote invariance via a regularization criterion and only differ by the choice of the statistics to be matched across training domains . They can be categorized into three groups : these methods enforce agreement either ( 1 ) in features ( 2 ) in predictors or ( 3 ) in gradients . First , some approaches aim at extracting domain-invariant features and were extensively studied for unsupervised domain adaptation . The features are usually aligned with adversarial methods ( Ganin et al. , 2016 ; Gong et al. , 2016 ; Li et al. 
, 2018b;c) or with kernel methods (Muandet et al., 2013; Long et al., 2014). Yet, the simple covariance matching in CORAL (Sun et al., 2016; Sun & Saenko, 2016) performs best on various tasks for OOD generalization (Gulrajani & Lopez-Paz, 2021). With $Z_e^{ij}$ the $j$-th dimension of the features extracted by $\Phi_\phi$ for the $i$-th example $x_e^i$ of domain $e \in \mathcal{E} = \{A, B\}$, CORAL minimizes $\|\mathrm{Cov}(Z_A) - \mathrm{Cov}(Z_B)\|_F^2$, where $\mathrm{Cov}(Z_e) = \frac{1}{n_e - 1}\big(Z_e^\top Z_e - \frac{1}{n_e}(\mathbf{1}^\top Z_e)^\top(\mathbf{1}^\top Z_e)\big)$ is the feature covariance matrix. CORAL is more powerful than mere feature mean matching $\big\|\frac{1}{n_A}\mathbf{1}^\top Z_A - \frac{1}{n_B}\mathbf{1}^\top Z_B\big\|_2^2$ as in Deep Domain Confusion (DDC) (Tzeng et al., 2014). Yet, Johansson et al. (2019) and Zhao et al. (2019) show that these approaches are insufficient to guarantee good generalization. Motivated by arguments from causality (Pearl, 2009) and the idea that statistical dependencies are epiphenomena of an underlying causal structure, Invariant Risk Minimization (IRM) (Arjovsky et al., 2019) explains that the predictor should be invariant (Peters et al., 2016; Rojas-Carulla et al., 2018), i.e., simultaneously optimal across all domains. Among many suggested improvements (Chang et al., 2020; Idnani & Kao, 2020; Teney et al., 2020; Ahmed et al., 2021), Risk Extrapolation (V-REx) (Krueger et al., 2021) argues that training risks from different domains should be similar and thus penalizes $|R_A - R_B|^2$ when $\mathcal{E} = \{A, B\}$. These ideas have been applied in semi-supervised learning (Li et al., 2021). Yet, recent works point out pitfalls of IRM (Javed et al., 2020; Guo et al., 2021; Kamath et al., 2021), which does not provably work with non-linear data (Rosenfeld et al., 2021) and could not improve over ERM when hyperparameter selection is restricted (Koh et al., 2020; Gulrajani & Lopez-Paz, 2021; Ye et al., 2021). A third and most recent line of work promotes agreement between gradients with respect to $\theta$.
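The CORAL penalty described above is straightforward to write down. A minimal numpy sketch (`np.cov` with `rowvar=False` applies the same $1/(n_e - 1)$ normalization as the covariance formula; the function name is ours):

```python
import numpy as np

def coral_penalty(z_a, z_b):
    """CORAL objective: squared Frobenius distance between the feature
    covariance matrices of two domains; z_e has shape (n_e, feature_dim)."""
    cov_a = np.cov(z_a, rowvar=False)  # (d, d), normalized by 1/(n_a - 1)
    cov_b = np.cov(z_b, rowvar=False)
    return float(np.sum((cov_a - cov_b) ** 2))
```

Identical feature batches yield a penalty of exactly zero, while any second-order mismatch between domains (e.g. a contrast rescaling) is penalized even when feature means agree, which is what distinguishes CORAL from DDC's mean matching.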
Gradient agreements help batches from different tasks to cooperate, and have been previously employed for multitask (Du et al., 2018; Yu et al., 2020), continual (Lopez-Paz & Ranzato, 2017), meta (Finn et al., 2017; Zhang et al., 2020) and reinforcement (Zhang et al., 2019) learning. In OOD generalization, Koyama & Yamaguchi (2020); Parascandolo et al. (2021); Shi et al. (2021) try to find minima in the loss landscape that are shared across domains. Specifically, these works tackle the domain-level expected gradients:

$$g_e = \mathbb{E}_{(x_e, y_e) \sim D_e}\, \nabla_\theta\, \ell(f_\theta(x_e), y_e). \qquad (1)$$

When $\mathcal{E} = \{A, B\}$, IGA (Koyama & Yamaguchi, 2020) minimizes $\|g_A - g_B\|_2^2$; Fish (Shi et al., 2021) increases $g_A \cdot g_B$; AND-mask (Parascandolo et al., 2021) and others (Mansilla et al., 2021; Shahtalebi et al., 2021) update weights only when $g_A$ and $g_B$ point in the same direction. Along with the increased computation cost, the main limitation of these gradient-based methods is the per-domain batch averaging of gradients, which removes more granular statistics; in particular, this averaging removes the information from pairwise interactions between gradients from samples in the same domain. In opposition, our new regularization for OOD generalization keeps extra information from individual gradients and matches across domains the domain-level gradient variances. In a nutshell, Fishr is similar to the covariance-based CORAL (Sun et al., 2016; Sun & Saenko, 2016) but in the gradient space rather than in the feature space.
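To make the contrast concrete: Fishr matches the element-wise variances of per-sample gradients across domains, rather than the averaged gradients $g_e$. Below is a toy numpy sketch for a linear logistic classifier, where per-sample gradients are available in closed form; the paper's actual implementation extracts per-sample gradients of the deep classifier head with BackPACK, and the exact penalty form here (squared distance of each domain's variance to the mean variance) is our simplification, not the paper's code:

```python
import numpy as np

def per_sample_grads(w, x, y):
    """Per-sample gradients of the logistic loss for a linear classifier.
    x: (n, d), y: (n,) in {0, 1}; returns an (n, d) matrix of gradients."""
    p = 1.0 / (1.0 + np.exp(-x @ w))  # predicted probabilities
    return (p - y)[:, None] * x       # grad of -log-likelihood, per sample

def fishr_penalty(w, domains):
    """Match element-wise gradient variances across domains:
    sum_e ||Var(G_e) - mean_e' Var(G_e')||_2^2, with domains = [(x_e, y_e), ...]."""
    variances = [per_sample_grads(w, x, y).var(axis=0) for x, y in domains]
    mean_var = np.mean(variances, axis=0)
    return float(sum(np.sum((v - mean_var) ** 2) for v in variances))
```

Note that two domains with identical per-sample gradient distributions incur zero penalty even before the averaged gradients are compared, whereas rescaling one domain's inputs changes its gradient variance and is penalized: exactly the second-moment information that per-domain batch averaging discards.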
This paper introduces a simple technique for out-of-distribution generalization, called Fishr. Intuitively, given the success of CORAL, Fishr enforces consistency by regularizing the element-wise variances of the gradients to match across domains. It is efficiently implementable using the existing BackPACK package. Several explanations are provided for Fishr. The paper concludes by applying Fishr to standard OOD benchmarks, showing superior performance on several different problems.
SP:633112bc888148bb86867601dd21c422e55e31a5
Fishr: Invariant Gradient Variances for Out-of-distribution Generalization
1 INTRODUCTION . The success of deep neural networks in supervised learning ( Krizhevsky et al. , 2012 ) relies on the crucial assumption that the train and test data distributions are identical . In particular , the tendency of networks to rely on simple features ( Kalimeris et al. , 2019 ; Valle-Perez et al. , 2019 ; Geirhos et al. , 2020 ) is generally a desirable behavior reflecting Occam ’ s razor . However , in case of distribution shift , this simplicity bias deteriorates performance since more complex features are needed ( Tenenbaum , 2018 ; Shah et al. , 2020 ) . For example , in the fight against Covid-19 , most of the deep learning methods developed to detect coronavirus from chest scans were shown useless for clinical use ( DeGrave et al. , 2021 ; Roberts et al. , 2021 ) : indeed , networks exploited simple bias in the training datasets such as patients ’ age or body position rather than ‘ truly ’ analyzing medical pathologies . To better generalize under distribution shifts , most works such as Blanchard et al . ( 2011 ) or Muandet et al . ( 2013 ) assume that the training data is divided into different training domains in which there is a constant underlying causal mechanism ( Peters et al. , 2016 ) . To remove the domain-dependent explanations , different invariance criteria across those training domains have been proposed . ( Ganin et al. , 2016 ; Sun et al. , 2016 ; Sun & Saenko , 2016 ) enforce similar feature distributions , others ( Arjovsky et al. , 2019 ; Krueger et al. , 2021 ) force the classifier to be simultaneously optimal across all domains . Yet , despite the popularity of this research topic , none of these methods perform significantly better than the classical Empirical Risk Minimization ( ERM ) when applied with controlled model selection and restricted hyperparameter search ( Gulrajani & Lopez-Paz , 2021 ; Ye et al. , 2021 ) . These failures motivate the need for new ideas . 
To foster the emergence of a shared mechanism with consistent generalization properties , our intuition is that learning should progress consistently and similarly across domains . Besides , the learning procedure of deep neural networks is dictated by the distribution of the gradients with respect to the network weights ( Yin et al. , 2018 ; Sankararaman et al. , 2020 ) — usually backpropagated in the network during gradient descent . Thus , we seek distributional invariance across domains in the gradient space : domain-level gradients should be similar , not only in average direction , but most importantly in statistics such as covariance and dispersion . In this paper , we propose the Fishr regularization for out-of-distribution generalization in classification — summarized in Fig . 1 . We match the domain-level gradient variances , i.e. , the second moment of the gradient distributions . In contrast , previous gradient-based works such as Fish ( Shi et al. , 2021 ) only match the domain-level gradients means , i.e. , the first moment . Moreover , our strategy is also motivated by the close relations between the gradient variance , the Fisher Information ( Fisher , 1922 ) and the Hessian . This explains the name of our work , Fishr , using gradients as in Fish and related to the Fisher Information Matrix . Notably , we will study how Fishr forces the model to have similar domain-level Hessians and promotes consistent explanations — by generalizing the inconsistency formalism introduced in Parascandolo et al . ( 2021 ) . To reduce the computational cost , we justify an approximation that only considers the gradients in the classifier . This is simple to implement with the BackPACK ( Dangel et al. , 2020 ) package . We summarize our contributions as follows : • We introduce the Fishr regularization that brings closer the domain-level gradient variances . 
• Based on the relation between the gradient covariance, the Fisher Information and the Hessian, we show that Fishr matches domain-level Hessians and improves generalization by reducing inconsistencies across domains. • We justify a simple and scalable implementation. Empirically, we first validate that Fishr tackles distribution shifts on the synthetic Colored MNIST (Arjovsky et al., 2019). Then, we show that Fishr performs best on the DomainBed benchmark (Gulrajani & Lopez-Paz, 2021) with the 'Oracle' model selection method and third with the 'Training' model selection when compared with state-of-the-art counterparts. Critically, Fishr is the only method to perform better (on VLCS, OfficeHome, TerraIncognita and DomainNet) or similarly (on PACS) than ERM with both selection methods on all 'real' datasets. 2 CONTEXT AND RELATED WORK. We first describe our task and introduce the notation used throughout the paper. Then we review important related works to situate Fishr within a rich literature. Problem definition and notations. We study out-of-distribution (OOD) generalization for classification. Our model is a deep neural network (DNN) $f_\theta$ (parametrized by $\theta$) made of a deep feature extractor $\Phi_\phi$ on top of which we plug a dense linear classifier $w_\omega$: $f_\theta = w_\omega \circ \Phi_\phi$ and $\theta = (\phi, \omega)$. In training, we have access to different domains $E$: for each domain $e \in E$, the dataset $D_e = \{(x_e^i, y_e^i)\}_{i=1}^{n_e}$ contains $n_e$ i.i.d. (input, label) samples drawn from a domain-dependent probability distribution. Combined together, the datasets $\{D_e\}_{e \in E}$ are of size $n = \sum_{e \in E} n_e$. Our goal is to learn weights $\theta$ such that $f_\theta$ predicts well on a new test domain, unseen in training. As described in Koh et al. (2020) and Ye et al.
(2021), the most common distribution shifts are diversity shifts, where the training and test distributions comprise data from related but distinct domains, and correlation shifts, where the distribution of the covariates at test time differs from the one during training. To generalize well despite these distribution shifts, $f_\theta$ should ideally capture an invariant mechanism across training domains. Following standard notations, $\|M\|_F$ denotes the Frobenius norm of matrix $M$; $\|v\|_2$ denotes the Euclidean norm of vector $v$; $\mathbf{1}$ is a column vector with all elements equal to 1. The standard Empirical Risk Minimization (ERM) (Vapnik, 1999) framework simply minimizes the average empirical risk over all training domains, i.e., $\frac{1}{|E|} \sum_{e \in E} R_e(\theta)$ where $R_e(\theta) = \frac{1}{n_e} \sum_{i=1}^{n_e} \ell(f_\theta(x_e^i), y_e^i)$ and $\ell$ is the loss, usually the negative log-likelihood. Many approaches try to exploit some external source of knowledge (Xie et al., 2021), in particular the domain information. As a side note, these partitions may be inferred if not provided (Creager et al., 2021). Some works explore data augmentations to mix samples from different domains (Wang et al., 2020; Wu et al., 2020), some re-weight the training samples to favor underrepresented groups (Sagawa et al., 2020a;b; Zhang et al., 2021) and others include domain-dependent weights (Ding & Fu, 2017; Mancini et al., 2018). Yet, most recent works promote invariance via a regularization criterion and differ only in the choice of the statistics to be matched across training domains. They can be categorized into three groups: these methods enforce agreement either (1) in features, (2) in predictors or (3) in gradients. First, some approaches aim at extracting domain-invariant features and were extensively studied for unsupervised domain adaptation. The features are usually aligned with adversarial methods (Ganin et al., 2016; Gong et al., 2016; Li et al.
, 2018b;c) or with kernel methods (Muandet et al., 2013; Long et al., 2014). Yet, the simple covariance matching in CORAL (Sun et al., 2016; Sun & Saenko, 2016) performs best on various tasks for OOD generalization (Gulrajani & Lopez-Paz, 2021). With $Z_e^{ij}$ the $j$-th dimension of the features extracted by $\Phi_\phi$ for the $i$-th example $x_e^i$ of domain $e \in E = \{A, B\}$, CORAL minimizes $\|\mathrm{Cov}(Z_A) - \mathrm{Cov}(Z_B)\|_F^2$, where $\mathrm{Cov}(Z_e) = \frac{1}{n_e - 1}\big(Z_e^\top Z_e - \frac{1}{n_e}(\mathbf{1}^\top Z_e)^\top(\mathbf{1}^\top Z_e)\big)$ is the feature covariance matrix. CORAL is more powerful than mere feature mean matching $\big\|\frac{1}{n_A}\mathbf{1}^\top Z_A - \frac{1}{n_B}\mathbf{1}^\top Z_B\big\|_2^2$ as in Deep Domain Confusion (DDC) (Tzeng et al., 2014). Yet, Johansson et al. (2019) and Zhao et al. (2019) show that these approaches are insufficient to guarantee good generalization. Motivated by arguments from causality (Pearl, 2009) and the idea that statistical dependencies are epiphenomena of an underlying causal structure, Invariant Risk Minimization (IRM) (Arjovsky et al., 2019) argues that the predictor should be invariant (Peters et al., 2016; Rojas-Carulla et al., 2018), i.e., simultaneously optimal across all domains. Among many suggested improvements (Chang et al., 2020; Idnani & Kao, 2020; Teney et al., 2020; Ahmed et al., 2021), Risk Extrapolation (V-REx) (Krueger et al., 2021) argues that training risks from different domains should be similar and thus penalizes $|R_A - R_B|^2$ when $E = \{A, B\}$. These ideas have also been applied in semi-supervised learning (Li et al., 2021). Yet, recent works point out pitfalls of IRM (Javed et al., 2020; Guo et al., 2021; Kamath et al., 2021): it provably fails with non-linear data (Rosenfeld et al., 2021) and does not improve over ERM when hyperparameter selection is restricted (Koh et al., 2020; Gulrajani & Lopez-Paz, 2021; Ye et al., 2021). A third and most recent line of work promotes agreement between gradients with respect to $\theta$.
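To make the CORAL penalty above concrete, here is a minimal NumPy sketch (the feature matrices are random stand-ins, and the function names are ours rather than from any CORAL codebase):

```python
import numpy as np

def feature_cov(Z):
    """Cov(Z) = (Z^T Z - (1^T Z)^T (1^T Z) / n) / (n - 1), per the formula above."""
    n = Z.shape[0]
    col_sums = Z.sum(axis=0, keepdims=True)  # 1^T Z, shape (1, d)
    return (Z.T @ Z - col_sums.T @ col_sums / n) / (n - 1)

def coral_penalty(Z_A, Z_B):
    """Squared Frobenius distance between the two domain feature covariances."""
    diff = feature_cov(Z_A) - feature_cov(Z_B)
    return float(np.sum(diff ** 2))

rng = np.random.default_rng(0)
Z_A = rng.normal(size=(40, 8))                 # 40 examples, 8 features (domain A)
Z_B = 3.0 * rng.normal(size=(60, 8))           # domain B, deliberately mismatched
penalty_same = coral_penalty(Z_A, Z_A.copy())  # identical features give zero penalty
penalty_diff = coral_penalty(Z_A, Z_B)         # mismatched scales give a large one
```

Note that `feature_cov` agrees with the unbiased sample covariance (e.g. `np.cov(Z, rowvar=False)`), which is a quick way to sanity-check the reconstructed formula.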
Gradient agreements help batches from different tasks to cooperate, and have previously been employed in multitask (Du et al., 2018; Yu et al., 2020), continual (Lopez-Paz & Ranzato, 2017), meta (Finn et al., 2017; Zhang et al., 2020) and reinforcement (Zhang et al., 2019) learning. In OOD generalization, Koyama & Yamaguchi (2020); Parascandolo et al. (2021); Shi et al. (2021) try to find minima in the loss landscape that are shared across domains. Specifically, these works tackle the domain-level expected gradients: $g_e = \mathbb{E}_{(x_e, y_e) \sim D_e} \nabla_\theta \ell(f_\theta(x_e), y_e)$. (1) When $E = \{A, B\}$, IGA (Koyama & Yamaguchi, 2020) minimizes $\|g_A - g_B\|_2^2$; Fish (Shi et al., 2021) increases $g_A \cdot g_B$; AND-mask (Parascandolo et al., 2021) and others (Mansilla et al., 2021; Shahtalebi et al., 2021) update weights only when $g_A$ and $g_B$ point in the same direction. Along with the increased computation cost, the main limitation of these gradient-based methods is the per-domain batch averaging of gradients, which removes more granular statistics; in particular, this averaging discards the information from pairwise interactions between gradients of samples in the same domain. In contrast, our new regularization for OOD generalization keeps extra information from individual gradients and matches the domain-level gradient variances across domains. In a nutshell, Fishr is similar to the covariance-based CORAL (Sun et al., 2016; Sun & Saenko, 2016) but operates in the gradient space rather than in the feature space.
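As a toy illustration of why second moments carry information that batch-averaged gradients discard, consider per-sample gradients stacked as rows of a matrix. The arrays below are random stand-ins; in the actual method the rows would come from backpropagation through the classifier head:

```python
import numpy as np

def grad_mean(G):
    """First moment: the batch-averaged gradient g_e that IGA/Fish work with."""
    return G.mean(axis=0)

def grad_var(G):
    """Second moment: per-coordinate gradient variance, the statistic Fishr matches."""
    return G.var(axis=0)

def variance_matching_penalty(G_A, G_B):
    """Squared distance between the two domain-level gradient variances."""
    return float(np.sum((grad_var(G_A) - grad_var(G_B)) ** 2))

rng = np.random.default_rng(0)
# Two domains whose per-sample gradients share the same mean
# but differ strongly in dispersion:
G_A = rng.normal(loc=0.5, scale=1.0, size=(200, 10))
G_B = rng.normal(loc=0.5, scale=3.0, size=(200, 10))

mean_gap = float(np.sum((grad_mean(G_A) - grad_mean(G_B)) ** 2))
var_gap = variance_matching_penalty(G_A, G_B)
# mean_gap is small (the domains look alike to first-moment methods),
# while var_gap is large: the dispersion mismatch only shows up
# in the second moment.
```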
This paper proposes a new regularization method for OOD generalization. The central idea is to align the covariances of gradients between different domains. Specifically,
- The authors first categorize prior work in this area into three groups:
  1. Methods that match **features** across domains, such as adversarial training approaches and most notably CORAL, which considers the covariance of the **features**.
  2. Methods that require **the classifier on top** to be invariant, such as IRM.
  3. Methods that align **gradients**, such as IGA.
- The authors then propose Fishr, which falls into the third category. It aligns gradients between domains, but with the important difference that it aligns the *covariance* of the gradients rather than their *average*.
- The authors establish connections between the gradient covariance, the Fisher Information Matrix, and the Hessian, which describes the loss landscape.
- As a practical approach, the authors propose to match the **variances** instead of the full **covariances**, and only for the penultimate linear layer.
- Finally, the authors evaluate their approach on standard OOD benchmarks in two different setups:
  1. When the hyper-parameters are tuned using the test set.
  2. When the hyper-parameters are tuned using the training set.
SP:633112bc888148bb86867601dd21c422e55e31a5
Fishr: Invariant Gradient Variances for Out-of-distribution Generalization
This paper proposes a regularization for the out-of-distribution generalization problem, which aims to align the gradients of different domains. Extensive experiments are performed to validate the effectiveness of the proposed method, and some intuitions are given to explain the Fishr method.
ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity
1 INTRODUCTION. Despite being a relatively new subfield of machine learning (ML), Federated Learning (FL) (McMahan et al., 2017; Reddi et al., 2021; Horvath et al., 2021) has become an indispensable tool to enable privacy-preserving collaborative learning, as well as to deliver personalised models tailored to the end-user's local data and context (Arivazhagan et al., 2019; Hilmkil et al., 2021; Cheng et al., 2021). Examples include next-word prediction (Hard et al., 2018), physical activity detection (Doherty et al., 2017) and keyword spotting (Hard et al., 2020), among others. Unlike standard centralised training, which normally takes place in the Cloud and makes use of powerful hardware (Hazelwood et al., 2018), FL is envisioned to run on commodity devices such as smartphones or IoT devices, often running on batteries, which are orders of magnitude more restricted in terms of compute, memory and power consumption. This triplet of factors drastically limits the complexity of the ML models that can be trained on-device in a federated manner, capping their usefulness for the aforementioned applications as a result. In order to adjust the memory and compute footprints of complex ML models to the FL setting, the research community has presented a number of approaches, including: the use of distillation (Hinton et al., 2015) to enable server-side aggregation of heterogeneous model architectures (e.g. chosen based on the compute capabilities of each device) that collaboratively train a single global model (Lin et al., 2020; Zhu et al., 2021); a group knowledge transfer algorithm (He et al., 2020); federated dropout (Caldas et al., 2019), by which clients perform local training on a sub-model of the global model, which translates into lower overall communication costs and better support for heterogeneous pools of clients regardless of their compute capabilities (Horvath et al.
, 2021); and, more generally, better aggregation strategies that enable faster convergence (Li et al., 2018; Wang et al., 2020; Reddi et al., 2021), reducing overall device utilization (e.g. fewer local epochs) and the number of communication rounds. Other optimization techniques such as quantization and sparsity have been used in the context of FL, but mostly as a way to reduce communication costs (Liu et al., 2021; Amiri et al., 2020; Shahid et al., 2021), not to accelerate on-device training. The use of sparse operations (e.g. convolutions) at training time has recently been shown to be an effective technique to accelerate training in centralised settings (Sun et al., 2017; Goli & Aamodt, 2020; Raihan & Aamodt, 2020). The resulting models are as good as, or close to, their densely-trained counterparts despite reducing their FLOPs budget by up to 90%, resulting in an overall training speedup of up to 3.3×. Acceleration is achieved by performing sparse convolutions during the forward and/or backward pass, which requires at least one of the operands (i.e. inputs, weights, gradients) to be sufficiently sparse, as well as software and hardware support for such operations. However, it is unclear how the different FL-specific challenges (i.e. data imbalance, stateless clients, periodic aggregation) restrict the quality of the global model. This work considers the challenges and opportunities of inducing high levels of sparsity to accelerate on-device training for FL workloads, and provides the following contributions: • The first framework for Federated Learning that leverages sparsity as a mechanism to accelerate on-device training by inducing up to 95% sparse weights and activations. This work considers three popular datasets: CIFAR-10 and FEMNIST for image classification, and SpeechCommands for audio classification.
• A study on the unique aspects that arise when introducing sparsity at training time in FL: the degree of overlap between non-zero values decreases with layer depth, and the locations of zero-valued weights in the global model remain constant throughout most of the training rounds. Our discussion sets the foundations for future research in this area. • A technique that alleviates the accuracy degradation observed when applying a state-of-the-art off-the-shelf sparsification method to the FL domain. ZeroFL achieves +2.1% and +3.7% higher accuracy than baselines when inducing 90% and 95% sparsity, respectively. In addition, ZeroFL also leverages sparsity when transferring the local models to the central server, reducing communication costs by 3.0× while still outperforming competitive baselines. 2 RELATED WORK. Pruning neural networks involves discarding parts of the model (e.g. individual weights or entire channels) that are irrelevant for solving the task at hand. This procedure generally produces a lightweight model representation more suitable for deployment on constrained devices with limited memory and compute budgets. In this section we detail how different forms of pruning or sparsification have been used to accelerate inference and, to a lesser extent, training. We also discuss how they have been used to reduce communication costs in distributed and federated learning. Unstructured pruning. Frameworks relying on unstructured pruning (Han et al., 2015a;b; Guo et al., 2016; Molchanov et al., 2017) often achieve higher compression ratios at the expense of inference stages being, in practice, as compute intensive as those of the original model. This is because, assuming pruning has been homogeneously applied to the model, sparse operations can only be efficiently accelerated on supported hardware, such as modern GPUs (Wang, 2020; Zachariadis et al., 2020; Hong et al., 2018) or custom accelerators (Zhang et al.
, 2016; Lu et al., 2019; Srivastava et al., 2020), and only for a sufficiently high sparsity ratio. The lower the ratio, the less likely sparse operations are to translate into measurable speedups. In the case of CPUs, speedups from sparse operations where one operand is unstructurally sparse are often only feasible at sparsity ratios of 90% or higher (Hong et al., 2019; Wang, 2021). Structured pruning. Methods that apply structured pruning (He et al., 2018; 2017; Jian-Hao Luo & Lin, 2017; Yu et al., 2018; Molchanov et al., 2019; Wang et al., 2017), on the other hand, trade compression for acceleration potential. These approaches modify the underlying computational graph by discarding entire channels, resulting in smaller but still dense convolution operations, or by removing nodes altogether if an entire layer is set to be removed by the chosen pruning strategy. As a result, structured pruning frameworks are the preferred option when aiming to accelerate inference on general-purpose hardware. A body of work spanning structured and unstructured pruning methods attempts to induce structure in otherwise randomly sparse networks (S. Gray & Kingma, 2017; Ren et al., 2018; Wen et al., 2020; Verelst & Tuytelaars, 2020). This is often referred to as block sparsity and consists of subdividing the matrix representations of inputs or weights into tiles (e.g. 16×16 tiles), and restricting training in such a way that some tiles contain only zeros while the rest remain dense and real-valued. Matrix-matrix multiplications following such a pattern can be accelerated at lower global sparsity ratios compared to those following unstructured sparsity (Hoefler et al., 2021). Other forms of constraining how sparsity occurs have been proposed, for example a cache-aware reordering of the sparsity pattern of the weights (Elsen et al., 2020).
This can be used to ensure high cache reuse on Cortex-A mobile CPUs, resulting in a 2.4× acceleration of MobileNets. Sparse training. The majority of works making use of sparsity are envisioned for either model compression or inference acceleration. Only recently have sparse operations been considered to accelerate training. The work of Sun et al. (2017) presented a mechanism to induce high levels of sparsity in the gradients during backpropagation and demonstrated large speedups when training MLP-only models. More recently, Goli & Aamodt (2020) build upon the observation that gradients from consecutive batches are near-identical. They present a framework that reuses a random sample of previously computed gradients and their thresholded difference w.r.t. the gradients from the current batch, resulting in a sparse tensor. Their framework accelerates the training of CNNs by performing sparse convolutions during the backward pass at the cost of pre-computing partial gradients during the forward pass. Closer to our work is SWAT (Raihan & Aamodt, 2020), a framework that relies on sparsified weights during inference and sparsified weights and activations for backward propagation. Compared to the previous two frameworks, SWAT achieves superior performance in terms of training acceleration and accuracy retention at high sparsity ratios. Compression on communication. Konečnỳ et al. (2016) proposed restricting the updates of weight matrices to have a pre-specified structure, either random or low-rank, in order to reduce the total communication cost. ATOMO (Wang et al., 2018) introduced a generalised gradient decomposition and sparsification technique aiming to reduce the gradient sizes communicated upstream. Han et al. (2020) proposed a different form of aggregation in the server which, instead of aggregating model weights, aggregates the sparsified gradients after every local update step.
However, since the method requires aggregating sparsified gradients after every step, it cannot benefit from multiple local updates. Hence it might require extra communication rounds to reach the target performance. PruneFL (Jiang et al., 2019) reduced both computation and communication overhead to minimize the overall training time by including an initial pruning at one selected client and further pruning as part of the FL process. Nevertheless, none of the aforementioned works explored the challenges of extending state-of-the-art sparsification methods to federated learning as a way to accelerate on-device training. ZeroFL, a framework specifically tailored to the FL setting, achieves better accuracy retention than existing methods that remain exclusive to the centralised training paradigm. 3 BACKGROUND. This section describes the state-of-the-art sparse training method SWAT (Raihan & Aamodt, 2020), the way we adapt it to FL contexts, and the related challenges that need to be addressed. 3.1 SPARSE WEIGHTS AND ACTIVATIONS TRAINING. The SWAT framework embodies two strategies in the training process. During each forward pass, the weights are partitioned into active and non-active weights by a top-K (in magnitude) operator, and only the active weights are used. For the $l$-th layer in the model, the layer maps the input activations $a_{l-1}$ onto feature maps $o_l$ using a function $f_l$: $o_l = f_l(a_{l-1}, w_l)$. In this work we consider $f_l$ to be the 3×3 convolution in the $l$-th layer. In the backward pass, the gradient of the input activations ($\nabla a_{l-1}$) and the gradient of the weights ($\nabla w_l$) are computed by functions $G_l$ and $H_l$, as shown below: $\nabla a_{l-1} = G_l(\nabla a_l, w_l)$ (1) and $\nabla w_l = H_l(\nabla a_l, a_{l-1})$ (2). Then, in the backward pass, the retained layer inputs $a_{l-1}$ are also partitioned into active and non-active using the same top-K procedure. This results in full gradients and active weights being used in Eq.
1 , while full gradients and active activations are used in Eq . 2 . It is worth noticing that even weights and activations and sparsified in the forward and backward pass , the gradients generated through the training process are dense . Therefore , the resulting model is a dense . The compute cost of updating weights wl given a dense5wl tensor is negligible compare to the savings due to performing the underlying convolutions in Eq.1 & 2 , as this is essentially a weighted sum .
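The top-K partitioning and the sparse forward/backward computation described above can be illustrated with a minimal NumPy sketch. Note this is an assumption-laden illustration, not SWAT's actual implementation: the layer is simplified to a dense matrix multiply instead of a 3×3 convolution, and the shapes and `sparsity` value are arbitrary.

```python
import numpy as np

def top_k_mask(x, sparsity):
    """Binary mask keeping the top-(1 - sparsity) fraction of |x| (the active entries)."""
    k = max(1, int(round((1.0 - sparsity) * x.size)))
    threshold = np.sort(np.abs(x), axis=None)[-k]
    return (np.abs(x) >= threshold).astype(x.dtype)

sparsity = 0.9
w = np.random.randn(64, 64)        # dense layer weights w_l (out x in)
a_prev = np.random.randn(32, 64)   # input activations a_{l-1} (batch x in)

# Forward pass: only the active (top-K) weights participate.
w_active = w * top_k_mask(w, sparsity)
o = a_prev @ w_active.T            # o_l = f_l(a_{l-1}, w_l)

# Backward pass: full upstream gradients throughout.
grad_o = np.random.randn(*o.shape)         # dense gradient w.r.t. o_l
grad_a_prev = grad_o @ w_active            # Eq. 1: active weights
a_active = a_prev * top_k_mask(a_prev, sparsity)
grad_w = grad_o.T @ a_active               # Eq. 2: active activations

# grad_w itself is dense, so the updated model stays dense.
```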
The paper investigates sparse training to accelerate on-device federated learning. It begins by taking an off-the-shelf sparse training method (SWAT) and analyzing its limitations. It then proposes three strategies to improve upon SWAT. Experiments conducted on the CIFAR-10 dataset show some improvements compared to SWAT.
SP:3dc0bafc12ceb8a9387173cb7640049e4f9c62ce
ZeroFL: Efficient On-Device Training for Federated Learning with Local Sparsity
1 INTRODUCTION. Despite being a relatively new subfield of machine learning (ML), Federated Learning (FL) (McMahan et al., 2017; Reddi et al., 2021; Horvath et al., 2021) has become an indispensable tool to enable privacy-preserving collaborative learning, as well as to deliver personalised models tailored to the end-user's local data and context (Arivazhagan et al., 2019; Hilmkil et al., 2021; Cheng et al., 2021): for example, next-word prediction (Hard et al., 2018), physical activity detection (Doherty et al., 2017), and keyword spotting (Hard et al., 2020), among others. Unlike standard centralised training, which normally takes place in the Cloud and makes use of powerful hardware (Hazelwood et al., 2018), FL is envisioned to run on commodity devices such as smartphones or IoT devices often running on batteries, which are orders of magnitude more restricted in terms of compute, memory and power consumption. This triplet of factors drastically limits the complexity of the ML models that can be trained on-device in a federated manner, capping their usefulness for the aforementioned applications as a result. In order to adjust the memory and compute footprints of complex ML models to the FL setting, the research community has presented a number of approaches, including: the use of distillation (Hinton et al., 2015) to enable server-side aggregation of heterogeneous model architectures (e.g. based on the compute capabilities of each device) that collaboratively train a single global model (Lin et al., 2020; Zhu et al., 2021); group knowledge transfer algorithms (He et al., 2020); federated dropout, by which clients perform local training on a sub-model of the global model (Caldas et al., 2019), which translates into lower overall communication costs and enables better support for heterogeneous pools of clients regardless of their compute capabilities (Horvath et al.
, 2021); and, more generally, better aggregation strategies that enable faster convergence (Li et al., 2018; Wang et al., 2020; Reddi et al., 2021), thereby reducing overall device utilization (e.g. fewer local epochs) and the number of communication rounds. Other optimization techniques such as quantization and sparsity have been used in the context of FL, but mostly as a way to reduce communication costs (Liu et al., 2021; Amiri et al., 2020; Shahid et al., 2021) rather than to accelerate on-device training. The use of sparse operations (e.g. convolutions) at training time has recently been shown to be an effective technique to accelerate training in centralised settings (Sun et al., 2017; Goli & Aamodt, 2020; Raihan & Aamodt, 2020). The resulting models are as good as, or close to, their densely-trained counterparts despite reducing their FLOPs budget by up to 90%, resulting in an overall training speedup of up to 3.3×. Acceleration is achieved by performing sparse convolutions during the forward and/or backward pass, which requires at least one of the operands (i.e. inputs, weights, gradients) to be sufficiently sparse, as well as software and hardware support for such operations. However, it is unclear how the different FL-specific challenges (i.e. data imbalance, stateless clients, periodic aggregation) will restrict the quality of the global model. This work considers the challenges and opportunities of inducing high levels of sparsity to accelerate on-device training for FL workloads, and provides the following contributions: • The first framework for Federated Learning that leverages sparsity as a mechanism to accelerate on-device training by inducing up to 95% sparse weights and activations. This work considers three popular datasets: CIFAR-10 and FEMNIST for image classification, and SpeechCommands for audio classification.
• A study on the unique aspects that arise when introducing sparsity at training time in FL: the degree of overlap between non-zero values decreases with layer depth, and the locations of zero-valued weights in the global model remain constant throughout most of the training rounds. Our discussion sets the foundations for future research in this area. • A technique that alleviates the accuracy degradation observed when applying a state-of-the-art off-the-shelf sparsification method to the FL domain. ZeroFL achieves +2.1% and +3.7% higher accuracy than baselines when inducing 90% and 95% sparsity, respectively. In addition, ZeroFL also leverages sparsity when transferring the local models to the central server, reducing communication costs by 3.0× while still outperforming competitive baselines. 2 RELATED WORK. Pruning neural networks involves discarding parts of the model (e.g. individual weights or entire channels) that are irrelevant for solving the task at hand. This procedure generally produces a lightweight model representation more suitable for deployment on constrained devices with limited memory and compute budgets. In this section we detail how different forms of pruning or sparsification have been used to accelerate inference and, to a lesser extent, training. We also discuss how they have been used to reduce communication costs in distributed and federated learning. Unstructured pruning. Frameworks relying on unstructured pruning (Han et al., 2015a;b; Guo et al., 2016; Molchanov et al., 2017) often achieve higher compression ratios at the expense of inference stages that are, in practice, as compute intensive as those of the original model. This is because, assuming pruning has been applied homogeneously across the model, sparse operations can only be efficiently accelerated on supported hardware, such as modern GPUs (Wang, 2020; Zachariadis et al., 2020; Hong et al., 2018) or custom accelerators (Zhang et al.
, 2016; Lu et al., 2019; Srivastava et al., 2020), for a sufficiently high sparsity ratio. The lower the ratio, the less likely sparse operations are to translate into measurable speedups. In the case of CPUs, speedups from sparse operations where one operand is unstructurally sparse are often only feasible at sparsity ratios of 90% or higher (Hong et al., 2019; Wang, 2021). Structured pruning. Methods that apply structured pruning (He et al., 2018; 2017; Jian-Hao Luo & Lin, 2017; Yu et al., 2018; Molchanov et al., 2019; Wang et al., 2017), on the other hand, trade compression for acceleration potential. These approaches modify the underlying computational graph by discarding entire channels, resulting in smaller but still dense convolution operations, or by removing nodes altogether if an entire layer is set to be removed by the chosen pruning strategy. As a result, structured pruning frameworks are the preferred option when aiming to accelerate inference on general-purpose hardware. A body of work across structured and unstructured pruning methods attempts to induce structure in otherwise randomly sparse networks (S. Gray & Kingma, 2017; Ren et al., 2018; Wen et al., 2020; Verelst & Tuytelaars, 2020). This is often referred to as block sparsity and consists in subdividing the matrix representations of inputs or weights into tiles (e.g. 16×16 tiles) and restricting training in such a way that some tiles contain only zeros while the rest remain dense and real-valued. Matrix-matrix multiplications following such a pattern can be accelerated at lower global sparsity ratios than those following unstructured sparsity (Hoefler et al., 2021). Other forms of constraining how sparsity occurs have been proposed, for example a cache-aware reordering of the sparsity pattern of the weights (Elsen et al., 2020).
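The block-sparsity idea described above can be sketched as follows: score each 16×16 tile of a weight matrix and zero out the lowest-scoring tiles, leaving the survivors dense. The tile size, L1 scoring rule and sparsity ratio here are illustrative assumptions, not a reproduction of any specific published method.

```python
import numpy as np

def block_sparsify(w, tile=16, sparsity=0.75):
    """Zero out the `sparsity` fraction of (tile x tile) blocks with the
    smallest L1 norm; surviving tiles stay dense and real-valued."""
    rows, cols = w.shape
    assert rows % tile == 0 and cols % tile == 0
    # View the matrix as a grid of tiles: index (i, a, j, b) -> w[i*tile+a, j*tile+b].
    tiles = w.reshape(rows // tile, tile, cols // tile, tile)
    scores = np.abs(tiles).sum(axis=(1, 3))          # L1 norm per tile
    n_keep = max(1, int(round((1 - sparsity) * scores.size)))
    cutoff = np.sort(scores, axis=None)[-n_keep]
    mask = (scores >= cutoff)[:, None, :, None]      # broadcast over tile grid
    return (tiles * mask).reshape(rows, cols)

w = np.random.randn(64, 64)
# 64/16 = 4 tiles per side -> 16 tiles total; at 75% sparsity, 4 stay dense.
w_sparse = block_sparsify(w, tile=16, sparsity=0.75)
```

A matrix with this pattern can be multiplied tile-by-tile, skipping the all-zero blocks, which is what makes block sparsity profitable at lower global sparsity ratios than unstructured sparsity.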
The paper studies the problem of adding sparsity to local training in federated learning. Under the approach each node uses only the top-K weights for computations in the forward and backward pass of training (and only top-K activations for computations in the backward pass). The authors empirically show that adding sparsity in this way, while speeding up local training, significantly degrades the accuracy when directly used in federated learning. To mitigate this issue the authors propose to combine sparse training with sparse aggregation since their empirical study suggests that the accuracy drop may be due to dilution of weights that are active in only a few nodes. Experiments on CIFAR10 with non-IID splits using ResNet-18 as the model show some gain in accuracy along with a drop in communication cost when using the proposed schemes.
This work proposes ZeroFL, a method that allows for sparse neural network training at the edge along with reduced upload communication, both of which are important aspects in federated learning. ZeroFL essentially follows a prior work on sparse neural network training, namely SWAT, and adapts it appropriately for the federated setting. The SWAT method works by setting a target weight sparsity percentage (denoted by $sp$) which is enforced in both the forward and backward pass by a top-k operation on the weights. This allows for using sparse convolutions and thus offers training-time speed-ups. The authors note that it is important to use the same set of sparse weights for inference as well, in order not to affect performance. ZeroFL then applies this idea in the federated setting by letting each client perform SWAT-type training with a target sparsity of $sp$. Instead of the clients then communicating to the server the exact set of sparse weights obtained at the end of their local training procedure, ZeroFL proposes to add an extra "slack" $r_{mask}$ and communicate a $1 - sp + r_{mask}$ fraction of the weights (instead of $1 - sp$). The authors motivate this change via the results of an ablation study which showed significantly lower performance when directly applying SWAT to the federated setting. They argue that with the extra $r_{mask}$ weights, a "cleaner" training signal from the clients can be given, as these weights are not "corrupted" by the weights of the other clients when averaging at the server. The authors further propose three separate ways of updating the server model, i.e., 1) using the client top-k weights, 2) using the difference between the server and client top-k weights, and 3) using the top-k differences between the server and client weights. The authors experimentally validate the ZeroFL method on CIFAR-10 using 100 clients and both iid and non-iid splits.
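The upload scheme sketched above, keeping a $1 - sp + r_{mask}$ fraction of the weights by magnitude and shipping them as index/value pairs, might look roughly as follows. The function names and the exact packing format are hypothetical illustrations of the idea, not ZeroFL's implementation.

```python
import numpy as np

def pack_for_upload(w, sp=0.9, r_mask=0.05):
    """Keep the top (1 - sp + r_mask) fraction of weights by magnitude
    and return them as (indices, values) pairs for a compact upload."""
    keep = max(1, int(round((1.0 - sp + r_mask) * w.size)))
    flat = w.ravel()
    idx = np.argpartition(np.abs(flat), -keep)[-keep:]
    return idx.astype(np.int64), flat[idx]

def unpack_on_server(idx, vals, shape):
    """Reconstruct the sparse local model on the server (zeros elsewhere)."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

w_local = np.random.randn(32, 32)
idx, vals = pack_for_upload(w_local, sp=0.9, r_mask=0.05)
w_restored = unpack_on_server(idx, vals, w_local.shape)
# Upload carries ~15% of the entries (as index/value pairs) instead of the full tensor.
```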
Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
1 INTRODUCTION. Continual learning (CL) refers to the ability of a learning agent to continuously interact with a dynamic environment and process a stream of information, acquiring new knowledge while consolidating and retaining previously obtained knowledge (Parisi et al., 2019). This ability to continuously learn from a changing environment is a hallmark of intelligence and a critical missing component in our quest towards making our models truly intelligent. The major challenge in enabling CL in deep neural networks (DNNs) is that the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting, whereby the performance of the model on previously learned tasks drops drastically as it learns new tasks (McCloskey & Cohen, 1989). Several approaches have been proposed to address the issue of catastrophic forgetting in CL. These can be broadly categorized into regularization-based methods (Farajtabar et al., 2020; Kirkpatrick et al., 2017; Ritter et al., 2018; Zenke et al., 2017), which penalize changes in the network weights; network expansion-based methods (Rusu et al., 2016; Yoon et al., 2017), which dedicate a distinct set of network parameters to distinct tasks; and rehearsal-based methods (Chaudhry et al., 2018; Lopez-Paz & Ranzato, 2017), which maintain a memory buffer and replay samples from previous tasks. Amongst these, rehearsal-based methods have proven to be more effective on challenging CL tasks (Farquhar & Gal, 2018). However, an optimal approach for replaying memory samples and constraining the model update to efficiently consolidate knowledge remains an open question. In the brain, the ability to continually acquire, consolidate, and transfer knowledge over time is mediated by a rich set of neurophysiological processing principles (Parisi et al., 2019; Zenke et al.
, 2017) and multiple memory systems (Hassabis et al., 2017). In particular, the CLS theory (Kumaran et al., 2016) posits that efficient learning requires two complementary learning systems: the hippocampus exhibits short-term adaptation and rapid learning of episodic information, which is then gradually consolidated into the neocortex for slow learning of structured information. Furthermore, a recent study by Hayes et al. (2021) identified the elements of biological replay missing from the replay mechanisms employed in DNNs for CL. (∗Contributed equally.) They highlight that many existing approaches only model the prefrontal cortex directly and lack a fast learning network, which plays a critical role in enabling efficient CL in the brain. Inspired by these studies, we hypothesize that mimicking the slow and rapid adaptation of information, together with an efficient mechanism for incorporating both into the working memory, can enable better CL in DNNs. To this end, we propose a novel dual memory experience replay method based on the complementary learning systems theory in the brain, dubbed CLS-ER. In addition to a small episodic memory, our method builds long-term and short-term semantic memories which mimic the slow and rapid adaptation of information (Figure 1). As the network weights encode the learned representations of the tasks (Krishnan et al., 2019), the semantic memories are maintained by taking an exponential moving average of the working model's weights, consolidating information across tasks with varying time windows and frequencies. The semantic memories interact with the episodic memory to extract consolidated replay activation patterns and enforce a consistency loss on the update of the working model, so that new knowledge is acquired while aligning the decision boundary of the working model with the decision boundaries of the semantic memories.
This maintains a balance between the plasticity and stability of the model for effective knowledge consolidation. CLS-ER provides a general CL method that does not utilize task boundaries or make any strong assumption regarding the distribution of the data and tasks. We demonstrate the versatility and effectiveness of our method on a wide range of CL benchmark tasks as well as on more challenging scenarios which simulate the complexities of CL in the real world. 2 RELATED WORK . The base method of the rehearsal-based approach, Experience Replay (ER) (Riemer et al., 2018), combines the memory samples with the task samples in the training batch. Several techniques have since been employed on top of ER. Meta Experience Replay (MER) (Riemer et al., 2018) treats replay as a meta-learning problem, maximizing transfer from previous tasks while minimizing interference. iCARL (Rebuffi et al., 2017) uses the nearest average representation of past exemplars to classify in an incrementally learned representation space. Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) formulates optimization constraints on the exemplars in memory. Gradient-based Sample Selection (GSS) (Aljundi et al., 2019) aims for memory sample diversity in the gradient space and provides a greedy selection approach. Function Distance Regularization (FDR) (Benjamin et al., 2018) saves the network response at the task boundaries and adds a consistency loss on top of ER. Dark Experience Replay (DER++) relaxes this restriction and samples logits during the entire optimization trajectory. CLS has been used as a source of inspiration for dual memory learning systems in earlier works (French, 1999; Robins, 1993), but these have not been shown to scale to current computer vision tasks (Parisi et al., 2019). Recently, Rostami et al. (2019) utilize a generative model to couple sequential tasks in a latent embedding space. Kamra et al.
(2017) utilize two generative models in a dual memory architecture. However, they rely on task boundaries, and generative replay has its own set of challenges: it is difficult to learn a faithful distribution, and it performs sub-par compared to instance-based replay methods in challenging CL settings. Generally, the inspiration drawn from CLS theory in DNNs has been mostly limited to episodic memory, and mimicking the rapid and slow learning mechanisms is largely ignored (Hayes et al., 2021), which we aim to address. 3 METHOD . We first provide an overview of the CLS theory for the brain and how we aim to mimic it for DNNs, before introducing the main components of our method and the overall formulation. 3.1 COMPLEMENTARY LEARNING SYSTEM THEORY . The CLS theory posits that effective lifelong learning in the brain requires two complementary learning systems. The hippocampus rapidly encodes novel information as a short-term memory, which is subsequently used to transfer and consolidate knowledge in the neocortex; the neocortex gradually acquires a structured knowledge representation as long-term memory through experience replay. The interplay between the functionality of the hippocampus and the neocortex is crucial for concurrently learning efficient representations (for better generalization) and the specifics of instance-based episodic memory. 3.2 COMPLEMENTARY LEARNING SYSTEM BASED EXPERIENCE REPLAY . Inspired by the CLS theory, we propose a dual memory experience replay method, CLS-ER, which aims to mimic the interplay between fast and slow learning mechanisms to enable effective CL in DNNs. Our method maintains short-term and long-term semantic memories of the encountered tasks, which interact with the episodic memory for replaying the associated neural activities.
The working model is updated so that it acquires new knowledge while aligning its decision boundary with the semantic memories, enabling the consolidation of structured knowledge across the tasks. Figure 1 highlights the parallels between CLS theory and our method. Semantic Memories : Central to our method is the maintenance of two semantic memories which accumulate and consolidate information over long-term and short-term periods. As the acquired knowledge of the learned tasks is encoded in the weights of DNNs (Krishnan et al., 2019), we form our semantic memories by accumulating the knowledge encoded in the corresponding weights of the model as it sequentially learns different tasks. An efficient method for aggregating the weights of a model is provided by Mean Teacher (Tarvainen & Valpola, 2017), a knowledge distillation approach that uses an exponential moving average (EMA) of the student's weights during training as a teacher for semi-supervised learning. It can also be considered as forming a self-ensemble of the intermediate model states, which leads to better internal representations. We adapt the Mean Teacher approach to build our semantic memories, as it provides a computationally and memory-efficient method for accumulating knowledge over the tasks. As CL involves learning tasks sequentially, the model weights at each training step can be considered a student model specialized for a particular task. Therefore, averaging the weights during training can be considered as forming an ensemble of task-specific student models, which effectively aggregates information across the tasks and leads to smoother decision boundaries. CLS-ER builds long-term (stable model) and short-term (plastic model) semantic memories by maintaining two EMA-weighted models over the working model's weights.
The stable model is updated less frequently with a larger window size so that it retains more information from the earlier tasks while the plastic model is updated more frequently with a smaller window size so that it adapts faster to information from new tasks ( Figure 2 ) . Section D further demonstrates the benefits of employing two semantic memories instead of a single semantic memory . Episodic Memory : Replay of samples from the previous tasks stored in a small episodic memory is a common approach in CL that has proven to be effective in mitigating catastrophic forgetting . As we aim to position CLS-ER as a versatile general incremental learning method , we do not utilize the task boundaries or make any strong assumptions about the distribution of the tasks or samples . Therefore , to maintain a fixed episodic memory buffer , we employ reservoir sampling ( Vitter , 1985 ) which assigns equal probability to each sample in the stream for being represented in the buffer and randomly replaces the existing memory samples ( Algorithm 2 ) . It is a global distribution matching strategy that ensures that at any given time the distribution of samples in the buffer will approximately match the distribution of all the samples seen so far ( Isele & Cosgun , 2018 ) . Consolidation of Information : The key challenge in CL is the consolidation of new information with the previously acquired information . This requires an effective balance between the stability and plasticity of the model . Furthermore , the sharp change in decision boundary as a new task is learned makes the consolidation of information over tasks more challenging . CLS-ER tackles these challenges through a novel dual memory experience replay mechanism . 
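The reservoir-sampling buffer update described above can be sketched in a few lines of Python (a minimal illustrative sketch; the function and variable names are ours, not from the paper or its codebase):

```python
import random

def reservoir_update(buffer, num_seen, sample, capacity):
    """One step of reservoir sampling (Vitter, 1985).

    `num_seen` counts stream samples observed so far, including this one.
    Every stream element ends up in the buffer with equal probability
    capacity / num_seen, so the buffer approximates a uniform sample of
    the whole stream seen so far, with no task-boundary information.
    """
    if len(buffer) < capacity:
        buffer.append(sample)          # buffer not full yet: always keep
    else:
        j = random.randint(0, num_seen - 1)
        if j < capacity:               # replace an existing slot w.p. capacity/num_seen
            buffer[j] = sample
    return buffer

# Feed a stream of 100 samples through a capacity-5 buffer.
buffer, capacity = [], 5
for t in range(1, 101):
    buffer = reservoir_update(buffer, t, t - 1, capacity)
```

After the loop the buffer holds 5 samples drawn approximately uniformly from the 100 seen, which is the global distribution-matching property cited from Isele & Cosgun (2018).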
The long-term and short-term semantic memories interact with the episodic memory to extract the consolidated activations for the memory samples, which are then utilized to constrain the update of the working model so that new knowledge is acquired whilst the decision boundary remains aligned with the semantic memories. This prevents rapid changes in the parameter space as new tasks are learned. Furthermore, aligning the working model's decision boundary with the semantic memories serves two goals: (i) it helps retain and consolidate information, and (ii) it leads to a smoother adaptation of the decision boundary. 3.3 FORMULATION . CLS-ER involves training a working model f(.; θW) on a data stream D sampled from a non-i.i.d. distribution. Two additional EMA-weighted models are maintained as semantic memories: the plastic model f(.; θP) and the stable model f(.; θS). Finally, reservoir sampling (Vitter, 1985) is employed to maintain a small episodic memory M. At each training step, the working model receives the training batch Xb from the data stream and retrieves a random batch of exemplars Xm from the episodic memory. This is followed by the retrieval of optimal semantic information, i.e. the structural knowledge encoded in the semantic memories, which accounts for the consolidation of the feature space and the adaptation of the decision boundaries of the previous tasks. The semantic memories are designed so that the plastic model has higher performance on recent tasks whereas the stable model prioritizes retaining information on the older tasks. Therefore, we would prefer to use the logits from the stable model, ZS, for older exemplars and from the plastic model, ZP, for recent exemplars. As CLS-ER is a general incremental learning method, instead of using a hard threshold or task information, we opt for a simple task-agnostic approach of using the performance of the semantic memories on the exemplars as a selection criterion, which empirically works well.
For each exemplar, we select the replay logits Z based on which model has the highest softmax score for the ground-truth class (lines 5-6 in Algorithm 1). The selected replay logits from the semantic memories are then used to enforce a consistency loss on the working model so that it does not deviate from the already learned experiences. Hence, the working model is updated with a combination of the cross-entropy loss on the union of the data stream and episodic memory samples, X, and the consistency loss on the exemplars Xm:

L = LCE(σ(f(X; θW)), Y) + λ LMSE(f(Xm; θW), Z)    (1)

where σ is the softmax function, λ the regularization parameter, and LMSE the mean squared error loss used as the consistency term.

Algorithm 1 Complementary Learning System - Experience Replay
Input: Data stream D, learning rate η, consistency weight λ, update rates rP and rS, decay parameters αP and αS
Initialize: θW = θP = θS; M ← {}
1: while Training do
2:   (Xb, Yb) ∼ D
3:   (Xm, Ym) ∼ M
4:   (X, Y) = {(Xb, Yb), (Xm, Ym)}
5:   ZP, ZS ← f(Xm; θP), f(Xm; θS)                      ▷ Select optimal semantic memory
6:   Z ← ZP if σ(ZP)(Ym) > σ(ZS)(Ym) else ZS
7:   L = LCE(σ(f(X; θW)), Y) + λ LMSE(f(Xm; θW), Z)     ▷ Update working model
8:   θW ← θW − η ∇θW L
9:   a, b ∼ U(0, 1)                                     ▷ Update semantic memories
10:  θP ← αP θP + (1 − αP) θW if a < rP else θP
11:  θS ← αS θS + (1 − αS) θW if b < rS else θS
12:  M ← Reservoir(M, (Xb, Yb))                         ▷ Update episodic memory (Algorithm 2)
13: return θW, θP, θS

After updating the working model, we stochastically update the plastic and stable models with rates rP and rS (note that rP ≥ rS, so that the plastic model is updated more frequently). Using a stochastic rather than a deterministic approach is more biologically plausible (Maass, 2014); it reduces the overlap between snapshots of the working model and leads to more diversity in the semantic memories.
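The task-agnostic logit selection of lines 5-6 and the MSE consistency term of Eq. (1) can be sketched in NumPy as follows (a simplified illustration; the helper names and the toy logits are ours, not from the paper):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def select_replay_logits(z_plastic, z_stable, y):
    """Per exemplar, keep the logits from whichever semantic memory
    assigns the higher softmax score to the ground-truth class."""
    idx = np.arange(len(y))
    p_plastic = softmax(z_plastic)[idx, y]
    p_stable = softmax(z_stable)[idx, y]
    use_plastic = (p_plastic > p_stable)[:, None]
    return np.where(use_plastic, z_plastic, z_stable)

# Two exemplars, three classes; ground-truth classes are 0 and 1.
# The plastic memory is more confident on the first exemplar's true
# class, the stable memory on the second's.
z_p = np.array([[4.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
z_s = np.array([[1.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
y = np.array([0, 1])
z = select_replay_logits(z_p, z_s, y)

# Hypothetical working-model logits on the same exemplars: the MSE
# between them and the selected replay logits is the LMSE term of Eq. (1).
z_w = np.array([[3.0, 0.5, 0.0], [0.0, 2.5, 0.0]])
consistency = np.mean((z_w - z) ** 2)
```

Here the first row of `z` comes from the plastic memory and the second from the stable memory, matching the per-exemplar selection rule.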
The semantic memories are updated by taking an exponential moving average of the working model's weights (Tarvainen & Valpola, 2017) with decay parameters αP and αS, respectively:

θi ← αi θi + (1 − αi) θW ,  i ∈ {P, S}    (2)

Note that αP ≤ αS, so that the plastic model mimics the rapid adaptation of information while the stable model mimics the slow acquisition of structured knowledge. For inference, we use the stable model as it retains long-term memory across the tasks, consolidates structural knowledge, and learns efficient representations for generalization (Figure 1). Details of our proposed method are provided in Algorithm 1.
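The EMA update of Eq. (2) can be sketched as follows (a minimal NumPy sketch; the function and variable names are ours):

```python
import numpy as np

def ema_update(ema_params, working_params, decay):
    """Blend the working model's weights into a semantic memory:
    theta_i <- decay * theta_i + (1 - decay) * theta_W, per Eq. (2).

    A decay close to 1 yields a slowly changing (stable) memory,
    while a smaller decay yields a rapidly adapting (plastic) one.
    """
    return [decay * e + (1.0 - decay) * w
            for e, w in zip(ema_params, working_params)]

# Toy example with a single "layer" of weights: after one update from
# zero-initialized memories towards working weights of 1, the stable
# memory barely moves while the plastic memory moves halfway.
working = [np.array([1.0, 1.0])]
stable = ema_update([np.array([0.0, 0.0])], working, decay=0.99)
plastic = ema_update([np.array([0.0, 0.0])], working, decay=0.5)
```

In the full method this update is applied stochastically (with rates rP ≥ rS, per lines 9-11 of Algorithm 1) rather than at every step, which diversifies the snapshots of the working model that each memory absorbs.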
The paper presents CLS-ER, a dual memory mechanism for the continual learning setting. During training the model receives data from a non-i.i.d. source as well as random samples from a sample buffer (the episodic memory). The stable and plastic models act as teacher models, and their logits are the targets of the student model; which one is used as a target depends on which has the highest softmax score w.r.t. the ground-truth class. The teacher networks are both exponential moving averages of the previous student models. Thus, the only difference between the "plastic" and "stable" teacher models is their EMA update rate, which in the case of the plastic model is larger, such that it focuses on more recent weight updates. The slower learning of the stable model is the mechanism which ought to mirror the consolidation of structural knowledge, and it is the model used for inference. Samples from the data stream are transferred to the sample buffer using reservoir sampling. The authors experiment on image datasets (variants of MNIST, CIFAR10, Tiny-ImageNet; see the Mammoth framework) with data augmentation (random flips and crops). They compare with several prior works, and the evaluation is thorough and convincing.
Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
They propose a dual memory experience replay method based on complementary learning systems (CLS). The working model is updated using a consistency loss on replay logits selected from the better-performing of the plastic and stable semantic memories. The plastic and stable models are updated as exponential moving averages (EMA) of the working model's weights.
SP:9b693bad5fc10f11cc5942ee023fd4ec37a6f964
Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
1 INTRODUCTION . Continual learning ( CL ) refers to the ability of a learning agent to continuously interact with a dynamic environment and process a stream of information to acquire new knowledge while consolidating and retaining previously obtained knowledge ( Parisi et al. , 2019 ) . This ability to continuously learn from a changing environment is a hallmark of intelligence and a critical missing component in our quest towards making our models truly intelligent . The major challenge towards enabling CL in deep neural networks ( DNNs ) is that the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting whereby the performance of the model on previously learned tasks drops drastically as it learns new tasks ( McCloskey & Cohen , 1989 ) . Several approaches have been proposed to address the issue of catastrophic forgetting in CL . These can be broadly categorized into regularization-based methods ( Farajtabar et al. , 2020 ; Kirkpatrick et al. , 2017 ; Ritter et al. , 2018 ; Zenke et al. , 2017 ) which penalizes changes in the network weights , network expansion-based methods ( Rusu et al. , 2016 ; Yoon et al. , 2017 ) which dedicate a distinct set of network parameters to distinct tasks , and rehearsal-based methods ( Chaudhry et al. , 2018 ; Lopez-Paz & Ranzato , 2017 ) which maintains a memory buffer and replays samples from previous tasks . Amongst these , rehearsal-based methods have proven to be more effective in challenging CL tasks ( Farquhar & Gal , 2018 ) . However , an optimal approach for replaying memory samples and constraining the model update to efficiently consolidate knowledge remains an open question . In the brain , the ability to continually acquire , consolidate , and transfer knowledge over time is mediated by a rich set of neurophysiological processing principles ( Parisi et al. , 2019 ; Zenke et al. 
, 2017 ) and multiple memory systems ( Hassabis et al. , 2017 ) . In particular , the CLS theory ( Kumaran et al. , 2016 ) posits that efficient learning requires two complementary learning systems : the hippocampus exhibits short-term adaptation and rapid learning of episodic information which is then gradually consolidated to the neocortex for slow learning of structured information . Furthermore , a recent study by Hayes et al . ( 2021 ) identified the missing elements of biological reply in the replay ∗Contributed equally . mechanisms employed in DNNs for CL . They highlight that many existing approaches only focus on modeling the prefrontal cortex directly and do not have a fast learning network which plays a critical role in enabling efficient CL in the brain . Inspired by these studies , we hypothesize that mimicking the slow and rapid adaptation of information and having an efficient mechanism for incorporating them into the working memory can enable better CL in DNNs . To this end , we propose a novel dual memory experience replay method based on the complementary learning systems theory in the brain , dubbed as CLS-ER . In addition to a small episodic memory , our method builds long-term and short-term semantic memories which mimic the rapid and slow adaptation of information ( Figure 1 ) . As the network weights encode the learned representations of the tasks ( Krishnan et al. , 2019 ) , the semantic memories are maintained by taking the exponential moving average of the working model ’ s weights to consolidate information across the tasks with varying time windows and frequencies . The semantic memories interact with the episodic memory to extract consolidated replay activation patterns and enforce a consistency loss on the update of the working model so that new knowledge is acquired while aligning the decision boundary of the working model with the decision boundaries of semantic memories . 
This maintains a balance between the plasticity and stability of the model for effective knowledge consolidation. CLS-ER is a general CL method that does not utilize task boundaries or make any strong assumptions about the distribution of the data and tasks. We demonstrate the versatility and effectiveness of our method on a wide range of CL benchmark tasks as well as on more challenging scenarios that simulate the complexities of CL in the real world. 2 RELATED WORK . The base rehearsal method, Experience Replay (ER) (Riemer et al., 2018), combines memory samples with task samples in the training batch. Several techniques have since been built on top of ER. Meta Experience Replay (MER) (Riemer et al., 2018) treats replay as a meta-learning problem, maximizing transfer from previous tasks while minimizing interference. iCARL (Rebuffi et al., 2017) classifies using the nearest mean of past exemplar representations in an incrementally learned representation space. Gradient Episodic Memory (GEM) (Lopez-Paz & Ranzato, 2017) formulates optimization constraints on the exemplars in memory. Gradient-based Sample Selection (GSS) (Aljundi et al., 2019) aims for memory sample diversity in the gradient space and provides a greedy selection approach. Function Distance Regularization (FDR) (Benjamin et al., 2018) saves the network responses at task boundaries and adds a consistency loss on top of ER. Dark Experience Replay (DER++) relaxes this restriction and samples logits throughout the entire optimization trajectory. CLS has served as inspiration for dual memory learning systems in earlier works (French, 1999; Robins, 1993), but these have not been shown to scale to current computer vision tasks (Parisi et al., 2019). More recently, Rostami et al. (2019) utilize a generative model to couple sequential tasks in a latent embedding space. Kamra et al.
(2017) utilize two generative models in a dual memory architecture. However, they utilize task boundaries, and generative replay has its own set of challenges: learning a faithful distribution is difficult, and it performs worse than instance-based replay methods in challenging CL settings. In general, the inspiration DNNs have taken from CLS theory has been mostly limited to episodic memory, while mimicking the rapid and slow learning mechanisms has been largely ignored (Hayes et al., 2021), which we aim to address. 3 METHOD . We first provide an overview of the CLS theory of the brain and how we aim to mimic it in DNNs, before introducing the main components of our method and the overall formulation. 3.1 COMPLEMENTARY LEARNING SYSTEM THEORY . The CLS theory posits that effective lifelong learning in the brain requires two complementary learning systems. The hippocampus rapidly encodes novel information as short-term memory, which is subsequently used to transfer and consolidate knowledge into the neocortex; the neocortex gradually acquires structured knowledge representations as long-term memory through experience replay. The interplay between the hippocampus and the neocortex is crucial for concurrently learning efficient representations (for better generalization) and the specifics of instance-based episodic memory. 3.2 COMPLEMENTARY LEARNING SYSTEM BASED EXPERIENCE REPLAY . Inspired by the CLS theory, we propose a dual memory experience replay method, CLS-ER, which aims to mimic the interplay between fast and slow learning mechanisms to enable effective CL in DNNs. Our method maintains short-term and long-term semantic memories of the encountered tasks, which interact with the episodic memory to replay the associated neural activities.
The working model is updated so that it acquires new knowledge while aligning its decision boundary with the semantic memories, enabling the consolidation of structured knowledge across tasks. Figure 1 highlights the parallels between CLS theory and our method. Semantic Memories : Central to our method is the maintenance of two semantic memories which accumulate and consolidate information over long-term and short-term horizons. As the acquired knowledge of the learned tasks is encoded in the weights of DNNs (Krishnan et al., 2019), we form our semantic memories by accumulating the knowledge encoded in the model's weights as it sequentially learns different tasks. An efficient method for aggregating the weights of a model is provided by Mean Teacher (Tarvainen & Valpola, 2017), a knowledge distillation approach for semi-supervised learning that uses an exponential moving average (EMA) of the student's weights during training as a teacher. It can also be viewed as a self-ensemble of intermediate model states that leads to better internal representations. We adapt the Mean Teacher approach to build our semantic memories, as it provides a computationally and memory-efficient method for accumulating knowledge across tasks. Since CL involves learning tasks sequentially, the model weights at each training step can be considered a student model specialized for a particular task. Averaging the weights during training therefore forms an ensemble of task-specific student models, which effectively aggregates information across tasks and leads to smoother decision boundaries. CLS-ER builds long-term (stable model) and short-term (plastic model) semantic memories by maintaining two EMA-weighted copies of the working model's weights.
The stable model is updated less frequently, with a larger averaging window, so that it retains more information from earlier tasks, while the plastic model is updated more frequently, with a smaller window, so that it adapts faster to information from new tasks (Figure 2). Section D further demonstrates the benefits of employing two semantic memories instead of one. Episodic Memory : Replaying samples from previous tasks stored in a small episodic memory is a common approach in CL that has proven effective in mitigating catastrophic forgetting. As we aim to position CLS-ER as a versatile general incremental learning method, we do not utilize task boundaries or make any strong assumptions about the distribution of tasks or samples. Therefore, to maintain a fixed-size episodic memory buffer, we employ reservoir sampling (Vitter, 1985), which assigns each sample in the stream an equal probability of being represented in the buffer and randomly replaces existing memory samples (Algorithm 2). It is a global distribution matching strategy which ensures that, at any given time, the distribution of samples in the buffer approximately matches the distribution of all samples seen so far (Isele & Cosgun, 2018). Consolidation of Information : The key challenge in CL is the consolidation of new information with previously acquired information. This requires an effective balance between the stability and plasticity of the model. Furthermore, the sharp change in the decision boundary as a new task is learned makes consolidation across tasks more challenging. CLS-ER tackles these challenges through a novel dual memory experience replay mechanism.
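As a concrete aside, the reservoir sampling update referenced above (the paper's Algorithm 2, not reproduced here) can be sketched in Python; the function name and signature are illustrative, not the paper's.

```python
import random

def reservoir_update(buffer, capacity, item, num_seen):
    """Reservoir sampling: after this call, each of the num_seen + 1
    stream items has equal probability capacity / (num_seen + 1)
    of being in the buffer."""
    if len(buffer) < capacity:
        buffer.append(item)              # buffer not yet full: always store
    else:
        j = random.randint(0, num_seen)  # uniform index over all items seen
        if j < capacity:
            buffer[j] = item             # replace a random existing sample
    return buffer
```

Because the replacement index is drawn uniformly over all items seen so far, no task boundary information is needed, which matches the task-agnostic design goal stated above.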
The long-term and short-term semantic memories interact with the episodic memory to extract consolidated activations for the memory samples, which are then used to constrain the update of the working model so that new knowledge is obtained while the decision boundary stays aligned with the semantic memories. This prevents rapid changes in the parameter space as new tasks are learned. Furthermore, aligning the working model's decision boundary with the semantic memories serves two goals: (i) it helps retain and consolidate information, and (ii) it leads to a smoother adaptation of the decision boundary. 3.3 FORMULATION . CLS-ER trains a working model f(·; θW) on a data stream D sampled from a non-iid distribution. Two additional EMA-weighted models are maintained as semantic memories: the plastic model f(·; θP) and the stable model f(·; θS). Finally, reservoir sampling (Vitter, 1985) is employed to maintain a small episodic memory M. At each training step, the working model receives a training batch Xb from the data stream and retrieves a random batch of exemplars Xm from the episodic memory. This is followed by the retrieval of the optimal semantic information, i.e., the structural knowledge encoded in the semantic memories, which accounts for the consolidation of the feature space and the adaptation of the decision boundaries of previous tasks. The semantic memories are designed so that the plastic model performs better on recent tasks whereas the stable model prioritizes retaining information from older tasks. Therefore, we would prefer to use the logits from the stable model, ZS, for older exemplars and from the plastic model, ZP, for recent exemplars. As CLS-ER is a general incremental learning method, instead of using a hard threshold or task information, we opt for a simple task-agnostic approach: we use the performance of the semantic memories on each exemplar as the selection criterion, which empirically works well.
For each exemplar, we select the replay logits Z based on which model has the higher softmax score for the ground-truth class (lines 5-6 in Algorithm 1). The selected replay logits from the semantic memories are then used to enforce a consistency loss on the working model so that it does not deviate from the already learned experiences. Hence, the working model is updated with a combination of the cross-entropy loss on the union X of the data stream and episodic memory samples, and the consistency loss on the exemplars Xm:

L = LCE(σ(f(X; θW)), Y) + λ LMSE(f(Xm; θW), Z)    (1)

Algorithm 1 Complementary Learning System - Experience Replay
Input: Data stream D, learning rate η, consistency weight λ, update rates rP and rS, decay parameters αP and αS
Initialize: θW = θP = θS; M ← {}
 1: while Training do
 2:     (Xb, Yb) ∼ D
 3:     (Xm, Ym) ∼ M
 4:     (X, Y) = {(Xb, Yb), (Xm, Ym)}
 5:     ZP, ZS ← f(Xm; θP), f(Xm; θS)          . Select optimal semantic memory
 6:     Z ← ZP if σ(ZP)(Ym) > σ(ZS)(Ym) else ZS
 7:     L = LCE(σ(f(X; θW)), Y) + λ LMSE(f(Xm; θW), Z)    . Update working model
 8:     θW ← θW − η ∇θW L
 9:     a, b ∼ U(0, 1)                          . Update semantic memories
10:     θP ← αP θP + (1 − αP) θW if a < rP else θP
11:     θS ← αS θS + (1 − αS) θW if b < rS else θS
12:     M ← Reservoir(M, (Xb, Yb))              . Update episodic memory (Algorithm 2)
return θW, θP, θS

where σ is the softmax function, λ the regularization weight, and LMSE the mean squared error loss used as the consistency term. After updating the working model, we stochastically update the plastic and stable models with rates rP and rS (note that rP ≥ rS, so that the plastic model is updated more frequently). Using a stochastic rather than a deterministic schedule is more biologically plausible (Maass, 2014); it reduces the overlap between the snapshots of the working model and leads to more diversity in the semantic memories.
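The per-exemplar logit selection of lines 5-6 of Algorithm 1 can be sketched as follows in NumPy; function names and array shapes (batch × classes) are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def select_replay_logits(z_plastic, z_stable, y_m):
    """Per exemplar, keep the logits of whichever semantic memory assigns
    the higher softmax score to the ground-truth class (Alg. 1, lines 5-6)."""
    n = y_m.shape[0]
    p_plastic = softmax(z_plastic)[np.arange(n), y_m]
    p_stable = softmax(z_stable)[np.arange(n), y_m]
    use_plastic = (p_plastic > p_stable)[:, None]
    return np.where(use_plastic, z_plastic, z_stable)
```

The selected logits Z then enter the MSE consistency term of the loss in equation (1).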
The semantic memories are updated by taking an exponential moving average of the working model's weights (Tarvainen & Valpola, 2017) with decay parameters αP and αS, respectively:

θi ← αi θi + (1 − αi) θW,  i ∈ {P, S}    (2)

Note that αP ≤ αS, so that the plastic model mimics the rapid adaptation of information while the stable model mimics the slow acquisition of structured knowledge. For inference, we use the stable model, as it retains long-term memory across tasks, consolidates structural knowledge, and learns efficient representations for generalization (Figure 1). Details of our proposed method are provided in Algorithm 1.
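The stochastic EMA update of equation (2), gated by the update rates of lines 9-11 of Algorithm 1, reduces per semantic memory to the sketch below; parameters are represented as flat lists of floats for simplicity, an illustrative assumption.

```python
import random

def maybe_ema_update(theta_mem, theta_work, alpha, rate):
    """With probability `rate`, move the semantic memory toward the working
    model via an exponential moving average with decay `alpha` (eq. 2).
    Otherwise leave the memory unchanged."""
    if random.random() < rate:
        return [alpha * m + (1.0 - alpha) * w
                for m, w in zip(theta_mem, theta_work)]
    return theta_mem
```

The plastic memory would call this with (alpha_P, r_P) and the stable memory with (alpha_S, r_S), where alpha_P ≤ alpha_S and r_P ≥ r_S as stated above.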
In this paper, motivated by the CLS theory in neuroscience, which holds that efficient learning requires short-term adaptation together with slow learning of structured information, the authors propose a novel dual memory experience replay method for continual learning. The idea is to build long-term and short-term semantic memories by maintaining two additional models: a plastic model and a stable model. The plastic model handles fast learning of recent experiences, while the stable model handles slow learning of structural knowledge. The two models are updated in a Mean Teacher fashion during training, with different frequencies. The proposed CLS-ER is task-agnostic and can be applied in different continual learning settings. The experimental results show that the proposed CLS-ER outperforms several baselines for continual learning.
Evolution Strategies as an Alternate Learning method for Hierarchical Reinforcement Learning
1 INTRODUCTION . Reinforcement learning (RL) has been used to create artificially intelligent agents for tasks ranging from robot locomotion (Haarnoja et al., 2018) to video games such as StarCraft (Vinyals et al., 2019) and board games such as chess and Go (Silver et al., 2018). Many such agents use Markov Decision Process (MDP) based learning methods, such as Deep Q-Networks (DQNs) (Mnih et al., 2015) and the policy gradient family of methods (Sutton et al., 1999a; Sutton & Barto, 2018). MDP and gradient based RL methods are also used by hierarchical RL (HRL) algorithms, a class of RL algorithms that excel at challenging RL problems by decomposing them into subtasks, mimicking the way humans build new skills on top of existing simpler ones. HRL methods have seen success in solving some of the hardest RL environments, such as Montezuma's Revenge (Vezhnevets et al., 2017; Badia et al., 2020), and in generating complex robot behaviours (Nachum et al., 2018). Another area of reinforcement learning that has enjoyed recent success is evolution strategies (ES), a family of black box evolutionary optimization techniques which Salimans et al. (2017) showed to be competitive with non-hierarchical MDP based RL methods in the robot locomotion and Atari domains. This success has led to wider use of ES for tackling RL problems, but ES has yet to be used to solve challenging HRL problems. ES has been used as a black box optimizer for a multitude of problems, such as minimizing the drag of 3D bodies (Beyer & Schwefel, 2002), optimizing designs in structural and mechanical engineering problems (Datoussaïd et al., 2006), robot locomotion (Salimans et al., 2017; Conti et al., 2017; Katona et al., 2021), and loss function optimization (Gonzalez & Miikkulainen, 2020).
There are many different flavours of ES (Beyer & Schwefel, 2002), each with different selection, mutation, and self-adaptation properties, for example CMA-ES (Hansen & Ostermeier, 2001) and (1+γ)-ES (Beyer & Schwefel, 2002). However, in this work we are concerned with the version proposed by Salimans et al. (2017), namely Scalable Evolution Strategies (S-ES), because of its proven performance in robot locomotion and Atari game playing. All flavours of ES follow a sample-and-evaluate scheme: they sample a cloud of policy variants around the current policy's parameters, evaluate these sampled policies to obtain fitness values, and use the fitnesses to inform an update to the current policy. S-ES specifically uses fitness to approximate the gradient and moves the current policy parameters in the direction that maximizes the average reward. Since ES is both a black box process (making it indifferent to temporal details) and a gradient free method, it suffers from suboptimal sample efficiency; however, Liu et al. (2019) showed promising results addressing this inefficiency using trust regions, which allow for more monotonic improvement. S-ES obtained results comparable to MDP methods on a set of standard MuJoCo (Todorov et al., 2012) and Atari (Mnih et al., 2015) benchmarks (Salimans et al., 2017). However, many RL problems are harder than these standard benchmarks; some are near impossible to solve using non-hierarchical RL (which we refer to in this paper as flat RL), and others remain unsolved by flat RL. These environments range from games such as Montezuma's Revenge (Mnih et al., 2015), which is challenging because it requires long-term credit assignment, to robot locomotion, navigation, and interaction (Nachum et al., 2018; Florensa et al., 2017), which require complex multi-level reasoning.
HRL has long held the promise of solving much more complex tasks than flat RL methods, as it allows policies to abstract away large amounts of complexity and focus on solving simpler subgoals. One of the first HRL methods is the options framework (Sutton et al., 1999b), in which a controller policy selects the most appropriate primitive policy from a pool of primitive policies; the primitive passes control back to the controller once its actions are completed, and the process repeats. A competing HRL framework is feudal RL (Dayan & Hinton, 1993), which allows communication between the controller and primitives by having the controller set goals for the primitive to complete. Recent feudal RL methods such as FeUdal Networks for HRL (FuN) (Vezhnevets et al., 2017) and HRL with Off-Policy Correction (HIRO) (Nachum et al., 2018) have shown a lot of promise for learning sparse reward problems and hierarchies requiring complex primitives. HIRO in particular uses a two-level hierarchy (one controller and one primitive) in which the controller sets the goal and the reward for the primitive. For example, the goal can be a position the agent must reach, and the reward is based on the agent's distance to that goal position. HIRO, FuN, and most modern HRL methods use MDP based RL methods to optimize their hierarchy of policies (Vezhnevets et al., 2017; Nachum et al., 2018; Sutton et al., 1999b; Badia et al., 2020), and to the best of the authors' knowledge, non-MDP based RL solvers such as ES have not been extensively tested on the hard RL problems typically reserved for MDP based HRL solvers. ES has multiple advantages over MDP based RL methods, two of which make it especially suited to HRL problems: first, it is invariant to delayed rewards, and second, it has a more structured exploration mechanism (Salimans et al., 2017; Conti et al.
, 2017) relative to MDP based RL methods. Its robustness to delayed rewards is especially useful for HRL problems, as much of the difficulty of these environments comes from the long-term credit assignment problem. Similarly, hard RL problems often have many large local minima, requiring intelligent exploration methods to be solved. These advantages suggest that ES, and specifically S-ES, should perform well on challenging HRL problems; however, to the best of the authors' knowledge, S-ES has not yet been applied to HRL problems. In this paper we introduce a new method1 for training a two-level hierarchy of policies optimized using S-ES2, namely Scalable Hierarchical Evolution Strategies (SHES). It is evaluated on two difficult environments which require robust robot navigation, locomotion, and planning. We compare our method's performance to that of MDP based HRL methods that have been tested on the same environments (Nachum et al., 2018; Vezhnevets et al., 2017). This paper aims to demonstrate that SHES performs well on environments that are challenging for MDP based HRL methods, and that S-ES is as viable as any MDP based RL method for training hierarchies of policies. The results obtained in this work show that SHES provides the HRL space with a high-performing, highly scalable, and comparatively simple HRL method. Furthermore, our method achieves these results using the same hyper-parameters as its flat RL counterpart. 2 SCALABLE HIERARCHICAL EVOLUTION STRATEGIES ( SHES ) . In this section we present our framework for learning hierarchical policies using S-ES, which we call SHES.
We show how current MDP based HRL methods needed to be adapted to work with S-ES, and explain the important design choices taken, such as the choice of primitive reward function, the goal encoding, and the controller architecture. [Footnotes: 1 Removed for review; code can be found in supplementary material (ScalableHrlEs.jl). 2 Removed for review; code can be found in supplementary material (ScalableEs.jl).] 2.1 POLICY HIERARCHY . SHES is a feudal RL (Dayan & Hinton, 1993) style method in which a higher level policy sets goals for a lower level policy. Dayan & Hinton (1993) use a multilevel feudal hierarchy, but, like HIRO, SHES uses a two-level hierarchy consisting of a higher level controller policy µc and a lower level primitive policy µp. The controller is responsible for setting goals and cannot directly perform actions in the world, while the primitive directly controls the agent by taking actions in the world and attempts to reach the goals set by the controller. More formally, given a state st from the environment, the controller produces a goal gt ∈ Rd (where d depends on the goal encoding; in this work d = 3). The controller produces gt every c steps; in the interim, the goal is transformed using a static function so that it is always relative to the current state. For example, if gt is a vector from the agent to a target position, the static function ensures that gt is always relative to the agent's current position. The controller interval c is kept as a hyper-parameter, since it has been observed that learning c often leads to it degenerating into the simplest cases, where c becomes 1 or the maximum episode length (Vezhnevets et al., 2017). This provides the controller with a level of temporal abstraction which, in the environments tested in this work, allows the controller to plan a path without worrying about how the agent will follow this path.
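The text above only says that a static function keeps gt relative to the current state. One plausible concrete choice, borrowed from HIRO's goal transition model, is sketched below; this specific form is an assumption, not necessarily the function SHES uses.

```python
import numpy as np

def relative_goal_transition(goal, state, next_state):
    """Re-express a positional goal relative to the agent after it moves,
    so the goal point stays fixed in world coordinates between the
    controller's decisions (HIRO-style transition; assumed form)."""
    return np.asarray(state) + np.asarray(goal) - np.asarray(next_state)
```

With this choice, the relative goal shrinks toward zero as the agent approaches the world-frame target.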
The primitive is passed the goal gt and the state st and is tasked with reaching the goal. It samples an action at ∼ µp(st, gt) from its policy, which is applied to the agent. The controller receives a reward from the environment, but it is also responsible for rewarding the primitive. As discussed in Section 2.2, the primitive is sensitive to this reward, so it should be chosen carefully. In HIRO, FuN, and this work, the primitive reward is based on the primitive's distance to its goal gt (Vezhnevets et al., 2017; Nachum et al., 2018), although each work uses a different reward. In a strict feudal RL setting, rewards are not shared between controller and primitive. For example, if the primitive reaches the goal set by the controller but this does not yield a high environmental reward, the primitive will receive a high reward while the controller will not. Both this work and HIRO follow this strict style of primitive rewards. FuN, by contrast, shares rewards between primitives and controllers (Vezhnevets et al., 2017); we decided against this as it introduces a hyper-parameter to balance the primitive and controller reward scales, which can be challenging to tune. Our method is similar to HIRO, though SHES differs in its lack of off-policy correction, its primitive goal encoding, and its use of S-ES to train the primitive and controller. This makes SHES very simple to implement: the only difference between SHES and S-ES is that two policies now co-evolve instead of one. SHES stores a set of parameters for both the controller, θc, and the primitive, θp. Every generation it creates n new pairs of controllers and primitives by perturbing the parameters θc and θp. The controller is perturbed by adding a small amount of noise sampled from an n-variate Gaussian to the parameters: θci = θc + εci, with εci ∼ N(0, σ²I).
The primitive is perturbed similarly, using a different noise vector εp sampled from the same Gaussian, εp ∼ N(0, σ²I), allowing the sharing of common random numbers at no extra memory cost compared to single-policy S-ES, as SHES uses a shared noise table, one of the main contributions of S-ES (Salimans et al., 2017). The sharing of common random numbers was shown by Salimans et al. (2017) to allow near linear speedup when scaling up to 1000 CPU cores, which suggests SHES should scale as well as S-ES; however, it was only tested with up to 240 cores3. Each pair of perturbed controller and primitive is evaluated in the environment such that the controller receives a fitness equal to the cumulative environmental reward and the primitive receives a fitness equal to its cumulative reward from the controller. Primitives and controllers are separately ranked and their fitnesses shaped using the same method as Salimans et al. (2017) for S-ES. The shaped fitnesses are used to approximate the gradients for the controller and primitive separately, which are applied with the ADAM optimizer (Kingma & Ba, 2014) to update the controller (θc) and primitive (θp) parameters, exactly as S-ES updates its single policy. [Footnote: 3 Each node has an Intel Xeon 24 core CPU at 2.6GHz and 120GB of RAM; nodes are connected by FDR InfiniBand.] Under this type of feudal RL method, the controller poses goals for a primitive whose behaviour is constantly changing. This amounts to a non-stationary problem for the controller: while the controller is trying to learn the behaviour of the primitive, that behaviour is changing, which can be challenging. Nachum et al. (2018) develop an off-policy correction method to combat the non-stationarity and to allow off-policy training and thus better sample efficiency.
We found that we did not need a special method to combat this problem, as S-ES's robustness to noise makes it simpler to handle: the primitive's changing behaviour can be interpreted as noise by the controller. However, this does come at the cost of sample efficiency.

Algorithm 1: SHES
1 Input: Learning rate α, noise standard deviation σ, rollouts n, initial policy parameters θc and θp
2 for t = 0, 1, 2, ... do
3     for i = 1, 2, ..., n do
4         Sample εci, εpi ∼ N(0, I)
5         Fci, Fpi = F(θct + εci · σ, θpt + εpi · σ)
6     end
7     θct+1 = θct + α (1/(nσ)) Σ_{i=1}^{n} Fci εci
8     θpt+1 = θpt + α (1/(nσ)) Σ_{i=1}^{n} Fpi εpi
9 end
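For a single policy, the inner loop and update of Algorithm 1 (SHES) can be sketched in NumPy. This is a minimal version: `fitness_fn` stands in for the rollout F, the simple mean/std normalization here replaces the rank shaping of Salimans et al. (2017), and ADAM and the shared noise table are omitted; all names are illustrative.

```python
import numpy as np

def ses_generation(theta, fitness_fn, n=100, sigma=0.1, alpha=0.05, rng=None):
    """One S-ES generation for one policy: sample n Gaussian perturbations,
    evaluate their fitness, and step theta along the fitness-weighted
    average of the noise (Algorithm 1, lines 3-8)."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal((n, theta.size))         # eps_i ~ N(0, I)
    fit = np.array([fitness_fn(theta + sigma * e) for e in eps])
    fit = (fit - fit.mean()) / (fit.std() + 1e-8)      # crude fitness shaping
    return theta + alpha * (fit[:, None] * eps).mean(axis=0) / sigma
```

In SHES this update would be applied twice per generation: once to θc using controller fitness (cumulative environmental reward) and once to θp using primitive fitness (the controller's cumulative distance-based reward).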
This paper proposes a novel approach for hierarchical reinforcement learning. As the title says, the authors claim that evolution strategies are useful for training hierarchical policies. Basically, the search is based on estimating the gradients of the controller fitness and the primitive fitness using Gaussian samples around the current parameters. It is mostly the same as Salimans et al. (2017). The authors also introduce the primitive reward. The performance of the proposed approach is evaluated on two test environments.
Evolution Strategies as an Alternate Learning method for Hierarchical Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) has been used to create artificially intelligent agents for tasks ranging from robot locomotion ( Haarnoja et al. , 2018 ) to video games such as StarCraft ( Vinyals et al. , 2019 ) and board games such as chess and Go ( Silver et al. , 2018 ) . Many such agents use Markov Decision Process ( MDP ) based learning methods , such as Deep Q-Networks ( DQNs ) ( Mnih et al. , 2015 ) and the policy gradient family of methods ( Sutton et al. , 1999a ; Sutton & Barto , 2018 ) . MDP and gradient based RL methods are also used by hierarchical RL ( HRL ) algorithms , which are a class of RL algorithms that excel at challenging RL problems by decomposing them into sub tasks , which mimics the way we as humans build new skills on top of existing simpler skills . HRL methods have seen success in solving some of the hardest RL environments such as Montezumas revenge ( Vezhnevets et al. , 2017 ; Badia et al. , 2020 ) and generating complex robot behaviours ( Nachum et al. , 2018 ) . Another area of reinforcement learning that has enjoyed recent success is evolution strategies . These are a family of black box evolutionary optimization techniques , which Salimans et al . ( 2017 ) showed are competitive with non-hierarchical MDP based RL methods in the robot locomotion and Atari domain . The success of such approaches has lead to a wider use of ES for tackling RL problems , but it has yet to be used to solve challenging HRL problems . ES has been used as a black box optimizer for a multitude of problems such as minimizing the drag of 3D bodies ( Beyer & Schwefel , 2002 ) , optimizing designs in structural and mechanical engineering problems ( Datoussaı̈d et al. , 2006 ) , robot locomotion ( Salimans et al. , 2017 ; Conti et al. , 2017 ; Katona et al. , 2021 ) and loss function optimization ( Gonzalez & Miikkulainen , 2020 ) . 
There are many different flavours of ES ( Beyer & Schwefel , 2002 ) each with different selection , mutation and self-adaption properties , for example CMA-ES ( Hansen & Ostermeier , 2001 ) and ( 1+γ ) -ES ( Beyer & Schwefel , 2002 ) . However , in this work we are concerned with the version proposed by Salimans et al . ( 2017 ) , namely Scalable Evolution Strategies ( S-ES ) because of it ’ s proven performance in the domain of robot locomotion and Atari game playing . All flavours of ES follow the scheme of sample-and-evaluate , where it samples a cloud of policy variants around it ’ s current policies parameters , evaluates these sampled policies to obtain a fitness and uses fitness to inform an update to the current policy . S-ES specifically uses fitness to approximate the gradient and moves the current policy parameters in the direction that maximizes the average reward . Given that ES is both a blackbox process ( making it indifferent to temporal details ) and is a gradient free method it suffers from sub optimal sample efficiency , however Liu et al . ( 2019 ) showed promising results addressing this inefficiency using trust regions which allow for more of a monotonic improvement . S-ES obtained results comparable to MDP methods on a set of standard MuJoCo ( Todorov et al. , 2012 ) and Atari ( Mnih et al. , 2015 ) benchmarks ( Salimans et al. , 2017 ) , however there are many RL problems harder than these standard benchmarks , some of which are near impossible to solve using non-hierarchical RL ( in this paper we refer to this as flat RL ) and others that are unsolved using flat RL . These environments can range from games such as Montezuma ’ s revenge ( Mnih et al. , 2015 ) which is challenging because it requires long-term credit assignment , to robot locomotion , navigation and interaction ( Nachum et al. , 2018 ; Florensa et al. , 2017 ) which requires complex multi-level reasoning . 
HRL has long held the promise of solving much more complex tasks than flat RL methods . It allows policies to abstract away large amounts of complexity and focus on solving simpler sub goals . One of the first HRL methods is known as the options framework ( Sutton et al. , 1999b ) which allows the controller policy to select the most appropriate primitive policy from a pool of primitive policies , this primitive passes control back to the controller once its actions are completed and the process repeats . Another competing HRL framework is feudal-RL ( Dayan & Hinton , 1993 ) , this framework allows for communication between the controller and primitives by having the controller set goals for the primitive to complete . Recent feudal-RL methods such as FeUdal Networks for HRL ( FuN ) ( Vezhnevets et al. , 2017 ) and HRL with Off-Policy Correction ( HIRO ) ( Nachum et al. , 2018 ) have shown a lot of promise for learning sparse reward problems and hierarchies requiring complex primitives . HIRO in particular takes the approach of using a two-level hierarchy ( one controller and one primitive ) where the controller sets the goal and reward for the primitive . For example the goal can take the form of a position an agent must reach and the reward is based on the agents distance to the goal position . HIRO , FuN and most modern HRL methods use MDP based RL methods to optimize their hierarchy of policies ( Vezhnevets et al. , 2017 ; Nachum et al. , 2018 ; Sutton et al. , 1999b ; Badia et al. , 2020 ) and to the best of the authors knowledge , non-MDP based RL solvers , such as ES , have not been extensively tested on hard RL problem that are typically reserved for MDP based HRL solvers . ES has multiple advantages over MDP based RL methods , but two of these advantages make ES especially suited for HRL problems . First , it is invariant to delayed rewards and second , it has a more structured exploration mechanism ( Salimans et al. , 2017 ; Conti et al. 
, 2017) relative to MDP-based RL methods. Its robustness to delayed rewards is especially useful for HRL problems, as much of the difficulty of these environments can come from the long-term credit assignment problem. Similarly, hard RL problems often have many large local minima, requiring intelligent exploration methods in order to be solved. These advantages suggest that ES, and specifically S-ES, should perform well on challenging HRL problems; however, to the best of the authors' knowledge, S-ES has not yet been applied to HRL problems. In this paper we introduce a new method for training a two-level hierarchy of policies optimized using S-ES, namely Scalable Hierarchical Evolution Strategies (SHES). It is evaluated on two difficult environments that require robust robot navigation, locomotion and planning. We compare our method's performance to other MDP-based HRL methods that have been tested on the same environments (Nachum et al., 2018; Vezhnevets et al., 2017). This paper aims to demonstrate that SHES performs well on environments that are challenging for MDP-based HRL methods, and that S-ES is as viable as any MDP-based RL method for training hierarchies of policies. The results obtained in this work show that SHES provides the HRL space with a high-performing, highly scalable and comparatively simple HRL method. Furthermore, our method achieves these results using the same hyper-parameters as its flat RL counterpart. 2 SCALABLE HIERARCHICAL EVOLUTION STRATEGIES (SHES). In this section we present our framework for learning hierarchical policies using S-ES, which we call SHES.
We show how current MDP-based HRL methods needed to be adapted in order to work with S-ES, and explain the important design choices taken, such as the choice of primitive reward function, the goal encoding and the controller architecture. (Footnotes: links removed for review; code can be found in the supplementary material as ScalableHrlEs.jl and ScalableEs.jl.) 2.1 POLICY HIERARCHY. SHES is a feudal-RL (Dayan & Hinton, 1993) style method in which a higher-level policy sets goals for a lower-level policy. Dayan & Hinton (1993) use a multi-level feudal hierarchy but, like HIRO, SHES uses a two-level hierarchy consisting of a higher-level controller policy µc and a lower-level primitive policy µp. The controller is responsible for setting goals and cannot directly perform actions in the world, while the primitive directly controls the agent by taking actions in the world and attempts to reach the goals set by the controller. More formally, given a state st from the environment, the controller produces a goal gt ∈ Rd (where d depends on the goal encoding; in this work d = 3). The controller produces gt every c steps; in the interim, the goal is transformed using a static function such that it is always relative to the current state. For example, if gt is a vector from the agent to a target position, the static function ensures that gt is always relative to the agent's current position. The controller interval c is kept as a hyper-parameter, since it has been observed that learning c often leads to it degenerating into the simplest cases, where c becomes 1 or the maximum episode length (Vezhnevets et al., 2017). This provides the controller with a level of temporal abstraction which, in the environments tested in this work, allows the controller to plan a path without worrying about how the agent will follow it.
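The two-level rollout described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the toy point environment, the helper names, and the specific controller/primitive functions are all assumptions; only the structure (a goal emitted every c steps and re-expressed relative to the agent in between) follows the text.

```python
import numpy as np

def relative_goal(goal_abs, agent_pos):
    # static transform: the goal is kept as a vector from the agent to the target
    return goal_abs - agent_pos

class ToyEnv:
    """Minimal 2-D point environment, used only to make the sketch runnable."""
    def __init__(self):
        self.pos = np.zeros(2)
    def step(self, action):
        self.pos = self.pos + action
        return self.pos.copy()

def rollout(controller, primitive, env, c=5, horizon=20):
    """Feudal rollout: the controller emits a goal every c steps (temporal
    abstraction); between controller decisions the goal is re-labelled so it
    stays relative to the agent's current position."""
    state = env.pos.copy()
    goal_abs = None
    for t in range(horizon):
        if t % c == 0:
            # controller output interpreted here as an offset from the agent
            goal_abs = env.pos + controller(state)
        g = relative_goal(goal_abs, env.pos)
        action = primitive(state, g)   # primitive conditions on state and goal
        state = env.step(action)
    return state
```

For instance, a controller that always asks for one unit of progress in x and a primitive that moves halfway to the goal each step will steadily drive the agent in the +x direction.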
The primitive is passed the goal gt and the state st and is tasked with reaching the goal. It samples an action at ∼ µp(st, gt) from its policy, which is applied to the agent. The controller receives a reward from the environment, but it is also responsible for rewarding the primitive. As discussed in Section 2.2, the primitive is sensitive to this reward, so it should be chosen carefully. In HIRO, FuN and this work, the primitive reward is based on the primitive's distance to its goal gt (Vezhnevets et al., 2017; Nachum et al., 2018), although each of these works uses a different reward. In a strict feudal-RL setting, rewards are not shared between controller and primitive. For example, if the primitive reaches the goal set by the controller but this does not yield a high environmental reward, the primitive will receive a high reward while the controller will not. Both this work and HIRO follow this strict style of primitive rewards. FuN, in contrast, shares rewards between its primitives and controllers (Vezhnevets et al., 2017); we decided against this because it introduces a hyper-parameter to balance the primitive and controller reward scales, which can be challenging to tune. Our method is similar to HIRO, though SHES differs in its lack of off-policy correction, its primitive goal encoding and its use of S-ES to train the primitive and controller. This makes SHES very simple to implement: the only difference between SHES and S-ES is that two policies now co-evolve instead of one. SHES stores a set of parameters for both the controller, θc, and the primitive, θp. Every generation it creates n new pairs of controllers and primitives by perturbing the parameters θc and θp. The perturbation adds a small amount of Gaussian noise to the parameters: θc_i = θc + εc_i, where εc_i ∼ N(0, σ²I).
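A distance-based primitive reward of the kind described above, and the per-episode fitness it induces for S-ES, can be sketched as follows. The exact functional form used by SHES is a design choice; negative Euclidean distance is one common option and is only an assumption here.

```python
import numpy as np

def primitive_reward(goal_rel):
    """Negative Euclidean norm of the remaining (relative) goal vector:
    maximal (zero) exactly when the goal is reached."""
    return -float(np.linalg.norm(goal_rel))

def primitive_fitness(goal_history):
    """Cumulative controller-assigned reward over an episode; in the strict
    feudal setting this, not the environmental return, is the fitness the
    primitive is optimized for."""
    return sum(primitive_reward(g) for g in goal_history)
```

Note that this fitness is entirely decoupled from the environmental reward, matching the strict feudal style shared by SHES and HIRO.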
The primitive is perturbed similarly, using a different noise vector εp_i sampled from the same Gaussian, εp_i ∼ N(0, σ²I). This allows the sharing of common random numbers at no extra memory cost compared to single-policy S-ES, because SHES uses a shared noise table, one of the main contributions of S-ES (Salimans et al., 2017). Sharing common random numbers was shown by Salimans et al. (2017) to allow near-linear speedup when scaling to 1000 CPU cores, which suggests that SHES should scale as well as S-ES; however, we only tested it on up to 240 cores (each node has an Intel Xeon 24-core CPU at 2.6 GHz and 120 GB of RAM, and nodes are connected by FDR InfiniBand). Each pair of perturbed controller and primitive is evaluated in the environment such that the controller is given a fitness equal to the cumulative environmental reward and the primitive is given a fitness equal to its cumulative reward from the controller. Primitives and controllers are separately ranked and their fitnesses shaped using the same method as Salimans et al. (2017) for S-ES. These shaped fitnesses are used to approximate the gradients for the controller and primitive separately, and the controller (θc) and primitive (θp) parameters are updated using the ADAM optimizer (Kingma & Ba, 2014), no differently from how S-ES updates its policy. With this type of feudal-RL method, the controller poses goals for a primitive whose behaviour is constantly changing. This amounts to a non-stationary problem for the controller: while the controller is trying to learn the behaviour of the primitive, that behaviour is changing, which can be challenging. Nachum et al. (2018) develop an off-policy correction method to combat the non-stationarity problem and to allow off-policy training and thus better sample efficiency.
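The shared-noise-table trick mentioned above can be sketched as follows. Workers hold one large pre-generated block of Gaussian noise, so a perturbation is identified by an integer offset and only indices and fitnesses need to be communicated. The table size, indexing scheme and σ value here are illustrative, not the paper's settings.

```python
import numpy as np

class SharedNoiseTable:
    """One large block of N(0, 1) noise shared by all workers (the
    common-random-numbers trick of Salimans et al., 2017)."""
    def __init__(self, size=1_000_000, seed=0):
        self.noise = np.random.RandomState(seed).randn(size).astype(np.float32)

    def get(self, index, dim):
        # reconstruct a perturbation vector from an integer offset
        return self.noise[index:index + dim]

def perturb_pair(theta_c, theta_p, table, idx_c, idx_p, sigma=0.02):
    """Perturb controller and primitive with independent slices of the shared
    table: theta_i = theta + sigma * eps_i, eps_i ~ N(0, I). Both policies
    reuse one table, so SHES needs no extra noise memory over S-ES."""
    eps_c = table.get(idx_c, theta_c.size)
    eps_p = table.get(idx_p, theta_p.size)
    return theta_c + sigma * eps_c, theta_p + sigma * eps_p
```

Because the table is deterministic given its seed, any worker can reconstruct another worker's perturbation from `(idx_c, idx_p)` alone.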
We found that we did not need a special method to combat this problem, as S-ES's robustness to noise makes it simpler to handle: the primitive's changing behaviour can be interpreted as noise by the controller. However, this does come at the cost of sample efficiency.

Algorithm 1: SHES
1 Input: learning rate α, noise standard deviation σ, rollouts n, initial policy parameters θc and θp
2 for t = 0, 1, 2, ... do
3   for i = 1, 2, ..., n do
4     Sample εc_i, εp_i ∼ N(0, I)
5     Fc_i, Fp_i = F(θc_t + σ·εc_i, θp_t + σ·εp_i)
6   end
7   θc_{t+1} = θc_t + α·(1/(nσ))·Σ_{i=1}^{n} Fc_i·εc_i
8   θp_{t+1} = θp_t + α·(1/(nσ))·Σ_{i=1}^{n} Fp_i·εp_i
9 end
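One generation of Algorithm 1 can be sketched directly. This is a minimal sketch: it uses raw fitnesses and a plain gradient step, whereas the paper applies rank-based fitness shaping and ADAM; the toy fitness function in the usage example is of course an assumption.

```python
import numpy as np

def shes_generation(theta_c, theta_p, fitness_fn, n=50, sigma=0.1, alpha=0.05,
                    rng=None):
    """One generation of SHES (Algorithm 1, without fitness shaping/ADAM).
    fitness_fn(theta_c, theta_p) -> (F_c, F_p): environmental return for the
    controller, cumulative controller-assigned reward for the primitive."""
    rng = rng or np.random.default_rng()
    eps_c = rng.standard_normal((n, theta_c.size))   # controller perturbations
    eps_p = rng.standard_normal((n, theta_p.size))   # primitive perturbations
    F_c, F_p = np.empty(n), np.empty(n)
    for i in range(n):
        F_c[i], F_p[i] = fitness_fn(theta_c + sigma * eps_c[i],
                                    theta_p + sigma * eps_p[i])
    # theta_{t+1} = theta_t + alpha * (1/(n*sigma)) * sum_i F_i * eps_i
    theta_c = theta_c + alpha * (F_c @ eps_c) / (n * sigma)
    theta_p = theta_p + alpha * (F_p @ eps_p) / (n * sigma)
    return theta_c, theta_p
```

With a simple quadratic fitness (F = -||θ||² for each policy), repeated generations drive both parameter vectors toward the optimum at zero, illustrating the two independent gradient estimates of lines 7-8.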
The paper presents the integration of evolution strategies with hierarchical reinforcement learning. Owing to the natural characteristics of ES algorithms, the method is also highly scalable, so the authors name it SHES (Scalable Hierarchical Evolution Strategies). The algorithm is tested on two robot locomotion and navigation tasks.
Evolution Strategies as an Alternate Learning method for Hierarchical Reinforcement Learning
1 INTRODUCTION. Reinforcement learning (RL) has been used to create artificially intelligent agents for tasks ranging from robot locomotion (Haarnoja et al., 2018) to video games such as StarCraft (Vinyals et al., 2019) and board games such as chess and Go (Silver et al., 2018). Many such agents use Markov Decision Process (MDP) based learning methods, such as Deep Q-Networks (DQNs) (Mnih et al., 2015) and the policy-gradient family of methods (Sutton et al., 1999a; Sutton & Barto, 2018). MDP- and gradient-based RL methods are also used by hierarchical RL (HRL) algorithms, a class of RL algorithms that excel at challenging RL problems by decomposing them into sub-tasks, mimicking the way humans build new skills on top of existing simpler skills. HRL methods have seen success in solving some of the hardest RL environments, such as Montezuma's Revenge (Vezhnevets et al., 2017; Badia et al., 2020), and in generating complex robot behaviours (Nachum et al., 2018). Another area of reinforcement learning that has enjoyed recent success is evolution strategies (ES), a family of black-box evolutionary optimization techniques which Salimans et al. (2017) showed to be competitive with non-hierarchical MDP-based RL methods in the robot locomotion and Atari domains. The success of such approaches has led to wider use of ES for tackling RL problems, but it has yet to be used to solve challenging HRL problems. ES has been used as a black-box optimizer for a multitude of problems, such as minimizing the drag of 3D bodies (Beyer & Schwefel, 2002), optimizing designs in structural and mechanical engineering problems (Datoussaïd et al., 2006), robot locomotion (Salimans et al., 2017; Conti et al., 2017; Katona et al., 2021) and loss-function optimization (Gonzalez & Miikkulainen, 2020).
This paper proposes applying evolution strategies to hierarchical reinforcement learning. The high-level controller sets goals and the low-level primitive learns to move to the goal; the two levels have different reward functions. The proposed method is evaluated on two tasks, Ant Maze and Ant Gather, and outperforms the baselines in one of the experiments.
PSA-GAN: Progressive Self Attention GANs for Synthetic Time Series
1 INTRODUCTION. In the past years, methods such as (Salinas et al., 2020; Franceschi et al., 2019; Kurle et al., 2020; de Bézenac et al., 2020; Oreshkin et al., 2019; Rasul et al., 2021; Cui et al., 2016; Wang et al., 2016) have consistently showcased the effectiveness of deep learning in time series analysis tasks. Although these deep-learning-based methods are effective when sufficient clean data is available, this assumption is not always met in practice. For example, sensor outages can cause gaps in IoT data, which might render the data unusable for machine learning applications (Zhang et al., 2019b). An additional problem is that time series panels often have insufficient size for forecasting tasks, leading to research in meta-learning for forecasting (Oreshkin et al., 2020b). Cold starts are another common problem in time series forecasting, where some time series have little or no data (like a new product in a demand-forecasting use case). Thus, designing flexible and task-independent models that generate synthetic but realistic time series for arbitrary tasks poses an important challenge. Generative adversarial networks (GANs) are a flexible model family that has had success in other domains. However, for their success to carry over to time series, synthetic time series data must be of realistic length, which current state-of-the-art synthetic time series models struggle to generate because they often rely on recurrent networks to capture temporal dynamics (Esteban et al., 2017; Yoon et al., 2019). In this work, we make three contributions: i) We propose PSA-GAN, a progressively growing, convolutional time series GAN augmented with self-attention (Karras et al., 2018; Vaswani et al., 2017). PSA-GAN scales to long time series because the progressively growing architecture starts by modeling coarse-grained time series features and moves towards modeling fine-grained details during training.
The self-attention mechanism captures long-range dependencies in the data (Zhang et al., 2019a). ii) We show empirically that PSA-GAN samples are of sufficient quality and length to boost several downstream forecasting tasks: far-forecasting and data imputation during inference, data imputation of stretches of missing values during training, forecasting under cold-start conditions, and data augmentation. Furthermore, we show that PSA-GAN can be used as a forecasting model and has competitive performance when using the same context information as an established baseline. iii) Finally, we propose a Fréchet Inception Distance (FID)-like score (Salimans et al., 2016), Context-FID, leveraging unsupervised time series embeddings (Franceschi et al., 2019). We show that the lowest-scoring models correspond to the best-performing models in our downstream tasks, and that the Context-FID score correlates with the downstream forecasting performance of the GAN model (measured by normalized root mean squared error). Therefore, Context-FID could be a useful general-purpose tool for selecting GAN models for downstream applications. We structure this work as follows: we discuss related work in Section 2 and introduce the model in Section 3. In Section 4, we evaluate our proposed GAN model using the proposed Context-FID score and through several downstream forecasting tasks. We also directly evaluate our model as a forecasting algorithm and perform an ablation study. Section 5 concludes this manuscript. 2 RELATED WORK. GANs (Goodfellow et al., 2014) are an active area of research (Karras et al., 2019; Yoon et al., 2019; Engel et al., 2019; Lin et al., 2017; Esteban et al., 2017; Brock et al., 2019) that has recently been applied to the time series domain (Esteban et al., 2017; Yoon et al., 2019) to synthesize data (Takahashi et al., 2019; Esteban et al., 2017) and for forecasting tasks (Wu et al., 2020b).
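The Fréchet-distance computation underlying an FID-like score such as the Context-FID introduced above can be sketched as follows. This is a simplified sketch: it fits Gaussians with diagonal covariance to avoid the matrix square root of the full FID formula, and it takes the embeddings as given, whereas Context-FID obtains them from an unsupervised time series encoder (Franceschi et al., 2019).

```python
import numpy as np

def frechet_distance_diag(emb_real, emb_fake):
    """FID-style score between two embedding sets (rows = samples).
    Diagonal-covariance simplification of
      d^2 = ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^{1/2}),
    which for diagonal covariances reduces per-dimension to
      var_r + var_f - 2 sqrt(var_r * var_f)."""
    mu_r, mu_f = emb_real.mean(axis=0), emb_fake.mean(axis=0)
    var_r, var_f = emb_real.var(axis=0), emb_fake.var(axis=0)
    mean_term = ((mu_r - mu_f) ** 2).sum()
    cov_term = (var_r + var_f - 2.0 * np.sqrt(var_r * var_f)).sum()
    return float(mean_term + cov_term)
```

In use, `emb_real` would come from encoding real windows in their context and `emb_fake` from encoding synthetic windows; identical distributions score near zero, and a pure mean shift of 1 per dimension contributes exactly the squared shift.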
Many time series GAN architectures use recurrent networks to model temporal dynamics (Mogren, 2016; Esteban et al., 2017; Yoon et al., 2019). Modeling long-range dependencies and scaling recurrent networks to greater lengths is inherently difficult and limits the application of time series GANs to short sequence lengths (fewer than 100 time steps) (Yoon et al., 2019; Esteban et al., 2017). One way to achieve longer realistic synthetic time series is to employ convolutional (van den Oord et al., 2016; Bai et al., 2018; Franceschi et al., 2019) and self-attention architectures (Vaswani et al., 2017). Convolutional architectures are able to learn relevant features from raw time series data (van den Oord et al., 2016; Bai et al., 2018; Franceschi et al., 2019), but are ultimately limited to local receptive fields and can only capture long-range dependencies via many stacked convolutional layers. Self-attention can bridge this gap and allows modeling of long-range dependencies from convolutional feature maps, which has been a successful approach in the image domain (Zhang et al., 2019a) and in time series forecasting (Li et al., 2019; Wu et al., 2020a). Another technique to achieve long sample lengths is progressive growing, which successively increases the resolution by adding layers to the generator and discriminator during training (Karras et al., 2018). Additionally, the synthetic time series should preserve the dynamics of the real data. This can be achieved by using the latent state of an autoencoder network (Yoon et al., 2019). In PSA-GAN, we explore whether unsupervised techniques could instead provide meaningful embeddings that act as an additional signal for the GAN to model realistic time series. Our proposal, PSA-GAN, synthesizes progressive growing, convolutions, self-attention, and embeddings into a novel architecture particularly geared towards time series.
Another line of work in the time series field focuses on developing suitable loss functions for modeling financial time series with GANs, where specific challenges include heavy-tailed distributions, volatility clustering and absence of autocorrelations, among others (Cont, 2001; Eckerli & Osterrieder, 2021). To this end, several models such as QuantGAN (Wiese et al., 2020), (Conditional) SigWGAN (Ni et al., 2020; 2021) and DAT-GAN (Sun et al., 2020) have been proposed (see (Eckerli & Osterrieder, 2021) for a review of this field). This line of work targets its own challenges by developing new loss functions for financial time series, which is orthogonal to our work: we focus on neural-network architectures for time series GANs and show their usefulness in the context of time series forecasting. Another challenge is the evaluation of synthetic data. While the computer vision domain uses standard scores like the Inception Score and the Fréchet Inception Distance (FID) (Salimans et al., 2016; Heusel et al., 2018), such universally accepted scores do not exist in the time series field. Thus, researchers rely on a Train on Synthetic, Test on Real setup and assess the quality of the synthetic time series in a downstream classification and/or prediction task (Esteban et al., 2017; Yoon et al., 2019). In this work, we build on this idea and assess GAN models through downstream forecasting tasks. Additionally, we suggest an FID-like score based on unsupervised time series embeddings (Franceschi et al., 2019). Critically, we want to score how well our fixed-length synthetic samples fit into the context of the (often much longer) true time series, which is taken into account by the contrastive training procedure of (Franceschi et al., 2019). As we will later show, the lowest-scoring models correspond to the best-performing models in downstream tasks. 3 MODEL.
Problem formulation. We denote the values of a time series dataset by z_{i,t} ∈ R, where i ∈ {1, 2, ..., N} is the index of the individual time series and t ∈ {1, 2, ..., T} is the time index. Additionally, we consider an associated matrix of time-feature vectors X_{1:T} = (x_1, ..., x_T) ∈ R^{D×T}. Our goal is to model a time series of fixed length τ, Ẑ_{i,t,τ} = (ẑ_{i,t}, ..., ẑ_{i,t+τ−1}), from this dataset using a conditional generator function G and a fixed time point t. Thus, we aim to model Ẑ_{i,t,τ} = G(n, φ(i), X_{t:t+τ−1}), where n ∈ R^τ is a noise vector drawn from a Gaussian distribution with mean zero and variance one, and φ is an embedding function that maps the index of a time series to a vector representation, which is concatenated to each time step of X_{t:t+τ−1}. An overview of the model architecture is shown in Figure 1 and details about the time features are presented in Appendix A.

Spectral-normalised residual self-attention with convolution. The generator and discriminator use a main function m that is a composition of convolution, self-attention and spectral normalisation:

m ∘ f : R^{n_f×l} → R^{n_f×l}, x ↦ γ SA(f(x)) + f(x),   (1)

where f(x) = LR(SN(c(x))) and m(y) = γ SA(y) + y; here c is a one-dimensional convolution operator, LR the LeakyReLU operator (Xu et al., 2015), SN the spectral normalisation operator (Miyato et al., 2018) and SA the self-attention module. The variable n_f is the number of in- and out-channels of c, and l is the length of the sequence. Following the work of (Zhang et al., 2019a), the parameter γ is learnable. It is initialized to zero to allow the network to learn local features directly from the building block f; distant features are incorporated later as the absolute value of γ increases, more heavily weighting the self-attention term SA. The module m is referenced as Residual Self-Attention in Fig. 1 (Right).
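Equation (1) can be sketched in NumPy. This is an illustrative single-head version under several assumptions: spectral normalisation is computed exactly via SVD on the flattened kernel (the paper uses the power-iteration approximation of Miyato et al., 2018), the convolution has kernel size 3 with zero "same" padding, and all weight shapes are made up for the example.

```python
import numpy as np

def spectral_norm(W):
    # divide a 2-D weight by its largest singular value (exact SN)
    return W / np.linalg.svd(W, compute_uv=False)[0]

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def conv1d_same(x, W):
    """x: (n_f_in, l); W: (n_f_out, n_f_in, k). Zero 'same' padding."""
    n_out, n_in, k = W.shape
    xp = np.pad(x, ((0, 0), (k // 2, k // 2)))
    out = np.zeros((n_out, x.shape[1]))
    for j in range(x.shape[1]):
        out[:, j] = np.einsum('oik,ik->o', W, xp[:, j:j + k])
    return out

def self_attention(y, Wq, Wk, Wv):
    """Single-head attention over the length dimension of y: (n_f, l)."""
    q, k, v = Wq @ y, Wk @ y, Wv @ y
    att = q.T @ k / np.sqrt(q.shape[0])          # (l, l) scores
    att = np.exp(att - att.max(1, keepdims=True))
    att = att / att.sum(1, keepdims=True)        # softmax over keys
    return v @ att.T

def f(x, W_conv):
    """f(x) = LR(SN(c(x))); SN applied to the flattened kernel (simplification)."""
    n_out = W_conv.shape[0]
    Wn = spectral_norm(W_conv.reshape(n_out, -1)).reshape(W_conv.shape)
    return leaky_relu(conv1d_same(x, Wn))

def m_of_f(x, W_conv, Wq, Wk, Wv, gamma=0.0):
    """Equation (1): gamma * SA(f(x)) + f(x). With gamma initialized to zero
    the block is purely convolutional, as described in the text."""
    fx = f(x, W_conv)
    return gamma * self_attention(fx, Wq, Wk, Wv) + fx
```

With γ = 0 the block reduces exactly to f(x), matching the initialization strategy that lets the network learn local features first.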
Downscaling and Upscaling. The following sections mention upscaling (UP) and downscaling (DOWN) operators that double and halve the length of the time series, respectively. In this work, the upscaling operator is linear interpolation and the downscaling operator is average pooling.

PSA-GAN. PSA-GAN is a progressively growing GAN (Karras et al., 2018); thus, trainable modules are added during training. Hereby, we model the generator and discriminator as compositions of functions: G = g_{L+1} ∘ ... ∘ g_1 and D = d_1 ∘ ... ∘ d_{L+1}, where each function g_i and d_i for i ∈ [1, L+1] corresponds to a module of the generator and discriminator.

GENERATOR. As a preprocessing step, we first map the concatenated input [n, φ(i), X_{t:t+τ−1}] from a sequence of length τ to a sequence of length 8, denoted by Z̃_0, using average pooling. Then, the first layer of the generator g_1 applies the main function m:

g_1 : ℝ^{n_f×2^3} → ℝ^{n_f×2^3}, Z̃_0 ↦ Z̃_1 = m ∘ f(Z̃_0).    (2)

For i ∈ [2, L], g_i maps an input sequence Z̃_{i−1} to an output sequence Z̃_i by applying an upscaling of the input sequence and the function m ∘ f:

g_i : ℝ^{n_f×2^{i+1}} → ℝ^{n_f×2^{i+2}}, Z̃_{i−1} ↦ Z̃_i = m ∘ f(UP(Z̃_{i−1})).    (3)

The output of g_i is concatenated back to the time features X_{t:t+τ−1} and forwarded to the next block. Lastly, the final layer of the generator g_{L+1} reshapes the multivariate sequence Z̃_L to a univariate time series Ẑ_{i,t,τ} of length τ = 2^{L+3} using a one-dimensional convolution and spectral normalisation.

DISCRIMINATOR. The architecture of the discriminator mirrors the architecture of the generator. It maps the generator's output Ẑ_{i,t,τ} and the time features X_{t:t+τ−1} to a score d.
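As a sketch (function names are ours, not the paper's), the UP and DOWN operators and the resulting length-doubling schedule can be written in a few lines of NumPy; UP is linear interpolation and DOWN is average pooling with window 2, as stated above.

```python
import numpy as np

def up(x):
    """UP: double the length of x (nf, l) by linear interpolation."""
    nf, l = x.shape
    old = np.arange(l)
    new = np.linspace(0, l - 1, 2 * l)
    return np.stack([np.interp(new, old, x[c]) for c in range(nf)])

def down(x):
    """DOWN: halve the length of x (nf, l) by average pooling (window 2)."""
    nf, l = x.shape
    return x.reshape(nf, l // 2, 2).mean(axis=2)

# Length schedule of the generator: g1 works at length 2^3 = 8 and each
# subsequent block doubles the length with UP before applying m ∘ f.
x = np.arange(16, dtype=float).reshape(2, 8)
lengths = [x.shape[1]]
z = x
for _ in range(3):          # three upscaling blocks as an example
    z = up(z)
    lengths.append(z.shape[1])
```

Both operators preserve the channel dimension; only the time axis is rescaled, which is what lets the discriminator mirror the generator with DOWN in place of UP.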
The first module of the discriminator d_{L+1} uses a one-dimensional convolution c_1 and a LeakyReLU activation function:

d_{L+1} : ℝ^{(1+D)×τ} → ℝ^{n_f×τ}, (Z̃_{L+1}, X_{t:t+τ−1}) ↦ Ỹ_L = SN(LR(c_1(Z̃_{L+1}, X_{t:t+τ−1}))).    (4)

For i ∈ [L+1, 2] (i.e., i decreasing from L+1 to 2), the module d_i applies a downscaling operator and the main function m:

d_i : ℝ^{n_f×2^{i+2}} → ℝ^{n_f×2^{i+1}}, Y_i ↦ Y_{i−1} = DOWN(m(Y_i)).    (5)

The last module d_1 turns its input sequence into a score:

d_1 : ℝ^{n_f×2^3} → ℝ, Y_1 ↦ Y_0 = SN(FC(LR(SN(c(m(Y_1)))))),    (6)

where FC is a fully connected layer.

PSA-GAN-C. We propose another instantiation of PSA-GAN in which we forward to each generator block g_i knowledge about the past. The knowledge here is a subseries Ẑ_{i,t−L_C,L_C} in the range [t−L_C, t−1], with L_C being the context length. The context Ẑ_{i,t−L_C,L_C} is concatenated along the feature dimension, i.e., at each time step, to the output sequence of g_i. It is then passed through a two-layer perceptron to reshape the feature dimension and then added back to the output of g_i.
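A hedged sketch of this conditioning step, reflecting our minimal reading of the description (the weight names `W1`, `W2` and the hidden width are illustrative): the context window is broadcast to every time step, concatenated along the feature axis, passed through a two-layer perceptron that maps the features back to n_f channels, and added residually to the block output.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def inject_context(block_out, context, W1, b1, W2, b2):
    """block_out: (nf, l) output of a generator block g_i.
    context: (LC,) past subseries of length LC.
    W1: (h, nf + LC), W2: (nf, h) -- the two-layer perceptron."""
    nf, l = block_out.shape
    ctx = np.repeat(context[:, None], l, axis=1)      # broadcast to every step
    z = np.concatenate([block_out, ctx], axis=0)      # (nf + LC, l)
    h = leaky_relu(W1 @ z + b1[:, None])
    delta = W2 @ h + b2[:, None]                      # back to (nf, l)
    return block_out + delta                          # residual update

rng = np.random.default_rng(0)
nf, l, LC, hdim = 4, 8, 3, 6
block_out = rng.normal(size=(nf, l))
context = rng.normal(size=LC)
W1, b1 = rng.normal(size=(hdim, nf + LC)), np.zeros(hdim)
W2, b2 = rng.normal(size=(nf, hdim)), np.zeros(nf)
cond = inject_context(block_out, context, W1, b1, W2, b2)
```

The residual form means the conditioning can start as a small perturbation of the unconditional block output and grow during training.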
The authors propose a new GAN-based algorithm for time series synthesis. They use progressive growing of the GAN architecture to improve performance and self-attention to enhance the expressive capability of the networks. The experimental results validate the superiority of the proposed PSA-GAN algorithm.
PSA-GAN: Progressive Self Attention GANs for Synthetic Time Series
1 INTRODUCTION. In the past years, methods such as (Salinas et al., 2020; Franceschi et al., 2019; Kurle et al., 2020; de Bézenac et al., 2020; Oreshkin et al., 2019; Rasul et al., 2021; Cui et al., 2016; Wang et al., 2016) have consistently showcased the effectiveness of deep learning in time series analysis tasks. Although these deep-learning-based methods are effective when sufficient clean data is available, this assumption is not always met in practice. For example, sensor outages can cause gaps in IoT data, which might render the data unusable for machine learning applications (Zhang et al., 2019b). An additional problem is that time series panels often have insufficient size for forecasting tasks, leading to research in meta-learning for forecasting (Oreshkin et al., 2020b). Cold starts are another common problem in time series forecasting, where some time series have little or no data (like a new product in a demand forecasting use case). Thus, designing flexible and task-independent models that generate synthetic but realistic time series for arbitrary tasks poses an important challenge. Generative adversarial networks (GANs) are a flexible model family that has had success in other domains. However, for their success to carry over to time series, synthetic time series data must be of realistic length, which current state-of-the-art synthetic time series models struggle to generate because they often rely on recurrent networks to capture temporal dynamics (Esteban et al., 2017; Yoon et al., 2019). In this work, we make three contributions: i) We propose PSA-GAN, a progressively growing, convolutional time series GAN model augmented with self-attention (Karras et al., 2018; Vaswani et al., 2017). PSA-GAN scales to long time series because the progressive growing architecture starts by modeling coarse-grained time series features and moves towards modeling fine-grained details during training.
The self-attention mechanism captures long-range dependencies in the data (Zhang et al., 2019a). ii) We show empirically that PSA-GAN samples are of sufficient quality and length to boost several downstream forecasting tasks: far-forecasting and data imputation during inference, data imputation of missing value stretches during training, forecasting under cold-start conditions, and data augmentation. Furthermore, we show that PSA-GAN can be used as a forecasting model and has competitive performance when using the same context information as an established baseline. iii) Finally, we propose a Fréchet Inception Distance (FID)-like score (Salimans et al., 2016), Context-FID, leveraging unsupervised time series embeddings (Franceschi et al., 2019). We show that the lowest-scoring models correspond to the best-performing models in our downstream tasks and that the Context-FID score correlates with the downstream forecasting performance of the GAN model (measured by normalized root mean squared error). Therefore, the Context-FID score could be a useful general-purpose tool to select GAN models for downstream applications. We structure this work as follows: we discuss related work in Section 2 and introduce the model in Section 3. In Section 4, we evaluate our proposed GAN model using the proposed Context-FID score and through several downstream forecasting tasks. We also directly evaluate our model as a forecasting algorithm and perform an ablation study. Section 5 concludes this manuscript.

2 RELATED WORK. GANs (Goodfellow et al., 2014) are an active area of research (Karras et al., 2019; Yoon et al., 2019; Engel et al., 2019; Lin et al., 2017; Esteban et al., 2017; Brock et al., 2019) that has recently been applied to the time series domain (Esteban et al., 2017; Yoon et al., 2019) to synthesize data (Takahashi et al., 2019; Esteban et al., 2017) and for forecasting tasks (Wu et al., 2020b).
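The FID-style computation behind such a score can be sketched as follows: fit a Gaussian to the real and to the synthetic embedding sets and take the Fréchet distance between them. In this sketch the embeddings are assumed to be given (in Context-FID they would come from the unsupervised encoder of Franceschi et al. (2019)); the function name is ours.

```python
import numpy as np

def frechet_distance(emb_a, emb_b):
    """Fréchet distance between Gaussians fitted to two embedding sets.
    d^2 = ||mu_a - mu_b||^2 + Tr(Sa + Sb - 2 (Sa Sb)^{1/2}).
    emb_*: (n_samples, dim)."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    sa = np.cov(emb_a, rowvar=False)
    sb = np.cov(emb_b, rowvar=False)
    # Sa Sb is similar to the PSD matrix Sa^{1/2} Sb Sa^{1/2}, so its
    # eigenvalues are real and non-negative; use them for Tr((Sa Sb)^{1/2}).
    eig = np.linalg.eigvals(sa @ sb)
    tr_sqrt = np.sqrt(np.clip(eig.real, 0.0, None)).sum()
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(sa) + np.trace(sb) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))          # stand-in "real" embeddings
good = rng.normal(size=(500, 4))          # synthetic, same distribution
bad = rng.normal(loc=3.0, size=(500, 4))  # synthetic, shifted distribution
```

Lower is better; a generator whose embedded samples match the real distribution drives the score towards zero.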
Many time series GAN architectures use recurrent networks to model temporal dynamics (Mogren, 2016; Esteban et al., 2017; Yoon et al., 2019). Modeling long-range dependencies and scaling recurrent networks to longer sequences is inherently difficult and limits the application of time series GANs to short sequence lengths (less than 100 time steps) (Yoon et al., 2019; Esteban et al., 2017). One way to achieve longer realistic synthetic time series is to employ convolutional (van den Oord et al., 2016; Bai et al., 2018; Franceschi et al., 2019) and self-attention architectures (Vaswani et al., 2017). Convolutional architectures are able to learn relevant features from the raw time series data (van den Oord et al., 2016; Bai et al., 2018; Franceschi et al., 2019), but are ultimately limited to local receptive fields and can only capture long-range dependencies via many stacked convolutional layers. Self-attention can bridge this gap and allows modeling long-range dependencies from convolutional feature maps, which has been a successful approach in the image domain (Zhang et al., 2019a) and in time series forecasting (Li et al., 2019; Wu et al., 2020a). Another technique to achieve long sample sizes is progressive growing, which successively increases the resolution by adding layers to the generator and discriminator during training (Karras et al., 2018). Additionally, the synthetic time series should preserve the dynamics of the real data. This can be achieved by using the latent state of an autoencoder network (Yoon et al., 2019). In PSA-GAN, we explore whether unsupervised techniques could instead provide meaningful embeddings that can act as an additional signal to the GAN to model realistic time series. Our proposal, PSA-GAN, synthesizes progressive growing with convolutions, self-attention, and embeddings into a novel architecture particularly geared towards time series.
The paper proposes a type of GAN to generate synthetic time series. The authors use the data generated by their proposed model to train forecasting networks, which improves the baselines. They also show that the proposed GAN can itself be used as a forecasting model and that it has competitive performance to baseline models. Finally, the paper proposes an adaptation of the FID score for time series.
This paper proposes a GAN-based approach (PSA-GAN) for generating realistic time-series data that can be used to improve several downstream tasks such as imputation of missing data and forecasting. The generator and discriminator in the proposed GAN framework, PSA-GAN, consist of progressively growing blocks of convolutions and self-attention. The paper also proposes a metric for evaluating the quality of synthetic time series data. Experiments show that the proposed approach achieves better performance than previous GAN-based approaches for time series and also improves downstream forecasting tasks by augmenting forecasting models.
Invariant Causal Mechanisms through Distribution Matching
1 INTRODUCTION. Learning structured representations which capture the underlying causal mechanisms generating data is of central importance for training robust machine learning models (Bengio et al., 2013; Schölkopf et al., 2021). One particular structure the learned representation should capture is invariance to changes in nuisance variables. For example, we may want the representation to be invariant to sensitive attributes such as the race or gender of an individual in order to avoid discrimination or biased decision making in a downstream task (Creager et al., 2019; Locatello et al., 2019; Träuble et al., 2021). While learning invariant representations is thus highly important for fairness applications, it also appears in seemingly unrelated tasks such as domain adaptation (DA) and domain generalization (DG), where one aims to be invariant across the different domains (Muandet et al., 2013; Zemel et al., 2013; Ganin et al., 2016; Peters et al., 2016). For tasks such as DA and DG, invariance across domains or environments amounts to being invariant to the domain index, which thus plays the role of the "sensitive attribute" in this case and typically implies a change in the distribution of the data generating process. Being invariant to the domain index is thus a proxy for being invariant to latent unobserved factors that can change in distribution. Established approaches for enforcing invariance in the learned representation usually aim to learn a representation whose statistical distribution is independent of the sensitive attribute, e.g., by including an adversary during training (Ganin et al., 2016; Xie et al., 2017). As an adversary is essentially a parametric distributional distance, other approaches minimize different distribution distances, such as maximum mean discrepancy (MMD) (Louizos et al., 2017; Li et al., 2018b) or optimal transport (OT) based distances (Shen et al., 2018; Damodaran et al., 2018).
To enforce independence, these methods add a regularizer to the loss that consists of the pairwise distributional distance between all possible values of the sensitive attribute, i.e., dist(p(z|d), p(z|d′)) for all d, d′ ∈ D. As such, the complexity of the loss grows quadratically in the size of the support of the sensitive attribute, which can limit the applicability of these models when the support of D is large (Koh et al., 2021). Despite the importance of learning invariant representations and their potential societal impact in the medical domain or fair decision making, most established approaches are still based on heuristics and specialized for the different tasks at hand. We take first steps towards a unifying framework by viewing invariance as a property of a causal process (Pearl, 2009; Peters et al., 2017), and our key contributions can be summarized as follows:

• We introduce a unifying causal framework for invariant representation learning, which allows us to derive a new algorithm to enforce invariance through distribution matching. One advantage of our algorithm is that only one distributional distance between two batches needs to be computed at each step, irrespective of the size of the support of D.

• We define the notion of a style variable and present necessary and sufficient conditions under which being invariant to the domain index actually leads to invariance to the style variables. We argue that our proposal naturally captures most of the existing invariant representation learning tasks and datasets.

• Finally, we conduct a large number of experiments across different tasks and datasets, demonstrating the versatility of our framework. We obtain competitive results on the task of learning fair representations, and we are able to significantly boost the performance of existing models using our proposed algorithm for the task of DG.

2 INVARIANT REPRESENTATION LEARNING ACROSS TASKS.
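For concreteness, here is a minimal NumPy sketch of the pairwise regularizer (function names and the RBF bandwidth are our choices, and MMD stands in for any distributional distance): it needs one distance evaluation per domain pair, i.e. O(|D|^2) evaluations per step, which is the quadratic cost discussed above.

```python
import numpy as np

def mmd2_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel.
    x: (n, d), y: (m, d) batches of representations z."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def pairwise_penalty(batches, sigma=1.0):
    """Pairwise regularizer over all domain pairs: O(|D|^2) distances."""
    total = 0.0
    for i in range(len(batches)):
        for j in range(i + 1, len(batches)):
            total += mmd2_rbf(batches[i], batches[j], sigma)
    return total

rng = np.random.default_rng(0)
z_d1 = rng.normal(size=(100, 2))            # representations from domain 1
z_d2 = rng.normal(size=(100, 2))            # domain 2, same distribution
z_d3 = rng.normal(loc=5.0, size=(100, 2))   # domain 3, shifted distribution
```

An alternative in the spirit of the single-distance algorithm described above would draw two batches (rather than one per domain) and call `mmd2_rbf` once per step, keeping the cost independent of |D|.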
In this section , we highlight how the learning of an invariant representation is a goal that is ( implicitly ) pursued in a large spectrum of machine learning tasks .
Domain Adaptation The range of techniques used in Domain Adaptation and the different assumptions they follow are vast ( see Wilson and Cook ( 2020 ) for a more in-depth review ) , so we concentrate here on a subset of the literature . A direction that is widely followed in DA , and which is the closest to our framework , is the alignment of the latent distributions of the source and target datasets . Under the covariate shift assumption , which assumes that the labeling function P ( Y |X ) is fixed and that only P ( X ) varies across environments , the goal is to learn a representation h ( X ) that is invariant across source and target and that remains useful for learning a discriminator on the source dataset . Ganin et al . ( 2016 ) use a domain adversarial network to align the two latent spaces , whereas others use distributional divergences directly , such as MMD ( Baktashmotlagh et al. , 2016 ) , Wasserstein distances , and optimal transport in general ( Shen et al. , 2018 ; Damodaran et al. , 2018 ; Redko et al. , 2017 ) . DA under different assumptions , such as the case where both P ( Y ) and P ( X|Y ) vary , has also been studied ( Gong et al. , 2016 ) .
Domain Generalization Though very similar to DA , DG differs in one significant way : the test domain is not observed at training time . As such , it is a much harder task , as the test domain could exhibit arbitrary shifts in distribution , and the learned model is supposed to handle any reasonable such shift . Without any assumptions , there is little hope of obtaining models that actually generalize . Nevertheless , many inductive biases and models have been proposed , which rely on stronger assumptions than classical empirical risk minimization ( ERM ) ( Vapnik , 1998 ) .
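The covariate shift assumption can be made tangible with a toy simulation ( the distributions below are hypothetical ) : the labeling function P ( Y |X ) is identical in both environments , while only P ( X ) shifts :

```python
import numpy as np

rng = np.random.default_rng(0)

def label(x):
    """Shared labeling function P(Y|X): the same rule in every environment."""
    return (x > 0.5).astype(int)

# Only P(X) changes across environments (covariate shift).
x_source = rng.normal(loc=0.0, scale=1.0, size=10_000)
x_target = rng.normal(loc=1.0, scale=1.0, size=10_000)

y_source, y_target = label(x_source), label(x_target)

# Same conditional, different marginals: the label rates differ only because
# the input distribution shifted, not because the labeling rule changed.
print(y_source.mean(), y_target.mean())
```

A representation that is invariant to this shift in P ( X ) , while remaining predictive of Y , is exactly what the alignment methods above aim for .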
Given its similarity to DA , similar models have been proposed for DG , and most models work for both tasks . Nevertheless , until recently ( Albuquerque et al. , 2019 ; Deng et al. , 2020 ) , theoretical justification , e.g. , for minimizing the distance between pairs of latent variables coming from different domains , was missing , as results from domain adaptation assume that the test domain is observed . Without further assumptions , there is no theoretical reason to infer that a constant distribution of the latent variables across the training domains leads to better generalization on the test domains . Indeed , many benchmarks ( Gulrajani and Lopez-Paz , 2020 ; Koh et al. , 2021 ) show that it is difficult to create algorithms that consistently beat ERM across different tasks . Invariant representations for DG were first proposed by Muandet et al . ( 2013 ) . This idea was then extended to other distributional distances , such as MMD ( Li et al. , 2018b ) , adversarial distances ( Li et al. , 2018d ; Deng et al. , 2020 ; Albuquerque et al. , 2019 ) , and optimal transport ( Zhou et al. , 2020 ) ( see Table 1 ) . On the theoretical side , both Albuquerque et al . ( 2019 ) and Deng et al . ( 2020 ) attempt to give theoretical grounding to the use of an adversarial loss by deriving bounds similar to those that exist for DA .
Domain Generalization and Causal Inference Many links between causal inference and domain generalization have been drawn , arguing that domain generalization is inherently a causal discovery task . In particular , causal inference can be seen as a form of distributional robustness ( Meinshausen , 2018 ) . In regression , one way of ensuring interventional robustness is to identify the causal parents of Y , whose relation to Y is stable . This can be achieved by finding a feature representation such that the optimal classifiers are approximately the same across domains ( Peters et al. , 2016 ; Rojas-Carulla et al. , 2018 ; Arjovsky et al. , 2019 ) .
Unfortunately , most of these models do not really apply to the classification of structured data such as images , where the classification is predominantly anti-causal and where the desired invariance is not towards the pixels themselves but towards the unobserved generating factors .

Table 1 : Review of invariance across different tasks . The general loss is defined as ( 1/n ) ∑_{i=0}^{n} L ( x_i , y_i ) + λ · dist ( z_1^n , z_{n+1}^N ) , with the following distance equations :
Adversarial : ( 1/n ) ∑_{i=0}^{n} log ( 1 / G_d ( z_i ) ) + ( 1/n′ ) ∑_{i=n+1}^{N} log ( 1 / ( 1 − G_d ( z_i ) ) )
MMD : ‖ ( 1/n ) ∑_{i=0}^{n} φ ( z_i ) − ( 1/n′ ) ∑_{i=n+1}^{N} φ ( z_i ) ‖_H
Wasserstein : ( 1/n ) ∑_{i=0}^{n} G_d ( z_i ) − ( 1/n′ ) ∑_{i=n+1}^{N} G_d ( z_i )
Domain Adaptation : Adversarial : Ganin et al . ( 2016 ) ; Hoffman et al . ( 2017 ) . MMD : Baktashmotlagh et al . ( 2016 ) . Wasserstein : Shen et al . ( 2018 ) ; Damodaran et al . ( 2018 ) .
Domain Generalization : Adversarial : Ganin et al . ( 2016 ) ; Albuquerque et al . ( 2019 ) ; Li et al . ( 2018d ; c ) ; Deng et al . ( 2020 ) . MMD : Li et al . ( 2018b ) . Wasserstein : Zhou et al . ( 2020 ) .
Fair Representation Learning : Adversarial : Edwards and Storkey ( 2015 ) ; Xie et al . ( 2017 ) ; Roy and Boddeti ( 2019 ) . MMD : Louizos et al . ( 2017 ) . Wasserstein : Jiang et al . ( 2020 ) .

In a similar setting to ours , Heinze-Deml and Meinshausen ( 2021 ) tackle the task of image classification and propose a new model . A significant difference to our work is that they rely on the observation of individual instances across different views , i.e. , the images are clustered by an ID .
Fair Representation Learning Fair representation learning can also be viewed as an invariant representation learning task . This task consists of learning a representation that maximizes usefulness towards predicting a target variable , while minimizing information leakage about a sensitive attribute ( e.g. , gender , age , race ) . The seminal work in this field is Zemel et al . ( 2013 ) , which aims at learning a multinomial random variable Z , with associated vectors v_k , such that the representation Z is fair .
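The three distance equations of Table 1 can be written as plain estimators over two batches of representations ( a sketch under our own conventions : G_d stands for a learned discriminator or critic and φ for a feature map , both passed in as callables ; these helper names are ours ) :

```python
import numpy as np

def adversarial(z_a, z_b, G_d):
    """(1/n) sum log 1/G_d(z_i) + (1/n') sum log 1/(1 - G_d(z_j))."""
    return float(np.mean(-np.log(G_d(z_a))) + np.mean(-np.log(1.0 - G_d(z_b))))

def mmd(z_a, z_b, phi):
    """|| (1/n) sum phi(z_i) - (1/n') sum phi(z_j) ||_H, feature-map form."""
    return float(np.linalg.norm(phi(z_a).mean(axis=0) - phi(z_b).mean(axis=0)))

def wasserstein(z_a, z_b, G_d):
    """(1/n) sum G_d(z_i) - (1/n') sum G_d(z_j), critic (dual) form."""
    return float(np.mean(G_d(z_a)) - np.mean(G_d(z_b)))
```

Each estimator is then combined with the task loss as L + λ · dist ( z_1^n , z_{n+1}^N ) , matching the general loss stated in Table 1 .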
More recent work directly learns a continuous variable Z that carries minimal information about the sensitive attribute , either through minimizing the MMD distance ( Louizos et al. , 2017 ) , through adversarial training ( Edwards and Storkey , 2015 ; Xie et al. , 2017 ; Roy and Boddeti , 2019 ) , or through a Wasserstein distance ( Jiang et al. , 2020 ) .
3 INVARIANCE AS THE PROPERTY OF A CAUSAL PROCESS .
In this section , we first lay out the assumptions on the causal process underlying the data generating mechanism using a structural causal model ( SCM ) type graph from causality theory ( Pearl , 2009 ) , following the causal view of learning disentangled representations ( Suter et al. , 2019 ) , as illustrated in Figure 1 . G_1 to G_k represent all the factors of variation that generate the data , i.e. , there exists a ( one-to-one ) function such that given all the factors , X is fixed : X ← g ( G_1 , . . . , G_k ) . Y is a target value that we may want to predict in a downstream task and is either known ( supervised setting ) or unobserved ( unsupervised setting ) . D is another confounder that we want to be invariant to . It can be a domain index , as in DA and DG , or a sensitive attribute , as in fairness . We will assume for now that D does not have an effect on Y . Lastly , the generative factors G_i are assumed not to have any causal relations among them , and any correlation between some factors may only come from a hidden confounder . This assumption is similar to the assumptions in Suter et al . ( 2019 ) . Furthermore , in this work , we assume that the label Y and D directly have an effect on the latent generating factors . In this setting , Y and D are thus independent . Given our data generating framework , we can now give some definitions , in particular the notion of style generating factors .
Definition 3.1 . We call style variables the set of variables G that are children of D in the DAG . We denote this set S .
Observation 3.1 .
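The generative process described above can be sketched as ancestral sampling from the DAG of Figure 1 ( the concrete distributions below are illustrative assumptions on our part , not taken from the paper ) :

```python
import numpy as np

def sample_scm(n, rng):
    """Ancestral sampling from the DAG: Y and D are independent roots,
    D drives the style factors S, Y drives the content factors, and
    X is a deterministic function g of all generating factors."""
    y = rng.integers(0, 2, size=n)      # target label (root node)
    d = rng.integers(0, 3, size=n)      # domain index / sensitive attribute (root node)
    # Style factors S: the children of D in the DAG (Definition 3.1).
    s = d + 0.5 * rng.normal(size=n)
    # Content factors: children of Y, untouched by D.
    c = y + 0.5 * rng.normal(size=n)
    # X is fixed given all generating factors: X <- g(G_1, ..., G_k).
    x = np.stack([c + s, c - s], axis=1)
    return x, y, d, s

x, y, d, s = sample_scm(1000, np.random.default_rng(0))
```

In this toy graph , a representation that only uses the content direction ( x[:,0] + x[:,1] ) / 2 ignores S and is therefore unaffected by interventions on D .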
X and Z are independent of D given S , as they are d-separated from D by the set S in the graph . To the best of our knowledge , there is no consistent and widely accepted definition of an invariant representation yet . Using the above framework , we propose the following definition :
Definition 3.2 . We say that a representation Z is invariant to a variable D if and only if D has no total causal effect on Z .
This definition of invariance is very robust , since it guarantees that no intervention on the variable D can break the independence between Z and D. This is particularly relevant in applications such as fair and private representation learning , as we do not want an intervention on the distribution of a sensitive variable to break the fairness or privacy of a representation . The goal of invariant representation learning can then be described as creating a new variable Z = f ( X ) such that D has no total causal effect on Z . In a way , we can view it as adding a new variable to the SCM and learning its structural equation . If we assume that our distribution follows our proposed SCM ( Figure 1 ) , then the absence of a total causal effect is equivalent to independence , as D has no parent in the DAG . We use this assumption of D having no causal parents in Theorem 3.1 .
Theorem 3.1 . Under the assumptions of the graph in Figure 1 : Z is independent of D ( equivalently , D has no total causal effect on Z , or p ( z|d ) = p ( z|d′ ) for all d , d′ ) ⇐⇒ p ( z|do ( d = N_d ) ) = p ( z ) for all N_d ( interventions on the distribution of D ) .
In summary , Theorem 3.1 states that , under the right assumptions , having no total causal effect is equivalent to independence , and that having a constant marginal distribution of Z under different mixtures of D leads to independence . The proof is presented in Appendix B .
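The backward direction of Theorem 3.1 can be sketched in one line ( the full proof is in Appendix B ) : since D has no parents in the graph , the intervention do ( d = N_d ) merely replaces the marginal of D , so if p ( z|d ) = p ( z ) for all d ,

```latex
p\big(z \mid \mathrm{do}(d = N_d)\big)
  = \sum_{d} N_d(d)\, p(z \mid d)
  = p(z) \sum_{d} N_d(d)
  = p(z) .
```

Conversely , choosing N_d to be a point mass on each value d recovers p ( z|d ) = p ( z ) for every d .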
In the case where the full support of D is observed during training , we have the guarantee that independence achieved in training , I ( Z ; D ) = 0 , will hold in any test setting . However , in DG for example , we observe a new value of D at test time . Having p ( z|d_i^train ) = p ( z|d_j^train ) ∀i , j does not guarantee that p ( z|d^test ) = p ( z|d_i^train ) . Indeed , in this setting , the variable D works more like an index , where each value indicates a domain in which the distribution of X has changed . Invariance to the variable D is thus a proxy for being invariant to the unobserved style variables ( see Definition 3.1 ) .
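This failure mode admits a tiny numeric illustration ( hypothetical Gaussian domains , with z standing in for the learned representation ) : two training domains share the same distribution of z , yet an unseen test domain can still differ :

```python
import numpy as np

rng = np.random.default_rng(0)

# p(z | d_1^train) == p(z | d_2^train): matched during training.
z_train1 = rng.normal(0.0, 1.0, size=5000)
z_train2 = rng.normal(0.0, 1.0, size=5000)
# p(z | d^test): an unseen domain whose style has shifted.
z_test = rng.normal(2.0, 1.0, size=5000)

def mean_gap(a, b):
    """Absolute difference of batch means, a crude distributional gap."""
    return abs(float(a.mean() - b.mean()))

gap_train = mean_gap(z_train1, z_train2)  # small: invariance held in training
gap_test = mean_gap(z_train1, z_test)     # large: no guarantee at test time
```

Matching all pairs of training domains drives `gap_train` to zero , but says nothing about `gap_test` , which is exactly why invariance to D is only a proxy for invariance to the underlying style variables .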
Invariant Causal Mechanisms through Distribution Matching
1 INTRODUCTION . Learning structured representations which capture the underlying causal mechanisms generating data is of central importance for training robust machine learning models ( Bengio et al. , 2013 ; Schölkopf et al. , 2021 ) . One particular structure the learned representation should capture is invariance to changes in nuisance variables . For example , we may want the representation to be invariant to sensitive attributes such as the race or gender of an individual in order to avoid discrimination or biased decision making in a downstream task ( Creager et al. , 2019 ; Locatello et al. , 2019 ; Träuble et al. , 2021 ) . While learning invariant representations is thus highly important for fairness applications , it also appears in seemingly unrelated tasks such as domain adaptation ( DA ) and domain generalization ( DG ) , where one aims to be invariant across the different domains ( Muandet et al. , 2013 ; Zemel et al. , 2013 ; Ganin et al. , 2016 ; Peters et al. , 2016 ) . For tasks such as DA and DG invariance across domains or environments implies to being invariant to the domain index , which thus is the `` sensitive attribute '' in this case and typically implies a change in the distribution of the data generating process . Being invariant to the domain index is thus a proxy to being invariant to latent unobserved factors that can change in distribution . Established approaches for enforcing invariance in the learned representation usually aim to learn a representation whose statistical distribution is independent to the sensitive attribute e.g. , by including an adversary during training ( Ganin et al. , 2016 ; Xie et al. , 2017 ) . As an adversary is essentially a parametric distributional distance , other approaches minimize different distribution distances , such as maximum mean discrepancy ( MMD ) ( Louizos et al. , 2017 ; Li et al. , 2018b ) , or optimal transport ( OT ) based distances ( Shen et al. , 2018 ; Damodaran et al. , 2018 ) . 
To enforce independence , these methods add a regularizer to the loss that consists in the pairwise distributional distance between all possible combination of the sensitive attribute , i.e. , dist ( p ( z|d ) , p ( z|d′ ) ) ∀d , d′ ∈ D. As such , the complexity of the loss grows quadratically in the size of the support of the sensitive attribute , which can limit the applicability of these models when the support of D is large ( Koh et al. , 2021 ) . Despite the importance of learning invariant representations and their potential societal impact in the medical domain or fair decision making , most established approaches are still based on heuristics and specialized for different tasks at hand . We take first steps towards a unifying framework by viewing invariance as a property of a causal process ( Pearl , 2009 ; Peters et al. , 2017 ) and our key contributions can be summarized as follows : • We introduce a unifying causal framework for invariant representation learning , which allows us to derive a new algorithm to enforce invariance through distribution matching . One advantage of our algorithm is that only one distributional distance between two batches needs to be computed at each step , irrelevant of the size of the support of D. • We define the notion of style variable and present some necessary and sufficient conditions under which being invariant to the domain index actually leads to invariance to the style variables . We argue that our proposal naturally captures most of the existing invariant representation learning tasks and datasets . • Finally , we conduct a large number of experiments across different tasks and datasets , demonstrating the versatility of our framework . We obtain competitive results on the task of learning fair representations and we are able to significantly boost the performance of existing models using our proposed algorithm for the task of DG . 2 INVARIANT REPRESENTATION LEARNING ACROSS TASKS . 
In this section , we highlight how the learning of an invariant representation is a goal that is ( implicitly ) pursued in a large spectrum of machine learning tasks . Domain Adaptation The range of techniques used in Domain Adaptation and the different assumptions followed are vast ( see Wilson and Cook ( 2020 ) for a more in depth review ) . Thus , we here concentrate only on a subset of the literature . A direction that is widely followed in DA , and which is the closest to our framework , is the alignment of the latent distribution of the source and target datasets . Under the covariate shift assumption , which assume that the labeling function P ( Y |X ) is fixed , and that only P ( X ) varies across environments , the goal is then to learn a representation h ( X ) that is invariant across source and target and that remains useful to learn a discriminator on the source dataset . Ganin et al . ( 2016 ) uses a domain adversarial network to align the two latent spaces , whereas others uses distributional divergences directly , such as MMD ( Baktashmotlagh et al. , 2016 ) , Wasserstein and optimal transport in general ( Shen et al. , 2018 ; Damodaran et al. , 2018 ; Redko et al. , 2017 ) . DA under different assumptions , such as the case where both P ( Y ) and P ( X|Y ) , have also been studied ( Gong et al. , 2016 ) . Domain Generalization Though very similar to DA , DG differs in one significant way : the test domain is not observed at training time . As such , it is a way harder task as the test domain could exhibit arbitrary shifts in distribution , and the learned model is supposed to handle any reasonable shifts in distribution . Without any assumptions , there is little hope to obtain models that actually generalizes . Nevertheless , many inductive biases and models have been proposed , which have stronger assumptions than classical empirical risk minimization ( ERM ) ( Vapnik , 1998 ) . 
Given its similarity to DA , similar models have been proposed , and most models work for both tasks . Nevertheless , until recently ( Albuquerque et al. , 2019 ; Deng et al. , 2020 ) , theoretical justification , e.g. , for minimizing the distance between pairs of latent variables coming from different domains , was missing , as results from domain adaptation assumes that the test domain is observed . Without some assumptions , there exists no theoretical reasons to infer that a constant distribution of the latent variables across the training domains leads to better generalization on the test domains . Indeed , many benchmarks ( Gulrajani and Lopez-Paz , 2020 ; Koh et al. , 2021 ) show that it is difficult to create algorithms that consistently beat ERM across different tasks . Invariant representations for DG was first proposed by Muandet et al . ( 2013 ) . This idea was then extended to use other distributional distances , such as MMD ( Li et al. , 2018b ) , Adversarial ( Li et al. , 2018d ; Deng et al. , 2020 ; Albuquerque et al. , 2019 ) , and Optimal Transport ( Zhou et al. , 2020 ) ( see Table 1 ) . On the theoretical side , both Albuquerque et al . ( 2019 ) and Deng et al . ( 2020 ) attempt to give theoretical grounding to the use of an adversarial loss by deriving bounds similar to what exists in DA . Domain Generalization and Causal Inference Many links between causal inference and domain generalization have been made , arguing that domain generalization is inherently a causal discovery task . In particular , causal inference can be seen as a form of distributional robustness ( Meinshausen , 2018 ) . In regression , one way of ensuring interventional robustness is by identifying the causal parents of Y , whose relation to Y is stable . This can be achieved by finding a feature representation such that the optimal classifiers are approximately the same across domains ( Peters et al. , 2016 ; Rojas-Carulla et al. , 2018 ; Arjovsky et al. , 2019 ) . 
Unfortunately , most of these models do not readily apply to the classification of structured data such as images , where the classification is predominantly anti-causal and where the desired invariance is not toward the pixels themselves but toward the unobserved generating factors .

Table 1 : Review of invariance across different tasks . The general loss is defined as $\frac{1}{n}\sum_{i=1}^{n} L(x_i, y_i) + \lambda \cdot \mathrm{dist}(z_1^n, z_{n+1}^N)$ , where dist is one of :
• Adversarial : $\frac{1}{n}\sum_{i=1}^{n} \log \frac{1}{G_d(z_i)} + \frac{1}{n'}\sum_{i=n+1}^{N} \log \frac{1}{1 - G_d(z_i)}$
• MMD : $\left\| \frac{1}{n}\sum_{i=1}^{n} \varphi(z_i) - \frac{1}{n'}\sum_{i=n+1}^{N} \varphi(z_i) \right\|_{\mathcal{H}}$
• Wasserstein : $\frac{1}{n}\sum_{i=1}^{n} G_d(z_i) - \frac{1}{n'}\sum_{i=n+1}^{N} G_d(z_i)$
Domain Adaptation — Adversarial : Ganin et al . ( 2016 ) ; Hoffman et al . ( 2017 ) . MMD : Baktashmotlagh et al . ( 2016 ) . Wasserstein : Shen et al . ( 2018 ) ; Damodaran et al . ( 2018 ) .
Domain Generalization — Adversarial : Ganin et al . ( 2016 ) ; Albuquerque et al . ( 2019 ) ; Li et al . ( 2018d ; c ) ; Deng et al . ( 2020 ) . MMD : Li et al . ( 2018b ) . Wasserstein : Zhou et al . ( 2020 ) .
Fair Representation Learning — Adversarial : Edwards and Storkey ( 2015 ) ; Xie et al . ( 2017 ) ; Roy and Boddeti ( 2019 ) . MMD : Louizos et al . ( 2017 ) . Wasserstein : Jiang et al . ( 2020 ) .

In a similar setting to ours , Heinze-Deml and Meinshausen ( 2021 ) tackle the task of image classification and propose a new model . A significant difference to our work is that they rely on the observation of individual instances across different views , i.e. , the images are clustered by an ID . Fair Representation Learning Fair representation learning can also be viewed as an invariant representation learning task . This task consists of learning a representation that maximizes usefulness for predicting a target variable , while minimizing information leakage about a sensitive attribute ( e.g. , gender , age , race ) . The seminal work in this field is Zemel et al . ( 2013 ) , which aims at learning a multinomial random variable Z , with associated vectors $v_k$ , such that the representation Z is fair .
More recent work directly learns a continuous variable Z that carries minimal information about the sensitive attribute , either by minimizing the MMD distance ( Louizos et al. , 2017 ) , through adversarial training ( Edwards and Storkey , 2015 ; Xie et al. , 2017 ; Roy and Boddeti , 2019 ) , or through a Wasserstein distance ( Jiang et al. , 2020 ) . 3 INVARIANCE AS THE PROPERTY OF A CAUSAL PROCESS . In this section , we first state the assumptions on the causal process underlying the data generating mechanism using a structural causal model ( SCM ) graph from causality theory ( Pearl , 2009 ) , following the causal view of learning disentangled representations ( Suter et al. , 2019 ) , as illustrated in Figure 1 . $G_1$ to $G_k$ represent all the factors of variation that generate the data , i.e. , there exists a ( one-to-one ) function such that given all the factors , X is fixed : $X \leftarrow g(G_1, \dots, G_k)$ . Y is a target value that we may want to predict in a downstream task and is either known ( supervised setting ) or unobserved ( unsupervised ) . D is another confounder that we want to be invariant to . It can be a domain index , as in DA and DG , or a sensitive attribute , as in fairness . We assume for now that D does not have an effect on Y . Lastly , the generative factors $G_i$ are assumed to have no causal relations between them ; any correlation between factors may only come from a hidden confounder . This assumption is similar to the assumptions in Suter et al . ( 2019 ) . Furthermore , in this work , we assume that the label Y and D directly affect the latent generating factors . In this setting , Y and D are thus independent . Given our data generating framework , we can now give some definitions , in particular the notion of style generating factors . Definition 3.1 . We call style variables the set of variables G that are children of D in the DAG . We denote this set S. Observation 3.1 .
X and Z are independent from D given S , as they are d-separated from D by the set S in the graph . To the best of our knowledge , there is no consistent and widely accepted definition of an invariant representation yet . Using the above framework , we propose the following definition : Definition 3.2 . We say that a representation Z is invariant to a variable D if and only if D has no total causal effect on Z . This definition of invariance is very robust , since it guarantees that no intervention on the variable D can break the independence between Z and D. This is particularly relevant in applications such as fair and private representation learning , as we may not want an intervention on the distribution of a sensitive variable to break the fairness or privacy property of a representation . The goal of invariant representation learning can then be described as creating a new variable Z = f ( X ) such that D has no total causal effect on Z . In a way , we can view it as adding a new variable to the SCM and learning its structural equation . If we assume that our distribution follows the proposed SCM ( Figure 1 ) , then the absence of a total causal effect is equivalent to independence , as D has no parent in the DAG . We use this assumption of D having no causal parents in Theorem 3.1 . Theorem 3.1 . Under the assumption of the graph in Figure 1 , we have that : Z is independent from D ( equivalently , D has no total causal effect on Z , or $p(z|d) = p(z|d')$ for all $d, d'$ ) $\iff$ $p(z \mid do(d = N_d)) = p(z)$ for all $N_d$ ( interventions on the distribution of D ) . In summary , Theorem 3.1 states that , under the right assumptions , no total causal effect is equivalent to independence , and that having a constant marginal distribution of Z under different mixtures of D leads to independence . The proof is presented in Appendix B .
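One direction of Theorem 3.1 can be illustrated numerically with discrete distributions (a toy sketch, not part of the paper): when the conditionals p(z|d) agree across domains, re-mixing the domains, i.e., intervening on the distribution of D, leaves the marginal of Z unchanged.

```python
import numpy as np

# p(z|d): rows indexed by domain d, columns by latent value z.
# Equal rows <=> Z is independent of D.
p_z_given_d = np.array([[0.2, 0.5, 0.3],
                        [0.2, 0.5, 0.3]])

def marginal(p_d):
    # Marginal p(z) under an intervention do(d ~ p_d) that re-mixes domains.
    return p_d @ p_z_given_d

# Any two mixtures of D yield the same marginal over Z.
m1 = marginal(np.array([0.9, 0.1]))
m2 = marginal(np.array([0.1, 0.9]))
```

If the rows of `p_z_given_d` differed, `m1` and `m2` would differ, which is exactly the failure mode the theorem rules out.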
In the case where the full support of D is observed during training , we have the guarantee that independence in training , $I(Z; D) = 0$ , will hold in any test setting . However , in DG for example , we observe a new value of D at test time . Having $p(z|d_i^{train}) = p(z|d_j^{train})$ for all $i, j$ does not guarantee that $p(z|d^{test}) = p(z|d_i^{train})$ . Indeed , in this setting , the variable D acts more like an index , where each value indicates a domain in which the distribution of X has changed . Invariance to the variable D is thus a proxy for being invariant to the unobserved style variables ( see Definition 3.1 ) .
This work provides a causal perspective on, and a new algorithm for, learning invariant representations from multiple domain datasets. It introduces the notion of style variables and shows theoretically that being invariant to the domain index leads to invariance to the style variables. Extensive experiments demonstrate the usefulness of the proposed algorithm in various tasks.
GANet: Glyph-Attention Network for Few-Shot Font Generation
1 Introduction . With the increasing popularity of Chinese characters globally , the demand for Chinese fonts is rising . However , it is very time consuming and expensive to design such a font library . The English alphabet has 52 letters ( upper and lower cases ) , far fewer than the more than 20,000 Chinese characters . It takes about 6 months for an experienced designer to design a Chinese font library . Typically , the designer starts by manually designing a few hundred Chinese characters that encompass most of the radicals ( i.e. , the components of the characters ) . The rest of the characters are then completed by mixing and matching the radicals based on the word vocabulary . However , both the initial design phase and this block-building process are tedious and time consuming . If only a few Chinese glyphs ( e.g. , 4 or 8 ) needed to be designed manually , with a tool automatically completing the font library of the 20,000+ characters , designers ' efficiency would be greatly improved . Recently , image-to-image translation based methods Jiang et al . ( 2019 ) have been proposed to generate fonts automatically . Most of these approaches are based on generative adversarial networks Goodfellow et al . ( 2014 ) , which have been shown to be able to transfer one font style to another successfully . However , they need large-scale paired or unpaired source and target domain datasets to train a model . This is inefficient , because for every new font style the pretrained model needs to be finetuned . Moreover , it is difficult and expensive to collect target font datasets such as Chinese historical calligraphic works , of which only a small percentage of glyphs may survive , while these methods typically require hundreds of font glyphs for training as a minimum . To address this lack of data , several few-shot font generation methods have been developed Gao et al .
( 2019 ) ; Park et al . ( 2021 ) , which do not require many glyphs to train the model . They are capable of producing a complete font library from just a few samples . Although these methods can fuse source content and target style representations from a few font glyphs of a style unseen during training , they tend to output characters with either wrong content or weak style . The unsatisfactory results of recent few-shot font generation methods led us to re-examine the relationship between style and content in glyphs . A Chinese glyph can be divided into two parts ( i.e. , content and style ) . From the content perspective , we recognize a glyph 's meaning at a glance . This implies that humans understand a font 's content visually , based on the font 's global information . Inspired by this observation , we propose a content glyph-attention module to capture global features from the content set of glyphs . On the other hand , the particular stroke and radical details of a glyph determine its style ; in other words , glyph style is closely related to local information . Based on this hypothesis , we propose a style glyph-attention module to encapsulate pattern information from the style set of glyphs . Our contributions are summarized as follows : • We propose a generic few-shot font generation framework that bridges the gap between font content and style . • We propose content and style glyph-attention modules to characterize overall structure information from the content set and local stroke details from the style set , respectively . • Experiments demonstrate superior performance in terms of both quantitative and qualitative results compared with other cutting-edge few-shot glyph generation methods . 2 Related Work 2.1 Style Transfer Few-shot Chinese font generation aims to combine the content and a reference style . It is , in essence , a special style transfer problem .
However , because font design requires high accuracy in content structure , the results of traditional style transfer models for font generation are usually unsatisfactory . Gatys et al . ( 2016 ) proposed neural style transfer in the VGG Simonyan & Zisserman ( 2014 ) feature space by reconstructing the Gram matrix , but with limited inference speed . Johnson et al . ( 2016 ) trained a fast neural style transfer ( FNST ) model , which is able to merge style and content representations in real time . Although FNST achieves a speed-up , the model must be retrained for every new style image . To solve this problem , Dumoulin et al . ( 2016 ) proposed an n-style transfer method that can embed n style features by using conditional instance normalization . Further improvement was made by Huang & Belongie ( 2017 ) , whose AdaIN achieves arbitrary style transfer in real time without retraining the model . 2.2 Image to Image Translation Although style transfer methods have an advantage in combining content and style representations , they may damage the content structure . For font generation , it is crucial to keep character semantics so that characters are visually distinguishable and recognizable . The image-to-image translation method pix2pix Isola et al . ( 2017 ) was devised to transfer from a source domain to a target domain ( e.g. , from a sketch to a real cat drawing ) . Pix2pix differs from style transfer based methods in that it prevents the structure from being destroyed . Nevertheless , the result quality of pix2pix is heavily dependent on a paired dataset that is expensive to collect . To tackle this problem , Zhu et al . ( 2017 ) moved a step further by introducing the unpaired image-to-image translation method CycleGAN , which maps the source domain to the target domain by using a cyclic loss .
Although CycleGAN made a breakthrough by eliminating the need for a paired dataset , it is inefficient because different ( source , target ) domain tuples require different models for inference . Huang et al . ( 2018 ) extended CycleGAN by implementing multi-domain transfer in a single model . Recently , Liu et al . ( 2019 ) put forward few-shot image-to-image translation using AdaIN . 2.3 Attention Models Lately , attention mechanisms have been widely adopted in several models in order to capture global dependencies Xu et al . ( 2015 ) . In particular , self-attention ( also known as intra-attention ) calculates the response at a position in a sequence by attending to all positions within the same sequence Parikh et al . ( 2016 ) . SAGAN Zhang et al . ( 2019 ) demonstrated that self-attention could improve the image generation quality of a GAN model . The non-local network Wang et al . ( 2018 ) used a reformulated self-attention module in the spatial-temporal domain between video sequences . To our knowledge , attention modules have not yet been explored in font generation tasks . In this study , we apply content and style glyph-attention modules to efficiently capture the characteristics of the font content and style sets . 2.4 Few-shot Font Generation Methods The goal of few-shot font generation is to make the generated glyphs akin to the style references , which are only a few samples , without re-training a new model , while keeping the generated glyphs ' semantics unchanged . Recently , Zhang et al . ( 2018 ) proposed the EMD model , which leverages a pair of encoders to merge content and style . Gao et al . ( 2019 ) invented AGIS-Net to reach the few-shot font generation goal by transferring both shape and texture with a few reference samples . Unlike other methods , MX-Font Park et al . ( 2021 ) extracts multiple style features that are not explicitly conditioned on component labels , but are learned automatically by multiple experts to represent different local concepts . 3 METHOD .
As mentioned above , our goal is to generate stylized glyph images from a small number of reference samples . We design three encoders , two glyph-attention modules and one decoder to form a Glyph-Attention Network ( GANet ) . Details of the model architecture and loss functions are discussed in Sections 3.1 and 3.2 . 3.1 Network architecture As shown in Figure 1 , GANet consists of one query encoder , one style encoder , one content encoder , two glyph-attention modules and one decoder . To stabilize the training process and improve the quality of the synthesized results , two discriminators are adopted . 3.1.1 Query , Content and Style Encoders We formulate the content , style and query encoding processes as trainable high-dimensional glyph feature extractors . The content encoder $E_c$ aims to extract semantic features . The input of the content encoder is $X_c = \{x_1, x_2, \dots, x_N\}$ , where each element has the same content but a different style ; the output is $F_c = E_c(X_c) \in \mathbb{R}^{N \times H \times W \times C}$ . Similarly , the style encoder $E_s$ characterizes style features from the style set $X_s = \{y_1, y_2, \dots, y_N\}$ , which contains N glyphs with the same style but different content . The output of the style encoder is $F_s = E_s(X_s) \in \mathbb{R}^{N \times H \times W \times C}$ . The query encoder $E_q$ produces a query feature $F_q$ that the glyph-attention modules use to identify local style features from the style feature set $F_s$ and to generate the most appropriate global content feature from the content feature set $F_c$ . The input of the query encoder is a glyph image $X_q$ whose content is the same as that of the content glyph set but with a new style . The output of the query encoder is $F_q = E_q(X_q) \in \mathbb{R}^{H \times W \times C}$ . The three encoders $E_s$ , $E_q$ and $E_c$ have identical architectures but do not share weights . The symbols H , W and C denote the feature map 's height , width and channels , respectively .
3.1.2 Decoder The style glyph-attention module is intended to query a local style feature tensor from the style feature set . Similarly , the objective of the content glyph-attention module is to acquire the most appropriate global glyph content from the content feature set . The outputs of the two glyph-attention modules and the query encoder are added element-wise before being fed into the decoder , as defined in ( 1 ) - ( 2 ) :

$F = SGA(q, k_s, v_s) + CGA(q, k_c, v_c) + F_q$ ( 1 )
$O = \mathrm{Decoder}(F)$ ( 2 )

where SGA and CGA abbreviate Style and Content Glyph-Attention , respectively ; $k_s$ and $v_s$ are equal to $F_s$ from the style encoder ; similarly , $k_c$ and $v_c$ are equal to $F_c$ from the content encoder ; and $q$ is equal to $F_q$ from the query encoder . $O$ is the synthesized result generated by the decoder . 3.1.3 Multi-task discriminators It has been demonstrated that generative adversarial networks ( GAN ) Goodfellow et al . ( 2014 ) are able to generate output whose distribution is close to that of the actual data . Therefore , we can leverage a GAN to develop a model that outputs synthesized glyph images whose content and style comply with the distributions of the input content and reference style . In principle , a conventional discriminator , which can discern fake from real data , could perform the discrimination task independently to complete the generation task . However , the synthesized results are then not satisfactory , because they are often wrong in content or weak in style . To enhance model performance , multi-task discriminators are proposed to further disentangle content and style information . They are called the content ( $D_c$ ) and style ( $D_s$ ) discriminators , and are based on SNGAN Miyato & Koyama ( 2018 ) , which uses a trainable embedding matrix to obtain conditional information . $D_c$ pushes the content distribution of the model output closer to that of the real data , and $D_s$ helps the output style converge to that of the real data .
3.2 Glyph attention Most few-shot image generation methods Liu et al . ( 2019 ) use adaptive instance normalization ( AdaIN ) Huang & Belongie ( 2017 ) to transfer an image 's style , such as texture and color , to another image that provides content , such as pose and structure . However , transferring one font style to another is challenging , because a font has no texture or color and consists only of black and white strokes . Different from AdaIN , we propose style and content glyph-attention modules to extract style and content features from custom glyph sets . The proposed Glyph-Attention Network ( GANet ) is named after these glyph-attention modules , which are shown in Figure 2 . 3.2.1 Style glyph-attention Font style mainly depends on local stroke details , meaning that the local features of a glyph dictate its style . Based on this hypothesis , we propose the style glyph-attention module . The style encoder provides a style feature set $F_s \in \mathbb{R}^{N \times H \times W \times C}$ which contains the desired font style . A style glyph-attention module is utilized to query the style feature from the style set . $v_s$ and $k_s$ denote the value and key in the attention module ; they are equal to $F_s$ . The query $q \in \mathbb{R}^{H \times W \times C}$ is derived from the query encoder . Key $k_s$ and query $q$ are first transformed into two feature spaces $f_s$ , $g_s$ , where $f_s(q) = W_{f_s} q$ and $g_s(k_s) = W_{g_s} k_s$ . We then reshape $g_s(k_s) \in \mathbb{R}^{NHW \times C}$ and transpose and reshape $f_s(q) \in \mathbb{R}^{C \times HW}$ . Finally , they are used to calculate the attention map $\beta$ , as defined in ( 3 ) :

$\beta_{ij} = \frac{\exp(\xi_{ij})}{\sum_{m=1}^{NHW} \exp(\xi_{mj})} , \quad \xi = g_s(k_s) f_s(q) , \quad \xi \in \mathbb{R}^{NHW \times HW}$ ( 3 )

The entry $\beta_{ij}$ indicates the extent to which the model attends to the i-th style location when synthesizing the j-th location . Here H , W , C are the height , width and channels of the features from the preceding 1×1 conv layer , and N is the number of glyphs in the style set . The output of the style glyph-attention layer is O_s = ( O_1 , O_2 , ... , O_j , ...
, O_HW ) , with O_j ∈ R^{1×C} , where

$O_{ij} = \sum_{m=1}^{NHW} \tau_{im} \beta_{mj} , \quad \tau = h_s(v_s) , \quad \tau \in \mathbb{R}^{C \times NHW}$ ( 4 )

where $\tau$ is transposed and reshaped from $h_s(v_s) = W_{h_s} v_s$ . In addition , $W_{h_s} \in \mathbb{R}^{C \times C}$ , $W_{f_s} \in \mathbb{R}^{C \times C}$ and $W_{g_s} \in \mathbb{R}^{C \times C}$ are the learned weights of the 1×1 conv layers in the style glyph-attention module . The final output is reshaped to $O_s \in \mathbb{R}^{H \times W \times C}$ . 3.2.2 Content glyph-attention Different from the style glyph-attention module , which queries style features from local spatial areas , the content glyph-attention queries content features globally . It is based on the assumption that content information resides in the overall structure of the skeletons . We define a content feature set $F_c \in \mathbb{R}^{N \times H \times W \times C}$ that is extracted by the content encoder from the content set $X_c = \{x_1, x_2, \dots, x_N\}$ . $v_c$ and $k_c$ denote the value and key in the attention module , which are equal to $F_c$ . The query $q \in \mathbb{R}^{H \times W \times C}$ is identical to that in the style glyph-attention module . Similarly , key $k_c$ and query $q$ are transformed into two feature spaces $f_c$ , $g_c$ , where $f_c(q) = W_{f_c} q$ and $g_c(k_c) = W_{g_c} k_c$ . $g_c(k_c) \in \mathbb{R}^{N \times HWC}$ and $f_c(q) \in \mathbb{R}^{HWC \times 1}$ are then reshaped to calculate the attention map $\gamma$ , as defined in ( 5 ) :

$\gamma_i = \frac{\exp(\xi_i)}{\sum_{m=1}^{N} \exp(\xi_m)} , \quad \xi = g_c(k_c) f_c(q) , \quad \xi \in \mathbb{R}^{N \times 1}$ ( 5 )

The entry $\gamma_i$ indicates the extent to which the model attends to the i-th glyph of the content set . H , W and C are the height , width and channels of the features from the preceding 1×1 conv layer , and N is the number of glyphs in the content set . The output of the content glyph-attention layer is $O_c = (O_1, O_2, \dots, O_j, \dots, O_{HWC})$ , $O_j \in \mathbb{R}$ :

$O_j = \sum_{m=1}^{N} \tau_{mj} \gamma_m , \quad \tau = h_c(v_c) , \quad \tau \in \mathbb{R}^{N \times HWC}$ ( 6 )

where $\tau$ is reshaped from $h_c(v_c) = W_{h_c} v_c$ . In addition , $W_{h_c} \in \mathbb{R}^{C \times C}$ , $W_{f_c} \in \mathbb{R}^{C \times C}$ and $W_{g_c} \in \mathbb{R}^{C \times C}$ are the learned weights of the 1×1 conv layers in the content glyph-attention module . The final output is reshaped to $O_c \in \mathbb{R}^{H \times W \times C}$ .
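The style glyph-attention of equations (3)-(4) can be sketched in a few lines of numpy. This is an illustrative re-implementation under our reading of the shapes, not the authors' code: random matrices stand in for the learned 1×1 convolutions, and all sizes are toy values.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def style_glyph_attention(q, f_s, W_f, W_g, W_h):
    # q: (H, W, C) query feature; f_s: (N, H, W, C) style feature set.
    # W_f, W_g, W_h: (C, C) matrices standing in for the 1x1 conv layers.
    H, W, C = q.shape
    qf = (q.reshape(-1, C) @ W_f.T).T    # f_s(q), shape (C, HW)
    kg = f_s.reshape(-1, C) @ W_g.T      # g_s(k_s), shape (NHW, C)
    vh = (f_s.reshape(-1, C) @ W_h.T).T  # h_s(v_s), shape (C, NHW)
    xi = kg @ qf                         # eq. (3): xi in R^{NHW x HW}
    beta = softmax(xi, axis=0)           # softmax over the NHW style locations
    out = vh @ beta                      # eq. (4): (C, HW)
    return out.T.reshape(H, W, C)

rng = np.random.default_rng(0)
N, H, W, C = 3, 4, 4, 8
weights = [0.1 * rng.normal(size=(C, C)) for _ in range(3)]
o = style_glyph_attention(rng.normal(size=(H, W, C)),
                          rng.normal(size=(N, H, W, C)), *weights)
```

The content glyph-attention of equations (5)-(6) follows the same pattern, with the softmax taken over the N glyphs rather than over spatial locations.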
3.3 Loss function Three individual loss functions are combined during model training : identity loss , feature matching loss , and multi-task adversarial loss . 3.3.1 Identity loss To make the output distribution of the model agree with that of the target in both pixel and feature space , we employ an identity loss . Perceptual loss Johnson et al . ( 2016 ) is an effective way to measure the distance between images in VGG feature space . Compared to pixel-wise loss functions such as mean squared error ( MSE ) , perceptual loss can recover more high-frequency details . However , due to the downsampling operations ( max-pooling ) in VGG19 , the perceptual loss alone loses some information during reconstruction , especially at low frequencies . To avoid losing low-frequency details , we add an L1 loss as a complementary term to the perceptual loss , as defined in ( 7 ) :

$L_{id} = \| \Phi_{3\_1}(T) - \Phi_{3\_1}(G(X_c, X_q, X_s)) \|_1 + \| T - G(X_c, X_q, X_s) \|_1$ ( 7 )

where $\Phi_{3\_1}$ is the feature map of VGG19 at relu3_1 . $X_c$ and $X_s$ are the input content set and reference style set , respectively , and $X_q$ is the query image whose content matches the content set . $T$ is the target image whose content is the same as $X_c$ , with style similar to $X_s$ . $G$ is the generator , consisting of a content encoder , a query encoder , a style encoder , a decoder and two glyph-attention modules , as shown in Figure 1 . 3.3.2 Feature matching loss To stabilize the adversarial training , a feature matching ( FM ) loss Salimans et al . ( 2016 ) is also applied . Similar to the perceptual similarity measure , the FM loss uses the discriminator networks as feature extractors , which contain a large amount of information about the glyph images .
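The identity loss in equation (7) is just an L1 distance in feature space plus an L1 distance in pixel space. A minimal sketch, with a toy downsampling function standing in for the frozen VGG19 relu3_1 extractor (the real extractor is a pretrained network, not shown here):

```python
import numpy as np

def identity_loss(target, output, feat):
    # Eq. (7): L1 in a feature space (perceptual term) plus pixel-wise L1.
    # `feat` is a stand-in for the frozen VGG19 relu3_1 feature extractor;
    # we average rather than sum, a common normalization choice.
    perceptual = np.abs(feat(target) - feat(output)).mean()
    pixel = np.abs(target - output).mean()
    return perceptual + pixel

rng = np.random.default_rng(0)
t, g = rng.random((8, 8)), rng.random((8, 8))
feat = lambda x: x[::2, ::2]  # toy "feature extractor": 2x downsampling
zero = identity_loss(t, t, feat)  # identical images -> zero loss
gap = identity_loss(t, g, feat)   # different images -> positive loss
```

The feature matching loss of equation (8) has the same structure, with the discriminators' intermediate feature maps playing the role of `feat`.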
To obtain more disentangled representations of content and style , we use $D_c$ and $D_s$ to extract feature maps , as defined in ( 8 ) :

$L_{fm} = \sum_{i=1}^{l} \| D_c^{(i)}(T) - D_c^{(i)}(G(X_c, X_q, X_s)) \|_1 + \sum_{i=1}^{l} \| D_s^{(i)}(T) - D_s^{(i)}(G(X_c, X_q, X_s)) \|_1$ ( 8 )

where $D^{(i)}$ is the feature map of layer $i$ in the discriminator . We use all layers except the fully-connected layer to extract features . 3.3.3 Multi-task adversarial loss In the proposed framework , two conditional discriminators $D_c$ and $D_s$ are used to improve model performance , each with its own mission : $D_c$ aims at calibrating the model toward accurate content , and $D_s$ toward stronger style . We utilize the hinge version of the adversarial loss Lim & Ye ( 2017 ) as the objective , as defined in ( 9 ) - ( 10 ) :

$L_G = -D_s(G(X_c, X_q, X_s)) - D_c(G(X_c, X_q, X_s))$ ( 9 )

$L_D = \max(0, 1 + D_c(G(X_c, X_q, X_s))) + \max(0, 1 - D_c(T)) + \max(0, 1 + D_s(G(X_c, X_q, X_s))) + \max(0, 1 - D_s(T))$ ( 10 )

3.3.4 Total loss A simple additive form is used for the total loss , as illustrated in ( 11 ) - ( 12 ) :

$L_\theta(G) = \lambda_1 L_{id} + \lambda_2 L_{fm} + L_G$ ( 11 )
$L_\omega(D) = L_D$ ( 12 )

where $\lambda_1$ and $\lambda_2$ are hyperparameters of the generator loss function , and $\theta$ and $\omega$ are the weights of the generator and discriminators , respectively . 4 Experiments 4.1 Experimental settings We use the Adam optimizer Kingma & Ba ( 2014 ) to optimize all models , with hyperparameters $\beta_1 = 0$ and $\beta_2 = 0.9$ . Following the two time-scale learning rate strategy Heusel et al . ( 2017 ) , we set the learning rate of all discriminators to 4e-4 and that of the generator to 1e-4 . The number of glyphs in both $X_c$ and $X_s$ is set to 8 , and the size of all images is 128×128×3 . The weights in the loss function are chosen as $\lambda_1 = 1$ and $\lambda_2 = 10$ . We train the model with a batch size of 4 for 500,000 iterations on an Nvidia GTX 1080Ti GPU with 12G memory . 4.2 Datasets and evaluation metrics We collected 411 Chinese fonts conforming to the GB2312 standard .
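The hinge adversarial objective of equations (9)-(10) above can be sketched numerically. This toy illustration operates on scalar discriminator scores rather than the actual networks; it only shows the loss arithmetic, not the training loop:

```python
import numpy as np

def generator_hinge(d_c_fake, d_s_fake):
    # Eq. (9): the generator pushes both discriminators' scores on fakes up.
    return -np.mean(d_s_fake) - np.mean(d_c_fake)

def discriminator_hinge(d_fake, d_real):
    # One discriminator's share of Eq. (10): a margin loss that is zero once
    # real scores exceed +1 and fake scores fall below -1.
    return (np.mean(np.maximum(0.0, 1.0 + d_fake))
            + np.mean(np.maximum(0.0, 1.0 - d_real)))

# Well-separated scores (real >= 1, fake <= -1) give zero discriminator loss.
d_loss = discriminator_hinge(np.array([-1.5, -2.0]), np.array([1.2, 2.0]))
# Generator loss decreases as the fake scores rise.
g_loss = generator_hinge(np.array([-1.0, 0.0]), np.array([-1.0, 0.0]))
```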
Each font contains 6,763 Chinese characters . We randomly selected 405 fonts as the training set and the remaining 6 fonts as the test set . Various metrics are used to evaluate the quality of the synthesized glyphs . To measure the similarity between a generated image and its target , intersection over union ( IOU ) is utilized . We further employ two classifiers with an Inception-v3 Szegedy et al . ( 2016 ) backbone to distinguish content and style labels in the test set . 4.3 Benchmarking We compare our method with four many-shot font generation methods ( i.e. , zi2zi Tian , pix2pix Isola et al . ( 2017 ) , CycleGAN Zhu et al . ( 2017 ) , ZiGAN Wen et al . ( 2021 ) ) and three state-of-the-art few-shot font generation methods ( i.e. , FUNIT Liu et al . ( 2019 ) , MX-Font Park et al . ( 2021 ) and AGIS-Net Gao et al . ( 2019 ) ) . The four many-shot methods demand a lot of training data , and a different model is required for each font style during inference . As such , for the many-shot methods , we further split each test font library into 5,763 glyphs for training and 1,000 glyphs for testing ; each of the four many-shot methods therefore produces 6 models . In comparison , few-shot methods , including the proposed GANet , need only a few glyphs to transfer a font style during inference , and no retraining is necessary for different font styles . Unlike the many-shot methods , the few-shot models are trained on the selected 405 font libraries . During inference , few-shot methods take a few glyphs ( i.e. , 1 , 2 , 4 , or 8 ) as references to synthesize the font library rather than training a new model . To make a fair comparison , we use the 1,000 glyphs split from the test set to evaluate all models . 4.4 Quantitative evaluation We assess the quality of the generated images in two aspects . First , intersection over union ( IOU ) is used to measure the similarity between a synthesized glyph and the ground truth ; a higher IOU score indicates a better result .
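The IOU metric for glyph images can be sketched as follows. The paper does not specify the binarization rule, so treating dark pixels as ink with a 0.5 threshold is our assumption here:

```python
import numpy as np

def glyph_iou(img_a, img_b, thresh=0.5):
    # Binarize grayscale glyph images (ink = dark pixels, an assumption)
    # and compute intersection-over-union of the two ink masks.
    a = img_a < thresh
    b = img_b < thresh
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 1.0

a = np.ones((4, 4)); a[0:2, :] = 0.0  # top two rows inked (8 pixels)
b = np.ones((4, 4)); b[1:3, :] = 0.0  # middle two rows inked (8 pixels)
# intersection = 4 pixels (row 1), union = 12 pixels (rows 0-2) -> IOU = 1/3
iou = glyph_iou(a, b)
```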
As shown in Table 1 , we randomly sample four reference images from the 5,763 glyphs of each test font , based on which each model generates 1,000 glyphs . There are a total of 6 fonts in the test dataset . The metrics in the table are calculated over the 1,000×6 glyphs for each model and averaged . Table 1 shows that our method outperforms previous state-of-the-art few-shot approaches by a wide margin and generates results comparable to the many-shot approaches . However , our method needs only a few reference samples and one-time training , whereas many-shot methods demand re-training on thousands of glyphs for each new font style . IOU can evaluate similarity at the pixel level , but it cannot capture style and content in essence ; in addition , pixel-based metrics are often inconsistent with human visual perception . To improve the reliability of the evaluation , we trained two classifiers for content and style classification , respectively , and then used them to evaluate model prediction accuracies on the test set . As shown in Figure 4 , MX-Font and FUNIT show comparable performance in terms of style accuracy , but lower performance in content preservation . AGIS-Net shows comparable performance in content accuracy , but is weak in style . In other words , FUNIT and MX-Font focus only on styling and fail to preserve the content structure , while AGIS-Net focuses on content and fails to transfer enough style . 4.5 Qualitative analysis To demonstrate the model 's performance visually , we evaluate the models using the remaining 6 fonts in the test set as reference glyphs . Figure 3 presents the images generated by the many-shot and few-shot methods on the test dataset . It is observed that the many-shot methods , except ZiGAN , often fail in content preservation , as shown in the red box . Among the few-shot methods , FUNIT learns the style , but the local structure in the generated glyphs is weak , as highlighted by the yellow box .
Compared with FUNIT , AGIS-Net performs better in content preservation . However , it performs poorly in capturing global and local styles ( black box ) . MX-Font also succeeds in learning the style of the reference , but its content structure is unsatisfactory ( blue box ) . Overall , our proposed method generates the best qualitative results compared to the other methods , in terms of both content preservation and style transfer .
The paper proposes a glyph-attention network for few-shot font generation. The authors claim that a font's content features are essentially global features, while its style features are related to local features. They propose a content glyph-attention module to capture global features from the content set of glyphs, and a style glyph-attention module to encapsulate local pattern information from the style set of glyphs. The motivation is clear and reasonable, but there are still some major issues in the model construction and the experiments.
GANet: Glyph-Attention Network for Few-Shot Font Generation
1 Introduction . With the increasing popularity of Chinese characters globally , the demand for Chinese fonts is rising . However , it ’ s very time consuming and expensive to design such font library . As we know , the number of letters in English alphabet is 52 ( upper and lower cases ) , much smaller than that of the Chinese characters which is over 20,000 . It takes about 6 months for an experienced designer to design a Chinese font library . Typically , the designer starts the font design with manually designing a few hundred of Chinese characters which encompass most of the radicals ( i.e. , the components of the characters ) . The rest of the characters will be completed by mixing and matching the radicals based on word vocabulary . However , it is very tedious and time consuming in both the initial design phase and the block building process . If only a few Chinese glyphs ( e.g . 4 or 8 ) are needed by manual design and there is a tool that can automatically complete the font library of the 20,000+ characters , designers ’ efficiency will be greatly improved . Recently , image to image translation based methods Jiang et al . ( 2019 ) have been proposed to generate fonts automatically . Most of these approaches are based on generative adversarial network Goodfellow et al . ( 2014 ) , which was shown to be able to transfer one font style to another successfully . However , they need large scale paired or un-paired source and target domain dataset to train a model . It ’ s inefficient because for every new font style , the pretrained model needs to be finetuned . Moreover , it is difficult and expensive to collect target font dataset such as Chinese historical calligraphic works , whose font glyphs may just have a small percentage left . But these methods typically require hundreds of font glyphs for training as a minimum . In order to address the limitation of lack of dataset , several few-shot font generation methods have been developed Gao et al . 
(2019); Park et al. (2021), which do not demand many glyphs to train the model and are capable of producing a complete font library from just a few samples. Although these methods can fuse source content and target style representations from a few glyphs of a style unseen during training, they tend to output characters with either wrong content or weak style. The unsatisfactory results of recent few-shot font generation methods lead us to re-examine the relationship between style and content in glyphs. A Chinese glyph can be divided into two parts (i.e., content and style). From the content perspective, we recognize a glyph's meaning at a glance, which implies that humans understand a font's content visually from its global information. Inspired by this observation, we propose a content glyph-attention module to capture global features from the content set of glyphs. On the other side, the particular stroke and radical details of a glyph determine its style; in other words, glyph style is closely related to local information. Based on this hypothesis, we propose a style glyph-attention module to encapsulate pattern information from the style set of glyphs. Our contributions are summarized as follows: • We propose a generic few-shot font generation framework that bridges the gap between font content and style. • We propose content and style glyph-attention modules to characterize overall structure information from the content set and local stroke details from the style set, respectively. • Experiments demonstrate superior performance in both quantitative and qualitative terms compared with other cutting-edge few-shot glyph generation methods. 2 Related Work 2.1 Style Transfer Few-shot Chinese font generation aims to combine given content with a reference style; it is in essence a special style transfer problem.
However, because font design requires high accuracy in content structure, the results of traditional style transfer models for font generation are usually unsatisfactory. Gatys et al. (2016) proposed neural style transfer in the VGG Simonyan & Zisserman (2014) feature space by matching Gram matrices, but with limited inference speed. Johnson et al. (2016) trained a fast neural style transfer (FNST) model that merges style and content representations in real time. Although FNST achieves a speed-up, the model must be retrained for each new style image. To solve this problem, Dumoulin et al. (2016) proposed an n-style transfer method that can embed n style features using conditional instance normalization. Huang & Belongie (2017) improved on this further by inventing AdaIN, which achieves arbitrary style transfer in real time without retraining. 2.2 Image to Image Translation Although style transfer methods are good at combining content and style representations, they may damage the content structure. For font generation, it is crucial to preserve character semantics so that glyphs remain visually distinguishable and recognizable. The image-to-image translation method pix2pix Isola et al. (2017) was devised to map a source domain to a target domain (e.g., from a sketch to a realistic drawing of a cat). Pix2pix differs from style-transfer-based methods in that it prevents the structure from being destroyed. Nevertheless, the quality of pix2pix results depends heavily on a paired dataset, which is expensive to collect. To tackle this problem, Zhu et al. (2017) went a step further by introducing an unpaired image-to-image translation method (CycleGAN), which maps the source domain to the target domain using a cycle-consistency loss.
Although CycleGAN made a breakthrough by eliminating the need for paired data, it is inefficient because each (source, target) domain pair requires a separate model at inference. Huang et al. (2018) extended CycleGAN by implementing multi-domain transfer in a single model. Recently, Liu et al. (2019) put forward few-shot image-to-image translation using AdaIN. 2.3 Attention Models Attention mechanisms have lately been widely adopted to capture global dependencies Xu et al. (2015). In particular, self-attention (also known as intra-attention) calculates the response at a position in a sequence by attending to all positions within the same sequence Parikh et al. (2016). SAGAN Zhang et al. (2019) demonstrated that self-attention can improve the image generation quality of GAN models, and the non-local network Wang et al. (2018) used a reformulated self-attention module in the spatio-temporal domain of video sequences. To our knowledge, attention modules have not yet been explored in font generation tasks. In this study, we apply content and style glyph-attention modules to efficiently capture the characteristics of the font content and style sets. 2.4 Few-shot Font Generation Methods The goal of few-shot font generation is to make the generated glyphs resemble the style references, which comprise only a few samples, without retraining a new model, while keeping the semantics of the generated glyphs unchanged. Recently, Zhang et al. (2018) proposed the EMD model, which leverages a pair of encoders to merge content and style. Gao et al. (2019) invented AGIS-Net, which reaches the few-shot font generation goal by transferring both shape and texture from a few reference samples. Unlike other methods, MX-Font Park et al. (2021) extracts multiple style features that are not explicitly conditioned on component labels but are learned automatically by multiple experts representing different local concepts. 3 METHOD.
As mentioned above, our goal is to generate stylized glyph images from a small number of reference samples. We design three encoders, two glyph-attention modules and one decoder to form the Glyph-Attention Network (GANet). Details of the model architecture and loss functions are discussed in Sections 3.1 and 3.2. 3.1 Network architecture As shown in Figure 1, GANet consists of one query encoder, one style encoder, one content encoder, two glyph-attention modules and one decoder. To stabilize the training process and improve the quality of the synthesized results, two discriminators are adopted. 3.1.1 Query, Content and Style Encoders We formulate the content, style and query encoding processes as trainable high-dimensional glyph feature extractors. The content encoder Ec aims to extract semantic features. Its input is Xc = {x1, x2, ..., xN}, whose elements share the same content but have different styles; its output is Fc = Ec(Xc) ∈ RN×H×W×C. Similarly, the style encoder Es characterizes style features from the style set Xs = {y1, y2, ..., yN}, which contains N glyphs with the same style but different content; its output is Fs = Es(Xs) ∈ RN×H×W×C. The query encoder Eq produces a query feature Fq that the glyph-attention modules use to identify local style features from the style feature set Fs and to retrieve the most appropriate global content feature from the content feature set Fc. The input of the query encoder is a glyph image Xq whose content matches that of the content glyph set but is rendered in a new style; its output is Fq = Eq(Xq) ∈ RH×W×C. The three encoders Es, Eq and Ec have identical architectures but do not share weights. The symbols H, W and C denote the feature map's height, width and number of channels, respectively.
3.1.2 Decoder The style glyph-attention module queries a local style feature tensor from the style feature set; similarly, the content glyph-attention module retrieves the most appropriate global glyph content from the content feature set. The outputs of the two attention modules and the query encoder are added element-wise before being fed into the decoder, as defined in (1)-(2): F = SGA(q, ks, vs) + CGA(q, kc, vc) + Fq (1), O = Decoder(F) (2), where SGA and CGA abbreviate Style and Content Glyph-Attention respectively; ks and vs are equal to Fs from the style encoder; kc and vc are equal to Fc from the content encoder; q is equal to Fq from the query encoder; and O is the synthesized result generated by the decoder. 3.1.3 Multi-task discriminators It has been demonstrated that generative adversarial networks (GANs) Goodfellow et al. (2014) can generate outputs whose distribution is close to that of the real data. We therefore leverage a GAN to develop a model that synthesizes glyph images whose content and style follow the distributions of the input content and the reference style. In principle, a conventional discriminator, which distinguishes fake from real data, could perform the discrimination task on its own. However, the resulting synthesized glyphs are often wrong in content or weak in style. To enhance performance, we propose multi-task discriminators, a content discriminator Dc and a style discriminator Ds, to further disentangle content and style information. Both are based on SNGAN Miyato & Koyama (2018), which uses a trainable embedding matrix to incorporate conditional information. Dc pulls the content distribution of the model output closer to that of the real data, and Ds drives the output style to converge to that of the real data.
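To make the feature fusion of equation (1) concrete, here is a minimal NumPy sketch. The three tensors are random stand-ins for the SGA output, the CGA output and the query feature Fq (the real values come from the attention modules and the query encoder); the point is only that equation (1) is a plain element-wise sum, so all three branches must produce the same H×W×C shape.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8  # hypothetical feature-map size

# Stand-ins for the three branches feeding the decoder: style attention
# output, content attention output, and the query feature Fq.
sga_out = rng.standard_normal((H, W, C))
cga_out = rng.standard_normal((H, W, C))
f_q = rng.standard_normal((H, W, C))

# Equation (1): element-wise addition of the three H×W×C tensors.
F = sga_out + cga_out + f_q
```

The decoder then maps F to the output glyph image O as in equation (2).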
3.2 Glyph attention Most few-shot image generation methods Liu et al. (2019) use adaptive instance normalization (AdaIN) Huang & Belongie (2017) to transfer image style, such as texture and color, onto an image that provides content such as pose and structure. However, transferring font style is challenging because a font has no texture or color; it consists only of black and white strokes. Different from AdaIN, we propose style and content glyph-attention modules to extract style and content features from custom glyph sets. The proposed Glyph-Attention Network (GANet) is named after these glyph-attention modules, which are shown in Figure 2. 3.2.1 Style glyph-attention Font style depends mainly on local stroke details; that is, the local features of a glyph dictate its style. Based on this hypothesis, we propose the style glyph-attention module. The style encoder provides a style feature set Fs ∈ RN×H×W×C containing the desired font style, and the style glyph-attention module queries style features from this set. Let vs and ks denote the value and key in the attention module; both are equal to Fs. The query q ∈ RH×W×C is derived from the query encoder. Key ks and query q are first transformed into two feature spaces fs, gs, where fs(q) = Wfsq and gs(ks) = Wgsks. We then reshape gs(ks) to RNHW×C and transpose and reshape fs(q) to RC×HW. Finally, they are used to calculate the attention map β, as defined in (3): βij = exp(ξij) / Σ_{m=1}^{NHW} exp(ξmj), where ξ = gs(ks) fs(q), ξ ∈ RNHW×HW (3). The entry βij indicates the extent to which the model attends to the ith style location when synthesizing the jth output location. Here H, W and C are the height, width and number of channels of the features from the preceding 1×1 conv layer, and N is the number of glyphs in the style set. The output of the style glyph-attention layer is Os = (O1, O2, ..., Oj, ...
, OHW), Oj ∈ R1×C, where Oij = Σ_{m=1}^{NHW} τim βmj, τ = hs(vs), τ ∈ RC×NHW (4), and τ is transposed and reshaped from hs(vs) = Whsvs. In addition, Whs ∈ RC×C, Wfs ∈ RC×C and Wgs ∈ RC×C are the learned weights of the 1×1 conv layers of the style glyph-attention module. The final output is reshaped to Os ∈ RH×W×C. 3.2.2 Content glyph-attention Different from the style glyph-attention module, which queries style features from local spatial areas, the content glyph-attention module queries content features globally. It is based on the assumption that content information resides in the overall structure of the character skeletons. We define a content feature set Fc ∈ RN×H×W×C extracted by the content encoder from the content set Xc = {x1, x2, ..., xN}. Let vc and kc denote the value and key in the attention module, both equal to Fc. The query q ∈ RH×W×C is identical to that in the style glyph-attention module. Similarly, key kc and query q are transformed into two feature spaces fc, gc, where fc(q) = Wfcq and gc(kc) = Wgckc. Then gc(kc) ∈ RN×HWC and fc(q) ∈ RHWC×1 are reshaped to calculate the attention map γ, as defined in (5): γi = exp(ξi) / Σ_{m=1}^{N} exp(ξm), where ξ = gc(kc) fc(q), ξ ∈ RN×1 (5). The entry γi indicates the extent to which the model attends to the ith glyph in the content set. H, W and C are the height, width and number of channels of the features from the preceding 1×1 conv layer, and N is the number of glyphs in the content set. The output of the content glyph-attention layer is Oc = (O1, O2, ..., Oj, ..., OHWC), Oj ∈ R, with Oj = Σ_{m=1}^{N} τmj γm, τ = hc(vc), τ ∈ RN×HWC (6), where τ is reshaped from hc(vc) = Whcvc. In addition, Whc ∈ RC×C, Wfc ∈ RC×C and Wgc ∈ RC×C are the learned weights of the 1×1 conv layers of the content glyph-attention module. The final output is reshaped to Oc ∈ RH×W×C.
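The two attention modules above differ only in what they softmax over: the style branch attends over all N·H·W spatial locations of the style set (eq. 3-4), while the content branch assigns one weight per whole glyph (eq. 5-6). The following NumPy sketch traces the tensor shapes; random features stand in for the encoder outputs, and the learned 1×1 conv weights Wf, Wg, Wh are replaced by identity matrices for brevity (these are simplifying assumptions, not the trained model).

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, W, C = 3, 4, 4, 8  # hypothetical sizes: N glyphs, H×W×C feature maps

Fs = rng.standard_normal((N, H, W, C))   # style feature set (key = value)
Fc = rng.standard_normal((N, H, W, C))   # content feature set (key = value)
Fq = rng.standard_normal((H, W, C))      # query feature

I = np.eye(C)  # 1×1 convs act per position; identity weights for brevity

# --- Style glyph-attention, eqs. (3)-(4): attend over all N*H*W locations.
g_ks = (Fs @ I).reshape(N * H * W, C)        # g_s(k_s) ∈ R^{NHW×C}
f_q  = (Fq @ I).reshape(H * W, C).T          # f_s(q)  ∈ R^{C×HW}
xi   = g_ks @ f_q                            # ξ ∈ R^{NHW×HW}
beta = np.exp(xi) / np.exp(xi).sum(0)        # softmax over style locations
tau  = (Fs @ I).reshape(N * H * W, C).T      # τ = h_s(v_s) ∈ R^{C×NHW}
Os   = (tau @ beta).T.reshape(H, W, C)       # eq. (4), reshaped to H×W×C

# --- Content glyph-attention, eqs. (5)-(6): one weight per whole glyph.
g_kc  = (Fc @ I).reshape(N, H * W * C)       # g_c(k_c) ∈ R^{N×HWC}
f_qc  = (Fq @ I).reshape(H * W * C, 1)       # f_c(q)  ∈ R^{HWC×1}
xi_c  = g_kc @ f_qc                          # ξ ∈ R^{N×1}
gamma = np.exp(xi_c) / np.exp(xi_c).sum(0)   # softmax over the N glyphs
tau_c = (Fc @ I).reshape(N, H * W * C)       # τ = h_c(v_c) ∈ R^{N×HWC}
Oc    = (gamma.T @ tau_c).reshape(H, W, C)   # eq. (6): weighted glyph sum
```

Note how the style branch produces an HW-column attention map (one distribution per output location), whereas the content branch collapses to a single N-way distribution over entire glyphs, matching the local-versus-global design.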
3.3 Loss function Three loss terms are combined during training: an identity loss, a feature matching loss, and a multi-task adversarial loss. 3.3.1 Identity loss To make the output distribution of the model agree with that of the target in both pixel and feature space, we employ an identity loss. Perceptual loss Johnson et al. (2016) is an effective way to measure the distance between images in VGG feature space; compared to pixel-wise losses such as mean squared error (MSE), it recovers more high-frequency details. However, due to the downsampling (max-pooling) in VGG19, the perceptual loss loses some information during reconstruction, especially at low frequencies. To avoid losing low-frequency details, we add an L1 loss as a complementary term to the perceptual loss, as defined in (7): Lid = ∥Φ3_1(T) − Φ3_1(G(Xc, Xq, Xs))∥1 + ∥T − G(Xc, Xq, Xs)∥1 (7), where Φ3_1 is the VGG19 feature map at layer relu3_1, Xc and Xs are the input content set and reference style set respectively, Xq is the query image whose content matches the content set, T is the target image whose content matches Xc and whose style matches Xs, and G is the generator consisting of a content encoder, a query encoder, a style encoder, a decoder and two glyph-attention modules, as shown in Figure 1. 3.3.2 Feature matching loss To stabilize adversarial training, a feature matching (FM) loss Salimans et al. (2016) is also applied. Similar to the perceptual similarity measure, the FM loss uses the discriminator networks as feature extractors, which contain a large amount of information about the glyph images.
To obtain more disentangled representations of content and style, we use Dc and Ds to extract feature maps, as defined in (8): Lfm = Σ_{i=1}^{l} ∥Dc^(i)(T) − Dc^(i)(G(Xc, Xq, Xs))∥1 + Σ_{i=1}^{l} ∥Ds^(i)(T) − Ds^(i)(G(Xc, Xq, Xs))∥1 (8), where D^(i) is the feature map of layer i of the discriminator; all layers except the fully-connected layer are used to extract features. 3.3.3 Multi-task adversarial loss In the proposed framework, the two conditional discriminators Dc and Ds each have their own mission: Dc calibrates the model toward accurate content, and Ds toward stronger style. We use the hinge version of the adversarial loss Lim & Ye (2017), as defined in (9)-(10): LG = −Ds(G(Xc, Xq, Xs)) − Dc(G(Xc, Xq, Xs)) (9), LD = max(0, 1 + Dc(G(Xc, Xq, Xs))) + max(0, 1 − Dc(T)) + max(0, 1 + Ds(G(Xc, Xq, Xs))) + max(0, 1 − Ds(T)) (10). 3.3.4 Total loss A simple additive form is used for the total loss, as shown in (11)-(12): Lθ(G) = λ1 Lid + λ2 Lfm + LG (11), Lω(D) = LD (12), where λ1 and λ2 are hyperparameters of the generator loss, and θ and ω are the weights of the generator and discriminators respectively. 4 Experiments 4.1 Experimental settings We use the Adam optimizer Kingma & Ba (2014) for all models with hyperparameters β1 = 0 and β2 = 0.9. Following the two time-scale update rule Heusel et al. (2017), we set the learning rate of all discriminators to 4e−4 and that of the generator to 1e−4. The sizes of both Xc and Xs are set to 8, and all images are 128 × 128 × 3. The loss weights are λ1 = 1 and λ2 = 10. We train the model with a batch size of 4 for 500,000 iterations on an Nvidia GTX 1080Ti GPU. 4.2 Datasets and evaluation metrics We collected 411 Chinese fonts conforming to the GB2312 standard.
Each font contains 6,763 Chinese characters. We randomly selected 405 fonts as the training set and kept the remaining 6 fonts as the test set. Several metrics are used to evaluate the quality of the synthesized glyphs. To measure the similarity between a generated image and its target, intersection over union (IOU) is used. We further train two classifiers with an Inception-v3 Szegedy et al. (2016) backbone to recognize content and style labels on the test set. 4.3 Benchmarking We compare our method with four many-shot font generation methods (zi2zi Tian, pix2pix Isola et al. (2017), CycleGAN Zhu et al. (2017) and ZiGAN Wen et al. (2021)) and three state-of-the-art few-shot font generation methods (FUNIT Liu et al. (2019), MX-Font Park et al. (2021) and AGIS-Net Gao et al. (2019)). The four many-shot methods require large amounts of training data, and a separate model is needed for each font style at inference; we therefore split each test font library into 5,763 glyphs for training and 1,000 glyphs for testing, so each of the four many-shot methods yields 6 models. In contrast, few-shot methods, including the proposed GANet, need only a few glyphs to transfer a font style at inference, with no retraining for different styles. Unlike the many-shot methods, the few-shot models are trained on the 405 selected font libraries; during inference, they take a few glyphs (1, 2, 4 or 8) as references to synthesize the font library rather than training a new model. For a fair comparison, we use the 1,000 glyphs split from each test font to evaluate all models. 4.4 Quantitative evaluation We assess the quality of the generated images in two respects. First, intersection over union (IOU) measures the similarity between a synthesized glyph and its ground truth; a higher IOU indicates a better result.
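For binary glyph images, IOU reduces to the ratio of shared ink pixels to the union of ink pixels. A minimal sketch follows; the binarization threshold and the dark-ink convention are assumptions, since the paper does not state how glyphs are binarized before computing IOU.

```python
import numpy as np

def glyph_iou(pred, target, threshold=0.5):
    """IOU between two glyph images in [0, 1].

    Pixels darker than `threshold` count as ink (assumption: dark strokes
    on a white background; the paper does not specify the binarization).
    """
    p, t = pred < threshold, target < threshold
    union = np.logical_or(p, t).sum()
    if union == 0:
        return 1.0  # two blank images are treated as identical
    return np.logical_and(p, t).sum() / union

# Two overlapping 2×2 "strokes" on white 4×4 canvases: 1 shared ink pixel
# out of 7 ink pixels in total, so the IOU is 1/7.
a = np.ones((4, 4)); a[0:2, 0:2] = 0.0
b = np.ones((4, 4)); b[1:3, 1:3] = 0.0
```

Because glyph images are mostly background, IOU over ink pixels is far more discriminative than plain pixel accuracy, which would score near 1 even for a blank prediction.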
As shown in Table 1, we randomly sample four reference images from the 5,763 training glyphs of each test font, based on which each model generates 1,000 glyphs. With 6 fonts in the test set, the metrics in the table are computed over 1,000 × 6 glyphs per model and averaged. Table 1 shows that our method outperforms previous state-of-the-art few-shot approaches by a wide margin and produces results comparable to the many-shot approaches, even though it needs only a few reference samples and one-time training, whereas many-shot methods must be retrained on thousands of glyphs for each new font style. IOU evaluates pixel-level similarity but cannot capture style and content in essence; moreover, pixel-based metrics are often inconsistent with human visual perception. To improve the reliability of the evaluation, we trained two classifiers for content and style classification respectively and use them to measure prediction accuracy on the test set. As shown in Figure 4, MX-Font and FUNIT achieve comparable style accuracy but lower content preservation, while AGIS-Net achieves comparable content accuracy but weak style. In other words, FUNIT and MX-Font focus on style and fail to preserve content structure, whereas AGIS-Net focuses on content and fails to transfer enough style. 4.5 Qualitative analysis To demonstrate the model's performance visually, we evaluate the models using the remaining 6 fonts in the test set as reference glyphs. Figure 3 presents images generated by the many-shot and few-shot methods on the test set. The many-shot methods except ZiGAN often fail in content preservation, as shown in the red box. Among the few-shot methods, FUNIT learns the style but produces weak local structure in the generated glyphs, as highlighted by the yellow box.
Compared with FUNIT, AGIS-Net preserves content better but performs poorly at capturing global and local styles (black box). MX-Font also succeeds in learning the style of the reference, but its content structure is unsatisfactory (blue box). Overall, our proposed method produces the best qualitative results among all compared methods in terms of both content preservation and style transfer.
This paper considers Chinese glyph font style transfer problem with few reference inputs. The model has three encoders for query, style and content references; one decoder for target generation; and two discriminators for style and content. The main novelty comes with the glyph attention design with both local and global structure for style and content.
SP:c50e58518244f90f768aa571d1bc4485dc9a5eec
GANet: Glyph-Attention Network for Few-Shot Font Generation
1 Introduction . With the increasing popularity of Chinese characters globally , the demand for Chinese fonts is rising . However , it ’ s very time consuming and expensive to design such font library . As we know , the number of letters in English alphabet is 52 ( upper and lower cases ) , much smaller than that of the Chinese characters which is over 20,000 . It takes about 6 months for an experienced designer to design a Chinese font library . Typically , the designer starts the font design with manually designing a few hundred of Chinese characters which encompass most of the radicals ( i.e. , the components of the characters ) . The rest of the characters will be completed by mixing and matching the radicals based on word vocabulary . However , it is very tedious and time consuming in both the initial design phase and the block building process . If only a few Chinese glyphs ( e.g . 4 or 8 ) are needed by manual design and there is a tool that can automatically complete the font library of the 20,000+ characters , designers ’ efficiency will be greatly improved . Recently , image to image translation based methods Jiang et al . ( 2019 ) have been proposed to generate fonts automatically . Most of these approaches are based on generative adversarial network Goodfellow et al . ( 2014 ) , which was shown to be able to transfer one font style to another successfully . However , they need large scale paired or un-paired source and target domain dataset to train a model . It ’ s inefficient because for every new font style , the pretrained model needs to be finetuned . Moreover , it is difficult and expensive to collect target font dataset such as Chinese historical calligraphic works , whose font glyphs may just have a small percentage left . But these methods typically require hundreds of font glyphs for training as a minimum . In order to address the limitation of lack of dataset , several few-shot font generation methods have been developed Gao et al . 
( 2019 ) Park et al . ( 2021 ) , which don ’ t demand a lot of glyphs to train the model . They are capable of producing a complete font library with just few samples . Although these methods can fuse source content and target style representations successfully with few font glyphs unseen style during training , they tend to output characters with either wrong content or weak style . The unsatisfactory results of recent few-shot font generation methods make us re-examine the style and content relationship in the glyphs . A Chinese glyph can be divided into two parts ( i.e . content and style ) . From the content perspective , we recognize a glyph ’ s meaning at a glance . It implies that human understands font ’ s content through vision based on font global information . Inspired by this observation , we propose a content glyph-attention module to capture the global features from the content set of glyphs . On the other side , special stroke and radical details of a glyph determine the its style . In other words , glyph style is closely related to local information . Based on this hypothesis , we propose a style glyph-attention module to encapsulate pattern information from the style set of glyphs . Our contributions are summarized as follows : • We propose a generic few-shot font generation framework that bridges the gap between font content and style . • We propose a content and a style glyph-attention modules to characterize overall structure information from the content set and local stroke details from the style set , respectively . • Experiments demonstrate superior performance in terms of both quantitative and qualitative results compared with other cutting-edge few-shot glyph generation methods . 2 Related Work 2.1 Style Transfer Few-shot Chinese font generation aims to combine the content and reference style . It is a special style transfer problem in essence . 
However , font design requires high accuracy in content structure , the result of traditional style transfer model for font generation is usually unsatisfactory . Gatys et al . ( 2016 ) proposed neural style transfer in VGG Simonyan & Zisserman ( 2014 ) feature space by reconstructing the gram matrix but with limited inference speed . Johnson et al . ( 2016 ) trained a fast neural style transfer ( FNST ) model , which is able to merge style and content representations in real-time . Although speed-up is achieved in FNST , a new style image is necessary to retrain the model for transfer . In order to solve this problem , Dumoulin et al . ( 2016 ) proposed a n-style transfer method which can embed n style features by using conditional instance normalization . Further improvement was made by Huang & Belongie ( 2017 ) by inventing AdaIN that can achieve arbitrary style transfer in real time without retraining a model . 2.2 Image to Image Translation Although style transfer method has advantage in combining content and style representations , it may damage the content structure . For font generation , it is crucial to keep character semantics so that they are visually distinguishable and recognizable . Image to image translation method ( pix2pix ) Isola et al . ( 2017 ) was devised to transfer from the source domain to the target domain ( e.g. , from a sketch to a real cat drawing ) . Pix2pix differs from style transfer based methods in that it prevents the structure from being destroyed . Nevertheless , the result quality of pix2pix is heavily dependent on paired dataset that is expensive to collect . In order to tackle this problem , Zhu et al . ( 2017 ) moved a step further by introducing unpaired image to image translation method ( CycleGAN ) , which maps the source domain to the target domain by using cyclic loss . 
Although CycleGAN made break-through by eliminating the necessity of paired dataset , it ’ s inefficient because different ( source , target ) domain tuples require different models for inference . Huang et al . ( 2018 ) extended CycleGAN by implementing multi-domin transfer in a single model . Recently , Liu et al . ( 2019 ) put forward few-shot image to image translation using AdaIN . 2.3 Attention Models Lately , attention mechanisms are widely adopted in several models in order to capture global dependencies Xu et al . ( 2015 ) . In particular , self-attention ( also known as intra-attention ) calculates the response at a position in a sequence by attending to all positions within the same sequence Parikh et al . ( 2016 ) . ( SAGAN ) Zhang et al . ( 2019 ) demonstrated that selfattention could improve image generation quality of the GAN model . ( non-local ) Wang et al . ( 2018 ) used a reformalized self-attention module in spatial-temporal domain between video sequences . To our knowledge , attention module has not yet been explored in font generation tasks . In this study , we apply content and style glyph attention modules to efficiently capture the characteristics of the font content and style sets . 2.4 Few-shot Font Generation Methods The goal of few-shot font generation method is to make the generated glyphs akin to the style references , which are only a few samples , without re-training a new model . At the same time , the generated glyphs ’ semantic is unchanged . Recently , Zhang et al . ( 2018 ) proposed EMD model that leverages a pair of encoders to merge content and style . Gao et al . ( 2019 ) invented AGIS-Net to reach the few-shot font generation goal by transferring both shape and texture with a few reference samples . Unlike other methods , MX-Font Park et al . ( 2021 ) extracts multiple style features not explicitly conditioned on component labels , but automatically by multiple experts to represent different local concepts . 3 METHOD . 
As mentioned above , our goal is to generate stylized glyph images from a small number of reference samples . We design three encoders , two glyph attention modules and one decoder to form a Glyph-Attention Network ( GANet ) . Details of the model architecture and loss functions are discussed in Sections 3.1 and 3.2 . 3.1 Network architecture As shown in Figure 1 , GANet consists of one query encoder , one style encoder , one content encoder , two glyph-attention modules and one decoder . In order to stablize the training process and improve the synthesized results quality , two discriminators are adopted . 3.1.1 Query , Content and Style Encoders We formulate the content , style and query encoding process as trainable high-dimensional glyph feature extractors . The content encoder Ec aims to extract semantic feature . The input of the content encoder is Xc = { x1 , x2 , ... , xN } , each of them has the same content but different style , the output of the content encoder is Fc = Ec ( Xc ) ∈ RN×H×W×C . Similarly , the style encoder Es is used to characterize style features from the the style set Xs = { y1 , y2 , ... , yN } , which contains N glyphs with the same style but different content . The output of the style encoder is Fs = Es ( Xs ) ∈ RN×H×W×C . The query encoder Eq is devised to obtain a query feature vector Fq that is essential for the glyph-attention modules to identify local style features from the style feature set Fs and generate the most proper global content feature from the content feature set Fc . The input of the query encoder is a glyph image Xq whose content is the same as the content in the content glyph set but with a new style . Output of the query encoder is Fq = Eq ( Xq ) ∈ RH×W×C . The three encoders Es , Eq and Ec have identical architecture but they do not share weights . Symbols H , W and C represent feature map ’ s height , width and channel respectively . 
3.1.2 Decoder The style glyph-attention module is intended to query a local style feature tensor from the style feature set . Similarly , the objective of the content glyph-attention module is to acquire the most proper glyph global content from the content feature set . The outputs of the content , style , and query encoders are added element-wise before being fed into the decoder . As defined in ( 1 ) F = SGA ( q , ks , vs ) + CGA ( q , kc , vc ) + Fq ( 1 ) O = Decoder ( F ) ( 2 ) Where SGA and CGA are abbreviations of Style and Content Glyph-Attention respectively , ks and vs are equal to Fs which is from the style encoder . Similar to SGA , kc and vc are equal to Fc which is from the content encoder . q is equal to Fq which is from the query encoder . O is the synthesized result generated by the decoder . 3.1.3 Multi-task discriminators It has been demonstrated that generative adversarial networks ( GAN ) Goodfellow et al . ( 2014 ) are able to generate output with distribution close to that of the actual data . Therefore , we can leverage GAN to develop a model that outputs synthesized glyph images whose content and style comply the distributions of the input content and reference style . In principle , a conventional discriminator , which can discern the fake and real data , can perform the discrimination task independently to complete the generation task . However , the synthesized results are not satisfactory because they are often wrong in content or weak in style . To enhance the model performance , multi-task discriminators are proposed to further disentangle content and style information . They are called content Dc and style Ds discriminators respectively , based on SNGAN Miyato & Koyama ( 2018 ) , which uses a trainable embedding matrix to obtain conditional information . Dc allows closer content distributions between the model output and the real data , and Ds helps the convergence of the output style to that of the real data . 
3.2 Glyph attention
Most few-shot image generation methods Liu et al. (2019) use adaptive instance normalization (AdaIN) Huang & Belongie (2017) to transfer the style of one image (e.g., texture and color) onto another that provides the content (e.g., pose and structure). However, transferring font style this way is challenging, because a font has neither texture nor color; it consists only of black and white strokes. Unlike AdaIN, we propose style and content glyph-attention modules to extract style and content features from custom glyph sets. The proposed Glyph-Attention Network (GANet) is named after these glyph-attention modules, which are shown in Figure 2.

3.2.1 Style glyph-attention
Font style mainly depends on local stroke details, i.e., the local features of a glyph dictate its style. Based on this hypothesis, we propose the style glyph-attention module. The style encoder provides a style feature set Fs ∈ R^{N×H×W×C} that contains the desired font style, and the style glyph-attention module queries style features from this set. Let vs and ks denote the value and key in the attention module; both equal Fs. The query vector q ∈ R^{H×W×C} comes from the query encoder. Key ks and query q are first transformed into two feature spaces fs and gs, where fs(q) = W_{fs} q and gs(ks) = W_{gs} ks. We then reshape gs(ks) into R^{NHW×C}, and transpose and reshape fs(q) into R^{C×HW}. Finally, they are used to compute the attention map β, as defined in (3):

β_{ij} = exp(ξ_{ij}) / Σ_{m=1}^{NHW} exp(ξ_{mj}), ξ = gs(ks) fs(q), ξ ∈ R^{NHW×HW} (3)

The attention entry β_{ij} indicates the extent to which the model attends to the i-th style location when synthesizing the j-th output location. Here H, W and C are the height, width and channels of the features from the preceding 1×1 conv layer, and N is the number of glyphs in the style set. The output of the style glyph-attention layer is Os = (O_1, O_2, ..., O_j, ..., O_{HW}), O_j ∈ R^{1×C}, where

O_{ij} = Σ_{m=1}^{NHW} τ_{im} β_{mj}, τ = hs(vs), τ ∈ R^{C×NHW} (4)

and τ is transposed and reshaped from hs(vs) = W_{hs} vs. In addition, W_{hs} ∈ R^{C×C}, W_{fs} ∈ R^{C×C} and W_{gs} ∈ R^{C×C} are the learned weights of the 1×1 conv layers of the style glyph-attention module. The final output is reshaped to Os ∈ R^{H×W×C}.

3.2.2 Content glyph-attention
Unlike the style glyph-attention module, which queries style features over local spatial areas, the content glyph-attention queries content features globally. It rests on the assumption that content information resides in the overall structure of the glyph skeletons. We define a content feature set Fc ∈ R^{N×H×W×C} extracted by the content encoder from the content set Xc = {x1, x2, ..., xN}. Let vc and kc denote the value and key in the attention module; both equal Fc. The query vector q ∈ R^{H×W×C} is identical to that in the style glyph-attention module. Similarly, key kc and query q are transformed into two feature spaces fc and gc, where fc(q) = W_{fc} q and gc(kc) = W_{gc} kc. Then gc(kc) ∈ R^{N×HWC} and fc(q) ∈ R^{HWC×1} are reshaped to compute the attention map γ, as defined in (5):

γ_i = exp(ξ_i) / Σ_{m=1}^{N} exp(ξ_m), ξ = gc(kc) fc(q), ξ ∈ R^{N×1} (5)

The attention entry γ_i indicates the extent to which the model attends to the i-th glyph in the content set. H, W and C are the height, width and channels of the features from the preceding 1×1 conv layer, and N is the number of glyphs in the content set. The output of the content glyph-attention layer is Oc = (O_1, O_2, ..., O_j, ..., O_{HWC}), O_j ∈ R, where

O_j = Σ_{m=1}^{N} τ_{mj} γ_m, τ = hc(vc), τ ∈ R^{N×HWC} (6)

and τ is reshaped from hc(vc) = W_{hc} vc. In addition, W_{hc} ∈ R^{C×C}, W_{fc} ∈ R^{C×C} and W_{gc} ∈ R^{C×C} are the learned weights of the 1×1 conv layers of the content glyph-attention module. The final output is reshaped to Oc ∈ R^{H×W×C}.
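The reshape-matmul-softmax pipeline of the style glyph-attention can be sketched directly in NumPy. This is a simplified sketch under the assumption that the learned 1×1 convolutions reduce to per-position C×C matrix multiplies (which they do for kernel size 1); the random weights `Wf`, `Wg`, `Wh` stand in for the trained parameters W_{fs}, W_{gs}, W_{hs}.

```python
import numpy as np

def style_glyph_attention(q, Fs, Wf, Wg, Wh):
    """Sketch of eqs. (3)-(4): attend over all N*H*W style locations
    for each of the H*W query locations."""
    H, W, C = q.shape
    f_q  = (q.reshape(-1, C) @ Wf).T        # f_s(q), reshaped to (C, HW)
    g_ks = Fs.reshape(-1, C) @ Wg           # g_s(k_s), reshaped to (NHW, C)
    xi   = g_ks @ f_q                       # (NHW, HW)
    beta = np.exp(xi - xi.max(axis=0))      # stable softmax over the NHW axis
    beta /= beta.sum(axis=0, keepdims=True)
    tau  = (Fs.reshape(-1, C) @ Wh).T       # h_s(v_s), reshaped to (C, NHW)
    Os   = tau @ beta                       # eq. (4): (C, HW)
    return Os.T.reshape(H, W, C)            # final output O_s in R^{H x W x C}

rng = np.random.default_rng(2)
N, H, W, C = 4, 8, 8, 16
q  = rng.standard_normal((H, W, C))
Fs = rng.standard_normal((N, H, W, C))
Wf, Wg, Wh = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
out = style_glyph_attention(q, Fs, Wf, Wg, Wh)
assert out.shape == (H, W, C)
```

The content glyph-attention follows the same pattern with a coarser granularity: it flattens each glyph's whole feature map to one HWC-dimensional descriptor and softmaxes over only the N glyphs.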
3.3 Loss function
Three loss terms are combined during model training: an identity loss, a feature matching loss, and a multi-task adversarial loss.

3.3.1 Identity loss
To make the output distribution of the model agree with that of the target in both pixel and feature space, we employ an identity loss. Perceptual loss Johnson et al. (2016) is an effective way to measure the distance between images in VGG feature space; compared with pixel-wise losses such as mean squared error (MSE), it recovers more high-frequency details. However, because of the downsampling (max-pooling) in VGG19, the perceptual loss alone loses some information during reconstruction, especially at low frequencies. To avoid losing low-frequency details, we add an L1 loss as a complementary term to the perceptual loss, as defined in (7):

L_id = ‖Φ_{3_1}(T) − Φ_{3_1}(G(Xc, Xq, Xs))‖_1 + ‖T − G(Xc, Xq, Xs)‖_1 (7)

where Φ_{3_1} is the feature map of VGG19 at relu3_1. Xc and Xs are the input content set and reference style set respectively, and Xq is the query image whose content matches the content set. T is the target image whose content matches Xc and whose style matches Xs. G is the generator, which consists of a content encoder, a query encoder, a style encoder, a decoder and two glyph-attention modules, as shown in Figure 1.

3.3.2 Feature matching loss
To stabilize the adversarial training, a feature matching (FM) loss Salimans et al. (2016) is also applied in this study. Similar to the perceptual similarity measure, the FM loss uses the discriminator networks as feature extractors, which contain a large amount of information about the glyph images.
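Eq. (7) is the sum of an L1 distance in a feature space and an L1 distance in pixel space. A minimal sketch follows; the `fake_vgg` callable is a hypothetical stand-in for the VGG19 relu3_1 extractor Φ_{3_1}, used only so the example is self-contained.

```python
import numpy as np

def l1(a, b):
    # mean-reduced L1 distance (a common reduction choice)
    return np.abs(a - b).mean()

def identity_loss(target, output, feat):
    """Sketch of eq. (7): feature-space L1 plus pixel-space L1.
    `feat` stands in for the VGG19 relu3_1 feature map."""
    return l1(feat(target), feat(output)) + l1(target, output)

rng = np.random.default_rng(3)
T = rng.random((128, 128, 3))          # target glyph image
O = rng.random((128, 128, 3))          # generator output G(Xc, Xq, Xs)
fake_vgg = lambda x: x[::4, ::4].mean(axis=-1)   # hypothetical extractor
loss = identity_loss(T, O, fake_vgg)
assert loss >= 0.0
assert identity_loss(T, T, fake_vgg) == 0.0      # zero when output == target
```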
To obtain more disentangled representations of content and style, we use Dc and Ds to extract feature maps, as defined in (8):

L_fm = Σ_{i=1}^{l} ‖D_c^{(i)}(T) − D_c^{(i)}(G(Xc, Xq, Xs))‖_1 + Σ_{i=1}^{l} ‖D_s^{(i)}(T) − D_s^{(i)}(G(Xc, Xq, Xs))‖_1 (8)

where D^{(i)} is the feature map of layer i in the discriminator. We use all layers except the fully-connected layer to extract features.

3.3.3 Multi-task adversarial loss
In the proposed framework, two conditional discriminators Dc and Ds are used to improve model performance, each with its own mission: Dc calibrates the model toward accurate content, and Ds toward stronger style. We use the hinge version of the adversarial loss Lim & Ye (2017) as the objective, as defined in (9)-(10):

L_G = −Ds(G(Xc, Xq, Xs)) − Dc(G(Xc, Xq, Xs)) (9)
L_D = max(0, 1 + Dc(G(Xc, Xq, Xs))) + max(0, 1 − Dc(T)) + max(0, 1 + Ds(G(Xc, Xq, Xs))) + max(0, 1 − Ds(T)) (10)

3.3.4 Total loss
A simple additive form is used for the total loss, as illustrated in (11)-(12):

L_θ(G) = λ1 L_id + λ2 L_fm + L_G (11)
L_ω(D) = L_D (12)

where λ1 and λ2 are hyperparameters of the generator loss, and θ and ω are the weights of the generator and discriminators, respectively.

4 Experiments
4.1 Experimental settings
We use the Adam optimizer Kingma & Ba (2014) for all models, with hyperparameters β1 = 0 and β2 = 0.9. Following the two-time-scale learning-rate strategy Heusel et al. (2017), we set the learning rate of all discriminators to 4e−4 and that of the generator to 1e−4. The number of glyphs in both Xc and Xs is set to 8, and the size of all images is 128 × 128 × 3. The loss weights are chosen as λ1 = 1 and λ2 = 10. We train the model with a batch size of 4 for 500,000 iterations on an Nvidia GTX 1080Ti GPU with 12 GB of memory.

4.2 Datasets and evaluation metrics
We collected 411 Chinese fonts conforming to the GB2312 standard.
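The hinge adversarial terms in eqs. (9)-(10) are simple to state in code. A sketch with scalar logits standing in for the outputs of the content and style discriminators (the values below are arbitrary, chosen only for illustration):

```python
def hinge_d_loss(d_real, d_fake):
    # One discriminator's contribution to eq. (10):
    # max(0, 1 - D(real)) + max(0, 1 + D(fake))
    return max(0.0, 1.0 - d_real) + max(0.0, 1.0 + d_fake)

def hinge_g_loss(d_fake):
    # One discriminator's contribution to eq. (9): -D(fake)
    return -d_fake

# Toy scalar logits standing in for Dc and Ds on real/generated glyphs.
dc_real, dc_fake = 0.8, -0.5
ds_real, ds_fake = 1.2, 0.3

LD = hinge_d_loss(dc_real, dc_fake) + hinge_d_loss(ds_real, ds_fake)
LG = hinge_g_loss(dc_fake) + hinge_g_loss(ds_fake)
print(LD, LG)
```

The hinge on the discriminator side saturates once a sample is confidently classified (margin > 1), while the generator side stays linear, which is part of why this variant trains stably.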
Each font contains 6,763 Chinese characters. We randomly selected 405 fonts as the training set and kept the remaining 6 fonts as the test set. Several metrics are used to evaluate the quality of the synthesized glyphs. To measure the similarity between a generated image and its target, intersection over union (IOU) is utilized. We further employ two classifiers with an Inception-v3 Szegedy et al. (2016) backbone to recognize content and style labels on the test set.

4.3 Benchmarking
We compare our method with four many-shot font generation methods (i.e., zi2zi Tian, pix2pix Isola et al. (2017), CycleGAN Zhu et al. (2017), ZiGAN Wen et al. (2021)) and three state-of-the-art few-shot font generation methods (i.e., FUNIT Liu et al. (2019), MX-Font Park et al. (2021) and AGIS-Net Gao et al. (2019)). The four many-shot methods require large amounts of training data, and a separate model must be trained for each font style before inference. Accordingly, for the many-shot methods we further split each test font library into 5,763 glyphs for training and 1,000 glyphs for testing, so each of the four many-shot methods yields 6 models. In contrast, few-shot methods, including the proposed GANet, need only a few glyphs to transfer a font style at inference time, and no retraining is necessary for new font styles. Unlike the many-shot methods, the few-shot models are trained on the selected 405 font libraries; during inference, they take a few glyphs (i.e., 1, 2, 4 or 8) as references to synthesize the font library rather than training a new model. For a fair comparison, we use the 1,000 glyphs split from the test set to evaluate all models.

4.4 Quantitative evaluation
We assess the quality of the generated images in two respects. First, intersection over union (IOU) measures the similarity between a synthesized glyph and the ground truth; a higher IOU score indicates a better result.
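For binary glyph images, IOU reduces to counting ink pixels. A minimal sketch, under the hypothetical convention that ink pixels are those below a grayscale threshold (dark strokes on a white background); the paper does not specify its binarization, so both the threshold and the convention are assumptions here:

```python
def glyph_iou(a, b, thresh=0.5):
    """IOU between two grayscale glyph images given as nested lists in
    [0, 1], binarized at `thresh` (ink = pixel value below threshold)."""
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            ia, ib = pa < thresh, pb < thresh
            inter += ia and ib      # pixel is ink in both glyphs
            union += ia or ib       # pixel is ink in at least one glyph
    return inter / union if union else 1.0

# Tiny 2x3 example: 3 ink pixels each, 2 shared, 4 in the union.
a = [[0.0, 0.0, 1.0],
     [1.0, 0.0, 1.0]]
b = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 0.0]]
print(glyph_iou(a, b))  # -> 0.5
```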
As shown in Table 1, we randomly sample four reference images from the 5,763 training glyphs of each test font, based on which each model generates 1,000 glyphs. There are 6 fonts in the test dataset in total, so the metrics in the table are computed over 1,000 × 6 glyphs per model and averaged. Table 1 shows that our method outperforms previous state-of-the-art few-shot approaches by a wide margin and generates results comparable to the many-shot approaches, while needing only a few reference samples and a single round of training. By contrast, the many-shot methods demand re-training on thousands of glyphs for each new font style. IOU evaluates similarity at the pixel level but cannot capture style and content per se; moreover, pixel-based metrics are often inconsistent with human visual perception. To make the evaluation more reliable, we trained two classifiers for content and style classification respectively and used them to measure prediction accuracy on the test set. As shown in Figure 4, MX-Font and FUNIT achieve comparable style accuracy but lower content preservation, while AGIS-Net achieves comparable content accuracy but weak style. In other words, FUNIT and MX-Font focus on style and fail to preserve the content structure, whereas AGIS-Net focuses on content and fails to transfer enough style.

4.5 Qualitative analysis
To demonstrate the model's performance visually, we evaluate the models using the remaining 6 fonts in the test set as reference glyphs. Figure 3 presents images generated by the many-shot and few-shot methods on the test dataset. The many-shot methods except ZiGAN often fail in content preservation, as shown in the red box. Among the few-shot methods, FUNIT learns the style, but the local structure of the generated glyphs is weak, as highlighted by the yellow box.
Compared with FUNIT, AGIS-Net performs better in content preservation but poorly in capturing global and local styles (black box). MX-Font also succeeds in learning the style of the reference, but its content structure is unsatisfactory (blue box). Overall, the proposed method generates the best results qualitatively, in terms of both content preservation and style transfer.
This paper proposed a few-shot font generation method, GANet. The key idea is to design glyph-attention modules including the style glyph-attention module and the content glyph-attention module to recover the glyph in the target style from the queried glyph. The multi-task adversarial loss was also employed to further improve the synthesizing performance. Experiments conducted on a dataset constructed by the authors verified the effectiveness of the proposed method in handling the task of few-shot Chinese font synthesis.
Data Poisoning Won’t Save You From Facial Recognition
1 INTRODUCTION
Facial recognition systems pose a serious threat to individual privacy. Various companies routinely scrape the Web for users' pictures to train large-scale facial recognition systems (Hill, 2020a; Harwell, 2021), and then make these systems available to law enforcement agencies (Lipton, 2020) or private individuals (Harwell, 2021; Mozur & Krolik, 2019; Wong, 2019). A growing body of work develops tools that allow users to fight back, using techniques from adversarial machine learning (Sharif et al., 2016; Oh et al., 2017; Thys et al., 2019; Kulynych et al., 2020; Shan et al., 2020; Evtimov et al., 2020; Gao et al., 2020; Xu et al., 2020; Yang et al., 2020; Komkov & Petiushko, 2021; Cherepanova et al., 2021a; Rajabi et al., 2021; Browne et al., 2020). One approach taken by these tools allows users to perturb any picture before posting it online, so that a facial recognition model trained on these pictures becomes poisoned. The objective of these perturbed images is that when any unperturbed image is fed into the facial recognition model (e.g., a photo taken by a stalker, a security camera, or the police), the model misidentifies the user. This research direction was popularized by Fawkes (Shan et al., 2020), an academic image-poisoning system with 500,000+ downloads, covered by the New York Times (Hill, 2020b), that promises "strong protection against unauthorized [facial recognition] models". Following the success of Fawkes, similar systems have been proposed by academic (Cherepanova et al., 2021a; Evtimov et al., 2020) and commercial (Vincent, 2021) parties. This paper shows that these systems (and, in fact, any poisoning strategy) cannot protect users' privacy. Worse, we argue that these systems offer a false sense of security.
There exists a class of privacy-conscious users who might otherwise never have uploaded their photos to the internet, but who now might do so under the false belief that data poisoning will protect their privacy. These users are now less private than they were before. Figure 1 shows an overview of our results. The reason these systems are not currently private, and can never be private, comes down to a fundamental asymmetry between Web users and the trainers of facial recognition models. Once a user commits to an attack and uploads a perturbed picture that gets scraped, the perturbation can no longer be changed. The model trainer, who acts second, then gets to choose their training strategy. As prior work lacks a formal security setup, we begin by defining a security game that captures this dynamic nature of poisoning attacks. We then introduce two powerful defense strategies that completely break two state-of-the-art poisoning attacks, Fawkes (Shan et al., 2020) and LowKey (Cherepanova et al., 2021a). In the first strategy, we show how to adapt facial recognition training to work in the presence of poisoned images. Because image-perturbation systems are made publicly accessible to cater to a large user base (Shan et al., 2021; Cherepanova et al., 2021b), we must assume facial recognition trainers are aware of these attack strategies. Our adaptive models fully circumvent these poisoning tools with only black-box access to the attack system. Worse, we find there exists an even simpler defensive strategy: model trainers can simply wait for better facial recognition systems that are no longer vulnerable to these particular poisoning attacks. That is, because existing poisoning attacks were designed only to prevent current face recognition tools from working, there is no reason to believe that future tools will be poisoned as well.
Indeed, we show that the state-of-the-art poisoning attacks are already broken by new training techniques that appeared less than a year later. For example, Fawkes (released in July 2020) is ineffective if the model trainer switches to a MagFace model (Meng et al., 2021) (released in March 2021), and LowKey (released in January 2021) is ineffective against a facial recognition model obtained by finetuning OpenAI's CLIP model (Radford et al., 2021) (also released in January 2021). We argue that poisoning attacks against facial recognition will not lead to an "arms race" in which new attacks continuously counteract new defenses. Since the perturbation applied to a picture cannot be changed once the picture is scraped, a successful poisoning attack has to remain effective against all future models, even models trained adaptively against the attack, or models that use new techniques discovered only after the attack. In light of this, we argue that users' only hope is a push for legislation that restricts the use of privacy-invasive facial recognition systems (Singer, 2018; Weise & Singer, 2020; Winder, 2020).

2 DATA POISONING FOR FACIAL RECOGNITION
2.1 THREAT MODEL
We consider a setting where a user uploads pictures of themselves to an online service such as a social media platform. The user attempts to protect their pictures by adding perturbations that should be almost imperceptible to other people (Szegedy et al., 2013). The user's goal is that a model trained on their perturbed pictures will achieve low accuracy when classifying unperturbed pictures of the user (Shan et al., 2020; Cherepanova et al., 2021a; Yang et al., 2020; Evtimov et al., 2020). A second party, the model trainer, scrapes the Web for pictures to train a large-scale facial recognition model (capable of identifying a large number of users). We assume that the data scraped by the trainer is labeled, i.e.
, all (possibly perturbed) images collected of a user can be assigned to the user's identity. The trainer's goal is to build a model that correctly recognizes users in future images. The trainer is active, i.e., they continuously scrape newly uploaded pictures at regular intervals. This setting corresponds to that of training-only clean-label poisoning attacks (Shan et al., 2020; Cherepanova et al., 2021a; Goldblum et al., 2020; Evtimov et al., 2020). Keeping with the terminology of the data poisoning literature (Goldblum et al., 2020), we refer to the user as the attacker and the trainer as the defender (even though it is the trainer who aims to breach the user's privacy!).

2.2 POISONING ATTACK GAMES
We present a standard security game for training-only clean-label poisoning attacks in Figure 2a, and argue that this game fails to properly capture the threat model of our facial recognition scenario. In this game, the attacker first samples training data X, Y from a distribution D. In the facial recognition setting, the data consists of facial images of users X along with their identities Y. The attacker then applies an attack to obtain the perturbed data X_adv. The defender receives the perturbed labeled data (X_adv, Y) and trains a model f. The model f is evaluated on unperturbed inputs x from the distribution D. For a given test input x, the attacker wins the game if the perturbation of the training data is small (as measured by an oracle O(X, X_adv) ↦ {0, 1}), and if the model misclassifies x. The poisoning game in Figure 2a fails to capture an important facet of the facial recognition problem: the problem is not static. Users continuously upload new pictures, and the model trainer actively scrapes them to update their model. Below, we introduce a dynamic version of the poisoning game, and show how a model trainer can use a retroactive defense strategy to win the game.
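The static game just described can be made concrete with a toy instantiation. The sketch below uses scalar "images", a two-identity distribution and a 1-nearest-neighbour "model"; every component is a stand-in chosen only to show the order of moves (attacker perturbs first, defender trains second, evaluation is on clean inputs), not the paper's actual setup.

```python
import random

random.seed(0)

def sample_data(n):
    # D: two identities whose "faces" cluster around 0.0 and 1.0
    return [(random.gauss(y, 0.1), y) for y in (0, 1) for _ in range(n)]

def attack(data, eps=0.05):
    # user: apply a small, fixed perturbation before upload
    return [(x + eps, y) for x, y in data]

def train(data):
    # trainer: 1-nearest-neighbour classifier over the scraped data
    def f(x):
        return min(data, key=lambda p: abs(p[0] - x))[1]
    return f

data = sample_data(20)
f = train(attack(data))      # model is trained on the poisoned pictures
x, y = 0.0, 0                # a clean test picture of identity 0
attacker_wins = (f(x) != y)  # attacker wins only if the model misclassifies
print(attacker_wins)
```

With a perturbation far smaller than the separation between identities, the nearest-neighbour model still recognizes the clean test point, so the attacker loses this toy instance.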
In turn, we discuss how users and model trainers may adapt their strategies based on the other party's actions.
Dynamic poisoning attacks. To capture the dynamic nature of the facial recognition game, we define a generalized game for clean-label poisoning attacks in Figure 2b. The game now operates in rounds indexed by i ≥ 1. In each round, the attacker perturbs new pictures and sends them to the defender. The strategies of the attacker and defender may change from one round to the next. The game in Figure 2b allows the data distribution D_i to change across rounds: new users might begin uploading pictures, and users' faces may change over time. Yet our thesis is that the main challenge faced by the user is precisely that the distribution of pictures of their own face changes little over time. For example, a facial recognition model trained on pictures of a user at 20 years old can reliably recognize pictures of the same user at 30 years old (Ling et al., 2010). Thus, in each round the defender can reuse training data (X_adv, Y) collected in prior rounds. Once the defender scrapes a user's images, the perturbations applied to these images cannot later be changed.
Retroactive defenses. The observation above places a high burden on the attacker. Suppose that in round i, the defender discovers a training technique train_i that is resilient to past poisoning attacks Attack_j for j < i. Then the defender can train their model solely on data (X_adv, Y) collected up to round j. From there on, the defender can trivially win the game by simply ignoring future training data (until they find a defense against newer attacks as well). Thus, the attacker's perturbations have to work against all future defenses, even those applied retroactively, for as long as the user's facial features do not naturally change. By design, this retroactive defense does not lead to an "arms race" with future attacks.
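The retroactive strategy can be sketched as a few lines of bookkeeping: scraped data is frozen per round, and the defender trains only on rounds whose attacks they already defeat. Round indices and the string "datasets" below are symbolic stand-ins for illustration.

```python
scraped = {}                          # round index -> dataset scraped that round

def scrape(round_i, data):
    # Once scraped, the perturbations in `data` can never be changed
    # by the attacker, so the defender may revisit this data forever.
    scraped[round_i] = data

def retroactive_train(up_to_round):
    # Train only on data from rounds whose attacks are already broken,
    # simply ignoring everything scraped later.
    old = [d for i, d in scraped.items() if i <= up_to_round]
    return [x for batch in old for x in batch]

scrape(1, ["img_a_v1", "img_b_v1"])   # pictures poisoned with Attack_1
scrape(2, ["img_a_v2"])               # pictures poisoned with a newer Attack_2
training_set = retroactive_train(up_to_round=1)
print(training_set)                   # only round-1 data is used
```

The key point the sketch makes: the defender's choice of `up_to_round` can be made after seeing every attack, while the attacker's perturbations were fixed at scraping time.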
The defender applies newly discovered defenses to past pictures only . As we will show , this retroactive defense can even be instantiated by a fully oblivious model trainer , with no knowledge of users ’ attacks . The model trainer simply waits for a better facial recognition model to be developed , and then applies the model to pictures scraped before the new model was published . This oblivious strategy demonstrates the futility of preventing facial recognition with data poisoning , so long as progress in facial recognition models is expected to continue in the future . Adaptive defenses . A model trainer that does not want to wait for progress in facial recognition can exploit another source of asymmetry over users : adaptivity . In our setting , it is easier for the defender to adapt to the attacker , than vice-versa . Indeed , users must perturb their pictures before the model trainer scrapes them and feeds them to a secret training algorithm . As the trainer ’ s model f will likely be inaccessible to users , users will have no idea if their attack actually succeeded or not . In contrast , the users ’ attack strategy is likely public ( at least as a black-box ) to support users with minimal technical background . For example , Fawkes offers open-source software to perturb images ( Shan et al. , 2021 ) , and LowKey ( Cherepanova et al. , 2021b ) and DoNotPay ( Vincent , 2021 ) offer a Web API . The defender can thus assemble a dataset of perturbed images and use them to train a model . We call such a defender adaptive . A note on evasion and obfuscation attacks . The security games in Figure 2 assume that the training data is “ clean label ” ( i.e. , the user can still be identified in their pictures by other human users ) and that the evaluation data is unperturbed . This is the setting considered by Fawkes ( Shan et al. , 2020 ) and LowKey ( Cherepanova et al. 
, 2021a ) , where a user shares their pictures online , but the user can not control the pictures that are fed to the facial recognition model ( e.g. , pictures taken by a stalker , a security camera , or law enforcement ) . The game dynamics change if the user evades the model with adversarial examples , by modifying their facial appearance at test time ( Szegedy et al. , 2013 ; Sharif et al. , 2016 ; Thys et al. , 2019 ; Gao et al. , 2020 ; Cilloni et al. , 2020 ; Rajabi et al. , 2021 ; Oh et al. , 2017 ; Deb et al. , 2019 ; Browne et al. , 2020 ; Deb et al. , 2020 ) . Evasion attacks favor the attacker : the defender must commit to a defense and the attacker can adapt their strategy accordingly ( Tramer et al. , 2020 ) . Our setting and security game also do not capture face obfuscation or anonymization techniques ( Newton et al. , 2005 ; Sun et al. , 2018a ; b ; Sam et al. , 2020 ; Cao et al. , 2021 ; Maximov et al. , 2020 ; Gafni et al. , 2019 ) . These attacks remove or synthetically replace a user ’ s face , and thus fall outside of our threat model of clean-label poisoning attacks ( i.e. , the aim of these works is to remove identifying features from uploaded pictures , so that even a human user would fail to identify the user ) .
This paper studies the effect of data poisoning in face recognition and its relation to defense techniques. Conventionally, poisoned data will defeat face recognition models that are trained without a defense strategy. Two defense solutions are given: an oblivious trainer and an adaptive trainer. The claim is that no existing poisoning method can protect the privacy of users in face images.
, 2021a ) , where a user shares their pictures online , but the user can not control the pictures that are fed to the facial recognition model ( e.g. , pictures taken by a stalker , a security camera , or law enforcement ) . The game dynamics change if the user evades the model with adversarial examples , by modifying their facial appearance at test time ( Szegedy et al. , 2013 ; Sharif et al. , 2016 ; Thys et al. , 2019 ; Gao et al. , 2020 ; Cilloni et al. , 2020 ; Rajabi et al. , 2021 ; Oh et al. , 2017 ; Deb et al. , 2019 ; Browne et al. , 2020 ; Deb et al. , 2020 ) . Evasion attacks favor the attacker : the defender must commit to a defense and the attacker can adapt their strategy accordingly ( Tramer et al. , 2020 ) . Our setting and security game also do not capture face obfuscation or anonymization techniques ( Newton et al. , 2005 ; Sun et al. , 2018a ; b ; Sam et al. , 2020 ; Cao et al. , 2021 ; Maximov et al. , 2020 ; Gafni et al. , 2019 ) . These attacks remove or synthetically replace a user ’ s face , and thus fall outside of our threat model of clean-label poisoning attacks ( i.e. , the aim of these works is to remove identifying features from uploaded pictures , so that even a human user would fail to identify the user ) .
This paper points out that current data poisoning techniques cannot effectively protect users' privacy, i.e., face data, on the Internet. The authors examine several strategies that enable modern face recognition models to withstand attacks from widely used data poisoning methods. Experimental results suggest that these data poisoning attacks can be easily defeated by adaptively tuning the face recognition models or by using more advanced algorithms that will be developed in the future. The main conclusion is that people should not rely on technical solutions to protect users' privacy, and that legislative action is what is actually needed.
SP:23cca476cbbe185c5c3f88c910817da2ff6d6458
Data Poisoning Won’t Save You From Facial Recognition
1 INTRODUCTION . Facial recognition systems pose a serious threat to individual privacy . Various companies routinely scrape the Web for users ' pictures to train large-scale facial recognition systems ( Hill , 2020a ; Harwell , 2021 ) , and then make these systems available to law enforcement agencies ( Lipton , 2020 ) or private individuals ( Harwell , 2021 ; Mozur & Krolik , 2019 ; Wong , 2019 ) . A growing body of work develops tools to allow users to fight back , using techniques from adversarial machine learning ( Sharif et al. , 2016 ; Oh et al. , 2017 ; Thys et al. , 2019 ; Kulynych et al. , 2020 ; Shan et al. , 2020 ; Evtimov et al. , 2020 ; Gao et al. , 2020 ; Xu et al. , 2020 ; Yang et al. , 2020 ; Komkov & Petiushko , 2021 ; Cherepanova et al. , 2021a ; Rajabi et al. , 2021 ; Browne et al. , 2020 ) . One approach taken by these tools allows users to perturb any picture before they post it online , so that a facial recognition model that trains on these pictures will become poisoned . The objective of these perturbed images is that when any unperturbed image is fed into the facial recognition model ( e.g. , a photo taken by a stalker , a security camera , or by the police ) , the model misidentifies the user . This research direction was popularized by Fawkes ( Shan et al. , 2020 ) , an academic image-poisoning system with 500,000+ downloads that was covered by the New York Times ( Hill , 2020b ) and promises “ strong protection against unauthorized [ facial recognition ] models ” . Following the success of Fawkes , similar systems have been proposed by academic ( Cherepanova et al. , 2021a ; Evtimov et al. , 2020 ) and commercial ( Vincent , 2021 ) parties . This paper shows that these systems ( and , in fact , any poisoning strategy ) can not protect users ' privacy . Worse , we argue that these systems offer a false sense of security .
There exists a class of privacy-conscious users who might have otherwise never uploaded their photos to the internet , but who now might do so under the false belief that data poisoning will protect their privacy . These users are now less private than they were before . Figure 1 shows an overview of our results . The reason these systems are not currently private , and can never be private , comes down to a fundamental asymmetry between Web users and the trainers of facial recognition models . Once a user commits to an attack and uploads a perturbed picture that gets scraped , this perturbation can no longer be changed . The model trainer , who acts second , then gets to choose their training strategy . As prior work lacks a formal security setup , we begin by defining a security game to capture this dynamic nature of poisoning attacks . We then introduce two powerful defense strategies that completely break two state-of-the-art poisoning attacks—Fawkes ( Shan et al. , 2020 ) and LowKey ( Cherepanova et al. , 2021a ) . In the first strategy , we show how to adapt the facial recognition training to work in the presence of poisoned images . Because image-perturbation systems are made publicly accessible to cater to a large user base ( Shan et al. , 2021 ; Cherepanova et al. , 2021b ) , we must assume facial recognition trainers are aware of these attack strategies . Our adaptive models fully circumvent these poisoning tools with just black-box access to the attack system . Worse , we find there exists an even simpler defensive strategy : model trainers can just wait for better facial recognition systems , which are no longer vulnerable to these particular poisoning attacks . That is , because existing poisoning attacks were only designed to prevent current face recognition tools from working , there is no reason to believe that future tools will be poisoned as well .
Indeed , we show that the state-of-the-art poisoning attacks are already broken by new training techniques that appeared less than a year later . For example , Fawkes ( released in July 2020 ) is ineffective if the model trainer switches to a MagFace model ( Meng et al. , 2021 ) ( released in March 2021 ) , and LowKey ( released January 2021 ) is ineffective against a facial recognition model obtained by finetuning OpenAI ’ s CLIP model ( Radford et al. , 2021 ) ( also released January 2021 ) . We argue that poisoning attacks against facial recognition will not lead to an “ arms race ” , where new attacks can continuously counteract new defenses . Since the perturbation applied to a picture can not be changed once the picture is scraped , a successful poisoning attack has to remain effective against all future models , even models trained adaptively against the attack , or models that use new techniques discovered only after the attack . In light of this , we argue that users ’ only hope is a push for legislation that restricts the use of privacy-invasive facial recognition systems ( Singer , 2018 ; Weise & Singer , 2020 ; Winder , 2020 ) . 2 DATA POISONING FOR FACIAL RECOGNITION . 2.1 THREAT MODEL . We consider a setting where a user uploads pictures of themselves to an online service such as a social media platform . The user attempts to protect their pictures by adding perturbations that should be almost imperceptible to other people ( Szegedy et al. , 2013 ) . The user ’ s goal is that a model trained on their perturbed pictures will achieve low accuracy when classifying unperturbed pictures of the user ( Shan et al. , 2020 ; Cherepanova et al. , 2021a ; Yang et al. , 2020 ; Evtimov et al. , 2020 ) . A second party , the model trainer , scrapes the Web for pictures to train a large-scale facial recognition model ( capable of identifying a large number of users ) . We assume that the data scraped by the trainer is labeled , i.e. 
, all ( possibly perturbed ) images collected of a user can be assigned to the user ’ s identity . The trainer ’ s goal is to build a model that correctly recognizes users in future images . The trainer is active , i.e. , they continuously scrape new uploaded pictures at regular intervals . This setting corresponds to that of training-only clean-label poisoning attacks ( Shan et al. , 2020 ; Cherepanova et al. , 2021a ; Goldblum et al. , 2020 ; Evtimov et al. , 2020 ) . Keeping with the terminology of the data poisoning literature ( Goldblum et al. , 2020 ) , we refer to the user as the attacker and the trainer as the defender ( even though it is the trainer that aims to breach the user ’ s privacy ! ) . 2.2 POISONING ATTACK GAMES . We present a standard security game for training-only clean-label poisoning attacks in Figure 2a . We argue that this game fails to properly capture the threat model of our facial recognition scenario . In this game , the attacker first samples training data X , Y from a distribution D. In the setting of facial recognition , the data consists of facial images of users X , along with their identities Y . The attacker then applies an attack to get the perturbed data Xadv . The defender gets the perturbed labeled data ( Xadv , Y ) and trains a model f . The model f is evaluated on unperturbed inputs x from the distribution D. For a given test input x , the attacker wins the game if the perturbation of the training data is small ( as measured by an oracle O ( X , Xadv ) 7→ { 0 , 1 } ) , and if the model misclassifies x . The poisoning game in Figure 2a fails to capture an important facet of the facial recognition problem . The problem is not static : users continuously upload new pictures , and the model trainer actively scrapes them to update their model . Below , we introduce a dynamic version of the poisoning game , and show how a model trainer can use a retroactive defense strategy to win the game . 
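The static game of Figure 2a can be written out as a short procedure. This is only an illustrative sketch: `sample_data`, `attack`, `train`, and `oracle` are placeholders standing in for the abstract components of the game, not part of any real attack or defense.

```python
def static_poisoning_game(sample_data, attack, train, oracle, test_point):
    """One-shot clean-label poisoning game (sketch of Figure 2a).

    The attacker perturbs the training data once, the defender trains on
    the poisoned data, and the attacker wins iff the perturbation is small
    (as judged by the oracle O) and the model errs on a clean test input.
    """
    X, Y = sample_data()                   # (X, Y) ~ D: images and identities
    X_adv = attack(X, Y)                   # attacker perturbs training images
    f = train(X_adv, Y)                    # defender trains model f
    x, y = test_point                      # clean, unperturbed test input
    return oracle(X, X_adv) and f(x) != y  # attacker wins?
```

Note that the attacker commits to `X_adv` before `train` runs, which is exactly the asymmetry the rest of the section exploits.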
In turn , we discuss how users and model trainers may adapt their strategies based on the other party ’ s actions . Dynamic poisoning attacks . To capture the dynamic nature of the facial recognition game , we define a generalized game for clean-label poisoning attacks in Figure 2b . The game now operates in rounds indexed by i ≥ 1 . In each round , the attacker perturbs new pictures and sends them to the defender . The strategies of the attacker and defender may change from one round to the next . The game in Figure 2b allows for the data distribution Di to change across rounds . Indeed , new users might begin uploading pictures , and users ’ faces may change over time . Yet , our thesis is that the main challenge faced by the user is precisely that the distribution of pictures of their own face changes little over time . For example , a facial recognition model trained on pictures of a user at 20 years old can reliably recognize pictures of the same user at 30 years old ( Ling et al. , 2010 ) . Thus , in each round the defender can reuse training data ( Xadv , Y ) collected in prior rounds . If the defender scrapes a user ’ s images , the perturbations applied to these images can not later be changed . Retroactive defenses . The observation above places a high burden on the attacker . Suppose that in round i , the defender discovers a training technique traini that is resilient to past poisoning attacks Attackj for j < i . Then , the defender can train their model solely on data ( Xadv , Y ) collected up to round j . From there on , the defender can trivially win the game by simply ignoring future training data ( until they find a defense against newer attacks as well ) . Thus , the attacker ’ s perturbations have to work against all future defenses , even those applied retroactively , for as long as the user ’ s facial features do not naturally change . By design , this retroactive defense does not lead to an “ arms race ” with future attacks . 
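The retroactive strategy described above is simple enough to sketch directly. Assume the defender keeps scraped data bucketed by round; once a training technique resilient to all attacks up to round j is found, the defender refits on the old rounds only. The function and argument names here are illustrative, not from the paper:

```python
def retroactive_defense(scraped, resilient_up_to, train):
    """Train only on data scraped in rounds <= resilient_up_to, i.e. rounds
    whose poisoning attacks the new training technique is known to resist.
    Later (possibly newly-attacked) uploads are simply ignored."""
    old = [ex for rnd in sorted(scraped) if rnd <= resilient_up_to
           for ex in scraped[rnd]]
    return train(old)
```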
The defender applies newly discovered defenses to past pictures only . As we will show , this retroactive defense can even be instantiated by a fully oblivious model trainer , with no knowledge of users ’ attacks . The model trainer simply waits for a better facial recognition model to be developed , and then applies the model to pictures scraped before the new model was published . This oblivious strategy demonstrates the futility of preventing facial recognition with data poisoning , so long as progress in facial recognition models is expected to continue in the future . Adaptive defenses . A model trainer that does not want to wait for progress in facial recognition can exploit another source of asymmetry over users : adaptivity . In our setting , it is easier for the defender to adapt to the attacker , than vice-versa . Indeed , users must perturb their pictures before the model trainer scrapes them and feeds them to a secret training algorithm . As the trainer ’ s model f will likely be inaccessible to users , users will have no idea if their attack actually succeeded or not . In contrast , the users ’ attack strategy is likely public ( at least as a black-box ) to support users with minimal technical background . For example , Fawkes offers open-source software to perturb images ( Shan et al. , 2021 ) , and LowKey ( Cherepanova et al. , 2021b ) and DoNotPay ( Vincent , 2021 ) offer a Web API . The defender can thus assemble a dataset of perturbed images and use them to train a model . We call such a defender adaptive . A note on evasion and obfuscation attacks . The security games in Figure 2 assume that the training data is “ clean label ” ( i.e. , the user can still be identified in their pictures by other human users ) and that the evaluation data is unperturbed . This is the setting considered by Fawkes ( Shan et al. , 2020 ) and LowKey ( Cherepanova et al. 
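Since tools like Fawkes and LowKey expose their perturbation publicly (as an app or Web API), an adaptive trainer can run the public attack on their own clean images and train on both versions. A minimal sketch, where the `perturb` callable is a placeholder for black-box access to the attack tool:

```python
def adaptive_training_set(images, labels, perturb):
    """Augment a clean dataset with black-box perturbed copies, keeping the
    identity label for each version, so that training can learn features
    robust to the public perturbation tool."""
    perturbed = [perturb(x) for x in images]
    return images + perturbed, labels + labels
```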
, 2021a ) , where a user shares their pictures online , but the user can not control the pictures that are fed to the facial recognition model ( e.g. , pictures taken by a stalker , a security camera , or law enforcement ) . The game dynamics change if the user evades the model with adversarial examples , by modifying their facial appearance at test time ( Szegedy et al. , 2013 ; Sharif et al. , 2016 ; Thys et al. , 2019 ; Gao et al. , 2020 ; Cilloni et al. , 2020 ; Rajabi et al. , 2021 ; Oh et al. , 2017 ; Deb et al. , 2019 ; Browne et al. , 2020 ; Deb et al. , 2020 ) . Evasion attacks favor the attacker : the defender must commit to a defense and the attacker can adapt their strategy accordingly ( Tramer et al. , 2020 ) . Our setting and security game also do not capture face obfuscation or anonymization techniques ( Newton et al. , 2005 ; Sun et al. , 2018a ; b ; Sam et al. , 2020 ; Cao et al. , 2021 ; Maximov et al. , 2020 ; Gafni et al. , 2019 ) . These attacks remove or synthetically replace a user ’ s face , and thus fall outside of our threat model of clean-label poisoning attacks ( i.e. , the aim of these works is to remove identifying features from uploaded pictures , so that even a human user would fail to identify the user ) .
Recent works propose to protect users from facial recognition by poisoning their images before uploading them to the Internet (called poisoning "attacks"). This paper reveals the flaws of these methods by designing two effective methods (called "defenses") to defeat that protection mechanism. The first approach is the adaptive defense, in which the model trainer is assumed to have black-box access to the poisoning function. They can then collect a clean facial image dataset, create perturbed and unperturbed versions, and finetune the face recognition model to learn robust features. The second approach, called the oblivious defense, relies on the fact that poisoned examples do not transfer well over time to newer models. Hence, the model trainer can simply wait for a better face recognition model and safely finetune it on the perturbed images. Both methods successfully defeat two poisoning attack baselines, raising awareness of the ineffectiveness of poisoning-based identity protection mechanisms.
SP:23cca476cbbe185c5c3f88c910817da2ff6d6458
Unifying Likelihood-free Inference with Black-box Sequence Design and Beyond
1 INTRODUCTION . Discovering new drugs to fulfill specific criteria , such as binding affinity towards a given molecular target , is a fundamental problem in chemistry and the pharmaceutical industry ( Hughes et al. , 2011 ) . In this work , we focus on an important subdomain : de novo biological sequence design . This task is challenging for two reasons : ( 1 ) the exploration space for sequences is combinatorially large ; and ( 2 ) sequence usefulness is evaluated via a complicated process which usually involves time-consuming and expensive wet-lab experiments . Despite the difficulty of this task , many approaches have been developed over the past few decades thanks to recent advances in biochemistry and machine learning . The Nobel Prize-winning paradigm , directed evolution ( Chen & Arnold , 1991 ) , which conducts local evolutionary search under human guidance , is one of the popular techniques . Unfortunately , it is limited by its sample inefficiency and reliance on strong prior knowledge , e.g. , about where to mutate ( Ahn et al. , 2020 ) . Furthermore , to compete with other machine learning methods ( Gottipati et al. , 2020 ) , guided evolution ( Yoshikawa et al. , 2018 ; Jensen , 2019 ; Nigam et al. , 2019 ) heavily relies on human intuition for designing domain-specific evolutionary operators , which may not always apply to the tasks at hand . In this work , we deem sequence design to be a black-box optimization problem , tasked with maximizing an unknown oracle function . We assume that oracle queries are limited due to the constraint on resources , such as the budgets for evaluating queries in a wet-lab . Thus , sample efficiency is crucial . We develop a probabilistic framework by reformulating the aforementioned black-box optimization target as a posterior modeling problem .
With this framework , we draw a surprising connection between likelihood-free inference and sequence design , thus linking two fields that were previously considered unrelated . The key observation we leverage here for establishing this connection is that both settings share similar elements and targets , which will be elaborated in Section 2.2 . This connection facilitates our understanding of both fields and provides a recipe for developing sequence design algorithms . Going beyond , we also combine different probabilistic modeling insights and develop three novel composite probabilistic algorithms . We point out that our framework could actually be applied to any black-box optimization setting , but in this work we focus on its application to biological sequence design . To demonstrate the empirical effectiveness of our methods , we conduct systematic experiments to evaluate their performance on four in-silico sequence design benchmarks . Our proposed methods achieve at least comparable results to existing baselines , and the proposed composite methods perform consistently better than all others across various sequence design tasks . We summarize our contributions as follows : • We develop a probabilistic framework that unifies likelihood-free inference and black-box optimization . • Based on this framework , we provide a recipe for designing algorithms for black-box problems . We apply these ideas to propose a series of composite design algorithms . • We perform a systematic evaluation on a series of black-box sequence design benchmarks , and find that these algorithms achieve consistently comparable or better results compared to previous ones , thus illustrating the benefit of the proposed unified framework . 2 A UNIFYING PROBABILISTIC FRAMEWORK . 2.1 BACKGROUND . Likelihood-free inference ( LFI ) . We use θ ∈ Θ and x ∈ X to separately denote the parameters and the data generated via the mechanism x ∼ p ( x|θ ) .
In this scenario , LFI refers to a special kind of Bayesian inference setting where the likelihood function is not tractable but sampling ( by simulation ) from the likelihood is feasible . Consider the objective of modeling the Bayesian posterior when we can not compute the likelihood p ( xo|θ ) : p ( θ|xo ) ∝ p ( θ ) p ( xo|θ ) , ( 1 ) where xo is the observed data , p ( θ ) is the ( given ) prior over the model parameters θ , p ( x|θ ) is the intractable likelihood function and p ( θ|x ) is the desired posterior over θ . While we do not have access to the exact likelihood , we can still simulate ( sample ) data x from the model simulator : x ∼ p ( x|θ ) . Instead of trying to obtain a numerical value of the generic posterior p ( θ|x ) for arbitrary x , LFI only tries to obtain an approximation of p ( θ|xo ) for the given xo . During the inference process , we can take advantage of the sampled data : D = { ( θi , xi ) } for i = 1 , … , n , where xi ∼ p ( x|θi ) for selected values of θi . Biological black-box sequence design . We consider biological sequence design as a black-box optimization problem : m∗ = arg max_{m ∈ M} f ( m ) , where f ( · ) is the oracle score function , and we would like to discover values of m for which f ( m ) is large . In real-world situations , a query of this oracle f could represent a series of wet-lab experiments to measure specific chemical properties or specificity for a given binding site target . In general , these experiments are time- and cost-consuming . As a result , the total number of queries is limited . In our setting , we use M = V^L to denote the search space for sequences with fixed length L , where V is the vocabulary for each entry of the sequence : for DNA nucleotides |V| = 4 , and for protein amino acids |V| = 20 . For the variable length setting , we have M = ∪_{L ∈ [ Lmin , Lmax ]} V^L , where Lmin and Lmax are the minimal and maximal lengths , respectively . 2.2 CONNECTING LFI AND BLACK-BOX OPTIMIZATION .
In order to draw a connection to LFI , we require a probabilistic formulation of the black-box sequence design problem . To this end , we relax the goal of searching for a single maximum of the oracle / score function f to a posterior modeling problem , i.e. , finding a representative sample of the configurations of m sampled with probability related to some target posterior . Think of C as the set of sequences with these desirable configurations and E as a Boolean event indicating whether a sequence m belongs to C ; our goal is to characterize the posterior distribution p ( m|E ) from which we obtain the desired sequences . Below , we consider two specific ways of doing this : Example A . We explicitly define C ( and E accordingly ) as all the sequences whose scores are larger than a given threshold s : C = { m|f ( m ) ≥ s } . ( 2 ) Here s could be any fixed value , or a certain quantile of a particular score distribution . In this way , we have p ( E|m ) = p ( m ∈ C|m ) = 1 { f ( m ) ≥ s } , where 1 { } is the indicator function . Example B . In a softer version of E and C , we can define its conditional probability of being true to follow a Boltzmann distribution : p ( E|m ) = p ( m ∈ C|m ) ∝ exp ( f ( m ) /τ ) , ( 3 ) where τ is a temperature parameter . We introduce the exponential because f ( · ) does not necessarily take positive values . Any monotone transformation of f ( · ) to non-negative reals could be used , so that sequences with larger oracle scores have a greater probability of making E true . With this posterior objective , our goal now becomes effectively modeling and sampling from the posterior p ( m|E ) . It is thus natural to resort to the tools of Bayesian inference for this task . In order to examine this possibility , we draw a detailed comparison between the settings of the black-box sequence design problem and likelihood-free Bayesian inference in Table 1 . It can be observed that both tasks share similar elements and targets .
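Both choices of p(E|m) are easy to make concrete. In Example A the event likelihood is an indicator on the score threshold; in Example B it is an (unnormalized) Boltzmann weight, and normalizing over a finite candidate pool yields a posterior over sequences. A small sketch (the helper names are ours, and a uniform prior over the pool is assumed for illustration):

```python
import math

def p_event_threshold(score, s):
    """Example A: p(E|m) = 1{f(m) >= s}."""
    return 1.0 if score >= s else 0.0

def p_event_boltzmann(score, tau):
    """Example B: p(E|m) proportional to exp(f(m)/tau)."""
    return math.exp(score / tau)

def boltzmann_posterior(scores, tau):
    """Normalized Boltzmann weights over a finite candidate pool."""
    w = [math.exp(s / tau) for s in scores]
    z = sum(w)
    return [wi / z for wi in w]
```

Lowering the temperature τ concentrates the posterior on the highest-scoring sequences, recovering the arg-max objective in the limit.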
The two settings also share similar limitations on the allowed queries , which are time-consuming and / or cost-intensive . Notice that in sequence design , the oracle could be either exact or noisy , thus we use the more general s ∼ f ( m ) formulation rather than s = f ( m ) . We will further present several concrete examples as demonstrations of this correspondence in the following section . Another way to understand this correspondence is to consider the following mapping T : T : Θ × X → M × R , ( θ , x ) ↦ ( m , s ) , s.t . s = −‖x − xo‖ . Here we can see the score value s as a quantitative metric for how close the generated data x ( given θ ) is to the target observed data xo . In addition , querying the oracle in the sequence design setting can also be thought of as follows : ( 1 ) sample x ∼ p ( ·|θ ) and then ( 2 ) calculate s = −‖x − xo‖ under some distance ‖·‖ . In this manner , T could conceptually transform any LFI problem into a black-box optimization task . In this work , we only focus on the application of sequence design . 3 METHODOLOGY . We provide a recipe for designing new sequence design algorithms based on the correspondence in Section 2.2 . The recipe induces different approaches by modeling different probabilistic components of the Bayesian inference problem . We begin with common algorithm restrictions under this setting . Common constraint for algorithms . Due to the restriction of simulation / query in our setting , we constrain our algorithms to act in a sequential / iterative way , gradually achieving the desired posterior round by round . Every algorithm starts with an empty dataset D = ∅ and an initial proposal p1 ( · ) = p ( · ) , where p ( · ) is the prior given by the task . In the r-th round of this multi-round setting , the algorithm would use the proposal pr ( · ) of this round to sample a batch of data ( θ / m ) for simulation / query , and augment the current dataset D with the newly obtained batch of data .
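The mapping T says that any LFI problem can be wrapped as a black-box score oracle: querying the oracle at θ simulates x ∼ p(·|θ) and scores it by its negative distance to the observed data. A sketch for scalar data (function names are ours; for vector data, substitute a norm for `abs`):

```python
def lfi_as_oracle(simulate, x_obs):
    """Wrap an LFI simulator as a black-box oracle via the mapping T:
    s = -||x - x_o||, so high scores mean the simulation reproduces
    the observation closely."""
    def oracle(theta):
        x = simulate(theta)          # x ~ p(.|theta)
        return -abs(x - x_obs)       # scalar case of -||x - x_o||
    return oracle
```

Maximizing this oracle over θ then plays the role of concentrating on parameter values whose simulations match xo, mirroring posterior inference.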
We use n to denote the batch size for each round ’ s simulation / query . Afterwards , the algorithm updates the proposal to pr+1 ( · ) . The outcomes for the two settings we discuss may be slightly different : an algorithm for likelihood-free inference would return the posterior , while a sequence design method would return the dataset of all the sequences it has queried , which hopefully contains desired high scored sequences . On the other hand , a sequence design method could produce as an intermediate result a generative model for sampling queries , which then completely fits with the LFI framework . 3.1 BACKWARD MODELING OF THE MECHANISM . Approximate Bayesian Computation ( ABC ) ( Beaumont et al. , 2002 ) is a standard method for tackling LFI problems . In Algorithm 1 , we display one of the most popular variants : Sequential Monte Carlo-Approximate Bayesian Computation ( SMC-ABC ) ( Beaumont et al. , 2009 ) . In each round , parameters θ are sampled from the current proposal distribution pr ( θ ) for simulation . A rejection step is then involved to remove the θi whose simulation outcomes xi can not reproduce the observed data xo with sufficient accuracy . The remaining accepted { θi } i are adopted to update the next round ’ s proposal pr+1 ( · ) towards the target posterior , i.e. , by refitting qφ with the modified data . We defer more details of this approach to Section A.1 in Appendix . 
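SMC-ABC refines the simplest ABC scheme, rejection ABC: draw θ from the prior, simulate, and keep θ only when the simulation lands within a tolerance ε of the observation. A minimal sketch of that base scheme for scalar data (not the SMC variant from Algorithm 1; names and signatures are ours):

```python
def rejection_abc(prior_sample, simulate, x_obs, eps, n_accept):
    """Basic rejection ABC: accepted thetas approximate the posterior
    p(theta | x_obs) without ever evaluating the likelihood."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample()
        x = simulate(theta)
        if abs(x - x_obs) <= eps:    # simulation close enough to x_obs?
            accepted.append(theta)
    return accepted
```

SMC-ABC improves on this by reusing accepted samples to build a sharper proposal each round instead of always drawing from the prior.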
Algorithm 1 SMC-ABC
  p1(θ) ← p(θ);
  for r in 1 to R do
    repeat
      sample θi ∼ pr(θ); simulate xi ∼ p(x|θi);
    until n samples are obtained
    D ← D ∪ {(θi, xi)}i=1..n;
    sort D according to −‖xi − xo‖;
    fit qφ(θ) with top {θi}i in D;
    pr+1(θ) ← qφ(θ);
  end for
  return p̂(θ|xo) = pR+1(θ)

Algorithm 2 FB-VAE
  p1(m) ← p(m);
  for r in 1 to R do
    repeat
      sample mi ∼ pr(m); query the oracle: si ← f(mi);
    until n samples are obtained
    D ← D ∪ {(mi, si)}i=1..n;
    sort D according to si;
    fit qφ(m) with top {mi}i in D;
    pr+1(m) ← qφ(m);
  end for
  return {m : (m, s) ∈ D}

It would then be natural to construct an analogous sequence design algorithm using the top-scored entities { mi } to guide the update of a certain sequence distribution ; see Algorithm 2 . Interestingly , this is the proposed sequence design algorithm in Gupta & Zou ( 2019 ) , where the authors name this kind of updating “ feedback ” because training of the parametric generator qφ ( m ) exploits feedback signals from the oracle . In this paper , we follow Brookes & Listgarten ( 2018 ) to crystallize qφ ( m ) as a variational autoencoder ( Kingma & Welling , 2014 ) , and use the term Feedback-Variational AutoEncoder ( FB-VAE ) to refer to Algorithm 2 . We place Algorithms 1 & 2 side-by-side to highlight their correspondence . We also make the same arrangement for the following Algorithms 3 & 4 , 5 & 6 , and 7 & 8 .
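Abstracting the generator qφ away, the feedback loop of Algorithm 2 can be sketched generically: sample from the current proposal, score with the oracle, and refit the proposal on the top-scoring fraction of everything seen so far. This is only a skeleton under our own naming; in FB-VAE proper, `fit` would train a VAE on the top sequences.

```python
def feedback_design_loop(prior_sample, fit, oracle, rounds, n, top_frac=0.2):
    """Generic feedback design loop (Algorithm 2 with q_phi abstracted).

    Returns every (sequence, score) pair queried, sorted best-first;
    Algorithm 2 proper returns the set of queried sequences.
    """
    dataset = []                                   # all (sequence, score) queried
    sample = prior_sample                          # p_1 <- prior
    for _ in range(rounds):
        batch = [sample() for _ in range(n)]
        dataset += [(m, oracle(m)) for m in batch]
        dataset.sort(key=lambda pair: -pair[1])    # best scores first
        k = max(1, int(top_frac * len(dataset)))
        sample = fit([m for m, _ in dataset[:k]])  # p_{r+1} <- q_phi on top m's
    return dataset
```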
Algorithm 3 Sequential Neural Posterior
  p1(θ) ← p(θ)
  for r in 1 to R do
    repeat
      sample θi ∼ pr(θ); simulate xi ∼ p(x|θi)
    until n samples are obtained
    D ← D ∪ {(θi, xi)}i=1..n
    qφ ← arg min_q Ex[DKL(p(θ|x) ‖ q)]
    pr+1(θ) ← qφ(θ|xo)
  end for
  return p̂(θ|xo) = pR+1(θ)

Algorithm 4 Design by Adaptive Sampling
  p1(m) ← p(m)
  for r in 1 to R do
    repeat
      sample mi ∼ pr(m); query the oracle: si ← f(mi)
    until n samples are obtained
    D ← D ∪ {(mi, si)}i=1..n
    qφ ← arg min_q DKL(p(m|E) ‖ q)
    pr+1(m) ← qφ(m)
  end for
  return {m : (m, s) ∈ D}

In comparison with SMC-ABC, the Sequential Neural Posterior (SNP) method (Papamakarios & Murray, 2016; Lueckmann et al., 2017; Greenberg et al., 2019) for likelihood-free inference adopts a more flexible approach, leveraging a conditional neural density estimator (e.g., Papamakarios et al. (2017)) to model the general posterior p(θ|x); the estimator takes arbitrary θ and x as inputs and outputs a distribution. This neural estimator is trained by approximately minimizing the Kullback–Leibler (KL) divergence between qφ(θ|x) and the true posterior p(θ|x). We defer more training details to Section A.1 in the Appendix. Under the connection viewpoint, a similar algorithm for sequence design is Design by Adaptive Sampling (DbAS), proposed in Brookes & Listgarten (2018) and characterized in Algorithm 4, which fits qφ(m) by minimizing the KL divergence with the posterior p(m|E). Depending on the specific implementation, both algorithms have more than one variant; details are deferred to Section A.1 in the Appendix. We refer to the above algorithms as "backward modeling" because the trained generative network qφ (going from x/E to θ/m) is a sort of reverse model of the simulation mechanism (which goes from θ/m to x/s).
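The KL-minimization step of Algorithm 4 can be approximated on a batch by a weighted maximum-likelihood refit, with weights given by the soft event probability p(E|m) ∝ exp(f(m)/τ). This is a simplified sketch with a hypothetical oracle and a categorical proposal in place of the VAE; full DbAS also applies importance-weight corrections for samples drawn from the proposal rather than the prior.

```python
import numpy as np

rng = np.random.default_rng(1)
L, V, R, n, tau = 6, 4, 4, 300, 0.5   # hypothetical constants

def oracle(m):
    # hypothetical score function
    return float(np.sum(m == 1))

probs = np.full((L, V), 1.0 / V)      # categorical proposal, stand-in for q_phi(m)

for r in range(R):
    batch = np.stack([[rng.choice(V, p=probs[j]) for j in range(L)] for _ in range(n)])
    scores = np.array([oracle(m) for m in batch])
    # soft event E: p(E|m) proportional to exp(f(m)/tau); a weighted MLE refit
    # approximates argmin_q KL(p(m|E) || q) on this batch
    w = np.exp((scores - scores.max()) / tau)   # subtract max for numerical stability
    w /= w.sum()
    counts = np.stack([[w[batch[:, j] == v].sum() for v in range(V)] for j in range(L)])
    probs = (counts + 1e-3) / (counts.sum(axis=1, keepdims=True) + V * 1e-3)
```

The exponential weighting plays the same role here as the rejection/sorting step in the ABC-style algorithms, but uses all queried samples with soft weights instead of a hard cutoff.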
The paper draws on connections between likelihood-free inference and black-box optimization to propose new black-box optimization methods. In general, the goal is not to find the exact optimum of the black-box objective, but to sample from a set of sequences with high objective values. This is akin to the problem of collecting posterior samples in a likelihood-free inference problem. The paper proposes a number of methods and compares them on benchmark sequence optimization problems that have appeared in the recent literature.
SP:9367c02ee25747aa6ab8131ef68171b717d1f9ea
Unifying Likelihood-free Inference with Black-box Sequence Design and Beyond
1 INTRODUCTION.

Discovering new drugs that fulfill specific criteria, such as binding affinity towards a given molecular target, is a fundamental problem in chemistry and the pharmaceutical industry (Hughes et al., 2011). In this work, we focus on an important subdomain: de novo biological sequence design. This task is challenging for two reasons: (1) the exploration space for sequences is combinatorially large; and (2) sequence usefulness is evaluated via a complicated process which usually involves time-consuming and expensive wet-lab experiments. Despite the difficulty of this task, many approaches have been developed over the past few decades thanks to recent advances in biochemistry and machine learning. The Nobel Prize-winning paradigm of directed evolution (Chen & Arnold, 1991), which conducts local evolutionary search under human guidance, is one of the most popular techniques. Unfortunately, it is limited by its sample inefficiency and reliance on strong prior knowledge, e.g., about where to mutate (Ahn et al., 2020). Furthermore, to compete with other machine learning methods (Gottipati et al., 2020), guided evolution (Yoshikawa et al., 2018; Jensen, 2019; Nigam et al., 2019) relies heavily on human intuition for designing domain-specific evolutionary operators, which may not always apply to the task at hand. In this work, we cast sequence design as a black-box optimization problem, tasked with maximizing an unknown oracle function. We assume that oracle queries are limited due to resource constraints, such as the budget for evaluating queries in a wet-lab; thus, sample efficiency is crucial. We develop a probabilistic framework by reformulating the aforementioned black-box optimization target as a posterior modeling problem.
With this framework, we draw a surprising connection between likelihood-free inference and sequence design, thereby linking two fields previously considered unrelated. The key observation we leverage to establish this connection is that both settings share similar elements and targets, as elaborated in Section 2.2. This connection facilitates our understanding of both fields and provides a recipe for developing sequence design algorithms. Going beyond, we also combine different probabilistic modeling insights and develop three novel composite probabilistic algorithms. We point out that our framework could be applied to any black-box optimization setting, but in this work we focus on its application to biological sequence design. To demonstrate the empirical effectiveness of our methods, we conduct systematic experiments to evaluate their performance on four in-silico sequence design benchmarks. Our proposed methods achieve at least comparable results to existing baselines, and the proposed composite methods perform consistently better than all others across various sequence design tasks. We summarize our contributions as follows:
• We develop a probabilistic framework that unifies likelihood-free inference and black-box optimization.
• Based on this framework, we provide a recipe for designing algorithms for black-box problems. We apply these ideas to propose a series of composite design algorithms.
• We perform a systematic evaluation on a series of black-box sequence design benchmarks, and find that these algorithms achieve consistently comparable or better results than previous ones, illustrating the benefit of the proposed unified framework.

2 A UNIFYING PROBABILISTIC FRAMEWORK.

2.1 BACKGROUND.

Likelihood-free inference (LFI). We use θ ∈ Θ and x ∈ X to denote, respectively, the parameters and the data generated via the mechanism x ∼ p(x|θ).
In this scenario, LFI refers to a special kind of Bayesian inference setting where the likelihood function is not tractable but sampling (by simulation) from the likelihood is feasible. Consider the objective of modeling the Bayesian posterior when we cannot compute the likelihood p(xo|θ):

p(θ|xo) ∝ p(θ) p(xo|θ), (1)

where the likelihood is the intractable factor, xo is the observed data, p(θ) is the (given) prior over the model parameters θ, p(x|θ) is the intractable likelihood function, and p(θ|x) is the desired posterior over θ. While we do not have access to the exact likelihood, we can still simulate (sample) data x from the model simulator: x ∼ p(x|θ). Instead of trying to obtain a numerical value of the generic posterior p(θ|x) for arbitrary x, LFI only tries to obtain an approximation of p(θ|xo) for the given xo. During the inference process, we can take advantage of the sampled data D = {(θi, xi)}i=1..n, where xi ∼ p(x|θi) for selected values of θi.

Biological black-box sequence design. We consider biological sequence design as a black-box optimization problem:

m* = arg max_{m∈M} f(m),

where f(·) is the oracle score function, and we would like to discover values of m for which f(m) is large. In real-world situations, a query of this oracle f could represent a series of wet-lab experiments measuring specific chemical properties or specificity for a given binding-site target. In general, these experiments are time-consuming and costly; as a result, the total number of queries is limited. In our setting, we use M = V^L to denote the search space for sequences with fixed length L, where V is the vocabulary for each entry of the sequence: |V| = 4 for DNA nucleotides, and |V| = 20 for protein amino acids. For the variable-length setting, we have M = ∪_{L∈[Lmin, Lmax]} V^L, where Lmin and Lmax are the minimal and maximal lengths, respectively.

2.2 CONNECTING LFI AND BLACK-BOX OPTIMIZATION.
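The LFI setting above can be made concrete with the simplest simulation-based scheme, rejection ABC: sample θi from the prior, simulate xi ∼ p(x|θi), and keep only the θi whose simulations land close to xo. The toy simulator and all constants below are hypothetical; in this conjugate Gaussian example the exact posterior mean E[θ|xo] is 0.8, so the accepted samples should concentrate near it.

```python
import numpy as np

rng = np.random.default_rng(0)

x_o = 1.0     # observed data
eps = 0.1     # acceptance tolerance

# prior p(theta) = N(0, 1); simulator gives x ~ p(x|theta) = N(theta, 0.5^2),
# but we pretend the likelihood density itself is unavailable
thetas = rng.normal(0.0, 1.0, size=20_000)          # theta_i ~ p(theta)
xs = thetas + rng.normal(0.0, 0.5, size=20_000)     # x_i ~ p(x|theta_i)

# keep the theta_i whose simulated outcomes reproduce x_o within tolerance
accepted = thetas[np.abs(xs - x_o) < eps]
post_mean = accepted.mean()    # approximates E[theta | x_o] (analytically 0.8 here)
```

No likelihood evaluation is ever needed: only the ability to sample from p(x|θ), which is exactly the LFI assumption.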
To draw a connection to LFI, we require a probabilistic formulation of the black-box sequence design problem. To this end, we relax the goal of searching for a single maximum of the oracle/score function f to a posterior modeling problem, i.e., finding a representative sample of the configurations of m drawn with probability related to some target posterior. Let C be the set of sequences with these desirable configurations, and let E be the Boolean event that a sequence m belongs to C; our goal is then to characterize the posterior distribution p(m|E) from which we obtain the desired sequences. Below, we consider two specific ways of doing this:

Example A. We explicitly define C (and E accordingly) as all the sequences whose scores are larger than a given threshold s:

C = {m | f(m) ≥ s}. (2)

Here s could be any fixed value, or a certain quantile of a particular score distribution. In this way, we have p(E|m) = p(m ∈ C|m) = 1{f(m) ≥ s}, where 1{·} is the indicator function.

Example B. In a softer version of E and C, we define the conditional probability of E being true to follow a Boltzmann distribution:

p(E|m) = p(m ∈ C|m) ∝ exp(f(m)/τ), (3)

where τ is a temperature parameter. We introduce the exponential because f(·) does not necessarily take positive values. Any monotone transformation of f(·) to the non-negative reals could be used, so that sequences with larger oracle scores have a greater probability of making E true.

With this posterior objective, our goal now becomes effectively modeling and sampling from the posterior p(m|E). It is thus natural to resort to the tools of Bayesian inference for this task. To examine this possibility, we draw a detailed comparison between the settings of the black-box sequence design problem and likelihood-free Bayesian inference in Table 1. It can be observed that both tasks share similar elements and targets.
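The two event definitions can be computed directly from oracle scores. The candidate scores below are hypothetical, and the soft weights are normalized over the small candidate pool purely for illustration (Eq. 3 only specifies p(E|m) up to proportionality).

```python
import numpy as np

scores = np.array([-1.0, 0.5, 2.0, 3.5])   # hypothetical f(m) for four candidates

# Example A: hard event, C = {m : f(m) >= s}
s = 2.0
p_E_hard = (scores >= s).astype(float)      # indicator 1{f(m) >= s}

# Example B: soft event, p(E|m) proportional to exp(f(m)/tau)
tau = 1.0
w = np.exp(scores / tau)
p_E_soft = w / w.sum()    # normalized over this candidate pool for illustration
```

The hard event discards all information below the threshold, while the Boltzmann weighting preserves a graded preference and handles negative scores through the exponential.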
The two settings also share similar limitations on the allowed queries, which are time-consuming and/or costly. Notice that in sequence design the oracle could be either exact or noisy, so we use the more general formulation s ∼ f(m) rather than s = f(m). We will present several concrete examples of this correspondence in the following section. Another way to understand the correspondence is to consider the following mapping T:

T : Θ × X → M × R, (θ, x) ↦ (m, s), such that s = −‖x − xo‖.

Here the score value s is a quantitative metric of how close the generated data x (given θ) is to the target observed data xo. In addition, querying the oracle in the sequence design setting can be thought of as follows: (1) sample x ∼ p(·|θ) and then (2) calculate s = −‖x − xo‖ under some distance ‖·‖. In this manner, T can conceptually transform any LFI problem into a black-box optimization task. In this work, we focus only on the application to sequence design.

3 METHODOLOGY.

We provide a recipe for designing new sequence design algorithms based on the correspondence in Section 2.2. The recipe induces different approaches by modeling different probabilistic components of the Bayesian inference problem. We begin with the common algorithmic restrictions under this setting.

Common constraint for algorithms. Because simulations/queries are restricted in our setting, we constrain our algorithms to act in a sequential, iterative way, gradually approaching the desired posterior round by round. Every algorithm starts with an empty dataset D = ∅ and an initial proposal p1(·) = p(·), where p(·) is the prior given by the task. In the r-th round of this multi-round setting, the algorithm uses the proposal pr(·) of that round to sample a batch of data (θ / m) for simulation/query, and augments the current dataset D with the newly obtained batch.
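The mapping T can be sketched as code: given any simulator, it induces an oracle whose score is the negative distance of a simulated outcome to xo, so parameters consistent with the observed data score highest. The simulator, noise scale, and test points below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x_o = np.array([1.0, -0.5])    # observed data of the original LFI problem

def simulate(theta):
    # toy mechanism x ~ p(x|theta): identity plus small noise
    return theta + rng.normal(0.0, 0.1, size=x_o.shape)

def induced_oracle(theta):
    # T maps (theta, x) to (m, s) with s = -||x - x_o||; querying the oracle
    # means simulating once and scoring the outcome's distance to x_o
    x = simulate(theta)
    return -np.linalg.norm(x - x_o)

s_good = induced_oracle(np.array([1.0, -0.5]))   # theta consistent with x_o scores high
s_bad = induced_oracle(np.array([5.0, 5.0]))
```

Maximizing the induced oracle is thus equivalent to searching for parameters whose simulations reproduce xo, which is exactly the LFI objective.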
The authors describe a mapping from likelihood-free inference to black-box sequence optimization, then use this mapping to link common algorithms in both fields. They go on to describe novel black-box sequence design algorithms induced by known LFI algorithms. Empirical results show their methods are competitive on standard datasets.
In this paper the authors draw direct parallels between likelihood-free inference (LFI) and black-box sequence design, which lets them map existing methods between the two literatures. In a few cases a given LFI algorithm has no direct analog in the black-box sequence design literature, and the authors are able to immediately propose one. The authors also present a number of "composite" methods that combine ideas from several of these approaches.
Towards Understanding Generalization via Decomposing Excess Risk Dynamics
1 INTRODUCTION . Generalization is one of the essential open mysteries in modern machine learning ( Neyshabur et al. , 2014 ; Zhang et al. , 2016 ; Kawaguchi et al. , 2017 ) , measuring how a trained model performs on unseen data . One of the most popular approaches to generalization is uniform convergence ( Mohri et al. , 2018 ) , which takes a supremum over the parameter space to decouple the dependency between the training set and the trained model . However , Nagarajan & Kolter ( 2019 ) pointed out that uniform convergence by itself might not be powerful enough to explain generalization , because the uniform bound can still be vacuous in overparameterized linear regimes even if the parameter space is reduced to the minimum . One alternative beyond uniform convergence is to analyze the generalization dynamics , i.e. , the generalization gap along the training trajectory . The stability-based bound is among the most popular techniques in generalization dynamics analysis ( Lei & Ying , 2020 ) , derived from algorithmic stability ( Bousquet & Elisseeff , 2002 ) . Fortunately , one can derive non-vacuous bounds in general convex regimes using stability frameworks ( Hardt et al. , 2016 ) . However , stability is still far from explaining the remarkable generalization ability of neural networks , mainly due to two obstructions . First , the stability-based bound depends heavily on the gradient norm in non-convex regimes ( Li et al. , 2019 ) , which is typically large in the early phase of training neural networks . Second , the stability-based bound usually does not work well in general non-convex regimes ( Hardt et al. , 2016 ; Charles & Papailiopoulos , 2018 ; Zhou et al. , 2018b ) , yet neural networks are highly non-convex . These two obstructions mainly stem from a coarse-grained analysis of the signal and the noise . As Zhang et al .
( 2016 ) argued , neural networks converge fast when fitting signal but relatively slowly when fitting noise1 , indicating that the training dynamics over signal and noise are significantly different . ( 1In this paper , we refer to the signal as the clean data without the output noise , and to the noise as the output noise ; see Section 2 for the formal definitions . ) Consequently , on the one hand , the fast convergence of signal-related training contributes to a large gradient norm in the early phase ( see Figure 1a ) , resulting in poor stability . On the other hand , training on the signal forces the trained parameter away from the initialization , making the whole training path highly non-convex ( see Figure 1b ) . These two phenomena inspire us to decompose the training dynamics into a noise component and a signal component , and to apply the stability-based analysis only to the noise component . To demonstrate that such a decomposition generally holds in practice , we conduct several experiments with neural networks on both synthetic and real-world datasets ( see Figure 2 for more details ) . Based on the above discussion , we improve the stability-based analysis by proposing a decomposition framework on the excess risk dynamics2 , where we handle the noise and signal components separately via bias-variance decomposition . In detail , we decompose the excess risk into the variance excess risk ( VER ) and the bias excess risk ( BER ) , where VER measures how the model fits noise and BER measures how the model fits signal . Under this decomposition , we apply stability-based techniques to VER and uniform convergence to BER , inspired by Negrea et al . ( 2020 ) . The decomposition framework accords with the theoretical and experimental evidence surprisingly well , in that it outperforms stability-based bounds in both linear ( overparameterized linear regression ) and non-linear ( diagonal matrix recovery ) regimes .
We summarize our contributions as follows : • We propose a new framework aimed at improving traditional stability-based bounds , a novel approach to generalization dynamics analysis that decomposes the excess risk dynamics into a variance component and a bias component . Starting from the overparameterized linear regression regime , we show how to deploy the decomposition framework in practice , and the proposed framework outperforms the stability-based bounds . • We theoretically analyze the excess risk decomposition beyond linear regimes . As a case study , we derive a generalization bound in the diagonal matrix recovery regime . To the best of our knowledge , this is the first work to analyze the generalization performance of diagonal matrix recovery . • We conduct several experiments on both synthetic and real-world datasets ( MNIST , CIFAR-10 ) to validate the utility of the decomposition framework , indicating that the framework provides interesting insights for the generalization community . ( 2We decompose the excess risk , which is closely related to generalization , purely for technical reasons . The excess risk dynamics tracks the excess risk during the training process . ) 1.1 RELATED WORK . Stability-based Generalization . Research on stability-based generalization can be roughly split into two branches . One branch concerns how algorithmic stability leads to generalization ( Feldman & Vondrak , 2018 ; 2019 ; Bousquet et al. , 2020 ) . The other branch focuses on how to calculate the stability parameter for specific problems ; e.g. , Hardt et al . ( 2016 ) prove a generalization bound that scales linearly with time in convex regimes . Furthermore , researchers have applied stability techniques to more general settings , e.g. , non-smooth losses ( Lei & Ying , 2020 ; Bassily et al. , 2020 ) , noisy gradient descent in non-convex regimes ( Mou et al. , 2018 ; Li et al. , 2019 ) , and stochastic gradient descent in non-convex regimes ( Zhou et al.
, 2018b ; Charles & Papailiopoulos , 2018 ; Zhang et al. , 2021 ) . In this paper , we mainly focus on applying the decomposition framework to improve the stability-based bound . Uniform Convergence is widely used in generalization analysis . For bounded losses , the generalization gap is tightly bounded by the Rademacher complexity ( Koltchinskii & Panchenko , 2000 ; Koltchinskii , 2001 ; Koltchinskii et al. , 2006 ) , and faster rates are attainable under the realizable assumption ( Srebro et al. , 2010 ) . A line of work focuses on uniform convergence in neural network regimes , where the bounds are usually related to the parameter norm ( Bartlett et al. , 2017 ; Wei & Ma , 2019 ) . However , as Nagarajan & Kolter ( 2019 ) pointed out , uniform convergence may be unable to explain generalization , so more techniques are being explored to go beyond it . Other Approaches to Generalization . There are other approaches to generalization , including PAC-Bayes ( Neyshabur et al. , 2017a ; Dziugaite & Roy , 2017 ; Neyshabur et al. , 2017b ; Dziugaite & Roy , 2018 ; Zhou et al. , 2018a ; Yang et al. , 2019 ) , information-based bounds ( Russo & Zou , 2016 ; Xu & Raginsky , 2017 ; Banerjee & Montúfar , 2021 ; Haghifam et al. , 2020 ; Steinke & Zakynthinou , 2020 ) , and compression-based bounds ( Arora et al. , 2018 ; Allen-Zhu et al. , 2018 ; Arora et al. , 2019 ) . Bias-Variance Decomposition . Bias-variance decomposition plays an important role in statistical analysis ( Lehmann & Casella , 2006 ; Casella & Berger , 2021 ; Geman et al. , 1992 ) . Generally , high bias indicates that the model predicts poorly on average , while high variance indicates that the model behaves unstably . Bias-variance decomposition is widely used in machine learning analysis , e.g. , adversarial training ( Yu et al. , 2021 ) , double descent ( Adlam & Pennington , 2020 ) , and uncertainty ( Hu et al. , 2020 ) . Oymak et al .
( 2019 ) applied bias-variance decomposition to the Jacobian of neural networks to explain their different performance on clean and noisy data . This paper considers a slightly different bias-variance decomposition following the analysis of SGD ( Dieuleveut et al. , 2016 ; Jain et al. , 2018 ; Zou et al. , 2021 ) , focusing on the decomposition of the noisy output . Matrix Recovery . Earlier methods for solving matrix recovery problems rely on convex relaxation techniques for minimum-norm solutions ( Recht et al. , 2010 ; Chandrasekaran et al. , 2011 ) . Recently , a line of work has focused on matrix factorization techniques with simple local search methods . It has been shown that there are no spurious local minima in the exact regime ( Ge et al. , 2016 ; 2017 ; Zhang et al. , 2019 ) . In the overparameterized regime , it was first conjectured by Gunasekar et al . ( 2018 ) and then answered by Li et al . ( 2018 ) that gradient descent methods converge to the low-rank solution efficiently . Later , Zhuo et al . ( 2021 ) extended the conclusion to noisy settings . 2 PRELIMINARY . In this section , we introduce the necessary definitions and assumptions , and formally introduce previous techniques for dealing with generalization . Data distribution . Let x ∈ X ⊂ R^p be the input and y ∈ Y ⊂ R be the output , where ( x , y ) is generated from a joint distribution ( x , y ) ∼ P . Define P_x , P_y , and P_{y|x} as the corresponding marginal and conditional distributions , respectively . Given n training samples D ≜ { x_i , y_i } _{i ∈ [ n ] } generated from distribution P , we denote the empirical distribution by P_n . To simplify notation , we define X ∈ R^{n×p} as the design matrix and Y ∈ R^n as the response vector . Excess Risk . Given the loss function ℓ ( θ ; x , y ) with parameter θ and sample ( x , y ) , we define the population loss as L ( θ ; P ) ≜ E_{( x , y ) ∼ P} [ ℓ ( θ ; x , y ) ] and the corresponding training loss as L ( θ ; P_n ) ≜ (1/n) ∑_i ℓ ( θ ; x_i , y_i ) .
Let A_t denote the optimization algorithm that takes a dataset D as input and returns the trained parameter θ̂(t) at time t , namely A_t ( D ) = θ̂(t) . During the analysis , we focus on the excess risk dynamics EL ( θ̂(t) ; P ) , which measures how the trained parameter θ̂(t) performs on the population loss : EL ( θ̂(t) ; P ) ≜ L ( θ̂(t) ; P ) − min_θ L ( θ ; P ) . Although the minimizer may not be unique , we let θ∗ denote an arbitrary minimizer , so that L ( θ∗ ; P ) = min_θ L ( θ ; P ) . We additionally remark that bounding the generalization gap L ( θ̂(t) ; P ) − L ( θ̂(t) ; P_n ) suffices to bound the excess risk under the Empirical Risk Minimization ( ERM ) framework , by the following split ; we therefore mainly discuss the excess risk , following Bartlett et al . ( 2020 ) :
EL ( θ̂(t) ; P ) = [ L ( θ̂(t) ; P ) − L ( θ̂(t) ; P_n ) ] + [ L ( θ̂(t) ; P_n ) − L ( θ∗ ; P_n ) ] + [ L ( θ∗ ; P_n ) − L ( θ∗ ; P ) ] , ( 1 )
where the first term is the generalization gap , the second term is ≤ 0 under ERM , and the third term is ≈ 0 by concentration inequalities . VER and BER . As discussed in Section 1 , we aim to decompose the excess risk into the variance excess risk ( VER ) and the bias excess risk ( BER ) defined in Definition 1 . The decomposition targets the noisy-output regime : we split the output y into the signal component E [ y|x ] and the noise component y − E [ y|x ] . Informally , VER measures how the model performs on pure noise , and BER measures how the model performs on clean data . Definition 1 ( VER and BER ) . Given ( x , y ) ∼ P , let ( x , E [ y|x ] ) ∼ P_b denote the signal distribution and ( x , y − E [ y|x ] ) ∼ P_v denote the noise distribution . The variance excess risk ( VER ) E_v L ( θ ; P ) and bias excess risk ( BER ) E_b L ( θ ; P ) are defined as : E_v L ( θ ; P ) ≜ EL ( θ ; P_v ) , E_b L ( θ ; P ) ≜ EL ( θ ; P_b ) .
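In the linear regression setting the signal/noise split above is exact along the whole trajectory: with squared loss, each gradient step is affine in the response vector Y, so gradient descent from a zero initialization on Y = E[Y|X] + noise produces exactly the sum of the parameters trained on the signal alone and on the noise alone. A small numpy check of this fact (the dimensions, step size, and step count are illustrative choices):

```python
import numpy as np

# Sanity check: for squared loss, each GD step is affine in Y, so training from
# a zero init on Y = signal + noise equals the sum of training on the signal
# alone and on the noise alone (the Standard/Bias/Variance trainings).
rng = np.random.default_rng(0)
n, p = 20, 50                                   # overparameterized: p > n
X = rng.normal(size=(n, p))
signal = X @ rng.normal(size=p)                 # E[Y|X]
noise = rng.normal(scale=0.5, size=n)           # Y - E[Y|X]

def gd(Y, steps=200, lr=0.01):
    theta = np.zeros(p)
    for _ in range(steps):
        theta -= lr * X.T @ (X @ theta - Y) / n
    return theta

theta_hat = gd(signal + noise)                  # Standard Training
theta_b = gd(signal)                            # Bias Training
theta_v = gd(noise)                             # Variance Training
# The dynamics decompose exactly: theta_hat(t) = theta_b(t) + theta_v(t).
```

This exact additivity is what makes it possible to bound the variance path and the bias path with different tools; in non-linear regimes the decomposition is only approximate and needs further assumptions.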
To better illustrate VER and BER , we consider three surrogate training dynamics , Standard Training , Variance Training , and Bias Training , corresponding to the excess risk , VER , and BER , respectively . Standard Training : training over the noisy data ( X , Y ) from initialization θ(0) ; we denote the trained parameter at time t by θ̂(t) . Variance Training : training over the pure noise ( X , Y − E [ Y |X ] ) from initialization θ_v(0) ; we denote the trained parameter at time t by θ̂_v(t) . Bias Training : training over the clean data ( X , E [ Y |X ] ) from initialization θ_b(0) ; we denote the trained parameter at time t by θ̂_b(t) . When the context is clear , we omit the dependency of the trained parameters θ̂(t) , θ̂_v(t) , θ̂_b(t) on the corresponding initializations and algorithms . Besides , we denote by θ∗ , θ∗_v , and θ∗_b the optimal parameters minimizing the corresponding population losses L ( θ ; P ) , L ( θ ; P_v ) , and L ( θ ; P_b ) , respectively . Techniques in generalization . We next introduce two techniques in generalization analysis : stability-based techniques ( Proposition 1 ) and uniform convergence ( Proposition 2 ) ; both will be revisited in Section 3 . Proposition 1 ( Stability Bound from Feldman & Vondrak ( 2019 ) ) . Assume that algorithm A_t is ϵ-uniformly stable at time t , meaning that for any two datasets D and D′ differing in a single data point , we have sup_{( x , y )} E_A [ ℓ ( A_t ( D ) ; x , y ) − ℓ ( A_t ( D′ ) ; x , y ) ] ≤ ϵ . Then the following inequality holds3 with probability at least 1 − δ : |L ( A_t ( D ) ; P ) − L ( A_t ( D ) ; P_n )| = O ( ϵ log ( n ) log ( n/δ ) + √(log ( 1/δ ))/√n ) .
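The uniform-stability condition in Proposition 1 can be probed empirically: run the same algorithm on two datasets that differ in one example and compare the resulting losses on a fresh point. The rough numpy probe below uses gradient descent on linear regression; the sizes, step count, and the single swapped point are illustrative assumptions, and it estimates the gap for one (D, D′) pair rather than the supremum the definition requires:

```python
import numpy as np

# Empirical probe of the stability parameter: train on D and on a D' that
# differs in one example, then compare squared losses at a probe point.
rng = np.random.default_rng(2)
n, p, steps, lr = 50, 10, 100, 0.05

def gd(X, Y):
    theta = np.zeros(p)
    for _ in range(steps):
        theta -= lr * X.T @ (X @ theta - Y) / n
    return theta

X = rng.normal(size=(n, p))
Y = X @ rng.normal(size=p) + rng.normal(size=n)
Xp, Yp = X.copy(), Y.copy()
Xp[0], Yp[0] = rng.normal(size=p), rng.normal()   # D' replaces one example

theta, theta_p = gd(X, Y), gd(Xp, Yp)
x_probe, y_probe = rng.normal(size=p), 0.0         # one probe point (x, y)
gap = abs((x_probe @ theta - y_probe) ** 2 - (x_probe @ theta_p - y_probe) ** 2) / 2
```

For convex losses such a gap shrinks as n grows, which is the mechanism behind the O(·) bound in Proposition 1; in non-convex regimes the gap can be driven up by a large gradient norm, which is exactly the obstruction the paper's decomposition is designed to avoid.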
Proposition 2 ( Uniform Convergence from Wainwright ( 2019 ) ) . Uniform convergence decouples the dependency between the trained parameter and the training set by taking a supremum over a parameter space that is independent of the training data , namely L ( A_t ( D ) ; P ) − L ( A_t ( D ) ; P_n ) ≤ sup_{θ ∈ B} [ L ( θ ; P ) − L ( θ ; P_n ) ] , where B is independent of the dataset D and A_t ( D ) ∈ B for any time t .
This work proposes a new bound for the excess risk based on a decomposition into a bias term and a variance term. The authors use a stability-based analysis to bound the variance excess risk, and uniform convergence to bound the bias excess risk. They illustrate their framework in two contexts: overparameterized linear regression with SGD and low-rank diagonal matrix recovery with gradient flow.
This paper proposes a new approach to characterizing the excess risk by decomposing it into two parts: the risk of learning the exact response and the risk of fitting the noise. Under the linear regression setting, the learning dynamics decompose exactly into the two proposed parts, and stability and uniform convergence arguments are applied to the two parts, respectively. Similar results are shown in a more general setting, though additional assumptions are made to guarantee a reasonable decomposition of the dynamics.
Towards Understanding Generalization via Decomposing Excess Risk Dynamics
1 INTRODUCTION . Generalization is one of the essential mysteries uncovered in modern machine learning ( Neyshabur et al. , 2014 ; Zhang et al. , 2016 ; Kawaguchi et al. , 2017 ) , measuring how the trained model performs on unseen data . One of the most popular approaches to generalization is uniform convergence ( Mohri et al. , 2018 ) , which takes supremum over parameter space to decouple the dependency between the training set and the trained model . However , Nagarajan & Kolter ( 2019 ) pointed out that uniform convergence itself might not be powerful enough to explain generalization , because the uniform bound can still be vacuous under overparameterized linear regimes even if we reduce the parameter space to the minimum . One alternative solution beyond uniform convergence is to analyze the generalization dynamics , which measures the generalization gap during the training dynamics . Stability-based bound is among the most popular techniques in generalization dynamics analysis ( Lei & Ying , 2020 ) , which is derived from algorithmic stability ( Bousquet & Elisseeff , 2002 ) . Fortunately , one can derive nonvacuous bounds under general convex regimes using stability frameworks ( Hardt et al. , 2016 ) . However , stability is still far from explaining the remarkable generalization abilities of neural networks , mainly due to two obstructions . Firstly , stability-based bound depends heavily on the gradient norm in non-convex regimes ( Li et al. , 2019 ) , which is typically large at the beginning phase in training neural networks . Secondly , stability-based bound usually does not work well under general nonconvex regimes ( Hardt et al. , 2016 ; Charles & Papailiopoulos , 2018 ; Zhou et al. , 2018b ) but neural networks are usually highly non-convex . The aforementioned two obstructions mainly stem from the coarse-grained analysis of the signal and noise . As Zhang et al . 
( 2016 ) argued , neural networks converge fast when fitting signal but converge relatively slowly when fitting noise1 , indicating that the training dynamics over signal and noise are significantly different . Consequently , on the one hand , the fast convergence of signal-related training contributes to a large gradient norm at the beginning phase ( see Figure 1a ) , resulting in 1In this paper , we refer the signal to the clean data without the output noise , and the noise to the output noise . See Section 2 for the formal definitions . poor stability . On the other hand , the training on signal forces the trained parameter away from the initialization , making the whole training path highly non-convex ( see Figure 1b ) . The above two phenomena inspire us to decompose the training dynamics into noise and signal component and only apply the stability-based analysis over the noise component . To demonstrate that such decomposition generally holds in practice , we conduct several experiments of neural networks on both synthetic dataset and real-world dataset ( see Figure 2 for more details ) . Based on the above discussion , we improve the stability-based analysis by proposing a decomposition framework on excess risk dynamics2 , where we handle the noise and signal components separately via bias-variance decomposition . In detail , we decompose the excess risk into variance excess risk ( VER ) and bias excess risk ( BER ) , where VER measures how the model fits noise and BER measures how the model fits signal . Under the decomposition , we apply the stability-based techniques to VER and apply uniform convergence to BER inspired by Negrea et al . ( 2020 ) . The decomposition framework accords with the theoretical and experimental evidence surprisingly well , providing that it outperforms stability-based bounds in both linear ( overparameterized linear regression ) and non-linear ( diagonal matrix recovery ) regimes . 
We summarize our contributions as follows:
• We propose a new framework for improving traditional stability-based bounds: a novel approach to generalization dynamics analysis that decomposes the excess risk dynamics into a variance component and a bias component. Starting from the overparameterized linear regression regime, we show how to deploy the decomposition framework in practice, and the proposed framework outperforms the stability-based bounds.
• We theoretically analyze the excess risk decomposition beyond linear regimes. As a case study, we derive a generalization bound in the diagonal matrix recovery regime. To the best of our knowledge, this is the first work to analyze the generalization performance of diagonal matrix recovery.
• We conduct experiments on both synthetic and real-world datasets (MNIST, CIFAR-10) to validate the utility of the decomposition framework, indicating that the framework provides interesting insights for the generalization community.
²We decompose the excess risk, which is closely related to generalization, purely for technical reasons. The excess risk dynamics tracks the excess risk during the training process.
1.1 RELATED WORK . Stability-based Generalization . Research on stability can be roughly split into two branches. One branch studies how algorithmic stability leads to generalization (Feldman & Vondrak, 2018; 2019; Bousquet et al., 2020). The other branch focuses on how to calculate the stability parameter for specific problems; e.g., Hardt et al. (2016) prove a generalization bound that scales linearly with time in convex regimes. Furthermore, researchers have applied stability techniques in more general settings, e.g., non-smooth losses (Lei & Ying, 2020; Bassily et al., 2020), noisy gradient descent in non-convex regimes (Mou et al., 2018; Li et al., 2019), and stochastic gradient descent in non-convex regimes (Zhou et al.
, 2018b; Charles & Papailiopoulos, 2018; Zhang et al., 2021). In this paper, we mainly focus on applying the decomposition framework to improve the stability-based bound. Uniform Convergence is widely used in generalization analysis. For bounded losses, the generalization gap is tightly bounded via Rademacher complexity (Koltchinskii & Panchenko, 2000; Koltchinskii, 2001; Koltchinskii et al., 2006). Faster rates are available under the realizable assumption (Srebro et al., 2010). A line of work focuses on uniform convergence in neural network regimes, where the bounds are usually related to parameter norms (Bartlett et al., 2017; Wei & Ma, 2019). However, as Nagarajan & Kolter (2019) pointed out, uniform convergence may be unable to explain generalization, so further techniques beyond uniform convergence are being explored. Other Approaches to Generalization . Other approaches to generalization include PAC-Bayes (Neyshabur et al., 2017a; Dziugaite & Roy, 2017; Neyshabur et al., 2017b; Dziugaite & Roy, 2018; Zhou et al., 2018a; Yang et al., 2019), information-theoretic bounds (Russo & Zou, 2016; Xu & Raginsky, 2017; Banerjee & Montúfar, 2021; Haghifam et al., 2020; Steinke & Zakynthinou, 2020), and compression-based bounds (Arora et al., 2018; Allen-Zhu et al., 2018; Arora et al., 2019). Bias-Variance Decomposition . Bias-variance decomposition plays an important role in statistical analysis (Lehmann & Casella, 2006; Casella & Berger, 2021; Geman et al., 1992). Generally, high bias indicates that the model predicts poorly on average, and high variance indicates that the model performs unstably. Bias-variance decomposition is widely used in machine learning analysis, e.g., for adversarial training (Yu et al., 2021), double descent (Adlam & Pennington, 2020), and uncertainty (Hu et al., 2020). Oymak et al.
(2019) applied a bias-variance decomposition to the Jacobian of neural networks to explain their different performance on clean and noisy data. This paper considers a slightly different bias-variance decomposition following the analysis of SGD (Dieuleveut et al., 2016; Jain et al., 2018; Zou et al., 2021), focusing on the decomposition of the noisy output. Matrix Recovery . Earlier methods for matrix recovery rely on convex relaxation techniques for minimum-norm solutions (Recht et al., 2010; Chandrasekaran et al., 2011). More recently, a line of work has focused on matrix factorization with simple local search methods; it has been shown that there are no spurious local minima in the exact regime (Ge et al., 2016; 2017; Zhang et al., 2019). In the overparameterized regime, it was first conjectured by Gunasekar et al. (2018) and then shown by Li et al. (2018) that gradient descent converges efficiently to the low-rank solution. Later, Zhuo et al. (2021) extended the conclusion to noisy settings. 2 PRELIMINARY . In this section, we introduce the necessary definitions and assumptions, and formally present previous techniques for dealing with generalization. Data distribution . Let x ∈ X ⊂ R^p be the input and y ∈ Y ⊂ R be the output, where (x, y) is generated from a joint distribution (x, y) ∼ P. Define P_x, P_y, and P_{y|x} as the corresponding marginal and conditional distributions, respectively. Given n training samples D ≜ {(x_i, y_i)}_{i∈[n]} generated from P, we denote the empirical distribution by P_n. To simplify notation, we define X ∈ R^{n×p} as the design matrix and Y ∈ R^n as the response vector. Excess Risk . Given a loss function ℓ(θ; x, y) with parameter θ and sample (x, y), we define the population loss as $L(\theta; P) \triangleq \mathbb{E}_{(x,y)\sim P}[\ell(\theta; x, y)]$ and the corresponding training loss as $L(\theta; P_n) \triangleq \frac{1}{n}\sum_{i} \ell(\theta; x_i, y_i)$.
Let A_t denote the optimization algorithm that takes the dataset D as input and returns the trained parameter θ̂(t) at time t, namely A_t(D) = θ̂(t). Throughout the analysis, we focus on the excess risk dynamics EL(θ̂(t); P), which measures how the trained parameter θ̂(t) performs on the population loss:

$$\mathcal{E}L(\hat\theta^{(t)}; P) \triangleq L(\hat\theta^{(t)}; P) - \min_\theta L(\theta; P).$$

Although the minimizer may not be unique, we define L(θ*; P) ≜ min_θ L(θ; P), where θ* denotes an arbitrary one of the minimizers. We additionally remark that bounding the generalization gap L(θ̂(t); P) − L(θ̂(t); P_n) suffices to bound the excess risk under the Empirical Risk Minimization (ERM) framework, by the following split (Equation 1); we therefore mainly discuss the excess risk, following Bartlett et al. (2020):

$$\mathcal{E}L(\hat\theta^{(t)}; P) = \underbrace{L(\hat\theta^{(t)}; P) - L(\hat\theta^{(t)}; P_n)}_{\text{generalization gap}} + \underbrace{L(\hat\theta^{(t)}; P_n) - L(\theta^*; P_n)}_{\le 0 \text{ under ERM}} + \underbrace{L(\theta^*; P_n) - L(\theta^*; P)}_{\approx 0 \text{ by concentration inequalities}}. \tag{1}$$

VER and BER . As discussed in Section 1, we aim to decompose the excess risk into the variance excess risk (VER) and the bias excess risk (BER) defined in Definition 1. The decomposition targets regimes with noisy outputs: we split the output y into the signal component E[y|x] and the noise component y − E[y|x]. Informally, VER measures how the model performs on pure noise, and BER measures how the model performs on clean data. Definition 1 (VER and BER) Given (x, y) ∼ P, let (x, E[y|x]) ∼ P_b denote the signal distribution and (x, y − E[y|x]) ∼ P_v denote the noise distribution. The variance excess risk (VER) E_vL(θ; P) and the bias excess risk (BER) E_bL(θ; P) are defined as:

$$\mathcal{E}_v L(\theta; P) \triangleq \mathcal{E}L(\theta; P_v), \qquad \mathcal{E}_b L(\theta; P) \triangleq \mathcal{E}L(\theta; P_b).$$
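The three-term split in Equation (1) can be checked numerically. The setting below is a hypothetical least-squares model of our choosing (not the paper's experiments), picked because the population loss has the closed form L(θ; P) = ‖θ − θ*‖² + σ², so min_θ L(θ; P) = σ²:

```python
import numpy as np

# Numeric sanity check of Equation (1) for y = x·θ* + ε, x ~ N(0, I),
# ε ~ N(0, σ²), squared loss, with the ERM (least-squares) solution.
rng = np.random.default_rng(0)
n, p, sigma = 200, 5, 0.5
theta_star = rng.normal(size=p)
X = rng.normal(size=(n, p))
Y = X @ theta_star + sigma * rng.normal(size=n)

def pop_loss(theta):   # population loss L(θ; P), closed form
    return float(np.sum((theta - theta_star) ** 2) + sigma ** 2)

def emp_loss(theta):   # training loss L(θ; P_n)
    return float(np.mean((X @ theta - Y) ** 2))

theta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]        # ERM solution
excess = pop_loss(theta_hat) - pop_loss(theta_star)     # EL(θ̂; P)
gen_gap = pop_loss(theta_hat) - emp_loss(theta_hat)     # generalization gap
erm_term = emp_loss(theta_hat) - emp_loss(theta_star)   # ≤ 0 under ERM
conc_term = emp_loss(theta_star) - pop_loss(theta_star) # ≈ 0 by concentration
assert abs(excess - (gen_gap + erm_term + conc_term)) < 1e-10
```

The identity holds exactly by construction; the point of the split is that the middle term is non-positive under ERM and the last term concentrates around zero for large n.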
To better illustrate VER and BER, we consider three surrogate training dynamics: Standard Training, Variance Training, and Bias Training, corresponding to ER, VER, and BER, respectively. Standard Training : the training process over the noisy data (X, Y) from the initialization θ(0); we denote the trained parameter at time t by θ̂(t). Variance Training : the training process over the pure noise (X, Y − E[Y|X]) from the initialization θ_v(0); we denote the trained parameter at time t by θ̂_v(t). Bias Training : the training process over the clean data (X, E[Y|X]) from the initialization θ_b(0); we denote the trained parameter at time t by θ̂_b(t). When the context is clear, we omit the dependency of the trained parameters θ̂(t), θ̂_v(t), θ̂_b(t) on the corresponding initializations and algorithms. Besides, we denote by θ*, θ*_v, and θ*_b the optimal parameters minimizing the corresponding population losses L(θ; P), L(θ; P_v), and L(θ; P_b), respectively. Techniques in generalization . We next introduce two techniques of generalization analysis: stability-based bounds in Proposition 1 and uniform convergence in Proposition 2. These techniques are revisited in Section 3. Proposition 1 (Stability Bound from Feldman & Vondrak (2019)) Assume that algorithm A_t is ϵ-uniformly stable at time t, meaning that for any two datasets D and D′ differing in a single data point,

$$\sup_{(x,y)} \mathbb{E}_A\left[\ell(A_t(D); x, y) - \ell(A_t(D'); x, y)\right] \le \epsilon.$$

Then the following inequality holds with probability at least 1 − δ:

$$\left|L(A_t(D); P) - L(A_t(D); P_n)\right| = O\!\left(\epsilon \log(n) \log(n/\delta) + \sqrt{\log(1/\delta)/n}\right).$$
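The three surrogate dynamics above can be sketched in a toy linear setting of our choosing. A convenient fact for gradient descent on squared loss is that the update is linear in the targets, so from a shared zero initialization the Standard Training iterate is exactly the sum of the Variance and Bias Training iterates:

```python
import numpy as np

# Standard Training on (X, Y), Variance Training on the pure noise
# (X, Y - E[Y|X]), and Bias Training on the clean signal (X, E[Y|X]),
# all run from the same zero initialization with the same algorithm.
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
theta_star = rng.normal(size=p)
signal = X @ theta_star             # E[Y|X]
noise = 0.5 * rng.normal(size=n)    # Y - E[Y|X]
Y = signal + noise

def train(targets, lr=0.05, steps=300):
    theta = np.zeros(p)
    for _ in range(steps):
        theta -= lr * X.T @ (X @ theta - targets) / n
    return theta

theta_std = train(Y)        # θ̂(t): Standard Training
theta_var = train(noise)    # θ̂_v(t): Variance Training
theta_bias = train(signal)  # θ̂_b(t): Bias Training
# GD is linear in the targets, so the standard iterate decomposes exactly:
assert np.allclose(theta_std, theta_var + theta_bias)
```

This exact additivity is special to linear models with shared initialization; for neural networks the decomposition is only approximate, which is what the paper's Figure 2 experiments probe.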
Proposition 2 (Uniform Convergence from Wainwright (2019)) Uniform convergence decouples the dependency between the trained parameter and the training set by taking a supremum over a parameter space that is independent of the training data, namely

$$L(A_t(D); P) - L(A_t(D); P_n) \le \sup_{\theta \in \mathcal{B}} \left[L(\theta; P) - L(\theta; P_n)\right],$$

where B is independent of the dataset D and A_t(D) ∈ B for any time t.
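A crude numeric illustration of Proposition 2 (our setting; a rigorous B must be chosen a priori, whereas for simplicity we size the ball from the trained parameter here): the gap of the trained parameter never exceeds the supremum over any set B containing it, which we lower-bound by evaluating the gap on random elements of B:

```python
import numpy as np

# Toy uniform-convergence check for least squares with a closed-form
# population loss; the "sup over B" is approximated from below by
# sampling the sphere of radius R plus the trained parameter itself.
rng = np.random.default_rng(0)
n, p, sigma = 100, 5, 0.5
theta_star = rng.normal(size=p)
X = rng.normal(size=(n, p))
Y = X @ theta_star + sigma * rng.normal(size=n)

def gap(theta):   # L(θ; P) - L(θ; P_n), population loss in closed form
    pop = np.sum((theta - theta_star) ** 2) + sigma ** 2
    emp = np.mean((X @ theta - Y) ** 2)
    return float(pop - emp)

theta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
R = 2.0 * np.linalg.norm(theta_hat)            # radius of the ball B
candidates = [theta_hat] + [R * u / np.linalg.norm(u)
                            for u in rng.normal(size=(200, p))]
sup_gap = max(gap(t) for t in candidates)
assert gap(theta_hat) <= sup_gap               # the inequality of Prop. 2
```

The inequality is trivial once A_t(D) ∈ B, which is precisely why a vacuously large sup (as in overparameterized regimes) makes the bound uninformative.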
This paper studies the generalization performance of learning algorithms. The basic idea is to decompose the training dynamics into noise and signal components and then to treat separately the behavior of models trained on noise and on signal. The authors propose using the stability approach to tackle the noise component and the uniform convergence approach to tackle the signal component. They consider two specific settings, overparameterized linear regression and diagonal matrix recovery, and report experimental results.
SP:5af29ac70568ac6e8afb37add204d02b534880fe
On the Capacity and Superposition of Minima in Neural Network Loss Function Landscapes
1 INTRODUCTION . Deep learning with neural networks ( NNs ) is a high-dimensional , non-convex optimisation problem for a loss function landscape ( LFL ) . The coordinates of a minimum in the LFL are a set of weights for the machine learning model and a locally optimal solution to the learning problem , and these terms will therefore be used interchangeably throughout . It follows that the coordinates of the global minimum of the LFL are the weights that produce the lowest possible value of the loss function for the training data . The aim of machine learning is usually for the model to find a set of weights that fit the training data , but also generalise well to unseen testing data . Our approach extends this view . Instead of looking at just one minimum of the LFL , we are interested in the expressive power of multiple minima . To analyse how different minima extract and process information from the input data , we survey numerous low-lying minima of the LFL . Here , we employ tools from the energy landscape approach ( Wales , 2003 ) to gain new insight into machine learning LFLs ( Ballard et al. , 2017 ) . We note that the concept of a minimum is somewhat abstract in machine learning landscapes compared to molecular systems . While in a molecular energy landscape only minima provide valid configurations for a stable molecule , this restriction does not apply to LFLs for machine learning . In fact , some low-lying non-minima will have a smaller loss value and higher classification accuracy than a high-lying minimum . Here , we are interested in developing a better understanding of the capacity of diverse minima of the LFL , and showing that by combining the expressive power of different minima , we can build a better classifier . The compact form of this predictor provides a balance between accuracy and efficiency as required in applications where evaluation is a computational bottleneck . 1.1 BACKGROUND . 
Machine learning models are structurally limited in the amount of data they can fit: their capacity is finite. The most commonly known measure of capacity is perhaps the Vapnik–Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1971; Vapnik et al., 1994): the higher the VC dimension, the more complex the data that can be fitted. More rigorously, the VC dimension is defined as the largest cardinality of a set of data points that the NN can shatter (for our purposes, shatter means classify correctly). Thus, the weights of an underparameterised model (i.e. one with fewer parameters than training data points) may be incapable of fitting the entire test data set, fitting just parts of it instead. The approach we employ to study the expressive power of combinations of individual minima is a variation of ensemble learning, where the results of multiple different predictors are combined to improve the overall accuracy of an approximation problem (Dong et al., 2020). The idea of combining multiple sources of information, specifically the output predictions of multiple classifiers, has been considered for over two decades (Breiman, 1996; Hashem, 1997; Jin & Lu, 2009). Two of the most important questions in ensemble learning are: which classifiers to consider, and how to combine the individual predictions (Wang, 2008). For a detailed review see Kuncheva (2014). 1.2 MOTIVATION . In this contribution, we are interested in quantitatively and systematically characterising a cornerstone of ensemble learning, namely classifier diversity. Logically, ensemble learning works if different classifiers extract different information from the input data or process it differently (Melville & Mooney, 2005; Zaidi et al., 2020). In the present work, the classifiers in question correspond to local minima of a reference neural network.
We aim to visualise the diversity of minima in the corresponding LFL and show how to select a few of them to produce a compact yet more accurate classifier. We will show that different minima of the LFL successfully classify distinct subsets of the entire input dataset. Hence different local minima specialise in distinct parts of the test dataset, which we believe has not been shown before. In summary, our main contributions are:
• Showing that different local minima specialise in distinct subsets of the input
• MLSUP, a proof-of-concept method that exploits minima diversity to improve classification results for complex problems
• An interpretation of the limitation of single-minimum models and visualisation of the differences between minima
• Novel insights into the symmetry properties of minima in neural network LFLs
2 SUPERPOSITION OF MACHINE LEARNING SOLUTIONS : MLSUP . We observe that different local minima of a reference neural network extract different information from the input data and that combining just a few of them can improve classification significantly. To study this effect, we employ a modified stacking approach in which multiple minima of the same classifier, rather than multiple classifiers, are combined. We do not obtain these minima from different random initialisations but by sampling solutions from the LFL. This approach provides insight into the functional landscape and a deeper understanding of LFL minima. To answer the second important question in ensemble learning design, we employ a second neural network that selects one of the local minima for a given input data item. This idea is related to the earlier theory of Jordan & Jacobs (1994), where a gating network chooses which classifier to apply to a given problem (Shazeer et al., 2017; McGill & Perona, 2017). We call our method MLSUP, to denote a superposition of machine learning solutions (local minima of the LFL).
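As a rough end-to-end sketch of this gating idea (all helper names and the stand-in "minima" below are hypothetical, not the authors' implementation): each candidate minimum is a fixed classifier, and an oracle selector plays the role of the second network that picks a minimum per data item:

```python
import numpy as np

# Toy superposition of "minima": random linear classifiers stand in for
# local minima, and routing each point to the minimum with the highest
# correct-class probability can only improve on any single minimum.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
c = (X[:, 0] + X[:, 1] > 0).astype(int)       # binary labels

def make_minimum(w):
    def predict(Xb):                           # class probabilities
        p1 = 1.0 / (1.0 + np.exp(-Xb @ w))
        return np.stack([1.0 - p1, p1], axis=1)
    return predict

# A small subset M' of candidate "minima":
M_prime = [make_minimum(rng.normal(size=2)) for _ in range(3)]

# Probability each minimum assigns to the correct class of each point:
probs = np.stack([m(X)[np.arange(len(c)), c] for m in M_prime])
b = probs.argmax(axis=0)                       # best minimum per point

# Route each point to its selected minimum and predict:
sel_p1 = np.array([M_prime[b[d]](X[d:d+1])[0, 1] for d in range(len(c))])
combined_acc = np.mean((sel_p1 > 0.5) == (c == 1))
single_accs = [np.mean((m(X)[:, 1] > 0.5) == (c == 1)) for m in M_prime]
print(combined_acc >= max(single_accs))   # oracle routing never hurts
```

In MLSUP proper, the oracle labels b become the training targets of the second (meta-)network, which must then generalise the routing to unseen data.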
We describe MLSUP as a four-step process (Figure 1). The first step involves characterising local minima M by exploring the loss function landscape during training. Next, we choose a subset of minima M′ ⊆ M and evaluate each m ∈ M′ for every training datapoint, which reveals how well each of them can classify specific data items (step 2 in Figure 1). A detailed discussion of how a few minima are selected for combination is included below. The superposition of the chosen minima is achieved by training a second, meta-network (classifier 2, i.e. step 3 in Figure 1) to learn which of the m ∈ M′ minima is best suited to classify a specific input datapoint. Thus, the second network learns to apply different minima to different types of input data, as shown in step 4 of Figure 1. A pseudocode version of MLSUP is provided in the Appendix. 3 MODEL . We consider a classification problem for C classes with a single hidden layer, as we are specifically interested in underparameterised networks. For some data D = (X, c), the inputs are denoted X = {x_1, . . . , x_N}, where N is the number of data points in the training or testing set, denoted X_train and X_test respectively. The correct label for a data point d is denoted c_d. We use tanh as the nonlinear activation function for the hidden layer, since it has continuous derivatives, which we require for optimisation. Outputs at the nodes y_i are converted to softmax probabilities

$$p_i(\mathbf{W}; X) = \frac{\exp(y_i)}{\sum_j \exp(y_j)},$$

where W denotes the vector containing all weights. During training, we minimise a loss function L(W; X) with respect to these weights. We use a cross-entropy loss function

$$L(\mathbf{W}; X) = -\frac{1}{N} \sum_{d=1}^{N} \ln p_{c_d}(\mathbf{W}; X) + \lambda \mathbf{W}^2, \tag{1}$$

where c_d is the correct class for data item x_d.
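A minimal sketch of this model, as we read it from the section above (our own implementation of the stated architecture, not the authors' code): one tanh hidden layer, softmax outputs, and the L2-regularised cross-entropy loss of equation (1):

```python
import numpy as np

# Single-hidden-layer tanh classifier with softmax probabilities p_i and
# the cross-entropy + λW² loss of equation (1).
def forward(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    z = h @ W2 + b2                            # output nodes y_i
    z -= z.max(axis=1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)    # softmax p_i(W; X)

def loss(params, X, c, lam=1e-5):
    p = forward(params, X)
    ce = -np.mean(np.log(p[np.arange(len(c)), c]))
    reg = lam * sum(np.sum(w ** 2) for w in params)   # λW² term
    return ce + reg

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
c = rng.integers(0, 3, size=8)                 # C = 3 classes
params = (0.1 * rng.normal(size=(4, 6)), np.zeros(6),
          0.1 * rng.normal(size=(6, 3)), np.zeros(3))
p = forward(params, X)                         # rows sum to 1
```

Note the max-subtraction trick in `forward` only stabilises the exponentials; it leaves the probabilities unchanged.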
An L2 regularisation term λW² is added to eliminate zero Hessian eigenvalues, which by Noether's theorem arise as a consequence of continuous symmetries of the loss function under additive shifts of all the output bias weights (Ballard et al., 2017). We find λ = 10⁻⁵ to be appropriate for the tests considered; our conclusions are largely insensitive to λ. This setup is used for the neural networks in steps 1, 2, and 4 of Figure 1. 3.1 DEFINING THE META-NETWORK LOSS FUNCTION . For step 3 of Figure 1, the loss function differs from equation 1. This section describes Classifier 2 from Figure 1, which is distinct from the classifier used in steps 1, 2, and 4. For Classifier 2, we are not interested in learning c_d, i.e. the correct output class, but rather the best local minimum for classifying input data item d, defined by the highest corresponding probability:

$$b_d = \arg\max_{m \in \mathcal{M}'} {}^{m}p_{c_d} \tag{2}$$

for data item d, where m indexes one set of weights. This formulation changes the loss function to

$$L(\tilde{\mathbf{W}}; X) = -\frac{1}{N} \sum_{d=1}^{N} \ln p_{b_d}(\tilde{\mathbf{W}}; X) + \lambda \tilde{\mathbf{W}}^2, \tag{3}$$

with W̃ representing the weights of network 2. We evaluate the classification predictions of our model using the area under the receiver operating characteristic curve (ROC-AUC) (Fawcett, 2006). The change of loss function also affects how we calculate the AUC for step 4 in Figure 1. In the usual case, the AUC is given as

$$\mathrm{AUC} = \int_0^1 T(P)\, \mathrm{d}F(P) \tag{4}$$

with the true positive rate for outcome number one, T(P), and the false positive rate, F(P),

$$T(P) = \frac{\sum_d^{N_{\text{data}}} \delta(c_d - 1)\, \Theta(p_1 - P)}{\sum_d^{N_{\text{data}}} \delta(c_d - 1)}, \qquad F(P) = \frac{\sum_d^{N_{\text{data}}} \left[1 - \delta(c_d - 1)\right] \Theta(p_1 - P)}{\sum_d^{N_{\text{data}}} \left[1 - \delta(c_d - 1)\right]}, \tag{5}$$

where δ(c_d − 1) is the Dirac delta function and Θ(p_1 − P) the Heaviside step function, defined as

$$\delta(c_d - 1) = \begin{cases} 1 & \text{if } c_d = 1, \\ 0 & \text{if } c_d \neq 1, \end{cases} \qquad \Theta(p_1 - P) = \begin{cases} 1 & \text{if } p_1 \ge P, \\ 0 & \text{if } p_1 < P. \end{cases} \tag{6}$$

However, we now have |M′| possibilities.
Thus, we must evaluate the AUC using the minimum s_d that is chosen by Classifier 2 from Figure 1:

$$T(P) = \frac{\sum_d^{N_{\text{data}}} \delta(c_d - 1)\, \Theta(p_1^{s_d} - P)}{\sum_d^{N_{\text{data}}} \delta(c_d - 1)}, \qquad F(P) = \frac{\sum_d^{N_{\text{data}}} \left[1 - \delta(c_d - 1)\right] \Theta(p_1^{s_d} - P)}{\sum_d^{N_{\text{data}}} \left[1 - \delta(c_d - 1)\right]}. \tag{7}$$

3.2 OPTIMISATION ROUTINE . To survey the loss function landscape, we employ methods from the energy landscape approach, which has been widely used to study molecular and condensed matter systems in the physical sciences (Wales, 2003). Specifically, global optimisation is performed using the basin-hopping method (Li & Scheraga, 1987; Wales & Doye, 1997) with a customised quasi-Newton L-BFGS (Nocedal, 1980) optimiser; more information is included in the Appendix. Candidates for transition states are obtained using a doubly-nudged (Trygubenko & Wales, 2004a;b) elastic band (Henkelman & Jónsson, 2000; Henkelman et al., 2000) approach and accurately refined by hybrid eigenvector-following (Munro & Wales, 1999; Zeng et al., 2014). The routines are implemented in the GMIN, OPTIM, and PATHSAMPLE programs, which are available under the GNU General Public License.
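The per-item routing in the AUC of equations (4)-(7) can be sketched as follows (our own helpers and toy data; the random selector stands in for Classifier 2's choices s_d): T(P) and F(P) are evaluated on a sweep of thresholds P and the area accumulated by the trapezoidal rule, with each item scored by its selected minimum:

```python
import numpy as np

# ROC-AUC from thresholded rates, where the score entering Θ(·) for item
# d is the probability p_1 emitted by the selected minimum s_d.
def roc_auc(scores, c):
    ts = np.sort(np.unique(scores))[::-1]
    ts = np.concatenate([[ts[0] + 1.0], ts, [ts[-1] - 1.0]])  # full sweep
    T = np.array([np.mean(scores[c == 1] >= t) for t in ts])  # TPR, eq. (7)
    F = np.array([np.mean(scores[c != 1] >= t) for t in ts])  # FPR, eq. (7)
    return float(np.sum(0.5 * (T[1:] + T[:-1]) * (F[1:] - F[:-1])))

rng = np.random.default_rng(0)
N, M = 400, 3
c = rng.integers(0, 2, size=N)
# p_1 scores of each candidate minimum on every item (toy values in which
# positives receive systematically higher scores):
p1 = np.clip(0.5 + 0.25 * (c - 0.5)[None, :]
             + 0.15 * rng.normal(size=(M, N)), 0.0, 1.0)
s = rng.integers(0, M, size=N)           # stand-in for Classifier 2's s_d
auc = roc_auc(p1[s, np.arange(N)], c)    # AUC of the routed predictor
```

Sweeping thresholds from above the maximum score to below the minimum traces the full ROC curve from (0, 0) to (1, 1), so the trapezoidal sum reproduces the integral in equation (4).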
The paper proposes a method that selects a subset of local minima of the loss function landscape and combines them using a second meta-network, which learns to select a set of weights for each input data point. The subset of local minima is selected based on two criteria: minima that are distant in classification space, and peak heat capacity. The combination of selected minima achieves better classification accuracy than the best individual minimum and than minima selected by Euclidean distance.
SP:2ded0ffd70950d61fe69e808c3624b2ccac70b58
This paper studies a form of ensembling of under-parameterized networks. The authors show that different local minima specialize in distinct subsets of inputs, and that an algorithm called MLSUP can combine them to improve prediction. The method is empirically tested with a 1-hidden-layer network on a 2D synthetic classification dataset (and, in the appendix, an artificially corrupted UCI Iris flower task).
SP:2ded0ffd70950d61fe69e808c3624b2ccac70b58
On the Capacity and Superposition of Minima in Neural Network Loss Function Landscapes
1 INTRODUCTION . Deep learning with neural networks ( NNs ) is a high-dimensional , non-convex optimisation problem for a loss function landscape ( LFL ) . The coordinates of a minimum in the LFL are a set of weights for the machine learning model and a locally optimal solution to the learning problem , and these terms will therefore be used interchangeably throughout . It follows that the coordinates of the global minimum of the LFL are the weights that produce the lowest possible value of the loss function for the training data . The aim of machine learning is usually for the model to find a set of weights that fit the training data , but also generalise well to unseen testing data . Our approach extends this view . Instead of looking at just one minimum of the LFL , we are interested in the expressive power of multiple minima . To analyse how different minima extract and process information from the input data , we survey numerous low-lying minima of the LFL . Here , we employ tools from the energy landscape approach ( Wales , 2003 ) to gain new insight into machine learning LFLs ( Ballard et al. , 2017 ) . We note that the concept of a minimum is somewhat abstract in machine learning landscapes compared to molecular systems . While in a molecular energy landscape only minima provide valid configurations for a stable molecule , this restriction does not apply to LFLs for machine learning . In fact , some low-lying non-minima will have a smaller loss value and higher classification accuracy than a high-lying minimum . Here , we are interested in developing a better understanding of the capacity of diverse minima of the LFL , and showing that by combining the expressive power of different minima , we can build a better classifier . The compact form of this predictor provides a balance between accuracy and efficiency as required in applications where evaluation is a computational bottleneck . 1.1 BACKGROUND . 
Machine learning models are structurally limited in the amount of data they can fit: their capacity is finite. The most commonly known measure of capacity is perhaps the Vapnik–Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1971; Vapnik et al., 1994). The higher the VC dimension, the more complex the data that can be fitted. More rigorously, the VC dimension is defined as the largest cardinality of a set of data points that the NN can shatter (for our purposes, shatter means classify correctly). Thus, the weights of an underparameterised model (i.e. one with fewer parameters than training data points) may be incapable of fitting the entire test data set, but instead fit just parts of it. The approach we employ to study the expressive power of combinations of individual minima is a variation of ensemble learning, where the results of multiple different predictors are combined to improve the overall accuracy of an approximation problem (Dong et al., 2020). The idea of combining multiple sources of information, specifically the output predictions of multiple classifiers, has been considered for over two decades (Breiman, 1996; Hashem, 1997; Jin & Lu, 2009). Two of the most important questions in ensemble learning are: which classifiers to consider, and how to combine the individual predictions (Wang, 2008). For a detailed review see Kuncheva (2014).

1.2 MOTIVATION

In this contribution, we are interested in quantitatively and systematically characterising a cornerstone of ensemble learning, namely classifier diversity. Logically, ensemble learning works if different classifiers extract different information from the input data or process it differently (Melville & Mooney, 2005; Zaidi et al., 2020). In the present work, the classifiers in question correspond to local minima of a reference neural network.
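As a small self-contained illustration of shattering (not from the paper): a linear classifier in two dimensions has VC dimension 3, so it can realise all $2^3$ labelings of three non-collinear points. A short numpy check, with the points and names chosen for illustration:

```python
import numpy as np
from itertools import product

# Three non-collinear points in 2-D; augmented rows [x, y, 1] so a linear
# classifier sign(w1*x + w2*y + b) is sign(A @ w).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
A = np.hstack([pts, np.ones((3, 1))])  # invertible: points are non-collinear

def shatters_all(A):
    # Try every +/-1 labeling; solving A @ w = labels exactly realises it.
    for labels in product([-1.0, 1.0], repeat=A.shape[0]):
        t = np.array(labels)
        w = np.linalg.solve(A, t)
        if not np.array_equal(np.sign(A @ w), t):
            return False
    return True

print(shatters_all(A))  # all 8 labelings of the 3 points are realised
```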
We aim to visualise the diversity of minima in the corresponding LFL and show how to select a few of them to produce a compact yet more accurate classifier. We will show that different minima of the LFL successfully classify distinct subsets of the entire input dataset. Hence different local minima specialise in distinct parts of the test dataset, which we believe has not been shown before. In summary, our main contributions are:

• Showing that different local minima specialise in distinct subsets of the input
• MLSUP, a proof-of-concept method that exploits minima diversity to improve classification results for complex problems
• An interpretation of the limitation of single-minimum models and visualisation of the differences between minima
• Novel insights into the symmetry properties of minima in neural network LFLs

2 SUPERPOSITION OF MACHINE LEARNING SOLUTIONS: MLSUP

We observe that different local minima of a reference neural network extract different information from the input data, and that combining just a few of them can improve classification significantly. To study this effect, we employ a modified stacking approach where multiple minima from the same classifier are combined, rather than multiple classifiers. We do not obtain these minima by different random initialisations but rather by sampling solutions from the LFL. This approach provides insight into the functional landscape and a deeper understanding of LFL minima. To answer the second important question in ensemble learning design, we employ a second neural network to select one of the local minima for a given input data item. This idea is related to the mixture-of-experts framework of Jordan & Jacobs (1994), where a gating network chooses which classifier to apply to a given problem (Shazeer et al., 2017; McGill & Perona, 2017). We call our method MLSUP, denoting a superposition of machine learning solutions (local minima of the LFL).
We describe MLSUP as a four-step process (Figure 1). The first step involves characterising local minima $\mathcal{M}$ by exploring the loss function landscape during training. Next, we choose a subset of minima $\mathcal{M}' \subseteq \mathcal{M}$ and evaluate each $m \in \mathcal{M}'$ for every training datapoint, which reveals how well each of them can classify specific data items (step 2 in Figure 1). A detailed discussion of how a few minima are selected for combination is included below. The superposition of the chosen minima is achieved by training a second, meta-network (Classifier 2, i.e. step 3 in Figure 1) to learn which of the minima $m \in \mathcal{M}'$ is best suited to classify a specific input datapoint. Thus, the second network learns to apply different minima to classify different types of input data, as shown in step 4 of Figure 1. A pseudocode version of MLSUP is provided in the Appendix.

3 MODEL

We consider a classification problem for $C$ classes with a single hidden layer, as we are specifically interested in underparameterised networks. For some data $D = (X, c)$, the inputs are denoted $X = \{x^1, \ldots, x^N\}$, where $N$ is the number of data points in the training or testing set, denoted $X_{\mathrm{train}}$ and $X_{\mathrm{test}}$ respectively. The correct label for a data point $d$ is denoted $c_d$. We use tanh as the nonlinear activation function for the hidden layer, since it has continuous derivatives, which we require for optimisation. Outputs at node $y_i$ are converted to softmax probabilities $p_i(W; X) = e^{y_i} / \sum_j e^{y_j}$, where $W$ denotes the vector containing all weights. During training, we minimise a loss function $L(W; X)$ with respect to these weights. We use a cross-entropy loss function

$$L(W; X) = -\frac{1}{N} \sum_{d=1}^{N} \ln p_{c_d}(W; X) + \lambda \|W\|^2 \quad (1)$$

where $c_d$ is the correct class for data item $x^d$.
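A minimal numpy sketch of the softmax and the regularised cross-entropy loss of equation 1. The `forward` function below is a plain linear map standing in for the paper's one-hidden-layer tanh network, and all names are illustrative assumptions:

```python
import numpy as np

def softmax(y):
    # p_i = exp(y_i) / sum_j exp(y_j), computed row-wise and stably
    z = np.exp(y - y.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def forward(W, X):
    # stand-in for the tanh hidden-layer network: raw outputs y = X @ W
    return X @ W

def loss(W, X, c, lam=1e-5):
    # Equation 1: mean negative log-probability of the correct class,
    # plus L2 regularisation lam * sum(W^2).
    p = softmax(forward(W, X))
    N = X.shape[0]
    return -np.log(p[np.arange(N), c]).mean() + lam * np.sum(W ** 2)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
c = np.array([0, 1])
W0 = np.zeros((2, 2))  # uniform softmax over 2 classes -> loss = ln(2)
print(loss(W0, X, c))
```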
An L2 regularisation term $\lambda \|W\|^2$ is added to eliminate zero Hessian eigenvalues, which by Noether's theorem arise as a consequence of continuous symmetries in the loss function when additively shifting all the output bias weights (Ballard et al., 2017). We find $\lambda = 10^{-5}$ to be appropriate for the tests considered; our conclusions are largely insensitive to $\lambda$. This setup is used for the neural networks in steps 1, 2 and 4 of Figure 1.

3.1 DEFINING THE META-NETWORK LOSS FUNCTION

For step 3 of Figure 1, the loss function differs from equation 1. This section describes Classifier 2 from Figure 1, which is distinct from the classifier used for steps 1, 2 and 4. For Classifier 2 we are not interested in learning $c_d$, i.e. the correct output class, but rather the best local minimum for classifying an input data item $d$, defined by the highest corresponding probability:

$$b_d = \arg\max_{m \in \mathcal{M}'} p^m_{c_d} \quad (2)$$

for data item $d$, where $m$ denotes one set of weights. This formulation changes the loss function to

$$L(\tilde{W}; X) = -\frac{1}{N} \sum_{d=1}^{N} \ln p_{b_d}(\tilde{W}; X) + \lambda \|\tilde{W}\|^2, \quad (3)$$

with $\tilde{W}$ representing the weights of network 2. We evaluate the classification predictions of our model using the area under the receiver operating characteristic curve (ROC-AUC) (Fawcett, 2006). The change in loss function also affects how we calculate the AUC for step 4 in Figure 1. In the usual case, the AUC is given as

$$\mathrm{AUC} = \int_0^1 T(P)\, dF(P) \quad (4)$$

with the true positive rate for outcome number one, $T(P)$, and the false positive rate, $F(P)$,

$$T(P) = \frac{\sum_d^{N_{\mathrm{data}}} \delta(c_d - 1)\,\Theta(p_1 - P)}{\sum_d^{N_{\mathrm{data}}} \delta(c_d - 1)}, \qquad F(P) = \frac{\sum_d^{N_{\mathrm{data}}} \left[1 - \delta(c_d - 1)\right]\Theta(p_1 - P)}{\sum_d^{N_{\mathrm{data}}} \left[1 - \delta(c_d - 1)\right]} \quad (5)$$

where $\delta(c_d - 1)$ is the Dirac delta function and $\Theta(p_1 - P)$ the Heaviside step function, defined as

$$\delta(c_d - 1) = \begin{cases} 1 & \text{if } c_d = 1, \\ 0 & \text{if } c_d \neq 1 \end{cases} \qquad \Theta(p_1 - P) = \begin{cases} 1 & \text{if } p_1 \geq P, \\ 0 & \text{if } p_1 < P \end{cases} \quad (6)$$

However, we now have $|\mathcal{M}'|$ possibilities.
Thus, we must evaluate the AUC using the minimum $s_d$ chosen by Classifier 2 from Figure 1:

$$T(P) = \frac{\sum_d^{N_{\mathrm{data}}} \delta(c_d - 1)\,\Theta(p_1^{s_d} - P)}{\sum_d^{N_{\mathrm{data}}} \delta(c_d - 1)}, \qquad F(P) = \frac{\sum_d^{N_{\mathrm{data}}} \left[1 - \delta(c_d - 1)\right]\Theta(p_1^{s_d} - P)}{\sum_d^{N_{\mathrm{data}}} \left[1 - \delta(c_d - 1)\right]} \quad (7)$$

3.2 OPTIMISATION ROUTINE

To survey the loss function landscape we employ methods from the energy landscape approach, which has been widely used to study molecular and condensed matter systems in the physical sciences (Wales, 2003). Specifically, global optimisation is performed using the basin-hopping method (Li & Scheraga, 1987; Wales & Doye, 1997) with a customised quasi-Newton L-BFGS (Nocedal, 1980) optimiser. More information is included in the Appendix. Candidates for transition states are obtained using a doubly-nudged (Trygubenko & Wales, 2004a;b) elastic band (Henkelman & Jónsson, 2000; Henkelman et al., 2000) approach and accurately refined by hybrid eigenvector-following (Munro & Wales, 1999; Zeng et al., 2014). The routines are implemented in the GMIN, OPTIM and PATHSAMPLE programs, which are available under the GNU General Public License.
## Summary
This paper proposes a new ensemble learning method based on neural networks. First, a basin-hopping method is used to sample diverse minima of the empirical risk landscape. Then, a meta-network is trained to predict which minimum performs best for a specific input; this meta-network maps each input sample to the index set of minima. The approach can be viewed as an adaptive ensemble method, in the sense that different models are evaluated for different input samples.
SP:2ded0ffd70950d61fe69e808c3624b2ccac70b58
Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning
Artificial neural systems trained using reinforcement , supervised , and unsupervised learning all acquire internal representations of high dimensional input . To what extent these representations depend on the different learning objectives is largely unknown . Here we compare the representations learned by eight different convolutional neural networks , each with identical ResNet architectures and trained on the same family of egocentric images , but embedded within different learning systems . Specifically , the representations are trained to guide action in a compound reinforcement learning task ; to predict one or a combination of three task-related targets with supervision ; or using one of three different unsupervised objectives . Using representational similarity analysis , we find that the network trained with reinforcement learning differs most from the other networks . Through further analysis using metrics inspired by the neuroscience literature , we find that the model trained with reinforcement learning has a high-dimensional representation wherein individual images are represented with very different patterns of neural activity . These representations seem to arise in order to guide long-term behavior and goal-seeking in the RL agent . Our results provide insights into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches . 1 INTRODUCTION . Many studies in machine learning aim to improve model quality or data efficiency by training the model with multiple objectives or transferring representations trained on other data . While leveraging other data or objectives may lead to higher data efficiency or increased generalization outside the primary task , relying too heavily on tasks other than the primary one can undercut performance if the other tasks are insufficiently well aligned . 
Humans and animals do in fact confront the full complexity of this continual learning problem of building representations, reusing what is worth transferring to new settings as they learn new things. The potential utility of transferring representations is particularly salient in deep reinforcement learning, where rich sensory inputs must be transformed into actions. While landmark developments in deep RL have enabled the end-to-end training of agents capable of leveraging pixel-level visual information in complex environments to guide behavior (Mnih et al., 2015), there is a strong demand for methods that improve the sample efficiency of these training-intensive agents through pre- or joint-training of the representations they rely on. Indeed, several studies aim to accelerate representation learning in vision-based RL through the use of additional objectives (e.g., Jaderberg et al., 2017; Wayne et al., 2018; Schwarzer et al., 2021). In the present work, we examine a specific instance of a high-dimensional embodied control problem, the virtual rodent of Merel et al. (2020), which is also included in the RL Unplugged datasets for offline learning (Gulcehre et al., 2020). This problem involves visuomotor control of a high-dimensional body to solve multiple tasks, and it is solvable using a visuomotor policy trained through deep RL. In particular, we will compare multi-layer visual representations that arise in this model to those resulting from different training objectives on the same architecture. Transfer learning approaches tend to be evaluated empirically, demonstrating through performance comparisons the circumstances under which pretrained or transferred representations add value. While useful and pragmatic, this approach yields limited general insight into new problems.
Instead of focusing on performance comparisons , we will focus on analysis of the representations at multiple layers of the same architecture trained with unsupervised , supervised , and reinforcement learning methods . To perform this analysis , we exploit tools that have been commonly applied in the interpretation of biological neural data . Our hope is to develop intuition for the properties of these different representations in order to support future efforts to develop training procedures . 1.1 RELATED WORK . Transfer learning in the supervised context has been widely applied , studied and reviewed , including within the narrower context of deep learning ( Bengio , 2012 ) . Thus , we limit the scope of our discussion to connections to the most relevant literature on transfer learning in the context of reinforcement learning , as well as literature focusing on the analysis of representations learned . Transfer learning for reinforcement learning Within RL specifically , transfer learning has been of interest since before the recent deep RL boom ( Taylor & Stone , 2009 ; Lazaric , 2012 ) . There have been diverse attempts to learn representations from experience that would support generalization across other tasks , including a focus on predicting future states or rewards ( Rafols et al. , 2005 ; Lehnert et al. , 2020 ) . Relevant to our present setting , Hill et al . ( 2019 ) found specifically that generalization is facilitated by egocentric observations . In addition , transfer learning for reinforcement learning has been one way to operationalize the broader and more natural problem of continual learning , wherein representations must be learned , transferred , reused , and adapted repeatedly over the lifetime of an agent ( Hadsell et al. , 2020 ) . Among more recent deep RL approaches , it is also possible to leverage multiple objectives for learning representations concurrently . 
That is, rather than first learning a representation from previously logged data and then learning a policy from that pretrained representation, it is possible to perform deep RL with auxiliary objectives (Jaderberg et al., 2017), including self-supervised tasks such as predicting past or future experience (Wayne et al., 2018; Schwarzer et al., 2021). Critically, concurrent learning with unsupervised or self-supervised objectives alongside RL offers the advantage that as RL proceeds and the data distribution changes, the auxiliary tasks continue to train on the shifting (and increasingly relevant) data. As contrastive approaches have become popular, they have also been explored as auxiliary losses, such that policies rely on features shaped by both the contrastive and RL objectives (Oord et al., 2018; Laskin et al., 2020). While intuitively it seems reasonable that jointly training a representation with RL and an additional loss may help performance, new results by Stooke et al. (2021) keep alive the possibility that exclusively pre-training a representation with unsupervised objectives could outperform end-to-end RL. Nevertheless, it presently remains far from resolved which objectives used for pretraining or as auxiliary tasks (i.e. concurrent with RL) will work best. Indeed, in one of the larger empirical studies to date, Yang & Nachum (2021) sweep over many pre-training objectives. Their results “suggest that the ideal representation learning objective may depend on the nature of the downstream task, and no single objective appears to dominate generally” (with the caveat for our setting that their environments are relatively simple control environments and they focus on state rather than image observations).
Analysis of neural representations In general , the present state of the literature on transfer of representations for RL leaves us with meaningful leads as to what objectives to try , but still incomplete insight into why various objectives add value or how to anticipate what will work well on a new problem . However , there is precedent both in the machine learning literature as well as biological neural analysis literature for attempting to analyze learned representations . Taskonomy ( Zamir et al. , 2018 ) determined transfer learning performance across the same architecture trained with several supervised and unsupervised tasks . While this work did not compare representations directly , one relevant finding was that autoencoders tended to be an outlier in the revealed task structures . Furthermore , the Taskonomy networks were also used to predict fMRI activity ( Wang et al. , 2019 ) , and it was found that several tasks related to 3D image processing made very similar predictions , while the autoencoder was not highly similar to any other models . In Zhuang et al . ( 2021 ) , several different unsupervised networks and one supervised network are assessed as models of primate visual processing . This work found that contrastive unsupervised methods generally had good transfer performance and predicted primate neural activity well . Note that neural predictivity is only an indirect way of comparing components of network representation ; we perform direct comparisons across different networks , including an RL-trained model ( which is to our knowledge the first time such a full comparison has been done ) . Neural analysis metrics We use representation similarity analysis ( RSA ) to compare representations across networks . RSA has been used extensively in neural network analysis and has connections to many other metrics used in neuroscience ( Kriegeskorte & Wei , 2021 ) . Historically , the concept of sparsity has also been important in neural analysis . 
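In outline, RSA builds a dissimilarity matrix per representation (one row/column per stimulus) and then correlates those matrices across networks. A hedged numpy/scipy sketch of one common choice of metrics — correlation distance for the RDM and Spearman correlation between RDM upper triangles; the paper's exact distance and correlation choices are described in its appendix, and the names here are illustrative:

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(acts):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the activity patterns evoked by each pair of images.
    return 1.0 - np.corrcoef(acts)  # acts: (n_images, n_units)

def rsa(acts_a, acts_b):
    # Compare two networks: Spearman correlation between the upper
    # triangles of their RDMs.
    iu = np.triu_indices(acts_a.shape[0], k=1)
    rho, _ = spearmanr(rdm(acts_a)[iu], rdm(acts_b)[iu])
    return rho

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 4))   # 6 "images", 4 "units"
print(rsa(a, 2.0 * a + 3.0))  # affine changes leave the RDM intact
```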
Sparsity can be defined in several different ways (Willmore & Tolhurst, 2001), but in general sparse representations have been associated with efficient encoding of natural stimulus statistics, effective associative memory, and enhanced downstream classification (Rolls & Treves, 1990; Olshausen & Field, 1996; Babadi & Sompolinsky, 2014; Olshausen & Field, 2004). Another commonly measured property of neural representations is dimensionality, which is used as a way to probe the number of latent variables encoded in a neural population as well as to understand how they are embedded (Jazayeri & Ostojic, 2021). Higher dimensionality is also associated with better downstream classification (Rigotti et al., 2013).

2 METHODS

2.1 MODELS AND TRAINING

Table 1 summarizes the models trained for this study (for all but RLRod, we train three instantiations of each). For more details on these networks and training see Appendix A. The ResNet architecture shared by all models is shown in Figure 1A. The number of feature channels is 16 for the first meta-layer and 32 thereafter. The final layer is a 128-unit fully connected layer. The images used are all 64x64 RGB egocentric images generated from the virtual rodent environments of Merel et al. (2020), built in MuJoCo (Todorov et al., 2012) using dm_control (Tunyasuvunakool et al., 2020). For the RL model, the agent creates its own training images through exploration in the environment (see Appendix A.0.1). Other models were trained using images generated by the agent's exploration; these images, along with some other features of the rodent's state and action output, have been made publicly available and documented for offline RL in Gulcehre et al. (2020). Example images can be seen in Figure 1B.
The rodent engages in one of four possible tasks at a time: bowl escape (the rodent must crawl out of an uneven valley), gaps run (the rodent must run and jump over gaps), two-tap task (the rodent must tap an orb twice with a set amount of time in between), and maze forage (the rodent has to find orbs in a maze structure).

2.2 ANALYSIS METHODS

Activity was recorded from four different layers (marked as R1-4 in Figure 1A) in response to 2048 test images drawn equally from the four tasks. Here we briefly describe the analyses applied, with more detailed descriptions available in Appendix A.

Representational Similarity Analysis RSA was used to determine the extent to which different models represent visual inputs similarly. First, dissimilarity matrices were made for each layer in each network (see Eqn 2; example dissimilarity matrices in Figure 4A). RSA matrices (resulting from two different correlation metrics) indicate how similar these dissimilarity matrices are across networks. Separately, we also performed RSA across layers within networks and included pixel and action spaces.

Sparsity To measure the sparsity of representations in these networks we use a lifetime sparsity metric, which determines the extent to which a neuron responds selectively to different images (Vinje & Gallant, 2000):

$$s = \frac{1 - \frac{1}{n}\left(\frac{\left(\sum_i r_i\right)^2}{\sum_i r_i^2}\right)}{1 - \frac{1}{n}} \quad (1)$$

where $n$ is the number of images and $r_i$ is the response of the neuron to image $i$. A sparsity value of 1 indicates very selective responses, with 0 indicating equal responses to all inputs.

Dimensionality We perform PCA on population activity and then estimate embedding dimensionality using both the participation ratio (see Eqn 3) and the number of PCs needed to reach 85% variance explained.
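The sparsity and dimensionality metrics above fit in a few lines of numpy. This is a sketch rather than the authors' analysis code, and it assumes the participation ratio takes its standard form $\mathrm{PR} = (\sum_i \lambda_i)^2 / \sum_i \lambda_i^2$ over the PCA eigenvalues $\lambda_i$ (the paper's Eqn 3 is not reproduced here):

```python
import numpy as np

def lifetime_sparsity(r):
    # Vinje & Gallant (2000): s = (1 - (sum r)^2/(n*sum r^2)) / (1 - 1/n).
    # s -> 1 for a one-hot response, s -> 0 for a uniform response.
    r = np.asarray(r, dtype=float)
    n = r.size
    return (1.0 - (r.sum() ** 2) / (n * (r ** 2).sum())) / (1.0 - 1.0 / n)

def pca_eigenvalues(acts):
    # acts: (n_samples, n_units); covariance eigenvalues, decreasing order.
    lam = np.linalg.eigvalsh(np.cov(acts - acts.mean(axis=0), rowvar=False))
    return np.clip(lam, 0.0, None)[::-1]

def participation_ratio(acts):
    # PR = (sum lam)^2 / sum lam^2 over the PCA eigenvalue spectrum.
    lam = pca_eigenvalues(acts)
    return lam.sum() ** 2 / (lam ** 2).sum()

def n_pcs_for_variance(acts, frac=0.85):
    # Number of leading PCs needed to reach `frac` of the total variance.
    lam = pca_eigenvalues(acts)
    return int(np.searchsorted(np.cumsum(lam) / lam.sum(), frac) + 1)

acts = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(lifetime_sparsity([1, 0, 0, 0]),  # maximally selective response
      participation_ratio(acts))        # two equal-variance axes
```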
This paper compares the representations learned by otherwise identical networks (up to the output layer) for different tasks: supervised (4 tasks), unsupervised (autoencoders, vanilla and variational), self-supervised predictive coding, untrained (randomly initialized), and one RL policy network. They are all trained on the same images from the RL task, and the supervised tasks are related to the RL task (e.g., what task is this image from). The representations are compared by using relatively standard neuroscience measures: RSA (with two distance measures), a kind of meta-RSA that compares the correlations between the RSA matrices of each network to each other (again with two correlation measures), two sparsity measures, and two measures of dimensionality of the representations. They find that the RL network stands alone, treating images in complex ways, being very uncorrelated to the other networks, being more sparse, and more high dimensional. The unsupervised networks also stand out, although not as much, from the rest. They also investigate transfer learning between the networks, training single-layer networks from the output of the penultimate layer of each network (frozen) on each other network's tasks. I have read the authors' response, and given that they are not responding to individual reviews or changing the paper, I am keeping my score the same.
SP:1e7159e897b786fef0cd494a49191673fde611de
Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning
Artificial neural systems trained using reinforcement , supervised , and unsupervised learning all acquire internal representations of high dimensional input . To what extent these representations depend on the different learning objectives is largely unknown . Here we compare the representations learned by eight different convolutional neural networks , each with identical ResNet architectures and trained on the same family of egocentric images , but embedded within different learning systems . Specifically , the representations are trained to guide action in a compound reinforcement learning task ; to predict one or a combination of three task-related targets with supervision ; or using one of three different unsupervised objectives . Using representational similarity analysis , we find that the network trained with reinforcement learning differs most from the other networks . Through further analysis using metrics inspired by the neuroscience literature , we find that the model trained with reinforcement learning has a high-dimensional representation wherein individual images are represented with very different patterns of neural activity . These representations seem to arise in order to guide long-term behavior and goal-seeking in the RL agent . Our results provide insights into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches . 1 INTRODUCTION . Many studies in machine learning aim to improve model quality or data efficiency by training the model with multiple objectives or transferring representations trained on other data . While leveraging other data or objectives may lead to higher data efficiency or increased generalization outside the primary task , relying too heavily on tasks other than the primary one can undercut performance if the other tasks are insufficiently well aligned . 
Humans and animals do in fact confront the full complexity of this continual learning problem of building representations, reusing what is worth transferring to new settings as they learn new things. The potential utility of transferring representations is particularly salient in deep reinforcement learning, where rich sensory inputs must be transformed into actions. While landmark developments in deep RL have enabled the end-to-end training of agents capable of leveraging pixel-level visual information in complex environments to guide behavior (Mnih et al., 2015), there is a strong demand for methods that improve the sample efficiency of these training-intensive agents through pre- or joint-training of the representations they rely on. Indeed, several studies aim to accelerate representation learning in vision-based RL through the use of additional objectives (e.g., Jaderberg et al., 2017; Wayne et al., 2018; Schwarzer et al., 2021). In the present work, we examine a specific instance of a high-dimensional embodied control problem, the virtual rodent of Merel et al. (2020), which is also included in the RL Unplugged datasets for offline learning (Gulcehre et al., 2020). This problem involves visuomotor control of a high-dimensional body to solve multiple tasks, and it is solvable using a visuomotor policy trained through deep RL. In particular, we will compare multi-layer visual representations that arise in this model to those resulting from different training objectives on the same architecture. Transfer learning approaches tend to be evaluated empirically, demonstrating through performance comparisons the circumstances under which pretrained or transferred representations add value. While useful and pragmatic, this approach yields limited general insight into new problems.
Instead of focusing on performance comparisons , we will focus on analysis of the representations at multiple layers of the same architecture trained with unsupervised , supervised , and reinforcement learning methods . To perform this analysis , we exploit tools that have been commonly applied in the interpretation of biological neural data . Our hope is to develop intuition for the properties of these different representations in order to support future efforts to develop training procedures . 1.1 RELATED WORK . Transfer learning in the supervised context has been widely applied , studied and reviewed , including within the narrower context of deep learning ( Bengio , 2012 ) . Thus , we limit the scope of our discussion to connections to the most relevant literature on transfer learning in the context of reinforcement learning , as well as literature focusing on the analysis of representations learned . Transfer learning for reinforcement learning Within RL specifically , transfer learning has been of interest since before the recent deep RL boom ( Taylor & Stone , 2009 ; Lazaric , 2012 ) . There have been diverse attempts to learn representations from experience that would support generalization across other tasks , including a focus on predicting future states or rewards ( Rafols et al. , 2005 ; Lehnert et al. , 2020 ) . Relevant to our present setting , Hill et al . ( 2019 ) found specifically that generalization is facilitated by egocentric observations . In addition , transfer learning for reinforcement learning has been one way to operationalize the broader and more natural problem of continual learning , wherein representations must be learned , transferred , reused , and adapted repeatedly over the lifetime of an agent ( Hadsell et al. , 2020 ) . Among more recent deep RL approaches , it is also possible to leverage multiple objectives for learning representations concurrently . 
That is, rather than first learning a representation from previously logged data and then learning a policy from that pretrained representation, it is possible to perform deep RL with auxiliary objectives (Jaderberg et al., 2017), including self-supervised tasks such as predicting past or future experience (Wayne et al., 2018; Schwarzer et al., 2021). Critically, concurrent learning with unsupervised or self-supervised objectives as well as RL offers the advantage that as RL proceeds and the data distribution changes, the auxiliary tasks continue to train on the shifting (and increasingly relevant) data. As contrastive approaches have become popular, they have also been explored as auxiliary losses, such that policies rely on features shaped both by the contrastive and RL objectives (Oord et al., 2018; Laskin et al., 2020). While intuitively it seems reasonable that joint training of a representation with RL and an additional loss may help performance, new results by Stooke et al. (2021) keep alive the possibility that exclusively pre-training a representation with unsupervised objectives could outperform end-to-end RL. Nevertheless, it presently remains far from resolved which objectives used for pretraining or as auxiliary tasks (i.e., concurrent with RL) will work best. Indeed, in one of the larger empirical studies to date, Yang & Nachum (2021) sweep over many pre-training objectives. Their results "suggest that the ideal representation learning objective may depend on the nature of the downstream task, and no single objective appears to dominate generally" (with the caveat for our setting that their environments are relatively simple control environments and they focus on state rather than image observations).
Analysis of neural representations In general , the present state of the literature on transfer of representations for RL leaves us with meaningful leads as to what objectives to try , but still incomplete insight into why various objectives add value or how to anticipate what will work well on a new problem . However , there is precedent both in the machine learning literature as well as biological neural analysis literature for attempting to analyze learned representations . Taskonomy ( Zamir et al. , 2018 ) determined transfer learning performance across the same architecture trained with several supervised and unsupervised tasks . While this work did not compare representations directly , one relevant finding was that autoencoders tended to be an outlier in the revealed task structures . Furthermore , the Taskonomy networks were also used to predict fMRI activity ( Wang et al. , 2019 ) , and it was found that several tasks related to 3D image processing made very similar predictions , while the autoencoder was not highly similar to any other models . In Zhuang et al . ( 2021 ) , several different unsupervised networks and one supervised network are assessed as models of primate visual processing . This work found that contrastive unsupervised methods generally had good transfer performance and predicted primate neural activity well . Note that neural predictivity is only an indirect way of comparing components of network representation ; we perform direct comparisons across different networks , including an RL-trained model ( which is to our knowledge the first time such a full comparison has been done ) . Neural analysis metrics We use representation similarity analysis ( RSA ) to compare representations across networks . RSA has been used extensively in neural network analysis and has connections to many other metrics used in neuroscience ( Kriegeskorte & Wei , 2021 ) . Historically , the concept of sparsity has also been important in neural analysis . 
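The RSA comparison described here can be sketched concretely. The following is a minimal illustration, assuming (as one common choice) correlation-distance dissimilarity matrices compared via Spearman correlation of their upper triangles; the paper's exact correlation metrics are detailed in its Appendix A, and the activation arrays below are synthetic stand-ins:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def dissimilarity_matrix(acts):
    # acts: (n_images, n_units) activations. Dissimilarity between two
    # images = 1 - Pearson correlation of their population response vectors.
    return 1.0 - np.corrcoef(acts)

def rsa_score(acts_a, acts_b):
    # Spearman correlation between the upper triangles of the two
    # dissimilarity matrices: high = similar representational geometry.
    da, db = dissimilarity_matrix(acts_a), dissimilarity_matrix(acts_b)
    iu = np.triu_indices_from(da, k=1)
    return spearmanr(da[iu], db[iu])[0]

# Fake activations: 'related' is a noisy linear transform of 'base',
# while 'unrelated' is independent noise, so RSA should rank the
# base/related pair as more similar than the base/unrelated pair.
base = rng.normal(size=(100, 32))
related = base @ rng.normal(size=(32, 32)) + 0.1 * rng.normal(size=(100, 32))
unrelated = rng.normal(size=(100, 32))

print(rsa_score(base, related) > rsa_score(base, unrelated))
```

Because RSA compares dissimilarity structure rather than raw activations, it is insensitive to the number of units and to rotations of the representation, which is what makes it suitable for comparing layers across differently trained networks.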
Sparsity can be defined in several different ways (Willmore & Tolhurst, 2001), but in general sparse representations have been associated with efficient encoding of natural stimulus statistics, effective associative memory, and enhanced downstream classification (Rolls & Treves, 1990; Olshausen & Field, 1996; Babadi & Sompolinsky, 2014; Olshausen & Field, 2004). Another commonly measured property of neural representations is dimensionality, which is used as a way to probe the number of latent variables encoded in a neural population as well as to understand how they are embedded (Jazayeri & Ostojic, 2021). Higher dimensionality is also associated with better downstream classification (Rigotti et al., 2013).

2 METHODS.

2.1 MODELS AND TRAINING.

Table 1 summarizes the models trained for this study (for all but RLRod, we train three instantiations of each). For more details on these networks and their training, see Appendix A. The ResNet architecture shared by all models is shown in Figure 1A. The number of feature channels is 16 for the first meta-layer and 32 thereafter. The final layer is a 128-unit fully connected layer. The images used are all 64x64 RGB egocentric images generated from the virtual rodent environments of Merel et al. (2020), built in MuJoCo (Todorov et al., 2012) using dm_control (Tunyasuvunakool et al., 2020). For the RL model, the agent creates its own training images through exploration in the environment (see Appendix A.0.1). Other models were trained using images generated by the agent's exploration; these images, along with some other features of the rodent's state and action output, have been made publicly available and documented for offline RL in Gulcehre et al. (2020). Example images can be seen in Figure 1B.
The rodent engages in one of four possible tasks at a time: bowl escape (the rodent must crawl out of an uneven valley), gaps run (the rodent must run and jump over gaps), the two-tap task (the rodent must tap an orb twice with a set amount of time in between), and maze forage (the rodent has to find orbs in a maze structure).

2.2 ANALYSIS METHODS.

Activity was recorded from four different layers (marked as R1-4 in Figure 1A) in response to 2048 test images drawn equally from the four tasks. Here we briefly describe the analyses applied, with more detailed descriptions available in Appendix A.

Representational Similarity Analysis RSA was used to determine the extent to which different models represent visual inputs similarly. First, dissimilarity matrices were made for each layer in each network (see Eqn 2; example dissimilarity matrices in Figure 4A). RSA matrices (resulting from two different correlation metrics) indicate how similar these dissimilarity matrices are across networks. Separately, we also performed RSA across layers within networks and included pixel and action spaces.

Sparsity To measure the sparsity of representations in these networks we use a lifetime sparsity metric, which determines the extent to which a neuron responds selectively to different images (Vinje & Gallant, 2000):

$$ s = \frac{1 - \frac{1}{n}\,\frac{\left(\sum_i r_i\right)^2}{\sum_i r_i^2}}{1 - \frac{1}{n}} \qquad (1) $$

where n is the number of images and r_i is the response of the neuron to image i. A sparsity value of 1 indicates very selective responses, with 0 indicating equal responses to all inputs.

Dimensionality We perform PCA on population activity and then estimate embedding dimensionality using both the participation ratio (see Eqn 3) and the number of PCs needed to reach 85% variance explained.
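The lifetime sparsity metric of Eq. (1) and the participation ratio can be computed directly from activation arrays. A minimal numpy sketch (the toy response vectors here are illustrative, not from the paper's data):

```python
import numpy as np

def lifetime_sparsity(responses):
    # Lifetime sparsity of one neuron over n images (Vinje & Gallant, 2000):
    # s = (1 - (1/n) * (sum_i r_i)^2 / sum_i r_i^2) / (1 - 1/n)
    r = np.asarray(responses, dtype=float)
    n = r.size
    return (1.0 - (r.sum() ** 2) / (n * (r ** 2).sum())) / (1.0 - 1.0 / n)

def participation_ratio(acts):
    # Embedding dimensionality via the participation ratio of the PCA
    # eigenvalue spectrum: PR = (sum_i lambda_i)^2 / sum_i lambda_i^2.
    lam = np.linalg.eigvalsh(np.cov(acts.T))  # eigenvalues of covariance
    return lam.sum() ** 2 / (lam ** 2).sum()

one_hot = np.zeros(100); one_hot[0] = 1.0  # responds to a single image
flat = np.ones(100)                        # responds equally to all images
print(lifetime_sparsity(one_hot))          # 1.0 (maximally selective)
print(lifetime_sparsity(flat))             # 0.0 (no selectivity)

# Isotropic 10-d population activity has a participation ratio near 10.
acts = np.random.default_rng(0).normal(size=(500, 10))
print(round(participation_ratio(acts), 1))
```

The participation ratio is convenient because it needs no variance-explained threshold; the paper also reports the 85%-variance PC count as a complementary estimate.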
This work identifies the important problem of transfer learning for problems that require deep learning due to high-dimensional inputs. It notes that transfer learning approaches often cannot transfer very far across learning objectives and are judged empirically as black boxes, making for brittle approaches. It proposes to study internal representations to assess potential for transfer. This study trains networks with the same data and ResNet architecture on several supervised and unsupervised tasks, plus one reinforcement learning task. It uses representational similarity analysis (inter-network similarity of intra-network dissimilarity structure), sparsity analysis (a neuroscience-sourced measure of how selectively individual neurons respond), and dimensionality analysis (PCA) to assess the representations; these metric choices are taken from the neuroscience literature. RSA shows that representations across different tasks diverge with layer depth (except in a few cases, though the paper doesn't call these out clearly); the same is seen within different instantiations of the same task in all but three cases. RLRod differs the most. Sparseness analysis shows that RLRod has the most internal dissimilarity in responses to various inputs, and dimensionality analysis shows that it is also the highest dimensional. The paper speculates that this might be due to a one-image-per-neuron representation emerging even in early layers. To test whether the high dimensionality in RLRod is purely due to a higher-dimensional output (a 38-dimensional action), the paper trains encoders to predict action and finds that the visual inputs are of little use while proprioception is somewhat helpful, indicating the value of proprioceptive data.
The discussion lists some main claims: representations become less similar at later layers, unsupervised objectives can capture some but not all of the representations of RL, the RL-trained visual encoder seems to drive long-term behavioral planning; then suggests the potential for more study to determine promising transfers. It also postulates value for the neuroscience community, since sparse representations are prevalent in biological intelligence and the similarity may be indicative of similar objectives in artificial and biological intelligence.
Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning
Artificial neural systems trained using reinforcement , supervised , and unsupervised learning all acquire internal representations of high dimensional input . To what extent these representations depend on the different learning objectives is largely unknown . Here we compare the representations learned by eight different convolutional neural networks , each with identical ResNet architectures and trained on the same family of egocentric images , but embedded within different learning systems . Specifically , the representations are trained to guide action in a compound reinforcement learning task ; to predict one or a combination of three task-related targets with supervision ; or using one of three different unsupervised objectives . Using representational similarity analysis , we find that the network trained with reinforcement learning differs most from the other networks . Through further analysis using metrics inspired by the neuroscience literature , we find that the model trained with reinforcement learning has a high-dimensional representation wherein individual images are represented with very different patterns of neural activity . These representations seem to arise in order to guide long-term behavior and goal-seeking in the RL agent . Our results provide insights into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches . 1 INTRODUCTION . Many studies in machine learning aim to improve model quality or data efficiency by training the model with multiple objectives or transferring representations trained on other data . While leveraging other data or objectives may lead to higher data efficiency or increased generalization outside the primary task , relying too heavily on tasks other than the primary one can undercut performance if the other tasks are insufficiently well aligned . 
The paper explores how similar the representations learned by supervised, unsupervised, and RL techniques are in an egocentric virtual rodent environment. The main claim is that representations in RL-trained networks are most different from supervised and unsupervised networks, which is tested with an RSA metric on 4 layers of the networks. This is explored further by analyzing the sparsity, dimensionality (RL has much higher dimensionality) and action prediction (proprioception alone without visual features is sufficient here).
Optimal Representations for Covariate Shift
Machine learning systems often experience a distribution shift between training and testing. In this paper, we introduce a simple variational objective whose optima are exactly the set of all representations on which risk minimizers are guaranteed to be robust to any distribution shift that preserves the Bayes predictor, e.g., covariate shifts. Our objective has two components. First, a representation must remain discriminative for the task, i.e., some predictor must be able to simultaneously minimize the source and target risk. Second, the representation's marginal support needs to be the same across source and target. We make this practical by designing self-supervised learning methods that only use unlabelled data and augmentations to train robust representations. Our objectives achieve state-of-the-art results on DomainBed, and give insights into the robustness of recent methods, such as CLIP.

1 INTRODUCTION.

It is hard to build machine learning (ML) systems that are robust to distribution shifts between a source (train) and target (test) domain. One promising approach to domain generalization (DG) is learning robust representations from which predictors trained on the source must perform well on the target. In practice, however, no current DG methods for learning representations uniformly outperform empirical source-risk minimization (ERM) (Gulrajani & Lopez-Paz, 2021). Furthermore, our theoretical understanding of DG is still lacking. Specifically, while previous work has studied properties that would or would not imply robust representations (Ben-David et al., 2007; 2010a; Zhao et al., 2019; Johansson et al., 2019), the minimal set of achievable requirements for perfect DG is not yet known.
We introduce the first , simple , variational objective whose optima are exactly the set of all representations on which source risk minimizers are guaranteed to generalize across distribution shifts that preserve the Bayes predictor . We work in an idealized DG ( IDG ) setting ; we assume that a learner has access to the source population risk . Our variational characterization implies that it is both sufficient and necessary for optimal IDG that a representation : ( a ) remains discriminative for the learning task , i.e. , there must exist predictors from the representation to the labels that can simultaneously minimize both source and target risk ; and ( b ) keeps the support of its marginal distribution invariant to shifts . This means that any optimal representation learning method must seek discriminative information about the target . Even worse , we prove that without access to some knowledge about the target , any representation learning algorithm can not uniformly ( over all target domains ) outperform a constant representation , which may explain why DG methods struggle to outperform ERM . We show , in theory and practice , how to overcome these challenges using only a large set of unlabeled examples and particular data augmentations that retain all discriminative information but minimal domain-specific information . Text descriptions of images are examples of such augmentations , as they are informative for many downstream classification tasks , but they remove a lot of domain-specific information . With such augmentations , we design practical self-supervised learning ( SSL ) objectives for learning robust representations . Our objectives give insights into the robustness of CLIP ( Radford et al. , 2021 ) , and lead to improved CLIP-based representations that achieve state-of-the-art ( SOTA ) results on DomainBed ( Gulrajani & Lopez-Paz , 2021 ) . 
To summarize, we:
• provide minimal sufficient objectives whose optima achieve optimal DG under covariate shift;
• prove that it is impossible to learn useful representations without accessing target information;
• provide practical objectives to learn optimally robust representations using specific augmentations;
• get state-of-the-art results on typical domain generalization benchmarks.

2 BACKGROUND: DOMAIN GENERALIZATION AND REPRESENTATIONS.

We are interested in predictions that are robust across distribution shifts. We formalize this using domain generalization (DG) language. Given a distribution $p_{X,Y|d_s}$ over inputs $x \in X$ and labels $y \in Y$ from the source domain $d_s \in D$, we select a predictor $f: X \to \Gamma$. The predictions $\gamma \in \Gamma$ could for example be labels or distributions over labels. Despite being selected on the source domain, we would like $f$ to achieve a small expected risk with respect to a loss function $\ell: Y \times \Gamma \to \mathbb{R}_{\geq 0}$,

$$ R^d_f[Y|X] := \mathbb{E}_{p_{X,Y|d}}\left[\ell(Y, f(X))\right], \qquad (1) $$

on a distribution $p_{X,Y|d}$ from a target domain $d = d_t \in D$, which is somehow related to $d_s$. A common strategy for DG is to learn robust representations, which splits the problem into two. First, learn an encoder $p_{Z|X}$, which maps inputs $X$ to representations $Z$. Then, learn a predictor $h: Z \to \Gamma$ from representations $Z$ to labels $Y$ using standard risk minimization. The goal is to design a robust representation $Z$, so that predictors $h$ trained to minimize the source risk $R^{d_s}_h[Y|Z]$ also achieve low target risk $R^{d_t}_h[Y|Z]$. Many methods have been proposed to try to learn such $Z$, e.g., by enforcing domain invariance of the marginal $p_{Z|d}$ (e.g., Ganin et al., 2016). Still, many of these proposals are not sound (Zhao et al., 2019; Johansson et al., 2019). Furthermore, they rarely outperform source empirical risk minimization (ERM) in practice (Gulrajani & Lopez-Paz, 2021).

3 OPTIMAL REPRESENTATIONS FOR DOMAIN GENERALIZATION.
To separate domain generalization from finite-sample generalization, we consider an idealized DG (IDG), where the predictor $h$ is selected on the source population risk rather than the empirical risk. We assume the sample spaces $X, Z, Y, D$ are discrete; formal statements and proofs are in Appendices A and B.

3.1 DEFINING OPTIMAL REPRESENTATIONS FOR IDEALIZED DOMAIN GENERALIZATION.

We want to evaluate the quality of a representation $Z$ of $X$. In our IDG, the learner is given a random source $D_s$; she selects any source risk minimizer; and is scored according to her risk on a random target domain $D_t$. To give uniform guarantees while reflecting the uncertainty over the source-target pair $(D_s, D_t)$, we measure the quality of $Z$ as the expected risk of the learner's worst-case choice.

Definition. The idealized domain generalization risk (IDG risk) of an encoder $p_{Z|X}$ is the expected (over domains) worst-case (over source risk minimizers) target risk, i.e.,

$$ R_{\mathrm{IDG}}[Y|Z] := \mathbb{E}_{p_{D_s,D_t}}\left[\sup_{h \in \mathcal{H}^*_{D_s}} R^{D_t}_h[Y|Z]\right] \qquad (2) $$

where $\mathcal{H}^*_{D_s} := \arg\min_h R^{D_s}_h[Y|Z]$ are the source risk minimizers. We call a representation $Z^*$ (or its encoder) optimal for IDG if it minimizes the IDG risk: $p_{Z^*|X} \in \arg\min_{p_{Z|X}} R_{\mathrm{IDG}}[Y|Z]$.

3.2 CHARACTERIZING OPTIMAL REPRESENTATIONS FOR IDG UNDER COVARIATE SHIFT.

The IDG risk is useful to evaluate representations, but gives few insights into IDG and is impractical to optimize due to the supremum in Eq. (2). Under mild assumptions, we provide a simplified, equivalent objective, which is easier to optimize. For convenience, we assume that there is a unique Bayes predictor $f^*$, which minimizes the expected risk over domains, i.e., $f^* = \arg\min_f \mathbb{E}_{p_{D_t}}[R^{D_t}_f[Y|X]]$. This is satisfied by standard ML tasks $p_{Y,X}$ and losses $\ell$. More importantly, we assume the following domain structure, which ensures the existence of optimal encoders and allows our simplification.

Assumptions.
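The worst-case inner term of the IDG risk can be made concrete on a toy discrete problem. The following sketch is a hypothetical example (not from the paper): under 0-1 loss with deterministic labels and one fixed source/target pair, the expectation in Eq. (2) reduces to a single worst-case target risk over source risk minimizers, which we can enumerate exactly:

```python
import itertools

# Toy IDG setup under 0-1 loss: deterministic labels y = x % 2,
# a source domain supported on {0, 1} and a target on {2, 3}.
X = [0, 1, 2, 3]
label = {x: x % 2 for x in X}
source_support = [0, 1]   # domain d_s
target_support = [2, 3]   # domain d_t

def risk(h, enc, support):
    # 0-1 risk of predictor h on representation enc(x), uniform over support.
    return sum(h[enc[x]] != label[x] for x in support) / len(support)

def worst_case_idg_risk(enc):
    # Enumerate all predictors h: Z -> {0, 1}, keep the source-risk
    # minimizers, and report the worst target risk among them (the sup
    # inside Eq. (2) for this fixed source/target pair).
    Z = sorted(set(enc.values()))
    preds = [dict(zip(Z, ys)) for ys in itertools.product([0, 1], repeat=len(Z))]
    best_src = min(risk(h, enc, source_support) for h in preds)
    minimizers = [h for h in preds if risk(h, enc, source_support) == best_src]
    return max(risk(h, enc, target_support) for h in minimizers)

parity_encoder = {x: x % 2 for x in X}   # discriminative, support-matching
identity_encoder = {x: x for x in X}     # target support unseen from source

print(worst_case_idg_risk(parity_encoder))    # 0.0
print(worst_case_idg_risk(identity_encoder))  # 1.0
```

The parity encoder maps both domains onto the same support {0, 1} while keeping the label recoverable, so every source risk minimizer is also target-optimal; the identity encoder leaves the target's representation support unconstrained by source training, and some source risk minimizer is maximally wrong there. This mirrors the support-matching condition characterized in the paper's Theorem 1.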
All domains $d \in D$ we consider are related by the following assumptions:

1. Generalized covariate shift. All domain-specific risk minimizers $f \in \arg\min_f R^d_f[Y|X]$ are equal to the Bayes predictor $f^*$ on their support, i.e., $f(x) = f^*(x)$ for all $x \in \mathrm{supp}(p_{X|d})$.

2. Invariance of Bayes predictions. The set of Bayes predictions is the same for all domains, i.e., $\{f^*(x) \mid x \in \mathrm{supp}(p_{X|d})\} = \{f^*(x) \mid x \in X\}$.

Under review as a conference paper at ICLR 2022

Generalized covariate shift (GCS) ensures that $f^*$ is simultaneously optimal on all domains. For log-loss $\ell$ it recovers standard covariate shift, i.e., $p_{Y|x,d} = p_{Y|x}$. For other losses, GCS is weaker, e.g., it only requires invariance of the most likely labels for 0-1 loss, and of conditional expectations for MSE. Invariance of Bayes predictions is necessary to learn useful predictors using a single domain. For example, for 0-1 loss it ensures that each label is seen at least once in each domain. The intuition behind our objective is that under GCS any source risk minimizer will make optimal predictions on target samples $x$ that also lie in the source. Thus, IDG-optimal representations are exactly those that (a) have the same support in $Z$ for all domains, and (b) retain GCS from $Z$ without sacrificing the ability to predict $Y$, which can be ensured by minimizing the risk from $Z$. See Fig. 1.

Theorem 1.
Under our assumptions , an encoder pZ∗ |X is optimal for IDG if and only if it minimizes the risk R [ Y |Z ] : = infh EpDt [ RDth [ Y |Z ] ] while matching the support of Z across domains , i.e. , pZ∗ |X ∈ arg max pZ |X R [ Y |Z ] s.t . ∀ d ∈ D , supp ( pZ | d ) = supp ( pZ ) ( 3 ) Moreover , such encoders exist and their IDG risk is the Bayes risk RIDG [ Y |Z∗ ] = R [ Y |X ] . Theorem 1 provides an objective to learn representations on which performing risk minimization using a single domain and Z∗ is as good as performing risk minimization on the target domain from inputs X . Other sufficient conditions have previously been hinted towards , e.g. , matching the marginal pZ | d instead of its support ( e.g. , Ben-David et al. , 2010a ) which is the focus of most DG methods ( e.g. , Ganin et al. , 2016 ) . Previous conditions are nevertheless generally neither necessary nor achievable . To our knowledge , Thm . 1 is the first characterization of necessary and sufficient conditions for IDG optimal representations Z∗ . Theorem 1 thus gives better insights into IDG and provides a framework for deriving the least stringent objectives for optimal IDG . The risk minimization ( Eq . ( 3 ) ) shows that one must have some knowledge about the target domains to learn optimal representations for IDG . Access to targets might seem unrealistic , but without such knowledge or additional assumptions it is provably impossible to beat even constant representations . Proposition 1 ( No free lunch for IDG ) . Let ds be any source domain , Zds be any representation chosen on source ds , and C ∈ Z be a constant representation . Under minor assumptions , for every “ good ” target domain outside the source ’ s support on which Zds outperforms C for IDG , there are many “ bad ” target domains on which Zds is strictly worse than C. Formal statement in Appx . B.3 . Proposition 1 shows that target knowledge is necessary for learning useful representations in IDG . 
This may explain why previous DG methods have been unable to outperform ERM in standard benchmarks ( Gulrajani & Lopez-Paz , 2021 ) : the knowledge they have access to is insufficient to generalize . Taken together , Prop . 1 and Thm . 1 say that either you have access to target domains dt , in which case you can achieve an IDG risk that matches supervised learning , or you do not access dt , in which case any representation learning algorithm can achieve worse IDG risk than a constant .
The paper studies representation learning under covariate shift. Under the IDG setting and the proposed assumptions, the paper gives a variational characterization of the optimal representation. This characterization shows that the optimal representation should remain discriminative while having the same support across domains. It is argued that without any target information, no representation can uniformly outperform a constant representation, thus supporting the necessity of target knowledge. The paper derives practical objectives from the proposed variational characterization via self-supervised learning with domain-covering augmentations. The proposed representation learning scheme is tested on several datasets.
Optimal Representations for Covariate Shift
Machine learning systems often experience a distribution shift between training and testing. In this paper, we introduce a simple variational objective whose optima are exactly the set of all representations on which risk minimizers are guaranteed to be robust to any distribution shift that preserves the Bayes predictor, e.g., covariate shifts. Our objective has two components. First, a representation must remain discriminative for the task, i.e., some predictor must be able to simultaneously minimize the source and target risk. Second, the representation's marginal support needs to be the same across source and target. We make this practical by designing self-supervised learning methods that only use unlabelled data and augmentations to train robust representations. Our objectives achieve state-of-the-art results on DomainBed, and give insights into the robustness of recent methods, such as CLIP.

1 INTRODUCTION

It is hard to build machine learning (ML) systems that are robust to distribution shifts between a source (train) and target (test) domain. One promising approach to domain generalization (DG) is learning robust representations from which predictors trained on the source must perform well on the target. In practice, however, no current DG methods for learning representations uniformly outperform empirical source-risk minimization (ERM) (Gulrajani & Lopez-Paz, 2021). Furthermore, our theoretical understanding of DG is still lacking. Specifically, while previous works have studied properties that would or would not imply robust representations (Ben-David et al., 2007; 2010a; Zhao et al., 2019; Johansson et al., 2019), the minimal set of achievable requirements for perfect DG is not yet known.
We introduce the first, simple, variational objective whose optima are exactly the set of all representations on which source risk minimizers are guaranteed to generalize across distribution shifts that preserve the Bayes predictor. We work in an idealized DG (IDG) setting; we assume that a learner has access to the source population risk. Our variational characterization implies that it is both sufficient and necessary for optimal IDG that a representation: (a) remains discriminative for the learning task, i.e., there must exist predictors from the representation to the labels that can simultaneously minimize both the source and target risk; and (b) keeps the support of its marginal distribution invariant to shifts. This means that any optimal representation learning method must seek discriminative information about the target. Even worse, we prove that without access to some knowledge about the target, any representation learning algorithm cannot uniformly (over all target domains) outperform a constant representation, which may explain why DG methods struggle to outperform ERM. We show, in theory and practice, how to overcome these challenges using only a large set of unlabeled examples and particular data augmentations that retain all discriminative information but minimal domain-specific information. Text descriptions of images are examples of such augmentations, as they are informative for many downstream classification tasks, but they remove a lot of domain-specific information. With such augmentations, we design practical self-supervised learning (SSL) objectives for learning robust representations. Our objectives give insights into the robustness of CLIP (Radford et al., 2021), and lead to improved CLIP-based representations that achieve state-of-the-art (SOTA) results on DomainBed (Gulrajani & Lopez-Paz, 2021).
To summarize, we:
• provide minimal sufficient objectives whose optima achieve optimal DG under covariate shift;
• prove that it is impossible to learn useful representations without accessing target information;
• provide practical objectives to learn optimally robust representations using specific augmentations;
• get state-of-the-art results on typical domain generalization benchmarks.

2 BACKGROUND: DOMAIN GENERALIZATION AND REPRESENTATIONS

We are interested in predictions that are robust across distribution shifts. We formalize this using domain generalization (DG) language. Given a distribution p_{X,Y|d_s} over inputs x ∈ X and labels y ∈ Y from the source domain d_s ∈ D, we select a predictor f : X → Γ. The predictions γ ∈ Γ could for example be labels or distributions over labels. Despite being selected on the source domain, we would like f to achieve a small expected risk with respect to a loss function ℓ : Y × Γ → R_{≥0},

    R^d_f[Y|X] := E_{p_{X,Y|d}}[ℓ(Y, f(X))],    (1)

on a distribution p_{X,Y|d} from a target domain d = d_t ∈ D, which is somehow related to d_s. A common strategy for DG is to learn robust representations, which splits the problem into two. First, learn an encoder p_{Z|X}, which maps inputs X to representations Z. Then, learn a predictor h : Z → Γ from representations Z to labels Y using standard risk minimization. The goal is to design a robust representation Z, so that predictors h trained to minimize the source risk R^{d_s}_h[Y|Z] also achieve low target risk R^{d_t}_h[Y|Z]. Many methods have been proposed to try to learn such Z, e.g., by enforcing domain invariance of the marginal p_{Z|d} (e.g., Ganin et al., 2016). Still, many of these proposals are not sound (Zhao et al., 2019; Johansson et al., 2019). Furthermore, they rarely outperform source empirical risk minimization (ERM) in practice (Gulrajani & Lopez-Paz, 2021).

3 OPTIMAL REPRESENTATIONS FOR DOMAIN GENERALIZATION
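Before turning to optimality, the two-stage strategy of Sec. 2 — a (here deterministic) encoder followed by source risk minimization under Eq. (1) — can be sketched in a few lines. This is a toy sketch with 0-1 loss; the encoder, data, and helper names are our own illustrative choices, not the paper's method.

```python
# Minimal sketch of the two-stage DG strategy: an encoder p_Z|X followed by
# source risk minimization with 0-1 loss, then evaluation on a shifted target.
def fit_predictor(encoder, data):
    """Risk minimization from Z: pick the highest-mass label for each code z."""
    votes = {}
    for x, y, p in data:                    # (input, label, probability mass)
        votes.setdefault(encoder(x), {}).setdefault(y, 0.0)
        votes[encoder(x)][y] += p
    return {z: max(ys, key=ys.get) for z, ys in votes.items()}

def risk(encoder, h, data, default=0):
    """Eq. (1) with 0-1 loss; unseen codes fall back to a default label."""
    return sum(p * (h.get(encoder(x), default) != y) for x, y, p in data)

encoder = lambda x: x % 2                   # a hypothetical robust encoder
source = [(0, 0, .5), (1, 1, .5)]
target = [(2, 0, .5), (3, 1, .5)]           # shifted inputs, same Bayes rule
h = fit_predictor(encoder, source)
print(risk(encoder, h, source), risk(encoder, h, target))  # 0.0 0.0
```

The encoder maps the shifted target inputs onto the same codes as the source, so the source-fitted predictor transfers; Sec. 3 formalizes when such robustness is achievable.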
To separate domain generalization from finite sample generalization, we consider an idealized DG (IDG), where the predictor h is selected on the source population risk rather than the empirical risk. We assume the sample spaces X, Z, Y, D are discrete; formal statements and proofs are in Appxs. A and B.

3.1 DEFINING OPTIMAL REPRESENTATIONS FOR IDEALIZED DOMAIN GENERALIZATION

We want to evaluate the quality of a representation Z of X. In our IDG, the learner is given a random source Ds; she selects any source risk minimizer; and is scored according to her risk on a random target domain Dt. To give uniform guarantees while reflecting the uncertainty over the source-target pair (Ds, Dt), we measure the quality of Z as the expected risk of the learner's worst-case choice.

Definition. The idealized domain generalization risk (IDG risk) of an encoder p_{Z|X} is the expected (over domains) worst-case (over source risk minimizers) target risk, i.e.,

    R_{IDG}[Y|Z] := E_{p_{D_s,D_t}} [ sup_{h ∈ H*_{D_s}} R^{D_t}_h[Y|Z] ]    (2)

where H*_{D_s} := arg min_h R^{D_s}_h[Y|Z] are the source risk minimizers. We call a representation Z* (or its encoder) optimal for IDG if it minimizes the IDG risk: p_{Z*|X} ∈ arg min_{p_{Z|X}} R_{IDG}[Y|Z].

3.2 CHARACTERIZING OPTIMAL REPRESENTATIONS FOR IDG UNDER COVARIATE SHIFT

The IDG risk is useful to evaluate representations, but gives few insights into IDG and is impractical to optimize due to the supremum in Eq. (2). Under mild assumptions, we provide a simplified, equivalent objective, which is easier to optimize. For convenience, we assume that there is a unique Bayes predictor f*, which minimizes the expected risk over domains, i.e., f* = arg min_f E_{p_{D_t}}[R^{D_t}_f[Y|X]]. This is satisfied by standard ML tasks p_{Y,X} and losses ℓ. More importantly, we assume the following domain structure, which ensures the existence of optimal encoders and allows our simplification.

Assumptions.
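Because the sample spaces are discrete, Eq. (2) can be evaluated exactly by enumeration. The following sketch is a toy construction of our own (two domains, 0-1 loss, identity encoder Z = X, uniform prior over source-target pairs) that makes the supremum over the source risk minimizers H*_{D_s} concrete:

```python
import itertools

# Toy discrete setup (our own, hypothetical): X = Z = {0, 1, 2}, Y = {0, 1},
# 0-1 loss, identity encoder, uniform prior over (Ds, Dt) pairs.
f_star = {0: 0, 1: 1, 2: 1}                       # Bayes predictor f*
p_x_given_d = {"d1": {0: .5, 1: .5}, "d2": {1: .5, 2: .5}}
domains = list(p_x_given_d)

def risk(h, d):
    """Expected 0-1 risk of a predictor h: Z -> Y on domain d (here Z = X)."""
    return sum(p * (h[x] != f_star[x]) for x, p in p_x_given_d[d].items())

def idg_risk():
    """Eq. (2): expectation over (Ds, Dt) of the worst source minimizer's target risk."""
    hs = [dict(zip(f_star, ys)) for ys in itertools.product([0, 1], repeat=3)]
    total = 0.0
    for ds in domains:
        best = min(risk(h, ds) for h in hs)
        minimizers = [h for h in hs if risk(h, ds) == best]   # H*_Ds
        for dt in domains:
            total += max(risk(h, dt) for h in minimizers)     # sup over H*_Ds
    return total / len(domains) ** 2

print(idg_risk())  # 0.25: the worst minimizer errs on the half of Dt unseen in Ds
```

The non-zero IDG risk comes entirely from inputs that appear in the target but not the source, where source risk minimizers are unconstrained — exactly the gap the support-matching condition of Sec. 3.2 closes.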
All domains d ∈ D we consider are related by the following assumptions:

1. Generalized covariate shift. All domain-specific risk minimizers f ∈ arg min_f R^d_f[Y|X] are equal to the Bayes predictor f* on their support, i.e., f(x) = f*(x) for all x ∈ supp(p_{X|d}).
2. Invariance of Bayes predictions. The set of Bayes predictions is the same for all domains, i.e., {f*(x) | x ∈ supp(p_{X|d})} = {f*(x) | x ∈ X}.

Under review as a conference paper at ICLR 2022

Generalized covariate shift (GCS) ensures that f* is simultaneously optimal on all domains. For log-loss ℓ it recovers standard covariate shift, i.e., p_{Y|x,d} = p_{Y|x}. For other losses, GCS is weaker; e.g., it only requires invariance of the most likely labels for the 0-1 loss, and of conditional expectations for MSE. Invariance of Bayes predictions is necessary to learn useful predictors using a single domain. For example, for the 0-1 loss it ensures that each label is seen at least once in each domain. The intuition behind our objective is that under GCS any source risk minimizer will make optimal predictions on target samples x that are also in the source. Thus, IDG optimal representations are exactly those that (a) have the same support in Z for all domains, and (b) retain GCS from Z without sacrificing the ability to predict Y, which can be ensured by minimizing the risk from Z. See Fig. 1.

Theorem 1.
Under our assumptions, an encoder p_{Z*|X} is optimal for IDG if and only if it minimizes the risk R[Y|Z] := inf_h E_{p_{D_t}}[R^{D_t}_h[Y|Z]] while matching the support of Z across domains, i.e.,

    p_{Z*|X} ∈ arg min_{p_{Z|X}} R[Y|Z]   s.t.   ∀ d ∈ D, supp(p_{Z|d}) = supp(p_Z)    (3)

Moreover, such encoders exist and their IDG risk is the Bayes risk: R_{IDG}[Y|Z*] = R[Y|X].

Theorem 1 provides an objective to learn representations on which performing risk minimization using a single domain and Z* is as good as performing risk minimization on the target domain from the inputs X. Other sufficient conditions have previously been hinted at, e.g., matching the marginal p_{Z|d} instead of its support (e.g., Ben-David et al., 2010a), which is the focus of most DG methods (e.g., Ganin et al., 2016). Previous conditions are nevertheless generally neither necessary nor achievable. To our knowledge, Thm. 1 is the first characterization of necessary and sufficient conditions for IDG optimal representations Z*. Theorem 1 thus gives better insights into IDG and provides a framework for deriving the least stringent objectives for optimal IDG.

The risk minimization in Eq. (3) shows that one must have some knowledge about the target domains to learn optimal representations for IDG. Access to targets might seem unrealistic, but without such knowledge or additional assumptions it is provably impossible to beat even constant representations.

Proposition 1 (No free lunch for IDG). Let ds be any source domain, Z_{ds} be any representation chosen on source ds, and C ∈ Z be a constant representation. Under minor assumptions, for every "good" target domain outside the source's support on which Z_{ds} outperforms C for IDG, there are many "bad" target domains on which Z_{ds} is strictly worse than C. Formal statement in Appx. B.3.

Proposition 1 shows that target knowledge is necessary for learning useful representations in IDG.
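In the discrete setting, the support-matching constraint of Eq. (3) can be tested mechanically. The sketch below (our own toy example: deterministic encoders over discrete inputs; the `merge` encoder is purely illustrative) contrasts an encoder that violates the constraint with one that satisfies it:

```python
# Checking the support constraint of Eq. (3): supp(p_Z|d) must equal supp(p_Z)
# for every domain d. Deterministic toy encoders (our own illustrative example).
def z_support(encoder, xs):
    return {encoder(x) for x in xs}

def matches_support(encoder, supports):
    """supports: one input support supp(p_X|d) per domain."""
    global_supp = set().union(*(z_support(encoder, s) for s in supports))
    return all(z_support(encoder, s) == global_supp for s in supports)

supports = [{0, 1}, {1, 2}]            # supp(p_X|d1), supp(p_X|d2)
identity = lambda x: x                 # keeps domain-specific inputs apart
merge = lambda x: x % 2                # maps inputs 0 and 2 to the same code
print(matches_support(identity, supports))  # False: supports differ per domain
print(matches_support(merge, supports))     # True: every domain covers {0, 1}
```

Matching supports alone is not enough: Thm. 1 also requires minimizing R[Y|Z], so an encoder like `merge` is only admissible when the inputs it collapses share the same Bayes prediction.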
This may explain why previous DG methods have been unable to outperform ERM on standard benchmarks (Gulrajani & Lopez-Paz, 2021): the knowledge they have access to is insufficient to generalize. Taken together, Prop. 1 and Thm. 1 say that either you have access to the target domains dt, in which case you can achieve an IDG risk that matches supervised learning, or you do not have access to dt, in which case any representation learning algorithm can achieve a worse IDG risk than a constant representation.
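The trade-off behind Prop. 1 can be made concrete with a small worst-case calculation. This is our own toy construction under 0-1 loss, not the formal statement of Appx. B.3: an identity representation chosen on the source beats a constant representation on a target inside the source support, yet is strictly worse on a target outside it, because its worst-case source risk minimizer is unconstrained there.

```python
# Toy illustration of Prop. 1 (our own construction, 0-1 loss).
f_star = {0: 0, 1: 1, 2: 0}            # Bayes labels
p_src = {0: 0.7, 1: 0.3}               # imbalanced source over supp {0, 1}

def const_risk(c, target):
    """Constant representation: the only predictors are constant labels c."""
    return sum(p * (c != f_star[x]) for x, p in target.items())

def identity_worst_risk(target):
    """Identity representation: source risk minimizers fix h(x) = f*(x) on the
    source support but are arbitrary elsewhere; the worst one (the sup over
    H*_Ds in Eq. (2)) errs on every unseen input."""
    return sum(p * (0.0 if x in p_src else 1.0) for x, p in target.items())

c_star = min([0, 1], key=lambda c: const_risk(c, p_src))   # unique minimizer: 0
good, bad = {0: .5, 1: .5}, {2: 1.0}                       # two target domains
print(identity_worst_risk(good), const_risk(c_star, good))  # 0.0 0.5: identity wins
print(identity_worst_risk(bad), const_risk(c_star, bad))    # 1.0 0.0: constant wins
```

Without knowing which of `good` or `bad` the target will be, no choice between the two representations is uniformly safe — the content of the no-free-lunch result.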
This work focuses on learning representations for domain generalization. It proposes to minimize the idealized domain generalization risk (IDG risk; defined on page 2). Under several assumptions (importantly, domain-covering augmentations), the IDG risk objective can be relaxed (Thm. 1 -> Prop. 2 -> Eq. (6)) into a more practical one. With additional variational techniques (Sec. 4.2), the proposed algorithms approximately minimize the conditional entropy and domain bottleneck terms in Eq. (6). Experiments on standard benchmarks show that the proposed method can outperform SOTA alternatives.
SP:b6d498d546af2429df5d021ec957ed63170cdc21
Optimal Representations for Covariate Shift
Machine learning systems often experience a distribution shift between training and testing . In this paper , we introduce a simple variational objective whose optima are exactly the set of all representations on which risk minimizers are guaranteed to be robust to any distribution shift that preserves the Bayes predictor , e.g. , covariate shifts . Our objective has two components . First , a representation must remain discriminative for the task , i.e. , some predictor must be able to simultaneously minimize the source and target risk . Second , the representation ’ s marginal support needs to be the same across source and target . We make this practical by designing self-supervised learning methods that only use unlabelled data and augmentations to train robust representations . Our objectives achieve state-of-the-art results on DomainBed , and give insights into the robustness of recent methods , such as CLIP . 1 INTRODUCTION . It is hard to build machine learning ( ML ) systems that are robust to distribution shifts between a source ( train ) and target ( test ) domain . One promising approach to domain generalization ( DG ) is learning robust representations from which predictors trained on source must perform well on target . In practice , however , no current DG methods for learning representation uniformly outperform empirical source-risk minimizers ( ERM ) ( Gulrajani & Lopez-Paz , 2021 ) . Furthermore , our theoretical understanding of DG is still lacking . Specifically , while previous work have studied properties that would or would not imply robust representations ( Ben-David et al. , 2007 ; 2010a ; Zhao et al. , 2019 ; Johansson et al. , 2019 ) , the minimal set of achievable requirements for perfect DG is not yet known . 
We introduce the first , simple , variational objective whose optima are exactly the set of all representations on which source risk minimizers are guaranteed to generalize across distribution shifts that preserve the Bayes predictor . We work in an idealized DG ( IDG ) setting ; we assume that a learner has access to the source population risk . Our variational characterization implies that it is both sufficient and necessary for optimal IDG that a representation : ( a ) remains discriminative for the learning task , i.e. , there must exist predictors from the representation to the labels that can simultaneously minimize both source and target risk ; and ( b ) keeps the support of its marginal distribution invariant to shifts . This means that any optimal representation learning method must seek discriminative information about the target . Even worse , we prove that without access to some knowledge about the target , any representation learning algorithm can not uniformly ( over all target domains ) outperform a constant representation , which may explain why DG methods struggle to outperform ERM . We show , in theory and practice , how to overcome these challenges using only a large set of unlabeled examples and particular data augmentations that retain all discriminative information but minimal domain-specific information . Text descriptions of images are examples of such augmentations , as they are informative for many downstream classification tasks , but they remove a lot of domain-specific information . With such augmentations , we design practical self-supervised learning ( SSL ) objectives for learning robust representations . Our objectives give insights into the robustness of CLIP ( Radford et al. , 2021 ) , and lead to improved CLIP-based representations that achieve state-of-the-art ( SOTA ) results on DomainBed ( Gulrajani & Lopez-Paz , 2021 ) . 
To summarize , we : • provide minimal sufficient objectives whose optima achieve optimal DG under covariate shift ; • prove that it is impossible to learn useful representations without accessing target information ; • provide practical objectives to learn optimally robust representations using specific augmentations ; • get state-of-the-art results on typical domain generalization benchmarks . 2 BACKGROUND : DOMAIN GENERALIZATION AND REPRESENTATIONS . We are interested in predictions that are robust across distribution shifts . We formalize this using domain generalization ( DG ) language . Given a distribution pX , Y | ds over inputs x ∈ X and labels y ∈ Y from the source domain ds ∈ D , we select a predictor f : X → Γ . The predictions γ ∈ Γ could for example be labels or distributions over labels . Despite being selected on the source domain , we would like f to achieve a small expected risk with respect to a loss function ` : Y × Γ→ R≥0 , Rdf [ Y |X ] : = EpX , Y | d [ ` ( Y , f ( X ) ) ] , ( 1 ) on a distribution pX , Y | d from a target domain d = dt ∈ D , which is somehow related to ds . A common strategy for DG is to learn robust representations , which splits the problem into two . First , learn an encoder pZ |X , which maps inputs X to representations Z . Then , learn a predictor h : Z → Γ from representations Z to labels Y using standard risk minimization . The goal is to design a robust representation Z , so that predictors h trained to minimize the source risk Rdsh [ Y |Z ] also achieve low target risk Rdth [ Y |Z ] . Many methods have been proposed to try to learn such Z , e.g. , by enforcing domain invariance of the marginal pZ | d ( e.g. , Ganin et al. , 2016 ) . Still , many of these proposals are not sound ( Zhao et al. , 2019 ; Johansson et al. , 2019 ) . Furthermore , they rarely outperform source empirical risk minimization ( ERM ) in practice ( Gulrajani & Lopez-Paz , 2021 ) . 3 OPTIMAL REPRESENTATIONS FOR DOMAIN GENERALIZATION . 
To separate domain generalization from finite sample generalization , we consider an idealized DG ( IDG ) , where the predictor h is selected on the source population risk rather than empirical risk . We assume sample spaces X , Z , Y , D are discrete ; formal statements and proofs are in Appxs . A and B . 3.1 DEFINING OPTIMAL REPRESENTATIONS FOR IDEALIZED DOMAIN GENERALIZATION . We want to evaluate the quality of a representation Z of X . In our IDG , the learner is given a random source Ds ; she selects any source risk minimizer ; and is scored according to her risk on a random target domain Dt . To give uniform guarantees while reflecting the uncertainty over the source-target pair ( Ds , Dt ) , we measure the quality of Z as the expected risk of the learner ’ s worst-case choice . Definition . The idealized domain generalization risk ( IDG risk ) of an encoder pZ |X is the expected ( over domains ) worst-case ( over source risk minimizers ) target risk , i.e. , RIDG [ Y |Z ] : = EpDs , Dt [ sup h∈H∗Ds RDth [ Y |Z ] ] ( 2 ) whereH∗Ds : = arg minh R Ds h [ Y |Z ] are the source risk minimizers . We call a representation Z∗ ( or its encoder ) optimal for IDG if it minimizes the IDG risk : pZ∗ |X ∈ arg minpZ |X RIDG [ Y |Z ] . 3.2 CHARACTERIZING OPTIMAL REPRESENTATIONS FOR IDG UNDER COVARIATE SHIFT . The IDG risk is useful to evaluate representations , but gives few insights into IDG and is impractical to optimize due to the supremum in Eq . ( 2 ) . Under mild assumptions , we provide a simplified , equivalent objective , which is easier to optimize . For convenience , we assume that there is a unique Bayes predictor f∗ , which minimizes the expected risk over domains , i.e. , f∗ = arg minf EpDt [ R Dt f [ Y |X ] ] . This is satisfied by standard ML tasks pY , X and losses ` . More importantly , we assume the following domain structure , which ensures the existence of optimal encoders and allows our simplification . Assumptions . 
All domains d ∈ D we consider are related by the following assumptions : 1 . Generalized covariate shift . All domain-specific risk minimizers f ∈ arg minf [ Rdf [ Y |X ] ] are equal to the Bayes predictor f∗ on their support , i.e. , f ( x ) = f∗ ( x ) for all x ∈ supp ( pX | d ) . 2 . Invariance of Bayes predictions . The set of Bayes predictions is the same for all domains , i.e. , { f∗ ( x ) |x ∈ supp ( pX | d ) } = { f∗ ( x ) |x ∈ X } . Under review as a conference paper at ICLR 2022 < latexit sha1_base64= '' nv5kgG6Q4MLitfs23A69VllAD+g= '' > AAAB6nicdVC7SgNBFJ2NrxhfUUtBBoOQapldspoUYsDGMkHzkGQJs5NJMmT2wcysEJaUljYWithaW+c77PwGf8JJoqCiBy4czrmXe+71Is6kQujNSC0sLi2vpFcza+sbm1vZ7Z26DGNBaI2EPBRND0vKWUBriilOm5Gg2Pc4bXjDs6nfuKZCsjC4VKOIuj7uB6zHCFZaurg6QZ1sDpnILjoFGyLTdlDJKmniIKt0VICWiWbInb5Mqu83+5NKJ/va7oYk9mmgCMdStiwUKTfBQjHC6TjTjiWNMBniPm1pGmCfSjeZRR3DQ610YS8UugIFZ+r3iQT7Uo58T3f6WA3kb28q/uW1YtUrugkLoljRgMwX9WIOVQind8MuE5QoPtIEE8F0VkgGWGCi9Hcy+glfl8L/Sd02rYLpVFGunAdzpMEeOAB5YIFjUAbnoAJqgIA+uAX34MHgxp3xaDzNW1PG58wu+AHj+QO7hpHM < /latexit > < latexit sha1_base64= '' 8sdK5eqgt6vtuNYDok13axAjw7g= '' > AAAB6nicdVC7SgNBFJ31GeMrainIYBBSLbNxF00hBmwsEzQPSZYwO5lNhsw+mJkVwpLS0sZCEVtr63yHnd/gTzhJFFT0wIXDOfdyz71ezJlUCL0Zc/MLi0vLmZXs6tr6xmZua7suo0QQWiMRj0TTw5JyFtKaYorTZiwoDjxOG97gbOI3rqmQLAov1TCmboB7IfMZwUpLF1cnVieXR6aDrJJTgsgsOqhk25ogC6FDG1ommiJ/+jKuvt/sjSud3Gu7G5EkoKEiHEvZslCs3BQLxQino2w7kTTGZIB7tKVpiAMq3XQadQQPtNKFfiR0hQpO1e8TKQ6kHAae7gyw6svf3kT8y2slyj92UxbGiaIhmS3yEw5VBCd3wy4TlCg+1AQTwXRWSPpYYKL0d7L6CV+Xwv9JvWhatulUUb5cADNkwC7YBwVggSNQBuegAmqAgB64BffgweDGnfFoPM1a54zPmR3wA8bzB632kcI= < /latexit > < latexit sha1_base64= '' NzJ/0OJqTfPqJLiM8d9EWMJJFFU= '' > 
AAAB8HicbVDJSgNBEK1xjXGLy83LYBA8SJgRRY8BLx4jmEWTYejp9CRNunua7h4hjPkKLx5c8OrnePMX9CfsLAdNfFDweK+KqnqRZFQbz/t05uYXFpeWcyv51bX1jc3C1nZNJ6nCpIoTlqhGhDRhVJCqoYaRhlQE8YiRetS7GPr1O6I0TcS16UsScNQRNKYYGSvdyDC7vW+HZhAWil7JG8GdJf6EFMu73teL+JaVsPDRaic45UQYzJDWTd+TJsiQMhQzMsi3Uk0kwj3UIU1LBeJEB9no4IF7YJW2GyfKljDuSP09kSGudZ9HtpMj09XT3lD8z2umJj4PMipkaojA40VxylyTuMPv3TZVBBvWtwRhRe2tLu4ihbCxGeVtCP70y7OkdlzyT0qnVzaNIxgjB3uwD4fgwxmU4RIqUAUMHB7gCZ4d5Tw6r87buHXOmczswB847z9UpJRr < /latexit > < latexit sha1_base64= '' un3pkM+QCvsHMLw7+AWvfpJS2xM= '' > AAAB8HicbVDJSgNBEK1xjXGLy83LYBA8SJgRRY8BLx4jmEWTYejp9CRNunua7h4hjPkKLx5c8OrnePMX9CfsLAdNfFDweK+KqnqRZFQbz/t05uYXFpeWcyv51bX1jc3C1nZNJ6nCpIoTlqhGhDRhVJCqoYaRhlQE8YiRetS7GPr1O6I0TcS16UsScNQRNKYYGSvdyDC7vW+HehAWil7JG8GdJf6EFMu73teL+JaVsPDRaic45UQYzJDWTd+TJsiQMhQzMsi3Uk0kwj3UIU1LBeJEB9no4IF7YJW2GyfKljDuSP09kSGudZ9HtpMj09XT3lD8z2umJj4PMipkaojA40VxylyTuMPv3TZVBBvWtwRhRe2tLu4ihbCxGeVtCP70y7OkdlzyT0qnVzaNIxgjB3uwD4fgwxmU4RIqUAUMHB7gCZ4d5Tw6r87buHXOmczswB847z9TH5Rq < /latexit > Generalized covariate shift ( GCS ) ensures that f∗ is simultaneously optimal on all domains . For log-loss ` it recovers standard covariate shift , i.e. , pY | x , d = pY | x . For other losses , GCS is weaker , e.g. , it only requires invariance of most likely labels for 0-1 loss , and of conditional expectations for MSE . Invariance of Bayes predictors is necessary to learn useful predictors using a single domain . For example , for 0-1 loss it ensures that each label is seen at least once in each domain . The intuition behind our objective is that under GCS any source risk minimizer will make optimal predictions on target samples x that are also in the source . Thus , IDG optimal representations are exactly those that ( a ) have the same support in Z for all domain , and ( b ) retain GCS from Z without sacrificing the ability to predict Y , which can be ensured by minimizing the risk from Z . See Fig . 1 . Theorem 1 . 
Under our assumptions , an encoder pZ∗ |X is optimal for IDG if and only if it minimizes the risk R [ Y |Z ] : = infh EpDt [ RDth [ Y |Z ] ] while matching the support of Z across domains , i.e. , pZ∗ |X ∈ arg max pZ |X R [ Y |Z ] s.t . ∀ d ∈ D , supp ( pZ | d ) = supp ( pZ ) ( 3 ) Moreover , such encoders exist and their IDG risk is the Bayes risk RIDG [ Y |Z∗ ] = R [ Y |X ] . Theorem 1 provides an objective to learn representations on which performing risk minimization using a single domain and Z∗ is as good as performing risk minimization on the target domain from inputs X . Other sufficient conditions have previously been hinted towards , e.g. , matching the marginal pZ | d instead of its support ( e.g. , Ben-David et al. , 2010a ) which is the focus of most DG methods ( e.g. , Ganin et al. , 2016 ) . Previous conditions are nevertheless generally neither necessary nor achievable . To our knowledge , Thm . 1 is the first characterization of necessary and sufficient conditions for IDG optimal representations Z∗ . Theorem 1 thus gives better insights into IDG and provides a framework for deriving the least stringent objectives for optimal IDG . The risk minimization ( Eq . ( 3 ) ) shows that one must have some knowledge about the target domains to learn optimal representations for IDG . Access to targets might seem unrealistic , but without such knowledge or additional assumptions it is provably impossible to beat even constant representations . Proposition 1 ( No free lunch for IDG ) . Let ds be any source domain , Zds be any representation chosen on source ds , and C ∈ Z be a constant representation . Under minor assumptions , for every “ good ” target domain outside the source ’ s support on which Zds outperforms C for IDG , there are many “ bad ” target domains on which Zds is strictly worse than C. Formal statement in Appx . B.3 . Proposition 1 shows that target knowledge is necessary for learning useful representations in IDG . 
This may explain why previous DG methods have been unable to outperform ERM on standard benchmarks (Gulrajani & Lopez-Paz, 2021): the knowledge they have access to is insufficient to generalize. Taken together, Prop. 1 and Thm. 1 say that either you have access to target domains $d_t$, in which case you can achieve an IDG risk that matches supervised learning, or you do not have access to $d_t$, in which case any representation learning algorithm can end up with worse IDG risk than a constant representation.
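The loss-specific instances of generalized covariate shift mentioned earlier (log-loss, 0-1 loss, MSE) can be written out explicitly. The following is a sketch in notation consistent with the paper; the precise quantification over x (e.g., restriction to the source support) is an assumption on my part:

```latex
% GCS = invariance of the loss's Bayes predictor across domains d.
% Log-loss: the full conditional must be invariant (standard covariate shift):
p_{Y \mid x, d} = p_{Y \mid x}
% 0-1 loss: only the most likely label must be invariant:
\arg\max_{y} \, p(y \mid x, d) = \arg\max_{y} \, p(y \mid x)
% Squared error: only the conditional expectation must be invariant:
\mathbb{E}[Y \mid x, d] = \mathbb{E}[Y \mid x]
```

Each condition is strictly weaker than the one above it, which is why GCS under 0-1 loss or MSE admits more domains than standard covariate shift.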
At a high level, this work considers the problem of designing machine learning systems that generalize well even when the "domain" (i.e., the data-generating process) under which the system is tested (the "target") does not match that under which it was trained (the "source"). More specifically, the authors look to answer the question of what makes a "good" representation or encoding (a stochastic transformation of the original features), from the perspective of achieving a small risk (expected loss) on the target domain, assuming the learner only has data (and thus representations) from the source domain. To this question, they provide one answer by showing that in a generalized covariate shift scenario, a natural notion of representation optimality (their "IDG optimality") is attained only if a representation minimizes the (best) risk on the average target distribution, while ensuring that the support of the representations is constant across domains. This is obviously a strong requirement, and they show that one cannot expect to satisfy it without domain knowledge going beyond the source. They also show that there is some hope for special cases of domains in which we have augmentations that preserve label information and "cover" the support of the input distribution across all domains. In such a case, the authors show that their strong optimality requirement can be satisfied by designing the encoder to maximize the mutual information with the augmented data. Relaxing this objective leads to objectives that are more practical, and the authors complement their theoretical analysis with a rather in-depth set of empirical tests that evaluate the efficacy of their methodology for representation learning.
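The mutual-information-with-augmentations objective mentioned in this summary is, in practice, usually approximated with a contrastive lower bound such as InfoNCE. The sketch below is a minimal, self-contained illustration of that bound, not the paper's actual training procedure; the random embeddings stand in for encoder outputs:

```python
import numpy as np

def info_nce(z, z_aug, temperature=0.1):
    """InfoNCE lower bound on I(Z; Z_aug): each embedding should match the
    embedding of its own augmentation against all others in the batch."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    logits = z @ z_aug.T / temperature           # (n, n) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: example i pairs with augmentation i
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # matched pairs
loss_rand = info_nce(z, rng.normal(size=(8, 16)))        # unrelated pairs
```

Matched pairs yield a much smaller contrastive loss than unrelated ones, which is what makes minimizing this loss a proxy for maximizing mutual information with the augmentations.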
SP:b6d498d546af2429df5d021ec957ed63170cdc21
Goal Randomization for Playing Text-based Games without a Reward Function
1 INTRODUCTION. Text-based games are complex, interactive simulations in which the game state is described with text and players act using simple text commands (e.g., take sandwich from table, eat sandwich, open door, etc.) (Côté et al., 2018). They serve as a proxy for studying how agents can exploit language to comprehend and interact with the environment. Text-based games are a useful challenge in the pursuit of intelligent agents that communicate with humans (e.g., in customer service systems). Inspired by this, one of the long-term goals in AI is to build agents that can learn to accomplish tasks with language. In the domain of text-based games, the key challenge is to decipher the long textual observations, extract reward cues from them, and generate semantically rich representations such that the policy learned on top of them is well informed. Most existing works learn to model text representations during RL training (Hausknecht et al., 2020; Ammanabrolu & Hausknecht, 2020). Some works also study generalizability across games with different difficulty levels or layouts (Ammanabrolu & Riedl, 2019a; Adolphs & Hofmann, 2020). Deep reinforcement learning (RL) has been demonstrated to effectively solve reward-driven problems in various tasks (Mnih et al., 2013; Silver et al., 2016; Schulman et al., 2017). In contrast, humans are able to learn new skills with little or no reward by using various forms of intrinsic motivation. Playing text-based games without a reward function is an exceedingly challenging problem. We consider the setting where reward functions are unknown, so we want to learn an agent that can drive itself without environmental rewards. Learning agents without reward has several practical applications (Eysenbach et al., 2018). Environments with sparse rewards effectively have no reward until the agent randomly reaches a goal state.
Learning intelligent agents without supervision may help address challenges in exploration in these environments (Gupta et al., 2018; Sharma et al., 2019). In many practical settings, interacting with the environment is essentially free, but evaluating the reward requires human feedback (Christiano et al., 2017). However, to our knowledge, there is no prior work on playing text-based games without reward functions. Solving such reward-free tasks can encourage agents to explore, experiment, and invent. Sometimes, as in many games and fantasies, learning proceeds without any direct link to reality or to any source of extrinsic reward; enabling such learning is crucial in real-world environments, even for humans (Schulz, 2012). In this paper, we take a step towards agents that can learn from text-based games without reward functions. The environments are designed to mimic real-world scenarios in which there are no reward cues to guide agents. The goal is for an agent to learn a single policy that can solve both the tasks it was trained on and a variety of unseen tasks similar to the training tasks. To do this, we propose a new method for learning an agent with deep RL in the absence of any rewards. We use a set of available goals characterized by natural language via common-sense rules, and select one of them according to the current knowledge-graph-based observation. Then, we learn policies for goal-conditioned reinforcement learning. Specifically, our method works as follows. Whenever an agent builds a knowledge graph of the textual world, our method gives the agent a goal in natural language based on the knowledge graph. The agent takes the goal to form a new experience with a corresponding intrinsic reward, alleviating the no-reward problem. For example, our method can describe what the agent has achieved in the episode, and the agent can use goals as advice to obtain intrinsic rewards.
In addition, our method provides a time limit for each goal. If the agent cannot accomplish a goal within the time limit, a new goal replaces the old one, giving the agent the opportunity to escape goals that are too difficult. We show many benefits brought by the language goal representation when combined with goal advice. The agent can efficiently solve reinforcement learning problems in challenging text-based environments; it can generalize to unseen instructions, and even to instructions with unseen lexicons. Our method can be viewed as a form of intrinsic motivation for any agent trained with policy gradient-based methods (Oudeyer & Kaplan, 2009; Barto, 2013). Most intrinsic methods design intrinsic motivations to complement environmental rewards so that agents learn efficiently. In contrast, our method uses intrinsic motivation to play text-based games without environmental rewards. Under this view, we also need to encode the intrinsic motivation into the policy. That is, the original policy network becomes a goal-conditional policy; the goal advice can then be seen as a "bolt-on" to the original policy network. Because we use natural language for self-advice, the method is flexible and can be used with a variety of RL model architectures and training settings by encoding the intrinsic motivation. In summary, we make the following contributions: (i) we are, to our knowledge, the first to study the problem of playing text-based games without any reward functions, and we propose a new goal randomization method for solving it;1 (ii) we show, through common-sense knowledge, that agents trained with goal randomization gradually learn to interact with the environment and solve tasks that are difficult for state-of-the-art methods; (iii) we perform an extensive qualitative analysis and ablation study, and we also find the interesting result that our method works better than GATA (Adhikari et al., 2020), a state-of-the-art agent that uses environment rewards, on some text-based games. 2 RELATED WORK. Reinforcement learning for text-based games. Existing agents either act based on predefined rules or learn to respond by interacting with the environment. Rule-based agents (Atkinson et al., 2019; Fulda et al., 2017; Hausknecht et al., 2019; Kostka et al., 2017) attempt to solve text-based games by injecting heuristics. They are thus not flexible, since a huge amount of prior knowledge is required to design rules (Hausknecht et al., 2020). Learning-based agents (Adolphs & Hofmann, 2020; Hausknecht et al., 2020; He et al., 2016; Jain et al., 2020; Narasimhan et al., 2015; Yin & May, 2019; Yuan et al., 2018; Zahavy et al., 2018) usually employ deep reinforcement learning algorithms to deliver adaptive game-solving strategies. KG-based agents have been developed to enhance the performance of learning-based agents with the assistance of KGs. KGs can be constructed by simple rules, which substantially reduces the amount of prior knowledge required by rule-based agents. KGs have been leveraged to handle partial observability (Ammanabrolu & Hausknecht, 2020; Ammanabrolu & Riedl, 2019a; Zelinka et al., 2019), reduce the action space (Ammanabrolu & Hausknecht, 2020; Ammanabrolu & Riedl, 2019a), and improve generalizability (Adhikari et al., 2020; Ammanabrolu & Riedl, 2019b). Recently, Murugesan et al. (2020) introduced commonsense reasoning for playing synthetic games. While these works all follow the standard setting with reward functions, ours is the first to train an agent without reward functions. 1 Code can be found here: https://anonymous.4open.science/r/goalrand-E167/ Intrinsic motivation for reinforcement learning. Intrinsic motivation has been widely used in reinforcement learning (Oudeyer et al., 2007; Oudeyer & Kaplan, 2009; Barto, 2013).
It has been proven effective for solving various hard-exploration tasks (Bellemare et al., 2016; Pathak et al., 2017; Burda et al., 2018b). One prominent formulation is the use of novelty, which in its simplest form can be estimated with state visitation counts (Strehl & Littman, 2008) and has been extended to high-dimensional state spaces (Bellemare et al., 2016; Burda et al., 2018b; Ostrovski et al., 2017). Other, more sophisticated versions of curiosity (Schmidhuber, 1991) guide the agent to learn about environment dynamics by encouraging it to take actions that reduce the agent's uncertainty (Stadie et al., 2015; Burda et al., 2018b), have unpredictable consequences (Pathak et al., 2017; Burda et al., 2018a), or have a large impact on the environment (Raileanu & Rocktäschel, 2020). Other forms of intrinsic motivation include empowerment (Klyubin et al., 2005), which encourages control of the environment by the agent, and goal diversity (Pong et al., 2019), which encourages maximizing the entropy of the goal distribution. Intrinsic goals can also be discovered from language supervision (Lair et al., 2019). Beyond exploration, intrinsic motivation has also been used for other problems, such as evolutionary and learned reward design (Singh et al., 2009; Sorg et al., 2010; Zheng et al., 2018). Most works use intrinsic motivation as an additive bonus on top of environmental rewards. Our work differs by using intrinsic motivation to train an agent directly for text-based games. Generalization in text-based games. Generalization is a challenging problem for reinforcement learning (Tobin et al., 2017; Agarwal et al., 2019). In text-based games, it is difficult to study generalization in games initially designed for human players (Hausknecht et al., 2020), as they are so challenging that existing RL agents are still far from able to solve a large proportion of them even in the single-game setting (Yao et al.
, 2020). Furthermore, these games usually have different themes, vocabularies, and logic, making it hard to determine the domain gap (Ammanabrolu & Riedl, 2019b). Compared with these man-made games, synthetic games (Côté et al., 2018; Urbanek et al., 2019) provide a more natural way to study generalization by generating multiple similar games with customizable domain gaps (e.g., by varying game layouts). Generally, the training and testing game sets in previous works have either the same difficulty level (Ammanabrolu & Riedl, 2019a; Murugesan et al., 2021), a mixture of multiple levels (Adolphs & Hofmann, 2020; Yin et al., 2020), or both (Adhikari et al., 2020). Our method can also be used for generalization across games. Moreover, our work does not use the reward functions of environments to train the agent. 3 PRELIMINARIES. Text-based games as POMDPs. Text-based games can be formally described as Partially Observable Markov Decision Processes (POMDPs). A POMDP is defined as a tuple $\langle S, A, T, R, \Omega, O, \gamma \rangle$: the state set $S$, the action set $A$, the state transition probabilities $T$, the reward function $R$, the observation set $\Omega$, the conditional observation probabilities $O$, and the discount factor $\gamma \in (0, 1]$. At each time step, the agent receives a textual observation $o_t \in \Omega$, depending on the current state and previous action via the conditional observation probability $O(o_t \mid s_t, a_{t-1})$. By executing an action $a_t \in A$, the environment transitions into a new state according to the state transition probability $T(s_{t+1} \mid s_t, a_t)$, and the agent receives the reward $r_{t+1} = R(s_t, a_t)$. As in Markov Decision Processes (MDPs), the goal of the agent is to learn an optimal policy $\pi^*$ maximizing the expected future discounted sum of rewards from each time step: $R_t = \mathbb{E}\big[\sum_{k=0}^{\infty} \gamma^k r_{t+k+1}\big]$. Knowledge graph for text-based games.
Graph-based representations are effective for text-based games because the state in these games adheres to a graph-like structure. The content of most observations of the environment corresponds either to entity attributes or to relational information about entities in the environment. A knowledge graph (KG) for a text-based game can be built from a set of triplets ⟨Subject, Relation, Object⟩, denoting that the Subject has Relation with the Object; for example, ⟨Kitchen, Has, Food⟩. The KG is denoted as G = (V, E), where V and E are the node set and the edge set, respectively. Both Subject and Object belong to the node set V; the Relation, which corresponds to the edge connecting them, belongs to E.
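The pipeline described in this paper (build a KG from triplets, derive candidate natural-language goals, sample one at random, and grant an intrinsic reward only if it is satisfied within a time limit) can be sketched as follows. The goal template and the `goal_satisfied` check are simplified illustrations, not the paper's actual common-sense rules:

```python
import random

def build_kg(triplets):
    """Represent the KG as a set of (subject, relation, object) triplets."""
    return set(triplets)

def candidate_goals(kg):
    """Derive natural-language goals from the current KG via a toy template."""
    return [f"take {obj} from {subj}" for subj, rel, obj in kg if rel == "Has"]

def goal_satisfied(goal, kg):
    """Toy check: the goal's object is now held by the player."""
    obj = goal.split()[1]
    return ("Player", "Has", obj) in kg

def intrinsic_reward(goal, kg, steps_used, time_limit=5):
    """1 if the sampled goal is achieved within the time limit, else 0;
    on failure the caller resamples a new goal (goal randomization)."""
    return 1.0 if goal_satisfied(goal, kg) and steps_used <= time_limit else 0.0

# usage: sample a goal uniformly, then simulate the agent taking the object
random.seed(0)
kg = build_kg([("Kitchen", "Has", "Food"), ("Table", "Has", "Sandwich")])
goal = random.choice(candidate_goals(kg))
obj = goal.split()[1]
kg.add(("Player", "Has", obj))                     # agent picks up the object
reward = intrinsic_reward(goal, kg, steps_used=3)  # within the limit -> 1.0
late = intrinsic_reward(goal, kg, steps_used=9)    # over the limit   -> 0.0
```

In the actual method the intrinsic reward replaces the environment reward entirely, and the sampled goal text is additionally fed to the policy network to make it goal-conditional.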
This paper introduces GoalRand, an algorithm for playing text-based games in the absence of extrinsic reward. In particular, it uses generated goals to train a goal-conditioned reinforcement learning agent. These goals are generated by extracting factual information from the agent's knowledge graph. Given a goal sampled from the set of possible goals, as defined by the current knowledge graph, the agent is given an intrinsic reward for completing this goal within a specified time limit of 5 steps. Goals are randomly sampled from the set of possible goals. The paper contributes this goal generation methodology, which allows the agent to be trained without access to extrinsic rewards. Additionally, experiments are conducted on TextWorld's cooking games, comparing against an intrinsic motivation baseline (BeBold) as well as GATA, a previously high-performing agent on these games. The scores show higher performance from GoalRand compared to the baselines across a set of held-out layouts and levels.
SP:9b1286437860daa22f9fa88d1e04fed46d0784f8
The authors propose a simple intrinsic reward for text-based games, in the absence of environment rewards. The method, GoalRand, is based on uniformly sampling random goals from a set of natural language goals generated using common-sense rules. The authors evaluate their method on both seen and unseen text-based games and find that their method outperforms a random agent and a count-based intrinsically motivated agent.
Goal Randomization for Playing Text-based Games without a Reward Function
1 INTRODUCTION . Text-based games are complex , interactive simulations in which the game state is described with text and players act using simple text commands ( e.g. , take sandwich from table , eat sandwich , open door , etc . ) ( Côté et al. , 2018 ) . They serve as a proxy for studying how agents can exploit language to comprehend and interact with the environment . Text-based games are a useful challenge in the pursuit of intelligent agents that communicate with humans ( e.g. , in customer service systems ) . Inspired by this , one of the long term goals in AI is to build agents that can learn to accomplish tasks with language . In the domain of text-based games , the key challenge is to decipher the long textual observations , extract reward cues from them , and generate semantically rich representations such that the policy learned on top of it is well informed . Most of the existing works learn to model text representations during the RL training ( Hausknecht et al. , 2020 ; Ammanabrolu & Hausknecht , 2020 ) . Some works also study generalizability of games with different difficulty levels or layouts ( Ammanabrolu & Riedl , 2019a ; Adolphs & Hofmann , 2020 ) . Deep reinforcement learning ( RL ) has been demonstrated to effectively learn to solve reward-driven problems in various tasks ( Mnih et al. , 2013 ; Silver et al. , 2016 ; Schulman et al. , 2017 ) . In contrast , humans are able to learn new skills with little or no reward by using various forms of intrinsic motivation . Playing text-based games without a reward function is an exceedingly challenging problem . We consider the setting where reward functions are unknown , so we want to learn an agent that can drive itself without environmental rewards . Learning agents without reward has several practical applications ( Eysenbach et al. , 2018 ) . Environments with sparse rewards effectively have no reward until the agent randomly reaches a goal state . 
Learning intelligent agents without supervision may help address challenges in exploration in these environments ( Gupta et al. , 2018 ; Sharma et al. , 2019 ) . In many practical settings , interacting with the environment is essentially free , but evaluating the reward requires human feedback ( Christiano et al. , 2017 ) . However , there is no work on playing text-based games without reward functions . Solving such kinds of non-reward tasks can encourage agents to explore , experiment , and invent . Sometimes , as in many games and fantasies , without any direct link to reality or to any source of extrinsic reward , it is crucial to enable learning in real-world environments , even for humans ( Schulz , 2012 ) . In this paper , we take a step towards agents that can learn from text-based games without reward functions . The environments are designed to mimic some real-world scenarios where there are no reward cues for guiding agents . The goal is for an agent to be able to learn one policy that is able to solve both tasks it was trained on as well as a variety of unseen tasks which contain similar tasks as the training tasks . To do this , we propose a new method for learning an agent with deep RL in the absence of any rewards . We use a set of available goals characterized by natural language via common-sense rules , and select one of them according to the current knowledge graph based observation . Then , we learn policies for goal-conditioned reinforcement learning . Specifically , our method works as follows . Whenever an agent builds a knowledge graph of the textural world , our method gives a goal in natural language to the agent based on the knowledge graph . The agent takes the goal to form a new experience with a corresponding intrinsic reward , alleviating the no reward problem . For example , our method can describe what the agent has achieved in the episode , and the agent can use goals as advice to obtain intrinsic rewards . 
In addition , our method also provides a time limit for each goal . If the agent can not accomplish the goal within the time limit , it replaces the old goal with a new one , which gives it the opportunity to escape goals that are too difficult . We show that the language-based goal representation brings many benefits when combined with goal advice . The agent can efficiently solve reinforcement learning problems in challenging text-based environments ; it can generalize to unseen instructions , and even to instructions with unseen lexicons . Our method can be viewed as a form of intrinsic motivation for any agent trained with policy gradient-based methods ( Oudeyer & Kaplan , 2009 ; Barto , 2013 ) . Most intrinsic-motivation methods design intrinsic rewards to complement environmental rewards so that agents learn efficiently . In contrast , our method uses intrinsic motivation to play text-based games without environmental rewards . Under this view , we also need to encode the intrinsic motivation into the policy . That is , the original policy network becomes a goal-conditional policy ; the goal advice can then be seen as a “ bolt-on ” to the original policy network . Because we use natural language for self-advice , the method is flexible and can be applied to a variety of RL model architectures and training settings by encoding the intrinsic motivation . In summary , we make the following contributions : ( i ) we are the first to study the problem of playing text-based games without any reward functions and propose a new goal randomization method for solving it ; 1 ( ii ) we show , through common-sense knowledge , that agents trained with goal randomization gradually learn to interact with the environment and solve tasks that are difficult for state-of-the-art methods ; ( iii ) we perform an extensive qualitative analysis and ablation study , and we also find that our method works better than GATA , a state-of-the-art agent ( Adhikari et al. 
, 2020 ) , which uses environment rewards , for some text-based games . 2 RELATED WORK . Reinforcement learning for text-based games . Existing agents either act based on predefined rules or learn to respond by interacting with the environment . Rule-based agents ( Atkinson et al. , 2019 ; Fulda et al. , 2017 ; Hausknecht et al. , 2019 ; Kostka et al. , 2017 ) attempt to solve text-based games by injecting heuristics . They are thus not flexible , since a huge amount of prior knowledge is required to design rules ( Hausknecht et al. , 2020 ) . Learning-based agents ( Adolphs & Hofmann , 2020 ; Hausknecht et al. , 2020 ; He et al. , 2016 ; Jain et al. , 2020 ; Narasimhan et al. , 2015 ; Yin & May , 2019 ; Yuan et al. , 2018 ; Zahavy et al. , 2018 ) usually employ deep reinforcement learning algorithms to deliver adaptive game-solving strategies . KG-based agents have been developed to enhance the performance of learning-based agents with the assistance of KGs . KGs can be constructed with simple rules , which substantially reduces the amount of prior knowledge required by rule-based agents . KGs have been leveraged to handle partial observability ( Ammanabrolu & Hausknecht , 2020 ; Ammanabrolu & Riedl , 2019a ; Zelinka et al. , 2019 ) , reduce the action space ( Ammanabrolu & Hausknecht , 2020 ; Ammanabrolu & Riedl , 2019a ) , and improve generalizability ( Adhikari et al. , 2020 ; Ammanabrolu & Riedl , 2019b ) . Recently , Murugesan et al . ( 2020 ) introduced commonsense reasoning for playing synthetic games . While these works all follow the standard setting with reward functions , our work is the first that trains an agent without reward functions . 1Code can be found here : https://anonymous.4open.science/r/goalrand-E167/ Intrinsic motivation for reinforcement learning . Intrinsic motivation has been widely used in reinforcement learning ( Oudeyer et al. , 2007 ; Oudeyer & Kaplan , 2009 ; Barto , 2013 ) . 
It has been proven effective for solving various hard-exploration tasks ( Bellemare et al. , 2016 ; Pathak et al. , 2017 ; Burda et al. , 2018b ) . One prominent formulation is the use of novelty , which in its simplest form can be estimated with state visitation counts ( Strehl & Littman , 2008 ) and has been extended to high-dimensional state spaces ( Bellemare et al. , 2016 ; Burda et al. , 2018b ; Ostrovski et al. , 2017 ) . Other , more sophisticated versions of curiosity ( Schmidhuber , 1991 ) guide the agent to learn about environment dynamics by encouraging it to take actions that reduce the agent ’ s uncertainty ( Stadie et al. , 2015 ; Burda et al. , 2018b ) , have unpredictable consequences ( Pathak et al. , 2017 ; Burda et al. , 2018a ) , or have a large impact on the environment ( Raileanu & Rocktäschel , 2020 ) . Other forms of intrinsic motivation include empowerment ( Klyubin et al. , 2005 ) , which encourages control of the environment by the agent , and goal diversity ( Pong et al. , 2019 ) , which encourages maximizing the entropy of the goal distribution . Intrinsic goals can also be discovered from language supervision ( Lair et al. , 2019 ) . Beyond exploration , intrinsic motivation has also been applied to other problems , such as evolutionary approaches ( Singh et al. , 2009 ; Sorg et al. , 2010 ; Zheng et al. , 2018 ) . Most works use intrinsic motivation as rewards added on top of environmental rewards . Our work differs from those works by using intrinsic motivation to directly train an agent for text-based games . Generalization in text-based games . Generalization is a challenging problem for reinforcement learning ( Tobin et al. , 2017 ; Agarwal et al. , 2019 ) . In text-based games , it is difficult to study generalization in games initially designed for human players ( Hausknecht et al. , 2020 ) , as they are so challenging that existing RL agents are still far from being able to solve a large proportion of them even under the single-game setting ( Yao et al. 
, 2020 ) . Furthermore , these games usually have different themes , vocabularies and logics , making it hard to determine the domain gap ( Ammanabrolu & Riedl , 2019b ) . Compared with these man-made games , synthetic games ( Côté et al. , 2018 ; Urbanek et al. , 2019 ) provide a more natural way to study generalization by generating multiple similar games with customizable domain gaps ( e.g. , by varying game layouts ) . Generally , the training and testing game sets in previous works have either the same difficulty level ( Ammanabrolu & Riedl , 2019a ; Murugesan et al. , 2021 ) , a mixture of multiple levels ( Adolphs & Hofmann , 2020 ; Yin et al. , 2020 ) , or both ( Adhikari et al. , 2020 ) . Our method can also be used to study generalization in games . Moreover , it does not use the environments ’ reward functions to train an agent . 3 PRELIMINARIES . Text-based games as POMDPs . Text-based games can be formally described as Partially Observable Markov Decision Processes ( POMDPs ) . A POMDP can be defined as a tuple 〈S , A , T , R , Ω , O , γ〉 : the state set S , the action set A , the state transition probabilities T , the reward function R , the observation set Ω , the conditional observation probabilities O , and the discount factor γ ∈ ( 0 , 1 ] . At each time step , the agent receives a textual observation o_t ∈ Ω , which depends on the current state and previous action via the conditional observation probability O ( o_t | s_t , a_{t−1} ) . By executing an action a_t ∈ A , the environment transitions into a new state according to the state transition probability T ( s_{t+1} | s_t , a_t ) , and the agent receives the reward r_{t+1} = R ( s_t , a_t ) . As in Markov Decision Processes ( MDPs ) , the goal of the agent is to learn an optimal policy π∗ that maximizes the expected discounted sum of future rewards from each time step : R_t = E [ ∑_{k=0}^{∞} γ^k r_{t+k+1} ] . Knowledge graph for text-based games . 
Graph-based representations are effective for text-based games because the state in these games adheres to a graph-like structure . The content of most observations of the environment corresponds either to entity attributes or to relational information about entities in the environment . A knowledge graph ( KG ) for a text-based game can be built from a set of triplets 〈Subject , Relation , Object〉 , each denoting that the Subject has the given Relation with the Object . For example , 〈Kitchen , Has , Food〉 . The KG is denoted as G = ( V , E ) , where V and E are the node set and the edge set , respectively . Both the Subject and the Object belong to the node set V . The Relation , which corresponds to the edge connecting them , belongs to E .
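A minimal sketch of this construction (the function names are ours; triplets follow the 〈Subject , Relation , Object〉 form above):

```python
def build_kg(triplets):
    """Build G = (V, E): nodes are Subjects and Objects, edges keep the Relation."""
    V, E = set(), set()
    for subj, rel, obj in triplets:
        V.update((subj, obj))
        E.add((subj, rel, obj))
    return V, E

def relations_from(E, node):
    """All (Relation, Object) pairs reachable from `node` along one edge."""
    return {(rel, obj) for subj, rel, obj in E if subj == node}
```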
This paper studies text-based games using a technique called goal randomization. This method uses random basic goals to train a policy in the absence of environment rewards. The authors show that agents can learn policies that generalize well across different text-based games. The method also appears to work better than a state-of-the-art algorithm that uses environment rewards.
SP:9b1286437860daa22f9fa88d1e04fed46d0784f8
Efficient and Modular Implicit Differentiation
1 INTRODUCTION . Automatic differentiation ( autodiff ) is now an inherent part of machine learning software . It allows complex computations to be expressed by composing elementary ones in creative ways and removes the tedious burden of computing their derivatives by hand . In parallel , the differentiation of optimization problem solutions has found many applications . A classical example is bi-level optimization , which typically involves computing the derivatives of a nested optimization problem in order to solve an outer one . Examples of applications in machine learning include hyper-parameter optimization ( Chapelle et al. , 2002 ; Seeger , 2008 ; Pedregosa , 2016 ; Franceschi et al. , 2017 ; Bertrand et al. , 2020 ; 2021 ) , neural networks ( Lorraine et al. , 2020 ) , and meta-learning ( Franceschi et al. , 2018 ; Rajeswaran et al. , 2019 ) . Another line of active research involving differentiation of optimization problem solutions is optimization layers ( Kim et al. , 2017 ; Amos & Kolter , 2017 ; Niculae & Blondel , 2017 ; Djolonga & Krause , 2017 ; Gould et al. , 2019 ) , which can be used to encourage structured outputs , and implicit deep networks ( Bai et al. , 2019 ; El Ghaoui et al. , 2019 ) , which have a smaller memory footprint than backprop-trained networks . Since optimization problem solutions typically do not enjoy an explicit formula in terms of their inputs , autodiff can not be used directly to differentiate these functions . In recent years , two main approaches have been developed to circumvent this problem . The first one consists of unrolling the iterations of an optimization algorithm and using the final iteration as a proxy for the optimization problem solution ( Wengert , 1964 ; Domke , 2012 ; Deledalle et al. , 2014 ; Franceschi et al. , 2018 ; Ablin et al. , 2020 ) . This makes it possible to explicitly construct a computational graph relating the algorithm output to the inputs , on which autodiff can then be used transparently . 
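As a toy illustration of the unrolling approach (our own example, not from the paper): run gradient descent on f ( x , θ ) = ½ ( x − θ )² and propagate the derivative of the iterate with respect to θ through each update, exactly as forward-mode autodiff would along the unrolled graph. The true solution is x⋆ ( θ ) = θ , so the unrolled derivative should approach 1.

```python
def unrolled_solver_and_grad(theta, lr=0.1, steps=200, x0=0.0):
    """Gradient descent on f(x) = 0.5 * (x - theta)**2, with a hand-written
    forward-mode sweep mimicking autodiff on the unrolled iterations."""
    x, dx_dtheta = x0, 0.0
    for _ in range(steps):
        x = x - lr * (x - theta)                # df/dx = x - theta
        # chain rule through the update x <- x - lr * (x - theta):
        dx_dtheta = (1.0 - lr) * dx_dtheta + lr
    return x, dx_dtheta
```

Note how the derivative recursion must be carried along every iteration: this is exactly the per-iteration cost (and, for reverse mode, memory cost) that the implicit approach below avoids.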
However , this requires a reimplementation of the algorithm using the autodiff system , and not all algorithms are necessarily autodiff friendly . Moreover , forward-mode autodiff has time complexity that scales linearly with the number of variables and reverse-mode autodiff has memory complexity that scales linearly with the number of algorithm iterations . In contrast , a second approach consists of implicitly relating an optimization problem solution to its inputs using optimality conditions . In a machine learning context , such implicit differentiation has been used for stationarity conditions ( Bengio , 2000 ; Lorraine et al. , 2020 ) , KKT conditions ( Chapelle et al. , 2002 ; Gould et al. , 2016 ; Amos & Kolter , 2017 ; Niculae et al. , 2018 ; Niculae & Martins , 2020 ) and the proximal gradient fixed point ( Niculae & Blondel , 2017 ; Bertrand et al. , 2020 ; 2021 ) . An advantage of implicit differentiation is that a solver reimplementation is not needed , allowing practitioners to build upon decades of state-of-the-art software . Although implicit differentiation has a long history in numerical analysis ( Griewank & Walther , 2008 ; Bell & Burke , 2008 ; Krantz & Parks , 2012 ; Bonnans & Shapiro , 2013 ) , it has so far remained difficult for practitioners to use , as it required a tedious case-by-case mathematical derivation and implementation . CasADi ( Andersson et al. , 2019 ) can differentiate the various optimization and root-finding algorithms provided by the library . However , it does not make it easy to add implicit differentiation on top of existing solvers from optimality conditions expressed by the user , as we do . A recent tutorial explains how to implement implicit differentiation in JAX ( Duvenaud et al. , 2020 ) . However , the tutorial requires the user to take care of low-level technical details and does not cover a large catalog of optimality condition mappings as we do . Other work ( Agrawal et al. 
, 2019a ) attempts to address this issue by adding implicit differentiation on top of cvxpy ( Diamond & Boyd , 2016 ) . This works by reducing all convex optimization problems to a conic program and using conic programming ’ s optimality conditions to derive an implicit differentiation formula . While this approach is very generic , solving a convex optimization problem using a conic programming solver—an ADMM-based splitting conic solver ( O ’ Donoghue et al. , 2016 ) in the case of cvxpy—is rarely the state-of-the-art approach for each particular problem instance . In this work , we adopt a different strategy that makes it easy to add implicit differentiation on top of any existing solver . In our approach , the user defines directly in Python a mapping function F capturing the optimality conditions of the problem solved by the algorithm . Once this is done , we leverage autodiff of F combined with implicit differentiation to automatically differentiate the optimization problem solution . In this way , our approach is generic , yet it can exploit the efficiency of state-of-the-art solvers . It therefore combines the benefits of implicit differentiation and autodiff . To summarize , we make the following contributions . • We describe our framework and its JAX implementation ( provided in the supplementary material ) . Our framework significantly lowers the barrier to using implicit differentiation thanks to the use of autodiff of the optimality conditions and the seamless integration in JAX . Our framework significantly extends JAX for numerical optimization , with low-level details all abstracted away . • We instantiate our framework on a large catalog of optimality conditions ( Table 1 ) , recovering existing schemes and obtaining new ones , such as one based on the mirror descent fixed point . • On the theoretical side , we provide new bounds on the Jacobian error when the optimization problem is only solved approximately , and empirically validate them . 
• We implement four illustrative applications , demonstrating our framework ’ s ease of use . Beyond our software implementation , we hope this paper provides a self-contained blueprint for creating an efficient and modular implementation of implicit differentiation . Notation . We denote the gradient and Hessian of f : Rd → R evaluated at x ∈ Rd by ∇f ( x ) ∈ Rd and ∇2f ( x ) ∈ Rd×d . We denote the Jacobian of F : Rd → Rp evaluated at x ∈ Rd by ∂F ( x ) ∈ Rp×d . When f or F have several arguments , we denote the gradient , Hessian and Jacobian in the ith argument by ∇i , ∇2i and ∂i , respectively . The standard probability simplex is denoted by △d := { x ∈ Rd : ‖x‖1 = 1 , x ≥ 0 } . For any set C ⊂ Rd , we denote the indicator function IC : Rd → R ∪ { +∞ } , where IC ( x ) = 0 if x ∈ C and IC ( x ) = +∞ otherwise . For a vector or matrix A , we write ‖A‖ for the Frobenius ( or Euclidean ) norm , and ‖A‖op for the operator norm . 2 COMBINING IMPLICIT DIFFERENTIATION AND AUTODIFF . 2.1 GENERAL PRINCIPLES . Overview . Contrary to autodiff through unrolled algorithm iterations , implicit differentiation typically involves a manual , sometimes complicated , mathematical derivation . For instance , numerous works ( Chapelle et al. , 2002 ; Gould et al. , 2016 ; Amos & Kolter , 2017 ; Niculae et al. , 2018 ; Niculae & Martins , 2020 ) use Karush–Kuhn–Tucker ( KKT ) conditions in order to relate a constrained optimization problem ’ s solution to its inputs , and to manually derive a formula for its derivatives . The derivation and implementation in these works are always case-by-case . In this work , we propose a generic way to easily add implicit differentiation on top of existing solvers . In our approach , the user defines directly in Python a mapping function F capturing the optimality conditions of the problem solved by the algorithm . We provide reusable building blocks to easily express such F . 
The provided F is then plugged into our Python decorator @custom_root , which we append on top of the solver declaration we wish to differentiate . Under the hood , we combine implicit differentiation and autodiff of F to automatically differentiate the optimization problem solution . A simple illustrative example is given in Figure 1 . Differentiating a root . Let F : Rd × Rn → Rd be a user-provided mapping , capturing the optimality conditions of a problem . An optimal solution , denoted x⋆ ( θ ) , should be a root of F : F ( x⋆ ( θ ) , θ ) = 0 . ( 1 ) We can see x⋆ ( θ ) as an implicitly defined function of θ ∈ Rn , i.e. , x⋆ : Rn → Rd . More precisely , from the implicit function theorem ( Griewank & Walther , 2008 ; Krantz & Parks , 2012 ) , we know that for ( x0 , θ0 ) satisfying F ( x0 , θ0 ) = 0 with a continuously differentiable F , if the Jacobian ∂1F evaluated at ( x0 , θ0 ) is a square invertible matrix , then there exists a function x⋆ ( · ) defined on a neighborhood of θ0 such that x⋆ ( θ0 ) = x0 . Furthermore , for all θ in this neighborhood , we have that F ( x⋆ ( θ ) , θ ) = 0 and ∂x⋆ ( θ ) exists . Using the chain rule , the Jacobian ∂x⋆ ( θ ) satisfies ∂1F ( x⋆ ( θ ) , θ ) ∂x⋆ ( θ ) + ∂2F ( x⋆ ( θ ) , θ ) = 0 . Computing ∂x⋆ ( θ ) therefore boils down to the resolution of the linear system of equations A J = B , ( 2 ) where A := −∂1F ( x⋆ ( θ ) , θ ) ∈ Rd×d , J := ∂x⋆ ( θ ) ∈ Rd×n and B := ∂2F ( x⋆ ( θ ) , θ ) ∈ Rd×n . When ( 1 ) is a one-dimensional root finding problem ( d = 1 ) , ( 2 ) becomes particularly simple since we then have ∇x⋆ ( θ ) = B⊤/A , where A is a scalar value . We will show that existing and new implicit differentiation methods all reduce to this simple principle . We call our approach hybrid , since it combines implicit differentiation ( and as such requires solving a linear system ) with autodiff of the optimality conditions F . 
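A one-dimensional sanity check of this recipe (a sketch with a toy optimality condition of our choosing, not the paper's decorator machinery): for F ( x , θ ) = x² − θ the root is x⋆ ( θ ) = √θ , and solving the d = 1 case of ( 2 ) recovers the analytic derivative 1 / ( 2√θ ).

```python
import math

def F(x, theta):
    """Toy optimality condition whose root is x*(theta) = sqrt(theta)."""
    return x * x - theta

def implicit_grad_1d(x_star, theta, d1F, d2F):
    """Solve the 1-D linear system (2): A = -d1F, B = d2F, gradient = B / A."""
    A = -d1F(x_star, theta)
    B = d2F(x_star, theta)
    return B / A

theta = 4.0
x_star = math.sqrt(theta)  # stands in for the output of any black-box solver
grad = implicit_grad_1d(
    x_star, theta,
    d1F=lambda x, t: 2.0 * x,  # dF/dx
    d2F=lambda x, t: -1.0,     # dF/dtheta
)
# Analytic derivative: d sqrt(theta) / d theta = 1 / (2 * sqrt(theta)) = 0.25 here
```

Note that only the root itself is needed from the solver; the derivative comes entirely from the (auto)differentiation of F at that root.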
Our approach is efficient as it can be added on top of any state-of-the-art solver and modular as the optimality condition specification is decoupled from the implicit differentiation mechanism . This contrasts with existing works , where the mathematical derivation and implementation are specific to each optimality condition . Differentiating a fixed point . We will encounter numerous applications where x⋆ ( θ ) is implicitly defined through a fixed point : x⋆ ( θ ) = T ( x⋆ ( θ ) , θ ) , where T : Rd × Rn → Rd . This can be seen as a particular case of ( 1 ) by defining the residual F ( x , θ ) = T ( x , θ ) − x . ( 3 ) In this case , using the chain rule , we have
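Continuing in one dimension: with the residual F = T − x , system ( 2 ) gives ∇x⋆ ( θ ) = ∂2T / ( 1 − ∂1T ) evaluated at the fixed point. A sketch with the Babylonian square-root iteration as a toy choice of T (ours, not the paper's):

```python
def fixed_point(T, theta, x0=1.0, iters=60):
    """Iterate x <- T(x, theta); assumes the map converges from x0."""
    x = x0
    for _ in range(iters):
        x = T(x, theta)
    return x

def fixed_point_grad_1d(x_star, theta, d1T, d2T):
    """1-D implicit gradient through the residual F = T - x."""
    return d2T(x_star, theta) / (1.0 - d1T(x_star, theta))

theta = 9.0
T = lambda x, t: 0.5 * (x + t / x)  # Babylonian iteration: fixed point is sqrt(t)
x_star = fixed_point(T, theta)
grad = fixed_point_grad_1d(
    x_star, theta,
    d1T=lambda x, t: 0.5 * (1.0 - t / (x * x)),  # dT/dx (vanishes at the fixed point)
    d2T=lambda x, t: 0.5 / x,                    # dT/dtheta
)
# Analytic derivative: d sqrt(theta) / d theta = 1 / (2 * sqrt(theta)) = 1/6 here
```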
This paper provides a unified tool combining the implicit differentiation technique with the automatic differentiation widely used in existing deep learning packages such as PyTorch and TensorFlow. The proposed implementation is easy to use for numerical optimization tasks such as bilevel optimization, meta-learning and hyperparameter optimization, because it covers many existing schemes, such as fixed points, KKT conditions, and the projected gradient method. In the experiments, the authors illustrate how their tool can simplify implementations.
SP:ddc796b9185d372f4d0829f436bbca50c3990867
Efficient and Modular Implicit Differentiation
The paper proposes a modular and efficient framework along with its JAX implementation for the implicit differentiation of optimization problems. The user defines the function F capturing the optimality conditions of the problem to be differentiated; then the framework combines implicit differentiation and autodiff of F to automatically differentiate the optimization problem. The proposed framework is labeled as efficient, since it doesn’t have to unroll the computational graph like in autodiff, and modular since it doesn’t require case-by-case mathematical derivation like in implicit differentiation. The authors show that existing implicit differentiation methods can be instantiated in their framework. They provide and empirically validate new bounds on the Jacobian error when the optimization problem is only solved approximately. The authors implemented four illustrative applications of their framework ( Hyperparameter Optimization Of Multiclass SVM; Dataset Distillation; Task-Driven Dictionary Learning; Sensitivity Analysis Of Molecular Dynamics). Code and implementation in JAX are provided along with the paper.
SP:ddc796b9185d372f4d0829f436bbca50c3990867
Efficient and Modular Implicit Differentiation
1 INTRODUCTION . Automatic differentiation ( autodiff ) is now an inherent part of machine learning software . It allows to express complex computations by composing elementary ones in creative ways and removes the tedious burden of computing their derivatives by hand . In parallel , the differentiation of optimization problem solutions has found many applications . A classical example is bi-level optimization , which typically involves computing the derivatives of a nested optimization problem in order to solve an outer one . Examples of applications in machine learning include hyper-parameter optimization ( Chapelle et al. , 2002 ; Seeger , 2008 ; Pedregosa , 2016 ; Franceschi et al. , 2017 ; Bertrand et al. , 2020 ; 2021 ) , neural networks ( Lorraine et al. , 2020 ) , and meta-learning ( Franceschi et al. , 2018 ; Rajeswaran et al. , 2019 ) . Another line of active research involving differentiation of optimization problem solutions are optimization layers ( Kim et al. , 2017 ; Amos & Kolter , 2017 ; Niculae & Blondel , 2017 ; Djolonga & Krause , 2017 ; Gould et al. , 2019 ) , which can be used to encourage structured outputs , and implicit deep networks ( Bai et al. , 2019 ; El Ghaoui et al. , 2019 ) , which have a smaller memory footprint than backprop-trained networks . Since optimization problem solutions typically do not enjoy an explicit formula in terms of their inputs , autodiff can not be used directly to differentiate these functions . In recent years , two main approaches have been developed to circumvent this problem . The first one consists of unrolling the iterations of an optimization algorithm and using the final iteration as a proxy for the optimization problem solution ( Wengert , 1964 ; Domke , 2012 ; Deledalle et al. , 2014 ; Franceschi et al. , 2018 ; Ablin et al. , 2020 ) . This allows to explicitly construct a computational graph relating the algorithm output to the inputs , on which autodiff can then be used transparently . 
However, this requires a reimplementation of the algorithm using the autodiff system, and not all algorithms are necessarily autodiff-friendly. Moreover, forward-mode autodiff has time complexity that scales linearly with the number of variables, and reverse-mode autodiff has memory complexity that scales linearly with the number of algorithm iterations. In contrast, a second approach consists of implicitly relating an optimization problem solution to its inputs using optimality conditions. In a machine learning context, such implicit differentiation has been used for stationarity conditions (Bengio, 2000; Lorraine et al., 2020), KKT conditions (Chapelle et al., 2002; Gould et al., 2016; Amos & Kolter, 2017; Niculae et al., 2018; Niculae & Martins, 2020) and the proximal gradient fixed point (Niculae & Blondel, 2017; Bertrand et al., 2020; 2021). An advantage of implicit differentiation is that a solver reimplementation is not needed, allowing practitioners to build upon decades of state-of-the-art software. Although implicit differentiation has a long history in numerical analysis (Griewank & Walther, 2008; Bell & Burke, 2008; Krantz & Parks, 2012; Bonnans & Shapiro, 2013), it has so far remained difficult for practitioners to use, as it required a tedious case-by-case mathematical derivation and implementation. CasADi (Andersson et al., 2019) can differentiate the various optimization and root-finding algorithms provided by the library. However, it does not make it easy to add implicit differentiation on top of existing solvers from optimality conditions expressed by the user, as we do. A recent tutorial explains how to implement implicit differentiation in JAX (Duvenaud et al., 2020). However, the tutorial requires the user to take care of low-level technical details and does not cover a large catalog of optimality condition mappings as we do. Other work (Agrawal et al.
, 2019a) attempts to address this issue by adding implicit differentiation on top of cvxpy (Diamond & Boyd, 2016). This works by reducing all convex optimization problems to a conic program and using conic programming's optimality conditions to derive an implicit differentiation formula. While this approach is very generic, solving a convex optimization problem with a conic programming solver—an ADMM-based splitting conic solver (O'Donoghue et al., 2016) in the case of cvxpy—is rarely the state-of-the-art approach for each particular problem instance. In this work, we adopt a different strategy that makes it easy to add implicit differentiation on top of any existing solver. In our approach, the user defines directly in Python a mapping function F capturing the optimality conditions of the problem solved by the algorithm. Once this is done, we leverage autodiff of F combined with implicit differentiation to automatically differentiate the optimization problem solution. In this way, our approach is generic, yet it can exploit the efficiency of state-of-the-art solvers. It therefore combines the benefits of implicit differentiation and autodiff. To summarize, we make the following contributions. • We describe our framework and its JAX implementation (provided in the supplementary material). Our framework significantly lowers the barrier to using implicit differentiation thanks to autodiff of the optimality conditions and the seamless integration in JAX. Our framework significantly extends JAX for numerical optimization, with low-level details all abstracted away. • We instantiate our framework on a large catalog of optimality conditions (Table 1), recovering existing schemes and obtaining new ones, such as one based on the mirror descent fixed point. • On the theoretical side, we provide new bounds on the Jacobian error when the optimization problem is only solved approximately, and empirically validate them.
• We implement four illustrative applications, demonstrating our framework's ease of use. Beyond our software implementation, we hope this paper provides a self-contained blueprint for creating an efficient and modular implementation of implicit differentiation. Notation. We denote the gradient and Hessian of f : R^d → R evaluated at x ∈ R^d by ∇f(x) ∈ R^d and ∇²f(x) ∈ R^{d×d}. We denote the Jacobian of F : R^d → R^p evaluated at x ∈ R^d by ∂F(x) ∈ R^{p×d}. When f or F have several arguments, we denote the gradient, Hessian and Jacobian in the i-th argument by ∇_i, ∇²_i and ∂_i, respectively. The standard probability simplex is denoted by △^d := {x ∈ R^d : ‖x‖₁ = 1, x ≥ 0}. For any set C ⊂ R^d, we denote the indicator function I_C : R^d → R ∪ {+∞}, where I_C(x) = 0 if x ∈ C and I_C(x) = +∞ otherwise. For a vector or matrix A, we write ‖A‖ for the Frobenius (or Euclidean) norm and ‖A‖_op for the operator norm. 2 COMBINING IMPLICIT DIFFERENTIATION AND AUTODIFF . 2.1 GENERAL PRINCIPLES . Overview. Contrary to autodiff through unrolled algorithm iterations, implicit differentiation typically involves a manual, sometimes complicated, mathematical derivation. For instance, numerous works (Chapelle et al., 2002; Gould et al., 2016; Amos & Kolter, 2017; Niculae et al., 2018; Niculae & Martins, 2020) use Karush–Kuhn–Tucker (KKT) conditions in order to relate a constrained optimization problem's solution to its inputs, and to manually derive a formula for its derivatives. The derivation and implementation in these works are always case-by-case. In this work, we propose a generic way to easily add implicit differentiation on top of existing solvers. In our approach, the user defines directly in Python a mapping function F capturing the optimality conditions of the problem solved by the algorithm. We provide reusable building blocks to easily express such an F.
The provided F is then plugged into our Python decorator @custom_root, which we append on top of the solver declaration we wish to differentiate. Under the hood, we combine implicit differentiation and autodiff of F to automatically differentiate the optimization problem solution. A simple illustrative example is given in Figure 1. Differentiating a root. Let F : R^d × R^n → R^d be a user-provided mapping, capturing the optimality conditions of a problem. An optimal solution, denoted x⋆(θ), should be a root of F:

F(x⋆(θ), θ) = 0.  (1)

We can see x⋆(θ) as an implicitly defined function of θ ∈ R^n, i.e., x⋆ : R^n → R^d. More precisely, from the implicit function theorem (Griewank & Walther, 2008; Krantz & Parks, 2012), we know that for (x₀, θ₀) satisfying F(x₀, θ₀) = 0 with a continuously differentiable F, if the Jacobian ∂₁F evaluated at (x₀, θ₀) is a square invertible matrix, then there exists a function x⋆(·) defined on a neighborhood of θ₀ such that x⋆(θ₀) = x₀. Furthermore, for all θ in this neighborhood, we have that F(x⋆(θ), θ) = 0 and ∂x⋆(θ) exists. Using the chain rule, the Jacobian ∂x⋆(θ) satisfies ∂₁F(x⋆(θ), θ) ∂x⋆(θ) + ∂₂F(x⋆(θ), θ) = 0. Computing ∂x⋆(θ) therefore boils down to the resolution of the linear system of equations

A J = B,  where  A := −∂₁F(x⋆(θ), θ) ∈ R^{d×d},  J := ∂x⋆(θ) ∈ R^{d×n},  B := ∂₂F(x⋆(θ), θ) ∈ R^{d×n}.  (2)

When (1) is a one-dimensional root-finding problem (d = 1), (2) becomes particularly simple, since we then have ∇x⋆(θ) = Bᵀ/A, where A is a scalar value. We will show that existing and new implicit differentiation methods all reduce to this simple principle. We call our approach hybrid, since it combines implicit differentiation (and as such requires solving a linear system) with autodiff of the optimality conditions F.
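As a concrete sketch of this linear-system view (not the paper's actual implementation): the numpy snippet below solves A J = B for a ridge-regression optimality condition. The two Jacobians of F are written by hand where the paper's framework would obtain them with autodiff; the function names and the example are illustrative assumptions.

```python
import numpy as np

def implicit_jacobian(d1F, d2F, x_star, theta):
    """Solve A J = B with A = -d1F and B = d2F at (x*, theta) -> J = dx*/dtheta."""
    A = -d1F(x_star, theta)           # (d, d)
    B = d2F(x_star, theta)            # (d, n)
    return np.linalg.solve(A, B)      # (d, n)

# Example: ridge regression w*(theta) = argmin_w ||Xw - y||^2 + theta ||w||^2.
# Stationarity condition: F(w, theta) = 2 X^T (Xw - y) + 2 theta w = 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
theta = 0.5

M = X.T @ X + theta * np.eye(3)
w_star = np.linalg.solve(M, X.T @ y)           # closed-form root of F

d1F = lambda w, t: 2.0 * (X.T @ X + t * np.eye(3))   # Jacobian in w
d2F = lambda w, t: (2.0 * w).reshape(-1, 1)          # Jacobian in theta (n = 1)

J = implicit_jacobian(d1F, d2F, w_star, theta)
# Closed form gives dw*/dtheta = -M^{-1} w*, which J should match.
```

The same three-line solver works unchanged for any user-supplied optimality condition, which is the modularity the text describes.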
Our approach is efficient, as it can be added on top of any state-of-the-art solver, and modular, as the optimality condition specification is decoupled from the implicit differentiation mechanism. This contrasts with existing works, where the mathematical derivation and implementation are specific to each optimality condition. Differentiating a fixed point. We will encounter numerous applications where x⋆(θ) is implicitly defined through a fixed point: x⋆(θ) = T(x⋆(θ), θ), where T : R^d × R^n → R^d. This can be seen as a particular case of (1) by defining the residual

F(x, θ) = T(x, θ) − x.  (3)

In this case, using the chain rule, we have
This paper introduces a JAX package for implicitly differentiating various numerical solvers. Concretely, the authors develop a systematic methodology for producing gradients for a variety of optimization problems. The authors then prove that the Jacobian computed from an approximate numerical solution yields sufficiently accurate gradients. Finally, the authors demonstrate the power of their framework on four test tasks.
SP:ddc796b9185d372f4d0829f436bbca50c3990867
NAFS: A Simple yet Tough-to-Beat Baseline for Graph Representation Learning
1 INTRODUCTION . In recent years, graph representation learning has been extensively applied in various application scenarios, such as node clustering, link prediction, node classification, and graph classification (Kipf & Welling, 2016b;a; Hamilton et al., 2017; Bo et al., 2020; Hettige et al., 2020; Wang et al., 2016; Wu et al., 2020; Abu-El-Haija et al., 2019). The goal of graph representation learning is to encode graph information into node embeddings. Traditional graph representation learning methods, such as DeepWalk (Perozzi et al., 2014), Node2vec (Grover & Leskovec, 2016), LINE (Tang et al., 2015), and ComE (Cavallari et al., 2017) merely focus on preserving graph structure information. GNN-based graph representation learning has attracted intensive interest by combining knowledge from both graph structure and node features. While most of these GNN-based methods are designed based on Graph AutoEncoder (GAE) and Variational Graph AutoEncoder (VGAE) (Kipf & Welling, 2016b), these methods share two major limitations: Shallow Architecture. Previous work shows that although stacking multiple GNN layers in Graph Convolutional Network (GCN) (Kipf & Welling, 2016a) is capable of exploiting deep structural information, applying a large number of GNN layers might lead to indistinguishable node embeddings, i.e., the over-smoothing issue (Li et al., 2018). Therefore, most state-of-the-art GNNs resort to shallow architectures, which hinders the model from capturing long-range dependencies. Low Scalability. GNN-based graph representation learning methods cannot scale well to large graphs due to the expensive computation cost and high memory usage. Most existing GNNs need to repeatedly perform the computationally expensive and recursive feature smoothing, which involves the participation of the entire graph at each training epoch.
Furthermore, most methods adopt the same training loss function as GAE, which introduces high memory usage by storing the dense-form adjacency matrix on GPU. For a graph of size 200 million, its dense-form adjacency matrix requires a space of roughly 150GB, exceeding the memory capacity of current powerful GPU devices. To tackle these issues, we propose a new graph representation learning method, which is embarrassingly simple: just smooth the node features and then combine the smoothed features in a node-adaptive manner. We name this method node-adaptive feature smoothing (NAFS), and its goal is to construct better node embeddings that integrate both graph structural information and node features. Based on the observation that different nodes have highly diverse “smoothing speed”, NAFS adaptively smooths each node feature and takes advantage of both low-order and high-order neighborhood information of each node. In addition, feature ensemble is also employed to combine the smoothed features extracted via different smoothing operators. Since NAFS is training-free, it significantly reduces the training cost and scales better to large graphs than most GNN-based graph representation learning methods. This paper is not meant to diminish the current advancements in GNN-based graph representation learning approaches. Instead, we aim to introduce an easier way to obtain high-quality node embeddings and to better understand the source of the performance gains of these approaches. Feature smoothing could be a promising direction towards a more simple and effective integration of information from both graph structure and node features. Our contributions are as follows: (1) New perspective.
To the best of our knowledge, we are the first to explore the possibility that simple feature smoothing without any trainable parameters could even outperform state-of-the-art GNNs; this incredible finding opens up a new direction towards efficient and scalable graph representation learning. (2) Novel method. We propose NAFS, a node-adaptive feature smoothing approach along with various feature ensemble strategies, to fully exploit knowledge from both the graph structure and node features. (3) State-of-the-art performance. We evaluate the effectiveness and efficiency of NAFS on real-world datasets across various graph-based tasks, including node clustering and link prediction. Empirical results demonstrate that NAFS performs comparably with or even outperforms the state-of-the-art GNNs, and achieves up to two orders of magnitude speedup. In particular, on PubMed, NAFS outperforms GAE (Kipf & Welling, 2016b) and AGE (Cui et al., 2020) by a margin of 9.0% and 3.8% in terms of NMI in node clustering, while achieving up to 65.4× and 88.6× training speedups, respectively. 2 PRELIMINARY . In this section, we first explain the notations and problem formulation. Then, we review current GNNs and GNN-based graph representation learning. 2.1 NOTATIONS AND PROBLEM FORMULATION . In this paper, we consider an undirected graph G = (V, E) with |V| = n nodes and |E| = m edges. Here we suppose that m ∝ n, as is the case in most real-world graphs. We denote by A the adjacency matrix of G. Each node can possibly have a feature vector of size f, which stacks up to an n × f feature matrix X. The degree matrix of A is denoted as D = diag(d₁, d₂, ..., d_n) ∈ R^{n×n}, where d_i = Σ_{v_j∈V} A_{ij}. We denote the final node embedding matrix as Z, and evaluate it in both the node clustering and the link prediction tasks.
The node clustering task requires the model to partition the nodes into c disjoint groups G₁, G₂, ..., G_c, where similar nodes should be in the same group. The target of the link prediction task is to predict whether an edge exists between given node pairs. 2.2 GRAPH CONVOLUTIONAL NETWORK . Based on the assumption that locally connected nodes are likely to enjoy high similarity (McPherson et al., 2001), each node in most GNN models iteratively smooths the representations of its neighbors for better node embedding. Below is the formula of the l-th GCN layer (Kipf & Welling, 2016a):

X^(l) = δ(Â X^(l−1) Θ^(l)),  Â = D̃^(−1/2) Ã D̃^(−1/2),  Ã = A + I_n,  (1)

where X^(l) is the node embedding matrix at layer l, X^(0) is the original feature matrix, Θ^(l) are the trainable weights, and δ is the activation function. Â is the smoothing matrix that helps each node to smooth the representations of neighboring nodes. As shown in Eq. 1, each GCN layer contains two operations: feature aggregation (smoothing) and feature transformation. Figure 1 shows the framework of a two-layer GCN. The l-th layer in GCN first executes feature smoothing on the node embedding X^(l−1). Then, the smoothed feature X̃^(l−1) is transformed with trainable weights Θ^(l) and activation function δ to generate the new node embedding X^(l). Note that GCN degrades to an MLP if feature smoothing is removed from each layer. 2.3 GNN-BASED GRAPH REPRESENTATION LEARNING . GAE (Kipf & Welling, 2016b), the first and the most representative GNN-based graph embedding method, adopts an encoder to generate the node embedding matrix Z with inputs Â and X. A simple inner product decoder is then used to reconstruct the adjacency matrix. The final training loss of GAE is the binary cross-entropy loss between A′ and Ã, the reconstructed adjacency matrix and the original adjacency matrix with self-loops added.
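The feature-aggregation half of Eq. 1 can be sketched on its own in a few lines of numpy. This is a minimal illustration on a toy 4-node path graph with made-up features, leaving out the trainable transformation Θ^(l) and activation δ:

```python
import numpy as np

# Toy 4-node path graph and a 2-dimensional feature matrix (illustrative).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)

A_tilde = A + np.eye(4)                      # A~ = A + I_n (self-loops)
d_tilde = A_tilde.sum(axis=1)                # degrees of A~
D_inv_sqrt = np.diag(d_tilde ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # A^ = D~^(-1/2) A~ D~^(-1/2)

X_smooth = A_hat @ X                         # one feature-smoothing step
```

A full GCN layer would follow this with `X_smooth @ Theta` and a nonlinearity; dropping those two is exactly the "remove feature transformation" simplification discussed later in the paper.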
A′ = sigmoid(Z · Zᵀ),  L = Σ_{1≤i,j≤n} [−Ã_{i,j} log A′_{i,j} − (1 − Ã_{i,j}) log(1 − A′_{i,j})].  (2)

Motivated by GAE, many GNN-based graph representation learning methods have been proposed recently. MGAE (Wang et al., 2017) presents a denoising marginalized autoencoder that reconstructs the node feature matrix X. ARGA (Pan et al., 2018) adopts the adversarial learning strategy, and its generated node embeddings are forced to match a prior distribution. DAEGC (Wang et al., 2019) exploits side information to generate node embeddings in a self-supervised way. AGC (Zhang et al., 2019) proposes an improved filter matrix to better filter out the high-frequency noise. AGE (Cui et al., 2020) further improves AGC by using the similarity of embeddings rather than the adjacency matrix to take the original node feature information into account. Compared with GNN-based graph representation learning methods that rely on trainable parameters to learn node embeddings, our NAFS is training-free and thus enjoys higher efficiency and scalability. 3 OBSERVATION AND INSIGHT . In this section, we make a quantitative analysis of the over-smoothing issue at the node level and then provide some insights for designing NAFS on graphs. 3.1 FEATURE SMOOTHING IN DECOUPLED GNNS . Recently, many works (Wu et al., 2019; Zhu & Koniusz, 2021; Chen et al., 2020; Zhang et al., 2021) propose to decouple the feature smoothing and feature transformation in each GCN layer for scalable node classification. Concretely, they execute the feature smoothing operation in advance, and the smoothed features are then fed into a simple MLP to generate the final predicted node labels. Under this framework, the predictive node classification accuracy of these methods is comparable with or even higher than that of coupled GNNs, and these works claim that the true power of GNNs lies in feature smoothing rather than feature transformation.
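The decoder and loss of Eq. 2 are compact enough to write out directly. A minimal numpy sketch, where Z is a random stand-in for an encoder's output and the 2-node graph is illustrative (in practice this dense n×n loss is precisely the memory bottleneck criticized above):

```python
import numpy as np

def gae_loss(Z, A_tilde, eps=1e-9):
    """Binary cross-entropy between sigmoid(Z Z^T) and A~ = A + I (Eq. 2)."""
    A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))        # A' = sigmoid(Z . Z^T)
    return -np.sum(A_tilde * np.log(A_rec + eps)
                   + (1.0 - A_tilde) * np.log(1.0 - A_rec + eps))

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [1.0, 0.0]])
A_tilde = A + np.eye(2)                              # add self-loops
Z = rng.standard_normal((2, 4))                      # stand-in encoder output
loss = gae_loss(Z, A_tilde)
```

The `eps` clamp is a numerical-stability assumption on my part; implementations typically compute this loss from logits instead to avoid it.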
We split the framework of these decoupled GNNs into two parts: feature smoothing and MLP training. Feature smoothing aims to combine the graph structural information and node features into better features for the subsequent MLP, while the MLP only takes in the smoothed features and is trained specifically for a given task. As stated by previous decoupled GNNs (Wu et al., 2019; Zhu & Koniusz, 2021), the true success of GNNs lies in feature smoothing rather than feature transformation. Correspondingly, we propose to remove feature transformation and preserve the key feature smoothing part alone for simple and scalable node representation. There is another branch of GNNs that also decouples feature smoothing and feature transformation. The most representative method of this category is APPNP (Klicpera et al., 2018). It first feeds the raw node features into an MLP to generate intermediate node embeddings; then personalized PageRank based propagation operations are performed on the node embeddings to produce the final prediction results. However, compared with the scalable decoupled GNNs mentioned in the previous paragraph, this branch of GNNs still has to recursively execute propagation operations in each training epoch, which makes it impractical on large-scale graphs. In the remaining part of this paper, the terminology “decoupled GNNs” refers particularly to the scalable decoupled GNNs mentioned in the previous two paragraphs. 3.2 MEASURING SMOOTHING LEVEL . To capture deep graph structural information, a straightforward way is to simply stack multiple GNN layers. However, a large number of feature smoothing operations in a GNN model would lead to indistinguishable node embeddings, i.e., the over-smoothing issue (Li et al., 2018). Concretely, if we apply Â to X infinitely many times, the node embeddings within the same connected component would reach a stationary state.
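Both the decoupled precomputation of Section 3.1 and the repeated smoothing discussed here reduce to iterating one sparse matrix product. A sketch of the one-time preprocessing step, with an illustrative 2-node smoothing matrix (the downstream MLP or classifier is omitted):

```python
import numpy as np

def precompute_smoothed_features(A_hat, X, K):
    """Return [X, A_hat X, A_hat^2 X, ..., A_hat^K X], computed once up front.

    Training then only touches these K+1 feature views, never the graph,
    which is what makes decoupled GNNs scalable.
    """
    feats, cur = [X], X
    for _ in range(K):
        cur = A_hat @ cur            # one smoothing hop
        feats.append(cur)
    return feats

A_hat = np.array([[0.5, 0.5], [0.5, 0.5]])   # toy doubly-stochastic smoother
X = np.array([[1.0, 0.0], [0.0, 1.0]])
feats = precompute_smoothed_features(A_hat, X, K=3)
```

On this toy smoother every hop beyond the first already sits at the stationary state, a miniature preview of the over-smoothing analysis that follows.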
When adopting Â = D̃^(r−1) Ã D̃^(−r), Â^∞ follows

Â^∞_{i,j} = (d_i + 1)^r (d_j + 1)^(1−r) / (2m + n),  (3)

which shows that the influence from node v_i to v_j is determined only by their degrees. Under the extreme condition that r = 0, all the nodes within one connected component have exactly the same representation, making it impossible to apply the node embeddings to subsequent tasks. Here we introduce a new metric, “Over-smoothing Distance”, to measure each node's smoothing level. A smaller value indicates that the node is closer to the stationary state, i.e., closer to over-smoothing. Definition 3.1 (Over-smoothing Distance). The Over-smoothing Distance D_i(k), parameterized by node i and smoothing step k, is defined as

D_i(k) = Dis([Â^k X]_i, [Â^∞ X]_i),  (4)

where [Â^k X]_i denotes the i-th row of Â^k X, representing the representation of node v_i after smoothing k times; [Â^∞ X]_i denotes the i-th row of Â^∞ X, representing the stationary state of node v_i; and Dis(·) is a distance function, or any function positively correlated with the difference, which can be implemented using the Euclidean distance, the inverse of cosine similarity, etc.
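Eqs. 3 and 4 can be checked numerically on a small graph. A sketch with r = 1/2, Euclidean distance for Dis, and an illustrative connected 4-node graph (the Â^∞ formula should match high powers of Â, and the per-node distances should shrink with k):

```python
import numpy as np

# Toy connected graph and made-up features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
n = A.shape[0]
m = int(A.sum() // 2)

A_tilde = A + np.eye(n)
d = A.sum(axis=1)
D_inv_sqrt = np.diag((d + 1) ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # r = 1/2 smoother

# Eq. (3) with r = 1/2: A_hat^inf[i, j] = sqrt((d_i+1)(d_j+1)) / (2m + n)
A_inf = np.sqrt(np.outer(d + 1, d + 1)) / (2 * m + n)

def oversmoothing_distance(k):
    """D_i(k) of Eq. (4) for every node i, with Euclidean Dis."""
    Xk = np.linalg.matrix_power(A_hat, k) @ X
    return np.linalg.norm(Xk - A_inf @ X, axis=1)

D1, D50 = oversmoothing_distance(1), oversmoothing_distance(50)
# After 50 hops every node sits essentially at its stationary state.
```

NAFS's node-adaptive weights are built from exactly these per-node distances, so nodes with large D_i(k) keep contributing high-order information while nearly over-smoothed nodes are down-weighted.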
The paper presents NAFS (Node-Adaptive Feature Smoothing), a method that constructs node representations by relying on smoothing only, i.e., without parameter learning. To do this, the authors first provide a formulation for the smoothing operator after infinitely many steps, i.e., when the stationary state is reached. They then define the over-smoothing distance as a way to assess how close a node is to the stationary state after k smoothing steps. Finally, they use over-smoothing distances to calculate a different smoothing weight for each node. Experiments show that representations obtained by smoothing with these weights, together with feature ensembles obtained by applying different convolution coefficients, provide performance on clustering and link prediction tasks that is comparable to, if not better than, many other state-of-the-art approaches.
SP:3c5d850f257a0150def7087735e463d418160a04
NAFS: A Simple yet Tough-to-Beat Baseline for Graph Representation Learning
1 INTRODUCTION . In recent years , graph representation learning has been extensively applied in various application scenarios , such as node clustering , link prediction , node classification , and graph classification ( Kipf & Welling , 2016b ; a ; Hamilton et al. , 2017 ; Bo et al. , 2020 ; Hettige et al. , 2020 ; Wang et al. , 2016 ; Wu et al. , 2020 ; Abu-El-Haija et al. , 2019 ) . The goal of graph representation learning is to encode graph information to node embeddings . Traditional graph representation learning methods , such as DeepWalk ( Perozzi et al. , 2014 ) , Node2vec ( Grover & Leskovec , 2016 ) , LINE ( Tang et al. , 2015 ) , and ComE ( Cavallari et al. , 2017 ) merely focus on preserving graph structure information . GNN-based graph representation learning has attracted intensive interest by combining knowledge from both graph structure and node features . While most of these GNN-based methods are designed based on Graph AutoEncoder ( GAE ) and Variational Graph AutoEncoder ( VGAE ) ( Kipf & Welling , 2016b ) , these methods share two major limitations : Shallow Architecture . Previous work shows that although stacking multiple GNN layers in Graph Convolutional Network ( GCN ) ( Kipf & Welling , 2016a ) is capable of exploiting deep structural information , applying a large number of GNN layers might lead to indistinguishable node embeddings , i.e. , the over-smoothing issue ( Li et al. , 2018 ) . Therefore , most state-of-the-art GNNs resort to shallow architectures , which hinders the model from capturing long-range dependencies . Low Scalability . GNN-based graph representation learning methods can not scale well to large graphs due to the expensive computation cost and high memory usage . Most existing GNNs need to repeatedly perform the computationally expensive and recursive feature smoothing , which involves the participation of the entire graph at each training epoch . 
Furthermore , most methods adopt the same training loss function as GAE , which introduces high memory usage by storing the dense-form adjacency matrix on GPU . For a graph of size 200 million , its dense-form adjacency matrix requires a space of roughly 150GB , exceeding the memory capacity of the current powerful GPU devices . To tackle these issues , we propose a new graph representation learning method , which is embarrassingly simple : just smooth the node features and then combine the smoothed features in a node-adaptive manner . We name this method node-adaptive feature smoothing ( NAFS ) , and its goal is to construct better node embeddings that integrate the information from both graph structural information and node features . Based on the observation that different nodes have highly diverse “ smoothing speed ” , NAFS adaptively smooths each node feature and takes advantage of both loworder and high-order neighborhood information of each node . In addition , feature ensemble is also employed to combine the smoothed features extracted via different smoothing operators . Since NAFS is training-free , it significantly reduces the training cost and scales better to large graphs than most GNN-based graph representation learning methods . This paper is not meant to diminish the current advancements in GNN-based graph representation learning approaches . Instead , we aim to introduce an easier way to obtain high-quality node embeddings and understand the source of performance gains of these approaches better . Feature smoothing could be a promising direction towards a more simple and effective integration of information from both graph structure and node features . Our contributions are as follows : ( 1 ) New perspective . 
To the best of our knowledge , we are the first to explore the possibility that simple feature smoothing without any trainable parameters could even outperform state-of-the-art GNNs ; this incredible finding opens up a new direction towards efficient and scalable graph representation learning . ( 2 ) Novel method . We propose NAFS , a nodeadaptive feature smoothing approach along with various feature ensemble strategies , to fully exploit knowledge from both the graph structure and node features . ( 3 ) State-of-the-art performance . We evaluate the effectiveness and efficiency of NAFS on real-world datasets across various graph-based tasks , including node clustering and link prediction . Empirical results demonstrate that NAFS performs comparably with or even outperforms the state-of-the-art GNNs , and achieves up to two orders of magnitude speedup . In particular , on PubMed , NAFS outperforms GAE ( Kipf & Welling , 2016b ) and AGE ( Cui et al. , 2020 ) by a margin of 9.0 % and 3.8 % in terms of NMI in node clustering , while achieving up to 65.4× and 88.6× training speedups , respectively . 2 PRELIMINARY . In this section , we first explain the notations and problem formulation . Then , we review current GNNs and GNN-based graph representation learning . 2.1 NOTATIONS AND PROBLEM FORMULATION .. In this paper , we consider an undirected graph G = ( V , E ) with |V| = n nodes and |E| = m edges . Here we suppose that m ∝ n as it is the case in most real-world graphs . We denote by A the adjacency matrix of G. Each node can possibly have a feature vector of size f , which stacks up to an n× f feature matrix X . The degree matrix of A is denoted as D = diag ( d1 , d2 , · · · , dn ) ∈ Rn×n , where di = ∑ vj∈V Aij . We denote the final node embedding matrix as Z , and evaluate it in both the node clustering and the link prediction tasks . 
The node clustering task requires the model to partition the nodes into c disjoint groups G1 , G2 , · · · , Gc , where similar nodes should be in the same group . The target of the link prediction task is to predict whether an edge exists between given node pairs . 2.2 GRAPH CONVOLUTIONAL NETWORK .. Based on the assumption that locally connected nodes are likely to enjoy high similarity ( McPherson et al. , 2001 ) , each node in most GNN models iteratively smooths the representations of its neighbors for better node embedding . Below is the formula of the l-th GCN layer ( Kipf & Welling , 2016a ) : X ( l ) = δ ( ÂX ( l−1 ) Θ ( l ) ) ,  = D̃−1/2ÃD̃−1/2 , à = A + In , ( 1 ) where X ( l ) is the node embedding matrix at layer l , X ( 0 ) is the original feature matrix , Θ ( l ) are the trainable weights , and δ is the activation function .  is the smoothing matrix that helps each node to smooth representations of neighboring nodes . As shown in Eq . 1 , each GCN layer contains two operations : feature aggregation ( smoothing ) and feature transformation . Figure 1 shows the framework of a two-layer GCN . The l-th layer in GCN firstly executes feature smoothing on the node embedding X ( l−1 ) . Then , the smoothed feature X̃ ( l−1 ) is transformed with trainable weights Θ ( l ) and activation function δ to generate new node embedding X ( l ) . Note that GCN will degrade to MLP if feature smoothing is removed from each layer . 2.3 GNN-BASED GRAPH REPRESENTATION LEARNING .. GAE ( Kipf & Welling , 2016b ) , the first and the most representative GNN-based graph embedding method , adopts an encoder to generate node embedding matrix Z with inputs  and X . A simple inner product decoder is then used to reconstruct the adjacency matrix . The final training loss of GAE is the binary entropy loss between A′ and à , the reconstructed adjacency matrix and the original adjacency matrix with self loop added . 
A′ = sigmoid ( Z · ZT ) , L = ∑ 1≤i , j≤n −Ãi , j logA′i , j − ( 1− Ãi , j ) log ( 1−A′i , j ) ) . ( 2 ) Motivated by GAE , lots of GNN-based graph representation learning methods are proposed recently . MGAE ( Wang et al. , 2017 ) presents a denoising marginalized autoencoder that reconstructs the node feature matrix X. ARGA ( Pan et al. , 2018 ) adopts the adversarial learning strategy , and its generated node embeddings are forced to match a prior distribution . DAEGC ( Wang et al. , 2019 ) exploits side information to generate node embeddings in a self-supervised way . AGC ( Zhang et al. , 2019 ) proposes an improved filter matrix to better filter out the high-frequency noise . AGE ( Cui et al. , 2020 ) further improves AGC by using the similarity of embedding rather than the adjacency matrix to consider original node feature information . Compared with GNN-based graph representation learning methods that rely on the trainable parameters to learn node embeddings , our NAFS is training-free and thus enjoys higher efficiency and scalability . 3 OBSERVATION AND INSIGHT . In this section , we make a quantitative analysis on the over-smoothing issue at the node level and then provide some insights when designing NAFS on graphs . 3.1 FEATURE SMOOTHING IN DECOUPLED GNNS . Recently , many works ( Wu et al. , 2019 ; Zhu & Koniusz , 2021 ; Chen et al. , 2020 ; Zhang et al. , 2021 ) propose to decouple the feature smoothing and feature transformation in each GCN layer for scalable node classification . Concretely , they execute the feature smoothing operation in advance , and the smoothed features are then fed into a simple MLP to generate the final predicted node labels . Under this framework , the predictive node classification accuracy of these methods is comparable with or even higher than the one of coupled GNNs , and these works claim that the true power of GNNs lies in feature smoothing rather than feature transformation . 
We split the framework of these decoupled GNNs into two parts : feature smoothing and MLP training . Feature smoothing aims to combine the graph structural information and node features into better features for the subsequent MLP ; while MLP training only takes in the smoothed feature and is specially trained for a given task . As stated by previous decoupled GNNs ( Wu et al. , 2019 ; Zhu & Koniusz , 2021 ) , the true success of GNNs lies in feature smoothing rather than feature transformation . Correspondingly , we propose to remove feature transformation and preserve the key feature smoothing part alone for simple and scalable node representation . There is another branch of GNNs that also decouple the feature smoothing and feature transformation . The most representative method of this category is APPNP ( Klicpera et al. , 2018 ) . It first feeds the raw node features into an MLP to generate intermediate node embeddings ; then the personalized PageRank based propagation operations are performed on the node embeddings to produce final prediction results . However , compared with scalable decoupled GNNs mentioned in the previous paragraph , this branch of GNNs still have to recursively execute propagation operations in each training epoch , which makes it impossible to perform on large-scale graphs . In the remaining part of this paper , the terminology “ decoupled GNNs ” refers particularly to the scalable decoupled GNNs mentioned in the previous two paragraphs . 3.2 MEASURING SMOOTHING LEVEL . To capture deep graph structural information , a straightforward way is to simply stack multiple GNN layers . However , a large number of feature smoothing operations in a GNN model would lead to indistinguishable node embeddings , i.e. , the over-smoothing issue ( Li et al. , 2018 ) . Concretely , if we execute ÂX for infinite times , the node embeddings within the same connected component would reach a stationary state . 
When adopting $\hat{A} = \tilde{D}^{r-1}\tilde{A}\tilde{D}^{-r}$, the stationary state $\hat{A}^{\infty}$ follows $\hat{A}^{\infty}_{i,j} = \frac{(d_i + 1)^r (d_j + 1)^{1-r}}{2m + n}, \qquad (3)$ which shows that the influence from node $v_i$ to $v_j$ is determined only by their degrees. Under the extreme condition that $r = 0$, all the nodes within one connected component have exactly the same representation, making it impossible to apply the node embeddings to subsequent tasks. Here we introduce a new metric, "Over-smoothing Distance", to measure each node's smoothing level. A smaller value indicates that the node is closer to the stationary state, i.e., closer to over-smoothing. Definition 3.1 (Over-smoothing Distance). The Over-smoothing Distance $D_i(k)$, parameterized by node $i$ and smoothing step $k$, is defined as $D_i(k) = \mathrm{Dis}([\hat{A}^k X]_i, [\hat{A}^{\infty} X]_i), \qquad (4)$ where $[\hat{A}^k X]_i$ denotes the $i$th row of $\hat{A}^k X$, representing the representation of node $v_i$ after smoothing $k$ times; $[\hat{A}^{\infty} X]_i$ denotes the $i$th row of $\hat{A}^{\infty} X$, representing the stationary state of node $v_i$; and $\mathrm{Dis}(\cdot)$ is a distance function, or any function positively correlated with the difference, which can be implemented using Euclidean distance, the inverse of cosine similarity, etc.
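Definition 3.1 together with the closed-form stationary state of Eq. (3) can be checked numerically. The sketch below uses Euclidean distance as Dis and symmetric normalization ($r = 0.5$); the function name and the toy graph are illustrative assumptions, and Eq. (3) is applied assuming a connected graph.

```python
import numpy as np

def over_smoothing_distance(A, X, k, r=0.5):
    """Sketch of Def. 3.1: D_i(k) = Dis([A_hat^k X]_i, [A_hat^inf X]_i),
    with the stationary state A_hat^inf taken from the closed form of
    Eq. (3). Assumes a connected, undirected graph."""
    n = A.shape[0]
    m = int(A.sum()) // 2
    d = A.sum(axis=1)
    A_tilde = A + np.eye(n)
    A_hat = np.diag((d + 1) ** (r - 1)) @ A_tilde @ np.diag((d + 1) ** (-r))
    A_inf = np.outer((d + 1) ** r, (d + 1) ** (1 - r)) / (2 * m + n)  # Eq. (3)
    Xk = X.copy()
    for _ in range(k):
        Xk = A_hat @ Xk
    return np.linalg.norm(Xk - A_inf @ X, axis=1)  # one distance per node

# On a small path graph, more smoothing steps move the nodes,
# in aggregate, closer to their stationary states.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(3, 4))
d1 = over_smoothing_distance(A, X, k=1)
d8 = over_smoothing_distance(A, X, k=8)
```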
The authors of this paper took a novel perspective to present the node-adaptive feature smoothing (NAFS) algorithm, which generates node embeddings without explicit training/parameter learning. The method first performs feature smoothing, then combines the smoothed features using adaptive weights which are node-specific. They further enhanced this method by ensembling the smoothed features extracted with different hyper-parameters. The authors have conducted many experiments to validate the model performance, and demonstrate the model's efficiency empirically.
SP:3c5d850f257a0150def7087735e463d418160a04
NAFS: A Simple yet Tough-to-Beat Baseline for Graph Representation Learning
1 INTRODUCTION. In recent years, graph representation learning has been extensively applied in various application scenarios, such as node clustering, link prediction, node classification, and graph classification (Kipf & Welling, 2016b;a; Hamilton et al., 2017; Bo et al., 2020; Hettige et al., 2020; Wang et al., 2016; Wu et al., 2020; Abu-El-Haija et al., 2019). The goal of graph representation learning is to encode graph information into node embeddings. Traditional graph representation learning methods, such as DeepWalk (Perozzi et al., 2014), Node2vec (Grover & Leskovec, 2016), LINE (Tang et al., 2015), and ComE (Cavallari et al., 2017), merely focus on preserving graph structure information. GNN-based graph representation learning has attracted intensive interest by combining knowledge from both graph structure and node features. Most of these GNN-based methods are designed based on Graph AutoEncoder (GAE) and Variational Graph AutoEncoder (VGAE) (Kipf & Welling, 2016b), and they share two major limitations: Shallow Architecture. Previous work shows that although stacking multiple GNN layers in Graph Convolutional Network (GCN) (Kipf & Welling, 2016a) is capable of exploiting deep structural information, applying a large number of GNN layers might lead to indistinguishable node embeddings, i.e., the over-smoothing issue (Li et al., 2018). Therefore, most state-of-the-art GNNs resort to shallow architectures, which hinders the model from capturing long-range dependencies. Low Scalability. GNN-based graph representation learning methods cannot scale well to large graphs due to their expensive computation cost and high memory usage. Most existing GNNs need to repeatedly perform the computationally expensive and recursive feature smoothing, which involves the participation of the entire graph at each training epoch.
Furthermore, most methods adopt the same training loss function as GAE, which introduces high memory usage by storing the dense-form adjacency matrix on the GPU. For a graph of size 200 million, the dense-form adjacency matrix requires roughly 150GB of space, exceeding the memory capacity of even the most powerful current GPU devices. To tackle these issues, we propose a new graph representation learning method which is embarrassingly simple: just smooth the node features and then combine the smoothed features in a node-adaptive manner. We name this method node-adaptive feature smoothing (NAFS); its goal is to construct better node embeddings that integrate both graph structural information and node features. Based on the observation that different nodes have highly diverse "smoothing speeds", NAFS adaptively smooths each node's features and takes advantage of both the low-order and high-order neighborhood information of each node. In addition, feature ensembling is employed to combine the smoothed features extracted via different smoothing operators. Since NAFS is training-free, it significantly reduces the training cost and scales better to large graphs than most GNN-based graph representation learning methods. This paper is not meant to diminish the current advancements in GNN-based graph representation learning approaches. Instead, we aim to introduce an easier way to obtain high-quality node embeddings and to better understand the source of the performance gains of these approaches. Feature smoothing could be a promising direction towards a simpler and more effective integration of information from both graph structure and node features. Our contributions are as follows: (1) New perspective.
To the best of our knowledge, we are the first to explore the possibility that simple feature smoothing without any trainable parameters can even outperform state-of-the-art GNNs; this surprising finding opens up a new direction towards efficient and scalable graph representation learning. (2) Novel method. We propose NAFS, a node-adaptive feature smoothing approach along with various feature ensemble strategies, to fully exploit knowledge from both the graph structure and node features. (3) State-of-the-art performance. We evaluate the effectiveness and efficiency of NAFS on real-world datasets across various graph-based tasks, including node clustering and link prediction. Empirical results demonstrate that NAFS performs comparably with or even outperforms state-of-the-art GNNs, while achieving up to two orders of magnitude speedup. In particular, on PubMed, NAFS outperforms GAE (Kipf & Welling, 2016b) and AGE (Cui et al., 2020) by margins of 9.0% and 3.8% in terms of NMI in node clustering, while achieving up to 65.4× and 88.6× training speedups, respectively. 2 PRELIMINARY. In this section, we first explain the notations and problem formulation. Then, we review current GNNs and GNN-based graph representation learning. 2.1 NOTATIONS AND PROBLEM FORMULATION. In this paper, we consider an undirected graph $G = (V, E)$ with $|V| = n$ nodes and $|E| = m$ edges. Here we suppose that $m \propto n$, as is the case in most real-world graphs. We denote by $A$ the adjacency matrix of $G$. Each node can possibly have a feature vector of size $f$, and these stack up to an $n \times f$ feature matrix $X$. The degree matrix of $A$ is denoted as $D = \mathrm{diag}(d_1, d_2, \cdots, d_n) \in \mathbb{R}^{n \times n}$, where $d_i = \sum_{v_j \in V} A_{ij}$. We denote the final node embedding matrix as $Z$, and evaluate it on both the node clustering and link prediction tasks.
The node clustering task requires the model to partition the nodes into $c$ disjoint groups $G_1, G_2, \cdots, G_c$, where similar nodes should fall into the same group. The target of the link prediction task is to predict whether an edge exists between given node pairs. 2.2 GRAPH CONVOLUTIONAL NETWORK. Based on the assumption that locally connected nodes are likely to enjoy high similarity (McPherson et al., 2001), each node in most GNN models iteratively smooths the representations of its neighbors for a better node embedding. The $l$-th GCN layer (Kipf & Welling, 2016a) is formulated as $X^{(l)} = \delta(\hat{A} X^{(l-1)} \Theta^{(l)}), \quad \hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}, \quad \tilde{A} = A + I_n, \qquad (1)$ where $X^{(l)}$ is the node embedding matrix at layer $l$, $X^{(0)}$ is the original feature matrix, $\Theta^{(l)}$ are the trainable weights, and $\delta$ is the activation function. $\hat{A}$ is the smoothing matrix that lets each node smooth the representations of its neighboring nodes. As shown in Eq. 1, each GCN layer contains two operations: feature aggregation (smoothing) and feature transformation. Figure 1 shows the framework of a two-layer GCN. The $l$-th layer in GCN first executes feature smoothing on the node embedding $X^{(l-1)}$. Then, the smoothed feature $\tilde{X}^{(l-1)}$ is transformed with the trainable weights $\Theta^{(l)}$ and activation function $\delta$ to generate the new node embedding $X^{(l)}$. Note that GCN degrades to an MLP if feature smoothing is removed from each layer. 2.3 GNN-BASED GRAPH REPRESENTATION LEARNING. GAE (Kipf & Welling, 2016b), the first and most representative GNN-based graph embedding method, adopts an encoder to generate the node embedding matrix $Z$ from inputs $\hat{A}$ and $X$. A simple inner product decoder is then used to reconstruct the adjacency matrix. The final training loss of GAE is the binary cross-entropy loss between $A'$, the reconstructed adjacency matrix, and $\tilde{A}$, the original adjacency matrix with self-loops added.
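The inner-product decoder and binary cross-entropy reconstruction loss of GAE just described can be sketched in a few lines; `gae_reconstruction_loss` and the toy embeddings are illustrative assumptions, not code from the paper.

```python
import numpy as np

def gae_reconstruction_loss(Z, A):
    """A' = sigmoid(Z Z^T); binary cross-entropy against the original
    adjacency matrix with self-loops added, A~ = A + I."""
    A_tilde = A + np.eye(A.shape[0])
    A_prime = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
    eps = 1e-12  # numerical safety inside the logs
    return float(np.sum(-A_tilde * np.log(A_prime + eps)
                        - (1 - A_tilde) * np.log(1 - A_prime + eps)))

# Two nodes joined by one edge: embeddings that agree predict the
# edge (low loss); embeddings that disagree deny it (high loss).
A = np.array([[0, 1], [1, 0]], dtype=float)
Z_good = np.array([[2.0, 0.0], [2.0, 0.0]])
Z_bad = np.array([[2.0, 0.0], [-2.0, 0.0]])
```

Note that the loss compares every node pair, which is why storing the dense reconstruction becomes the memory bottleneck on large graphs, as discussed in the introduction.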
The paper deals with (unsupervised) learning on graphs, specifically node-level tasks. Inspired by spectral GNNs, specifically Graph Convolutional Networks (GCN), the authors propose a simple neighborhood smoothing technique to capture the graph structure around each node in the given graph. Contrary to GNNs, the proposed algorithm is parameter-free and hence scales better than its end-to-end trained GNN counterparts. Motivated by the problem of over-smoothing in GCN (Li et al., 2018), they propose the so-called "over-smoothing distance" measuring how close a node's feature is to being over-smoothed, which is simply the row-wise distance between $\hat{A}^k X$ and $\hat{A}^{\infty}X$ with regard to the Euclidean distance, see Def. 3.1. Based on this distance they define the smoothing weight matrix, which is then used to weight the neighboring node features during neighborhood aggregation. The resulting features $\hat{X}$ are computed as $\hat{X} = \sum^{K}_{k=0} W(k)\hat{A}^k X$, where $W(k)$ is the $k$th smoothing weight matrix. The proposed architecture is evaluated on standard, old, small-scale (unsupervised) link prediction and node classification tasks (Cora, Citeseer, PubMed), showing comparable classification performance while offering a considerable speedup in computation time.
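The node-adaptive combination summarized above, $\hat{X} = \sum_k W(k)\hat{A}^k X$, can be sketched as follows. The softmax over over-smoothing distances used to form the per-node weights is an illustrative assumption; the paper's exact weighting scheme is not reproduced here, and the function name and toy graph are hypothetical.

```python
import numpy as np

def nafs_embedding(A_hat, A_inf, X, K):
    """Sketch of node-adaptive feature smoothing: combine the k-step
    smoothed features with node-specific weights derived from each
    node's over-smoothing distance (softmax weighting is an assumption
    for illustration)."""
    feats, Xk = [X], X
    for _ in range(K):
        Xk = A_hat @ Xk
        feats.append(Xk)
    X_inf = A_inf @ X
    # (K+1, n): distance of each node's k-step feature to its stationary state
    D = np.stack([np.linalg.norm(F - X_inf, axis=1) for F in feats])
    E = np.exp(D - D.max(axis=0, keepdims=True))
    W = E / E.sum(axis=0, keepdims=True)  # per-node weights over k
    return sum(W[k][:, None] * feats[k] for k in range(K + 1))

# Toy path graph with symmetric normalization (r = 0.5).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
d = A.sum(axis=1)
A_hat = np.diag((d + 1) ** -0.5) @ (A + np.eye(3)) @ np.diag((d + 1) ** -0.5)
A_inf = np.outer(np.sqrt(d + 1), np.sqrt(d + 1)) / (2 * 2 + 3)
Z = nafs_embedding(A_hat, A_inf, np.random.default_rng(1).normal(size=(3, 4)), K=3)
```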
Entroformer: A Transformer-based Entropy Model for Learned Image Compression
1 INTRODUCTION. Image compression is a fundamental research field in computer vision. With the development of deep learning, learned methods have led to several breakthroughs in this task. Currently, the state-of-the-art (SOTA) deep image compression models are built on the auto-encoder framework (Hinton & Salakhutdinov, 2006) with an entropy-constrained bottleneck (Theis et al., 2017; Ballé et al., 2017; Ballé et al., 2018; Mentzer et al., 2018; Minnen et al., 2018a; Lee et al., 2019; Guo et al., 2021). An entropy model estimates the conditional probability distribution of the latents, which are then compressed by standard entropy coding algorithms. In this paper, we focus on improving the predictive ability of the entropy model, which leads to higher compression rates without increasing distortion. One of the main challenges for the entropy model is how to learn representations of various dependencies. For example, as shown in Figure 1, the edges have long-range dependencies; the five hats are likely to be related in shape; and the region of sky has identical color. Some works use local context (Minnen et al., 2018a; Lee et al., 2019; Mentzer et al., 2018) or additional side information (Ballé et al., 2018; Hu et al., 2020; Minnen et al., 2018b) as short-range spatial dependencies, while others use non-local mechanisms (Li et al., 2020; Qian et al., 2021; Cheng et al., 2020; Chen et al., 2021) to capture long-range spatial dependencies. However, these CNN-based methods remain limited in capturing long-range spatial dependencies. Another main challenge is the trade-off between performance and decoding speed. Though previous context-based methods use a unidirectional context model to improve predictive ability, they suffer from slow decoding: symbols are decoded in raster-scan order, an O(n) serial process that cannot be accelerated by modern GPUs.
A two-pass parallel context model (He et al., 2021) was introduced for acceleration; it decodes symbols in a particular order that minimizes serial processing. However, this parallel context model uses weak context information, which degrades compression performance. In this paper, we propose a novel transformer-based entropy model, termed Entroformer, to address the above two challenges in one shot. Following the Transformer (Vaswani et al., 2017), self-attention is used to relate different positions of a single latent in order to compute a representation of the latents. In Entroformer, spatial and content-based dependencies are jointly taken into account in both the hyperprior and the context model. To further optimize Entroformer for image compression, we introduce the following designs: (1) Entroformer uses multi-head attention with top-k selection to distill information in representation subspaces: the multi-head attention provides a learning-based partition to learn dependencies from different views, and the top-k scheme filters noisy signals by selecting the k most similar latents in the self-attention module, which is crucial for the convergence of Entroformer. (2) To inherit the local bias of CNNs, a novel position encoding unit is designed to provide a better spatial representation for image compression. This position encoding is based on a relative position encoding with a diamond-shaped boundary (Diamond RPE). Based on an empirical evaluation of how relative position affects the rate, Diamond RPE takes full advantage of prior information for image compression. (3) The two-pass decoding framework (He et al., 2021) is utilized to speed up the decoding of Entroformer. A bidirectional context with long-range information is introduced instead of the local checkerboard context, which helps counteract the performance degradation.
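Design (1), top-k selection in self-attention, amounts to masking all but the k largest logits per query before the softmax. The following is a hedged sketch of that mechanism (illustrative name; the paper's exact implementation may differ).

```python
import numpy as np

def topk_softmax(scores, k):
    """Top-k attention selection (a sketch): keep only the k largest
    logits in each row and mask the rest to -inf before the softmax,
    so each query attends to its k most similar keys. Ties at the
    threshold may keep more than k entries."""
    n = scores.shape[-1]
    if k < n:
        # Per-row k-th largest logit, with a kept axis for broadcasting.
        kth = np.partition(scores, n - k, axis=-1)[..., n - k:n - k + 1]
        scores = np.where(scores >= kth, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One query over four keys, keeping only the two most similar.
weights = topk_softmax(np.array([[1.0, 5.0, 3.0, 2.0]]), k=2)
```

Masked entries get exactly zero weight, so the noisy low-similarity keys contribute nothing to the attended representation.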
In summary, our contributions include: • Entroformer is proposed to improve learned image compression. To our knowledge, it is the first successful attempt to introduce a Transformer-based method to the image compression task. Experiments show that our method outperforms the most advanced CNN-based methods by 5.2% and the standard codec BPG (Bellard, 2014) by 20.5% at low bit rates. • To further optimize Entroformer for image compression, the multi-head scheme with top-k selection is proposed to capture precise dependencies for probability distribution estimation, and the diamond relative position encoding (diamond RPE) is proposed for better positional encoding. Experiments show that these designs keep training stable and obtain superior results on the image compression task. • With the help of the bidirectional context and the two-pass decoding framework, our parallel Entroformer is more time-efficient than the serialized one on modern GPU devices, without performance degradation. 2 COMPRESSION WITH HYPERPRIOR AND CONTEXT. There are several types of entropy models, such as hyperprior models (Minnen et al., 2018b; Ballé et al., 2018), context models (Li et al., 2018; Mentzer et al., 2018), and combined methods (Lee et al., 2019; Minnen et al., 2018a; Minnen & Singh, 2020). Hyperprior methods typically make use of side information about the quantized latent representation at the cost of additional bits. Context models learn autoregressive priors incorporating predictions from the causal context of the latents, which is bit-free. Our Entroformer combines a context model with a hyperprior based on previous works (Minnen et al., 2018b; Ballé et al., 2018). We describe the architecture of the compression model with Entroformer in Figure 2. The main autoencoder learns a quantized latent representation ŷ of image x. The reconstructed image is denoted as x̂. The hyper-autoencoder learns a quantized hyper-latent representation ẑ.
Following the work of Ballé et al. (2016), the quantization operation is approximated by additive uniform noise during training. This ensures a good match between the encoder and decoder distributions of both the quantized latents and the continuous-valued latents subjected to additive uniform noise during training. Following the work of Minnen et al. (2018a), we model each latent $\hat{y}_i$ as a Gaussian with mean $\mu_i$ and standard deviation $\sigma_i$, convolved with a unit uniform distribution: $p_{\hat{y}}(\hat{y} \mid \hat{z}, \theta) = \prod_i \left( \mathcal{N}(\mu_i, \sigma_i^2) * \mathcal{U}(-0.5, 0.5) \right)(\hat{y}_i), \qquad (1)$ where $\mu$ and $\sigma$ are predicted by the entropy model and $\theta$ denotes the parameters of the entropy model. The entropy model consists of the hyper-autoencoder, the context model, and the parameter networks. Since we do not make any assumptions about the distribution of the hyper-latents, a non-parametric, fully factorized density model is used. The training goal for learned image compression is to optimize the trade-off between the estimated coding length of the bitstream and the quality of the reconstruction, i.e., a rate-distortion optimization problem: $\mathcal{L} = R + \lambda D = \mathbb{E}_{x \sim p_x}[-\log_2 p_{\hat{y}}(\hat{y})] + \mathbb{E}_{x \sim p_x}[-\log_2 p_{\hat{z}}(\hat{z})] + \lambda \cdot \mathbb{E}_{x \sim p_x} \|x - \hat{x}\|_2^2, \qquad (2)$ where $\lambda$ is the coefficient that controls the rate-distortion trade-off, and $p_x$ is the unknown distribution of natural images; $p_{\hat{z}}$ uses a fully factorized density model. The first term represents the estimated compression rate of the latent representation, the second term the estimated compression rate of the hyper-latent representation, and the third term the distortion under a given metric, such as mean squared error (MSE). 3 TRANSFORMER-BASED ENTROPY MODEL. Our model contains two main components: a CNN-based autoencoder to learn a latent representation, and a transformer-based entropy model to predict the latents.
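The discretized likelihood of Eq. (1) and the rate term of Eq. (2) reduce, per symbol, to a Gaussian CDF difference over the unit interval around the quantized value. The sketch below illustrates that rate estimate; `latent_rate_bits` is a hypothetical name, and this is a sketch of the computation rather than the paper's implementation.

```python
import math

def latent_rate_bits(y_hat, mu, sigma):
    """Rate estimate under Eq. (1): each quantized symbol's probability
    is a Gaussian convolved with U(-0.5, 0.5), i.e. the Gaussian CDF
    difference over [y - 0.5, y + 0.5]; its code length is -log2 p."""
    def cdf(x, m, s):
        return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
    bits = 0.0
    for y, m, s in zip(y_hat, mu, sigma):
        p = cdf(y + 0.5, m, s) - cdf(y - 0.5, m, s)
        bits += -math.log2(max(p, 1e-12))  # clamp for numerical safety
    return bits
```

A symbol the entropy model predicts well (mean close to the symbol, small sigma) costs almost nothing to code, while a poorly predicted one is expensive, which is exactly what the rate terms in Eq. (2) penalize.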
We describe our architecture in Figure 2(a), and our Entroformer in detail in Figure 2(b). In this section, we first propose two ingredients, a diamond relative position encoding (diamond RPE) and a top-k scheme, which are essential for image compression. Then, we extend the checkerboard context model (He et al., 2021) to a parallel bidirectional context model. 3.1 TRANSFORMER ARCHITECTURE. In the entropy model design we follow the original Transformer (Vaswani et al., 2017), which employs an encoder-decoder structure. We model the latent representation $\hat{y} \in \mathbb{R}^{H \times W \times C}$ as a sequence $\hat{y}_p \in \mathbb{R}^{(H \cdot W) \times C}$, where $(H, W)$ is the resolution of the latents and $C$ is the number of channels. The latent sequence is then mapped to $D$ dimensions with a trainable linear projection. For the hyperprior, we stack $N$ transformer encoder layers to yield the hyper-latent $z$, which is quantized and fed to $N$ transformer encoder layers to generate the hyperprior. To produce a hierarchical representation, the feature resolution is changed by downsampling and upscaling modules. For the autoregressive prior, we stack $2N$ transformer decoder layers to generate autoregressive features. To generate the Gaussian parameters, a linear layer is attached to the combination of the hyperprior features and the context features. For any sequence of length $n$, the vanilla attention in the transformer is dot-product attention (Vaswani et al., 2017). Following the standard notation, we compute the matrix of outputs via linear projections of the input matrix $X \in \mathbb{R}^{n \times d_m}$: $\mathrm{Attention}(X) = \mathrm{softmax}\!\left(\frac{X W^Q (X W^K)^T}{\sqrt{d_k}}\right) X W^V, \qquad (3)$ where $W^Q, W^K \in \mathbb{R}^{d_m \times d_k}$ and $W^V \in \mathbb{R}^{d_m \times d_v}$ are learned parameter matrices. 3.2 POSITION ENCODING. To explore the impact of position in image compression, we first design a transformer-based entropy model without any position encoding. We feed a random mask to the self-attention during training.
During testing, we evaluate the effect of each position $i$ by employing a corresponding mask, which is set to 1 everywhere except at position $i$. Figure 3 plots the impact on bit rate of each position, compared to the result using the context of all positions. This result highlights how the rate is affected by the position of the context, and provides an empirical guideline for the design of a new position encoding. To incorporate spatial information of the latents, one common approach is to use biased attention weights based on relative relations (Shaw et al., 2018). Based on the result in Figure 3, we propose an extension to this relation-aware self-attention that accounts for the influence of element position on image compression performance. Each attention operates on an input sequence $x = (x_1, \dots, x_n)$ of $n$ elements, where $x_i \in \mathbb{R}^{d_m}$. The edge between input elements $x_i$ and $x_j$ is represented by a vector $p^K_{ij} \in \mathbb{R}^{d_k}$. We modify the dot-product attention matrix in Eq. 3 to a compatibility function that compares two input elements in relation-aware self-attention: $e_{ij} = \frac{x_i W^Q (x_j W^K + p^K_{ij})^T}{\sqrt{d_k}}. \qquad (4)$ The weight coefficients $A$ are computed by applying a softmax to the dot-product attention matrix. On the image plane, the relative position $a_{i-j}$ is a 2D coordinate, and we modify the maximum absolute relative position to a 2D boundary with a diamond shape. As shown in Figure 3, we observe that closer context latents save more bit rate in the context model, which tells us how the distance of context latents influences the bit saving of context modeling in learned image compression. Therefore, we consider a 2D boundary with a maximum Hamming distance value of $h$: $p^K_{ij} = w^K_{\mathrm{clip}(a_{i-j},\, h)}, \quad \mathrm{clip}(a_{i-j}, h) = \begin{cases} a_{i-j} & \|a_{i-j}\|_1 \le h \\ (h, h) & \text{otherwise.} \end{cases} \qquad (5)$ We then learn the relative position encoding $w^K = (w^K_{-h,-h}, w^K_{-h,-h+1}, \dots, w^K_{h,h-1}, w^K_{h,h})$.
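The relation-aware logits of Eq. (4) with the diamond clipping of Eq. (5) can be sketched as follows. The 1D latent layout, tiny dimensions, and function names are illustrative assumptions; in the paper the offsets are genuine 2D coordinates on the latent plane.

```python
import numpy as np

def clip_diamond(a, h):
    """Eq. (5): keep a 2D relative offset a = (dy, dx) if it lies inside
    the diamond ||a||_1 <= h; otherwise map it to the shared
    out-of-range bucket (h, h)."""
    dy, dx = a
    return (dy, dx) if abs(dy) + abs(dx) <= h else (h, h)

def relation_aware_logits(x, WQ, WK, pK, h):
    """Eq. (4): e_ij = x_i WQ (x_j WK + p^K_ij)^T / sqrt(d_k), where
    p^K_ij is looked up from the table pK by the clipped 2D offset.
    The n latents here are laid out on a 1D line, so offsets are (0, i-j)."""
    n, dk = x.shape[0], WK.shape[1]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            off = clip_diamond((0, i - j), h)
            e[i, j] = (x[i] @ WQ) @ (x[j] @ WK + pK[off]) / np.sqrt(dk)
    return e

# Toy setup: the RPE table covers offsets inside the diamond plus the
# shared out-of-range bucket (h, h).
h, dm, dk = 2, 8, 4
rng = np.random.default_rng(0)
pK = {(dy, dx): rng.normal(size=dk)
      for dy in range(-h, h + 1) for dx in range(-h, h + 1)
      if abs(dy) + abs(dx) <= h}
pK[(h, h)] = rng.normal(size=dk)
x = rng.normal(size=(4, dm))
e = relation_aware_logits(x, rng.normal(size=(dm, dk)),
                          rng.normal(size=(dm, dk)), pK, h)
```

All offsets outside the diamond share one learned embedding, so the table stays small while nearby offsets, which Figure 3 shows matter most for the rate, each get their own parameters.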
This work introduces a transformer-based entropy coding model for learned image compression. The backbones of the commonly used hyperprior encoder and decoder are replaced with transformer encoder layers, and the context model is replaced with the transformer decoder. The striking features of the proposed method include (1) a diamond-shaped relative position encoding with clipping, (2) a top-k selection scheme, and (3) a parallel bi-directional context model. The parallel bi-directional context model is derivative work from He et al. (2021).
SP:58ce187d0a0ffb7bf0697c4c3b6f2fdd989596c1
Entroformer: A Transformer-based Entropy Model for Learned Image Compression
1 INTRODUCTION . Image compression is a fundamental research field in computer vision . With the development of deep learning , learned methods have led to several breakthroughs in this task . Currently , the state-ofthe-art ( SOTA ) deep image compression models are built on the auto-encoder framework ( Hinton & Salakhutdinov , 2006 ) with an entropy-constrained bottleneck ( Theis et al. , 2017 ; Ballé et al. , 2017 ; Ballé et al. , 2018 ; Mentzer et al. , 2018 ; Minnen et al. , 2018a ; Lee et al. , 2019 ; Guo et al. , 2021 ) . An entropy model estimates the conditional probability distribution of latents for compression by standard entropy coding algorithms . In this paper , we focus on improving the predictive ability of the entropy model , which leads to higher compression rates without increasing distortion . One of the main challenges of the entropy model is how to learn representation for various dependencies . For example , as shown in Figure 1 , the edges have long-range dependencies ; and the five hats are likely to be related in shape ; the region of the sky has identical color . Some works use local context ( Minnen et al. , 2018a ; Lee et al. , 2019 ; Mentzer et al. , 2018 ) or additional side information ( Ballé et al. , 2018 ; Hu et al. , 2020 ; Minnen et al. , 2018b ) as short-range spatial dependencies , while others use non-local mechanism ( Li et al. , 2020 ; Qian et al. , 2021 ; Cheng et al. , 2020 ; Chen et al. , 2021 ) to capture long-range spatial dependencies . However , the constraint of capturing long-range spatial dependencies still remains in these CNN-based methods . Another main challenge is the trade-off between performance and decoding speed . Though previous context based methods use unidirectional context model to improve the predictive ability , it suffers from slow decoding speed . It decodes symbols in a raster-scan order with O ( n ) serial process that can not be accelerated by modern GPUs . 
A two-pass parallel context model ( He et al. , 2021 ) is introduced for acceleration , which decodes symbols in a particular order to minimize serial processing . However , this parallel context model uses a weak context information , which degrades the compression performance . In this paper , we propose a novel transformer-based entropy model termed as Entroformer to address the above two challenges in one shot . Following the Transformer ( Vaswani et al. , 2017 ) , selfattention is used to relate different positions of a single latent in order to compute a representation of the latents . In Entroformer , spatial and content based dependencies are jointly taken into account in both hyperprior and context model . To further optimize Entroformer for image compression , the following designs are innovated : ( 1 ) Entroformer uses a multi-head attention with a top-k selection to distill information in representation subspaces : The multi-head attention provides a learning-based partition to learn dependencies in different views ; and the top-k scheme filters noisy signals by select the most similar k latents to the self-attention module , which is crucial for the convergence of Entroformer . ( 2 ) To inherit the local bias of CNNs , a novel position encoding unit is designed to provide better spatial representation for the image compression . This position encoding is based on a relative position encoding with a diamond-shape boundary ( Diamond RPE ) . Based on a empirical evaluation about the rate effect of relative position , Diamond RPE takes full advantage of prior information for image compression . ( 3 ) The two-pass decoding framework ( He et al. , 2021 ) is utilized to speed up the decoding of Entroformer . A bidirectional context with long-range context is introduced instead of the local checkeboard context , which helps counteract the performance degradation . 
In summary , our contributions include : • In this paper , Entroformer is proposed to improve learned image compression . To our knowledge , it is the first successful attempt to introduce Transformer based method to image compression task . Experiments show that our method outperforms the most advanced CNNs methods by 5.2 % and standard codec BPG ( Bellard. , 2014 ) by 20.5 % at low bit rates . • To further optimize Entroformer for image compression , the multi-head scheme with top-k selection is proposed to capture precise dependencies for probability distribution estimation , and the diamond relative position encoding ( diamond RPE ) is proposed for better positional encoding . Experiments show that these methods support image compression task to keep training stable and obtain superior results . • With the help of bidirectional context and two-pass decoding framework , our parallel Entroformer is more time-efficient than the serialized one on modern GPU devices without performance degradation . 2 COMPRESSION WITH HYPERPRIOR AND CONTEXT . There are types of entropy models such as hyperprior ( Minnen et al. , 2018b ; Ballé et al. , 2018 ) , context model ( Li et al. , 2018 ; Mentzer et al. , 2018 ) , and the combined method ( Lee et al. , 2019 ; Minnen et al. , 2018a ; Minnen & Singh , 2020 ) . Hyperprior methods typically make use of side information of the quantized latent representation with additional bits . Context models learn autoregressive priors incorporating prediction from the causal context of the latents , which is a bit-free . Our Entroformer combines a context model with hyperprior based on previous works ( Minnen et al. , 2018b ; Ballé et al. , 2018 ) . We describe the architecture of compression model with Entroformer in Figure 2 . The main autoencoder learns a quantized latent representation ŷ of image x . The reconstructed image is denoted as x̂ . The hyper-autoencoder learns a quantized hyper-latents representation ẑ . 
Following the work of Ballé et al. ( 2016 ), the quantization operation is approximated by additive uniform noise during training. This ensures a good match between the encoder and decoder distributions of the quantized latents and of the continuous-valued latents subjected to additive uniform noise during training. Following the work of Minnen et al. ( 2018a ), we model each latent ŷ_i as a Gaussian with mean µ_i and standard deviation σ_i, convolved with a unit uniform distribution:

$$p_{\hat{y}}(\hat{y} \mid \hat{z}, \theta) = \prod_i \big( \mathcal{N}(\mu_i, \sigma_i^2) * \mathcal{U}(-0.5, 0.5) \big)(\hat{y}_i), \qquad (1)$$

where µ and σ are predicted by the entropy model and θ denotes the parameters of the entropy model. The entropy model consists of the hyper-autoencoder, the context model, and the parameter networks. Since we do not make any assumptions about the distribution of the hyper-latents, a non-parametric, fully factorized density model is used. The training goal for learned image compression is to optimize the trade-off between the estimated coding length of the bitstream and the quality of the reconstruction, which is a rate-distortion optimization problem:

$$L = R + \lambda D = \underbrace{\mathbb{E}_{x \sim p_x}\!\left[-\log_2 p_{\hat{y}}(\hat{y})\right]}_{\text{rate (latents)}} + \underbrace{\mathbb{E}_{x \sim p_x}\!\left[-\log_2 p_{\hat{z}}(\hat{z})\right]}_{\text{rate (hyper-latents)}} + \lambda \cdot \underbrace{\mathbb{E}_{x \sim p_x}\|x - \hat{x}\|_2^2}_{\text{distortion}}, \qquad (2)$$

where λ is the coefficient that controls the rate-distortion trade-off and p_x is the unknown distribution of natural images. p_ẑ uses a fully factorized density model. The first term represents the estimated compression rate of the latent representation, while the second term represents that of the hyper-latent representation. The third term represents the distortion under a given metric, such as mean squared error ( MSE ). 3 TRANSFORMER-BASED ENTROPY MODEL. Our model contains two main components: a CNN-based autoencoder that learns a latent representation, and a transformer-based entropy model that predicts the distribution of the latents.
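A minimal numeric sketch (stdlib Python, hypothetical names) of the likelihood in Eq. ( 1 ): a Gaussian convolved with a unit uniform equals the Gaussian CDF evaluated over the width-1 bin centered at the quantized value, and its negative log2 gives the per-latent rate term of Eq. ( 2 ).

```python
from math import erf, log2, sqrt

def latent_likelihood(y_hat, mu, sigma):
    """Probability mass of an integer-quantized latent under
    N(mu, sigma^2) * U(-0.5, 0.5): the Gaussian CDF over the
    width-1 bin centered at y_hat."""
    cdf = lambda x: 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
    return cdf(y_hat + 0.5) - cdf(y_hat - 0.5)

p_near = latent_likelihood(0.0, mu=0.2, sigma=1.0)  # latent near the predicted mean
p_far = latent_likelihood(3.0, mu=0.2, sigma=1.0)   # latent far from the mean
assert 0.0 < p_far < p_near < 1.0
# rate term of Eq. (2): a poorly predicted latent costs more bits
assert -log2(p_far) > -log2(p_near)
```

This is exactly why a sharper entropy model (smaller σ, better µ) lowers the bit rate: more probability mass lands on the bin that is actually coded.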
We describe our architecture in Figure 2 ( a ) and our Entroformer in detail in Figure 2 ( b ). In this section, we first propose two ingredients, a diamond relative position encoding ( diamond RPE ) and a top-k scheme, which are essential for image compression. Then, we extend the checkerboard context model ( He et al., 2021 ) to a parallel bidirectional context model. 3.1 TRANSFORMER ARCHITECTURE. In the entropy model design we follow the original Transformer ( Vaswani et al., 2017 ), which employs an encoder-decoder structure. We model the latent representation ŷ ∈ R^{H×W×C} as a sequence ŷ_p ∈ R^{(H·W)×C}, where ( H, W ) is the resolution of the latents and C is the number of channels. The latent sequence is then mapped to D dimensions with a trainable linear projection. For the hyperprior, we stack N transformer encoder layers to yield the hyper-latent z, which is quantized and fed to N transformer encoder layers to generate the hyperprior. To produce a hierarchical representation, the resolution of the features is changed by downsampling and upscaling modules. For the autoregressive prior, we stack 2N transformer decoder layers to generate autoregressive features. To generate the Gaussian parameters, a linear layer is attached to the combination of the hyperprior features and the context features. For any sequence of length n, the vanilla attention in the transformer is the dot-product attention ( Vaswani et al., 2017 ). Following the standard notation, we compute the matrix of outputs via linear projections of the input matrix X ∈ R^{n×d_m}:

$$\mathrm{Attention}(X) = \mathrm{softmax}\!\left( \frac{X W^Q (X W^K)^T}{\sqrt{d_k}} \right) X W^V, \qquad (3)$$

where W^Q, W^K ∈ R^{d_m×d_k} and W^V ∈ R^{d_m×d_v} are learned parameter matrices. 3.2 POSITION ENCODING. To explore the impact of position in image compression, we first design a transformer-based entropy model without any position encoding. We feed a random mask to the self-attention during training.
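A minimal sketch (NumPy, illustrative names, not the paper's implementation) of the dot-product attention of Eq. ( 3 ) combined with the top-k selection described earlier: for each query, only the k highest-scoring keys survive the softmax, so low-similarity latents contribute nothing.

```python
import numpy as np

def topk_attention(X, Wq, Wk, Wv, k):
    """Scaled dot-product attention (Eq. 3) with top-k selection: scores
    below each row's k-th largest are masked to -inf before the softmax,
    filtering noisy low-similarity latents."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    kth = np.sort(scores, axis=-1)[:, -k][:, None]  # k-th largest per query
    scores = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
n, dm, dk = 6, 8, 4
X = rng.normal(size=(n, dm))
Wq, Wk, Wv = (rng.normal(size=(dm, dk)) for _ in range(3))
out, w = topk_attention(X, Wq, Wk, Wv, k=3)
assert out.shape == (n, dk)
# each query attends to exactly k keys, and attention rows still sum to 1
assert ((w > 0).sum(axis=-1) == 3).all()
assert np.allclose(w.sum(axis=-1), 1.0)
```

In the actual model this operation runs per head inside the multi-head layers; the sketch shows a single head for clarity.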
During testing, we evaluate the effect of each position i by employing a corresponding mask, which is set to 1 everywhere except at position i. Figure 3 plots the impact on bit rate of each position, compared to the result with the context of all positions. This result highlights how the rate is affected by the position of the context. This observation provides a comprehensive understanding and an empirical guideline for designing a new position encoding. To incorporate the spatial information of the latents, one common approach is to use biased attention weights based on relative relations ( Shaw et al., 2018 ). Based on the result in Figure 3, we propose an extension to this relation-aware self-attention that accounts for the influence of element position on image compression performance. Each attention operates on an input sequence x = ( x_1, ..., x_n ) of n elements, where x_i ∈ R^{d_m}. The edge between input elements x_i and x_j is represented by a vector p^K_{ij} ∈ R^{d_k}. We modify the dot-product attention matrix in Eq. 3 to a compatibility function that compares two input elements in relation-aware self-attention:

$$e_{ij} = \frac{x_i W^Q \left( x_j W^K + p^K_{ij} \right)^T}{\sqrt{d_k}}. \qquad (4)$$

The weight coefficients A are computed using a softmax over the dot-product attention matrix. On the image plane, the relative position a_{i−j} is a 2D coordinate, and we modify the maximum absolute relative position to a 2D boundary with a diamond shape. As shown in Figure 3, we observe that closer context latents save more bit rate in the context model, which tells how the distance of context latents influences the bit saving of context modeling in learned image compression. Therefore, we consider a 2D boundary with a maximum L1 distance of h:

$$p^K_{ij} = w^K_{\mathrm{clip}(a_{i-j},\, h)}, \qquad \mathrm{clip}(a_{i-j}, h) = \begin{cases} a_{i-j} & \|a_{i-j}\|_1 \le h \\ (h, h) & \text{otherwise.} \end{cases} \qquad (5)$$

We then learn the relative position encoding w^K = ( w^K_{−h,−h}, w^K_{−h,−h+1}, ..., w^K_{h,h−1}, w^K_{h,h} ).
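The clipping rule of Eq. ( 5 ) can be sketched directly (plain Python, hypothetical function name): offsets inside the L1 diamond of radius h keep their own learned embedding index, while everything outside collapses to the shared boundary index ( h, h ).

```python
def diamond_clip(di, dj, h):
    """Clip a 2D relative position to the diamond (L1-ball) boundary of
    Eq. (5): offsets with L1 distance <= h keep their own embedding index;
    farther offsets share the (h, h) index."""
    return (di, dj) if abs(di) + abs(dj) <= h else (h, h)

h = 2
# positions inside the diamond are kept as-is
assert diamond_clip(1, -1, h) == (1, -1)
assert diamond_clip(0, 2, h) == (0, 2)
# positions outside collapse to the shared boundary index
assert diamond_clip(-3, 0, h) == (2, 2)
assert diamond_clip(2, 2, h) == (2, 2)
```

The diamond shape mirrors the empirical observation in Figure 3: latents at small L1 distance contribute most of the bit savings, so only those get distinct positional parameters.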
This paper addresses the problem of learned image compression using a transformer as the entropy model. The authors introduce a diamond-shaped relative position encoding scheme that makes sense for image modeling. They also adopt a two-step, bidirectional context model based on a checkerboard-style spatial decomposition of the latent tensor. The modeling choices are well-supported by an ablation study. In particular, the authors show that the checkerboard model is almost as good as a much slower spatial context model that is often used for learned image compression. They also show that the transformer-based entropy model ("entroformer") leads to a model with better rate-distortion performance compared to earlier models.
SP:58ce187d0a0ffb7bf0697c4c3b6f2fdd989596c1
Entroformer: A Transformer-based Entropy Model for Learned Image Compression
1 INTRODUCTION. Image compression is a fundamental research field in computer vision. With the development of deep learning, learned methods have led to several breakthroughs in this task. Currently, the state-of-the-art ( SOTA ) deep image compression models are built on the autoencoder framework ( Hinton & Salakhutdinov, 2006 ) with an entropy-constrained bottleneck ( Theis et al., 2017 ; Ballé et al., 2017 ; Ballé et al., 2018 ; Mentzer et al., 2018 ; Minnen et al., 2018a ; Lee et al., 2019 ; Guo et al., 2021 ). An entropy model estimates the conditional probability distribution of the latents for compression by standard entropy coding algorithms. In this paper, we focus on improving the predictive ability of the entropy model, which leads to higher compression rates without increasing distortion. One of the main challenges for the entropy model is how to learn representations for various dependencies. For example, as shown in Figure 1, the edges have long-range dependencies, the five hats are likely to be related in shape, and the region of the sky has identical color. Some works use local context ( Minnen et al., 2018a ; Lee et al., 2019 ; Mentzer et al., 2018 ) or additional side information ( Ballé et al., 2018 ; Hu et al., 2020 ; Minnen et al., 2018b ) as short-range spatial dependencies, while others use non-local mechanisms ( Li et al., 2020 ; Qian et al., 2021 ; Cheng et al., 2020 ; Chen et al., 2021 ) to capture long-range spatial dependencies. However, the difficulty of capturing long-range spatial dependencies remains in these CNN-based methods. Another main challenge is the trade-off between performance and decoding speed. Though previous context-based methods use a unidirectional context model to improve predictive ability, they suffer from slow decoding: symbols are decoded in a raster-scan order with an O ( n ) serial process that cannot be accelerated by modern GPUs.
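The serial-decoding bottleneck mentioned above can be made concrete with a toy sketch (plain Python, hypothetical name): a raster-scan context model must decode one latent per step, because each latent is conditioned on everything above it and to its left.

```python
def raster_scan_order(H, W):
    """Serial decoding schedule of a raster-scan context model: each latent
    is decoded only after all latents before it in row-major order, giving
    H*W serial steps that cannot be parallelized."""
    return [(i, j) for i in range(H) for j in range(W)]

order = raster_scan_order(3, 4)
assert len(order) == 12                     # one serial step per latent
assert order[0] == (0, 0) and order[-1] == (2, 3)
# the causal context of a position is exactly the positions decoded before it
assert order.index((1, 2)) == 6
```

A two-pass checkerboard schedule, by contrast, needs only two serial steps regardless of H and W, which is the speed/performance trade-off the paper sets out to resolve.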
The authors propose a transformer-based entropy model in order to capture long-range dependencies in probability distribution estimation. This model is optimized for image compression. The authors extend this architecture with a parallel bidirectional context model to speed up the decoding process.
Self-Supervised Structured Representations for Deep Reinforcement Learning
Recent reinforcement learning ( RL ) methods have found extracting high-level features from raw pixels with self-supervised learning to be effective for learning policies. However, these methods focus on learning global representations of images and disregard the local spatial structures present in the consecutively stacked frames. In this paper, we propose a novel approach that learns self-supervised spatial representations ( S3R ) for effectively encoding such spatial structures in an unsupervised manner. Given the input frames, spatial latent volumes are first generated individually using an encoder, and they are used to capture the change in spatial structure, i.e., flow maps among multiple frames. To be specific, the proposed method establishes flow vectors between two latent volumes via supervision from an image reconstruction loss. This provides plenty of local samples for training the encoder of the deep RL agent. We further leverage the spatial representations in the self-predictive representations ( SPR ) method, which predicts future representations using an action-conditioned transition model. The proposed method imposes similarity constraints on three latent volumes: query representations warped by the estimated flows, target representations predicted by the transition model, and target representations of the future state. Experimental results on complex tasks in Atari games and the DeepMind Control Suite demonstrate that RL methods are significantly boosted by the proposed self-supervised learning of spatial representations. The code is available at https://sites.google.com/view/iclr2022-s3r . 1 INTRODUCTION. Deep reinforcement learning ( RL ) has been an appealing tool for training agents to solve various tasks including complex control and video games ( François-Lavet et al., 2018 ).
While most approaches have focused on training deep RL agents under the assumption that compact state representations are readily available, this assumption does not hold when raw visual observations ( e.g., images ) are used as inputs. Without an effective algorithm that uses model-based policy and value improvement operators ( Schrittwieser et al., 2021 ), or without additional image augmentation ( Laskin et al., 2020a ; Kostrikov et al., 2020 ), learning visual features from raw pixels using only a reward function may fail to produce good features in terms of performance and sample efficiency. To address this challenge, a number of deep RL approaches ( Sermanet et al., 2018 ; Dwibedi et al., 2018 ; Anand et al., 2019 ; Laskin et al., 2020b ; Mazoure et al., 2020 ; Stooke et al., 2020 ; Schwarzer et al., 2021 ) leverage recent advances in self-supervised learning, which effectively extracts high-level features from raw pixels in an unsupervised fashion. Laskin et al. ( 2020b ) and Stooke et al. ( 2020 ) propose to train the convolutional encoder on pairs of images using a contrastive loss ( van den Oord et al., 2018 ). For training the RL agent, given a query and a set of keys consisting of positive and negative samples, they minimize the contrastive loss such that the query matches the positive sample more than any of the negative samples ( Laskin et al., 2020b ; Stooke et al., 2020 ). While the parameters of the query encoder are updated through back-propagation using the contrastive loss ( van den Oord et al., 2018 ), the parameters of the key encoder are computed as an exponential moving average ( EMA ) of the query encoder parameters. The output representations of the query encoder are passed to the RL algorithm for training the agent. Schwarzer et al.
( 2021 ) proposes the self-predictive representations ( SPR ) method, which trains the RL agent by leveraging the query and positive sample only, following Grill et al. ( 2020 ), which achieves state-of-the-art performance without negative samples in self-supervised learning. In particular, it extends a model-free RL agent by adding a transition model that predicts its own latent representation multiple steps into the future ( Schwarzer et al., 2021 ). The predicted future representations and the representations of future states computed using a target encoder serve as the query and positive sample, respectively. These approaches have shown compelling performance and high sample efficiency on complex control tasks when compared to existing image-based RL approaches ( Kaiser et al., 2019 ; van Hasselt et al., 2019 ; Kielak, 2020 ). While these approaches can effectively encode global representations of images with self-supervised representation learning, little attention has been paid to the local spatial structures present in the consecutively stacked images. Our key observation is that spatial deformation, i.e., the change in spatial structure across consecutive frames, can provide plenty of local samples for training the RL agent. Two-frame flow estimation ( Dosovitskiy et al., 2015 ; Ilg et al., 2017 ; Jonschkowski et al., 2020 ), which has been widely used for video processing and recognition in computer vision, is an appropriate tool for modeling this spatial deformation. In this work, we propose a novel approach, termed self-supervised spatial representations ( S3R ), that learns spatial representations for effectively encoding the spatial structures in a self-supervised fashion. Note that we use the term 'spatial representations' to indicate a set of feature maps extracted over frames for inferring locally-varying flow maps.
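The EMA ( momentum ) update for the key/target encoder described above can be sketched as follows (a toy sketch with scalar parameters; the function name is illustrative): the target encoder receives no gradients and instead slowly tracks the query encoder.

```python
def ema_update(key_params, query_params, tau=0.99):
    """Momentum (EMA) update for the key/target encoder: each target
    parameter moves a fraction (1 - tau) toward the corresponding
    query-encoder parameter instead of receiving gradients."""
    return [tau * k + (1.0 - tau) * q for k, q in zip(key_params, query_params)]

key = [0.0, 1.0]
query = [1.0, 1.0]
key = ema_update(key, query, tau=0.9)
assert abs(key[0] - 0.1) < 1e-12   # moved 10% toward the query encoder
assert key[1] == 1.0               # identical parameters stay fixed
```

A large tau keeps the targets slowly moving and stable, which is what makes the bootstrap step in SPR/BYOL-style methods work without negative samples.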
The spatial representations generated by an encoder are used to predict the flow maps among the input frames by minimizing an image reconstruction loss in a self-supervised manner. A flow-based warping is then applied to generate future representations. We further extend our framework by leveraging the SPR method ( Schwarzer et al., 2021 ). As depicted in Figure 1, we impose similarity constraints on three representations: query representations warped by the estimated flow maps, self-predicted target representations from the transition model ( Schwarzer et al., 2021 ), and target representations of the future state. Note that Shang et al. ( 2021 ) encodes temporal information by concatenating latent differences of input frames, but this simple subtraction operation has limitations in effectively capturing spatial deformation. Contrary to Amiranashvili et al. ( 2018 ), in which the flow map is directly fed into the RL algorithm with a stack of images, our method leverages the flow maps to effectively encode the spatial representations of the images and to warp the spatial representations to the future state. Our contributions are summarized as follows. • Our method learns spatial representations using a self-supervised flow model for encoding local spatial structures from the consecutive frames used in RL algorithms. • We propose to impose the similarity constraint on the spatial representations as well as the global representations, providing plenty of supervision for training the encoder of deep RL. • We compute future representations through flow-based warping for imposing the similarity constraint with the target representations. 2 RELATED WORK. Self-supervised Representation Learning: Self-supervised representation learning aims to learn general features from large-scale unlabeled images or videos without expensive data annotations.
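The pairwise similarity objective imposed on the three volumes above (flow-warped query, transition-model prediction, future-state target) can be sketched with a negative-cosine-similarity form, as in SPR-style methods (stdlib Python; names and vectors are illustrative, not the paper's implementation):

```python
from math import sqrt

def cosine_similarity_loss(pred, target):
    """Negative cosine similarity between a predicted representation and a
    (stop-gradient) target: minimized (-1) when the two vectors align."""
    dot = sum(p * t for p, t in zip(pred, target))
    norm_p = sqrt(sum(p * p for p in pred))
    norm_t = sqrt(sum(t * t for t in target))
    return -dot / (norm_p * norm_t)

warped = [1.0, 0.0]     # flow-warped query representation
predicted = [1.0, 0.0]  # transition-model prediction
target = [0.0, 1.0]     # target-encoder output of the future state
assert cosine_similarity_loss(warped, predicted) == -1.0  # aligned: minimal loss
assert cosine_similarity_loss(warped, target) == 0.0      # orthogonal: no credit
```

Applying this loss pairwise over the three volumes gives the dense, per-location supervision the contribution bullets refer to.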
Contrastive methods have achieved state-of-the-art performance in self-supervised representation learning. Contrastive learning aims to bring positive samples closer while separating negative samples from each other ( Hadsell et al., 2006 ). Wu et al. ( 2018 ) formulate contrastive learning as a non-parametric classification problem at the instance level and propose to learn visual features with a memory bank and noise contrastive estimation ( NCE ) ( Gutmann & Hyvärinen, 2010 ; Mnih & Kavukcuoglu, 2013 ). The method in van den Oord et al. ( 2018 ) proposes a probabilistic contrastive loss, called InfoNCE, for inducing representations by leveraging positive and negative samples. The InfoNCE loss has been widely adopted in Chen et al. ( 2020 ) ; He et al. ( 2020 ) ; Hénaff et al. ( 2020 ) ; Tian et al. ( 2020 ). Chen et al. ( 2020 ) present a simple framework for contrastive self-supervised learning without a specialized architecture ( Bachman et al., 2019 ; Hénaff et al., 2020 ) or memory bank ( Wu et al., 2018 ), but it requires a large batch size to supply enough negative samples when computing the InfoNCE loss ( van den Oord et al., 2018 ). He et al. ( 2020 ) propose to build a dynamic dictionary with a queue to avoid the use of large batches when collecting negative samples, and also use a moving-averaged ( momentum ) encoder for the target data ( positive and negative samples of the query data ). Grill et al. ( 2020 ) use the momentum encoder to produce representations of the targets as a means of stabilizing the bootstrap step. This enables learning the representations with only positive samples, which are generated by data augmentation, for a given query, without the need to carefully set up negative samples. The method in Chen & He ( 2020 ) further extends this idea by using only a stop-gradient operation without the momentum update.
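The InfoNCE loss discussed above can be sketched in a few lines (stdlib Python, toy vectors): it is a cross-entropy that scores the positive key against all keys for a given query.

```python
from math import exp, log

def info_nce(query, keys, positive_idx, temperature=0.1):
    """InfoNCE (van den Oord et al., 2018): softmax cross-entropy of the
    positive key's similarity against all key similarities."""
    logits = [sum(q * k for q, k in zip(query, key)) / temperature
              for key in keys]
    m = max(logits)                                   # numerical stability
    log_z = m + log(sum(exp(l - m) for l in logits))  # log partition function
    return log_z - logits[positive_idx]               # -log p(positive | query)

query = [1.0, 0.0]
keys = [[1.0, 0.0],    # positive: same direction as the query
        [0.0, 1.0],    # negative
        [-1.0, 0.0]]   # negative
loss_good = info_nce(query, keys, positive_idx=0)
loss_bad = info_nce(query, keys, positive_idx=2)
# matching the true positive yields a much smaller loss
assert loss_good < loss_bad
assert loss_good < log(3)  # better than uniform guessing over 3 keys
```

The methods surveyed here differ mainly in where the keys come from (memory bank, queue, large batch) and in how the key encoder is updated (shared, momentum, or stop-gradient).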
While these approaches focus on learning global representations of a single image, our method learns spatial representations for effectively encoding the spatial structures ( i.e., flow maps ) in consecutive images. Self-supervised Representation Learning in Deep RL: Representation learning is crucial for RL algorithms to learn policies from high-dimensional visual observations. Future prediction conditioned on past observations and actions serves as an auxiliary task to improve the sample efficiency of model-free RL algorithms. Gelada et al. ( 2019 ) train a transition model to predict representations of future states together with a reward prediction loss. Guo et al. ( 2020 ) present Predictions of Bootstrapped Latents ( PBL ), which builds on multi-step predictive representations of future observations for deep RL. The method in Schwarzer et al. ( 2021 ) proposes Self-Predictive Representations ( SPR ) based on an action-conditioned transition model that predicts future representations computed using a target ( momentum ) encoder. Our method instead predicts future representations through a warping operation using flow maps computed from the spatial representations of consecutive frames. Contrastive learning has also been used to extract desired latent representations of visual observations for RL algorithms. For training robot agents, Sermanet et al. ( 2018 ) present time-contrastive networks ( TCN ) that train viewpoint-invariant representations using metric learning. This work was extended in Dwibedi et al. ( 2018 ) by embedding multiple frames at each timestep to learn task-agnostic representations such as position and velocity attributes in continuous control tasks. In Anand et al. ( 2019 ), the representations for RL algorithms are learned by maximizing mutual information ( Hjelm et al., 2019 ) across spatially and temporally distinct features of an encoder of visual observations.
Schwarzer et al. ( 2021 ) leverage self-supervised learning ( Grill et al., 2020 ) to impose a similarity constraint between self-predictive and target representations. Laskin et al. ( 2020b ) introduce Contrastive Unsupervised Representations for Reinforcement Learning ( CURL ), which learns representations from visual inputs using the InfoNCE loss ( van den Oord et al., 2018 ). Stooke et al. ( 2020 ) present Augmented Temporal Contrast ( ATC ), which uses image augmentations and the InfoNCE loss ( van den Oord et al., 2018 ) for representation learning and decouples it from policy learning. From a different perspective, Hansen et al. ( 2020 ) propose to adapt the policy network through self-supervised representation learning in unseen environments where it is difficult to predict the changed rewards. Our method imposes the similarity constraint on the spatial representations as well as the global representations, thus providing plenty of supervision for training the encoder of deep RL. Visual Correspondence Learning: Visual correspondence estimation is a long-standing research problem in the computer vision community. It aims to establish pairs of corresponding pixels between two ( or more ) views taken at different locations ( stereo matching ) or timesteps ( optical flow ). Recent methods for stereo matching ( Žbontar & LeCun, 2016 ; Chang & Chen, 2018 ; Zhang et al., 2019 ) and optical flow estimation ( Dosovitskiy et al., 2015 ; Ilg et al., 2017 ; Sun et al., 2018 ) have advanced largely thanks to the expressive power of deep networks. Though both tasks share the objective of finding corresponding pixels across views, optical flow is effective for encoding temporal motion trajectories, while stereo matching is tailored to predicting the 3D depth map of the scene.
The commonly used architecture for two-frame optical flow estimation involves feature map extraction for the two frames, correlation volume computation, a series of convolutions for refinement, and flow regression. While state-of-the-art flow estimation methods require ground-truth flow maps as explicit supervision (Dosovitskiy et al., 2015; Ilg et al., 2017; Sun et al., 2018), some unsupervised approaches infer flow maps with an image reconstruction loss that imposes the constraint that corresponding pixels should have similar intensities (Ren et al., 2017; Meister et al., 2018; Wang et al., 2018; Jonschkowski et al., 2020). In our work, we present a self-supervised flow network that learns spatial representations from the consecutive frames used in RL algorithms.
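The photometric reconstruction constraint mentioned above (corresponding pixels should have similar intensities) can be sketched with a minimal numpy bilinear warp. This is a single-channel, single-scale illustration; the function names and the backward-warping convention are our choices, not the paper's implementation.

```python
import numpy as np

def warp_bilinear(img, flow):
    """Warp img (H, W) by a backward flow map (H, W, 2) with bilinear sampling."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Target pixel (y, x) samples the source image at (y + fy, x + fx).
    x_src = np.clip(xs + flow[..., 0], 0, W - 1)
    y_src = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x_src).astype(int), np.floor(y_src).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x_src - x0, y_src - y0
    top = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
    bot = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
    return (1 - wy) * top + wy * bot

def photometric_loss(frame_t, frame_t1, flow):
    """MSE between frame_t and frame_{t+1} warped back by the estimated flow."""
    return float(np.mean((warp_bilinear(frame_t1, flow) - frame_t) ** 2))
```

A correct flow drives the loss toward zero (up to border effects), which is the only supervision signal the unsupervised flow methods cited above rely on.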
This paper proposes a representation learning method that leverages unsupervised signals, namely flow and forward models and the constraints between them, and applies it to the state representation problem of RL. The method adds auxiliary objectives to the torso: a flow model and a latent transition model that are constrained to agree via a contrastive loss. The flow model has an architecture inspired by FlowNet and is trained with a warping reconstruction loss and a spatial smoothness term. The latent transition model is trained contrastively. Constraints are then added to make the latent of the next step agree with both of these predictions. The method is run on 13 Atari tasks, where it is compared with competitive baselines and largely outperforms them. An ablation without baselines is performed on a few DeepMind Control Suite tasks.
SP:c3b0cf6571db40a084f09fa66e3cd74e4b30b783
Self-Supervised Structured Representations for Deep Reinforcement Learning
Recent reinforcement learning (RL) methods have found that extracting high-level features from raw pixels with self-supervised learning is effective for learning policies. However, these methods focus on learning global representations of images and disregard the local spatial structures present in the consecutively stacked frames. In this paper, we propose a novel approach that learns self-supervised spatial representations (S3R) for effectively encoding such spatial structures in an unsupervised manner. Given the input frames, spatial latent volumes are first generated individually using an encoder, and they are used to capture the change in spatial structure, i.e., flow maps, among multiple frames. Specifically, the proposed method establishes flow vectors between two latent volumes under supervision from an image reconstruction loss. This provides plentiful local samples for training the encoder of deep RL. We further leverage the spatial representations in the self-predictive representations (SPR) method, which predicts future representations using an action-conditioned transition model. The proposed method imposes similarity constraints on three latent volumes: query representations warped by estimated flows, target representations predicted by the transition model, and target representations of the future state. Experimental results on complex tasks in Atari games and the DeepMind Control Suite demonstrate that RL methods are significantly boosted by the proposed self-supervised learning of spatial representations. The code is available at https://sites.google.com/view/iclr2022-s3r. 1 INTRODUCTION. Deep reinforcement learning (RL) has been an appealing tool for training agents to solve various tasks, including complex control and video games (François-Lavet et al., 2018).
While most approaches have focused on training deep RL agents under the assumption that compact state representations are readily available, this assumption does not hold when raw visual observations (e.g., images) are used as inputs. Without an effective algorithm that uses model-based policy and value improvement operators (Schrittwieser et al., 2021), or without additional image augmentation (Laskin et al., 2020a; Kostrikov et al., 2020), learning visual features from raw pixels using only a reward function can fail to produce good features in terms of performance and sample efficiency. To address this challenge, a number of deep RL approaches (Sermanet et al., 2018; Dwibedi et al., 2018; Anand et al., 2019; Laskin et al., 2020b; Mazoure et al., 2020; Stooke et al., 2020; Schwarzer et al., 2021) leverage recent advances in self-supervised learning, which effectively extracts high-level features from raw pixels in an unsupervised fashion. Laskin et al. (2020b) and Stooke et al. (2020) propose to train the convolutional encoder on pairs of images using a contrastive loss (van den Oord et al., 2018). For training the RL agent, given a query and a set of keys consisting of positive and negative samples, they minimize the contrastive loss so that the query matches the positive sample more closely than any of the negative samples (Laskin et al., 2020b; Stooke et al., 2020). While the parameters of the query encoder are updated through back-propagation of the contrastive loss (van den Oord et al., 2018), the parameters of the key encoder are computed as an exponential moving average (EMA) of the query encoder parameters. The output representations of the query encoder are passed to the RL algorithm for training the agent. Schwarzer et al.
(2021) propose the self-predictive representations (SPR) method, which trains the RL agent using the query and positive sample only, following Grill et al. (2020), who achieve state-of-the-art performance without negative samples in self-supervised learning. Specifically, it extends a model-free RL agent with a transition model that predicts its own latent representation multiple steps into the future (Schwarzer et al., 2021). The predicted future representations and the representations of future states computed using a target encoder serve as the query and positive sample, respectively. These approaches have shown compelling performance and high sample efficiency on complex control tasks compared to existing image-based RL approaches (Kaiser et al., 2019; van Hasselt et al., 2019; Kielak, 2020). While these approaches can effectively encode global representations of images with self-supervised representation learning, little attention has been paid to the local spatial structures present in the consecutively stacked images. Our key observation is that spatial deformation, i.e., the change in spatial structure across consecutive frames, can provide plentiful local samples for training the RL agent. Two-frame flow estimation (Dosovitskiy et al., 2015; Ilg et al., 2017; Jonschkowski et al., 2020), which has been widely used for video processing and recognition in computer vision, is an appropriate tool for modeling this spatial deformation. In this work, we propose a novel approach, termed self-supervised spatial representations (S3R), that learns spatial representations for effectively encoding spatial structures in a self-supervised fashion. Note that we use the term 'spatial representations' to indicate a set of feature maps extracted over frames for inferring locally-varying flow maps.
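The contrastive objective described above (as in CURL and ATC) reduces to a cross-entropy over similarity logits. A minimal numpy sketch of InfoNCE follows; the temperature value and cosine-similarity logits are common choices here, not specifics from these papers.

```python
import numpy as np

def info_nce(query, keys, pos_idx, temperature=0.1):
    """InfoNCE loss: cross-entropy that scores the positive key above the negatives.

    query: (d,) embedding; keys: (n, d) embeddings; pos_idx: index of the positive key.
    """
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = k @ q / temperature            # cosine similarities as logits
    logits -= logits.max()                  # numerical stability for the softmax
    log_probs = logits - np.log(np.exp(logits).sum())
    return float(-log_probs[pos_idx])
```

Minimizing this loss pulls the query toward its positive key (the augmented or future view) and pushes it away from the negatives, exactly the behavior the text describes.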
The spatial representations generated from an encoder are used to predict the flow maps among the input frames by minimizing an image reconstruction loss in a self-supervised manner. A flow-based warping is then applied to generate future representations. We further extend our framework by leveraging the SPR method (Schwarzer et al., 2021). As depicted in Figure 1, we impose similarity constraints on three representations: query representations warped by the estimated flow maps, self-predicted target representations from the transition model (Schwarzer et al., 2021), and target representations of the future state. Note that Shang et al. (2021) encode temporal information by concatenating latent differences of input frames, but this simple subtraction operation has limitations in effectively capturing spatial deformation. In contrast to Amiranashvili et al. (2018), in which the flow map is directly fed into RL algorithms with a stack of images, our method leverages the flow maps to effectively encode the spatial representations of the images and to warp the spatial representations at the future state. Our contributions are summarized as follows. • Our method learns spatial representations using a self-supervised flow model to encode local spatial structures from the consecutive frames used in RL algorithms. • We impose the similarity constraint on the spatial representations as well as the global representations, providing plentiful supervision for training the encoder of deep RL. • We compute future representations through flow-based warping to impose a similarity constraint with the target representations. 2 RELATED WORK. Self-supervised Representation Learning: Self-supervised representation learning aims to learn general features from large-scale unlabeled images or videos without expensive data annotations.
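The similarity constraint among the three latent volumes described above can be sketched as follows. Which pairs the paper actually penalizes, and with what weights, is not specified here, so this sketch makes the assumption that all three pairs are constrained with cosine similarity (the measure SPR uses); the function names are illustrative.

```python
import numpy as np

def cosine_dissimilarity(a, b):
    """1 - cosine similarity between two flattened latent volumes."""
    a, b = a.ravel(), b.ravel()
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def three_way_similarity_loss(warped_query, predicted_target, target):
    """Sum of pairwise similarity constraints over the three latent volumes:
    the flow-warped query, the transition model's prediction, and the
    target-encoder representation of the future state."""
    return (cosine_dissimilarity(warped_query, target)
            + cosine_dissimilarity(predicted_target, target)
            + cosine_dissimilarity(warped_query, predicted_target))
```

The loss is zero when the three volumes agree up to scale, which is the agreement Figure 1 depicts.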
Contrastive methods have achieved state-of-the-art performance in self-supervised representation learning. Contrastive learning aims to bring positive samples closer while separating negative samples from each other (Hadsell et al., 2006). Wu et al. (2018) formulate contrastive learning as a non-parametric classification problem at the instance level, and propose to learn visual features with a memory bank and noise contrastive estimation (NCE) (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013). van den Oord et al. (2018) propose a probabilistic contrastive loss, called InfoNCE, for inducing representations by leveraging positive and negative samples. The InfoNCE loss has been widely adopted in Chen et al. (2020); He et al. (2020); Hénaff et al. (2020); Tian et al. (2020). Chen et al. (2020) present a simple framework for contrastive self-supervised learning without a specialized architecture (Bachman et al., 2019; Hénaff et al., 2020) or memory bank (Wu et al., 2018), but it requires a large batch size to supply enough negative samples when computing the InfoNCE loss (van den Oord et al., 2018). He et al. (2020) propose to build a dynamic dictionary with a queue to avoid large batches when collecting negative samples, and also use a moving-averaged (momentum) encoder for the target data (positive and negative samples of the query data). Grill et al. (2020) use the momentum encoder to produce representations of the targets as a means of stabilizing the bootstrap step. This enables learning representations with only positive samples, generated by data augmentation, for a given query, without the need to carefully set up negative samples. Chen & He (2020) further extend this idea by using only a stop-gradient operation, without the momentum update.
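The momentum (EMA) update of the key/target encoder that several of these methods share can be sketched in a few lines; the decay rate tau is an illustrative value.

```python
import numpy as np

def ema_update(target_params, online_params, tau=0.99):
    """Momentum (EMA) update: slowly move each target-encoder tensor toward the
    online encoder. The target receives no gradients; it only tracks the online
    parameters, which stabilizes the bootstrapped targets."""
    return [tau * t + (1.0 - tau) * o for t, o in zip(target_params, online_params)]
```

With tau close to 1 the target encoder changes slowly, which is what makes its outputs usable as stable regression targets for the online encoder.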
In this paper, the authors focus on the problem of representation learning for deep reinforcement learning. To this end, they propose an approach to learning structured representations via establishing flows between latent volumes. Similar to SPR, they predict future representations with a transition model conditioned on actions. They evaluate their approach in two domains: Atari and DeepMind Control Suite and show improved performance over the state of the art.
SP:c3b0cf6571db40a084f09fa66e3cd74e4b30b783
Self-Supervised Structured Representations for Deep Reinforcement Learning
This paper focuses on self-supervised visual representation learning for RL applications. As opposed to previous literature that focused on learning global representations for the current observation, the authors propose learning "structured" representations based on flow maps that encode local structure. To do so, they combine:
- a self-supervised flow model (from previous literature), trained using an image reconstruction loss;
- a classic action-conditioned 1-step transition model using cosine similarity in latent space;
- a flow-based warping (also from previous literature) to provide a second next-step latent prediction task.
They evaluate their method on various standard RL tasks in two domains (Atari, DM Control Suite) and show improvements over SOTA in roughly half of the tasks.
SP:c3b0cf6571db40a084f09fa66e3cd74e4b30b783
Safe Linear-Quadratic Dual Control with Almost Sure Performance Guarantee
1 INTRODUCTION. One of the most fundamental and well-studied problems in optimal control, Linear-Quadratic Regulation (LQR) has recently attracted renewed interest in the context of data-driven control and reinforcement learning. Since it is usually challenging to obtain an exact system model from first principles, and since the system may slowly change over time for various reasons, e.g., component wear-out, data-driven regulation of unknown linear systems has become an active research problem at the intersection of machine learning and control, with recent works including Dean et al. (2019); Mania et al. (2019); Cohen et al. (2019); Wagenmaker & Jamieson (2020). In particular, from the perspective of reinforcement learning theory, the LQR problem has become a standard benchmark for continuous control. In this paper, we focus on the dual control (Feldbaum, 1960) setting, also known as online adaptive control, where the same policy must both identify the system parameters and optimize the control objective, leading to the well-known exploration-exploitation dilemma. Recently, Simchowitz & Foster (2020) showed that the optimal regret for this problem setting scales as Θ̃(√T), which can be achieved with probability 1 − δ using a certainty-equivalent control strategy, in which the learner selects control inputs according to the optimal controller for the current estimate of the system while injecting time-decaying exploration noise. However, the strategy proposed in that work, like those of its predecessors (Abbasi-Yadkori & Szepesvári, 2011; Dean et al., 2018; Mania et al., 2019), has a nonzero probability δ of failing. Furthermore, δ is chosen as a fixed design parameter in the aforementioned works, which means the probability of failure does not converge to zero even if the policy is run indefinitely.
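The certainty-equivalent strategy with time-decaying exploration noise can be sketched in a scalar toy system. The initial estimates, noise scales, and the use of a deadbeat gain in place of the Riccati-optimal LQR gain are our simplifications for brevity, not details from the paper.

```python
import numpy as np

def certainty_equivalent_rollout(a=0.8, b=1.0, steps=2000, seed=0):
    """Scalar toy of certainty-equivalent dual control: apply the feedback gain
    computed for the *estimated* system, inject exploration noise whose scale
    decays as t^(-1/4), and re-estimate (a, b) by least squares as data arrive."""
    rng = np.random.default_rng(seed)
    a_hat, b_hat = 0.5, 0.5            # crude initial estimate (illustrative)
    x = 0.0
    xs, us, nxt = [], [], []
    for t in range(1, steps + 1):
        eta = rng.normal() * t ** -0.25      # time-decaying exploration noise
        u = -(a_hat / b_hat) * x + eta       # certainty-equivalent feedback + exploration
        x_next = a * x + b * u + 0.1 * rng.normal()
        xs.append(x); us.append(u); nxt.append(x_next)
        x = x_next
        if t > 10:                           # least-squares re-estimate of (a, b)
            Z = np.column_stack([xs, us])
            (a_hat, b_hat), *_ = np.linalg.lstsq(Z, np.array(nxt), rcond=None)
    return a_hat, b_hat
```

The decaying excitation keeps the system identifiable while its contribution to the regret shrinks, which is the balance behind the √T regret rate cited above.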
The above observation gives rise to the question we address in this paper: can we design a learning scheme for LQR dual control such that the policy adopted in almost every trajectory converges to the optimal policy? We observe that this goal can hardly be achieved by a naive certainty equivalent learning scheme: qualitatively, the system parameters learned from data are always corrupted by random noise, and as a result, the controller proposed in previous works may destabilize the system, albeit with small probability, causing catastrophic system failure. Based on this reasoning, we propose a notion of bounded-cost safety for the LQR dual control problem: we deem a learning scheme safe if no destabilizing control policy is applied during the entire learning process. In this paper, we propose a learning scheme that satisfies this definition of bounded-cost safety and guarantees that both the parameter inference error and the suboptimality gap of controller performance converge to zero almost surely. Our strategy consists of two parts: a safe switched controller and a parameter inference algorithm. The switched controller can be viewed as a safety-augmented version of the certainty equivalent controller: it normally selects control inputs according to the optimal linear feedback for the currently estimated system parameters, but falls back to a conservative controller for several steps when the actual state deviates significantly from the target state. We prove that this switching strategy ensures the bounded-cost safety of the learning process while inducing only a suboptimality gap that decays exponentially as the switching threshold increases.
For the parameter inference part, in contrast to the direct least-squares approach for estimating the matrices A, B that is widely adopted in the literature, we estimate the Markov parameters, also known as the impulse response of the system, from the cross-correlation between the exploration noise and the system output, and establish almost-sure convergence using a law of large numbers for martingales. We prefer this approach for its clear physical meaning and simple convergence analysis, but we do not foresee substantial difficulty in replacing our parameter inference module with standard least-squares. We prove that under the above learning scheme, the parameter inference error scales as O(T^{−1/4+ε}), while the suboptimality gap of control performance scales as O(T^{−1/2+ε}), where T is the number of time steps and ε is an arbitrarily small positive number. Both results match the corresponding asymptotic rates in Simchowitz & Foster (2020), which provides a rate-optimal algorithm for online LQR in the high-probability regime. The main contributions of this paper are as follows: 1. We propose a practical notion of safety for the LQR dual control problem that has not been considered in the literature, and provide an instance of a safe learning scheme based on a switching strategy. 2. We prove almost sure convergence rates for the parameter inference error and the suboptimality gap of control performance of our scheme, which match the corresponding optimal rates in the high-probability regime. To the best of our knowledge, this is the first analysis of almost sure convergence rates for online LQR. The rest of this paper is organized as follows: Section 2 gives a brief introduction to LQR, formulates the LQR dual control problem, and defines the performance metrics as well as the notion of a bounded-cost safe learning scheme. Section 3 presents and interprets our algorithm.
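As a concrete illustration of this cross-correlation idea (a simplified sketch, not the paper's Algorithm 1, which additionally reweights for the decaying exploration noise and subtracts the exploitation input), the snippet below estimates the Markov parameters H_τ = A^τ B of a small hypothetical system by correlating the state with unit-variance white-noise inputs injected τ + 1 steps earlier. All numerical values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical open-loop stable system (illustrative values, not from the paper).
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.array([[1.0],
              [0.5]])
n, p = 2, 1
N = 100_000

# Drive the system with unit-variance white-noise inputs u_k = zeta_k.
states = np.zeros((N + 1, n))
zetas = rng.standard_normal((N, p))
x = np.zeros(n)
for k in range(N):
    x = A @ x + B @ zetas[k] + 0.1 * rng.standard_normal(n)  # process noise
    states[k + 1] = x

def markov_estimate(states, zetas, tau):
    # E[x_{i+tau+1} zeta_i^T] = A^tau B for unit-variance white-noise inputs,
    # so a sample cross-correlation estimates the tau-th Markov parameter.
    X = states[tau + 1:]
    Z = zetas[:X.shape[0]]
    return X.T @ Z / X.shape[0]

H0 = markov_estimate(states, zetas, 0)   # approximately B
H1 = markov_estimate(states, zetas, 1)   # approximately A @ B
```

Because the ζ_i are independent of everything up to time i, each summand is a martingale difference, which is what makes the martingale law of large numbers applicable to this estimator.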
Section 4 states the main theoretical results and characterizes the convergence rates. Section 5 provides simulation results on an industrial process example to illustrate the effectiveness of our proposed strategy. Section 6 summarizes the related literature. Finally, Section 7 gives concluding remarks and discusses future directions. 2 PROBLEM FORMULATION . We consider the control of the following discrete-time linear system:

x_{k+1} = A x_k + B u_k + w_k , (1)

where x_k ∈ R^n is the state vector, u_k ∈ R^p is the input vector, and w_k ∈ R^n is the process noise. We assume x_0 ∼ N(0, X_0), w_k ∼ N(0, W), and that x_0, w_0, w_1, . . . are pairwise independent. We also assume w.l.o.g. that (A, B) is controllable. We consider control policies of the form

π : R^n × R^q → R^p × R^q , (u_k, ξ_{k+1}) = π(x_k, ξ_k) , (2)

which can be either deterministic or stochastic, where ξ_k ∈ R^q is the internal state of the policy. Notice that we allow the policy to be non-Markovian through ξ_k, and we use the simplified notation u_k = π(x_k) if π is Markovian. The performance of a policy π is characterized by the infinite-horizon quadratic cost

J_π = limsup_{T→∞} (1/T) E[ Σ_{k=0}^{T−1} x_k^⊤ Q x_k + u_k^⊤ R u_k ] , (3)

where Q ⪰ 0, R ≻ 0 are known weight matrices specified by the system operator. We denote the optimal cost and the optimal control law by

J* = inf_π J_π , π* ∈ argmin_π J_π . (4)

It is well known that the optimal policy is a linear function of the state, π*(x) = K* x, with associated cost J* = tr(W P*), where P* is the solution to the discrete-time algebraic Riccati equation

P* = Q + A^⊤ P* A − A^⊤ P* B (R + B^⊤ P* B)^{−1} B^⊤ P* A , (5)

and the linear feedback control gain K* is given by

K* = −(R + B^⊤ P* B)^{−1} B^⊤ P* A . (6)

Based on the definitions above, we use J_π − J* to measure the suboptimality gap of a specific policy π.
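For concreteness, equations (5) and (6) can be solved numerically. The sketch below uses SciPy's discrete-time algebraic Riccati solver on an illustrative system (the matrices are assumptions, not taken from the paper) and checks that the resulting closed loop is Schur stable.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative system and weights (assumed values).
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weight, Q >= 0
R = np.eye(1)          # input weight, R > 0
W = 0.1 * np.eye(2)    # process-noise covariance

# P* from the discrete-time algebraic Riccati equation (5).
P = solve_discrete_are(A, B, Q, R)

# K* from (6): K* = -(R + B^T P* B)^{-1} B^T P* A.
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Optimal average cost J* = tr(W P*).
J_star = np.trace(W @ P)

# Sanity check: the optimal closed loop A + B K* is Schur stable.
rho = max(abs(np.linalg.eigvals(A + B @ K)))
```

This is also the computation a certainty equivalent controller performs at each step, with A, B replaced by their current estimates.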
In the online LQR setting, the system and input matrices A, B are assumed unknown, and the learning process can be viewed as deploying a sequence of time-varying policies {π_k} with the dual objectives of exploring the system parameters and stabilizing the system. To characterize the safety of a learning process, we make the following definitions: Definition 1. A policy π is destabilizing if J_π = +∞. Definition 2. A learning process applying policies {π_k}_{k=0}^∞ is bounded-cost safe if π_k is not destabilizing for any time k and for any realization of the noise process. Notice that Definition 1 is a generalization of the common notion of a destabilizing linear feedback gain, i.e., π(x) = Kx is destabilizing when ρ(A + BK) ≥ 1, which is equivalent to J_π = +∞. Based on the notion of destabilizing policies, we propose the concept of bounded-cost safety in Definition 2, which requires that destabilizing policies, i.e., policies with unbounded cost, are never applied. It should be pointed out that bounded-cost safety does not guarantee the stability of trajectories, but it is an indicator of the reliability of a learning scheme. We assume the system is open-loop strictly stable, i.e., ρ(A) < 1. Indeed, if there is a known stabilizing linear feedback gain K_0, then we can apply the dual control scheme to the pre-stabilized system (A + BK_0, B) instead of (A, B). The existence of such a known stabilizing linear feedback gain is a standard assumption in previous works on online LQR (Mania et al., 2019; Simchowitz & Foster, 2020), and it is relatively easy to establish through coarse system identification (Dean et al., 2019; Faradonbeh et al., 2018b) or adaptive stabilization methods (Faradonbeh et al., 2018a; 2019). 3 ALGORITHM . The complete algorithm we propose for LQR dual control is presented in Algorithm 1. Its modules are described in the remainder of this section. 3.1 SAFE CONTROL POLICY .
This subsection describes the policy for determining the control input u_k. The first n + p steps are a warm-up period where purely random inputs are injected. Afterwards, in each step, we inject an exploitation term ũ plus a polynomially decaying exploratory noise (k+1)^{−β} ζ, where the decay rate β ∈ (0, 1/2) is a constant. The exploitation term ũ is a modified version of the certainty equivalent control input K̂_k x_k; this modification, described in Algorithm 2, is crucial to the safety of the learning process, as we detail below.

Algorithm 1 Safe LQR dual control
Input: state dimension n, input dimension p, exploratory noise decay rate β
1: for k = 0, 1, . . . , n + p − 1 do
2:   ξ_{k+1} ← 0
3:   Apply control input u_k ← (k+1)^{−β} ζ_k, where ζ_k ∼ N(0, I_p)
4: for k = n + p, n + p + 1, . . . do
5:   Observe the current state x_k
6:   for τ = 0, 1, . . . , n + p − 1 do
7:     Ĥ_{k,τ} ← (1/(k−τ)) Σ_{i=τ+1}^{k} (i−τ)^β [ x_i − Σ_{t=0}^{τ−1} Ĥ_{k,t} ũ_{i−t−1} ] ζ_{i−τ−1}^⊤
8:   Reconstruct Â_k, B̂_k from Ĥ_{k,0}, . . . , Ĥ_{k,n+p−1} using Algorithm 3
9:   Compute the certainty equivalent feedback gain K̂_k by replacing A, B with Â_k, B̂_k in (5), (6)
10:  Determine the policy π_k(·,·) ← π(·,·; k, K̂_k, β), where π is described by Algorithm 2
11:  (u_k, ξ_{k+1}) ← π_k(x_k, ξ_k); record ũ_k ← ũ, ζ_k ← ζ, where ũ, ζ are the corresponding variables generated when executing the policy
12:  Apply control input u_k

Algorithm 2 Safe policy π(x, ξ; k, K, β)
Input: arguments: system state x, policy internal state ξ; parameters: step k, linear feedback gain K, exploratory noise decay rate β
Output: control input u and next policy internal state ξ′, i.e., (u, ξ′) = π(x, ξ; k, K, β)
1: if ξ > 0 then
2:   ũ ← 0, ξ′ ← ξ − 1
3: else
4:   if max{‖K‖, ‖x‖} ≥ log k then
5:     ũ ← 0, ξ′ ← ⌊log k⌋
6:   else
7:     ũ ← Kx, ξ′ ← 0
8: u ← ũ + (k+1)^{−β} ζ, where ζ ∼ N(0, I)

In short, we stop injecting the exploitation input for ⌊log k⌋ + 1 consecutive steps if either the state norm ‖x_k‖ or the norm of the feedback gain ‖K̂_k‖ exceeds the threshold log k. Recall from (2) that ξ_k denotes the internal state of the policy; in Algorithm 1, ξ_k is a counter that records how many steps are left in the "non-action" period. Essentially, we utilize the innate stability of the system to prevent the state from exploding catastrophically. This "non-action" mechanism is a critical feature of our control design; without it, the controller learned from data may destabilize the system, albeit with small probability, causing system failure in practice and precluding almost sure performance guarantees in theory. We provide an ablation study of this "non-action" mechanism in Section 5.

We choose both the switching threshold and the length of the "non-action" period to be time-growing. The growing threshold corresponds to a diminishing degree of conservativeness, which is essential for the policy performance to converge to the optimal performance. Meanwhile, the lengthening "non-action" period rules out potential oscillation of the state caused by frequent switching of the controller (see Appendix B for an illustrative example of this oscillation phenomenon). In particular, it can be shown that the suboptimality gap incurred by the switching strategy scales as O(t M exp(−c M²)), where M is the switching threshold, t is the length of the "non-action" period, and c is a system-dependent constant (see Lemma 10 in Appendix A.1.2).
With both M and t growing as O ( log k ) , the contribution of our switching strategy to the overall suboptimality gap is merely Õ ( 1 ) .
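The switching logic of Algorithm 2 can be sketched in a few lines of Python. This is an illustration of the pseudocode, not the authors' implementation; in particular, interpreting ‖K‖ as the spectral norm and ‖x‖ as the Euclidean norm is an assumption.

```python
import numpy as np

def safe_policy(x, xi, k, K, beta, rng):
    """Sketch of Algorithm 2: returns (u, xi_next, u_tilde, zeta).

    xi is the internal counter of remaining "non-action" steps; the
    norm of K is taken as the spectral norm (an assumption)."""
    p = K.shape[0]                      # input dimension
    if xi > 0:                          # still inside a "non-action" period
        u_tilde, xi_next = np.zeros(p), xi - 1
    elif max(np.linalg.norm(K, 2), np.linalg.norm(x)) >= np.log(k):
        # Trigger a "non-action" period: floor(log k) further zero-input steps.
        u_tilde, xi_next = np.zeros(p), int(np.floor(np.log(k)))
    else:                               # certainty equivalent exploitation
        u_tilde, xi_next = K @ x, 0
    zeta = rng.standard_normal(p)       # exploration noise, N(0, I_p)
    u = u_tilde + (k + 1) ** (-beta) * zeta
    return u, xi_next, u_tilde, zeta
```

Note that a triggered period suppresses the exploitation input for ⌊log k⌋ + 1 steps in total: the triggering step itself plus the ⌊log k⌋ steps counted down by ξ; the exploration noise is injected throughout.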
This paper addresses the unconstrained stochastic linear-quadratic dual control problem with online parameter identification, where a near-optimal control policy is sought to minimize the infinite-horizon quadratic cost (i.e., stabilizing the linear dynamical system) while learning the initially unknown matrices A and B. Instead of deriving a regret bound that holds with high probability, as in standard certainty equivalent learning schemes, this paper focuses on almost sure convergence to the optimal control performance. The main contribution is a switched control strategy (a safety-augmented certainty equivalent controller) together with a parameter inference scheme that estimates the impulse response of the dynamical system rather than the matrices A and B directly. Theoretical analysis of the convergence rates of the inference error and control performance is provided. Simulation results on the Tennessee Eastman Process (TEP) validate the proposed LQ dual control algorithm.
This paper studies adaptive control for LQR and provides an algorithm that converges to the optimal policy almost surely in an asymptotic sense; the convergence rate is also provided. The paper assumes the system is open-loop stable, and switches to a zero control input (with polynomially decaying excitation noise) for a while when the state or control gain exceeds O(log(k)), which steers the state back to a smaller neighbourhood of 0 thanks to the open-loop stability of the original system. Theoretically, the paper shows that any switched policy generated by the algorithm guarantees a finite infinite-horizon-averaged cost, which is called "safe" in this paper. Further, the paper establishes the strong consistency of the Markov parameter estimation in Alg 3 with an error convergence rate. Besides, the paper shows that the learned policies converge to the optimal policy at a rate of O(1/sqrt(k)) in terms of the infinite-horizon-averaged cost. Lastly, the performance is evaluated with different parameter choices of the algorithm.
SP:01927b25e3408a5e889122868c0eee692be69e02
Safe Linear-Quadratic Dual Control with Almost Sure Performance Guarantee
1 INTRODUCTION . One of the most fundamental and well-studied problems in optimal control , Linear-Quadratic Regulation ( LQR ) has recently aroused renewed interest in the context of data-driven control and reinforcement learning . Considering it is usually challenging to obtain an exact system model from first principles , and that the system may slowly change over time due to various reasons , e.g. , component wear-out , data-driven regulation of unknown linear systems has become an active research problem in the intersection of machine learning and control , with recent works including e.g. , Dean et al . ( 2019 ) ; Mania et al . ( 2019 ) ; Cohen et al . ( 2019 ) ; Wagenmaker & Jamieson ( 2020 ) . In particular , from the perspective of reinforcement learning theory , the LQR problem has become a standard benchmark for continuous control . In this paper , we focus on the dual control ( Feldbaum , 1960 ) setting , also known as online adaptive control in the literature , where the same policy must be adopted to identify the system parameters and optimize the control objective , leading to the well-known exploration-exploitation dilemma . Recently , it was shown by Simchowitz & Foster ( 2020 ) that the optimal regret for this problem setting scales as Θ̃ ( √ T ) , which can be achieved with probability 1 − δ using a certainty equivalent control strategy , where the learner selects control inputs according to the optimal controller for the current estimate of the system while injecting time-decaying exploration noise . However , the strategy proposed in this work , like those in its predecessors ( Abbasi-Yadkori & Szepesvári , 2011 ; Dean et al. , 2018 ; Mania et al. , 2019 ) , may have a nonzero probability δ of failing . Furthermore , it shall be noticed that δ has been chosen as a fixed design parameter in the aforementioned works , which implies the probability of failing does not converge to zero even if the policy is run indefinitely . 
The above observation gives rise to the question that we address in this paper : Can we design a learning scheme for LQR dual control , such that the policy adopted in almost every trajectory converges to the optimal policy ? We identify that the above goal can hardly be achieved by a naive certainty equivalent learning scheme . Qualitatively , the system parameters learned from data are always corrupted by random noise , and as a result , the controller proposed in previous works may destabilize the system , albeit with a small probability , causing catastrophic system failure . Based on the above reasoning , we propose a notion of bounded-cost safety for the LQR dual control problem : we recognize a learning scheme to be safe if no destabilizing control policy is applied during the entire learning process . In this paper , we propose a learning scheme that satisfies the above definition of bounded-cost safety , and guarantees both the parameter inference error and the suboptimality gap of controller performance converge to zero almost surely . Our strategy consists of two parts : a safe switched controller and a parameter inference algorithm . The switched controller can be viewed as a safety-augmented version of the certainty equivalent controller : it normally selects control inputs according to the optimal linear feedback for the currently estimated system parameters , but falls back to a conservative controller for several steps when the actual state deviates significantly from the target state . We prove that this switching strategy ensures the bounded-cost safety of the learning process , while only inducing a suboptimality gap that decays exponentially as the switching threshold increases . 
For the parameter inference part , in contrast to the direct least-squares approach for estimating the matrices A , B widely adopted in the literature , we estimate the Markov parameters , also known as the impulse response of the system , based on the cross-correlation between the exploration noise and the system output , and establish the almost-sure convergence using a law of large numbers for martingales . We prefer this approach for its clarity in physical meaning and simplicity of convergence analysis , but we do not foresee substantial difficulty in replacing our parameter inference module with standard least-squares . We prove that under the above described learning scheme , the parameter inference error scales asO ( T−1/4+ ) , while the suboptimality gap of control performance scales as O ( T−1/2+ ) , where T is the number of time steps , and is an arbitrarily small positive number . Both the above results match the corresponding asymptotic rates in Simchowitz & Foster ( 2020 ) , which provides an rate-optimal algorithm for online LQR in the high-probability regime . The main contributions of this paper are as follows : 1 . We propose a practical notion of safety for the LQR dual control problem that has not been considered in the literature , and provide an instance of safe learning scheme based on a switching strategy . 2 . We prove almost sure convergence rates of parameter inference error and suboptimality gap of control performance for our scheme , which match the corresponding optimal rates in the high-probability regime . To the best of our knowledge , this is the first analysis of the almost sure convergence rate for online LQR . The rest of this paper is organized as follows : Section 2 gives a brief introduction of LQR , formulates the LQR dual control problem , and defines the performance metrics as well as the notion of boundedcost safe learning scheme . Section 3 presents and interprets our algorithm . 
Section 4 states the main theoretical results and characterizes the convergence rates. Section 5 provides simulation results on an industrial process example to illustrate the effectiveness of our proposed strategy. Section 6 summarizes the related literature. Finally, Section 7 gives concluding remarks and discusses future directions.

2 PROBLEM FORMULATION. We consider the control of the following discrete-time linear system:

x_{k+1} = A x_k + B u_k + w_k,  (1)

where x_k ∈ R^n is the state vector, u_k ∈ R^p is the input vector, and w_k ∈ R^n is the process noise. We assume x_0 ∼ N(0, X_0), w_k ∼ N(0, W), and that x_0, w_0, w_1, . . . are pairwise independent. We also assume w.l.o.g. that (A, B) is controllable. We consider control policies of the form

π : R^n × R^q → R^p × R^q,  (u_k, ξ_{k+1}) = π(x_k, ξ_k),  (2)

which can be either deterministic or stochastic, where ξ_k ∈ R^q is the internal state of the policy. Notice that the introduction of ξ_k allows the flexibility of the policy being non-Markovian, and we also use the simplified notation u_k = π(x_k) if π is Markovian. The performance of a policy π can be characterized by the infinite-horizon quadratic cost

J_π = lim sup_{T→∞} (1/T) E[ Σ_{k=0}^{T−1} x_k^⊤ Q x_k + u_k^⊤ R u_k ],  (3)

where Q ⪰ 0, R ≻ 0 are known weight matrices specified by the system operator. We denote the optimal cost and the optimal control law by

J* = inf_π J_π,  π* ∈ argmin_π J_π.  (4)

It is well-known that the optimal policy is a linear function of the state, π*(x) = K* x, with associated cost J* = tr(W P*), where P* is the solution to the discrete-time algebraic Riccati equation

P* = Q + A^⊤ P* A − A^⊤ P* B (R + B^⊤ P* B)^{−1} B^⊤ P* A,  (5)

and the linear feedback control gain K* can be determined by

K* = −(R + B^⊤ P* B)^{−1} B^⊤ P* A.  (6)

Based on the definitions above, we can use J_π − J* as a measure of the suboptimality gap of a specific policy π.
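As an illustration of the quantities in (4)–(6), the sketch below computes P* by fixed-point iteration of the Riccati equation (5) and the optimal gain K* from (6). The matrices A, B, Q, R are toy values chosen for the example, not from the paper.

```python
import numpy as np

def dare_iterate(A, B, Q, R, iters=500):
    """Solve P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA by fixed-point iteration,
    starting from P = Q (value iteration; converges for stabilizable/detectable systems)."""
    P = Q.copy()
    for _ in range(iters):
        G = R + B.T @ P @ B
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(G, B.T @ P @ A)
    return P

def lqr_gain(A, B, Q, R):
    """Return the optimal feedback gain K* (u = K x) and the Riccati solution P*."""
    P = dare_iterate(A, B, Q, R)
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

A = np.array([[1.1, 0.2], [0.0, 0.9]])  # open-loop unstable toy system
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K, P = lqr_gain(A, B, Q, R)
rho = max(abs(np.linalg.eigvals(A + B @ K)))  # closed-loop spectral radius
```

In practice a dedicated solver (e.g. `scipy.linalg.solve_discrete_are`) would replace the hand-rolled iteration; the loop is shown only to mirror Equation (5) term by term.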
In the online LQR setting, the system and input matrices A, B are assumed unknown, and the learning process can be viewed as deploying a sequence of time-varying policies {π_k} with the dual objectives of exploring the system parameters and stabilizing the system. To characterize the safety of a learning process, we make the following definitions: Definition 1. A policy π is destabilizing if J_π = +∞. Definition 2. A learning process applying policies {π_k}_{k=0}^∞ is bounded-cost safe if π_k is not destabilizing for any time k and for any realization of the noise process. Notice that Definition 1 is a generalization of the common notion of a destabilizing linear feedback gain, i.e., π(x) = Kx is destabilizing when ρ(A + BK) ≥ 1, which is equivalent to J_π = +∞. Based on the notion of destabilizing policies, we propose the concept of bounded-cost safety in Definition 2, which requires that destabilizing policies, i.e., policies with unbounded cost, are never applied. It should be pointed out that bounded-cost safety does not guarantee the stability of trajectories, but is an indicator of the reliability of a learning scheme. We assume the system is open-loop strictly stable, i.e., ρ(A) < 1. Indeed, if there is a known stabilizing linear feedback gain K_0, then we can apply the dual control scheme on the pre-stabilized system (A + BK_0, B) instead of (A, B). Existence of such a known stabilizing linear feedback gain is a standard assumption in previous works on online LQR (Mania et al., 2019; Simchowitz & Foster, 2020), and is relatively easy to establish through coarse system identification (Dean et al., 2019; Faradonbeh et al., 2018b) or adaptive stabilization methods (Faradonbeh et al., 2018a; 2019). 3 ALGORITHM. The complete algorithm we propose for LQR dual control is presented in Algorithm 1. The modules in the algorithm will be described later in this section. 3.1 SAFE CONTROL POLICY.
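Definition 1, specialized to linear feedback, can be checked directly via the spectral radius condition ρ(A + BK) ≥ 1. A tiny sketch with made-up matrices:

```python
import numpy as np

def is_destabilizing(A, B, K):
    """A linear feedback u = Kx is destabilizing iff rho(A + BK) >= 1."""
    return max(abs(np.linalg.eigvals(A + B @ K))) >= 1.0

A = np.array([[0.95, 0.0], [0.1, 0.8]])  # open-loop strictly stable: rho(A) < 1
B = np.array([[1.0], [0.0]])

zero_K = np.zeros((1, 2))                # applying no feedback keeps stability
bad_K = np.array([[0.2, 0.0]])           # pushes the top-left eigenvalue to 1.15
```

Because the open-loop system is strictly stable by assumption, the trivial policy K = 0 is always safe, which is exactly what the fallback controller in the next section exploits.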
This subsection describes the policy for determining the control input u_k. The first n + p steps are a warm-up period where purely random inputs are injected. Afterwards, in each step, we inject an exploitation term ũ plus a polynomially decaying exploratory noise (k + 1)^{−β} ζ, where the decay rate β ∈ (0, 1/2) is a constant. The exploitation term ũ is a modified version of the certainty equivalent control input K̂_k x_k, and this modification, described in Algorithm 2, is crucial to the safety of the learning process, as we detail below.

Algorithm 1 Safe LQR dual control
Input: State dimension n, input dimension p, exploratory noise decay rate β
1: for k = 0, 1, . . . , n + p − 1 do
2:   ξ_{k+1} ← 0
3:   Apply control input u_k ← (k + 1)^{−β} ζ_k, where ζ_k ∼ N(0, I_p)
4: for k = n + p, n + p + 1, . . . do
5:   Observe the current state x_k
6:   for τ = 0, 1, . . . , n + p − 1 do
7:     Ĥ_{k,τ} ← (1/(k − τ)) Σ_{i=τ+1}^{k} (i − τ)^β [ x_i − Σ_{t=0}^{τ−1} Ĥ_{k,t} ũ_{i−t−1} ] ζ_{i−τ−1}^⊤
8:   Reconstruct Â_k, B̂_k from Ĥ_{k,0}, . . . , Ĥ_{k,n+p−1} using Algorithm 3
9:   Compute the certainty equivalent feedback gain K̂_k by replacing A, B with Â_k, B̂_k in (5), (6)
10:  Determine the policy π_k(·, ·) ← π(·, ·; k, K̂_k, β), where π is described by Algorithm 2
11:  (u_k, ξ_{k+1}) ← π_k(x_k, ξ_k); record ũ_k ← ũ, ζ_k ← ζ, where ũ, ζ are the corresponding variables generated when executing the policy
12:  Apply control input u_k

Algorithm 2 Safe policy π(x, ξ; k, K, β)
Input: Arguments: system state x, policy internal state ξ; Parameters: step k, linear feedback gain K, exploratory noise decay rate β
Output: Control input u and next policy internal state ξ′, i.e., (u, ξ′) = π(x, ξ; k, K, β)
1: if ξ > 0 then
2:   ũ ← 0, ξ′ ← ξ − 1
3: else
4:   if max{‖K‖, ‖x‖} ≥ log k then
5:     ũ ← 0, ξ′ ← ⌊log k⌋
6:   else
7:     ũ ← Kx, ξ′ ← 0
8: u ← ũ + (k + 1)^{−β} ζ, where ζ ∼ N(0, I)

In short, we stop injecting the exploitation input for ⌊log k⌋ + 1 consecutive steps if either the state norm ‖x_k‖ or the norm of the feedback gain ‖K̂_k‖ exceeds the threshold log k. Recall from (2) that we use ξ_k to denote the internal state of the policy; in Algorithm 1, ξ_k is a counter that records how many steps are left in the "non-action" period. Essentially, we utilize the innate stability of the system to prevent the state from exploding catastrophically. This "non-action" mechanism is a critical feature of our control design, without which the controller learned from data may destabilize the system, albeit with a small probability, causing system failure in practice and precluding almost sure performance guarantees in theory. We provide an ablation study of this "non-action" mechanism in Section 5. We choose both the switching threshold and the length of the "non-action" period to be time-growing. The enlarging threshold corresponds to a diminishing degree of conservativeness, which is essential for the policy performance to converge to the optimal performance. Meanwhile, the prolonged "non-action" period rules out the potential oscillation of the state caused by frequent switching of the controller (see Appendix B for an illustrative example of this oscillation phenomenon). In particular, it can be shown that the suboptimality gap incurred by the switching strategy scales as O(tM exp(−cM²)), where M is the switching threshold, t is the length of the "non-action" period, and c is a system-dependent constant (see Lemma 10 in Appendix A.1.2).
With both M and t growing as O ( log k ) , the contribution of our switching strategy to the overall suboptimality gap is merely Õ ( 1 ) .
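The switched policy of Algorithm 2 can be sketched in a few lines. This is a simplified rendering: the thresholds, the ⌊log k⌋ fallback length, and the decaying exploratory noise follow the pseudocode above, while the vector shapes and the random generator interface are implementation assumptions.

```python
import numpy as np

def safe_policy(x, xi, k, K, beta, rng):
    """One step of the switched safe policy (a sketch of Algorithm 2).

    xi is the policy's internal state: a counter of remaining "non-action" steps.
    Returns the applied input u, the next counter xi', and the recorded (u_tilde, zeta).
    """
    p = K.shape[0]  # input dimension
    if xi > 0:
        # Still inside the fallback period: no exploitation input.
        u_tilde, xi_next = np.zeros(p), xi - 1
    elif max(np.linalg.norm(K), np.linalg.norm(x)) >= np.log(k):
        # Threshold exceeded: fall back to "non-action" for floor(log k) more steps.
        u_tilde, xi_next = np.zeros(p), int(np.floor(np.log(k)))
    else:
        # Certainty equivalent exploitation.
        u_tilde, xi_next = K @ x, 0
    zeta = rng.standard_normal(p)
    u = u_tilde + (k + 1) ** (-beta) * zeta  # decaying exploratory noise
    return u, xi_next, u_tilde, zeta

# Example call with made-up values: small state, small gain -> exploitation branch.
rng = np.random.default_rng(0)
u, xi_next, u_tilde, zeta = safe_policy(np.zeros(2), 0, 100,
                                        np.array([[0.1, 0.1]]), 0.25, rng)
```

Note that the current step plus the ξ′ = ⌊log k⌋ counted-down steps give the ⌊log k⌋ + 1 consecutive "non-action" steps described in the text.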
This paper addresses the problem of combining system identification and optimal control in an online framework for stable linear systems with full-state sensing. Despite the restrictions of linearity, stability, and full-state sensing, the proposed problem formulation is still challenging, and I am unaware of any results which provide the same optimality guarantees provided by the authors in the online framework. The paper itself is unusually well-conceived, well-written, well-structured, and clear, and makes very efficient use of notation.
SP:01927b25e3408a5e889122868c0eee692be69e02
iLQR-VAE : control-based learning of input-driven dynamics with applications to neural data
1 INTRODUCTION . The mammalian brain is a complex , high-dimensional system , containing billions of neurons whose coordinated dynamics ultimately drives behaviour . Identifying and interpreting these dynamics is the focus of a large body of neuroscience research , which is being facilitated by the advent of new experimental techniques that allow large-scale recordings of neural populations ( Jun et al. , 2017 ; Stosiek et al. , 2003 ) . A range of methods have been developed for learning dynamics from data ( Buesing et al. , 2012 ; Gao et al. , 2016 ; Duncker et al. , 2019 ; Archer et al. , 2015 ; Hernandez et al. , 2018 ; She and Wu , 2020 ; Kim et al. , 2021 ; Nguyen et al. , 2020 ) . These methods all specify a generative model in the form of a flexible latent dynamical system driven by process noise , coupled with an appropriate observation model . However , neural recordings are typically only made in a small selection of brain regions , leaving many areas unobserved which might provide relevant task-related input to the recorded one ( s ) . Yet , the aforementioned methods perform Bayesian inference of state trajectories directly , and therefore do not support inference of external input ( which they effectively treat as process noise and marginalize out ) . Indeed , simultaneous learning of latent dynamics and inference of unobserved control inputs is a challenging problem that involves teasing apart momentary variations in the data that can be attributed to the system ’ s internal transition function , and those that need to be explained by unobserved inputs . This distinction can be achieved by introducing external control in the form of abrupt changes in the latent state transition function , and inferring these switching events ( Ghahramani and Hinton , 2000 ; Linderman et al. , 2017 ) . More recently , Pandarinath et al . 
(2018) introduced LFADS, a sequential variational autoencoder (VAE) that performs inference at the level of external inputs as well as initial latent states. The inferred inputs were shown to be congruent with task-induced perturbations in various reaching tasks in primates (Pandarinath et al., 2018; Keshtkaran and Pandarinath, 2019). Here, we introduce iLQR-VAE, a new method for learning input-driven latent dynamics from data. As in LFADS, we use an input-driven sequential VAE to encode observations into a set of initial conditions and external inputs driving an RNN generator. However, while LFADS uses a separate, bidirectional RNN as the encoder, here we substitute the inference network with an optimization-based recognition model that relies on the powerful iterative linear quadratic regulator algorithm (iLQR; Li and Todorov, 2004). iLQR solves an optimization problem that finds a mode of the exact posterior over inputs for the current setting of generative parameters. This ensures that the encoder (mean) remains optimal for every update of the decoder, thus reducing the amortization gap (Cremer et al., 2018). Moreover, having the recognition model be implicitly defined by the generative model stabilizes training, prevents posterior collapse (thus circumventing the need for tricks such as KL warmup), and greatly reduces the number of (hyper-)parameters. While iLQR-VAE could find applications in many fields as a general approach to learning stochastic nonlinear dynamical systems, here we focus on neuroscience case studies. We first demonstrate in a series of synthetic examples that iLQR-VAE can recover the true dynamics in both autonomous and input-driven systems. Next, we show state-of-the-art performance on monkey M1 population recordings during two types of reaching tasks (O'Doherty et al., 2018; Churchland et al., 2010).
In particular, we show that hand kinematics can be accurately decoded from inferred latent state trajectories, and that the inferred inputs are consistent with recently proposed theories of motor preparation.

2 METHOD. iLQR-VAE models a set of temporal observations, such as behavioural and/or neural recordings, through a shared input-driven nonlinear latent dynamical system (Figure S1). The input encapsulates process noise (as in traditional latent dynamics models), initial inputs that set the initial condition of the dynamics, and any meaningful task-related control input. In this section, we describe the architecture of the generative model, and the control-based variational inference strategy used for training the model and making predictions.

2.1 GENERATIVE MODEL. We consider the following generative model:

latent state: z_{t+1} = f_θ(z_t, u_t, t)  (1)
observations: o_t | z_t ∼ p_θ(o_t | z_t)  (2)

where u_t ∈ R^m, z_t ∈ R^n and o_t ∈ R^{n_o} are the input, latent state and observations at time t, respectively. Here, observations may comprise either neural activity, behavioural variables, or both; the distinction will be made later where relevant. We use the notation θ to denote the set of all parameters of the generative model. We use u_0 to set the initial condition z_1 = f_θ(0, u_0, 0) of the network¹. This way, the latent state trajectory of the network z(u) = {z_1, . . . , z_T} is entirely determined by the input sequence u = {u_0, . . . , u_T} and the state transition function f_θ(·), according to Equation 1. For f_θ(·), we use either standard linear or GRU-like RNN dynamics (see Appendix B ¹Note that when m < n, u_0 can only reach an m-dimensional subspace of initial conditions, which could be limiting. We can circumvent this problem by spreading u_0 over multiple surrogate time bins before the start of the trial, i.e. introduce {u_{−n/m}, . . .
, u_{−2}, u_{−1}, u_0} together with an appropriate dependence of f_θ on t ≤ 0 in Equation 1, such that each of these surrogate inputs targets a different latent subspace with purely integrating ("sticking") linear dynamics before t = 1. for details). For the likelihoods, we use Gaussian or Poisson distributions with means given by linear or nonlinear readouts of the network state of the form ō_t = h(C z_t + b) (Appendix C). We place a Gaussian prior over u_{t≤0}. We then consider two alternative choices for the prior over u_{t>0}. The first is a Gaussian prior

p_θ(u_{t>0}) = N(0, S²)  (3)

with S = diag(s_1, . . . , s_m). In many settings, however, we expect inputs to enter the system in a sparse manner. To explicitly model this, we introduce a second prior over u in the form of a heavy-tailed distribution constructed hierarchically by assuming that the ith input at time t > 0 is

u_t^i = s_i ε_t^i √(ν/α_t)  (4)

where s_i > 0 is a scale factor, ε_t^i ∼ N(0, 1) is independent across i and t, and α_t ∼ χ²_ν is a shared scale factor drawn from a chi-squared distribution with ν degrees of freedom. Thus, inputs are spatially and temporally independent a priori, such that any spatio-temporal structure in the observations will have to be explained by the coupled dynamics of the latent states. Moreover, the heavy-tailed nature of this prior allows for strong inputs when they are needed. Finally, the fact that the scale factor is shared across input dimensions means that inputs are either all weak or potentially all strong at the same time for all input channels, expressing the prior belief that inputs come as shared events. This hierarchical construction induces a multivariate Student prior at each time step:

p_θ(u_t) = Γ[(ν + m)/2] / ( Γ[ν/2] (νπ)^{m/2} |S| ) · [1 + (1/ν) u_t^⊤ S^{−2} u_t]^{−(ν + m)/2}  (5)

where S = diag(s_1, . . . , s_m). Note that both S and ν are parameters of the generative model, which we will learn.
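The generative model of Equations 1–4 can be sketched end to end: sample inputs from the hierarchical heavy-tailed prior, then roll out the latent dynamics. This is a minimal illustration, not the paper's trained model: the linear transition f_θ(z, u) = W_z z + W_u u, the readout, and all matrices and sizes are assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, n_o, T, nu = 4, 2, 3, 100, 5.0
s = np.array([1.0, 0.5])                   # per-channel prior scales s_i

# Hierarchical heavy-tailed prior on the inputs (Eq. 4):
# u_t^i = s_i * eps_t^i * sqrt(nu / alpha_t), eps ~ N(0,1), alpha_t ~ chi^2_nu.
eps = rng.standard_normal((T + 1, m))
alpha = rng.chisquare(nu, size=(T + 1, 1))  # one shared scale per time step
u = s * eps * np.sqrt(nu / alpha)           # marginally multivariate Student-t

# A minimal linear instance of the generative model (Eqs. 1-2):
# f_theta(z, u) = W_z z + W_u u, with linear readout means C z_t + b.
W_z = 0.9 * np.eye(n)
W_u = rng.standard_normal((n, m))
C = rng.standard_normal((n_o, n))
b = np.zeros(n_o)

z = W_u @ u[0]                              # z_1 = f_theta(0, u_0, 0)
zs = []
for t in range(1, T + 1):
    zs.append(z)
    z = W_z @ z + W_u @ u[t]
zs = np.stack(zs)                           # latent trajectory z(u), shape (T, n)
obs_mean = zs @ C.T + b                     # per-step observation means
```

Note how the trajectory is a deterministic function of the sampled input sequence, as stated after Equation 1: all stochasticity lives in u.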
We discuss the relative advantages of the Student and Gaussian priors in Appendix J.

2.2 ILQR-VAE: A NOVEL CONTROL-BASED VARIATIONAL INFERENCE STRATEGY. To train the model, we optimize θ to maximize the log-likelihood of observing a collection of independent observation sequences O = {o^(1), . . . , o^(K)}, or "trials", given by:

log p_θ(O) = Σ_{k=1}^{K} log ∫ p_θ(o^(k) | z(u)) p_θ(u) du.  (6)

As the integral is in general intractable, we resort to an amortized variational inference strategy by introducing a recognition model q_φ(u | o^(k)) to approximate the posterior p_θ(u | o^(k)). Following standard practice (Kingma and Welling, 2013; Rezende et al., 2014), we thus train the model by maximizing the evidence lower bound (ELBO):

L(O, θ, φ) = Σ_k E_{q_φ(u|o^(k))} [ log p_θ(o^(k) | u) + log p_θ(u) − log q_φ(u | o^(k)) ]  (7)
           = Σ_k E_{q_φ(u|o^(k))} [ Σ_{t=1}^{T} log p_θ(o_t^(k) | z_t) + log p_θ(u_t) − log q_φ(u_t | o^(k)) ]  (8)
           ≤ log p_θ(O)  (9)

with respect to both θ and φ. Here, the main novelty is the use of an optimization-based recognition model. We reason that maximizing the exact log posterior, i.e. computing

u*(o^(k)) = argmax_u log p_θ(u | o^(k))  (10)
          = argmax_u [ Σ_{t=1}^{T} log p_θ(o_t^(k) | u) + log p_θ(u_t) ]  (11)

subject to the generative dynamics of Equations 1 and 2, is a standard nonlinear control problem: log p_θ(o_t^(k) | u) acts as a running cost penalizing momentary deviations between desired outputs o_t and the actual outputs caused by a set of controls u, and log p_θ(u_t) acts as an energetic cost on those controls. Importantly, there exists a general-purpose, efficient algorithm to solve such nonlinear control problems: iLQR (Li and Todorov, 2004; Appendix D).
We thus propose to use a black-box iLQR solver to parameterize the mean of the recognition density q_φ(u | o) for any o, and to model uncertainty separately using a multivariate Gaussian density common to all trials. Therefore, we parametrize the recognition model as follows:

q_φ(u | o) = N(u; u*(o), Σ_s ⊗ Σ_t)  (12)
with u*(o) = iLQRsolve(o, θ),  (13)

where we use a separable posterior covariance (the Kronecker product of a spatial factor Σ_s and a temporal factor Σ_t). To optimize the ELBO, we estimate the expectation in Equation 8 by drawing samples from q_φ(u | o^(k)) and using the reparameterization trick (Kingma et al., 2015) to obtain gradients. A major complication that would normally preclude the use of optimization-based recognition models is the need to differentiate through the mean of the posterior. In this case, this involves differentiating through an entire optimization process. Using automatic differentiation within the iLQR solver is in general impractically expensive memory-wise. However, recent advances in differentiable model predictive control enable implicit differentiation through iLQRsolve with a memory cost that does not depend on the number of iterations (Amos et al., 2018; Blondel et al., 2021; Appendix E).
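Sampling from the Kronecker-factored posterior in (12) never requires materializing the full (Tm × Tm) covariance: with Cholesky factors Σ_t = L_t L_t^⊤ and Σ_s = L_s L_s^⊤, the matrix-variate draw u = u* + L_t E L_s^⊤, with E an iid standard-normal matrix, has covariance Σ_s ⊗ Σ_t under column-major vectorization. A sketch with made-up Σ_s, Σ_t and a placeholder u* standing in for the iLQR mode:

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 20, 3

u_star = np.zeros((T, m))  # placeholder for the iLQR-computed posterior mode u*(o)

# Illustrative covariance factors: an AR(1)-style temporal kernel and a small
# spatial correlation matrix (both positive definite by construction).
Sigma_t = np.exp(-0.5 * np.abs(np.subtract.outer(np.arange(T), np.arange(T))))
Sigma_s = np.array([[1.0, 0.2, 0.0],
                    [0.2, 1.0, 0.1],
                    [0.0, 0.1, 1.0]])
L_t = np.linalg.cholesky(Sigma_t)
L_s = np.linalg.cholesky(Sigma_s)

def sample_q(n_samples):
    """Reparameterized samples u = u* + L_t E L_s^T, E ~ iid N(0, 1).

    vec(L_t E L_s^T) = (L_s kron L_t) vec(E), so the samples have
    covariance Sigma_s kron Sigma_t, matching Eq. (12).
    """
    E = rng.standard_normal((n_samples, T, m))
    return u_star + np.einsum("ij,njk,lk->nil", L_t, E, L_s)
```

This is also the form used by the reparameterization trick: gradients flow through u* and through the two Cholesky factors, at O(T² + m²) storage instead of O(T²m²).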
This paper presents a new approach for inference in a model that simultaneously provides latent dynamics, initial conditions, and - importantly - external inputs. This approach is enabled by using the outcome of an optimization algorithm (iLQR) in the recognition model, recently enabled by other work in the field. The paper demonstrates the use of this approach on simulated data and real neural data, and compares to other classic and contemporary models of nonlinear dynamical systems.
SP:c0d0cff3b0191686d9dae0eaecfe0019711a3966
The paper proposes a control-based variational inference approach that learns latent neural dynamics in an input-driven SSM. It utilizes iLQR in the recognition model, which transforms inference into an optimal-control problem. The recognition model in the proposed method is implicitly defined by the generative model and thus reduces the number of free parameters compared to existing methods. The proposed method is evaluated on a synthetic chaotic attractor and real-world neural recordings.
iLQR-VAE : control-based learning of input-driven dynamics with applications to neural data
1 INTRODUCTION . The mammalian brain is a complex , high-dimensional system , containing billions of neurons whose coordinated dynamics ultimately drives behaviour . Identifying and interpreting these dynamics is the focus of a large body of neuroscience research , which is being facilitated by the advent of new experimental techniques that allow large-scale recordings of neural populations ( Jun et al. , 2017 ; Stosiek et al. , 2003 ) . A range of methods have been developed for learning dynamics from data ( Buesing et al. , 2012 ; Gao et al. , 2016 ; Duncker et al. , 2019 ; Archer et al. , 2015 ; Hernandez et al. , 2018 ; She and Wu , 2020 ; Kim et al. , 2021 ; Nguyen et al. , 2020 ) . These methods all specify a generative model in the form of a flexible latent dynamical system driven by process noise , coupled with an appropriate observation model . However , neural recordings are typically only made in a small selection of brain regions , leaving many areas unobserved which might provide relevant task-related input to the recorded one ( s ) . Yet , the aforementioned methods perform Bayesian inference of state trajectories directly , and therefore do not support inference of external input ( which they effectively treat as process noise and marginalize out ) . Indeed , simultaneous learning of latent dynamics and inference of unobserved control inputs is a challenging problem that involves teasing apart momentary variations in the data that can be attributed to the system ’ s internal transition function , and those that need to be explained by unobserved inputs . This distinction can be achieved by introducing external control in the form of abrupt changes in the latent state transition function , and inferring these switching events ( Ghahramani and Hinton , 2000 ; Linderman et al. , 2017 ) . More recently , Pandarinath et al . 
( 2018 ) introduced LFADS , a sequential variational autoencoder ( VAE ) that performs inference at the level of external inputs as well as initial latent states . The inferred inputs were shown to be congruent with task-induced perturbations in various reaching tasks in primates ( Pandarinath et al. , 2018 ; Keshtkaran and Pandarinath , 2019 ) . Here , we introduce iLQR-VAE , a new method for learning input-driven latent dynamics from data . As in LFADS , we use an input-driven sequential VAE to encode observations into a set of initial conditions and external inputs driving an RNN generator . However , while LFADS uses a separate , bidirectional RNN as the encoder , here we substitute the inference network with an optimization-based recognition model that relies on the powerful iterative linear quadratic regulator algorithm ( iLQR , Li and Todorov , 2004 ) . iLQR solves an optimization problem that finds a mode of the exact posterior over inputs for the current setting of generative parameters . This ensures that the encoder ( mean ) remains optimal for every update of the decoder , thus reducing the amortization gap ( Cremer et al. , 2018 ) . Moreover , having the recognition model be implicitly defined by the generative model stabilizes training , prevents posterior collapse ( thus circumventing the need for tricks such as KL warmup ) , and greatly reduces the number of ( hyper- ) parameters . While iLQR-VAE could find applications in many fields as a general approach to learning stochastic nonlinear dynamical systems , here we focus on neuroscience case studies . We first demonstrate in a series of synthetic examples that iLQR-VAE can recover the true dynamics in both autonomous and input-driven systems . Next , we show state-of-the-art performance on monkey M1 population recordings during two types of reaching tasks ( O ’ Doherty et al. , 2018 ; Churchland et al. , 2010 ) .
In particular , we show that hand kinematics can be accurately decoded from inferred latent state trajectories , and that the inferred inputs are consistent with recently proposed theories of motor preparation .

2 METHOD . iLQR-VAE models a set of temporal observations , such as behavioural and/or neural recordings , through a shared input-driven nonlinear latent dynamical system ( Figure S1 ) . The input encapsulates both process noise ( as in traditional latent dynamics models ) , initial inputs that set the initial condition of the dynamics , and any meaningful task-related control input . In this section , we describe the architecture of the generative model , and the control-based variational inference strategy used for training the model and making predictions .

2.1 GENERATIVE MODEL . We consider the following generative model :

latent state : $z_{t+1} = f_\theta(z_t, u_t, t)$ (1)
observations : $o_t \mid z_t \sim p_\theta(o_t \mid z_t)$ (2)

where $u_t \in \mathbb{R}^m$ , $z_t \in \mathbb{R}^n$ and $o_t \in \mathbb{R}^{n_o}$ are the input , latent state and observations at time t , respectively . Here , observations may comprise either neural activity , behavioural variables , or both – the distinction will be made later where relevant . We use the notation θ to denote the set of all parameters of the generative model . We use $u_0$ to set the initial condition $z_1 = f_\theta(0, u_0, 0)$ of the network ( see footnote 1 ) . This way , the latent state trajectory of the network $z(u) = \{z_1, \ldots, z_T\}$ is entirely determined by the input sequence $u = \{u_0, \ldots, u_T\}$ and the state transition function $f_\theta(\cdot)$ , according to Equation 1 . For $f_\theta(\cdot)$ , we use either standard linear or GRU-like RNN dynamics ( see Appendix B for details ) . For the likelihoods , we use Gaussian or Poisson distributions with means given by linear or nonlinear readouts of the network state of the form $\bar{o}_t = h(C z_t + b)$ ( Appendix C ) . We place a Gaussian prior over $u_{t \le 0}$ . We then consider two alternative choices for the prior over $u_{t > 0}$ . The first is a Gaussian prior

$p_\theta(u_{t>0}) = \mathcal{N}(0, S^2)$ (3)

with $S = \mathrm{diag}(s_1, \ldots, s_m)$ . In many settings however , we expect inputs to enter the system in a sparse manner . To explicitly model this , we introduce a second prior over u in the form of a heavy-tailed distribution constructed hierarchically by assuming that the ith input at time t > 0 is

$u_{it} = s_i \, \epsilon_{it} \sqrt{\nu / \alpha_t}$ (4)

where $s_i > 0$ is a scale factor , $\epsilon_{it} \sim \mathcal{N}(0, 1)$ is independent across i and t , and $\alpha_t \sim \chi^2_\nu$ is a shared scale factor drawn from a chi-squared distribution with ν degrees of freedom . Thus , inputs are spatially and temporally independent a priori , such that any spatio-temporal structure in the observations will have to be explained by the coupled dynamics of the latent states . Moreover , the heavy-tailed nature of this prior allows for strong inputs when they are needed . Finally , the fact that the scale factor is shared across input dimensions means that inputs are either all weak or potentially all strong at the same time for all input channels , expressing the prior belief that inputs come as shared events . This hierarchical construction induces a multivariate Student prior at each time step :

$p_\theta(u_t) = \frac{\Gamma[(\nu+m)/2]}{\Gamma[\nu/2] \, (\nu\pi)^{m/2} \, |S|} \left[ 1 + \frac{1}{\nu} u_t^T S^{-2} u_t \right]^{-(\nu+m)/2}$ (5)

where $S = \mathrm{diag}(s_1, \ldots, s_m)$ . Note that both S and ν are parameters of the generative model , which we will learn .

Footnote 1 : Note that when m < n , $u_0$ can only reach an m-dimensional subspace of initial conditions , which could be limiting . We can circumvent this problem by spreading $u_0$ over multiple surrogate time bins before the start of the trial , i.e . introduce $\{u_{-n/m}, \ldots, u_{-2}, u_{-1}, u_0\}$ together with an appropriate dependence of $f_\theta$ on $t \le 0$ in Equation 1 , such that each of these surrogate inputs targets a different latent subspace with purely integrating ( “ sticking ” ) linear dynamics before t = 1 .
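As a concrete illustration, the generative model above can be sketched as follows. This is a minimal toy sketch, assuming linear dynamics in place of the GRU-like RNN; all dimensions, names and parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

# Toy sketch of the generative model of Eqs. 1-5. A linear transition
# stands in for f_theta; values are illustrative, not the paper's.
rng = np.random.default_rng(0)
n, m, no, T = 4, 2, 3, 50            # latent, input, observation dims; horizon

A = 0.9 * np.eye(n)                  # stand-in for f_theta: z_{t+1} = A z_t + B u_t
B = rng.standard_normal((n, m))
C = rng.standard_normal((no, n))     # linear readout: mean o_t = C z_t + b
b = np.zeros(no)
s = 0.5 * np.ones(m)                 # prior scales, S = diag(s_1, ..., s_m)

# Gaussian prior over inputs (Eq. 3) ...
u_gauss = s * rng.standard_normal((T + 1, m))
# ... or the hierarchical Student prior (Eq. 4): shared chi-squared scale
# per time step makes all input channels weak or strong together.
nu = 4.0
alpha = rng.chisquare(nu, size=T + 1)
u_student = s * rng.standard_normal((T + 1, m)) * np.sqrt(nu / alpha)[:, None]

def rollout(u):
    """Eq. 1: the input sequence fully determines the latent trajectory."""
    z = np.zeros((T, n))
    z[0] = B @ u[0]                  # u_0 sets the initial condition
    for t in range(1, T):
        z[t] = A @ z[t - 1] + B @ u[t]
    return z

z = rollout(u_student)
o = z @ C.T + b + 0.1 * rng.standard_normal((T, no))   # Gaussian likelihood (Eq. 2)
print(z.shape, o.shape)
```

Swapping `u_student` for `u_gauss` switches between the two priors while leaving the rollout unchanged, mirroring how the paper treats the prior as a modelling choice.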
We discuss the relative advantages of the Student and Gaussian priors in Appendix J .

2.2 ILQR-VAE : A NOVEL CONTROL-BASED VARIATIONAL INFERENCE STRATEGY . To train the model , we optimize θ to maximize the log-likelihood of observing a collection of independent observation sequences $O = \{o^{(1)}, \ldots, o^{(K)}\}$ , or “ trials ” , given by :

$\log p_\theta(O) = \sum_{k=1}^K \log \int p_\theta(o^{(k)} \mid z(u)) \, p_\theta(u) \, du$ . (6)

As the integral is in general intractable , we resort to an amortized variational inference strategy by introducing a recognition model $q_\phi(u \mid o^{(k)})$ to approximate the posterior $p_\theta(u \mid o^{(k)})$ . Following standard practice ( Kingma and Welling , 2013 ; Rezende et al. , 2014 ) , we thus train the model by maximizing the evidence lower-bound ( ELBO ) :

$\mathcal{L}(O, \theta, \phi) = \sum_k \mathbb{E}_{q_\phi(u \mid o^{(k)})} \left[ \log p_\theta(o^{(k)} \mid u) + \log p_\theta(u) - \log q_\phi(u \mid o^{(k)}) \right]$ (7)
$= \sum_k \mathbb{E}_{q_\phi(u \mid o^{(k)})} \left[ \sum_{t=1}^T \log p_\theta(o^{(k)}_t \mid z_t) + \log p_\theta(u_t) - \log q_\phi(u_t \mid o^{(k)}) \right]$ (8)
$\le \log p_\theta(O)$ . (9)

with respect to both θ and φ . Here , the main novelty is the use of an optimization-based recognition model . We reason that maximizing the exact log posterior , i.e . computing

$u^\star(o^{(k)}) = \mathrm{argmax}_u \log p_\theta(u \mid o^{(k)})$ (10)
$= \mathrm{argmax}_u \left[ \sum_{t=1}^T \log p_\theta(o^{(k)}_t \mid u) + \log p_\theta(u_t) \right]$ (11)

subject to the generative dynamics of Equations 1 and 2 , is a standard nonlinear control problem : $\log p_\theta(o^{(k)}_t \mid u)$ acts as a running cost penalizing momentary deviations between desired outputs $o_t$ and the actual outputs caused by a set of controls u , and $\log p_\theta(u_t)$ acts as an energetic cost on those controls . Importantly , there exists a general purpose , efficient algorithm to solve such nonlinear control problems : iLQR ( Li and Todorov , 2004 ; Appendix D ) .
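The structure of the single-sample Monte-Carlo ELBO estimate in Equation 8 can be illustrated on a toy 1-D Gaussian model. This is a hedged sketch: the recognition mean here is a fixed placeholder rather than an iLQR solution, and all values are illustrative:

```python
import numpy as np

# Single-sample estimate of the three ELBO terms (Eq. 8) on a toy model.
rng = np.random.default_rng(1)

def log_gauss(x, mu, var):
    """Log-density of a 1-D Gaussian N(mu, var) at x."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

o = 1.3                       # one observation
mu_q, var_q = 1.0, 0.2        # recognition density q(u|o); mu_q is a placeholder
u = mu_q + np.sqrt(var_q) * rng.standard_normal()   # reparameterized sample

log_lik   = log_gauss(o, u, 0.5)       # log p(o|u): reconstruction term
log_prior = log_gauss(u, 0.0, 1.0)     # log p(u):   prior (energetic cost)
log_q     = log_gauss(u, mu_q, var_q)  # log q(u|o): entropy term
elbo = log_lik + log_prior - log_q
print(float(elbo))
```

The control-problem reading of Equation 11 corresponds to maximizing `log_lik + log_prior` over `u`, which is exactly what the iLQR solver does in the full model.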
We thus propose to use a black-box iLQR solver to parameterize the mean of the recognition density $q_\phi(u \mid o)$ for any o , and to model uncertainty separately using a multivariate Gaussian density common to all trials . Therefore , we parametrize the recognition model as follows :

$q_\phi(u \mid o) = \mathcal{N}(u \, ; \, u^\star(o) , \Sigma_s \otimes \Sigma_t)$ (12)
with $u^\star(o) = \mathrm{iLQRsolve}(o, \theta)$ . (13)

where we use a separable posterior covariance ( the Kronecker product of a spatial factor $\Sigma_s$ and a temporal factor $\Sigma_t$ ) . To optimize the ELBO , we estimate the expectation in Equation 8 by drawing samples from $q_\phi(u \mid o^{(k)})$ and using the reparameterization trick ( Kingma et al. , 2015 ) to obtain gradients . A major complication that would normally preclude the use of optimization-based recognition models is the need to differentiate through the mean of the posterior . In this case , this involves differentiating through an entire optimization process . Using automatic differentiation within the iLQR solver is in general impractically expensive memory-wise . However , recent advances in differentiable model predictive control enable implicit differentiation through iLQRsolve with a memory cost that does not depend on the number of iterations ( Amos et al. , 2018 ; Blondel et al. , 2021 ; Appendix E ) .
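Sampling from a separable posterior like that of Equation 12 can exploit the Kronecker structure so the full mT × mT covariance is never formed. A minimal sketch, assuming an AR(1)-style temporal factor and a zero placeholder for the iLQR mean (both illustrative assumptions):

```python
import numpy as np

# Reparameterized sample from N(u*, Sigma_s kron Sigma_t) via Cholesky
# factors of the two small factors; up to vectorization convention,
# vec(Ls @ E @ Lt.T) has covariance Sigma_t kron Sigma_s.
rng = np.random.default_rng(2)
m, T = 2, 30
u_star = np.zeros((m, T))            # placeholder for iLQRsolve(o, theta)

Ls = np.linalg.cholesky(np.array([[1.0, 0.3],
                                  [0.3, 1.0]]))        # Sigma_s = Ls @ Ls.T
St = 0.5 ** np.abs(np.subtract.outer(np.arange(T),
                                     np.arange(T)))    # AR(1)-style Sigma_t
Lt = np.linalg.cholesky(St)                            # Sigma_t = Lt @ Lt.T

eps = rng.standard_normal((m, T))    # standard normal noise
u = u_star + Ls @ eps @ Lt.T         # sample with separable covariance
print(u.shape)
```

The sample `u` is differentiable with respect to `u_star` and the two Cholesky factors, which is what the reparameterization trick requires.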
This paper proposes iLQR-VAE, a novel method that simultaneously learns latent dynamics and infers unobserved control inputs. The method relies on an iLQR solver and recent advances in implicit differentiation to maximize an evidence lower bound on the log-likelihood of the observations, inferring a conditional distribution over inputs as well as latent states. The authors show comparisons to other models (the closest one being LFADS) on toy datasets and on benchmark datasets for neural data analysis methods. iLQR-VAE is on par with state-of-the-art methods on many datasets, does not require extensive hyperparameter optimization, and allows for fast inference when the dimension of the latent processes is not too high.
SP:c0d0cff3b0191686d9dae0eaecfe0019711a3966
One for Many: an Instagram inspired black-box adversarial attack
1 INTRODUCTION . It is well known that deep learning models are susceptible to adversarial attacks , and much recent research in the field has been devoted to producing ever more reliable and effective attacks . Attack reliability is strictly connected to its applicability in real-world scenarios and to its ability to bypass potential defense mechanisms ; this is why the hard-label black-box setting ( also called decision-based attack ) and the transferability property of white-box attacks have gained increasing attention . Several techniques have been proposed to increase the transferability of both black-box and white-box attacks ( Cheng et al . ( 2020 ) ; Wu et al . ( 2020 ) ; Brendel et al . ( 2018 ) , among the most popular ) . One of the most commonly adopted techniques is to craft the attacks by using an ensemble of multiple models , as proposed by Liu et al . ( 2017 ) . Besides the categorization into white-box and black-box methods , attacks can be classified as restricted or unrestricted , considering the amount of modification they apply to the images in order to fool the systems . Restricted attacks : they generally use an Lp-norm distance to bound the modifications . The attacks are crafted with the aim of minimizing the differences between the original image and the adversarial one , even if it means having ( more or less ) visible artifacts . In Figures 1b and 1d , the attacks produced by Dong et al . ( 2019 ) and Wu et al . ( 2020 ) , two of the most recent state-of-the-art adversarial methods , are shown . In both cases the generated artifacts are clearly visible . Unrestricted attacks : they employ large and visible perturbations while keeping the images realistic , natural looking and non-suspicious . The idea is to obtain images that can admit great differences from the original one but , beyond a direct comparison with the original , can not be distinguished from any other real ( possibly filtered ) image .
In Figures 1f and 1h , the attacks produced by ACE-Ins ( Zhao et al. , 2020 ) and Colorfool ( Shahin Shamsabadi et al. , 2020 ) are shown . The differences between the original image and the adversarial one are evident but , if the modifications are good enough , looking only at the adversarial image we might not be able to tell that it is an attacking image . Another aspect that has to be taken into account for real-world attacks is the number of queries to the victim model that are necessary to craft effective attacks . All the systems proposed in the literature need a huge number of queries ; even systems built to work with limited access to the victim model require several thousand queries to produce reliable attacks . In this paper we propose a system to craft reliable , effective and highly transferable attacks by composing image filters . Previous work ( e.g . Destylization ) has shown that Instagram-inspired filters can have adversarial , but limited , effects when applied individually . Hence , we decided to study the effects of filter composition with the aim of increasing adversarial effectiveness . Moreover , in the usual photo-editing process people use more than one filter in order to obtain the desired effect , so composing filters also simulates a more realistic scenario . This kind of attack offers multiple benefits : ( i ) it is naturally robust to detection methods that look for noise , injected patterns and irregularities in the high-frequency components of the image ( Moosavi-Dezfooli et al. , 2018 ; Liao et al. , 2018 ) ; ( ii ) it is naturally robust to gradient-masking defense methods ( and their numerical estimates ) ; ( iii ) it produces natural-looking , artifact-free images ; ( iv ) it produces adversarial attacks that can not be distinguished from the many other filtered images produced every day , especially on social media platforms .
The system works in the pure hard-label black-box setting and implements a multi-network approach to increase attack transferability . It can reach , with artifact-free images , a transferability rate higher than 70 % in the case of unsecured networks and , even more interestingly , higher than 60 % in the case of adversarially trained networks . Moreover , the system requires a very low number of queries to find an attack , around 640 in the average case , and this number can be limited by construction by adjusting the algorithm parameters . In the experiments presented here the maximum number of allowed queries is 1610 . Our contribution can be summarized as follows : ( i ) we propose the AGV-multinetwork attack , a system able to craft powerful adversarial attacks capable of fooling even secured , adversarially trained models by composing Instagram-inspired image filters ; ( ii ) we empirically demonstrate the high transferability of these attacks and we compare our results with other state-of-the-art systems ; ( iii ) we empirically demonstrate the efficiency of our system in terms of queries to the victim model and we compare our results with other state-of-the-art systems . 2 BACKGROUND . 2.1 ADVERSARIAL MACHINE LEARNING . Given an input image x ∈ X ⊂ R^d and its corresponding label y , let F be a neural network classifier that ( correctly ) predicts the class label for the input image x : F ( x ) = y . An adversarial attack attempts to modify the input image x by adding a perturbation δ , yielding an adversarial image x∗ = x + δ , such that the classifier is misled into making a wrong prediction , i.e . F ( x∗ ) ≠ F ( x ) . If we consider the type of the applied perturbation δ , attacks can be classified as restricted or unrestricted . In the restricted case , the modifications applied to the original image are usually small and bounded by an Lp-norm distance measure , forcing the adversarial image x∗ to be as close as possible to the original one .
On the contrary , unrestricted attacks use large perturbations without Lp-bounded constraints , manipulating the image in order to create photo-realistic adversarial examples . In this case the objective is not to limit the modifications of pixels but to limit the human perception that a modification has been applied ( Shahin Shamsabadi et al. , 2020 ; Zhao et al . ; Wang et al. , 2021 ) . 2.2 IMAGE FILTERS . We implemented ten of the most popular Instagram filters using Python3 and the Pillow , OpenCV and Numpy libraries : Clarendon , Juno , Reyes , Gingham , Lark , Hudson , Slumber , Stinson , Rise , and Perpetua . Each filter has distinct characteristics and effects given by different levels of contrast , saturation , brightness , shadows , etc . For instance , Clarendon brightens and highlights a photo , Juno adds saturation and warmth making the colors more intense , Rise gives a warm glow by mixing a radial gradient with a light sepia tone , while Hudson bumps up the blues giving a colder feel . Examples of single filter applications are shown in Figure 6 in Appendix A . Each filter is parameterized by two parameters that have to be optimized by the algorithm : intensity α and strength s. The role of the parameter α is to alter the intensity of each basic component inside each filter implementation , such as brightness , contrast , saturation , edge enhancement , gamma correction and more . The parameter s is used to control the strength of the filter application and is defined through the convex interpolation between the original image x and the transformed image x∗ :

strength ( x , x∗ , s ) = ( 1.0 − s ) · x + s · x∗ ( 1 )

thus , if s = 0 the output of the filter is the original image , while with s = 1 the filter returns the modified image x∗ . 3 APPROACH AND ALGORITHM . 3.1 PROBLEM DEFINITION . Given a set S = { f1 , f2 , . . .
, fm } of Instagram-inspired image filters as described in Section 2.2 , and a clean image x , we want to find a sequence of n parametrized filters { fk1 ( α1 , s1 ) , . . . , fkn ( αn , sn ) } able to produce an adversarial attack against a classifier model F starting from the image x , that is

F ( x ) ≠ F ( x∗ ) , where x∗ = fkn ( αn , sn ) ( . . . fk2 ( α2 , s2 ) ( fk1 ( α1 , s1 ) ( x ) ) ) ( 2 )

3.2 APPROACH . The algorithm used to optimize the sequence of filters and their parameters is inspired by Baia et al . ( 2021 ) . In our proposal the universal approach proposed by Baia et al . has been transformed into a multi-network , per-image approach , where the attack is crafted and optimized for just one image with respect to multiple target models . Since the algorithm uses an evolutionary approach , the fitness function used to guide the search can have any ( even non-differentiable ) form , and we decided to use it to induce a sort of generalization ability by attacking several models simultaneously . This is inspired by Liu et al . ( 2017 ) , where the authors suggested that attacking an ensemble of multiple networks simultaneously can generate much stronger adversarial examples . Instead of attacking an ensemble network , we decided to exploit the optimization ability of the evolutionary approach and try to simultaneously attack all the ( or at least k ) reference networks . Therefore , our goal is to find one adversarial perturbation per image that can fool many deep learning models . This objective is much harder to achieve than attacking a single model , but it guarantees complete ( 100 % ) transferability of the attack towards the reference networks . The proposed multi-network approach has the objective of obtaining adversarial images with better transferability , avoiding the natural overfitting trap we can fall into when the attacks are crafted by using a unique reference model . 3.3 ALGORITHM .
The optimization method consists of two nested evolutionary algorithms : the outer algorithm , using a GA approach , in charge of finding the sequence of filters to use , and the inner algorithm , based on ES , used to choose the filter parameter values . The general structure of the multi-network algorithm is shown in Algorithm 1 . Given a set S = { f1 , f2 , . . . , fm } of image filters , the outer algorithm genotype ( with length n ) is encoded as a list of n integers k1 , . . . , kn ∈ { 1 , . . . , m } representing the corresponding filters in S , while the inner algorithm genotype is represented by a list containing the pairs of parameters used for each selected filter ( ( α1 , s1 ) , . . . , ( αn , sn ) ) . The associated phenotype is the sequence of parametrized filters that generates the adversarial examples by applying the selected sequence of filters , with their corresponding optimized parameters , as described in Eq . 2 . 3.3.1 OUTER ALGORITHM . The outer optimization step is performed by a genetic algorithm : a population of N candidate perturbations is iteratively evolved towards better solutions . In order to breed a new generation , population members are randomly selected and the crossover and mutation operations are performed . The quality of the candidates is evaluated based on their fitness values and , at the end of each iteration , the N best individuals are chosen for the next generation . Initial population : it is generated by randomly selecting l filters from the set S of available filters ; their parameters are initialized with default values equal to 1 . Crossover : a standard one-point crossover is used to generate new offspring from randomly selected members . Each child is guaranteed to inherit some genetic information from both parents , including the optimized parameters . For example , given two parent elements P1 = ( f′1 ( α′1 , s′1 ) , . . . , f′n ( α′n , s′n ) ) and P2 = ( f′′1 ( α′′1 , s′′1 ) , . . . , f′′n ( α′′n , s′′n ) ) and crossover index i = 2 , we obtain the child element ( f′1 ( α′1 , s′1 ) , f′2 ( α′2 , s′2 ) , f′′3 ( α′′3 , s′′3 ) , . . . , f′′n ( α′′n , s′′n ) ) . Mutation : it is applied by substituting a filter with another one based on a mutation probability . The substitute filter is initialized with random parameter values . For example , considering the element P = ( f1 ( α1 , s1 ) , f2 ( α2 , s2 ) , f3 ( α3 , s3 ) , . . . , fn ( αn , sn ) ) and supposing that filters f1 and f3 have been chosen to mutate into g1 and g2 , the newly generated element is P∗ = ( g1 ( α∗1 , s∗1 ) , f2 ( α2 , s2 ) , g2 ( α∗2 , s∗2 ) , . . . , fn ( αn , sn ) ) , where α∗i , s∗i are randomly extracted from the parameter domains . Selection : at the end of each iteration , we choose the N best individuals from the set of 2N candidates ( parents and offspring ) according to their fitness values . This process is repeated until the algorithm exhausts the allowed number of generations .
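A toy end-to-end sketch may help tie the pieces above together: the strength blend of Eq. 1, the filter composition of Eq. 2, and the outer GA's one-point crossover and mutation. The scalar "pixel", the stand-in filters, and the parameter ranges are all illustrative assumptions, not the paper's Instagram filters or settings:

```python
import random

# Toy sketch of the attack pipeline: Eq. 1, Eq. 2, and the GA operators.
random.seed(0)

def strength(x, x_star, s):
    """Eq. 1: convex interpolation between original and filtered image."""
    return (1.0 - s) * x + s * x_star

def brightness(x, alpha, s):                 # illustrative stand-in filter
    return strength(x, min(x + 0.1 * alpha, 1.0), s)

def contrast(x, alpha, s):                   # illustrative stand-in filter
    return strength(x, max(min(0.5 + alpha * (x - 0.5), 1.0), 0.0), s)

FILTERS = [brightness, contrast]

def apply_sequence(x, genotype):
    """Eq. 2: apply the parameterized filters left to right."""
    for k, alpha, s in genotype:
        x = FILTERS[k](x, alpha, s)
    return x

def random_gene():
    """One gene: (filter index, alpha, s); ranges are assumptions."""
    return (random.randrange(len(FILTERS)),
            random.uniform(0, 2), random.uniform(0, 1))

def crossover(p1, p2, i):
    """One-point crossover at index i: child = p1[:i] + p2[i:]."""
    return p1[:i] + p2[i:]

def mutate(geno, p_mut=0.3):
    """Replace each gene with a fresh random one with probability p_mut."""
    return [random_gene() if random.random() < p_mut else g for g in geno]

p1 = [random_gene() for _ in range(4)]
p2 = [random_gene() for _ in range(4)]
child = mutate(crossover(p1, p2, 2))
out = apply_sequence(0.4, child)             # "pixel" stays in [0, 1]
print(len(child), 0.0 <= out <= 1.0)
```

In the actual system the fitness of each genotype would come from querying the victim model(s) in the hard-label setting, which is what the inner ES and the query budget control.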
This paper introduces a new family of black-box adversarial attacks. These attacks are constructed by composing Instagram-filter-based transformations in the input space. Perturbations in the input space are unrestricted and large but produce only natural-looking images. The input is transformed with 10 different filters, each controlled by two variables: alpha, controlling the intensity, and s, controlling the strength. To find the optimal values of these parameters, an evolutionary approach is used. Comprehensive experiments are performed to show the efficacy of the attacks.
The paper proposes a black-box attack method that uses an evolutionary algorithm to find the best image filter parameters that can achieve untargeted attacks. Model ensembling is also used in the AE generation to improve attack transferability. The proposed method is compared to other similar image filtering-based AE generation methods. Results show that the proposed method is able to outperform most of the related works in terms of transferability.
One for Many: an Instagram inspired black-box adversarial attack
1 INTRODUCTION. It is well known that deep learning models are susceptible to adversarial attacks, and much recent research in the field has been devoted to producing ever more reliable and effective attacks. The reliability of an attack is strictly connected to its applicability in real-world scenarios and to its ability to bypass potential defense mechanisms; this is why the hard-label black-box setting (also called the decision-based setting) and the transferability property of white-box attacks have gained increasing attention. Several techniques have been proposed to increase the transferability of both black-box and white-box attacks (Cheng et al. (2020); Wu et al. (2020); Brendel et al. (2018), among the most popular). One of the most commonly adopted techniques is to craft the attacks using an ensemble of multiple models, as proposed by Liu et al. (2017). Besides the categorization into white-box and black-box methods, attacks can be classified as restricted or unrestricted, depending on the amount of modification they apply to the images in order to fool the systems. Restricted attacks: they generally use an Lp-norm distance to bound the modifications. The attacks are crafted with the aim of minimizing the differences between the original image and the adversarial one, even if this means introducing more or less visible artifacts. Figures 1b and 1d show the attacks produced by Dong et al. (2019) and Wu et al. (2020), two of the most recent state-of-the-art adversarial methods. In both cases the generated artifacts are clearly visible. Unrestricted attacks: they employ large and visible perturbations while keeping the images realistic, natural-looking and non-suspicious. The idea is to obtain images that may differ greatly from the original but, without a direct comparison with the original, cannot be distinguished from any other real (possibly filtered) image.
Figures 1f and 1h show the attacks produced by ACE-Ins (Zhao et al., 2020) and Colorfool (Shahin Shamsabadi et al., 2020). The differences between the original image and the adversarial one are evident but, if the modifications are good enough, looking only at the adversarial image we may not be able to tell that it is the attacking image. Another aspect that has to be taken into account for real-world attacks is the number of queries to the victim model needed to craft effective attacks. Most systems proposed in the literature need a huge number of queries and, even for systems built to work with limited access to the victim model, several thousand queries are needed to produce reliable attacks. In this paper we propose a system to craft reliable, effective and highly transferable attacks by composing image filters. Previous work (e.g., Destylization) has shown that Instagram-inspired filters can have adversarial, but limited, effects when applied individually. Hence, we decided to study the effects of filter composition with the aim of increasing the adversarial effectiveness. Moreover, in the usual photo-editing process people apply more than one filter to obtain the desired effect, so composing filters also simulates a more realistic scenario. This kind of attack offers multiple benefits: (i) it is naturally robust to detection methods that look for noise, injected patterns and irregularities in the high-frequency components of the image (Moosavi-Dezfooli et al., 2018; Liao et al., 2018); (ii) it is naturally robust to defense methods based on masking gradients (or their numerical estimates); (iii) it produces natural-looking, artifact-free images; (iv) it produces adversarial attacks that cannot be distinguished from the many other filtered images produced every day, especially on social media platforms.
The system works in the pure hard-label black-box setting and implements a multi-network approach to increase attack transferability. With artifact-free images it can reach a transferability rate higher than 70% against unsecured networks and, even more interestingly, higher than 60% against adversarially trained networks. Moreover, the system requires a very low number of queries to find an attack, around 640 in the average case, and this number can be limited by construction by adjusting the algorithm parameters. In the experiments presented here the maximum number of allowed queries is 1610. Our contribution can be summarized as follows: (i) we propose the AGV multi-network attack, a system able to craft powerful adversarial attacks, capable of fooling even secured, adversarially trained models, by composing Instagram-inspired image filters; (ii) we empirically demonstrate the high transferability of these attacks and compare our results with other state-of-the-art systems; (iii) we empirically demonstrate the efficiency of our system in terms of queries to the victim model and compare our results with other state-of-the-art systems. 2 BACKGROUND. 2.1 ADVERSARIAL MACHINE LEARNING. Given an input image x ∈ X ⊂ R^d and its corresponding label y, let F be a neural network classifier that (correctly) predicts the class label for the input image x: F(x) = y. An adversarial attack attempts to modify the input image x by adding a perturbation δ, yielding an adversarial image x* = x + δ, such that the classifier is misled into making a wrong prediction, i.e., F(x*) ≠ F(x). Depending on the type of the applied perturbation δ, attacks can be classified as restricted or unrestricted. In the restricted case, the modifications applied to the original image are usually small and bounded by an Lp-norm distance measure, forcing the adversarial image x* to be as close as possible to the original one.
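The Lp-norm budget that defines the restricted setting can be made concrete with a small check (a hedged sketch: the function names and the ε values are illustrative, not taken from the paper):

```python
import numpy as np

def lp_norm(delta, p=np.inf):
    """L_p norm of a perturbation, flattened over all pixels."""
    d = np.abs(delta).ravel()
    return d.max() if p == np.inf else (d ** p).sum() ** (1.0 / p)

def is_restricted(x, x_adv, eps, p=np.inf):
    """True if the perturbation x_adv - x stays within the L_p budget eps."""
    return lp_norm(x_adv - x, p) <= eps
```

A restricted attack keeps `is_restricted` true by construction; the filter-based attacks discussed below deliberately drop this constraint.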
On the contrary, unrestricted attacks use large perturbations, without Lp-bounded constraints, that manipulate the image in order to create photo-realistic adversarial examples. In this case the objective is not to limit the modifications to the pixels but to limit the human perception that a modification has been applied (Shahin Shamsabadi et al., 2020; Zhao et al.; Wang et al., 2021). 2.2 IMAGE FILTERS. We implemented ten of the most popular Instagram filters using Python 3 and the Pillow, OpenCV and NumPy libraries: Clarendon, Juno, Reyes, Gingham, Lark, Hudson, Slumber, Stinson, Rise, and Perpetua. Each filter has distinct characteristics and effects given by different levels of contrast, saturation, brightness, shadows, etc. For instance, Clarendon brightens and highlights a photo, Juno adds saturation and warmth making the colors more intense, Rise gives a warm glow by mixing a radial gradient with a light sepia tone, while Hudson bumps up the blues giving a colder feel. Examples of single-filter applications are shown in Figure 6 in Appendix A. Each filter is parameterized by two parameters that have to be optimized by the algorithm: intensity α and strength s. The role of α is to alter the intensity of each basic component inside each filter implementation, such as brightness, contrast, saturation, edge enhancement, gamma correction and many more. The parameter s controls the strength of the filter application and is defined via the convex interpolation between the original image x and the transformed image x*:

strength(x, x*, s) = (1 − s) · x + s · x*    (1)

Thus, if s = 0 the output of the filter is the original image, while with s = 1 the filter returns the fully modified image x*. 3 APPROACH AND ALGORITHM. 3.1 PROBLEM DEFINITION. Given a set S = {f1, f2, . . . , fm} of Instagram-inspired image filters as described in Section 2.2, and a clean image x, we want to find a sequence of n parameterized filters {f_{k1}(α1, s1), . . . , f_{kn}(αn, sn)} that produces an adversarial attack against a classifier model F starting from the image x, that is:

F(x) ≠ F(x*),  where  x* = f_{kn}(αn, sn)( . . . f_{k2}(α2, s2)( f_{k1}(α1, s1)(x) ) )    (2)

3.2 APPROACH. The algorithm used to optimize the sequence of filters and their parameters is inspired by Baia et al. (2021). In our proposal, the universal approach of Baia et al. is transformed into a multi-network, per-image approach, where the attack is crafted and optimized for a single image with respect to several target models. Since the algorithm uses an evolutionary approach, the fitness function used to guide the search can have any (even non-differentiable) form, and we use it to induce a form of generalization by attacking several models simultaneously. This is inspired by Liu et al. (2017), where the authors suggested that attacking an ensemble of multiple networks simultaneously can generate much stronger adversarial examples. Instead of attacking an ensemble network, we exploit the optimization ability of the evolutionary approach and try to simultaneously attack all (or at least k) of the reference networks. Therefore, our goal is to find one adversarial perturbation per image that can fool many deep learning models. This objective is much harder to achieve than attacking a single model, but it guarantees complete (100%) transferability of the attack towards the reference networks. The proposed multi-network approach aims to obtain adversarial images with better transferability, avoiding the natural overfitting trap we can fall into when attacks are crafted using a single reference model. 3.3 ALGORITHM.
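Before describing the optimizer, the filter mechanics of Eqs. (1) and (2) can be sketched in a few lines (a hedged illustration: `brighten` below is a toy stand-in for the real Instagram-style filters, and applying the strength blend per filter inside the chain is our reading of how Eq. (1) composes within Eq. (2)):

```python
import numpy as np

def strength(x, x_star, s):
    """Eq. (1): convex interpolation between original image x and filtered image x*."""
    return (1.0 - s) * x + s * x_star

def apply_sequence(x, seq):
    """Eq. (2): apply a sequence of parameterized filters in order.
    `seq` is a list of (filter_fn, alpha, s) triples; filter_fn(x, alpha)
    returns the fully filtered image, and s blends it with its own input."""
    for f, alpha, s in seq:
        x = strength(x, f(x, alpha), s)
    return x
```

With a toy filter that simply scales pixel values, setting every s = 0 reduces `apply_sequence` to the identity, as stated after Eq. (1).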
The optimization method consists of two nested evolutionary algorithms: the outer algorithm, based on a genetic algorithm (GA), is in charge of finding the sequence of filters to use, while the inner algorithm, based on evolution strategies (ES), chooses the filter parameter values. The general structure of the multi-network algorithm is shown in Algorithm 1. Given a set S = {f1, f2, . . . , fm} of image filters, the outer-algorithm genotype (of length n) is encoded as a list of n integers k1, . . . , kn ∈ {1, . . . , m} representing the corresponding filters in S, while the inner-algorithm genotype is a list containing the pairs of parameters used for each selected filter: ((α1, s1), . . . , (αn, sn)). The associated phenotype is the sequence of parameterized filters that generates the adversarial example by applying the selected filters, with their corresponding optimized parameters, as described in Eq. 2. 3.3.1 OUTER ALGORITHM. The outer optimization step is performed by a genetic algorithm: a population of N candidate perturbations is iteratively evolved towards better solutions. To breed a new generation, population members are randomly selected and the crossover and mutation operations are performed. The quality of the candidates is evaluated based on their fitness values and, at the end of each iteration, the N best individuals are chosen for the next generation. Initial population: it is generated by randomly selecting l filters from the set S of available filters; their parameters are initialized with default values equal to 1. Crossover: a standard one-point crossover is used to generate new offspring from randomly selected members. Each child is guaranteed to inherit some genetic information from both parents, including the optimized parameters. For example, given two parent elements P1 = (f'_1(α'_1, s'_1), . . . , f'_n(α'_n, s'_n)) and P2 = (f''_1(α''_1, s''_1), . . . , f''_n(α''_n, s''_n)) and crossover index i = 2, we obtain the child element (f'_1(α'_1, s'_1), f'_2(α'_2, s'_2), f''_3(α''_3, s''_3), . . . , f''_n(α''_n, s''_n)). Mutation: it is applied by substituting a filter with another one according to a mutation probability. The substitute filter is initialized with random parameter values. For example, considering the element P = (f1(α1, s1), f2(α2, s2), f3(α3, s3), . . . , fn(αn, sn)) and supposing that filters f1 and f3 have been chosen to mutate into g1 and g2, the newly generated element is P* = (g1(α*_1, s*_1), f2(α2, s2), g2(α*_2, s*_2), . . . , fn(αn, sn)), where α*_i, s*_i are randomly drawn from the parameter domains. Selection: at the end of each iteration, we choose the N best individuals from the set of 2N candidates (parents and offspring) according to their fitness values. This process is repeated until the algorithm exhausts the allowed number of generations.
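The crossover, mutation and selection steps above can be sketched as follows (a minimal illustration, not the authors' implementation: individuals are lists of (filter_id, α, s) genes, and the mutation probability and parameter ranges are assumed values):

```python
import random

def crossover(p1, p2):
    """One-point crossover: the child keeps genes (filter id + parameters) from both parents."""
    i = random.randrange(1, len(p1))
    return p1[:i] + p2[i:]

def mutate(ind, filter_ids, p_mut=0.2):
    """With probability p_mut, replace a gene's filter and re-initialise its (alpha, s)."""
    return [(random.choice(filter_ids), random.uniform(0.5, 1.5), random.random())
            if random.random() < p_mut else gene
            for gene in ind]

def next_generation(pop, fitness, filter_ids, N):
    """Breed N offspring, then keep the N fittest of the 2N candidates (parents + offspring)."""
    offspring = []
    while len(offspring) < N:
        p1, p2 = random.sample(pop, 2)
        offspring.append(mutate(crossover(p1, p2), filter_ids))
    return sorted(pop + offspring, key=fitness, reverse=True)[:N]
```

In the paper the fitness would query the k reference networks on the filtered image; here it is left as an arbitrary callable, which is exactly what allows non-differentiable objectives.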
This paper proposes a nested evolutionary algorithm to generate adversarial perturbations in the black-box setting. Such perturbations are composed of various Instagram-inspired image filters and can simultaneously attack multiple neural networks. The authors claim that the attacks are semantically robust and have a low query cost.
ANOMALY DETECTION WITH FRAME-GROUP ATTENTION IN SURVEILLANCE VIDEOS
1 INTRODUCTION. Nowadays anomaly detection is useful for maintaining social security and conducting legal forensics. The ambiguous definition of abnormal events increases the difficulty of detection: for example, the appearance of a vehicle on a road is normal, while a vehicle on the sidewalk is abnormal. We therefore make the goal explicit so that consequent measures can be carried out: abnormal behavior is defined as rapid movement in slow-moving crowds, such as cycling, running, throwing from a height, etc. Based on this definition, the commonly used datasets for anomaly detection are integrated and relabeled. The new fusion dataset contains more scenes and anomaly types, so it is more challenging for anomaly detection, and subsequent experiments are performed on it. The test results show that the algorithm proposed in this paper has great advantages on many metrics. The end-to-end anomaly detection network shown in Fig. 1 has the following contributions: (1) The video group composed of multiple consecutive frames is the basic processing unit used to extract expressive features with the designed group-feature extractor. On the one hand, the spatial-temporal information is retained compared with single-image processing. On the other hand, the abnormality score of a single frame can be easily obtained without access to the whole video, which copes with the situation where video streams are the input. (2) The designed framework, composed of a group-feature extractor and a group-score mapper, can effectively obtain the abnormality score using the spatial-temporal information. An implicit-vector-based attention mechanism is used to weight the frame-group features: the more important the feature, the higher the weight. (3) The basic cross-entropy loss and an improved hinge loss are combined to improve the performance of the network.
The latter is devoted to making the score of abnormal frames greater than that of normal frames. The paper is organized as follows. Section 1 introduces the background of anomaly detection. Section 2 introduces the related work on anomaly detection. Section 3 mainly introduces the details of the proposed anomaly detection algorithm. Section 4 introduces the fusion dataset and then analyzes the experimental results. Section 5 gives the summary of the whole paper. 2 RELATED WORK. The challenges of video semantic analysis lie in the extraction and representation of video features. Video contains complex spatial texture information and temporal information. Multidimensional data provides more information but meanwhile contains much redundancy. How to extract low-redundancy, comprehensive and representative video features is one of the research focuses. Manual extraction of video features focuses on the extraction and analysis of low-level visual features, such as guided gradient histograms Xiao et al. (2014), optical flow maps Reddy et al. (2011), spatio-temporal points of interest Dollár et al. (2005), texture models Xiao et al. (2018), filtering models Zhang et al. (2018), etc. After obtaining the statistical information, a visual dictionary Roshtkhari & Levine (2013) or other methods are used to store the normal distribution, and a similarity criterion is then calculated to determine whether the target is abnormal. Luo & Wang (2019) explores multi-stream manual features for video representation: it constructs a three-dimensional video representation composed of a spatio-temporal vector and a positional vector, and improves the encoding method to make the extracted video representation more expressive. With the rapid development of deep learning, automatic feature extraction using neural networks has become a research hotspot. Zhou et al. (2016) uses 3D convolutional networks to detect abnormal events in surveillance video. In Sabokrou et al. (2017), the video frame is divided into several small areas, and each small area is fed into a 3D auto-encoding network combined with a 3D convolutional neural network to extract features and detect anomalies. Medel (2016) proposes a convolution-based long short-term memory network, which simultaneously extracts spatial and temporal information. Xu et al. (2015) first uses stacked auto-encoders to learn and fuse the appearance and motion characteristics of abnormal individuals, and then trains multiple single classifiers to calculate the abnormality score. Ionescu et al. (2019) uses one-to-many classifiers instead of single classifiers after obtaining multiple pseudo-anomaly classes from the trained normal behavior pattern. Hinami et al. (2017) uses multiple attributes of the same target to extract features. Due to the lack of anomalous videos and the variety of anomaly types, it is difficult to find a general model that covers all anomalous events. The auto-encoding network of Yuan et al. (2019) performs anomaly detection based on the reconstruction error. Hasan et al. (2016) uses a convolutional neural network to implement video anomaly detection; since the convolutional layers operate on a two-dimensional structure, temporal information is lost. Chong & Tay (2017) designs a spatio-temporal autoencoder that encodes a video sequence with spatial convolution and a Convolutional Long Short-Term Memory (ConvLSTM) Shi et al. (2015) structure, and then uses a symmetric structure, called the decoder, to convert the video encoding back into an image sequence. The abnormality score can then be obtained by calculating the Euclidean distance between the decoded images and the original images.
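The reconstruction-error scoring used by this autoencoder line of work can be sketched as follows (a hedged illustration; the min-max normalisation is a common convention, not taken from any specific paper cited above):

```python
import numpy as np

def reconstruction_errors(frames, decoded):
    """Per-frame Euclidean distance between original and decoded frames."""
    diff = (frames - decoded).reshape(len(frames), -1)
    return np.linalg.norm(diff, axis=1)

def normalized_anomaly_scores(errors, eps=1e-8):
    """Min-max normalise errors to [0, 1]; 1 = most anomalous frame."""
    return (errors - errors.min()) / (errors.max() - errors.min() + eps)
```

Frames the autoencoder reconstructs poorly (i.e., frames unlike the normal training data) receive scores close to 1.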
An & Cho (2015) proposes a variational autoencoder (VAE), which uses the results of video encoding to fit a distribution function. Jing & Yujin (2017) adds a gradient-difference constraint on top of the sparse denoising auto-encoding network, which helps the model detect abnormal behavior more effectively. For specific tasks, a model trained on the task at hand can often obtain better results. In Sultani et al. (2018), the video containing abnormal events is divided into several segments to form multiple video instances, and a fully connected network is designed to map the video features extracted by the C3D network Tran et al. (2015) into abnormality scores, so that the score of an abnormal instance is higher than the scores of instances containing only normal frames. The experiments in that paper show that the network achieves good results, but the detection process is disconnected. In order to minimize the interference to the input data and obtain the final score directly and in a timely manner, we design an end-to-end network, which uses both positive and negative samples to make the model more targeted. 3 PROPOSED METHOD. The paper proposes an end-to-end anomaly detection network, which operates on the frame-group formed by consecutive frames to obtain the abnormality score. The whole framework is composed of a group-feature extractor and a group-score mapper: the former operates on the raw frame-group to obtain the spatial-temporal group-feature, and the latter operates on the group-feature to obtain the abnormality score. 3.1 THE GROUP-FEATURE EXTRACTOR. The frame-group is defined as a structure consisting of τ consecutive frames, which contains rich spatial texture information and temporal change information. Fig. 2 shows the details of the group-feature extractor, where In_t^(0,Spa,Tem) and Out_t^(0,Spa,Tem) represent the input and output matrices at time t, respectively, and Spa and Tem denote the feature maps in the spatial feature extractor and the temporal feature extractor. Since the frame-group contains multiple consecutive frames, in order not to destroy the temporal information within them, we use a trainable convolution kernel to extract the spatial information of each single frame along the time dimension. Batch normalization Ioffe & Szegedy (2015) is then used to prevent gradient dispersion and accelerate model convergence, and an activation function makes the model non-linear. Two sets of 'convolution-normalization-activation' structures are used in the experiments: the first uses 128 convolution kernels of size 5 × 5 with stride 3, and the second uses 64 convolution kernels of size 3 × 3 with stride 2. After obtaining the spatial feature maps, a ConvLSTM further extracts spatial and temporal information using gate structures. The computation of the hidden features is shown in Equation (1), where the hidden feature Hid_t^Tem records the accumulated state information up to time t, ⊗ denotes the Hadamard product, σ and tanh denote the sigmoid and tanh non-linear activation functions, and Conv is the convolution operation. This paper uses two cascaded ConvLSTM layers to extract temporal information: the first layer uses 64 convolution kernels of size 3 × 3 and the second uses 32 kernels of the same size. We use 'same' padding to keep the sizes of the input and output feature maps equal.
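The spatial sizes produced by the two convolution blocks just described (5 × 5 kernels with stride 3, then 3 × 3 with stride 2) follow from standard convolution arithmetic. The sketch below is only a consistency check, not the authors' code; the input resolution is not stated in the paper, but assuming zero padding, a 233 × 233 input would yield 38 × 38 = 1444 features per map, matching the flattened length 1444 quoted later in Section 3.2:

```python
def conv_out_size(n, kernel, stride, padding=0):
    """Spatial output size of a convolution layer (no dilation)."""
    return (n + 2 * padding - kernel) // stride + 1

def extractor_out_size(h, w):
    """Sizes after the two blocks: 5x5 kernels, stride 3, then 3x3 kernels, stride 2."""
    h1, w1 = conv_out_size(h, 5, 3), conv_out_size(w, 5, 3)
    return conv_out_size(h1, 3, 2), conv_out_size(w1, 3, 2)
```

The ConvLSTM layers use 'same' padding and the 8 × 1 × 1 fusion convolution acts only along the temporal axis, so neither changes the spatial size.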
Hid_t^Tem = σ(Conv(Out_{t−1}^Tem, In_t^Tem, Hid_{t−1}^Tem)) ⊗ Hid_{t−1}^Tem + σ(Conv(Out_{t−1}^Tem, In_t^Tem, Hid_{t−1}^Tem)) ⊗ tanh(Conv(Out_{t−1}^Tem, In_t^Tem))    (1)

while the final output is given by Equation (2):

Out_t^Tem = σ(Conv(Out_{t−1}^Tem, In_t^Tem, Hid_t^Tem)) ⊗ tanh(Hid_t^Tem)    (2)

In order to better extract the spatial and temporal information, as well as the high-level and low-level features of the frame-group, we use a multi-level feature fusion structure to merge multi-level features of the frame-group; its output is regarded as the final feature representation of the frame-group. In the experiments, this structure is implemented with a 3D convolution Zhou et al. (2018) layer of size 8 × 1 × 1. 3.2 THE GROUP-SCORE MAPPER. In the group-score mapper, an attention mechanism Ilse et al. (2018) is used to increase the decisive influence of useful features and weaken the effect of irrelevant features on the result. A fully connected network makes the encoding of the group-feature more expressive while reducing the feature dimension, and group-level pooling maps the refined group-feature to the abnormality score of the video group. The specific process is shown in Fig. 3. The implicit-vector-based attention mechanism assigns different weights to different features so that the key features have a more important impact on the result and the interference of noise is suppressed. A trainable transformation matrix projects the original group feature into the implicit space, and the weight vector is then obtained through an inverse transformation matrix. Different from the attention mechanism over multiple instances in Ilse et al. (2018), this paper focuses on attention over the group-feature.
To be specific, the implicit vector is used to generate a weight vector for the original group feature; the implicit space has dimension 128 in the experiments. F_grp is the flattened Out_t^0, with length K = 1444; T denotes transposition; V and W are the feature-space transformation matrices; and Ψ_nl is a non-linear transformation function. The coefficient χ_k of the k-th element F_grp^(k) is therefore defined as in Equation (3):

χ_k = exp((W^T Ψ_nl(V F_grp^T))_k) / Σ_{i=1}^{K} exp((W^T Ψ_nl(V F_grp^T))_i),  k ∈ [1, K]    (3)

The weighted group feature F̃_grp is given by Equation (4):

F̃_grp = (χ_1 F_grp^(1), · · · , χ_K F_grp^(K)),  k ∈ [1, K]    (4)

The weighted group-feature then passes through two fully connected layers with Dropout to reduce the feature dimension and the computation, and to enhance the expressive ability of the feature. The output dimension of each of the two fully connected layers is 512, and the ReLU activation function is used. The Dropout rate is set to 0.5, that is, 50% of the neural units are randomly discarded and do not participate in each training iteration, which reduces the risk of overfitting. Group-pooling is used to obtain the final group score: a trainable weight matrix followed by the sigmoid function maps the refined group feature to the abnormality score. Positive samples, labeled 1, are frame-groups containing anomalies, while negative samples contain only normal frames. Denoting the trainable weight matrix for the refined group feature F̃_grp by Φ and the sigmoid by σ, the predicted group score B̂(t) corresponding to the input (In_t^0, · · · , In_{t+τ}^0) is defined as in Equation (5):

B̂(t) = σ(F̃_grp · Φ)    (5)
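Equations (3)-(5) can be sketched with NumPy (a hedged illustration: tanh stands in for the unspecified non-linearity Ψ_nl, the two intermediate fully connected layers are omitted, and the shapes are toy-sized):

```python
import numpy as np

def attention_weights(F_grp, V, W):
    """Eq. (3): softmax over W^T Psi(V F_grp); tanh stands in for Psi_nl.
    V, W have shape (d_implicit, K); F_grp has shape (K,)."""
    logits = W.T @ np.tanh(V @ F_grp)    # shape (K,)
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

def group_score(F_grp, V, W, Phi):
    """Eq. (4): element-wise re-weighting, then Eq. (5): sigmoid-mapped score."""
    F_tilde = attention_weights(F_grp, V, W) * F_grp
    return 1.0 / (1.0 + np.exp(-(F_tilde @ Phi)))
```

The softmax guarantees the attention coefficients are positive and sum to one, and the sigmoid keeps the group score in (0, 1), so it can be read directly as an abnormality probability.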
This paper proposes a novel neural network that maps short chunks of eight video frames to a final probability score (0-1) of abnormality. The network is trained and evaluated in a supervised fashion, using the Avenue, UMN and UCSD data (all three datasets capture semantic anomalies, e.g. pedestrians walking in opposite directions). The architecture comprises various concepts such as CNNs, multi-level features, ConvLSTM, attention & transformer, etc. Basically, the idea is to design a spatio-temporal feature from the chunk of video frames in an embedding where the features are easily discriminated into 0/1 (normality/abnormality).
ANOMALY DETECTION WITH FRAME-GROUP ATTENTION IN SURVEILLANCE VIDEOS
1 INTRODUCTION . Nowadays anomaly detection is useful to maintain social security and conduct legal forensics . Due to the ambiguous definition of abnormal events , it increases the difficulty of detection . For example , the appearance of a vehicle on a road is normal , while it is abnormal when a vehicle is on the sidewalk . So we make the goal clear in order to carry out consequent measures . Abnormal behavior is defined as rapid movements in slow moving crowds , such as cycling , running , throwing from a height , etc . Based on this definition , the commonly used datasets for anomaly detection are integrated and relabeled . The new fusion dataset contains more scenes and anomaly types , so it is more challenging for the anomaly detection . Subsequent experiments are performed on this new dataset . The test results show that the algorithm proposed in this paper has great advantages in many metrics . The end-to-end anomaly detection network shown in Fig . 1 has the following contributions : ( 1 ) The video group composed of consecutive multiple frames is the basic processing unit to extract expressive features with the designed group feature extractor . On the one hand , the spatial-temporal information could be retained comparing with single-image processing . On the other hand , the abnormality score of a single frame can be easily obtained without the access of the whole video , which can copy with the situation where video streams are the input . ( 2 ) The designed framework composed of group feature extractor and group score mapper can effectively obtain the abnormality score using the spatial-temporal information . Implicit vector-based attention mechanism is used to weight the frame-group features . The more important the feature is , the higher the weight is . ( 3 ) The basic cross-entropy loss and the improved hinge loss are united to improve the performance of the network . 
The latter encourages the scores of abnormal frames to be greater than those of normal frames. The paper is organized as follows. Section 1 introduces the background of anomaly detection. Section 2 reviews related work on anomaly detection. Section 3 presents the details of the proposed anomaly detection algorithm. Section 4 introduces the fusion dataset and analyzes the experimental results. Section 5 summarizes the whole paper. 2 RELATED WORK. The challenges of video semantic analysis lie in the extraction and representation of video features. Video contains complex spatial texture information as well as temporal information; multidimensional data provides more information but also contains much redundancy. How to extract low-redundancy, comprehensive, and representative video features is thus one of the research focuses. Manual extraction of video features concentrates on low-level visual features, such as gradient histograms Xiao et al. (2014), optical flow maps Reddy et al. (2011), spatio-temporal interest points Dollár et al. (2005), texture models Xiao et al. (2018), filtering models Zhang et al. (2018), etc. After the statistical information is obtained, a visual dictionary Roshtkhari & Levine (2013) or a similar method is used to store the normal distribution, and a similarity criterion is then computed to determine whether the target is abnormal. Luo & Wang (2019) explores multi-stream manual features for video representation: it constructs a three-dimensional video representation composed of a spatio-temporal vector and a positional vector, and improves the encoding method to make the extracted representation more expressive. With the rapid development of deep learning, automatic feature extraction with neural networks has become a research hotspot. Zhou et al.
(2016) uses 3D convolutional networks to detect abnormal events in surveillance video. In Sabokrou et al. (2017), the video frame is divided into several small areas, and each area is fed into a 3D auto-encoding network combined with a 3D convolutional neural network to extract features and detect anomalies. Medel (2016) proposes a convolution-based long-short term memory network that simultaneously extracts spatial and temporal information. Xu et al. (2015) first uses stacked auto-encoders to learn and fuse the appearance and motion characteristics of abnormal individuals, and then trains multiple one-class classifiers to calculate the abnormality score. Ionescu et al. (2019) uses one-versus-rest classifiers instead of one-class classifiers after obtaining multiple pseudo-anomaly classes from the trained normal behavior pattern. Hinami et al. (2017) uses multiple attributes of the same target to extract features. Due to the lack of anomalous videos and the variety of anomaly types, it is difficult to find a general model that covers all anomalous events. The auto-encoding network of Yuan et al. (2019) performs anomaly detection based on the reconstruction error. Hasan et al. (2016) uses a convolutional neural network for video anomaly detection; since the convolutional layers operate on two-dimensional structures, temporal information is lost. Chong & Tay (2017) designs a spatio-temporal autoencoder that encodes the video sequence with spatial convolutions and a Convolutional Long-Short Term Memory (ConvLSTM) Shi et al. (2015) structure, and then uses a symmetric decoder to convert the video encoding back into an image sequence. The abnormality score is obtained from the Euclidean distance between the decoded images and the original images.
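The reconstruction-based detectors above score a frame by how poorly an autoencoder trained only on normal data reproduces it. A minimal sketch with frames represented as flat pixel lists; the min-max normalization over a video's errors is a common convention in this line of work, not something the cited papers prescribe exactly:

```python
import math

def euclidean_score(frame, reconstruction):
    # Euclidean distance between a frame and its autoencoder reconstruction;
    # a larger distance means the frame is harder to reconstruct from normal
    # patterns, and is therefore treated as more anomalous.
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(frame, reconstruction)))

def normalized_scores(errors):
    # Min-max normalize the per-frame errors of one video into [0, 1]
    # (an illustrative convention, not part of the cited methods themselves).
    lo, hi = min(errors), max(errors)
    return [(e - lo) / (hi - lo) if hi > lo else 0.0 for e in errors]
```

A frame is then flagged as abnormal when its normalized score exceeds a chosen threshold.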
An & Cho (2015) proposes a variational autoencoder (VAE) approach, which uses the video encoding results to fit a distribution function. Jing & Yujin (2017) adds a gradient difference constraint to the sparse denoising auto-encoding network, which helps the model detect abnormal behavior more effectively. For specific tasks, a model trained for the task at hand can often obtain better results. In Sultani et al. (2018), videos containing abnormal events are divided into several segments to form multiple video instances, and a fully connected network is designed to map the video features extracted by the C3D network Tran et al. (2015) into abnormality scores, so that the score of an abnormal instance is higher than the scores of instances containing only normal frames. The experiments in that paper show that the network achieves good results, but the detection process consists of disconnected stages. In order to minimize the interference with the input data and obtain the final score directly and in a timely manner, we design an end-to-end network, which uses both positive and negative samples to make the model more targeted. 3 PROPOSED METHOD. This paper proposes an end-to-end anomaly detection network, which operates on the frame-group formed by consecutive frames to obtain the abnormality score. The whole framework is composed of a group feature extractor and a group score mapper: the former operates on the raw frame-group to obtain the spatial-temporal group-feature, and the latter maps the group-feature to the abnormality score. 3.1 THE GROUP-FEATURE EXTRACTOR. The frame-group is defined as a structure consisting of τ consecutive frames, which contains rich spatial texture information and temporal change information. Fig.
2 shows the details of the group-feature extractor, where In^{(0,Spa,Tem)}_t and Out^{(0,Spa,Tem)}_t represent the input and output matrices at time t, respectively; the superscripts Spa and Tem denote the feature maps of the spatial feature extractor and the temporal feature extractor, respectively. Since the frame-group contains multiple consecutive frames, in order not to destroy the temporal information within them, we use a trainable convolution kernel to extract the spatial information of each single frame along the time dimension. Batch normalization Ioffe & Szegedy (2015) is then used to prevent gradient dispersion and accelerate model convergence, and an activation function makes the model nonlinear. Two "convolution-normalization-activation" blocks are used in the experiments: the first uses 128 convolution kernels of size 5 × 5 with stride 3, and the second uses 64 kernels of size 3 × 3 with stride 2. After the spatial feature maps are obtained, ConvLSTM further extracts spatial and temporal information using gate structures. The computation of the hidden features is shown in Equation (1), where the hidden feature Hid^{Tem}_t records the accumulated state information up to time t, the symbol ⊗ denotes the Hadamard product, σ and tanh denote the sigmoid and tanh nonlinear activation functions, respectively, and Conv is the convolution operation. This paper uses two cascaded ConvLSTM layers to extract temporal information: the first layer uses 64 convolution kernels of size 3 × 3 and the second uses 32 kernels of the same size. We use "same" padding to keep the input and output feature maps the same size.
$$\mathrm{Hid}^{Tem}_t = \sigma\left(\mathrm{Conv}\left(\mathrm{Out}^{Tem}_{t-1}, \mathrm{In}^{Tem}_t, \mathrm{Hid}^{Tem}_{t-1}\right)\right) \otimes \mathrm{Hid}^{Tem}_{t-1} + \sigma\left(\mathrm{Conv}\left(\mathrm{Out}^{Tem}_{t-1}, \mathrm{In}^{Tem}_t, \mathrm{Hid}^{Tem}_{t-1}\right)\right) \otimes \tanh\left(\mathrm{Conv}\left(\mathrm{Out}^{Tem}_{t-1}, \mathrm{In}^{Tem}_t\right)\right) \quad (1)$$

The final output is shown in Equation (2):

$$\mathrm{Out}^{Tem}_t = \sigma\left(\mathrm{Conv}\left(\mathrm{Out}^{Tem}_{t-1}, \mathrm{In}^{Tem}_t, \mathrm{Hid}^{Tem}_t\right)\right) \otimes \tanh\left(\mathrm{Hid}^{Tem}_t\right) \quad (2)$$

In order to better extract the spatial and temporal information as well as the high-level and low-level features of the frame-group, we use a multi-level feature fusion structure to merge its multi-level features; the output of this structure is regarded as the final feature representation of the frame-group. In the experiments, the structure is implemented with a 3D convolution Zhou et al. (2018) layer of size 8 × 1 × 1. 3.2 THE GROUP-SCORE MAPPER. In the group-score mapper, the attention mechanism Ilse et al. (2018) is used to increase the decisive influence of useful features and weaken the effect of irrelevant features on the results. A fully connected network makes the encoding of the group-feature more expressive while reducing the feature dimensions, and group-level pooling maps the refined group-feature to the abnormality score of the video group. The specific process is shown in Fig. 3. The implicit-vector-based attention mechanism assigns different weights to different features so that the key features have a greater impact on the result and the interference of noise is suppressed: a trainable transformation matrix projects the original group feature into the implicit space, and the weight vector is then obtained by an inverse transformation matrix. Different from the attention mechanism over multiple instances in Ilse et al. (2018), this paper focuses on attention over the group-feature.
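The ConvLSTM gating in Equations (1)-(2) can be sketched at a single pixel, where the convolution over the concatenated inputs reduces to a weighted sum (i.e., a 1 × 1 convolution). This scalar reduction and the gate weights `w` are illustrative simplifications, not the paper's implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def convlstm_step(in_t, out_prev, hid_prev, w):
    """One scalar ConvLSTM update mirroring Equations (1)-(2).

    Conv(...) is replaced by a weighted sum over the same arguments;
    w maps each gate name to its list of input weights (hypothetical values).
    """
    f = sigmoid(w["f"][0] * out_prev + w["f"][1] * in_t + w["f"][2] * hid_prev)  # forget gate
    i = sigmoid(w["i"][0] * out_prev + w["i"][1] * in_t + w["i"][2] * hid_prev)  # input gate
    cand = math.tanh(w["c"][0] * out_prev + w["c"][1] * in_t)                    # candidate state
    hid_t = f * hid_prev + i * cand                                              # Equation (1)
    o = sigmoid(w["o"][0] * out_prev + w["o"][1] * in_t + w["o"][2] * hid_t)     # output gate
    out_t = o * math.tanh(hid_t)                                                 # Equation (2)
    return out_t, hid_t
```

In the actual extractor each gate is a full convolution over feature maps with "same" padding, so `hid_t` and `out_t` are feature maps rather than scalars, but the gating arithmetic is the same.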
To be specific, the implicit vector is used to generate a weight vector for the original group feature; the implicit space has a dimension of 128 in the experiments. F_grp is the flattened Out^0_t, with length K = 1444; T represents transposition; V and W are the feature space transformation matrices; and Ψ_nl is a nonlinear transformation function. The coefficient χ_k of the k-th element F^{(k)}_{grp} is then defined as Equation (3):

$$\chi_k = \frac{\exp\left(\left(W^T \Psi_{nl}\left(V F_{grp}^T\right)\right)_k\right)}{\sum_{i=1}^{K} \exp\left(\left(W^T \Psi_{nl}\left(V F_{grp}^T\right)\right)_i\right)}, \quad k \in [1, K] \quad (3)$$

The weighted group feature $\tilde{F}_{grp}$ is given by Equation (4):

$$\tilde{F}_{grp} = \left(\chi_1 F^{(1)}_{grp}, \cdots, \chi_K F^{(K)}_{grp}\right) \quad (4)$$

The weighted group-feature then passes through two fully connected layers with Dropout, which reduce the feature dimensions and computation while enhancing the feature's expressive ability. Each of the two fully connected layers has an output dimension of 512 and uses the ReLU activation function. The Dropout rate is set to 0.5, i.e., 50% of the neural units are randomly discarded and do not participate in each training iteration, which reduces the risk of overfitting. Group-pooling is used to obtain the final group score: a trainable weight matrix followed by the sigmoid function maps the refined group feature to the abnormality score. Positive samples, labeled 1, are frame-groups containing anomalies, while negative samples contain only normal frames. With the trainable weight matrix for the refined group feature $\tilde{F}_{grp}$ denoted Φ and the sigmoid function denoted σ, the predicted group score $\hat{B}(t)$ corresponding to the input $(In^0_t, \cdots, In^0_{t+\tau})$ is defined as Equation (5):

$$\hat{B}(t) = \sigma\left(\tilde{F}_{grp} \cdot \Phi\right) \quad (5)$$
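Equations (3)-(5) can be sketched end to end. The small dimensions (K = 6, implicit dimension D = 4), the random weights, and tanh as the nonlinearity Ψ_nl are illustrative assumptions; the paper uses K = 1444 and an implicit dimension of 128:

```python
import math
import random

random.seed(0)
K, D = 6, 4  # K: group-feature length (1444 in the paper), D: implicit dim (128 in the paper)

F = [random.uniform(-1, 1) for _ in range(K)]                          # flattened group-feature F_grp
V = [[random.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(D)]  # projects F_grp into implicit space
W = [[random.uniform(-0.1, 0.1) for _ in range(D)] for _ in range(K)]  # maps implicit space back to K weights
Phi = [random.uniform(-0.1, 0.1) for _ in range(K)]                    # scoring weight matrix Phi

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

hidden = [math.tanh(h) for h in matvec(V, F)]   # Psi_nl(V F^T), with tanh as the assumed nonlinearity
logits = matvec(W, hidden)                      # (W^T Psi_nl(V F^T))_k
mx = max(logits)                                # subtract max for numerical stability
exps = [math.exp(l - mx) for l in logits]
total = sum(exps)
chi = [e / total for e in exps]                 # Equation (3): softmax attention weights
F_tilde = [c * f for c, f in zip(chi, F)]       # Equation (4): weighted group-feature
score = 1.0 / (1.0 + math.exp(-sum(p * f for p, f in zip(Phi, F_tilde))))  # Equation (5)
```

The softmax in Equation (3) guarantees the weights sum to 1, and the sigmoid in Equation (5) keeps the predicted group score in (0, 1), matching the 0/1 labels of normal and abnormal frame-groups.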
The paper presents an algorithm to detect abnormal events in video sequences. The proposed algorithm decomposes a video sequence into groups of consecutive frames ("frame groups"), and uses ConvLSTM to extract features from frame groups. A group level attention module is applied to focus on some most relevant portions of the extracted features, before feeding to fully connected layers for final anomaly score prediction. The algorithm is evaluated on a curated dataset accumulated from three popular benchmarks.