Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks
1 INTRODUCTION. Adversarial examples against deep learning models were originally investigated as blind spots in classification (Szegedy et al., 2013; Goodfellow et al., 2014). Formal methods for discovering these blind spots emerged, which we denote as gradient-level attacks, and became the first techniques to reach widespread attention within the deep learning community (Papernot et al., 2016; Moosavi-Dezfooli et al., 2015; Carlini & Wagner, 2016; 2017; Chen et al., 2018). In order to compute the necessary gradient information, such techniques required access to the model parameters and a sizeable query budget. These shortcomings were addressed by the creation of score-level attacks, which only require the confidence values output by the deep learning models (Fredrikson et al., 2015; Tramèr et al., 2016; Chen et al., 2017; Ilyas et al., 2018). However, these attacks still rely on models to divulge information that would be impractical to receive in real-world systems. By contrast, hard-label attacks make no assumptions about receiving side information, and only the predicted class is observable, thus providing the weakest, yet most realistic adversarial threat model. These methods, which originated from a random walk on the decision boundary (Brendel et al., 2017), have been carefully refined to offer convergence guarantees (Cheng et al., 2019), query efficiency (Chen et al., 2019; Cheng et al., 2020), and capability in the physical world (Feng et al., 2020).

Despite the steady improvements of hard-label attacks, open questions persist about their behavior, and about adversarial machine learning (AML) attacks at large. Adversarial examples were originally assumed to lie in rare pockets of the input space (Goodfellow et al., 2014), but this conventional wisdom was later challenged by the boundary tilting assumption (Tanay & Griffin, 2016; Gilmer et al., 2018), which adopts a "data-geometric" view of the input space living on a lower-dimensional manifold. This is supported by Stutz et al. (2019), who suggest that regular adversarial examples leave the data manifold, while on-manifold adversarial examples are generalization errors. From a data-geometric perspective, an adversarial example's distance to the manifold primarily describes the amount of semantic features preserved during the attack process. This makes it advantageous to produce on-manifold adversarial examples, since the adversary can exploit the inherent generalization error of the model while producing samples that are semantically similar for humans. However, the true data manifold is either difficult or impossible to describe, and relying solely on approximations of the manifold can lead to the creation of crude adversarial examples (Stutz et al., 2019).

In this paper, we adopt the boundary-tilting assumption and demonstrate an unexpected benefit of query-efficient zeroth-order attacks, i.e., attacks enabled by the use of dimensionality reduction techniques. These attacks are more likely to discover on-manifold examples, which we theoretically demonstrate is the result of manifold-gradient mutual information. Our results suggest that this quantity can increase as a function of the data dimensionality. This information leakage leads to adversarial examples that are on-manifold generalization errors. With this knowledge, we empirically demonstrate how to improve hard-label attacks in a generic yet principled way, and potentially re-think their interaction with model robustness and public-facing systems in the near future. For clarity, we provide a block diagram of our claims and experiments in the Appendix (Section A.3). Our specific contributions are as follows:

• Introduction of the manifold distance oracle.
To create on-manifold examples, the adversary must (implicitly) leverage manifold information during the attack phase. We thus propose an information-theoretic formulation of the noisy manifold distance (NMD) oracle, which can explain how zeroth-order attacks craft on-manifold examples. We theoretically demonstrate on a Gaussian data model that manifold-gradient mutual information can increase as a function of data dimensionality. We empirically show this is true even on large-scale image datasets such as CIFAR-10 and ImageNet. This finding relates to known behavior in the gradient-level setting, where semantic manifold priors (e.g., shapes and textures) can be leaked from robust models (Engstrom et al., 2019).

• New insights into manifold feedback during query-efficient zeroth-order search. In practice, the data manifold is difficult to characterize. We propose the use of three proxies for manifold distance, all of which show consistent results in terms of an adversary's ability to search near the manifold. This methodology allows us to empirically demonstrate the connection between dimension reduction, model robustness, and manifold feedback from the model, beyond the known convergence rates tied to dimensionality (Nesterov & Spokoiny, 2017). Our findings inform how to search closer to the manifold (Table 1), reduce gradient deviation (Table 2), and improve query efficiency (Figure 2) in a simple and generic way for hard-label attacks.

• Attack-agnostic method for super-pixel grouping. We show that spatial dimension reduction of a decision-based gradient estimate acts as an attack- and knowledge-agnostic method for searching over super-pixels of an image. More importantly, this helps an attacker exploit a model's reaction to salient input changes, leading to samples closer to the manifold compared to the attack on the full dimension.
As a result, we demonstrate up to 200% and 340% success rate improvements for the state-of-the-art hard-label attacks HSJA (Chen et al., 2019) and Sign-OPT (Cheng et al., 2020), respectively.

2 RELATED WORK. Since the original discovery of adversarial samples against deep models (Szegedy et al., 2013; Goodfellow et al., 2014), the prevailing question has been why such examples exist. The original assumption was that adversarial examples lived in low-probability pockets of the input space, and were never encountered during parameter optimization (Szegedy et al., 2013). This effect was believed to be amplified by the linearity of weight activations in the presence of small perturbations (Goodfellow et al., 2014). These assumptions were later challenged by the boundary tilting assumption, which in summary 1) asserts that the train and test sets of a model occupy only a sub-manifold of the true data, while the decision boundary lies close to samples on and beyond the sub-manifold (Tanay & Griffin, 2016), and 2) supports the "data-geometric" view, where the high-dimensional geometry of the true data manifold enables a low-probability error set to exist (Gilmer et al., 2018). Likewise, the boundary tilting assumption describes adversarial samples as leaving the manifold, which has inspired defenses based on projecting such samples back to the data manifold (Jalal et al., 2019; Samangouei et al., 2018). However, these approaches were later defeated by adaptive attacks (Carlini et al., 2019; Carlini & Wagner, 2017; Tramer et al., 2020). We investigate the scenario where an adversary uses zeroth-order information (i.e., top-1 label feedback) to estimate the desired gradient direction (Cheng et al., 2020; Chen et al., 2019).
Contemporary attacks in this setting are variants of the random gradient-free (RGF) method (Nesterov & Spokoiny, 2017), and rely on formulations which convert the top-1 (hard) label, which is a step function, into a continuous real-valued function g : R^d → R that takes a search direction θ ∈ R^d and outputs the distance to the nearest adversarial example (Cheng et al., 2018). The gradient estimate is conceived as a function of the gradient ∇g and can be estimated with either two samples of information (Sign-OPT) (Cheng et al., 2020) or a single point (HopSkipJumpAttack) (Chen et al., 2019). Details of the specific formulations for each attack are provided in Section A.2 of the Appendix. Query efficiency is a persistent desire in the study of hard-label attacks. One clue for achieving efficiency comes from the theory of gradient estimation error and convergence, which shows that the estimation cost is polynomial in d, the dimension of the optimized variable, thus motivating the use of standard dimension-reduction techniques (Tu et al., 2019). However, to date it is not completely understood how this relates to traversal through the data manifold. We leverage previous results from the gradient-level setting (Stutz et al., 2019; Engstrom et al., 2019) to formulate an explanation of manifold leakage during hard-label adversarial attacks.

3 NOISY MANIFOLD DISTANCE ORACLE. Santurkar et al. (2019) demonstrate that the gradients of robust models have higher visual semantic alignment with the data compared to gradients of standard models. We build on this finding by first assuming that the benign observable data is generated from a true lower-dimension distribution. Under the boundary-tilting assumption, this lower-dimension distribution forms a manifold onto which new observations, either benign or adversarial, can be encoded (Tanay & Griffin, 2016).
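As background, the hard-label surrogate g described in Section 2 can be evaluated with top-1 label queries alone, by searching along a direction for the point where the label flips. The following is a minimal sketch under a toy linear classifier; the function names and the binary-search tolerance are our own illustration, not the implementation of any specific attack:

```python
import numpy as np

def g(predict, x0, y0, theta, tol=1e-4, hi=10.0):
    """Distance from x0 to the nearest adversarial example along unit
    direction theta, using only hard-label (top-1) queries to `predict`."""
    theta = theta / np.linalg.norm(theta)
    # Expand the upper bound until the label flips; give up past a cap.
    while predict(x0 + hi * theta) == y0:
        hi *= 2.0
        if hi > 1e6:
            return np.inf
    lo = 0.0
    # Binary search the label flip point; invariant: label(lo) == y0.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if predict(x0 + mid * theta) == y0:
            lo = mid
        else:
            hi = mid
    return hi

# Toy hard-label model: a linear classifier over R^2 (illustration only).
w = np.array([1.0, -1.0])
predict = lambda x: int(np.sign(w @ x))
x0 = np.array([2.0, 0.0])                      # predicted class +1
dist = g(predict, x0, 1, np.array([-1.0, 1.0]))  # boundary distance is sqrt(2)
```

Each evaluation of g costs a logarithmic number of model queries in 1/tol, which is why the gradient estimate ∇g, built from one or two such evaluations per random direction, dominates the query budget.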
Likewise, we assume that deep learning models will learn a lower-dimension representation of the observable data; e.g., the feature layers of convolutional neural networks learn to encode training observations onto a low-dimension approximate manifold (Zhang et al., 2018). When an adversary creates adversarial samples, they are leveraging a pathway that shadows the model gradient, not the true manifold. Thus there is the possibility that adversarial samples are considered "off-manifold", i.e., cannot be expected to be generated naturally from the true manifold. However, it is critical for adversarial samples to be as close to the manifold as possible, since on-manifold adversarial examples can exploit the fundamental generalization error of the model (Stutz et al., 2019). More formally, we define the notion of manifold distance as follows.

Definition 3.1 (Manifold Distance). Consider the benign sample x0 and its adversarial counterpart x. Assuming a perfect encoding back to the true manifold φ, the manifold distance is defined as d(φ(x0), φ(x)), where d is a distance function with the domain of the true manifold.

Unfortunately, unless the true manifold for a dataset is known, it is impossible to define φ. Instead, a proxy d′ can be used such that d′(x, x′) ∼ d(φ(x), φ(x′)). In practice, one can implement d′ with any perceptual distance score, such as Learned Perceptual Image Patch Similarity (Zhang et al., 2018). If relying on a distance measure d, such as the Lp-norm, an approximate encoder φ′(·) ∼ φ(·) can be learned using reconstruction-based training of autoencoders (Stutz et al., 2019), or by leveraging the feature layers of convolutional neural networks (Zhang et al., 2018). We are interested in the class of hard-label adversaries that implicitly minimize some proxy of the manifold distance. Given the result of Santurkar et al.
(2019), the robust model's gradient could be treated as a manifold distance oracle, because it leaks the direction towards its approximate manifold. As a result, the model acts as an oracle responding to queries about manifold distance, or in other words, an implicit proxy for manifold distance, d′. In the hard-label setting, the data manifold, true gradient, and model parameters are not accessible. Thus we are interested in a decision-based version of the manifold distance oracle, defined as follows.

Definition 3.2 (Noisy Manifold Distance Oracle). Consider a manifold distance oracle instantiating d′, a benign sample x0, and a pair of adversarial samples (x′, x′′) such that d′(x0, x′) < d′(x0, x′′), i.e., x′ is considered on-manifold while x′′ is not. In the hard-label setting, the noisy manifold distance (NMD) oracle instantiates d′′ such that d′′(x0, x′) = 0 and d′′(x0, x′′) = 1.

During a hard-label attack, the adversary searches in a direction that minimizes perceptual distance to the original sample. Concurrently, the adversary can be said to implicitly minimize the expected output of the NMD oracle, which is a binary indicator of whether a sample is on-manifold or not. Without knowledge of the true (or approximate) manifold, this requires careful selection of the search direction from the current sample. Since the search direction of contemporary hard-label attacks is synthesized in expectation over a ball around the adversarial sample, we are interested in search directions such as x0 − x′ which minimize the expected distance to the manifold. To formalize the information entailed in the NMD oracle, we turn to a standard result in data processing, which states the following:

Definition 3.3 (Data Processing Inequality (DPI) (Beaudry & Renner, 2012)). If three random variables form the Markov chain X → Y → Z, then their mutual information (MI) satisfies I(X; Y) ≥ I(X; Z).
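To make Definitions 3.1 and 3.2 concrete, the proxy d′ and its binarized version d′′ can be sketched as follows. The fixed random projection standing in for the learned encoder φ′, and the threshold tau, are our own hypothetical choices for illustration (in practice φ′ would be a trained autoencoder or CNN feature extractor, and LPIPS could replace the L2 proxy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder phi': a fixed linear projection to a lower-dimension code.
# A real instantiation would be a trained autoencoder or CNN feature layer.
P = rng.standard_normal((8, 32)) / np.sqrt(32)   # 32-d input -> 8-d code
phi_prime = lambda x: P @ x

def d_prime(x, x2):
    """Proxy manifold distance d': L2 distance between encoded samples."""
    return float(np.linalg.norm(phi_prime(x) - phi_prime(x2)))

def nmd_oracle(x0, x, tau=1.0):
    """Noisy manifold distance d'': 0 if x is considered on-manifold relative
    to x0 (proxy distance under threshold tau), else 1. tau is a free choice."""
    return 0 if d_prime(x0, x) < tau else 1

x0 = rng.standard_normal(32)
x_near = x0 + 0.01 * rng.standard_normal(32)   # small perturbation
x_far = x0 + 10.0 * rng.standard_normal(32)    # large perturbation
```

Under this sketch, nmd_oracle(x0, x_near) returns 0 while nmd_oracle(x0, x_far) returns 1, matching the binary feedback an adversary implicitly minimizes during the attack.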
We assume the data manifold M, the input gradient G, and the hard-label gradient estimate G̈ form the Markov chain M → G → G̈. This assumption is reasonable due to the observations by Santurkar et al. (2019): modifying the sampled data manifold (e.g., by adding adversarial samples through saddle-point optimization) causally induces a smoother loss surface, which imposes its own gradient distribution. Likewise, the true gradient and the gradient estimate of a hard-label attack are causally linked due to the estimate's bounded variance (Cheng et al., 2020). If I(M; G) is larger for adversarially robust models, then by Definition 3.3 the upper bound on I(M; G̈) is larger, which means more manifold information could be leaked in the noisy gradient. This information could be used to search in the direction where d′′ is minimized in expectation, leading towards on-manifold examples. However, the DPI only offers an upper bound; thus the distance decrease is not guaranteed, only suggested. In the information-theoretic sense, does this mean the gradients of models robust in an ε-ball around each sample can reveal more information about the distance to training data than standard models? An immediate follow-up concern is whether other factors can influence the model to reveal this information, such as the problem dimensionality. As a first step we posit the following hypothesis:

Hypothesis 1. Consider the manifold distribution M which can generate data to train a natural model with gradient distribution G, and to train a robust model with smoothed gradient distribution G′. We posit that their manifold-gradient mutual information I satisfies I(M; G′) ≥ I(M; G).

In order to empirically verify Hypothesis 1, we must parameterize the notion of model robustness while solving for I(M; G), given an arbitrary gradient distribution G and manifold distribution M. Schmidt et al.
(2018) have shown that robust training requires additional data as a function of the data dimensionality. We leverage the data model and results from Schmidt et al. (2018) to derive an analytical solution for I(M; G), since we can parameterize model robustness as a function of data size and dimensionality. Consequently, the remainder of our theoretical analysis assumes a Gaussian mixture data model.

Definition 3.4 (Data model (Schmidt et al., 2018)). Let µ ∈ R^d be the per-class center (mean) and let σ > 0 be the variance parameter. Then the (µ, σI)-Gaussian model is defined by the following distribution over (x, y) ∈ R^d × {±1}: first, draw a label y ∈ {±1} uniformly at random; then sample the data point x ∈ R^d from N(y · µ, σI).

Definition 3.5 (Optimal classification weight (Schmidt et al., 2018)). Fix σ ≤ c₁ d^(1/4) for a universal constant c₁, and let samples (x₁, y₁), …, (xₙ, yₙ) be drawn i.i.d. from the (µ, σI)-Gaussian model with ‖µ‖ = √d (i.e., µₖ = 1 for all dimensions k ∈ {1, …, d}). Schmidt et al. (2018) prove that the weight setting ŵ = (1/n) Σᵢ₌₁ⁿ yᵢxᵢ yields an ℓ∞-robust classification error of at most 1% for the linear classifier f_ŵ : R^d → {±1}, instantiated as f_ŵ(x) = sign(ŵᵀx), if

n ≥ { 1,            for ε ≤ (1/4) d^(−1/4),
    { c₂ ε² √d,     for (1/4) d^(−1/4) ≤ ε ≤ 1/4,      (1)

for a universal constant c₂. Note that the instantiation of ŵ must change with the choice of ε and d. We can leverage the weight settings as a function of n and d to give a definition of manifold-gradient mutual information.
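Definitions 3.4 and 3.5 can be simulated directly. The sketch below (with dimension, noise level, and sample sizes chosen only for illustration) draws labelled data from the (µ, σI)-Gaussian model and checks that the plug-in weight ŵ = (1/n) Σᵢ yᵢxᵢ yields low standard classification error on fresh samples:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256                       # data dimensionality
sigma = 0.5 * d ** 0.25       # noise level consistent with sigma <= c1 * d^(1/4)
mu = np.ones(d)               # mu_k = 1 for all k, so ||mu|| = sqrt(d)

def sample(n):
    """Draw n labelled points (x, y) from the (mu, sigma*I)-Gaussian model."""
    y = rng.choice([-1, 1], size=n)
    x = y[:, None] * mu + sigma * rng.standard_normal((n, d))
    return x, y

# Plug-in weight w_hat = (1/n) * sum_i y_i * x_i (Definition 3.5).
x_train, y_train = sample(n=100)
w_hat = (y_train[:, None] * x_train).mean(axis=0)

# Standard (non-robust) error of f(x) = sign(w_hat^T x) on fresh data.
x_test, y_test = sample(n=5000)
err = float(np.mean(np.sign(x_test @ w_hat) != y_test))
```

At these settings the error is near zero; Equation (1) concerns the harder ℓ∞-robust error, whose sample requirement grows with √d, which is the dependence on dimensionality exploited in the analysis that follows.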
This paper studies zeroth-order hard-label adversarial attacks. In particular, the authors explore how the gradient can reveal information about the data manifold, and how this leakage depends on data dimensionality. They also empirically evaluate attacks with reduced dimensionality in practice.
This paper concerns zeroth-order hard-label adversarial attacks on machine learning models and, in particular, how to strengthen them by leveraging manifold information. The core idea is inspired by a series of intuitions and observations around the fact that the gradient of a robust model carries more information about the underlying low-dimensional data manifold than that of a naturally trained model. The authors build on this inspiration and propose the noisy manifold distance oracle, which can leak manifold information to the adversary while it forges its attack. They also evaluate their method on a number of real-world datasets.
SP:9b159df7c4aa35b0d9a518259fec89ef8dbe8cc9
Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks
1 INTRODUCTION

Adversarial examples against deep learning models were originally investigated as blind spots in classification (Szegedy et al., 2013; Goodfellow et al., 2014). Formal methods for discovering these blind spots emerged, which we denote as gradient-level attacks, and became the first techniques to reach widespread attention within the deep learning community (Papernot et al., 2016; Moosavi-Dezfooli et al., 2015; Carlini & Wagner, 2016; 2017; Chen et al., 2018). In order to compute the necessary gradient information, such techniques required access to the model parameters and a sizeable query budget. These shortcomings were addressed by the creation of score-level attacks, which only require the confidence values output by the deep learning models (Fredrikson et al., 2015; Tramèr et al., 2016; Chen et al., 2017; Ilyas et al., 2018). However, these attacks still rely on models to divulge information that would be impractical to receive in real-world systems. By contrast, hard-label attacks make no assumptions about receiving side information, and only the predicted class is observable, thus providing the weakest, yet most realistic, adversarial threat model. These methods, which originated from a random walk on the decision boundary (Brendel et al., 2017), have been carefully refined to offer convergence guarantees (Cheng et al., 2019), query efficiency (Chen et al., 2019; Cheng et al., 2020), and capability in the physical world (Feng et al., 2020). Despite the steady improvements of hard-label attacks, open questions persist about their behavior, and about adversarial machine learning (AML) attacks at large. Adversarial examples were originally assumed to lie in rare pockets of the input space (Goodfellow et al., 2014), but this conventional wisdom was later challenged by the boundary tilting assumption (Tanay & Griffin, 2016; Gilmer et al.
, 2018), which adopts a "data-geometric" view of the input space living on a lower-dimensional manifold. This is supported by Stutz et al. (2019), who suggest that regular adversarial examples leave the data manifold, while on-manifold adversarial examples are generalization errors. From a data-geometric perspective, an adversarial example's distance to the manifold primarily describes the amount of semantic features preserved during the attack process. This makes it advantageous to produce on-manifold adversarial examples, since the adversary can exploit the inherent generalization error of the model while producing samples that are semantically similar for humans. However, the true data manifold is either difficult or impossible to describe, and relying solely on approximations of the manifold can lead to the creation of crude adversarial examples (Stutz et al., 2019). In this paper, we adopt the boundary-tilting assumption and demonstrate an unexpected benefit of query-efficient zeroth-order attacks, i.e., attacks enabled by the use of dimensionality reduction techniques. These attacks are more likely to discover on-manifold examples, which we theoretically demonstrate is the result of manifold-gradient mutual information. Our results suggest that this quantity can increase as a function of the data dimensionality. This information leakage leads to adversarial examples that are on-manifold generalization errors. With this knowledge, we empirically demonstrate how to improve hard-label attacks in a generic yet principled way, and potentially re-think their interaction with model robustness and public-facing systems in the near future. For clarity, we provide a block diagram of our claims and experiments in the Appendix (Section A.3). Our specific contributions are as follows:

• Introduction of the manifold distance oracle.
To create on-manifold examples, the adversary must (implicitly) leverage manifold information during the attack phase. We thus propose an information-theoretic formulation of the noisy manifold distance (NMD) oracle, which can explain how zeroth-order attacks craft on-manifold examples. We theoretically demonstrate on a Gaussian data model that manifold-gradient mutual information can increase as a function of data dimensionality. We empirically show this is true even on large-scale image datasets such as CIFAR-10 and ImageNet. This finding relates to known behavior in the gradient-level setting, where semantic manifold priors (e.g., shapes and textures) can be leaked from robust models (Engstrom et al., 2019).

• New insights into manifold feedback during query-efficient zeroth-order search. In practice, the data manifold is difficult to characterize. We propose the use of three proxies for manifold distance, which all show consistent results in terms of an adversary's ability to search near the manifold. This methodology allows us to empirically demonstrate the connection between dimension reduction, model robustness, and manifold feedback from the model, beyond the known convergence rates tied to dimensionality (Nesterov & Spokoiny, 2017). Our findings inform how to search closer to the manifold (Table 1), reduce gradient deviation (Table 2), and improve query efficiency (Figure 2) in a simple and generic way for hard-label attacks.

• Attack-agnostic method for super-pixel grouping. We show that spatial dimension reduction of a decision-based gradient estimate acts as an attack- and knowledge-agnostic method for searching over super-pixels of an image. More importantly, this helps an attacker exploit a model's reaction to salient input changes, leading to samples closer to the manifold compared to the attack on full dimension.
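The spatial dimension reduction behind this super-pixel grouping can be sketched with plain numpy. The function name, block size, and image size below are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def superpixel_project(grad, block=4):
    """Average-pool a (H, W) gradient estimate into (H//block, W//block)
    super-pixel blocks, then broadcast each block value back to full
    resolution, so the search direction is constant within each block."""
    h, w = grad.shape
    pooled = grad.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, block, axis=0), block, axis=1)

rng = np.random.default_rng(0)
g = rng.standard_normal((32, 32))      # full-dimension gradient estimate
g_sp = superpixel_project(g, block=4)  # effective dimension: 8 x 8 = 64
```

The attack would then perturb along `g_sp` instead of `g`, searching over 64 effective dimensions instead of 1024 while remaining agnostic to the underlying attack.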
As a result, we demonstrate up to 200% and 340% success rate improvement for the state-of-the-art hard-label attacks HSJA (Chen et al., 2019) and Sign-OPT (Cheng et al., 2020), respectively.

2 RELATED WORK

Since the original discovery of adversarial samples against deep models (Szegedy et al., 2013; Goodfellow et al., 2014), the prevailing question has been why such examples exist. The original assumption was that adversarial examples lived in low-probability pockets of the input space and were never encountered during parameter optimization (Szegedy et al., 2013). This effect was believed to be amplified by the linearity of weight activations in the presence of small perturbations (Goodfellow et al., 2014). These assumptions were later challenged by the boundary tilting assumption, which in summary 1) asserts that the train and test sets of a model only occupy a sub-manifold of the true data, while the decision boundary lies close to samples on and beyond the sub-manifold (Tanay & Griffin, 2016), and 2) supports the "data-geometric" view, where the high-dimensional geometry of the true data manifold enables a low-probability error set to exist (Gilmer et al., 2018). Likewise, the boundary tilting assumption describes adversarial samples as leaving the manifold, which has inspired defenses based on projecting such samples back to the data manifold (Jalal et al., 2019; Samangouei et al., 2018). However, these approaches were later defeated by adaptive attacks (Carlini et al., 2019; Carlini & Wagner, 2017; Tramer et al., 2020). We investigate the scenario where an adversary uses zeroth-order information (i.e., top-1 label feedback) to estimate the desired gradient direction (Cheng et al., 2020; Chen et al., 2019).
Contemporary attacks in this setting are variants of the random gradient-free (RGF) method (Nesterov & Spokoiny, 2017), and rely on formulations that convert the top-1 (hard) label, which is a step function, into a continuous real-valued function g : R^d → R, which takes a search direction θ ∈ R^d and outputs the distance to the nearest adversarial example (Cheng et al., 2018). The gradient estimate is conceived as a function of the gradient ∇g and can be estimated with either two samples of information (Sign-OPT) (Cheng et al., 2020) or a single point (HopSkipJumpAttack) (Chen et al., 2019). Details of the specific formulations for each attack are provided in Section A.2 of the Appendix. Query efficiency is a persistent desire in the study of hard-label attacks. One clue for achieving efficiency comes from the theory of gradient estimation error and convergence, which shows that the estimation cost is polynomial in d, the dimension of the optimized variable, thus motivating the use of standard dimension-reduction techniques (Tu et al., 2019). However, to date it is not completely understood how this relates to traversal through the data manifold. We leverage previous results from the gradient-level setting (Stutz et al., 2019; Engstrom et al., 2019) to formulate an explanation of manifold leakage during hard-label adversarial attacks.

3 NOISY MANIFOLD DISTANCE ORACLE

Santurkar et al. (2019) demonstrate that the gradients of robust models have higher visual semantic alignment with the data compared to gradients of standard models. We build on this finding by first assuming that the benign observable data is generated from a true lower-dimensional distribution. Under the boundary-tilting assumption, this lower-dimensional distribution forms a manifold onto which new observations, either benign or adversarial, can be encoded (Tanay & Griffin, 2016).
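To make the hard-label formulation from Section 2 concrete, here is a toy sketch, entirely our own construction rather than any attack's actual code, on a linear hard-label classifier: g(θ) is located by binary search using only top-1 label queries, and ∇g is then estimated RGF-style from random unit directions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
w = rng.standard_normal(d)   # hidden linear model; the attacker never sees w
x0 = rng.standard_normal(d)  # benign sample
y0 = np.sign(w @ x0)

def query(x):
    """Hard-label oracle: returns only the top-1 label."""
    return np.sign(w @ x)

def g(theta, hi=100.0, iters=40):
    """Distance from x0 along direction theta to the nearest adversarial
    example, located by binary search over hard-label queries."""
    theta = theta / np.linalg.norm(theta)
    if query(x0 + hi * theta) == y0:
        return np.inf                 # no label flip within radius hi
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if query(x0 + mid * theta) == y0:
            lo = mid
        else:
            hi = mid
    return hi

def rgf_grad(theta, beta=1e-3, q=50):
    """Random gradient-free estimate of the gradient of g at theta."""
    g0, est = g(theta), np.zeros_like(theta)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        est += (g(theta + beta * u) - g0) / beta * u
    return est / q
```

For this linear oracle the true boundary distance along θ = −y0·w is |wᵀx0|/||w||, which the label-only binary search recovers; the actual attacks replace the finite-difference loop with their own two-query or single-point estimators.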
Likewise, we assume that deep learning models will learn a lower-dimensional representation of the observable data, e.g., the feature layers of convolutional neural networks learn to encode training observations onto a low-dimensional approximate manifold (Zhang et al., 2018). When an adversary creates adversarial samples, they are leveraging a pathway that shadows the model gradient, not the true manifold. Thus there is the possibility that adversarial samples are considered "off-manifold", i.e., they cannot be expected to be generated naturally from the true manifold. However, it is critical for adversarial samples to be as close to the manifold as possible, since on-manifold adversarial examples can exploit the fundamental generalization error of the model (Stutz et al., 2019). More formally, we define the notion of manifold distance as follows.

Definition 3.1 (Manifold Distance). Consider the benign sample x0 and its adversarial counterpart x. Assuming a perfect encoding back to the true manifold φ, the manifold distance is defined as d(φ(x0), φ(x)), where d is a distance function with the domain of the true manifold.

Unfortunately, unless the true manifold for a dataset is known, it is impossible to define φ. Instead, a proxy d′ can be used such that d′(x, x′) ∼ d(φ(x), φ(x′)). In practice, one can implement d′ with any perceptual distance score, such as Learned Perceptual Image Patch Similarity (Zhang et al., 2018). If relying on a distance measure d, such as the Lp-norm, an approximate encoder φ′(·) ∼ φ(·) can be learned using reconstruction-based training of autoencoders (Stutz et al., 2019), or by leveraging the feature layers of convolutional neural networks (Zhang et al., 2018). We are interested in the class of hard-label adversaries that implicitly minimize some proxy of the manifold distance. Given the result of Santurkar et al.
(2019), the robust model's gradient could be treated as a manifold distance oracle, because it leaks the direction towards its approximate manifold. As a result, the model acts as an oracle responding to queries about manifold distance, or in other words, an implicit proxy for manifold distance, d′. In the hard-label setting, the data manifold, true gradient, and model parameters are not accessible. Thus we are interested in a decision-based version of the manifold distance oracle, defined as follows.

Definition 3.2 (Noisy Manifold Distance Oracle). Consider a manifold distance oracle instantiating d′, a benign sample x0, and a pair of adversarial samples (x′, x′′) such that d′(x0, x′) < d′(x0, x′′), i.e., x′ is considered on-manifold while x′′ is not. In the hard-label setting, the noisy manifold distance (NMD) oracle instantiates d′′ such that d′′(x0, x′) = 0 and d′′(x0, x′′) = 1.

During a hard-label attack, the adversary searches in a direction that minimizes perceptual distance to the original sample. Concurrently, the adversary can be said to implicitly minimize the expected output of the NMD oracle, which is a binary indicator of whether a sample is on-manifold or not. Without knowledge of the true (or approximate) manifold, this requires careful selection of the search direction from the current sample. Since the search direction of contemporary hard-label attacks is synthesized over an expectation of a ball around the adversarial sample, we are interested in search directions such as x0 − x′ which minimize the expected distance to the manifold. To formalize the information entailed in the NMD oracle, we turn to a standard result in data processing, which states the following:

Definition 3.3 (Data Processing Inequality (DPI) (Beaudry & Renner, 2012)). If three random variables form the Markov chain X → Y → Z, then their mutual information (MI) satisfies I(X;Y) ≥ I(X;Z).
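The DPI can be checked numerically on a toy binary Markov chain standing in for the manifold, gradient, and gradient-estimate variables; the flip probabilities and the plug-in MI estimator below are our own illustrative choices:

```python
import numpy as np

def mutual_info(a, b):
    """Plug-in estimate of I(A;B) in bits for binary arrays a, b."""
    pj = np.bincount(2 * a + b, minlength=4).reshape(2, 2) / len(a)
    pa, pb = pj.sum(axis=1), pj.sum(axis=0)
    mask = pj > 0
    return float((pj[mask] * np.log2(pj[mask] / np.outer(pa, pb)[mask])).sum())

rng = np.random.default_rng(0)
n = 100_000
m = rng.integers(0, 2, n)            # "manifold" bit M
g = m ^ (rng.random(n) < 0.1)        # "true gradient" G: 10% flip noise
g2 = g ^ (rng.random(n) < 0.2)       # "hard-label estimate": 20% extra flips
i_mg, i_mg2 = mutual_info(m, g), mutual_info(m, g2)
```

Consistent with the DPI, the variable further down the chain carries less information about `m` (here roughly 0.53 vs 0.17 bits), mirroring how the hard-label estimate can leak at most as much manifold information as the true gradient.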
We assume the data manifold M, the input gradient G, and the hard-label gradient estimate G̈ form the Markov chain M → G → G̈. This assumption is reasonable due to the observations by Santurkar et al. (2019); modifying the sampled data manifold (e.g., by adding adversarial samples through saddle-point optimization) causally induces a smoother loss surface, which imposes its own gradient distribution. Likewise, the true gradient and the gradient estimate of a hard-label attack are causally linked due to the estimate's bounded variance (Cheng et al., 2020). If I(M, G) is larger for adversarially robust models, then by Definition 3.3 the upper bound on I(M, G̈) is larger, which means more manifold information could be leaked in the noisy gradient. This information could be used to search in the direction where d′′ is minimized in expectation, leading towards on-manifold examples. However, the DPI only offers an upper bound; thus the distance decrease is not guaranteed, only suggested. In the information-theoretic sense, does this mean the gradients of models robust in an ε-ball around each sample can reveal more information about the distance to training data than standard models? An immediate follow-up concern is whether other factors can influence the model to reveal this information, such as the problem dimensionality. As a first step we posit the following hypothesis:

Hypothesis 1. Consider the manifold distribution M which can generate data to train a natural model with gradient distribution G, and to train a robust model with smoothed gradient distribution G′. We posit that their manifold-gradient mutual information I has the relation I(M, G′) ≥ I(M, G).

In order to empirically verify Hypothesis 1, we must parameterize the notion of model robustness while solving for I(M, G), given an arbitrary gradient distribution G and manifold distribution M. Schmidt et al.
(2018) have shown that robust training requires additional data as a function of the data dimensionality. We leverage the data model and results from Schmidt et al. (2018) to derive an analytical solution for I(M, G), since we can parameterize model robustness as a function of data size and dimensionality. Consequently, the remainder of our theoretical analysis assumes a Gaussian mixture data model.

Definition 3.4 (Data model and optimal weights (Schmidt et al., 2018)). Let µ ∈ R^d be the per-class centers (means) and let σ > 0 be the variance parameter. Then the (µ, σI)-Gaussian model is defined by the following distribution over (x, y) ∈ R^d × {±1}: first, draw a label y ∈ {±1} uniformly at random; then sample the data point x ∈ R^d from N(y · µ, σI).

Definition 3.5 (Optimal classification weight (Schmidt et al., 2018)). Fix σ ≤ c1 d^{1/4} for a universal constant c1, and let samples (x1, y1), . . . , (xn, yn) be drawn i.i.d. from the (µ, σI)-Gaussian model with ||µ|| = √d (i.e., µk = 1 for all dimensions k ∈ {1, . . . , d}). Schmidt et al. (2018) prove that the weight setting ŵ = (1/n) Σ_{i=1}^{n} y_i x_i yields an ℓ∞-robust classification error of at most 1% for the linear classifier f_ŵ : R^d → {±1}, instantiated as f_ŵ(x) = sign(ŵᵀx), if

n ≥ 1, for ε ≤ (1/4) d^{−1/4};    n ≥ c2 ε² √d, for (1/4) d^{−1/4} ≤ ε ≤ 1/4,   (1)

for a universal constant c2. Note that the instantiation of ŵ must change with the choice of ε and d. We can leverage the weight settings as a function of n and d to give a definition of manifold-gradient mutual information.
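As a quick sanity check of Definitions 3.4 and 3.5, the following sketch samples the (µ, σI)-Gaussian model with µ the all-ones vector, forms ŵ = (1/n) Σ y_i x_i, and measures the standard (non-robust) test error of sign(ŵᵀx). The dimension, σ, and sample sizes are arbitrary illustrations, not values used in the paper:

```python
import numpy as np

def sample_gaussian_model(n, d, sigma, rng):
    """Draw n samples from the (mu, sigma*I)-Gaussian model with mu the
    all-ones vector, so that ||mu|| = sqrt(d) as in Definition 3.5."""
    y = rng.choice([-1.0, 1.0], size=n)                 # uniform labels
    x = y[:, None] * np.ones(d) + sigma * rng.standard_normal((n, d))
    return x, y

rng = np.random.default_rng(0)
d, sigma, n = 256, 2.0, 100
x_tr, y_tr = sample_gaussian_model(n, d, sigma, rng)
w_hat = (y_tr[:, None] * x_tr).mean(axis=0)   # w_hat = (1/n) sum_i y_i x_i

x_te, y_te = sample_gaussian_model(5000, d, sigma, rng)
acc = np.mean(np.sign(x_te @ w_hat) == y_te)
```

Even with modest n the averaged weight aligns with µ and the standard error is tiny; bound (1) concerns the much more demanding ℓ∞-robust error, whose sample requirement grows with √d.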
The paper addresses hard-label adversarial attacks from a geometric/manifold perspective. Specifically, it examines examples that live close to or on the data manifold. The paper presents a noisy manifold distance based on information-theoretic considerations and proposes three ways to approximate it (taking into account that the data manifold is unknown). Experimental results of HSJA and Sign-OPT attacks on CIFAR-10 and ImageNet are presented.
DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting
1 INTRODUCTION

Time series (TS) with apparent periodic (seasonal) oscillations, referred to as periodic time series (PTS) in this paper, are pervasive in a wide range of critical industries, such as seasonal electricity spot prices in the power industry (Koopman et al., 2007), periodic traffic flows in transportation (Lippi et al., 2013), and periodic carbon dioxide exchanges and water flows in the sustainability domain (Seymour, 2001; Tesfaye et al., 2006). PTS forecasting plays a crucial role in these industries since it can foster their business development by facilitating a variety of capabilities, including early warning, pre-planning, and resource scheduling (Kahn, 2003; Jain, 2017). Given the pervasiveness and importance of PTS, however, two obstacles largely hinder the performance of existing forecasting models. First, future TS signals exhibit complicated dependencies on both adjacent historical observations and inherent periodicity. Nevertheless, many existing studies did not consider this distinctive periodic property (Salinas et al., 2020; Toubeau et al., 2018; Wang et al., 2019; Oreshkin et al., 2020). The performance of these methods has been greatly restrained due to their ignorance of periodicity modeling. Some other efforts, though explicitly introducing periodicity modeling, only followed some arbitrary yet simple assumptions, such as additive or multiplicative seasonality, to capture certain plain periodic effects (Holt, 1957; 2004; Vecchia, 1985b; Taylor & Letham, 2018). These methods failed to model complicated periodic dependencies beyond such simplified assumptions. The second challenge lies in the fact that the inherent periodicity of a typical real-world TS is usually composed of various periods with different amplitudes and frequencies.
For example, Figure 1 exemplifies the sophisticated composition of diversified periods via an eight-year hourly TS of electricity load in a region of California. However, existing methods (Taylor & Letham, 2018; Smyl, 2020) required the pre-specification of periodic frequencies before estimating other parameters from data, which attempts to evade this obstacle by transferring the burden of periodicity coefficient initialization to practitioners. To better tackle the aforementioned two challenges, we develop a deep expansion learning framework, DEPTS, for PTS forecasting. The core idea of DEPTS is to build a deep neural network that conducts progressive expansions of the complicated dependencies of PTS signals on periodicity to facilitate forecasting. We start from a novel decoupled formulation for PTS forecasting that introduces the periodic state as a hidden variable. This new formulation stimulates us to make more customized and dedicated designs to handle the two specific challenges mentioned above. For the first challenge, we develop an expansion module on top of residual learning (He et al., 2016; Oreshkin et al., 2020) to conduct layer-by-layer expansions between observed TS signals and hidden periodic states. With such a design, we can build a deep architecture with both high capacity and efficient parameter optimization to model the complicated dependencies of TS signals on periodicity. For the second challenge, we build a periodicity module to estimate the periodic states from observational data. We represent the hidden periodic state with respect to time as a parameterized periodic function with sufficient expressiveness. In this work, for simplicity, we instantiate this function as a series of cosine functions. To relieve the burden of manually setting periodic coefficients for different data, we develop a data-driven parameter initialization strategy on top of the Discrete Cosine Transform (Ahmed et al., 1974).
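A sketch of what such a DCT-based initialization could look like (toy series and our own function names; the paper's exact procedure may differ): project the history onto a DCT-II basis and keep the dominant non-DC coefficients as initial amplitudes and frequencies for the cosine series.

```python
import numpy as np

def dct_topk(x, k):
    """DCT-II of x, returning the k largest-magnitude non-DC coefficients
    and their frequency indices, as a data-driven initialization."""
    n = len(x)
    t = np.arange(n)
    # DCT-II basis: B[f, t] = cos(pi * f * (2t + 1) / (2n))
    basis = np.cos(np.pi * np.outer(np.arange(n), 2 * t + 1) / (2 * n))
    coef = (2.0 / n) * (basis @ x)
    top = np.argsort(np.abs(coef[1:]))[::-1][:k] + 1   # skip the DC term
    return coef[top], top

# toy series: a dominant 24-step period plus noise
rng = np.random.default_rng(0)
n = 480
x = 3.0 * np.cos(2 * np.pi * np.arange(n) / 24) + 0.1 * rng.standard_normal(n)
amps, freqs = dct_topk(x, k=1)   # DCT index f corresponds to period 2n/f
```

Here the dominant coefficient lands at index f = 2n/24 = 40 with amplitude close to 3, recovering the planted period without any manual specification.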
After that, we combine the periodicity module with the expansion module to perform end-to-end learning. To the best of our knowledge, DEPTS is a very early attempt to build a customized deep learning (DL) architecture for PTS that explicitly takes account of the periodic property. Moreover, with two delicately designed modules, DEPTS also offers certain interpretable capabilities. First, the expansions of forecasts can distinguish the contributions from either adjacent TS signals or inherent periodicity, which intuitively illustrates how the future TS signals may vary based on local momenta and global periodicity. Second, the coefficients of the periodicity module have their own practical meanings, such as amplitudes and frequencies, which provide certain interpretable effects inherently. We conduct experiments on both synthetic data and real-world data, which all demonstrate the superiority of DEPTS in handling PTS. On average, DEPTS reduces the error of the best baseline by about 10%. In a few cases, the error reduction can even reach up to 20%. Besides, we also include extensive ablation tests to verify our critical designs and visualize specific model components to interpret model behaviors.

2 RELATED WORK

TS forecasting is a longstanding research topic that has been extensively studied for decades. After a comprehensive review of the literature, we find three types of paradigms in developing TS models. At an early stage, researchers developed simple yet effective statistical modeling approaches, including exponentially weighted moving averages (Holt, 1957; 2004; Winters, 1960), auto-regressive moving averages (ARMA) (Whittle, 1951; 1963), the unified state-space modeling approach, as well as various other extensions (Hyndman & Khandakar, 2008). However, these statistical approaches only considered the linear dependencies of future TS signals on past observations.
To handle high-order dependencies, researchers attempted to adopt a hybrid design that combines statistical modeling with more advanced high-capacity models (Montero-Manso et al., 2020; Smyl, 2020). At the same time, with the great successes of DL in computer vision (He et al., 2016) and natural language processing (Vaswani et al., 2017), various DL models have also been developed for TS forecasting (Rangapuram et al., 2018; Toubeau et al., 2018; Salinas et al., 2020; Zia & Razzaq, 2020). Among them, the most representative one is N-BEATS (Oreshkin et al., 2020), a pure DL architecture that has achieved state-of-the-art performance across a wide range of benchmarks. The connections between DEPTS and N-BEATS are discussed in Section 4.2. As for PTS forecasting, many traditional statistical approaches explicitly considered the periodic property, such as periodic ARMA (PARMA) (Vecchia, 1985a; b) and its variants (Tesfaye et al., 2006; Anderson et al., 2007; Dudek et al., 2016). However, as discussed in Sections 1 and 3, these methods only followed some arbitrary yet simple assumptions, such as additive or multiplicative seasonality, and thus cannot handle complicated periodic dependencies in many real-world scenarios. Besides, other recent studies either followed similar assumptions for periodicity or required the pre-specification of periodic coefficients (Taylor & Letham, 2018; Smyl, 2020). To the best of our knowledge, ours is the first work that develops a customized DL architecture to model complicated periodic dependencies and to capture diversified periodic compositions simultaneously.

3 PROBLEM FORMULATIONS

We consider the point forecasting problem of regularly sampled uni-variate TS. Let x_t denote the time series value at time-step t; the classical auto-regressive formulation is to project the historical observations x_{t−L:t} = [x_{t−L}, . . .
, x_{t−1}] into its subsequent future values x_{t:t+H} = [x_t, . . . , x_{t+H−1}]:

x_{t:t+H} = F_Θ(x_{t−L:t}) + ε_{t:t+H},   (1)

where H is the length of the forecast horizon, L is the length of the lookback window, F_Θ : R^L → R^H is a mapping function parameterized by Θ, and ε_{t:t+H} = [ε_t, . . . , ε_{t+H−1}] denotes a vector of independent and identically distributed Gaussian noises. Essentially, the fundamental assumption behind this formulation is the Markov property x_{t:t+H} ⊥ x_{0:t−L} | x_{t−L:t}, which assumes that the future values x_{t:t+H} are independent of all farther historical values x_{0:t−L} given the adjacent short-term observations x_{t−L:t}. Note that most existing DL models (Salinas et al., 2020; Toubeau et al., 2018; Wang et al., 2019; Oreshkin et al., 2020) directly follow this formulation to solve TS. Even traditional statistical TS models (Holt, 1957; 2004; Winters, 1960) are indeed consistent with it if one omits the long-tail exponentially decayed dependencies introduced by moving averages. To precisely formulate PTS, on the other hand, this assumption needs to be slightly modified such that the dependency of x_{t:t+H} on x_{t−L:t} is further conditioned on the inherent periodicity, which can be anchored by the associated time-steps. Accordingly, we alter equation (1) into

x_{t:t+H} = F′_Θ(x_{t−L:t}, t) + ε_{t:t+H},   (2)

where, other than x_{t−L:t}, F′_Θ : R^L × R → R^H takes an extra argument t, which denotes the forecasting time-step. Existing methods for PTS adopt a few different instantiations of F′_Θ. For example, Holt (1957; 2004) developed several exponentially weighted moving average processes with additive or multiplicative seasonality. Vecchia (1985a; b) adopted the multiplicative seasonality by treating the coefficients of the auto-regressive moving average process as time-dependent.
Smyl (2020) also adopted the multiplicative seasonality and built a hybrid method by coupling it with recurrent neural networks (Hochreiter & Schmidhuber, 1997), while Taylor & Letham (2018) chose the additive seasonality by adding the periodic forecast to the other parts to form the final forecast.

4 DEPTS

In this section, we elaborate on our new framework, DEPTS. First, we start with a decoupled formulation of (2) in Section 4.1. Then, we illustrate the proposed neural architecture for this formulation in Sections 4.2 and 4.3. Last, we discuss the interpretable capabilities in Section 4.4.

4.1 THE DECOUPLED FORMULATION

To explicitly tackle the two-sided challenges of PTS forecasting, i.e., complicated periodic dependencies and diversified periodic compositions, we introduce a decoupled formulation (3) that refines (2) by introducing a hidden variable z_t to represent the periodic state at time-step t:

x_{t:t+H} = f_θ(x_{t−L:t}, z_{t−L:t+H}) + ε_{t:t+H},   z_t = g_φ(t),   (3)

where we treat z_t ∈ R^1 as a scalar value to be consistent with the uni-variate TS x_t ∈ R^1; we use f_θ : R^L × R^{L+H} → R^H to model the complicated dependencies of the future signals x_{t:t+H} on the local observations x_{t−L:t} and the corresponding periodic states z_{t−L:t+H} within the lookback and forecast horizons; and g_φ : R^1 → R^1 produces a periodic state z_t for a specific time-step t. The right part of Figure 2 depicts the overall data flow of this formulation, in which the expansion module f_θ and the periodicity module g_φ are responsible for handling the two aforementioned PTS-specific challenges, respectively.
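The periodicity module g_φ, instantiated as a series of cosine functions as described above, can be sketched as follows; the coefficient shapes and values are illustrative, not learned parameters from the paper:

```python
import numpy as np

def g_phi(t, amp, freq, phase):
    """Periodic state z_t = sum_k amp_k * cos(2*pi*freq_k*t + phase_k)."""
    t = np.asarray(t, dtype=float)[..., None]   # broadcast over components k
    return np.sum(amp * np.cos(2 * np.pi * freq * t + phase), axis=-1)

# two cosine components: a daily (24-step) and a weekly (168-step) oscillation
amp = np.array([1.0, 0.5])
freq = np.array([1 / 24, 1 / 168])
phase = np.array([0.0, 0.0])
z = g_phi(np.arange(2 * 168), amp, freq, phase)   # periodic states z_{t-L:t+H}
```

In DEPTS the triples (amp, freq, phase) play the role of the learnable coefficients φ, and the resulting z_{t−L:t+H} is fed alongside x_{t−L:t} into the expansion module f_θ of formulation (3).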
This paper focuses on an important property of time series, periodicity, and mainly studies the problem of periodic time series (PTS) forecasting. The work addresses two main research challenges of PTS forecasting: (1) learning the dependencies of observed data on periodicity; (2) learning the sophisticated compositions of various periods. The authors propose a deep expansion framework on top of residual learning for dependency learning and a parameterized surrogate function for periodicity learning. Extensive experiments on both synthetic data and five real-world datasets demonstrate a significant improvement on time series forecasting tasks, especially when periodicity is taken into account.
SP:630d574fd6934f44c539d012beca42c16490c010
DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting
1 INTRODUCTION . Time series (TS) with apparent periodic (seasonal) oscillations, referred to as periodic time series (PTS) in this paper, are pervasive in a wide range of critical industries, such as seasonal electricity spot prices in the power industry (Koopman et al., 2007), periodic traffic flows in transportation (Lippi et al., 2013), and periodic carbon dioxide exchanges and water flows in the sustainability domain (Seymour, 2001; Tesfaye et al., 2006). Clearly, PTS forecasting plays a crucial role in these industries, since it can foster business development by facilitating a variety of capabilities, including early warning, pre-planning, and resource scheduling (Kahn, 2003; Jain, 2017). Given the pervasiveness and importance of PTS, however, two obstacles largely hinder the performance of existing forecasting models. First, future TS signals exhibit complicated dependencies on both adjacent historical observations and inherent periodicity. Nevertheless, many existing studies did not consider this distinctive periodic property (Salinas et al., 2020; Toubeau et al., 2018; Wang et al., 2019; Oreshkin et al., 2020), and their performance has been greatly restrained by this neglect of periodicity modeling. Some other efforts, though explicitly introducing periodicity modeling, only followed arbitrary yet simple assumptions, such as additive or multiplicative seasonality, to capture certain plain periodic effects (Holt, 1957; 2004; Vecchia, 1985b; Taylor & Letham, 2018). These methods fail to model complicated periodic dependencies beyond such oversimplified assumptions. The second challenge lies in the fact that the inherent periodicity of a typical real-world TS is usually composed of various periods with different amplitudes and frequencies.
For example, Figure 1 exemplifies the sophisticated composition of diversified periods via an eight-year hourly TS of electricity load in a region of California. However, existing methods (Taylor & Letham, 2018; Smyl, 2020) required the pre-specification of periodic frequencies before estimating other parameters from data, evading this obstacle only by transferring the burden of periodicity coefficient initialization to practitioners. To better tackle the aforementioned two challenges, we develop a deep expansion learning framework, DEPTS, for PTS forecasting. The core idea of DEPTS is to build a deep neural network that conducts progressive expansions of the complicated dependencies of PTS signals on periodicity to facilitate forecasting. We start from a novel decoupled formulation for PTS forecasting that introduces the periodic state as a hidden variable. This new formulation stimulates us to make more customized and dedicated designs to handle the two specific challenges mentioned above. For the first challenge, we develop an expansion module on top of residual learning (He et al., 2016; Oreshkin et al., 2020) to conduct layer-by-layer expansions between observed TS signals and hidden periodic states. With such a design, we can build a deep architecture with both high capacity and efficient parameter optimization to model the complicated dependencies of TS signals on periodicity. For the second challenge, we build a periodicity module to estimate the periodic states from observational data. We represent the hidden periodic state with respect to time as a parameterized periodic function with sufficient expressiveness; in this work, for simplicity, we instantiate this function as a series of cosine functions. To relieve the burden of manually setting periodic coefficients for different data, we develop a data-driven parameter initialization strategy based on the Discrete Cosine Transform (Ahmed et al., 1974).
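To illustrate the spirit of such a DCT-based initialization, here is a minimal sketch (not the authors' exact procedure): compute the DCT-II coefficients of a series, keep the K non-constant modes with the largest amplitudes, and use them as the initial sum-of-cosines periodic function. All function names (`dct_ortho`, `init_periodic_coeffs`, `periodic_state`) are illustrative.

```python
import numpy as np

def dct_ortho(x):
    """Orthonormal DCT-II, implemented directly from its definition."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(N)[:, None]
    B = np.cos(np.pi * k * (n + 0.5) / N)        # DCT-II basis matrix
    w = np.full(N, np.sqrt(2.0 / N))
    w[0] = np.sqrt(1.0 / N)                      # DC term has its own weight
    return w * (B @ x)

def init_periodic_coeffs(x, K):
    """Data-driven initialization: keep the K strongest non-DC DCT modes."""
    c = dct_ortho(x)
    top = np.argsort(np.abs(c[1:]))[-K:] + 1     # skip the DC term k = 0
    return c[top], top

def periodic_state(t, coeffs, idx, N):
    """Periodic state z_t as a truncated sum of cosines (selected DCT modes)."""
    basis = np.cos(np.pi * idx[:, None] * (t + 0.5) / N)
    return np.sqrt(2.0 / N) * (coeffs @ basis)

# toy series: one dominant 32-sample cycle plus noise
rng = np.random.default_rng(0)
t = np.arange(256)
x = np.cos(2 * np.pi * t / 32) + 0.1 * rng.standard_normal(256)
coeffs, idx = init_periodic_coeffs(x, K=1)
print(idx[0])  # mode k = 16, i.e. frequency 16/(2*256) = 1/32 cycles per sample
```

Mode index k of the DCT-II corresponds to frequency k/(2N) cycles per sample, so the coefficients directly yield the amplitudes and frequencies that seed the cosine series.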
After that, we combine the periodicity module with the expansion module to perform end-to-end learning. To the best of our knowledge, DEPTS is a very early attempt to build a customized deep learning (DL) architecture for PTS that explicitly takes the periodic property into account. Moreover, with two delicately designed modules, DEPTS also offers certain interpretable capabilities. First, the expansions of forecasts can distinguish the contributions from either adjacent TS signals or inherent periodicity, which intuitively illustrates how the future TS signals may vary based on local momenta and global periodicity. Second, the coefficients of the periodicity module have their own practical meanings, such as amplitudes and frequencies, which are inherently interpretable. We conduct experiments on both synthetic data and real-world data, which all demonstrate the superiority of DEPTS in handling PTS. On average, DEPTS reduces the error of the best baseline by about 10%; in a few cases, the error reduction even reaches 20%. Besides, we also include extensive ablation tests to verify our critical designs and visualize specific model components to interpret model behaviors. 2 RELATED WORK . TS forecasting is a longstanding research topic that has been extensively studied for decades. After a comprehensive review of the literature, we identify three paradigms for developing TS models. At an early stage, researchers developed simple yet effective statistical modeling approaches, including exponentially weighted moving averages (Holt, 1957; 2004; Winters, 1960), auto-regressive moving averages (ARMA) (Whittle, 1951; 1963), the unified state-space modeling approach, and various other extensions (Hyndman & Khandakar, 2008). However, these statistical approaches only considered the linear dependencies of future TS signals on past observations.
To handle high-order dependencies, researchers attempted to adopt hybrid designs that combine statistical modeling with more advanced high-capacity models (Montero-Manso et al., 2020; Smyl, 2020). At the same time, with the great successes of DL in computer vision (He et al., 2016) and natural language processing (Vaswani et al., 2017), various DL models have also been developed for TS forecasting (Rangapuram et al., 2018; Toubeau et al., 2018; Salinas et al., 2020; Zia & Razzaq, 2020). Among them, the most representative one is N-BEATS (Oreshkin et al., 2020), a pure DL architecture that has achieved state-of-the-art performance across a wide range of benchmarks. The connections between DEPTS and N-BEATS are discussed in Section 4.2. As for PTS forecasting, many traditional statistical approaches explicitly considered the periodic property, such as periodic ARMA (PARMA) (Vecchia, 1985a;b) and its variants (Tesfaye et al., 2006; Anderson et al., 2007; Dudek et al., 2016). However, as discussed in Sections 1 and 3, these methods only followed some arbitrary yet simple assumptions, such as additive or multiplicative seasonality, and thus cannot handle the complicated periodic dependencies of many real-world scenarios. Besides, other recent studies either followed similar assumptions for periodicity or required the pre-specification of periodic coefficients (Taylor & Letham, 2018; Smyl, 2020). To the best of our knowledge, ours is the first work to develop a customized DL architecture that models complicated periodic dependencies and captures diversified periodic compositions simultaneously. 3 PROBLEM FORMULATIONS . We consider the point forecasting problem of regularly sampled uni-variate TS. Let $x_t$ denote the time series value at time-step $t$. The classical auto-regressive formulation projects the historical observations $x_{t-L:t} = [x_{t-L}, \ldots, x_{t-1}]$ into the subsequent future values $x_{t:t+H} = [x_t, \ldots, x_{t+H-1}]$:
$$x_{t:t+H} = F_\Theta(x_{t-L:t}) + \epsilon_{t:t+H}, \quad (1)$$
where $H$ is the length of the forecast horizon, $L$ is the length of the lookback window, $F_\Theta : \mathbb{R}^L \to \mathbb{R}^H$ is a mapping function parameterized by $\Theta$, and $\epsilon_{t:t+H} = [\epsilon_t, \ldots, \epsilon_{t+H-1}]$ denotes a vector of independent and identically distributed Gaussian noises. Essentially, the fundamental assumption behind this formulation is the Markov property $x_{t:t+H} \perp x_{0:t-L} \mid x_{t-L:t}$, i.e., the future values $x_{t:t+H}$ are independent of all earlier values $x_{0:t-L}$ given the adjacent short-term observations $x_{t-L:t}$. Note that most existing DL models (Salinas et al., 2020; Toubeau et al., 2018; Wang et al., 2019; Oreshkin et al., 2020) directly follow this formulation. Even traditional statistical TS models (Holt, 1957; 2004; Winters, 1960) are consistent with it if one omits the long-tail, exponentially decaying dependencies introduced by moving averages. To precisely formulate PTS, on the other hand, this assumption needs to be slightly modified such that the dependency of $x_{t:t+H}$ on $x_{t-L:t}$ is further conditioned on the inherent periodicity, which can be anchored by the associated time-steps. Accordingly, we alter equation (1) into
$$x_{t:t+H} = F'_\Theta(x_{t-L:t}, t) + \epsilon_{t:t+H}, \quad (2)$$
where, in addition to $x_{t-L:t}$, $F'_\Theta : \mathbb{R}^L \times \mathbb{R} \to \mathbb{R}^H$ takes an extra argument $t$ denoting the forecasting time-step. Existing methods for PTS adopt a few different instantiations of $F'_\Theta$. For example, Holt (1957; 2004) developed several exponentially weighted moving average processes with additive or multiplicative seasonality. Vecchia (1985a;b) adopted multiplicative seasonality by treating the coefficients of the auto-regressive moving average process as time-dependent.
Smyl (2020) also adopted multiplicative seasonality and built a hybrid method by coupling it with recurrent neural networks (Hochreiter & Schmidhuber, 1997), while Taylor & Letham (2018) chose additive seasonality, adding the periodic forecast to the other parts to obtain the final forecast. 4 DEPTS . In this section, we elaborate on our new framework, DEPTS. First, we start with a decoupled formulation of (2) in Section 4.1. Then, we illustrate the proposed neural architecture for this formulation in Sections 4.2 and 4.3. Last, we discuss the interpretable capabilities in Section 4.4. 4.1 THE DECOUPLED FORMULATION . To explicitly tackle the two-sided challenges of PTS forecasting, i.e., complicated periodic dependencies and diversified periodic compositions, we introduce a decoupled formulation (3) that refines (2) by introducing a hidden variable $z_t$ to represent the periodic state at time-step $t$:
$$x_{t:t+H} = f_\theta(x_{t-L:t}, z_{t-L:t+H}) + \epsilon_{t:t+H}, \qquad z_t = g_\phi(t), \quad (3)$$
where we treat $z_t \in \mathbb{R}$ as a scalar to be consistent with the uni-variate TS $x_t \in \mathbb{R}$; $f_\theta : \mathbb{R}^L \times \mathbb{R}^{L+H} \to \mathbb{R}^H$ models the complicated dependencies of the future signals $x_{t:t+H}$ on the local observations $x_{t-L:t}$ and the corresponding periodic states $z_{t-L:t+H}$ within the lookback and forecast horizons; and $g_\phi : \mathbb{R} \to \mathbb{R}$ produces the periodic state $z_t$ for a specific time-step $t$. The right part of Figure 2 depicts the overall data flow of this formulation, in which the expansion module $f_\theta$ and the periodicity module $g_\phi$ are responsible for handling the two aforementioned PTS-specific challenges, respectively.
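To make the shapes in formulations (1) and (3) concrete, the following sketch slices a series into (lookback, horizon) windows, evaluates a periodicity module over both horizons, and feeds observations plus periodic states to an expansion module. The module implementations here (a single cosine for g_phi, an untrained random linear map for f_theta) are placeholders for illustrating the data flow only, not the paper's architecture.

```python
import numpy as np

L, H = 24, 12                      # lookback and horizon lengths
rng = np.random.default_rng(0)

def make_windows(x, L, H):
    """Slice a uni-variate series into (lookback, horizon) pairs, per eq. (1)."""
    idx = np.arange(L, len(x) - H + 1)            # valid forecast origins t
    X = np.stack([x[t - L:t] for t in idx])       # inputs  x_{t-L:t}
    Y = np.stack([x[t:t + H] for t in idx])       # targets x_{t:t+H}
    return X, Y, idx

def g_phi(t):
    """Placeholder periodicity module: one cosine with a 24-step period."""
    return np.cos(2 * np.pi * t / 24.0)

# Placeholder expansion module f_theta: R^L x R^{L+H} -> R^H (untrained).
W_x = 0.1 * rng.standard_normal((H, L))
W_z = 0.1 * rng.standard_normal((H, L + H))
def f_theta(x_lb, z):
    return W_x @ x_lb + W_z @ z

series = rng.standard_normal(200)
X, Y, origins = make_windows(series, L, H)
t0 = origins[0]                                   # first forecast origin
z = g_phi(np.arange(t0 - L, t0 + H))              # z_{t-L:t+H}, per eq. (3)
forecast = f_theta(X[0], z)
print(X.shape, Y.shape, z.shape, forecast.shape)  # (165, 24) (165, 12) (36,) (12,)
```

Note that the periodic states span L+H time-steps: unlike the observations, they are available over the forecast horizon as well, because g_phi is a function of the time index alone.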
The paper addresses the problem of time series forecasting, especially with periodic dependencies. The authors propose a model that combines a learned one-dimensional sum of cosines as the periodic signal with a residual feedforward neural network (N-BEATS). They propose to learn the model in two stages: first, estimate the cosines with a discrete cosine transform, greedily selecting the K components with the largest amplitudes; second, refine both the periodic time encoding and the overall forecasting model end-to-end. In experiments on synthetic data, three real-life datasets from the literature, and two new real-life datasets with longer training segments (which should make periodicities easier to identify), they show that they outperform the underlying N-BEATS model mostly consistently.
This paper proposes a novel DL framework for periodic time series forecasting. The contributions of the paper include modeling complicated periodic dependencies and capturing compositions of diversified periods. The authors conduct extensive experiments on synthetic data and real-world datasets, showing the effectiveness and interpretability of the proposed methods.
Anarchic Federated Learning
1 INTRODUCTION . Federated Learning (FL) has recently emerged as an important distributed learning framework that leverages numerous workers to collaboratively learn a joint model (Li et al., 2019a; Yang et al., 2019; Kairouz et al., 2019). Since its inception, FL algorithms have become increasingly powerful and able to handle various forms of heterogeneity in data, network environments, worker computing capabilities, etc. Moreover, most of the prevailing FL algorithms (e.g., FedAvg (McMahan et al., 2016) and its variants (Li et al., 2018; Zhang et al., 2020b; Karimireddy et al., 2020b;a; Acar et al., 2021)) enjoy a so-called "linear speedup effect," i.e., the convergence time of an FL algorithm decreases linearly as the number of workers increases (Stich, 2018; Yu et al., 2019; Wang & Joshi, 2018; Khaled et al., 2019; Karimireddy et al., 2020b; Yang et al., 2021; Qu et al., 2020). To achieve these salient features, most existing FL algorithms adopt a server-centric approach, i.e., worker behaviors are tightly "dictated" by the server. For example, the server in these FL algorithms can i) determine either all or a subset of workers to participate in each round of FL updates; ii) fully control the timing of synchronization and whether to accept or reject information sent from the workers; and iii) precisely specify the algorithmic operations (e.g., the number of local steps performed at each worker before communicating with the server). Despite achieving strong performance guarantees, such a server-centric approach introduces several limitations.
Specifically, these server-centric FL algorithms often implicitly rely on the following assumptions: (1) each worker is available for training upon the server's request and throughout a complete round; (2) all participating workers are willing to execute the same number of local updates and communicate with the server synchronously, following a common clock. Unfortunately, in the edge networks where many FL systems are deployed, these assumptions are restrictive or even problematic for the following reasons. First, many requested edge devices on the worker side may not be available in each round because of, e.g., communication errors or battery outages. Second, the use of synchronous communication and an identical number of local updates across all workers ignores the fact that worker devices in edge-based FL systems are heterogeneous in computation and communication capabilities. As a result, stragglers (i.e., slow workers) can significantly slow down the training process. To mitigate the straggler effect, various robust FL algorithms have been developed. For example, the server in FedAvg (McMahan et al., 2016) can simply ignore and drop the information from stragglers to speed up learning. However, this may lead to other problems, such as wasted computation/energy (Wang et al., 2019), slower convergence (Li et al., 2018), or biased/unfair use of worker data (Kairouz et al., 2019). Moreover, the synchronous nature of server-centric approaches entails many networking problems (e.g., interference between workers, periodic traffic spikes, high complexity in maintaining a network-wide common clock). The above limitations of current server-centric FL approaches motivate us to propose a new paradigm in FL, which we call Anarchic Federated Learning (AFL). In stark contrast to server-centric FL, workers in AFL are completely free of "dictation" from the server.
Specifically , each worker has complete freedom to choose when and how long to participate in FL without following any control signals from the server . As a result , the information fed back from workers is inherently asynchronous . Also , each worker can independently determine the number of local update steps to perform in each round based on its current local situation ( e.g. , battery level , communication channels , privacy concerns ) . In other words , the amount of local computation at each worker is time-varying , device-dependent , and fully controlled by the worker itself . Clearly , AFL has a much lower server-worker coordination complexity and avoids the aforementioned pitfalls in server-centric FL approaches . However , AFL also introduces significant challenges in algorithmic design on the server-side because the server needs to work much harder to handle the chaotic worker behaviors in AFL ( e.g. , asynchrony , spatial and temporal heterogeneity in computing ) . Toward this end , several fundamental questions naturally arise : 1 ) Is it possible to design algorithms that converge under AFL ? 2 ) If the answer to the previous question is yes , how fast could the algorithms converge ? 3 ) Can the new AFL-based algorithms still achieve the desired “ linear speedup effect ? ” In this paper , we answer the above fundamental questions of AFL affirmatively . Our main contributions and key results are summarized as follows : • We propose a new FL paradigm called Anarchic Federated Learning ( AFL ) , where the workers are allowed to engage in training at will and choose the number of local update steps based on their own time-varying situations ( computing resources , energy levels , etc. ) . This loose worker-server coupling significantly simplifies the implementations and renders AFL particularly suitable for FL deployments in edge computing environments . For any AFL algorithms under general worker information arrival processes and non-i.i.d . 
data across workers, we first establish a fundamental convergence error lower bound that depends on the data heterogeneity in the AFL system. Then, we propose two Anarchic Federated Averaging (AFA) algorithms with two-sided learning rates for two classes of FL problems (cross-device and cross-silo) (Kairouz et al., 2019; Wang et al., 2021). • For AFL in the cross-device (CD) setting, our AFA-CD algorithm converges to an error ball whose size matches the fundamental lower bound, at an $O(1/\sqrt{mT})$ convergence rate, where $m$ is the number of collected workers in each round of update and $T$ is the total number of rounds. We note that this convergence rate retains the highly desirable "linear speedup effect" under AFL.1 Moreover, in the special case of uniform worker participation (equivalent to uniform worker sampling in conventional FL (Li et al., 2019c; Karimireddy et al., 2020b;a; Acar et al., 2021)), AFA-CD further converges to a stationary point (i.e., a singleton) at a rate matching the state of the art in conventional distributed and federated learning. • For AFL in the cross-silo (CS) setting, our proposed AFA-CS algorithm achieves an enhanced convergence rate of $O(1/\sqrt{MT})$ by leveraging historical feedback and variance reduction techniques, where $M$ is the total number of workers. This suggests that not only can "linear speedup" be achieved by AFA-CS, but the speedup factor also depends on the total number of workers $M$ instead of the number of collected workers $m$ in each round ($M > m$). To our knowledge, this result is new in the FL literature. • We validate the proposed algorithms with extensive experiments on CV and NLP tasks and further explore the effects of asynchrony and the number of local steps in AFL. We also show numerically that AFL is a general algorithmic framework in the sense that various advanced FL techniques (e.g., FedProx (Li et al.
, 2018) and SCAFFOLD (Karimireddy et al., 2020b)) can be integrated as optimizers into our AFA framework to further enhance AFL performance. 1 To attain $\epsilon$-accuracy, an algorithm with an $O(1/\sqrt{T})$ convergence rate takes $O(1/\epsilon^2)$ steps, while an algorithm with an $O(1/\sqrt{mT})$ convergence rate needs only $O(1/(m\epsilon^2))$ steps (with the same hidden constant in the Big-O). In this sense, $O(1/\sqrt{mT})$ implies a linear speedup with respect to the number of workers. The rest of the paper is organized as follows. In Section 2, we review related work. In Section 3, we introduce AFL and our AFA algorithms, followed by their convergence analysis in Section 4. We present numerical results in Section 5 and conclude in Section 6. 2 RELATED WORK . Server-Centric Federated Learning Algorithms: To date, one of the prevailing FL algorithms is Federated Averaging (FedAvg), first proposed in (McMahan et al., 2016) as a heuristic to improve communication efficiency and data privacy in FL. Since then, there have been substantial follow-ups of FedAvg focusing on non-i.i.d. (heterogeneous) data (see, e.g., FedProx (Li et al., 2018), FedPD (Zhang et al., 2020b), SCAFFOLD (Karimireddy et al., 2020b), FedNova (Wang et al., 2020), FedDyn (Acar et al., 2021), and MIME (Karimireddy et al., 2020a)), which are closely related to our work. The main idea of these algorithms is to control the "model drift" caused by heterogeneous datasets and the use of multiple local update steps on the worker side of FedAvg. While these algorithms have achieved various degrees of success in dealing with data heterogeneity, they are all server-centric synchronous algorithms that are not easy to implement in edge-based FL due to straggler issues (see the discussion in Section 1).
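The speedup arithmetic in footnote 1 can be written out explicitly: equating each convergence rate with the target accuracy $\epsilon$ gives

```latex
\frac{1}{\sqrt{T}} \le \epsilon \;\Longrightarrow\; T \ge \frac{1}{\epsilon^{2}},
\qquad\qquad
\frac{1}{\sqrt{mT}} \le \epsilon \;\Longrightarrow\; T \ge \frac{1}{m\,\epsilon^{2}},
```

so, for the same hidden constants, doubling the number of participating workers $m$ halves the number of rounds needed to reach $\epsilon$-accuracy.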
Federated Learning with Flexible Worker Participation : Recently , some attempts have been made to alleviate the strict requirements on worker ’ s participation , such as allowing different local steps ( Ruan et al. , 2021 ; Wang et al. , 2020 ) and asynchronous FL ( Avdiukhin & Kasiviswanathan , 2021 ; Xie et al. , 2019 ) . However , most of these works either lack theoretical performance guarantees or require strong assumptions . For example , Ruan et al . ( 2021 ) assumed strongly convex loss function and bounded aggregation coefficient ; Avdiukhin & Kasiviswanathan ( 2021 ) assumed bounded gradients and same computation time per iteration for all workers . Our AFL paradigm considered in this paper is more general and subsumes all the above settings as special cases . We note , however , that AFL differs from conventional FL with flexible worker participation in that the worker ’ s participation in AFL and its local optimization process are completely determined by the workers , and not by the sampling requests from the server . This is more practical since it allows workers to participate in FL under drastically different situations in network , charging/idle cycles , etc . Due to the complex couplings between various sources of randomness and multiple layers of heterogeneity in spatial and temporal domains in AFL , the training algorithm design for AFL and its theoretical analysis is far from a straightforward combination of existing FL techniques for flexible worker participation . Asynchronous Distributed Optimization : The asynchrony in AFL also shares some similarity with asynchronous distributed optimization . The basic idea of asynchronous distributed optimization is to forgo the common clock in the system to lower the system implementation complexity in distributed optimization . However , due to extra noise introduced by asynchrony , it is highly non-trivial to establish the convergence performance of asynchronous distributed optimization algorithms . 
To address this challenge , asynchronous distributed optimization has been studied extensively in the machine learning and optimization literature ( see , e.g. , Lian et al . ( 2018 ) ; Niu et al . ( 2011 ) ; Agarwal & Duchi ( 2012 ) ; Paine et al . ( 2013 ) ; Xie et al . ( 2019 ) ; Zhang et al . ( 2020a ) and references therein ) . We note that the AFL paradigm considered in this paper is more general and subsumes asynchronous distributed optimization as a special case . To see this , note that in addition to the asynchronous updates at the server , the workers in AFL could further have different numbers of local update steps . Moreover , the workers may not even need to be work-conserving ( i.e. , workers could be idle between rounds of updates ) . As a result , the convergence analysis of AFL is much more challenging .
In order to address the issues of FedAvg (e.g., stragglers, wasted computation, slow convergence), this paper proposes a new federated training scheme, "anarchic federated learning" (AFL), as an alternative. Instead of uniformly sampling participating clients, AFL lets all workers decide their number of local steps, when to communicate, and their step sizes / batch sizes. The authors establish a theoretical convergence rate for AFL and show that it recovers the rate of FedAvg under the assumptions of uniformly distributed arrival of worker information and bounded maximum delay and local steps. The authors also provide empirical evaluations on several experiments to demonstrate these findings.
SP:8789841db1520b75242c756139a64d1f1d284f3b
Anarchic Federated Learning
1 INTRODUCTION . Federated Learning ( FL ) has recently emerged as an important distributed learning framework that leverages numerous workers to collaboratively learn a joint model ( Li et al. , 2019a ; Yang et al. , 2019 ; Kairouz et al. , 2019 ) . Since its inception , FL algorithms have become increasingly powerful and are now able to handle various types of heterogeneity in data , network environments , worker computing capabilities , etc . Moreover , most of the prevailing FL algorithms ( e.g. , FedAvg ( McMahan et al. , 2016 ) and its variants ( Li et al. , 2018 ; Zhang et al. , 2020b ; Karimireddy et al. , 2020b ; a ; Acar et al. , 2021 ) ) enjoy the so-called “ linear speedup effect , ” i.e. , the convergence time of an FL algorithm decreases linearly as the number of workers increases ( Stich , 2018 ; Yu et al. , 2019 ; Wang & Joshi , 2018 ; Khaled et al. , 2019 ; Karimireddy et al. , 2020b ; Yang et al. , 2021 ; Qu et al. , 2020 ) . To achieve these salient features , most of the existing FL algorithms have adopted a server-centric approach , i.e. , the worker behaviors are tightly “ dictated ” by the server . For example , the server in these FL algorithms can i ) determine either all or a subset of workers to participate in each round of FL update ; ii ) fully control the timing for synchronization and whether to accept/reject information sent from the workers ; iii ) precisely specify the algorithmic operations ( e.g. , the number of local steps performed at each worker before communicating with the server ) , etc . Despite achieving strong performance guarantees , such a server-centric approach introduces several limitations . 
Specifically , these server-centric FL algorithms often implicitly rely on the following assumptions : ( 1 ) each worker is available for training upon the server ’ s request and throughout a complete round ; ( 2 ) all participating workers are willing to execute the same number of local updates and communicate with the server in a synchronous manner following a common clock . Unfortunately , in edge networks where many FL systems are deployed , these assumptions are restrictive or even problematic for the following reasons . First , many requested edge devices on the worker side may not be available in each round because of , e.g. , communication errors or battery outages . Second , the use of synchronous communication and an identical number of local updates across all workers ignores the fact that worker devices in edge-based FL systems are heterogeneous in computation and communication capabilities . As a result , stragglers ( i.e. , slow workers ) could significantly slow down the training process . To mitigate the straggler effect , various robust FL algorithms have been developed . For example , the server in FedAvg ( McMahan et al. , 2016 ) can simply ignore and drop the information from the stragglers to speed up learning . However , this may lead to other problems such as wasted computation/energy ( Wang et al. , 2019 ) , slower convergence ( Li et al. , 2018 ) , or biased/unfair use of worker data ( Kairouz et al. , 2019 ) . Moreover , the synchronous nature of the server-centric approaches gives rise to many networking problems ( e.g. , interference between workers , periodic traffic spikes , high complexity in maintaining a network-wide common clock ) . The above limitations of the current server-centric FL approaches motivate us to propose a new paradigm in FL , which we call Anarchic Federated Learning ( AFL ) . In stark contrast to server-centric FL , workers in AFL are completely free of the “ dictation ” from the server . 
Specifically , each worker has complete freedom to choose when and how long to participate in FL without following any control signals from the server . As a result , the information fed back from workers is inherently asynchronous . Also , each worker can independently determine the number of local update steps to perform in each round based on its current local situation ( e.g. , battery level , communication channels , privacy concerns ) . In other words , the amount of local computation at each worker is time-varying , device-dependent , and fully controlled by the worker itself . Clearly , AFL has a much lower server-worker coordination complexity and avoids the aforementioned pitfalls in server-centric FL approaches . However , AFL also introduces significant challenges in algorithmic design on the server-side because the server needs to work much harder to handle the chaotic worker behaviors in AFL ( e.g. , asynchrony , spatial and temporal heterogeneity in computing ) . Toward this end , several fundamental questions naturally arise : 1 ) Is it possible to design algorithms that converge under AFL ? 2 ) If the answer to the previous question is yes , how fast could the algorithms converge ? 3 ) Can the new AFL-based algorithms still achieve the desired “ linear speedup effect ? ” In this paper , we answer the above fundamental questions of AFL affirmatively . Our main contributions and key results are summarized as follows : • We propose a new FL paradigm called Anarchic Federated Learning ( AFL ) , where the workers are allowed to engage in training at will and choose the number of local update steps based on their own time-varying situations ( computing resources , energy levels , etc. ) . This loose worker-server coupling significantly simplifies the implementations and renders AFL particularly suitable for FL deployments in edge computing environments . For any AFL algorithms under general worker information arrival processes and non-i.i.d . 
data across workers , we first establish a fundamental convergence error lower bound that depends on the data heterogeneity in the AFL system . Then , we propose two Anarchic Federated Averaging ( AFA ) algorithms with two-sided learning rates for two classes of FL problems ( cross-device and cross-silo ) ( Kairouz et al. , 2019 ; Wang et al. , 2021 ) . • For AFL in the cross-device ( CD ) setting , our AFA-CD algorithm converges to an error ball whose size matches the fundamental lower bound , with an O(1/√(mT)) convergence rate , where m is the number of collected workers in each round of update and T is the total number of rounds . We note that this convergence rate retains the highly desirable “ linear speedup effect ” under AFL.1 Moreover , under the special case with uniform workers ’ participation ( equivalent to uniform worker sampling in conventional FL ( Li et al. , 2019c ; Karimireddy et al. , 2020b ; a ; Acar et al. , 2021 ) ) , AFA-CD can further converge to a stationary point ( i.e. , a singleton ) at a convergence rate that matches the state-of-the-art of conventional distributed and federated learning . • For AFL in the cross-silo ( CS ) setting , our proposed AFA-CS algorithm achieves an enhanced convergence rate of O(1/√(MT)) by leveraging historical feedback and variance reduction techniques , where M is the total number of workers . This suggests that , not only can “ linear speedup ” be achieved under AFL-CS , the speedup factor also depends on the total number of workers M instead of the number of collected workers m in each round ( M > m ) . To our knowledge , this result is new in the FL literature . • We validate the proposed algorithms with extensive experiments on CV and NLP tasks and further explore the effect of the asynchrony and local step number in AFL . We also numerically show that our AFL is a general algorithmic framework in the sense that various advanced FL techniques ( e.g. , FedProx ( Li et al. 
, 2018 ) and SCAFFOLD ( Karimireddy et al. , 2020b ) ) can be integrated as the optimizers in our AFA framework to further enhance the AFL performance . 1To attain ε-accuracy , it takes O(1/ε²) steps for an algorithm with an O(1/√T) convergence rate , while needing O(1/(mε²)) steps for another algorithm with an O(1/√(mT)) convergence rate ( the hidden constant in Big-O is the same ) . In this sense , O(1/√(mT)) implies a linear speedup with respect to the number of workers . The rest of the paper is organized as follows . In Section 2 , we review related work . In Section 3 , we introduce AFL and our AFA algorithms , which are followed by their convergence analysis in Section 4 . We present the numerical results in Section 5 and conclude the work in Section 6 . 2 RELATED WORK . Server-Centric Federated Learning Algorithms : To date , one of the prevailing FL algorithms is Federated Averaging ( FedAvg ) , which was first proposed in ( McMahan et al. , 2016 ) as a heuristic to improve communication efficiency and data privacy for FL . Since then , there have been substantial follow-ups of FedAvg that focus on non-i.i.d . ( heterogeneous ) data ( see , e.g. , FedProx ( Li et al. , 2018 ) , FedPD ( Zhang et al. , 2020b ) , SCAFFOLD ( Karimireddy et al. , 2020b ) , FedNova ( Wang et al. , 2020 ) , FedDyn ( Acar et al. , 2021 ) , and MIME ( Karimireddy et al. , 2020a ) ) , which are closely related to our work . The main idea for these algorithms is to control the “ model drift ” ( due to heterogeneous datasets and the use of multiple local update steps on the worker side of FedAvg ) . While these algorithms achieved various degrees of success in dealing with data heterogeneity , they are all server-centric synchronous algorithms that are not easy to implement in edge-based FL due to straggler issues ( see discussions in Section 1 ) . 
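The linear-speedup accounting in footnote 1 can be sanity-checked with a two-line calculation: a rate bound C/√T falls below ε only after T ≳ (C/ε)² rounds, while C/√(mT) needs a 1/m fraction of that. This is a toy illustration with the hidden constant set to 1:

```python
# Rounds needed for the rate bound to drop below eps:
#   C/sqrt(T)   <= eps  ->  T >= (C/eps)**2
#   C/sqrt(m*T) <= eps  ->  T >= (C/eps)**2 / m
def rounds_needed(eps, m=1, C=1.0):
    return (C / eps) ** 2 / m

print(rounds_needed(0.01))        # → 10000.0  (O(1/eps^2) rounds for the 1/sqrt(T) rate)
print(rounds_needed(0.01, m=8))   # → 1250.0   (m-fold fewer rounds: the linear speedup)
```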
Federated Learning with Flexible Worker Participation : Recently , some attempts have been made to alleviate the strict requirements on workers ’ participation , such as allowing different numbers of local steps ( Ruan et al. , 2021 ; Wang et al. , 2020 ) and asynchronous FL ( Avdiukhin & Kasiviswanathan , 2021 ; Xie et al. , 2019 ) . However , most of these works either lack theoretical performance guarantees or require strong assumptions . For example , Ruan et al . ( 2021 ) assumed a strongly convex loss function and bounded aggregation coefficients ; Avdiukhin & Kasiviswanathan ( 2021 ) assumed bounded gradients and the same computation time per iteration for all workers . The AFL paradigm considered in this paper is more general and subsumes all the above settings as special cases . We note , however , that AFL differs from conventional FL with flexible worker participation in that workers ’ participation in AFL and their local optimization processes are completely determined by the workers themselves , not by sampling requests from the server . This is more practical since it allows workers to participate in FL under drastically different network conditions , charging/idle cycles , etc . Due to the complex couplings between various sources of randomness and multiple layers of heterogeneity in the spatial and temporal domains in AFL , the training algorithm design for AFL and its theoretical analysis are far from a straightforward combination of existing FL techniques for flexible worker participation . Asynchronous Distributed Optimization : The asynchrony in AFL also shares some similarity with asynchronous distributed optimization . The basic idea of asynchronous distributed optimization is to forgo the common clock in the system to lower the implementation complexity of distributed optimization . However , due to the extra noise introduced by asynchrony , it is highly non-trivial to establish the convergence performance of asynchronous distributed optimization algorithms . 
To address this challenge , asynchronous distributed optimization has been studied extensively in the machine learning and optimization literature ( see , e.g. , Lian et al . ( 2018 ) ; Niu et al . ( 2011 ) ; Agarwal & Duchi ( 2012 ) ; Paine et al . ( 2013 ) ; Xie et al . ( 2019 ) ; Zhang et al . ( 2020a ) and references therein ) . We note that the AFL paradigm considered in this paper is more general and subsumes asynchronous distributed optimization as a special case . To see this , note that in addition to the asynchronous updates at the server , the workers in AFL could further have different numbers of local update steps . Moreover , the workers may not even need to be work-conserving ( i.e. , workers could be idle between rounds of updates ) . As a result , the convergence analysis of AFL is much more challenging .
This paper proposes a general FL framework, called anarchic federated learning (AFL), that allows voluntary participation of clients, with individual update steps and delays. Algorithms for both the cross-device and cross-silo settings are presented. Convergence upper bounds and a lower bound are derived. Experimental results are reported to demonstrate the effectiveness of the proposed algorithms.
SP:8789841db1520b75242c756139a64d1f1d284f3b
This paper analyses a variant of generalized federated averaging ([Wang et al.](https://arxiv.org/abs/2107.06917)) with partial worker participation and asynchrony in the stateful (i.e., worker-specific data can be saved on the server) and stateless settings, which characterize cross-silo and cross-device federated learning ([Kairouz et al.](https://arxiv.org/abs/1912.04977)) respectively. Specifically, for each global update, the server uses fresh gradients from $m$ out of $M$ machines, where each machine can make some local SGD updates starting from a stale global iterate (i.e., the gradients are delayed). This setting is termed *anarchic federated learning* (AFL) when the workers can have an arbitrary number of local steps and gradient delays. The authors first provide a lower bound for convergence to a first-order stationary point in the AFL setting. Then they upper bound the convergence of their algorithms in the stateful and stateless settings when the local updates and delays are bounded. Under some regimes, and under assumptions on the worker sampling and delay distribution, a linear convergence speed-up can be shown w.r.t. the number of machines ($m$ for the stateless setting and $M$ for the stateful setting). Some experiments are provided to measure the effect of the anarchic worker behavior.
SP:8789841db1520b75242c756139a64d1f1d284f3b
Towards Understanding the Robustness Against Evasion Attack on Categorical Data
1 INTRODUCTION . Categorical data pervasively exist in real-world safety-critical Machine-Learning-as-a-Service ( MLaaS ) applications , such as ML-driven intrusion detection and digital healthcare . The vulnerability to attacks that intentionally craft categorical signatures raises concerns about the trust and utility of ML-based analytic services . Characterizing and assessing adversarial robustness on categorical data can thus help evaluate the reliability of the core ML models and flag potential evasion efforts . For a classifier f with categorical inputs x , the adversarial risk of f under evasion attack can be formulated as follows . Definition 1 f : x → y denotes a classifier with categorical inputs x . Let μ_{x,y} denote the joint distribution of ( x , y ) . The expected adversarial risk of the classifier f under evasion attack is formulated as : R_adv^ε = E_{(x,y)∼μ_{x,y}} sup_{|diff(x,x̂)|≤ε} ℓ( f(x̂) , y ) ( 1 ) where ℓ is the misclassification loss function , x and x̂ are the unperturbed and the generated adversarial sample , respectively , ε denotes the attack budget of the evasion attack , and x is correctly classified ( f(x) = y ) . Intuitively , a classifier f is more robust against adversarial perturbations if its adversarial risk is low given an attack budget limiting the number of changed categorical features . Unlike a continuous variable , a categorical variable takes exactly one categorical value among others , and these categorical values have no intrinsic ordering . An evasion attack manipulating categorical inputs is in nature an NP-hard knapsack problem . This discontinuous nature raises two fundamental yet rarely addressed questions for evaluating the adversarial risk on categorical data in practice : • Q1 What are the key factors determining f ’ s adversarial risk R_adv^ε on categorical data ? • Q2 For a general classifier f , can we assess the adversarial risk of f with categorical inputs with a provable accuracy guarantee ? 
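On tiny problems, Equation (1) can be evaluated exactly by enumerating every perturbation that changes at most ε categorical features and taking the worst-case loss. The classifier, loss, and vocabulary below are placeholder toys; the enumeration is combinatorial in the number of features, which is exactly why the provably approximate methods discussed next matter:

```python
from itertools import combinations, product

def adversarial_loss(f, loss, x, y, vocab, eps):
    """Worst-case loss over all x_hat differing from x in at most eps features."""
    worst = loss(f(x), y)
    for k in range(1, eps + 1):
        for positions in combinations(range(len(x)), k):      # which features change
            choices = [[v for v in vocab[p] if v != x[p]] for p in positions]
            for vals in product(*choices):                     # their new values
                x_hat = list(x)
                for p, v in zip(positions, vals):
                    x_hat[p] = v
                worst = max(worst, loss(f(tuple(x_hat)), y))
    return worst

# Toy 0/1-loss example: the "classifier" predicts 1 iff at least two features are "a".
f = lambda x: int(sum(v == "a" for v in x) >= 2)
loss = lambda pred, y: int(pred != y)
vocab = [("a", "b")] * 3

print(adversarial_loss(f, loss, ("a", "a", "b"), 1, vocab, eps=1))  # → 1 (one flip evades)
print(adversarial_loss(f, loss, ("a", "a", "a"), 1, vocab, eps=1))  # → 0 (robust at eps=1)
```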
Despite recent efforts on adversarial vulnerability exploration with discrete data, both questions remain open for several reasons. First, the discontinuity of the categorical space prevents the direct use of previous progress on adversarial risk analysis with continuous data: the local-subspace assumption of ℓ_p-bounded adversarial attacks does not apply to categorical features (Hein & Andriushchenko, 2017; Wang et al., 2018; Fawzi et al., 2016; Gilmer et al., 2018; Yin et al., 2019; Khim & Loh, 2018; Tu et al., 2019). Second, most practices of discrete adversarial attack are domain specific and depend heavily on domain knowledge. Bojchevski & Günnemann (2019); Zügner & Günnemann (2019) focus on building differentiable surrogate functions for Graph Neural Networks to facilitate the search for feasible poisoning edits over graph structures and node attributes. Narodytska & Kasiviswanathan (2017); Croce & Hein (2019) conduct ℓ_0-norm perturbations only within local image areas containing sensitive features for image classification. Qi et al. (2019); Wang et al. (2020a) require non-negativity of the parameters of deep neural networks to deliver provably accurate greedy attacks via submodular function optimization; this non-negativity constraint is unnatural for real-world ML practice and harms the utility of the classifier. For a more general classifier with categorical inputs, a provably optimal and domain-agnostic method for attack and adversarial risk evaluation is yet to be established: applying greedy search or the well-known Branch-and-Bound method to a general knapsack problem provides no optimality guarantee on the solutions, and can thus produce arbitrarily bad results. Our study aims to address these two questions from both theoretical and empirical perspectives. First, we derive an information-theoretic characterization of the adversarial risk of a classifier.
It unveils that the informativeness of the input categorical instance, the sensitivity of the perturbed categorical features, and the information-geometric properties of the targeted classifier are the three key factors jointly determining the adversarial vulnerability of the classifier. Second, our study adopts an assess-by-attack strategy. We show that assessing the adversarial robustness of any measurable classifier with categorical inputs can, under a mild smoothness condition, be cast as a weakly submodular maximization problem. It can thus be solved using a simple yet efficient greedy attack strategy with provable approximation guarantees. These theoretical findings not only explain the empirical success of greedy search strategies for generating adversarial textual and image samples (Gong et al., 2018; Yang et al., 2018; Narodytska & Kasiviswanathan, 2017), but also pave the way to a domain-agnostic adversarial robustness assessment on categorical data with provable optimality guarantees. Third, we instantiate the domain-agnostic adversarial risk characterization and assessment with a widely used DNN classifier, the Long Short-Term Memory (LSTM) network, and three categorical datasets in Section 4. These datasets are collected from diverse real-world applications. The experimental results confirm the impact of the three risk factors on the adversarial vulnerability of the DNN classifier.

2 RELATED WORK. Tremendous effort has been devoted to measuring the vulnerability of a classifier under evasion attack (Hein & Andriushchenko, 2017; Wang et al., 2018; Fawzi et al., 2016; Gilmer et al., 2018; Weng et al., 2019; Sinha et al., 2018; Cohen et al., 2019; Shi et al., 2020; Yin et al., 2019; Khim & Loh, 2018; Tu et al., 2019). Most previous works focus on evaluating robustness against ℓ_p-norm perturbations on continuous data. They all assume adversarial samples lie within a smooth ℓ_p-ball around an input instance, which does not hold for categorical data. In contrast, Tu et al. (2019) covers both numerical and categorical data. It bounds the adversarial risk with a local worst-case risk over a p-Wasserstein ball centered at the training data distribution, associating the adversarial risk of a classifier with its Rademacher complexity. Nevertheless, we argue that the adversarial vulnerability of a classifier is determined not only by the characteristics of the classifier, e.g., model complexity, but also by the properties of the training/testing data instances. Pioneering works of evasion attacks with categorical inputs depend on domain-specific knowledge to facilitate the attack exploration. Kuleshov et al. (2018); Papernot et al. (2016); Miyato et al. (2016); Samanta & Mehta (2017); Gao et al. (2018); Yang et al. (2018); Gong et al. (2018); Ebrahimi et al. (2018); Narodytska & Kasiviswanathan (2017); Croce & Hein (2019) focus on replacing individual words/phrases to cheat text classifiers, or on modifying pixel intensities to bias image classification results. These methods use heuristic semantic rules, e.g., replacing words with manually defined candidate synonyms and constraining the word changes to preserve readability and semantic integrity. Narodytska & Kasiviswanathan (2017); Croce & Hein (2019) narrow the search range to pixels with high pixel-wise sensitivity for image classification. Despite the sound empirical results, there is no guarantee of a successful attack within the attack budget. Bojchevski & Günnemann (2019); Zügner et al. (2019); Zügner & Günnemann (2018); Akbarnejad & Günnemann (2019) adopt edge-flipping and node attribute perturbation to poison graph data mining pipelines, e.g., graph neural networks and graph embedding models.
The key idea is to introduce relaxed surrogate functions for the combinatorial attack objective and then solve the relaxed optimization problem instead. Notably, Zügner et al. (2019); Zügner & Günnemann (2018); Akbarnejad & Günnemann (2019) define the attack objective as a sum of the smallest eigenvalues of the adjacency matrix of a given graph; though not explicitly claimed, this is intrinsically a submodular maximization problem. Qi et al. (2019); Wang et al. (2020a) unveil that simple greedy search can deliver provably effective attacks against DNN classifiers without domain-specific heuristics, if all the link weights between neurons are non-negative. Non-negativity of the parameters guarantees strict submodularity of the attack objective; however, it brings significant deterioration of the classifier's accuracy, which is not acceptable for real-world learning tasks.

3 ROBUSTNESS CHARACTERIZATION AND ASSESSMENT. We assume an input instance x = {x_1, x_2, x_3, ..., x_n} of n categorical attributes. Each x_i takes one of m (m ≥ 1) categorical values. The classifier f outputs decision probabilities f_{y_k} (k = 1, 2, 3, ..., K) with respect to the different class labels. In practice, each categorical value x_i^j is cast to a d-dimensional pre-trained embedding vector e_i^j ∈ R^d, j = 1, 2, ..., m. To represent an instance x with the embedding vectors of its category values, we define binary variables b = {b_i^j}, i = 1, 2, ..., n, j = 1, 2, ..., m, where b_i^j = 1 when the j-th attribute value is present for x_i and b_i^j = 0 otherwise. An instance x can then be represented by an R^{n×m×d} tensor with x_{i,j,:} = b_i^j e_i^j. Let b̂ = {b̂_i^j} indicate the adversarial modifications introduced into b; for a perturbed x̂, b̂ ≠ b. Depending on the type of attack to implement, e.g., insertion, deletion or substitution, b̂ differs from b in different ways.
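The indicator-times-embedding representation above can be sketched as follows (one embedding table of shape m × d per attribute is an assumption for brevity; the function name is illustrative):

```python
import numpy as np

def encode(x, embeddings):
    """Build the n x m x d tensor X with X[i, j, :] = b_i^j * e_i^j,
    where b_i^j = 1 iff attribute i currently takes its j-th value.
    `x[i]` is the value index j chosen by attribute i; `embeddings[i]`
    is the m x d embedding table of attribute i."""
    n = len(x)
    m, d = embeddings[0].shape
    X = np.zeros((n, m, d))
    for i, j in enumerate(x):
        X[i, j, :] = embeddings[i][j]   # all other rows stay zero (b_i^j = 0)
    return X
```

An attack then amounts to moving the single nonzero row of some attribute i from position j to another position j', i.e., flipping entries of b under the budget |diff(b, b̂)| ≤ ε.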
Without loss of generality, let y denote the true class label of x; all the other labels y_k (k = 1, ..., K−1) are the potential targets of an evasion attack. The goal of the attack is to make f_y(x, b̂) as low as possible while simultaneously making f_{y_k}(x, b̂) as high as possible, for a fixed k (targeted attack) or for any k in {1, ..., K−1} (non-targeted attack). We focus on the non-targeted evasion attack and leave the targeted scenario for future study.

3.1 INFORMATION-THEORETIC CHARACTERIZATION OF ADVERSARIAL VULNERABILITY.

Theorem 1. For any input instance (x, y) and a training set S sampled from the same underlying distribution μ_{x,y}, let f ∈ H be trained on S with a deterministic training paradigm. If the loss function ℓ in R_adv^ε is the zero-one loss, the expected adversarial risk R_adv^ε defined in Eq. (1) is bounded as

R_adv^ε ≥ 1 − (2 I(x; y, S) − 2 η_x − I(f_y; S)/2 + const) / log(2),   (2)

η_x = sup_{|diff(b, b̂)| ≤ ε} [ I(x; y, S) − I(x̂; y, S) ],   (3)

where const denotes a constant term, diff(b, b̂) indicates the set of categorical attributes modified in the attack, I(x; y, S) denotes the mutual information between the feature x and the pair of label y and training set S, η_x is the supremum of the difference between the mutual information before and after the adversarial perturbation, I(x; y, S) and I(x̂; y, S) respectively, and I(f_y; S) denotes the mutual information between S and f.

Theorem 1 unveils the three factors jointly determining the adversarial vulnerability of the targeted classifier f (answering Q1 in Section 1); the proof is given in Appendix A. In Eq. (2), a lower mutual information I(x; y, S) indicates higher adversarial risk, i.e., R_adv^ε has a higher lower bound. A lower I(x; y, S) denotes weaker consistency between the input instance (x, y) and the training set S; the classifier trained on S thus produces weaker decision confidence for such (x, y).
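Under the reconstruction of Theorem 1's bound adopted in this write-up (the exact constants in the extracted formula are uncertain), its qualitative behavior — decreasing in I(x; y, S), increasing in η_x and I(f_y; S) — can be checked numerically:

```python
import math

def risk_lower_bound(I_xyS, eta_x, I_fS, const=0.0):
    """Lower bound of Theorem 1, as reconstructed here (an assumption):
    R_adv >= 1 - (2*I(x;y,S) - 2*eta_x - I(f_y;S)/2 + const) / log 2."""
    return 1.0 - (2 * I_xyS - 2 * eta_x - I_fS / 2 + const) / math.log(2)
```

More informative inputs (larger I(x; y, S)) lower the bound, while higher perturbation sensitivity η_x and stronger classifier–training-set dependence I(f_y; S) raise it, matching the three risk factors discussed in the text.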
With (x, y) falling into the ambiguous zone near the classification boundary, the classifier is more prone to adversarial perturbation. A higher I(f_y; S) leads to a higher adversarial risk according to Eq. (2). The mutual information I(f_y; S) reflects the dependence between the classifier f and the training set S, and can be viewed as a lower bound of the VC-dimension for a countable hypothesis space f ∈ H (Xu & Raginsky, 2017; Zhu et al., 2020). A higher I(f_y; S) thus indicates that f is more likely to be overfitted to S, and model overfitting is one of the causes of adversarial vulnerability (Tu et al., 2019). Resonating with the association between I(f_y; S) and adversarial risk, three popular robustness enhancement methods that control the classifier's complexity and overfitting risk can potentially adjust the adversarial vulnerability of a classifier with categorical inputs. 1) Adversarial training (Miyato et al., 2016; Sinha et al., 2018; Florian et al., 2018; Wang et al., 2019; Shafahi et al., 2019): the adversarially retrained classifier f̂ is less correlated with the original training set S than f, which reduces I(f_y; S) and may help mitigate the adversarial threat. 2) Nuclear norm regularization over the classifier's parameters during training (Ravi et al., 2019; Tu et al., 2019): the regularized classifier has controlled model complexity, which can potentially reduce the adversarial risk. 3) Random smoothing (Lee et al., 2019; Levine & Feizi, 2020; Dvijotham et al., 2020; Boj. et al., 2020): following Cohen et al. (2019), this defense randomly selects and flips the input categorical features of the targeted classifier; the randomly perturbed classification output is less correlated with the distribution of the training data S, which may reduce the adversarial risk.
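The third defense — randomized smoothing for categorical inputs in the spirit of Cohen et al. (2019) — can be sketched as a majority vote over randomly resampled copies of the input (function names and rates below are illustrative assumptions, not the paper's implementation):

```python
import random
from collections import Counter

def smoothed_predict(f, x, vocab, flip_prob=0.2, n_samples=100, seed=0):
    """Majority vote of f over randomly perturbed copies of x: each
    categorical feature is independently resampled from its vocabulary
    with probability `flip_prob` before f is queried."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        x_hat = list(x)
        for i in range(len(x_hat)):
            if rng.random() < flip_prob:
                x_hat[i] = rng.choice(vocab[i])
        votes[f(tuple(x_hat))] += 1
    return votes.most_common(1)[0][0]
```

The random flips weaken the dependence of the smoothed output on any single categorical feature, at the cost of extra queries and some clean-accuracy loss as `flip_prob` grows.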
The paper proposes new methods to gauge model robustness to perturbations in categorical data. The experiments show the effectiveness of the new estimators and corroborate the intuition about model robustness vs. mutual information over features. ============================================= I acknowledge that I've read the author response. The response clarified my questions. Thanks for clarifying that the paper is not centered around a defense mechanism but rather robustness validation. That also resolved some of my doubts over its practical use. I'm raising my score to 8.
This paper studies the problem of assessing the adversarial robustness of a classifier with categorical inputs, instead of the continuous inputs considered in the literature. The authors claim that provable optimality guarantees exist for Lipschitz-continuous classifiers, and propose impact factors based on an information-theoretic analysis. Experimental studies are conducted to support the claims.
Towards Understanding the Robustness Against Evasion Attack on Categorical Data
1 INTRODUCTION . Categorical data pervasively exist in real-world safety-critical Machine-Learning-as-a-Service ( MLaaS ) applications , such as ML-driven intrusion detection and digital healthcare . The vulnerability to attacks by intentionally crafting categorical signatures raises concerns on trust and utility of the ML-based analytic services . Characterizing and assessing adversarial robustness on categorical data can thus help evaluate the reliability of the core ML models and flag potential evading efforts . For a classifier 5 with categorical inputs G , the adversarial risk of 5 under evasion attack can be formulated as follows . Definition 1 5 : G→ H denotes a classifier with categorical inputs G. Let ` G , H denote the joint distribution of ( G , H ) . The expected adversarial risk of the classifier 5 under evasion attack is formulated as : R03EY = E ( G , H ) ∼ ` G , H sup |diff ( G , Ĝ ) |≤Y ℓ ( 5 ( Ĝ ) , H ) ( 1 ) where ℓ is the misclassification loss function , G and Ĝ are an unperturbed and the generated adversarial sample respectively . Y denotes the attack budget of evasion attack . G is correctly classified ( 5 ( G ) = H ) . Intuitively , a classifier 5 is more robust against adversarial perturbations , if its adversarial risk is low given the attack budget limiting the number of changed categorical features . Unlike continuous data , a categorical variable can be valued with only one categorical value among others . These categorical values have no intrinsic ordering to the categories . Evasion attack manipulating categorical inputs is in nature an NP-hard knapsack problem . The discontinuous nature raises two fundamental yet rarely addressed questions to evaluate the adversarial risk on categorical data in practice : • Q1 What are the key factors determining 5 ’ s adversarial risk R03EY on categorical data ? • Q2 For a general classifier 5 , can we assess the adversarial risk of 5 with categorical inputs with provably accuracy guarantee ? 
Despite recent efforts of adversarial vulnerability exploration with discrete data , both questions remain open for several reasons . First , the discontinuity of categorical space prevents the direct use of the previous progress on adversarial risk analysis with continuous data . The local subspace assumption of ; ? -bounded adversarial attacks does not apply to the categorical features ( Hein & Andriushchenko , 2017 ; Wang et al. , 2018 ; Fawzi et al. , 2016 ; Gilmer et al. , 2018 ; Yin et al. , 2019 ; Khim & Loh , 2018 ; Tu et al. , 2019 ) . Second , most practices of discrete adversarial attack are domain specific and depend heavily on domain knowledge . ( Bojchevski & Günnemann , 2019 ; Bojchevski & Günnemann , 2019 ; Zugner & Gunnemann , 2019 ) focus on building differentiable surrogate functions to Graph Neural Networks to facilitate searching for feasible poisoning edits over graph structures and node attributes . ( Narodytska & Kasiviswanathan , 2017 ; Croce & Hein , 2019 ) conduct ! 0-norm perturbations only within local image areas containing sensitive features for image classification . ( Qi et al. , 2019 ; Wang et al. , 2020a ) require non-negativity on the parameters of deep neural networks to deliver provably accurate greedy attacks via submodular function optimization . The non-negativity constraint is unnatural for real-world ML practices . It does harm the utility of the classifier . For a more general classifier with categorical inputs , a provably optimal and domain-agnostic method for attack and adversarial risk evaluation is yet to establish . Using greedy search or the well-known Branch-and-Bound method on a general knapsack problem provides no optimality guarantee of the solutions , thus can produce arbitrarily bad results . Our study aims to address these two questions mentioned above from both theoretical and empirical perspectives . First , we derive an information-theoretic characterization of the adversarial risk of a classifier . 
It unveils that the informativeness of the input categorical instance, the sensitivity of the perturbed categorical features, and the information geometry of the targeted classifier are the three key factors jointly determining the adversarial vulnerability of the classifier. Second, our study adopts an assess-by-attack strategy. We show that assessing the adversarial robustness of any measurable classifier with categorical inputs can be cast as a weakly submodular maximization problem under a mild smoothness condition. It can thus be solved using a simple yet efficient greedy attack strategy with provable approximation guarantees. The theoretical findings not only explain the empirical success of greedy search strategies for generating adversarial textual and image samples (Gong et al., 2018; Yang et al., 2018; Narodytska & Kasiviswanathan, 2017), but also pave the way to a domain-agnostic adversarial robustness assessment on categorical data with provable optimality guarantees. Third, we instantiate the domain-agnostic adversarial risk characterization and assessment with a widely used DNN classifier, i.e., Long Short-Term Memory (LSTM), and three different categorical datasets in Section 4. These datasets are collected from various real-world applications. The experimental results confirm the impact of the three risk factors on the adversarial vulnerability of the DNN classifier. 2 RELATED WORK . Tremendous efforts have been made to measure the vulnerability of a classifier under evasion attack (Hein & Andriushchenko, 2017; Wang et al., 2018; Fawzi et al., 2016; Gilmer et al., 2018; Weng et al., 2019; Sinha et al., 2018; Cohen et al., 2019; Shi et al., 2020; Yin et al., 2019; Khim & Loh, 2018; Tu et al., 2019). Most of the previous works focus on evaluating robustness against ℓp-norm perturbations on continuous data. They all assume that adversarial samples lie within a smooth ℓp-ball around an input instance, which does not hold for categorical data. In contrast, Tu et al. (2019) covers both numerical and categorical data. It bounds the adversarial risk with a local worst-case risk over a p-Wasserstein ball centered at the training data distribution. This work associates the adversarial risk of a classifier with its Rademacher complexity. Nevertheless, we argue that the adversarial vulnerability of a classifier is determined not only by the characteristics of the classifier, e.g., model complexity, but also by the properties of the training/testing data instances. Pioneering works on evasion attacks with categorical inputs depend on domain-specific knowledge to facilitate the attack exploration. Kuleshov et al. (2018); Papernot et al. (2016); Miyato et al. (2016); Samanta & Mehta (2017); Gao et al. (2018); Yang et al. (2018); Gong et al. (2018); Ebrahimi et al. (2018); Narodytska & Kasiviswanathan (2017); Croce & Hein (2019) focus on replacing individual words/phrases to cheat text classifiers, or modifying pixel intensities to bias image classification results. These methods use heuristic semantic rules, e.g., replacing words with manually defined candidate synonyms and constraining the word change to preserve readability and semantic integrity. Narodytska & Kasiviswanathan (2017); Croce & Hein (2019) narrow down the search range to the pixels with high pixel-wise sensitivity for image classification. Despite the sound empirical results, there is no guarantee of a successful attack within the attack budget. Bojchevski & Günnemann (2019); Bojchevski & Günnemann (2019); D. Zugner et al. (2019); D. Zügner & Günnemann (2018); Akbarnejad & Günnemann (2019) adopt edge-flipping and node attribute perturbation to poison graph data mining pipelines, e.g., graph neural networks and graph embedding models.
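The greedy assess-by-attack idea referenced throughout this section can be sketched for a generic score-based classifier over categorical inputs. Everything below is illustrative rather than the paper's exact algorithm: the classifier interface `f` (returning confidence in the true label), the uniform per-attribute category count, and the budget semantics are all assumptions.

```python
def greedy_attack(f, x, num_values, budget):
    """Greedily substitute categorical attribute values to minimize f(x),
    the classifier's confidence in the true label.

    f          : callable mapping a tuple of category indices to a float
                 confidence score (hypothetical black-box interface)
    x          : list of current category indices, one per attribute
    num_values : number m of categories available for each attribute
    budget     : maximum number of attributes to modify (the budget epsilon)
    """
    x = list(x)
    modified = set()
    for _ in range(budget):
        base = f(tuple(x))
        best_drop, best_edit = 0.0, None
        for i in range(len(x)):
            if i in modified:            # each attribute is edited at most once
                continue
            for v in range(num_values):
                if v == x[i]:
                    continue
                cand = list(x)
                cand[i] = v
                drop = base - f(tuple(cand))
                if drop > best_drop:
                    best_drop, best_edit = drop, (i, v)
        if best_edit is None:            # no single edit lowers the score further
            break
        i, v = best_edit
        x[i] = v
        modified.add(i)
    return x
```

Against a classifier whose confidence decreases as attributes are flipped, the routine spends its budget on the most damaging single-attribute substitutions first, which is the behavior a (weak) submodularity argument certifies up to an approximation factor.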
The key idea is to introduce relaxed surrogate functions for the combinatorial attack objective and then solve the relaxed optimization problem instead. Notably, D. Zugner et al. (2019); D. Zügner & Günnemann (2018); Akbarnejad & Günnemann (2019) define the attack objective as a sum of the smallest eigenvalues of the adjacency matrix of a given graph. Though not explicitly claimed, this is intrinsically a submodular maximization problem. Qi et al. (2019); Wang et al. (2020a) unveil that simple greedy search can deliver provably effective attacks against DNN classifiers without domain-specific heuristics, if all the link weights between neurons are non-negative. Non-negativity of the parameters guarantees strict submodularity of the attack objective. However, it brings significant deterioration of the classifier's accuracy, which is not acceptable for real-world learning tasks. 3 ROBUSTNESS CHARACTERIZATION AND ASSESSMENT . We assume an input instance x = {x1, x2, x3, ..., xn} of n categorical attributes. Each xi takes one of m (m ≥ 1) categorical values. The classifier f outputs decision probabilities f_{y_k} (k = 1, 2, ..., K) with respect to the K class labels. In practice, each categorical value of xi is cast to a D-dimensional pre-trained embedding vector, e.g., e_i^j ∈ R^D, j = 1, 2, ..., m. To represent an instance x with the embedding vectors of its category values, we define binary variables b = {b_i^j}, i = 1, 2, ..., n, j = 1, 2, ..., m, where b_i^j = 1 when the j-th attribute value is present for xi and b_i^j = 0 otherwise. An instance x can then be represented by an R^{n×m×D} tensor with x_{i,j,:} = b_i^j e_i^j. Let b̂ = {b̂_i^j} indicate the adversarial modifications introduced into b. For a perturbed x̂, its b̂ ≠ b. Depending on the type of attack to implement, e.g., insertion, deletion or substitution, b̂ differs from b in different ways.
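The indicator-times-embedding representation above can be illustrated with a small pure-Python sketch. The sizes n, m, and the embedding dimension d, as well as the embedding values themselves, are arbitrary toy stand-ins for pre-trained vectors.

```python
# Toy sizes: n attributes, m categories per attribute, d-dim embeddings.
n, m, d = 3, 4, 2

# Pre-trained embeddings e_i^j (assumed given; deterministic toy values here).
E = [[[0.1 * (i + j + k + 1) for k in range(d)] for j in range(m)]
     for i in range(n)]

x = [2, 0, 3]  # category index chosen for each attribute

# Binary indicators b_i^j = 1 iff attribute i takes its j-th value.
b = [[1.0 if j == x[i] else 0.0 for j in range(m)] for i in range(n)]

# Instance tensor X[i][j] = b_i^j * e_i^j  (an n x m x d tensor).
X = [[[b[i][j] * E[i][j][k] for k in range(d)] for j in range(m)]
     for i in range(n)]
```

Only the slot corresponding to each attribute's chosen category carries its embedding; every other slot is zeroed out, which is exactly what the attack later toggles by editing b.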
Without loss of generality, let y denote the true class label of x; all the other y_k (k = 1, ..., K−1) are the potential targets of an evasion attack. The goal of the attack is to make f_y(x, b̂) as low as possible while simultaneously making f_{y_k}(x, b̂) as high as possible, for a fixed k (targeted attack) or for any k in {1, ..., K−1} (non-targeted attack). We focus on the non-targeted evasion attack and leave the targeted scenario for future study. 3.1 INFORMATION-THEORETIC CHARACTERIZATION OF ADVERSARIAL VULNERABILITY . Theorem 1 For any input instance (x, y) and a training set S sampled from the same underlying distribution μ_{x,y}, let f ∈ H be trained on S with a deterministic training paradigm. The expected adversarial risk R_adv^ε defined in Eq. 1 can be bounded as in Eq. 2, if the loss function ℓ in R_adv^ε adopts the zero-one loss: R_adv^ε ≥ 1 − ( 2I(x; y, S) − 2η_x − I(f_y; S)/2 + const ) / log(2), (2) with η_x = sup_{|diff(b, b̂)| ≤ ε} ( I(x; y, S) − I(x̂; y, S) ), (3) where const denotes a constant term. diff(b, b̂) indicates the set of categorical attributes modified in the attack. I(x; y, S) denotes the mutual information between the feature x and the pair of label y and training set S. η_x is the supremum of the difference between the mutual information before and after the adversarial perturbation, noted I(x; y, S) and I(x̂; y, S) respectively. I(f_y; S) denotes the mutual information between S and f. Theorem 1 unveils the three impact factors jointly determining the adversarial vulnerability of the targeted classifier f (answering Q1 in Section 1). The proof is given in Appendix A. In Eq. 2, a lower mutual information I(x; y, S) indicates a higher adversarial risk, i.e., R_adv^ε has a higher lower bound. A lower I(x; y, S) denotes weaker consistency between the input instance (x, y) and the training set S. The classifier trained on S thus produces weaker decision confidence for such (x, y).
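The mutual-information quantities in Theorem 1 can be approximated from samples when the variables are discrete. The plug-in estimator below (in nats) is a generic sketch for intuition, not the estimator used in the paper.

```python
import math
from collections import Counter

def empirical_mi(xs, ys):
    """Plug-in estimate of the mutual information I(X; Y), in nats,
    from paired samples of two discrete variables.

    I(X; Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    with all probabilities replaced by empirical frequencies.
    """
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    # (c/n) * log( (c/n) / ((px/n)(py/n)) ) simplifies to the form below.
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

Perfectly coupled binary variables give I = log 2 nats, while independent ones give I = 0, matching the intuition that a less informative (x, y) pairing raises the lower bound on adversarial risk.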
With (x, y) dropped into the ambiguous zone near the classification boundary, the classifier is more prone to adversarial perturbation. A higher I(f_y; S) leads to a higher adversarial risk according to Eq. 2. The mutual information I(f_y; S) reflects the dependence between the classifier f and the training set S, which can be considered a lower bound of the VC-dimension for a countable hypothesis space f ∈ H (Xu & Raginsky, 2017; Zhu et al., 2020). A higher I(f_y; S) thus denotes that f is more likely to be overfitted to S. Model overfitting is one of the causes of adversarial vulnerability (Tu et al., 2019). Resonating with the association between I(f_y; S) and adversarial risk, three popularly used robustness enhancement methods that control the classifier's complexity and overfitting risk can potentially adjust the adversarial vulnerability of a classifier with categorical inputs. 1) Adversarial training (Miyato et al., 2016; Sinha et al., 2018; Florian et al., 2018; Wang et al., 2019; Shafahi et al., 2019). The adversarially retrained classifier f̂ is less correlated with the original training set S than f, which reduces I(f_y; S) and may help mitigate the adversarial threat. 2) The addition of nuclear norm regularization over the classifier's parameters in the training process (Ravi et al., 2019; Tu et al., 2019). The resulting regularized classifier has a controlled model complexity, which can potentially reduce the adversarial risk. 3) Random smoothing (Lee et al., 2019; Levine & Feizi, 2020; Dvijotham et al., 2020; Boj. et al., 2020). Following (Cohen et al., 2019), this defense method randomly selects and flips the input categorical features of the targeted classifier. The randomly perturbed classification output is less correlated with the distribution of the training data S, which may reduce the adversarial risk.
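The random-smoothing defense in item 3 can be written down generically for categorical inputs: each feature is independently resampled with some probability, and the majority vote over many noisy copies is returned. The base-classifier interface `f`, the flip probability, and the sample count are illustrative assumptions, not the cited papers' exact constructions.

```python
import random

def smoothed_predict(f, x, num_values, flip_prob, num_samples=200, seed=0):
    """Majority-vote prediction over randomly flipped categorical inputs.

    f          : callable list-of-ints -> label (hypothetical base classifier)
    x          : list of category indices, one per attribute
    num_values : number of categories per attribute
    flip_prob  : probability of resampling each feature to a random category
    """
    rng = random.Random(seed)
    votes = {}
    for _ in range(num_samples):
        noisy = [rng.randrange(num_values) if rng.random() < flip_prob else xi
                 for xi in x]
        label = f(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Because the smoothed output depends on the noise distribution as much as on the exact input, single-feature adversarial edits have a diluted effect on the returned label.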
The paper studies the adversarial vulnerability of classifiers with categorical inputs. It theoretically analyzes the key factors that determine robustness on discrete input data and uses a greedy strategy to solve the assessment problem. The experiments verify the importance of the key factors identified by the theory, and robustness scoring is used to conduct experiments under three defense measures.
SP:1f2bba25f272c3c3ca348e31ce2b1702641320d0
Structured Stochastic Gradient MCMC
1 INTRODUCTION . There has been much recent interest in deep Bayesian neural networks (BNNs) due to their reliable confidence estimates and generalization properties (Wilson & Izmailov, 2020; Jospin et al., 2020; Cardelli et al., 2019). BNNs rely on ensemble averages over model parameters, typically obtained from Markov chain Monte Carlo (MCMC) algorithms, in contrast to regular neural networks, which depend on a single set of parameters. The sheer size of these models requires scalable MCMC approaches based on inexpensive stochastic gradients, of which stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are the gold standard (Li et al., 2016; Welling & Teh, 2011; Patterson & Teh, 2013). These algorithms owe their scalability to approximating gradients via mini-batching. The main downside of SGMCMC algorithms is their slow mixing rates in high dimensions. An often faster alternative is variational inference (VI), which approximates the posterior with a simpler (typically factorized) distribution. This formulation results in an optimization problem that can be solved more efficiently using stochastic optimization (Blei et al., 2017; Zhang et al., 2018). One downside of VI approximations is their strong distributional assumptions. A typical choice is to approximate the Bayesian posterior by a product of univariate Gaussian distributions. These distributional assumptions are frequently over-simplistic in high-dimensional models, where the posterior can be highly multi-modal and possibly heavy-tailed. Another downside is that the variational approximation typically underestimates the posterior variance, leading to poorly calibrated uncertainties and overfitting (Ormerod & Wand, 2010; Giordano et al., 2015; Zhang et al., 2018). In this work, we derive a fundamentally new SGMCMC approach that takes inspiration from structured VI.
While our approach remains a sampling algorithm resembling SGMCMC, we speed up mixing by systematically breaking posterior correlations. The resulting algorithm furthermore allows users to specify which posterior correlations to keep and which ones to break, and it makes no assumptions about the functional form of the approximate posterior. We call our approach structured SGMCMC since it relies on a structured (i.e., only partially factorized) variational approximation of the posterior (Wainwright & Jordan, 2008). In more detail, we derive the optimal variational distribution for a given posterior subject to factorization constraints by taking a functional view on variational inference. We show how to sample from this optimal distribution by running SGMCMC on a modified energy function. This energy function is obtained by marginalizing the model's joint distribution over previously generated samples from the Markov chain, leading to an approximate factorization over user-specified parameter groups. Further, we provide a more robust and computationally efficient approximation to the procedure that allows for interpolation between regular SGMCMC and our structured SGMCMC by taking inspiration from dropout techniques. Both methods are compatible with any Markovian SGMCMC algorithm, including Langevin dynamics and stochastic gradient Hamiltonian Monte Carlo. In sum, our contributions are as follows: • We propose a new approximate MCMC scheme running SGMCMC on a modified energy function, trading accuracy for speed. This setup effectively allows sampling from a fully joint posterior, a completely factorized posterior, and anything in-between. • We prove mathematically that the resulting scheme asymptotically generates samples from the best possible posterior approximation subject to user-specified factorization constraints between groups of parameters.
• We extend this scheme further by making it more scalable with a dropout-inspired approximation. This new scheme has a hyperparameter that enables a smooth interpolation between full SGMCMC and a ``mean-field'' version where all posterior correlations are broken. • We show in both small and large scale experiments that our method approximates posterior marginals well and gives improved results over SGMCMC on Resnet-20 architectures on CIFAR-10, Fashion MNIST, and SVHN in terms of both runtime and final accuracy. Our paper is structured as follows: Section 2 presents work related to our proposal, Section 3 introduces preliminaries regarding the energy function and the stochastic gradient updates, Sections 4 and 5 derive our proposed methods, Section 6 details the experiments and their results, and Section 7 contains our concluding thoughts. 2 RELATED WORK . Our work connects to both (stochastic) variational inference (Bishop, 2006; Hoffman et al., 2013; Ranganath et al., 2014; Blei et al., 2017; Zhang et al., 2018) and scalable MCMC (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2017; Zhang et al., 2020; Leimkuhler et al., 2019; Wenzel et al., 2020; Izmailov et al., 2021). Due to space limitations, we focus on the most related work at the intersection of both topics. Among the earliest works to hybridize both approaches was (de Freitas et al., 2001), which constructed a variational proposal distribution for the Metropolis-Hastings step of MCMC. An improved approach was introduced in (Habib & Barber, 2018), where, by introducing low-dimensional auxiliary variables, they fit a more accurate approximating distribution. Other related advances to MCMC methods were proposed by Levy et al. (2017), who developed a method to train MCMC kernels with NNs, and Wang et al. (2018); Gong et al. (2018), who leveraged meta learning schemes in SGMCMC methods.
Most recent work focuses on connections between VI and stochastic gradient-based MCMC, or between VI and stochastic gradient descent (SGD). For example, Mandt et al. (2016; 2017) and Duvenaud et al. (2016) consider SGD as a type of variational inference, but their approaches did not attempt to close the gap to exact MCMC. Other works aim at explicitly interpolating between both methods. Domke (2017) proposes a divergence bound for hybridizing VI and MCMC, essentially by running Langevin dynamics on a tempered evidence lower bound (ELBO). Salimans et al. (2015) embed MCMC steps into the variational inference approximation. Ahn et al. (2012) improve stochastic gradient Langevin dynamics by leveraging the central limit theorem and using the estimated inverse Fisher information matrix to sample from the approximate posterior distribution. Rezende & Mohamed (2015) interpreted the path of an MCMC algorithm as a variational distribution, and then fit parameters to tighten a variational bound. Recently, Hoffman & Ma (2020) interpreted (parametric) VI as approximate Langevin dynamics and showed that both algorithms have similar transient dynamics. In contrast to all these approaches, our method is inspired by coordinate ascent variational inference (Bishop, 2006) but uses Langevin updates to generate samples from a target distribution that respects an imposed independence structure. 3 PRELIMINARIES . Variational inference (VI) approaches differ from MCMC in two regards: (1) they impose a structured (e.g., fully-factorized) approximation of the posterior for tractability, and (2) they often make parametric assumptions. Is it possible to construct a modified scheme that relies only on assumption (1), inheriting the non-parametric nature of MCMC while breaking posterior correlations in a controlled manner? In what follows, we show how such a scheme can be realized.
We will first derive a modified energy function for Langevin dynamics that we can sample from, and then prove that its negative exponential yields the optimal posterior approximation subject to the specified factorization constraints. Running SGMCMC algorithms on this energy function will consequently generate samples from this distribution. Before we explain our new method, we introduce the setup and common notation. Given data D = {(xi, yi)} i=1,...,N, parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) = ∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ) p(θ). A convenient representation of the posterior is as a Boltzmann distribution: p(θ|D) ∝ exp{−U(θ)} where U(θ) = − log p(θ, D) = − ∑_{(x,y)∈D} log p(y|x, θ) − log p(θ). (1) U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant. A popular approach for approximating the entire posterior distribution is to deploy Markov chain Monte Carlo (MCMC) algorithms. These methods work by producing an empirical distribution of samples in parameter space, often through the use of a random walk. While being very accurate and having asymptotic guarantees, these methods are known not to scale well with respect to both data and parameters (Brooks et al., 2011; Geyer, 1992). Stochastic gradient MCMC (SGMCMC) is a class of scalable MCMC algorithms that can produce posterior samples through gradients on minibatches of data. These algorithms are largely derived from discretized approximations of continuous-time diffusion processes. Examples of these algorithms include stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), preconditioned SGLD (pSGLD) (Li et al., 2016), and stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al.
, 2014 ) . As alluded to, the basis of SGMCMC algorithms is using a sampled minibatch of data D̃ from D to produce a differentiable, unbiased estimate of the posterior energy function: U(θ) ≈ Û(θ; D̃) = −(N/|D̃|) ∑_{(x,y)∈D̃} log p(y|x, θ) − log p(θ). (2) Once Û is defined, it is fairly straightforward to generate new samples from the posterior distribution. For instance, with step size εt, the SGLD update is θ(t+1) = θ(t) − (εt/2) ∇θÛ(θ(t); D̃t) + ξt where ξt ∼ N(0, εtI). (3) Similar rules for pSGLD and SGHMC can be found in the Supplement. All of these update rules produce a chain of samples up to time step t that ultimately form an empirical distribution p̂(t)(θ|D). Should the algorithms converge, then limt→∞ p̂(t)(θ|D) = p(θ|D). 4 STRUCTURED SGMCMC . By design, SGMCMC methods produce a fully joint posterior distribution over the parameters θ. For models with a large number of parameters, this can lead to various complications due to the curse of dimensionality, typically observed as slow convergence times and potentially unexplored parameter spaces. A viable solution is to break dependencies in the posterior distribution by leveraging ideas commonly used in variational inference (VI). This reduces the number of potential posterior correlations that the model needs to capture while sampling. To achieve partial factorization, we must first partition θ into M > 1 distinct, mutually independent groups: θ1, . . . , θM. This partitioning structure is assumed to be known a priori. We denote the distribution that respects this partitioning structure as q(θ) = ∏M i=1 qi(θi). Similar to VI, we would like this distribution q(θ) to best approximate the true posterior distribution p(θ|D) according to some criterion, such as the KL-divergence.
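The SGLD update rule (3) is easy to state in code. The sketch below runs it on a toy one-dimensional target with energy U(θ) = θ²/2 (so the "posterior" is N(0, 1)), using the exact gradient in place of a minibatch estimate; the step size and chain length are arbitrary illustrative choices.

```python
import math
import random

def sgld_step(theta, grad_u_hat, step_size, rng):
    """One SGLD update: theta - (eps/2) * grad Û(theta) + N(0, eps)."""
    return (theta
            - 0.5 * step_size * grad_u_hat(theta)
            + rng.gauss(0.0, math.sqrt(step_size)))

# Toy target: U(theta) = theta^2 / 2, hence grad U(theta) = theta,
# standing in for the minibatch gradient of Eq. (2).
rng = random.Random(0)
theta = 0.0
samples = []
for t in range(20000):
    theta = sgld_step(theta, lambda th: th, 0.05, rng)
    if t > 2000:                      # discard burn-in
        samples.append(theta)
```

The empirical mean and variance of `samples` should be close to 0 and 1 respectively, up to the O(step size) discretization bias SGLD incurs without a Metropolis correction.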
This leads to a natural objective function to minimize: J(q(θ)) = DKL(q(θ) || p(θ|D)) ≡ Eθ∼q[ log q(θ) / p(θ|D) ] (4) The following Theorem 1 proves that there is a unique solution to the non-parametric KL minimization problem described in Eq. (4). To describe it, we compose θ = {θi, θ̃¬i} for any i, where θ̃ ∼ q, and define a structured energy function: U(S)(θ) = ∑M i=1 U(S)i(θi), with U(S)i(θi) := Eθ̃∼q U({θi, θ̃¬i}) := −Eθ̃∼q log p(θi, θ̃¬i, D). (5) That is, we first define the marginals U(S)i(θi), where we marginalize U(θ) with respect to all q(θ)-factors except qi(θi), and then sum up these marginals to define U(S)(θ). A similar partial marginalization procedure is carried out for conjugate exponential family distributions in coordinate ascent VI (Bishop, 2006). Having a well-defined energy function U(S) allows us to use standard SGMCMC methods to approximate the posterior q(θ) with samples. This serves as the basis for our proposed algorithm, discussed shortly, that approximates this distribution q(θ). Theorem 1. The unique solution to the KL minimization problem given in Eq. 4 is the Boltzmann distribution q(θ) ∝ exp{−∑M i=1 U(S)i(θi)}. Please refer to the Supplement for the proof. In an ideal world, we would be able to use the findings of Theorem 1 directly in conjunction with algorithms like Langevin dynamics and Hamiltonian Monte Carlo to produce empirical distributions for q using U(S) (Liu et al., 2019). However, this is intractable for two reasons: (1) these algorithms generally only work well with small amounts of data, and (2) more importantly, the marginals U(S)i(θi) do not have a closed-form solution but need to be approximated via samples from q.
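The marginalized energy in Eq. (5) can be approximated by averaging the joint energy over stored chain samples, swapping in the current value of one group at a time. The sketch below is generic: the grouping, the quadratic toy energy, and the two-sample "history" in the test are illustrative, not the paper's setup.

```python
def structured_energy(theta, history, groups, U):
    """Monte Carlo estimate of U^(S)(theta) = sum_i E_{q} U({theta_i, theta_-i}).

    theta   : current parameter vector (list of floats)
    history : list of previous chain samples, approximating draws from q
    groups  : list of index lists, one per mutually independent group
    U       : joint energy function taking a full parameter vector
    """
    total = 0.0
    for idx in groups:
        acc = 0.0
        for old in history:
            mixed = list(old)
            for j in idx:          # keep group i's coordinates from theta,
                mixed[j] = theta[j]  # take the rest from a q-sample
            acc += U(mixed)
        total += acc / len(history)
    return total
```

For a quadratic energy the estimate is exact arithmetic over the history, which makes the behavior easy to verify by hand.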
Luckily, since SGMCMC methods only need access to noisy estimates of U(S), we can run these algorithms on a stochastic estimate of Eq. (5): U(S)(θ) ≈ Û(S)(θ; D̃) = ∑M i=1 Eθ̃∼q Û({θi, θ̃¬i}; D̃), (6) where Û(·) is defined in Eq. (2). In practice, at timestep t, for i = 1, . . . , M we estimate Eθ̃∼q Û({θi, θ̃¬i}; D̃t) with a Monte Carlo approximation. In place of θ̃, we use a single sample θ̃(t) taken from the current approximate distribution q̂(t), which is composed of samples from previous timesteps (i.e., a uniform distribution over {θ(1), θ(2), . . . , θ(t)}). This leads to the following update step for structured SGLD (S-SGLD): θ(t+1) = θ(t) − (εt/2) ∇θÛ(S)(θ; D̃) + ξt where ξt ∼ N(0, εtI). (7) Similar rules for structured variants of pSGLD (S-pSGLD) and SGHMC (S-SGHMC) can be found in the Supplement. Additionally, the full procedure for structured SGMCMC (S-SGMCMC) is given in Algorithm 2. Remark Since ∇θÛ(S) is an unbiased estimator of ∇θU(S), we are guaranteed to converge to q from sampling with S-SGMCMC with sufficiently decreasing learning rates, so long as we are in a stationary state. While it is unlikely for the procedure to initialize in a stationary state, we observe in practice that our scheme both tends to converge towards and remain in a stationary state. A general proof of convergence is outside the scope of this work and is left to follow-up research. An example of S-SGMCMC can be seen in Fig. 1 (a-b), which features the approximate posterior distributions of a linear regression model with three coefficients and with various independence structures imposed with S-SGLD: (a) joint dependence between w1, w2, and w3; (b-left) dependence between w1 and w2 but independence between w3 and the other coefficients; (b-right) fully factorized.
Of note is that the bivariate posterior distributions appear to respect the imposed independence structure. Interestingly, it also appears that the variance shrinks as we induce these factorizations, which is a commonly seen artifact when using VI.
The author proposed a framework to incorporate independence structure into posterior inference for faster mixing. To achieve that, the author designed two algorithms, called S-SGMCMC and S_d-SGMCMC respectively. Specifically, S-SGMCMC consists of the following steps. First, the target random variables ($\theta$) are gathered into mutually independent groups. Then, a modified energy function is derived by minimizing the KL divergence between the posterior $q$ and the target $p(\theta|D)$. The last step is to apply a standard SGMCMC method to draw samples from the resulting modified energy function. Further, the author also built a connection between dropout and the modified energy function, which results in a structured dropout SGMCMC (S_d-SGMCMC) with better scalability. The author claimed the resulting algorithms achieve faster mixing and better classification accuracy when applied to real-world classification tasks.
SP:5d61f3e5e1e833d46a8a53322611f7c5d825e7dc
This leads to a natural objective function to minimize : J ( q ( θ ) ) = DKL ( q ( θ ) ||p ( θ|D ) ) ≡ Eθ∼q [ log q ( θ ) p ( θ|D ) ] ( 4 ) The following Theorem 1 proves that there is a unique solution to the non-parametric KL minimization problem described in Eq . ( 4 ) . To describe it , we compose θ = { θi , θ̃¬i } for any i where θ̃ ∼ q and define a structured energy function : U ( S ) ( θ ) = M∑ i=1 U ( S ) i ( θi ) , with U ( S ) i ( θi ) : = Eθ̃∼qU ( { θi , θ̃¬i } ) : = −Eθ̃∼q log p ( θi , θ̃¬i , D ) . ( 5 ) That is , we first define the marginals U ( S ) i ( θi ) , where we marginalize U ( θ ) with respect to all q ( θ ) -factors except qi ( θi ) , and then sum up these marginals to define U ( S ) ( θ ) . A similar partial marginalization procedure is carried out for conjugate exponential family distributions in coordinate ascent VI ( Bishop , 2006 ) . Having a well-defined energy function U ( S ) allows us to use standard SGMCMC methods to approximate the posterior q ( θ ) with samples . This serves as the basis for our proposed algorithm that actually approximates this distribution q ( θ ) , which will be discussed shortly . Theorem 1 . The unique solution to the KL minimization problem given in Eq . 4 is given by the Boltzmann distribution q ( θ ) ∝ exp { − ∑M i=1 U ( S ) i ( θi ) } . Please refer to the Supplement for the proof . In an ideal world , we would be able to use the findings of Theorem 1 directly in conjunction with algorithms like Langevin dynamics and Hamiltonian Monte Carlo to produce empirical distributions for q using U ( S ) ( Liu et al. , 2019 ) . However , this is intractable for two reasons : ( 1 ) these algorithms generally work only well with small amounts of data , and ( 2 ) more importantly , the marginals U ( S ) i ( θi ) do not have a closed-form solution but need to be approximated via samples from q. 
Luckily , since SGMCMC methods only need access to noisy estimates of U ( S ) , we can run these algorithms on a stochastic estimate of Eq . ( 5 ) , U ( S ) ( θ ) ≈ Û ( S ) ( θ ; D̃ ) = M∑ i=1 Eθ̃∼qÛ ( { θi , θ̃¬i } ; D̃ ) , ( 6 ) where Û ( · ) is defined in Eq . ( 2 ) . In practice , at timestep t for i = 1 , . . . , M we estimate Eθ̃∼qÛ ( { θi , θ̃¬i } ; D̃t ) with a Monte Carlo approximation . In place of θ̃ , we use a single sample of θ̃ ( t ) taken from the current approximate distribution q̂ ( t ) which is composed of samples from previous timesteps ( i.e. , a uniform distribution over { θ ( 1 ) , θ ( 2 ) , . . . , θ ( t ) } ) . This leads to the following update step for structured SGLD ( S-SGLD ) : θ ( t+1 ) = θ ( t ) − t 2 ∇θÛ ( S ) ( θ ; D̃ ) + ξt where ξt ∼ N ( 0 , tI ) . ( 7 ) Similar rules for structured variants of pSGLD ( S-pSGLD ) and SGHMC ( S-SGHMC ) can be found in the Supplement . Additionally , the full procedure for structured SGMCMC ( S-SGMCMC ) can be seen in Algorithm 2 . Remark Since ∇θÛ ( S ) is an unbiased estimator for U ( S ) , we are guaranteed to converge to q from sampling with S-SGMCMC with sufficiently decreasing learning rates so long as we are in a stationary state . While it is unlikely to have the procedure initialize to a stationary state , we observe in practice that our scheme both tends to converge towards and remain in a stationary state . A general proof of convergence is outside the scope of this work and is left to follow-up research . An example of S-SGMCMC can be seen in Fig . 1 ( a-b ) , which features the approximate posterior distributions of a linear regression model with three coefficients and with various independence structures imposed with S-SGLD : ( a ) joint dependence between w1 , w2 , and w3 ; ( b-left ) dependence between w1 and w2 but independence between w3 and the other coefficients ; ( b-right ) fully factorized . 
Of note is that the bivariate posterior distributions appear to respect the imposed independence structure . Interestingly , it also appears that the variance shrinks as we induce these factorizations which is a commonly seen artifact when using VI .
The paper proposes a new hybrid method between MCMC and VI. The main idea of the paper is the construction of a new energy function that allows significantly faster sampling compared to standard SGMCMC (stochastic gradient MCMC). An additional modification of the proposed algorithm adopts a dropout-inspired approximation that allows for even better scalability.
SP:5d61f3e5e1e833d46a8a53322611f7c5d825e7dc
Structured Stochastic Gradient MCMC
1 INTRODUCTION. There has been much recent interest in deep Bayesian neural networks (BNN) due to their reliable confidence estimates and generalization properties (Wilson & Izmailov, 2020; Jospin et al., 2020; Cardelli et al., 2019). In contrast to regular neural networks, which depend on a single set of parameters, BNNs rely on ensemble averages over model parameters typically obtained from Markov chain Monte Carlo (MCMC) algorithms. The sheer size of these models requires scalable MCMC approaches based on inexpensive stochastic gradients, of which stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are the gold standard (Li et al., 2016; Welling & Teh, 2011; Patterson & Teh, 2013). These algorithms owe their scalability to approximating gradients via mini-batching. The main downside of SGMCMC algorithms is their slow mixing rates in high dimensions. An often faster alternative is variational inference (VI), which approximates the posterior with a simpler (typically factorized) distribution. This formulation results in an optimization problem that can be solved more efficiently using stochastic optimization (Blei et al., 2017; Zhang et al., 2018). One downside of VI approximations is their strong distributional assumptions. A typical choice is to approximate the Bayesian posterior by a product of univariate Gaussian distributions. These assumptions are frequently overly simplistic in high-dimensional models, where the posterior can be highly multi-modal and possibly heavy-tailed. Another downside is that the variational approximation typically underestimates the posterior variance, leading to poorly calibrated uncertainties and overfitting (Ormerod & Wand, 2010; Giordano et al., 2015; Zhang et al., 2018). In this work, we derive a fundamentally new SGMCMC approach that takes inspiration from structured VI.
While our approach remains a sampling algorithm resembling SGMCMC, we speed up the mixing time by systematically breaking posterior correlations. The resulting algorithm furthermore allows users to specify which posterior correlations to keep and which ones to break. It makes no assumptions about the functional form of the approximate posterior. We call our approach structured SGMCMC since it relies on a structured (i.e., only partially factorized) variational approximation of the posterior (Wainwright & Jordan, 2008). In more detail, we derive the optimal variational distribution for a given posterior subject to factorization constraints by taking a functional view of variational inference. We show how to sample from this optimal distribution by running SGMCMC on a modified energy function. This energy function is obtained by marginalizing the model's joint distribution over previously generated samples from the Markov chain, leading to an approximate factorization over user-specified parameter groups. Further, we provide a more robust and computationally efficient approximation to the procedure that allows for interpolation between regular SGMCMC and our structured SGMCMC by taking inspiration from dropout techniques. Both methods are compatible with any Markovian SGMCMC algorithm, including Langevin dynamics and stochastic gradient Hamiltonian Monte Carlo. In sum, our contributions are as follows: • We propose a new approximate MCMC scheme running SGMCMC on a modified energy function, trading accuracy for speed. This setup effectively allows sampling from a fully joint posterior, a completely factorized posterior, and anything in between. • We prove mathematically that the resulting scheme asymptotically generates samples from the best possible posterior approximation subject to user-specified factorization constraints between groups of parameters.
• We extend this scheme further by making it more scalable with a dropout-inspired approximation. This new scheme has a hyperparameter that enables a smooth interpolation between full SGMCMC and a "mean-field" version where all posterior correlations are broken. • We show in both small and large scale experiments that our method approximates posterior marginals well and gives improved results over SGMCMC with Resnet-20 architectures on CIFAR-10, Fashion MNIST, and SVHN in terms of both runtime and final accuracy. Our paper is structured as follows: Section 2 presents work related to our proposal, Section 3 introduces preliminaries regarding the energy function and the stochastic gradient updates, Sections 4 and 5 derive our proposed methods, Section 6 details experiments and their results, and Section 7 contains our concluding thoughts. 2 RELATED WORK. Our work connects both to (stochastic) variational inference (Bishop, 2006; Hoffman et al., 2013; Ranganath et al., 2014; Blei et al., 2017; Zhang et al., 2018) and scalable MCMC (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2017; Zhang et al., 2020; Leimkuhler et al., 2019; Wenzel et al., 2020; Izmailov et al., 2021). Owing to space limitations, we focus on the most related work at the intersection of both topics. Among the earliest works to hybridize both approaches, de Freitas et al. (2001) constructed a variational proposal distribution in the Metropolis-Hastings step of MCMC. Habib & Barber (2018) improved on this approach by introducing low-dimensional auxiliary variables to fit a more accurate approximating distribution. Other related advances to MCMC methods were proposed by Levy et al. (2017), who developed a method to train MCMC kernels with NNs, and by Wang et al. (2018) and Gong et al. (2018), who leveraged meta-learning schemes in SGMCMC methods.
Most recent work focuses on connections between VI and stochastic gradient-based MCMC, or between VI and stochastic gradient descent (SGD). For example, Mandt et al. (2016; 2017) and Duvenaud et al. (2016) consider SGD as a type of variational inference, but their approaches did not attempt to close the gap to exact MCMC. Other works aim at explicitly interpolating between both methods. Domke (2017) proposes a divergence bound for hybridizing VI and MCMC, essentially by running Langevin dynamics on a tempered evidence lower bound (ELBO). Salimans et al. (2015) embed MCMC steps into the variational approximation. Ahn et al. (2012) improve stochastic gradient Langevin dynamics by leveraging the central limit theorem and using the estimated inverse Fisher information matrix to sample from the approximate posterior distribution. Rezende & Mohamed (2015) interpreted the path of an MCMC algorithm as a variational distribution, and then fit its parameters to tighten a variational bound. Recently, Hoffman & Ma (2020) interpreted (parametric) VI as approximate Langevin dynamics and showed that both algorithms have similar transient dynamics. In contrast to all these approaches, our method is inspired by coordinate ascent variational inference (Bishop, 2006) but uses Langevin updates to generate samples from a target distribution that respects an imposed independence structure. 3 PRELIMINARIES. Variational inference (VI) approaches differ from MCMC in two regards: (1) they impose a structured (e.g., fully factorized) approximation of the posterior for tractability, and (2) they often make parametric assumptions. Is it possible to construct a modified scheme that relies only on assumption (1), inheriting the non-parametric nature of MCMC while breaking posterior correlations in a controlled manner? In what follows, we show how such a scheme can be realized.
We will first derive a modified energy function for Langevin dynamics that we can sample from, and then prove that its negative exponential results in the optimal posterior approximation subject to specified factorization constraints. Running SGMCMC algorithms on this energy function will consequently generate samples from this distribution. Before we explain our new method, we introduce the setup and common notation. Given data D = {(x_i, y_i)}_{i=1,...,N}, parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) = ∏_{i=1}^N p(y_i|x_i, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ) p(θ). A convenient representation of the posterior is as a Boltzmann distribution: p(θ|D) ∝ exp{−U(θ)}, where U(θ) = −log p(θ, D) = −∑_{(x,y)∈D} log p(y|x, θ) − log p(θ). (1) U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant. A popular approach for approximating the entire posterior distribution is to deploy Markov chain Monte Carlo (MCMC) algorithms. These methods work by producing an empirical distribution of samples in parameter space, often through the use of a random walk. While very accurate and equipped with asymptotic guarantees, these methods are known not to scale well with respect to either data or parameters (Brooks et al., 2011; Geyer, 1992). Stochastic gradient MCMC (SGMCMC) is a class of scalable MCMC algorithms that can produce posterior samples through gradients on minibatches of data. These algorithms are largely derived from discretized approximations of continuous-time diffusion processes. Examples include stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), preconditioned SGLD (pSGLD) (Li et al., 2016), and stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al.
, 2014). As alluded to, the basis of SGMCMC algorithms is using a sampled minibatch D̃ of the data D to produce a differentiable, unbiased estimate of the posterior energy function: U(θ) ≈ Û(θ; D̃) = −(N/|D̃|) ∑_{(x,y)∈D̃} log p(y|x, θ) − log p(θ). (2) Once Û is defined, it is fairly straightforward to generate new samples from the posterior distribution. For instance, the SGLD update is θ^(t+1) = θ^(t) − (ε_t/2) ∇_θ Û(θ^(t); D̃_t) + ξ_t, where ξ_t ∼ N(0, ε_t I) and ε_t is the step size. (3) Similar rules for pSGLD and SGHMC can be found in the Supplement. All of these update rules produce a chain of samples up to time step t that ultimately form an empirical distribution p̂^(t)(θ|D). Should the algorithms converge, then lim_{t→∞} p̂^(t)(θ|D) = p(θ|D). 4 STRUCTURED SGMCMC. By design, SGMCMC methods produce a fully joint posterior distribution over the parameters θ. For models with a large number of parameters, this can lead to various complications due to the curse of dimensionality, typically observed as slow convergence times and potentially unexplored parameter spaces. A viable solution is to break dependencies in the posterior distribution by leveraging ideas commonly used in variational inference (VI). This reduces the number of potential posterior correlations that the model needs to capture while sampling. To achieve partial factorization, we must first partition θ into M > 1 distinct, mutually independent groups: θ_1, ..., θ_M. This partitioning structure is assumed to be known a priori. We will denote the distribution that respects this partitioning structure as q(θ) = ∏_{i=1}^M q_i(θ_i). Similar to VI, we would like this distribution q(θ) to best approximate the true posterior distribution p(θ|D) according to some criterion, such as the KL-divergence.
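To illustrate Eqs. (2)-(3), the following is a minimal SGLD sketch on a toy conjugate-Gaussian model. The model, data, step size, and chain length are our own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: infer the mean theta of Gaussian data y ~ N(theta, 1)
# under a N(0, 10^2) prior; the exact posterior is also Gaussian.
N = 1000
data = rng.normal(2.0, 1.0, size=N)

def grad_U_hat(theta, batch):
    """Unbiased minibatch estimate of grad U (Eq. 2): the minibatch
    log-likelihood gradient rescaled by N/|B|, plus the prior term."""
    grad_loglik = np.sum(batch - theta)       # d/dtheta of sum log N(y|theta, 1)
    grad_logprior = -theta / 10.0**2          # d/dtheta of log N(theta|0, 10^2)
    return -(N / len(batch)) * grad_loglik - grad_logprior

def sgld(steps=5000, batch_size=50, eps=1e-4):
    """SGLD chain following the update rule of Eq. (3) with fixed step size."""
    theta, samples = 0.0, []
    for _ in range(steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        noise = rng.normal(0.0, np.sqrt(eps))  # xi_t ~ N(0, eps)
        theta = theta - 0.5 * eps * grad_U_hat(theta, batch) + noise
        samples.append(theta)
    return np.array(samples)

samples = sgld()
# After burn-in, the chain mean should be close to the posterior mean,
# which here is approximately the data mean (~2.0).
posterior_mean_estimate = samples[2500:].mean()
```

With N = 1000 observations the posterior standard deviation is roughly 1/sqrt(N) ≈ 0.03, so the post-burn-in chain should concentrate tightly around the data mean.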
This leads to a natural objective function to minimize: J(q(θ)) = D_KL(q(θ) || p(θ|D)) ≡ E_{θ∼q}[log q(θ) − log p(θ|D)]. (4) The following Theorem 1 proves that there is a unique solution to the non-parametric KL minimization problem described in Eq. (4). To describe it, we compose θ = {θ_i, θ̃_¬i} for any i, where θ̃ ∼ q, and define a structured energy function: U^(S)(θ) = ∑_{i=1}^M U_i^(S)(θ_i), with U_i^(S)(θ_i) := E_{θ̃∼q} U({θ_i, θ̃_¬i}) = −E_{θ̃∼q} log p(θ_i, θ̃_¬i, D). (5) That is, we first define the marginals U_i^(S)(θ_i), where we marginalize U(θ) with respect to all q(θ)-factors except q_i(θ_i), and then sum up these marginals to define U^(S)(θ). A similar partial marginalization procedure is carried out for conjugate exponential family distributions in coordinate ascent VI (Bishop, 2006). Having a well-defined energy function U^(S) allows us to use standard SGMCMC methods to approximate the posterior q(θ) with samples. This serves as the basis for our proposed algorithm, discussed shortly, that approximates this distribution q(θ). Theorem 1. The unique solution to the KL minimization problem given in Eq. (4) is the Boltzmann distribution q(θ) ∝ exp{−∑_{i=1}^M U_i^(S)(θ_i)}. Please refer to the Supplement for the proof. In an ideal world, we would be able to use the findings of Theorem 1 directly in conjunction with algorithms like Langevin dynamics and Hamiltonian Monte Carlo to produce empirical distributions for q using U^(S) (Liu et al., 2019). However, this is intractable for two reasons: (1) these algorithms generally work well only with small amounts of data, and (2) more importantly, the marginals U_i^(S)(θ_i) do not have a closed-form solution but need to be approximated via samples from q.
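In special cases the structured energy of Eq. (5) is available in closed form, which makes Theorem 1 concrete. The following worked example is ours, not from the paper: take a Gaussian posterior p(θ|D) = N(μ, Λ⁻¹) with precision matrix Λ and a fully factorized partition (M = d, one group per coordinate). Then

```latex
\[
U(\theta) = \tfrac{1}{2}(\theta - \mu)^{\top} \Lambda\, (\theta - \mu) + \text{const}
\;\Longrightarrow\;
U_i^{(S)}(\theta_i) = \tfrac{1}{2}\Lambda_{ii}\,\theta_i^{2}
 - \theta_i \Big( \Lambda_{ii}\mu_i - \textstyle\sum_{j \ne i} \Lambda_{ij}\big(\mathbb{E}_q[\theta_j] - \mu_j\big) \Big) + \text{const},
\]
so by Theorem~1 each optimal factor is Gaussian with precision $\Lambda_{ii}$:
\[
q_i(\theta_i) = \mathcal{N}\!\big(\theta_i \,\big|\, m_i,\ \Lambda_{ii}^{-1}\big),
\qquad
\operatorname{Var}_{q_i}(\theta_i) = \Lambda_{ii}^{-1} \le \Sigma_{ii}.
\]
```

This recovers the classical mean-field result for Gaussians (Bishop, 2006): each factor's precision is the diagonal entry Λ_ii, so its variance 1/Λ_ii is no larger than the true marginal variance Σ_ii, i.e., the approximation underestimates marginal variance whenever correlations are broken.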
Luckily, since SGMCMC methods only need access to noisy estimates of U^(S), we can run these algorithms on a stochastic estimate of Eq. (5): U^(S)(θ) ≈ Û^(S)(θ; D̃) = ∑_{i=1}^M E_{θ̃∼q} Û({θ_i, θ̃_¬i}; D̃), (6) where Û(·) is defined in Eq. (2). In practice, at timestep t, for i = 1, ..., M we estimate E_{θ̃∼q} Û({θ_i, θ̃_¬i}; D̃_t) with a Monte Carlo approximation. In place of θ̃, we use a single sample θ̃^(t) taken from the current approximate distribution q̂^(t), which is composed of samples from previous timesteps (i.e., a uniform distribution over {θ^(1), θ^(2), ..., θ^(t)}). This leads to the following update step for structured SGLD (S-SGLD): θ^(t+1) = θ^(t) − (ε_t/2) ∇_θ Û^(S)(θ^(t); D̃_t) + ξ_t, where ξ_t ∼ N(0, ε_t I). (7) Similar rules for structured variants of pSGLD (S-pSGLD) and SGHMC (S-SGHMC) can be found in the Supplement. Additionally, the full procedure for structured SGMCMC (S-SGMCMC) can be seen in Algorithm 2. Remark. Since ∇_θ Û^(S) is an unbiased estimator of ∇_θ U^(S), sampling with S-SGMCMC under sufficiently decreasing learning rates is guaranteed to converge to q so long as the chain is in a stationary state. While the procedure is unlikely to be initialized in a stationary state, we observe in practice that our scheme both tends to converge towards and remain in a stationary state. A general proof of convergence is outside the scope of this work and is left to follow-up research. An example of S-SGMCMC can be seen in Fig. 1(a-b), which features the approximate posterior distributions of a linear regression model with three coefficients and with various independence structures imposed with S-SGLD: (a) joint dependence between w_1, w_2, and w_3; (b-left) dependence between w_1 and w_2 but independence between w_3 and the other coefficients; (b-right) fully factorized.
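To make Eqs. (6)-(7) concrete, here is a minimal S-SGLD sketch on a toy correlated 2-D Gaussian target with a fully factorized partition. This is our own illustration, not the paper's implementation: we use full-batch gradients (so Û = U), a single Monte Carlo draw θ̃ from the sample history, and arbitrary constants:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: zero-mean bivariate Gaussian with correlation rho,
# so U(theta) = 0.5 * theta^T Lambda theta with precision matrix Lambda.
rho = 0.8
Lambda = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))

def grad_U(theta):
    return Lambda @ theta

def s_sgld(steps=20_000, eps=0.05):
    theta = np.zeros(2)
    history = [theta.copy()]          # empirical q_hat^(t)
    for _ in range(steps):
        tilde = history[rng.integers(len(history))]   # single draw theta~ ~ q_hat^(t)
        grad = np.empty(2)
        for i in range(2):            # block-wise gradient of U^(S), per Eq. (6)
            mixed = tilde.copy()
            mixed[i] = theta[i]       # compose {theta_i, tilde_theta_{not-i}}
            grad[i] = grad_U(mixed)[i]
        # S-SGLD update, Eq. (7)
        theta = theta - 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps), size=2)
        history.append(theta.copy())
    return np.array(history[steps // 2:])             # discard burn-in

samples = s_sgld()
cross_corr = np.corrcoef(samples.T)[0, 1]
# The imposed factorization should drive the cross-correlation toward zero,
# with each marginal variance near the mean-field value 1/Lambda_ii = 1 - rho^2.
```

Consistent with Fig. 1 and the variance-shrinkage remark below it, the sampled marginals concentrate more tightly (variance about 1 − ρ² = 0.36) than the true marginals (variance 1).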
Of note is that the bivariate posterior distributions appear to respect the imposed independence structure. Interestingly, the variance also appears to shrink as we induce these factorizations, which is a commonly observed artifact of VI.
This work proposes using a structured variational approximation for stochastic gradient Markov chain Monte Carlo. This allows the user to choose a factorization for the variational distribution (which factorization is best is unclear; several are studied). Analogously to coordinate ascent variational inference, the authors show that the best approximation is the Boltzmann distribution whose energy marginalizes over the complements of every parameter group. However, this structured approximation is computationally expensive, requiring as many evaluations of the approximation as there are parameter groups. This computational burden is alleviated by a dropout scheme: instead of sampling from every parameter group, parameters are masked using a dropout distribution, and the number of stochastic masks is a hyperparameter that controls regularization and fidelity to the structure imposed by the factorization. Experiments show that this is a viable way to impose structure on a variational distribution, and that mixing times are improved.
SP:5d61f3e5e1e833d46a8a53322611f7c5d825e7dc
Offline Reinforcement Learning with Resource Constrained Online Deployment
1 INTRODUCTION. There have been many recent successes in the field of Reinforcement Learning (Mnih et al., 2013; Lillicrap et al., 2015; Mnih et al., 2016; Silver et al., 2016; Henderson et al., 2018). In the online RL setting, an agent takes actions, observes the outcome from the environment, and updates its policy based on the outcome. This repeated access to the environment is not feasible in practical applications; it may be unsafe to interact with the actual environment, and a high-fidelity simulator may be costly to build. Instead, offline RL consumes fixed training data consisting of recorded interactions between one (or more) agent(s) and the environment to train a policy (Levine et al., 2020). An agent with the trained policy is then deployed in the environment without further evaluation or modification. Notice that in offline RL, the deployed agent must consume data in the same format (for example, having the same features) as in the training data. This is a crippling restriction in many large-scale applications where, due to some combination of resource/system constraints, not all of the features used for training can be observed by the agent (or they may be misspecified) during online operation. In this work, we lay the foundations for studying this resource-constrained setting for offline RL. We then provide an algorithm that improves performance by transferring information from the full-featured offline training set to the deployed agent's policy acting on limited features. We first illustrate a few practical cases where resource-constrained settings emerge. System Latency: A deployed agent is often constrained by how much time it has to process the state of the environment and make a decision. For example, in a customer-facing web application, the customer will start to lose interest within a fraction of a second.
Given this constraint, the agent may not be able to fully process more than a few measurements from the customer before making a decision. This is in contrast to the process of recording the training data for offline RL, where one may take sufficient time to generate an abundance of features by post-processing high-dimensional measurements. Power Constraints: Consider a situation where an RL agent is used in deep space probes or nano-satellites (Deshmukh et al., 2018). In this case, the RL agent is trained on Earth with rich features and a large amount of sensory information, but when the agent is deployed on these probes, the number of sensors is limited by power and space constraints. Similarly, consider a robot deployed in a real-world environment. The limited compute power of the robot prevents it from using powerful feature extractors while making a decision. However, such powerful feature extractors can be used during the offline training of the robot (Fig 1a). In the resource-constrained setting, one can simply ignore the offline features and only train the offline agent with the online features that are available during deployment. This strategy has the drawback of not utilizing all of the information available during training and can lead to a sub-optimal policy. To confirm this, we performed the following simple experiment. We consider an offline RL dataset for the OpenAI Gym MuJoCo HalfCheetah-v2 environment and simulate the resource-constrained setting by removing a fixed set of randomly selected features during deployment (see Sections 5.1.1 and C.1 for more details). We train an offline RL algorithm, TD3+BC (Fujimoto & Gu, 2021), using only the online features, and collect online data in the environment using the trained policy.
We repeat this assuming all features are available during deployment: we train a TD3+BC agent using the same offline dataset with all features and collect online data in the environment. We plot the histograms of rewards in the two datasets in Fig 1b. We observe that the agent trained only with online features obtains a much smaller reward than the agent trained with offline features. Traditionally, scenarios where the observability of the state of the system is limited are studied under the Partially Observable Markov Decision Process (POMDP) setting by assuming a belief over the observations (Åström, 1965). In contrast, we have an offline dataset (which records rich, but not necessarily full, state transitions) along with partially obscured (with respect to the offline dataset) observations online. Our goal is to leverage the offline dataset to reduce the performance gap caused by the introduction of resource constraints. Towards this, we advocate using a teacher-student transfer algorithm. Our main contributions are summarized below: • We identify a key challenge in offline RL: in the resource-constrained setting, datasets with rich features cannot be effectively utilized when only a limited number of features are observable during online operation. • We propose a transfer approach that trains an agent to efficiently leverage the offline dataset while only observing the limited features during deployment. • We evaluate our approach on a diverse set of tasks, showing the applicability of the transfer algorithm. We also highlight that when the behavior policy used by the data-collecting agent is trained using a limited number of features, the quality of the dataset suffers. We propose a data collection procedure (RC-D4RL) to simulate this effect. 2 RESOURCE-CONSTRAINED ONLINE SYSTEMS.
In the standard RL framework, we consider a Markov Decision Process (MDP) defined by the tuple (S, A, R, P, γ), where S is the state space, A is the action space, R : S × A → ℝ is the reward function, P : S × A → ∆(S) is the transition function, ∆(S) denotes the set of all probability distributions over S, and γ ∈ (0, 1) is the discount factor. We consider the discounted infinite-horizon MDP in this paper. We consider the continuous control setting and assume that both S and A are compact subsets of a real-valued vector space. The transition at time t is given by the tuple (s_t, a_t, R(s_t, a_t), s_{t+1}). Each policy π : S → ∆(A) has a value function Q^π : S × A → ℝ that estimates the expected discounted reward for taking action a in state s and following the policy π thereafter. The goal of the agent is to learn the policy π that maximizes the expected discounted reward E_π[∑_{t=0}^∞ γ^t R(s_t, a_t)]. In online RL, this problem is solved by interacting with the environment. In offline (or batch) RL (Lange et al., 2012), instead of having access to the environment, the agent is provided with a finite dataset of trajectories or transitions, denoted by D = {(s_i, a_i, r_i, s'_i)}_{i=1}^N. The data is collected by one or many behavior policies that induce a distribution µ on the space S × A. The goal of the agent is to learn a policy using the finite dataset that maximizes the expected discounted reward when deployed in the environment. In the resource-constrained setting, the agent does not have access to the full state space or features during deployment. Instead, the agent can only observe from Ŝ (another bounded subset of the real-valued vector space) that is different from S. It is assumed that the space S is rich in information compared to Ŝ. For example, Ŝ might have fewer dimensions, or some entries may include extra noise (see Figure 1a).
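One simple way to simulate such an Ŝ from logged offline states is to fix a random subset of feature indices a priori and project every state onto it, mirroring the HalfCheetah experiment above. The helper names, dimensions, and index choices below are our own illustrative assumptions, not the paper's code:

```python
import numpy as np

def make_feature_mask(state_dim, n_keep, seed=0):
    """Fix, a priori, which feature indices remain observable online."""
    idx = np.random.default_rng(seed).permutation(state_dim)[:n_keep]
    return np.sort(idx)

def to_online_obs(state, mask):
    """Project a rich offline state (in S) onto the limited online space S-hat."""
    return np.asarray(state)[..., mask]

# Example: a 17-D HalfCheetah-like state with only 10 features observable online.
mask = make_feature_mask(state_dim=17, n_keep=10)
offline_batch = np.random.default_rng(1).normal(size=(32, 17))  # stand-in for logged states
online_batch = to_online_obs(offline_batch, mask)
print(online_batch.shape)  # → (32, 10)
```

Because the mask is drawn once from a fixed seed, the same features are dropped for every transition, matching the "fixed set of randomly selected features" used to build the resource-constrained datasets.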
We will use online/limited features to refer to observations from the online space Ŝ, and offline/rich features to refer to observations from the offline space S. We assume that both online features and offline features are available in the offline data. The goal of the agent is to use the offline data to train a policy π : Ŝ → ∆(A). The agent can use the offline features from S during training but is constrained to only use the online features from Ŝ while making a decision. A similar paradigm, Learning Under Privileged Information (LUPI) (Vapnik et al., 2015), has been studied in the supervised learning setting, where the privileged information is provided by a knowledgeable teacher.
3 RELATED WORK.
Offline RL There has been an increasing interest in studying offline RL algorithms due to their practical advantages over online RL algorithms (Agarwal et al., 2020; Wu et al., 2021; Chen et al., 2021; Brandfonbrener et al., 2021). Offline RL algorithms typically suffer from overestimation of the value function as well as distribution shift between the offline data and on-policy data. Buckman et al. (2020) and Kumar et al. (2020) advocate a pessimistic approach to value function estimation to avoid over-estimation of rarely observed state-action pairs. To constrain the on-policy data to be closer to the offline data, several techniques have been explored, such as restricting the actions inside the expectation in the evaluation step to be close to the actions observed in the dataset (Fujimoto et al., 2019), adding a regularization term during policy evaluation or iteration (Kostrikov et al., 2021; Wu et al., 2019; Guo et al., 2020), adding a constraint of the form MMD(µ(·|s), π(s)) (Gretton et al., 2012; Blanchard et al., 2021; Deshmukh et al., 2019) on the policy (Kumar et al., 2019), using behavior cloning (Fujimoto & Gu, 2021), adding an entropy term to the value function estimation (Wu et al., 2019), and model-based approaches that learn a pessimistic MDP (Kidambi et al., 2020). A thorough review of these techniques is presented in an excellent tutorial by Levine et al. (2020). To the best of our knowledge, there is no existing work that addresses the resource-constrained offline RL setting where there is a mismatch between the offline features and online features.
Knowledge Transfer Knowledge transfer/distillation is widely studied in various settings including the vision, language, and RL domains (Gou et al., 2021; Wang & Yoon, 2021). In RL, under the domain transfer setting (Taylor & Stone, 2009; Liu et al., 2016), the teacher is trained on one domain/task and the student needs to perform on a different domain/task (Konidaris & Barto, 2006; Perkins et al., 1999; Torrey et al., 2005; Gupta et al., 2017). Li et al. (2019) train a model so that features from different domains have similar embeddings, and Kamienny et al. (2020) perturb the features using random noise centered at the privileged information. An offline RL algorithm for domain transfer has been proposed by Cang et al. (2021). Policy distillation is studied in the setting where the knowledge from a trained policy (teacher) is imparted to an untrained network (student) (Rusu et al., 2015; Czarnecki et al., 2019). This leads to several advantages, such as model compression and the ability to learn from an ensemble of trained policies to improve performance (Zhu et al., 2020). One distinguishing feature of the resource-constrained setting that differentiates it from other transfer settings is that the teacher has access to the privileged information and the student needs to adapt from the data available without interactive learning.
In most of the existing approaches, the difference between teacher and student was either the network size (which is also present in our setting due to the difference in input features) or the dynamics (as in the domain transfer case). To the best of our knowledge, we are the first to study policy distillation in the offline RL framework under the resource-constrained setting. Another interesting line of work is Sim2Real (Lee et al., 2021; Traoré et al., 2019), in which a model is trained using a simulator and the knowledge is then transferred to real data. However, this line of work requires an accurate simulator, which results in a fairly expert teacher model; in the offline RL setting, by contrast, the teacher itself might be weak depending on the data quality.
Partially Observable MDP The POMDP generalizes the MDP framework to the case where the agent does not have access to the full features and only partially observes the state space (Åström, 1965; Kaelbling et al., 1998; Ortner et al., 2012). More recently, Rafailov et al. (2021) studied a model-based offline RL algorithm for image data under the POMDP setup. Our setting resembles this setup, but our agent also has access to the full privileged features in the offline dataset while training. This availability of an offline dataset with privileged information differentiates our setting and enables the student to inherit knowledge from the rich space while only using the limited features during deployment.
This paper considers a novel offline RL setting with resource-constrained online deployment. In that setting, the agent has access to less information about the state during the online deployment phase than is available in the offline dataset. It proposes a two-stage training strategy under this setting: first training a teacher agent with full features, and then training a student agent with a regularisation term to mimic the teacher agent. Experiments are carried out on three D4RL environments with a variety of configurations of the data collection protocol, behaviour policy, and feature constraints. Results show the advantage of the proposed distilled policy over a simple baseline.
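The two-stage strategy described in the summary can be sketched as a distillation-regularised student objective. The following is a hypothetical NumPy illustration (the names, the toy linear policies, and the behaviour-cloning stand-in for the base offline RL loss are all assumptions, not the paper's exact objective): the teacher π_t acts on rich features s, the student π_s acts only on limited features ŝ, and the student loss adds a term pulling its actions toward the teacher's.

```python
import numpy as np

def student_loss(pi_s, pi_t, s, s_hat, a, lam=1.0):
    """Hypothetical student objective: behaviour cloning of the logged actions
    plus a distillation term toward the teacher's actions on rich features."""
    bc = np.mean((pi_s(s_hat) - a) ** 2)             # fit the dataset actions
    distill = np.mean((pi_s(s_hat) - pi_t(s)) ** 2)  # mimic the teacher
    return bc + lam * distill

# Toy linear policies: the teacher sees 4 features, the student only the first 2.
W_t = np.array([[0.5], [0.5], [0.1], [0.1]])
W_s = np.array([[0.7], [0.3]])
teacher = lambda s: s @ W_t
student = lambda s_hat: s_hat @ W_s

s = np.ones((8, 4))      # rich offline features
s_hat = s[:, :2]         # limited online features
a = np.ones((8, 1))      # logged actions
print(float(student_loss(student, teacher, s, s_hat, a, lam=0.5)))
```

With these toy weights the behaviour-cloning term is zero (the student reproduces the logged actions exactly) and only the distillation term contributes, which makes the role of λ easy to see.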
SP:7de556908b47e1aa105d05adaa8b77b3ab4cb08f
Offline Reinforcement Learning with Resource Constrained Online Deployment
1 INTRODUCTION.
There have been many recent successes in the field of Reinforcement Learning (Mnih et al., 2013; Lillicrap et al., 2015; Mnih et al., 2016; Silver et al., 2016; Henderson et al., 2018). In the online RL setting, an agent takes actions, observes the outcome from the environment, and updates its policy based on the outcome. This repeated access to the environment is not feasible in many practical applications; it may be unsafe to interact with the actual environment, and a high-fidelity simulator may be costly to build. Instead, offline RL consumes fixed training data, consisting of recorded interactions between one (or more) agent(s) and the environment, to train a policy (Levine et al., 2020). An agent with the trained policy is then deployed in the environment without further evaluation or modification. Notice that in offline RL, the deployed agent must consume data in the same format (for example, having the same features) as in the training data. This is a crippling restriction in many large-scale applications, where, due to some combination of resource/system constraints, not all of the features used for training can be observed (or they may be misspecified) by the agent during online operation. In this work, we lay the foundations for studying this resource-constrained setting for offline RL. We then provide an algorithm that improves performance by transferring information from the full-featured offline training set to the deployed agent's policy acting on limited features. We first illustrate a few practical cases where resource-constrained settings emerge.
System Latency A deployed agent is often constrained by how much time it has to process the state of the environment and make a decision. For example, in a customer-facing web application, the customer will start to lose interest within a fraction of a second.
Given this constraint, the agent may not be able to fully process more than a few measurements from the customer before making a decision. This is in contrast to the process of recording the training data for offline RL, where one may take sufficient time to generate an abundance of features by post-processing high-dimensional measurements.
Power Constraints Consider a situation where an RL agent is used in deep space probes or nano-satellites (Deshmukh et al., 2018). In this case the agent is trained on Earth with rich features and a large amount of sensory information, but when it is deployed on these probes, the number of sensors is limited by power and space constraints. Similarly, consider a robot deployed in a real-world environment. The limited compute power of the robot prevents it from using powerful feature extractors while making a decision. However, such powerful feature extractors can be used during the offline training of the robot (Fig 1a). In the resource-constrained setting, one can simply ignore the offline features and train the offline agent only with the online features that are available during deployment. This strategy has the drawback of not utilizing all of the information available during training and can lead to a sub-optimal policy. To confirm this, we performed the following simple experiment. We consider an offline RL dataset for the OpenAI gym MuJoCo HalfCheetah-v2 environment and simulate the resource-constrained setting by removing a fixed set of randomly selected features during deployment (see Sections 5.1.1, C.1 for more details). We train an offline RL algorithm, TD3+BC (Fujimoto & Gu, 2021), using only the online features and collect online data in the environment using the trained policy.
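The masking experiment above can be simulated in a few lines: fix a random subset of observation indices that remain visible online, and project each rich state onto them. This is a hypothetical sketch, not the paper's exact protocol (the index choice, the seed, and the number of kept features are assumptions; 17 is the HalfCheetah-v2 observation dimension):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_online_mask(state_dim, n_keep):
    """Fix a random subset of feature indices that remain observable online."""
    keep = rng.choice(state_dim, size=n_keep, replace=False)
    mask = np.zeros(state_dim, dtype=bool)
    mask[keep] = True
    return mask

def to_online(state, mask):
    """Project a rich offline state onto the limited online feature set."""
    return state[mask]

state_dim = 17            # HalfCheetah-v2 observation dimension
mask = make_online_mask(state_dim, n_keep=11)
s = rng.normal(size=state_dim)   # a rich offline observation
s_hat = to_online(s, mask)       # the limited online observation
print(s_hat.shape)               # (11,)
```

Keeping the mask fixed across the whole run matters: the online agent must face the same missing features at every decision step, mirroring a hardware constraint rather than random dropout.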
This paper presents a new, under-explored problem in offline RL: the situation where some features that were present in the offline dataset are missing during online deployment. The authors motivate this problem by challenges in real applications. They demonstrate that a straightforward approach of training an offline policy on a set of restricted features suffers from a loss in performance. Then, they propose an extension of an existing offline RL algorithm to distil the teacher offline policy, which has access to all features, into a student policy, which has access to a limited feature set. Finally, the authors conduct a set of experiments on three MuJoCo control tasks where they vary different parameters of the problem, for example, the quality of the datasets (and the way they were collected) and the number of dropped dimensions in the resource-constrained setting. The results show the benefits of the proposed method compared to the baseline.
The paper proposes an offline RL algorithm for the resource-constrained setting, where the offline dataset contains richer features than those available from online interactions. The authors propose a transfer learning objective: a teacher policy is first learned from the rich features, then a student policy is learned from the limited features by additionally fitting the actions chosen by the teacher policy. The authors compare the policy learned via this transfer objective to a baseline that does not do transfer learning on D4RL tasks.
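The transfer objective described above can be sketched as a weighted sum of a behaviour-cloning term on the dataset actions and an imitation term on the teacher's actions. This is a minimal illustration: the `alpha` weighting and the squared-error form are assumptions for the sketch, not the paper's exact objective.

```python
import numpy as np

def distillation_loss(student_actions, teacher_actions, dataset_actions, alpha=0.5):
    """Combined objective for the limited-feature student policy:
    fit the logged dataset actions (behaviour cloning) and, in addition,
    the actions chosen by the privileged-feature teacher policy."""
    bc = np.mean((student_actions - dataset_actions) ** 2)       # offline BC term
    distill = np.mean((student_actions - teacher_actions) ** 2)  # teacher imitation term
    return (1 - alpha) * bc + alpha * distill
```

With `alpha = 0` this reduces to plain behaviour cloning, the no-transfer baseline the authors compare against.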
SP:7de556908b47e1aa105d05adaa8b77b3ab4cb08f
Practical Conditional Neural Process Via Tractable Dependent Predictions
1 INTRODUCTION . Conditional Neural Processes ( CNP ; Garnelo et al. , 2018a ) are a scalable and flexible family of meta-learning models capable of producing well-calibrated uncertainty estimates . CNPs naturally handle off-the-grid and missing data and are trained using a simple-to-implement maximum-likelihood procedure . At test time , CNPs require significantly less computation and memory than other meta-learning approaches , such as gradient-based fine-tuning ( Finn et al. , 2017 ; Triantafillou et al. , 2019 ) , making them ideal for resource and power-limited applications , such as mobile devices . Further , CNPs can be combined with attention ( Kim et al. , 2019 ) , or equivariant networks which account for symmetries in the task at hand ( Gordon et al. , 2020 ; Kawano et al. , 2021 ; Holderrieth et al. , 2021 ) , achieving impressive performance on a variety of problems . Despite these favourable qualities , CNPs are severely limited by the fact that they do not model dependencies in their output ( fig . 1 ) . Limitations of CNPs : More specifically , given two target input locations xm and xm′ , CNPs model their respective outputs ym and ym′ independently . In this paper , we refer to such predictions as mean-field . The inability to model dependencies hurts the predictive performance of CNPs and renders it impossible to produce coherent function samples . Since many downstream tasks require dependent function samples , this excludes mean-field CNPs from a range of applications . In heatwave or flood prediction for example , we need to evaluate the probability of the event that the temperature or precipitation remains above some threshold throughout a region of space and time . As illustrated by fig . 1 , mean-field predictions model every location independently , and may assign unreasonably low probabilities to such events .
If we were able to draw coherent samples from the predictive , the probabilities of such events and similar useful quantities could be more reasonably estimated . Limitations of existing models with dependencies : To address the above , follow-up work has introduced Neural Processes ( NPs ; Garnelo et al. , 2018b ; Kim et al. , 2019 ; Foong et al. , 2020 ) , which use latent variables to model output dependencies . However , the likelihood for these models is not analytically tractable , so approximate inference is required for training ( Le et al. , 2018 ; Foong et al. , 2020 ) . Alternatively , Bruinsma et al . ( 2021 ) recently introduced a variant of the CNP called the Gaussian Neural Process , which we will refer to as the FullConvGNP , which directly parametrises the covariance of a Gaussian predictive over the output variables . In this way the FullConvGNP models statistical dependencies in the output , and can be trained by an exact maximum-likelihood objective , without requiring approximations . However , for D-dimensional data , the architecture of the FullConvGNP involves 2D-dimensional convolutions , which can be very costly , and , for D > 1 , poorly supported by most Deep Learning libraries . Contributions : In this work ( i ) we introduce Gaussian Neural Processes ( GNPs ) , a class of models which directly parametrises the covariance of a Gaussian predictive process , thereby circumventing the costly convolutions of the FullConvGNP , and is applicable to higher-dimensional input data .
GNPs have analytic likelihoods making them substantially easier to train than their latent variable counterparts ; ( ii ) we show that GNPs can be easily applied to multi-output regression , as well as composed with invertible marginal transformations to model non-Gaussian data ; ( iii ) we demonstrate that modelling correlations improves performance on experiments with both Gaussian and non-Gaussian synthetic data , including a downstream estimation task that mean-field models cannot solve ; ( iv ) we demonstrate that GNPs outperform their mean-field and latent variable counterparts on real-world electroencephalogram ( EEG ) data and climate data ; ( v ) in climate modelling , GNPs outperform a standard ensemble of widely used methods in statistical downscaling , while providing spatially coherent temperature samples which are necessary for climate impact studies . 2 CONDITIONAL & GAUSSIAN NEURAL PROCESSES . Background : We present CNPs from the viewpoint of prediction maps ( Foong et al. , 2020 ) . A prediction map π is a function which maps ( 1 ) a context set ( xc , yc ) where xc = ( xc,1 , . . . , xc,N ) are the inputs and yc = ( yc,1 , . . . , yc,N ) the outputs and ( 2 ) a set of target inputs xt = ( xt,1 , ... , xt,M ) to a distribution over the corresponding target outputs yt = ( yt,1 , ... , yt,M ) : π ( yt ; xc , yc , xt ) = p ( yt|r ) , ( 1 ) where r = r ( xc , yc , xt ) is a vector which parameterises the distribution over yt . For a fixed context ( xc , yc ) , using Kolmogorov 's extension theorem ( Oksendal , 2013 ) , the collection of finite-dimensional distributions π ( yt ; xc , yc , xt ) for all xt,1 , . . . , xt,M , M ∈ N , defines a stochastic process if these are consistent under ( i ) permutations of any entries of ( xt , yt ) and ( ii ) marginalisations of any entries of yt . Prediction maps include , but are not limited to , Bayesian posteriors .
One familiar example of such a map is the Bayesian Gaussian process ( GP ; Rasmussen , 2003 ) posterior π ( yt ; xc , yc , xt ) = N ( yt ; m , K ) , ( 2 ) where m = m ( xc , yc , xt ) and K = k ( xc , xt ) are given by the usual GP posterior expressions . Another prediction map is the CNP ( Garnelo et al. , 2018a ) : π ( yt ; xc , yc , xt ) = ∏M m=1 p ( yt,m|rm ) , ( 3 ) where each p ( yt,m|rm ) is an independent Gaussian and rm = r ( xc , yc , xt,m ) is parameterised by a DeepSet ( Zaheer et al. , 2017 ) . CNPs are permutation and marginalisation consistent and thus correspond to valid stochastic processes . However , CNPs do not respect the product rule in general ( Foong et al. , 2020 ) . Nevertheless , CNPs and their variants ( Gordon et al. , 2020 ) have been demonstrated to give competitive performance and robust predictions in a variety of tasks and are a promising class of meta-learning models . Gaussian Neural Processes : A central problem with CNP predictive distributions is that they are mean-field : eq . ( 3 ) does not model correlations between yt,m and yt,m′ for m ̸= m′ . However , many tasks require modelling dependencies in the output variable . To remedy this , we consider parameterising a correlated multivariate Gaussian π ( yt ; xc , yc , xt ) = N ( yt ; m , K ) ( 4 ) where , instead of the expressions for the Bayesian GP posterior , we use neural networks to parameterise the mean m = m ( xc , yc , xt ) and covariance K = K ( xc , yc , xt ) . We refer to this class of models as Gaussian Neural Processes ( GNPs ) . The first such model , the FullConvGNP , was introduced by Bruinsma et al . ( 2021 ) with promising results .
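As a toy illustration of the mean-field CNP predictive in eq. (3), the sketch below builds a permutation-invariant context representation by mean-pooling a pointwise feature map (a minimal stand-in for the DeepSet encoder r) and emits one independent Gaussian per target location. The feature map, decoder weights, and noise level are arbitrary placeholders, not a trained model.

```python
import numpy as np

def deepset_repr(xc, yc):
    # Permutation-invariant context encoding: pool a pointwise feature map.
    # A minimal stand-in for the DeepSet encoder r(xc, yc).
    feats = np.stack([xc, yc, xc * yc], axis=-1)
    return feats.mean(axis=0)

def cnp_predict(xc, yc, xt):
    # Mean-field predictive of eq. (3): an independent Gaussian per target.
    r = deepset_repr(xc, yc)
    mean = r[1] + 0.1 * xt        # toy decoder (hypothetical weights)
    std = np.full_like(xt, 0.5)   # toy homoscedastic noise scale
    return mean, std
```

Because the representation is pooled, shuffling the context set leaves the predictive unchanged, which is the permutation consistency required for a valid prediction map.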
Unfortunately , the FullConvGNP relies on 2D-dimensional convolutions for parameterising K , applying the sequence of computations ( xc , yc ) 1−→ ( x̃ , h ) 2−→ r = PSD ( CNN2D ( h ) ) 3−→ Kij = ∑L l=1 ψ ( xt,i , x̃l ) rl ψ ( x̃l , xt,j ) ( 5 ) where 1 maps ( xc , yc ) to a 2D-dimensional grid h at locations x̃ = ( x̃1 , . . . , x̃L ) , x̃l ∈ R2D , using a SetConv layer ( Gordon et al. , 2020 ) , 2 maps h to r through a CNN with 2D-dimensional convolutions , followed by a PSD map which ensures r is positive-definite , and 3 aggregates r using an RBF ψ . The CNN at 2 requires expensive 2D-dimensional convolutions , which are challenging to scale to higher dimensions ( see appendix B ) . To overcome this difficulty , we propose parameterising m and K by mi = f ( xt,i , r ) , Kij = k ( g ( xt,i , r ) , g ( xt,j , r ) ) ( 6 ) where r = r ( xc , yc ) , f and g are neural networks with outputs in R and RDg , and k is an appropriately chosen positive-definite function . Note that , since k models a posterior covariance , it cannot be stationary . The special case where Kij = σ2i Iij is diagonal corresponds to a mean-field CNP as presented in ( Garnelo et al. , 2018a ) . Equation ( 6 ) defines a class of GNPs which , unlike the FullConvGNP , do not require costly convolutions . GNPs can be readily trained via the log-likelihood θ∗ = argmaxθ log π ( yt ; xc , yc , xt ) , ( 7 ) where θ collects all the parameters of the neural networks f , g , and r. In this work , we consider two methods to parameterise K , which we discuss next . Linear covariance : The first method we consider is the linear covariance Kij = g ( xt,i , r ) ⊤g ( xt,j , r ) ( 8 ) which can be seen as a linear-in-the-parameters model with Dg basis functions and a unit Gaussian distribution on their weights . This model meta-learns Dg context-dependent basis functions , which approximate the true distribution of the target , given the context .
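The linear covariance of eq. (8) and its linear-cost sampling can be sketched directly: with G an M × Dg matrix whose rows are the basis functions g(xt,i, r) evaluated at the targets, K = GG⊤ is positive semi-definite by construction, and a sample only requires Dg unit-Gaussian weights. The matrix G here is a random placeholder standing in for a trained network's output.

```python
import numpy as np

def linear_cov(G):
    # Linear covariance of eq. (8): K = G G^T, where the rows of G are the
    # Dg context-dependent basis functions evaluated at the M target inputs.
    return G @ G.T

def sample_linear(m, G, rng):
    # Sampling costs O(M * Dg): draw unit-Gaussian weights over the basis
    # functions instead of factorising the full M x M covariance.
    w = rng.standard_normal(G.shape[1])
    return m + G @ w
```

This makes explicit why sampling scales linearly in the number of target locations, and also why Dg bounds the rank, and hence the expressivity, of the covariance.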
By Mercer 's theorem ( Rasmussen , 2003 ) , up to regularity conditions , every positive-definite function k can be decomposed as k ( z , z′ ) = ∑∞ d=0 ϕd ( z ) ϕd ( z′ ) ( 9 ) where ( ϕd ) ∞d=1 is a set of orthogonal basis functions . We therefore expect eq . ( 8 ) to be able to recover arbitrary ( sufficiently regular ) GP predictives as Dg grows large . Further , the linear covariance has the attractive feature that sampling from it scales linearly with the number of target locations . A drawback is that the finite number of basis functions may limit its expressivity . Kvv covariance : An alternative covariance which sidesteps this issue , is the kvv covariance Kij = k ( g ( xt,i , r ) , g ( xt,j , r ) ) v ( xt,i , r ) v ( xt,j , r ) , ( 10 ) where k is the Exponentiated Quadratic ( EQ ) covariance with unit lengthscale and v is a scalar-output neural network . The function v modulates the magnitude of the covariance , which would otherwise not be able to shrink near the context . Unlike the linear covariance , the kvv covariance is not limited by a finite number of basis functions , but the cost of drawing samples from it scales cubically in the number of target points . Multi-output regression : Extending this approach to the multi-output setting where yt,m ∈ RDy with Dy > 1 can be achieved by learning functions m1 , . . . , mDy and g1 , . . . , gDy for each dimension of the output variable . We can represent covariances across different target points and different target vector entries , by passing those features through either the linear or the kvv covariance Kijab = ga ( xt,i , r ) ⊤gb ( xt,j , r ) , ( 11 ) Kijab = k ( ga ( xt,i , r ) , gb ( xt,j , r ) ) va ( xt,i , r ) vb ( xt,j , r ) , ( 12 ) where Kijab denotes the covariance between entry a of yt,i and entry b of yt,j . Neural architectures : This discussion leaves room for choosing f , g , and r , producing different models belonging to the GNP family , of which the FullConvGNP is also a member .
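A minimal numerical sketch of the kvv covariance of eq. (10), assuming a unit-lengthscale EQ kernel on given embeddings G and magnitudes v (both stand-ins for the outputs of the networks g and v). Note the diagonal is exactly v_i^2 since EQ(z, z) = 1, which is what lets the predictive variance shrink near the context.

```python
import numpy as np

def kvv_cov(G, v):
    # kvv covariance of eq. (10): EQ kernel with unit lengthscale on the
    # embeddings g(x_{t,i}, r), modulated by scalar magnitudes v(x_{t,i}, r).
    sq = np.sum((G[:, None, :] - G[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * sq)        # EQ Gram matrix on the embedded targets
    return K * np.outer(v, v)    # magnitude modulation: K_ij * v_i * v_j
```

Writing the result as diag(v) · K_EQ · diag(v) makes clear that positive-definiteness of the EQ kernel is preserved under the modulation.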
For example , we may choose these to be DeepSets , attentive DeepSets or CNNs , giving rise to Gaussian Neural Processes ( GNPs ) , Attentive GNPs ( AGNPs ) or Convolutional GNPs ( ConvGNPs ) respectively . Particularly , in the ConvGNP , the feature function g takes the form ( xc , yc ) 1−→ ( x̃ , h ) 2−→ r = CNND ( h ) 3−→ g ( xt,i , r ) = ∑L l=1 ψ ( xt,i , x̃l ) rl , ( 13 ) where , crucially , h are values on a D-dimensional grid at x̃ = ( x̃1 , . . . , x̃L ) , x̃l ∈ RD , and 2 uses a D-dimensional rather than a 2D-dimensional CNN . This renders the ConvGNP much cheaper than the FullConvGNP in both compute and memory , while retaining translation equivariance ( see appendix A.1 for proof ) , making the former a scalable alternative to the latter .
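Step 3 of eq. (13), the RBF readout from the D-dimensional grid back to arbitrary target inputs, can be sketched for D = 1 as follows. The grid locations, feature matrix, and lengthscale are illustrative placeholders; r would come from the D-dimensional CNN in a real ConvGNP.

```python
import numpy as np

def rbf_readout(xt, x_grid, r, lengthscale=1.0):
    # Step 3 of eq. (13): interpolate gridded CNN features r (shape L x Dg)
    # back to arbitrary target inputs xt (shape M) with an RBF psi.
    psi = np.exp(-0.5 * ((xt[:, None] - x_grid[None, :]) / lengthscale) ** 2)
    return psi @ r  # (M, Dg) target embeddings g(x_{t,i}, r)
```

Because the weights depend only on the difference xt − x̃l, shifting the targets and the grid together shifts the output in the same way, which is the translation equivariance the text refers to.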
The manuscript proposes variants of the Neural Process (NP) which can model correlations across input locations (and across output dimensions for multi-output regression). The main idea is to directly parameterize the mean and covariance functions of a Gaussian predictive via neural networks. The authors also propose to use copulae to handle non-Gaussian marginal distributions.
SP:2c744a0c336c736ec3c91848bd3107dcf0606e20
Practical Conditional Neural Process Via Tractable Dependent Predictions
There is a long line of recent interesting work on neural processes, a scalable and more flexible alternative to GPs for performing prediction at a set of test points (x1, ..., xm) given a conditioning set ((x, y)_1, ..., (x, y)_n). This mapping is learned via meta-learning. This paper addresses a core issue of the popular conditional neural process: the predictions at each test point are conditionally independent given the conditioning set. This is an inappropriate modeling assumption for many real-world datasets. In response, the authors propose a non-diagonal Gaussian to describe the joint distribution. For example, they use structured Gaussian covariances (linear, kvv) and also a Gaussian copula model. They demonstrate both qualitatively and quantitatively that modeling these dependencies improves the performance of the model on a variety of datasets spanning application domains.
SP:2c744a0c336c736ec3c91848bd3107dcf0606e20
Practical Conditional Neural Process Via Tractable Dependent Predictions
1 INTRODUCTION . Conditional Neural Processes ( CNP ; Garnelo et al. , 2018a ) are a scalable and flexible family of metalearning models capable of producing well-calibrated uncertainty estimates . CNPs naturally handle off-the-grid and missing data and are trained using a simple-to-implement maximum-likelihood procedure . At test time , CNPs require significantly less computation and memory than other metalearning approaches , such as gradient-based fine tuning ( Finn et al. , 2017 ; Triantafillou et al. , 2019 ) , making them ideal for resource and power-limited applications , such as mobile devices . Further , CNPs can be combined with attention ( Kim et al. , 2019 ) , or equivariant networks which account for symmetries in the task at hand ( Gordon et al. , 2020 ; Kawano et al. , 2021 ; Holderrieth et al. , 2021 ) , achieving impressive performance on a variety of problems . Despite these favourable qualities , CNPs are severely limited by the fact that they do not model dependencies in their output ( fig . 1 ) . Limitations of CNPs : More specifically , given two target input locations xm and xm′ , CNPs model their respective outputs ym and ym′ independently . In this paper , we refer to such predictions as mean-field . The inability to model dependencies hurts the predictive performance of CNPs and ren- ders it impossible to produce coherent function samples . Since many downstream tasks require dependent function samples , this excludes mean-field CNPs form a range of applications . In heatwave or flood prediction for example , we need to evaluate the probability of the event that the temperature or precipitation remains above some threshold , throughout a region of space and time . As illustrated by fig . 1 , mean-field predictions model every location independently , and may assign unreasonably low probabilities to such events . 
If we were able to draw coherent samples from the predictive , the probabilities of such events and similar useful quantities could be more reasonably estimated . Limitations of existing models with dependencies : To address the above , follow-up work has introduced Neural Processes ( NPs ; Garnelo et al. , 2018b ; Kim et al. , 2019 ; Foong et al. , 2020 ) , which use latent variables to model output dependencies . However , the likelihood for these models is not analytically tractable , so approximate inference is required for training ( Le et al. , 2018 ; Foong et al. , 2020 ) . Alternatively , Bruinsma et al . ( 2021 ) recently introduced a variant of the CNP called the Gaussian Neural Process , which we will refer to as the FullConvGNP , which directly parametrises the covariance of a Gaussian predictive over the output variables . In this way the FullConvGNP models statistical dependencies in the output , and can be trained by an exact maximum-likelihood objective , without requiring approximations . However , for D-dimensional data , the architecture of the FullConvGNP involves 2D-dimensional convolutions , which can be very costly , and , forD > 1 , poorly supported by most Deep Learning libraries . Contributions : In this work ( i ) we introduce Gaussian Neural Processes ( GNPs ) , a class of model which directly parametrises the covariance of a Gaussian predictive process , thereby circumventing the costly convolutions of the FullConvGNP , and is applicable to higher-dimensional input data . 
GNPs have analytic likelihoods making them substantially easier to train than their latent variable counterparts ; ( ii ) we show that GNPs can be easily applied to multi-output regression , as well as composed with invertible marginal transformations to model non-Gaussian data ; ( iii ) we demonstrate that modelling correlations improves performance on experiments with both Gaussian and non-Gaussian synthetic data , including a downstream estimation task that mean-field models can not solve ; ( iv ) we demonstrate that GNPs outperform their mean-field and latent variable counterparts on real-world electroencephalogram ( EEG ) data and climate data ; ( v ) in climate modelling , GNPs outperform a standard ensemble of widely used methods in statistical downscaling , while providing spatially coherent temperature samples which are necessary for climate impact studies . 2 CONDITIONAL & GAUSSIAN NEURAL PROCESSES . Background : We present CNPs from the viewpoint of prediction maps ( Foong et al. , 2020 ) . A prediction map π is a function which maps ( 1 ) a context set ( xc , yc ) where xc = ( xc,1 , . . . , xc , N ) are the inputs and yc = ( yc,1 , . . . , yc , N ) the outputs and ( 2 ) a set of target inputs xt = ( xt,1 , ... , xt , M ) to a distribution over the corresponding target outputs yt = ( yt,1 , ... , yt , M ) : π ( yt ; xc , yc , xt ) = p ( yt|r ) , ( 1 ) where r = r ( xc , yc , xt ) is a vector which parameterises the distribution over yt . For a fixed context ( xc , yc ) , using Kolmogorov ’ s extension theorem ( Oksendal , 2013 ) , the collection of finitedimensional distributions π ( yt ; xc , yc , xt ) for all xt,1 , . . . , xt , M ∈ RM , M ∈ N , defines a stochas- tic process if these are consistent under ( i ) permutations of any entries of ( xt , yt ) and ( ii ) marginalisations of any entries of yt . Prediction maps include , but are not limited to , Bayesian posteriors . 
One familiar example of such a map is the Bayesian Gaussian process ( GP ; Rasmussen , 2003 ) posterior π ( yt ; xc , yc , xt ) = N ( yt ; m , K ) , ( 2 ) where m = m ( xc , yc , xt ) and K = k ( xc , xt ) are given by the usual GP posterior expressions . Another prediction map is the CNP Garnelo et al . ( 2018a ) : π ( yt ; xc , yc , xt ) = ∏M m=1 p ( yt , m|rm ) , ( 3 ) where each p ( yt , m|rm ) is an independent Gaussian and rm = r ( xc , yc , xt , m ) is parameterised by a DeepSet ( Zaheer et al. , 2017 ) . CNPs are permutation and marginalisation consistent and thus correspond to valid stochastic processes . However , CNPs do not respect the product rule in general ( Foong et al. , 2020 ) . Nevertheless , CNPs and their variants ( Gordon et al. , 2020 ) have been demonstrated to give competitive performance and robust predictions in a variety of tasks and are a promising class of meta-learning models . Gaussian Neural Processes : A central problem with CNP predictive distributions is that they are mean-field : eq . ( 3 ) does not model correlations between yt , m and yt , m′ for m ̸= m′ . However , many tasks require modelling dependencies in the output variable . To remedy this , we consider parameterising a correlated multivariate Gaussian π ( yt ; xc , yc , xt ) = N ( yt ; m , K ) ( 4 ) where , instead of the expressions for the Bayesian GP posterior , we use neural networks to parameterise the mean m = m ( xc , yc , xt ) and covariance K = K ( xc , yc , xt ) . We refer to this class of models as Gaussian Neural Processes ( GNPs ) . The first such model , the FullConvGNP , was introduced by Bruinsma et al . ( 2021 ) with promising results . 
Unfortunately, the FullConvGNP relies on 2D-dimensional convolutions for parameterising K, applying the sequence of computations (xc, yc) −1→ (x̃, h) −2→ r = PSD(CNN2D(h)) −3→ Kij = ∑_{l=1}^{L} ψ(xt,i, x̃l) rl ψ(x̃l, xt,j), (5) where step 1 maps (xc, yc) to a 2D-dimensional grid h at locations x̃ = (x̃1, ..., x̃L), x̃l ∈ R^{2D}, using a SetConv layer (Gordon et al., 2020); step 2 maps h to r through a CNN with 2D-dimensional convolutions, followed by a PSD map which ensures r is positive-definite; and step 3 aggregates r using an RBF ψ. The CNN at step 2 requires expensive 2D-dimensional convolutions, which are challenging to scale to higher dimensions (see appendix B). To overcome this difficulty, we propose parameterising m and K by mi = f(xt,i, r), Kij = k(g(xt,i, r), g(xt,j, r)), (6) where r = r(xc, yc), f and g are neural networks with outputs in R and R^{Dg}, and k is an appropriately chosen positive-definite function. Note that, since k models a posterior covariance, it cannot be stationary. The special case where Kij = σi² Iij is diagonal corresponds to a mean-field CNP as presented in Garnelo et al. (2018a). Equation (6) defines a class of GNPs which, unlike the FullConvGNP, do not require costly convolutions. GNPs can be readily trained via the log-likelihood θ* = argmax_θ log π(yt; xc, yc, xt), (7) where θ collects all the parameters of the neural networks f, g, and r. In this work, we consider two methods to parameterise K, which we discuss next. Linear covariance: The first method we consider is the linear covariance Kij = g(xt,i, r)⊤ g(xt,j, r), (8) which can be seen as a linear-in-the-parameters model with Dg basis functions and a unit Gaussian distribution on their weights. This model meta-learns Dg context-dependent basis functions, which approximate the true distribution of the target, given the context.
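Equations (7) and (8) together are simple enough to sketch directly: K = G Gᵀ from the basis-function matrix, and an exact Gaussian log-density as the training objective. The random matrix G stands in for a learned feature map g, and the small diagonal noise term is an assumption of this sketch (it keeps K strictly positive definite), not something the text above prescribes.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_covariance(G, noise=1e-2):
    # Eq. (8): K_ij = g(x_t,i, r)^T g(x_t,j, r), i.e. K = G G^T, where row i
    # of G holds the Dg basis-function values g(x_t,i, r).  An assumed small
    # diagonal term keeps K strictly positive definite.
    return G @ G.T + noise * np.eye(len(G))

def gaussian_log_lik(y, m, K):
    # Eq. (7): the exact log-density log N(y; m, K), the analytic
    # maximum-likelihood objective that makes GNPs simple to train.
    d = y - m
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (d @ np.linalg.solve(K, d) + logdet + len(y) * np.log(2 * np.pi))

M, Dg = 6, 32                 # 6 target points, 32 basis functions
G = rng.normal(size=(M, Dg))  # stands in for the neural feature map g
m = rng.normal(size=M)        # stands in for the mean head f

K = linear_covariance(G)
# Correlated sampling through the basis functions costs O(M * Dg),
# i.e. linear in the number of target locations, as claimed in the text:
y = m + G @ rng.normal(size=Dg) + np.sqrt(1e-2) * rng.normal(size=M)
ll = gaussian_log_lik(y, m, K)
```

The sampling line is the point of the linear covariance: drawing Dg basis weights and mixing them avoids ever factorising the M × M matrix K.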
By Mercer's theorem (Rasmussen, 2003), up to regularity conditions, every positive-definite function k can be decomposed as k(z, z′) = ∑_{d=1}^{∞} ϕd(z) ϕd(z′), (9) where (ϕd)_{d=1}^{∞} is a set of orthogonal basis functions. We therefore expect eq. (8) to be able to recover arbitrary (sufficiently regular) GP predictives as Dg grows large. Further, the linear covariance has the attractive feature that sampling from it scales linearly with the number of target locations. A drawback is that the finite number of basis functions may limit its expressivity. Kvv covariance: An alternative covariance which sidesteps this issue is the kvv covariance Kij = k(g(xt,i, r), g(xt,j, r)) v(xt,i, r) v(xt,j, r), (10) where k is the Exponentiated Quadratic (EQ) covariance with unit lengthscale and v is a scalar-output neural network. The factors v modulate the magnitude of the covariance, which would otherwise not be able to shrink near the context. Unlike the linear covariance, kvv is not limited by a finite number of basis functions, but the cost of drawing samples from it scales cubically in the number of target points. Multi-output regression: Extending this approach to the multi-output setting, where yt,m ∈ R^{Dy} with Dy > 1, can be achieved by learning functions m1, ..., mDy and g1, ..., gDy for each dimension of the output variable. We can represent covariances across different target points and different target vector entries by passing these features through either the linear or the kvv covariance: Kijab = ga(xt,i, r)⊤ gb(xt,j, r), (11) Kijab = k(ga(xt,i, r), gb(xt,j, r)) va(xt,i, r) vb(xt,j, r), (12) where Kijab denotes the covariance between entry a of yt,i and entry b of yt,j. Neural architectures: This discussion leaves room for choosing f, g, and r, producing different models belonging to the GNP family, of which the FullConvGNP is also a member.
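The kvv construction of eq. (10) is also easy to sketch. A random feature matrix stands in for g and a random positive vector for the scalar head v; setting one entry of v near zero mimics a target that coincides with a context point, where the posterior variance should shrink.

```python
import numpy as np

rng = np.random.default_rng(2)

def kvv_covariance(G, v):
    # Eq. (10): K_ij = EQ(g_i, g_j) * v_i * v_j, with EQ the exponentiated
    # quadratic kernel at unit lengthscale.  The scalar head v rescales the
    # marginal variances, letting them shrink towards 0 near the context.
    # K stays PSD: a Hadamard product of PSD matrices is PSD (Schur product
    # theorem), and both EQ(G, G) and outer(v, v) are PSD.
    sq = ((G[:, None, :] - G[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq) * np.outer(v, v)

M, Dg = 5, 8
G = rng.normal(size=(M, Dg))          # stands in for g(x_t,i, r)
v = np.abs(rng.normal(size=M)) + 0.1  # stands in for v(x_t,i, r)
v[0] = 1e-6  # pretend target 0 coincides with an observed context point
K = kvv_covariance(G, v)
```

Without v, the unit-lengthscale EQ kernel would force every marginal variance to equal 1, so the covariance could never collapse where the model is certain; the v factors restore that freedom at the cost of losing the linear-time sampling of eq. (8).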
For example, we may choose these to be DeepSets, attentive DeepSets, or CNNs, giving rise to Gaussian Neural Processes (GNPs), Attentive GNPs (AGNPs), or Convolutional GNPs (ConvGNPs), respectively. In particular, in the ConvGNP, the feature function g takes the form (xc, yc) −1→ (x̃, h) −2→ r = CNND(h) −3→ g(xt,i, r) = ∑_{l=1}^{L} ψ(xt,i, x̃l) rl, (13) where, crucially, h are values on a D-dimensional grid at x̃ = (x̃1, ..., x̃L), x̃l ∈ R^D, and step 2 uses a D-dimensional rather than a 2D-dimensional CNN. This renders the ConvGNP much cheaper than the FullConvGNP in both compute and memory, while retaining translation equivariance (see appendix A.1 for proof), making the former a scalable alternative to the latter.
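Step 3 of eq. (13), the RBF readout from the grid back to arbitrary target inputs, can be sketched in one dimension. The Gaussian ψ, its lengthscale, and the sine stand-in for the CNN output are assumptions of this sketch; the point is only the mechanism, and the fact that shifting targets and grid together leaves g unchanged, which is the translation equivariance the text refers to.

```python
import numpy as np

def rbf_readout(xt, x_grid, r_grid, lengthscale=0.2):
    # Step 3 of eq. (13): g(x_t,i, r) = sum_l psi(x_t,i, x~_l) r_l, an RBF
    # smoothing of the CNN's gridded output r back to arbitrary, off-grid
    # target inputs.  psi depends only on differences xt - x_grid, so a
    # joint shift of targets and grid leaves the output unchanged.
    psi = np.exp(-0.5 * ((xt[:, None] - x_grid[None, :]) / lengthscale) ** 2)
    return psi @ r_grid

x_grid = np.linspace(-2.0, 2.0, 41)  # L = 41 grid locations (D = 1)
r_grid = np.sin(x_grid)[:, None]     # stand-in for CNND(h), with Dg = 1
xt = np.array([-0.3, 0.0, 1.1])      # off-grid target inputs
g = rbf_readout(xt, x_grid, r_grid)
```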
The authors present a class of neural process models that produce correlated predictions while being amenable to exact, simple, and scalable maximum-likelihood optimisation, and that support multiple outputs. By using invertible transformations (Gaussian copulas), the model is able to capture non-Gaussian output distributions. Experiments with artificial and real data (EEG and climate) highlight the predictive ability of the proposed model.
Representation Topology Divergence: A Method for Comparing Neural Network Representations.
1 INTRODUCTION. Representations of objects are the essential component learnt by deep neural networks. In contrast to distances in the original space, similarities between representations have proved to be semantically meaningful. Despite the significant practical success of deep neural networks, many aspects of their behaviour are poorly understood. Only a few methods study learned representations without relying on their quality on a specific downstream task. In this work, we focus on the comparison of representations from neural networks. Comparison of representations is an ill-posed problem without a "ground truth" answer. Early studies were based on variants of Canonical Correlation Analysis (CCA): SVCCA (Raghu et al., 2017), PWCCA (Morcos et al., 2018). However, CCA-like measures define similarity too loosely, since they are invariant to any invertible linear transformation. Centered Kernel Alignment (CKA; Kornblith et al., 2019) is a statistical test measuring the independence of two sets of variables. Kornblith et al. (2019) showed it to be more consistent with the intuitive similarity of representations; in particular, neural networks learn similar representations from different seeds, as evaluated by CKA. Another line of work studies alignment between groups of neurons (Li et al., 2015; Wang et al., 2018). The similarity of representations is also a topic of study in neuroscience (Edelman, 1998; Kriegeskorte et al., 2008; Connolly et al., 2012). Representation-comparison metrics like CKA and CCA have been used to gain insights into representations obtained in meta-learning (Raghu et al., 2020), to compare representations from different layers of language models (Voita et al., 2019), and to study the effect of fine-tuning (Wu et al., 2020). Finally, Nguyen et al.
(2021) used CKA to study the phenomenon of a "block structure" emerging in wide and deep networks in computer vision and to compare their representations. In this paper, we take a topological perspective on representation comparison. We propose the Representation Topology Divergence (RTD) score, which measures the dissimilarity between two point clouds of equal size with a one-to-one correspondence between points. The point clouds are allowed to lie in different ambient spaces. Existing geometrical and topological methods are dedicated to other problems: they are either too general and do not incorporate the requirement of one-to-one correspondence (Khrulkov & Oseledets, 2018; Tsitsulin et al., 2020), or they restrict the point clouds to lie in the same ambient space (Kynkäänniemi et al., 2019; Barannikov et al., 2021). Such methods (except for Tsitsulin et al., 2020) are mostly applied to the evaluation of GANs, where point clouds of real and generated objects are matched. Recently, Moor et al. (2020) proposed a loss term comparing the topology of data in the original and latent spaces (with a natural one-to-one correspondence) and applied it as part of the Topological Autoencoder. In this work, we make the following contributions: 1. We propose a topologically-inspired approach for the comparison of neural network representations; 2. We introduce the R-Cross-Barcode(P, P̃), a tool based on Topological Data Analysis (TDA) which measures differences in the multi-scale topology of two point clouds P, P̃ with a one-to-one correspondence between points; 3. Based on the R-Cross-Barcode(P, P̃), we define the Representation Topology Divergence (RTD), a scalar measuring the multi-scale topological dissimilarity between two representations; 4. Through computational experiments, we show that RTD agrees with an intuitive notion of neural network representation similarity.
In contrast with most existing approaches, the RTD score is sensitive to clusters and other topological structures of the representations and correlates very well with the disagreement of model predictions. We apply RTD to compare representations in the computer vision and NLP domains across various problems: training dynamics analysis, data distribution shift, transfer learning, ensemble learning, and disentanglement. We also compare RTD with CKA, IMD, and SVCCA. 2 COMPARING NEURAL NETWORK REPRESENTATIONS. Our starting point is the geometric perspective on representation learning through the lens of the manifold hypothesis (Goodfellow et al., 2016), according to which real-world data presented in a high-dimensional space are expected to concentrate in the vicinity of a manifold of much lower dimension. The low-dimensional manifold M_P underlying a given data representation P can generally be accessed only through discrete sets of samples. The standard approach to recover the manifold M_P is to take a sample P and to approximate M_P by a set of simplices with vertices from P. A common approach to select the simplices approximating M_P is to fix a threshold α > 0 and consider the simplices with edge lengths not exceeding α (Niyogi et al., 2008; Belkin & Niyogi, 2001). It is difficult in general to guess the correct value of the threshold, hence a reasonable viewpoint is to study all thresholds at once, see e.g. Chazal & Michel (2017). This can be accomplished by means of a mathematical tool called the barcode, which quantifies the evolution of manifold topology features over multiple scales. Given two representations, we consider two corresponding graphs with distance-like weights and compare the difference in the two graphs' multi-scale topology. 2.1 R-CROSS-BARCODE. Let P(V), P̃(V) be two representations giving two embeddings of the same data.
The two embeddings P, P̃ belong in general to different ambient spaces, and we have a natural one-to-one correspondence between points in P and P̃. Given a sample of data V, the representation P = P(V) defines the weighted graph G_w with vertex set V. The weight w_AB of an edge AB is given by the distance between the points P(A) and P(B). Similarly, the representation P̃ = P̃(V) defines the weighted graph G_w̃ on the same vertex set. The simplicial approximation to the manifold M_P at threshold α consists of simplices whose edges in G_w have weights not exceeding α. Let G_{w≤α} denote the graph with vertex set V and the edges with weights not exceeding α. To compare the simplicial approximations to the manifolds M_P and M_P̃ described by the graphs G_{w≤α} and G_{w̃≤α}, we embed both graphs into the graph G_{min(w,w̃)≤α}. The graph G_{min(w,w̃)≤α} contains an edge between vertices A and B exactly when the distance between the points A and B is smaller than α in at least one of the representations P, P̃. Recall that the Vietoris-Rips complex of a graph G equipped with an edge-weight matrix m is the collection of k-simplices, k ≥ 0, which are (k+1)-element subsets of the set of vertices of G, with the filtration threshold of a simplex defined by its maximal edge weight: R_α(G_m) = { {A_0, ..., A_k}, A_i ∈ Vert(G) | m_{A_j A_l} ≤ α }. Our simplicial approximations to the manifolds M_P, M_P̃ at threshold α are the unions of all simplices from the simplicial complexes R_α(G_w), R_α(G_w̃). The dissimilarity between the filtered simplicial complexes R_α(G_w) and R_α(G_w̃) can be quantified using homological methods. The relevant tools here are homology, the Whitehead theorem, and the homology exact sequence. Because of space limitations, we sketch how this leads to our construction, described below, in the Appendix, Section B.
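The Vietoris-Rips construction above is concrete enough to enumerate directly for a tiny point cloud: every (k+1)-subset of vertices is a k-simplex, and its filtration threshold is the largest pairwise distance among its vertices. The function name below is ours, and the four-point cloud is an arbitrary toy example.

```python
import numpy as np
from itertools import combinations

def vr_complex(m, max_dim=2):
    # Vietoris-Rips over a complete weighted graph: every (k+1)-subset of
    # vertices is a k-simplex whose filtration threshold is the maximal
    # edge weight among its vertices (0 for a single vertex).
    n = len(m)
    out = []
    for k in range(max_dim + 1):
        for s in combinations(range(n), k + 1):
            thr = max((m[i][j] for i, j in combinations(s, 2)), default=0.0)
            out.append((s, thr))
    return out

# Toy 4-point cloud; its pairwise distance matrix plays the role of w.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
w = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
cx = vr_complex(w)

# R_alpha(G_w): the simplices alive at threshold alpha.  At alpha = 1.5
# the three nearby points span a triangle, while the far point stays an
# isolated vertex.
alpha = 1.5
alive = [s for s, thr in cx if thr <= alpha]
```

Sweeping α from 0 upward and recording when each topological feature appears and disappears is exactly what the barcode summarises.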
Concretely, to compare the multi-scale topology of the two weighted graphs G_w and G_w̃, we introduce the weighted graph Ĝ(w, w̃) with a doubled set of vertices and edge weights defined as follows. For each vertex A ∈ Vert(G) we add an extra vertex A′ alongside A to Ĝ, and define the distance-like edge weights in Ĝ(w, w̃) as: d_AB = min(w_AB, w̃_AB), d_AB′ = d_A′B = w_AB, d_AA′ = 0, d_A′B′ = 0, (1) where B ∈ Vert(G), A ≠ B. In practice G, Ĝ are complete graphs, and the edge weights of the graph Ĝ(w, w̃) are given by the 2N × 2N, N = |V|, symmetric block matrix m = [ [0, w], [w, min(w, w̃)] ], (2) where w and w̃ are the distance-like edge-weight matrices of G_w and G_w̃. Next, we construct the Vietoris-Rips filtered simplicial complex from the graph Ĝ(w, w̃). Intuitively, the i-th barcode of R_α(Ĝ(w, w̃)) records the i-dimensional topological features that are born in R_α(G_w̃) but are not yet born at the same place in R_α(G_w), and the (i+1)-dimensional topological features that are dead in R_α(G_w̃) but are not yet dead at this place in R_α(G_w); see Theorem 1 below. Definition. The R-Cross-Barcode_i(P, P̃) is the set of intervals recording the "births" and "deaths" of i-dimensional topological features in the filtered simplicial complex R_α(Ĝ(w, w̃)). The R-Cross-Barcode_*(P, P̃) (for Representations' Cross-Barcode) records the differences in the multi-scale topology of the two embeddings. Topological features with longer lifespans indicate, in general, the essential features. Theorem 1. Basic properties of R-Cross-Barcode_*(P, P̃): • if P(A) = P̃(A) for every object A ∈ V, then R-Cross-Barcode_*(P, P̃) = ∅; • if all distances within P̃(V) are zero, i.e.
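Assembling the 2N × 2N matrix of eq. (2) is a direct transcription of the block structure. The function name below is ours; note that the two representations deliberately live in different ambient spaces (R^3 and R^7), which the construction allows since only pairwise distances enter.

```python
import numpy as np

def doubled_graph_weights(w, w_tilde):
    # Eq. (2): the 2N x 2N symmetric weight matrix of G^(w, w~), listing
    # the primed copies A' first.  Blockwise: d_{A'B'} = 0, d_{A'B} = w_AB,
    # d_{AB} = min(w_AB, w~_AB).  The rule d_{AA'} = 0 of eq. (1) falls out
    # automatically, since the off-diagonal block puts w_AA = 0 there.
    n = len(w)
    m = np.zeros((2 * n, 2 * n))
    m[:n, n:] = w
    m[n:, :n] = w
    m[n:, n:] = np.minimum(w, w_tilde)
    return m

rng = np.random.default_rng(3)
P = rng.normal(size=(5, 3))   # representation P in R^3
Pt = rng.normal(size=(5, 7))  # representation P~ in a different space, R^7
w = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
wt = np.linalg.norm(Pt[:, None] - Pt[None, :], axis=-1)
m = doubled_graph_weights(w, wt)
```

Feeding m to a Vietoris-Rips persistence routine then yields the R-Cross-Barcode intervals of Algorithm 1.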
all objects are represented by the same point in P̃, then R-Cross-Barcode_*(P, P̃) = Barcode_*(P), the standard barcode of the point cloud P; • for any value of the threshold α, the following sequence of natural linear maps of homology groups ... → H_i(R_α(G_w)) → H_i(R_α(G_{min(w,w̃)})) → H_i(R_α(Ĝ(w, w̃))) → H_{i−1}(R_α(G_w)) → H_{i−1}(R_α(G_{min(w,w̃)})) → ... (3) is exact; recall that this means that the kernel of each map is the image of the previous map. The proof of the first two properties is immediate, and the third property follows from the exactness of the corresponding sequence of simplicial complexes; see the Appendix for more details. Algorithm 1: R-Cross-Barcode_i(P, P̃). Input: w, w̃, the matrices of pairwise distances within the point clouds P, P̃. Requires: vr(m), a function computing the filtered complex from a pairwise-distance matrix m; B(C, i), a function computing the persistence intervals of a filtered complex C in dimension i. Steps: m ← [ [0, w], [w, min(w, w̃)] ]; R-Cross-Barcode_i ← B(vr(m), i). Returns: the intervals' list R-Cross-Barcode_i(P, P̃), representing the "births" and "deaths" of topological discrepancies between P and P̃. Algorithm 2: RTD(P, P̃) (see Section 2.3 for details; suggested default values: b = 500, n = 10). Input: P ∈ R^{|V|×D}, P̃ ∈ R^{|V|×D̃}, the data representations. For j = 1 to n: V_j ← random choice(V, b); P_j, P̃_j ← P(V_j), P̃(V_j); B_j ← R-Cross-Barcode_1(P_j, P̃_j), the intervals' list calculated by Algorithm 1; rtd_j ← sum of the lengths of all intervals in B_j. RTD(P, P̃) ← mean(rtd). Returns: the number RTD(P, P̃), representing the discrepancy between the representations P, P̃.
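The subsampling structure of Algorithm 2 can be sketched end-to-end. Computing the dimension-1 persistence intervals that the paper uses requires a persistent-homology library (e.g. ripser), so the sketch below substitutes a crude H0-based discrepancy motivated by the exact sequence (3): connected components of R_α(G_w) that merge earlier in R_α(G_min(w,w̃)) signal topological disagreement, and the total finite H0 persistence of a Vietoris-Rips filtration equals its minimum-spanning-tree weight. All function names are ours, and the resulting score is an illustrative proxy, not the paper's RTD.

```python
import numpy as np

def mst_weight(m):
    # Prim's algorithm: total MST weight of a complete weighted graph.
    # For a Vietoris-Rips filtration this equals the summed lengths of the
    # finite H0 bars (each bar [0, e] corresponds to an MST edge e).
    n = len(m)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    dist = m[0].copy()
    total = 0.0
    for _ in range(n - 1):
        j = int(np.argmin(np.where(visited, np.inf, dist)))
        total += dist[j]
        visited[j] = True
        dist = np.minimum(dist, m[j])
    return total

def rtd_h0_proxy(P, Pt, b=8, n_runs=5, seed=0):
    # Algorithm-2-style loop: average a topological discrepancy over
    # n_runs random subsamples of size b.  Since min(w, w~) <= w entrywise,
    # MST(w) - MST(min(w, w~)) >= 0, and it vanishes exactly when the two
    # representations induce identical distances on every subsample.
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_runs):
        idx = rng.choice(len(P), size=b, replace=False)
        w = np.linalg.norm(P[idx][:, None] - P[idx][None, :], axis=-1)
        wt = np.linalg.norm(Pt[idx][:, None] - Pt[idx][None, :], axis=-1)
        vals.append(mst_weight(w) - mst_weight(np.minimum(w, wt)))
    return float(np.mean(vals))

rng = np.random.default_rng(4)
P = rng.normal(size=(30, 4))           # representation P
Pt = P[:, :2] * np.array([3.0, 0.1])   # a distorted second representation
score = rtd_h0_proxy(P, Pt)
```

The proxy reproduces the first property of Theorem 1 (a score of exactly zero for identical representations) while the distorted representation receives a strictly positive score.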
This paper proposes a new divergence metric between two point clouds of equal size with a one-to-one correspondence, called Representation Topology Divergence (RTD). It competes with other representation-comparison metrics such as Canonical Correlation Analysis (CCA), Centered Kernel Alignment (CKA), and their variants. The authors propose RTD as a principled way to compare representations learned by deep neural networks. They introduce a new barcode, the R-Cross-Barcode(P, P̃), which captures the difference in topology between the two point clouds in question. RTD is then applied to data from a wide variety of domains, including computer vision and NLP.
This paper focuses on measuring differences between data representations. The authors propose to evaluate them with the Representation Topology Divergence, which estimates dissimilarity based on multi-scale topology. The effectiveness of the proposed method is tested on multiple tasks in the experiments.
Representation Topology Divergence: A Method for Comparing Neural Network Representations.
1 INTRODUCTION . Representations of objects are the essential component learnt by deep neural networks . In opposite to the distance in the original space , similarity of representations are proved to be semantically meaningful . Despite of the significant practical success of deep neural networks many aspect of their behaviour are poorly understood . Only few methods study learned representations without relying on their quality on a specific downstream task . In this work , we focus on the comparison of representations from neural networks . Comparison of representations is an ill-posed problem without a “ ground truth ” answer . Early studies were based on variants of Canonical Correlation Analysis ( CCA ) : SVCCA , ( Raghu et al. , 2017 ) , PWCCA ( Morcos et al. , 2018 ) . Hovewer , CCA-like measures define similarity too loosely since they are invariant to any invertible linear transformation . The Centered Kernel Alignment ( CKA ) , ( Kornblith et al. , 2019 ) is the statistical test to measure the independence of two sets of variables . ( Kornblith et al. , 2019 ) proved it to be more consistent with the intuitive similarity of representations . Particularly , neural networks learn similar representation from different seeds as evaluated by CKA . Another line of work studies alignment between groups of neurons ( Li et al. , 2015 ) , ( Wang et al. , 2018 ) . The similarity of representation is also a topic of a study in neuroscience ( Edelman , 1998 ; Kriegeskorte et al. , 2008 ; Connolly et al. , 2012 ) . Representations ’ comparison metrics like CKA and CCA were used to gain insights on representations obtained in meta-learning ( Raghu et al. , 2020 ) , to compare representations from different layers of language models ( Voita et al. , 2019 ) , study the effect of fine-tuning ( Wu et al. , 2020 ) . Finally , ( Nguyen et al. 
, 2021 ) used CKA to study the phenomenon of a “ block structure ” emerging in wide and deep networks in computer vision and compare their representations . In this paper , we take a topological perspective on representations ’ comparison . We propose the Representation Topology Divergence ( RTD ) score which measures a dissimilarity between two point clouds of equal size with one-to-one correspondence between points . Point clouds are allowed to lie in different ambient spaces . Existing geometrical and topological methods are dedicated to other problems : they are either too general and doesn ’ t incorporate the requirement of one-to-one correspondence ( Khrulkov & Oseledets , 2018 ) , ( Tsitsulin et al. , 2020 ) , or they restrict point clouds to lie in the same ambient space ( Kynkäänniemi et al. , 2019 ) , ( Barannikov et al. , 2021 ) . Such methods ( except for ( Tsitsulin et al. , 2020 ) ) are mostly applied to the evaluation of GANs , where point clouds of real and generated objects are matched . Recently , ( Moor et al. , 2020 ) proposed a loss term to compare a topology of data in original and latent spaces ( with natural one-to-one correspondence ) and applied it as a part of the Topological Autoencoder . In this work , we make the following contributions : 1 . We propose a topologically-inspired approach for comparison of neural network representations ; 2 . We introduce the R-Cross-Barcode ( P , P̃ ) , a tool based on Topological Data Analysis ( TDA ) which measures differences in multi-scale topology of two point clouds P , P̃ with one-to-one correspondence between points ; 3 . Based on the R-Cross-Barcode ( P , P̃ ) , we define the Representation Topology Divergence ( RTD ) , the scalar measuring the multi-scale topological dissimilarity between two representations ; 4 . By doing computational experiments , we show that RTD agrees with an intuitive notion of neural network representations similarity . 
In contrast with most existing approaches, the RTD score is sensitive to cluster structure and other topological structure of the representations, and it correlates well with the disagreement of model predictions. We apply RTD to compare representations in the computer vision and NLP domains across various problems: training dynamics analysis, data distribution shift, transfer learning, ensemble learning, and disentanglement. We also compare RTD with CKA, IMD and SVCCA.

2 COMPARING NEURAL NETWORK REPRESENTATIONS

Our starting point is the geometric perspective on representation learning through the lens of the manifold hypothesis (Goodfellow et al., 2016), according to which real-world data presented in a high-dimensional space are expected to concentrate in the vicinity of a manifold of much lower dimension. The low-dimensional manifold M_P underlying a given data representation P can generally be accessed only through discrete sets of samples. The standard approach to recovering the manifold M_P is to take a sample P and approximate M_P by a set of simplices with vertices from P. A common approach to selecting the simplices approximating M_P is to fix a threshold α > 0 and consider the simplices with edge lengths not exceeding α (Niyogi et al., 2008; Belkin & Niyogi, 2001). It is difficult in general to guess the correct value of the threshold, hence a reasonable viewpoint is to study all thresholds at once, see e.g. (Chazal & Michel, 2017). This can be accomplished by means of a mathematical tool, called the barcode, that quantifies the evolution of manifold topology features over multiple scales. Given two representations, we consider two corresponding graphs with distance-like weights and compare the difference in the two graphs' multi-scale topology.

2.1 R-CROSS-BARCODE

Let P(V), P̃(V) be two representations giving two embeddings of the same data.
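Before comparing two embeddings, it helps to make the barcode just introduced concrete for a single point cloud. In dimension 0 the barcode has a simple description: every finite bar is born at scale 0 and dies when its cluster merges with another, so the death times are exactly the edge weights of a minimum spanning tree of the distance graph (the single-linkage merge heights). A minimal illustration using Prim's algorithm, not the authors' implementation:

```python
import numpy as np

def h0_barcode(dist):
    """Dimension-0 barcode of a point cloud from its distance matrix.

    Each finite bar is [0, d) where d is an edge weight of the minimum
    spanning tree; the single infinite bar is omitted.  Prim's algorithm
    on the dense distance matrix.
    """
    n = len(dist)
    in_tree = [0]
    best = dist[0].copy()          # cheapest connection to the tree
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf     # never re-add tree vertices
        j = int(np.argmin(best))
        deaths.append(float(best[j]))
        in_tree.append(j)
        best = np.minimum(best, dist[j])
    return sorted(deaths)

# three points on a line at 0, 1, 5: clusters merge at scales 1 and 4
pts = np.array([[0.0], [1.0], [5.0]])
dist = np.abs(pts - pts.T)
print(h0_barcode(dist))  # -> [1.0, 4.0]
```

The higher-dimensional barcodes used below require a full persistent-homology computation, but the intuition is the same: intervals record at which scales a topological feature appears and disappears.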
The two embeddings P, P̃ belong in general to different ambient spaces, and we have a natural one-to-one correspondence between points in P and P̃. Given a sample of data V ⊆ V, the representation P = P(V) defines the weighted graph G^w with the vertex set V. The weight w_{AB} of an edge AB is given by the distance between the points P(A) and P(B). Similarly, the representation P̃ = P̃(V) defines the weighted graph G^{w̃} on the same vertex set. The simplicial approximation to the manifold M_P at threshold α consists of the simplices whose edges in G^w have weights not exceeding α. Let G^{w≤α} denote the graph with the vertex set V and the edges with weights not exceeding α. To compare the simplicial approximations to the manifolds M_P and M_P̃ described by the graphs G^{w≤α} and G^{w̃≤α}, we embed both graphs into the graph G^{min(w,w̃)≤α}. The graph G^{min(w,w̃)≤α} contains an edge between vertices A and B exactly when the distance between the points A and B is smaller than α in at least one of the representations P, P̃. Recall that the Vietoris-Rips complex of a graph G equipped with an edge-weight matrix m is the collection of k-simplices, k ≥ 0, given by (k+1)-element subsets of the vertex set of G, with the filtration threshold of a simplex defined by the maximal weight of its edges:

R_α(G^m) = { {A_0, …, A_k} ⊆ Vert(G) | max_{j,l} m_{A_j A_l} ≤ α }.

Our simplicial approximations to the manifolds M_P, M_P̃ at threshold α are the unions of all simplices from the simplicial complexes R_α(G^w), R_α(G^{w̃}). The dissimilarity between the filtered simplicial complexes R_α(G^w) and R_α(G^{w̃}) can be quantified using homological methods; the relevant tools here are homology, the Whitehead theorem and the homology exact sequence. Because of space limitations, we sketch how this leads to our construction, described below, in the Appendix, Section B.
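The Vietoris-Rips construction above is simple to enumerate explicitly for small examples: a vertex subset enters the complex as soon as its longest edge does. A brute-force sketch (illustrative only; real TDA libraries do this far more efficiently):

```python
import numpy as np
from itertools import combinations

def rips_simplices(m, alpha, max_dim=2):
    """Simplices of the Vietoris-Rips complex R_alpha(G^m).

    A (k+1)-element vertex subset enters the complex when its longest
    edge does, i.e. when its filtration value max_{j,l} m[A_j, A_l]
    does not exceed alpha.  Returns (simplex, filtration_value) pairs.
    """
    n = len(m)
    simplices = []
    for k in range(max_dim + 1):
        for s in combinations(range(n), k + 1):
            filt = max((m[a, b] for a, b in combinations(s, 2)), default=0.0)
            if filt <= alpha:
                simplices.append((s, filt))
    return simplices

# three points: the edge of length 2 closes the triangle only at alpha >= 2
m = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
names = [s for s, _ in rips_simplices(m, alpha=1.0)]
print(names)  # the triangle (0, 1, 2) is absent: its filtration value is 2.0
```

At α = 1 the complex is a path of two edges; raising α to 2 adds the long edge and the filled triangle, which is exactly the multi-scale behaviour the barcode records.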
Concretely, to compare the multi-scale topology of the two weighted graphs G^w and G^{w̃}, we introduce the weighted graph Ĝ^{w,w̃} with a doubled set of vertices and edge weights defined as follows. For each vertex A ∈ Vert(G) we add an extra vertex A′ to Ĝ, and define the distance-like edge weights in Ĝ^{w,w̃} as

d_{AB} = min(w_{AB}, w̃_{AB}),   d_{AB′} = d_{A′B} = w_{AB},   d_{AA′} = 0,   d_{A′B′} = 0,   (1)

where B ∈ Vert(G), A ≠ B. In practice G, Ĝ are complete graphs, and the edge weights of the graph Ĝ^{w,w̃} are given by the 2N × 2N, N = |V|, symmetric matrix

m = \begin{pmatrix} 0 & w \\ w & \min(w, \tilde{w}) \end{pmatrix},   (2)

where w and w̃ are the distance-like edge-weight matrices of G^w and G^{w̃}. Next, we construct the Vietoris-Rips filtered simplicial complex from the graph Ĝ^{w,w̃}. Intuitively, the i-th barcode of R_α(Ĝ^{w,w̃}) records the i-dimensional topological features that are born in R_α(G^{w̃}) but are not yet born at the same place in R_α(G^w), and the (i+1)-dimensional topological features that are dead in R_α(G^{w̃}) but are not yet dead at this place in R_α(G^w); see Theorem 1 below.

Definition. The R-Cross-Barcode_i(P, P̃) is the set of intervals recording the "births" and "deaths" of i-dimensional topological features in the filtered simplicial complex R_α(Ĝ^{w,w̃}).

The R-Cross-Barcode_*(P, P̃) (for Representations' Cross-Barcode) records the differences in the multi-scale topology of the two embeddings. The topological features with longer lifespans generally indicate the essential features.

Theorem 1. Basic properties of R-Cross-Barcode_*(P, P̃):
• if P(A) = P̃(A) for every object A ∈ V, then R-Cross-Barcode_*(P, P̃) = ∅;
• if all distances within P̃(V) are zero, i.e.
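Assembling the weight matrix of Eq. (2) is a one-liner in numpy. A sketch, with the vertex ordering (the N auxiliary copies A′ first, then the N original vertices A) chosen to match the block layout of the formula:

```python
import numpy as np

def doubled_graph_weights(w, w_tilde):
    """Edge-weight matrix of the doubled graph from Eq. (2).

    Vertex order: the N auxiliary copies A' first (pairwise distance 0),
    then the N original vertices A; the off-diagonal blocks carry w and
    the lower-right block min(w, w~).
    """
    n = len(w)
    zeros = np.zeros((n, n))
    return np.block([[zeros, w],
                     [w, np.minimum(w, w_tilde)]])

w = np.array([[0.0, 2.0], [2.0, 0.0]])
w_tilde = np.array([[0.0, 1.0], [1.0, 0.0]])
m = doubled_graph_weights(w, w_tilde)
print(m.shape)  # (4, 4)
```

Since w and w̃ are symmetric distance matrices, m is symmetric as well, so it can be fed directly to any Vietoris-Rips persistence routine that accepts a distance matrix.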
all objects are represented by the same point in P̃, then R-Cross-Barcode_*(P, P̃) = Barcode_*(P), the standard barcode of the point cloud P;
• for any value of the threshold α, the following sequence of natural linear maps of homology groups

\cdots \to H_i(R_\alpha(G^w)) \to H_i(R_\alpha(G^{\min(w,\tilde{w})})) \to H_i(R_\alpha(\hat{G}^{w,\tilde{w}})) \to H_{i-1}(R_\alpha(G^w)) \to H_{i-1}(R_\alpha(G^{\min(w,\tilde{w})})) \to \cdots   (3)

is exact; recall that this means that the kernel of each map is the image of the previous one.

The proof of the first two properties is immediate, and the third property follows from the exactness of the corresponding sequence of simplicial complexes; see the Appendix for more details.

Algorithm 1: R-Cross-Barcode_i(P, P̃)
  Input: w, w̃ — matrices of pairwise distances within the point clouds P, P̃
  Require: vr(m) — function computing a filtered complex from a pairwise-distance matrix m
  Require: B(C, i) — function computing the persistence intervals of a filtered complex C in dimension i
  m ← \begin{pmatrix} 0 & w \\ w & \min(w, \tilde{w}) \end{pmatrix}
  R-Cross-Barcode_i ← B(vr(m), i)
  Return: the intervals' list R-Cross-Barcode_i(P, P̃) representing the "births" and "deaths" of topological discrepancies between P and P̃

Algorithm 2: RTD(P, P̃) (see Section 2.3 for details; suggested default values: b = 500, n = 10)
  Input: P ∈ R^{|V|×D}, P̃ ∈ R^{|V|×D̃} — data representations
  for j = 1 to n do
    V_j ← random_choice(V, b)
    P_j, P̃_j ← P(V_j), P̃(V_j)
    B_j ← R-Cross-Barcode_1(P_j, P̃_j), the intervals' list calculated by Algorithm 1
    rtd_j ← sum of the lengths of all intervals in B_j
  end for
  RTD(P, P̃) ← mean_j(rtd_j)
  Return: the number RTD(P, P̃) representing the discrepancy between the representations P, P̃
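Algorithm 2 is a straightforward subsample-and-average loop around Algorithm 1. The sketch below keeps that structure but takes the barcode routine as a parameter, since computing R-Cross-Barcode_1 needs a persistent-homology backend (a Vietoris-Rips solver applied to the matrix of Eq. (2)); the `cross_barcode1` argument is a placeholder for such a backend, not a real library call:

```python
import numpy as np

def rtd(p, p_tilde, cross_barcode1, b=500, n=10, seed=0):
    """RTD(P, P~) following the structure of Algorithm 2.

    p, p_tilde: (|V|, D) and (|V|, D~) representations of the same objects.
    cross_barcode1: callable (P_j, P~_j) -> list of (birth, death) intervals;
    a persistent-homology backend is assumed to provide it.
    """
    rng = np.random.default_rng(seed)
    num_points = len(p)
    rtds = []
    for _ in range(n):
        # sample b objects and restrict both representations to them
        idx = rng.choice(num_points, size=min(b, num_points), replace=False)
        bars = cross_barcode1(p[idx], p_tilde[idx])
        # sum of interval lengths in the dimension-1 cross-barcode
        rtds.append(sum(death - birth for birth, death in bars))
    return float(np.mean(rtds))

# with a stub barcode the sampling/averaging mechanics can be checked alone
stub = lambda pj, ptj: [(0.0, 1.0), (0.5, 2.0)]
x = np.zeros((20, 3))
print(rtd(x, x, stub, b=10, n=4))  # -> 2.5 (= 1.0 + 1.5 on every subsample)
```

Plugging in a real dimension-1 persistence routine on `doubled_graph_weights`-style matrices recovers the full method; the subsampling keeps the persistence computation tractable for large |V|.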
In this paper, the authors proposed a method called “representation topology divergence (RTD)” for comparing structured data (e.g., point clouds and sets of embeddings derived by neural networks). The authors applied RTD to various machine learning tasks and compared it with other classic similarity measurements of structured data (e.g., CKA, SVCCA, IMD). Experimental results verify the rationality of the proposed method.
SP:78e644ea81d66ad3dc435fbce9c621d70f42765f
On Hard Episodes in Meta-Learning
Existing meta-learners primarily focus on improving the average task accuracy across multiple episodes. Different episodes, however, may vary in hardness and quality, leading to a wide gap in the meta-learner's performance across episodes. Understanding this issue is particularly critical in industrial few-shot settings, where there is limited control over test episodes as they are typically uploaded by end-users. In this paper, we empirically analyse the behaviour of meta-learners on episodes of varying hardness across three standard benchmark datasets: CIFAR-FS, mini-ImageNet, and tiered-ImageNet. Surprisingly, we observe a wide gap in accuracy of around 50% between the hardest and easiest episodes across all the standard benchmarks and meta-learners. We additionally investigate various properties of hard episodes and highlight their connection to catastrophic forgetting during meta-training. To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning. We find that adversarial training strategies are much more powerful than curriculum learning in improving the prediction performance on hard episodes.

1 INTRODUCTION

Humans have a remarkable ability to learn new concepts from very few examples and generalize effectively to unseen tasks. However, standard deep learning approaches still lag behind human capabilities in learning from few examples. For large over-parameterized deep models, learning with general supervision from only a few examples leads to over-fitting and thus poor generalization. To circumvent this, the paradigm of few-shot learning (Wang et al., 2020; Fei-Fei et al., 2006; Vinyals et al., 2017) aims to effectively learn new concepts from very few labeled examples. These learned concepts can generalize well to future unseen learning tasks.
Several frameworks have been proposed for tackling the few-shot learning scenario: transfer learning (Dhillon et al., 2019), self-training (Phoo & Hariharan, 2020) and meta-learning (Hospedales et al., 2020; Finn et al., 2017; Snell et al., 2017). Meta-learning in particular aims to learn the process of learning from few examples and has shown remarkable performance across various few-shot benchmarks (Hospedales et al., 2020). In meta-learning, several few-shot tasks (episodes) are sampled from a set of base classes, and the underlying model is trained to perform well on these tasks, leading to improved generalization when learning from only a few examples belonging to novel and unseen classes. Existing meta-learners such as prototypical networks (Snell et al., 2017), MAML (Finn et al., 2017), MetaOptNet (Lee et al., 2019), and R2D2 (Bertinetto et al., 2018) primarily focus on improving prediction performance on average across multiple episodes. However, different episodes have distinct characteristics and hardness, which might lead to a wide variance in prediction accuracy across episodes. This problem is much more prevalent in few-shot models deployed in industry. For example, meta-trained models are often deployed in the cloud for end-users to use for various tasks such as object recognition, detection, and semantic segmentation in computer vision, and natural language understanding in NLP. In such settings, the end-users upload their own few-shot dataset to perform predictions on new and unseen examples belonging to novel classes. In practice, different users may upload few-shot datasets of varying quality and hardness, leading to a wide disparity in performance across different users. To draw a parallel to the widely accepted experimental protocols in meta-learning, each uploaded few-shot dataset and the corresponding unseen examples is equivalent to a test episode.
In this paper, we study this issue and investigate how existing state-of-the-art meta-learners (Snell et al., 2017; Bertinetto et al., 2018; Lee et al., 2019) perform on episodes of varying hardness. Across three benchmark datasets, CIFAR-FS, mini-ImageNet, and tiered-ImageNet, we observe that there is a gap of ≈ 50% in prediction accuracy between the easiest and hardest episodes. To this end, we identify several intriguing properties of hard episodes in meta-learning. For instance, we find that hard episodes are forgotten more easily than easy episodes during meta-training. Episode forgetting occurs when the underlying meta-learner forgets acquired knowledge by the end of meta-training. To improve prediction performance on hard episodes, we investigate and benchmark various adversarial training and curriculum learning strategies that can be used jointly with any existing meta-learner. Empirically, we find that adversarial training strategies are much more powerful than curriculum learning in improving the prediction performance on hard episodes. The aim of our paper is not to chase another state-of-the-art in meta-learning, but to perform a fine-grained inspection of hard episodes across various meta-learning methods. In summary, we make the following contributions:
• We present a detailed analysis of episode hardness in meta-learning across few-shot benchmarks and state-of-the-art meta-learners. In particular, we study various properties (e.g., semantic characteristics, forgetting) of episode hardness across different meta-learners and architectures.
• We find strong connections between episode hardness and catastrophic forgetting in meta-learning. While catastrophic forgetting can occur when meta-training with multiple datasets in sequence (Yap et al., 2020), we observe that forgetting events can occur even when the tasks during meta-training are drawn from a single dataset.
In particular, we find that hard episodes are easy to forget, while easy episodes are difficult to forget.
• Based on our analysis, we investigate and benchmark different adversarial training and curriculum training strategies to augment general-purpose meta-training for improving prediction performance on hard episodes. Empirically, we find that although there is no one-size-fits-all solution, adversarial meta-training strategies are more powerful when compared to curriculum learning strategies.

2 BACKGROUND AND RELATED WORK

Meta-learning aims to learn an underlying model that can generalize and adapt well to examples from unseen classes by the process of learning to learn. This is primarily achieved by mimicking the evaluation and adaptation procedure during meta-training. In general, there are three types of meta-learners: (a) memory-based methods (Ravi & Larochelle, 2017; Munkhdalai et al., 2018; Santoro et al., 2016) adapt to novel classes with a memory attached to the meta-learner; (b) metric-learning based methods (Snell et al., 2017; Sung et al., 2017) aim to learn transferable deep representations which can adapt to unseen classes without any additional fine-tuning; (c) optimization-based methods (Finn et al., 2017; Lee et al., 2019; Bertinetto et al., 2018) learn a good pre-training initialization for effective transfer to unseen tasks with only a few optimization steps. Although the primary focus of our work is meta-learning, we note that other few-shot learning paradigms such as transfer learning (Chen et al., 2021; Sun et al., 2019; Dhillon et al., 2020) have also shown competitive performance with meta-learning. While there has been significant progress in improving the state-of-the-art in meta-learning, very little work investigates the effectiveness of existing meta-learning approaches on episodes of varying hardness. A recent and concurrent work by Arnold et al. (2021) discusses episode difficulty and the impact of random episodic sampling during meta-training. Based on their analysis, Arnold et al. (2021) propose a re-weighted optimization framework for meta-training based on importance sampling. Although our paper and Arnold et al. (2021) tackle similar problems of episodic hardness, several points distinguish our work:
• We provide a much more fine-grained analysis of episode hardness than Arnold et al. (2021). Arnold et al. (2021) primarily discuss the transferability of episodes across different meta-learners, while we find and investigate a strong connection between episode hardness and catastrophic forgetting.
• Arnold et al. (2021) propose a loss re-weighting framework for improving the average accuracy across episodes. In contrast, we investigate the effectiveness of adversarial training (Gong et al., 2020) and general curriculum learning techniques in improving the average as well as worst-case prediction performance in meta-learning.
Adversarial meta-learning techniques have previously been used in conjunction with data augmentation (Ni et al., 2021) to select the augmentation type resulting in the worst-case loss among different augmentation techniques. In this paper, we focus on how such strategies can be useful in improving the prediction performance on hard episodes in addition to the average accuracy.

3 RETHINKING EPISODIC ACCURACY

Existing state-of-the-art meta-learners (Finn et al., 2017; Lee et al., 2019; Snell et al., 2017; Bertinetto et al., 2018) primarily focus on optimizing the average loss across multiple training episodes or tasks. However, the average performance in isolation does not give enough insight into how meta-learners perform on episodes of varying quality and hardness.
Such insights can be particularly crucial for investigating and debugging meta-learning models deployed in the wild, where the model can encounter diverse test episodes. In this section, we go beyond the average accuracy across different test episodes and evaluate meta-learners on episodes of varying hardness. First, we discuss how to quantify the hardness of an episode, and then discuss the performance of meta-learners on hard episodes.

3.1 WHAT IS A GOOD MEASURE OF EPISODE HARDNESS?

Episodic sampling (i.e. sampling various few-shot tasks from a base dataset) in meta-learning takes place in two steps: (i) first, the episode classes are sampled from the class distribution of the base classes: c ∼ p(C_base); (ii) next, an episode τ is sampled from the data distribution conditioned on the set of sampled classes c: τ ∼ p(D | c), where D is the base dataset. An episode τ consists of a set of support examples τ_s and query examples τ_q. In few-shot learning, an n-way, k-shot episode is sampled, which results in sampling n classes and k support examples per class. Based on this, the meta-learning optimization objective can be written as

θ* = argmin_θ E_τ [ ℓ(F_{θ′}, τ_q) ],   (1)

where F is the base architecture with model parameters θ, and θ′ = A(θ, τ_s) is the fine-tuning step with the support examples. Different meta-learners have different types of fine-tuning procedures, and we direct the readers to (Finn et al., 2017; Snell et al., 2017; Bertinetto et al., 2018) for more information on the characteristics of A. Based on this definition, we define the hardness of an episode H(τ) in terms of the loss incurred on the query examples in the episode:

H(τ) = ℓ(F_{θ*}, τ_q).   (2)

We choose the query loss as a metric for hardness because of its inherent simplicity in computation as well as interpretation.
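As a concrete instance of Eq. (2), the sketch below measures hardness with a prototypical-network-style adaptation A: class prototypes are averaged from the support set, query examples are scored by negative squared distance to each prototype, and H(τ) is the mean cross-entropy on the query set. This is an illustrative stand-in operating on pre-computed embeddings, not the paper's pipeline; the names and the n-way/k-shot layout are ours:

```python
import numpy as np

def episode_hardness(support, query, query_labels):
    """H(tau): mean query cross-entropy under a prototypical classifier.

    support: (n_way, k_shot, d) embedded support examples per class.
    query: (q, d) embedded query examples; query_labels: (q,) class ids.
    """
    prototypes = support.mean(axis=1)                       # (n_way, d)
    # logits = negative squared Euclidean distance to each prototype
    dists = ((query[:, None, :] - prototypes[None]) ** 2).sum(-1)
    logits = -dists
    logits -= logits.max(axis=1, keepdims=True)             # stable softmax
    log_probs = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return float(-log_probs[np.arange(len(query)), query_labels].mean())

rng = np.random.default_rng(1)
# 2-way 5-shot toy episode with well-separated classes -> low hardness
support = np.stack([rng.normal(0, 0.1, (5, 4)), rng.normal(5, 0.1, (5, 4))])
query = np.concatenate([rng.normal(0, 0.1, (3, 4)), rng.normal(5, 0.1, (3, 4))])
labels = np.array([0, 0, 0, 1, 1, 1])
print(episode_hardness(support, query, labels))  # near 0 for this easy episode
```

Ranking sampled episodes by this scalar is all that is needed to separate "easy" from "hard" episodes in the analyses that follow.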
In addition, we find a strong negative correlation between the episodic loss and accuracy (≈ −0.92 for mini-ImageNet and ≈ −0.89 for tiered-ImageNet with prototypical networks). This holds for other meta-learners such as R2D2 as well (see Appendix A for more details). Alternatively, the hardness of an episode can also be defined as the average log-odds of the query examples (Dhillon et al., 2020).
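The reported correlation is an ordinary Pearson coefficient over per-episode (loss, accuracy) pairs. A sketch on synthetic pairs, only to show the computation (the numbers are made up, not the paper's data):

```python
import numpy as np

# Synthetic per-episode (loss, accuracy) pairs: accuracy decreases roughly
# linearly in loss plus noise, mimicking the qualitative relationship.
rng = np.random.default_rng(0)
loss = rng.uniform(0.2, 2.5, size=200)
accuracy = np.clip(1.0 - 0.35 * loss + rng.normal(0, 0.05, 200), 0.0, 1.0)
r = np.corrcoef(loss, accuracy)[0, 1]
print(round(r, 2))  # strongly negative, as in the paper's measurements
```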
This paper proposes to deviate from studying the average performance of meta-learners on different tasks, and also report separately their performance on ‘hard’ episodes, measuring ‘worst case’ performance which may be critical for deployment in real-world applications. They provide a definition for episode hardness, and investigate the performance on easy and hard episodes. They find that hard episodes are more often forgotten during meta-training compared to easy episodes. They then propose two strategies to mitigate this, based on adversarial and curriculum training. Their experiments on 3 datasets using 3 meta-learners show that the former sometimes leads to improved few-shot learning performance.
SP:b4ef6cc6fc1fb5a193a8fd67e3b48b675cfc0f10
The submission investigates and characterizes episode difficulty (defined in terms of query loss) in few-shot classification, and reports empirical observations on CIFAR-FS, mini-ImageNet, and tiered-ImageNet. Across all benchmarks, a wide accuracy gap is observed between the hardest and easiest test episodes for multiple combinations of meta-learner (Prototypical Networks, R2D2) and network architecture (four-layer ConvNet, ResNet). The paper presents support and query images for easy and hard episodes and concludes that a mismatch in semantic or shape characteristics, or in the number of objects, between support and query images often causes misclassification. The submission then examines how the accuracy of easy and hard episodes evolves over training. It uncovers global and local forgetting events and reports that the latter occur more frequently for hard episodes. Finally, the paper examines two strategies for taking easy and hard episodes into account: adversarial training and adversarial curriculum training. Adversarial training is shown to yield modest improvements over regular training, while adversarial curriculum training does not perform significantly better than regular training.
SP:b4ef6cc6fc1fb5a193a8fd67e3b48b675cfc0f10
On Hard Episodes in Meta-Learning
Existing meta-learners primarily focus on improving the average task accuracy across multiple episodes. Different episodes, however, may vary in hardness and quality, leading to a wide gap in the meta-learner's performance across episodes. Understanding this issue is particularly critical in industrial few-shot settings, where there is limited control over test episodes, as they are typically uploaded by end-users. In this paper, we empirically analyse the behaviour of meta-learners on episodes of varying hardness across three standard benchmark datasets: CIFAR-FS, mini-ImageNet, and tiered-ImageNet. Surprisingly, we observe a wide gap in accuracy of around 50% between the hardest and easiest episodes across all the standard benchmarks and meta-learners. We additionally investigate various properties of hard episodes and highlight their connection to catastrophic forgetting during meta-training. To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning. We find that adversarial training strategies are much more powerful than curriculum learning in improving the prediction performance on hard episodes.

1 INTRODUCTION

Humans have a remarkable ability to learn new concepts from very few examples and generalize effectively to unseen tasks. However, standard deep learning approaches still lag behind human capabilities in learning from few examples. For large over-parameterized deep models, learning with general supervision from only a few examples leads to over-fitting and thus poor generalization. To circumvent this, the paradigm of few-shot learning (Wang et al., 2020; Fei-Fei et al., 2006; Vinyals et al., 2017) aims to effectively learn new concepts from very few labeled examples. These learned concepts can generalize well to future unseen learning tasks.
Several frameworks have been proposed for tackling the few-shot learning scenario: transfer learning (Dhillon et al., 2019), self-training (Phoo & Hariharan, 2020), and meta-learning (Hospedales et al., 2020; Finn et al., 2017; Snell et al., 2017). Meta-learning in particular aims to learn the process of learning from few examples, and has shown remarkable performance across various few-shot benchmarks (Hospedales et al., 2020). In meta-learning, several few-shot tasks (episodes) are sampled from a set of base classes, and the underlying model is trained to perform well on these tasks, leading to improved generalization when learning from only a few examples belonging to novel and unseen classes. Existing meta-learners such as prototypical networks (Snell et al., 2017), MAML (Finn et al., 2017), MetaOptNet (Lee et al., 2019), and R2D2 (Bertinetto et al., 2018) primarily focus on improving prediction performance on average across multiple episodes. However, different episodes have distinct characteristics and hardness, which can lead to a wide variance in prediction accuracy across episodes. This problem is much more prevalent for few-shot models deployed in industry. For example, meta-trained models are often deployed in the cloud for end-users to use for various tasks, such as object recognition, detection, and semantic segmentation in computer vision, and natural language understanding in NLP. In such settings, the end-users upload their own few-shot datasets to perform predictions on new and unseen examples belonging to novel classes. In practice, different users may upload few-shot datasets of varying quality and hardness, leading to a wide disparity in performance across different users. To draw a parallel to the widely accepted experimental protocols in meta-learning, each uploaded few-shot dataset and its corresponding unseen examples is equivalent to a test episode.
In this paper, we study this issue and investigate how existing state-of-the-art meta-learners (Snell et al., 2017; Bertinetto et al., 2018; Lee et al., 2019) perform on episodes of varying hardness. Across three benchmark datasets (CIFAR-FS, mini-ImageNet, and tiered-ImageNet), we observe that there is a gap of ≈ 50% in prediction accuracy between the easiest and hardest episodes. We further identify several intriguing properties of hard episodes in meta-learning. For instance, we find that hard episodes are forgotten more easily than easy episodes during meta-training; episode forgetting occurs when the underlying meta-learner loses acquired knowledge by the end of meta-training. To improve prediction performance on hard episodes, we investigate and benchmark various adversarial training and curriculum learning strategies that can be used jointly with any existing meta-learner. Empirically, we find that adversarial training strategies are much more powerful than curriculum learning in improving the prediction performance on hard episodes. The aim of our paper is not to chase another state of the art in meta-learning, but to perform a fine-grained inspection of hard episodes across various meta-learning methods. In summary, we make the following contributions:

• We present a detailed analysis of episode hardness in meta-learning across few-shot benchmarks and state-of-the-art meta-learners. In particular, we study various properties (e.g., semantic characteristics, forgetting) of episode hardness across different meta-learners and architectures.

• We find strong connections between episode hardness and catastrophic forgetting in meta-learning. While catastrophic forgetting can occur when meta-training with multiple datasets in sequence (Yap et al., 2020), we observe that forgetting events can occur even when the tasks during meta-training are drawn from a single dataset.
In particular, we find that hard episodes are easy to forget, while easy episodes are difficult to forget.

• Based on our analysis, we investigate and benchmark different adversarial training and curriculum training strategies that augment general-purpose meta-training to improve prediction performance on hard episodes. Empirically, we find that although there is no one-size-fits-all solution, adversarial meta-training strategies are more powerful than curriculum learning strategies.

2 BACKGROUND AND RELATED WORK

Meta-learning aims to learn an underlying model that can generalize and adapt well to examples from unseen classes through the process of learning to learn. This is primarily achieved by mimicking the evaluation and adaptation procedure during meta-training. In general, there are three types of meta-learners: (a) memory-based methods (Ravi & Larochelle, 2017; Munkhdalai et al., 2018; Santoro et al., 2016) adapt to novel classes with a memory attached to the meta-learner; (b) metric-learning-based methods (Snell et al., 2017; Sung et al., 2017) aim to learn transferable deep representations that can adapt to unseen classes without any additional fine-tuning; (c) optimization-based methods (Finn et al., 2017; Lee et al., 2019; Bertinetto et al., 2018) learn a good pre-training initialization for effective transfer to unseen tasks with only a few optimization steps. Although the primary focus of our work is meta-learning, we note that other few-shot learning paradigms, such as transfer learning (Chen et al., 2021; Sun et al., 2019; Dhillon et al., 2020), have also shown performance competitive with meta-learning. While there has been significant progress in improving the state of the art in meta-learning, little work investigates the effectiveness of existing meta-learning approaches on episodes of varying hardness. A recent and concurrent work by Arnold et al.
(2021) discusses episode difficulty and the impact of random episodic sampling during meta-training. Based on their analysis, Arnold et al. (2021) propose a re-weighted optimization framework for meta-training based on importance sampling. Although our paper and Arnold et al. (2021) tackle similar problems of episodic hardness, several points distinguish our work:

• We provide a much more fine-grained analysis of episode hardness than Arnold et al. (2021). Arnold et al. (2021) primarily discuss the transferability of episodes across different meta-learners, while we find and investigate a strong connection between episode hardness and catastrophic forgetting.

• Arnold et al. (2021) propose a loss re-weighting framework for improving the average accuracy across episodes. In contrast, we investigate the effectiveness of adversarial training (Gong et al., 2020) and general curriculum learning techniques in improving the average as well as worst-case prediction performance in meta-learning.

Adversarial meta-learning techniques have previously been used in conjunction with data augmentation (Ni et al., 2021) to select the augmentation type resulting in the worst-case loss among different augmentation techniques. In this paper, we focus on how such strategies can be useful in improving the prediction performance on hard episodes in addition to the average accuracy.

3 RETHINKING EPISODIC ACCURACY

Existing state-of-the-art meta-learners (Finn et al., 2017; Lee et al., 2019; Snell et al., 2017; Bertinetto et al., 2018) primarily focus on optimizing the average loss across multiple training episodes or tasks. However, the average performance in isolation does not give enough insight into how meta-learners perform on episodes of varying quality and hardness.
Such insights can be particularly crucial for investigating and debugging meta-learning models deployed in the wild, where the model can encounter diverse test episodes. In this section, we go beyond the average accuracy across different test episodes and evaluate meta-learners on episodes of varying hardness. First, we discuss how to quantify the hardness of an episode, and then discuss the performance of meta-learners on hard episodes.

3.1 WHAT IS A GOOD MEASURE OF EPISODE HARDNESS?

Episodic sampling (i.e., sampling various few-shot tasks from a base dataset) in meta-learning takes place in two steps: (i) first, the episode classes are sampled from the class distribution of the base classes: c ∼ p(C_base); (ii) next, an episode τ is sampled from the data distribution conditioned on the set of sampled classes c: τ ∼ p(D|c), where D is the base dataset. An episode τ consists of a set of support examples τ_s and query examples τ_q. In few-shot learning, an n-way, k-shot episode is sampled, which results in sampling n classes and k support examples per class. Based on this, the meta-learning optimization objective can be written in the general form

θ* = argmin_θ E_τ [ ℓ(F_θ′, τ_q) ]    (1)

where F is the base architecture with model parameters θ, and θ′ = A(θ, τ_s) is the result of the fine-tuning step with the support examples. Different meta-learners have different types of fine-tuning procedures, and we direct the reader to (Finn et al., 2017; Snell et al., 2017; Bertinetto et al., 2018) for more information on the characteristics of A. Based on this definition, we define the hardness of an episode H(τ) in terms of the loss incurred on the query examples of the episode:

H(τ) = ℓ(F_θ*, τ_q)    (2)

We choose query loss as a metric for hardness because of its simplicity in both computation and interpretation.
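To make the definition concrete, the sketch below computes H(τ) for a prototypical-network-style meta-learner, where the fine-tuning step A(θ, τ_s) reduces to averaging support embeddings into class prototypes and the query loss is the cross-entropy over negative squared-distance logits. The identity embedding and the toy 2-way episodes are illustrative stand-ins, not the paper's architecture or data:

```python
import math

def class_prototypes(support, labels, n_way):
    """Fine-tuning step A(theta, tau_s) for a ProtoNet: mean embedding per class."""
    protos = []
    for c in range(n_way):
        members = [s for s, y in zip(support, labels) if y == c]
        dim = len(members[0])
        protos.append([sum(m[d] for m in members) / len(members) for d in range(dim)])
    return protos

def episode_hardness(support, s_labels, query, q_labels, n_way):
    """H(tau): average cross-entropy loss on the query set (Eq. 2)."""
    protos = class_prototypes(support, s_labels, n_way)
    total = 0.0
    for x, y in zip(query, q_labels):
        # Logits are negative squared Euclidean distances to each prototype.
        logits = [-sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p in protos]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[y]  # -log softmax probability of the true class
    return total / len(query)

# A well-separated (easy) episode vs. an ambiguous (hard) one, 2-way 1-shot:
easy = episode_hardness([(0.0, 0.0), (10.0, 10.0)], [0, 1],
                        [(0.1, 0.0), (9.9, 10.0)], [0, 1], 2)
hard = episode_hardness([(0.0, 0.0), (1.0, 1.0)], [0, 1],
                        [(0.6, 0.4), (0.4, 0.6)], [0, 1], 2)
# hard > easy: the ambiguous episode incurs a larger query loss
```

Ranking episodes by this score is all that the later analysis of easy versus hard episodes requires.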
In addition, we find a strong negative correlation between the episodic loss and accuracy (≈ −0.92 for mini-ImageNet and ≈ −0.89 for tiered-ImageNet with prototypical networks). This holds for other meta-learners such as R2D2 as well (see Appendix A for more details). Alternatively, the hardness of an episode can also be defined as the average log-odds of the query examples (Dhillon et al., 2020).
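The reported loss-accuracy correlation is a plain Pearson coefficient over per-episode (loss, accuracy) pairs; a minimal sketch on hypothetical values (not the paper's measurements):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Per-episode query losses and accuracies (hypothetical illustrative values).
losses = [0.2, 0.5, 0.9, 1.4, 2.1]
accs   = [0.95, 0.86, 0.71, 0.55, 0.38]
r = pearson(losses, accs)  # strongly negative, mirroring the reported ~ -0.9
```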
In this paper, the authors analyze the hardness of different episodes in an episodic training regime in the context of meta-learning (for few-shot classification tasks). The authors show that even though different meta-learners perform relatively similarly on these tasks on average, their performance varies significantly on harder tasks. The authors also show that performance on hard tasks can be somewhat improved using an adversarial training strategy.
Learning to Dequantise with Truncated Flows
1 INTRODUCTION

Deep generative models aim to model a distribution of high-dimensional natural data. Many of these methods assume that the data is continuous, despite it being digitally stored in bits and therefore intrinsically discrete. This discrepancy has led to recent interest in dequantising discrete data types to avoid some of the degeneracies of fitting continuous models to discrete data (Theis et al., 2015). When data is ordinal (such as pixel intensities), a naive dequantisation scheme can be obtained by adding uniform noise to the discrete values (Theis et al., 2015). More recently, a generalisation of this approach, where dequantisation is seen as inference in a latent variable model, has also been proposed (Ho et al., 2019; Hoogeboom et al., 2020; Nielsen et al., 2020). However, these methods may not be directly applied in cases where the data is categorical (Hoogeboom et al., 2021), because such data is not naturally represented in a vector space. Attempts at devising dequantisation schemes for categorical data by building upon the variational dequantisation scheme have recently been proposed in Hoogeboom et al. (2021) and Lippe & Gavves (2020). These approaches dequantise a categorical input into a latent continuous space. Ideally, a dequantisation scheme for categorical data should be: (i) easily learnable by standard optimization techniques, and (ii) lossless where possible, in the sense that quantisation should recover the input category. Argmax Flows (Hoogeboom et al., 2021) offer lossless dequantisation, but the support of the stochastic embedding is chosen arbitrarily and not optimised, and the dimensionality of the continuous (dequantised) variable is required to be at least logarithmic in the number of categories of the input data.
Moreover, the method makes minimal assumptions about the topology of the categorical data, disregarding possible relationships between categories, which can occur, for example, between word indices in natural language (Bengio et al., 2003) or the atomic representations of a molecule's constituents. On the other hand, Categorical Normalizing Flows (CatNF; Lippe & Gavves 2020) can learn a more compact representation of the input category, but the dequantisation might be lossy, given that the posteriors over the continuous variables have overlapping support. Is there a trade-off between these two schemes? In this paper, we propose TRUFL, which builds upon the aforementioned variational dequantisation techniques. We achieve this by using truncated posterior distributions over the continuous variables, with potentially bounded and disjoint support. In addition, we present a parametrisation of truncated distributions that can be optimised with standard stochastic reparametrisation techniques. Overall, our method inherits strengths of both CatNF and Argmax Flows. Our experimental results highlight the effectiveness of our approach.

2 BACKGROUND: VARIATIONAL DEQUANTISATION

Dequantisation refers to the process of embedding discrete-valued data into a continuous space, which allows us to employ density-based models to capture the distribution of the continuous representation. Concretely, let z = {z_1, …, z_T} denote this continuous representation, and x = {x_1, …, x_T} describe the observed data, where each x_t represents, e.g., a node in a graph or a token in a sentence. Each x_t is assumed to be categorical, i.e., x_t ∈ {0, …, K − 1} for some integer K > 1. z can be interpreted as a latent variable, which follows a prior distribution p(z). We refer to q(z_t|x_t) as the dequantiser and p(x_t|z_t) as the quantiser.
Training can be achieved by maximizing a variational lower bound on the marginal likelihood of the data, i.e.:

log p(x) ≥ E_{q(z|x)} [ log ( p(x|z) p(z) / q(z|x) ) ] =: L(x)    (1)

We are interested in the case where the representation z_t can be inferred from x_t alone, so we choose the factorisation p(x|z) = ∏_t p(x_t|z_t) and q(z|x) = ∏_t q(z_t|x_t), following Lippe & Gavves (2020). In this case, the "optimal" quantiser p(x_t|z_t) can be conveniently computed as:

argmax_{p(x_t|z_t)} E_{q(x)} [ L(x) ] = q(z_t|x_t) p̃(x_t) / ∑_{x′_t=0}^{K−1} q(z_t|x′_t) p̃(x′_t) = q(x_t|z_t) =: p(x_t|z_t)    (2)

where q(x) denotes the (empirical) data distribution, and p̃(x_t) denotes the estimate of the marginal distribution of each category (which can be obtained by counting; in the case of textual data, this corresponds to the unigram distribution over words). This equation shows that the optimal quantiser can be obtained implicitly by applying Bayes' rule with the parametric dequantiser q(z_t|x_t). The factorisation we chose for p(x|z) and q(z|x) is crucial for the argmax above to take this simple form. Without this assumption, the solution will involve a combinatorial sum or an integral, which is what motivates the choice of a parametric quantiser in Ziegler & Rush (2019) for computational tractability. Plugging the optimal decoder into Eq. 1 yields:

L(x) = E_{q(z|x)} [ ∑_t log p̃(x_t) + log ( p(z) / ∏_t ∑_{x′_t=0}^{K−1} q(z_t|x′_t) p̃(x′_t) ) ]    (3)

We note that the first term is a constant. Therefore, the expression above implies that accurately modelling the dependencies in x boils down to learning an expressive prior p(z) and regularising the dequantiser q(z_t|x_t). The quantiser q(x_t|z_t) is deterministic when q(z_t|x_t) does not overlap with the other q(z_t|x′_t), in which case q(z_t|x_t) is encouraged to expand so as to maximize its entropy.
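Eq. (2) can be evaluated directly once the per-category dequantiser densities q(z_t|x_t = k) and the marginal estimates p̃(k) are available. A minimal sketch, assuming logistic dequantisers with hypothetical locations, scales, and marginals:

```python
import math

def logistic_pdf(z, loc, scale):
    """Density of a logistic distribution with the given location and scale."""
    t = math.exp(-(z - loc) / scale)
    return t / (scale * (1.0 + t) ** 2)

def optimal_quantiser(z, locs, scales, marginals):
    """p(x=k | z) = q(z | x=k) p~(k) / sum_k' q(z | x=k') p~(k')  (Bayes' rule, Eq. 2)."""
    weighted = [logistic_pdf(z, m, s) * p for m, s, p in zip(locs, scales, marginals)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Three categories with well-separated dequantiser locations (hypothetical values).
locs, scales, marginals = [-2.0, 0.0, 2.0], [0.2, 0.2, 0.2], [0.5, 0.3, 0.2]
post = optimal_quantiser(0.05, locs, scales, marginals)
# z near location 0 -> posterior mass concentrates on category 1
```

With well-separated locations and small scales, the posterior is nearly one-hot, which is exactly the "nearly deterministic" regime discussed in the CatNF paragraph below.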
If there is a certain amount of overlap, the denominator in the second term pushes down the density of the other q(z_t|x′_t), resulting in a spikier aggregate posterior distribution (see more discussion on this in Section 5.4). With this general framework, which also accounts for lossy quantisation, we briefly review some previously proposed dequantisation strategies.

Ordinal dequantisation. In the case where the data is ordinal, such as image pixel values (e.g., for an 8-bit representation, K = 256), a dequantisation scheme can be obtained by setting q(z_t|x_t) = Uniform(x_t, x_t + 1). The resulting quantisation process is simply ⌊z_t⌋, and is deterministic. More generally, q(z_t|x_t) can be any distribution on [x_t, x_t + 1]. See Nielsen & Winther (2020) and Hoogeboom et al. (2019) for extensions of the uniform dequantisation scheme.

Argmax Flows. For categorical data, uniform dequantisation is not applicable, as there is no intrinsic ordering between the categories. Argmax Flows (Hoogeboom et al., 2021) dequantise categorical data by letting z_t ∈ R^K be distributed by some q(z_t|x_t) supported on {z_t : argmax_k (z_t)_k = x_t}. When the supports over the latent space are disjoint, p(x_t|z_t) = q(x_t|z_t) = 1[x_t = argmax_k (z_t)_k]; we depict this in Figure 1, Argmax flow (thresh.). Argmax Flows make minimal assumptions on the topology of the data: the support of the dequantiser partitions the continuous space evenly, and the representations are equally far from each other. As an example, synonyms in text may still have very distinct dequantised representations despite having similar functions and meaning in a language-modelling setting. In the naive formulation, Argmax Flows require the dimensionality of the latent space to be the same as the number of input categories K.
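A toy sampler makes the argmax construction concrete: any procedure that places coordinate x_t strictly above all others yields a valid draw from the region {z_t : argmax_k (z_t)_k = x_t}, and argmax quantisation then recovers the category exactly. The Gaussian-noise lifting below is a simple stand-in for the learned dequantiser, not the paper's thresholding flow:

```python
import random

def dequantise_argmax(x, K, rng):
    """Sample z in R^K with argmax_k z_k = x: draw Gaussian noise,
    then lift coordinate x strictly above the current maximum."""
    z = [rng.gauss(0.0, 1.0) for _ in range(K)]
    z[x] = max(z) + abs(rng.gauss(0.0, 1.0)) + 1e-6
    return z

def quantise_argmax(z):
    """Deterministic quantiser: the index of the largest coordinate."""
    return max(range(len(z)), key=lambda k: z[k])

rng = random.Random(0)
round_trips = [quantise_argmax(dequantise_argmax(x, 6, rng)) for x in range(6)]
# the round trip is lossless: round_trips == [0, 1, 2, 3, 4, 5]
```

The losslessness here comes purely from the disjoint supports; no property of the noise distribution is used.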
To accommodate larger categorical spaces, the authors suggest a binary factorisation, reducing the required latent-space dimension to ⌈log₂ K⌉; see Figure 1, Argmax flow (binary).

Categorical Normalising Flows (CatNF). In the previous cases, the quantisation is deterministic, and there is no loss of information. This is because in a cleanly partitioned latent space, as in the ordinal setting or argmax flow, the dequantising distributions q(z_t|x_t = k) for all 0 ≤ k ≤ K − 1 have non-overlapping support. CatNF instead learns a dequantiser that "softly" partitions the space. Lippe & Gavves (2020) propose using a conditional logistic distribution as q(z_t|x_t). In this case, the optimal quantiser q(x_t|z_t) is nearly deterministic if the locations of the dequantisers are far away from each other and they have sufficiently small scales. For this reason, and unlike the first two approaches, CatNF is not capable of losslessly dequantising the data (we provide a formal discussion of this in Appendix A.2, based on a data-processing-inequality argument that uses the dequantiser as a transition kernel). It can approach the lossless limit by pushing the bulk of the mass of the different q(z_t|x_t) away from each other, but that could potentially lead to a highly complex and multi-modal empirical distribution over the representation space for p(z) to approximate. Next, we consider the case where q(z_t|x_t) is a truncated distribution, which has the ability to encode the data losslessly while learning a meaningful latent topology over the different categories.

3 TRUNCATED DEQUANTISER AND PRIOR

The general approach to optimising the variational lower bound proposed in Kingma & Welling (2014) involves sampling from the proposal distribution q(z_t|x_t) to estimate the expectation (Eq. 1). In our case, we want to parameterise this using a TRUncated FLow, which we will refer to as TRUFL.
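The lossiness of overlapping dequantisers discussed above can be simulated with a CatNF-style dequantise-then-quantise round trip using two logistic dequantisers; with uniform marginals assumed, quantisation reduces to picking the larger density. Heavily overlapping locations lose information far more often than well-separated ones. The specific locations and scale are illustrative choices, not taken from the paper:

```python
import math
import random

def logistic_pdf(z, loc, scale):
    t = math.exp(-(z - loc) / scale)
    return t / (scale * (1.0 + t) ** 2)

def sample_logistic(loc, scale, rng):
    u = rng.random()
    return loc + scale * math.log(u / (1.0 - u))

def round_trip_error(locs, scale, n, rng):
    """Fraction of samples where density-based quantisation recovers the wrong category."""
    errors = 0
    for _ in range(n):
        x = rng.randrange(len(locs))
        z = sample_logistic(locs[x], scale, rng)                   # dequantise
        densities = [logistic_pdf(z, m, scale) for m in locs]
        x_hat = max(range(len(locs)), key=lambda k: densities[k])  # quantise
        errors += (x_hat != x)
    return errors / n

rng = random.Random(0)
overlapping = round_trip_error([0.0, 0.5], 1.0, 5000, rng)  # heavy overlap
separated   = round_trip_error([0.0, 8.0], 1.0, 5000, rng)  # nearly disjoint
# overlapping round trips lose information far more often than separated ones
```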
For simplicity, we will drop the dependency on t and x_t in what follows, but all of the variational distributions are conditioned on the categorical value x_t. If we want to bound a scalar distribution between (a, b), and we have a density function f whose cumulative distribution function F (CDF) and inverse CDF F⁻¹ are tractable, we can easily sample from the truncated f by sampling u from Uniform(F(a), F(b)) and then evaluating F⁻¹(u). Note that this method is differentiable. We use it to sample from our dequantiser.

¹ We use 1[·] to denote an indicator function.

Algorithm 1: Truncated Categorical Encoding for a timestep t
Input: categorical data x_t, flow g(·, ·)
Output: z_t, log q(z_t|x_t), log p(x_t|z_t)
  u₀ ∼ Uniform(0, 1)                              ▷ begin encoding
  u ← m(x_t) + (u₀ − 1/2) · s(x_t)
  z′_t ← F⁻¹(u)
  z_t ← g(z′_t, x_t)                              ▷ end encoding
  for x̂_t = 0 to K − 1 do                         ▷ compute the density of z_t given every possible x̂_t
    ẑ′_t ← g⁻¹(z_t, x̂_t)
    log q(z_t|x̂_t) ← log f̃(ẑ′_t; m(x̂_t), s(x̂_t)) + log |dẑ′_t / dz_t|
  end for
  log p(x_t|z_t) ← log q(z_t|x_t) + log p̃(x_t) − log ∑_{x̂_t} q(z_t|x̂_t) p̃(x̂_t)    ▷ log computation of the posterior q

In general, however, multivariate distributions may not simply be truncated at the tails, but may instead have a support that is a strict subset of that of their base distribution. One general approach to sampling from such a distribution is rejection sampling (Murphy, 2012). This approach has been used in prior work for sampling from bounded-support distributions (Polykovskiy & Vetrov, 2020; Xu & Durrett, 2018; Davidson et al., 2018). Computing gradients for this method is possible via implicit gradients (Figurnov et al., 2018), but we do not need gradients in our case, as we use rejection sampling only for generating samples from the generative model (see Section 3.2).
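The scalar inverse-CDF recipe above is easy to realise with a logistic base distribution, since both F and F⁻¹ are available in closed form. The sketch below is a minimal stand-in for the paper's parametrisation: it confines samples to a chosen interval (a, b), and the transform is differentiable in the location and scale parameters:

```python
import math
import random

def logistic_cdf(x, loc=0.0, scale=1.0):
    """F(x) for the logistic distribution."""
    return 1.0 / (1.0 + math.exp(-(x - loc) / scale))

def logistic_icdf(u, loc=0.0, scale=1.0):
    """F^{-1}(u) for the logistic distribution (the logit transform)."""
    return loc + scale * math.log(u / (1.0 - u))

def sample_truncated_logistic(a, b, loc, scale, rng):
    """Sample z ~ logistic(loc, scale) truncated to (a, b):
    draw u ~ Uniform(F(a), F(b)), then evaluate F^{-1}(u)."""
    fa, fb = logistic_cdf(a, loc, scale), logistic_cdf(b, loc, scale)
    u = fa + (fb - fa) * rng.random()
    return logistic_icdf(u, loc, scale)

rng = random.Random(0)
samples = [sample_truncated_logistic(-0.5, 0.5, 0.0, 1.0, rng) for _ in range(1000)]
# every sample lands inside the truncation interval (-0.5, 0.5)
```

Because the sample is a deterministic, differentiable function of u, loc, and scale, the construction is compatible with standard stochastic reparametrisation.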
This paper proposes a flow model called TRUFL, designed for discrete data. The main selling point of the proposed model is that it can handle discrete data better than other dequantization schemes. The authors target two key difficulties of this dequantization problem: (i) making the dequantization lossless, and (ii) allowing the dequantizer to be learned easily. Building on Categorical Normalizing Flows, the authors propose to truncate the dequantized latent space to make the latent spaces of different input categories less correlated. To compute probabilities with respect to the truncated latent space, the authors use rejection sampling to approximate them.
SP:d250f15b11e75f95ed59b94e8df7822efb1d1a80
Learning to Dequantise with Truncated Flows
1 INTRODUCTION . Deep generative models aim to model a distribution of high-dimensional natural data . Many of these methods assume that the data is continuous , despite it being digitally stored in bits and therefore intrinsically discrete . This discrepancy has led to recent interest in dequantising discrete data types to avoid some of the degeneracies of fitting continuous models to discrete data ( Theis et al. , 2015 ) . When data is ordinal ( such as pixel intensities ) a naive dequantisation scheme can be obtained by adding uniform noise to the discrete values ( Theis et al. , 2015 ) . More recently , a generalisation of this approach where dequantisation is seen as inference in a latent variable model has also been proposed ( Ho et al. , 2019 ; Hoogeboom et al. , 2020 ; Nielsen et al. , 2020 ) . However , these methods may not be directly applied in cases where the data is categorical ( Hoogeboom et al. , 2021 ) , because the data is not naturally represented in a vector space . Attempts at devising dequantisation schemes for categorical data by building upon the variational dequantisation scheme have been recently proposed in Hoogeboom et al . ( 2021 ) and Lippe & Gavves ( 2020 ) . These approaches dequantise a categorical input into a latent continuous space . Ideally , a dequantisation scheme for categorical data should be : ( i ) easily learnable by standard optimization techniques and ( ii ) possibly lossless , in the sense that quantisation should recover the input category . Argmax Flows ( Hoogeboom et al. , 2021 ) offer lossless dequantisation but the support of the stochastic embedding is chosen arbitrarily and not optimised , and the dimensionality of the continuous ( dequantised ) variable is required to be at least logarithmic in the number of categories of the input data . 
Moreover , the method makes minimal assumptions about the topology of the categorical data , disregarding the possible relationships between categories , which can occur for example between word indices in natural language ( Bengio et al. , 2003 ) or the atomic representations of a molecule ’ s constituents . On the other hand , Categorical Normalizing Flows ( CatNF ; Lippe & Gavves 2020 ) can learn a more compact representation of the input category but the dequantisation might be lossy given that the posteriors over the continuous variables have overlapping support . Is there a trade-off between these two schemes ? In this paper , we propose TRUFL , which builds upon the aforementioned variational dequantisation techniques . We achieve that by using truncated posterior distributions over the continuous variables with potentially bounded and disjoint support . In addition , we present a parametrisation of truncated distributions that can be optimised with standard stochastic reparametrisation techniques . Overall , our method inherits strengths of both CatNF and Argmax flows . Our experimental results highlight the effectiveness of our approach . 2 BACKGROUND : VARIATIONAL DEQUANTISATION . Dequantisation refers to the process of embedding discrete-valued data into a continuous space , which allows us to employ density-based models to capture the distribution of the continuous representation . Concretely , let z = { z1 , . . . , zT } denote this continuous representation , and x = { x1 , . . . , xT } describe the observed data , where each xt represent , e.g . a node in a graph or a token in a sentence . Each xt is assumed to be categorical , i.e . xt ∈ { 0 , · · · , K − 1 } for some integer K > 1. z can be interpreted as a latent variable , which follows a prior distribution p ( z ) . We refer to q ( zt|xt ) as the dequantiser and p ( xt|zt ) as the quantiser . 
Training can be achieved by maximizing a variational lower bound on the marginal likelihood of the data , i.e . : log p ( x ) ≥ Eq ( z|x ) [ log p ( x|z ) p ( z ) q ( z|x ) ] = : L ( x ) ( 1 ) We are interested in the case where the representation zt can be inferred from xt alone , so we choose the factorisation p ( x|z ) = ∏ t p ( xt|zt ) and q ( z|x ) = ∏ t q ( zt|xt ) , following Lippe & Gavves ( 2020 ) . In this case , the “ optimal ” quantiser p ( xt|zt ) can be conveniently computed as : argmax p ( xt|zt ) Eq ( x ) [ L ( x ) ] = q ( zt|xt ) p̃ ( xt ) ∑K−1 x′t=0 q ( zt|x′t ) p̃ ( x′t ) = q ( xt|zt ) = : p ( xt|zt ) ( 2 ) where q ( x ) denotes the ( empirical ) data distribution , and p̃ ( xt ) denotes the estimate of the marginal distribution of each category ( which can be obtained by counting and , in the case of textual data , this corresponds to the unigram distribution over words ) . This equation shows that the optimal quantiser can be obtained implicitly by applying Bayes ’ rule with the parametric dequantiser q ( zt|xt ) . The factorisation we chose for p ( x|z ) and q ( z|x ) is crucial for the argmax above to be represented in this simple form . Without this assumption , the solution will involve a combinatorial sum or an integral , which results in the choice of a parametric quantiser in Ziegler & Rush ( 2019 ) for computational tractability . Plugging the optimal decoder into Eq . 1 yields : L ( x ) = Eq ( z|x ) [ ∑ t log p̃ ( xt ) + log p ( z ) ∑K−1 x′t=0 q ( zt|x′t ) p̃ ( x′t ) ] ( 3 ) We note that the first term is a constant . Therefore , the expression above implies that accurately modelling the dependencies in x boils down to learning an expressive prior p ( z ) and regularising the dequantiser q ( zt|xt ) . q ( xt|zt ) is deterministic when q ( zt|xt ) does not overlap with other q ( zt|x′t ) , in which case q ( zt|xt ) is encouraged to be expanded to maximize the entropy . 
If there is a certain amount of overlap, the denominator in the second term will push down the density of the other q(z_t|x'_t), resulting in a spikier aggregate posterior distribution (see Section 5.4 for further discussion). With this general framework, which also accounts for lossy quantisation, we briefly present some of the previously proposed strategies for dequantisation.

Ordinal dequantisation. In the case where the data is ordinal, such as image pixel values (e.g., for an 8-bit representation, K = 256), a dequantisation scheme can be obtained by setting q(z_t|x_t) = Uniform(x_t, x_t + 1). The resulting quantisation process is simply ⌊z_t⌋, and is deterministic. More generally, q(z_t|x_t) can be any distribution on [x_t, x_t + 1]. See Nielsen & Winther (2020) and Hoogeboom et al. (2019) for extensions of the uniform dequantisation scheme.

Argmax Flows. For categorical data, uniform dequantisation is not applicable, as there is no intrinsic ordering between the categories. Argmax Flows (Hoogeboom et al., 2021) dequantise categorical data by letting z_t ∈ R^K be distributed according to some q(z_t|x_t) supported on {z_t : argmax_k (z_t)_k = x_t}. When the supports over the latent space are disjoint, p(x_t|z_t) = q(x_t|z_t) = 1[x_t = argmax_k (z_t)_k]¹; we depict this in Figure 1, Argmax flow (thresh.). Argmax Flows make minimal assumptions on the topology of the data: the support of the dequantiser partitions the continuous space evenly, and the representations are equally far from each other. As an example, synonyms in text may still have very distinct dequantised representations despite having similar functions and meanings in a language modelling setting. In the naive formulation, Argmax Flows require the dimensionality of the latent space to equal the number of input categories K.
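Both lossless schemes above admit a quick round-trip check. A minimal sketch; note that the way we force the argmax coordinate is one of several valid constructions for sampling on the argmax support, not the actual Argmax Flows sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ordinal case: q(z|x) = Uniform(x, x+1); quantisation is floor(z).
x = rng.integers(0, 256, size=1000)            # e.g. 8-bit pixel values
z = x + rng.uniform(0.0, 1.0, size=x.shape)    # dequantise
assert np.array_equal(np.floor(z).astype(int), x)   # lossless round trip

# Argmax case: z_t ∈ R^K supported on {z : argmax_k z_k = x_t}.
# Draw noise and force coordinate x_t to strictly exceed the rest.
K = 5
xt = 3
zt = rng.normal(size=K)
zt[xt] = np.abs(zt).max() + rng.exponential()  # guarantee the argmax
assert zt.argmax() == xt                        # quantisation recovers x_t
```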
To accommodate larger categorical spaces, the authors suggest a binary factorisation, reducing the required latent space dimension to ⌈log₂ K⌉; see Figure 1, Argmax flow (binary).

Categorical Normalising Flows (CatNF). In the previous cases, the quantisation is deterministic, and there is no loss of information. This is because in a cleanly partitioned latent space, as in the ordinal setting or the argmax flow, the dequantising distributions q(z_t|x_t = k) for all 0 ≤ k ≤ K − 1 have non-overlapping support. CatNF learns a dequantiser that can "softly" partition the space. Lippe & Gavves (2020) propose using a conditional logistic distribution as q(z_t|x_t). In this case, the optimal quantiser q(x_t|z_t) is nearly deterministic if the locations of the dequantisers are far away from each other and they have sufficiently small scale. For this reason, and unlike the first two approaches, CatNF is not capable of losslessly dequantising the data (we provide a formal discussion in Appendix A.2, based on a data-processing inequality argument that uses the dequantiser as a transition kernel). It can approach the lossless limit by pushing the bulk of the mass of the q(z_t|x_t) away from each other, but that could lead to a highly complex and multi-modal empirical distribution over the representation space for p(z) to approximate. Next, we consider the case where q(z_t|x_t) is a truncated distribution, and as such has the ability to encode the data losslessly while learning a meaningful latent topology of the different categories.

3 TRUNCATED DEQUANTISER AND PRIOR

The general approach to optimising the variational lower bound proposed in Kingma & Welling (2014) involves sampling from the proposal distribution q(z_t|x_t) to estimate the expectation (Eq. 1). In our case, we want to parameterise this using a TRUncated FLow, which we will refer to as TRUFL.
For simplicity, we will drop the dependency on t and x_t, but the variational distribution is always conditioned on the categorical value x_t. If we want to bound a scalar distribution between (a, b), and we have a density function f whose cumulative distribution function F (CDF) and inverse CDF F⁻¹ are tractable, we can easily sample from the truncated f by drawing u from Uniform(F(a), F(b)) and then evaluating F⁻¹(u). Note that this method is differentiable. We use it to sample from our dequantiser.

¹We use 1[·] to denote an indicator function.

Algorithm 1: Truncated Categorical Encoding for a timestep t
Input: categorical data x_t, flow g(·, ·)
Output: z_t, log q(z_t|x_t), log p(x_t|z_t)
  u₀ ∼ Uniform(0, 1)                          ▷ begin encoding
  u ← m(x_t) + (u₀ − 1/2) · s(x_t)
  z'_t ← F⁻¹(u)
  z_t ← g(z'_t, x_t)                          ▷ end encoding
  for x̂_t = 0 to K − 1 do                     ▷ compute probability of z_t given all possible x̂_t
      ẑ'_t ← g⁻¹(z_t, x̂_t)
      log q(z_t|x̂_t) ← log f̃(ẑ'_t; m(x̂_t), s(x̂_t)) + log |dẑ'_t / dz_t|
  end for
  log p(x_t|z_t) ← log q(z_t|x_t) + log p̃(x_t) − log Σ_{x̂_t} q(z_t|x̂_t) p̃(x̂_t)   ▷ log computation of the q posterior

In general, however, multivariate distributions may not simply be truncated at the tails, but rather have a support that is a strict subset of that of the base distribution. One general approach to sampling from such a distribution is rejection sampling (Murphy, 2012), which has been used in prior work for sampling from bounded-support distributions (Polykovskiy & Vetrov, 2020; Xu & Durrett, 2018; Davidson et al., 2018). Computing gradients for this method is possible via implicit gradients (Figurnov et al., 2018), but we do not need gradients in our case, as we use rejection sampling only for generating samples from the generative model (see Section 3.2).
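The inverse-CDF trick above can be sketched with a logistic base distribution, whose CDF and inverse CDF are closed-form. Writing u = F(a) + (F(b) − F(a))·u₀ mirrors Algorithm 1's u ← m(x_t) + (u₀ − 1/2)·s(x_t), with m and s the midpoint and width of the interval (F(a), F(b)). Function names are our own:

```python
import math
import random

def logistic_cdf(z, loc=0.0, scale=1.0):
    return 1.0 / (1.0 + math.exp(-(z - loc) / scale))

def logistic_icdf(u, loc=0.0, scale=1.0):
    return loc + scale * math.log(u / (1.0 - u))

def sample_truncated_logistic(a, b, loc=0.0, scale=1.0):
    """Draw from a logistic distribution truncated to (a, b):
    u ~ Uniform(F(a), F(b)), then return F^{-1}(u)."""
    fa, fb = logistic_cdf(a, loc, scale), logistic_cdf(b, loc, scale)
    u = fa + (fb - fa) * random.random()   # u ~ Uniform(F(a), F(b))
    return logistic_icdf(u, loc, scale)

random.seed(0)
samples = [sample_truncated_logistic(-0.5, 0.5) for _ in range(1000)]
assert all(-0.5 - 1e-9 <= z <= 0.5 + 1e-9 for z in samples)  # support respected
```

In an autodiff framework the same sampler is differentiable in the truncation bounds and in loc/scale, since u₀ is the only source of randomness (a standard reparametrisation).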
This paper proposes a dequantization scheme for categorical data, where unlike with ordinal data, element-wise uniform noise cannot be used. The authors propose to encode categorical data into an interval centre and a deviation, and a (pre)dequantized value is obtained by sampling uniformly in the implied real interval. A normalizing flow is then applied to obtain the dequantized variable. The proposed method can thus in principle learn non-overlapping one-dimensional supports for the dequantized variables, enabling lossless dequantization.
SP:d250f15b11e75f95ed59b94e8df7822efb1d1a80
Learning to Dequantise with Truncated Flows
1 INTRODUCTION

Deep generative models aim to model a distribution of high-dimensional natural data. Many of these methods assume that the data is continuous, despite it being digitally stored in bits and therefore intrinsically discrete. This discrepancy has led to recent interest in dequantising discrete data types to avoid some of the degeneracies of fitting continuous models to discrete data (Theis et al., 2015). When data is ordinal (such as pixel intensities), a naive dequantisation scheme can be obtained by adding uniform noise to the discrete values (Theis et al., 2015). More recently, a generalisation of this approach, where dequantisation is seen as inference in a latent variable model, has also been proposed (Ho et al., 2019; Hoogeboom et al., 2020; Nielsen et al., 2020). However, these methods may not be directly applied in cases where the data is categorical (Hoogeboom et al., 2021), because the data is not naturally represented in a vector space. Attempts at devising dequantisation schemes for categorical data by building upon the variational dequantisation scheme have recently been proposed in Hoogeboom et al. (2021) and Lippe & Gavves (2020). These approaches dequantise a categorical input into a latent continuous space. Ideally, a dequantisation scheme for categorical data should be: (i) easily learnable by standard optimization techniques and (ii) possibly lossless, in the sense that quantisation should recover the input category. Argmax Flows (Hoogeboom et al., 2021) offer lossless dequantisation, but the support of the stochastic embedding is chosen arbitrarily and not optimised, and the dimensionality of the continuous (dequantised) variable is required to be at least logarithmic in the number of categories of the input data.
The paper presents a new approach for dequantization, i.e., embedding discrete data in a continuous space, using variational inference and truncated flows, called TRUFL. Unlike previous approaches, TRUFL allows the dequantization layer to have a learnable truncated support. The authors perform several experiments to demonstrate the advantages of the proposed method over Categorical Normalizing Flows (CatNF) and Argmax Flows.
Delving into Feature Space: Improving Adversarial Robustness by Feature Spectral Regularization
1 INTRODUCTION

It has been shown that the performance of Deep Neural Networks (DNNs) decreases dramatically when confronted with adversarial examples (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2015). The vulnerability of DNNs brings potential risk to safety-critical deployments (Finlayson et al., 2019). To mitigate this vulnerability, numerous methods have been proposed to improve adversarial robustness (Papernot et al., 2016b; Madry et al., 2018; Xie et al., 2019). Among these, adversarial training (AT) (Madry et al., 2018) is the most effective approach and achieves state-of-the-art performance under various attacks (Croce et al., 2020). Unlike standard training, adversarial training trains DNNs on adversarial examples rather than natural examples. Numerous methods have been proposed to close the gap between the features of natural examples and adversarial examples (Kannan et al., 2018; Zhang & Wang, 2019; Zhang et al., 2019). However, they constrain the features as a whole, without considering the distinct contribution of each individual feature. This may be inappropriate, since every feature may play a different role in robustness. Ilyas et al. (2019) argued that adversarial examples result from non-robust features, i.e., well-generalizing but brittle features. This suggests that each feature could be treated separately in the adversarial setting. While Bai et al. (2021) and Yan et al. (2021) considered the influence of different channels on robustness, the problem has not been explored clearly from the perspective of spectral signatures, i.e., the eigenvalues and eigenvectors of the feature covariance. Once the feature space is split into spectral components, it is worth analyzing which components are beneficial for adversarial robustness and which are fragile under attack.
In this paper, we show that spectral signatures of deep features have a close connection with adversarial robustness. By applying principal component analysis (PCA) to deep features, we can decompose the feature space into eigenvectors with corresponding eigenvalues. Our motivation comes from the phenomenon that a standard trained model often yields a sharp distribution of eigenvalues (Yu et al., 2020), i.e., the eigenvalues rapidly become very small, as shown in Figure 1. This property may be beneficial for natural accuracy (Lezama et al., 2018; Papyan et al., 2020), while its impact on robust generalization is unclear. We hypothesize that the sharp distribution of eigenvalues impels models to learn less diverse features and is a cause of the vulnerability of DNNs. A minority of the eigenvalues accounts for the overwhelming majority of the eigenvalue sum, which may make the model pay little attention to features along the eigenvectors with smaller eigenvalues, so features along such directions are not generalized during training. To verify our hypothesis, we define a new metric to measure the variation of features along different eigenvectors under attacks, as shown in Figure 4. Our observation reveals that the adversary tends to add more components along eigenvectors with smaller eigenvalues, and that this large variation is distinctly alleviated by AT. Therefore, we propose to improve adversarial robustness by alleviating the sharp distribution of eigenvalues. So that more eigenvalues carry a substantial share of the spectrum, we propose a regularizer named Feature Spectral Regularization (FSR) that penalizes the largest eigenvalue of the feature covariance matrix. Empirical evidence shows that FSR relatively increases the overall eigenvalues, making models focus on more spectral components during training. We also provide a theoretical explanation with robust linear regression.
Comprehensive experiments confirm that FSR indeed improves adversarial robustness on several datasets. Our contributions are summarized as follows:

• We find a close connection between spectral signatures of features and adversarial robustness. On one hand, a standard trained model presents a sharp distribution of eigenvalues, which is beneficial for natural accuracy but harmful in the adversarial setting. On the other hand, the adversary tends to add more components along eigenvectors with smaller eigenvalues.
• We propose Feature Spectral Regularization (FSR) to relatively increase the overall eigenvalues in the deep feature space, thus alleviating the sharp distribution of eigenvalues. Furthermore, we provide a theoretical explanation based on robust linear regression.
• We empirically show through comprehensive experiments that FSR improves adversarial robustness and alleviates the sharp distribution of eigenvalues.

2 RELATED WORK

Adversarial Defense. Many defense methods have been proposed to improve adversarial robustness since the discovery of adversarial examples (Papernot et al., 2016b; Xie et al., 2019; Carmon et al., 2019; Zhang et al., 2020). However, many of them have been shown to be ineffective because they rely heavily on obfuscated gradients (Athalye et al., 2018) or gradient masking (Papernot et al., 2017). Among these methods, adversarial training (Madry et al., 2018) is now regarded as the state-of-the-art (Rice et al., 2020; Pang et al., 2021). Unlike standard training, adversarial training trains the DNN on adversarial examples:

$$\min_\theta\ \mathbb{E}_{(x,y)\in\mathcal{D}}\ \max_{\|\delta\|\le\epsilon}\ \mathcal{L}_{CE}(x+\delta,\, y;\, \theta) \qquad (1)$$

where D is the dataset, θ denotes the parameters of the DNN, δ is a perturbation within the ε-ball, and L_CE is the cross-entropy (CE) loss. By introducing a trade-off between robustness and generalization, TRADES (Zhang et al.
, 2019) is another framework that reaches robustness comparable to AT. Among the methods based on AT, Adversarial Weight Perturbation (AWP) (Wu et al., 2020) explicitly regularizes the flatness of the weight loss landscape and forms a double-perturbation mechanism, which substantially improves adversarial robustness and alleviates the overfitting in AT (Rice et al., 2020).

Spectral Signatures of Feature Representations. Several studies have revealed that the spectral signatures of features influence performance in various learning tasks. Spectral properties are crucial for detecting backdoors (Tran et al., 2018; Hayase et al., 2021). The eigenvectors corresponding to the larger eigenvalues have been found to dominate the transferability of features in adversarial domain adaptation (Chen et al., 2019b). Chen et al. (2019a) explore the correlation between negative transfer and the spectral components of features and weights in inductive transfer learning. By utilizing the principle of Maximal Coding Rate Reduction (MCR²), it has been theoretically shown that the largest several singular values of the feature matrix for every class should be equal in order to learn maximally diverse representations (Yu et al., 2020; Chan et al., 2020). Different from these studies, we analyze the connection between adversarial robustness and the spectral components of deep features. We aim to explore which components are more fragile under attacks, and propose a method to boost adversarial robustness by constraining spectral properties.

3 SPECTRAL ANALYSIS IN FEATURE SPACE

In this section, we investigate the connection between spectral signatures and adversarial robustness. Concretely, we train models by standard training and adversarial training (Madry et al., 2018), and then apply spectral decomposition to obtain the spectral signatures.
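The inner maximisation of the adversarial training objective in Eq. (1) is typically approximated with projected gradient descent (PGD). Below is a minimal sketch on a toy binary logistic model, so that the input gradient is available in closed form; the helper names (`input_grad`, `pgd_attack`) and the toy weights are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def input_grad(x, y, w):
    """∇_x of L = log(1 + exp(-y * w·x)) for labels y ∈ {-1, +1}."""
    margin = y * w.dot(x)
    return -y * w / (1.0 + np.exp(margin))

def pgd_attack(x, y, w, eps=0.1, alpha=0.02, steps=10):
    """ℓ∞ PGD: ascend the loss with sign steps, project to the ε-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y, w))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x, y = np.array([0.3, 0.1, -0.2]), 1
x_adv = pgd_attack(x, y, w, eps=0.1)

def loss(v):  # logistic loss for y = +1
    return np.log1p(np.exp(-w.dot(v)))

assert np.all(np.abs(x_adv - x) <= 0.1 + 1e-9)   # stays in the ε-ball
assert loss(x_adv) > loss(x)                      # attack raises the loss
```

In a real setting, `input_grad` would be replaced by backpropagation through the DNN, and the outer minimisation of Eq. (1) would then train on `x_adv`.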
We find that in standard training the eigenvalues follow a rapidly descending curve, while this tendency is alleviated by AT. We then find that the adversary tends to add more components along eigenvectors with smaller eigenvalues.

3.1 CURVE OF EIGENVALUES AND ADVERSARIAL ROBUSTNESS

Given a dataset D = {(x_i, y_i)}_{i=1}^n with C classes, x_i represents the input data and y_i the label. The DNN is composed of a feature extractor h(·): R^D → R^d and a linear classifier g(·): R^d → R^C. After centralizing the learned features (i.e., (1/n) Σ_{i=1}^n h(x_i) = 0), we decompose the learned features by spectral decomposition, similarly to PCA:

$$\frac{1}{n}\sum_{i=1}^{n} h(x_i)\,h(x_i)^\top = \sum_{j=1}^{d} u_j\,\lambda_j\,u_j^\top \qquad (2)$$

where λ_j is the eigenvalue with index j and u_j ∈ R^d its eigenvector. Specifically, we train ResNet-18 (He et al., 2016) using both standard training and adversarial training on CIFAR-10. The parameters for AT are the same as in (Rice et al., 2020). We calculate the eigenvalues by applying Eq. (2) and plot them in Figure 2. The features come from the penultimate layer (512 dimensions). All features are extracted from the test set of CIFAR-10. A part of the eigenvalues is shown for better visualization.

Difference of models in spectral analysis. As shown in Figure 2(a)(b), the eigenvalues of a standard trained model drop rapidly at some point, while this sharp decrease of eigenvalues is much alleviated by AT. The sharp spectral signature in standard training makes just a few eigenvalues informative from the viewpoint of PCA. The model fails to explore the influence of eigenvectors with smaller eigenvalues on classification, so the trained model cannot recognize changes of features along eigenvectors with smaller eigenvalues. Eigenvectors that may carry useful features are overly penalized.
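Eq. (2) can be checked numerically; a small sketch of our own, with synthetic anisotropic features standing in for the extracted h(x_i):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "features": n samples in d dimensions with a sharp spectrum.
n, d = 2000, 8
feats = rng.normal(size=(n, d)) * np.linspace(3.0, 0.1, d)
feats = feats - feats.mean(axis=0)              # centralise, as in Eq. (2)

cov = feats.T @ feats / n                        # (1/n) Σ h(x_i) h(x_i)^T
eigvals, eigvecs = np.linalg.eigh(cov)           # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort λ_1 ≥ λ_2 ≥ ...

recon = (eigvecs * eigvals) @ eigvecs.T          # Σ_j u_j λ_j u_j^T
assert np.allclose(recon, cov)                   # the decomposition of Eq. (2)
assert eigvals[0] >= eigvals[-1]                 # descending eigenvalue curve
```

Plotting `eigvals` for features of a standard trained versus an adversarially trained model would reproduce the eigenvalue curves the section describes.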
Consequently, we propose the hypothesis that the severe dominance of the top eigenvectors is a cause of vulnerability in DNNs, and that the adversary adds more components along eigenvectors with smaller eigenvalues. We verify this hypothesis in the next section.

Connection with intrinsic dimension. We introduce the intrinsic dimension (ID) to quantitatively describe the decreasing tendency of the eigenvalues. The ID (i.e., the minimal number of parameters needed to describe a representation) has a close connection with natural accuracy (Ansuini et al., 2019): a reduction of the ID has been found to contribute to an improvement in natural accuracy. We adopt the PC-ID proposed by Ansuini et al. (2019) to estimate the ID, determined as the number of principal components needed to describe 90% of the variance. We mark the ID in Figure 2 with solid circles. The results reveal that the ID of a standard trained model is very small, while models obtained by AT achieve a higher ID, which is contrary to the influence of ID on natural accuracy. This also confirms, from the perspective of ID, that there exists a trade-off between generalization and robustness (Tsipras et al., 2019; Zhang et al., 2019). FSR can further increase the ID on top of AT.

3.2 VARIATION ALONG EIGENVECTORS UNDER ATTACKS

In this section, we aim to verify the hypothesis that the adversary adds more components along eigenvectors with smaller eigenvalues during the attack stage. We define a metric, called variation in this paper, to quantitatively observe the change of features along different eigenvectors under attack.

Definition 1 (Alignment). Given a dataset D_s = {x_{s,i}, y_{s,i}}_{i=1}^n, which may be perturbed.
The alignment of D_s to a pre-given direction u_j is the expectation over the cosine similarity between the features extracted by the DNN and the direction vector u_j:

$$\mathrm{align}(\mathcal{D}_s, u_j) = \mathbb{E}_{(x_{s,i},\,y_{s,i})\in\mathcal{D}_s}\ \frac{|\langle h(x_{s,i}),\,u_j\rangle|}{\|h(x_{s,i})\|\cdot\|u_j\|} \qquad (3)$$

where ‖·‖ is the Euclidean norm, and u_j is calculated by Eq. (2). The calculation of u_j is based on the feature covariance of natural samples, and u_j is kept fixed.

Definition 2 (Variation). Given a dataset D consisting of natural examples and its perturbed counterpart D_adv, the variation along direction u_j is defined as the ratio between the alignment on D_adv and on D:

$$r(\mathcal{D}_{adv}, \mathcal{D}, u_j) = \frac{\mathrm{align}(\mathcal{D}_{adv}, u_j)}{\mathrm{align}(\mathcal{D}, u_j)} \qquad (4)$$

The alignment is correlated with the distance between the subspace spanned by u_j and the actual feature space, so the change of alignment under attack is suitable to describe the influence of attacks along direction u_j, which we call the variation. Our metric is similar to those of (Tran et al., 2018; Hayase et al., 2021) used for analyzing backdoors, but we define the alignment with cosine similarity while they use the inner product. Compared with the inner product, cosine similarity eliminates the influence of scale. In Appendix B.1, we provide a complete procedure for calculating the metrics. We give an intuitive explanation of why the defined variation captures the change of features along different eigenvectors under attack. A toy model for illustration is shown in Figure 3. Suppose the distribution shifts from Figure 3(a) to (b) under attack; we draw an original data point x and its shifted version x_adv. Take the fixed direction u_2 as an illustration. The cosine similarity between x and u_2 is cos(θ) = ⟨x, u_2⟩ / (‖x‖ · ‖u_2‖), which is the alignment defined above. If the distribution moves from Figure 3(a) to (b), then the cosine similarity cos(θ) increases.
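The alignment and variation of Eqs. (3)–(4) are straightforward to compute; a minimal sketch with synthetic features (the toy perturbation that pushes mass along a fixed direction is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def alignment(features, u):
    """Eq. (3): mean |<h(x), u>| / (||h(x)|| ||u||) over a dataset."""
    cos = np.abs(features @ u) / (np.linalg.norm(features, axis=1)
                                  * np.linalg.norm(u))
    return cos.mean()

def variation(adv_features, nat_features, u):
    """Eq. (4): ratio of alignment on perturbed vs. natural features."""
    return alignment(adv_features, u) / alignment(nat_features, u)

# Toy check: perturbing features along a fixed direction u drives the
# variation along u above 1, exactly as the intuition above describes.
d, n = 16, 500
u = np.zeros(d); u[0] = 1.0
nat = rng.normal(size=(n, d))
adv = nat + 2.0 * np.sign(nat[:, :1]) * u   # push mass along u

assert variation(adv, nat, u) > 1.0          # adversary "adds components" on u
assert abs(variation(nat, nat, u) - 1.0) < 1e-12
```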
Consequently, the change in cosine similarity can describe how the features change along a direction under attack. For the variation defined in Eq. (4), r(D_adv, D, u_j) > 1 means the features gain components along direction u_j, and vice versa. We can compare r(D_adv, D, u_j) with 1 to observe whether the adversary adds or reduces components along the various eigenvectors.

The adversary adds more components along the eigenvectors with smaller eigenvalues. We visualize the variation of the features along the different eigenvectors {u_1, ..., u_512}. The results for CIFAR-10 and SVHN are shown in Figure 4. The attacks used include FGSM (Goodfellow et al., 2015), PGD (step size ε/10 for 10 steps) (Madry et al., 2018), and the C&W attack (Carlini & Wagner, 2017). We set the attack budget ε = 4/255, constrained by the ℓ∞ norm. As observed in Figure 4(a), the variation stays close to or smaller than 1 for the several largest eigenvalues in the standard trained model. However, the variation for the smaller eigenvalues is much larger than 1. This means that FGSM tends to add more components along the eigenvectors u_j with smaller eigenvalues in standard training. A similar phenomenon also exists for PGD and C&W, which verifies the hypothesis of Section 3.1. We also provide the curves for varying budget ε and the variation on CIFAR-100 in Appendix B.2. For models trained by AT, the variation along all eigenvectors stays close to 1, which is distinctly different from standard training: the high variation along directions with smaller eigenvalues visibly decreases. It is worth noting that we use cosine similarity in the definition, so the influence of the scale of the features is eliminated. Therefore, the components corresponding to the larger eigenvalues are more robust, while the smaller ones are more fragile. Adopting the view of robust features (Ilyas et al.
, 2019 ) , the features along the eigenvectors with larger eigenvalues are regarded as robust features , and these along the direction with smaller eigenvalues are non-robust features . The analysis above motivates us to regularize the spectrum signatures to alleviate the rapid decreasing tendency . If we alleviate the dominance of the top eigenvalues , the adversarial robustness is improved .
By analyzing the spectral difference between natural and adversarial examples, this paper finds that eigenvectors with smaller eigenvalues are less robust, and that the adversary tends to add more components along these directions. To eliminate the dominance of the top eigenvalues, the paper proposes Feature Spectral Regularization (FSR), which penalizes the largest eigenvalues while relatively increasing the smaller ones. Several experimental results demonstrate that FSR can further improve robustness when combined with other adversarial defenses.
SP:f4d7d9742f307cdab50457de575dc4f828d432ec
Delving into Feature Space: Improving Adversarial Robustness by Feature Spectral Regularization
1 INTRODUCTION . It is shown that the performance of Deep Neural Networks (DNNs) decreases dramatically when confronted with adversarial examples (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2015). The vulnerability of DNNs poses potential risks to safety-critical deployments (Finlayson et al., 2019). To mitigate this vulnerability, numerous methods have been proposed to improve adversarial robustness (Papernot et al., 2016b; Madry et al., 2018; Xie et al., 2019). Among these, adversarial training (AT) (Madry et al., 2018) is the most effective approach, achieving state-of-the-art performance under various attacks (Croce et al., 2020). Different from standard training, adversarial training trains DNNs on adversarial examples rather than natural examples. Numerous methods have been proposed to close the gap between the features of natural and adversarial examples (Kannan et al., 2018; Zhang & Wang, 2019; Zhang et al., 2019). However, they constrain the features as a whole without distinguishing the contributions of individual features. This may be inappropriate, since each feature may play a different role in robustness. Ilyas et al. (2019) argued that adversarial examples result from non-robust features, i.e., well-generalizing but brittle features. This inspires us to treat each feature separately in the adversarial setting. While Bai et al. (2021) and Yan et al. (2021) considered the influence of different channels on robustness, the problem has not been explored from the perspective of spectral signatures, i.e., the eigenvalues and eigenvectors of the feature covariance. The feature space can be split into many spectral components, and it is worth analyzing which components are beneficial for adversarial robustness and which are fragile under attack.
In this paper, we show that the spectral signatures of deep features have a close connection with adversarial robustness. By applying principal component analysis (PCA) to deep features, we can split the feature space into eigenvectors with associated eigenvalues. Our motivation comes from the phenomenon that standard training often yields a sharp distribution of eigenvalues (Yu et al., 2020), i.e., the eigenvalues rapidly become very small, as shown in Figure 1. This property may be beneficial for natural accuracy (Lezama et al., 2018; Papyan et al., 2020), but its impact on robust generalization is unclear. We hypothesize that the sharp distribution of eigenvalues impels models to learn less diverse features and is a cause of the vulnerability of DNNs. A minority of eigenvalues account for the overwhelming majority of the spectral mass, which may make the model pay little attention to features along the eigenvectors with smaller eigenvalues, so features along such directions are not generalized during training. To verify this hypothesis, we define a new metric to measure the variation of features along different eigenvectors under attack, as shown in Figure 4. Our observations reveal that the adversary tends to add components along eigenvectors with smaller eigenvalues, and that this large variation is distinctly alleviated by AT. We therefore propose to improve adversarial robustness by flattening the sharp distribution of eigenvalues. Considering that more eigenvalues should share the spectral mass, we propose a regularizer named Feature Spectral Regularization (FSR) that penalizes the largest eigenvalue of the feature covariance matrix. Empirical evidence shows that FSR relatively increases the remaining eigenvalues, making models attend to more spectral components during training. We also provide a theoretical explanation based on robust linear regression.
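FSR, as described above, penalizes the largest eigenvalue of the feature covariance matrix. A minimal NumPy sketch of that penalty (the function name and interface are illustrative, not the paper's released implementation):

```python
import numpy as np

def fsr_penalty(features):
    """Largest eigenvalue of the centered feature covariance.

    Adding this term to the training loss discourages a single dominant
    spectral component. `features` has shape (n_samples, d). Name and
    interface are illustrative, not the paper's exact implementation.
    """
    z = features - features.mean(axis=0, keepdims=True)  # center features
    cov = z.T @ z / len(z)                               # d x d covariance
    return float(np.linalg.eigvalsh(cov)[-1])            # eigvalsh sorts ascending
```

In training, such a term would be computed on mini-batch features and added to the cross-entropy loss with a weighting coefficient; for backpropagation one would use a framework's differentiable eigensolver or power iteration rather than NumPy.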
Comprehensive experiments confirm that FSR indeed improves adversarial robustness on several datasets. Our contributions are summarized as follows: • We find a close connection between the spectral signatures of features and adversarial robustness. On one hand, a standard trained model presents a sharp distribution of eigenvalues, which is beneficial for natural accuracy but harmful in the adversarial setting. On the other hand, the adversary tends to add components along eigenvectors with smaller eigenvalues. • We propose Feature Spectral Regularization (FSR) to relatively increase the overall eigenvalues in deep feature space, thus alleviating the sharp distribution of eigenvalues. Furthermore, we provide a theoretical explanation based on robust linear regression. • We empirically show, through comprehensive experiments, that FSR improves adversarial robustness and alleviates the sharp distribution of eigenvalues. 2 RELATED WORK . Adversarial Defense . Many defense methods have been proposed to improve adversarial robustness since the discovery of adversarial examples (Papernot et al., 2016b; Xie et al., 2019; Carmon et al., 2019; Zhang et al., 2020). However, many of them have been shown to be ineffective because they rely on obfuscated gradients (Athalye et al., 2018) or gradient masking (Papernot et al., 2017). Among the remaining defenses, adversarial training (Madry et al., 2018) is now regarded as the state-of-the-art method (Rice et al., 2020; Pang et al., 2021). Distinct from standard training, adversarial training trains the DNN on adversarial examples: min_θ E_{(x,y)∈D} max_{‖δ‖≤ε} L_CE(x + δ, y; θ) (1) where D is the dataset, θ denotes the parameters of the DNN, δ is a perturbation within the ε-ball, and L_CE is the cross-entropy (CE) loss. By introducing a trade-off between robustness and generalization, TRADES (Zhang et al.
, 2019) is another framework that reaches robustness comparable to AT. Among the methods built on AT, Adversarial Weight Perturbation (AWP) (Wu et al., 2020) explicitly regularizes the flatness of the weight loss landscape via a double-perturbation mechanism, which yields large improvements in adversarial robustness and alleviates overfitting in AT (Rice et al., 2020). Spectral Signatures of Feature Representations . Several studies have revealed that the spectral signatures of features influence performance in various learning tasks. Spectral properties are crucial for detecting backdoors (Tran et al., 2018; Hayase et al., 2021). The eigenvectors corresponding to the larger eigenvalues have been found to dominate the transferability of features in adversarial domain adaptation (Chen et al., 2019b). Chen et al. (2019a) explore the correlation between negative transfer and the spectral components of features and weights in inductive transfer learning. Using the principle of Maximal Coding Rate Reduction (MCR2), it has been theoretically shown that the largest several singular values of the feature matrix for each class should be equal in order to learn maximally diverse representations (Yu et al., 2020; Chan et al., 2020). Different from these studies, we analyse the connection between adversarial robustness and the spectral components of deep features. We aim to identify which components are more fragile under attack, and propose a method to boost adversarial robustness by constraining spectral properties. 3 SPECTRAL ANALYSIS IN FEATURE SPACE . In this section, we investigate the connection between spectral signatures and adversarial robustness. Concretely, we train models by both standard training and adversarial training (Madry et al., 2018), and then apply spectral decomposition to obtain the spectral signatures.
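The inner maximization in the adversarial training objective of Eq. (1) is typically approximated with projected gradient descent (PGD). A minimal sketch for a linear logistic model with loss log(1 + exp(-y·w·x)), using the ε/10 step size mentioned in Section 3.2 (illustrative only, not the authors' attack code):

```python
import numpy as np

def pgd_linear(x, y, w, eps, steps=10):
    """l_inf PGD approximating the inner max of Eq. (1) for a linear
    logistic model. Purely illustrative, not the paper's setup."""
    alpha = eps / steps          # step size eps/10 for 10 steps
    delta = np.zeros_like(x)
    for _ in range(steps):
        margin = y * np.dot(w, x + delta)
        grad = -y * w / (1.0 + np.exp(margin))   # d(logistic loss)/d(input)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta
```

For a deep network the analytic gradient above is replaced by a backward pass through the model, and the outer minimization of Eq. (1) updates θ on the resulting perturbed batch.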
We find that in standard training the eigenvalues reveal a rapidly descending curve, while this tendency is alleviated by AT. We then find that the adversary tends to add components along eigenvectors with smaller eigenvalues. 3.1 CURVE OF EIGENVALUES AND ADVERSARIAL ROBUSTNESS . Given a dataset D = {(x_i, y_i)}_{i=1}^n with C classes, x_i is the input data and y_i is its label. The DNN is composed of a feature extractor h(·): R^D → R^d and a linear classifier g(·): R^d → R^C. After centralizing the learned features (i.e., (1/n) Σ_{i=1}^n h(x_i) = 0), we decompose them by spectral decomposition, analogous to PCA: (1/n) Σ_{i=1}^n h(x_i) h(x_i)^T = Σ_{j=1}^d λ_j u_j u_j^T (2) where λ_j is the j-th eigenvalue and u_j ∈ R^d is its eigenvector. Specifically, we train ResNet-18 (He et al., 2016) with both standard training and adversarial training on CIFAR-10. The parameters for AT are the same as in (Rice et al., 2020). We calculate the eigenvalues via Eq. (2) and plot them in Figure 2. The features come from the penultimate layer (512 dimensions) and are extracted from the CIFAR-10 test set. Only part of the eigenvalues is shown for better visualization. Difference of models in spectral analysis . As shown in Figure 2 (a)(b), the eigenvalues of a standard trained model drop rapidly at some point, while this sharp decrease is much alleviated by AT. The sharp spectral signature of standard training makes only a few eigenvalues informative from the viewpoint of PCA. The model fails to explore the influence of eigenvectors with smaller eigenvalues on classification, so the trained model cannot recognize changes of features along those eigenvectors. Eigenvectors that may carry useful features are overly penalized.
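The decomposition of Eq. (2), together with the 90%-variance component count used as PC-ID in the next paragraph, can be sketched in NumPy (an illustrative analysis pipeline, not the authors' released code):

```python
import numpy as np

def spectrum_and_pcid(features, var_threshold=0.90):
    """Eigenvalues of the centered feature covariance (Eq. 2) and the
    number of principal components explaining `var_threshold` of the
    variance (PC-ID, in the sense of Ansuini et al., 2019)."""
    z = features - features.mean(axis=0, keepdims=True)  # centralize
    cov = z.T @ z / len(z)
    eigvals = np.linalg.eigvalsh(cov)[::-1]              # descending order
    ratio = np.cumsum(eigvals) / eigvals.sum()           # explained variance
    pc_id = int(np.searchsorted(ratio, var_threshold) + 1)
    return eigvals, pc_id
```

Applied to penultimate-layer features (n × 512), the sorted `eigvals` give the curve of Figure 2 and `pc_id` the solid-circle markers.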
Consequently, we propose the hypothesis that the severe dominance of the top eigenvectors is a cause of vulnerability in DNNs, and that the adversary adds components along eigenvectors with smaller eigenvalues. We verify this hypothesis in the next section. Connection with intrinsic dimension . We introduce intrinsic dimension (ID) to quantitatively describe the decreasing tendency of the eigenvalues. ID (i.e., the minimal number of parameters needed to describe a representation) has a close connection with natural accuracy (Ansuini et al., 2019): a reduction of ID has been found to contribute to an improvement in natural accuracy. We adopt PC-ID (Ansuini et al., 2019) to estimate ID, defined as the number of principal components needed to describe 90% of the variance. We mark the ID in Figure 2 with solid circles. The results reveal that the ID of a standard trained model is very small, while models obtained by AT achieve a higher ID, which is the opposite of the effect of ID on natural accuracy. This also confirms, from the perspective of ID, the trade-off between generalization and robustness (Tsipras et al., 2019; Zhang et al., 2019). FSR can further increase ID on top of AT. 3.2 VARIATION ALONG EIGENVECTORS UNDER ATTACKS . In this section, we aim to verify the hypothesis that the adversary adds components along eigenvectors with smaller eigenvalues during the attack. We define a metric, called variation, to quantitatively observe how features change along different eigenvectors under attack. Definition 1 (Alignment) Given a dataset D_s = {(x_{s,i}, y_{s,i})}_{i=1}^n, which may be perturbed.
The alignment of D_s to a pre-given direction u_j is the expectation of the cosine similarity between the features extracted by the DNN and the direction vector u_j: align(D_s, u_j) = E_{(x_{s,i}, y_{s,i}) ∈ D_s} |⟨h(x_{s,i}), u_j⟩| / (‖h(x_{s,i})‖ · ‖u_j‖) (3) where ‖·‖ is the Euclidean norm and u_j is calculated by Eq. (2). The calculation of u_j is based on the feature covariance of natural samples, and u_j is fixed. Definition 2 (Variation) Given a dataset D consisting of natural examples and its perturbed counterpart D_adv, the variation along direction u_j is the ratio between the alignment on D_adv and on D: r(D_adv, D, u_j) = align(D_adv, u_j) / align(D, u_j) (4) The alignment is correlated with the distance between the subspace spanned by u_j and the actual feature space, so the change of alignment under attack, which we call variation, is suitable for describing the influence of an attack along direction u_j. Our metric is similar to those of (Tran et al., 2018; Hayase et al., 2021) used for analyzing backdoors, but we define the alignment by cosine similarity while they use the inner product; compared with the inner product, cosine similarity eliminates the influence of scale. In Appendix B.1, we provide the complete procedure to calculate the metrics. We now give an intuitive explanation of why the defined variation captures the change of features along different eigenvectors under attack. A toy model is shown in Figure 3. Suppose the distribution shifts from Figure 3(a) to (b) under attack; we draw an original data point x and its shifted counterpart x_adv. Take the fixed direction u_2 as an illustration. The cosine similarity between x and u_2 equals cos(θ), i.e., cos(θ) = ⟨x, u_2⟩ / (‖x‖ · ‖u_2‖), which is the alignment defined above. If the distribution moves from Figure 3(a) to (b), then cos(θ) increases.
Consequently, the change of the cosine similarity describes how the features change along a direction under attack. For the variation defined in Eq. (4), r(D_adv, D, u_j) > 1 means the features gain components along direction u_j, and vice versa. We can thus compare r(D_adv, D, u_j) with 1 to observe whether the adversary adds or removes components along the various eigenvectors. The adversary adds more components along the eigenvectors with smaller eigenvalues . We visualize the variation of features along the eigenvectors {u_1, ..., u_512}. The results on CIFAR-10 and SVHN are shown in Figure 4. The attacks used are FGSM (Goodfellow et al., 2015), PGD (step size ε/10 for 10 steps) (Madry et al., 2018), and the C&W attack (Carlini & Wagner, 2017). We set the attack budget ε = 4/255 under the ℓ∞ norm. As observed in Figure 4(a), for a standard trained model the variation stays close to or below 1 for the several largest eigenvalues, while the variation for the smaller eigenvalues is much larger than 1. This means that FGSM tends to add components along the eigenvectors u_j with smaller eigenvalues under standard training. A similar phenomenon exists for PGD and C&W, which verifies the hypothesis in Section 3.1. We also provide curves for varying budgets and the variation on CIFAR-100 in Appendix B.2. For models trained by AT, the variation of all eigenvectors stays close to 1, distinctly different from standard training: the high variation along directions with smaller eigenvalues visibly decreases. Note that since the definition uses cosine similarity, the influence of the feature scale is eliminated. Therefore, the components corresponding to the larger eigenvalues are more robust while the smaller ones are more fragile. Adopting the view of robust features (Ilyas et al.
, 2019), the features along the eigenvectors with larger eigenvalues can be regarded as robust features, and those along the directions with smaller eigenvalues as non-robust features. The analysis above motivates us to regularize the spectral signatures to alleviate the rapidly decreasing tendency: if we alleviate the dominance of the top eigenvalues, adversarial robustness improves.
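The alignment and variation metrics of Eqs. (3) and (4) can be sketched directly in NumPy, with the directions u_j taken from the eigendecomposition of the natural-feature covariance in Eq. (2) (an illustrative sketch, not the procedure of Appendix B.1):

```python
import numpy as np

def alignment(feats, u):
    """Eq. (3): mean absolute cosine similarity between rows of `feats`
    (one feature vector h(x) per sample) and the direction u."""
    num = np.abs(feats @ u)
    den = np.linalg.norm(feats, axis=1) * np.linalg.norm(u)
    return float(np.mean(num / den))

def variation(feats_adv, feats_nat, u):
    """Eq. (4): ratio of adversarial to natural alignment along u."""
    return alignment(feats_adv, u) / alignment(feats_nat, u)
```

Plotting variation(·) over the eigenvectors u_1, ..., u_512 sorted by eigenvalue reproduces the kind of curve shown in Figure 4; values above 1 indicate directions where the attack adds components.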
The paper presents a new method to improve the robustness of features under adversarial attacks. The authors develop a new metric for the change of features under attack; the key finding is that eigenvectors with small eigenvalues are more inclined to change under adversarial attacks, i.e., are non-robust. The authors argue that the dominance of the largest eigenvalues and their eigenvectors is a primary cause, and that flattening the spectrum of features would help mitigate this issue. They propose to suppress the largest eigenvalues during training, i.e., spectral regularization, which shows positive results in adversarial defense when working together with SOTA defense models.
Delving into Feature Space: Improving Adversarial Robustness by Feature Spectral Regularization
1 INTRODUCTION . It is shown that the performance of Deep Neural Networks ( DNNs ) decreases dramatically when confronted with adversarial examples ( Biggio et al. , 2013 ; Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) . The vulnerability of DNNs brings potential risk to safety-critical deployments ( Finlayson et al. , 2019 ) . To mitigate this vulnerability of DNNs , numerous methods are proposed to improve adversarial robustness ( Papernot et al. , 2016b ; Madry et al. , 2018 ; Xie et al. , 2019 ) . Among these , adversarial training ( AT ) ( Madry et al. , 2018 ) is the most effective approach that achieves state-ofthe-art performance under various attacks ( Croce et al. , 2020 ) . Different from standard training , adversarial training trains DNNs on adversarial examples rather than natural examples . There are numerous methods proposed to eliminate the gap of features between natural examples and adversarial examples ( Kannan et al. , 2018 ; Zhang & Wang , 2019 ; Zhang et al. , 2019 ) . However , they constrain the features on the whole without considering the distinction of contribution from an individual feature . This may be inappropriate since every feature may play a different role in robustness . The work of ( Ilyas et al. , 2019 ) argued that adversarial examples result from non-robust features ( Ilyas et al. , 2019 ) , i.e. , well-generalizing but brittle features . This inspires us that we could treat each feature separately in the adversarial setting . While the work of ( Bai et al. , 2021 ; Yan et al. , 2021 ) considered the influence of different channels on robustness , the problem has not been explored clearly from the perspective of spectral signatures , i.e. , the eigenvalues and eigenvectors of feature covariance . The space is split into many spectral components , and then it is worth analyzing which component is beneficial for adversarial robustness and which is fragile under attack . 
In this paper , we show that spectral signatures of deep features have a close connection with adversarial robustness . Through applying principal component analysis ( PCA ) to deep features , we could split feature space to various eigenvectors with its eigenvalues . Our motivation comes from the phenomenon that the standard trained model often results in a sharp distribution of eigenvalues ( Yu et al. , 2020 ) , i.e. , the eigenvalues rapidly become very small as shown in Figure 1 . This property may be beneficial for natural accuracy ( Lezama et al. , 2018 ; Papyan et al. , 2020 ) , while its impact on robust generalization is unclear . However , we hypothesize that the sharp distribution of eigenvalues impels models to learn less diverse features and is a cause of the vulnerability of DNNs . A minority of eigenvalues occupy the overwhelming majority in the sum of eigenvalues , which may make the model pay little attention to features along the eigenvectors with smaller eigenvalues , so features along such directions are not generalized during training . To verify our hypothesis , we define a new metric to measure the variation of features along different eigenvectors under attacks , as shown in Figure 4 . Our observation reveals that the adversary tends to add more components along eigenvectors with smaller eigenvalues , and such huge variation could be distinctly alleviated by AT . Therefore , we propose to improve adversarial robustness by alleviating the sharp distribution of eigenvalues . Considering that more eigenvalues should occupy the majority , we propose a regularizer named Feature Spectral Regularization ( FSR ) to penalize the largest eigenvalue of the feature matrix covariance . Empirical evidence shows that FSR increases the overall eigenvalues relatively , making models focus on more spectral components during training . We also provide a theoretical explanation with robust linear regression . 
Comprehensive experiments confirm that FSR indeed improves the adversarial robustness on several datasets . Our contributions are summarized as follows : • We find a close connection between spectral signatures of features and adversarial robustness . On one hand , standard trained model presents a sharp distribution of eigenvalues , which is beneficial for natural accuracy while harmful in adversarial setting . On the other hand , the adversary tends to add more quantity along eigenvectors with smaller eigenvalues . • We propose Feature Spectral Regularization ( FSR ) to increase the overall eigenvalues relatively in deep feature space , thus alleviating the sharp distribution of eigenvalues . Furthermore , we provide a theoretical explanation based on robust linear regression . • We empirically show that FSR improves adversarial robustness and alleviates the sharp distribution of eigenvalues , through comprehensive experiments . 2 RELATED WORK . Adversarial Defense . Many defense methods have been proposed to improve adversarial robustness since the discovery of adversarial examples ( Papernot et al. , 2016b ; Xie et al. , 2019 ; Carmon et al. , 2019 ; Zhang et al. , 2020 ) . However , many of them are proven to be noneffective because they highly depend on obfuscated gradients ( Athalye et al. , 2018 ) or gradient masking ( Papernot et al. , 2017 ) . Among these , adversarial training ( Madry et al. , 2018 ) is now regarded as the state-of-theart method ( Rice et al. , 2020 ; Pang et al. , 2021 ) . Distinguished form standard training , adversarial training trains DNN on adversarial examples : min θ max ‖δ‖≤ E ( x , y ) ∈D LCE ( x+ δ , y ; θ ) ( 1 ) where , D is the dataset , the parameters of DNN are denoted as θ , δ means the perturbation within the -ball , and LCE is the cross-entropy ( CE ) loss . By introducing a trade-off between robustness and generalization , TRADES ( Zhang et al. 
, 2019 ) is another framework that reaches comparative robustness with AT . Among the proposed methods based on AT , Adversarial Weight Perturbation ( AWP ) ( Wu et al. , 2020 ) explicitly regularizes the flatness of weight loss landscape , and forms a double-perturbation mechanism , which shows huge improvement on adversarial robustness and alleviates the overfitting in AT ( Rice et al. , 2020 ) . Spectral Signatures of Feature Representations . Some studies have revealed that the spectral signatures of features influence the performance in various learning tasks . The spectral properties are crucial to detect backdoors ( Tran et al. , 2018 ; Hayase et al. , 2021 ) . The eigenvectors corresponding to the larger eigenvalues are found to dominate the transferability of features in adversarial domain adaptation ( Chen et al. , 2019b ) . The work of ( Chen et al. , 2019a ) explores the correlation between negative transfer and the spectral components of features and weights in inductive transfer learning . By utilizing the principle of Maximal Coding Rate Reduction ( MCR2 ) , it is theoretically proven that the larger several singular values of feature matrix for every class should be equal to learn the maximally diverse representation ( Yu et al. , 2020 ; Chan et al. , 2020 ) . Different from these studies , we analyse the connection between adversarial robustness and spectral components of deep features . We aim to explore which components are more fragile under attacks , and propose a method to boost adversarial robustness by constraining spectral properties . 3 SPECTRAL ANALYSIS IN FEATURE SPACE . In this section , we investigate the connection between spectral signatures and adversarial robustness . Concretely , we train models by standard training and adversarial training ( Madry et al. , 2018 ) , and then apply spectral decomposition to attain the spectral signatures . 
We find that in standard training the eigenvalues reveal a rapidly descending curve while this tendency is alleviated by AT . Then , we find that the adversary tends to add more quantity along eigenvectors with smaller eigenvalues . 3.1 CURVE OF EIGENVALUES AND ADVERSARIAL ROBUSTNESS . Given a dataset D = { ( xi , yi ) } ni=1 including C classes , xi represents the input data and yi is the label . DNN is composed of a feature extractor h ( · ) : RD → Rd and a linear classifier g ( · ) : Rd → RC . After centralizing the learned features ( i.e . 1n ∑n i=1 h ( xi ) = 0 ) , we decompose the learned features by spectral decomposition , which is similar to PCA : 1 n n∑ i=1 h ( xi ) h ( xi ) T = d∑ j=1 ujλju T j ( 2 ) where λj means the eigenvalues with index j and uj ∈ Rd represents its eigenvector . Specifically , we train ResNet-18 ( He et al. , 2016 ) using both standard training and adversarial training on CIFAR-10 . The parameters for AT are the same as ( Rice et al. , 2020 ) . We calculate the eigenvalues by applying Eq . ( 2 ) , and the eigenvalues are plotted in Figure 2 . The features come from the penultimate layer ( 512 dimensions ) . All the features are extracted from the test set in CIFAR-10 . A part of the eigenvalues is shown for better visualization . advori Difference of models in spectral analysis . As shown in Figure 2 ( a ) ( b ) , the eigenvalues of a standard trained model drop rapidly at some point , while the sharp decrease of eigenvalues is much alleviated by AT . These sharp spectral signature in standard training makes just a few eigenvalues informative from the opinion of PCA . The model fails to explore the influence of eigenvectors with smaller eigenvalues on classification , so the trained model could not recognize the change of features along eigenvectors with smaller eigenvalues . The eigenvectors which may endow useful features are overly penalized . 
Consequently , we propose a hypothesis that the severe dominance of the top eigenvectors is a cause of vulnerability in DNN , and the adversary adds more components in eigenvectors with smaller eigenvalues . We will verify the proposed hypothesis in the next section . Connection with intrinsic dimension . We introduce intrinsic dimension ( ID ) to quantitatively describe the decreasing tendency in eigenvalues . ID ( i.e. , the minimal number of parameters needed to describe a representation ) has a close connection with natural accuracy ( Ansuini et al. , 2019 ) . It has been found that reduction of ID contributes to an improvement on natural accuracy . We adopt PC-ID proposed by ( Ansuini et al. , 2019 ) to estimate ID , which is determined by the number of principal components included to describe 90 % of the variance . We mark ID in Figure 2 with solid circles . The results reveal the ID of a standard trained model is very small , while models obtained by AT achieve a higher ID , which is contrary to the influence of ID on natural accuracy . This also verifies that there exists a trade-off between generalization and robustness ( Tsipras et al. , 2019 ; Zhang et al. , 2019 ) from the perspective of ID . FSR could further increase ID , based on AT . 3.2 VARIATION ALONG EIGENVECTORS UNDER ATTACKS . In this section , we aim to verify the hypothesis that adversary adds more components along eigenvectors with smaller eigenvalues in attacking stage . We define a metric to quantitatively observe the change of features along different eigenvectors under attack , called variation in this paper . Definition 1 ( Alignment ) Given a dataset Ds = { xs , i , ys , i } ni=1 which may be perturbed . 
The alignment of Ds to the pre-given direction uj is calculated by the expectation over cosine similarity between features extracted by DNN and the direction vector uj : align ( Ds , uj ) = E ( xs , i , ys , i ) ∈Ds |〈h ( xs , i ) , uj〉| ‖h ( xs , i ) ‖ · ‖uj‖ ( 3 ) where the norm ‖·‖ used is Euclidian norm , and uj is calculated by Eq . ( 2 ) . The calculation of uj is based on features covariance of natural samples , and uj is fixed . Definition 2 ( Variation ) Given a dataset D consist of natural examples and its perturbed dataset Dadv . The variation on direction uj is defined as the ratio between alignment on Dadv and D : r ( Dadv , D , uj ) = align ( Dadv , uj ) align ( D , uj ) ( 4 ) The alignment is correlated with the distance between subspace spanned by uj and the actual feature space , so the change of alignment under attack is suitable to describe the influence of attacks on direction uj , which is called as variation . Our metric is similar to ( Tran et al. , 2018 ; Hayase et al. , 2021 ) used for analyzing backdoors , but we define the alignment by cosine similarity while the latter uses the inner product . Compared with inner product , cosine similarity could eliminate the influence of scale . In Appendix B.1 , we provide a complete procedure to calculate the metrics . We give an intuitive explanation on why the defined variation is suitable to capture the change of features along different eigenvectors under attacks . An toy model for illustration is shown in Figure 3 . Suppose the distribution transfers from Figure 3 ( a ) to ( b ) under attack , we draw an original data x and its shifted data xadv . Take the fixed direction u2 as an illustration . The cosine similarity between x and u2 is equal to cos ( θ ) , i.e. , cos ( θ ) = 〈x , u2〉 / ‖x‖ · ‖u2‖ , which is called as alignment above . If the distribution moves from Figure 3 ( a ) to ( b ) , then the cosine similarity cos ( θ ) increases . 
Consequently, the change of the cosine similarity describes how the features change along a direction under attack. For the variation defined in Eq. (4), r(Dadv, D, uj) > 1 means the attack adds components along direction uj, and vice versa. We can therefore compare r(Dadv, D, uj) with 1 to observe whether the adversary adds or removes components along the various eigenvectors. The adversary adds more components along the eigenvectors with smaller eigenvalues. We visualize the variation of features along the different eigenvectors {u1, ..., u512}. The results on CIFAR-10 and SVHN are shown in Figure 4. The attacks used are FGSM (Goodfellow et al., 2015), PGD (step size ε/10 for 10 steps) (Madry et al., 2018), and the C&W attack (Carlini & Wagner, 2017), with attack budget ε = 4/255 under the ℓ∞ norm. As observed in Figure 4(a), for the standard-trained model the variation stays close to or below 1 along the directions with the largest eigenvalues, whereas the variation along directions with smaller eigenvalues is much larger than 1. This means that FGSM tends to add components along the eigenvectors uj with smaller eigenvalues under standard training. A similar phenomenon holds for PGD and C&W, which verifies the hypothesis in Section 3.1. We also provide the curves for varying budgets and the variation on CIFAR-100 in Appendix B.2. For models trained with AT, the variation stays close to 1 for all eigenvectors, which is distinctly different from standard training: the high variation along the directions with smaller eigenvalues visibly decreases. It is worth noting that because the definition uses cosine similarity, the influence of the scale of the features is eliminated. Therefore, the components corresponding to the larger eigenvalues are more robust while those corresponding to the smaller ones are more fragile. Adopting the view of robust features (Ilyas et al., 2019), the features along the eigenvectors with larger eigenvalues can be regarded as robust features, and those along the directions with smaller eigenvalues as non-robust features. The analysis above motivates us to regularize the spectrum signature to alleviate its rapidly decreasing tendency: if we alleviate the dominance of the top eigenvalues, adversarial robustness improves.
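The directions uj analyzed above are the eigenvectors of the covariance matrix of natural features (Eq. (2)). A minimal numpy sketch of how they can be obtained from a matrix of penultimate-layer features (names are ours, not the paper's):

```python
import numpy as np

def covariance_eigendirections(features):
    """Eigen-decomposition of the feature covariance matrix.

    `features` is an (n_samples, d) array of penultimate-layer features
    of natural examples.  Returns the eigenvalues in decreasing order and
    the corresponding unit eigenvectors u_1, ..., u_d as rows.
    """
    cov = np.cov(features, rowvar=False)        # (d, d) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]           # re-sort to decreasing order
    return eigvals[order], eigvecs[:, order].T  # row j is u_{j+1}
```

Because the covariance is symmetric, `eigh` is the appropriate (and numerically stable) decomposition; the eigenvalues it returns are real.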
The authors propose a new defense against adversarial attacks by means of spectral regularization. The defense is based on an inspection of the embedding of training/test data as returned by the penultimate layer of a neural network. The observation is that the relevant subspace of embedded natural data points (covering e.g. 90% of the variance) is quite low-dimensional, while adversarial training increases the dimensionality of this subspace. In addition, in the provided experiments the relevant subspace of embedded adversarial examples is typically somewhat higher-dimensional than the subspace of natural examples. Based on these observations, the authors propose a regularizing term which penalizes the variance in the direction of the first principal component (the largest eigenvalue of the feature covariance matrix). Experiments are conducted on CIFAR-10, CIFAR-100, and SVHN. The proposed regularizer is applied on top of other defense methods, which typically increases the accuracy under attacks by up to 2%.
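The regularizing term described in this summary can be sketched as follows. This is our illustrative reading of the idea (penalize the variance of a batch of features along their first principal component), not the authors' exact implementation:

```python
import numpy as np

def top_pc_variance_penalty(batch_features):
    """Variance of the batch features along their first principal
    component, i.e. the largest eigenvalue of the batch feature
    covariance matrix.  Added to the training loss, this term
    discourages a single dominant direction in feature space."""
    centered = batch_features - batch_features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(batch_features) - 1)
    return float(np.linalg.eigvalsh(cov)[-1])  # eigvalsh is ascending
```

In an actual training loop this penalty would be computed in the framework's autodiff (e.g. on the penultimate-layer activations of a mini-batch) and added to the task loss with a tunable weight.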
SP:f4d7d9742f307cdab50457de575dc4f828d432ec
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models
1 INTRODUCTION . Fine-tuning pretrained language models has proven to be highly effective across a wide range of NLP tasks ; the leaderboards for standard benchmarks are currently dominated by models that adopt this general strategy ( Rajpurkar et al. , 2016 ; 2018 ; Wang et al. , 2018 ; Yang et al. , 2018 ; Wang et al. , 2019 ) . Recent work has extended these findings in even more surprising ways : Artetxe et al . ( 2020 ) , Karthikeyan et al . ( 2019 ) , and Tran ( 2020 ) find evidence of transfer between natural languages , and Papadimitriou & Jurafsky ( 2020 ) show that pretraining language models on non-linguistic data such as music and computer code can improve test performance on natural language . Why does pretraining help even across what appear to be fundamentally different domains , and what are the limits of such cross-domain transfer ? In this work , we seek to inform these questions via a systematic exploration of how much cross-domain transfer we see when the model is denied any information about word identity . In this setting , we can vary the pretraining and fine-tuning examples dramatically while holding other aspects of the task constant . This allows us to quantify the extent of transfer , and it can yield insights into the wide-ranging transfer results cited above . Figure 1 gives an overview of our core experimental paradigm : starting with two identical copies of a single pretrained model for English , we fine-tune one on English examples and the other on scrambled English sentences , using a scrambling function F ( section 3 ) , and then we evaluate the resulting models . We apply this paradigm to four classification tasks and two sequence modeling tasks , and we evaluate bag-of-words baselines , LSTMs with GloVe initialization and rich attention mechanisms , and BERT . Our central finding is that only BERT is able to achieve robust cross-domain transfer , and for classification tasks but not sequence labeling ones . 
To try to understand why such transfer is successful for some tasks but not others , we pursue a number of hypotheses . First , we consider whether using a scrambling function F that matches word frequencies is required for transfer , and we find that such matching plays a small role , but not enough to account for the observed performance ( section 7.1 ) . Second , we assess whether frequency matching might actually be inserting semantic consistency into the scrambling process by , for example , systematically creating substitution pairs like good/great and professor/teacher ( section 7.2 ) . However , we find no evidence of such semantic consistency . Third , we try to isolate the contribution of pretraining versus fine-tuning by fine-tuning randomly initialized models of different sizes ( section 7.3 ) and by freezing the BERT parameters , such that only task-specific parameters are updated ( section 7.4 ) . These variations lead to a substantial drop in transfer , suggesting that finetuning is vital , although our LSTM results show that the BERT pretrained starting point is also an essential component . Fourth , we ask whether the fine-tuning process is primarily learning to reassociate scrambled words with their sources , and we find that it is not ( section 7.5 ) . While these findings do not fully account for the transfer we observe , they offer a partial explanation which should help guide future studies of this issue and which can help with practical fine-tuning work . 2 RELATED WORK . 2.1 EVIDENCE FOR TRANSFER . Transferability across domains is often used to benchmark large pretrained models such as BERT ( Devlin et al. , 2019b ) , RoBERTa ( Liu et al. , 2019b ) , ELECTRA ( Clark et al. , 2019 ) , and XLNet ( Yang et al. , 2019 ) . To assess transferability , pretrained models are fine-tuned for diverse downstream tasks ( Wang et al. , 2018 ; 2019 ) . Recently , pretrained Transformer-based models ( Vaswani et al. 
, 2017 ) have even surpassed estimates of human performance on GLUE ( Wang et al. , 2018 ) and SuperGLUE ( Wang et al. , 2019 ) . While the benefits of pretraining are reduced when there is a large train set ( Hernandez et al. , 2021 ) , there is little doubt that this pretraining process helps in many scenarios . 2.2 STUDIES OF WHY TRANSFER HAPPENS . There are diverse efforts underway to more deeply understand why transfer occurs . Probing tests often involve fitting supervised models on internal representations in an effort to determine what they encode . Such work suggests that BERT representations encode non-trivial information about morphosyntax and semantics ( Tenney et al. , 2019 ; Liu et al. , 2019a ; Hewitt & Manning , 2019 ; Manning et al. , 2020 ) and perhaps weakly encode world knowledge such as relations between entities ( Da & Kasai , 2019 ; Petroni et al. , 2019 ) , but that they contain relatively little about pragmatics or role-based event knowledge ( Ettinger , 2020 ) . Newer feature attribution methods ( Zeiler & Fergus , 2014 ; Springenberg et al. , 2015 ; Shrikumar et al. , 2017 ; Binder et al. , 2016 ; Sundararajan et al. , 2017 ) and intervention methods ( McCoy et al. , 2019 ; Vig et al. , 2020 ; Geiger et al. , 2020 ) are corroborating these findings while also yielding a picture of the internal causal dynamics of these models . Another set of strategies for understanding transfer involves modifying network inputs or internal representations and studying the effects of such changes on task performance . For instance , Tamkin et al . ( 2020 ) show that BERT ’ s performance on downstream GLUE tasks suffers only marginally even if some layers are reinitialized before fine-tuning , and Gauthier & Levy ( 2019 ) , Zanzotto et al . ( 2020 ) , Pham et al . ( 2020 ) , and Sinha et al . ( 2021 ) show that BERT-like models are largely insensitive to word order changes . 2.3 EXTREME CROSS-DOMAIN TRANSFER . 
Cross-domain transfer is not limited to monolingual cases ( Karthikeyan et al. , 2019 ) . With modifications to its tokenizer , English-pretrained BERT improves performance on downstream multilingual NLU tasks ( Artetxe et al. , 2020 ; Tran , 2020 ) . Papadimitriou & Jurafsky ( 2020 ) show that pretraining language models on structured non-linguistic data ( e.g. , MIDI music or Java code ) improves test performance on natural language . Our work complements and advances these efforts along two dimensions . First , we challenge models with extremely ambitious cross-domain settings and find that BERT shows a high degree of transfer , and we conduct a large set of follow-up experiments to help identify the sources and limitations of such transfer . 3 EXPERIMENTAL PARADIGM . We now describe the evaluation paradigm summarized in figure 1 ( section 3.1 ) , with special attention to the scrambling functions F that we consider ( sections 3.2–3.3 ) . 3.1 EVALUATION PIPELINE . Figure 1 shows our main evaluation paradigm for testing the transferability of a model without word identity information . On the left side , we show the classic fine-tuning pipeline ( i.e. , we fine-tune on the original English training set and evaluate on the original English test set ) . On the right side , we show our new evaluation pipeline : starting from a single model , we ( 1 ) fine-tune it with a corrupted training split where regular English word identities are removed and then ( 2 ) evaluate the model on a version of the evaluation set that is corrupted in the same manner . The paradigm applies equally to models without any pretraining and with varying degrees of pretraining for their model parameters . 3.2 SCRAMBLING WITH SIMILAR FREQUENCY . To remove word identities , we scrambled each sentence in each dataset by substituting each word w with a new word w′ in the vocabulary of the dataset . For Scrambling with Similar Frequency , we use the following rules : 1. 
w and w′ must have the same sub-token length according to the BERT tokenizer ; and 2. w and w′ must have similar frequency . The first rule is motivated by the concern that sub-token length may correlate with word frequency , given that rarer and longer words may be tokenized into longer sub-tokens . The second rule is the core of the procedure . The guiding idea is that word frequency is often reflected in learned embeddings ( Gong et al. , 2018 ) , so this scrambling procedure might preserve useful information and thus help to identify the source of transfer . Table 5 shows an example , and Appendix C provides details about the matching algorithm and additional examples of scrambled sentences . 3.3 RANDOM SCRAMBLING . To better understand the role of frequency in domain transfer , we also consider a word scrambling method that does not seek to match word frequencies . For this , we simply shuffle the vocabulary and match each word with another random word in the vocabulary without replacement . We include the distributions of the difference in frequency for every matched word pair in Appendix C to make sure a word is paired with a new word with drastically different frequency in the dataset . We also tried to pair words by the reverse order of frequencies , which yielded similar results , so we report only random scrambling results here . 4 MODELS . In this section , we describe the models we evaluated within our paradigm . Appendix B provides additional details about how the models were designed . BERT For our BERT model ( Devlin et al. , 2019a ) , we import weights from the pretrained BERTbase model through the HuggingFace transformers library ( Wolf et al. , 2020 ) . For sequence classification tasks , we append a classification head after the [ CLS ] token embedding in the last layer of the BERT model . If an input example contains a pair of sentences , we concatenate them using a [ SEP ] token in between . 
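Random scrambling amounts to a fixed bijection on the vocabulary. A minimal sketch under our own naming (the paper's actual matching algorithm for the frequency-matched variant is in its Appendix C):

```python
import random

def random_scramble_map(vocab, seed=0):
    """Random scrambling: map each word to another word drawn from the
    vocabulary without replacement, i.e. a random bijection.  This
    destroys both word identity and frequency information."""
    shuffled = list(vocab)
    random.Random(seed).shuffle(shuffled)
    return dict(zip(vocab, shuffled))

def scramble(sentence, mapping):
    """Apply the fixed word-level mapping to a whitespace-tokenized
    sentence; unknown words are left unchanged."""
    return " ".join(mapping.get(w, w) for w in sentence.split())
```

Because the mapping is a bijection applied consistently to both the fine-tuning and evaluation splits, sentence length and co-occurrence structure are preserved while word identities are removed.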
For sequence labeling tasks , we append a shared classification head to each token embedding in the last layer of the BERT model . LSTM We contextualize our results against a strong LSTM-based model ( Hochreiter & Schmidhuber , 1997 ) . We lower-case each input sentence and tokenize it by separating on spaces and punctuation . We then use 300-dimensional GloVe embeddings ( Pennington et al. , 2014 ) 1 as inputs to a single-layer recurrent neural network with LSTM cells , with a hidden size of 64 . We use dot-product attention ( Luong et al. , 2015 ) to formulate a context vector for each sentence . Finally , we pass the context vector through a multilayer perceptron ( MLP ) layer to get the final prediction . For an input example with a pair of sentences , we concatenate two sentences together before feeding them into our LSTM encoder . For sequence labeling tasks , we directly feed the hidden state at each position to the MLP layer to get the final prediction . Bag-of-Words ( BoW ) Model We compare against a BoW classifier , which serves as a proxy of model performance when only given word co-occurrence information . For each sentence in a dataset , we first formulate a BoW vector that uses unigram representations of an input sentence . Then , we feed the BoW vector through a softmax classifier . For examples with a pair of sentences , we create two BoW vectors for each sentence , and concatenate them together before feeding them into the linear layer for predicting labels . For sequence labeling tasks , we use Conditional Random Fields models ( CRFs ; Lafferty et al. , 2001 ) with character-level unigram BoW features . Dummy Model We include a random classifier that generates predictions randomly proportional to the class distribution of the training set . We use this model to further contextualize our results . 1We use the Common Crawl cased version : http : //nlp.stanford.edu/data/glove.840B.300d.zip
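The dummy baseline described above (random predictions proportional to the training-set class distribution) can be sketched as follows; class and method names are ours:

```python
import numpy as np
from collections import Counter

class DummyClassifier:
    """Predicts labels at random, with probabilities proportional to the
    class distribution observed in the training labels."""

    def fit(self, labels, seed=0):
        counts = Counter(labels)
        self.classes = sorted(counts)
        total = sum(counts.values())
        self.probs = np.array([counts[c] / total for c in self.classes])
        self.rng = np.random.default_rng(seed)
        return self

    def predict(self, n):
        # Draw n labels i.i.d. from the empirical class distribution.
        return self.rng.choice(self.classes, size=n, p=self.probs)
```

Such a baseline lower-bounds task performance: any model that does not beat it has learned nothing useful from the (possibly scrambled) input.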
This work aims to identify the source of transfer learning in neural models. To this end, the authors set up a series of experiments where models are trained and tested on English data and on scrambled (or randomly replaced) data, keeping token-wise sentence length. Scrambled sentences use replacement tokens matched in frequency with the original token. Models are tested on standard benchmarks for text classification and sequence modelling, showing a drop in performance when the input is scrambled, and a similar drop with randomized input. Further analyses test whether the scrambling maintains sentence semantics (it doesn't), whether BERT is just retraining (there is some transfer, but the evidence is inconclusive), the effect of keeping BERT frozen (it depends on the layer), and finally whether fine-tuning reassociates word identities (it seems there is no reassociation).
SP:2cfcfae9f9e3b24520ac8001f5968ab9f8525a09
This paper elaborates on transfer learning and domain adaptation in language models. The authors argue that there are limits on how much information can be transferred when a model's input is scrambled or otherwise randomized. They use different strategies to randomize the training data, including frequency matching and completely random replacement. They find that only BERT shows high rates of transfer to the scrambled domains, and only on classification tasks. Their experimental results also exhibit the importance of pretraining on sequence labeling tasks where randomness occurs. For example, they show that word identities matter: when words are swapped with words of completely different frequencies (random scrambling), performance drops significantly. They also support the view of Sinha et al. (2021) and Ethayarajh (2019) that BERT may preserve frequency information better, and that this is a reason behind its superiority.
SP:2cfcfae9f9e3b24520ac8001f5968ab9f8525a09
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models
1 INTRODUCTION . Fine-tuning pretrained language models has proven to be highly effective across a wide range of NLP tasks ; the leaderboards for standard benchmarks are currently dominated by models that adopt this general strategy ( Rajpurkar et al. , 2016 ; 2018 ; Wang et al. , 2018 ; Yang et al. , 2018 ; Wang et al. , 2019 ) . Recent work has extended these findings in even more surprising ways : Artetxe et al . ( 2020 ) , Karthikeyan et al . ( 2019 ) , and Tran ( 2020 ) find evidence of transfer between natural languages , and Papadimitriou & Jurafsky ( 2020 ) show that pretraining language models on non-linguistic data such as music and computer code can improve test performance on natural language . Why does pretraining help even across what appear to be fundamentally different domains , and what are the limits of such cross-domain transfer ? In this work , we seek to inform these questions via a systematic exploration of how much cross-domain transfer we see when the model is denied any information about word identity . In this setting , we can vary the pretraining and fine-tuning examples dramatically while holding other aspects of the task constant . This allows us to quantify the extent of transfer , and it can yield insights into the wide-ranging transfer results cited above . Figure 1 gives an overview of our core experimental paradigm : starting with two identical copies of a single pretrained model for English , we fine-tune one on English examples and the other on scrambled English sentences , using a scrambling function F ( section 3 ) , and then we evaluate the resulting models . We apply this paradigm to four classification tasks and two sequence modeling tasks , and we evaluate bag-of-words baselines , LSTMs with GloVe initialization and rich attention mechanisms , and BERT . Our central finding is that only BERT is able to achieve robust cross-domain transfer , and for classification tasks but not sequence labeling ones . 
To try to understand why such transfer is successful for some tasks but not others , we pursue a number of hypotheses . First , we consider whether using a scrambling function F that matches word frequencies is required for transfer , and we find that such matching plays a small role , but not enough to account for the observed performance ( section 7.1 ) . Second , we assess whether frequency matching might actually be inserting semantic consistency into the scrambling process by , for example , systematically creating substitution pairs like good/great and professor/teacher ( section 7.2 ) . However , we find no evidence of such semantic consistency . Third , we try to isolate the contribution of pretraining versus fine-tuning by fine-tuning randomly initialized models of different sizes ( section 7.3 ) and by freezing the BERT parameters , such that only task-specific parameters are updated ( section 7.4 ) . These variations lead to a substantial drop in transfer , suggesting that finetuning is vital , although our LSTM results show that the BERT pretrained starting point is also an essential component . Fourth , we ask whether the fine-tuning process is primarily learning to reassociate scrambled words with their sources , and we find that it is not ( section 7.5 ) . While these findings do not fully account for the transfer we observe , they offer a partial explanation which should help guide future studies of this issue and which can help with practical fine-tuning work . 2 RELATED WORK . 2.1 EVIDENCE FOR TRANSFER . Transferability across domains is often used to benchmark large pretrained models such as BERT ( Devlin et al. , 2019b ) , RoBERTa ( Liu et al. , 2019b ) , ELECTRA ( Clark et al. , 2019 ) , and XLNet ( Yang et al. , 2019 ) . To assess transferability , pretrained models are fine-tuned for diverse downstream tasks ( Wang et al. , 2018 ; 2019 ) . Recently , pretrained Transformer-based models ( Vaswani et al. 
, 2017 ) have even surpassed estimates of human performance on GLUE ( Wang et al. , 2018 ) and SuperGLUE ( Wang et al. , 2019 ) . While the benefits of pretraining are reduced when there is a large train set ( Hernandez et al. , 2021 ) , there is little doubt that this pretraining process helps in many scenarios . 2.2 STUDIES OF WHY TRANSFER HAPPENS . There are diverse efforts underway to more deeply understand why transfer occurs . Probing tests often involve fitting supervised models on internal representations in an effort to determine what they encode . Such work suggests that BERT representations encode non-trivial information about morphosyntax and semantics ( Tenney et al. , 2019 ; Liu et al. , 2019a ; Hewitt & Manning , 2019 ; Manning et al. , 2020 ) and perhaps weakly encode world knowledge such as relations between entities ( Da & Kasai , 2019 ; Petroni et al. , 2019 ) , but that they contain relatively little about pragmatics or rolebased event knowledge ( Ettinger , 2020 ) . Newer feature attribution methods ( Zeiler & Fergus , 2014 ; Springenberg et al. , 2015 ; Shrikumar et al. , 2017 ; Binder et al. , 2016 ; Sundararajan et al. , 2017 ) and intervention methods ( McCoy et al. , 2019 ; Vig et al. , 2020 ; Geiger et al. , 2020 ) are corroborating these findings while also yielding a picture of the internal causal dynamics of these models . Another set of strategies for understanding transfer involves modifying network inputs or internal representations and studying the effects of such changes on task performance . For instance , Tamkin et al . ( 2020 ) show that BERT ’ s performance on downstream GLUE tasks suffers only marginally even if some layers are reinitialized before fine-tuning , and Gauthier & Levy ( 2019 ) , Zanzotto et al . ( 2020 ) , Pham et al . ( 2020 ) , and Sinha et al . ( 2021 ) show that BERT-like models are largely insensitive to word order changes . 2.3 EXTREME CROSS-DOMAIN TRANSFER . 
Cross-domain transfer is not limited to monolingual cases (Karthikeyan et al., 2019). With modifications to its tokenizer, English-pretrained BERT improves performance on downstream multilingual NLU tasks (Artetxe et al., 2020; Tran, 2020). Papadimitriou & Jurafsky (2020) show that pretraining language models on structured non-linguistic data (e.g., MIDI music or Java code) improves test performance on natural language. Our work complements and advances these efforts along two dimensions. First, we challenge models with extremely ambitious cross-domain settings and find that BERT shows a high degree of transfer. Second, we conduct a large set of follow-up experiments to help identify the sources and limitations of such transfer.

3 EXPERIMENTAL PARADIGM

We now describe the evaluation paradigm summarized in figure 1 (section 3.1), with special attention to the scrambling functions F that we consider (sections 3.2–3.3).

3.1 EVALUATION PIPELINE

Figure 1 shows our main evaluation paradigm for testing the transferability of a model without word identity information. On the left side, we show the classic fine-tuning pipeline (i.e., we fine-tune on the original English training set and evaluate on the original English test set). On the right side, we show our new evaluation pipeline: starting from a single model, we (1) fine-tune it on a corrupted training split in which regular English word identities are removed and then (2) evaluate the model on a version of the evaluation set that is corrupted in the same manner. The paradigm applies equally to models without any pretraining and to models with varying degrees of pretraining for their model parameters.

3.2 SCRAMBLING WITH SIMILAR FREQUENCY

To remove word identities, we scrambled each sentence in each dataset by substituting each word w with a new word w′ from the vocabulary of the dataset. For Scrambling with Similar Frequency, we use the following rules:

1. w and w′ must have the same sub-token length according to the BERT tokenizer; and
2. w and w′ must have similar frequency.

The first rule is motivated by the concern that sub-token length may correlate with word frequency, given that rarer and longer words may be tokenized into longer sub-tokens. The second rule is the core of the procedure. The guiding idea is that word frequency is often reflected in learned embeddings (Gong et al., 2018), so this scrambling procedure might preserve useful information and thus help to identify the source of transfer. Table 5 shows an example, and Appendix C provides details about the matching algorithm and additional examples of scrambled sentences.

3.3 RANDOM SCRAMBLING

To better understand the role of frequency in domain transfer, we also consider a word scrambling method that does not seek to match word frequencies. For this, we simply shuffle the vocabulary and match each word with another random word in the vocabulary, without replacement. To verify that this pairs each word with a new word of drastically different frequency, we include the distributions of the frequency difference for every matched word pair in Appendix C. We also tried pairing words by the reverse order of frequencies, which yielded similar results, so we report only random scrambling results here.

4 MODELS

In this section, we describe the models we evaluated within our paradigm. Appendix B provides additional details about how the models were designed.

BERT For our BERT model (Devlin et al., 2019a), we import weights from the pretrained BERT-base model through the HuggingFace transformers library (Wolf et al., 2020). For sequence classification tasks, we append a classification head after the [CLS] token embedding in the last layer of the BERT model. If an input example contains a pair of sentences, we concatenate them with a [SEP] token in between.
For sequence labeling tasks, we append a shared classification head to each token embedding in the last layer of the BERT model.

LSTM We contextualize our results against a strong LSTM-based model (Hochreiter & Schmidhuber, 1997). We lower-case each input sentence and tokenize it by separating on spaces and punctuation. We then use 300-dimensional GloVe embeddings (Pennington et al., 2014)¹ as inputs to a single-layer recurrent neural network with LSTM cells, with a hidden size of 64. We use dot-product attention (Luong et al., 2015) to formulate a context vector for each sentence. Finally, we pass the context vector through a multilayer perceptron (MLP) layer to get the final prediction. For an input example with a pair of sentences, we concatenate the two sentences before feeding them into our LSTM encoder. For sequence labeling tasks, we directly feed the hidden state at each position to the MLP layer to get the final prediction.

Bag-of-Words (BoW) Model We compare against a BoW classifier, which serves as a proxy of model performance when given only word co-occurrence information. For each sentence in a dataset, we first formulate a BoW vector that uses unigram representations of the input sentence. Then, we feed the BoW vector through a softmax classifier. For examples with a pair of sentences, we create one BoW vector per sentence and concatenate them before feeding them into the linear layer for predicting labels. For sequence labeling tasks, we use Conditional Random Fields models (CRFs; Lafferty et al., 2001) with character-level unigram BoW features.

Dummy Model We include a random classifier that generates predictions randomly in proportion to the class distribution of the training set. We use this model to further contextualize our results.

¹We use the Common Crawl cased version: http://nlp.stanford.edu/data/glove.840B.300d.zip
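The two scrambling procedures from sections 3.2–3.3 can be sketched as follows. This is a simplified illustration, not the paper's exact matching algorithm (which is described in its Appendix C): `subtok_len` stands in for the BERT tokenizer's sub-token length, and frequency matching is approximated by pairing neighbors in frequency-sorted order within each length bucket.

```python
import random

def frequency_matched_mapping(counts, subtok_len):
    """Sketch of 'Scrambling with Similar Frequency': within each bucket of
    words sharing the same sub-token length, sort by frequency and pair each
    word with its neighbor in the sorted order, so matched words have similar
    frequency. (A singleton bucket maps a word to itself here, whereas the
    paper requires a *new* word; this is a simplification.)"""
    buckets = {}
    for w in counts:
        buckets.setdefault(subtok_len(w), []).append(w)
    mapping = {}
    for words in buckets.values():
        words.sort(key=lambda w: counts[w])
        # rotate by one inside the bucket: each word maps to the word whose
        # frequency rank is adjacent to its own
        for w, w2 in zip(words, words[1:] + words[:1]):
            mapping[w] = w2
    return mapping

def random_mapping(vocab, seed=0):
    """Sketch of 'Random Scrambling': shuffle the vocabulary and pair each
    word with a random word, without replacement."""
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return dict(zip(vocab, shuffled))

def scramble(sentence, mapping):
    """Apply a word-substitution mapping to a whitespace-tokenized sentence."""
    return " ".join(mapping.get(w, w) for w in sentence.split())
```

Both functions return a bijection on the vocabulary, so scrambled corpora keep the same token inventory while (for `random_mapping`) destroying any frequency correspondence.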
This paper proposes an evaluation pipeline for pre-trained models by testing their transferability without word identity information. Specifically, they take an English pre-trained BERT off-the-shelf and fine-tune it with a corrupted English dataset. Those corrupted texts are designed to remove word identity information while preserving the word frequency information. They conduct experiments on 6 tasks.
Selective Token Generation for Few-shot Language Modeling
1 INTRODUCTION

Natural language processing (NLP) has recently achieved great progress using advanced neural language models (Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019; Clark et al., 2020; Raffel et al., 2019; Lan et al., 2019; Lewis et al., 2020). However, these neural models typically require large-scale training data for each individual task, and solving a new NLP task with only a few examples remains a challenging problem (Yin, 2020). In particular, natural language generation (NLG) with limited training data is an important yet more difficult task, because it requires fast adaptation of sequential prediction models across a wide range of applications including text summarization, question answering, data-to-text generation, and machine translation (Peng et al., 2020; Chen et al., 2020; Xu et al., 2021; Schick & Schütze, 2020; Chang et al., 2021; Radford et al., 2019; Lewis et al., 2020; Brown et al., 2020). More recently, pretrained language models (PLMs) have shown great generalization ability when combined with large-scale data and big transformer-based models (Devlin et al., 2019; Radford et al., 2019; Lewis et al., 2020; Brown et al., 2020; Subramanyam Kalyan et al., 2021). Therefore, transfer learning from transformer PLMs has been widely used for few-shot NLG tasks, with promising results. Specifically, the use of PLMs for few-shot NLG can be categorized into three approaches: 1) prompt-based, 2) finetuning, and 3) additive learning. Prompt-based approaches encode a task description and task-specific examples as a natural language prompt for few-shot text generation (Radford et al., 2019; Brown et al., 2020; Zheng & Huang, 2021; Schick & Schütze, 2020; Li & Liang, 2021).
While these approaches can take full advantage of the universal natural language understanding and generation capabilities of large-scale PLMs without further training of the main model, they have limitations in dealing with a large domain shift from the pretraining corpus, in tuning suitable task-specific prompts, and in covering an increased number of conditioning examples. On the other hand, finetuning the PLM can explicitly impart task-specific knowledge to the model and hence lift the above limitations (Ziegler et al., 2019; Xu et al., 2021; Chen et al., 2020). However, these finetuned models are prone to overfitting when only a small amount of training data is available. In order to alleviate this overfitting problem, additive learning has been extensively exploited by incorporating task-specific adapters into the PLM (Zeldes et al., 2020; Stickland & Murray, 2019). In general, task-specialized adapters for few-shot NLG are trained by maximum likelihood estimation (MLE). While MLE is efficient in learning, it suffers from the exposure bias problem due to the difference between the training and inference mechanisms (He et al., 2019), and this problem can be severe with limited training data. Reinforcement learning (RL) can resolve this exposure bias problem through sequential output sampling during training (Ranzato et al., 2015; Keneshloo et al., 2019; Shi et al., 2021). Moreover, it makes it possible to leverage target-specific sequence-level objectives such as BLEU and ROUGE (Wu et al., 2018; Guo et al., 2021). However, in NLG the exponentially large space of output sequences restricts the use of RL, since it leads to high variance and unstable training, which is even more serious in the few-shot setting. In this work, we develop a novel RL-based additive learning algorithm on top of a transformer-based PLM to overcome these shortcomings and to improve the performance of few-shot NLG.
In particular, we first cast the NLG task as sequential token generation with a transformer language model, and then propose selective token generation between the PLM and the task-specific adapter, during both RL-based training and inference. The proposed output token selection makes it possible not only to explicitly maintain general prior knowledge from the frozen PLM but also to focus only on the task-relevant parts of sequence generation. In addition, in few-shot learning this partial token generation makes the task-specific adapter more resilient to overfitting, and it furthermore reduces the overall output space, which leads to stable RL training. Here, in order to make the two token generators (policies) complement each other and to realize robust output selection at the token level on the fly, we exploit a separate token-level policy selector. Note that both the policy selector and the task-specific adapter are learned simultaneously by the RL algorithm. Experimental results on various few-shot NLG tasks show that the proposed selective token generation outperforms previous PLM-based additive learning algorithms that use comprehensive (non-selective) token generation. Our main contributions can be summarized as follows.

• A novel selective token generation between the PLM and the task-specific adapter is proposed for transformer-based few-shot NLG.
• A separate selecting module is exploited to adaptively determine each output token in a sequence at both training and testing time.
• RL is applied to train both the policy selector and the task-specific adapter that is complementary to the PLM in text generation.
• An extensive empirical validation on few-shot NLG tasks demonstrates that the proposed selective token generation performs better in comparison to previous PLM-based additive learning algorithms.

2 BACKGROUND

2.1 NATURAL LANGUAGE GENERATION
The goal of NLG is to generate a text sequence $y = [y_0, \dots, y_T]$ for a given task, where $y_t$ is the $t$-th output token from a vocabulary $\mathcal{V}$, and $T$ is the output sequence length. For this generation, we aim to model the distribution of $y$, which is autoregressively factorized as $p_\theta(y) = \prod_{t=0}^{T} p_\theta(y_t \mid y_{<t})$, where $\theta$ denotes the model parameters and $y_{<t} = [y_0, \dots, y_{t-1}]$. Here, the conditional distribution used to sample a token at each step, $p_\theta(y_t \mid y_{<t})$, is defined by the softmax function on the output logits $f_\theta(y_t \mid y_{<t})$. Note that in general, language generation is conditioned on input context according to the given task. We encode the conditioning context with the same sequential model used for generating an output sequence, and for simplicity we omit it. In this work, we use an autoregressive transformer as our generative model.

2.2 ADDITIVE LEARNING FOR FEW-SHOT GENERATION

To effectively leverage general linguistic knowledge, $\theta$ is first initialized with the PLM parameters $\theta_{LM}$ for NLG. Given $N$ task-specific training instances $\mathcal{D} = \{y^{n*}\}_{n=1}^{N}$, where $y^{n*}$ is the $n$-th ground-truth output sequence, directly finetuning $\theta_{LM}$ on $\mathcal{D}$ can incur a severe overfitting problem when $N$ is small in the few-shot scenario. Therefore, we add a task-specific adapter $g_{\theta_a}$, parameterized by $\theta_a$, on top of the PLM, and optimize only $\theta_a$ (Zeldes et al., 2020; Stickland & Murray, 2019). Specifically, we reformulate $f(\cdot \mid y_{<t}; \theta) = W^T h(y_{<t}; \theta_h)$, where $W \in \mathbb{R}^{H \times |\mathcal{V}|}$ and $h \in \mathbb{R}^{H}$ denote the weight matrix and the penultimate representations, respectively, and $\theta = \{W, \theta_h\}$. Then, we define the task-specific conditional distribution as follows:

$$p(y_t \mid y_{<t}; \theta_{LM}, \theta_a) = \mathrm{softmax}\big(W_{LM}^T h_{LM}(y_{<t}) + W_a^T g(h_{LM}(y_{<t}); \theta_g)\big), \qquad (1)$$

where $h_{LM}(y_{<t}) = h(y_{<t}; \theta_{h,LM})$ and $\theta_a = \{W_a, \theta_g\}$. Here, the summation of the PLM logits and the adapter logits is motivated by auxiliary training.¹
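Equation (1) can be sketched numerically. This is an illustrative stand-in, not the paper's implementation: weight matrices are stored with one length-H row per vocabulary item (i.e., already transposed), and `adapter` is any function R^H -> R^H standing in for g(.; θ_g). With an adapter whose output is zero, the distribution reduces exactly to the PLM's own softmax, which is the starting point the paper's footnote describes.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def additive_distribution(h_lm, W_lm_T, W_a_T, adapter):
    """Eq. (1) as a sketch: softmax(W_LM^T h_LM + W_a^T g(h_LM)).
    `W_lm_T` and `W_a_T` hold one row per vocabulary item (the transposed
    weight matrices); `adapter` stands in for g(.; theta_g)."""
    g = adapter(h_lm)
    logits = [
        sum(w * x for w, x in zip(row_lm, h_lm))
        + sum(w * x for w, x in zip(row_a, g))
        for row_lm, row_a in zip(W_lm_T, W_a_T)
    ]
    return softmax(logits)
```

In the additive-learning setup only the adapter weights would be trained, while the PLM rows stay frozen; the summation of the two logit terms is what lets training start from the PLM distribution rather than a uniform one.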
Note that in our additive learning $\theta_a$ is updated while $\theta_{LM}$ is kept frozen. Hence, in the following we omit $\theta_{LM}$, writing $p_{\theta_a}(y_t \mid y_{<t}) = p(y_t \mid y_{<t}; \theta_{LM}, \theta_a)$ for simplicity.

2.3 MAXIMUM LIKELIHOOD ESTIMATION (MLE)

Given a small amount of training data $\mathcal{D} = \{y^{n*}\}_{n=1}^{N}$, MLE optimizes $\theta$ by maximizing the data log-likelihood as follows:

$$\hat{\theta} = \arg\max_{\theta} \sum_{n=1}^{N} \sum_{t=0}^{T} \log p_\theta(y_t^{n*} \mid y_{<t}^{n*}). \qquad (2)$$

Here, the output token at each step is conditioned not on the previous tokens sampled from the current model but on the previous ground-truth tokens $y_{<t}^{n*}$. That is, tokens are drawn from the data distribution during training, whereas tokens are drawn from the model distribution at test time. This discrepancy, known as exposure bias, means that errors accumulate along the generated sequence at test time, since the model is biased to perform well only on the ground-truth history distribution. This bias problem can be especially severe in few-shot training. In addition, the token-level cross-entropy loss in MLE training differs from the sequence-level test metrics, such as BLEU and ROUGE, that are commonly used in NLG tasks.

2.4 REINFORCEMENT LEARNING (RL)

As an alternative to MLE, RL is able to overcome the exposure bias problem of MLE by sequence-level sampling from the model distribution during training (Ranzato et al., 2015). RL can also improve performance by directly optimizing the evaluation metrics (Guo et al., 2021). In order to use RL for our additive learning, we reformulate text generation as an RL problem: at each time step $t$, the agent takes the current state $s_t = y_{<t}$ as input and performs an action $a_t$ that outputs a token $y_t$ according to a policy $\pi_\theta(a_t \mid s_t)$ corresponding to $p_\theta(y_t \mid y_{<t})$. Then, the agent receives a reward $r_t = r(s_t, a_t)$ and deterministically transitions to the next state $s_{t+1}$.
Note that the token-level intermediate reward is $r_t = 0$ for all $t < T$ when we use a delayed reward associated with a sequence-level evaluation metric computed between the two full sequences $y$ and $y^*$. Let $\tau = \{(s_t, a_t, r_t)\}_{t=0}^{T}$ be the trajectory generated by $\pi_\theta$. The RL objective for the optimal agent is to maximize the expected sum of future discounted rewards:

$$J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \gamma^t r_t\right], \qquad (3)$$

where $\gamma \in [0, 1)$ is the discount factor.

¹Although the auxiliary training is particularly designed for maximizing the likelihood of the target task output, it is also advantageous for RL, since the adapter logits are nearly zero before training has advanced. That is, it lets the task-specific conditional distribution start learning from the distribution of the PLM rather than from a uniform distribution.

Among the many algorithms that approximately optimize the RL objective, we employ an actor-critic algorithm (Bahdanau et al., 2017), since it explicitly optimizes the policy network and can also alleviate the delayed reward problem. The actor-critic algorithm requires an additional critic network to estimate the value of a state, $V^\pi(s_t) = \mathbb{E}_\pi\left[\sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} \mid s_t\right] = \sum_{a_t} \pi(a_t \mid s_t)\, Q^\pi(s_t, a_t)$, where the state-action value function is $Q^\pi(s_t, a_t) = \mathbb{E}_\pi\left[\sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} \mid s_t, a_t\right] = r_t + \gamma V^\pi(s_{t+1})$. We use the following policy gradient loss to learn the policy parameters $\theta$:

$$\mathcal{L} = -\sum_{t=0}^{T} A^{\pi_\theta}(s_t, a_t) \log \pi_\theta(a_t \mid s_t), \qquad (4)$$

where $A^{\pi_\theta}(s_t, a_t) = Q^{\pi_\theta}(s_t, a_t) - V^{\pi_\theta}(s_t)$ is the advantage function, which quantifies how much better an action $a_t$ is than the average action in state $s_t$. In few-shot text generation, the extremely large action space ($|\mathcal{V}|^T$) as well as the small amount of training data often make it difficult to perform RL without degraded performance, even when we conduct additive learning from the PLM. Furthermore, the setting commonly has a delayed reward function (e.g., BLEU) that is defined only after an entire sequence has been generated. It is hard to decide which tokens contribute to the reward, and by how much; this is known as the credit assignment problem. Therefore, in this work, we propose selective token generation to improve RL-based additive learning.
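To make the delayed-reward setup concrete, here is a minimal sketch (plain Python, not the paper's implementation) of the policy gradient loss in eq. (4) under a terminal-only reward: intermediate rewards are zero, so every return Q(s_t, a_t) is just the discounted terminal score, and the advantage subtracts the critic's value estimate.

```python
def discounted_returns(rewards, gamma):
    """Monte-Carlo return at each step. With a delayed reward (r_t = 0 for
    t < T and a sequence-level score such as BLEU at t = T), every earlier
    return is simply the discounted terminal reward."""
    running, out = 0.0, []
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    return list(reversed(out))

def policy_gradient_loss(log_probs, values, rewards, gamma):
    """Eq. (4): L = -sum_t A(s_t, a_t) log pi(a_t | s_t), with the advantage
    A = Q - V estimated from the critic's value predictions `values`."""
    returns = discounted_returns(rewards, gamma)
    advantages = [q - v for q, v in zip(returns, values)]
    return -sum(a * lp for a, lp in zip(advantages, log_probs))
```

In the paper, both the token-level policy selector and the task-specific adapter would be optimized with a loss of this form; the critic values here are just placeholders for the critic network's outputs.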
This paper proposes a selective token generation method for additive learning under pre-trained language models. Specifically, the authors introduce a selector module that decides, for each token, whether it is generated by the frozen LM policy or by the RL-trained task-specific policy. The experiments are conducted on a data-to-text task and a text summarization task. Results show improvements over the traditional baselines.
The paper is interested in the problem of exposure bias of sequential text generation: tokens are drawn from the data during training, while during inference, the tokens are sampled from the model’s distribution. Specifically, the authors consider this problem in the context of few-shot learning, where only a few labeled examples are available. The authors introduce a method that combines task-specific adapters and reinforcement learning-based training: at each token prediction, the model chooses between the distribution of the original pre-trained LM and the distribution induced by the task-specific adapters on top of the pre-trained LM. The next token is sampled according to the selected distribution. The authors observed gains in two tasks (data-to-text generation and summarization).
Selective Token Generation for Few-shot Language Modeling
1 INTRODUCTION . Natural language processing ( NLP ) have recently achieved great progress using advanced neural language models ( Radford et al. , 2018 ; Devlin et al. , 2019 ; Yang et al. , 2019 ; Liu et al. , 2019 ; Clark et al. , 2020 ; Raffel et al. , 2019 ; Lan et al. , 2019 ; Lewis et al. , 2020 ) . However , these neural models typically require large-scale training data for each individual task , and solving a new NLP task that has only a few examples is still challenging problem ( Yin , 2020 ) . Especially , natural language generation ( NLG ) with limited training data is an important yet more difficult task due to its fast adaptation of sequential prediction models in a wide range of applications including text summarization , question answering , data-to-text generation , machine translation , etc ( Peng et al. , 2020 ; Chen et al. , 2020 ; Xu et al. , 2021 ; Schick & Schütze , 2020 ; Chang et al. , 2021 ; Radford et al. , 2019 ; Lewis et al. , 2020 ; Brown et al. , 2020 ) . More recently , pretrained language models ( PLMs ) have shown great generalization ability when combined with large-scale data and big transformer-based models ( Devlin et al. , 2019 ; Radford et al. , 2019 ; Lewis et al. , 2020 ; Brown et al. , 2020 ; Subramanyam Kalyan et al. , 2021 ) . Therefore , transfer learning from transformer PLMs has been popularly used for few-shot NLG tasks with promising results . In specific , the use of PLM for few-shot NLG can be categorized into three approaches : 1 ) prompt-based , 2 ) finetuning , and 3 ) additive learning . Prompt-based approaches encode a task description and task-specific examples as a natural language prompt for few-shot text generation ( Radford et al. , 2019 ; Brown et al. , 2020 ; Zheng & Huang , 2021 ; Schick & Schütze , 2020 ; Li & Liang , 2021 ) . 
While these approaches can take full advantage of the universal natural language understanding and generation capabilities of large-scale PLMs without further training of the main model , they have limitations in dealing with a large domain shift from the pretraining corpus , tuning suitable task-specific prompts , and covering an increased number of conditioning examples . On the other hand , finetuning of the PLM can explicitly impart task-specific knowledge to the model and hence lift the above limitations ( Ziegler et al. , 2019 ; Xu et al. , 2021 ; Chen et al. , 2020 ) . However , finetuned models are prone to overfitting when only a small amount of training data is available . To alleviate this overfitting problem , additive learning has been extensively exploited by incorporating task-specific adapters into the PLM ( Zeldes et al. , 2020 ; Stickland & Murray , 2019 ) . In general , task-specific adapters for few-shot NLG are trained by maximum likelihood estimation ( MLE ) . While MLE is efficient to optimize , it suffers from the exposure bias problem due to the difference between the training and inference mechanisms ( He et al. , 2019 ) , and this problem can be severe with limited training data . Reinforcement learning ( RL ) can resolve this exposure bias problem through sequential output sampling during training ( Ranzato et al. , 2015 ; Keneshloo et al. , 2019 ; Shi et al. , 2021 ) . Moreover , it allows leveraging target-specific sequence-level objectives such as BLEU and ROUGE ( Wu et al. , 2018 ; Guo et al. , 2021 ) . However , in NLG the exponentially large space of output sequences restricts the use of RL , since it leads to high variance and unstable training , which is even more serious in the few-shot setting . In this work , we develop a novel RL-based additive learning algorithm on top of a transformer-based PLM to overcome these shortcomings and to improve the performance of few-shot NLG .
In particular , we first cast the NLG task as sequential token generation with a transformer language model , and then propose selective token generation between the PLM and the task-specific adapter , during both RL-based training and inference . The proposed output token selection enables the model not only to explicitly retain general prior knowledge from the frozen PLM but also to focus only on the task-relevant parts of sequence generation . In addition , in few-shot learning this partial token generation makes the task-specific adapter more resilient to overfitting and furthermore reduces the overall output space , which leads to stable RL training . Here , in order to make the two token generators ( policies ) complement each other and to realize robust output selection at the token level on the fly , we exploit a separate token-level policy selector . Note that both the policy selector and the task-specific adapter are learned simultaneously by the RL algorithm . Experimental results on various few-shot NLG tasks show that the proposed selective token generation outperforms previous PLM-based additive learning algorithms with comprehensive ( non-selective ) token generation . Our main contributions can be summarized as follows .
• A novel selective token generation between the PLM and the task-specific adapter is proposed for transformer-based few-shot NLG .
• A separate selecting module is exploited to adaptively determine each output token in a sequence at both training and testing time .
• RL is applied to train both the policy selector and the task-specific adapter so that the adapter is complementary to the PLM in text generation .
• An extensive empirical validation on few-shot NLG tasks demonstrates that the proposed selective token generation performs better in comparison to previous PLM-based additive learning algorithms .
2 BACKGROUND . 2.1 NATURAL LANGUAGE GENERATION .
The goal of NLG is to generate a text sequence y = [ y_0 , ... , y_T ] for a given task , where y_t is the t-th output token from a vocabulary V , and T is the output sequence length . For this generation , we aim to model the distribution of y , autoregressively factorized as p_θ ( y ) = ∏_{t=0}^{T} p_θ ( y_t | y_{<t} ) , where θ denotes the model parameters and y_{<t} = [ y_0 , ... , y_{t−1} ] . Here , the conditional distribution used to sample a token at each step , p_θ ( y_t | y_{<t} ) , is defined by the softmax function over the output logits f_θ ( y_t | y_{<t} ) . Note that in general , language generation is conditioned on input context according to the given task ; we encode the conditioning context with the same sequential model used for generating the output sequence , and for simplicity we omit it . In this work , we utilize an autoregressive transformer as our generative model . 2.2 ADDITIVE LEARNING FOR FEW-SHOT GENERATION . To effectively leverage general linguistic knowledge , θ is first initialized with the PLM parameters θ_LM for NLG . Given N task-specific training instances D = { y^{n*} }_{n=1}^{N} , where y^{n*} is the n-th ground-truth output sequence , directly finetuning θ_LM using D can incur severe overfitting when N is small in the few-shot scenario . Therefore , we add a task-specific adapter g_{θ_a} , parameterized by θ_a , on top of the PLM , and optimize only θ_a ( Zeldes et al. , 2020 ; Stickland & Murray , 2019 ) . Specifically , we reformulate f ( · | y_{<t} ; θ ) = W^T h ( y_{<t} ; θ_h ) , where W ∈ R^{H×|V|} and h ∈ R^H denote the weight matrix and the penultimate representation , respectively , and θ = { W , θ_h } . Then , we define the task-specific conditional distribution as follows :

p ( y_t | y_{<t} ; θ_LM , θ_a ) = softmax ( W_LM^T h_LM ( y_{<t} ) + W_a^T g ( h_LM ( y_{<t} ) ; θ_g ) ) , ( 1 )

where h_LM ( y_{<t} ) = h ( y_{<t} ; θ_{h,LM} ) and θ_a = { W_a , θ_g } . Here , the summation of the PLM logits and the adapter logits is motivated by auxiliary training1 .
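The additive logit formulation of Eq. (1) can be sketched numerically. Below is a minimal NumPy illustration with toy dimensions; all names, sizes, and the adapter transform here are hypothetical stand-ins, not the paper's implementation. It also checks the footnote's point: with a zero-initialized adapter head, the task distribution starts exactly at the PLM distribution.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
H, V = 8, 10                             # hidden size, vocab size (toy)

W_lm = rng.normal(size=(H, V))           # frozen PLM output head W_LM
W_a = np.zeros((H, V))                   # adapter head W_a, zero-initialized
h_lm = rng.normal(size=H)                # penultimate representation h_LM(y_<t)

def g(h):                                # toy adapter transform g(.; theta_g)
    return np.tanh(h)

# Eq. (1): task-specific distribution from the summed logits
p_task = softmax(W_lm.T @ h_lm + W_a.T @ g(h_lm))
p_lm = softmax(W_lm.T @ h_lm)

# With zero adapter weights, the adapter logits vanish, so learning
# starts from the PLM distribution rather than a uniform one.
assert np.allclose(p_task, p_lm)
```

Only W_a and the parameters of g would be updated during training, while W_lm and h_lm stay frozen.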
Note that in our additive learning θ_a is updated while θ_LM is kept frozen . Hence , in the following we write p_{θ_a} ( y_t | y_{<t} ) = p ( y_t | y_{<t} ; θ_LM , θ_a ) for simplicity . 2.3 MAXIMUM LIKELIHOOD ESTIMATION ( MLE ) . Given a small amount of training data D = { y^{n*} }_{n=1}^{N} , MLE optimizes θ by maximizing the data log-likelihood as follows :

θ̂ = argmax_θ ∑_{n=1}^{N} ∑_{t=0}^{T} log p_θ ( y_t^{n*} | y_{<t}^{n*} ) . ( 2 )

Here , the output token at each step is conditioned not on the previously sampled tokens from the current model but on the previous ground-truth tokens y_{<t}^{n*} . Namely , tokens are drawn from the data distribution during training , whereas tokens are drawn from the model distribution at test time . This discrepancy , known as exposure bias , means that errors accumulate along the generated sequence at test time , since the model is biased to perform well only on the ground-truth history distribution . This bias problem can be especially severe in few-shot training . In addition , the token-level cross-entropy loss in MLE training differs from sequence-level test metrics such as BLEU and ROUGE that are commonly used in NLG tasks . 2.4 REINFORCEMENT LEARNING ( RL ) . As an alternative to MLE , RL can overcome the exposure bias problem of MLE by sequence-level sampling from the model distribution during training ( Ranzato et al. , 2015 ) . RL can also improve performance by directly optimizing the evaluation metrics ( Guo et al. , 2021 ) . To use RL for our additive learning , we reformulate our text generation as an RL problem : at each time step t , the agent takes the current state s_t = y_{<t} as input and performs an action a_t that outputs a token y_t according to a policy π_θ ( a_t | s_t ) corresponding to p_θ ( y_t | y_{<t} ) . Then , the agent receives a reward r_t = r ( s_t , a_t ) and deterministically transitions to the next state s_{t+1} .
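The teacher-forced likelihood of Eq. (2) can be illustrated with a toy NumPy sketch. The bigram "model" and all sizes below are hypothetical stand-ins for the transformer, chosen only to make the teacher-forcing mechanics concrete: each step is scored against the gold token given the gold prefix, and the sampled history is never used, which is the source of exposure bias.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()                      # stable log-softmax
    return z - np.log(np.exp(z).sum())

V = 5                                    # toy vocabulary size
rng = np.random.default_rng(1)
W = rng.normal(size=(V, V))              # toy bigram "model": row = prev token

def logits(prev_token):                  # stand-in for f_theta(. | y_<t)
    return W[prev_token]

y_star = [0, 3, 1, 4]                    # a ground-truth sequence y* (toy)

# Eq. (2): sum log-probs of each gold token given the *gold* prefix
ll = 0.0
prev = y_star[0]
for t in range(1, len(y_star)):
    ll += log_softmax(logits(prev))[y_star[t]]
    prev = y_star[t]                     # teacher forcing: feed the gold token

assert ll < 0.0                          # a log-likelihood, hence negative
```

At test time the loop would instead feed back the model's own samples, so a model trained only on gold prefixes can drift once it makes an early mistake.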
Here , note that the token-level intermediate reward r_t = 0 , ∀t < T , when we use a delayed reward given by the sequence-level evaluation metric between the two full sequences , y and y* . Let τ = { ( s_t , a_t , r_t ) }_{t=0}^{T} be the trajectory generated by π_θ . The RL objective for the optimal agent is to maximize the expected sum of future discounted rewards :

J ( π_θ ) = E_{τ∼π_θ} [ ∑_{t=0}^{T} γ^t r_t ] , ( 3 )

where γ ∈ [ 0 , 1 ) is the discount factor . 1Although the auxiliary training is particularly designed for maximizing the likelihood of the target task output , it is also advantageous for RL , since the adapter logits are nearly zero before training has advanced ; namely , it lets the task-specific conditional distribution start learning from the distribution of the PLM rather than from a uniform distribution . Among the many algorithms for approximately optimizing the RL objective , we employ an actor-critic algorithm ( Bahdanau et al. , 2017 ) , since it explicitly optimizes the policy network and can also alleviate the delayed reward problem . The actor-critic algorithm requires an additional critic network to estimate the value of a state , V^π ( s_t ) = E_π [ ∑_{t'=t}^{T} γ^{t'−t} r_{t'} | s_t ] = ∑_{a_t} π ( a_t | s_t ) Q^π ( s_t , a_t ) , where the state-action value function Q^π ( s_t , a_t ) = E_π [ ∑_{t'=t}^{T} γ^{t'−t} r_{t'} | s_t , a_t ] = r_t + V^π ( s_{t+1} ) . We use the following policy gradient loss to learn the policy parameters θ :

L = − ∑_{t=0}^{T} A^{π_θ} ( s_t , a_t ) log π_θ ( a_t | s_t ) , ( 4 )

where A^{π_θ} ( s_t , a_t ) = Q^{π_θ} ( s_t , a_t ) − V^{π_θ} ( s_t ) is the advantage function that quantifies how much better an action a_t is than the average action in state s_t . In few-shot text generation , the extremely large action space ( |V|^T ) as well as the small amount of training data often make it difficult to perform RL without degraded performance , even when we conduct additive learning from the PLM . Furthermore , the task commonly has a delayed reward function ( e.g .
BLEU ) which is defined only after an entire sequence is generated . It is then hard to decide which tokens contribute to the reward , and by how much ; this is known as the credit assignment problem . Therefore , in this work , we propose selective token generation to improve the RL-based additive learning .
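The selective generation idea can be sketched as a decoding loop that, at every token, consults a policy selector to choose between the frozen PLM policy and the adapter-augmented task policy, then samples from the chosen distribution. A minimal NumPy sketch follows; the two policies and the selector below are random stand-ins, not the paper's trained modules.

```python
import numpy as np

rng = np.random.default_rng(2)
V, T = 6, 8                              # toy vocab size and sequence length

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def p_lm(prefix):                        # stand-in for the frozen PLM policy
    return softmax(rng.normal(size=V))

def p_task(prefix):                      # stand-in for the PLM+adapter policy
    return softmax(rng.normal(size=V))

def selector(prefix):                    # stand-in token-level policy selector
    return rng.random() < 0.5            # True -> use the task policy

y, used_task = [], []
for t in range(T):
    use_task = selector(y)               # pick a generator for this step
    p = p_task(y) if use_task else p_lm(y)
    y.append(int(rng.choice(V, p=p)))    # sample from the selected distribution
    used_task.append(bool(use_task))

assert len(y) == T
```

In the paper's setup both the selector and the task policy would be trained jointly with the actor-critic objective; only the task-relevant steps need the adapter, which shrinks the effective output space the RL algorithm must explore.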
The authors propose a method to improve a large pre-trained language model's ability in few-shot language generation. The main idea is to freeze the large pre-trained language model (PLM), fine-tune a copy of this PLM on the task at hand, and use a selector to switch between generating the next token from the frozen PLM or the fine-tuned PLM. Instead of the commonly used maximum likelihood estimation, the authors cast the optimization problem as a reinforcement learning (RL) problem, using the task evaluation metric such as BLEU or ROUGE as the reward. The authors perform experiments in few-shot data-to-text and text summarization settings, comparing to a few baselines (a fine-tuned PLM and a few variants of the proposed method).
RAR: Region-Aware Point Cloud Registration
Point set registration is a challenging but meaningful task , which has wide application in many fields Bai et al . ( 2007 ) ; Bai & Latecki ( 2008 ) ; Myronenko & Song ( 2009 ) ; Ma et al . ( 2016 ) ; Wu et al . ( 2012 ) ; Klaus et al . ( 2006 ) ; Maintz & Viergever ( 1998 ) ; Besl & McKay ( 1992 ) ; Raguram et al . ( 2008 ) ; Yuille & Grzywacz ( 1988 ) ; Sonka et al . ( 2014 ) . Most existing non-learning methods solve the registration problem through an iterative optimization process that searches for the optimal geometric transformation minimizing a pre-defined alignment loss between the transformed source point set and the target point set Myronenko et al . ( 2007 ) ; Ma et al . ( 2013 ; 2014 ) ; Ling & Jacobs ( 2005 ) . The geometric transformation can be modeled by a specific type of parametric transformation ( e.g . rotation , translation , thin-plate spline , and so on ) Besl & McKay ( 1992 ) . For example , one of the most commonly applied methods , iterative closest point ( ICP ) Besl & McKay ( 1992 ) , estimates the rigid transformation based on a set of corresponding points . The ICP model , however , strongly depends on the initialization and has limited performance in choosing corresponding points . Moreover , iterative methods usually treat registration as an independent optimization process for each given pair of source and target point sets , which cannot transfer knowledge from registering one pair to another . In recent years , as deep-learning-based algorithms have been deployed in various industries with great success , researchers have become increasingly interested in bringing deep-learning-based solutions to the field of point set registration .
Instead of directly optimizing the transformation matrix towards minimization of an alignment loss as in non-learning-based methods , learning-based methods usually leverage modern feature extraction technologies for feature learning and then regress the transformation matrix based on the mutual information and correlation defined on the extracted features of the source and target shapes . The most recent model , deep closest point ( DCP ) Wang & Solomon ( 2019 ) , leverages DGCNN Wang et al . ( 2019 ) for feature learning and a pointer network to perform soft matching . To refine the soft matching results and predict the final rigid transformation , the DCP model further proposes a singular value decomposition layer for fine-tuning . However , it is still challenging to design an explicit module for learning both the features from unstructured point clouds and their “ geometric relationship ” Wang et al . ( 2018 ) . Existing works developed various models to compute the spatial correlation feature . For example , FlowNet3D Liu et al . ( 2019 ) tried to concatenate two global descriptors of the source and target point sets ; Balakrishnan et al . ( 2018 ) used a U-Net-based structure to mix the source and target volumetric shapes ; Rocco et al . ( 2017 ) proposed a correlation tensor calculated from the source and target feature maps ; and so on . Learning robust point cloud registration models with deep neural networks has emerged as a powerful paradigm , offering promising performance in predicting the global geometric transformation for a pair of point sets . These methods share a similar pipeline : they first leverage an encoder to regress a latent shape embedding , which is then decoded into a shape-conditioned transformation via concatenation-based conditioning .
In this paper , we observe that different regions of a 3D shape vary in their geometric structures , which makes a region-conditioned ( in contrast to shape-conditioned ) transformation decoder via concatenation-based conditioning more sensible . As shown in Figure 1 , the shape-conditioned transformation predicts one global transformation for point set alignment , whereas the region-conditioned transformation predicts a set of transformations for different implicit regions , which are then fused by weighting to form a global transformation . With this observation , as illustrated in Figure 2 , we present a region-aware point cloud registration method , denoted RAR , to predict the transformation for pairwise point sets in a self-supervised learning fashion . Our proposed RAR framework contains three main components . The first component is a region-aware decoder ( RAD ) module , formed with an implicit neural region representation parameterized by neural networks conditioned on a shape embedding . The implicit neural region representation is learned with a self-supervised 3D shape reconstruction loss , without the need for region labels . The second component is a region-aware transformation ( RAT ) module , which decodes shape embedding features to regress a set of region-specific transformations . The third component is a region-aware weight ( RAW ) module , which generates the weights for different regions of the 3D shape to be aligned . The global geometric transformation from the source point set to the target one is then formed by weighted fusion of the region-aware transforms . Our contributions are as follows :
• We introduce a new concept of region-conditioned transformation that contributes to a novel region-aware point cloud registration ( RAR ) as a learning approach for robust point set alignment .
Our RAR models are realized through three new modules : the region-aware decoder ( RAD ) module , the region-aware transformation ( RAT ) module , and the region-aware weight ( RAW ) module .
• Our RAR is a novel unsupervised learning model for point cloud registration , without the need for training on labeled datasets .
• Experimental results demonstrate the effectiveness of the proposed method for point set registration ; our RAR achieves superior performance compared to unsupervised and supervised state-of-the-art approaches , even without labeled data for training .
1 RELATED WORKS . 1.1 ITERATIVE REGISTRATION METHODS . The development of optimization algorithms to estimate rigid and non-rigid geometric transformations in an iterative routine has attracted extensive research attention in past decades . Assuming that a pair of point sets is related by a rigid transformation , the standard approach is to estimate the best translation and rotation parameters in an iterative search routine , aiming to minimize a distance metric between the two sets of points . The iterative closest point ( ICP ) algorithm Besl & McKay ( 1992 ) is one successful solution for rigid registration . It initializes an estimate of the rigid transformation and then iteratively chooses corresponding points to refine the transformation . However , the ICP algorithm is reported to be vulnerable to the selection of corresponding points for initial transformation estimation . Go-ICP Yang et al . ( 2015 ) was further proposed by Yang et al . to leverage the BnB scheme for searching the entire 3D motion space , solving the local initialization problem of ICP . Zhou et al . proposed fast global registration Zhou et al . ( 2016 ) for the registration of partially overlapping 3D surfaces . The TPS-RSM algorithm was proposed by Chui and Rangarajan Chui & Rangarajan ( 2000 ) to estimate the parameters of non-rigid transformations with a penalty on second-order derivatives .
Existing classical algorithms have achieved great success on the registration task . However , the independent iterative optimization process limits the efficiency of registering a large number of pairs , inspiring us to design a learning-based system for this task . 1.2 LEARNING-BASED REGISTRATION METHODS . In recent years , learning-based methods have achieved great success in many fields of computer vision Su et al . ( 2015 ) ; Sharma et al . ( 2016 ) ; Maturana & Scherer ( 2015 ) ; Bai et al . ( 2016 ) ; Qi et al . ( 2017 ) ; Verma et al . ( 2018 ) ; Masci et al . ( 2015 ) ; Zeng et al . ( 2017 ) . In particular , recent works have started a trend of directly learning geometric features from point clouds ( especially 3D points ) , which motivates us to approach the point set registration problem using deep neural networks Rocco et al . ( 2017 ) ; Balakrishnan et al . ( 2018 ) ; Zeng et al . ( 2017 ) ; Qi et al . ( 2017 ) ; Verma et al . ( 2018 ) ; Masci et al . ( 2015 ) . PointNetLK Aoki et al . ( 2019 ) was proposed by Aoki et al . to leverage the newly proposed PointNet algorithm for directly extracting features from the point cloud , combined with the classical Lucas & Kanade algorithm for the rigid registration of 3D point sets . Liu et al . proposed FlowNet3D Liu et al . ( 2019 ) to treat 3D point cloud registration as a motion process between points . Wang et al . proposed the deep closest point Wang & Solomon ( 2019 ) model , which first leverages the DGCNN structure to extract features from point sets and then regresses the desired transformation based on them . Balakrishnan et al . Balakrishnan et al . ( 2018 ) proposed the VoxelMorph CNN architecture to learn a registration field that aligns two volumetric medical images . For the learning-based registration solutions listed above , the main challenge concerns how to effectively model the “ geometric relationship ” between source and target objects in a learning-based approach . For example , Rocco et al .
( 2017 ) proposed a correlation tensor between the feature maps of source and target images . Balakrishnan et al . ( 2018 ) leveraged a U-Net-based structure to concatenate features of source and target voxels . Liu et al . ( 2019 ) ; Aoki et al . ( 2019 ) used a PointNet-based structure , and Wang & Solomon ( 2019 ) used a DGCNN structure to learn the features from a point set for further registration decoding . In contrast , we introduce a region-aware point cloud registration method , denoted RAR , to predict the transformation for pairwise point sets in a self-supervised learning fashion . 2 METHODS . We introduce our approach in the following sections . The problem statement of our method is introduced in section 2.1 . We explain the learning shape descriptor in section 2.2 . Section 2.3 illustrates the network structure of our region-aware decoder module . The region-aware weight module is defined in section 2.4 . In section 2.5 , we describe the region-aware transformation module . The loss function is discussed in section 2.6 . 2.1 PROBLEM STATEMENT . We first define the optimization task of deep-learning-based methods that directly use unordered point clouds as input . Given a training dataset D = { ( S_i , G_i ) } , where S_i , G_i ⊂ R^3 , S_i denotes the input source point cloud and G_i denotes the input target point cloud . We aim to obtain a parametric function g_θ ( S_i , G_i ) , realized by a neural network , that predicts a rotation matrix R ∈ SO ( 3 ) and a translation vector t ∈ R^3 that deform the source point cloud towards the target point cloud . A pre-defined alignment metric between the transformed source point cloud and the target point cloud serves as the objective loss function used to update the parameters θ .
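As a concrete illustration of this problem statement, the sketch below applies a predicted rigid transform (R, t) to a source cloud and scores the alignment. The Chamfer-style metric here is an assumption chosen for illustration, standing in for the paper's pre-defined alignment loss, and all point clouds are synthetic.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(3)
S = rng.normal(size=(64, 3))             # source point cloud S_i

# A ground-truth rigid motion: rotation about z by 30 degrees plus translation
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
t = np.array([0.1, -0.2, 0.3])
G = S @ R.T + t                          # target point cloud G_i

# A perfect prediction (R, t) drives the alignment loss to (numerically) zero,
# while the identity prediction leaves residual misalignment.
assert chamfer(S @ R.T + t, G) < 1e-9
assert chamfer(S, G) > chamfer(S @ R.T + t, G)
```

A learning-based g_θ would be trained by backpropagating such a loss through the predicted R and t, rather than iterating per pair as ICP does.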
For a given dataset D , a stochastic gradient-descent based algorithm can be used to optimize the parameters θ by minimizing the pre-defined loss function :

θ* = argmin_θ E_{( S_i , G_i )∼D} [ L ( S_i , G_i , g_θ ( S_i , G_i ) ) ] , ( 1 )

where L represents the pre-defined loss function . 2.2 LEARNING SHAPE EMBEDDING . For the input point clouds , the learning shape embedding is a non-linear multi-layer perceptron ( MLP ) -based neural network that extracts shape features and captures geometric information . Formally , let P_i denote an input point cloud and f_x ∈ R^m denote the feature of x , ∀x ∈ P_i , where m is the dimension of the output layer . Our learning shape descriptor includes two key components : an encoding network and feature information . We define the encoding network g_1 : R^3 → R^m , which uses multi-layer perceptrons ( MLP ) with the ReLU activation function for feature extraction :

f_x = g_1 ( x ) , x ∈ P_i . ( 2 )

The feature information combines the extracted feature and the point coordinates . Specifically , ∀x ∈ P_i , we concatenate the learned feature f_x with the coordinates x into the combined feature [ f_x , x ] ∈ R^{( m+3 )} . Thus , the shape descriptor of the input point cloud P_i is { [ f_x , x ] }_{x∈P_i} .
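The per-point descriptor of Section 2.2, an MLP feature concatenated with the raw coordinates, can be sketched as follows. The two-layer MLP, its layer widths, and its random weights are hypothetical stand-ins for the encoding network g_1; only the shapes mirror the text.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 16                                   # feature dimension m (toy)

# Toy two-layer MLP g_1 : R^3 -> R^m with a ReLU activation
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, m)), np.zeros(m)

def g1(x):
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

P = rng.normal(size=(100, 3))            # input point cloud P_i, 100 points

f = g1(P)                                # per-point features f_x, shape (100, m)
descriptor = np.concatenate([f, P], axis=1)  # [f_x, x] in R^(m+3) per point

assert descriptor.shape == (100, m + 3)
```

Because the same g_1 is applied to every point independently, the descriptor is permutation-equivariant over the point set, which is what later pooling stages rely on to get a permutation-invariant shape embedding.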
This paper aims to solve rigid registration of 3D point clouds using a deep neural network. The key difference from previous methods is that this paper proposes a region-conditioned transformation. Specifically, the method first estimates k transformation matrices and then adopts a region segmentation module to divide the shape, which is further utilized to estimate the region-aware weights that combine the k transformations.
This paper studies point cloud registration with deep neural networks. The key insight is that learning a region-based transformation is more robust than learning a per-shape transformation even though shapes are rigid. The proposed method follows a DCP pipeline except that it performs point cloud reconstruction as well as self-supervised region segmentation. First, it uses a PointNet to lift points into a high-dimensional space and tag each point with a pseudo label. Then, points are clustered based on the region labels. Next, a per-region transformation is predicted based on the features of points in each region. Finally, the global transformation is an ensemble of region transformations.
RAR: Region-Aware Point Cloud Registration
Point set registration is a challenging but meaningful task with wide applications in many fields Bai et al. (2007); Bai & Latecki (2008); Myronenko & Song (2009); Ma et al. (2016); Wu et al. (2012); Klaus et al. (2006); Maintz & Viergever (1998); Besl & McKay (1992); Raguram et al. (2008); Yuille & Grzywacz (1988); Sonka et al. (2014). Most existing non-learning methods solve the registration problem through an iterative optimization process that searches for the optimal geometric transformation minimizing a pre-defined alignment loss between the transformed source point set and the target point set Myronenko et al. (2007); Ma et al. (2013; 2014); Ling & Jacobs (2005). The geometric transformation can be modeled by a specific type of parametric transformation (e.g., rotation, translation, thin-plate spline) Besl & McKay (1992). For example, one of the most commonly applied methods, iterative closest point (ICP) Besl & McKay (1992), estimates the rigid transformation based on a set of corresponding points. The ICP model, however, strongly depends on the initialization and has limited performance in choosing corresponding points. Moreover, iterative methods usually treat registration as an independent optimization process for each given pair of source and target point sets, and thus cannot transfer knowledge from registering one pair to another. In recent years, deep-learning-based algorithms have been deployed in various industries with great success, and researchers are increasingly interested in bringing deep-learning-based solutions to the field of point set registration.
Instead of directly optimizing the transformation matrix towards minimization of the alignment loss, as in non-learning-based methods, learning-based methods usually leverage modern feature extraction technologies for feature learning and then regress the transformation matrix based on the mutual information and correlation defined on the extracted features of the source and target shapes. The most recent model, deep closest point (DCP) Wang & Solomon (2019), leverages DGCNN Wang et al. (2019) for feature learning and a pointer network to perform soft matching. To refine the soft matching results for predicting the final rigid transformation, the DCP model further proposes a singular value decomposition layer for fine-tuning. However, it is still challenging to design an explicit module for learning both the features from unstructured point clouds and their "geometric relationship" Wang et al. (2018). Existing works developed various models to compute the spatial correlation feature. For example, FlowNet3D Liu et al. (2019) tried to concatenate two global descriptors of the source and target point sets; Balakrishnan et al. (2018) used a U-Net-based structure to mix the source and target volumetric shapes; and Rocco et al. (2017) proposed a correlation tensor calculated from the source and target feature maps. The learning of robust point cloud registration models with deep neural networks has emerged as a powerful paradigm, offering promising performance in predicting the global geometric transformation for a pair of point sets. These methods share a similar pipeline: an encoder first regresses a latent shape embedding, which is then decoded into a shape-conditioned transformation via concatenation-based conditioning.
In this paper, we observe that different regions of a 3D shape vary in their geometric structures, which suggests that a region-conditioned (in contrast to shape-conditioned) transformation decoder via concatenation-based conditioning is more appropriate. As shown in Figure 1, the shape-conditioned transformation predicts one global transformation for point set alignment, whereas the region-conditioned transformation predicts a set of transformations for different implicit regions, which are then fused by weighting to form a global transformation. With this observation, as illustrated in Figure 2, we present a region-aware point cloud registration, denoted as RAR, to predict the transformation for pairwise point sets in a self-supervised learning fashion. Our proposed RAR framework contains three main components. The first component is a region-aware decoder (RAD) module that is formed with an implicit neural region representation parameterized by neural networks conditioned on a shape embedding. The implicit neural region representation is learned with a self-supervised 3D shape reconstruction loss without the need for region labels. The second component is a region-aware transformation (RAT) module, which decodes shape embedding features to regress a set of region-specific transformations. The third component is a region-aware weight (RAW) module, which generates the weights for different regions of the 3D shape to be aligned. The global geometric transformation from the source point set to the target one is then formed by weighted fusion of the region-aware transforms. Our contributions are as follows: • We introduce the new concept of region-conditioned transformation, which contributes to a novel region-aware point cloud registration (RAR) as the learning approach for robust point set alignment.
• Our RAR model is realized with the development of three new modules: the region-aware decoder (RAD) module, the region-aware transformation (RAT) module, and the region-aware weight (RAW) module. • Our RAR is a novel unsupervised learning model for point cloud registration without the need for training on labeled datasets. • Experimental results demonstrate the effectiveness of the proposed method for point set registration; our RAR achieves superior performance compared to unsupervised and supervised state-of-the-art approaches even without labeled data for training. 1 RELATED WORKS. 1.1 ITERATIVE REGISTRATION METHODS. The development of optimization algorithms to estimate rigid and non-rigid geometric transformations in an iterative routine has attracted extensive research attention in past decades. Assuming that a pair of point sets are related by a rigid transformation, the standard approach is to estimate the best translation and rotation parameters in an iterative search routine, therein aiming to minimize a distance metric between the two sets of points. The iterative closest point (ICP) algorithm Besl & McKay (1992) is one successful solution for rigid registration. It initializes an estimate of a rigid transformation and then iteratively chooses corresponding points to refine the transformation. However, the ICP algorithm is reported to be vulnerable to the selection of corresponding points for the initial transformation estimation. Go-ICP Yang et al. (2015) was further proposed by Yang et al. to leverage a branch-and-bound (BnB) scheme for searching the entire 3D motion space, solving the local initialization problem of ICP. Zhou et al. proposed fast global registration Zhou et al. (2016) for the registration of partially overlapping 3D surfaces. The TPS-RSM algorithm was proposed by Chui and Rangarajan Chui & Rangarajan (2000) to estimate the parameters of non-rigid transformations with a penalty on second-order derivatives.
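The classical ICP loop described above — pick closest-point correspondences, solve for the best rigid motion, and repeat — can be sketched in a few lines. The sketch below is a generic point-to-point ICP with a brute-force nearest-neighbour step and the standard SVD-based (Kabsch) closed-form rigid fit; it illustrates the textbook baseline, not any specific implementation cited here.

```python
import numpy as np

def best_rigid_fit(src, dst):
    """Closed-form least-squares rotation/translation (Kabsch) mapping src -> dst,
    assuming row i of src corresponds to row i of dst."""
    cs, cd = src.mean(0), dst.mean(0)
    h = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(src, dst, iters=30):
    """Vanilla point-to-point ICP: alternate nearest-neighbour correspondences
    and closed-form rigid fits; returns the accumulated (R, t)."""
    cur = src.copy()
    r_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours in the target set
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        r, t = best_rigid_fit(cur, dst[idx])
        cur = cur @ r.T + t
        r_tot, t_tot = r @ r_tot, r @ t_tot + t  # compose with previous estimate
    return r_tot, t_tot
```

As the surrounding text notes, success hinges on the correspondence step: with a poor initialization the nearest-neighbour matches are wrong and the loop converges to a local minimum, which is exactly what Go-ICP's branch-and-bound search addresses.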
Existing classical algorithms have achieved great success on the registration task. However, the independent iterative optimization process limits the efficiency of registering a large number of pairs, inspiring us to design a learning-based system for this task. 1.2 LEARNING-BASED REGISTRATION METHODS. In recent years, learning-based methods have achieved great success in many fields of computer vision Su et al. (2015); Sharma et al. (2016); Maturana & Scherer (2015); Bai et al. (2016); Qi et al. (2017); Verma et al. (2018); Masci et al. (2015); Zeng et al. (2017). In particular, recent works have started a trend of directly learning geometric features from point clouds (especially 3D points), which motivates us to approach the point set registration problem using deep neural networks Rocco et al. (2017); Balakrishnan et al. (2018); Zeng et al. (2017); Qi et al. (2017); Verma et al. (2018); Masci et al. (2015). PointNetLK Aoki et al. (2019) was proposed by Aoki et al. to leverage the newly proposed PointNet algorithm for directly extracting features from the point cloud, combined with the classical Lucas & Kanade algorithm, for the rigid registration of 3D point sets. Liu et al. proposed FlowNet3D Liu et al. (2019) to treat 3D point cloud registration as a motion process between points. Wang et al. proposed the deep closest point Wang & Solomon (2019) model, which first leverages the DGCNN structure to extract the features from point sets and then regresses the desired transformation based on them. Balakrishnan et al. Balakrishnan et al. (2018) proposed the VoxelMorph CNN architecture to learn the registration field to align two volumetric medical images. For the learning-based registration solutions listed above, the main challenge concerns how to effectively model the "geometric relationship" between source and target objects in a learning-based approach. For example, Rocco et al.
(2017) proposed a correlation tensor between the feature maps of source and target images. Balakrishnan et al. (2018) leveraged a U-Net-based structure to concatenate the features of source and target voxels. Liu et al. (2019); Aoki et al. (2019) used a PointNet-based structure, and Wang & Solomon (2019) used a DGCNN structure, to learn the features from a point set for further registration decoding. In contrast, we introduce a region-aware point cloud registration, denoted as RAR, to predict the transformation for pairwise point sets in a self-supervised learning fashion. 2 METHODS. We introduce our approach in the following sections. The problem statement of our method is introduced in section 2.1. We explain the learning shape descriptor in section 2.2. Section 2.3 illustrates the network structure of our region-aware decoder module. The region-aware weight module is defined in section 2.4. In section 2.5, we describe the region-aware transformation module. The loss function is discussed in section 2.6. 2.1 PROBLEM STATEMENT. We first define the optimization task of deep learning-based methods that directly take unordered point clouds as input. Given a training dataset D = {(S_i, G_i)}, where S_i, G_i ⊂ R^3, S_i denotes the input source point cloud and G_i denotes the input target point cloud. We aim to obtain a parametric function g_θ(S_i, G_i), realized by a neural network, that predicts a rotation matrix R ∈ SO(3) and a translation vector t ∈ R^3 that deform the source point cloud towards the target point cloud. A pre-defined alignment metric between the transformed source point cloud and the target point cloud can be used as the objective loss function to update the parameters θ.
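As a concrete instance of such a pre-defined alignment metric, a minimal symmetric Chamfer-style loss between the transformed source and the target can be sketched as follows; the choice of Chamfer distance here is illustrative, since the text does not fix a particular metric at this point.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a, b of shape (n, 3), (m, 3)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def alignment_loss(src, tgt, rotation, translation):
    """L(S, G, g_theta(S, G)) with g_theta = (R, t): transform S, then compare to G."""
    return chamfer_distance(src @ rotation.T + translation, tgt)
```

Because the loss is differentiable in (R, t) almost everywhere, it can drive the gradient-based parameter update of θ described next.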
For a given dataset D, a stochastic gradient-descent based algorithm can be utilized to optimize the parameters θ by minimizing the pre-defined loss function:

$$\theta^* = \arg\min_{\theta} \; \mathbb{E}_{(S_i, G_i) \sim D} \left[ \mathcal{L}\big(S_i, G_i, g_\theta(S_i, G_i)\big) \right] \tag{1}$$

where L represents the pre-defined loss function. 2.2 LEARNING SHAPE EMBEDDING. For the input point clouds, the learning shape embedding is a non-linear multi-layer perceptron (MLP)-based neural network that extracts shape features and captures the geometric information. Formally, let P_i denote an input point cloud and f_x ∈ R^m denote the feature of x, ∀x ∈ P_i, where m is the dimension of the output layer. Our learning shape descriptor includes two key components: the encoding network and the feature information. We define the encoding network g_1 : R^3 → R^m, which uses multi-layer perceptrons (MLPs) with the ReLU activation function for feature extraction:

$$f_x = g_1(x), \quad \forall x \in P_i \tag{2}$$

The feature information combines the extracted feature with the point coordinates. Specifically, ∀x ∈ P_i, we concatenate the learned feature f_x with the coordinates x as the combined feature [f_x, x] ∈ R^{m+3}. Thus, the shape descriptor of the input point cloud P_i is {[f_x, x]}_{x ∈ P_i}.
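A minimal sketch of the shape descriptor {[f_x, x]}: a shared per-point MLP g_1 followed by concatenation with the coordinates. The layer widths and random weights below are illustrative placeholders, not the paper's architecture.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class ShapeEmbedding:
    """Per-point MLP g1: R^3 -> R^m; the descriptor of each point is [f_x, x] in R^(m+3).
    Hidden width (32) and random initialization are illustrative assumptions."""

    def __init__(self, m=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(3, 32))
        self.w2 = rng.normal(scale=0.1, size=(32, m))

    def __call__(self, points):
        f = relu(points @ self.w1) @ self.w2        # f_x = g1(x), weights shared per point
        return np.concatenate([f, points], axis=1)  # [f_x, x] for every x in P_i
```

Because the same weights are applied to every point independently, the descriptor is invariant to the ordering of the input point cloud, matching the unordered-input assumption of the problem statement.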
This paper proposes a method for the registration of 3D point clouds using a trainable cascade of MLPs. The pipeline first computes per-point features, then proposes a transformation based on those features. The transformations are weighted using a predicted weight and combined into a final global transformation. The method is evaluated on ShapeNet40, showing good results w.r.t. prior art. The impact of some building blocks is validated in ablation studies.
Fast topological clustering with Wasserstein distance
1 INTRODUCTION . Network models are extremely useful representations for complex data . Significant attention has been given to cluster analysis within a single network , such as detecting community structure ( Newman , 2006 ; Rohe et al. , 2011 ; Yin et al. , 2017 ) . Less attention has been given to clustering of collections of network representations . Clustering approaches typically group similar networks based on comparisons of edge weights ( Xu & Wunsch , 2005 ) , not topology . Assessing similarity of networks based on topological structure offers the potential for new insight , given the inherent topological patterns exhibited by most real-world networks . However , extracting meaningful network topology is a very difficult task , especially for large and dense networks whose node degrees range over multiple orders of magnitude ( Barrat et al. , 2004 ; Bullmore & Sporns , 2009 ; Honey et al. , 2007 ) . Persistent homology ( Barannikov , 1994 ; Edelsbrunner et al. , 2000 ; Wasserman , 2018 ) has recently emerged as a powerful tool for understanding , characterizing and quantifying complex networks ( Carrière et al. , 2020 ; Chung et al. , 2019 ) . Persistent homology represents a network using topological features such as connected components and cycles . Many networks naturally divide into modules or connected components ( Bullmore & Sporns , 2009 ; Honey et al. , 2007 ) . Similarly , cycles are ubiquitous and are often used to describe information propagation , robustness and feedback mechanisms ( Keizer et al. , 1995 ; Kwon & Cho , 2007 ; Ozbudak et al. , 2005 ; Venkatesh et al. , 2004 ; Weiner et al. , 2002 ) . Effective use of such topological descriptors requires a notion of proximity that quantifies the similarity between persistence barcodes , a convenient representation for connected components and cycles ( Ghrist , 2008 ) . Wasserstein distance , which measures the minimal effort to modify one persistence barcode to another ( Rabin et al. 
, 2011), is an excellent choice due to its appealing geometric properties (Staerman et al., 2021) and its effectiveness shown in many machine learning applications (Kolouri et al., 2017; Mi et al., 2018; Solomon et al., 2015). Importantly, Wasserstein distance can be used to interpolate networks while preserving topological structure (Songdechakraiwut et al., 2021), and the mean under the Wasserstein distance, known as the Wasserstein barycenter (Agueh & Carlier, 2011), can be viewed as the topological centroid of a set of networks. The high cost of computing persistence barcodes, the Wasserstein distance and the Wasserstein barycenter limits their application to small-scale problems, see, e.g., (Clough et al., 2020; Hu et al., 2019; Kolouri et al., 2017; Mi et al., 2018). Although approximation algorithms have been developed (Cuturi, 2013; Cuturi & Doucet, 2014; Lacombe et al., 2018; Li et al., 2020; Solomon et al., 2015; Vidal et al., 2019; Xie et al., 2020; Ye et al., 2017), it is unclear whether these approximations are effective for clustering complex networks, as they inevitably limit sensitivity to subtle topological features. Indeed, more and more studies, see, e.g., (Robins & Turner, 2016; Xia & Wei, 2014), have demonstrated that such subtle topological patterns are important for the characterization of complex networks, suggesting these approximation algorithms are undesirable. Recently, it was shown that the Wasserstein distance and barycenter for network graphs have closed-form solutions that can be computed exactly and efficiently (Songdechakraiwut et al., 2021) because the persistence barcodes are inherently one dimensional. Motivated by this result, we present a novel and computationally practical topological clustering method that clusters complex networks of the same size with intricate topological characteristics.
Topological information alone is effective at clustering networks when there is no correspondence between nodes in different networks. However, when networks have meaningful node correspondence, we perform the cluster analysis using combined topological and geometric information to preserve the node correspondence. Statistical validation based on ground truth information is used to demonstrate the effectiveness of our method when discriminating subtle topological features in simulated networks. The method is further illustrated by clustering measured functional brain networks associated with different levels of arousal during administration of general anesthesia. Our proposed method outperforms other clustering approaches on both the simulated and measured data. The paper is organized as follows. Background on our one-dimensional representation of persistence barcodes is given in section 2, while section 3 presents our topological clustering method. In sections 4 and 5, we compare the performance of our method to several baseline algorithms using simulated and measured networks, and conclude the paper with a brief discussion of the potential impact of this work. 2 ONE DIMENSIONAL PERSISTENCE BARCODES. 2.1 GRAPH FILTRATION. Consider a network represented as a weighted graph G = (V, w) comprising a set of nodes V with symmetric adjacency matrix w = (w_ij), where edge weight w_ij represents the relationship between node i and node j. The number of nodes is denoted as |V|. The binary graph G_ε = (V, w_ε) of G is defined as a graph consisting of the node set V and binary edge weights w_ε, with w_{ε,ij} = 1 if w_ij > ε and w_{ε,ij} = 0 otherwise. We view the binary network G_ε as a 1-skeleton (Munkres, 2018), a simplicial complex comprising only nodes (0-dimensional topological features) and edges (1-dimensional topological features).
There are no topological features of higher dimension in the 1-skeleton, in contrast to the well-known Rips complexes (Ghrist, 2008). In the 1-skeleton, there are two types of topological features: connected components and cycles. The number of connected components and the number of cycles in the binary network are referred to as the 0-th Betti number β0(G_ε) and the 1-st Betti number β1(G_ε), respectively. A graph filtration of G is defined as a collection of nested binary networks (Lee et al., 2012): G_{ε_0} ⊇ G_{ε_1} ⊇ ··· ⊇ G_{ε_k}, where ε_0 ≤ ε_1 ≤ ··· ≤ ε_k are filtration values. As ε increases, more and more edges are removed from the network G, since we threshold the edge weights at higher connectivity. For instance, G_{−∞} has each pair of nodes connected by an edge and thus is a complete graph consisting of a single connected component, while G_{∞} has no edges and represents the node set. Figure 1 illustrates the graph filtration of a four-node network and the corresponding Betti numbers. Note that other filtrations for analyzing graphs have been proposed, including the use of descriptor functions such as heat kernels (Carrière et al., 2020) and task-specific learning (Hofer et al., 2020). 2.2 BIRTH-DEATH DECOMPOSITION. Persistent homology keeps track of the birth and death of connected components and cycles over the filtration values ε to determine their persistence, that is, the lifetime from their birth to death over ε. The persistence is represented as a persistence barcode PB(G) comprising intervals [b_i, d_i] representing the lifetime of a connected component or a cycle that appears at the filtration value b_i and vanishes at d_i. In the edge-weight threshold graph filtration defined in Section 2.1, connected components are born and cycles die as the filtration value ε increases (Chung et al., 2019). Specifically, β0 is monotonically increasing from β0(G_{−∞}) = 1 to β0(G_{∞}) = |V|.
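The Betti number β0 of a thresholded binary network G_ε can be computed with a single union-find pass over the edges that survive the threshold (w_ij > ε); a minimal sketch, with the edge list given as (i, j, weight) triples:

```python
def betti0(num_nodes, weighted_edges, eps):
    """Number of connected components (0-th Betti number) of the binary
    graph that keeps edges (i, j, w) with w > eps."""
    parent = list(range(num_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    components = num_nodes
    for i, j, w in weighted_edges:
        if w > eps:                        # edge survives the threshold
            ri, rj = find(i), find(j)
            if ri != rj:                   # merging two components
                parent[ri] = rj
                components -= 1
    return components
```

Sweeping eps from −∞ to ∞ reproduces the monotone growth of β0 from 1 (complete graph) to |V| (isolated nodes) described above.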
There are β0(G_{∞}) − β0(G_{−∞}) = |V| − 1 connected components that are born over the filtration. Connected components never die once they are born, implying that every connected component has death value ∞. Thus, we can represent their persistence as a collection of finite birth values B(G) = {b_i}_{i=1}^{|V|−1}. On the other hand, G_{−∞} is a complete graph containing all possible cycles; thus, all cycles have birth value −∞. Again, we can represent the persistence of the cycles as a collection of finite death values D(G) = {d_i}. How many cycles are there? Since the deletion of an edge w_ij must result in either the birth of a connected component or the death of a cycle, every edge weight must be in either B(G) or D(G). Thus, the edge weight set W = {w_ij | i > j} decomposes into the collection of birth values B(G) and the collection of death values D(G). Since G_{−∞} is a complete graph with |V|(|V|−1)/2 edge weights and |V| − 1 of these weights are associated with the birth of connected components, the number of cycles in G_{−∞} is thus equal to |V|(|V|−1)/2 − (|V|−1) = 1 + |V|(|V|−3)/2. In the example of Figure 1, we have B(G) = {e3, e5, e6} and D(G) = {e1, e2, e4}. Other graph filtrations (Carrière et al., 2020; Hofer et al., 2020) do not necessarily share this monotonicity property, and consequently one-dimensional barcode representations are not applicable. Finding the birth values in B(G) is equivalent to finding the edge weights comprising the maximum spanning tree of G and can be done using well-known methods such as Prim's and Kruskal's algorithms (Lee et al., 2012). Once B(G) is known, D(G) is simply given as the remaining edge weights. Finding B(G) and D(G) requires only O(n log n) operations, where n is the number of edges in the network, and thus is extremely computationally efficient. 3 CLUSTERING METHOD. 3.1 TOPOLOGICAL DISTANCE SIMPLIFICATION.
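The birth-death decomposition above can be sketched with Kruskal's algorithm run on the edges in decreasing weight order: an edge that joins two components belongs to the maximum spanning tree and contributes a birth value, while an edge that closes a cycle contributes a death value.

```python
def birth_death_decomposition(num_nodes, weighted_edges):
    """Split the edge weights of a complete weighted graph into birth values B(G)
    (maximum spanning tree weights, via Kruskal) and death values D(G) (the rest)."""
    parent = list(range(num_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    births, deaths = [], []
    for i, j, w in sorted(weighted_edges, key=lambda e: -e[2]):
        ri, rj = find(i), find(j)
        if ri != rj:            # edge joins two components: a birth value
            parent[ri] = rj
            births.append(w)
        else:                   # edge closes a cycle: a death value
            deaths.append(w)
    return births, deaths
```

For a complete graph the counts come out as stated in the text: |V| − 1 births and 1 + |V|(|V|−3)/2 deaths.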
Use of the edge-weight threshold filtration, and limiting consideration to connected components and cycles as topological features, results in significant simplification of the 2-Wasserstein distance (Rabin et al., 2011) between barcode descriptors (Cohen-Steiner et al., 2010) of networks, as follows. Let G and H be two given networks that have the same number of nodes. The topological distance d_top(G, H) is defined as the optimal matching cost:

$$\Big( \min_{\tau} \sum_{p \in PB(G)} \| p - \tau(p) \|^2 \Big)^{1/2} = \Big( \min_{\tau} \sum_{p = [b_p, d_p] \in PB(G)} [b_p - b_{\tau(p)}]^2 + [d_p - d_{\tau(p)}]^2 \Big)^{1/2}, \tag{1}$$

where the optimization is over all possible bijections τ from barcode PB(G) to barcode PB(H). Intuitively, we can think of each interval [b_i, d_i] as a point (b_i, d_i) in the 2-dimensional plane, so that the topological distance measures the minimal amount of work needed to move the points of PB(G) onto PB(H). Note that this alternative representation of points in the plane is equivalent to the persistence barcode and is called the persistence diagram (Edelsbrunner & Harer, 2008). Moving a connected component point (b_i, ∞) to a cycle point (−∞, d_j), or vice versa, takes an infinitely large amount of work. Thus, we only need to optimize over bijections that match the same type of topological features. Subsequently, we can equivalently rewrite d_top in terms of B(G), D(G), B(H) and D(H) as

$$d_{top}(G, H) = \Big( \min_{\tau_0} \sum_{b \in B(G)} [b - \tau_0(b)]^2 + \min_{\tau_1} \sum_{d \in D(G)} [d - \tau_1(d)]^2 \Big)^{1/2}, \tag{2}$$

where τ_0 is a bijection from B(G) to B(H) and τ_1 is a bijection from D(G) to D(H). The first term matches connected components to connected components, and the second term matches cycles to cycles. Matching each type of topological feature separately is commonly done in medical imaging and machine learning studies (Clough et al., 2020; Hu et al., 2019).
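On tiny inputs, the optimization over bijections in (2) can be evaluated by brute force over permutations, which makes the separate matching of births and deaths concrete; this is a sanity-check sketch, not an algorithm anyone would run at scale.

```python
from itertools import permutations

def dtop_bruteforce(births_g, deaths_g, births_h, deaths_h):
    """Eq. (2): optimize separately over bijections tau0: B(G) -> B(H) and
    tau1: D(G) -> D(H) by enumerating all permutations (tiny inputs only)."""
    best_b = min(sum((x - y) ** 2 for x, y in zip(births_g, perm))
                 for perm in permutations(births_h))
    best_d = min(sum((x - y) ** 2 for x, y in zip(deaths_g, perm))
                 for perm in permutations(deaths_h))
    return (best_b + best_d) ** 0.5
```

The brute-force cost is factorial in the set sizes, which is exactly why the closed-form simplification discussed next matters.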
The topological distance d_top has a closed-form solution that allows for efficient computation, as follows (Songdechakraiwut et al., 2021):

$$d_{top}(G, H) = \Big( \sum_{b \in B(G)} [b - \tau_0^*(b)]^2 + \sum_{d \in D(G)} [d - \tau_1^*(d)]^2 \Big)^{1/2}, \tag{3}$$

where τ_0^* maps the l-th smallest birth value in B(G) to the l-th smallest birth value in B(H), and τ_1^* maps the l-th smallest death value in D(G) to the l-th smallest death value in D(H), for all l. A proof is in the supplementary material. As a result, the optimal matching cost can be computed quickly and efficiently by sorting the birth and death values and matching them in order. The computational cost of evaluating d_top is O(n log n), where n is the number of edges in the networks.
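The closed-form solution (3) amounts to sorting the birth values and the death values of both networks and matching them in order:

```python
import math

def topological_distance(births_g, deaths_g, births_h, deaths_h):
    """Closed-form topological distance d_top(G, H): sort each set of birth and
    death values and match the l-th smallest to the l-th smallest (eq. (3))."""
    cost = sum((b - bh) ** 2 for b, bh in zip(sorted(births_g), sorted(births_h)))
    cost += sum((d - dh) ** 2 for d, dh in zip(sorted(deaths_g), sorted(deaths_h)))
    return math.sqrt(cost)
```

Sorting dominates the running time, giving the O(n log n) cost stated above; this is what makes the distance practical as the inner loop of a clustering algorithm.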
Given a weighted graph, the authors construct a filtration on top of it that is associated with a persistence diagram. This persistence diagram summarizes the topology of the weighted graph. The authors then propose to cluster different graphs using either the distance between the persistence diagrams (which they call the topological distance) or an interpolation between the L2 distance between the weights and the topological distance. The relevance of this algorithm is then showcased on synthetic examples and real-world datasets.
Fast topological clustering with Wasserstein distance
1 INTRODUCTION . Network models are extremely useful representations for complex data . Significant attention has been given to cluster analysis within a single network , such as detecting community structure ( Newman , 2006 ; Rohe et al. , 2011 ; Yin et al. , 2017 ) . Less attention has been given to clustering of collections of network representations . Clustering approaches typically group similar networks based on comparisons of edge weights ( Xu & Wunsch , 2005 ) , not topology . Assessing similarity of networks based on topological structure offers the potential for new insight , given the inherent topological patterns exhibited by most real-world networks . However , extracting meaningful network topology is a very difficult task , especially for large and dense networks whose node degrees range over multiple orders of magnitude ( Barrat et al. , 2004 ; Bullmore & Sporns , 2009 ; Honey et al. , 2007 ) . Persistent homology ( Barannikov , 1994 ; Edelsbrunner et al. , 2000 ; Wasserman , 2018 ) has recently emerged as a powerful tool for understanding , characterizing and quantifying complex networks ( Carrière et al. , 2020 ; Chung et al. , 2019 ) . Persistent homology represents a network using topological features such as connected components and cycles . Many networks naturally divide into modules or connected components ( Bullmore & Sporns , 2009 ; Honey et al. , 2007 ) . Similarly , cycles are ubiquitous and are often used to describe information propagation , robustness and feedback mechanisms ( Keizer et al. , 1995 ; Kwon & Cho , 2007 ; Ozbudak et al. , 2005 ; Venkatesh et al. , 2004 ; Weiner et al. , 2002 ) . Effective use of such topological descriptors requires a notion of proximity that quantifies the similarity between persistence barcodes , a convenient representation for connected components and cycles ( Ghrist , 2008 ) . Wasserstein distance , which measures the minimal effort to modify one persistence barcode to another ( Rabin et al. 
, 2011 ) , is an excellent choice due to its appealing geometric properties ( Staerman et al. , 2021 ) and its effectiveness shown in many machine learning applications ( Kolouri et al. , 2017 ; Mi et al. , 2018 ; Solomon et al. , 2015 ) . Importantly , Wasserstein distance can be used to interpolate networks while preserving topological structure ( Songdechakraiwut et al. , 2021 ) , and the mean under the Wasserstein distance , known as Wasserstein barycenter ( Agueh & Carlier , 2011 ) , can be viewed as the topological centroid of a set of networks . The high cost of computing persistence barcodes , Wasserstein distance and the Wasserstein barycenter limit their applications to small scale problems , see , e.g. , ( Clough et al. , 2020 ; Hu et al. , 2019 ; Kolouri et al. , 2017 ; Mi et al. , 2018 ) . Although approximation algorithms have been developed ( Cuturi , 2013 ; Cuturi & Doucet , 2014 ; Lacombe et al. , 2018 ; Li et al. , 2020 ; Solomon et al. , 2015 ; Vidal et al. , 2019 ; Xie et al. , 2020 ; Ye et al. , 2017 ) , it is unclear whether these approximations are effective for clustering complex networks as they inevitably limit sensitivity to subtle topological features . Indeed , more and more studies , see , e.g. , ( Robins & Turner , 2016 ; Xia & Wei , 2014 ) have demonstrated that such subtle topological patterns are important for the characterization of complex networks , suggesting these approximation algorithms are undesirable . Recently , it was shown that the Wasserstein distance and barycenter for network graphs have closedform solutions that can be computed exactly and efficiently ( Songdechakraiwut et al. , 2021 ) because the persistence barcodes are inherently one dimensional . Motivated by this result , we present a novel and computationally practical topological clustering method that clusters complex networks of the same size with intricate topological characteristics . 
Topological information alone is effective at clustering networks when there is no correspondence between nodes in different networks . However , when networks have meaningful node correspondence , we perform the cluster analysis using combined topological and geometric information to preserve the node correspondence . Statistical validation based on ground truth information is used to demonstrate the effectiveness of our method when discriminating subtle topological features in simulated networks . The method is further illustrated by clustering measured functional brain networks associated with different levels of arousal during administration of general anesthesia . Our proposed method outperforms other clustering approaches on both the simulated and measured data . The paper is organized as follows . Background on our one-dimensional representation of persistence barcodes is given in section 2 , while section 3 presents our topological clustering method . In sections 4 and 5 , we compare the performance of our method to several baseline algorithms using simulated and measured networks , and conclude the paper with a brief discussion of the potential impact of this work . 2 ONE DIMENSIONAL PERSISTENCE BARCODES . 2.1 GRAPH FILTRATION . Consider a network represented as a weighted graph G = (V, w) comprising a set of nodes V with symmetric adjacency matrix w = (wij) , where edge weight wij represents the relationship between node i and node j . The number of nodes is denoted as |V| . The binary graph Gε = (V, wε) of G is defined as the graph consisting of the node set V and binary edge weights wε,ij = 1 if wij > ε and wε,ij = 0 otherwise . We view the binary network Gε as a 1-skeleton ( Munkres , 2018 ) , a simplicial complex comprising only nodes ( 0-dimensional topological features ) and edges ( 1-dimensional topological features ) .
There are no topological features of higher dimension in the 1-skeleton , in contrast to well-known Rips complexes ( Ghrist , 2008 ) . In the 1-skeleton , there are two types of topological features : connected components and cycles . The number of connected components and the number of cycles in the binary network are referred to as the 0-th Betti number β0(Gε) and the 1-st Betti number β1(Gε) , respectively . A graph filtration of G is defined as a collection of nested binary networks ( Lee et al. , 2012 ) : Gε0 ⊇ Gε1 ⊇ ··· ⊇ Gεk , where ε0 ≤ ε1 ≤ ··· ≤ εk are filtration values . As ε increases , more and more edges are removed from the network G , since we threshold the edge weights at higher connectivity . For instance , G−∞ has each pair of nodes connected by an edge and thus is a complete graph consisting of a single connected component , while G∞ has no edges and represents the node set . Figure 1 illustrates the graph filtration of a four-node network and the corresponding Betti numbers . Note that other filtrations for analyzing graphs have been proposed , including the use of descriptor functions such as heat kernels ( Carrière et al. , 2020 ) and task-specific learning ( Hofer et al. , 2020 ) . 2.2 BIRTH-DEATH DECOMPOSITION . Persistent homology keeps track of the birth and death of connected components and cycles over filtration values to determine their persistence , that is , their lifetime from birth to death over ε . The persistence is represented as a persistence barcode PB(G) comprising intervals [bi, di] representing the lifetime of a connected component or a cycle that appears at the filtration value bi and vanishes at di . In the edge-weight threshold graph filtration defined in Section 2.1 , connected components are born and cycles die as the filtration value ε increases ( Chung et al. , 2019 ) . Specifically , β0 is monotonically increasing from β0(G−∞) = 1 to β0(G∞) = |V| .
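To make the filtration concrete, here is a minimal sketch that computes β0(Gε) with a union-find over the edges whose weight exceeds ε; the four-node weighted graph is hypothetical, in the spirit of Figure 1. As ε increases, β0 grows monotonically from 1 to |V|.

```python
def beta0(n_nodes, weighted_edges, eps):
    """Number of connected components of the binary graph G_eps,
    which keeps edge (i, j) iff w_ij > eps (union-find over kept edges)."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    components = n_nodes
    for i, j, w in weighted_edges:
        if w > eps:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                components -= 1
    return components

# Four-node example (edge weights are hypothetical).
edges = [(0, 1, 0.9), (1, 2, 0.7), (2, 3, 0.5), (0, 3, 0.3), (0, 2, 0.1), (1, 3, 0.2)]
print([beta0(4, edges, eps) for eps in (0.0, 0.4, 0.6, 0.8, 1.0)])  # -> [1, 1, 2, 3, 4]
```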
There are β0(G∞) − β0(G−∞) = |V| − 1 connected components that are born over the filtration . Connected components never die once they are born , implying that every connected component has death value ∞ . Thus , we can represent their persistence as a collection of finite birth values B(G) = {bi}_{i=1}^{|V|−1} . On the other hand , G−∞ is a complete graph containing all possible cycles ; thus , all cycles have birth value −∞ . Again , we can represent the persistence of the cycles as a collection of finite death values D(G) = {di} . How many cycles are there ? Since the deletion of an edge wij must result in either the birth of a connected component or the death of a cycle , every edge weight must be in either B(G) or D(G) . Thus , the edge weight set W = {wij | i > j} decomposes into the collection of birth values B(G) and the collection of death values D(G) . Since G−∞ is a complete graph with |V|(|V|−1)/2 edge weights and |V| − 1 of these weights are associated with the birth of connected components , the number of cycles in G−∞ is thus equal to |V|(|V|−1)/2 − (|V|−1) = 1 + |V|(|V|−3)/2 . In the example of Figure 1 , we have B(G) = {e3, e5, e6} and D(G) = {e1, e2, e4} . Other graph filtrations ( Carrière et al. , 2020 ; Hofer et al. , 2020 ) do not necessarily share this monotonicity property , and consequently one-dimensional barcode representations are not applicable to them . Finding the birth values in B(G) is equivalent to finding the edge weights comprising the maximum spanning tree of G and can be done using well-known methods such as Prim's and Kruskal's algorithms ( Lee et al. , 2012 ) . Once B(G) is known , D(G) is simply given as the remaining edge weights . Finding B(G) and D(G) requires only O(n log n) operations , where n is the number of edges in the network , and thus is extremely computationally efficient . 3 CLUSTERING METHOD . 3.1 TOPOLOGICAL DISTANCE SIMPLIFICATION .
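The birth-death decomposition above can be sketched directly with Kruskal's algorithm: run union-find over the edges in decreasing weight; edges that join two components contribute their weights to B(G) (the maximum spanning tree), and the remaining edges contribute to D(G). A minimal sketch, with a hypothetical example graph:

```python
def birth_death(n_nodes, weighted_edges):
    """Decompose the edge weights of G into birth values B(G) (maximum
    spanning tree weights, via Kruskal) and death values D(G) (the rest)."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    births, deaths = [], []
    for i, j, w in sorted(weighted_edges, key=lambda e: -e[2]):
        ri, rj = find(i), find(j)
        if ri != rj:          # edge joins two components -> birth value
            parent[ri] = rj
            births.append(w)
        else:                 # edge closes a cycle -> death value
            deaths.append(w)
    return sorted(births), sorted(deaths)

edges = [(0, 1, 0.9), (1, 2, 0.7), (2, 3, 0.5), (0, 3, 0.3), (0, 2, 0.1), (1, 3, 0.2)]
B, D = birth_death(4, edges)
print(B, D)  # B = [0.5, 0.7, 0.9], D = [0.1, 0.2, 0.3]
```

As stated above, |B| = |V| − 1 = 3 and |D| = 1 + |V|(|V|−3)/2 = 3 for this complete four-node graph.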
Use of edge-weight threshold filtration and limiting consideration to connected components and cycles as topological features results in significant simplification of the 2-Wasserstein distance ( Rabin et al. , 2011 ) between barcode descriptors ( Cohen-Steiner et al. , 2010 ) of networks as follows . Let G and H be two given networks that have the same number of nodes . The topological distance dtop(G, H) is defined as the optimal matching cost :

dtop(G, H) = ( min_τ ∑_{p ∈ PB(G)} ||p − τ(p)||² )^{1/2} = ( min_τ ∑_{p=[bp, dp] ∈ PB(G)} ( [bp − bτ(p)]² + [dp − dτ(p)]² ) )^{1/2} ,   (1)

where the optimization is over all possible bijections τ from barcode PB(G) to barcode PB(H) . Intuitively , we can think of each interval [bi, di] as a point (bi, di) in the 2-dimensional plane , so that the topological distance measures the minimal amount of work needed to move the points of PB(G) onto PB(H) . Note that this alternative representation of points in the plane is equivalent to the persistence barcode and is called the persistence diagram ( Edelsbrunner & Harer , 2008 ) . Moving a connected component point (bi, ∞) to a cycle point (−∞, dj) , or vice versa , takes an infinitely large amount of work . Thus , we only need to optimize over bijections that match the same type of topological features . Subsequently , we can equivalently rewrite dtop in terms of B(G) , D(G) , B(H) and D(H) as

dtop(G, H) = ( min_{τ0} ∑_{b ∈ B(G)} [b − τ0(b)]² + min_{τ1} ∑_{d ∈ D(G)} [d − τ1(d)]² )^{1/2} ,   (2)

where τ0 is a bijection from B(G) to B(H) and τ1 is a bijection from D(G) to D(H) . The first term matches connected components to connected components and the second term matches cycles to cycles . Matching each type of topological feature separately is commonly done in medical imaging and machine learning studies ( Clough et al. , 2020 ; Hu et al. , 2019 ) .
The topological distance dtop has a closed-form solution that allows for efficient computation as follows ( Songdechakraiwut et al. , 2021 ) :

dtop(G, H) = ( ∑_{b ∈ B(G)} [b − τ*0(b)]² + ∑_{d ∈ D(G)} [d − τ*1(d)]² )^{1/2} ,   (3)

where τ*0 maps the l-th smallest birth value in B(G) to the l-th smallest birth value in B(H) , and τ*1 maps the l-th smallest death value in D(G) to the l-th smallest death value in D(H) , for all l . A proof is in the supplementary material . As a result , the optimal matching cost can be computed quickly and efficiently by sorting the birth and death values and matching them in order . The computational cost of evaluating dtop is O(n log n) , where n is the number of edges in the networks .
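The closed-form solution of Eq. (3) amounts to sorting each set and matching in order. A minimal sketch, taking the B and D sets produced by a birth-death decomposition as inputs (the numbers below are hypothetical):

```python
def d_top(BG, DG, BH, DH):
    """Closed-form topological distance: sort birth sets and death sets of
    the two networks, match in order, sum squared differences, take sqrt."""
    assert len(BG) == len(BH) and len(DG) == len(DH)  # same-size networks
    cost = sum((b - b2) ** 2 for b, b2 in zip(sorted(BG), sorted(BH)))
    cost += sum((d - d2) ** 2 for d, d2 in zip(sorted(DG), sorted(DH)))
    return cost ** 0.5

print(d_top([0.5, 0.7, 0.9], [0.1, 0.2, 0.3],
            [0.4, 0.7, 1.0], [0.1, 0.3, 0.3]))  # sqrt(0.03), about 0.1732
```

Sorting dominates, so the cost is O(n log n) in the number of edges, as stated above.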
In this article, the authors propose a way to characterize graphs using topology and persistent homology. Indeed, graphs represented with adjacency matrices can be filtered using the corresponding edge weights, and the resulting persistence diagrams can then be used to encode the topological structure of the graph. Moreover, since the topology of graphs can be easily controlled (the only features they have are connected components and loops), their persistence diagrams are simple enough that matchings and distances between them can be computed efficiently. However, since topology alone can miss important information, the authors suggest combining this topological distance with a more standard one, namely the Frobenius norm between the adjacency matrices themselves. Then, they show how to use these distances in an expectation-maximization-like algorithm, relying on theoretical guarantees for computing the Fréchet mean associated with their distances. Finally, they provide experiments in which their procedure compares favorably to competitors on a few synthetic and real-world graph classification tasks.
SP:0dd5a89f8517f0d5e91eca5676379bbf50cc0a88
Fast topological clustering with Wasserstein distance
This paper presents a novel algorithm for clustering networks based on their topological properties. To this end, an algorithm based on persistent homology (a method from computational topology that permits the calculation of multi-scale features of unstructured and structured data sets) is introduced. The key feature of the algorithm is that it manages to substantially reduce the cost of 'matching' topological features between two different data sets, thus making their comparison and similarity assessment computationally feasible. The paper provides an algorithm that can employ *both* geometrical and topological features of a network (with an appropriate regularisation term); experiments demonstrate the general utility of the algorithm.
Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification
1 INTRODUCTION . Offline reinforcement learning ( RL ) has shown great potential in advancing the deployment of RL in real-world tasks where interaction with the environment is prohibitive , costly , or risky ( Thomas , 2015 ) . Since an agent has to learn from a given pre-collected dataset in offline RL , it becomes challenging for regular online RL algorithms such as DDPG ( Lillicrap et al. , 2016 ) and TD3 ( Fujimoto et al. , 2018 ) due to extrapolation error ( Lee et al. , 2021 ) . There has been recent progress in tackling the problem based on conservatism . Behavior regularization ( Wu et al. , 2019 ; Kumar et al. , 2019 ) , e.g. , TD3 with Behavior Cloning ( TD3+BC ) ( Fujimoto & Gu , 2021 ) , compels the learning policy to stay close to the manifold of the datasets . Yet , its performance highly depends on the quality of the dataset . Another line of research investigates incorporating conservatism into the value function by critic regularization ( Nachum et al. , 2019 ; Kostrikov et al. , 2021 ) , e.g. , Conservative Q-Learning ( Kumar et al. , 2020 ) , which usually learns a conservative estimate of the value function to directly address the extrapolation error . However , many practical scenarios involve multiple agents , e.g. , multi-robot control ( Amato , 2018 ) , autonomous driving ( Pomerleau , 1989 ; Sadigh et al. , 2016 ) . Therefore , offline multi-agent reinforcement learning ( MARL ) ( Yang et al. , 2021 ; Jiang & Lu , 2021 ) is crucial for solving real-world tasks . Observing recent success of Independent PPO ( de Witt et al. , 2020 ) and Multi-Agent PPO ( Yu et al. , 2021 ) , both of which are based on the PPO ( Schulman et al. , 2017 ) algorithm , we find that online RL algorithms can be transferred to multi-agent scenarios through either decentralized training or a centralized value function without bells and whistles . Hence , we naturally expect that offline RL algorithms would also transfer easily when applied to multi-agent tasks . 
Surprisingly , we observe that the performance of the state-of-the-art conservatism-based CQL ( Kumar et al. , 2020 ) algorithm in offline RL degrades dramatically with an increasing number of agents , as shown in Figure 1 ( c ) in our experiments . Towards mitigating the degradation , we identify a critical issue in CQL : solely regularizing the critic is insufficient for multiple agents to learn good policies for coordination in the offline setting . The primary cause is that first-order policy gradient methods are prone to local optima ( Nachum et al. , 2016 ; Ge et al. , 2017 ; Safran & Shamir , 2017 ) , saddle points ( Vlatakis-Gkaragkounis et al. , 2019 ; Sun et al. , 2020 ) , and noisy gradient estimates ( Such et al. , 2017 ) . As a result , this can lead to uncoordinated , suboptimal learning behavior , because the actor cannot fully leverage the global information in the critic . The issue is exacerbated in multi-agent settings due to the exponentially sized joint action space ( Yang et al. , 2021 ) , as well as the nature of the setting , which requires each of the agents to learn a good policy for a successful joint policy . Consider , for example , a basketball game between two competing teams of five players each . As the ball is passed among teammates , it is important for every player to perform well in their role for the team to win . If one agent in the team fails to learn a good policy , it can fail to cooperate with the other agents for coordinated behavior and lose the ball . In this paper , we propose a surprisingly simple yet effective method for offline multi-agent continuous control , Offline MARL with Actor Rectification ( OMAR ) , to better leverage the conservative value function via an effective combination of first-order policy gradient and zeroth-order optimization methods .
Towards this goal , we add a regularizer to the actor loss which encourages the actor to mimic actions from the zeroth-order optimizer that maximizes Q-values , so that we can combine the best of both first-order policy gradients and zeroth-order optimization . The sampling mechanism is motivated by evolution strategies ( Such et al. , 2017 ; Conti et al. , 2017 ; Mania et al. , 2018 ) , which recently emerged as another paradigm for solving sequential decision making tasks ( Salimans et al. , 2017 ) . Specifically , the zeroth-order optimization part maintains an iteratively updated and refined Gaussian distribution to find better actions based on Q-values . Then , we rectify the policy towards this action to better leverage the conservative value function . We conduct extensive experiments in standard continuous control multi-agent particle environments and a complex multi-agent locomotion task to demonstrate its effectiveness . On all the benchmark tasks , OMAR outperforms the multi-agent versions of offline RL algorithms including CQL ( Kumar et al. , 2020 ) and TD3+BC ( Fujimoto & Gu , 2021 ) , as well as a recent offline MARL algorithm , MA-ICQ ( Yang et al. , 2021 ) , and achieves state-of-the-art performance . The main contribution of this work can be summarized as follows . We propose the OMAR algorithm , which effectively leverages both first-order and zeroth-order optimization for solving offline MARL tasks . In addition , we theoretically prove that OMAR leads to safe policy improvement . Finally , extensive experimental results demonstrate the effectiveness of OMAR , which significantly outperforms strong baseline methods and achieves state-of-the-art performance on datasets of different qualities in both decentralized and centralized learning paradigms . 2 BACKGROUND . We consider the framework of partially observable Markov games ( POMG ) ( Littman , 1994 ; Hu et al. , 1998 ) , which extends Markov decision processes to the multi-agent setting .
A POMG with N agents is defined by a set of global states S , a set of actions A1 , . . . , AN , and a set of observations O1 , . . . , ON for each agent . At each timestep , each agent i receives an observation oi and chooses an action based on its policy πi . The environment transitions to the next state according to the state transition function P : S × A1 × . . . × AN × S → [0, 1] . Each agent receives a reward based on the reward function ri : S × A1 × . . . × AN → R and a private observation oi : S → Oi . The initial state distribution is defined by ρ : S → [0, 1] . The goal is to find a set of optimal policies π = {π1 , . . . , πN} , where each agent aims to maximize its own discounted return ∑_{t=0}^∞ γ^t r_i^t , with γ denoting the discount factor . In the offline setting , agents learn from a fixed dataset D generated from the behavior policy πβ , without interaction with the environment . 2.1 MULTI-AGENT ACTOR CRITIC . Centralized critic . Lowe et al . ( 2017 ) propose Multi-Agent Deep Deterministic Policy Gradients ( MADDPG ) under the centralized training with decentralized execution ( CTDE ) paradigm by extending the DDPG algorithm ( Lillicrap et al. , 2016 ) to the multi-agent setting . In CTDE , agents are trained in a centralized way with access to extra global information during training , while they must learn decentralized policies in order to act based only on local observations during execution . In MADDPG , for an agent i , the centralized critic Qi is parameterized by θi . It takes the global state and joint action as inputs , and aims to minimize the temporal difference error defined by L(θi) = E_D[ (Qi(s, a1, . . . , an) − yi)² ] , where yi = ri + γ Q̄i(s′, a′1, . . . , a′n)|_{a′j = π̄j(o′j)} , and Q̄i and π̄i denote target networks . To reduce the overestimation problem in MADDPG , MATD3 ( Ackermann et al. , 2019 ) estimates the target value using double estimators based on TD3 ( Fujimoto et al.
, 2018 ) , where yi = ri + γ min_{k=1,2} Q̄^k_i(s′, a′1, . . . , a′n)|_{a′j = π̄j(o′j)} . Agents learn decentralized policies πi , parameterized by φi , which take only local observations as inputs and are trained by the multi-agent policy gradient ∇φi J(πi) = E_D[ ∇φi πi(ai|oi) ∇ai Qi(s, a1, . . . , an)|_{ai = πi(oi)} ] , where ai is predicted from agent i's policy while a−i are sampled from the replay buffer . Decentralized critic . Although using centralized critics is widely adopted in multi-agent actor-critic methods , it introduces scalability issues due to the exponentially sized joint action space w.r.t . the number of agents ( Iqbal & Sha , 2019 ) . On the other hand , independent learning approaches train decentralized critics that take only the local observation and action as inputs . It is shown in de Witt et al . ( 2020 ) ; Lyu et al . ( 2021 ) that decentralized value functions can result in more robust performance and be beneficial in practice compared with centralized critic approaches . de Witt et al . ( 2020 ) propose Independent Proximal Policy Optimization ( IPPO ) based on PPO ( Schulman et al. , 2017 ) , and show that it can match or even outperform CTDE approaches on challenging discrete control benchmark tasks ( Samvelyan et al. , 2019 ) . We can also obtain an Independent TD3 ( ITD3 ) algorithm based on decentralized critics , trained to minimize the temporal difference error defined by L(θi) = E_D[ (Qi(oi, ai) − yi)² ] , where yi = ri + γ min_{k=1,2} Q̄^k_i(o′i, π̄i(o′i)) . 2.2 CONSERVATIVE Q-LEARNING . Conservative Q-Learning ( CQL ) ( Kumar et al. , 2020 ) adds a regularizer to the critic loss to address the extrapolation error and learn lower-bounded Q-values . It penalizes Q-values of state-action pairs sampled from a uniform distribution or a policy while encouraging Q-values for state-action pairs in the dataset to be large .
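As a toy numeric illustration of two ingredients above, the clipped double-Q target used by MATD3/ITD3 and CQL's log-sum-exp penalty (shown here for a small discretized action set; the continuous case approximates it by sampling) can be sketched as follows. All Q-values are hypothetical scalar stand-ins for critic outputs:

```python
import math

def td3_target(r, gamma, q1_next, q2_next):
    # y_i = r_i + gamma * min_{k=1,2} Qbar^k_i(next obs, target action)
    return r + gamma * min(q1_next, q2_next)

def cql_penalty(q_all_actions, q_data, alpha):
    # alpha * ( log sum_a exp(Q_i(o_i, a)) - Q_i(o_i, dataset action) )
    lse = math.log(sum(math.exp(q) for q in q_all_actions))
    return alpha * (lse - q_data)

y = td3_target(r=1.0, gamma=0.99, q1_next=5.0, q2_next=4.5)  # uses the smaller critic
pen = cql_penalty([1.0, 1.0], q_data=1.0, alpha=1.0)         # log(2), about 0.693
print(y, pen)
```

Note how the penalty shrinks toward zero once the dataset action's Q-value dominates the others, e.g. `cql_penalty([5.0, -5.0], q_data=5.0, alpha=1.0)` is only about 4.5e-5.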
Specifically, when built upon decentralized-critic methods in MARL, the critic loss is defined as in Eq. (1), where $\alpha$ denotes the regularization coefficient and $\hat{\pi}^\beta_i$ is the empirical behavior policy of agent $i$:

$$\mathbb{E}_{D_i}\big[(Q_i(o_i, a_i) - y_i)^2\big] + \alpha\, \mathbb{E}_{D_i}\Big[\log \sum_{a_i} \exp\big(Q_i(o_i, a_i)\big) - \mathbb{E}_{a_i \sim \hat{\pi}^\beta_i(a_i|o_i)}\big[Q_i(o_i, a_i)\big]\Big] \quad (1)$$

3 PROPOSED METHOD.

In this section, we first provide a motivating example where previous methods such as CQL (Kumar et al., 2020) and TD3+BC (Fujimoto & Gu, 2021) can be ineffective in the multi-agent setting. Then, we propose a method called Offline Multi-Agent Reinforcement Learning with Actor Rectification (OMAR), which effectively combines first-order policy gradients and zeroth-order optimization methods for the actor to better optimize the conservative value function.
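The per-sample CQL critic loss of Eq. (1) can be sketched as follows (a hedged illustration, assuming a finite set of candidate actions so the log-sum-exp is an exact sum; the function and argument names are ours):

```python
import numpy as np

def cql_critic_loss(q_all_actions, q_data_action, td_target, alpha=1.0):
    # Standard TD term: (Q_i(o_i, a_i) - y_i)^2
    td_error = (q_data_action - td_target) ** 2
    # log sum_a exp(Q_i(o_i, a)) pushes down Q-values of arbitrary actions...
    logsumexp = np.log(np.sum(np.exp(q_all_actions)))
    # ...while -Q_i(o_i, a_i) pushes up Q-values of dataset actions.
    return td_error + alpha * (logsumexp - q_data_action)
```

With continuous actions the log-sum-exp would instead be approximated by sampling, as in the original CQL implementation.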
This paper considers the offline multi-agent RL setting, first demonstrating that optimization is more likely to find bad local optima than in the single-agent case. To deal with this problem, the authors propose adding zeroth-order optimization to multi-agent training and provide a theorem guaranteeing that this approach leads to safe improvements. The authors conduct extensive experiments and ablations against relevant baselines on the multi-agent particle environments to demonstrate the efficacy of their approach as a function of the type of data used for offline training. They also provide experiments at a slightly larger scale on the multi-agent half-cheetah environment.
SP:c6450f13972968ade91e9cc398caca62fae97d1b
Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification
1 INTRODUCTION.

Offline reinforcement learning (RL) has shown great potential in advancing the deployment of RL in real-world tasks where interaction with the environment is prohibitive, costly, or risky (Thomas, 2015). Since an agent has to learn from a given pre-collected dataset in offline RL, the setting is challenging for regular online RL algorithms such as DDPG (Lillicrap et al., 2016) and TD3 (Fujimoto et al., 2018) due to extrapolation error (Lee et al., 2021). There has been recent progress in tackling this problem based on conservatism. Behavior regularization (Wu et al., 2019; Kumar et al., 2019), e.g., TD3 with Behavior Cloning (TD3+BC) (Fujimoto & Gu, 2021), compels the learning policy to stay close to the manifold of the dataset; yet its performance depends heavily on the quality of the dataset. Another line of research incorporates conservatism into the value function via critic regularization (Nachum et al., 2019; Kostrikov et al., 2021), e.g., Conservative Q-Learning (Kumar et al., 2020), which learns a conservative estimate of the value function to directly address the extrapolation error. However, many practical scenarios involve multiple agents, e.g., multi-robot control (Amato, 2018) and autonomous driving (Pomerleau, 1989; Sadigh et al., 2016). Therefore, offline multi-agent reinforcement learning (MARL) (Yang et al., 2021; Jiang & Lu, 2021) is crucial for solving real-world tasks. Observing the recent success of Independent PPO (de Witt et al., 2020) and Multi-Agent PPO (Yu et al., 2021), both of which are based on the PPO (Schulman et al., 2017) algorithm, we find that online RL algorithms can be transferred to multi-agent scenarios through either decentralized training or a centralized value function without bells and whistles. Hence, we naturally expect that offline RL algorithms would also transfer easily to multi-agent tasks.
Surprisingly, we observe that the performance of the state-of-the-art conservatism-based CQL (Kumar et al., 2020) algorithm for offline RL degrades dramatically with an increasing number of agents, as shown in Figure 1(c) of our experiments. Towards mitigating this degradation, we identify a critical issue in CQL: solely regularizing the critic is insufficient for multiple agents to learn good policies for coordination in the offline setting. The primary cause is that first-order policy gradient methods are prone to local optima (Nachum et al., 2016; Ge et al., 2017; Safran & Shamir, 2017), saddle points (Vlatakis-Gkaragkounis et al., 2019; Sun et al., 2020), and noisy gradient estimates (Such et al., 2017). As a result, the actor cannot leverage the global information in the critic well, which can lead to uncoordinated, suboptimal learning behavior. The issue is further exacerbated in multi-agent settings due to the exponentially sized joint action space (Yang et al., 2021), as well as the nature of the setting, which requires each agent to learn a good policy for the joint policy to succeed. Consider, for example, a basketball game between two competing teams, each consisting of five players, in which the ball is passed among teammates: all teammates must perform their roles well to win the game. If one agent in a team fails to learn a good policy, the team can fail to coordinate and lose the ball. In this paper, we propose a surprisingly simple yet effective method for offline multi-agent continuous control, Offline MARL with Actor Rectification (OMAR), which better leverages the conservative value function via an effective combination of first-order policy gradients and zeroth-order optimization.
Towards this goal, we add a regularizer to the actor loss which encourages the actor to mimic actions found by a zeroth-order optimizer that maximizes Q-values, combining the best of first-order policy gradients and zeroth-order optimization. The sampling mechanism is motivated by evolution strategies (Such et al., 2017; Conti et al., 2017; Mania et al., 2018), which have recently emerged as another paradigm for solving sequential decision-making tasks (Salimans et al., 2017). Specifically, the zeroth-order optimization component maintains an iteratively updated and refined Gaussian distribution over actions to find better actions based on Q-values. We then rectify the policy towards the resulting action to better leverage the conservative value function. We conduct extensive experiments on standard continuous-control multi-agent particle environments and a complex multi-agent locomotion task to demonstrate its effectiveness. On all benchmark tasks, OMAR outperforms the multi-agent versions of offline RL algorithms including CQL (Kumar et al., 2020) and TD3+BC (Fujimoto & Gu, 2021), as well as a recent offline MARL algorithm, MA-ICQ (Yang et al., 2021), achieving state-of-the-art performance. The main contributions of this work can be summarized as follows. We propose the OMAR algorithm, which effectively leverages both first-order and zeroth-order optimization for solving offline MARL tasks. In addition, we theoretically prove that OMAR leads to safe policy improvement. Finally, extensive experimental results demonstrate the effectiveness of OMAR, which significantly outperforms strong baseline methods and achieves state-of-the-art performance on datasets of different qualities in both decentralized and centralized learning paradigms.

2 BACKGROUND.

We consider the framework of partially observable Markov games (POMG) (Littman, 1994; Hu et al., 1998), which extends Markov decision processes to the multi-agent setting.
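The iteratively refined Gaussian sampling described above can be sketched as a cross-entropy-method-style search (this is our reading of the mechanism, not the authors' exact procedure; the update rule, hyperparameters, and names are assumptions):

```python
import numpy as np

# Refit a Gaussian over actions to the highest-Q samples for a few
# iterations, then return its mean as the rectification target.
def zeroth_order_action(q_fn, obs, act_dim, iters=5, n_samples=64, n_elite=8, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(act_dim), np.ones(act_dim)
    for _ in range(iters):
        actions = rng.normal(mu, sigma, size=(n_samples, act_dim))
        scores = np.array([q_fn(obs, a) for a in actions])
        elite = actions[np.argsort(scores)[-n_elite:]]      # top-Q samples
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu  # candidate action a_hat toward which the policy is rectified
```

The actor loss would then add a term such as $\lambda \lVert \pi_i(o_i) - \hat{a}_i \rVert^2$, with $\lambda$ and the iteration/sample counts above as hypothetical hyperparameters.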
This work looks at the problem of training multi-agent reinforcement learning with continuous action spaces in an offline setting. The authors identify a saddle-point issue in the value-function landscape of existing offline MARL/RL methods, which causes the actor policy to get stuck in a bad local optimum. The proposed method samples and evaluates different actions drawn from a Gaussian distribution, and adds a regularizer to the actor loss to encourage the actor policy to take actions with high Q-values.
This work considers extending conservatism-based algorithms to offline RL with multiple agents. The performance of standard algorithms often degrades significantly in this setting, especially as the number of agents increases. To resolve the issue, the authors propose a simple scheme which essentially combines first-order and zeroth-order policy optimization methods, with the goal of extracting the advantages of each. Empirical results demonstrate that the proposed algorithm can achieve better performance than standard baselines.
Unifying Distribution Alignment as a Loss for Imbalanced Semi-supervised Learning
1 INTRODUCTION.

Semi-supervised learning (SSL) uses a large pool of unlabeled data to learn a classifier despite having access to only a small amount of labeled data. Recently, techniques have been introduced (Berthelot et al., 2019; 2020; Sohn et al., 2020) which simplify the process while pushing performance to new levels. However, these approaches have focused on cases where the class distributions are balanced for both the labeled and unlabeled data. At the same time, work within the supervised learning community has shown renewed focus on imbalanced, or long-tailed, learning, owing to the fact that most data in the real world is not well balanced. A variety of methodologies for this setting have been introduced (Kang et al., 2020; Menon et al., 2021; Ren et al., 2020; Hong et al., 2021). For instance, many have observed the bias that ordinary supervised learning techniques suffer from, favoring head classes over the less numerous tail classes. Kang et al. (2020) show that softmax-based classifiers often produce classification weights which correlate with class frequency and thus reduce the class-balanced performance of the model. Resampling strategies (Chawla et al., 2002; He & Garcia, 2009; Buda et al., 2018; Byrd & Lipton, 2019), which sample from the pool according to a desired distribution, are often effective as well. By sampling the data distribution for the majority of training and shifting to a more class-balanced regime at the end, one can learn a good representation while mitigating the aforementioned bias at the final classification layer. In addition, a distributional shift (Hong et al., 2021) can be observed within existing protocols, where training occurs on an imbalanced dataset yet evaluation is done with respect to a balanced one. In this work, we study the combined setting of semi-supervised and imbalanced learning.
In particular, we consider both FixMatch (Sohn et al., 2020) and MixMatch (Berthelot et al., 2019) as base semi-supervised learners. Both employ two losses: a cross-entropy loss on the labeled data, and an unsupervised loss that relies on consistency between the classifier outputs among augmented versions of unlabeled examples.

[Figure: accuracy versus total training epochs for FixMatch DA, CReST+, UDAL (ours), and supervised baselines trained with 10% and 100% of labels.]

Therefore, vulnerability to bias from the imbalance can arise in three ways: through the supervised loss itself, through the quality of pseudo-labels derived from the classifier on unlabeled examples, and through the pseudo-labels themselves, which can bias the classifier even if they are perfectly predicted. Furthermore, confirmation bias within semi-supervised learning is already a worrisome factor even without any imbalance between the classes (Arazo et al., 2020). In the balanced setting, the ReMixMatch (Berthelot et al., 2020) approach to semi-supervised learning introduces strong regularization through distribution alignment (Bridle et al., 1992), i.e., modifying the prediction by the ratio of the desired distribution to the model distribution, to help mitigate this. Recently, CReST (Wei et al., 2021) has shown that distribution alignment also confers benefits in the imbalanced setting, aiming to progressively rebalance the distribution of pseudo-labels. Similar to resampling approaches in imbalanced learning (Kang et al., 2020), which shift from random sampling to class-balanced sampling, CReST aligns the pseudo-labels to an increasingly balanced distribution as training progresses. However, distribution alignment is not the only technique that CReST relies upon to achieve good performance.
In addition, CReST requires a generational approach to self-training, which accumulates a relatively balanced subset of confident pseudo-labels to augment the labeled set. As the generations proceed, this subset is re-sampled to become more and more balanced. Each generation re-initializes the classifier's network, and therefore the only "state" retained is through these accumulated pseudo-labels (now treated as ordinary supervised labels). This process can be extremely costly with respect to training time, and we hypothesize that it is not optimal because it fails to directly address the imbalance in the labeled data. Instead, we seek a simpler solution to imbalanced semi-supervised learning through distribution alignment alone. We ask: is this disjoint methodology truly necessary? Can a single, central approach be devised to address imbalance in semi-supervised learning? In this work, as our contributions, we answer this question affirmatively by connecting the idea of progressive distribution alignment from Berthelot et al. (2020) and Wei et al. (2021) with the method of logit adjustment from fully supervised imbalanced learning (Menon et al., 2021; Ren et al., 2020; Hong et al., 2021). Furthermore, our approach can be implemented with only a few lines of code, has significantly reduced training-time requirements, and generally outperforms previous work. Finally, it shows significantly better performance characteristics as more labeled data becomes available and readily scales to larger datasets, achieving a 1.6% increase in accuracy over the best existing method on ImageNet-127.

2 PREREQUISITES.

We present background information on both problem settings. First, we formally define the problem setting of imbalanced semi-supervised learning (SSL). Second, we outline the idea of distribution alignment (Berthelot et al., 2020; Wei et al., 2021), used to improve pseudo-label quality in both the balanced and imbalanced SSL settings. We also revisit a recent method, CReST (Wei et al., 2021), which likewise attempts to address imbalanced semi-supervised learning.

2.1 CLASS-IMBALANCED SEMI-SUPERVISED LEARNING.

Semi-supervised learning relies on two sources of data: a labeled set $X = \{(x_i, y_i)\}_{i=1}^{N}$, where each $x_i$ is a training example and $y_i$ is the corresponding target. Since classification is the focus of this work, we consider $y_i$ as a class label within $\mathcal{C} = \{1, \ldots, C\}$ with a total number of $C$ classes. In imbalanced learning, we expect varying numbers of training examples across classes. Therefore, we denote the number of examples in our labeled set corresponding to class $c \in \mathcal{C}$ as $N_c$, such that $\sum_{c=1}^{C} N_c = N$. We assume that the classes are ordered by frequency in descending order, i.e., $N_c \ge N_{c+1}$. It is often useful to characterize the degree of imbalance by the ratio $N_1 / N_C$, which we refer to as the imbalance ratio of the dataset. We use $p_{\text{data}}(y)$ and $q(y)$ to denote the marginal class distributions of the data and the model; when there is no ambiguity, we drop $y$ and write $p_{\text{data}}$ and $q$. Additionally, we have an unlabeled set of examples $U = \{u_i\}_{i=1}^{M}$ for which we have no corresponding targets. While we expect that this set is also imbalanced, we additionally make the common assumption (Wei et al., 2021) that it follows the same class distribution and thus shares the same imbalance ratio as the labeled set. Finally, an important measure $\beta = \frac{N}{N+M}$ gives the fraction of all examples that are labeled.

2.2 DISTRIBUTION ALIGNMENT.

Distribution alignment (DA) was re-introduced in the setting of semi-supervised learning within ReMixMatch (Berthelot et al., 2020). To mitigate the tendency of semi-supervised learning to suffer from confirmation bias, regularization can be added to the pseudo-label inference step.
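As a toy illustration of these dataset statistics (with hypothetical class counts, not from the paper):

```python
# A 3-class labeled set, ordered so that N_c >= N_{c+1}.
counts = [500, 200, 50]                   # N_c per class
N = sum(counts)                           # total labeled examples
imbalance_ratio = counts[0] / counts[-1]  # N_1 / N_C
M = 6750                                  # unlabeled examples (same imbalance assumed)
beta = N / (N + M)                        # labeled fraction beta = N / (N + M)
```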
In particular, if we assume the labeled and unlabeled data both come from the same distribution p_data (although we do not know the particular labels of the unlabeled set), we would expect our model to produce pseudo-labels that follow the same distribution. The marginal distribution of the model, q(y), can be estimated by a moving average, which we denote as q̂(y) or q̂, as in Berthelot et al. (2020). If we denote our current model's predictions on an unlabeled example as q(y|x_u), these predictions can be re-scaled by dividing by q̂ and multiplying by p_data. After normalization, we have:

q̃(y|x) = Normalize( q(y|x) · p_data / q̂ ) .    (1)

In equation 1, we assume element-wise operations between q(y|x), p_data, and q̂; Normalize(p) rescales p into a probability distribution which sums to 1. As noted in Wei et al. (2021), it is not always optimal to align the predictions directly to p_data when p_data is imbalanced. Rather, a smoothed form which (element-wise) exponentiates the distribution by a factor of α before normalization,

p̃_α = Normalize( p_data^α ) ,  0 ≤ α ≤ 1 ,    (2)

is used instead of p_data, and is found both to regularize the predictions and to combat bias. As α → 0, this approaches alignment against a uniform distribution.

2.3 LOGIT ADJUSTMENT

While DA is clearly applicable to pseudo-label inference during training, it has no direct effect on the labeled portion. Since a semi-supervised approach relies on both labeled and unlabeled losses, it is critical to address the problem of imbalance at the supervised level as well. For this, we examine a popular technique within supervised learning, known as logit adjustment (Menon et al., 2021), balanced softmax (Ren et al., 2020), or LADE (Hong et al., 2021). These methods modify the loss computation to compensate for the class imbalance found in the data distribution.
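Equations 1 and 2 combine into a few lines of array arithmetic. The sketch below is our own rendering (function and variable names are ours): it substitutes the smoothed target p̃_α of equation 2 for p_data in equation 1, with α = 1 recovering plain distribution alignment.

```python
import numpy as np

def align(q, p_data, q_hat, alpha=1.0):
    """Distribution alignment of one predicted distribution q (eqs. 1-2)."""
    target = p_data ** alpha
    target = target / target.sum()      # p~_alpha = Normalize(p_data^alpha)
    scaled = q * target / q_hat         # element-wise rescaling as in eq. 1
    return scaled / scaled.sum()        # Normalize(.) back to a distribution

p_data = np.array([0.7, 0.2, 0.1])     # imbalanced labeled class marginal
q_hat = np.array([0.8, 0.15, 0.05])    # moving average of model predictions
q = np.array([0.6, 0.3, 0.1])          # prediction on one unlabeled example
q_tilde = align(q, p_data, q_hat, alpha=0.5)
```

With α = 0 the target becomes uniform, so the aligned prediction reduces to a normalized q/q̂, matching the limiting behaviour described above.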
Notably, when a data distribution is class-imbalanced, we minimize the classification loss with respect to this imbalanced data distribution. At evaluation time, however, we either evaluate on a class-balanced dataset or produce a class-balanced error by averaging the per-class accuracies. This shift in distribution can cause poor performance. Therefore, the shift is integrated into the cross-entropy loss:

L_LA(y, f(x)) = L_CE(y, f(x) + log p_data − log Unif(C)) ≡ L_CE(y, f(x) + log p_data) ,    (3)

where Unif(C) is the discrete uniform distribution over C classes, y is the true label of x, f(x) is the vector-valued output of the classifier, and p_data is the marginal class distribution of the data as a vector. As elaborated in Menon et al. (2021), this has the effect that instead of optimizing f(x) directly, we optimize h(x) = f(x) + log p_data, which correctly aligns the source distribution of f(x) to the uniform class distribution seen during evaluation. Menon et al. (2021) also discuss an inference-time procedure which attempts to account for this shift without modifying the training procedure. Since this is "for free", we include results combined with it in Table 1 as "LA (Inf)".
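Equation 3 amounts to shifting the logits by log p_data before the usual softmax cross entropy. A minimal sketch (names ours, stable log-sum-exp spelled out by hand):

```python
import numpy as np

def logit_adjusted_ce(logits, label, p_data):
    """Logit-adjusted cross entropy of eq. 3 for a single example."""
    adjusted = logits + np.log(p_data)              # h(x) = f(x) + log p_data
    m = adjusted.max()
    log_z = m + np.log(np.exp(adjusted - m).sum())  # stable logsumexp
    return -(adjusted[label] - log_z)               # cross entropy on true label

# Example: an uninformative classifier pays a larger penalty for a tail class,
# which is exactly the pressure that compensates for the imbalance.
loss_tail = logit_adjusted_ce(np.zeros(3), label=2,
                              p_data=np.array([0.7, 0.2, 0.1]))
```

With a uniform p_data the adjustment vanishes and the loss reduces to plain cross entropy, consistent with the − log Unif(C) term dropping out in equation 3.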
This paper addresses semi-supervised learning in cases where the underlying data distribution is severely imbalanced. The approach combines distribution alignment with logit adjustment, resulting in an efficient method for the problem that also improves performance in the test setting. Unlike existing state-of-the-art approaches such as CReST, the approach involves no sampling state, and imbalance mitigation is achieved solely by modifying the model's loss functions. Experiments are conducted over three benchmark vision datasets of varying complexity: long-tailed versions of CIFAR10, CIFAR100, and ImageNet-127. Experimental results are competitive with or exceed other methods on the tested datasets, with a 5x training speedup compared to CReST.
SP:4d01481950c3da65473c9b7d74bfaebe505320f6
Unifying Distribution Alignment as a Loss for Imbalanced Semi-supervised Learning
1 INTRODUCTION

Semi-supervised learning (SSL) uses a large pool of unlabeled data to learn a classifier despite having access to only a small amount of labeled data. Recently, techniques have been introduced (Berthelot et al., 2019; 2020; Sohn et al., 2020) which simplify the process while pushing performance to new levels. However, these approaches have focused on cases where the class distributions are balanced for both the labeled and unlabeled data. At the same time, work within the supervised learning community has shown renewed focus on imbalanced, or long-tailed, learning, owing to the fact that most data in the real world is not well balanced. A variety of methodologies for this setting have been introduced (Kang et al., 2020; Menon et al., 2021; Ren et al., 2020; Hong et al., 2021). For instance, many have observed the bias that ordinary supervised learning techniques suffer from, favoring head classes over the less numerous tail classes. Kang et al. (2020) show that softmax-based classifiers often produce classification weights which correlate with class frequency, thus reducing the class-balanced performance of the model. Resampling strategies (Chawla et al., 2002; He & Garcia, 2009; Buda et al., 2018; Byrd & Lipton, 2019), which sample from the pool based on the desired distribution, are often effective as well. By shifting from sampling the data distribution for the majority of training to a more class-balanced regime at the end, one can learn a good representation while mitigating the aforementioned bias at the final classification layer. In addition, a distributional shift (Hong et al., 2021) can be observed within existing protocols, where training occurs on an imbalanced dataset yet evaluation is done with respect to a balanced one. In this work, we study the combined setting of semi-supervised and imbalanced learning.
In particular, we consider both FixMatch (Sohn et al., 2020) and MixMatch (Berthelot et al., 2019) as base semi-supervised learners. Both employ two losses: a cross-entropy loss on the labeled data, and an unsupervised loss that relies on consistency between the classifier outputs among augmented versions of unlabeled examples.

[Figure: accuracy versus total training epochs for FixMatch DA (α_min = 0.5), CReST+, UDAL (ours), and supervised baselines trained with 10% and 100% of labels.]

Therefore, vulnerability to bias from the imbalance can arise in three ways: through the supervised loss itself, through the quality of pseudo-labels derived from the classifier on unlabeled examples, and through the pseudo-labels themselves, which can bias the classifier even if they are perfectly predicted. Furthermore, confirmation bias within semi-supervised learning is already a worrisome factor even without any imbalance between the classes (Arazo et al., 2020). Within the balanced setting, the ReMixMatch (Berthelot et al., 2020) approach to semi-supervised learning introduces strong regularization through distribution alignment (Bridle et al., 1992), i.e., modifying the prediction by the ratio of the desired distribution to the model distribution, to help mitigate this. Recently, CReST (Wei et al., 2021) has shown that distribution alignment also confers benefits in the imbalanced setting, and it aims to progressively rebalance the distribution of pseudo-labels. Similar to resampling approaches in imbalanced learning (Kang et al., 2020), which shift from random sampling to class-balanced sampling, CReST attempts to align the pseudo-labels themselves to a more balanced distribution as training progresses. However, distribution alignment is not the only technique that CReST relies upon to achieve good performance.
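The two losses described above can be sketched in a FixMatch-style form. The confidence threshold τ and the weak/strong augmentation split follow Sohn et al. (2020); everything else (names, the toy probability arrays, the loss weight λ) is our own illustrative choice.

```python
import numpy as np

def ssl_loss(probs_labeled, labels, probs_weak, probs_strong, tau=0.95, lam=1.0):
    """Supervised CE on labeled data plus a consistency loss on unlabeled data."""
    # supervised cross entropy on the labeled batch
    sup = -np.mean(np.log(probs_labeled[np.arange(len(labels)), labels]))
    # pseudo-labels from weakly augmented views, kept only when confident
    pseudo = probs_weak.argmax(axis=1)
    mask = probs_weak.max(axis=1) >= tau
    # strongly augmented views must match the confident pseudo-labels
    unsup = -np.mean(np.log(probs_strong[mask, pseudo[mask]])) if mask.any() else 0.0
    return sup + lam * unsup

probs_labeled = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
probs_weak = np.array([[0.97, 0.02, 0.01], [0.5, 0.3, 0.2]])   # only row 0 is confident
probs_strong = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
loss = ssl_loss(probs_labeled, labels, probs_weak, probs_strong)
```

This is precisely where imbalance can enter three times: in `sup`, in the quality of `pseudo`, and through the class distribution of the confident pseudo-labels themselves.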
- This paper tackles the class-imbalanced problem within a semi-supervised learning scenario. Unlike previous approaches, which require a complicated sampling strategy and multiple training pipelines, the authors provide a simple and unified framework, UDAL, by connecting the idea of progressive distribution alignment (proposed for imbalanced semi-supervised learning) with logit adjustment (proposed for imbalanced supervised learning). The approach incurs no additional training time on top of the underlying semi-supervised learner. Significant empirical improvement on widely used benchmarks (CIFAR-10-LT, CIFAR-100-LT, and ImageNet-127) demonstrates the effectiveness of the proposed method.
This paper studies class-imbalanced semi-supervised learning. To handle this problem, a unified approach is proposed by combining distribution alignment (DA) and logit adjustment (LA). In particular, the paper proposes to apply DA and LA to both the supervised and unsupervised losses, which is new. The method shows significant improvement over baselines on three datasets.
Evolving Neural Update Rules for Sequence Learning
1 INTRODUCTION

The field of neural sequence processing has become dominated by neural networks trained by back-propagation, with the best models currently being transformers, and LSTM recurrent networks in the not so distant past. These networks can be used for a range of problems such as label prediction, modelling, and reinforcement learning. The basic computational nature of these networks can be characterized as follows: one can design an arbitrary (differentiable) forward computation that updates part of the network (the activations), and then one executes a specific algorithm to update another set of parameters (the weights, using back-propagation and variants of stochastic gradient descent). This has the advantage of freedom to flexibly design the forward computation. However, the hand-coded update algorithm is fixed, and may not be the most effective means of achieving a given training objective. There has been interest, as we review below, in directly finding the full computation (both activation and weight updates). In this paper, we aim to push this approach further by finding update rules that work better and that scale, comparing different functional forms and methods for optimizing the search for these rules. We pursue what we call end-to-end search for update rules. A good way to explain this is to consider the analogous process in nature. Roughly speaking, the brain of an organism is produced from the information of the parents, "run" for the lifetime of the organism, and selected based on how it, as well as its descendants, perform (for example, keep making descendants). Over time, this process developed neural networks that can learn, adapt quickly, and possess whichever other abilities were needed for the success of the organisms.
Similarly, we consider a space of neural networks and search for updates to their parameters, in our case the activations and the weights, that work well on problems we are interested in. This can allow networks to discover whatever updates are needed to solve these problems. We consider a parameterization of neural networks that is somewhat similar to classic artificial recurrent neural networks, with a state consisting of activations h_t and weights w_t at time t. There are, however, two primary differences. The first is that the entire computation (including learning, or whatever computation the network does) operates online; that is, the network receives input x_t and updates according to some function

h_t, w_t, p_t = f_θ(x_t, h_{t−1}, w_{t−1}) ,    (1)

where f_θ is the update function parameterized by hyper-parameters θ that are fixed for the lifetime of the network. Here p_t is the output, in our case the probability over the next character in the sequence (we discuss the problems we solve later). The second difference is that the activation h_{t,i} of a given neuron i, as well as the weight w_{t,ij} between neurons j and i, are (potentially) small vectors, as in Bertens & Lee (2019); Gregor (2020). Biological neurons were an inspiration for artificial neural networks, and so is the case here. The former are complex objects, and representing the cell body state as well as the synapse state by vectors (rather than scalars) should allow for a closer representation of the neuron's computation. Biological neurons furthermore provide a proof of principle that online neural computation is capable of powerful learning. There are several additional motivations for pursuing this approach. Learning general update rules end to end might allow networks to use weights in a more interesting or powerful fashion than that prescribed by back-propagation.
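To illustrate the shape of equation 1, here is a toy online network in which both the activation state h_t and the weight state w_t are small vectors updated at every step by a fixed function with hyper-parameters θ. The particular Hebbian-style update form, the readout, and all names are our own illustrative choices; the paper searches over such functional forms rather than fixing one.

```python
import numpy as np

class OnlineNet:
    def __init__(self, n_in, n_neurons, n_classes, state_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        # theta: hyper-parameters of the update rule, fixed for the lifetime
        self.theta = rng.normal(0, 0.1, size=(state_dim, state_dim))
        self.h = rng.normal(0, 0.1, size=(n_neurons, state_dim))   # activation vectors h_{t,i}
        self.w = np.zeros((n_neurons, n_in, state_dim))            # synapse vectors w_{t,ij}
        self.readout = rng.normal(0, 0.1, size=(n_classes, n_neurons))

    def step(self, x):
        """One online update: h_t, w_t, p_t = f_theta(x_t, h_{t-1}, w_{t-1})."""
        drive = np.einsum('ijd,j->id', self.w, x)           # synapses gate the input
        self.h = np.tanh(drive + self.h @ self.theta)       # update activation state
        self.w += 0.1 * np.einsum('id,j->ijd', self.h, x)   # local Hebbian-style weight update
        logits = self.readout @ self.h[:, 0]                # read out one state channel
        p = np.exp(logits - logits.max())
        return p / p.sum()                                  # p_t: next-character distribution

net = OnlineNet(n_in=3, n_neurons=5, n_classes=4)
p = net.step(np.ones(3))
```

Note that there is no back-propagation anywhere in `step`: learning, if it occurs, must be carried by the weight-state update itself, which is exactly what the end-to-end search is meant to discover.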
Finding online rules might allow for easier hardware implementation, especially in neural hardware architectures (Modha, 2017; Davies et al., 2018). Developing the ability to search for update rules might help us discover how brains compute, for example by implementing what we know about the computational structure and searching for what we don't. Finally, in artificial life, entities propagate into the future often not based on an objective but simply on whether they find means of making descendants (and whether those descendants make more descendants, and so on) (Ray, 1991; Aguilar et al., 2014; Soros & Stanley, 2014; Gregor & Besse, 2021). Being able to evolve learning algorithms without an objective might provide a path to intelligent artificial life.

1.1 RELATION TO PREVIOUS WORKS

Evolving network weights. There are many works that directly evolve network weights, for example (Wierstra et al., 2014; Salimans et al., 2017). In Stanley & Miikkulainen (2002), the weights are parameterized by compositional pattern producing networks (CPPNs), and Risi & Stanley (2010) consider an adaptive version that updates the weights based on pre- and post-synaptic activations. While in one sense this is similar to our approach in that the weights are updated, it is also very different, with a very different parameterization (CPPNs and no hidden network units) and problems (evolving the system to solve a T-maze task), and without a focus on pure end-to-end learning, as the parameters are also evolved for the task.

End-to-end learning and meta-learning. A range of related works explore end-to-end learning and meta-learning, but do not strictly operate in the local update rules setting. Furthermore, many of these works learn the full weights of the network instead of a smaller number of update rule hyper-parameters.
Schmidhuber (1992) considers an end-to-end setting, parameterizing updates of a (fast) network by another (slow) network that outputs vectors used to compute Hebbian updates to parameters. AutoML Zero (Real et al., 2020) evolves a sequence of operations, such as multiplications and non-linearities, that implement a machine learning algorithm (including learning). A series of works (Ravi & Larochelle, 2016; Metz et al., 2019; Wichrowska et al., 2017; Bello et al., 2017; Li & Malik, 2017; Lv et al., 2017) focus on learning generalisations of gradient-descent optimisers that utilise back-propagated gradient information to update weights. Although these works optimise the parameters of the update rule over multiple update steps, the tasks themselves are typically feed-forward in nature, such as image classification. Some works (Ravi & Larochelle, 2016) explicitly consider few-shot learning, which is related to online learning, although task-relevant information is preserved across tasks in the full network weights. Miconi (2016) and Miconi et al. (2018) consider recurrent neural networks but introduce per-weight state variables that are updated in a Hebbian fashion, with the parameters of the updates trained by back-propagation along with a standard set of weights. This is capable of learning fast weight adaptations.

Alternative network parameterizations. Ha et al. (2016) consider a recurrent network with weights parameterized by a smaller set of hyper-weights, trained by standard back-propagation over short periods. Because the number of hyper-weights is still relatively large, the system still relies primarily on back-propagation to encode the structure of the current data sequence, rather than on a learning algorithm.
Learning local update rules end-to-end. A number of works are more closely related to our objective of end-to-end learning of local update rules (Bengio et al., 1992; Orchard & Wang, 2016; Gu et al., 2019; Munkhdalai et al., 2019; Bertens & Lee, 2019; Gregor, 2020; Kirsch & Schmidhuber, 2020). For example, Bengio et al. (1992) parameterize scalar weight updates as a linear combination of terms derived from local scalar activations and previous weights, and search for these parameters using genetic algorithms, stochastic gradient descent, or simulated annealing applied to toy problems. In Orchard & Wang (2016), a population of agents is evolved for the task of foraging in a 2D world. However, both the initial weights of the controller network and the synaptic update network are evolved, in principle putting good policies in the initial weights (though they show that updates improve performance). Bertens & Lee (2019) and Gregor (2020) introduce the idea of using vectors to represent both the activation and weight states, with updates implemented by LSTMs on T-maze tasks in the former and by MLPs on sequence memorization in the latter. A key drawback of all these works is that the networks used are tiny, usually fewer than ten neurons. Gregor (2020) attempts to use larger hidden layers but finds that the network does all its computation in the input layer. Kirsch & Schmidhuber (2020) consider more general network parameterizations, using LSTM-based local updates and scaling to a larger number of neurons. However, they only consider problems such as MNIST classification that do not require an algorithm to learn from long-range time dependencies: because the same classes often appear close to one another in a random sequence, there is a signal over just a few steps (longer-range dependencies might be needed for good performance; the reported performance, however, is low).
The main contribution of this paper is a parameterization of update rules, and a method of search, that scales to a large number of neurons (we tested up to a thousand hidden units) and weights (a million) and that can learn recurrent network training over long time spans (thousands of steps).

2 TASKS.

We train the networks on two tasks, both defined on a sequence of characters from a text. We use the pg19 data-set (Rae et al., 2019), containing a large collection of books, as our text source. In the first task, we consider sequences of length N taken at random points from the text. N is of the order of ten thousand in our experiments, but in principle it can be arbitrarily long, and ideally we would use the entire data-set; this is just the scale we have managed to achieve so far. At each iteration, we initialize a neural network with random weights and zero activations, run it through the sequence online as in eq. (1), and measure the total negative log-likelihood:

L = − Σ_{t=1}^{N} log p(xt | ht−1)    (2)

We are looking for a set of hyper-parameters θ, parameterising the update rule f, that minimizes the loss L. The resulting θ should encode activation and weight dynamics that learn a model of the sequence while running through it online. In the second task, we play such a randomly sampled sequence twice and measure the likelihood on the second pass, testing the network's ability to memorize.

3 OPTIMIZATION.

We compare two algorithms for optimizing θ. The first algorithm is (meta-)gradient, where we repeat the following process: we sample a mini-batch of randomly selected sequences from the text, run the network forward and back-propagate (both through the whole batched sequence), and take a stochastic gradient optimizer step; we use Adam (Kingma & Ba, 2014) in the experiments. The second technique we employ is natural evolution strategies (NES) (Wierstra et al., 2014; Salimans et al., 2017).
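The lifetime objective in eq. (2) amounts to a simple online accumulation loop. In the sketch below, `uniform_step` is a self-contained placeholder standing in for the actual network update; all names are illustrative.

```python
import numpy as np

def lifetime_loss(sequence, step, h0):
    """L = -sum_t log p(x_t | h_{t-1}), accumulated while running online."""
    h, loss = h0, 0.0
    for x_t in sequence:
        p, h = step(x_t, h)       # predict x_t from the previous state, then advance
        loss -= np.log(p[x_t])
    return loss

def uniform_step(x_t, h):
    # placeholder predictor: a fixed uniform distribution over 27 characters
    return np.full(27, 1.0 / 27), h

seq = [0, 1, 2, 3]
L = lifetime_loss(seq, uniform_step, h0=None)
# a uniform model pays log(27) nats per character
assert abs(L - len(seq) * np.log(27)) < 1e-9
```

The key design point is that loss is scored before the state absorbs xt, so the network is always predicting the next character rather than copying the current one.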
Instead of optimising only the parameters θ, this method maintains a search distribution over the parameters. We use separable NES (SNES), in which this distribution is a diagonal normal distribution parameterised by per-parameter means and variances. At each iteration, a population of update rules (a population of θ's) is sampled from this distribution, along with a single sequence from the dataset. The networks are run forward to obtain fitnesses, which are then used to estimate improved values of the parameter means and variances. In contrast to Salimans et al. (2017), we find that updating the variances at each iteration is beneficial. Further details of our implementation are given in App. A.3.
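The SNES loop described above can be sketched in a few lines. The toy quadratic fitness stands in for running a network through a sequence, and the population size, learning rates, and rank-based fitness shaping below are illustrative choices, not the paper's settings.

```python
import numpy as np

def snes(fitness, dim, pop=20, iters=300, lr_mu=1.0, lr_sigma=0.2, seed=0):
    """Separable NES: a diagonal-Gaussian search distribution over theta."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)        # per-parameter means / std devs
    for _ in range(iters):
        eps = rng.normal(size=(pop, dim))
        thetas = mu + sigma * eps                  # sampled population of rules
        f = np.array([fitness(t) for t in thetas])
        # rank-based fitness shaping: utilities spread uniformly in [-0.5, 0.5]
        u = np.argsort(np.argsort(f)) / (pop - 1) - 0.5
        mu = mu + lr_mu * sigma * (u @ eps) / pop              # update the means
        sigma = sigma * np.exp(lr_sigma * (u @ (eps**2 - 1)) / pop)  # and variances
    return mu

# stand-in fitness: negative squared distance to an optimum at 3.0 per coordinate
best = snes(lambda t: -np.sum((t - 3.0) ** 2), dim=5)
assert np.sum((best - 3.0) ** 2) < 1.0             # mean converges near the optimum
```

Note that, as in the text, both the means and the (log) standard deviations are updated at every iteration; the multiplicative sigma step keeps the variances positive.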
This paper presents an approach that trains local Hebbian learning rules to allow a neural network to perform reasonably well on two types of problems: sequence memorization and prediction. Two approaches to training these models are compared, based on meta-gradients and evolution strategies. The evolved model is able to perform sequence predictions of length 1000.
SP:425d01f4c8d89c63baf4d029e90812eb73821fee
Evolving Neural Update Rules for Sequence Learning
1 INTRODUCTION.

The field of neural sequence processing has become dominated by neural networks trained by back-propagation, with the best models currently being transformers, and LSTM recurrent networks in the not so recent past. These networks can be used for a range of problems such as label prediction, modelling, and reinforcement learning. The basic computational nature of these networks can be characterized as follows: one can design an arbitrary (differentiable) forward computation that updates part of the network (the activations), and then one executes a specific algorithm to update another set of parameters (the weights, using back-propagation and variants of stochastic gradient descent). This has the advantage of freedom to flexibly design the forward computation. However, the hand-coded update algorithm is fixed, and may not be the most effective means of achieving a given training objective. There has been interest, as we review below, in directly finding the full computation (both the activation and weight updates). In this paper, we aim to push this approach further by finding update rules that work better and that scale, by comparing different functional forms and methods for optimizing the search for these rules. We pursue what we call end-to-end search for update rules. A good way to explain this is to consider the analogous process in nature. Roughly speaking, the brain of an organism is produced from the information of its parents, "run" for the lifetime of the organism, and selected based on how it, as well as its descendants, performs (for example, by continuing to make descendants). Over time, this process developed neural networks that can learn, adapt quickly, and possess whichever other abilities were needed for the success of the organisms.
This paper aims at using evolution strategies instead of back-propagation for evolving the parameters of weight and activation updates for an online-learning sequence model, specifically a next-character prediction model. Each neuron activation is represented by a vector, where each vector element has a different purpose, such as matrix multiplication for forward and multiplication with a transposed matrix for recurrent connections. LSTM, MLP, MLP with self attention/gating (PMLP), and Hebb rule were considered for activation and weight updates. Experiments were conducted on a next-character prediction task on long sequences.
The authors design and apply recurrent neural network models with "fast weights" to sequence compression problems. The "slow" parameters that determine how the "fast" weights change are determined by gradient descent or evolution strategies. Experimentally, the latter method appears to perform better for the architectures designed by the authors. The paper focuses on a long-sequence working memory problem and additionally presents some results on language modeling.
SP:425d01f4c8d89c63baf4d029e90812eb73821fee
Molecular Graph Representation Learning via Heterogeneous Motif Graph Construction
1 INTRODUCTION. Graph neural networks (GNNs) have proven to effectively solve various challenging tasks in graph embedding fields, such as node classification (Kipf & Welling, 2016), graph classification (Xu et al., 2018), and link prediction (Schlichtkrull et al., 2018), and have been extensively applied in social networks, molecular property prediction, natural language processing, and other fields. Compared with hand-crafted features in the molecular property prediction field, GNNs map a molecular graph into a low-dimensional Euclidean space using the topological information among the nodes in the graph (Scarselli et al., 2008). Most existing GNNs use the basic molecular graph topology to obtain structural information through neighborhood feature aggregation and pooling methods (Kipf & Welling, 2016; Ying et al., 2018; Gao & Ji, 2019). However, these methods fail to consider connections among molecular graphs, specifically the sharing of motif patterns across molecular graphs. One of the critical differences between molecular graphs and other graph structures, such as social network graphs and citation graphs, is that motifs, which can be seen as common sub-graphs in molecular graphs, have special meanings. For example, an edge in a molecule represents a bond, and a cycle represents a ring. One ground truth that has been widely used in the explanation of GNNs is that carbon rings and NO2 groups tend to be mutagenic (Debnath et al., 1991). Thus, motifs deserve more attention when designing GNNs for motif-level feature representation learning. To this end, we propose a novel method to learn motif-level feature embeddings for molecular graphs. We first extract motifs from molecular graphs and build a motif vocabulary containing all these motifs. Then we construct a heterogeneous motif graph containing all motif nodes and molecular nodes.
We can apply GNNs to learn motif-level representations for each molecular graph based on the heterogeneous motif graph. The message passing scheme in a heterogeneous motif graph enables interaction between motifs and molecules, which helps exchange information between molecular graphs. The experimental results show that the learned motif-level embedding can dramatically improve the representation of a molecule, and our model can significantly outperform other state-of-the-art GNN models on a variety of graph classification datasets. 2 HETEROGENEOUS MOTIF GRAPH NEURAL NETWORKS. In this section, we propose a novel method to construct a motif-based heterogeneous graph, which can advance motif-based feature representation learning on molecular graphs. Then, we use two separate graph neural networks to learn atom-level and motif-level graph feature representations, respectively. 2.1 MOTIF VOCABULARY OF MOLECULAR GRAPHS. In molecular graphs, motifs are sub-graphs that appear repeatedly and are statistically significant. Specific to biochemical molecular graphs, motifs can be bonds and rings: an edge in the graph represents a bond, and a cycle represents a ring. Thus, we can construct a molecule from sub-graphs, or motifs, out of a motif vocabulary. To represent a molecule by motifs, we first build a motif vocabulary that contains valid sub-graphs from the given molecular graphs. To build the motif vocabulary, we search all molecular graphs and extract important sub-graphs. In this work, we only keep bonds and rings to ensure a manageable vocabulary size. However, the algorithm can be easily extended to include different motif patterns. We then remove all duplicate bonds and rings. Some motifs may appear in most molecules and carry little information for molecule representation. To reduce the impact of these common motifs, we employ the term frequency-inverse document frequency (TF-IDF) algorithm (Ramos et al., 2003).
In particular, term frequency measures the frequency of a motif in a molecule, and inverse document frequency is based on the number of molecules containing the motif. We average the TF-IDFs of the molecules that contain a motif to obtain the TF-IDF value of the motif. By sorting the vocabulary by TF-IDF, we keep the most essential motifs as our final vocabulary. Figure 1 illustrates the procedure of building the motif vocabulary. 2.2 HETEROGENEOUS MOTIF GRAPH CONSTRUCTION. Based on the motif vocabulary, we build a heterogeneous graph that contains motif nodes and molecular nodes. In this graph, each motif node represents a motif in the vocabulary, and each molecular node is a molecule. Then, we build two types of edges between these nodes: motif-molecule edges and motif-motif edges. We add motif-molecule edges between a molecule node and the motif nodes that represent its motifs. We add a motif-motif edge between two motifs if they share at least one atom in any molecule. In this way, we can build a heterogeneous graph containing all motifs in the vocabulary and all molecules, connected by the two kinds of edges. Appendix A contains detailed pseudocode for constructing a heterogeneous motif graph. One thing to notice is that different motifs have different impacts. We assign different weights to edges based on their ending nodes. In particular, for edges between a motif node and a molecule node, we use the TF-IDF value of the motif as the weight. For edges between two motif nodes, we use point-wise mutual information (PMI), a popular co-occurrence correlation measure in information theory and statistics (Yao et al., 2019).
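The vocabulary-ranking step (average each motif's TF-IDF over the molecules that contain it, then sort) can be sketched as follows. Motif extraction itself (finding bonds and rings) is assumed done upstream; `mol_motifs` is simply one motif list per molecule, and the smoothed IDF matches eq. (2) below.

```python
import math
from collections import Counter

def build_motif_vocabulary(mol_motifs, keep=None):
    """Rank motifs by averaged TF-IDF (Sec. 2.1) and return the
    vocabulary, optionally truncated to the `keep` most essential."""
    M = len(mol_motifs)
    docs = [Counter(m) for m in mol_motifs]       # per-molecule motif counts
    df = Counter()                                # N(i): molecules containing motif i
    for d in docs:
        df.update(d.keys())
    scores = {}
    for motif, n in df.items():
        idf = math.log((1 + M) / (1 + n)) + 1     # smoothed IDF, as in eq. (2)
        tfidfs = [d[motif] * idf for d in docs if motif in d]
        scores[motif] = sum(tfidfs) / len(tfidfs) # average over containing molecules
    vocab = sorted(scores, key=scores.get, reverse=True)
    return vocab[:keep] if keep else vocab
```

Rare motifs get higher IDF and therefore float to the top; ubiquitous motifs, which carry little discriminative information, sink.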
Formally, the edge weight A_{ij} between node i and node j is computed as

A_{ij} = \begin{cases} \mathrm{PMI}_{ij}, & \text{if } i \text{ and } j \text{ are motifs} \\ \text{TF-IDF}_{ij}, & \text{if } i \text{ or } j \text{ is a motif} \\ 0, & \text{otherwise.} \end{cases} \quad (1)

The TF-IDF value of an edge between a motif node i and a molecular node j is computed as

\text{TF-IDF}_{ij} = C_j^{(i)} \left( \log \frac{1 + M}{1 + N(i)} + 1 \right), \quad (2)

where C_j^{(i)} is the number of times that motif i appears in molecule j, M is the number of molecules, and N(i) is the number of molecules containing motif i. The PMI value of an edge between two motif nodes is computed as

\mathrm{PMI}_{ij} = \log \frac{p(i, j)}{p(i)\, p(j)}, \quad (3)

where p(i, j) is the probability that a molecule contains both motif i and motif j, p(i) is the probability that a molecule contains motif i, and p(j) is the probability that a molecule contains motif j. We use the following formulas to compute these probabilities:

p(i, j) = \frac{N(i, j)}{M}, \quad p(i) = \frac{N(i)}{M}, \quad p(j) = \frac{N(j)}{M}, \quad (4)

where N(i, j) is the number of molecules that contain both motif i and motif j. Figure 2 provides an example of heterogeneous motif graph construction. Note that we assign zero weight to motif node pairs with negative PMI values. 2.3 HETEROGENEOUS MOTIF GRAPH NEURAL NETWORKS. In this part, we build an HM-GNN to learn both atom-level and motif-level graph feature representations. In Section 2.2, we constructed a heterogeneous motif graph that contains all motif nodes and molecular nodes. Here, we first initialize features for each motif node and molecular node. We use one-hot encoding to generate features for motif nodes. In particular, each motif node i has a feature vector X_i of length |V|, where V is the motif vocabulary obtained in Section 2.1. Given the unique index i of the motif in the vocabulary, we set X_i[i] = 1 and all other positions to 0. For molecular nodes, we use a bag-of-words method to populate their feature vectors: we consider each motif as a word and each molecule as a document.
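The edge-weight computation of eqs. (1)-(4) can be sketched as below. One simplification: motif-motif edges are taken between motifs that co-occur in a molecule, standing in for the shared-atom rule of Section 2.2, which needs atom-level structure not available in this toy representation (`mol_motifs` maps a molecule id to its motif list).

```python
import math

def edge_weights(mol_motifs):
    """Heterogeneous-graph edge weights: TF-IDF (eq. 2) for motif-molecule
    edges and clamped PMI (eqs. 3-4) for motif-motif edges."""
    M = len(mol_motifs)
    N = {}                                        # N(i): molecules containing motif i
    for motifs in mol_motifs.values():
        for m in set(motifs):
            N[m] = N.get(m, 0) + 1
    w = {}
    for mol, motifs in mol_motifs.items():
        for m in set(motifs):
            c = motifs.count(m)                   # C_j^(i)
            w[(m, mol)] = c * (math.log((1 + M) / (1 + N[m])) + 1)   # eq. (2)
        uniq = sorted(set(motifs))
        for k, a in enumerate(uniq):
            for b in uniq[k + 1:]:
                if (a, b) in w:
                    continue
                nij = sum(1 for ms in mol_motifs.values() if a in ms and b in ms)
                pmi = math.log(nij * M / (N[a] * N[b]))              # eqs. (3)-(4)
                w[(a, b)] = max(pmi, 0.0)         # negative PMI -> zero weight
    return w
```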
By applying the bag-of-words model, we obtain feature vectors for molecular nodes. Based on this heterogeneous graph, a heterogeneous graph neural network can be applied to learn the motif-level feature embedding of each molecule in the graph. At the same time, each molecule can be easily converted into a graph by using atoms as nodes and bonds as edges. The original molecular graph topology and node features contain atom-level graph information, which can supplement the motif-level information. Thus, we employ another graph neural network to learn the atom-level feature embedding. Finally, we concatenate the feature embeddings from the two graph neural networks and feed them into a multi-layer perceptron (MLP) for prediction. Figure 3 shows an example of our HM-GNN model. 2.4 MULTI-TASK LEARNING VIA HETEROGENEOUS MOTIF GRAPH. This part shows that our heterogeneous motif graph can help graph deep learning models on small molecular datasets via multi-task learning. It is well known that deep learning methods require a significant amount of data for training. However, most molecular datasets are relatively small, and graph deep learning methods can easily over-fit on them. Multi-task learning (Caruana, 1997) has been shown to effectively reduce the risk of over-fitting and improve the generalization performance of all tasks (Zhang & Yang, 2017). It can effectively increase the size of the training data and decrease the influence of data-dependent noise, which leads to a more robust model. However, it is hard to directly apply multi-task learning to several molecular datasets due to the lack of explicit connections among different datasets. Based on our heterogeneous motif graph, we can easily connect a set of molecular datasets and form a multi-task learning paradigm. Given N molecular datasets D_1, ..., D_N, each dataset D_i contains n_i molecules.
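The feature initialization of Section 2.3 (one-hot vectors for motif nodes, bag-of-words counts over the vocabulary for molecular nodes) can be sketched as follows; plain lists are used instead of tensors for clarity.

```python
def init_features(vocab, mol_motifs):
    """Initial node features: identity (one-hot) rows for the |V| motif
    nodes, and bag-of-words motif counts for each molecule."""
    idx = {m: i for i, m in enumerate(vocab)}
    motif_feats = [[1 if j == i else 0 for j in range(len(vocab))]
                   for i in range(len(vocab))]            # X_i[i] = 1
    mol_feats = []
    for motifs in mol_motifs:
        row = [0] * len(vocab)
        for m in motifs:
            if m in idx:
                row[idx[m]] += 1                          # motif = word, molecule = document
        mol_feats.append(row)
    return motif_feats, mol_feats
```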
We first construct a motif vocabulary V that contains motifs from the N molecular datasets. Here, a motif only needs to be shared by some of the datasets, not all of them. Then, we build a heterogeneous motif graph that contains the motifs in the vocabulary and the molecules from all datasets. We employ our HM-GNN to learn both graph-level and motif-level feature representations for each molecule based on this graph. The resulting features of each dataset are fed into a separate MLP for prediction. In this process, the motif nodes can be considered as connectors linking molecules from different datasets or tasks. Under the multi-task training paradigm, our heterogeneous motif graph can improve feature representation learning on all datasets. 2.5 EFFICIENT TRAINING VIA EDGE SAMPLING. In Section 2.2, we constructed a heterogeneous motif graph that contains all motif nodes and molecular nodes. As the number of molecular nodes increases, computational resources can become an issue. To address this, we propose to use an edge sampler to reduce the size of the heterogeneous motif graph. Due to the special structure of our heterogeneous motif graph, which has two kinds of nodes and two kinds of edges, we can efficiently generate a computational subgraph by using edge types. We now show how sampling edges can save computational resources. Most GNNs follow a neighborhood aggregation learning scheme. Formally, the ℓ-th layer of a GNN can be represented by

x_i^{\ell+1} = f\left( x_i^{\ell}, \, \varphi\left( \{ (e_{ji}, x_j^{\ell}) \mid j \in N(i) \} \right) \right), \quad (5)

where x_i^{ℓ+1} is the new feature vector of node i, f is a function that combines the ℓ-th layer's features with the aggregated features, φ is the function that aggregates the feature vectors of all neighbors of node i, e_{ji} is the weight of edge (j, i), and N(i) is the set of node i's neighbors. This equation shows that the time and space complexity are both O(|E|), where |E| is the number of edges in the graph.
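One layer in the form of eq. (5) can be sketched as below; the default choices of f (residual add) and φ (edge-weighted sum) are illustrative stand-ins, since eq. (5) deliberately leaves both functions abstract.

```python
def gnn_layer(x, edges, f=None, agg=None):
    """One message-passing layer, eq. (5):
    x_i^{l+1} = f(x_i^l, phi({(e_ji, x_j^l) | j in N(i)})).
    `x` is a list of feature vectors; `edges` maps i -> [(j, e_ji), ...]."""
    agg = agg or (lambda msgs: [sum(e * xj[k] for e, xj in msgs)
                                for k in range(len(x[0]))])   # phi: weighted sum
    f = f or (lambda xi, m: [a + b for a, b in zip(xi, m)])   # f: residual add
    out = []
    for i in range(len(x)):
        msgs = [(e, x[j]) for j, e in edges.get(i, [])]
        out.append(f(x[i], agg(msgs)) if msgs else list(x[i]))
    return out
```

Each node touches each incident edge exactly once per layer, which is where the O(|E|) time and space cost in the text comes from.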
This means we can reduce the use of computational resources by removing some edges from the graph. Thus, we employ an edge sampler that samples edges from the graph, with the rule that motif-molecule edges are prioritized. To sample edges, we first randomly select some molecular nodes as "starting" nodes. We run a breadth-first algorithm to conduct a hop-by-hop exploration of the heterogeneous motif graph starting from these nodes. In each hop, we randomly sample a fixed number of edges based on edge type. Note that the first-hop neighbors of each molecular node are motif nodes, which play essential roles in our heterogeneous motif graph. Thus, we retain all first-hop edges to ensure effective learning of feature representations for motif nodes. Starting from the second hop, we only sample motif-motif edges to retain as much motif information as possible. Figure 4 shows an example of sampling a sub-graph for a 3-layer HM-GNN. Appendix B contains the complete sampling rules and pseudocode.
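The hop-by-hop sampling rule above can be sketched as follows: keep every first-hop motif-molecule edge of the starting molecules, then from the second hop onward sample at most k motif-motif edges per frontier node. This is a sketch of the stated rule only; the paper's complete procedure is in its Appendix B.

```python
import random

def sample_subgraph(mol2motifs, motif_edges, starts, hops, k, seed=0):
    """Edge-sampled computational subgraph (Sec. 2.5).  `mol2motifs`
    maps a molecule to its motifs; `motif_edges` maps a motif to its
    motif neighbors.  Returns the kept edge set."""
    rng = random.Random(seed)
    kept, frontier = set(), set()
    for mol in starts:                         # hop 1: retain ALL motif-molecule edges
        for motif in mol2motifs[mol]:
            kept.add((mol, motif))
            frontier.add(motif)
    for _ in range(hops - 1):                  # later hops: motif-motif edges only
        nxt = set()
        for motif in frontier:
            nbrs = motif_edges.get(motif, [])
            for nb in rng.sample(nbrs, min(k, len(nbrs))):
                kept.add(tuple(sorted((motif, nb))))
                nxt.add(nb)
        frontier = nxt
    return kept
```

For an L-layer HM-GNN, `hops` would be set to L so every kept node has the receptive field the model needs.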
This paper proposes the Heterogeneous Motif Graph Neural Network (HM-GNN), which is based on a motif-level graph representation that takes into account commonly occurring motifs, such as rings, in molecules. The authors apply this to multi-task settings and obtain good experimental performance. They also propose an edge sampling scheme to reduce the computational cost of training.
SP:35f5748fa58fc716295b58cb58ece4c20d597fc8
The paper proposes a novel molecular graph representation learning method based on constructing a heterogeneous motif graph. In this graph, molecules and motifs are both considered as nodes, thus forming a heterogeneous graph. Motifs are extracted manually, and important motifs are selected by TF-IDF. Moreover, the paper demonstrates that a multi-task learning framework is beneficial, thanks to sharing motifs across different datasets.
SP:35f5748fa58fc716295b58cb58ece4c20d597fc8
Molecular Graph Representation Learning via Heterogeneous Motif Graph Construction
1 INTRODUCTION . Graph neural networks ( GNNs ) have been proved to effectively solve various challenging tasks in graph embedding fields , such as node classification ( Kipf & Welling , 2016 ) , graph classification ( Xu et al. , 2018 ) , and link prediction ( Schlichtkrull et al. , 2018 ) , which have been extensively applied in social networks , molecular properties prediction , natural language processing , and other fields . Compared with hand-crafted features in the molecular properties prediction field , GNNs map a molecular graph into a dimensional Euclidean space using the topological information among the nodes in the graph ( Scarselli et al. , 2008 ) . Most existing GNNs use the basic molecular graphs topology to obtain structural information through neighborhood feature aggregation and pooling methods ( Kipf & Welling , 2016 ; Ying et al. , 2018 ; Gao & Ji , 2019 ) . However , these methods fail to consider connections among molecular graphs , specifically the sharing of motif patterns in the molecular graph . One of the critical differences between molecular graphs and other graph structures such as social network graphs and citation graphs is that motifs , which can be seen as common sub-graphs in molecular graphs have special meanings . For example , an edge in a molecule represents a bond , and a cycle represents a ring . One ground truth that has been widely used in explanation of GNNs is that carbon rings and NO2 groups tend to be mutagenic ( Debnath et al. , 1991 ) . Thus , motifs deserve more attention when designing GNNs for motif-level feature representation learning . To this end , we propose a novel method to learn motif-level feature embedding for molecular graphs . We first extract motifs from molecular graphs and build a motif vocabulary containing all these motifs . Then we construct a heterogeneous motif graph containing all motif nodes and molecular nodes . 
We can apply GNNs to learn motif-level representations for each molecular graph based on the heterogeneous motif graph . The message passing scheme in a heterogeneous motif graph enables interaction between motifs and molecules , which helps exchange information between molecular graphs . The experimental results show that the learned motif-level embedding can dramatically improve the representation of a molecule , and our model can significantly outperform other stateof-the-art GNN models on a variety of graph classification datasets . 2 HETEROGENEOUS MOTIF GRAPH NEURAL NETWORKS . In this section , we propose a novel method to construct a motif-based heterogeneous graph , which can advance motif-based feature representation learning on molecular graphs . Then , we use two separate graph neural networks to learn atom-level and motif-level graph feature representations , respectively . 2.1 MOTIF VOCABULARY OF MOLECULAR GRAPHS . In molecular graphs , motifs are sub-graphs that appear repeatedly and are statistically significant . Specific to biochemical molecule graphs , motifs can be bonds and rings . Analogously , an edge in the graph represents a bond , and a cycle represents a ring . Thus , we can construct a molecule from sub-graphs or motifs out of a motif vocabulary . To represent a molecule by motifs , we first build a motif vocabulary that contains valid sub-graphs from given molecular graphs . To build the motif vocabulary , we search all molecular graphs and extract important sub-graphs . In this work , we only keep bonds and rings to ensure a manageable vocabulary size . However , the algorithm can be easily extended to include different motif patterns . We then remove all duplicate bonds and rings . Some motifs may appear in most of molecules , which carry little information for molecule representation . To reduce the impact of these common motifs , we employ the term frequency–inverse document frequency ( TF-IDF ) algorithm ( Ramos et al. , 2003 ) . 
In particular , term frequency measures the frequency of a motif in a molecule , and inverse document frequency refers to the number of molecules containing a motif . We average the TF-IDFs of those molecules that contain a motif as the TF-IDF value of the motif . By sorting the vocabulary by TF-IDF , we keep the most essential motifs as our final vocabulary . Figure 1 illustrates the procedure of building motifs vocabulary . 2.2 HETEROGENEOUS MOTIF GRAPH CONSTRUCTION . Based on the motif vocabulary , we build a heterogeneous graph that contains motif nodes and molecular nodes . In this graph , each motif node represents a motif in the vocabulary , and each molecular node is a molecule . Then , we build two types of edges between these nodes ; those are motifmolecule edges and motif-motif edges . We add motif-molecule edges between a molecule node and motif nodes that represent its motifs . We add a motif-motif edge between two motifs if they share at least one atom in any molecule . In this way , we can build a heterogeneous graph containing all motifs in the vocabulary and all molecules connected by two kinds of edges . Appendix A contains detailed pseudocode of constructing a Heterogeneous Motif Graph . One thing to notice is that different motifs have different impacts . We assign different weights to edges based on their ending nodes . In particular , for edges between a motif node and a molecule node , we use the TF-IDF value of the motif as the weight . For the edges between two motif nodes , we use the co-occurrence information point-wise mutual information ( PMI ) , which is a popular correlation measure in information theory and statistics ( Yao et al. , 2019 ) . 
Formally, the edge weight $A_{ij}$ between node $i$ and node $j$ is computed as
$$A_{ij} = \begin{cases} \mathrm{PMI}_{ij}, & \text{if } i, j \text{ are motifs} \\ \text{TF-IDF}_{ij}, & \text{if } i \text{ or } j \text{ is a motif} \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$
The TF-IDF value of an edge between a motif node $i$ and a molecular node $j$ is computed as
$$\text{TF-IDF}_{ij} = C_j^{(i)} \left( \log\frac{1+M}{1+N(i)} + 1 \right), \quad (2)$$
where $C_j^{(i)}$ is the number of times that motif $i$ appears in molecule $j$, $M$ is the number of molecules, and $N(i)$ is the number of molecules containing motif $i$. The PMI value of an edge between two motif nodes is computed as
$$\mathrm{PMI}_{ij} = \log\frac{p(i,j)}{p(i)\,p(j)}, \quad (3)$$
where $p(i,j)$ is the probability that a molecule contains both motif $i$ and motif $j$, $p(i)$ is the probability that a molecule contains motif $i$, and $p(j)$ is the probability that a molecule contains motif $j$. We use the following formulas to compute these probabilities:
$$p(i,j) = \frac{N(i,j)}{M}, \quad p(i) = \frac{N(i)}{M}, \quad p(j) = \frac{N(j)}{M}, \quad (4)$$
where $N(i,j)$ is the number of molecules that contain both motif $i$ and motif $j$. Figure 2 provides an example of heterogeneous motif graph construction. Note that we assign zero weight to motif node pairs with negative PMI values. 2.3 HETEROGENEOUS MOTIF GRAPH NEURAL NETWORKS . In this part, we build a HM-GNN to learn both atom-level and motif-level graph feature representations. In Section 2.2, we construct a heterogeneous motif graph that contains all motif nodes and molecular nodes. Here, we first initialize features for each motif node and molecular node. We use one-hot encoding to generate features for motif nodes. In particular, each motif node $i$ has a feature vector $X_i$ of length $|V|$, where $V$ is the motif vocabulary obtained in Section 2.1. Given the unique index $i$ of the motif in the vocabulary, we set $X_i[i] = 1$ and all other positions to 0. For molecular nodes, we use a bag-of-words method to populate their feature vectors, considering each motif as a word and each molecule as a document.
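The edge-weight formulas in Eqs. (1)-(4) translate directly into code. A minimal sketch, assuming molecules are given as (possibly repeating) lists of motif identifiers:

```python
import math

def edge_weights(molecule_motifs):
    """Heterogeneous-graph edge weights following Eqs. (1)-(4):
    TF-IDF for motif-molecule edges and (clipped) PMI for
    motif-motif edges. `molecule_motifs[j]` lists the motifs of
    molecule j, with repeats."""
    M = len(molecule_motifs)
    N = {}    # N(i): number of molecules containing motif i
    N2 = {}   # N(i, j): number of molecules containing both i and j
    for mol in molecule_motifs:
        present = sorted(set(mol))
        for i in present:
            N[i] = N.get(i, 0) + 1
        for a in range(len(present)):
            for b in range(a + 1, len(present)):
                key = (present[a], present[b])
                N2[key] = N2.get(key, 0) + 1

    # Motif-molecule edges, Eq. (2): C_j(i) * (log((1+M)/(1+N(i))) + 1)
    tfidf = {}
    for j, mol in enumerate(molecule_motifs):
        for i in set(mol):
            c = mol.count(i)
            tfidf[(i, j)] = c * (math.log((1 + M) / (1 + N[i])) + 1)

    # Motif-motif edges, Eqs. (3)-(4), with negative PMI set to zero
    # as stated in the text.
    pmi = {}
    for (i, k), nik in N2.items():
        val = math.log(nik * M / (N[i] * N[k]))  # log(p(i,k)/(p(i)p(k)))
        pmi[(i, k)] = max(val, 0.0)
    return tfidf, pmi
```

Motif pairs that co-occur less often than independence would predict get negative PMI and therefore a zero-weight (i.e., absent) edge.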
By applying the bag-of-words model, we obtain feature vectors for molecular nodes. Based on this heterogeneous graph, a heterogeneous graph neural network can be applied to learn the motif-level feature embedding of each molecule in the graph. At the same time, each molecule can easily be converted into a graph by using atoms as nodes and bonds as edges. The original molecular graph topology and node features contain atom-level graph information, which can supplement the motif-level information. Thus, we employ another graph neural network to learn the atom-level feature embedding. Finally, we concatenate the feature embeddings from the two graph neural networks and feed them into a multi-layer perceptron (MLP) for prediction. Figure 3 shows an example of our HM-GNN model. 2.4 MULTI-TASK LEARNING VIA HETEROGENEOUS MOTIF GRAPH . In this part, we show that our heterogeneous motif graph can help graph deep learning models on small molecular datasets via multi-task learning. It is well known that deep learning methods require a significant amount of data for training. However, most molecular datasets are relatively small, and graph deep learning methods can easily over-fit on them. Multi-task learning (Caruana, 1997) has been shown to effectively reduce the risk of over-fitting and improve the generalization performance of all tasks (Zhang & Yang, 2017). It can effectively increase the size of the training data and decrease the influence of data-dependent noise, leading to a more robust model. However, it is hard to directly apply multi-task learning to several molecular datasets due to the lack of explicit connections among them. Based on our heterogeneous motif graph, we can easily connect a set of molecular datasets and form a multi-task learning paradigm. Given $N$ molecular datasets $D_1, \dots, D_N$, each dataset $D_i$ contains $n_i$ molecules.
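The node-feature initialization described in Section 2.3 (one-hot vectors for motif nodes, bag-of-words counts for molecule nodes) can be sketched as follows; plain Python lists stand in for tensors, and the motif-list representation of molecules is assumed for illustration.

```python
def node_features(vocab, molecule_motifs):
    """Initial features for the heterogeneous motif graph:
    one-hot vectors of length |V| for motif nodes, and
    bag-of-words motif counts for molecule nodes."""
    index = {m: i for i, m in enumerate(vocab)}
    V = len(vocab)
    # Motif node i: X_i[i] = 1, all other positions 0.
    motif_feats = [[1 if j == i else 0 for j in range(V)] for i in range(V)]
    # Molecule node: count of each vocabulary motif it contains
    # (motifs as "words", the molecule as the "document").
    mol_feats = []
    for mol in molecule_motifs:
        vec = [0] * V
        for m in mol:
            if m in index:          # out-of-vocabulary motifs are dropped
                vec[index[m]] += 1
        mol_feats.append(vec)
    return motif_feats, mol_feats
```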
We first construct a motif vocabulary $V$ that contains motifs from the $N$ molecular datasets. Here, a motif only needs to be shared by some of the datasets, not all of them. Then, we build a heterogeneous motif graph that contains the motifs in the vocabulary and the molecules from all datasets. We employ our HM-GNN to learn both graph-level and motif-level feature representations for each molecule based on this graph. The resulting features of each dataset are fed into a separate MLP for prediction. In this process, the motif nodes can be considered connectors linking molecules from different datasets or tasks. Under the multi-task training paradigm, our heterogeneous motif graph can improve feature representation learning on all datasets. 2.5 EFFICIENT TRAINING VIA EDGE SAMPLING . In Section 2.2, we construct a heterogeneous motif graph that contains all motif nodes and molecular nodes. As the number of molecular nodes increases, computational resources can become an issue. To address this, we propose an edge sampler that reduces the size of the heterogeneous motif graph. Because our heterogeneous motif graph has two kinds of nodes and two kinds of edges, we can efficiently generate a computational subgraph based on edge type. We now show how sampling edges saves computational resources. Most GNNs follow a neighborhood aggregation learning scheme. Formally, the $\ell$-th layer of a GNN can be represented by
$$x_i^{\ell+1} = f\left( x_i^{\ell}, \phi\left( \{ (e_{ji}, x_j^{\ell}) \mid j \in N(i) \} \right) \right), \quad (5)$$
where $x_i^{\ell+1}$ is the new feature vector of node $i$, $f$ is a function that combines the $\ell$-th layer's features with the aggregated features, $\phi$ is the function that aggregates the feature vectors of all neighbors of node $i$, $e_{ji}$ is the weight of edge $(j, i)$, and $N(i)$ is the set of neighbors of node $i$. This equation shows that the time and space complexity are both $O(|E|)$, where $|E|$ is the number of edges in the graph.
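A minimal instance of Eq. (5) makes the $O(|E|)$ complexity concrete. The paper leaves $f$ and $\phi$ generic; the weighted sum aggregator, additive update, and scalar node features below are assumptions chosen to keep the sketch short.

```python
def gnn_layer(x, edges):
    """One message-passing step over a weighted edge list, in the
    shape of Eq. (5) with phi = weighted sum and f(x_i, m_i) = x_i + m_i.
    A single pass over the edges gives O(|E|) time and space,
    matching the complexity argument in the text.

    x:     list of scalar node features
    edges: list of (j, i, e_ji) tuples, message sent from j to i
    """
    agg = [0.0] * len(x)
    for j, i, w in edges:              # one pass over the edges: O(|E|)
        agg[i] += w * x[j]             # phi: weighted sum of neighbors
    return [xi + mi for xi, mi in zip(x, agg)]  # f: residual update
```

Removing edges from `edges` directly shrinks both the loop and the memory footprint, which is what the edge sampler below exploits.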
This means we can reduce the usage of computational resources by removing some edges from the graph. Thus, we employ an edge sampler that samples edges from the graph, with a sampling rule that prioritizes motif-molecule edges. To sample edges, we first randomly select some molecular nodes as "starting" nodes. We then run a breadth-first algorithm to conduct a hop-by-hop exploration of the heterogeneous motif graph starting from these nodes. In each hop, we randomly sample a fixed number of edges based on edge type. Note that the first-hop neighbors of each molecular node are motif nodes, which play essential roles in our heterogeneous motif graph. Thus, we retain all first-hop edges to ensure effective learning of feature representations for motif nodes. Starting from the second hop, we only sample motif-motif edges to retain as much motif information as possible. Figure 4 shows an example of sampling a sub-graph for a 3-layer HM-GNN. Appendix B contains the complete sampling rules and pseudocode.
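A sketch of the hop-by-hop edge sampler described above. The adjacency-dictionary representation, the per-node sampling budget `k`, and the exact bookkeeping are assumptions; the authoritative rules are in Appendix B.

```python
import random
from collections import deque

def sample_subgraph(adj, edge_type, starts, k, hops):
    """Hop-by-hop edge sampler for the heterogeneous motif graph.
    Keeps every first-hop (motif-molecule) edge of the start nodes,
    then samples at most k motif-motif edges per node in later hops.

    adj:       node -> list of neighbor nodes
    edge_type: (u, v) -> "motif-molecule" or "motif-motif"
    starts:    molecular nodes chosen as starting points
    """
    kept = []
    frontier = deque((s, 0) for s in starts)
    seen = set(starts)
    while frontier:
        u, h = frontier.popleft()
        if h >= hops:
            continue
        if h == 0:
            chosen = adj[u]            # retain all first-hop edges
        else:
            mm = [v for v in adj[u]
                  if edge_type[(u, v)] == "motif-motif"]
            chosen = random.sample(mm, min(k, len(mm)))
        for v in chosen:
            kept.append((u, v))
            if v not in seen:
                seen.add(v)
                frontier.append((v, h + 1))
    return kept
```

For an L-layer HM-GNN, `hops` would be set to L so every retained node has the neighborhood it needs for aggregation.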
This paper proposes learning molecule representations by using motif-level information. The authors construct a heterogeneous graph consisting of molecules and motifs, then learn representations of both using graph neural networks. The learned features are concatenated with the molecule features learned by a traditional atom-level graph neural network and fed into an MLP for property prediction.
End-to-End Balancing for Causal Continuous Treatment-Effect Estimation
We study the problem of observational causal inference with continuous treatments. We focus on the challenge of estimating the causal response curve for infrequently-observed treatment values. We design a new algorithm based on the framework of entropy balancing that learns weights which directly maximize causal inference accuracy using end-to-end optimization. Our weights can be customized for different datasets and causal inference algorithms. We propose a new theory for the consistency of entropy balancing for continuous treatments. Using synthetic and real-world data, we show that our proposed algorithm outperforms entropy balancing in the accuracy of treatment effect estimation. 1 INTRODUCTION . In many applications in business, social, and health sciences, we wish to infer the effect of a continuous treatment, such as drug dosage or administration duration, on a health outcome variable. Often, several confounding factors influence both the treatment and the response variable; therefore, for accurate causal estimation of the treatment in view, we must appropriately account for their potential impact. Unlike binary treatments, causal inference with continuous treatments is largely understudied and far more challenging (Galagate, 2016; Ai et al., 2021). This is primarily because continuous treatments induce uncountably many potential outcomes per unit, only one of which is observed for each unit: a sparse coarsening of the underlying information needed to infer causal effects without uncertainty. Propensity score weighting (Robins et al., 2000; Imai and Van Dyk, 2004), stand-alone or combined with regression-based models to achieve double robustness (Kennedy et al., 2017), has quickly become the state of the art for causal inference.
If the weights, inversely proportional to the conditional distribution of the treatment given the confounders, are correctly modeled, the weighted population will appear to come from a randomized study. However, this approach faces several challenges: (1) The weights only balance the confounders in expectation, not necessarily in the given data (Zubizarreta et al., 2011). (2) The weights can be very large for some units, leading to unstable estimation and uncertain inference. As a possible remedy, entropy balancing (Hainmueller, 2012) estimates the weights such that they balance the confounders subject to a measure of dispersion on the weights to prevent extreme weights. In this work, we note that low-entropy weights do not directly optimize the quality of the subsequent weighted regression, and we introduce an alternative approach that does. We propose End-to-End Balancing (E2B) to improve the accuracy of the weighted regression used for causal inference. E2B uses end-to-end training to estimate the base weights in the entropy balancing framework. The E2B weights are thus customized for different datasets and causal inference algorithms that are based on weighting. Because we do not know the true treatment response function in real data, we propose a new approach to generate synthetic training datasets for end-to-end training. To theoretically analyze end-to-end balancing, we define Generalized Stable Weights (GSW) for causal inference as a generalization of the stable weights proposed by Robins et al. (2000). We prove that weights learned by entropy balancing for continuous treatments, including E2B weights, are unbiased estimators of generalized stable weights. We also show that E2B weights are asymptotically consistent and efficient estimators of the population weights. We perform three sets of experiments to demonstrate the accuracy improvements of E2B.
Two experiments with synthetic data, one with linear and another with non-linear response functions, show that E2B is more accurate than the baseline entropy balancing and inverse propensity score techniques. In the experiments on real-world data, we qualitatively evaluate the average treatment effect function learned by E2B. We also show that the base weights learned by E2B follow our intuition about up-weighting low-frequency treatments. 2 PROBLEM DEFINITION AND RELATED WORK . Problem Statement . Suppose we have the triplet $(x, a, y)$, where $x \in \mathcal{X} \subset \mathbb{R}^r$, $a \in \mathcal{A} \subset \mathbb{R}$, and $y \in \mathbb{R}$ denote the confounders, treatment, and response variables, respectively, from an observational causal study. In our continuous treatment setting (Galagate, 2016, Ch. 1.2.6), we denote potential outcomes as $y(a)$, meaning the value of $y$ after intervening on the treatment and setting its value to $a$. Given an i.i.d. sample of size $n$, $\{(x_i, a_i, y_i)\}_{i=1}^{n}$, our objective is to eliminate the impact of the confounders and identify the average treatment effect function $\mu(a) = \mathbb{E}[y(a)]$, which is also called the response function. We make the two classic assumptions: (1) Strong ignorability: $y(a) \perp\!\!\!\perp a \mid x$ (i.e., no hidden confounders), and (2) Positivity: $0 < P(a \mid x) < 1$. General Causal Inference Literature . The literature on causal inference is vast, and we refer the reader to books for general inquiry (Pearl, 2009; Imbens and Rubin, 2015; Spirtes et al., 2000; Peters et al., 2017). Instead, we focus on reviewing inference techniques for continuous treatments. In particular, we narrow our focus to propensity score weighting approaches (Robins et al., 2000; Imai and Van Dyk, 2004), because they can either be used alone or combined with regression algorithms to create doubly robust algorithms. Causal Inference via Weighting .
A popular approach for causal inference is to create a pseudo-population by weighting data points such that in the pseudo-population the confounders and treatments are independent. Regular regression algorithms can then estimate the causal response curve using the pseudo-population, which resembles data from randomized trials. Throughout this paper, we denote the parameters of the pseudo-population with a tilde mark. Multiple forms of propensity scores have been proposed for continuous treatments (Hirano and Imbens, 2004; Imai and Van Dyk, 2004). The commonly-used stabilized weights (Robins et al., 2000; Zhu et al., 2015) are defined as the ratio of the marginal density to the conditional density of the treatment: $sw = f(a)/f(a \mid x)$. Problems with Propensity Scores . Zubizarreta et al. (2011) list two challenges with propensity scores: (1) The weights only balance the confounders in expectation, not necessarily in the given data. (2) The weights can be very large for some data points, leading to unstable estimation. These challenges are amplified in the continuous setting because computing the stabilized weights requires correctly choosing two models, one for the marginal and one for the conditional distribution of the treatment. Kang et al. (2007) and Smith and Todd (2005) provide multiple pieces of evidence that propensity score methods can lead to large biases in the estimates. While Robins et al. (2007) propose techniques to fix the large-weights problem in the binary treatment examples discussed by Kang et al. (2007), learning more accurate, bounded, and stable weights remains an active research area. Further work has proposed techniques to learn more robust propensity scores for binary treatments (Li et al., 2018; Zhao, 2019); however, the case of continuous treatments has received considerably less attention. Entropy Balancing .
To address the problem of extreme weights, Entropy Balancing (EB) (Hainmueller, 2012) estimates weights such that they balance the confounders subject to a measure of dispersion on the weights that prevents extremely large weights. Other loss functions using different dispersion metrics have been proposed for balancing (Zubizarreta, 2015; Chan et al., 2016). Zhao and Percival (2016) show that entropy balancing is doubly robust. Entropy balancing has been extended to the continuous treatment setting (Fong et al., 2018; Vegetabile et al., 2021), where the balancing condition ensures that the weighted correlation between the confounders and the treatment is zero. Ai et al. (2021) propose a method for estimating the counterfactual distribution in the continuous treatment setting. 3 METHODOLOGY . To describe our end-to-end balancing algorithm, we first need to describe entropy balancing for continuous treatments with base weights. 3.1 ENTROPY BALANCING FOR CONTINUOUS TREATMENTS . Causal Inference via Entropy Balancing . Entropy balancing creates a pseudo-population using instance weights $w_i$, $i = 1, \dots, n$, in which the treatment $a$ and the confounders $x$ are independent of each other. The independence is enforced by first selecting a set of functions on the confounders $\phi_k(\cdot): \mathcal{X} \to \mathbb{R}$, for $k = 1, \dots, K$, that are dense and complete in $L_2$ space. Given the functions, we approximate the independence relationship by $\widehat{\mathbb{E}}_n[a\,\phi_k(x)] = 0$, for $k = 1, \dots, K$, where the empirical expectation $\widehat{\mathbb{E}}_n$ is taken over the pseudo-population. Hereafter, we denote the mapped data points as $\phi(x_i) = [\phi_1(x_i), \dots, \phi_K(x_i)]$. The $\phi_k(\cdot)$ functions can be chosen based on prior knowledge or defined by the penultimate layer of a neural network that predicts $(a, y)$ from $x$.
Our contributions in this paper are orthogonal to the choice of the $\phi_k(\cdot)$ functions and can benefit from ideas on learning these functions (Zeng et al., 2020). The data-driven choice of the number of bases $K$ is beyond the scope of the current paper and left to future work. Balancing Constraint for Continuous Treatments . Following (Fong et al., 2018; Vegetabile et al., 2021), in the case of continuous treatments, we first de-mean the confounders $\phi(x_i)$ and treatments $a$ so that, without loss of generality, they have mean zero. The balancing objective is to learn a set of weights $w_i$, $i = 1, \dots, n$, that satisfy $\sum_{i=1}^{n} w_i \phi(x_i) = 0$, $\sum_{i=1}^{n} w_i a_i = 0$, and $\sum_{i=1}^{n} w_i a_i \phi(x_i) = 0$. We can write these three constraints compactly by defining a $(2K+1)$-dimensional vector $g_i = [\phi(x_i), a_i, a_i \phi(x_i)]$; the constraints become $\sum_{i=1}^{n} w_i g_i = 0$. We stack the $g$ vectors in a $(2K+1) \times n$ matrix $G$ for compact notation. In this work, without loss of generality, we present our idea with first-order balancing, without higher-order moments (Galagate, 2016; Wong and Chan, 2018; Hazlett, 2020). Primal and Dual EB . A variety of dispersion metrics have been proposed as objective functions for minimization, such as the entropy or variance of the weights (Wang and Zubizarreta, 2020). Hainmueller (2012) originally proposed minimizing the KL-divergence between the weights and a set of base weights $q_i$, $i = 1, \dots, n$. The choice of base weights is discussed below; we note that $q_i = \text{const.}$ leads to minimization of the entropy of the weights. Using this dispersion function and the balancing constraints, the entropy balancing optimization is as follows:
$$\hat{w} = \operatorname*{argmin}_{w} \sum_{i=1}^{n} w_i \log\left(\frac{w_i}{q_i}\right), \quad (1)$$
$$\text{s.t.} \quad (i)\; Gw = 0, \quad (ii)\; \mathbf{1}^\top w = 1, \quad (iii)\; w_i \geq 0 \text{ for } i = 1, \dots, n.$$
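Building the vectors $g_i$ and the matrix $G$ from de-meaned data can be sketched in a few lines; plain Python lists stand in for array types, and scalar inputs are assumed only for brevity.

```python
def build_balancing_matrix(phi_x, a):
    """Build the (2K+1) x n matrix G whose i-th column is
    g_i = [phi(x_i), a_i, a_i * phi(x_i)], after de-meaning
    phi(x) and a as required by the balancing constraints.

    phi_x: n x K list of lists (mapped confounders)
    a:     length-n list of treatments
    """
    n, K = len(phi_x), len(phi_x[0])
    # De-mean phi(x) and a (the "without loss of generality" step).
    mean_phi = [sum(row[k] for row in phi_x) / n for k in range(K)]
    mean_a = sum(a) / n
    phi_c = [[row[k] - mean_phi[k] for k in range(K)] for row in phi_x]
    a_c = [ai - mean_a for ai in a]
    # One (2K+1)-dimensional column g_i per unit, then transpose.
    cols = [phi_c[i] + [a_c[i]] + [a_c[i] * p for p in phi_c[i]]
            for i in range(n)]
    return [[cols[i][k] for i in range(n)] for k in range(2 * K + 1)]
```

The balancing constraints of Eq. (1) then read simply as `G @ w == 0` row by row.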
The above optimization problem can be solved efficiently using its Lagrangian dual:
$$\hat{\lambda} = \operatorname*{argmin}_{\lambda} \log\left(\mathbf{1}^\top \exp\left(\lambda^\top G + \ell\right)\right), \quad (2)$$
where $\ell_i = \log q_i$ are the log-base-weights. Given the solution $\hat{\lambda}$, the balancing weights can be computed as $w = \operatorname{softmax}(\hat{\lambda}^\top G + \ell)$. The softmax function is defined as $\operatorname{softmax}(v) = \exp(v) / (\mathbf{1}^\top \exp(v))$ for any vector $v$. The log-base-weights are a degree of freedom in Eq. (2) that we can use to improve the quality of the causal estimation. We select the mapping dimension $K$ such that problem (2) is well-conditioned, and leave the analysis of the high-dimensional setting $K \approx n$ to future work. We can also add an $L_1$ penalty term to the dual objective in Eq. (2), which corresponds to approximate balancing (Wang and Zubizarreta, 2020). In the next section, we propose to parameterize the log-base-weights and learn them. Our analysis in Section 4 shows that with arbitrary base weights, causal estimation using the weights learned in Eq. (2) is consistent.

Algorithm 1: Stochastic Training of $\ell_\theta$ for End-to-End Balancing
Require: Data tuples $(x_i, a_i, y_i)$ for $i = 1, \dots, n$ with an unknown response function $\mu(a)$.
Require: Representation functions $\phi(\cdot)$ and $\psi(\cdot)$, split size $n_1 < n$, and batch size $B$.
1: Generate a random set of indexes $I$, $|I| = n_1$, and its complement $I^c$, and split the data into $S$ and $S^c$ using them.
2: Estimate the distribution of the noise in $y$ given $(a, x)$ as $\widehat{F}_\varepsilon$.
3: Compute $G$ by stacking $g_i = [\phi(x_i), a_i, a_i \phi(x_i)]$, for $i = 1, \dots, n$.
4: for Number of Iterations do
5: Generate $B$ datasets $\{(x_i, a_i, y_{i,b})\}_{i=1}^{n}$ for $b = 1, \dots, B$ using $\varepsilon \sim \widehat{F}_\varepsilon$ and randomly selected response functions $\mu(a)_b$.
6: $\ell_i \leftarrow \ell_\theta(\psi(a_i, x_i))$.
7: $\hat{\lambda} \leftarrow \operatorname*{argmin}_{\lambda} \log\left(\mathbf{1}^\top \exp\left(\lambda^\top G + \ell\right)\right)$ using only the $S$ data.
8: $w \leftarrow \operatorname{softmax}(\hat{\lambda}^\top G + \ell)$ using only the $S^c$ data.
9: $\hat{\mu}(a)_b \leftarrow$ weighting-based causal estimates using $(a_i, y_{i,b}, w_i)$ in $S^c$, for $b = 1, \dots, B$.
10: Take a gradient step in $\theta$ to minimize $\frac{1}{B} \sum_{b=1}^{B} \left( \hat{\mu}(a)_b - \mu(a)_b \right)^2$.
11: end for
12: return The $\ell_{\hat{\theta}}$ function.
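The dual solve and weight recovery (Eq. (2), steps 7-8 of Algorithm 1) can be sketched with plain gradient descent; the paper does not specify a solver, so gradient descent is an assumption here. The gradient of the dual objective $\log(\mathbf{1}^\top \exp(\lambda^\top G + \ell))$ with respect to $\lambda$ is exactly $G w$, so the iteration stops moving precisely when the balancing constraints $Gw = 0$ hold.

```python
import math

def softmax(scores):
    """Numerically stable softmax of a list of scores."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_balance(G, log_q, lr=0.1, steps=5000):
    """Minimize the dual of Eq. (2) over lambda by gradient descent
    and return the balancing weights w = softmax(lambda' G + l),
    with l = log q. G is a list of rows, each of length n."""
    d, n = len(G), len(G[0])
    lam = [0.0] * d
    for _ in range(steps):
        scores = [sum(lam[k] * G[k][i] for k in range(d)) + log_q[i]
                  for i in range(n)]
        w = softmax(scores)
        grad = [sum(G[k][i] * w[i] for i in range(n)) for k in range(d)]
        lam = [lam[k] - lr * grad[k] for k in range(d)]
    scores = [sum(lam[k] * G[k][i] for k in range(d)) + log_q[i]
              for i in range(n)]
    return softmax(scores)
```

By construction, the weights sum to one and are strictly positive, so constraints (ii) and (iii) of the primal problem hold automatically; only $Gw = 0$ has to be driven to zero by the optimization.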
This paper proposes a method to estimate causal effects when the treatment is a continuous variable. In particular, the authors focus on employing entropy balancing (EB) to find weights that are not too extreme when performing weighted regression, reducing the variance of the estimate. More specifically, the authors propose an end-to-end approach to learn the "base weights" used in EB, in contrast to the original uniform base weights, using randomly generated samples.
The authors propose a way to intelligently choose the base weights in an entropy balancing framework for estimating treatment effects from observational data. Typically, the base weights are chosen arbitrarily (e.g., to be uniform) or based on domain expertise, which is not always available. Here, the weights are chosen by positing forms for the response function, generating pseudo-responses, and then finding weights that maximize treatment-effect estimation accuracy. In this sense, the procedure is end-to-end because the weighting is done specifically with the subsequent estimation in mind. The authors provide theoretical results showing that the weights are asymptotically normal and that they converge to the population minimizers of the entropy balancing optimization problem. Experimental results on synthetic datasets show that the proposed method is superior to entropy balancing with naive weights with respect to treatment-effect estimation accuracy.
End-to-End Balancing for Causal Continuous Treatment-Effect Estimation
We study the problem of observational causal inference with continuous treatments. We focus on the challenge of estimating the causal response curve for infrequently observed treatment values. We design a new algorithm based on the framework of entropy balancing that learns weights directly maximizing causal inference accuracy through end-to-end optimization. Our weights can be customized for different datasets and causal inference algorithms. We propose a new theory for the consistency of entropy balancing with continuous treatments. Using synthetic and real-world data, we show that our proposed algorithm outperforms standard entropy balancing in the accuracy of treatment effect estimation.

1 INTRODUCTION.

In many applications in business, social, and health sciences, we wish to infer the effect of a continuous treatment, such as drug dosage or administration duration, on a health outcome variable. Often, several confounding factors influence both the treatment and the response variable, so for accurate causal estimation we must appropriately account for their impact. Causal inference with continuous treatments is largely understudied and far more challenging than with binary treatments (Galagate, 2016; Ai et al., 2021). This is primarily because a continuous treatment induces uncountably many potential outcomes per unit, only one of which is observed, a sparse coarsening of the information needed to infer causal effects. Propensity score weighting (Robins et al., 2000; Imai and Van Dyk, 2004), stand-alone or combined with regression-based models to achieve double robustness (Kennedy et al., 2017), has become the state of the art for causal inference.
If the weights, inversely proportional to the conditional density of the treatment given the confounders, are correctly modeled, the weighted population will appear to come from a randomized study. However, this approach faces several challenges: (1) the weights only balance the confounders in expectation, not necessarily in the given data (Zubizarreta et al., 2011); (2) the weights can be very large for some units, leading to unstable estimation and uncertain inference. As a possible remedy, entropy balancing (Hainmueller, 2012) estimates the weights such that they balance the confounders subject to a measure of dispersion that prevents extreme weights. In this work, we note that low-entropy weights do not directly optimize the quality of the subsequent weighted regression, and we introduce an alternative approach that does. We propose End-to-End Balancing (E2B) to improve the accuracy of the weighted regression used for causal inference. E2B uses end-to-end training to estimate the base weights in the entropy balancing framework. The E2B weights are thus customized for different datasets and for causal inference algorithms based on weighting. Because we do not know the true treatment response function in real data, we propose a new approach to generate synthetic training datasets for end-to-end training. To theoretically analyze end-to-end balancing, we define Generalized Stable Weights (GSW) for causal inference as a generalization of the stable weights proposed by Robins et al. (2000). We prove that weights learned by entropy balancing for continuous treatments, including E2B weights, are unbiased estimators of generalized stable weights. We also show that E2B weights are asymptotically consistent and efficient estimators of the population weights. We perform three sets of experiments to demonstrate the accuracy improvements of E2B.
Two experiments with synthetic data, one with linear and another with non-linear response functions, show that E2B is more accurate than baseline entropy balancing and inverse propensity score techniques. In the experiments on real-world data, we qualitatively evaluate the average treatment effect function learned by E2B. We also show that the base weights learned by E2B follow our intuition about up-weighting low-frequency treatments.

2 PROBLEM DEFINITION AND RELATED WORK.

Problem Statement. Suppose we have the triplet $(x, a, y)$, where $x \in \mathcal{X} \subset \mathbb{R}^r$, $a \in \mathcal{A} \subset \mathbb{R}$, and $y \in \mathbb{R}$ denote the confounders, treatment, and response variables, respectively, from an observational causal study. In our continuous treatment setting (Galagate, 2016, Ch. 1.2.6), we denote the potential outcome as $y(a)$, meaning the value of $y$ after intervening on the treatment and setting it to $a$. Given an i.i.d. sample of size $n$, $\{(x_i, a_i, y_i)\}_{i=1}^n$, our objective is to eliminate the impact of the confounders and identify the average treatment effect function $\mu(a) = \mathbb{E}[y(a)]$, also called the response function. We make two classic assumptions: (1) strong ignorability, $y(a) \perp\!\!\perp a \mid x$ (i.e., no hidden confounders), and (2) positivity, $0 < p(a \mid x) < 1$. General Causal Inference Literature. The literature on causal inference is vast and we refer the reader to the books for general inquiry (Pearl, 2009; Imbens and Rubin, 2015; Spirtes et al., 2000; Peters et al., 2017). Instead, we focus on reviewing inference techniques for continuous treatments. In particular, we narrow our focus to propensity score weighting approaches (Robins et al., 2000; Imai and Van Dyk, 2004), because they can either be used alone or combined with regression algorithms to create doubly robust algorithms. Causal Inference via Weighting.
A popular approach for causal inference is to create a pseudo-population by weighting data points such that, in the pseudo-population, the confounders and treatment are independent. Regular regression algorithms can then estimate the causal response curve using the pseudo-population, which resembles data from a randomized trial. Throughout this paper, we denote the parameters of the pseudo-population with a tilde. Multiple forms of propensity scores have been proposed for continuous treatments (Hirano and Imbens, 2004; Imai and Van Dyk, 2004). The commonly used stabilized weights (Robins et al., 2000; Zhu et al., 2015) are defined as the ratio of the marginal density to the conditional density of the treatment: $sw = f(a)/f(a \mid x)$. Problems with Propensity Scores. Zubizarreta et al. (2011) list two challenges with propensity scores: (1) the weights only balance the confounders in expectation, not necessarily in the given data; (2) the weights can be very large for some data points, leading to unstable estimation. These challenges are amplified in the continuous setting because computing the stabilized weights requires correctly specifying two models, one for the marginal and one for the conditional distribution of the treatment. Kang et al. (2007) and Smith and Todd (2005) provide multiple pieces of evidence that propensity score methods can lead to large biases in estimation. While Robins et al. (2007) propose techniques to fix the large-weights problem in the binary treatment examples discussed by Kang et al. (2007), learning more accurate, bounded, and stable weights remains an active research area. Further work has proposed more robust propensity scores for binary treatments (Li et al., 2018; Zhao, 2019) as well; however, the case of continuous treatments has received considerably less attention. Entropy Balancing.
To address the problem of extreme weights, Entropy Balancing (EB) (Hainmueller, 2012) estimates weights that balance the confounders subject to a measure of dispersion on the weights, preventing extremely large weights. Other loss functions using different dispersion metrics have been proposed for balancing (Zubizarreta, 2015; Chan et al., 2016). Zhao and Percival (2016) show that entropy balancing is doubly robust. Entropy balancing has been extended to the continuous treatment setting (Fong et al., 2018; Vegetabile et al., 2021), where the balancing condition ensures that the weighted correlation between the confounders and the treatment is zero. Ai et al. (2021) propose a method for estimating the counterfactual distribution in the continuous treatment setting.

3 METHODOLOGY.

To describe our end-to-end balancing algorithm, we first need to describe entropy balancing for continuous treatments with base weights.

3.1 ENTROPY BALANCING FOR CONTINUOUS TREATMENTS.

Causal Inference via Entropy Balancing. Entropy balancing creates a pseudo-population using instance weights $w_i$, $i = 1, \ldots, n$, in which the treatment $a$ and the confounders $x$ are independent of each other. The independence is enforced by first selecting a set of functions on the confounders $\phi_k(\cdot) : \mathcal{X} \to \mathbb{R}$, for $k = 1, \ldots, K$, that are dense and complete in $L_2$ space. Given these functions, we approximate the independence relationship by $\hat{\mathbb{E}}_n[a\,\phi_k(x)] = 0$, for $k = 1, \ldots, K$, where the empirical expectation $\hat{\mathbb{E}}_n$ is taken over the pseudo-population. Hereafter, we denote the mapped data points as $\phi(x_i) = [\phi_1(x_i), \ldots, \phi_K(x_i)]$. The $\phi_k(\cdot)$ functions can be chosen based on prior knowledge or defined by the penultimate layer of a neural network that predicts $(a, y)$ from $x$.
Our contributions in this paper are orthogonal to the choice of the $\phi_k(\cdot)$ functions and can benefit from ideas on learning these functions (Zeng et al., 2020). The data-driven choice of the number of bases $K$ is beyond the scope of the current paper and left to future work. Balancing Constraint for Continuous Treatments. Following Fong et al. (2018) and Vegetabile et al. (2021), in the case of continuous treatments we first de-mean the confounder features $\phi(x_i)$ and treatments $a_i$ so that, without loss of generality, they have mean zero. The balancing objective is to learn a set of weights $w_i$, $i = 1, \ldots, n$, that satisfy $\sum_{i=1}^n w_i \phi(x_i) = 0$, $\sum_{i=1}^n w_i a_i = 0$, and $\sum_{i=1}^n w_i a_i \phi(x_i) = 0$. We can write these three constraints compactly by defining a $(2K+1)$-dimensional vector $g_i = [\phi(x_i), a_i, a_i \phi(x_i)]$; the constraints become $\sum_{i=1}^n w_i g_i = 0$. We stack the $g_i$ vectors into a $(2K+1) \times n$ matrix $G$ for compact notation. In this work, without loss of generality, we present our idea with first-order balancing, without higher-order moments (Galagate, 2016; Wong and Chan, 2018; Hazlett, 2020). Primal and Dual EB. A variety of dispersion metrics have been proposed as minimization objectives, such as the entropy or the variance of the weights (Wang and Zubizarreta, 2020). Hainmueller (2012) originally proposed minimizing the KL-divergence between the weights and a set of base weights $q_i$, $i = 1, \ldots, n$. The choice of base weights is discussed below; we note, however, that $q_i = \text{const.}$ leads to minimization of the entropy of the weights. Using this dispersion function and the balancing constraints, the entropy balancing optimization is:

$$\hat{w} = \operatorname{argmin}_w \sum_{i=1}^{n} w_i \log\left(\frac{w_i}{q_i}\right), \quad (1)$$

subject to (i) $Gw = 0$, (ii) $\mathbf{1}^\top w = 1$, and (iii) $w_i \ge 0$ for $i = 1, \ldots, n$.
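As a concrete illustration of the constraint matrix (toy data and a hand-picked first-order basis $\phi(x) = x$; a sketch, not the paper's experimental setup), the following builds $G$ and shows that $Gw$ stacks the three balancing sums. Under uniform weights the de-meaning makes the first $K+1$ moments vanish, while the treatment-confounder moment does not, which is exactly what $Gw = 0$ removes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 200, 2

# Toy confounders and a treatment that depends on them (a confounded setting).
x = rng.normal(size=(n, K))
a = 0.5 * x[:, 0] + rng.normal(scale=0.5, size=n)

# First-order basis phi(x) = x; de-mean phi(x) and a as described in the text.
phi = x - x.mean(axis=0)
a = a - a.mean()

# g_i = [phi(x_i), a_i, a_i * phi(x_i)], stacked into a (2K+1) x n matrix G.
G = np.vstack([phi.T, a[None, :], (a[:, None] * phi).T])

# For any weight vector w, G @ w stacks the three balancing sums.
w = np.full(n, 1.0 / n)          # uniform weights, for illustration
moments = G @ w

# De-meaning makes the first K+1 moments vanish under uniform weights...
print(moments[: K + 1])
# ...but the treatment-confounder moments sum_i w_i a_i phi(x_i) do not,
# because a was generated from x[:, 0].
print(moments[K + 1:])
```

The nonzero tail of `moments` is the residual confounding that the entropy balancing weights are constructed to eliminate.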
The above optimization problem can be solved efficiently via its Lagrangian dual:

$$\hat{\lambda} = \operatorname{argmin}_{\lambda} \log\left(\mathbf{1}^\top \exp(\lambda^\top G + \ell)\right), \quad (2)$$

where $\ell_i = \log q_i$ are the log-base-weights. Given the solution $\hat{\lambda}$, the balancing weights can be computed as $w = \operatorname{softmax}(\hat{\lambda}^\top G + \ell)$, where the softmax function is defined as $\operatorname{softmax}(v) = \exp(v) / (\mathbf{1}^\top \exp(v))$ for any vector $v$. The log-base-weights are a degree of freedom in Eq. (2) that we can use to improve the quality of causal estimation. We select the mapping dimension $K$ such that problem (2) is well-conditioned and leave the analysis of the high-dimensional setting $K \approx n$ to future work. We can also add an $L_1$ penalty term to the dual objective in Eq. (2), which corresponds to approximate balancing (Wang and Zubizarreta, 2020). In the next section we propose to parameterize the log-base-weights and learn them. Our analysis in Section 4 shows that with any arbitrary base weights, causal estimation using the weights learned in Eq. (2) will be consistent.

Algorithm 1: Stochastic Training of $\ell_\theta$ for End-to-End Balancing
Require: Data tuples $(x_i, a_i, y_i)$ for $i = 1, \ldots, n$ with an unknown response function $\mu(a)$.
Require: Representation functions $\phi(\cdot)$ and $\psi(\cdot)$, split size $n_1 < n$, and batch size $B$.
1: Generate a random index set $I$, $|I| = n_1$, and its complement $I^c$, and split the data into $S$ and $S^c$ accordingly.
2: Estimate the distribution of the noise in $y$ given $(a, x)$ as $\hat{F}_\varepsilon$.
3: Compute $G$ by stacking $g_i = [\phi(x_i), a_i, a_i \phi(x_i)]$ for $i = 1, \ldots, n$.
4: for number of iterations do
5: Generate $B$ datasets $\{(x_i, a_i, y_{i,b})\}_{i=1}^n$ for $b = 1, \ldots, B$ using $\varepsilon_b \sim \hat{F}_\varepsilon$ and randomly selected response functions $\mu(a)_b$.
6: $\ell_i \leftarrow \ell_\theta(\psi(a_i, x_i))$.
7: $\hat{\lambda} \leftarrow \operatorname{argmin}_\lambda \log(\mathbf{1}^\top \exp(\lambda^\top G + \ell))$ using only the $S$ data.
8: $w \leftarrow \operatorname{softmax}(\hat{\lambda}^\top G + \ell)$ using only the $S^c$ data.
9: $\hat{\mu}(a)_b \leftarrow$ weighting-based causal estimates using $(a_i, y_{i,b}, w_i)$ in $S^c$, for $b = 1, \ldots, B$.
10: Take a gradient step in $\theta$ to minimize $\frac{1}{B} \sum_{b=1}^{B} \left(\hat{\mu}(a)_b - \mu(a)_b\right)^2$.
11: end for
12: return the $\ell_{\hat{\theta}}$ function.
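A small numerical sketch of the dual problem (2) with uniform base weights (so $\ell = 0$), on toy data and with plain gradient descent (illustrative only, not the paper's implementation). The gradient of the log-sum-exp objective with respect to $\lambda$ is exactly $Gw$ with $w = \operatorname{softmax}(\lambda^\top G + \ell)$, so driving the gradient to zero enforces the balancing constraints:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 2

# Toy confounded data with a first-order basis phi(x) = x, de-meaned.
x = rng.normal(size=(n, K))
a = 0.3 * x[:, 0] + rng.normal(size=n)
phi = x - x.mean(axis=0)
a = a - a.mean()
G = np.vstack([phi.T, a[None, :], (a[:, None] * phi).T])   # (2K+1) x n

def softmax(v):
    e = np.exp(v - v.max())          # numerically stable softmax
    return e / e.sum()

ell = np.zeros(n)                    # log base weights; q_i = const here
lam = np.zeros(2 * K + 1)            # dual variables lambda of Eq. (2)

# Gradient descent on f(lam) = log(1^T exp(lam^T G + ell)).
# grad f = G @ w, so a stationary point means the balance holds: G w = 0.
for _ in range(5000):
    w = softmax(lam @ G + ell)
    lam = lam - 0.2 * (G @ w)

w = softmax(lam @ G + ell)
residual = np.linalg.norm(G @ w)     # balance residual; ~0 after convergence
```

Because the weights come out of a softmax, the simplex constraints of the primal ($\mathbf{1}^\top w = 1$, $w_i \ge 0$) are satisfied automatically; only the balance residual needs to be driven to zero.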
This paper introduces an estimation method, End-to-End Balancing (E2B), to improve propensity-weighted regression estimates for continuous treatments in observational causal settings. Propensity scores are a popular and successful tool for analyzing observational studies, but suffer from two major problems: they only balance confounders in expectation, and they can be large and unstable. Entropy balancing has been used before to estimate weights that balance confounders by creating a pseudo-population in which treatment and confounders are independent. This method, however, does not optimize for the regression performed afterwards, and hence misses an optimization opportunity. The authors close this gap by leveraging the entropy balancing framework to find base weights that do optimize for the treatment-effect estimation regression in the continuous treatment setting. The paper includes a proof of asymptotic consistency and experimental validation.
How to Adapt Your Large-Scale Vision-and-Language Model
1 INTRODUCTION.

Large-scale deep network models pretrained on ultra-large-scale internet data, whether text or images, have recently shown impressive performance (Radford et al., 2019; 2021; Brown et al., 2020; Devlin et al., 2018; Jia et al., 2021). Training such models, with billions of parameters, on internet-scale data is an expensive and time-consuming process, often costing millions of dollars. Hence, replicating such models is not only difficult but also undesirable for every downstream task. Fortunately, the information gathered by these large-scale models from raw internet data transfers well to many downstream tasks with little to no finetuning, using natural language for zero-shot evaluation (Brown et al., 2020; Radford et al., 2021). While zero-shot transfer performs well, it is generally better to adapt the model itself if labeled examples are available for the downstream task. Traditionally, the go-to strategy in the computer vision community has been to finetune either the whole network or an additional MLP layer at the end. With the use of raw language, adaptation techniques such as prompt tuning have surfaced (Li & Liang, 2021; Lester et al., 2021). Alternative methods insert new parameters within the network instead of adding a layer at the end (Houlsby et al., 2019; Mahabadi et al., 2021). However, it remains unclear which approach is preferred in which scenarios. We ask what general guidelines one should adopt when finetuning a large-scale pretrained model on downstream datasets. To scope this question, we choose CLIP (Radford et al., 2021) as the base pretrained model and adapt it to several downstream problems. CLIP is a vision-and-language model trained on over 400M pairs of images and text descriptions collected from the internet. There are several reasons to choose CLIP for this study.
First, CLIP is one of the few vision models trained on ultra-large-scale, unfiltered, and varied raw visual data from the internet. Second, the multi-modal nature of CLIP enables more general ways of adaptation, such as using natural language prompts for "zero-shot" transfer to new categories, a technique previously popular mostly in NLP. We find that merely tuning the parameters of LayerNorm (Ba et al., 2016) turns out to be a surprisingly effective approach that is competitive with or better than all other adaptation methods across the board. The effectiveness of normalization techniques has been observed in prior work for generalization (Perez et al., 2018; Lu et al., 2021) as well as for training from scratch (Frankle et al., 2020). Inspired by this, we further look into different ways of combining LayerNorm-tuning with adaptation methods that finetune new parameters (website at https://sites.google.com/view/adapt-large-scale-models). We devise an effective scheme that first finetunes the CLIP model using only LayerNorm tuning and uses the result as initialization for adapting new parameters. We evaluate our adaptation techniques across 12 downstream tasks spread along two spectra: the size of the downstream dataset and the similarity of the downstream data to the pretraining data. Across both spectra, we find that our two-stage LayerNorm tuning approach is most competitive, and we show its effectiveness for general-purpose adaptation of CLIP to downstream image-classification tasks. To summarize, our paper's contributions are as follows: • We show the effectiveness of LayerNorm-tuning for adaptation to downstream tasks. • We devise a simple yet effective scheme to combine LayerNorm-tuning with other methods of finetuning to obtain competitive performance across the board.
• We show a thorough comparison of different adaptation methods in four scenarios across two spectra (amount of downstream data and its similarity to pretraining data), studied on numerous downstream classification tasks. We believe our findings will encourage more research and put existing research in perspective regarding what works best when finetuning large-scale vision-language models on downstream tasks.

2 BACKGROUND: VISION-AND-LANGUAGE PRETRAINED MODELS.

Vision-and-language pretraining methods have recently shown promise on diverse tasks across images and text (Radford et al., 2021; Jia et al., 2021). While many such approaches have emerged, we focus on CLIP (Contrastive Language-Image Pre-training), a large-scale model with strong zero-shot performance on downstream classification tasks (Radford et al., 2021). Contrastive Language-Image Pre-training (CLIP). CLIP consists of two parallel encoders for processing images and text, whose outputs are projected into a shared embedding space. The text encoder is a Transformer (Vaswani et al., 2017) following the architecture described in Radford et al. (2019), while the image encoder is a Vision Transformer (ViT) with a patch size of 16 (Dosovitskiy et al., 2020). For our experiments, we utilize the open-sourced pretrained CLIP models. Training. Because the image and text features live in the same embedding space, the cosine similarity between any embedded image and text description can be computed. CLIP uses these as prediction probabilities for classifying an image with the correct text caption (or vice versa) across batches. Formally, denote $I$ and $T$ as the sets of image and text features in a single batch. The prediction probability for the $i$th image and $j$th caption in the batch is given by

$$p(T_j \mid I_i) = \frac{\exp(\cos(T_j, I_i)/\tau)}{\exp(\cos(T_j, I_i)/\tau) + \sum_{k \ne j} \exp(\cos(T_k, I_i)/\tau)}$$

where $\tau$ is a learnable temperature parameter.
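The prediction probability above is a temperature-scaled softmax over cosine similarities. A minimal sketch with random stand-in embeddings (the embeddings and the fixed temperature value here are illustrative assumptions; in CLIP the vectors come from the two encoders and $\tau$ is learned):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 512, 5

# Stand-in embeddings for one image and n_classes text descriptions,
# L2-normalized so cosine similarity reduces to a dot product.
image = rng.normal(size=d)
texts = rng.normal(size=(n_classes, d))
image = image / np.linalg.norm(image)
texts = texts / np.linalg.norm(texts, axis=1, keepdims=True)

sims = texts @ image            # cos(T_j, I_i) for every caption j

tau = 0.07                      # temperature; learnable in CLIP, fixed here
logits = sims / tau
probs = np.exp(logits - logits.max())
probs = probs / probs.sum()     # p(T_j | I_i) over the captions in the batch

pred = int(np.argmax(probs))    # zero-shot prediction: most similar caption
```

Since softmax is monotone, the predicted class is simply the caption with the highest cosine similarity; the temperature only sharpens or flattens the distribution.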
CLIP is trained with this contrastive loss across 400 million pairs of images and text captions collected online (Radford et al., 2021). Inference. For a downstream classification task at test time, CLIP first embeds the textual descriptions of all classes. These descriptions may range from a phrase like "a photo of a <class>" to heavily engineered embeddings ensembled over 80 different templates (Radford et al., 2021). Each image is then classified using the embedded classes as labels and the prediction probabilities described above. Notably, this inference scheme allows CLIP to be transferred zero-shot to any downstream image classification task. Radford et al. (2021) show that zero-shot CLIP is competitive with a fully supervised ResNet (He et al., 2016) baseline on a suite of image classification tasks.

3 METHODOLOGY: FINE-TUNING LARGE-SCALE PRETRAINED MODEL.

Although zero-shot CLIP performs well on natural images and general object-classification datasets, its performance degrades quickly on more abstract tasks and out-of-distribution data. Even on a simple dataset like MNIST (LeCun, 1998), the zero-shot CLIP model (ViT-B/16) we test attains an accuracy of only 55%. Substantial gains can be achieved by fine-tuning the pretrained model, but many such strategies have emerged across tasks in vision and language, and it is unclear which to use in diverse downstream settings. For this reason, we provide an extensive study of adaptation approaches. Figure 1 illustrates the fine-tuning methods we consider in the context of CLIP, while Figure 2 shows more detailed information on each approach. We propose a general taxonomy of fine-tuning approaches and consider three major classes: (a) methods that only fine-tune existing parameters, (b) methods that freeze existing parameters and add new parameters, and (c) methods that combine (a) and (b).
We first consider the two methods in (a), which only fine-tune existing parameters.

3.1 FINE-TUNING EXISTING PARAMETERS.

Full Model Fine-tuning. The simplest approach to fine-tuning is to train all of the model parameters on the downstream task. However, this is unstable and does not scale well to CLIP-sized models with hundreds of millions of parameters. Our empirical results show this behavior as well. LayerNorm Tuning. Instead of full-model fine-tuning for large-scale models, we can tune a small subset of chosen parameters when the downstream data is scarce. In fact, Frankle et al. (2020) show that just tuning Batch Normalization (Ioffe & Szegedy, 2015) parameters from a random initialization can be highly expressive. In a similar vein, we investigate tuning the parameters of Layer Normalization (LayerNorm) layers (Ba et al., 2016). Unlike Batch Normalization, LayerNorm normalizes each input independently over its feature dimensions rather than across the mini-batch. Given an input $x$, LayerNorm transforms it as

$$y = \frac{x - \mathbb{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta$$

where the mean and variance are calculated over the normalized dimensions and $\gamma$, $\beta$ are learned parameters. Because the image and text encoders in CLIP share the same underlying Transformer architecture, in LayerNorm Tuning we fine-tune the Layer Normalization parameters $\gamma$, $\beta$ across all layers of both encoders. These parameters are 768-dimensional and 512-dimensional for the image and text encoders, respectively.

3.2 FINE-TUNING NEW PARAMETERS.

An alternative paradigm is to inject new parameters that can more effectively adapt to downstream tasks. These new parameters can act at various stages of a pretrained model: on the output, the input, or the intermediate activations. Linear Probe. The classic method of training a linear probe on top of frozen features is an example of adding new parameters that act on the model output.
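As a concrete reference for the LayerNorm transform of Section 3.1, here it is written out in NumPy. In LayerNorm Tuning only the per-element $\gamma$ and $\beta$ vectors would receive gradient updates while everything else stays frozen (a sketch of the transform, not the paper's training code):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize each row over its feature dimension, then scale and shift.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

d = 768                                    # image-encoder width from the text
x = np.random.default_rng(0).normal(size=(4, d))   # four token vectors
gamma = np.ones(d)                         # the only parameters updated
beta = np.zeros(d)                         # in LayerNorm Tuning

y = layer_norm(x, gamma, beta)

# Each LayerNorm contributes just 2*d parameters (gamma and beta), which is
# why tuning only these is so cheap compared to full fine-tuning.
tunable = gamma.size + beta.size
```

At the default initialization ($\gamma = 1$, $\beta = 0$), each output row has mean zero and unit variance; tuning lets the model re-scale and re-shift each feature channel for the downstream task.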
Given a pretrained CLIP model, we discard the text encoder, freeze the image encoder, and learn a linear layer on top of the image features before they are projected into the shared embedding space. The linear layer maps the penultimate image features to logits from which class predictions are made. While this simple method is popular and effective, it is parameter-inefficient for tasks with a large number of classes and fails to leverage any of the language information contained in CLIP. Prompt Tuning. Alternatively, we can consider adding parameters that act on the model input. This approach, known as prompt tuning, has emerged as a parameter-efficient fine-tuning method in language (Li & Liang, 2021; Lester et al., 2021). A fixed number of continuous vectors (a "prompt") is prepended to the model input and optimized throughout training. Similar to concurrent work by Zhou et al. (2021), we apply prompt tuning to image classification with CLIP. For the model input, we embed the raw text of the classes without a template and prepend a continuous prompt of fixed length. During training, the prompt is learned using a cross-entropy loss on the prediction probabilities detailed in Section 3.1. Although prompt tuning can be applied in the same way to transformer-based visual encoders, we find that applying it only to the text encoder produces better and more stable results. Prompt tuning is parameter-efficient and removes the need for manual prompt engineering, e.g., specifying "a photo of a <class>, a type of flower" for a downstream flower-classification task. Ideally, the learned prompts would contain such domain-specific information. However, prompt tuning suffers from high variance during training and is sensitive to initialization. Adapter and Compacter Networks. The above two approaches inject parameters that act either at the end of the network (linear probe) or at the beginning (prompt tuning).
A third option is to inject new parameters for the downstream task within the layers of the network itself. This idea has been popularized as an efficient transfer learning method in language (Houlsby et al., 2019). For Transformer-based architectures, a common strategy is to insert a block of learnable parameters after the feed-forward layers or the attention mechanism. Adapter networks insert learnable adapter blocks after the feed-forward layers in each Transformer layer (Houlsby et al., 2019). Each block follows a bottleneck architecture and is composed of a linear down-projection, a non-linearity, and a linear up-projection, as shown in Figure 2. However, for architectures with many stacked Transformer layers and larger hidden dimensions, adapter modules are parameter-inefficient. To alleviate this issue, Mahabadi et al. (2021) introduce compacter modules, which follow the same architecture but use low-rank parameters and hypercomplex multiplication to improve parameter efficiency. Specifically, if the down-projection layer maps $x \in \mathbb{R}^m$ to $Wx + b \in \mathbb{R}^d$, where $W \in \mathbb{R}^{d \times m}$ and $b \in \mathbb{R}^d$ are learned parameters and $d \ll m$, compacter modules represent $W$ as

$$W = \sum_{i=1}^{n} A_i \otimes (s_i t_i^\top)$$

where the $A_i$ are global weights shared across Transformer layers and $s_i$, $t_i$ are local, rank-1 weights. We insert Adapter and Compacter modules across the Transformer layers in the text encoder.
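A sketch of the compacter parameterization $W = \sum_i A_i \otimes (s_i t_i^\top)$ with toy dimensions. The specific shapes chosen for $A_i$, $s_i$, $t_i$ follow the usual Kronecker-factorization pattern and are assumptions for illustration, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n = 64, 16, 4    # input dim, bottleneck dim, number of Kronecker terms
                       # (toy sizes chosen for illustration)

# Global matrices A_i (shared across Transformer layers in compacter) and
# local rank-1 factors s_i, t_i belonging to this particular layer.
A = rng.normal(size=(n, n, n))       # n matrices of shape n x n
s = rng.normal(size=(n, d // n))     # s_i in R^{d/n}
t = rng.normal(size=(n, m // n))     # t_i in R^{m/n}

# W = sum_i A_i kron (s_i t_i^T) assembles a full (d x m) down-projection.
W = sum(np.kron(A[i], np.outer(s[i], t[i])) for i in range(n))

xvec = rng.normal(size=m)
h = W @ xvec                          # down-projected activation in R^d

dense_params = d * m                              # plain linear layer: 1024
factored_params = A.size + s.size + t.size        # Kronecker form: 144
```

The parameter saving grows with the layer sizes, and since the $A_i$ are shared across layers, the per-layer cost of each additional compacter module is only the rank-1 factors $s_i$, $t_i$.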
The goal of the paper is to investigate how to efficiently adapt large-scale pretrained vision-language models (e.g., CLIP) to downstream tasks. The paper is based on the observation that while the vision community predominantly uses the linear probe as its standard protocol, other approaches such as prompt learning are used in language. The authors compare and analyze several fine-tuning methods (linear probe, prompt tuning, adapter and compacter networks) across 12 downstream classification tasks. Focusing on LayerNorm, they further demonstrate that combining LayerNorm tuning with existing fine-tuning methods improves performance.
How to Adapt Your Large-Scale Vision-and-Language Model
1 INTRODUCTION . Large-scale deep network models pretrained on ultra large-scale data on the internet , whether text or images , have shown impressive performance recently ( Radford et al. , 2019 ; 2021 ; Brown et al. , 2020 ; Devlin et al. , 2018 ; Jia et al. , 2021 ) . Training such models with billions of parameters on a large internet scale data is an expensive and time consuming process often costing over millions of dollars . Hence , replicating such models is not only difficult but also undesirable for every downstream task . Fortunately , the information gathered by these large-scale models using raw internet data seems to transfer well to several downstream tasks with little to no finetuning at all using natural language as a way for zero-shot evaluation ( Brown et al. , 2020 ; Radford et al. , 2021 ) . While zero-shot transfer performs well , it is generally better to adapt the model itself if there are any labeled examples available for the downstream task . Traditionally , the go-to strategy in the computer vision community has either been to finetune the whole network or an additional MLP layer at the end . With the use of raw language , adaptation techniques such as prompt tuning have surfaced ( Li & Liang , 2021 ; Lester et al. , 2021 ) . Alternative methods include new parameters in between the network instead of adding a layer at the end ( Houlsby et al. , 2019 ; Mahabadi et al. , 2021 ) . However , it remains unclear as to which approach is preferred under which scenarios . We ask what are the general guidelines one should adopt while finetuning a large-scale pretrained model on downstream datasets . To scope this question , we choose CLIP ( Radford et al. , 2021 ) as the base pretrained model and adapt it to several downstream problems . CLIP is a vision-and-language model trained on over 400M pairs of image and text descriptions collected off the internet . There are several reasons to choose CLIP for this study . 
First, CLIP is one of the few vision models trained on ultra-large-scale, unfiltered, and varied raw visual data from the internet. Second, the multi-modal nature of CLIP enables more general ways of adaptation, like using natural language prompts for "zero-shot" transfer to new categories (techniques previously popular mostly in NLP). We find that merely tuning the parameters of LayerNorm (Ba et al., 2016) is a surprisingly effective approach that is competitive with or better than all other adaptation methods across the board. The effectiveness of normalization techniques has been observed in prior work for generalization (Perez et al., 2018; Lu et al., 2021) as well as for training from scratch (Frankle et al., 2020). Inspired by this, we further look into different ways of combining LayerNorm-tuning with other adaptation methods that finetune new parameters (website at https://sites.google.com/view/adapt-large-scale-models). We devise an effective scheme that first finetunes the CLIP model using only LayerNorm tuning and then uses it as initialization for adapting new parameters. We evaluate our adaptation techniques across 12 downstream tasks spread along two spectra: the size of the downstream dataset and the similarity of the downstream data to the pretraining data. Across both spectra, we find that our two-stage LayerNorm tuning approach is most competitive, and we show its effectiveness for general-purpose adaptation of CLIP to downstream image-classification tasks. To summarize, our paper's contributions are as follows: • We show the effectiveness of LayerNorm-tuning for adaptation to downstream tasks. • We devise a simple yet effective scheme to combine LayerNorm-tuning with other methods of finetuning to obtain competitive performance across the board.
• We show a thorough comparison of different adaptation methods in four scenarios across two spectra (amount of downstream data and its similarity to pretraining data), studied on numerous downstream classification tasks. We believe our findings will encourage more research and put existing research in perspective of what works best when finetuning large-scale vision-language models for downstream tasks. 2 BACKGROUND: VISION-AND-LANGUAGE PRETRAINED MODELS . Vision-and-language pre-training methods have recently shown promise on diverse tasks across images and text (Radford et al., 2021; Jia et al., 2021). While many such approaches have emerged, we focus on CLIP (Contrastive Language-Image Pre-training), a large-scale model with strong zero-shot performance on downstream classification tasks (Radford et al., 2021). Contrastive Language-Image Pre-training (CLIP) CLIP consists of two parallel encoders for processing images and text, whose outputs are projected into a shared embedding space. The text encoder is a Transformer (Vaswani et al., 2017) following the architecture described in Radford et al. (2019), while the image encoder is a Vision Transformer (ViT) with a patch size of 16 (Dosovitskiy et al., 2020). For our experiments, we utilize the open-sourced pretrained CLIP models. Training Because the image and text features live in the same embedding space, the cosine similarity between any embedded image and text description can be computed. CLIP uses these as prediction probabilities for classifying an image with the correct text caption (or vice versa) across batches. Formally, denote I and T as the sets of image and text features in a single batch. The prediction probability for the $i$-th image and $j$-th caption in the batch is given by $p(T_j \mid I_i) = \frac{\exp(\cos(T_j, I_i)/\tau)}{\exp(\cos(T_j, I_i)/\tau) + \sum_{k \neq j} \exp(\cos(T_k, I_i)/\tau)}$ where $\tau$ is a learnable temperature parameter.
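As a concrete check, the batch-wise prediction probability above can be computed directly from the feature vectors. A minimal NumPy sketch (the temperature is fixed here rather than learned, and the toy features stand in for encoder outputs):

```python
import numpy as np

def clip_probability(text_feats, image_feats, i, j, tau=0.07):
    """p(T_j | I_i): softmax over cosine similarities between image i
    and every caption in the batch, scaled by temperature tau
    (learnable in CLIP; fixed here for the sketch)."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(t, image_feats[i]) for t in text_feats]) / tau
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[j]

# Toy batch: caption k matches image k (matching pairs are similar).
rng = np.random.default_rng(0)
images = rng.normal(size=(4, 8))
texts = images + 0.05 * rng.normal(size=(4, 8))

p_match = clip_probability(texts, images, i=0, j=0)
p_other = clip_probability(texts, images, i=0, j=1)
print(p_match > p_other)  # True: the matching caption gets the higher probability
```

The low temperature sharpens the softmax, so even small gaps in cosine similarity translate into near-one-hot probabilities, which is what makes the contrastive objective effective.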
CLIP is trained with this contrastive loss across 400 million pairs of images and text captions collected online (Radford et al., 2021). Inference For a downstream classification task at test time, CLIP first embeds the textual descriptions of all classes. These descriptions may range from a phrase like "a photo of a <class>" to heavily engineered embeddings ensembled over 80 different templates (Radford et al., 2021). Each image is then classified using the embedded classes as labels and the prediction probabilities described above. Notably, this inference scheme allows CLIP to be transferred zero-shot to any downstream image classification task. Radford et al. (2021) show that zero-shot CLIP is competitive with a fully supervised ResNet (He et al., 2016) baseline on a suite of image classification tasks. 3 METHODOLOGY: FINE-TUNING LARGE-SCALE PRETRAINED MODEL . Although zero-shot CLIP performs well on natural images and general object classification datasets, its performance degrades quickly on more abstract tasks with out-of-distribution data. Even on a simple dataset like MNIST (LeCun, 1998), the zero-shot CLIP model (ViT-B/16) we test attains an accuracy of only 55%. Substantial gains can be achieved by fine-tuning the pre-trained model, but many such strategies have emerged across tasks in vision and language, and it is unclear which to use in diverse downstream settings. For this reason, we provide an extensive study of adaptation approaches. Figure 1 illustrates the fine-tuning methods we consider in the context of CLIP, while Figure 2 shows more detailed information regarding each approach. We propose a general taxonomy of fine-tuning approaches and consider three major classes: (a) methods which only fine-tune existing parameters, (b) methods which freeze existing parameters and add new parameters, and (c) methods which combine (a) and (b).
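The zero-shot inference scheme reduces to a cosine-similarity argmax over the embedded class descriptions. A toy sketch, with random stand-in embeddings in place of CLIP's actual encoders:

```python
import numpy as np

def zero_shot_classify(image_feat, class_text_feats):
    """Zero-shot inference sketch: embed one text description per class,
    then predict the class whose text embedding has the highest cosine
    similarity with the image embedding. (Real CLIP runs its text
    encoder on prompts like "a photo of a <class>"; the embeddings
    below are stand-ins.)"""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return int(np.argmax([cos(image_feat, t) for t in class_text_feats]))

rng = np.random.default_rng(1)
class_embs = rng.normal(size=(3, 16))            # stand-ins for 3 class descriptions
image = class_embs[2] + 0.1 * rng.normal(size=16)  # image close to class 2
print(zero_shot_classify(image, class_embs))       # 2
```

No gradient step is involved, which is why this scheme transfers to any downstream label set: changing the task only changes the text that gets embedded.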
We first consider two methods in (a) which only fine-tune existing parameters. 3.1 FINE-TUNING EXISTING PARAMETERS . Full Model Fine-tuning The simplest approach to fine-tuning is to train all of the model parameters on the downstream task. However, this is unstable and does not scale well to CLIP-size models with hundreds of millions of parameters. Our empirical results show this behavior as well. LayerNorm Tuning Instead of full model fine-tuning for large-scale models, we can tune a small subset of chosen parameters when the downstream data is scarce. In fact, Frankle et al. (2020) show that just tuning Batch Normalization (Ioffe & Szegedy, 2015) parameters from a random initialization can be highly expressive. In a similar vein, we investigate tuning the parameters of Layer Normalization (LayerNorm) layers (Ba et al., 2016). Unlike Batch Normalization, LayerNorm normalizes each example over its feature dimensions, independently of the mini-batch. Given an input $x$, LayerNorm transforms it as $y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta$ where the mean and variance are calculated over the normalized dimensions and $\gamma$, $\beta$ are learned parameters. Because the image and text encoders in CLIP share the same underlying Transformer architecture, in LayerNorm Tuning we fine-tune the Layer Normalization parameters $\gamma$, $\beta$ across all layers of both encoders. These parameters are 768-dimensional and 512-dimensional for the image and text encoders respectively. 3.2 FINE-TUNING NEW PARAMETERS . An alternative paradigm is to inject new parameters which can more effectively adapt to downstream tasks. These new parameters can act at various stages of a pre-trained model: on the output, input, or intermediate activations. Linear Probe The classic method of training a linear probe on top of frozen features is an example of adding new parameters which act on the model output.
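The LayerNorm transform above is easy to verify numerically. A minimal NumPy implementation with the learnable $\gamma$ and $\beta$ (the only parameters updated in LayerNorm tuning; the 768-dimensional width matches the image encoder):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """LayerNorm over the feature dimension of each example:
    y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta.
    In LayerNorm tuning, only gamma and beta receive gradients;
    every other weight in the encoder stays frozen."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

rng = np.random.default_rng(2)
x = rng.normal(loc=3.0, scale=2.0, size=(4, 768))  # 4 tokens, image-encoder width
gamma, beta = np.ones(768), np.zeros(768)          # the tunable parameters
y = layer_norm(x, gamma, beta)
print(np.allclose(y.mean(axis=-1), 0.0, atol=1e-6))  # True: normalized per example
```

With 768 + 512 parameters per layer pair, the full tunable set is tiny relative to the hundreds of millions of frozen weights, which is the source of the method's stability on scarce downstream data.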
Given a pre-trained CLIP model, we discard the text encoder, freeze the image encoder, and learn a linear layer on top of the image features before they are projected to the shared embedding space. The linear layer maps the penultimate image features to logits from which class predictions are made. While this simple method is popular and effective, it is parameter-inefficient for tasks with a higher number of classes and fails to leverage any of the language information contained in CLIP. Prompt Tuning Alternatively, we can consider adding parameters which act on the model input. Such an approach, known as prompt tuning, has emerged as a parameter-efficient fine-tuning method in language (Li & Liang, 2021; Lester et al., 2021). A fixed number of continuous vectors (a "prompt") is prepended to the model input and optimized throughout training. Similar to concurrent work by Zhou et al. (2021), we apply prompt tuning to image classification with CLIP. For the model input, we embed the raw text of the classes without a template and prepend a continuous prompt of fixed length. During training, the prompt is learned using a cross-entropy loss according to the prediction probabilities detailed in Section 2. Although prompt tuning can be applied in the same way to transformer-based visual encoders, we find that applying it only to the text encoder produces better and more stable results. Prompt tuning is parameter-efficient and removes the need for manual prompt engineering, e.g., specifying "a photo of a <class>, a type of flower" for a downstream task on flower classification. Ideally, the learned prompts would contain such domain-specific information. However, prompt tuning suffers from high variance during training and is sensitive to initialization. Adapter and Compacter Networks The above two approaches inject parameters which act either at the end of the network (linear probe) or at the beginning (prompt tuning).
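Mechanically, prompt tuning just concatenates learnable vectors in front of the embedded class text. A shape-level sketch (the prompt length, embedding width, and stand-in embeddings are illustrative choices, not values from the paper):

```python
import numpy as np

def prepend_prompt(token_embeddings, prompt):
    """Prompt tuning sketch: a fixed number of continuous vectors is
    prepended to the embedded class text. During training, only
    `prompt` would receive gradients; here we just build the input."""
    return np.concatenate([prompt, token_embeddings], axis=0)

embed_dim, prompt_len = 512, 8                   # illustrative sizes
rng = np.random.default_rng(3)
class_tokens = rng.normal(size=(3, embed_dim))   # embedded raw class text, no template
prompt = 0.02 * rng.normal(size=(prompt_len, embed_dim))  # learnable prompt vectors

model_input = prepend_prompt(class_tokens, prompt)
print(model_input.shape)  # (11, 512): prompt tokens followed by class tokens
```

The small initialization scale of the prompt is one common way to mitigate the sensitivity-to-initialization issue the text mentions, though the best choice is task-dependent.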
A third option is to inject new parameters for the downstream task within the layers of the network itself. This idea has been popularized as an efficient transfer learning method in language (Houlsby et al., 2019). For Transformer-based architectures, a common strategy is to insert a block of learnable parameters after the feed-forward layers or the attention mechanism. Adapter networks insert learnable adapter blocks after the feed-forward layers in each Transformer layer (Houlsby et al., 2019). Each block follows a bottleneck architecture and is composed of a linear down-projection, a non-linearity, and a linear up-projection, as shown in Figure 2. However, for architectures with many stacked Transformer layers and larger hidden dimensions, adapter modules are parameter-inefficient. To alleviate this issue, Mahabadi et al. (2021) introduce compacter modules, which follow the same architecture but use low-rank parameters and hypercomplex multiplication to improve parameter efficiency. Specifically, if the down-projection layer maps $x \in \mathbb{R}^m$ to $Wx + b \in \mathbb{R}^d$, where $W \in \mathbb{R}^{m \times d}$ and $b \in \mathbb{R}^d$ are learned parameters and $d \ll m$, compacter modules represent $W$ as $W = \sum_{i=1}^{n} A_i \otimes (s_i t_i^T)$ where the $A_i$ are global weights shared across Transformer layers and $s_i$, $t_i$ are local, rank-1 weights. We insert Adapter and Compacter modules across the Transformer layers in the text encoder.
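Both the adapter bottleneck and the compacter parameterization can be sketched in a few lines of NumPy (the sizes, the ReLU non-linearity, and the random factors below are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def adapter_block(h, W_down, b_down, W_up, b_up):
    """Bottleneck adapter (Houlsby et al., 2019): linear down-projection,
    non-linearity, linear up-projection, plus a residual connection."""
    z = np.maximum(0.0, h @ W_down + b_down)   # ReLU chosen for the sketch
    return h + z @ W_up + b_up

def compacter_weight(A, s, t):
    """Compacter parameterization: W = sum_i A_i ⊗ (s_i t_i^T), with the
    A_i shared across layers and s_i, t_i local rank-1 weights."""
    return sum(np.kron(A[i], np.outer(s[i], t[i])) for i in range(len(A)))

rng = np.random.default_rng(4)
m, d, n = 512, 64, 4                 # hidden size, bottleneck size, Kronecker terms
A = rng.normal(size=(n, 4, 4))       # small shared factors
s = rng.normal(size=(n, m // 4))
t = rng.normal(size=(n, d // 4))
W_down = compacter_weight(A, s, t)   # 4*(16+128+16) = 640 params generate 512*64 entries
W_up = 0.01 * rng.normal(size=(d, m))

h = rng.normal(size=(2, m))
out = adapter_block(h, W_down, np.zeros(d), W_up, np.zeros(m))
print(W_down.shape, out.shape)  # (512, 64) (2, 512)
```

The Kronecker structure is the source of the savings: 640 free parameters stand in for the 32768 entries of a dense down-projection, at the cost of restricting $W$ to a structured family.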
This paper has extensively studied how to adapt the large-scale pre-trained vision-language model CLIP for downstream tasks. Several fine-tuning methods are analyzed across a diverse set of image classification tasks along two spectra. Further, a simple yet effective strategy that combines LayerNorm-tuning with general fine-tuning methods is proposed to improve their performance and benchmark them on few-shot classification tasks.
This paper proposed a new method (LayerNorm tuning) for finetuning a pre-trained vision-and-language model. While simple, this method is shown to work competitively with other methods (e.g., prompt tuning) in various settings. For the low-data, high-similarity setting (Fig 3), it shows the best performance across the methods. The authors further claimed LayerNorm tuning could be used to boost the performance of general finetuning methods like linear probing or prompt tuning. However, this seems not useful, as the performance typically drops when compared to the LayerNorm baseline. In addition, the proposed methods are evaluated on only a single model, so it is not clear whether they generalize.
Reward Uncertainty for Exploration in Preference-based Reinforcement Learning
Conveying complex objectives to reinforcement learning (RL) agents often requires meticulous reward engineering. Preference-based RL methods are able to learn a more flexible reward model based on human preferences by actively incorporating human feedback, i.e., a teacher's preferences between two clips of behavior. However, poor feedback-efficiency remains a problem in current preference-based RL algorithms, as tailored human feedback is very expensive. To handle this issue, previous methods have mainly focused on improving query selection and policy initialization. At the same time, recent exploration methods have proven to be a recipe for improving sample-efficiency in RL. We present an exploration method specifically for preference-based RL algorithms. Our main idea is to design an intrinsic reward that measures novelty based on the learned reward. Specifically, we utilize the disagreement across an ensemble of learned reward models. Our intuition is that disagreement in the learned reward model reflects uncertainty in the tailored human feedback and could be useful for exploration. Our experiments show that reward uncertainty exploration improves both the feedback- and sample-efficiency of preference-based RL algorithms on complex robot manipulation tasks from the Meta-World benchmark, compared with other existing exploration methods that measure the novelty of state visitation. 1 INTRODUCTION . In reinforcement learning (RL), the reward function specifies correct objectives to RL agents. However, it is difficult and time-consuming to carefully design suitable reward functions for a variety of complex behaviors (e.g., cooking or book summarization (Wu et al., 2021)). Furthermore, if there are complicated social norms we want RL agents to understand and follow, conveying a reliable reward function that includes such information may remain an open problem (Amodei et al., 2016; Hadfield-Menell et al., 2017).
Overall, engineering reward functions purely by human effort for all tasks remains a significant challenge. An alternative that resolves the challenge of reward engineering is preference-based RL (Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021b). Compared to the traditional RL setup, preference-based RL algorithms are able to teach RL agents without the need to design reward functions. Instead, the agent uses feedback, usually in the form of a (human) teacher's preferences between two behaviors, to learn the desired behaviors indicated by the teacher. Therefore, instead of using carefully designed rewards from the environment, the agent is able to learn a more flexible reward function suitably aligned with the teacher's feedback. However, preference-based RL usually requires a large amount of teacher feedback, which may be time-consuming or sometimes infeasible to collect. To improve feedback-efficiency, prior works have investigated several sampling strategies (Biyik & Sadigh, 2018; Sadigh et al., 2017; Biyik et al., 2020; Lee et al., 2021c). These methods aim to select more informative queries to improve the quality of the learned reward function while asking for less feedback from the teacher. Another line of work focuses on policy initialization. Ibarz et al. (2018) initialized the agent's policy with imitation learning from expert demonstrations, and Lee et al. (2021b) utilized unsupervised pre-training of RL agents before collecting teacher preferences, in the hope of learning diverse behaviors in a self-supervised way to reduce the total amount of human feedback. Exploration, in the context of standard RL, has addressed the problem of sample-efficiency (Stadie et al., 2015; Bellemare et al., 2016; Pathak et al., 2017; 2019; Liu & Abbeel, 2021; Seo et al., 2021b). When extrinsic rewards from the environment are limited, exploration has been demonstrated to allow RL agents to learn diverse behaviors.
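Concretely, preference-based RL methods in this line (Christiano et al., 2017) fit the reward by modeling the teacher's choice between two clips with a Bradley-Terry style softmax over summed predicted rewards. A toy sketch with a hypothetical stand-in reward model:

```python
import numpy as np

def preference_probability(r_hat, segment0, segment1):
    """Bradley-Terry style preference model used in preference-based RL
    (Christiano et al., 2017): the probability the teacher prefers
    segment 1 is a softmax over the summed predicted rewards of the
    two behavior clips. r_hat maps a (state, action) pair to a scalar;
    minimizing cross-entropy against teacher labels trains r_hat."""
    z0 = sum(r_hat(s, a) for s, a in segment0)
    z1 = sum(r_hat(s, a) for s, a in segment1)
    m = max(z0, z1)                          # numerical stability
    e0, e1 = np.exp(z0 - m), np.exp(z1 - m)
    return e1 / (e0 + e1)

# Hypothetical stand-in reward: penalizes action far from state.
r_hat = lambda s, a: -abs(s - a)
good = [(0.0, 0.0), (1.0, 1.0)]   # low-penalty clip
bad = [(0.0, 2.0), (1.0, 3.0)]    # high-penalty clip
p = preference_probability(r_hat, bad, good)
print(p > 0.5)  # True: the better clip is preferred under r_hat
```

Training the reward model amounts to maximizing the log of this probability for the clip the teacher actually chose, which is why the learned reward ends up aligned with the teacher's feedback.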
However, few previous works have studied the effects of exploration in preference-based RL. Inspired by the impact of exploration, we present RUNE: Reward UNcertainty for Exploration, a simple and efficient exploration method designed specifically for preference-based RL algorithms. Our main idea is to incorporate uncertainty from the learned reward function as an exploration bonus. Specifically, we capture the novelty of human feedback by measuring the reward uncertainty (e.g., the variance in predictions across an ensemble of reward functions). Since the reward functions are optimized to align with human feedback, exploration based on reward uncertainty may also reflect high uncertainty in the information from teacher feedback. We hope that the proposed intrinsic reward contains information from teacher feedback and can guide exploration that better aligns with human preferences. Our experimental results show that RUNE can improve both the sample- and feedback-efficiency of preference-based RL algorithms (Lee et al., 2021b). We highlight the main contributions of our paper below: • For preference-based RL, we propose a new exploration method based on uncertainty in learned reward functions. • For the first time, we show that exploration can improve the sample- and feedback-efficiency of preference-based RL algorithms. 2 RELATED WORK . Human-in-the-loop reinforcement learning. We mainly focus on one promising direction that utilizes human preferences (Akrour et al., 2011; Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021b; Leike et al., 2018; Pilarski et al., 2011; Wilson et al., 2012) to train RL agents. Christiano et al. (2017) scaled preference-based learning to utilize modern deep learning techniques, and Ibarz et al. (2018) improved the efficiency of this method by introducing additional forms of feedback such as demonstrations. Recently, Lee et al.
(2021b) proposed a feedback-efficient RL algorithm by utilizing off-policy learning and pre-training. To improve the sample- and feedback-efficiency of human-in-the-loop RL, previous works (Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021b; Leike et al., 2018) mainly focus on methods such as selecting more informative queries (Christiano et al., 2017) and pre-training of RL agents (Ibarz et al., 2018; Lee et al., 2021b). We further investigate the effects of different exploration methods in preference-based RL algorithms. We follow a common approach among exploration methods in RL: generating intrinsic rewards as an exploration bonus (Pathak et al., 2019). Instead of only using the reward function learned from human feedback as the RL training objective, we alter the reward to be a combination of the extrinsic reward (the learned reward) and an intrinsic reward (the exploration bonus). In particular, we present an exploration method whose intrinsic reward measures the disagreement among the learned reward models. Exploration in reinforcement learning. The trade-off between exploitation and exploration is a critical topic in RL. If agents do not explore enough, they may learn suboptimal actions. Exploration algorithms aim to encourage the RL agent to visit a wide range of states in the environment. Thrun (1992) showed that exploration methods that utilize the agent's history perform much better than random exploration. Hence, a common setup is to include an intrinsic reward as an exploration bonus. The intrinsic reward can be defined by count-based methods, which keep a count of previously visited states and reward the agent for visiting new states (Bellemare et al., 2016; Tang et al., 2017; Ostrovski et al., 2017). Another option is to use a curiosity bonus for the intrinsic reward (Houthooft et al., 2016; Pathak et al., 2017; Sekar et al., 2020).
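The count-based bonus mentioned above admits a short sketch in the tabular case, where the intrinsic reward decays as $1/\sqrt{N(s)}$ with the visit count (the exact form is one common illustrative choice; Bellemare et al. (2016) use pseudo-counts for large state spaces):

```python
from collections import Counter
import math

class CountBonus:
    """Count-based exploration sketch: reward 1/sqrt(N(s)) for visiting
    state s, so novel states earn a larger intrinsic bonus. Exact counts
    only work for small discrete state spaces; pseudo-counts generalize
    the idea to large ones."""
    def __init__(self):
        self.counts = Counter()

    def bonus(self, state):
        self.counts[state] += 1
        return 1.0 / math.sqrt(self.counts[state])

b = CountBonus()
first = b.bonus("s0")                           # 1.0 on the first visit
fourth = [b.bonus("s0") for _ in range(3)][-1]  # 0.5 after four visits
print(first, fourth)
```

The bonus shrinks smoothly rather than vanishing, so frequently visited states are deprioritized without being ruled out entirely.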
Curiosity represents how expected and unfamiliar the state is . One way to quantify curiosity is to predict the next state from current state and action Pathak et al . ( 2017 ) , then use prediction error as an estimate of curiosity . If the error is high , that means the next state is unfamiliar and should be explored more . Similarly , instead of predicting the next state , prediction errors from training a neural network to approximate a random function Burda et al . ( 2018 ) can serve as a valid estimate of curiosity . If there are multiple models , then curiosity can also be described as the disagreement between the models Pathak et al . ( 2019 ) . A high disagreement means that the models are unsure about the prediction and need to explore in that direction more . A different approach maximizes the entropy of visited states by incorporating state entropy into the intrinsic reward . State entropy can be estimated by approximating the state density distribution Hazan et al . ( 2019 ) ; Lee et al . ( 2019 ) , approximating the k-nearest neighbor entropy of a randomly initialized encoder Seo et al . ( 2021a ) , or using off-policy RL algorithms to maximize the k-nearest neighbor state entropy estimate in contrastive representation space for unsupervised pre-training Srinivas et al . ( 2020 ) ; Liu & Abbeel ( 2021 ) . These methods all encourage agents to explore diverse states . Our approach adds an intrinsic reward that drives exploration to preference-based RL algorithms . We take advantage of an ensemble of reward models in preference-based RL algorithms , which is not available in other traditional RL settings . To estimate novelty of states and actions , we utilize the disagreement between reward models for our intrinsic reward , in hope of encouraging exploration aligned to directions of human preferences . Trajectory generation in preference-based reinforcement learning . 
Previous works in preference-based reinforcement learning have investigated several methods to better explore diverse trajectories but close to current optimal policy Wirth et al . ( 2017 ) . One line of works computes agent ’ s stochastic policies that are slightly deviated from optimal policies . Christiano et al . ( 2017 ) uses Trust Region Policy Optimization ( TRPO ) Schulman et al . ( 2015 ) and synchronized A3C Mnih et al . ( 2016 ) . These RL algorithms define stochastic policies to ensure exploration of action space and deviations from optimal policies . However , these exploration methods based on stochastic RL algorithms does not include information from human preferences to drive exploration . Another line of works designs one or multiple criterion to select from multiple possible stochastic policy candidates . Wilson et al . ( 2012 ) proposes to sample several policies from posterior distribution of policy space after updating human preferences . However , such methods come a the cost of requiring many samples collected beforehand . While these methods similarly aims to reduce uncertainty in human preferences , RUNE uses a different metric to estimate such uncertainty through reward functions ensemble . This is different from previous works and is simple , scalable , and easy to implement . A different approach allows human to guide exploration by directly providing additional trajectories . Zucker et al . ( 2010 ) proposes a user-guided exploration method that shows samples of trajectories to human . Human can provide additional feedback to guide exploration . While this method receives exact information from human , it requires additional human labels , which are usually expensive and time-consuming to collect . RUNE however tries to extract information from human feedback revealed in learned reward functions , which doesn ’ t require additional human input .
With this work, the authors propose a Bayesian active-learning approach to the problem of reinforcement learning from preferences. To do this, they model the epistemic uncertainty of the reward function to intrinsically motivate the RL agents to explore. The solution demonstrates improved sample efficiency.
SP:4f4d66fe53cad3ee407f4f7cf3092d7bc3c355b9
Reward Uncertainty for Exploration in Preference-based Reinforcement Learning
Conveying complex objectives to reinforcement learning (RL) agents often requires meticulous reward engineering. Preference-based RL methods are able to learn a more flexible reward model by actively incorporating human feedback, i.e., a teacher's preferences between two clips of behavior. However, poor feedback-efficiency remains a problem in current preference-based RL algorithms, as tailored human feedback is very expensive. To handle this issue, previous methods have mainly focused on improving query selection and policy initialization. At the same time, recent exploration methods have proven to be a recipe for improving sample-efficiency in RL. We present an exploration method specifically for preference-based RL algorithms. Our main idea is to design an intrinsic reward that measures novelty based on the learned reward. Specifically, we utilize disagreement across an ensemble of learned reward models. Our intuition is that disagreement in the learned reward models reflects uncertainty in the tailored human feedback and could be useful for exploration. Our experiments show that reward-uncertainty exploration improves both the feedback- and sample-efficiency of preference-based RL algorithms on complex robot manipulation tasks from the Meta-World benchmark, compared with other existing exploration methods that measure the novelty of state visitation. 1 INTRODUCTION. In reinforcement learning (RL), the reward function specifies the correct objectives to RL agents. However, it is difficult and time-consuming to carefully design suitable reward functions for a variety of complex behaviors (e.g., cooking or book summarization (Wu et al., 2021)). Furthermore, if there are complicated social norms we want RL agents to understand and follow, conveying a reliable reward function that includes such information remains an open problem (Amodei et al., 2016; Hadfield-Menell et al., 2017).
Overall, engineering reward functions purely by human effort for all tasks remains a significant challenge. An alternative that resolves the challenge of reward engineering is preference-based RL (Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021b). Compared to the traditional RL setup, preference-based RL algorithms are able to teach RL agents without the need to design reward functions. Instead, the agent uses feedback, usually in the form of a (human) teacher's preferences between two behaviors, to learn the desired behaviors indicated by the teacher. Therefore, instead of using carefully designed rewards from the environment, the agent learns a more flexible reward function aligned to the teacher's feedback. However, preference-based RL usually requires a large amount of teacher feedback, which may be time-consuming or sometimes infeasible to collect. To improve feedback-efficiency, prior works have investigated several sampling strategies (Biyik & Sadigh, 2018; Sadigh et al., 2017; Biyik et al., 2020; Lee et al., 2021c). These methods aim to select more informative queries that improve the quality of the learned reward function while asking for less feedback from the teacher. Another line of work focuses on policy initialization. Ibarz et al. (2018) initialized the agent's policy with imitation learning from expert demonstrations, and Lee et al. (2021b) utilized unsupervised pre-training of RL agents before collecting teacher preferences, in the hope of learning diverse behaviors in a self-supervised way to reduce the total amount of human feedback. Exploration, in the context of standard RL, has addressed the problem of sample-efficiency (Stadie et al., 2015; Bellemare et al., 2016; Pathak et al., 2017; 2019; Liu & Abbeel, 2021; Seo et al., 2021b). When extrinsic rewards from the environment are limited, exploration has been demonstrated to allow RL agents to learn diverse behaviors.
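The pairwise-preference supervision described above is commonly formalized with a Bradley-Terry model, in which the probability that the teacher prefers one segment is a softmax over the segments' summed predicted rewards, and the reward model is fit by cross-entropy against the teacher's labels. The following is a minimal sketch of that common formulation, not the exact loss of any cited method; `r_hat`, the segment representation, and the query format are illustrative assumptions.

```python
import math

def preference_prob(r_hat, seg0, seg1):
    # P[teacher prefers seg1 over seg0] under a Bradley-Terry model:
    # a softmax over the summed predicted rewards of the two segments.
    s0 = sum(r_hat(s, a) for s, a in seg0)
    s1 = sum(r_hat(s, a) for s, a in seg1)
    return math.exp(s1) / (math.exp(s0) + math.exp(s1))

def preference_loss(r_hat, queries):
    # Cross-entropy between predicted and teacher-labelled preferences;
    # each query is (seg0, seg1, label) with label = 1 if seg1 is preferred.
    total = 0.0
    for seg0, seg1, label in queries:
        p1 = preference_prob(r_hat, seg0, seg1)
        total -= label * math.log(p1) + (1 - label) * math.log(1 - p1)
    return total / len(queries)
```

In practice the gradient of this loss is taken with respect to the parameters of `r_hat`, so segments whose ordering the model gets wrong push the learned reward toward the teacher's preferences.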
However, few previous works have studied the effects of exploration in preference-based RL. Inspired by the impact of exploration, we present RUNE: Reward UNcertainty for Exploration, a simple and efficient exploration method specifically for preference-based RL algorithms. Our main idea is to incorporate uncertainty from the learned reward function as an exploration bonus. Specifically, we capture the novelty of human feedback by measuring the reward uncertainty (e.g., the variance in predictions of an ensemble of reward functions). Since the reward function is optimized to align with human feedback, exploration based on reward uncertainty may also reflect high uncertainty in the information from teacher feedback. We hope that the proposed intrinsic reward contains information from teacher feedback and can guide exploration that better aligns with human preferences. Our experimental results show that RUNE can improve both the sample- and feedback-efficiency of preference-based RL algorithms (Lee et al., 2021b). We highlight the main contributions of our paper below: • For preference-based RL, we propose a new exploration method based on uncertainty in learned reward functions. • For the first time, we show that exploration can improve the sample- and feedback-efficiency of preference-based RL algorithms. 2 RELATED WORK. Human-in-the-loop reinforcement learning. We mainly focus on one promising direction that utilizes human preferences (Akrour et al., 2011; Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021b; Leike et al., 2018; Pilarski et al., 2011; Wilson et al., 2012) to train RL agents. Christiano et al. (2017) scaled preference-based learning to utilize modern deep learning techniques, and Ibarz et al. (2018) improved the efficiency of this method by introducing additional forms of feedback such as demonstrations. Recently, Lee et al.
(2021b) proposed a feedback-efficient RL algorithm that utilizes off-policy learning and pre-training. To improve the sample- and feedback-efficiency of human-in-the-loop RL, previous works (Christiano et al., 2017; Ibarz et al., 2018; Lee et al., 2021b; Leike et al., 2018) mainly focus on methods such as selecting more informative queries (Christiano et al., 2017) and pre-training of RL agents (Ibarz et al., 2018; Lee et al., 2021b). We further investigate the effects of different exploration methods in preference-based RL algorithms. We follow a common approach to exploration in RL: generating intrinsic rewards as an exploration bonus (Pathak et al., 2019). Instead of only using the reward function learned from human feedback as the RL training objective, we alter the reward function to be a combination of the extrinsic reward (the learned reward) and an intrinsic reward (the exploration bonus). In particular, we present an exploration method whose intrinsic reward measures the disagreement among the learned reward models. Exploration in reinforcement learning. The trade-off between exploitation and exploration is a critical topic in RL. If agents do not explore enough, they may learn suboptimal actions. Exploration algorithms aim to encourage the RL agent to visit a wide range of states in the environment. Thrun (1992) showed that exploration methods that utilize the agent's history perform much better than random exploration. Hence, a common setup is to include an intrinsic reward as an exploration bonus. The intrinsic reward can be defined by count-based methods, which keep count of previously visited states and reward the agent for visiting new states (Bellemare et al., 2016; Tang et al., 2017; Ostrovski et al., 2017). Another option is to use a curiosity bonus as the intrinsic reward (Houthooft et al., 2016; Pathak et al., 2017; Sekar et al., 2020).
Curiosity represents how unexpected and unfamiliar a state is. One way to quantify curiosity is to predict the next state from the current state and action (Pathak et al., 2017) and use the prediction error as an estimate of curiosity. If the error is high, the next state is unfamiliar and should be explored more. Similarly, instead of predicting the next state, the prediction error from training a neural network to approximate a random function (Burda et al., 2018) can serve as a valid estimate of curiosity. If there are multiple models, curiosity can also be described as the disagreement between the models (Pathak et al., 2019): high disagreement means that the models are unsure about the prediction and need to explore in that direction more. A different approach maximizes the entropy of visited states by incorporating state entropy into the intrinsic reward. State entropy can be estimated by approximating the state density distribution (Hazan et al., 2019; Lee et al., 2019), approximating the k-nearest-neighbor entropy in the representation space of a randomly initialized encoder (Seo et al., 2021a), or using off-policy RL algorithms to maximize a k-nearest-neighbor state-entropy estimate in a contrastive representation space for unsupervised pre-training (Srinivas et al., 2020; Liu & Abbeel, 2021). These methods all encourage agents to explore diverse states. Our approach adds an intrinsic reward that drives exploration to preference-based RL algorithms. We take advantage of the ensemble of reward models already present in preference-based RL algorithms, which is not available in other traditional RL settings. To estimate the novelty of states and actions, we use the disagreement between reward models as our intrinsic reward, in the hope of encouraging exploration aligned with the directions of human preferences. Trajectory generation in preference-based reinforcement learning.
Previous works in preference-based reinforcement learning have investigated several methods to explore trajectories that are diverse yet close to the current optimal policy (Wirth et al., 2017). One line of work computes stochastic agent policies that deviate slightly from the optimal policy. Christiano et al. (2017) use Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) and synchronized A3C (Mnih et al., 2016). These RL algorithms define stochastic policies to ensure exploration of the action space and deviation from the optimal policy. However, these exploration methods based on stochastic RL algorithms do not include information from human preferences to drive exploration. Another line of work designs one or more criteria to select among multiple possible stochastic policy candidates. Wilson et al. (2012) propose to sample several policies from the posterior distribution over policy space after updating on human preferences. However, such methods come at the cost of requiring many samples collected beforehand. While these methods similarly aim to reduce uncertainty in human preferences, RUNE uses a different metric to estimate such uncertainty through the reward-function ensemble. This is different from previous works and is simple, scalable, and easy to implement. A different approach allows humans to guide exploration by directly providing additional trajectories. Zucker et al. (2010) propose a user-guided exploration method that shows samples of trajectories to a human, who can then provide additional feedback to guide exploration. While this method receives exact information from the human, it requires additional human labels, which are usually expensive and time-consuming to collect. RUNE instead tries to extract the information from human feedback that is revealed in the learned reward functions, which does not require additional human input.
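As a concrete illustration of the reward shaping discussed above, combining an extrinsic term from the learned reward with an intrinsic ensemble-disagreement bonus, here is a minimal sketch. Using the ensemble mean as the extrinsic term and a fixed weight `beta` are assumptions for illustration, not details taken from this excerpt (the weighting schedule is a design choice).

```python
import statistics

def shaped_reward(reward_ensemble, state, action, beta):
    # Extrinsic term: the learned reward (here, the ensemble mean).
    # Intrinsic term: ensemble disagreement, i.e. the standard deviation
    # of the members' predictions. High disagreement marks state-action
    # pairs about which the teacher feedback so far says little.
    preds = [r(state, action) for r in reward_ensemble]
    extrinsic = statistics.mean(preds)
    intrinsic = statistics.pstdev(preds)
    return extrinsic + beta * intrinsic
```

Where the ensemble members agree, the bonus vanishes and the agent exploits the learned reward; where they disagree, the bonus steers exploration toward behaviors for which more preference information would be informative.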
The paper proposes an exploration method for "preference-based reinforcement learning" methods, where human feedback is incorporated into the training regime. The authors use an ensemble of learned reward models and add an intrinsic reward based on disagreement (or uncertainty). The authors test the idea on robotic manipulation tasks from Meta-World. The agent learns purely from the feedback of a teacher that provides a preference for one trajectory over another. The tasks presented are "door close", "door open" and "drawer open". The authors combine their exploration strategy (RUNE) with the preference-based learning method PEBBLE and compare it against PEBBLE with other exploration strategies. The results show that the proposed method provides some improvement over the others. The authors also compare with PEBBLE when using 700 feedback queries instead of 1000. The results show a minor improvement.
This work proposes an ensemble-based intrinsic reward to improve exploration in preference-based RL. The key idea is to incorporate uncertainty in teacher preferences as an intrinsic reward. An ensemble of reward functions is used to capture this uncertainty. The paper discusses experiments on three robot tasks with ablation studies to investigate the method's performance.
Human-Level Control without Server-Grade Hardware
Deep Q-Network (DQN) marked a major milestone for reinforcement learning, demonstrating for the first time that human-level control policies could be learned directly from raw visual inputs via reward maximization. Even years after its introduction, DQN remains highly relevant to the research community, since many of its innovations have been adopted by successor methods. Nevertheless, despite significant hardware advances in the interim, DQN's original Atari 2600 experiments remain extremely costly to replicate in full. This poses an immense barrier to researchers who cannot afford state-of-the-art hardware or lack access to large-scale cloud computing resources. To facilitate improved access to deep reinforcement learning research, we introduce a DQN implementation that leverages a novel concurrent and synchronized execution framework designed to maximally utilize a heterogeneous CPU-GPU desktop system. With just one NVIDIA GeForce GTX 1080 GPU, our implementation reduces the training time of a 200-million-frame Atari experiment from 25 hours to just 9 hours. The ideas introduced in our paper should generalize to a large number of off-policy deep reinforcement learning methods. 1 INTRODUCTION. Reinforcement learning (Sutton & Barto, 2018) has long grappled with the ramifications of the Curse of Dimensionality (Bellman, 1966), a phenomenon in which exact solution methods become hopelessly intractable in the face of high-dimensional state spaces. As such, Deep Q-Network (DQN) (Mnih et al., 2013; 2015) was heralded as a landmark achievement for the field, establishing "deep" methods as a promising avenue for controlling environments that emit rich sensory observations. Through an effective combination of Q-Learning (Watkins, 1989) and a deep convolutional neural architecture (Krizhevsky et al.
, 2012), DQN became the first algorithm to achieve human-level performance on a majority of the Atari 2600 games when learning directly from raw pixel inputs. In contrast to previous efforts to integrate neural networks into reinforcement learning (e.g., Tesauro, 1992; Riedmiller, 2005), DQN proved to be robust, efficient, and scalable. Deep reinforcement learning has consequently become an active area of research in recent years. Although DQN's performance on the Atari benchmark (Bellemare et al., 2013) has been surpassed by later methods (e.g., Hessel et al., 2018; Badia et al., 2020; Schrittwieser et al., 2020), the algorithm remains pertinent to ongoing deep reinforcement learning research. Its core elements (minibatched experience replay and a time-delayed target network) have been adopted by most off-policy deep reinforcement learning methods (e.g., Lillicrap et al., 2015; Fujimoto et al., 2018; Haarnoja et al., 2018). As such, the DQN algorithm is a good testbed for new ideas, since improvements to it are often directly transferable to state-of-the-art methods. Furthermore, its relatively straightforward implementation compared to modern successors, as well as its widely replicated results, have made it a reliable baseline for benchmarking and validating new methods. These factors have kept DQN crucial to the research community even years after its introduction. In spite of substantial improvements to computing hardware over nearly a decade, DQN is still expensive to train. The large computational cost stems from the need to conduct gradient-based optimization on a multimillion-parameter convolutional neural network. The original Atari 2600 experiments from Mnih et al. (2015), in particular, remain prohibitive for many to replicate. Agent training was conducted for 200 million frames on each of the 49 games considered, for a total of 9.8 billion frames (equivalent to about 5.2 years of real experience).
Conducting the entirety of these experiments is utterly infeasible without access to costly Graphics Processing Units ( GPUs ) , and can still take many weeks without access to a great number of them . This poses a significant barrier to deep reinforcement learning researchers , particularly those who lack substantial cloud computing resources . Given DQN ’ s importance as a testbed and a baseline , this barrier puts a majority of researchers at an unfair disadvantage when it comes to publishing their ideas . To foster improved accessibility to deep reinforcement learning research , we analyze the algorithmic structure of DQN and look for opportunities to reduce its runtime when executed on a standard CPU-GPU desktop system . In contrast to a separate line of inquiry into distributed DQN methods that focus on scaling to a large number of nodes ( e.g . Nair et al. , 2015 ; Ong et al. , 2015 ; Horgan et al. , 2018 ) , we specifically consider the challenges of optimizing performance for a resource-constrained local system . We develop a modified DQN implementation based on a novel framework of Concurrent Training and Synchronized Execution ; the framework is general and fits nicely into other target network-based methods as well . When trained on a single NVIDIA GeForce GTX 1080 GPU , our implementation reduces the runtime of a full Atari experiment ( 200 million frames ) from 25 hours to just 9 hours compared to a highly optimized baseline DQN implementation . At this rate , all 49 experiments from Mnih et al . ( 2015 ) can be replicated in a relatively short timeframe using highly affordable , off-the-shelf hardware . Our implementation achieves human- and DQN-level performance on a large majority of the games . We plan to publicly release the code after the submission period to aid other researchers in reproducing these results quickly with limited hardware requirements . 2 BACKGROUND . 
DQN can be understood as the combination of Q-Learning and deep convolutional neural networks, along with supporting infrastructure to make this combination stable. The neural networks' generalization ability helps DQN learn effective control policies in the face of high-dimensional sensory inputs (e.g., images) where classic reinforcement learning and dynamic programming would be intractable. We provide a brief summary of the DQN algorithm in our notation (Section 2.1) and then establish assumptions about the CPU-GPU hardware model for which we will optimize its runtime (Section 2.2).

2.1 DQN AND NOTATION

In essence, DQN inherits the same fundamental objective as Q-Learning: to estimate a function $Q : S \times A \mapsto \mathbb{R}$ such that acting according to a greedy policy $\pi(s) = \operatorname{argmax}_a Q(s, a)$ maximizes the agent's expected discounted return $\mathbb{E}_\pi\!\left[\sum_{t=1}^{\infty} \gamma^{t-1} r_t\right]$ for some $\gamma \in [0, 1]$ (Sutton & Barto, 2018). Here, the environment is defined as a Markov Decision Process (MDP) of the form $(S, A, T, R)$. The sets $S$ and $A$ contain the environment states and agent actions, respectively, that are permissible in the decision process. The agent executes an action $a_t$ given the state $s_t$ at the current timestep $t$ and receives a scalar reward $r_t := R(s_t, a_t)$, triggering a stochastic transition to a new state $s_{t+1} \in S$ with probability $T(s_t, a_t, s_{t+1})$. Whereas Q-Learning implements the Q-function as a lookup table, DQN implements it as a deep neural network parameterized by a vector $\theta$. Learning then amounts to a first-order minimization with respect to $\theta$ of a squared-error loss:

$$\mathcal{L}(\theta) := \frac{1}{2} \Big( r_t + \gamma \max_{a' \in A} Q(s_{t+1}, a'; \theta^-) - Q(s_t, a_t; \theta) \Big)^2 \qquad (1)$$

Rather than conducting updates immediately upon collecting a new experience (as Q-Learning does), DQN buffers each experience $(s_t, a_t, r_t, s_{t+1})$ in a replay memory $D$.
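To make Eq. (1) concrete, here is a minimal NumPy sketch of the minibatch loss, with the main and target networks' Q-values represented as plain arrays rather than network outputs (the function names are ours, purely illustrative):

```python
import numpy as np

def td_targets(rewards, next_q_target, gamma=0.99):
    # y_t = r_t + gamma * max_{a'} Q(s_{t+1}, a'; theta^-), with the max
    # taken over the frozen target network's Q-values for s_{t+1}.
    return rewards + gamma * next_q_target.max(axis=1)

def dqn_loss(q_main, actions, targets):
    # Mean over the minibatch of 0.5 * (y_t - Q(s_t, a_t; theta))^2,
    # selecting the Q-value of the action actually taken at each step.
    chosen = q_main[np.arange(len(actions)), actions]
    return 0.5 * np.mean((targets - chosen) ** 2)
```

In a full implementation, `q_main` would be differentiable with respect to θ while `targets` is treated as a constant, matching the semi-gradient update used by DQN.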
Every $F$ timesteps, the agent conducts a gradient update on a minibatch of replayed experiences (Lin, 1992) sampled randomly from $D$. This helps to circumvent the strong temporal correlations between successive experiences while also efficiently reusing samples (Mnih et al., 2015). For additional stability, the maximization in (1) is conducted using a stationary "target" network parameterized by a separate vector $\theta^-$. DQN updates the target network every $C$ timesteps by copying the parameters from the main network: i.e., $\theta^- \leftarrow \theta$. Following Mnih et al. (2015), we assume that each action $a_t$ is selected according to an $\epsilon$-greedy policy (Sutton & Barto, 2018). That is, the agent selects the greedy action $\operatorname{argmax}_a Q(s_t, a)$ with probability $1 - \epsilon_t$, where $\epsilon_t \in [0, 1]$, and an action randomly from $A$ otherwise. The $\epsilon$-greedy strategy linearly interpolates between the uniform-random and greedy policies, helping to balance exploration with exploitation (Sutton & Barto, 2018). In practice, it is common to start with $\epsilon_t = 1$ early in training and gradually reduce its value over time; we discuss the exact $\epsilon$-schedules used in our experiments in Section 5.

2.2 HARDWARE MODEL

Optimizing the performance of any algorithm necessarily requires some assumptions about the type of computer system on which it is being executed. In the interest of generality, we defer a discussion of our particular hardware specifications until Section 5. For now, it will be more useful to outline the general capabilities of the systems under consideration here. We define our abstract machine as a heterogeneous system that consists of two components: a Central Processing Unit ("CPU") and a coprocessor optimized for massively parallel computation ("GPU").¹ We assume that the GPU is suitable only for neural network operations: i.e., Q-value prediction (inference) and training (backpropagation).
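As an illustration, a linearly annealed ε-greedy action selector might be sketched as follows (the 1.0 → 0.1 schedule over the first million steps follows the original DQN paper; the function names and defaults are ours):

```python
import random

def epsilon(t, eps_start=1.0, eps_end=0.1, anneal_steps=1_000_000):
    # Linear interpolation from eps_start down to eps_end over the
    # first anneal_steps timesteps, then held constant at eps_end.
    frac = min(t / anneal_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

def epsilon_greedy(q_values, t, rng=random):
    # Explore (uniform-random action) with probability eps_t;
    # otherwise act greedily with respect to the Q-values.
    if rng.random() < epsilon(t):
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```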
All other faculties are to be handled by the CPU, including but not limited to sampling from the environment, managing the replay memory, and preprocessing input data for the neural network. We also assume that the CPU is capable of executing $W$ program threads simultaneously. As a result, the system can process up to $W$ CPU tasks and one GPU task in parallel. In practice, the original DQN algorithm only executes one task at a time, which is inefficient. The goal in the following sections is to modify the algorithm such that the machine's capabilities are fully utilized.

3 CONCURRENT TRAINING

The DQN algorithm repeatedly alternates between executing $F$ actions in its environment and then conducting a single training update on a minibatch of replayed experiences. While this is effective for finely interleaving data generation with learning, it is not efficient for the heterogeneous CPU-GPU systems we consider here. This is because either the CPU or the GPU is left idle at any given point in the process. Ideally, we would fill these idle intervals by ensuring that a CPU-intensive and a GPU-intensive task are executed in parallel at all times. Unfortunately, the original DQN algorithm cannot be refactored to permit this possibility. The dilemma is that execution and training are sequentially dependent on each other. To see this, suppose that the agent has just completed a gradient update at some timestep $t$ that produces the current network parameters $\theta$. The DQN agent would now interact with its environment for $F$ steps, computing the Q-values $Q(s_i, a_i; \theta)$ for $i \in \{t, \ldots, t + F - 1\}$ in the process. The next update, which produces a new set of parameters $\theta'$, is scheduled to occur at the beginning of timestep $t + F$. Note that we could not have conducted this update any earlier, since the Q-values for action selection depended on $\theta$ and not $\theta'$.
On the other hand, we cannot proceed with the next sequence of Q-values $Q(s_i, a_i; \theta')$ for $i \in \{t + F, \ldots, t + 2F - 1\}$, since these will depend on $\theta'$, which has not yet been prepared by the GPU. Sampling and training are interlocked processes as a consequence, making simultaneous execution impossible. Rather than attempting to preserve DQN's original control flow, let us instead consider a slight modification to it. The sequential dependency in DQN arises from the fact that the training procedure needs to overwrite $\theta$ with $\theta'$, but the sampling procedure requires $\theta$ for action selection. If these tasks relied on different sets of parameters, then we would be able to parallelize them. Quite fortunately, DQN already has a second parameter set available: its target network parameters $\theta^-$. We therefore propose to substitute $\theta^-$ as a surrogate for the main parameters $\theta$ during execution; that is, greedy actions are computed by $\operatorname{argmax}_a Q(s, a; \theta^-)$ during $\epsilon$-greedy exploration. The mild assumption here is that a policy derived from $\theta^-$ should perform comparably to a policy derived from $\theta$. Given that $\theta^-$ is a time-delayed copy of $\theta$ that differs by at most $C$ timesteps of experience, this assumption should not be problematic in practice.

¹ For ease of presentation, we refer to any such coprocessor as a Graphics Processing Unit (GPU). GPUs are presently the most common form of hardware acceleration for machine learning research; they were used by Mnih et al. (2015) and are used for our experiments too (Section 5). Of course, other matrix-optimized Application-Specific Integrated Circuits (ASICs) would be equally suitable; these may become more prevalent in the future. The optimizations we propose here are sufficiently general to apply to any of these possible implementations.
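The interlocking described above can be seen in a stripped-down sketch of DQN's original control flow (the callback names are ours and purely hypothetical; the point is only that the CPU-bound and GPU-bound steps never overlap in time):

```python
def serial_dqn_loop(total_steps, F, env_step, train_step):
    # Original DQN control flow: sampling and training strictly
    # alternate, so one device is always idle while the other works.
    for t in range(total_steps):
        env_step(t)            # CPU: select action with theta, store experience
        if (t + 1) % F == 0:
            train_step()       # GPU: one minibatch update, producing theta'
```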
The result of this subtle change is that execution and training can now be decoupled (Figure 1), at least until the target parameters $\theta^-$ must be updated. Instead of sampling $F$ experiences and then training on one minibatch, it is possible to sample $C$ experiences while concurrently training on $C/F$ minibatches using a separate program thread. Overall, the total computation is the same, but both the CPU and GPU are kept busy. After $C$ timesteps have elapsed, the two threads must synchronize to copy $\theta^-$ from $\theta$ as usual. To avoid a race condition between the threads, we also temporarily buffer the experiences collected by the sampler thread, transferring them to the replay memory $D$ only when the threads are synchronized. This ensures that $D$ does not change during training, which could otherwise produce non-deterministic results.

Related Work. We found no previous algorithms that utilize a technique similar to this type of concurrency, in which the target network parameters $\theta^-$ are used to break the dependency between sampling and training. Daley & Amato (2019) first demonstrated that DQN with grouped minibatch training—where $C/F$ minibatches of training are conducted every $C$ timesteps—could learn effectively; there, it was employed for the purpose of efficient λ-return calculation. However, unlike in our work, actions were still sampled using $\theta$, and training was not conducted concurrently. Distributed DQN methods that rely on a centralized training server (e.g., Nair et al., 2015; Ong et al., 2015; Horgan et al., 2018) could be viewed as an alternative form of concurrent training, since training and sampling occur simultaneously (although possibly on distinct physical systems). This type of concurrency is quite different from ours, since gradient computation is performed at the nodes, and the updates are applied to the parameters asynchronously and in a non-deterministic order.
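The steps above can be sketched as a two-thread loop. This is a minimal sketch under stated assumptions: network parameters are modeled as a plain dict, and `env_step`/`train_step` are hypothetical callbacks standing in for environment interaction and a minibatch gradient update.

```python
import copy
import threading

class ConcurrentTrainer:
    # Concurrent Training sketch: the sampler acts using the target
    # parameters theta^- while a separate thread trains theta; every C
    # timesteps the threads synchronize, staged experiences are flushed
    # into the replay memory D, and theta^- <- theta.
    def __init__(self, C, F):
        self.C, self.F = C, F
        self.theta = {"updates": 0}            # stand-in for network weights
        self.theta_minus = copy.deepcopy(self.theta)
        self.replay, self.staging = [], []

    def cycle(self, env_step, train_step):
        trainer = threading.Thread(target=lambda: [
            train_step(self.theta, self.replay)  # GPU thread: C/F minibatches on theta
            for _ in range(self.C // self.F)])
        trainer.start()
        for _ in range(self.C):                  # CPU thread: act using theta^-
            self.staging.append(env_step(self.theta_minus))
        trainer.join()                           # synchronize the two threads
        self.replay.extend(self.staging)         # flush staged experiences into D
        self.staging.clear()
        self.theta_minus = copy.deepcopy(self.theta)  # theta^- <- theta
```

Because the trainer thread touches only `theta` and `replay`, and the sampler thread touches only `theta_minus` and `staging`, the two threads share no mutable state between synchronization points, which is what makes the cycle deterministic.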
For these reasons, we do not expect that distributed methods would be efficient on the monolithic CPU-GPU system that we consider in our work (we elaborate on this point in Section 4).
The authors propose an implementation of DQN that focuses on maximizing the data/training throughput in a common CPU-GPU machine setting. They do so by (asynchronously) running inference on multiple environments at once (while synchronously blocking on the environment batch) and by using the DQN target network parameters for inference rather than the default, i.e., the latest parameters of the network. They show considerable improvements in speed while producing results comparable to previous DQN results on the Atari suite.
SP:caa27a2b60f9b1b3a38c0afe531ccee75ef2cbfd
Human-Level Control without Server-Grade Hardware
A new method for benchmarking RL algorithms like DQN, focusing on the Atari suite and the experiments from the seminal DQN paper. The focus is on achieving a speedup, allowing researchers with a modest setup (a PC with a GPU) to run the tests in reasonable time. The method combines concurrent data collection in multiple threads (with multiple environment instances) with training, and achieves a speedup of 2.7x compared to vanilla DQN. At the same time, the method slightly changes the nature of the algorithm through different action sampling (multiple environment instances, reminiscent of the Asynchronous Methods paper from 2016) and slightly different action selection (using the target network). The results are overall comparable to those in the original paper, but vary on a per-game basis, probably due to the mentioned changes, a slight change in hyperparameters, and the high variance of a single run.
SP:caa27a2b60f9b1b3a38c0afe531ccee75ef2cbfd
Human-Level Control without Server-Grade Hardware
Deep Q-Network ( DQN ) marked a major milestone for reinforcement learning , demonstrating for the first time that human-level control policies could be learned directly from raw visual inputs via reward maximization . Even years after its introduction , DQN remains highly relevant to the research community since many of its innovations have been adopted by successor methods . Nevertheless , despite significant hardware advances in the interim , DQN ’ s original Atari 2600 experiments remain extremely costly to replicate in full . This poses an immense barrier to researchers who can not afford state-of-the-art hardware or lack access to large-scale cloud computing resources . To facilitate improved access to deep reinforcement learning research , we introduce a DQN implementation that leverages a novel concurrent and synchronized execution framework designed to maximally utilize a heterogeneous CPU-GPU desktop system . With just one NVIDIA GeForce GTX 1080 GPU , our implementation reduces the training time of a 200-million-frame Atari experiment from 25 hours to just 9 hours . The ideas introduced in our paper should be generalizable to a large number of off-policy deep reinforcement learning methods . 1 INTRODUCTION . Reinforcement learning ( Sutton & Barto , 2018 ) has long grappled with the ramifications of the Curse of Dimensionality ( Bellman , 1966 ) , a phenomenon in which exact solution methods become hopelessly intractable in the face of high-dimensional state spaces . As such , Deep Q-Network ( DQN ) ( Mnih et al. , 2013 ; 2015 ) was heralded as a landmark achievement for the field , establishing “ deep ” methods as a promising avenue for controlling environments that emit rich sensory observations . Through an effective combination of Q-Learning ( Watkins , 1989 ) and a deep convolutional neural architecture ( Krizhevsky et al. 
, 2012 ) , DQN became the first algorithm to achieve human-level performance on a majority of the Atari 2600 games when learning directly from raw pixel inputs . In contrast to previous efforts to integrate neural networks into reinforcement learning ( e.g . Tesauro , 1992 ; Riedmiller , 2005 ) , DQN proved to be robust , efficient , and scalable . Deep reinforcement learning has consequently become an active area of research in recent years . Although DQN ’ s performance on the Atari benchmark ( Bellemare et al. , 2013 ) has been surpassed by later methods ( e.g . Hessel et al. , 2018 ; Badia et al. , 2020 ; Schrittwieser et al. , 2020 ) , the algorithm remains pertinent to ongoing deep reinforcement learning research . Its core elements—minibatched experience replay and a time-delayed target network—have been adopted by most off-policy deep reinforcement learning methods ( e.g . Lillicrap et al. , 2015 ; Fujimoto et al. , 2018 ; Haarnoja et al. , 2018 ) . As such , the DQN algorithm is a good testbed for new ideas , as improvements to it are often directly transferable to state-of-the-art methods . Furthermore , its relatively straightforward implementation compared to modern successors , as well as its widely replicated results , have made it a reliable baseline for benchmarking and validating new methods . These factors have made DQN crucial to the research community even years after its introduction . In spite of substantial improvements to computing hardware over nearly a decade , DQN is still expensive to train . The large computational cost stems from the need to conduct gradient-based optimization on a multimillion-parameter convolutional neural network . The original Atari 2600 experiments from Mnih et al . ( 2015 ) , in particular , remain prohibitive for many to replicate . Agent training was conducted for 200 million frames on each of the 49 games considered , for a total of 9.8 billion frames ( equivalent to about 5.2 years of real experience ) . 
Conducting the entirety of these experiments is utterly infeasible without access to costly Graphics Processing Units (GPUs), and can still take many weeks for those without access to a great number of them. This poses a significant barrier to deep reinforcement learning researchers, particularly those who lack substantial cloud computing resources. Given DQN's importance as a testbed and a baseline, this barrier puts a majority of researchers at an unfair disadvantage when it comes to publishing their ideas.

To foster improved accessibility to deep reinforcement learning research, we analyze the algorithmic structure of DQN and look for opportunities to reduce its runtime when executed on a standard CPU-GPU desktop system. In contrast to a separate line of inquiry into distributed DQN methods that focus on scaling to a large number of nodes (e.g. Nair et al., 2015; Ong et al., 2015; Horgan et al., 2018), we specifically consider the challenges of optimizing performance for a resource-constrained local system. We develop a modified DQN implementation based on a novel framework of Concurrent Training and Synchronized Execution; the framework is general and fits nicely into other target network-based methods as well. When trained on a single NVIDIA GeForce GTX 1080 GPU, our implementation reduces the runtime of a full Atari experiment (200 million frames) from 25 hours to just 9 hours compared to a highly optimized baseline DQN implementation. At this rate, all 49 experiments from Mnih et al. (2015) can be replicated in a relatively short timeframe using highly affordable, off-the-shelf hardware. Our implementation achieves human- and DQN-level performance on a large majority of the games. We plan to publicly release the code after the submission period to aid other researchers in reproducing these results quickly with limited hardware requirements.

2 BACKGROUND
DQN can be understood as the combination of Q-Learning and deep convolutional neural networks, along with supporting infrastructure to make this combination stable. The neural networks' generalization ability helps DQN learn effective control policies in the face of high-dimensional sensory inputs (e.g. images) where classic reinforcement learning and dynamic programming would be intractable. We provide a brief summary of the DQN algorithm in our notation (Section 2.1) and then establish assumptions about the CPU-GPU hardware model for which we will optimize its runtime (Section 2.2).

2.1 DQN AND NOTATION

In essence, DQN inherits the same fundamental objective as Q-Learning: to estimate a function Q : S × A → ℝ such that acting according to a greedy policy π(s) = argmax_a Q(s, a) maximizes the agent's expected discounted return E_π[∑_{t=1}^∞ γ^{t−1} r_t] for some γ ∈ [0, 1] (Sutton & Barto, 2018). Here, the environment is defined as a Markov Decision Process (MDP) of the form (S, A, T, R). The sets S and A contain the environment states and agent actions, respectively, that are permissible in the decision process. The agent executes an action a_t given the state s_t at the current timestep t and receives a scalar reward r_t := R(s_t, a_t), triggering a stochastic transition to a new state s_{t+1} ∈ S with probability T(s_t, a_t, s_{t+1}).

Whereas Q-Learning implements the Q-function as a lookup table, DQN implements it as a deep neural network parameterized by a vector θ. Learning then amounts to a first-order minimization with respect to θ of a squared-error loss:

L(θ) := ½ ( r_t + γ max_{a′∈A} Q(s_{t+1}, a′; θ⁻) − Q(s_t, a_t; θ) )²    (1)

Rather than conducting updates immediately upon collecting a new experience (as Q-Learning does), DQN buffers each experience (s_t, a_t, r_t, s_{t+1}) in a replay memory D.
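As a concrete illustration, the squared TD error of Eq. (1) can be sketched with stand-ins for the two networks. The callable interface below is our own illustrative assumption, not the paper's actual code; `q_main` and `q_target` map a state to a vector of per-action Q-values, with `q_target` playing the role of the frozen parameters θ⁻.

```python
import numpy as np

def dqn_loss(q_main, q_target, s, a, r, s_next, gamma=0.99):
    """Squared TD error of Eq. (1).

    q_main, q_target: callables mapping a state to a vector of per-action
    Q-values; q_target corresponds to the frozen target parameters theta^-.
    """
    # Bootstrapped target: r_t + gamma * max_a' Q(s_{t+1}, a'; theta^-)
    target = r + gamma * np.max(q_target(s_next))
    # Prediction from the main network: Q(s_t, a_t; theta)
    prediction = q_main(s)[a]
    return 0.5 * float(target - prediction) ** 2
```

In a full implementation the gradient of this loss with respect to θ would be taken through `prediction` only, with the bootstrapped `target` treated as a constant.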
Every F timesteps, the agent conducts a gradient update on a minibatch of replayed experiences (Lin, 1992) sampled randomly from D. This helps to circumvent the strong temporal correlations between successive experiences while also efficiently reusing samples (Mnih et al., 2015). For additional stability, the maximization in (1) is conducted using a stationary "target" network parameterized by a separate vector θ⁻. DQN updates the target network every C timesteps by copying the parameters from the main network: i.e. θ⁻ ← θ.

Following Mnih et al. (2015), we assume that each action a_t is selected according to an ε-greedy policy (Sutton & Barto, 2018). That is, the agent selects the greedy action argmax_a Q(s_t, a) with probability 1 − ε_t and an action uniformly at random from A otherwise, where ε_t ∈ [0, 1]. The ε-greedy strategy linearly interpolates between the uniform-random and greedy policies, helping to balance exploration with exploitation (Sutton & Barto, 2018). In practice, it is common to start with ε_t = 1 early in training and gradually reduce its value over time; we discuss the exact ε-schedules used in our experiments in Section 5.

2.2 HARDWARE MODEL

Optimizing the performance of any algorithm necessarily requires some assumptions about the type of computer system on which it is being executed. In the interest of generality, we defer a discussion of our particular hardware specifications until Section 5. For now, it will be more useful to outline the general capabilities of the systems under consideration. We define our abstract machine as a heterogeneous system that consists of two components: a Central Processing Unit ("CPU") and a coprocessor optimized for massively parallel computation ("GPU").¹ We assume that the GPU is suitable only for neural network operations: i.e. Q-value prediction (inference) and training (backpropagation).
All other faculties are to be handled by the CPU, including but not limited to sampling from the environment, managing the replay memory, and preprocessing input data for the neural network. We also assume that the CPU is capable of executing W program threads simultaneously. As a result, the system can process up to W CPU tasks and one GPU task in parallel. In practice, the original DQN algorithm executes only one task at a time, which is inefficient. The goal in the following sections is to modify the algorithm such that the machine's capabilities are fully utilized.

3 CONCURRENT TRAINING

The DQN algorithm repeatedly alternates between executing F actions in its environment and then conducting a single training update on a minibatch of replayed experiences. While this is effective for finely interleaving data generation with learning, it is not efficient for the heterogeneous CPU-GPU systems we consider here, because either the CPU or the GPU is left idle at any given point in the process. Ideally, we would fill these idle intervals by ensuring that a CPU-intensive task and a GPU-intensive task are executed in parallel at all times. Unfortunately, the original DQN algorithm cannot be refactored to permit this possibility. The dilemma is that execution and training are sequentially dependent on each other. To see this, suppose that the agent has just completed a gradient update at some timestep t that produces the current network parameters θ. The DQN agent would now interact with its environment for F steps, computing the Q-values Q(s_i, a_i; θ) for i ∈ {t, ..., t+F−1} in the process. The next update, which produces a new set of parameters θ′, is scheduled to occur at the beginning of timestep t+F. Note that we could not have conducted this update any earlier, since the Q-values for action selection depended on θ and not θ′.
On the other hand, we cannot proceed with the next sequence of Q-values Q(s_i, a_i; θ′) for i ∈ {t+F, ..., t+2F−1}, since these will depend on θ′, which has not yet been prepared by the GPU. Sampling and training are interlocked processes as a consequence, making simultaneous execution impossible.

Rather than attempting to preserve DQN's original control flow, let us instead consider a slight modification to it. The sequential dependency in DQN arises from the fact that the training procedure needs to overwrite θ with θ′, but the sampling procedure requires θ for action selection. If these tasks relied on different sets of parameters, then we would be able to parallelize them. Quite fortunately, DQN already has a second parameter set available: its target network parameters θ⁻. We therefore propose to substitute θ⁻ as a surrogate for the main parameters θ during execution; that is, greedy actions are computed by argmax_a Q(s, a; θ⁻) during ε-greedy exploration. The mild assumption here is that a policy derived from θ⁻ should perform comparably to a policy derived from θ. Given that θ⁻ is a time-delayed copy of θ that differs by at most C timesteps of experience, this assumption should not be problematic in practice.

¹For ease of presentation, we refer to any such coprocessor as a Graphics Processing Unit (GPU). GPUs are presently the most common form of hardware acceleration for machine learning research; they were used by Mnih et al. (2015) and are used for our experiments too (Section 5). Of course, other matrix-optimized Application-Specific Integrated Circuits (ASICs) would be equally suitable, and these may become more prevalent in the future. The optimizations we propose here are sufficiently general to apply to any of these possible implementations.
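The decoupling idea, acting with the frozen θ⁻ while the trainer updates θ, can be sketched as follows. The class and method names are our own illustrative choices, and the sketch elides the real environment, replay memory, and optimizer:

```python
import copy
import threading

class ConcurrentDQN:
    """Sketch of Concurrent Training: the sampler thread acts with respect to
    the frozen target parameters theta_minus, so the trainer thread is free to
    update the main parameters theta at the same time."""

    def __init__(self, theta):
        self.theta = theta                        # main parameters (trainer writes)
        self.theta_minus = copy.deepcopy(theta)   # target parameters (sampler reads)

    def run_interval(self, C, F, sample_step, train_step):
        """Sample C experiences while concurrently training on C/F minibatches."""
        sampler = threading.Thread(
            target=lambda: [sample_step(self.theta_minus) for _ in range(C)])
        trainer = threading.Thread(
            target=lambda: [train_step(self.theta) for _ in range(C // F)])
        sampler.start(); trainer.start()
        sampler.join(); trainer.join()
        # Synchronization point: theta_minus <- theta, as in the usual target update.
        self.theta_minus = copy.deepcopy(self.theta)
```

Because the sampler only ever reads `theta_minus` and the trainer only ever writes `theta`, the two threads do not contend for the same parameters between synchronization points.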
The result of this subtle change is that execution and training can now be decoupled (Figure 1), at least until the target parameters θ⁻ must be updated. Instead of sampling F experiences and then training on one minibatch, it is possible to sample C experiences while concurrently training on C/F minibatches using a separate program thread. Overall, the total computation is the same, but both the CPU and GPU are kept busy. After C timesteps have elapsed, the two threads must synchronize to copy θ⁻ from θ as usual. To avoid a race condition between the threads, we also temporarily buffer the experiences collected by the sampler thread, transferring them to the replay memory D only when the threads are synchronized. This ensures that D does not change during training, which could otherwise produce non-deterministic results.

Related Work. We found no previous algorithms that employ this type of concurrency, in which the target network parameters θ⁻ are used to break the dependency between sampling and training. Daley & Amato (2019) first demonstrated that DQN with grouped minibatch training, in which C/F minibatches of training are conducted every C timesteps, could learn effectively; there, the technique was employed for efficient λ-return calculation. However, unlike in our work, actions were still sampled using θ and training was not conducted concurrently. Distributed DQN methods that rely on a centralized training server (e.g. Nair et al., 2015; Ong et al., 2015; Horgan et al., 2018) could be viewed as an alternative form of concurrent training, since training and sampling occur simultaneously (although possibly on distinct physical systems). This type of concurrency is quite different from ours, since gradient computation is performed at the nodes, and the updates are applied to the parameters asynchronously and in a non-deterministic order.
For these reasons, we do not expect that distributed methods would be efficient on the monolithic CPU-GPU systems we consider in our work (we elaborate on this point in Section 4).
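The buffered-transfer scheme from Section 3, staging new experiences and moving them into the replay memory only at synchronization points so that D stays fixed during training, can be sketched as follows (the class and attribute names are ours, not the paper's):

```python
import collections

class SyncedReplay:
    """Sketch of the buffered-transfer idea: experiences collected during a
    concurrent interval go into a staging buffer, and are moved into the
    replay memory D only at the synchronization point (theta_minus <- theta),
    so D never changes while the trainer thread is sampling from it."""

    def __init__(self, capacity):
        self.memory = collections.deque(maxlen=capacity)  # the replay memory D
        self.staging = []                                 # interval-local buffer

    def add(self, experience):
        """Called by the sampler thread during a concurrent interval."""
        self.staging.append(experience)

    def synchronize(self):
        """Called at the synchronization point, alongside the target update."""
        self.memory.extend(self.staging)
        self.staging.clear()
```

A bounded `deque` mirrors DQN's fixed-capacity replay memory: once full, the oldest experiences are evicted automatically.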
DQN is known to be resource-intensive. This paper introduces an optimized version of DQN that reduces training time from 25 hours to 9 hours. The authors use two main techniques to improve throughput. First, they propose to select actions by computing the argmax over the target network; this decouples acting and training and allows the two operations to run concurrently. Second, they compute actions for multiple environments in parallel on the GPU.
Weighted Training for Cross-Task Learning
In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks. We show that TAWT is easy to implement, is computationally efficient, requires little hyperparameter tuning, and enjoys non-asymptotic learning-theoretic guarantees. The effectiveness of TAWT is corroborated through extensive experiments with BERT on four sequence tagging tasks in natural language processing (NLP), including part-of-speech (PoS) tagging, chunking, predicate detection, and named entity recognition (NER). As a byproduct, the proposed representation-based task distance allows one to reason in a theoretically principled way about several critical aspects of cross-task learning, such as the choice of the source data and the impact of fine-tuning.¹

1 INTRODUCTION

The state-of-the-art (SOTA) models in real-world applications rely increasingly on the usage of weak supervision signals (Pennington et al., 2014; Devlin et al., 2019; Liu et al., 2019). Among these, cross-task signals are one of the most widely used weak signals (Zamir et al., 2018; McCann et al., 2018). Despite their popularity, the benefits of cross-task signals are not well understood from a theoretical point of view, especially in the context of deep learning (He et al., 2021; Neyshabur et al., 2020), hence impeding the efficient usage of those signals. Previous work has adopted representation learning as a framework to understand the benefits of cross-task signals, where knowledge transfer is achieved by learning a representation shared across different tasks (Baxter, 2000; Maurer et al., 2016; Tripuraneni et al., 2020; 2021; Du et al., 2021). However, the existence of a shared representation is often too strong an assumption in practice.
Such an assumption also makes it difficult to reason about several critical aspects of cross-task learning, such as the quantification of the value of the source data and the impact of fine-tuning (Kalan & Fabian, 2020; Chua et al., 2021).

In this paper, we propose Target-Aware Weighted Training (TAWT), a weighted training algorithm for efficient cross-task learning. The algorithm can be easily applied to existing cross-task learning paradigms, such as pre-training and joint training, to boost their sample efficiency by assigning adaptive (i.e., trainable) weights to the source tasks or source samples. The weights are determined in a theoretically principled way by minimizing a representation-based task distance between the source and target tasks. Such a strategy is in sharp contrast to other weighting schemes common in machine learning, such as importance sampling in domain adaptation (Shimodaira, 2000; Cortes et al., 2010; Jiang & Zhai, 2007).

¹Our code is publicly available at http://cogcomp.org/page/publication_view/963.

The effectiveness of TAWT is verified via both theoretical analyses and empirical experiments. Using empirical process theory, we prove a non-asymptotic generalization bound for TAWT. The bound is a superposition of two vanishing terms and a term depending on the task distance, the latter of which is potentially negligible due to the re-weighting operation. We then conduct comprehensive experiments on four sequence tagging tasks in NLP: part-of-speech (PoS) tagging, chunking, predicate detection, and named entity recognition (NER). We demonstrate that TAWT further improves the performance of BERT (Devlin et al., 2019) in both pre-training and joint training for cross-task learning with limited target data, achieving an average absolute performance improvement of 3.1%.
As a byproduct, we propose a representation-based task distance that depends on the quality of the representation learned for each task, instead of assuming the existence of a single shared representation among all tasks. This finer-grained notion of task distance enables a better understanding of cross-task signals. For example, the representation-based task distance gives an interpretable measure of the value of the source data for the target task based on the discrepancy between their optimal representations. Such a measure is more informative than measuring the difference between tasks via the discrepancy of their task-specific functions (e.g. linear functions), as done in previous theoretical frameworks (Tripuraneni et al., 2020). Furthermore, the representation-based task distance clearly conveys the necessity of fine-tuning: if this distance is non-zero, then fine-tuning the representation becomes necessary, as the representation learned from the source data does not converge to the optimal target representation.

Finally, we compare our work with some recent attempts in similar directions. Liu et al. (2020) analyze the benefits of transfer learning by distinguishing source-specific features and transferable features in the source data. Based on the two types of features, they further propose a meta representation learning algorithm to encourage learning transferable and generalizable features. Instead of focusing on the distinction between two types of features, our algorithm and analyses are based on the representation-based task distance and are thus different. Chua et al. (2021) present a theoretical framework for analyzing representations derived from model-agnostic meta-learning (Finn et al., 2017), assuming all the tasks use approximately the same underlying representation.
In contrast, we do not impose any a priori assumption on the proximity between the source and target representations, and our algorithm seeks a weighting scheme that maximizes this proximity. Our work is also different from task weighting in curriculum learning. That line of work learns weights for a stochastic policy that decides which task to study next in a curriculum (Graves et al., 2017), whereas TAWT aims to learn better representations by assigning more suitable weights to the source tasks. Compared to heuristic weighting strategies in multi-task learning (Gong et al., 2019; Zhang & Yang, 2021), we aim to design a practical algorithm with theoretical guarantees for cross-task learning.

2 TAWT: TARGET-AWARE WEIGHTED TRAINING

2.1 PRELIMINARIES

Suppose we have T source tasks, represented by a collection of probability distributions {D_t}_{t=1}^T on the sample space X × Y, where X ⊆ ℝ^d is the feature space and Y ⊆ ℝ is the label space. For classification problems, we take Y to be a finite subset of ℝ. We have a single target task, whose probability distribution is denoted by D_0. For the t-th task, where t = 0, 1, ..., T, we observe n_t i.i.d. samples S_t = {(x_{ti}, y_{ti})}_{i=1}^{n_t} from D_t. Typically, the number of samples from the target task, n_0, is much smaller than the number of samples from the source tasks, and the goal is to use samples from the source tasks to aid the learning of the target task.

Let Φ be a collection of representations from the feature space X to some latent space Z ⊆ ℝ^r. We refer to Φ as the representation class. Let F be a collection of task-specific functions from the latent space Z to the label space Y. The complexity of the representation class Φ is usually much larger (i.e., it is more expressive) than that of the task-specific function class F. Given a bounded loss function ℓ : Y × Y → [0, 1], the optimal pair of representation and task-specific function for the t-th task is given by

(φ⋆_t, f⋆_t) ∈ argmin_{φ_t∈Φ, f_t∈F} L_t(φ_t, f_t),  L_t(φ_t, f_t) := E_{(X,Y)∼D_t}[ℓ(f_t ∘ φ_t(X), Y)].    (2.1)

Note that in general, the optimal representations of different tasks are different. For brevity, all proofs for the theory part are deferred to Appendix A.

2.2 DERIVATION OF TAWT

Under the assumption that the optimal representations {φ⋆_t}_{t=0}^T are similar, a representation learned using samples only from the source tasks would perform reasonably well on the target task. Consequently, we can devote the n_0 samples from the target task to learning only the task-specific function. This is a much easier problem, since the complexity of F is typically much smaller than that of Φ. This discussion leads to a simple yet immensely popular two-step procedure (Tripuraneni et al., 2020; Du et al., 2021). First, we solve a weighted empirical risk minimization problem with respect to the source tasks:

(φ̂, {f̂_t}_{t=1}^T) ∈ argmin_{φ∈Φ, {f_t}⊂F} ∑_{t=1}^T ω_t L̂_t(φ, f_t),  L̂_t(φ, f_t) := (1/n_t) ∑_{i=1}^{n_t} ℓ(f_t ∘ φ(x_{ti}), y_{ti}),    (2.2)

where ω ∈ ∆^{T−1} is a user-specified vector lying in the T-dimensional probability simplex (i.e., ∑_{t=1}^T ω_t = 1 and ω_t ≥ 0 for all 1 ≤ t ≤ T). In the second stage, we freeze the representation φ̂ and seek the task-specific function that minimizes the empirical risk with respect to the target task:

f̂_0 ∈ argmin_{f_0∈F} L̂_0(φ̂, f_0).    (2.3)

In practice, we can allow φ̂ to vary slightly (e.g., via fine-tuning) to get a performance boost. In the two-step procedure (2.2)–(2.3), the weight vector ω is usually taken to be a hyperparameter and is fixed during training. Popular choices include uniform weights (i.e., ω_t = 1/T) or weights proportional to the sample sizes (i.e., ω_t = n_t / ∑_{t′=1}^T n_{t′}) (Liu et al., 2019; Johnson & Khoshgoftaar, 2019).
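For illustration, the two fixed weighting choices and the weighted source objective of Eq. (2.2) can be sketched in toy form, operating on precomputed per-task empirical risks (the function names are ours):

```python
import numpy as np

def uniform_weights(T):
    """omega_t = 1/T: a common target-agnostic choice."""
    return np.full(T, 1.0 / T)

def size_proportional_weights(sample_sizes):
    """omega_t proportional to n_t: the other popular fixed choice."""
    n = np.asarray(sample_sizes, dtype=float)
    return n / n.sum()

def weighted_source_risk(weights, task_risks):
    """The objective of Eq. (2.2): sum_t omega_t * Lhat_t, where task_risks[t]
    is the empirical risk of source task t under the shared representation."""
    w, L = np.asarray(weights), np.asarray(task_risks)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "weights must lie on the simplex"
    return float(w @ L)
```

Both fixed choices are target-agnostic: neither consults the target task when setting ω, which is exactly the shortcoming TAWT addresses.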
This reveals the target-agnostic nature of the two-step procedure (2.2)–(2.3): the weights stay the same regardless of the level of proximity between the source tasks and the target task. Consider the following thought experiment: if we know a priori that the first source task D_1 is closer (compared to the other source tasks) to the target task D_0, then we would expect better performance by raising the importance of D_1, i.e., by making ω_1 larger. This thought experiment motivates a target-aware procedure that adaptively adjusts the weights based on the proximity of the source tasks to the target. A natural attempt at developing such a target-aware procedure is as follows:

min_{φ∈Φ, f_0∈F, ω∈∆^{T−1}} L̂_0(φ, f_0)  subject to  φ ∈ argmin_{ψ∈Φ} min_{{f_t}⊂F} ∑_{t=1}^T ω_t L̂_t(ψ, f_t).    (OPT1)

That is, we seek the best weights ω such that solving (2.2) with this choice of ω leads to the lowest training error when we subsequently solve (2.3). Despite its conceptual simplicity, the formulation (OPT1) is a complicated constrained optimization problem. Nevertheless, we demonstrate that it is possible to transform it into an unconstrained form to which a customized gradient-based optimizer can be applied. To do so, we let (φ^ω, {f^ω_t}) be any representation and task-specific functions that minimize ∑_{t=1}^T ω_t L̂_t(φ, f_t) over φ ∈ Φ and {f_t} ⊂ F. Equivalently, φ^ω minimizes ∑_{t=1}^T ω_t min_{f_t∈F} L̂_t(φ, f_t) over φ ∈ Φ. With this notation, we can rewrite (OPT1) as

min_{f_0∈F, ω∈∆^{T−1}} L̂_0(φ^ω, f_0).    (OPT2)

The gradient of the above objective with respect to the task-specific function, ∇_f L̂_0(φ^ω, f_0), is easy to calculate via back-propagation. The calculation of the gradient with respect to the weights requires more work, as φ^ω is an implicit function of ω. By the chain rule, we have ∂/∂ω_t L̂_0(φ^ω, f_0) = [∇_φ L̂_0(φ^ω, f_0)]⊤ ∂φ^ω/∂ω_t.
Since φ^ω is a minimizer of φ ↦ ∑_{t=1}^T ω_t min_{f_t∈F} L̂_t(φ, f_t), we have

F(φ^ω, ω) = 0 for all ω ∈ ∆^{T−1},  where  F(φ, ω) := ∇_φ ∑_{t=1}^T ω_t min_{f_t∈F} L̂_t(φ, f_t).    (2.4)

By the implicit function theorem, if F(·, ·) is everywhere differentiable and the matrix ∂F(φ, ω)/∂φ is invertible for any (φ, ω) near some (φ̃, ω̃) satisfying F(φ̃, ω̃) = 0, then we can conclude that the map ω ↦ φ^ω is a locally well-defined function near ω̃, and the derivative of this map is given by

∂φ^ω/∂ω_t = −( ∂F(φ, ω)/∂φ |_{φ=φ^ω} )^{−1} ( ∂F(φ, ω)/∂ω_t |_{φ=φ^ω} ).    (2.5)

To simplify the above expression, note that under regularity conditions, we can regard ∇_φ L̂_t(φ, f^ω_t) as a sub-gradient of the map φ ↦ min_{f_t∈F} L̂_t(φ, f_t). This means that we can write F(φ^ω, ω) = ∑_{t=1}^T ω_t ∇_φ L̂_t(φ^ω, f^ω_t). Plugging this expression back into (2.5) and recalling the expression for ∂L̂_0(φ^ω, f_0)/∂ω_t derived via the chain rule, we get

∂/∂ω_t L̂_0(φ^ω, f_0) = −[∇_φ L̂_0(φ^ω, f_0)]⊤ [ ∑_{t=1}^T ω_t ∇²_φ L̂_t(φ^ω, f^ω_t) ]^{−1} [∇_φ L̂_t(φ^ω, f^ω_t)].    (2.6)

Now that we have the expressions for the gradients of L̂_0(φ^ω, f_0) with respect to f_0 and ω, we can solve (OPT2) via a combination of alternating minimization and mirror descent.

Algorithm 1: Target-Aware Weighted Training (TAWT)
Input: Datasets {S_t}_{t=0}^T.
Output: Final pair of representation and task-specific function (φ̂, f̂_0) for the target task.
Initialize parameters ω^0 ∈ ∆^{T−1}, φ^0 ∈ Φ, {f^0_t}_{t=0}^T ⊂ F;
for k = 0, ..., K−1 do
    Starting from (φ^k, {f^k_t}_{t=1}^T), run a few steps of SGD to get (φ^{k+1}, {f^{k+1}_t}_{t=1}^T);
    Use the approximate gradient ∇_f L̂_0(φ^{k+1}, f_0) to run a few steps of SGD from f^k_0 to get f^{k+1}_0;
    Run one step of approximate mirror descent (2.7)–(2.8) from ω^k to get ω^{k+1};
end
return φ̂ = φ^K, f̂_0 = f^K_0
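A toy numerical sketch of evaluating the implicit gradient (2.6): since the combined Hessian is symmetric, a single linear solve suffices instead of forming its explicit inverse. The function name and interface below are our own assumptions, with gradients and Hessians taken with respect to a finite-dimensional representation parameter:

```python
import numpy as np

def implicit_weight_gradients(grad_L0, weights, hessians, grad_Lts):
    """Evaluate Eq. (2.6) for every source task t:
        g_t = -[grad_phi L0]^T [sum_t w_t H_t]^{-1} [grad_phi Lt],
    where hessians[t] is the (r x r) Hessian of task t's empirical risk and
    grad_Lts[t] is its gradient, both at the current representation."""
    H = sum(w * H_t for w, H_t in zip(weights, hessians))
    # One linear solve replaces the costly explicit inverse: v = H^{-1} grad_L0.
    v = np.linalg.solve(H, np.asarray(grad_L0, dtype=float))
    return [-float(v @ np.asarray(g, dtype=float)) for g in grad_Lts]
```

In the identity-Hessian case this reduces to the negative inner product between the target gradient and each source gradient, which previews the cosine-style simplification used later in the paper.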
To be more specific, suppose that at iteration k, the current weights, representation, and task-specific functions are ω^k, φ^k, and {f^k_t}_{t=0}^T, respectively. At this iteration, we conduct the following three steps:

1. Freeze ω^k. Starting from (φ^k, {f^k_t}_{t=1}^T), run a few steps of SGD on the objective function (φ, {f_t}_{t=1}^T) ↦ ∑_{t=1}^T ω^k_t L̂_t(φ, f_t) to get (φ^{k+1}, {f^{k+1}_t}_{t=1}^T), which is regarded as an approximation of (φ^{ω^k}, {f^{ω^k}_t}_{t=1}^T);

2. Freeze (φ^{k+1}, {f^{k+1}_t}_{t=1}^T). Approximate the gradient ∇_f L̂_0(φ^{ω^k}, f_0) by ∇_f L̂_0(φ^{k+1}, f_0). Using this approximate gradient, run a few steps of SGD from f^k_0 to get f^{k+1}_0;

3. Freeze (φ^{k+1}, {f^{k+1}_t}_{t=0}^T). Approximate the partial derivative ∂L̂_0(φ^{ω^k}, f^{k+1}_0)/∂ω_t by

g^k_t := −[∇_φ L̂_0(φ^{k+1}, f^{k+1}_0)]⊤ [ ∑_{t=1}^T ω_t ∇²_φ L̂_t(φ^{k+1}, f^{k+1}_t) ]^{−1} [∇_φ L̂_t(φ^{k+1}, f^{k+1}_t)].    (2.7)

Then run one step of mirror descent (with step size η_k) from ω^k to get ω^{k+1}:

ω^{k+1}_t = ω^k_t exp{−η_k g^k_t} / ∑_{t′=1}^T ω^k_{t′} exp{−η_k g^k_{t′}}.    (2.8)

We use mirror descent in (2.8), as it is a canonical generalization of Euclidean gradient descent to gradient descent on the probability simplex (Beck & Teboulle, 2003). Note that other optimization methods, such as projected gradient descent, can also be used here. The update rule (2.8) has a rather intuitive explanation. Note that g^k_t is a weighted dissimilarity measure between the gradients ∇_φ L̂_0 and ∇_φ L̂_t. This can further be regarded as a crude dissimilarity measure between the optimal representations of the target task and the t-th source task. The mirror descent update thus increases ω_t for the source tasks that are more similar to the target task. The overall procedure is summarized in Algorithm 1. A faithful implementation of the above steps would require a costly evaluation of the inverse of the Hessian matrix ∑_{t=1}^T ω_t ∇²_φ L̂_t(φ^{k+1}, f^{k+1}_t) ∈ ℝ^{r×r}.
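Independent of how the dissimilarities g^k_t are computed, the exponentiated-gradient update of Eq. (2.8) is only a few lines; a minimal sketch:

```python
import numpy as np

def mirror_descent_step(weights, grads, step_size):
    """One exponentiated-gradient (mirror descent) step on the simplex,
    as in Eq. (2.8): omega_t <- omega_t * exp(-eta * g_t), renormalized."""
    w = np.asarray(weights, dtype=float) * np.exp(-step_size * np.asarray(grads, dtype=float))
    return w / w.sum()
```

Because the update is multiplicative and renormalized, the iterates stay on the probability simplex automatically, with no projection step needed.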
In practice, we can bypass this step by replacing the Hessian-inverse-weighted dissimilarity measure (2.7) with a cosine-similarity-based dissimilarity measure (see Section 4 for details). The preceding derivation has focused on weighted pre-training, i.e., the target data is not used when defining the constraint set in (OPT1). It can be modified, mutatis mutandis, to handle weighted joint training, where we change (OPT1) to

min_{φ∈Φ, f_0∈F, ω∈∆^T} L̂_0(φ, f_0)  subject to  φ ∈ argmin_{ψ∈Φ} min_{{f_t}⊂F} ∑_{t=0}^T ω_t L̂_t(ψ, f_t).    (2.9)

Compared to (OPT1), we now also use the data from the target task when learning the representation φ, and thus there is an extra weight ω_0 on the target task. The algorithm can also be easily extended to handle multiple target tasks or to put weights on individual samples (as opposed to putting weights on tasks). The algorithm could further be applied to improve the efficiency of learning from cross-domain and cross-lingual signals; we leave such explorations for future work.
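One plausible form of the cosine-similarity-based replacement for (2.7) is the negative cosine similarity between the target and source representation gradients; this exact form is our assumption for illustration, and the paper's precise choice is given in its Section 4:

```python
import numpy as np

def cosine_dissimilarity(grad_target, grad_source):
    """Negative cosine similarity between the target-task gradient grad_phi L0
    and a source-task gradient grad_phi Lt: close to -1 when the gradients are
    aligned (similar tasks), 0 when orthogonal, +1 when opposed."""
    g0 = np.asarray(grad_target, dtype=float)
    gt = np.asarray(grad_source, dtype=float)
    return -float(g0 @ gt) / float(np.linalg.norm(g0) * np.linalg.norm(gt))
```

Compared with (2.7), this drops the Hessian-inverse weighting entirely and normalizes away gradient magnitudes, leaving only directional agreement to drive the mirror descent update.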
The paper proposes a new weighted training algorithm, TAWT, which learns target-aware weights on source tasks to make better use of cross-task signals. The authors verify the effectiveness of TAWT both theoretically and empirically. The paper may spark new research interest in multi-task learning and transfer learning.
Weighted Training for Cross-Task Learning
In this paper , we introduce Target-Aware Weighted Training ( TAWT ) , a weighted training algorithm for cross-task learning based on minimizing a representationbased task distance between the source and target tasks . We show that TAWT is easy to implement , is computationally efficient , requires little hyperparameter tuning , and enjoys non-asymptotic learning-theoretic guarantees . The effectiveness of TAWT is corroborated through extensive experiments with BERT on four sequence tagging tasks in natural language processing ( NLP ) , including part-of-speech ( PoS ) tagging , chunking , predicate detection , and named entity recognition ( NER ) . As a byproduct , the proposed representation-based task distance allows one to reason in a theoretically principled way about several critical aspects of cross-task learning , such as the choice of the source data and the impact of fine-tuning.1 1 INTRODUCTION . The state-of-the-art ( SOTA ) models in real-world applications rely increasingly on the usage of weak supervision signals ( Pennington et al. , 2014 ; Devlin et al. , 2019 ; Liu et al. , 2019 ) . Among these , cross-task signals are one of the most widely-used weak signals ( Zamir et al. , 2018 ; McCann et al. , 2018 ) . Despite their popularity , the benefits of cross-task signals are not well understood from a theoretical point of view , especially in the context of deep learning ( He et al. , 2021 ; Neyshabur et al. , 2020 ) , hence impeding the efficient usage of those signals . Previous work has adopted representation learning as a framework to understand the benefits of cross-task signals , where knowledge transfer is achieved by learning a representation shared across different tasks ( Baxter , 2000 ; Maurer et al. , 2016 ; Tripuraneni et al. , 2020 ; 2021 ; Du et al. , 2021 ) . However , the existence of a shared representation is often too strong an assumption in practice . 
Such an assumption also makes it difficult to reason about several critical aspects of cross-task learning , such as the quantification of the value of the source data and the impact of fine-tuning ( Kalan & Fabian , 2020 ; Chua et al. , 2021 ) . In this paper , we propose Target-Aware Weighted Training ( TAWT ) , a weighted training algorithm for efficient cross-task learning . The algorithm can be easily applied to existing cross-task learning paradigms , such as pre-training and joint training , to boost their sample efficiency by assigning adaptive ( i.e. , trainable ) weights on the source tasks or source samples . The weights are determined in a theoretically principled way by minimizing a representation-based task distance between the source and target tasks . Such a strategy is in sharp contrast to other weighting schemes common in machine learning , such as importance sampling in domain adaptation ( Shimodaira , 2000 ; Cortes et al. , 2010 ; Jiang & Zhai , 2007 ) . 1Our code is publicly available at http : //cogcomp.org/page/publication_view/963 . The effectiveness of TAWT is verified via both theoretical analyses and empirical experiments . Using empirical process theory , we prove a non-asymptotic generalization bound for TAWT . The bound is a superposition of two vanishing terms and a term depending on the task distance , the latter of which is potentially negligible due to the re-weighting operation . We then conduct comprehensive experiments on four sequence tagging tasks in NLP : part-of-speech ( PoS ) tagging , chunking , predicate detection , and named entity recognition ( NER ) . We demonstrate that TAWT further improves the performance of BERT ( Devlin et al. , 2019 ) in both pre-training and joint training for cross-task learning with limited target data , achieving an average absolute improvement of 3.1 % on the performance . 
As a byproduct, we propose a representation-based task distance that depends on the quality of each task's own representation, instead of assuming the existence of a single shared representation among all tasks. This finer-grained notion of task distance enables a better understanding of cross-task signals. For example, the representation-based task distance gives an interpretable measure of the value of the source data for the target task, based on the discrepancy between their optimal representations. Such a measure is more informative than measuring the difference between tasks via the discrepancy of their task-specific functions (e.g., linear functions), as done in previous theoretical frameworks (Tripuraneni et al., 2020). Furthermore, the representation-based task distance clearly conveys the necessity of fine-tuning: if this distance is non-zero, then fine-tuning the representation becomes necessary, as the representation learned from the source data does not converge to the optimal target representation. Finally, we compare our work with some recent attempts in similar directions. Liu et al. (2020) analyze the benefits of transfer learning by distinguishing source-specific features and transferable features in the source data. Based on the two types of features, they further propose a meta representation learning algorithm to encourage learning transferable and generalizable features. Instead of focusing on the distinction between two types of features, our algorithm and analyses are based on the representation-based task distance and are thus different. Chua et al. (2021) present a theoretical framework for analyzing representations derived from model-agnostic meta-learning (Finn et al., 2017), assuming all the tasks use approximately the same underlying representation.
In contrast, we do not impose any a priori assumption on the proximity between source and target representations; rather, our algorithm seeks a weighting scheme that maximizes this proximity. Our work is also different from task weighting in curriculum learning. That line of work learns suitable weights in a stochastic policy that decides which task to study next (Graves et al., 2017), while TAWT aims to learn better representations by assigning more suitable weights to source tasks. Compared to heuristic weighting strategies in multi-task learning (Gong et al., 2019; Zhang & Yang, 2021), we aim to design a practical algorithm with theoretical guarantees for cross-task learning.

2 TAWT: TARGET-AWARE WEIGHTED TRAINING

2.1 PRELIMINARIES

Suppose we have $T$ source tasks, represented by a collection of probability distributions $\{D_t\}_{t=1}^T$ on the sample space $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X} \subseteq \mathbb{R}^d$ is the feature space and $\mathcal{Y} \subseteq \mathbb{R}$ is the label space. For classification problems, we take $\mathcal{Y}$ to be a finite subset of $\mathbb{R}$. We have a single target task, whose probability distribution is denoted by $D_0$. For the $t$-th task, where $t = 0, 1, \ldots, T$, we observe $n_t$ i.i.d. samples $S_t = \{(x_{ti}, y_{ti})\}_{i=1}^{n_t}$ from $D_t$. Typically, the number of samples from the target task, $n_0$, is much smaller than the number of samples from the source tasks, and the goal is to use samples from the source tasks to aid the learning of the target task. Let $\Phi$ be a collection of representations from the feature space $\mathcal{X}$ to some latent space $\mathcal{Z} \subseteq \mathbb{R}^r$. We refer to $\Phi$ as the representation class. Let $\mathcal{F}$ be a collection of task-specific functions from the latent space $\mathcal{Z}$ to the label space $\mathcal{Y}$. The representation class $\Phi$ is usually much more complex (i.e., more expressive) than the task-specific function class $\mathcal{F}$. Given a bounded loss function $\ell: \mathcal{Y} \times \mathcal{Y} \to [0, 1]$, the optimal pair of representation and task-specific function for the $t$-th task is given by
$$(\phi_t^\star, f_t^\star) \in \operatorname*{argmin}_{\phi_t \in \Phi,\, f_t \in \mathcal{F}} L_t(\phi_t, f_t), \qquad L_t(\phi_t, f_t) := \mathbb{E}_{(X,Y) \sim D_t}\left[\ell(f_t \circ \phi_t(X), Y)\right]. \tag{2.1}$$
Note that in general, the optimal representations of different tasks are different. For brevity, all proofs for the theory part are deferred to Appx. A.

2.2 DERIVATION OF TAWT

Under the assumption that the optimal representations $\{\phi_t^\star\}_{t=0}^T$ are similar, a representation learned using samples only from the source tasks would perform reasonably well on the target task. Consequently, we can devote the $n_0$ samples from the target task to learning only the task-specific function. This is a much easier task, since the complexity of $\mathcal{F}$ is typically much smaller than that of $\Phi$. This discussion leads to a simple yet immensely popular two-step procedure (Tripuraneni et al., 2020; Du et al., 2021). First, we solve a weighted empirical risk minimization problem with respect to the source tasks:
$$(\hat\phi, \{\hat f_t\}_{t=1}^T) \in \operatorname*{argmin}_{\phi \in \Phi,\, \{f_t\} \subset \mathcal{F}} \sum_{t=1}^T \omega_t \hat L_t(\phi, f_t), \qquad \hat L_t(\phi, f_t) := \frac{1}{n_t} \sum_{i=1}^{n_t} \ell(f_t \circ \phi(x_{ti}), y_{ti}), \tag{2.2}$$
where $\omega \in \Delta^{T-1}$ is a user-specified vector lying in the $T$-dimensional probability simplex (i.e., $\sum_{t=1}^T \omega_t = 1$ and $\omega_t \ge 0$ for all $1 \le t \le T$). In the second stage, we freeze the representation $\hat\phi$ and seek the task-specific function that minimizes the empirical risk with respect to the target task:
$$\hat f_0 \in \operatorname*{argmin}_{f_0 \in \mathcal{F}} \hat L_0(\hat\phi, f_0). \tag{2.3}$$
In practice, we can allow $\hat\phi$ to vary slightly (e.g., via fine-tuning) to get a performance boost. In the two-step procedure (2.2)–(2.3), the weight vector $\omega$ is usually taken to be a hyperparameter and is fixed during training. Popular choices include uniform weights (i.e., $\omega_t = 1/T$) or weights proportional to the sample sizes (i.e., $\omega_t = n_t / \sum_{t'=1}^T n_{t'}$) (Liu et al., 2019; Johnson & Khoshgoftaar, 2019).
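The two-step procedure (2.2)–(2.3) can be sketched in a few lines of NumPy. The sketch below is illustrative only: the linear representation, linear task-specific heads, squared loss, synthetic data, and all hyperparameters are our own assumptions, not the paper's setup; the weights are fixed and uniform, as in the unweighted baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (our assumptions): linear representation phi: R^d -> R^r,
# linear heads f_t: R^r -> R, squared loss, synthetic data sharing Phi_true.
d, r, T = 5, 2, 2
Phi_true = rng.normal(size=(d, r))
heads_true = [rng.normal(size=r) for _ in range(T + 1)]  # task 0 is the target

def make_task(n, head):
    X = rng.normal(size=(n, d))
    return X, X @ Phi_true @ head + 0.01 * rng.normal(size=n)

target = make_task(20, heads_true[0])                  # n_0 is small
sources = [make_task(500, h) for h in heads_true[1:]]  # n_t are large

def grads(Phi, f, X, y):
    resid = X @ Phi @ f - y
    return X.T @ np.outer(resid, f) / len(y), (X @ Phi).T @ resid / len(y)

# Stage 1 (2.2): weighted ERM over the source tasks with fixed weights omega.
omega = np.full(T, 1.0 / T)
Phi = 0.1 * rng.normal(size=(d, r))
fs = [np.zeros(r) for _ in range(T)]
for _ in range(3000):
    gPhi_total = np.zeros_like(Phi)
    for w, f, (X, y) in zip(omega, fs, sources):
        gPhi, gf = grads(Phi, f, X, y)
        gPhi_total += w * gPhi   # representation gradient is weight-averaged
        f -= 0.05 * gf           # each head follows its own task's gradient
    Phi -= 0.05 * gPhi_total

# Stage 2 (2.3): freeze the representation, fit only the target head f_0.
f0 = np.zeros(r)
X0, y0 = target
for _ in range(3000):
    _, gf = grads(Phi, f0, X0, y0)
    f0 -= 0.05 * gf

final_loss = float(np.mean((X0 @ Phi @ f0 - y0) ** 2))
```

With two source heads spanning the latent space, the learned representation should transfer, so the small target sample suffices to fit $f_0$ and the target loss drops well below the variance of the target labels.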
This reveals the target-agnostic nature of the two-step procedure (2.2)–(2.3): the weights stay the same regardless of the proximity between the source tasks and the target task. Consider the following thought experiment: if we know a priori that the first source task $D_1$ is closer (compared to the other source tasks) to the target task $D_0$, then we would expect better performance by raising the importance of $D_1$, i.e., making $\omega_1$ larger. This thought experiment motivates a target-aware procedure that adaptively adjusts the weights based on the proximity of the source tasks to the target. A natural attempt at developing such a target-aware procedure is as follows:
$$\min_{\phi \in \Phi,\, f_0 \in \mathcal{F},\, \omega \in \Delta^{T-1}} \hat L_0(\phi, f_0) \quad \text{subject to} \quad \phi \in \operatorname*{argmin}_{\psi \in \Phi} \min_{\{f_t\} \subset \mathcal{F}} \sum_{t=1}^T \omega_t \hat L_t(\psi, f_t). \tag{OPT1}$$
That is, we seek the best weights $\omega$ such that solving (2.2) with this choice of $\omega$ leads to the lowest training error when we subsequently solve (2.3). Despite its conceptual simplicity, the formulation (OPT1) is a complicated constrained optimization problem. Nevertheless, we demonstrate that it is possible to transform it into an unconstrained form to which a customized gradient-based optimizer can be applied. To do so, we let $(\phi^\omega, \{f_t^\omega\})$ be any representation and task-specific functions that minimize $\sum_{t=1}^T \omega_t \hat L_t(\phi, f_t)$ over $\phi \in \Phi$ and $\{f_t\} \subset \mathcal{F}$. Equivalently, $\phi^\omega$ minimizes $\sum_{t=1}^T \omega_t \min_{f_t \in \mathcal{F}} \hat L_t(\phi, f_t)$ over $\phi \in \Phi$. With this notation, we can rewrite (OPT1) as
$$\min_{f_0 \in \mathcal{F},\, \omega \in \Delta^{T-1}} \hat L_0(\phi^\omega, f_0). \tag{OPT2}$$
The gradient of the above objective with respect to the task-specific function, $\nabla_f \hat L_0(\phi^\omega, f_0)$, is easy to calculate via back-propagation. The calculation of the gradient with respect to the weights requires more work, as $\phi^\omega$ is an implicit function of $\omega$. By the chain rule, we have $\frac{\partial}{\partial \omega_t} \hat L_0(\phi^\omega, f_0) = [\nabla_\phi \hat L_0(\phi^\omega, f_0)]^\top \frac{\partial}{\partial \omega_t} \phi^\omega$.
Since $\phi^\omega$ is a minimizer of $\phi \mapsto \sum_{t=1}^T \omega_t \min_{f_t \in \mathcal{F}} \hat L_t(\phi, f_t)$, we have
$$F(\phi^\omega, \omega) = 0, \;\; \forall \omega \in \Delta^{T-1}, \qquad F(\phi, \omega) := \nabla_\phi \sum_{t=1}^T \omega_t \min_{f_t \in \mathcal{F}} \hat L_t(\phi, f_t). \tag{2.4}$$
By the implicit function theorem, if $F(\cdot, \cdot)$ is everywhere differentiable and the matrix $\partial F(\phi, \omega) / \partial \phi$ is invertible for any $(\phi, \omega)$ near some $(\tilde\phi, \tilde\omega)$ satisfying $F(\tilde\phi, \tilde\omega) = 0$, then we can conclude that the map $\omega \mapsto \phi^\omega$ is a locally well-defined function near $\tilde\omega$, and the derivative of this map is given by
$$\frac{\partial}{\partial \omega_t} \phi^\omega = -\left( \frac{\partial F(\phi, \omega)}{\partial \phi} \bigg|_{\phi = \phi^\omega} \right)^{-1} \left( \frac{\partial F(\phi, \omega)}{\partial \omega_t} \bigg|_{\phi = \phi^\omega} \right). \tag{2.5}$$
To simplify the above expression, note that under regularity conditions, we can regard $\nabla_\phi \hat L_t(\phi, f_t^\omega)$ as a sub-gradient of the map $\phi \mapsto \min_{f_t \in \mathcal{F}} \hat L_t(\phi, f_t)$. This means that we can write $F(\phi^\omega, \omega) = \sum_{t=1}^T \omega_t \nabla_\phi \hat L_t(\phi^\omega, f_t^\omega)$. Plugging this expression back into (2.5) and recalling the expression for $\partial \hat L_0(\phi^\omega, f_0) / \partial \omega_t$ derived via the chain rule, we get
$$\frac{\partial}{\partial \omega_t} \hat L_0(\phi^\omega, f_0) = -\left[ \nabla_\phi \hat L_0(\phi^\omega, f_0) \right]^\top \left[ \sum_{t=1}^T \omega_t \nabla_\phi^2 \hat L_t(\phi^\omega, f_t^\omega) \right]^{-1} \left[ \nabla_\phi \hat L_t(\phi^\omega, f_t^\omega) \right]. \tag{2.6}$$
Now that we have expressions for the gradients of $\hat L_0(\phi^\omega, f_0)$ with respect to $f_0$ and $\omega$, we can solve (OPT2) via a combination of alternating minimization and mirror descent.

Algorithm 1: Target-Aware Weighted Training (TAWT)
  Input: Datasets $\{S_t\}_{t=0}^T$.
  Output: Final pair of representation and task-specific function $(\hat\phi, \hat f_0)$ for the target task.
  Initialize parameters $\omega^0 \in \Delta^{T-1}$, $\phi^0 \in \Phi$, $\{f_t^0\}_{t=0}^T \subset \mathcal{F}$;
  for $k = 0, \ldots, K-1$ do
    Starting from $(\phi^k, \{f_t^k\}_{t=1}^T)$, run a few steps of SGD to get $(\phi^{k+1}, \{f_t^{k+1}\}_{t=1}^T)$;
    Use the approximate gradient $\nabla_f \hat L_0(\phi^{k+1}, f_0)$ to run a few steps of SGD from $f_0^k$ to get $f_0^{k+1}$;
    Run one step of approximate mirror descent (2.7)–(2.8) from $\omega^k$ to get $\omega^{k+1}$;
  end
  return $\hat\phi = \phi^K$, $\hat f_0 = f_0^K$
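The implicit gradient (2.6) can be sanity-checked on a one-dimensional toy problem where everything is available in closed form. Below, each task loss is a quadratic $\hat L_t(\phi) = (\phi - c_t)^2/2$ with the task-specific functions already minimized out (this quadratic setup is our own illustration, not from the paper), so the inner minimizer and the implicit gradient have explicit formulas, which are compared against central finite differences.

```python
import numpy as np

# 1-D toy check of the implicit gradient (2.6). With L_t(phi) = (phi - c_t)^2 / 2,
# F(phi, omega) = sum_t omega_t (phi - c_t), so the inner minimizer is
#   phi^omega = (sum_t omega_t c_t) / (sum_t omega_t),
# and (2.6) reduces to
#   dL_0/d omega_t = -(phi^omega - c_0)(phi^omega - c_t) / sum_t omega_t.
c = np.array([1.0, -2.0, 3.0])   # c[0] is the target task, c[1:] the sources
omega = np.array([0.3, 0.7])

def phi_of(w):
    return (w @ c[1:]) / w.sum()          # inner minimizer phi^omega

def outer_loss(w):
    return 0.5 * (phi_of(w) - c[0]) ** 2  # L_0(phi^omega)

phi_w = phi_of(omega)
analytic = -(phi_w - c[0]) * (phi_w - c[1:]) / omega.sum()

# Independent check via central finite differences.
eps = 1e-6
numeric = np.array([
    (outer_loss(omega + eps * np.eye(2)[t])
     - outer_loss(omega - eps * np.eye(2)[t])) / (2 * eps)
    for t in range(2)
])
```

Here $\phi^\omega = 1.5$, and both routes give the gradient $(-1.75,\, 0.75)$: raising the weight of the closer source task ($c_1 = -2$ is actually farther, so its gradient is negative in the loss-decreasing sense only for $c_2$) changes the outer loss exactly as (2.6) predicts.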
To be more specific, suppose that at iteration $k$, the current weights, representation, and task-specific functions are $\omega^k$, $\phi^k$, and $\{f_t^k\}_{t=0}^T$, respectively. At this iteration, we conduct the following three steps:

1. Freeze $\omega^k$. Starting from $(\phi^k, \{f_t^k\}_{t=1}^T)$, run a few steps of SGD on the objective function $(\phi, \{f_t\}_{t=1}^T) \mapsto \sum_{t=1}^T \omega_t^k \hat L_t(\phi, f_t)$ to get $(\phi^{k+1}, \{f_t^{k+1}\}_{t=1}^T)$, which is regarded as an approximation of $(\phi^{\omega^k}, \{f_t^{\omega^k}\}_{t=1}^T)$;

2. Freeze $(\phi^{k+1}, \{f_t^{k+1}\}_{t=1}^T)$. Approximate the gradient $\nabla_f \hat L_0(\phi^{\omega^k}, f_0)$ by $\nabla_f \hat L_0(\phi^{k+1}, f_0)$. Using this approximate gradient, run a few steps of SGD from $f_0^k$ to get $f_0^{k+1}$;

3. Freeze $(\phi^{k+1}, \{f_t^{k+1}\}_{t=0}^T)$. Approximate the partial derivative $\partial \hat L_0(\phi^{\omega^k}, f_0^{k+1}) / \partial \omega_t$ by
$$g_t^k := -\left[ \nabla_\phi \hat L_0(\phi^{k+1}, f_0^{k+1}) \right]^\top \left[ \sum_{t=1}^T \omega_t \nabla_\phi^2 \hat L_t(\phi^{k+1}, f_t^{k+1}) \right]^{-1} \left[ \nabla_\phi \hat L_t(\phi^{k+1}, f_t^{k+1}) \right]. \tag{2.7}$$
Then run one step of mirror descent (with step size $\eta_k$) from $\omega^k$ to get $\omega^{k+1}$:
$$\omega_t^{k+1} = \frac{\omega_t^k \exp\{-\eta_k g_t^k\}}{\sum_{t'=1}^T \omega_{t'}^k \exp\{-\eta_k g_{t'}^k\}}. \tag{2.8}$$

We use mirror descent in (2.8), as it is a canonical generalization of Euclidean gradient descent to gradient descent on the probability simplex (Beck & Teboulle, 2003). Note that other optimization methods, such as projected gradient descent, can also be used here. The update rule (2.8) has a rather intuitive explanation. Note that $g_t^k$ is a weighted dissimilarity measure between the gradients $\nabla_\phi \hat L_0$ and $\nabla_\phi \hat L_t$. This can further be regarded as a crude dissimilarity measure between the optimal representations of the target task and the $t$-th source task. The mirror descent update raises $\omega_t$ where the target task and the $t$-th source task are more similar. The overall procedure is summarized in Algorithm 1. A faithful implementation of the above steps would require a costly evaluation of the inverse of the Hessian matrix $\sum_{t=1}^T \omega_t \nabla_\phi^2 \hat L_t(\phi^{k+1}, f_t^{k+1}) \in \mathbb{R}^{r \times r}$.
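The exponentiated-gradient update (2.8) itself is only a few lines of code. In the sketch below, the dissimilarity scores $g^k_t$ are made-up numbers purely to illustrate the direction of the update.

```python
import numpy as np

def mirror_descent_step(omega, g, eta):
    """One mirror descent step (2.8) on the probability simplex:
    omega_t <- omega_t * exp(-eta * g_t), followed by renormalization."""
    w = np.asarray(omega, dtype=float) * np.exp(-eta * np.asarray(g, dtype=float))
    return w / w.sum()

# Hypothetical dissimilarity scores g^k_t: source task 1 looks more similar
# to the target (smaller g), so its weight should grow.
omega = np.array([0.5, 0.5])
g = np.array([0.2, 1.0])
omega_new = mirror_descent_step(omega, g, eta=1.0)
```

Starting from uniform weights, the update shifts mass toward the more similar source task (here $\omega_1 \approx 0.69$) while keeping the weights on the simplex, since the normalization in (2.8) preserves nonnegativity and the sum-to-one constraint by construction.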
In practice, we can bypass this step by replacing² the Hessian-inverse-weighted dissimilarity measure (2.7) with a cosine-similarity-based dissimilarity measure (see Section 4 for details). The previous derivation has focused on weighted pre-training, i.e., the target data is not used when defining the constraint set in (OPT1). It can be modified, mutatis mutandis, to handle weighted joint training, where we change (OPT1) to
$$\min_{\phi \in \Phi,\, f_0 \in \mathcal{F},\, \omega \in \Delta^T} \hat L_0(\phi, f_0) \quad \text{subject to} \quad \phi \in \operatorname*{argmin}_{\psi \in \Phi} \min_{\{f_t\} \subset \mathcal{F}} \sum_{t=0}^T \omega_t \hat L_t(\psi, f_t). \tag{2.9}$$
Compared to (OPT1), we now also use the data from the target task when learning the representation $\phi$, and thus there is an extra weight $\omega_0$ on the target task. The algorithm can also be easily extended to handle multiple target tasks or to put weights on samples (as opposed to putting weights on tasks). The algorithm could also be applied to improve the efficiency of learning from cross-domain and cross-lingual signals, and we leave such explorations for future work.
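Since the text only states that a cosine-similarity-based measure replaces (2.7) (with details deferred to its Section 4), here is one plausible sketch: take $g_t$ to be the negative cosine similarity between the target and source gradients with respect to the representation parameters. The exact form used in the paper may differ; this function and its name are our own assumption for illustration.

```python
import numpy as np

def cosine_dissimilarity(grad_target, grad_source, eps=1e-12):
    """Hessian-free stand-in for (2.7): negative cosine similarity between
    the target and source gradients w.r.t. the representation parameters.
    (Illustrative sketch; the paper's exact Section 4 formula may differ.)"""
    g0, gt = np.ravel(grad_target), np.ravel(grad_source)
    return -float(g0 @ gt) / (np.linalg.norm(g0) * np.linalg.norm(gt) + eps)

# Aligned gradients give g_t near -1 (the weight rises under (2.8));
# opposed gradients give g_t near +1 (the weight falls).
g0 = np.array([1.0, 0.0])
aligned = cosine_dissimilarity(g0, np.array([2.0, 0.0]))
opposed = cosine_dissimilarity(g0, np.array([-1.0, 0.0]))
```

This keeps the sign convention of (2.7): more similar tasks get smaller $g_t$, so mirror descent increases their weights, while avoiding any $r \times r$ Hessian inversion.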
The paper discusses an approach that learns to weight data from different tasks in pre-training or multi-task learning. It also gives a VC/empirical-process-style analysis, which provides guarantees for the algorithm and insights about sample complexity. Finally, it describes experiments on a number of NLP problems.
SP:7adb704c7eb8687035f95c5a95463a5823d4e504
Weighted Training for Cross-Task Learning
This paper introduces Target-Aware Weighted Training (TAWT), a cross-task learning algorithm. In the two-step procedure popular for cross-task training, the weight vector $\omega$ defining the relative weight of each source task is usually exogenous (i.e., a hyperparameter). In contrast, in TAWT it is part of the optimization procedure, with the weights depending on the proximity (throughout training) of each source task to the target task (hence the “target-aware” name). The authors first derive an algorithm that enables learning when $\omega$ is made endogenous in the case of (weighted) pre-training. They then show how it can naturally be extended to weighted joint training by treating the target task as one of the tasks we learn representations from. They then provide theoretical performance guarantees for TAWT. The authors apply TAWT to 4 NLP tasks, using BERT as their base model. They evaluate their method in both full-data and limited-data settings on each target task, using the 3 other tasks as source tasks. They show that TAWT pre-training and TAWT joint training significantly outperform their unweighted counterparts across tasks. These improvements are more marked when target task data is scarce. They also show that weight initialization can be particularly important when the different source datasets have different numbers of examples (e.g., when training jointly on data-abundant source tasks and the data-scarce target task). This specific finding is not very novel, but it is great to see TAWT training also helping when initialization weights are chosen correctly. Critically, the authors also show the importance of varying task weights throughout training. Indeed, fixing the task weights to be the final weights from TAWT leads to worse results than training with TAWT. This suggests that it is important that task weights vary throughout the training process.
Zero-Shot Coordination via Semantic Relationships Between Actions and Observations
1 INTRODUCTION

Successful collaboration between agents requires coordination (Tomasello et al., 2005; Misyak et al., 2014; Kleiman-Weiner et al., 2016), which is challenging because coordinated strategies can be arbitrary (Lewis, 1969; Young, 1993; Lerer & Peysakhovich, 2018). A priori, one can neither deduce which side of the road to drive on, nor which utterance to use to refer to ♥ (Pal et al., 2020). In these cases coordination can arise from actors best responding to what others are already doing, i.e., following a convention. For example, Americans drive on the right side of the road and say “heart” to refer to ♥, while Japanese drive on the left and say “shinzo”. Yet in many situations prior conventions may not be available and agents may be faced with entirely novel situations or partners. In this work we study ways that agents may learn to leverage abstract relations between observations and actions to coordinate with agents they have had no experience interacting with before. To illustrate, consider the following situations where people can figure out how to coordinate without prior experience or shared conventions. Imagine a store that sells strawberries and blueberries. You want to buy strawberries but you don’t share any common language with the clerk. You are, however, wearing a red hat, and you wave the hat at the clerk to hint that the strawberries are what you want. The clerk has two baskets of strawberries remaining, and so you raise a single finger to indicate that you only want one of the baskets. The clerk produces a paper and a plastic bag, and you point to the paper bag to indicate that you want the paper one. These examples are so simple that they seem obvious: the red hat matches the color of the strawberries, the number of fingers matches the number of baskets you want, and you extend a finger in the direction of the desired packaging (Grice, 1975).
While obvious to people, who rely on a theory-of-mind in understanding others, we will show that these inferences remain a challenge for multi-agent reinforcement learning agents. Less obvious examples are common in the cognitive science literature. Consider the shapes in Fig. 1. When asked to assign the names “Boubo” and “Kiki” to the two shapes, people name the jagged object “Kiki” and the curvy object “Boubo” (Köhler, 1929). This finding is robust across different linguistic communities and cultures, and is even found in young children (Maurer et al., 2006). The causal explanation is that people match a “jaggedness” feature and a “curviness” feature across both the visual and auditory data. Across the above cases, there seems to be a generalized mechanism for mapping the features of a person’s action onto the features of the desired action. All are examples where, in the absence of norms or conventions, people minimize the distance between features when making a choice. This basic form of zero-shot coordination predates verbal behavior (Tomasello et al., 2007), and this capability has been hypothesized as a key precursor to more sophisticated language development and acquisition (Tomasello et al., 2005). Modeling these capacities is key for building machines that can robustly coordinate with other agents and with people (Kleiman-Weiner et al., 2016; Dafoe et al., 2020). However, as we will show, naively training reinforcement learning (RL) agents with self-play fails to learn to coordinate even in these obvious ways. Instead, they develop arbitrary private languages that are uninterpretable both to the same models trained with a different random seed and to human partners (Hu et al., 2020). For instance, in the examples above, they would be equally likely to wave a red hat to hint that they want strawberries as to indicate that they want blueberries.
These problems also emerge at scale in the decentralized partially observable Markov decision process (Dec-POMDP) benchmark Hanabi (Bard et al., 2019). When agents are trained with self-play using standard architectures, they do not develop strategies that take into account the correspondence between the features of the actions (colored and numbered cards) and the observation of the game state (other colored and numbered cards). Unfortunately, describing in closed form the kind of abstract knowledge that these agents lack is challenging. Rather than attempting to do so, we take a learning-based approach. Our aim is to build an agent with the capacity to develop these kinds of abstract correspondences during self-play such that it can robustly succeed during cross-play or during play with humans. Our contributions are as follows: (1) We extend Dec-POMDPs to allow actions and observations to be represented using shared features, and develop a novel human-interpretable environment for studying coordination in this setting. (2) We evaluate the role of neural network (NN) architectures, including feedforward, recurrent, and attention mechanisms, on both cross-play generalization and the ability to create human-interpretable conventions. (3) We show that an attention architecture which takes both the action and the observations as input allows the agent to exploit these semantic relationships for coordination, resulting in strong cross-play and human-compatible policies that outperform baseline ZSC methods. This model also demonstrates sophisticated coordination patterns that exploit mutual exclusivity and implicature, two well-known phenomena studied in cognitive science (Markman & Wachtel, 1988; Grice, 1975). 2 RELATED WORK. Cooperative MARL. The standard method for training multi-agent reinforcement learning (MARL) agents in fully cooperative, partially observable settings is self-play (SP).
However, the failure of SP policies in cross-play (XP) has recently been explored. Carroll et al. (2019) used grid-world MDPs to show that both SP and population-based training fail when paired with human collaborators. In Hanabi, agents trained via SP develop conventions that fail to generalize to independently trained agents from the same algorithm with different random seeds (Bard et al., 2019; Hu et al., 2020). Zero-Shot Coordination. To address this issue, Hu et al. (2020) formally introduced the zero-shot coordination (ZSC) framework, where the goal is to maximize the XP returns of independently trained agents, allowing them to coordinate at test time. Thus formulated, ZSC is an alternative to ad-hoc teamplay, a framework for measuring coordinated team success when faced with players of unknown behavior (Stone et al., 2010; Barrett et al., 2011), which can be formalized as playing a best response to a distribution of a-priori known agents. A few methods have attempted to address the ZSC framework. Other-Play (OP) (Hu et al., 2020) exploits the symmetries in a given Dec-POMDP to prevent agents from learning equivalent but mutually incompatible policies. OP prohibits arbitrary tie-breaking, thereby preventing equivalent conventions from forming. However, OP requires experimenter-coded symmetries, and discovering such symmetries is computationally challenging. In contrast, our learning-based approach requires no experimenter coding. Another recent method, Off-Belief Learning (OBL) (Hu et al., 2021), regularizes agents’ ability to make inferences based on the behavior of others. Compared to prior work on Hanabi, where SP scores were high but XP scores were near chance, both OP and OBL drastically improve XP scores and show promising preliminary results in play with people.
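The gap between self-play and cross-play scores described above can be made concrete with a small sketch. Here `run_episode` is a hypothetical stand-in for one cooperative episode (not an environment from the paper): a pair scores 1 only when both policies committed to the same arbitrary convention, mimicking SP agents that break ties randomly across seeds.

```python
def run_episode(policy_a, policy_b):
    """Hypothetical environment stub: the pair coordinates successfully
    only if both policies picked the same arbitrary convention
    (e.g. which signal means 'strawberry')."""
    return 1.0 if policy_a == policy_b else 0.0

def self_play_score(policies):
    # Each agent is paired with a copy of itself (the training condition).
    return sum(run_episode(p, p) for p in policies) / len(policies)

def cross_play_score(policies):
    # Average return over all ordered pairs from *different* training
    # runs: this is the quantity the ZSC framework asks us to maximize.
    n = len(policies)
    total = sum(run_episode(policies[i], policies[j])
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

# Four independent "seeds": two landed on convention 0, two on convention 1.
policies = [0, 0, 1, 1]
print(self_play_score(policies))   # 1.0: perfect coordination with self
print(cross_play_score(policies))  # 0.333...: near chance across seeds
```

Under this toy model, SP scores stay perfect while XP collapses toward chance, which is exactly the failure mode reported for standard architectures in Hanabi.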
However, neither of these algorithms can exploit the correspondences between the features of an action and the observation of the state, as we show in this work, unless this falls out of the environment dynamics. Attention for Modeling Input-Output Relationships. Attention (Vaswani et al., 2017; Bahdanau et al., 2015; Xu et al., 2016) is an important tool for large sequence models, and exploiting semantic relationships between inputs and outputs via an attention-based model has been studied in the deep learning literature. In natural language processing, this idea is commonly applied to question answering models (dos Santos et al., 2016; Tan et al., 2016; Yang et al., 2016). For instance, Yang et al. (2016) form a matrix that represents the semantic matching information of term pairs from a question-answer pair, and then use dot-product attention to model question term importance. For regression tasks, Kim et al. (2019) proposed Attentive Neural Processes (ANP), which use dot-product attention to allow each input location to attend to the relevant context points for the prediction, and applied ANP to vision problems. However, to our knowledge, we are the first to apply attention to exploit shared features of actions and observations in a Dec-POMDP setting for coordination. Human Coordination. Our work is also inspired by how humans coordinate in cooperative settings. Theory-of-mind, the mechanism people use to infer intentions from the actions of others, plays a key role in structuring coordination (Wu et al., 2021; Shum et al., 2019). In particular, Rational Speech Acts (RSA) is an influential model of pragmatic implicature (Frank & Goodman, 2012; Goodman & Stuhlmüller, 2013). At the heart of these approaches are probabilistic representations of belief, which allow for the modeling of uncertainty and recursive reasoning about each other’s beliefs, enabling higher-order mental state inferences.
This recursive reasoning step also underlies the cognitive hierarchy and level-K reasoning models, and is useful for explaining certain focal points (Camerer, 2011; Stahl & Wilson, 1995; Camerer et al., 2004). However, constructing recursive models of players’ beliefs and behavior is computationally expensive, as each agent must construct an exponentially growing number of models of each agent modeling each other agent. As a result, recursive models are often limited to one or two levels of recursion. Furthermore, none of these approaches can by itself take advantage of the shared features across actions and observations. 3 BACKGROUND. Dec-POMDPs. We use decentralized partially observable Markov decision processes (Dec-POMDPs) to formalize our setting (Nair et al., 2003). In a Dec-POMDP, each player i observes the underlying state s partially through an observation function Ω^i(s) ∈ O^i, and takes action a^i ∈ A^i. Players receive a common reward R(s, a) and the state follows the transition function T(s, a). The historical trajectory is denoted as τ = (s_1, a_1, ..., a_{t−1}, s_t). Player i’s action-observation history (AOH) is denoted as τ^i_t = (Ω^i(s_1), a^i_1, ..., a^i_{t−1}, Ω^i(s_t)). The policy for player i takes as input an AOH and outputs a distribution over actions, denoted by π^i(a^i | τ^i_t). The joint policy is denoted by π. Dot-Product Attention. Given a sequence of input vectors (x_1, ..., x_m), dot-product attention uses three weight matrices (Q, K, V) to obtain triples (Qx_i, Kx_i, Vx_i) for each i ∈ {1, ..., m}, called query, key, and value vectors, abbreviated (q_i, k_i, v_i). Next, for each i, j, dot-product attention computes logits using the dot products q_i · k_j. These logits are in turn used to compute an output matrix [softmax(q_i · k_1/√m, ..., q_i · k_m/√m) · v_j]_{i,j}. We denote this output matrix as Attention(x_1, ..., x_m).
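The dot-product attention operation defined in the background can be sketched in a few lines of NumPy. The weight matrices below are random placeholders, and, following the paper's notation, the logits are scaled by √m (the sequence length); note the standard Transformer formulation scales by the square root of the key dimension instead.

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Dot-product attention over m input vectors (rows of X): logits
    q_i·k_j are scaled by sqrt(m), softmax-normalized over j, and used
    to form a weighted sum of the value vectors."""
    m = X.shape[0]
    Q, K, V = X @ Wq.T, X @ Wk.T, X @ Wv.T       # query/key/value vectors, one row each
    logits = Q @ K.T / np.sqrt(m)                # (m, m): entry (i, j) is q_i·k_j/√m
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax over j
    return weights @ V                           # output row i attends over all v_j

rng = np.random.default_rng(0)
m, d = 4, 8                                      # sequence length, feature size
X = rng.normal(size=(m, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one attended output vector per input vector
```

In the paper's setting, the rows of X would be the shared-feature representations of the observations and the candidate actions, so each action can attend directly to semantically related parts of the observation.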
This paper explores several architectures for reinforcement-learning in the context of a newly proposed card game (Hinter-Guesser). A number of the architectures seem to have poor performance on the game. Moreover, whatever they learn is not compatible with what other learners learn, so that they cannot mutually understand Hints & Guesses. The stated aim is: "to build an agent with the capacity to develop these kinds of abstract correspondences during self-play such that they can robustly succeed during cross-play or during play with people." The architectures generally succeed (70-90%) at playing against themselves (with the shared learned policy) but perform badly when playing with independently trained agents, except when trained with the desired action as part of the input (unclear how this can be tested on unknown cases when 'answer' is given as part of the inputs).
SP:bfd682059e93864d927d5d614ed79a7f9c63f4b6
The paper proposes an approach for zero-shot coordination that learns to assign actions to observations in a semantically meaningful way. The authors show that semantic actions have a better inductive bias, leading to increased consistency in cross-play. Their proposed method is a learning-based approach in which an attention model jointly processes featurized representations of the observation and the action. The evaluations show that the agents produce human-compatible policies in a simplified Hanabi-like card game.
SP:bfd682059e93864d927d5d614ed79a7f9c63f4b6
Zero-Shot Coordination via Semantic Relationships Between Actions and Observations
1 INTRODUCTION . Successful collaboration between agents requires coordination ( Tomasello et al. , 2005 ; Misyak et al. , 2014 ; Kleiman-Weiner et al. , 2016 ) , which is challenging because coordinated strategies can be arbitrary ( Lewis , 1969 ; Young , 1993 ; Lerer & Peysakhovich , 2018 ) . A priori , one can neither deduce which side of the road to drive , nor which utterance to use to refer to ♥ ( Pal et al. , 2020 ) . In these cases coordination can arise from actors best responding to what others are already doing i.e. , following a convention . For example , Americans drive on the right side of the road and say “ heart ” to refer to ♥ while Japanese drive on the left and say “ shinzo ” . Yet in many situations prior conventions may not be available and agents may be faced with entirely novel situations or partners . In this work we study ways that agents may learn to leverage abstract relations between observations and actions to coordinate with agents they have had no experience interacting with before . To illustrate , consider the following situations where people can figure out how to coordinate without prior experienced or shared conventions . Imagine a store that sells strawberries and blueberries . You want to buy strawberries but you don ’ t share any common language with the clerk . You are however wearing a red hat and you wave the hat at the clerk to hint that the strawberries are what you want . The clerk has two baskets of strawberries remaining , and so you raise a single finger to indicate that you only want one of the baskets . The clerk produces a paper and plastic bag and you point to the paper bag to indicate that you want the paper one . These examples are so simple that they seem obvious : the red hat matches the colors of the strawberries , the number of fingers matches the number of baskets you want , and you extend a finger in the direction of the desired packaging ( Grice , 1975 ) . 
While obvious to people , who rely on a theory-of-mind in understanding others , we will show that these inferences remain a challenge for multi-agent reinforcement learning agents . Less obvious examples are common in the cognitive science literature . Consider the shapes in Fig . 1 . When asked to assign the names “ Boubo ” and “ Kiki ” to the two shapes people name the jagged object “ Kiki ” and the curvy object “ Boubo ” ( Köhler , 1929 ) . This finding is robust across different linguistic communities and cultures and is even found in young children ( Maurer et al. , 2006 ) . The causal explanation is that people match a “ jaggedness ” -feature and “ curvey ” -feature in both the visual and auditory data . Across the above these cases , there seem to be a generalized mechanism for mapping the features of the persons action with the features of the desired action . All are examples of where in the absence of norms or conventions , people minimize the distance between features when making a choice . This basic form of zero-shot coordination predates verbal behavior ( Tomasello et al. , 2007 ) and this capability has been hypothesized as a key predecessor to more sophisticated language development and acquisition ( Tomasello et al. , 2005 ) . Modeling these capacities is key for building machines that can robustly coordinate with other agents and with people ( Kleiman-Weiner et al. , 2016 ; Dafoe et al. , 2020 ) . However , as we will show , naively training reinforcement learning ( RL ) agents with self-play fails to learn to coordinate even in these obvious ways . Instead , they develop arbitrary private languages that are uninterpretable to both the same models trained with a different random seed as well as to human partners ( Hu et al. , 2020 ) . For instance in the examples above , they would be equally likely to wave a red-hat to hint they want strawberries as they would to indicate that they want blueberries . 
These problems also emerge at scale in the decentralized partially observable Markov decision process ( Dec-POMDP ) benchmark Hanabi ( Bard et al. , 2019 ) . When agents are trained with self-play using standard architectures they do not develop strategies that take into account the correspondence between the features of the actions ( colored and numbered cards ) and the observation of the game state ( other colored and numbered cards ) . Unfortunately , describing the kind of abstract knowledge that these agents lack in closed form is challenging . Rather than attempting to do so , we take a learning-based approach . Our aim is to build an agent with the capacity to develop these kinds of abstract correspondences during self-play such that they can robustly succeed during cross-play or during play with humans . Our contributions are as follows : ( 1 ) We extend Dec-POMDPs to allow actions and observations to be represented using shared features and develop a novel human-interpretable environment for studying coordination in this setting . ( 2 ) We evaluate the role of neural network ( NN ) architectures including feedforward , recurrent , and attention mechanisms on both cross-play generalization and ability to create human-interpretable conventions . ( 3 ) We show that an attention architecture which takes both the action and observations as input allows the agent to exploit the semantic relationships for coordination , resulting in strong cross-play and human compatible policies that outperform baseline ZSC methods . This model also demonstrates sophisticated coordination patterns that exploit mutual exclusivity and implicature , two well-known phenomena studied in cognitive science ( Markman & Wachtel , 1988 ; Grice , 1975 ) . 2 RELATED WORK . Cooperative MARL . The standard method for training multi-agent reinforcement learning ( MARL ) agents in fully cooperative , partially observable settings is self-play ( SP ) . 
However , the failure of SP policies in cross-play ( XP ) has been recently explored . Carroll et al . ( 2019 ) used grid-world MDPs to show that both SP and population-based training fail when paired with human collaborators . In Hanabi , agents trained via SP develop conventions that fail to generalize to independently trained agents from the same algorithm with different random seeds ( Bard et al. , 2019 ; Hu et al. , 2020 ) . Zero-Shot Coordination . To address this issue , Hu et al . ( 2020 ) formally introduced the zeroshot coordination ( ZSC ) framework , where the goal is to maximize the XP returns of independently trained agents , allowing them to coordinated at test time . Thus formulated , ZSC is an alternative to ad-hoc teamplay , a framework for measuring coordinated team success when faced with players with unknown behavior ( Stone et al. , 2010 ; Barrett et al. , 2011 ) , which can be formalized as playing a best response to a distribution of a-priori known agents . A few methods have attempted to address the ZSC framework . Other-Play ( OP ) ( Hu et al. , 2020 ) exploits the symmetries in a given Dec-POMDP to prevent agents from learning equivalent but mutually incompatible policies . OP prohibits arbitrary tie-breaking , thereby preventing equivalent conventions from forming . However , OP requires experimenter-coded symmetries , and discovering such symmetries is computationally challenging . In contrast , our learning based approach requires no experimenter-coding . Another recent method , Off-Belief Learning ( OBL ) ( Hu et al. , 2021 ) , regularizes agents ’ ability to make inferences based on the behavior of others . Compared to prior work on Hanabi where SP scores were high but XP scores were near chance , both of OP and OBL drastically improve XP scores and show promising preliminary results in play with people . 
However , neither of these algorithms can exploit the correspondences between features of an action and the observation of the state as we show in this work , unless this falls out of the environment dynamics . Attention for Modeling Input-Output Relationships . Attention ( Vaswani et al. , 2017 ; Bahdanau et al. , 2015 ; Xu et al. , 2016 ) is an important tool for large sequence models , and exploiting semantic relationships between inputs and outputs via an attention-based model has been studied in the deep learning literature . In natural language processing , such an idea is commonly applied to question answering models ( dos Santos et al. , 2016 ; Tan et al. , 2016 ; Yang et al. , 2016 ) . For instance , Yang et al . ( 2016 ) form a matrix that represents the semantic matching information of term pairs from a question and answer pair , and then use dot-product attention to model question term importance . For regression tasks , Kim et al . ( 2019 ) proposed Attentive Neural Processes ( ANP ) that use dot-product attention to allow each input location to attend to the relevant context points for the prediction , and applied ANP to vision problems . However , to our knowledge , we are the first to apply attention to exploit shared features of actions and observations in a Dec-POMDP setting for coordination . Human Coordination . Our work is also inspired by how humans coordinate in cooperative settings . Theory-of-mind , the mechanism people use to infer intentions from the actions of others , plays a key role in structuring coordination ( Wu et al. , 2021 ; Shum et al. , 2019 ) . In particular , Rational Speech Acts ( RSA ) is a influential model of pragmatic implicature ( Frank & Goodman , 2012 ; Goodman & Stuhlmüller , 2013 ) . At the heart of these approaches are probabilistic representations of belief which allow for the modeling of uncertainty and recursive reasoning about each others beliefs , enabling higher-order mental state inferences . 
This recursive reasoning step also underlies the cognitive hierarchy and level-K reasoning models, and is useful for explaining certain focal points (Camerer, 2011; Stahl & Wilson, 1995; Camerer et al., 2004). However, constructing recursive models of players' beliefs and behavior is computationally expensive, as each agent must construct an exponentially growing number of models of each agent modeling every other agent. As a result, recursive models are often limited to one or two levels of recursion. Furthermore, none of these approaches can by itself take advantage of the shared features across actions and observations. 3 BACKGROUND. Dec-POMDPs. We use decentralized partially observable Markov decision processes (Dec-POMDPs) to formalize our setting (Nair et al., 2003). In a Dec-POMDP, each player i observes the underlying state s partially through an observation function $\Omega^i(s) \in O^i$ and takes an action $a^i \in A^i$. Players receive a common reward R(s, a), and the state follows the transition function T(s, a). The historical trajectory is denoted $\tau = (s_1, a_1, \ldots, a_{t-1}, s_t)$. Player i's action-observation history (AOH) is denoted $\tau^i_t = (\Omega^i(s_1), a^i_1, \ldots, a^i_{t-1}, \Omega^i(s_t))$. The policy for player i takes an AOH as input and outputs a distribution over actions, denoted $\pi^i(a^i \mid \tau^i_t)$. The joint policy is denoted π. Dot-Product Attention. Given a sequence of input vectors $(x_1, \ldots, x_m)$, dot-product attention uses three weight matrices (Q, K, V) to obtain triples $(Qx_i, Kx_i, Vx_i)$ for each $i \in \{1, \ldots, m\}$, called the query, key, and value vectors, which we abbreviate as $(q_i, k_i, v_i)$. For each pair i, j, dot-product attention computes a logit via the dot product $q_i \cdot k_j$. These logits are normalized with a softmax and used to form, for each i, the output row $\sum_j \mathrm{softmax}(q_i \cdot k_1 / \sqrt{m}, \ldots, q_i \cdot k_m / \sqrt{m})_j \, v_j$. We denote the matrix of these output rows as $\mathrm{Attention}(x_1, \ldots, x_m)$.
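The dot-product attention computation above can be sketched in a few lines of NumPy. This is a hypothetical minimal version (single head, no masking; function and variable names are ours), with logits scaled by √m as written in the text:

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Minimal dot-product attention over m input vectors (rows of X).

    A sketch, not the paper's implementation: Wq, Wk, Wv play the roles of
    the weight matrices Q, K, V from the text.
    """
    m = X.shape[0]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv   # query, key, and value vectors (rows)
    logits = Q @ K.T / np.sqrt(m)      # logits[i, j] = q_i . k_j / sqrt(m)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # row-wise softmax over j
    return w @ V                       # output row i = sum_j w[i, j] * v_j
```

Each output row is thus a convex combination of the value vectors, weighted by how strongly query i matches each key j.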
The paper exploits semantic relationships between the features of observations and actions for zero-shot coordination in multi-agent reinforcement learning. It extends the decentralized partially observable Markov decision process (Dec-POMDP) formalism by representing observations and actions with shared features. Technically, it uses the attention mechanism from deep learning to exploit the semantic relationships between observations and actions. In addition, the paper develops a novel human-interpretable environment for analysis.
EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression
1 INTRODUCTION. Although Graph Neural Networks (GNNs) have achieved great success across different graph-related tasks, training GNNs on large graphs remains a long-standing challenge due to their extensive memory requirements (Kipf & Welling, 2017; Zhang & Chen, 2018; Cai et al., 2021b). The extensive memory consumption of GNNs stems from their recursive neighborhood aggregation scheme, in which each node aggregates the embeddings of its neighbors to update its own embedding at each layer. Thus, training an L-layer GNN requires storing all L layers' intermediate node embeddings in GPU memory for computing the gradients, which typically takes several times more memory than holding the node feature matrix (see Algorithm 1 for a detailed analysis). Hence, storing these node embeddings is the major memory bottleneck for training GNNs on large graphs. Most existing work on this problem can be roughly divided into two categories. First, some works propose to train GNNs with sampled subgraphs instead of the whole graph at each step, so that only node embeddings present in the current subgraph are retained in memory (Chiang et al., 2019; Hamilton et al., 2017; Zeng et al., 2020; Zou et al., 2019; Chen et al., 2018; Huang et al., 2018). Second, another line of work decouples neighborhood aggregation from prediction, either as a preprocessing step (Wu et al., 2019; Klicpera et al., 2018; Yu et al., 2020) or a post-processing step (Huang et al., 2020), so that the model simplifies to a Multi-Layer Perceptron (MLP) that can be trained on mini-batch data. In parallel, an orthogonal direction is to store only compressed node embeddings (i.e., activations) in memory for computing the gradients. Recent works propose to quantize the activations to lower numerical precision (e.g., 8-bit integers) during the forward pass (Chakrabarti & Moseley, 2019; Fu et al.
, 2020; Chen et al., 2021a; Evans & Aamodt, 2021). This framework successfully trims down the memory requirement for training Convolutional Neural Networks (CNNs) by a large margin, at the cost of additional time overhead and a loss of accuracy. Ideally, for real-world usage a training method should achieve a balanced trade-off among the following three aspects: 1. Space: it should enable training GNNs on large graphs with off-the-shelf hardware, such as GPUs and CPUs; 2. Speed: the time overhead should be acceptable, ideally as small as possible; 3. Model Performance: the loss of accuracy should be acceptable, ideally as small as possible. Although storing compressed activations successfully saves memory for CNNs, to our knowledge no existing work extends this direction to GNNs and evaluates the above trade-off to analyze its feasibility. Although the extension is conceptually straightforward, this direction is under-explored because it is notoriously difficult to implement in a way that fully leverages hardware potential. This dilemma stems from the fact that the tools needed to support this direction are usually missing from common graph learning packages. For example, operations in popular graph learning packages only support casting tensors down to 8-bit integers on GPUs, significantly limiting the memory saving (Paszke et al., 2019; Fey & Lenssen, 2019; Wang et al., 2019). As a result, previous GNN quantization works either emulate inference-time quantization via "simulated quantization" (Tailor et al., 2021; Zhao et al., 2020), or cannot practically be accelerated on GPUs (Feng et al., 2020). To unleash the potential of this direction, we provide a space-efficient GPU implementation supporting common GNN operations with compressed activations.
Equipped with our implementation, this paper asks the following open question: to what extent can we compress the activations, with both acceptable loss in accuracy and acceptable time overhead, for scalable GNN training? To answer it, we first explore two different types of compression methods: "quantization," which compresses the activations to lower numerical precision, and "random projection" (Achlioptas, 2001), which projects the activations into a low-dimensional space. Both of these simple strategies achieve near-lossless accuracy at a non-trivial compression ratio. For example, the loss in accuracy is negligible (0.2%) even under vanilla 2-bit quantization. However, we cannot push the memory saving further with either method alone, e.g., we cannot use a numerical precision below 1 bit. Considering that real-world graphs often contain hundreds of millions of nodes, our main goal is to trim down memory consumption to the maximum extent, as long as the other two aspects remain acceptable. We therefore explore combining random projection and quantization, dubbed "EXACT," to aggressively maximize the memory saving. EXACT essentially applies random projection and quantization sequentially to compress activations. Despite the superior memory saving, a natural follow-up question is whether the combination brings significantly worse model performance or larger time overhead. Following these questions, we make three major contributions: • We provide a space-efficient GPU implementation for training GNNs with compressed activations as an extension to PyTorch. Based on our implementation, we are the first to train GNNs with compressed activations and demonstrate the approach's potential for real-world usage. EXACT can complement existing studies, as it can be integrated with most existing solutions.
• We propose EXACT, a simple-yet-effective framework that applies random projection and quantization sequentially to activations for scalable GNN training. We show theoretically and experimentally that applying random projection and quantization sequentially has an "interaction" effect: from the model performance aspect, after random projection, applying quantization has only a limited further impact on accuracy; from the time aspect, EXACT runs comparably to, or even faster than, quantization alone. • Despite its simplicity, EXACT achieves non-trivial memory saving with both acceptable time overhead and acceptable loss in accuracy: EXACT can reduce the memory footprint of activations by up to 32× with roughly 0.5% loss in accuracy and 10–25% time overhead across models and datasets. Notably, EXACT trims the hardware requirement for training a full-batch GraphSAGE (Hamilton et al., 2017) on ogbn-products (Hu et al., 2020) from a 48GB GPU to a 12GB GPU, which is affordable to most research labs. 2 THE MEMORY CONSUMPTION OF GNNS. Background. Let G = (V, E) be an undirected graph, with V = (v_1, ..., v_{|V|}) and E = (e_1, ..., e_{|E|}) the sets of nodes and edges, respectively. Let $X \in \mathbb{R}^{|V| \times d}$ be the node feature matrix of the whole graph. The graph structure can be represented by an adjacency matrix $A \in \mathbb{R}^{|V| \times |V|}$, where $A_{i,j} = 1$ if $(v_i, v_j) \in E$ and $A_{i,j} = 0$ otherwise. In this work, we are mostly interested in the task of node classification, where the goal is to learn a representation $h_v$ for every $v \in V$ such that the label $y_v$ can be easily predicted. To obtain such representations, GNNs follow the neighborhood aggregation scheme: they recursively update the representation of a node by aggregating the representations of its neighbors.
Formally, the l-th layer of a GNN can be written as $h_v^{(l+1)} = \mathrm{UPDATE}\big(h_v^{(l)}, \bigoplus_{u \in N(v)} \mathrm{MSG}(h_u^{(l)}, h_v^{(l)})\big)$, where $h_v^{(l)}$ is the representation of node v at the l-th layer and N(v) denotes the neighbors of node v, not including v itself. The full table of notations can be found in Appendix A, Table 5. For node v, messages from its neighbors are computed by MSG(·), then aggregated by a permutation-invariant aggregation function ⊕, and the aggregated features at v are updated by UPDATE(·). For example, the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) layer can be defined as $H^{(l+1)} = \mathrm{ReLU}(\hat{A} H^{(l)} \Theta^{(l)})$, (1) where $H^{(l)}$ is the node embedding matrix consisting of all nodes' embeddings at the l-th layer, $H^{(0)} = X$, and $\Theta^{(l)}$ is the weight matrix of the l-th GCN layer. $\hat{A} = \tilde{D}^{-\frac{1}{2}} A \tilde{D}^{-\frac{1}{2}}$ is the normalized adjacency matrix, where $\tilde{D}$ is the degree matrix of A + I. We note that $\hat{A}$ is usually stored in a sparse matrix format. For GCN layers, the message and aggregation functions are fused into a Sparse-Dense Matrix Multiplication (SPMM) operation, namely $\hat{A} H^{(l)} \Theta^{(l)} = \mathrm{SPMM}(\hat{A}, H^{(l)} \Theta^{(l)})$. Thus, the computation graph of Equation 1 can be written as $H^{(l+1)} = \mathrm{ReLU}(\mathrm{SPMM}(\hat{A}, \mathrm{MM}(H^{(l)}, \Theta^{(l)})))$, (2) where MM(·, ·) denotes ordinary Dense-Dense Matrix Multiplication. Equation 2 resembles how GCNs are implemented in popular packages (Fey & Lenssen, 2019; Wang et al., 2019). Here we analyze the memory consumption of the forward pass of GCN, since most of the memory is occupied during the forward pass; a detailed analysis of the backward pass is given in Appendix C. Specifically, for an L-layer GCN, suppose the hidden dimensions of layers 0, ..., L−1 are all equal to D. The forward pass of GCN layers is shown in Appendix C, Algorithm 1.
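Equation 2 maps almost directly onto sparse matrix code. A minimal NumPy/SciPy sketch of one GCN layer follows, using the usual Kipf & Welling self-loop normalization; the function and variable names are ours, not the paper's:

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(A, H, Theta):
    """One GCN layer as in Eq. 2: H' = ReLU(SPMM(A_hat, MM(H, Theta))).

    A: binary adjacency matrix (scipy sparse, no self-loops);
    H: (|V|, D) node embeddings; Theta: (D, D') layer weights.
    """
    A_tilde = A + sp.eye(A.shape[0])              # add self-loops: A + I
    deg = np.asarray(A_tilde.sum(axis=1)).ravel() # degrees of A + I
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # normalized adjacency (sparse)
    J = H @ Theta                                 # dense MM: J = H Theta
    return np.maximum(A_hat @ J, 0.0)             # SPMM, then ReLU
```

The two intermediates stored here, H and J, are exactly the per-layer activation maps whose memory footprint the analysis below accounts for.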
For layer l, the following four variables are saved in memory. (1) The weight matrix $\Theta^{(l)} \in \mathbb{R}^{D \times D}$, whose shape is independent of the graph size and is generally negligible. (2) The normalized adjacency matrix $\hat{A}$ (in CSR format), whose space complexity is O(|V| + |E|). Note that only a single copy of $\hat{A}$ needs to be stored in memory, shared by all layers; thus its memory consumption is independent of the number of layers and is not the main bottleneck. (3) The intermediate result $J^{(l)} = \mathrm{MM}(H^{(l)}, \Theta^{(l)}) \in \mathbb{R}^{|V| \times D}$. For an L-layer GCN, storing $\{J^{(0)}, \ldots, J^{(L-1)}\}$ has O(L|V|D) space complexity, which is a main memory bottleneck. (4) The node embedding matrix $H^{(l)} \in \mathbb{R}^{|V| \times D}$. For an L-layer GCN, storing $\{H^{(0)}, \ldots, H^{(L-1)}\}$ also has O(L|V|D) space complexity and is likewise a main memory bottleneck. In this paper, we use the term "activation maps" to encompass H, J, and the activation maps of other commonly used layers/operations such as BatchNorm, ReLU, and Dropout.
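The O(L|V|D) bottleneck above can be made concrete with a quick back-of-the-envelope estimate. The node count and hidden size below are illustrative assumptions, not figures from the paper:

```python
def activation_memory_gb(num_nodes, hidden_dim, num_layers, bytes_per_elem=4.0):
    """Approximate memory to store {H^(l)} and {J^(l)} for an L-layer GCN.

    Each layer keeps two (|V|, D) matrices, so roughly 2 * L * |V| * D elements.
    """
    elems = 2 * num_layers * num_nodes * hidden_dim
    return elems * bytes_per_elem / 1024**3

# Illustrative numbers: a 3-layer GCN with D = 256 on a graph of 2.4M nodes.
full = activation_memory_gb(2_400_000, 256, 3)          # fp32: ~13.7 GB
tiny = activation_memory_gb(2_400_000, 256, 3, 4 / 32)  # 32x compression: ~0.43 GB
```

At this scale, full-precision activations alone exceed a 12GB GPU, while a 32× reduction brings them well within reach, which is consistent with the hardware-requirement claim in the introduction.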
This work attempts to train GNN models with reduced memory requirements. Normally, when training, we need to store activations for calculating the gradients in the backward pass, and we typically keep these at full (32-bit) precision. This work argues that we do not need to do this: instead, we can significantly reduce the memory footprint at training time by keeping low-precision activations for the backward pass. Two approaches are used for this: quantization and random projection.
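The two approaches mentioned above compose naturally, projecting activations down and then quantizing the result. The following NumPy sketch illustrates the idea; the function names, the Gaussian projection, and the per-row min-max quantization scheme are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def compress(H, r=16, bits=2, seed=0):
    """Sketch of sequential compression: random projection, then quantization.

    1. Project H (N, D) down to (N, r) with a Gaussian matrix R.
    2. Quantize each row of the projection to `bits` bits via min-max scaling.
    Returns the integer codes plus what is needed to decompress later.
    """
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((H.shape[1], r)) / np.sqrt(r)
    Z = H @ R                                          # low-dimensional projection
    lo = Z.min(axis=1, keepdims=True)
    scale = (Z.max(axis=1, keepdims=True) - lo) / (2**bits - 1)
    q = np.round((Z - lo) / np.maximum(scale, 1e-12)).astype(np.uint8)
    return q, lo, scale, R

def decompress(q, lo, scale, R):
    """Dequantize, then project back up with R^T (an approximate inverse)."""
    Z = q.astype(np.float64) * scale + lo
    return Z @ R.T
```

Only the small integer codes `q` (plus per-row scalars and the shared R) need to live in memory during the forward pass; the float activations are reconstructed approximately when gradients are computed.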
This paper explores the training of GNNs with compressed activation maps. It provides an optimized GPU implementation and comprehensively studies the trade-off among memory saving, time overhead, and accuracy drop. Experimental results show that the proposed framework can reduce the memory footprint of activations by up to 32× with only a 0.2-0.5% accuracy drop and 10-25% time overhead.
EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression
1 INTRODUCTION . Despite Graph Neural Networks ( GNNs ) have achieved great success across different graph-related tasks , training GNNs on large graphs is a long-standing challenge due to its extensive memory requirement ( Kipf & Welling , 2017 ; Zhang & Chen , 2018 ; Cai et al. , 2021b ) . The extensive memory consumption of GNN stems from its recursive neighborhood aggregation scheme , where each node aggregates embeddings of its neighbors to update its new embedding at each layer . Thus , training an L-layer GNN requires storing all L layers ’ intermediate node embeddings in GPU memory for computing the gradients , and this typically adds several times more memory than holding the node feature matrix ( see Algorithm 1 for a detailed analysis ) . Hence , storing these node embeddings is the major memory bottleneck for training GNNs on large graphs . Most of the existing works towards this problem can be roughly divided into two categories . First , some works propose to train GNNs with sampled subgraphs instead of the whole graph at each step . In this way , only node embeddings that are present in the current subgraph will be retained in memory ( Chiang et al. , 2019 ; Hamilton et al. , 2017 ; Zeng et al. , 2020 ; Zou et al. , 2019 ; Chen et al. , 2018 ; Huang et al. , 2018 ) . Second , another line of work tries to decouple the neighborhood aggregation from prediction , either as a preprocessing step ( Wu et al. , 2019 ; Klicpera et al. , 2018 ; Yu et al. , 2020 ) or post-processing step ( Huang et al. , 2020 ) , where the model is simplified as the Multi-Layer Perceptron ( MLP ) that can be trained with mini-batch data . In parallel , another orthogonal direction is to store only the compressed node embeddings ( i.e. , activations ) in memory for computing the gradients . Recent works propose to quantize the activations in lower numerical precision ( e.g. , using 8-bit integer ) during the forward pass ( Chakrabarti & Moseley , 2019 ; Fu et al. 
, 2020 ; Chen et al. , 2021a ; Evans & Aamodt , 2021 ) . This framework successfully trims down the memory requirement for training Convolutional Neural Networks ( CNNs ) by a large margin , at the cost of additional time overhead and loss of accuracy . Ideally , the real-world usage requires the training method should achieve a balanced trade-off among the following three aspects : 1 . Space . It should enable to train GNNs on large graphs using off-the-shelf hardwares , such as GPUs and CPUs ; 2 . Speed . The time overhead should be acceptable , ideally as small as possible ; 3 . Model Performance . The loss of accuracy should be acceptable , ideally as small as possible . Although storing the compressed activations successfully saves the memory for CNNs , up to our knowledge , there is no existing work extends this direction to GNNs and evaluates the mentioned trade-off for analyzing its feasibility . Despite the extension is conceptually straightforward , this direction is less-explored since it can be notoriously difficult to implement to fully leverage hardware potentials . This dilemma stems from the fact that the necessary tools for supporting this direction is usually missed in common graph learning packages . For example , operations in popular graph learning packages only support casting tensors down to 8-bit integer on GPUs , significantly limiting the memory saving ( Paszke et al. , 2019 ; Fey & Lenssen , 2019 ; Wang et al. , 2019 ) . As a result , previous GNN quantization works either emulates inference-time quantization via “ simulated quantization ” ( Tailor et al. , 2021 ; Zhao et al. , 2020 ) , or is impractical to use GPUs for accelerating ( Feng et al. , 2020 ) . To unleash the potential of this direction , we provide a space-efficient GPU implementation for supporting common operations in GNNs with compressed activations . 
Equipped with our implementation , this paper asks the following open question : To what extent can we compress the activations with both acceptable loss in accuracy and time overhead , for scalable GNN training ? To answer the open question , we first explore two different types of compression methods . Namely , one is “ quantization ” that compresses the activations into lower numerical precision . The other one is called “ random projection ” ( Achlioptas , 2001 ) that projects the activations into low-dimensional space . Both these two simple strategies can achieve near-lossless accuracy at a non-trivial compression ratio . For example , the loss in accuracy is negligible ( 0.2 % ) even under the vanilla 2-bit quantization . However , we can not further push forward the memory saving by these two methods , e.g. , we can not use a numerical precision below 1-bit . Considering that the real-world graphs often contain hundreds of millions of nodes , our main goal is to trim down the memory consumption to the maximum extent among the three aspects , as long as the other two are acceptable . We then naturally explore the direction of combining random projection and quantization , dubbed “ EXACT ” , to aggressively maximize the memory saving . EXACT is essentially applies random projection and quantization sequentially for compressing activations . Despite the superior memory saving , another following question is that , whether the combination bring significantly worse model performance and larger time overhead ? Following the questions , we make three major contributions as follows : • We provide a space-efficient GPU implementation for training GNNs with compressed activations as an extension for Pytorch . Based on our implementation , we are the first one training GNNs with compressed activations and demonstrating its potential for real-world usage . EXACT can complement the existing studies , as it can be integrated with most of existing solutions . 
• We propose EXACT , a simple-yet-effective framework which applies random projection and quantization sequentially on activation for scalable GNN training . We theoretically and experimentally show that applying random projection and quantization sequentially have an “ interaction ” effect . Namely , from the model performance aspect , after random projection , applying quantization only has a limited impact on the model performance . From the time aspect , EXACT runs comparable or even faster than quantization only . • Despite the simplicity , EXACT achieves non-trivial memory saving with both acceptable time overhead and loss in accuracy : EXACT can reduce the memory footprint of activations by up to 32× with roughly 0.5 % loss in accuracy and 10− 25 % time overhead across models and datasets . Noteworthily , EXACT trims down the hardware requirement of training a full-batch GraphSAGE ( Hamilton et al. , 2017 ) on ogbn-products ( Hu et al. , 2020 ) from a 48GB GPU to a 12GB GPU , which is affordable to most research labs . 2 THE MEMORY CONSUMPTION OF GNNS . Background . Let G = ( V , E ) be an undirected graph with V = ( v1 , · · · , v|V| ) and E = ( e1 , · · · , e|E| ) being the set of nodes and edges , respectively . Let X ∈ R|V|×d be the node feature matrix of the whole graph . The graph structure can be represented by an adjacency matrix A ∈ R|V|×|V| , where Ai , j = 1 if ( vi , vj ) ∈ E else Ai , j = 0 . In this work , we are mostly interested in the task of node classification , where the goal is to learn the representation hv for all v ∈ V such that the label yv can be easily predicted . To obtain such a representation , GNNs follow the neighborhood aggregation scheme . Specifically , GNNs recursively update the representation of a node by aggregating representations of its neighbors . 
Formally, the l-th layer of a GNN can be written as $h_v^{(l+1)} = \mathrm{UPDATE}\big(h_v^{(l)}, \bigoplus_{u \in N(v)} \mathrm{MSG}(h_u^{(l)}, h_v^{(l)})\big)$, where $h_v^{(l)}$ is the representation of node v at the l-th layer and N(v) denotes the neighboring nodes of v, not including v itself. The full table of notations can be found in Appendix A Table 5. For node v, messages from its neighbors are computed by MSG(·); these messages are aggregated using a permutation-invariant aggregation function ⊕; and the aggregated features at v are transformed by UPDATE(·). For example, the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) layer can be defined as

$H^{(l+1)} = \mathrm{ReLU}(\hat{A} H^{(l)} \Theta^{(l)})$,   (1)

where $H^{(l)}$ is the node embedding matrix consisting of all nodes' embeddings at the l-th layer and $H^{(0)} = X$. $\Theta^{(l)}$ is the weight matrix of the l-th GCN layer. $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ is the normalized adjacency matrix, where $\tilde{A} = A + I$ and $\tilde{D}$ is the degree matrix of $\tilde{A}$. We note that $\hat{A}$ is usually stored in a sparse matrix format. For GCN layers, the message and aggregation functions are fused into a Sparse-Dense Matrix Multiplication (SPMM) operation, namely $\hat{A} H^{(l)} \Theta^{(l)} = \mathrm{SPMM}(\hat{A}, H^{(l)} \Theta^{(l)})$. Thus, the computation graph of Equation 1 can be written as

$H^{(l+1)} = \mathrm{ReLU}(\mathrm{SPMM}(\hat{A}, \mathrm{MM}(H^{(l)}, \Theta^{(l)})))$,   (2)

where MM(·,·) is an ordinary Dense-Dense Matrix Multiplication. Equation 2 resembles how GCNs are implemented in popular packages (Fey & Lenssen, 2019; Wang et al., 2019). Here we analyze the memory consumption of the forward pass of GCN, since most of the memory is occupied during the forward pass; a detailed analysis of the backward pass is given in Appendix C. Specifically, for an L-layer GCN, suppose the hidden dimensions of layers 0, ..., L−1 are all equal, denoted as D. The forward pass of GCN layers is shown in Appendix C Algorithm 1.
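Equation 2 can be exercised end to end on a toy graph. The sketch below (function and variable names are illustrative, not the paper's code) builds the normalized adjacency with self-loops and runs one GCN layer forward, using SciPy's sparse matmul as the SPMM:

```python
import numpy as np
import scipy.sparse as sp

def gcn_layer(a_hat, h, theta):
    """H^{(l+1)} = ReLU(SPMM(A_hat, MM(H, Theta)))."""
    j = h @ theta              # dense-dense matmul: J = H Theta
    z = a_hat @ j              # sparse-dense matmul (SPMM)
    return np.maximum(z, 0.0)  # ReLU

# Toy undirected graph: 4 nodes on a path, edges (0,1), (1,2), (2,3)
edges = [(0, 1), (1, 2), (2, 3)]
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
a = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

a_tilde = a + sp.identity(4)                       # add self-loops: A + I
deg = np.asarray(a_tilde.sum(axis=1)).ravel()      # degree matrix of A + I
d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt          # normalized adjacency

h0 = np.random.default_rng(0).normal(size=(4, 8))  # node features, D = 8
theta = np.random.default_rng(1).normal(size=(8, 8))
h1 = gcn_layer(a_hat, h0, theta)
print(h1.shape)  # (4, 8)
```

The two intermediates the text singles out as the memory bottleneck are exactly `j` and `h0`/`h1` here: one pair per layer must be retained for backpropagation.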
For layer l, four variables are saved in memory. (1) The weight matrix $\Theta^{(l)} \in \mathbb{R}^{D \times D}$, whose shape is independent of the graph size and is generally negligible. (2) The normalized adjacency matrix $\hat{A}$ (in CSR format), whose space complexity is O(|V| + |E|). Note that only one copy of $\hat{A}$ needs to be stored in memory and can be accessed by all layers; thus its memory consumption is independent of the number of layers and is not the main memory bottleneck. (3) The intermediate result $J^{(l)} = \mathrm{MM}(H^{(l)}, \Theta^{(l)}) \in \mathbb{R}^{|V| \times D}$. For an L-layer GCN, storing $\{J^{(0)}, \cdots, J^{(L-1)}\}$ has O(L|V|D) space complexity, which is a main memory bottleneck. (4) The node embedding matrix $H^{(l)} \in \mathbb{R}^{|V| \times D}$. For an L-layer GCN, storing $\{H^{(0)}, \cdots, H^{(L-1)}\}$ likewise has O(L|V|D) space complexity and is also a main memory bottleneck. In this paper, we use the term "activation maps" to encompass H, J, and the activation maps of other commonly used layers/operations such as BatchNorm, ReLU, and Dropout.
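To make the O(L|V|D) bottleneck concrete, a back-of-the-envelope estimate of the stored activation maps can be computed. The numbers below are illustrative assumptions (float32 activations, the two maps H and J per layer, a node count in the ballpark of ogbn-products):

```python
def activation_memory_gb(num_nodes, hidden_dim, num_layers,
                         bytes_per_elem=4, maps_per_layer=2):
    """Rough size in GB of the stored activation maps {H, J} of an L-layer GCN."""
    return (num_layers * maps_per_layer * num_nodes
            * hidden_dim * bytes_per_elem) / 1e9

# e.g. a 3-layer GCN on a graph with ~2.4M nodes and hidden dimension D = 256
print(round(activation_memory_gb(2_400_000, 256, 3), 1))  # ~14.7 GB for H and J alone
```

Even this modest configuration exceeds a 12GB GPU before counting the adjacency matrix, weights, gradients, and optimizer state, which is why compressing H and J is the lever EXACT pulls.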
This paper proposes EXACT, a framework for training GNNs with compressed activations using two methods: quantization and random projection. The primary design objective of EXACT is to trim down the memory consumption of GNN training while maintaining acceptable training speed and accuracy. EXACT achieves significant memory savings with a 0.2–0.5% accuracy drop and a 10–25% slowdown in training throughput across five models and five graph datasets.
Learning Neural Contextual Bandits through Perturbed Rewards
An Õ(d̃√T) regret upper bound is still achievable under standard regularity conditions, where T is the number of rounds of interaction and d̃ is the effective dimension of a neural tangent kernel matrix. Extensive comparisons with several benchmark contextual bandit algorithms, including two recent neural contextual bandit models, demonstrate the effectiveness and computational efficiency of our proposed neural bandit algorithm. 1 INTRODUCTION. Contextual bandits are a well-formulated abstraction of many important real-world problems, including content recommendation (Li et al., 2010; Wu et al., 2016), online advertising (Schwartz et al., 2017; Nuara et al., 2018), and mobile health (Lei et al., 2017; Tewari & Murphy, 2017). In such problems, an agent iteratively interacts with an environment to maximize its accumulated reward over time; the essence is sequential decision-making under uncertainty. Because the reward from the environment for a chosen action (also referred to as an arm in the literature) under each context is stochastic, a no-regret learning algorithm needs to explore the problem space for improved reward estimation, i.e., learning the mapping from an arm and its context to the expected reward. Linear contextual bandit algorithms (Abbasi-Yadkori et al., 2011; Li et al., 2010), which assume the reward mapping is a linear function of the context vector, dominate the community's attention in the study of contextual bandits. Though theoretically sound and practically effective, their linear reward-mapping assumption cannot capture possibly complex non-linear relations between the context vector and the reward. This motivated extended studies in parametric bandits, such as generalized linear bandits (Filippi et al., 2010; Faury et al., 2020) and kernelized bandits (Chowdhury & Gopalan, 2017; Krause & Ong, 2011).
Recently, to unleash the power of representation learning, deep neural networks (DNNs) have also been introduced to learn the underlying reward mapping directly. In (Zahavy & Mannor, 2019; Riquelme et al., 2018; Xu et al., 2020), a deep neural network is applied to provide a feature mapping, and exploration is performed at the last layer. NeuralUCB (Zhou et al., 2020) and NeuralTS (Zhang et al., 2020) explore the entire neural network parameter space to obtain nearly optimal regret using the neural tangent kernel technique (Jacot et al., 2018). These neural contextual bandit algorithms significantly boost empirical performance compared to their classical counterparts. Nevertheless, a major practical concern with existing neural contextual bandit algorithms is the computational cost they add when performing exploration. Take the recently developed NeuralUCB and NeuralTS for example: their construction of the high-probability confidence set for model exploration depends on the dimensionality of the network parameters and of the learned context-vector representations, which is often very large for DNNs. (When no ambiguity arises, we refer to the feature vector for an arm and its context simply as a context vector.) For instance, a performant neural bandit solution often has on the order of hundreds of thousands of parameters, if not more. It is prohibitively expensive to compute the inverse of the induced covariance matrix over such a huge number of parameters, as required by their construction of the confidence set. As a result, approximations, e.g., using only the diagonal of the covariance matrix (Zhou et al., 2020; Zhang et al., 2020), are employed to make such algorithms operational in practice. But there is no theoretical guarantee for such diagonal approximations, which directly leads to a gap between the theoretical and empirical performance of these neural bandit algorithms.
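The cost argument can be made concrete with a toy comparison. The sketch below is purely illustrative (p here is tiny compared to the ~10^5 parameters of a real network): it contrasts maintaining and inverting a full p × p covariance, as the exact confidence-set construction requires, with the O(p) diagonal shortcut, for a single rank-one update:

```python
import numpy as np

p = 2000                       # toy parameter count; real networks are far larger
rng = np.random.default_rng(0)
g = rng.normal(size=p)         # gradient feature vector of the chosen arm

# Exact confidence-set maintenance: p x p covariance Z = I + g g^T.
# O(p^2) memory, and a direct inverse/solve costs O(p^3).
z = np.eye(p) + np.outer(g, g)
width_full = float(g @ np.linalg.solve(z, g))   # exploration width g^T Z^{-1} g

# Diagonal approximation used in practice: keep only diag(Z), O(p) memory/time.
z_diag = np.ones(p) + g * g
width_diag = float(np.sum(g * g / z_diag))

print(width_full <= width_diag)  # prints True: the diagonal ignores correlations
```

In this rank-one toy the diagonal shortcut grossly over-estimates the exploration width, which is exactly the kind of uncontrolled deviation that leaves the theoretical guarantees of the approximated algorithms open.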
In this work, to alleviate the computational overhead caused by expensive exploration, we propose to eliminate explicit model exploration by learning a neural bandit model with perturbed rewards. At each round of model update, we inject pseudo-noise generated from a zero-mean Gaussian distribution into the observed reward history. With the induced randomization, sufficient exploration is achieved by simply pulling the arm with the highest estimated reward. This brings considerable advantages over existing neural bandit algorithms: no additional computational cost is needed to obtain no regret. We rigorously prove that with high probability the algorithm obtains Õ(d̃√T) regret, where d̃ is the effective dimension of a neural tangent kernel matrix and T is the number of rounds of interaction. This result recovers existing regret bounds for the linear setting, where the effective dimension equals the input feature dimension. Our extensive empirical evaluations further demonstrate the strong efficiency and effectiveness of our solution against a rich set of state-of-the-art contextual bandit solutions on both synthetic and real-world datasets. 2 RELATED WORK. Recently, attempts have been made to incorporate DNNs into contextual bandit algorithms. Several existing works study neural-linear bandits (Riquelme et al., 2018; Zahavy & Mannor, 2019), where exploration is performed on the last layer of the DNN. Under the neural tangent kernel (NTK) framework (Jacot et al., 2018), NeuralUCB (Zhou et al., 2020) constructs confidence sets with DNN-based random feature mappings to perform upper-confidence-bound based exploration; NeuralTS (Zhang et al., 2020) samples from a posterior distribution constructed with a similar technique.
However, as exploration is performed in the induced random feature space, the added computational overhead is prohibitively high, which makes such solutions impractical. The authors suggested diagonal approximations of the resulting covariance matrix, which however leave the promised theoretical guarantees of those algorithms up in the air. Reward-perturbation based exploration has been studied in a number of classical bandit models (Kveton et al., 2019a; 2019b; 2020). In a context-free k-armed bandit setting, Kveton et al. (2019b) proposed to estimate each arm's reward over a perturbed history and select the arm with the highest estimated reward at each round; such an arm selection strategy is proved to be optimistic with sufficiently high probability. Later this strategy was extended to linear and generalized linear bandits (Kveton et al., 2019a; 2020). In (Kveton et al., 2020), the authors suggested its application to neural network models, but only simple empirical evaluations were provided, without theoretical justification. Our work for the first time provides a rigorous regret analysis of neural contextual bandits with perturbation-based exploration: a sublinear regret bound is still achievable in terms of the number of interactions between the agent and the environment. 3 NEURAL BANDIT LEARNING WITH PERTURBED REWARDS. We study the contextual bandit problem with K arms, where each arm is associated with a d-dimensional context vector $x_i \in \mathbb{R}^d$ for $i \in [K]$. At each round $t \in [T]$, the agent selects one of the arms, denoted $a_t$, and receives its reward $r_{a_t,t}$, which is generated as $r_{a_t,t} = h(x_{a_t}) + \eta_t$. Here $h(x)$ is the unknown underlying reward mapping function satisfying $0 \le h(x) \le 1$ for any x, and $\eta_t$ is an R-sub-Gaussian random variable satisfying $\mathbb{E}[\exp(\mu \eta_t)] \le \exp(\mu^2 R^2)$ for all $\mu \ge 0$.
The goal is to minimize the pseudo-regret over T rounds:

$R_T = \mathbb{E}\big[\textstyle\sum_{t=1}^{T} (r_{a^*,t} - r_{a_t,t})\big]$,   (3.1)

where $a^*$ is the optimal arm with the maximum expected reward. To deal with the potential non-linearity of h(x) and unleash the representation-learning power of DNNs, we adopt a fully connected neural network $f(x; \theta)$ to approximate h(x):

$f(x; \theta) = \sqrt{m}\, W_L \phi(W_{L-1} \phi(\cdots \phi(W_1 x)))$,

where $\phi(x) = \mathrm{ReLU}(x)$, $\theta = [\mathrm{vec}(W_1), \ldots, \mathrm{vec}(W_L)] \in \mathbb{R}^p$ with $p = m + md + m^2(L-1)$, and depth $L \ge 2$. Each hidden layer is assumed to have the same width (i.e., m) for convenience in the later analysis; this does not affect the conclusions of our theoretical analysis.

Algorithm 1: Neural bandit with perturbed rewards (NPR)
1: Input: number of rounds T, regularization coefficient λ, perturbation parameter ν, network width m, network depth L.
2: Initialization: $\theta_0 = [\mathrm{vec}(W_1), \ldots, \mathrm{vec}(W_L)] \in \mathbb{R}^p$ with Gaussian entries: for $1 \le l \le L-1$, $W_l = (W, 0; 0, W)$ with each entry of W sampled independently from N(0, 4/m); $W_L = (w^\top, -w^\top)$ with each entry of w sampled independently from N(0, 2/m).
3: for t = 1, ..., T do
4:   if t > K then
5:     Pull arm $a_t = \arg\max_{i \in [K]} f(x_i; \theta_{t-1})$ and receive reward $r_{a_t,t}$.
6:     Generate $\{\gamma_s^t\}_{s \in [t]} \sim N(0, \nu^2)$.
7:     Set $\theta_t$ to the output of gradient descent for solving Eq (3.2).
8:   else
9:     Pull arm $a_t = t$ (each arm is pulled once during initialization).
10:  end if
11: end for

Existing neural bandit solutions perform explicit exploration in the entire model space (Zhou et al., 2020; Zhang et al., 2020; Zahavy & Mannor, 2019; Riquelme et al., 2018; Xu et al., 2020), which introduces prohibitive computational cost; oftentimes the overhead is so high that approximation has to be employed (Zhou et al., 2020; Zhang et al., 2020), which unfortunately breaks the theoretical promise of these algorithms.
In our proposed model, to eliminate such explicit model exploration in the neural bandit, a randomization strategy is introduced into the neural network update. We name the resulting solution Neural bandit with Perturbed Rewards, or NPR in short. In NPR, at round t, the neural model is learned from the t rewards perturbed with designed perturbations:

$\min_\theta \mathcal{L}(\theta) = \textstyle\sum_{s=1}^{t} \big(f(x_{a_s}; \theta) - (r_{a_s,s} + \gamma_s^t)\big)^2 / 2 + m\lambda \|\theta - \theta_0\|_2^2 / 2$,   (3.2)

where $\{\gamma_s^t\}_{s=1}^{t} \sim N(0, \nu^2)$ are Gaussian random variables sampled independently in each round t, and ν is a hyper-parameter that controls the strength of the perturbation (and thus the exploration) in NPR. We use an l2-regularized squared loss for model estimation, where the regularization is centered at the random initialization $\theta_0$ with trade-off parameter λ. The detailed procedure of NPR is given in Algorithm 1. The algorithm starts by pulling all candidate arms once; this guarantees that, for any arm, NPR is sufficiently optimistic compared to the true reward with respect to its approximation error (Lemma 4.4). Once all K arms have been pulled once, the algorithm pulls the arm with the highest estimated reward, $a_t = \arg\max_i f(x_i; \theta_{t-1})$. Once the feedback $r_{a_t,t}$ is received, the model perturbs the entire reward history so far via a freshly sampled noise sequence $\{\gamma_s^t\}_{s=1}^{t}$ and updates the neural network on $\{(x_{a_s}, r_{a_s,s} + \gamma_s^t)\}_{s=1}^{t}$ using gradient descent. In the regret analysis in Section 4, we prove that the variance from the added $\{\gamma_s^t\}_{s \in [t]}$ leads to the necessary optimism for exploration (Abbasi-Yadkori et al., 2011). We adopt gradient descent for convenience of analysis; stochastic gradient descent can also be used to solve the optimization problem with a similar theoretical guarantee based on recent works (Allen-Zhu et al., 2019; Zou et al., 2019). Compared with existing neural contextual bandit algorithms (Zhou et al., 2020; Zhang et al., 2020; Zahavy & Mannor, 2019; Riquelme et al., 2018; Xu et al., 2020), NPR needs no added computation for model exploration beyond the regular neural network update. This greatly reduces the overhead in computational resources (both space and time) and makes NPR sufficiently general to be applied to practical problems. More importantly, our theoretical analysis directly corresponds to its actual behavior in deployment, since no approximation is needed.
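A single NPR round, per Eq (3.2), can be sketched as follows. This is a simplified illustration: a linear model stands in for the deep network f(x; θ), the mλ regularization weight is folded into a plain λ, and the names (`npr_update`, `x_hist`, etc.) are hypothetical. The essential point is that the whole reward history is re-perturbed with fresh Gaussian noise at every round before gradient descent:

```python
import numpy as np

def npr_update(x_hist, r_hist, theta0, nu=0.1, lam=1.0, lr=0.005, steps=500, seed=0):
    """One NPR round: perturb the full reward history with fresh N(0, nu^2) noise,
    then run gradient descent on the regularized squared loss (cf. Eq 3.2).
    A linear model f(x; theta) = x @ theta stands in for the neural network."""
    rng = np.random.default_rng(seed)
    gamma = rng.normal(0.0, nu, size=len(r_hist))  # fresh perturbations this round
    r_pert = r_hist + gamma
    theta = theta0.copy()
    for _ in range(steps):
        resid = x_hist @ theta - r_pert
        grad = x_hist.T @ resid + lam * (theta - theta0)  # loss gradient + reg term
        theta -= lr * grad
    return theta

rng = np.random.default_rng(42)
d, t = 5, 50
x_hist = rng.normal(size=(t, d))                   # contexts of pulled arms so far
theta_true = rng.normal(size=d)
r_hist = x_hist @ theta_true + 0.05 * rng.normal(size=t)  # observed rewards

theta = npr_update(x_hist, r_hist, theta0=np.zeros(d))

# Arm selection: greedily pull the arm with the highest estimated reward;
# the randomness injected by gamma is what supplies the exploration.
arms = rng.normal(size=(10, d))
a_t = int(np.argmax(arms @ theta))
print(a_t)
```

Re-sampling the noise sequence each round (rather than reusing it) is what makes the greedy arm choice optimistic with sufficient probability, mirroring the role of the confidence width in UCB-style methods without any covariance matrix to maintain.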
The paper considers a version of the stochastic contextual bandit where the underlying reward function is modelled by a deep neural network. This framework is useful for a number of complex problems where simpler parametric models of the reward are insufficient. The authors propose an algorithm inspired by the perturbed reward approaches of Kveton et al (2019a,b,2020) which ensures an exploration-exploitation balance by adding additional noise to the observed rewards. This approach avoids the inversion of large covariance matrices which existing approaches (NeuralUCB and NeuralTS) necessitate, and results in a computational speed-up while maintaining near-optimal regret guarantees.
The paper studies neural contextual bandits in the realizable setting. It proposes an algorithm that trains the neural network mapping arm contexts to rewards on perturbed rewards. The paper proves, via an NTK analysis, that the learnt function provides an optimistic estimate of the reward of each arm, from which sub-linear regret guarantees are derived. Crucially, the dimension term in the regret depends only on the effective dimension of the NTK. Experiments on real and synthetic datasets show the proposed algorithm performing on par with the NeuralTS and NeuralUCB algorithms while cutting the running time by a large factor, indicating its practicality.
SP:f7425b47e93c121149fff0aea546d91512f7641c
Learning Neural Contextual Bandits through Perturbed Rewards
√ T ) regret upper bound is still achievable under standard regularity conditions , where T is the number of rounds of interactions and d̃ is the effective dimension of a neural tangent kernel matrix . Extensive comparisons with several benchmark contextual bandit algorithms , including two recent neural contextual bandit models , demonstrate the effectiveness and computational efficiency of our proposed neural bandit algorithm . 1 INTRODUCTION . Contextual bandit is a well-formulated abstraction of many important real-world problems , including content recommendation ( Li et al. , 2010 ; Wu et al. , 2016 ) , online advertising ( Schwartz et al. , 2017 ; Nuara et al. , 2018 ) , and mobile health ( Lei et al. , 2017 ; Tewari & Murphy , 2017 ) . In such problems , an agent iteratively interacts with an environment to maximize its accumulated rewards over time . Its essence is sequential decision-making under uncertainty . Because the reward from the environment for a chosen action ( also referred to as an arm in literature ) under each context is stochastic , a no-regret learning algorithm needs to explore the problem space for improved reward estimation , i.e. , learning the mapping from an arm and its context1 to the expected reward . Linear contextual bandit algorithms ( Abbasi-Yadkori et al. , 2011 ; Li et al. , 2010 ) , which assume the reward mapping is a linear function of the context vector , dominate the community ’ s attention in the study of contextual bandits . Though theoretically sound and practically effective , their linear reward mapping assumption is incompetent to capture possible complex non-linear relations between the context vector and reward . This motivated the extended studies in parametric bandits , such as generalized linear bandits ( Filippi et al. , 2010 ; Faury et al. , 2020 ) and kernelized bandits ( Chowdhury & Gopalan , 2017 ; Krause & Ong , 2011 ) . 
Recently , to unleash the power of representation learning , deep neural networks ( DNN ) have also been introduced to learn the underlying reward mapping directly . In ( Zahavy & Mannor , 2019 ; Riquelme et al. , 2018 ; Xu et al. , 2020 ) , a deep neural network is applied to provide a feature mapping , and exploration is performed at the last layer . NeuralUCB ( Zhou et al. , 2020 ) and NeuralTS ( Zhang et al. , 2020 ) explore the entire neural network parameter space to obtain nearly optimal regret using the neural tangent kernel technique ( Jacot et al. , 2018 ) . These neural contextual bandit algorithms significantly boosted empirical performance compared to their classical counterparts . Nevertheless , a major practical concern of existing neural contextual bandit algorithms is their added computational cost when performing exploration . Take the recently developed NeuralUCB and NeuralTS for example . Their construction of the high-probability confidence set for model exploration depends on the dimensionality of the network parameters and the learned context vectors ’ representations , which is often very large for DNNs . For instance , a performing neural bandit solution often has the number of parameters in the order of 100 thousands ( if not less ) . It is prohibitively 1When no ambiguity is invoked , we refer to the feature vector for an arm and its context as a context vector . expensive to compute inverse of the induced covariance matrix on such a huge number of parameters , as required by their construction of confidence set . As a result , approximations , e.g. , only using the diagonal of the covariance matrix ( Zhou et al. , 2020 ; Zhang et al. , 2020 ) , are employed to make such algorithms operational in practice . But there is no theoretical guarantee for such diagonal approximations , which directly leads to the gap between the theoretical and empirical performance of the neural bandit algorithms . 
In this work, to alleviate the computational overhead caused by the expensive exploration, we propose to eliminate explicit model exploration by learning a neural bandit model with perturbed rewards. At each round of model update, we inject pseudo noise generated from a zero-mean Gaussian distribution into the observed reward history. With the induced randomization, sufficient exploration is achieved by simply pulling the arm with the highest estimated reward. This brings a considerable advantage over existing neural bandit algorithms: no additional computational cost is needed to obtain no regret. We rigorously prove that with high probability the algorithm obtains an Õ(d̃√T) regret, where d̃ is the effective dimension of a neural tangent kernel matrix and T is the number of rounds of interactions. This result recovers existing regret bounds for the linear setting, where the effective dimension equals the input feature dimension. Besides, our extensive empirical evaluations demonstrate the strong advantage in efficiency and effectiveness of our solution against a rich set of state-of-the-art contextual bandit solutions on both synthetic and real-world datasets. 2 RELATED WORK. Most recently, attempts have been made to incorporate DNNs into contextual bandit algorithms. Several existing works study neural-linear bandits (Riquelme et al., 2018; Zahavy & Mannor, 2019), where exploration is performed on the last layer of the DNN. Under the neural tangent kernel (NTK) framework (Jacot et al., 2018), NeuralUCB (Zhou et al., 2020) constructs confidence sets with DNN-based random feature mappings to perform upper-confidence-bound based exploration. NeuralTS (Zhang et al., 2020) samples from the posterior distribution constructed with a similar technique.
However, as the exploration is performed in the induced random feature space, the added computational overhead is prohibitively high, which makes such solutions impractical. The authors suggested diagonal approximations of the resulting covariance matrix, which however leave the promised theoretical guarantees of those algorithms up in the air. Reward-perturbation based exploration has been studied in a number of classical bandit models (Kveton et al., 2019a; 2020; 2019b). In a context-free k-armed bandit setting, Kveton et al. (2019b) proposed to estimate each arm's reward over a perturbed history and select the arm with the highest estimated reward at each round. Such an arm selection strategy is proved to be optimistic with a sufficiently high probability. This strategy was later extended to linear and generalized linear bandits (Kveton et al., 2019a; 2020). In (Kveton et al., 2020), the authors suggested its application to neural network models, but only some simple empirical evaluations were provided, without any theoretical justification. Our work for the first time provides a rigorous regret analysis of neural contextual bandits with perturbation-based exploration: a sublinear regret bound is still achievable in terms of the number of interactions between the agent and the environment. 3 NEURAL BANDIT LEARNING WITH PERTURBED REWARDS. We study the problem of contextual bandits with finite K arms, where each arm is associated with a d-dimensional context vector: x_i ∈ R^d for i ∈ [K]. At each round t ∈ [T], the agent needs to select one of the arms, denoted a_t, and receives its reward r_{a_t,t}, which is generated as r_{a_t,t} = h(x_{a_t}) + η_t. In particular, h(x) represents the unknown underlying reward mapping function, satisfying 0 ≤ h(x) ≤ 1 for any x, and η_t is an R-sub-Gaussian random variable that satisfies E[exp(µη_t)] ≤ exp(µ²R²/2) for all µ ≥ 0.
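The interaction protocol above can be sketched with a toy environment. The reward mapping `h` below is a hypothetical non-linear function chosen only so that it stays in [0, 1]; it is not from the paper, and the sub-Gaussian noise is instantiated as Gaussian with scale R:

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    """Hypothetical non-linear reward mapping with values in [0, 1]
    (an illustrative stand-in for the paper's unknown h)."""
    return 0.5 * (1.0 + np.tanh(np.sum(x**2) - 1.0))

def pull(x, R=0.1):
    """One round of bandit feedback: r = h(x) + eta, with eta an
    R-sub-Gaussian noise term (here simply N(0, R^2))."""
    return h(x) + R * rng.normal()
```

Averaging many pulls of the same arm recovers h(x), which is exactly the reward-estimation problem a no-regret algorithm must solve while also exploring.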
The goal is to minimize the pseudo regret over T rounds:

R_T = E[ ∑_{t=1}^T ( r_{t,a*} − r_{t,a_t} ) ],   (3.1)

where a* is the optimal arm with the maximum expected reward. To deal with the potential non-linearity of h(x) and unleash the representation learning power of DNNs, we adopt a fully connected neural network f(x; θ) to approximate h(x):

f(x; θ) = √m · W_L φ( W_{L−1} φ( ··· φ( W_1 x ) ) ),

where φ(x) = ReLU(x), θ = [vec(W_1), ..., vec(W_L)] ∈ R^p with p = m + md + m²(L−1), and depth L ≥ 2. Each hidden layer is assumed to have the same width (i.e., m) for convenience in the later analysis; this does not affect the conclusions of our theoretical analysis.

Algorithm 1 Neural bandit with perturbed reward (NPR)
1: Input: number of rounds T, regularization coefficient λ, perturbation parameter ν, network width m, network depth L.
2: Initialization: θ_0 = [vec(W_1), ..., vec(W_L)] ∈ R^p with Gaussian distribution: for 1 ≤ l ≤ L−1, W_l = (W, 0; 0, W) with each entry of W sampled independently from N(0, 4/m); W_L = (w^⊤, −w^⊤) with each entry of w sampled independently from N(0, 2/m).
3: for t = 1, ..., T do
4:   if t > K then
5:     Pull arm a_t and receive reward r_{t,a_t}, where a_t = argmax_{i∈[K]} f(x_i; θ_{t−1}).
6:     Generate {γ_s^t}_{s∈[t]} ~ N(0, ν²).
7:     Set θ_t to the output of gradient descent for solving Eq. (3.2).
8:   else
9:     Pull arm a_t = t (each arm is pulled once during the first K rounds).
10:  end if
11: end for

Existing neural bandit solutions perform explicit exploration in the entire model space (Zhou et al., 2020; Zhang et al., 2020; Zahavy & Mannor, 2019; Riquelme et al., 2018; Xu et al., 2020), which introduces prohibitive computational cost. Oftentimes the overhead is so high that approximation has to be employed (Zhou et al., 2020; Zhang et al., 2020), which unfortunately breaks the theoretical promise of these algorithms.
In our proposed model, to eliminate such explicit model exploration in neural bandits, a randomization strategy is introduced in the neural network update. We name the resulting solution Neural bandit with Perturbed Rewards, or NPR in short. In NPR, at round t, the neural model is learned from the t observed rewards perturbed with designed perturbations:

min_θ L(θ) = ∑_{s=1}^t ( f(x_{a_s}; θ) − ( r_{s,a_s} + γ_s^t ) )² / 2 + mλ‖θ − θ_0‖₂² / 2,   (3.2)

where {γ_s^t}_{s=1}^t ~ N(0, ν²) are Gaussian random variables independently sampled in each round t, and ν is a hyper-parameter that controls the strength of the perturbation (and thus the exploration) in NPR. We use an l2-regularized square loss for model estimation, where the regularization centers at the random initialization θ_0 with trade-off parameter λ. The detailed procedure of NPR is given in Algorithm 1. The algorithm starts by pulling all candidate arms once. This guarantees that for any arm, NPR is sufficiently optimistic compared to the true reward with respect to its approximation error (Lemma 4.4). Once all K arms have been pulled once, the algorithm pulls the arm with the highest estimated reward, a_t = argmax_i f(x_i; θ_{t−1}). Once the feedback r_{a_t,t} is received, the model perturbs the entire reward history so far via a freshly sampled noise sequence {γ_s^t}_{s=1}^t, and updates the neural network on {(x_{a_s}, r_{a_s,s} + γ_s^t)}_{s=1}^t using gradient descent. In the regret analysis in Section 4, we prove that the variance of the added {γ_s^t}_{s∈[t]} leads to the necessary optimism for exploration (Abbasi-Yadkori et al., 2011). We adopt gradient descent for convenience of analysis; stochastic gradient descent can also be used to solve the optimization problem with a similar theoretical guarantee based on recent works (Allen-Zhu et al., 2019; Zou et al., 2019). Compared with existing neural contextual bandit algorithms (Zhou et al., 2020; Zhang et al., 2020; Zahavy & Mannor, 2019; Riquelme et al., 2018; Xu et al., 2020), NPR needs no added computation for model exploration beyond the regular neural network update. This greatly reduces the computational overhead (in both space and time) and makes NPR general enough to be applied to practical problems. More importantly, since no approximation is needed, our theoretical analysis directly corresponds to the algorithm's actual behavior in practice.
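The perturbed-reward update can be sketched compactly. In the snippet below a linear model stands in for f(x; θ) so the regularized least-squares problem of Eq. (3.2) has a closed form; the paper instead trains a neural network by gradient descent, and all function names here are illustrative:

```python
import numpy as np

def npr_update(X_hist, r_hist, nu=0.1, lam=1.0, rng=None):
    """One NPR model update (cf. Eq. 3.2), with a linear model standing in
    for f(x; theta): perturb the ENTIRE reward history with freshly sampled
    N(0, nu^2) noise, then solve the l2-regularized least squares in closed
    form. nu controls the exploration strength."""
    if rng is None:
        rng = np.random.default_rng()
    X = np.asarray(X_hist)
    gamma = rng.normal(0.0, nu, size=len(r_hist))   # fresh {gamma_s^t}
    r_pert = np.asarray(r_hist) + gamma             # perturbed reward history
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ r_pert)         # theta_t

def select_arm(arms, theta):
    """Greedy arm choice a_t = argmax_i f(x_i; theta_{t-1}); the randomness
    carried by theta (via the perturbed history) supplies the exploration,
    so no explicit confidence set is ever built."""
    return int(np.argmax(np.asarray(arms) @ theta))
```

Note that the noise sequence is resampled in full at every round, matching lines 6-7 of Algorithm 1, rather than accumulated across rounds.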
This paper addresses reducing the computational cost of the neural contextual bandit problem, which uses deep neural networks to model the reward function. Existing works in this field often require exploration over the entire neural network parameter space, so the computational cost is high, especially when the network is large. The main contribution is to perturb the rewards when updating the network, eliminating the need for explicit exploration. The authors show that this idea works well in experiments while maintaining the same regret-bound rate as previous works in the NTK regime.
SP:f7425b47e93c121149fff0aea546d91512f7641c
Boosting Search Engines with Interactive Agents
1 INTRODUCTION . Can machines learn to use a search engine as an interactive tool for finding information ? Web search is the portal to a vast ecosystem of general and specialized knowledge , designed to support humans in their effort to seek relevant information and make well-informed decisions . Utilizing search as a tool is intuitive , and most users quickly learn interactive search strategies characterized by sequential reasoning , exploration , and synthesis ( Hearst , 2009 ; Rutter et al. , 2015 ; Russell , 2019 ) . The success of web search relies on machines learning human notions of relevance , but also on the users ’ ability to ( re- ) formulate appropriate queries , grounded in a tacit understanding of strengths and limitations of search engines . Given recent breakthroughs in language models ( LM ) ( Vaswani et al. , 2017 ; Devlin et al. , 2019 ; Brown et al. , 2020 ) as well as in reinforcement learning ( RL ) ( Mnih et al. , 2013 ; Silver et al. , 2016 ; Berner et al. , 2019 ) , it seems timely to ask whether , and how , agents can be trained to interactively use search engines . However , the lack of expert search sessions puts supervised learning out of reach , and RL is often ineffective in complex natural language understanding ( NLU ) tasks . The feasibility of autonomous search agents hence remains an open question , which inspires our research . We pursue a design philosophy in which search agents operate in structured action spaces defined as generative grammars , resulting in compositional , productive , and semantically transparent policies . Further domain knowledge is included through the use of well-known models and algorithms from NLU and information retrieval ( IR ) . Most notably , we develop a self-supervised learning scheme for generating high-quality search session data , by exploiting insights from relevance feedback ( Rocchio , 1971 ) , used to train a supervised LM search agent based on T5 ( Raffel et al. , 2020 ) . 
We also build an RL search agent based on MuZero (Schrittwieser et al., 2020) and BERT (Devlin et al., 2019), which performs planning via rule-constrained Monte Carlo tree search and a learned dynamics model. We run experiments on an open-domain question answering task, OpenQA (Lee et al., 2019). Search agents learn diverse policies leading to deep, effective explorations of the search results. The MuZero agent outperforms a BM25 (Robertson & Zaragoza, 2009) search function running over a Wikipedia index on both retrieval and answer quality metrics, thus providing novel evidence for the potential of knowledge-infused RL in hard NLU tasks. The T5 agent can more easily leverage large pre-trained encoder-decoders and proves superior to MuZero. Furthermore, a straightforward ensemble of agents is comparable in performance to a state-of-the-art neural retrieval system, DPR (Karpukhin et al., 2020), while relying solely on interpretable, symbolic retrieval operations. This suggests new challenges for future work, e.g., involving hybrid architectures and policy synthesis.1 (1 We open-source the code and trained checkpoints for both agents: anonymized during review.) 2 SELF-SUPERVISED LEARNING FOR INTERACTIVE SEARCH. It has been a powerful vision for more than 20 years to design search engines that are intuitive and simple to use. Despite their remarkable success, search engines are not perfect and may not yield the most relevant result(s) in one shot. This is particularly true for rare and intrinsically difficult queries, which may require interactive exploration by the user to be answered correctly and exhaustively. Contextual query refinement is a common technique (Jansen et al., 2009), even among children (Rutter et al., 2015), used to improve search by combining evidence from previous results and background knowledge (Huang & Efthimiadis, 2009).
Such refinements often rely on inspecting result snippets and titles, or on skimming the content of top-ranked documents. This process is iterative and may be repeated to produce a sequence of queries q_0, q_1, ..., q_T until (optimistically) a satisfactory answer is found. It seems natural to mimic this interactive process with a search agent, which learns the basic step of generating a follow-up query from previous queries and their search results. Furthermore, it is noteworthy how power users apply dedicated search operators and sophisticated investigative strategies to solve deep search puzzles (Russell, 2019). In particular, unary operators offer a great deal of fine-grained control and transparency, and as such are highly effective in expert hands. We concentrate on three operators: '+', which limits results to documents that contain a specific term; '-', which excludes results that contain the term; and '∧i', which boosts a term's weight in the BM25 score computation by a factor i ∈ R. For instance (see also Figure 1), the query 'who won the us open' may lead to both tennis- and golf-related results. An expert searcher could zero in on the tennis intent by excluding golf-related terms and boosting tennis-related ones. As we show in this paper, these operators are also pivotal in designing interactive search agents. 2.1 RESULT AGGREGATION FOR CONTEXTUAL REFINEMENT. Web searchers expect the best answer to be among the top two hits on the first results page (Hearst, 2009, §5) and pay marginal attention to the bottom half of the 10 blue links (Granka et al., 2004; Joachims et al., 2005; Nielsen & Pernice, 2009; Strzelecki, 2020). Likewise, a search agent considers only the top k documents returned by the search engine at every step, where k = 5. During a search session the agent maintains a list of the top-k documents overall, which is returned at the end. To aggregate results we use a machine reader (MR, cf.
(Rajpurkar et al., 2016)). Specifically, we use a DPR-like reader/passage scorer (Karpukhin et al., 2020), which builds upon a pre-trained BERT model. Beyond identifying the most promising answer span within each result document d, the system also estimates the probability of d containing the (unspecified) answer, P(d ∋ ANSWER | q) ∈ [0, 1]. This probability can be viewed as a Passage Score (PS) that induces a calibrated ranking across all result documents within a session. An observation representing the session at any given step is built by extracting a fixed-length token window centered at the answer span predicted by the reader for each document. In addition, we include the document titles. Finally, the query tokens and refinements describing q_t are also included. This leads to a segmented observation token sequence o_t, which is truncated to length ≤ 512, a common input length for pre-trained transformer-based LMs (cf. Appendix B for details and examples). We then use BERT or T5 to produce an embedding s_t from which the search agent will generate the next query. If we denote the result set for q_t by D_t, then we get, diagrammatically,

q_0, ..., q_t --(search engine)--> D_0, ..., D_t --(MR/PS)--> o_t [observation] --(LM)--> s_t [encoding] --(agent)--> q_{t+1} [generation]   (1)

We focus on the case where q_{t+1} is obtained from q_t through augmentation. This may add a keyword w ∈ Σ_idx, where Σ_idx is the search index vocabulary, with the usual disjunctive search engine semantics, or a structured search term formed by the use of unary operators ('+', '-', '∧') and fields. 2.2 ROCCHIO QUERY EXPANSIONS. In the absence of training sessions from human expert users, we propose to generate synthetic search sessions in a self-supervised manner, making use of a set of question-answer pairs (q, a).
We initialize q_0 = q and aim to find a sequence of refinements that make progress towards identifying documents containing the answer a, based on a reward function q_t ↦ D_t ↦ r_t ∈ [0, 1] (cf. §4). A query is not refined further if either t = 20 (maximal length) or no score-increasing refinement can be found. To create candidate refinements, we make use of the idea of relevance feedback as suggested in Rocchio (1971). An elementary refinement, called a Rocchio expansion, then takes the form

q_{t+1} := q_t ∆q_t,   ∆q_t := [ + | − | ∧i ][ TITLE | CONTENT ] w_t,   w_t ∈ Σ_t := Σ_t^q ∪ Σ_t^τ ∪ Σ_t^α ∪ Σ_t^β,   (2)

where i is the boosting coefficient and Σ_t refers to the set of terms accessible to the agent, that is, terms that occur in the top PS-ranked session documents. We use superscripts to refer to the vocabulary of the question (q), titles (τ), answers (α), or bodies (β) of documents in o_t. Note that adding terms ∉ Σ_t would make refinements difficult to reproduce for an agent and thus would provide supervision of low utility. Another aspect of creating sessions as described above has to do with the search complexity of finding optimal sequences of Rocchio expansions. We consider q* = q + a as the "ideal" query, whose results define the vocabulary Σ*. For efficiency reasons, we further constrain the terms to be added via exact matches, term boosting, or term exclusions by defining respective constrained dictionaries

Σ_t^↑ = Σ_t ∩ Σ*,   Σ_t^↓ = Σ_t − Σ*.   (3)

This means it is possible to upgrade accessible terms w_t to exact matches or weight boosting if they also occur in the ideal result set (w_t ∈ Σ_t^↑), and to exclude accessible terms if they are not present in the ideal results (w_t ∈ Σ_t^↓). We have found experimentally that this leads to a good trade-off between the quality of Rocchio expansions and the search effort to find them. The search for sequences of Rocchio expansions is done heuristically.
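The constrained dictionaries of Eq. (3) make candidate enumeration straightforward. Here is a minimal sketch, assuming term sets are plain Python sets; the operator syntax is simplified to Lucene-style strings ('+w' requires w, '-w' excludes it, 'w^i' boosts it), and the field qualifiers (TITLE/CONTENT) are omitted:

```python
def rocchio_candidates(accessible, ideal, boost=2):
    """Enumerate candidate Rocchio refinements from the constrained
    dictionaries: terms also present in the ideal results (Sigma_t ∩ Sigma*)
    may be required or boosted; terms absent from them (Sigma_t - Sigma*)
    may only be excluded.

    accessible: Sigma_t, terms in the top PS-ranked session documents
    ideal:      Sigma*, terms in the results of the ideal query q* = q + a
    """
    up = sorted(accessible & ideal)      # Sigma_t^up: require or boost
    down = sorted(accessible - ideal)    # Sigma_t^down: exclude
    cands = [f"+{w}" for w in up]            # exact-match requirement
    cands += [f"{w}^{boost}" for w in up]    # term-weight boosting
    cands += [f"-{w}" for w in down]         # term exclusion
    return cands
```

Session generation would then score each candidate's result set with the reward function and keep only score-increasing refinements, repeating until t = 20 or no candidate improves the score.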
More details, pseudo-code illustrating the procedure, and examples can be found in §5, Appendix A, and Appendix G. 2.3 SELF-SUPERVISED T5 AGENT. We suggest training a generative search agent in a supervised manner by making use of the synthetic search sessions generated by Rocchio expansions. We use T5, a pretrained transformer encoder-decoder model which achieves state-of-the-art results on multiple NLU tasks. As a search agent, T5 predicts a single new search expansion from an observed state. In the spirit of everything-is-string-prediction, both states and expansions are represented as plain strings. See Appendix B for a full example. Our T5 agent is trained via Behavioral Cloning (BC) (Michie, 1990). We treat each step in a Rocchio session as a single training example. As is common in sequence prediction tasks, we use the cross-entropy loss for optimization. BC is perhaps the simplest form of Imitation Learning (IL) and has proven effective in a variety of application domains (Sharma et al., 2018; Rodríguez-Hernandez et al., 2019). In our query refinement task, it allows the agent to inherit the expressive power of the Rocchio query expansions and, differently from other IL approaches (Ross et al., 2011; Ho & Ermon, 2016; Ding, 2020), requires only offline interactions with the search engine. Crucially, this enables scaling to the large action spaces and model sizes typical of recent LMs. Our T5 agent can also be described as a Decision Transformer with fixed max return (Chen et al., 2021). At test time, we start with the initial query and incrementally add new expansions, querying the trained T5 model at every step. We then use the refined query to retrieve new documents and continue until either the set of new documents is empty or we reach the maximum number of steps. Throughout the session, we maintain the top-5 documents among all those retrieved.
This work designs an RL agent (a customized BERT-based MuZero) for query reformulation, where the underlying observation is an aggregation of token windows around answer spans found by a machine reading module, document titles, query tokens, and prior refinements. The action space involves adding keywords to the query or forming structured search terms with three operators that constrain the search space. Due to the lack of training data, synthetic search sessions are generated, and a pretrained transformer model is trained on them via imitation learning. The reward is a linear combination of NDCG (a well-established IR ranking metric for non-binary relevance), a revised NDCG-like metric for exact matches (NDCEM), and a top-k Passage Score. Results on an OpenQA dataset are *close* to those of neural passage retrieval methods.
SP:70a8316a92eba0ce4bd8c106b228da0a4a70ece7
This paper proposes an approach based on MuZero to learn strategies for enhancing the search query. It borrows the idea of pseudo relevance feedback from Rocchio, selects terms from the top-5 retrieved results, and incorporates a term using a predefined grammar. The expanded query is used by BM25 to find documents. The approach is tested on the NQ dataset and shown to achieve performance comparable to that of DPR. The key contribution is the exploration of MuZero for query enhancement, combined with the idea of Rocchio. This exploration is new.
SP:70a8316a92eba0ce4bd8c106b228da0a4a70ece7
Boosting Search Engines with Interactive Agents
1 INTRODUCTION . Can machines learn to use a search engine as an interactive tool for finding information ? Web search is the portal to a vast ecosystem of general and specialized knowledge , designed to support humans in their effort to seek relevant information and make well-informed decisions . Utilizing search as a tool is intuitive , and most users quickly learn interactive search strategies characterized by sequential reasoning , exploration , and synthesis ( Hearst , 2009 ; Rutter et al. , 2015 ; Russell , 2019 ) . The success of web search relies on machines learning human notions of relevance , but also on the users ’ ability to ( re- ) formulate appropriate queries , grounded in a tacit understanding of strengths and limitations of search engines . Given recent breakthroughs in language models ( LM ) ( Vaswani et al. , 2017 ; Devlin et al. , 2019 ; Brown et al. , 2020 ) as well as in reinforcement learning ( RL ) ( Mnih et al. , 2013 ; Silver et al. , 2016 ; Berner et al. , 2019 ) , it seems timely to ask whether , and how , agents can be trained to interactively use search engines . However , the lack of expert search sessions puts supervised learning out of reach , and RL is often ineffective in complex natural language understanding ( NLU ) tasks . The feasibility of autonomous search agents hence remains an open question , which inspires our research . We pursue a design philosophy in which search agents operate in structured action spaces defined as generative grammars , resulting in compositional , productive , and semantically transparent policies . Further domain knowledge is included through the use of well-known models and algorithms from NLU and information retrieval ( IR ) . Most notably , we develop a self-supervised learning scheme for generating high-quality search session data , by exploiting insights from relevance feedback ( Rocchio , 1971 ) , used to train a supervised LM search agent based on T5 ( Raffel et al. , 2020 ) . 
We also build an RL search agent based on MuZero (Schrittwieser et al., 2020) and BERT (Devlin et al., 2019), which performs planning via rule-constrained Monte Carlo tree search and a learned dynamics model. We run experiments on an open-domain question answering task, OpenQA (Lee et al., 2019). Search agents learn diverse policies leading to deep, effective explorations of the search results. The MuZero agent outperforms a BM25 (Robertson & Zaragoza, 2009) search function running over a Wikipedia index on both retrieval and answer quality metrics, thus providing novel evidence for the potential of knowledge-infused RL in hard NLU tasks. The T5 agent can more easily leverage large pre-trained encoder-decoders and proves superior to MuZero. Furthermore, a straightforward ensemble of agents is comparable in performance to a state-of-the-art neural retrieval system, DPR (Karpukhin et al., 2020), while relying solely on interpretable, symbolic retrieval operations. This suggests new challenges for future work, e.g., involving hybrid architectures and policy synthesis.¹ ¹We open-source the code and trained checkpoints for both agents: anonymized during review. 2 SELF-SUPERVISED LEARNING FOR INTERACTIVE SEARCH. It has been a powerful vision for more than 20 years to design search engines that are intuitive and simple to use. Despite their remarkable success, search engines are not perfect and may not yield the most relevant result(s) in one shot. This is particularly true for rare and intrinsically difficult queries, which may require interactive exploration by the user to be answered correctly and exhaustively. Contextual query refinement is a common technique (Jansen et al., 2009), even among children (Rutter et al., 2015), used to improve search by combining evidence from previous results and background knowledge (Huang & Efthimiadis, 2009).
Such refinements often rely on inspecting result snippets and titles or on skimming the content of top-ranked documents . This process is iterative and may be repeated to produce a sequence of queries q0 , q1 , . . . , qT until ( optimistically ) a satisfactory answer is found . It seems natural to mimic this interactive process by a search agent , which learns the basic step of generating a follow-up query from previous queries and their search results . Furthermore , it is noteworthy how power users apply dedicated search operators and sophisticated investigative strategies to solve deep search puzzles ( Russell , 2019 ) . In particular , unary operators offer a great deal of fine-grained control and transparency and as such are highly effective in expert hands . We concentrate on three operators : ‘ + ’ , which limits results to documents that contain a specific term , ‘ - ’ which excludes results that contain the term , and ‘ ∧i ’ which boosts a term weight in the BM25 score computation by a factor i ∈ R. For instance – see also Figure 1 – the query ’ who won the us open ’ , may lead to both tennis and golf-related results . An expert searcher could zero-in on the tennis intent by excluding golf-related terms and boosting tennis-related ones . As we show in this paper , these operators are also pivotal in designing interactive search agents . 2.1 RESULT AGGREGATION FOR CONTEXTUAL REFINEMENT . Web searchers expect the best answer to be among the top two hits on the first results page ( Hearst , 2009 , §5 ) and pay marginal attention to the bottom half of the 10 blue links ( Granka et al. , 2004 ; Joachims et al. , 2005 ; Nielsen & Pernice , 2009 ; Strzelecki , 2020 ) . Likewise , a search agent considers only the top k documents returned by the search engine at every step , where k = 5 . During a search session the agent maintains a list of the top-k documents overall , which is returned at the end . To aggregate results we use a machine reader ( MR , cf . 
(Rajpurkar et al., 2016)). Specifically, we use a DPR-like reader/passage scorer (Karpukhin et al., 2020), which builds upon a pre-trained BERT model. Beyond identifying the most promising answer span within each result document d, the system also estimates the probability of d containing the (unspecified) answer, P(d ∋ ANSWER | q) ∈ [0, 1]. This probability can be viewed as a Passage Score (PS) that induces a calibrated ranking across all result documents within a session. An observation representing the session at any given step is built by extracting a fixed-length token window centered at the answer span predicted by the reader for each document. In addition, we include the document titles. Finally, the query tokens and refinements describing q_t are also included. This leads to a segmented observation token sequence o_t which is truncated to length ≤ 512, a common input length for pre-trained transformer-based LMs (cf. Appendix B for details and examples). We then use BERT or T5 to produce an embedding s_t from which the search agent will generate the next query. If we denote the result set for q_t by D_t, then we get diagrammatically: q_0, …, q_t →(search engine) D_0, …, D_t →(MR/PS) o_t [observation] →(LM) s_t [encoding] →(agent) q_{t+1} [generation] (1). We focus on the case where q_{t+1} is obtained from q_t through augmentation. This may add a keyword w ∈ Σ_idx, where Σ_idx is the search index vocabulary, with the usual disjunctive search engine semantics, or a structured search term formed by the use of unary operators (‘+’, ‘-’, ‘∧’) and fields. 2.2 ROCCHIO QUERY EXPANSIONS. In the absence of training sessions from human expert users, we propose to generate synthetic search sessions in a self-supervised manner, making use of a set of question-answer pairs (q, a).
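Diagram (1) amounts to a loop that alternates retrieval, reader-based aggregation, and query generation. A minimal sketch, assuming hypothetical `search`, `reader` (standing in for the MR/PS scorer) and `agent` callables; this is a toy illustration of the control flow, not the paper's system:

```python
def run_session(q0, search, reader, agent, k=5, max_steps=20):
    """One interactive search session following diagram (1).

    `search`, `reader` and `agent` are hypothetical stand-ins for
    BM25, the DPR-style passage scorer, and the LM policy.
    """
    queries, top_docs = [q0], []
    for _ in range(max_steps):
        docs = search(queries[-1])
        if not docs:
            break
        # Keep the top-k documents seen so far, ranked by passage score.
        top_docs = sorted(set(top_docs) | set(docs),
                          key=reader, reverse=True)[:k]
        observation = (queries[-1], tuple(top_docs))  # o_t
        refinement = agent(observation)               # Δq_t, or None to stop
        if refinement is None:
            break
        queries.append(queries[-1] + " " + refinement)  # q_{t+1} = q_t Δq_t
    return queries[-1], top_docs
```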
We initialize q_0 = q and aim to find a sequence of refinements that make progress towards identifying documents containing the answer a, based on a reward function q_t ↦ D_t ↦ r_t ∈ [0, 1] (cf. §4). A query is not refined further if either t = 20 (maximal length) or no score-increasing refinement can be found. To create candidate refinements, we make use of the idea of relevance feedback as suggested in Rocchio (1971). An elementary refinement – called a Rocchio expansion – then takes the form q_{t+1} := q_t Δq_t, Δq_t := [ + | − | ∧i ] [ TITLE | CONTENT ] w_t, w_t ∈ Σ_t := Σ_t^q ∪ Σ_t^τ ∪ Σ_t^α ∪ Σ_t^β (2), where i is the boosting coefficient and Σ_t refers to the set of terms accessible to the agent, i.e., terms that occur in the top PS-ranked session documents. We use superscripts to refer to the vocabulary of the question (q), titles (τ), answers (α) or bodies (β) of documents in o_t. Note that adding terms ∉ Σ_t would make refinements difficult to reproduce for an agent and would thus provide supervision of low utility. Another aspect of creating sessions as described above has to do with the search complexity of finding optimal sequences of Rocchio expansions. We consider q* = q + a as the “ideal” query, whose results define the vocabulary Σ*. For efficiency reasons, we further constrain the terms to be added via exact matches, term boosting or term exclusions by defining the constrained dictionaries Σ↑_t = Σ_t ∩ Σ*, Σ↓_t = Σ_t − Σ* (3). This means it is possible to upgrade accessible terms w_t to exact matches or weight boosting if they also occur in the ideal result set (w_t ∈ Σ↑_t), and to exclude accessible terms if they are not present in the ideal results (w_t ∈ Σ↓_t). We have found experimentally that this leads to a good trade-off between the quality of Rocchio expansions and the search effort needed to find them. The search for sequences of Rocchio expansions is done heuristically.
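Equation (3) is a simple set split of the accessible vocabulary against the ideal one. A sketch, where the operator strings (`+w`, `-w`, `w^i`) follow the unary-operator grammar above; the string formatting is a hypothetical simplification:

```python
def constrained_dicts(accessible, ideal):
    """Split the accessible vocabulary Σ_t against the ideal one Σ* (Eq. 3)."""
    up = accessible & ideal    # Σ↑_t = Σ_t ∩ Σ*: may become exact match / boost
    down = accessible - ideal  # Σ↓_t = Σ_t − Σ*: may be excluded
    return up, down

def candidate_ops(accessible, ideal, boost=2):
    """Enumerate the allowed refinement operators for one step (toy)."""
    up, down = constrained_dicts(accessible, ideal)
    ops = [f"+{w}" for w in sorted(up)]           # exact-match upgrade
    ops += [f"{w}^{boost}" for w in sorted(up)]   # term boosting
    ops += [f"-{w}" for w in sorted(down)]        # term exclusion
    return ops
```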
More details, pseudo-code illustrating the procedure, and examples can be found in §5 and Appendices A and G. 2.3 SELF-SUPERVISED T5 AGENT. We propose to train a generative search agent in a supervised manner by making use of synthetic search sessions generated by Rocchio expansions. We use T5, a pretrained transformer encoder-decoder model which achieves state-of-the-art results on multiple NLU tasks. As a search agent, T5 predicts a single new search expansion from an observed state. In the spirit of everything-is-string-prediction, both state and expansions are represented as plain strings. See Appendix B for a full example. Our T5 agent is trained via Behavioral Cloning (BC) (Michie, 1990). We treat each step in a Rocchio session as a single training example. As is common in sequence prediction tasks, we use the cross-entropy loss for optimization. BC is perhaps the simplest form of Imitation Learning (IL) and has proven effective in a variety of application domains (Sharma et al., 2018; Rodríguez-Hernandez et al., 2019). In our query refinement task, it allows the agent to inherit the expressive power of the Rocchio query expansions and, differently from other IL approaches (Ross et al., 2011; Ho & Ermon, 2016; Ding, 2020), requires only offline interactions with the search engine. Crucially, this enables scaling to the large action spaces and model sizes typical of recent LMs. Our T5 agent can also be described as a Decision Transformer with fixed max return (Chen et al., 2021). At test time, we start with the initial query and incrementally add new expansions, querying the trained T5 model at every step. We then use the refined query to retrieve new documents and continue until either the set of new documents is empty or we reach the maximum number of steps. Throughout the session, we maintain the top-5 documents among all those retrieved.
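Treating each Rocchio-session step as one seq2seq training example can be sketched as below. The flat string layout is a hypothetical simplification of the format in Appendix B, not the exact serialization used by the authors:

```python
def bc_examples(session):
    """Turn one Rocchio session into Behavioral-Cloning training pairs.

    `session` is a list of (observation_string, expansion_string) steps;
    each step becomes one (input, target) example for the T5 agent.
    """
    examples = []
    state = ""
    for observation, expansion in session:
        # Accumulate the session context observed so far.
        state = (state + " | " + observation).strip(" |")
        examples.append({"input": state, "target": expansion})
    return examples
```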
This paper investigates a vital direction: how RL agents can learn to use search engines to find information. The authors propose a method that leverages pre-trained models conditioned on machine reading results to guide the selection of query refinements from aggregated search documents. The method performs comparably to neural retrievers on OpenQA-NQ but operates in a more interpretable way.
SP:70a8316a92eba0ce4bd8c106b228da0a4a70ece7
A Closer Look at Prototype Classifier for Few-shot Image Classification
1 INTRODUCTION. Few-shot learning is used to adapt quickly to new classes with low annotation cost. Meta-learning is a standard training procedure for tackling the few-shot learning problem, and Prototypical Network (Snell et al., 2017), a.k.a. ProtoNet, is a widely used meta-learning algorithm for few-shot learning. In ProtoNet, a prototype classifier based on meta-learning predicts the classes of unobserved objects by constructing class-specific prototypes, without adjusting hyper-parameters during meta-testing. ProtoNet has the following advantages. (1) Since the nearest neighbor method is applied to query data and class prototypes during the meta-test phase, no hyper-parameters are required in that phase. (2) Since the amount of data in the few-shot setting is small, the inference time is almost negligible. (3) The classifiers can quickly adapt to new environments because they do not have to be re-trained on the support set when new classes appear. The generalization bound of ProtoNet in relation to the number of shots in a support set has been studied (Cao et al., 2020). The bound suggests that the performance of ProtoNet depends on the ratio of the between-class variance to the within-class variance of the features of a support set extracted using the meta-trained model. There have been studies on training a new linear classifier on the features extracted using a pretrained model without meta-learning, which can perform comparably with meta-learned models (Chen et al., 2019; Tian et al., 2020). We call this approach the linear-evaluation-based approach. In these studies, the models are trained on the standard classification problem, i.e., with cross-entropy loss after a linear projection from the embedding space to the class-probability space. The linear-evaluation-based approach has the following advantages over meta-learning. (1) Training converges faster than meta-learning.
(2) Implementation is simpler. (3) Meta-learning decreases in performance if the number of shots does not match between meta-training and meta-testing (Cao et al., 2020); the linear-evaluation-based approach does not need to take this into account. However, the linear-evaluation-based approach requires retraining a linear classifier every time a new class appears. In contrast, a prototype classifier can be applied to any trained feature extractor and does not require model learning in the testing phase. Therefore, a prototype classifier can be a practical and useful first step for few-shot learning problems. In order to avoid meta-learning during the training phase and linear evaluation during the testing phase, we focus on using a prototype classifier in the testing phase while training models in a standard classification manner. As we discuss in section 4, we found that when we directly constructed prototypes from the feature vectors extracted using pretrained models and applied the nearest neighbor method as in the testing phase of ProtoNet, this does not perform as well as the linear-evaluation-based approach. We hypothesize that the reason is the difference between the loss functions of ProtoNet and of the pretrained models. As described in section 3, if we consider a prototype as a pseudo sample average of the features in each class, the loss function of ProtoNet can be considered to have a regularizing effect that pulls features closer to the sample average of their class. Since standard classification training computes cross-entropy loss with an additional linear projection to make the features linearly separable, the loss function does not have such an effect and can cause large within-class variance. Figure 1 shows a scatter plot of the features extracted using a neural network with two-dimensional output trained on CIFAR-10 with ProtoNet (1a) and cross-entropy loss with a linear projection layer (1b).
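The procedure evaluated here (class prototypes as per-class means of pretrained features, then nearest-prototype prediction as in ProtoNet's meta-test phase) can be sketched with NumPy. A toy illustration, not the paper's experimental code:

```python
import numpy as np

def prototypes(features, labels):
    """Class prototypes = per-class means of the support features."""
    classes = np.unique(labels)
    return classes, np.stack([features[labels == c].mean(axis=0)
                              for c in classes])

def predict(queries, classes, protos):
    """Nearest-prototype rule: assign each query to its closest prototype."""
    # Squared Euclidean distances, shape (n_queries, n_classes).
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]
```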
This figure implies that the features extracted using a model trained in a standard classification manner distribute away from the origin and cause large within-class variance along the direction of the norm of the class mean vectors, while those of ProtoNet are more centered on their class means. This phenomenon is also observed in the face recognition literature (Wen et al., 2016; Liu et al., 2017; Wang et al., 2018; Deng et al., 2019). We now focus on the theoretical analysis of a prototype classifier. A recent study (Cao et al., 2020) analyzed an upper bound on the risk of a prototype classifier. The bound depends on the number of shots of a support set, the between-class variance, and the within-class variance. However, the bound requires the class-conditional distribution of features to be Gaussian and to have the same covariance matrix among classes. In addition, since the bound does not depend on the norm of the feature vectors, it is not clear from the bound what feature-transformation method can lead to performance improvement. Thus, we derive a novel bound for a prototype classifier. Our contributions are threefold. 1. We relax the assumptions; specifically, the bound does not require the features to follow any specific distribution, and the covariance matrices do not have to be the same among classes. 2. We clarify the effect of the variance of the norm of the feature vectors on the performance of a prototype classifier. 3. We investigate the effectiveness of reducing the variance of the norm empirically. 2 RELATED WORK. We summarize related work on prototype classifiers with meta-learning, the linear-evaluation-based approach without meta-learning, and theoretical analyses related to the few-shot learning problem.
A prototype classifier with meta-learning On the basis of the hypothesis that features well distinguished in the training phase are also useful for classifying new classes, constructing one or multiple prototypes for classifying unseen examples is a widely used approach (Vinyals et al., 2016; Snell et al., 2017; Pahde et al., 2021; Ji et al., 2021; Sung et al., 2018; Allen et al., 2019; Doersch et al., 2020; Qi et al., 2018). Certain algorithms compute similarities between multiple prototypes and unseen examples by using their own modules, such as an attention mechanism (Vinyals et al., 2016; Doersch et al., 2020), a relation network (Sung et al., 2018), reweighting mechanisms that take between-class and within-class interaction into account (Ji et al., 2021), and latent clusters (Allen et al., 2019). Prototypes are also constructed in a multi-modal way (Pahde et al., 2021). Another line of research transforms the space of extracted features into a better distinguishable space (Simon et al., 2020; Yoon et al., 2019; Das et al., 2020; Das & Lee, 2020) or takes the variance of features into account (Bateni et al., 2020). Because of its convenience, the prototype-classifier approach has been adopted in other domains such as semantic segmentation (Dong & Xing, 2018), text classification (Sun et al., 2019), and speech recognition (Wang et al., 2019a). Update-based meta-learning In contrast to the approach with a prototype classifier and meta-learning, in update-based meta-learning approaches, model parameters are adjusted in the test phase so that a model can adapt to new classes. Model-agnostic meta-learning (MAML) and its variants (Finn et al., 2017; 2018; Rajeswaran et al., 2017) search for good initialization parameters that adapt to new classes with a few labeled data and a few update steps of the parameters.
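The MAML-style adaptation just described, a few gradient steps from a shared initialization, can be illustrated on a scalar task. `grad` stands in for the task-specific loss gradient; this is a toy sketch of the inner loop only, not the full meta-learning algorithm:

```python
def adapt(theta0, grad, lr=0.1, steps=3):
    """MAML-style inner loop: a few gradient updates from the shared
    initialization theta0 toward a new task (toy scalar version)."""
    theta = theta0
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta
```

In full MAML the outer loop would then update theta0 so that this adapted theta performs well across tasks.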
Another approach involves learning an effective update rule for the parameters of a base-learner model through a sequence of training episodes (Bertinetto et al., 2019; Lee et al., 2019). Both approaches require additional learning of hyper-parameters and training time; thus, they prevent quick adaptation to new classes. Linear-evaluation-based approach without meta-learning Interestingly, recent studies have shown that training a new linear classifier with features extracted using a model trained with cross-entropy loss on a base dataset performs comparably with meta-learning-based methods (Chen et al., 2019; Wang et al., 2019b). More effective methods for training a new classifier in few-shot settings have been proposed (Yang et al., 2021; Phoo & Hariharan, 2021), such as calibrating the distribution generated by a support set (Yang et al., 2021), self-supervised learning on query data (Phoo & Hariharan, 2021), and distilling knowledge to obtain better embeddings (Tian et al., 2020). Liu et al. (2020) focus on the pretraining phase and found that a negative margin in the cross-entropy loss helps improve performance in few-shot settings. However, similar to update-based meta-learning, these studies require additional hyper-parameters and training to apply to new classes; by addressing these points, we would obtain an alternative method that is easier and more convenient to use. Theoretical analysis of few-shot learning Even though much improvement has been made empirically in few-shot learning, theoretical analysis is scarce. In the context of meta-learning, Du et al. (2021) provided a risk bound on the meta-testing phase that is related to the number of meta-training data and meta-testing data. Lucas et al. (2021) derived information-theoretic lower bounds on minimax rates of convergence for algorithms that are trained on data from multiple sources and tested on novel data. Cao et al.
(2020) derived a new bound on a prototype classifier and theoretically demonstrated that a mismatch in the number of shots in a support set between meta-training and meta-testing degrades the performance of prototypical networks, which had previously only been observed experimentally. However, their bound depends on several assumptions: the class-conditional distributions of features are Gaussian and have the same covariance matrix among classes. In contrast, we derive a novel bound that does not depend on any specific distribution. 3 THEORETICAL ANALYSIS OF PROTOTYPE CLASSIFIER IN TERMS OF VARIANCE OF NORM OF FEATURE VECTORS. In this section, we first formulate our problem setting and point out the drawbacks of the current theoretical analysis of a prototype classifier. Next, we provide our novel bound for a prototype classifier, which relates to the variance of the norm of the feature vectors. Finally, we list several methods that can improve the performance of a prototype classifier based on our bound. 3.1 PROBLEM SETTING. Let Y be a space of classes, τ a probability distribution over Y, X a space of input data, D a probability distribution over X, and D_y a probability distribution over X given a class y. We define D_y^{⊗n} = ∏_{i=1}^{n} D_y and D^{⊗nk} ≡ ∏_{i=1}^{n} D_i^{⊗k}. We sample N classes from τ to form the N-way classification problem. Denote by K the number of annotated data in each class, and by x ∈ X, y ∈ Y an input datum and its class, respectively. We define the set of support data of class c sampled from τ as S_c = { x_i | (x_i, y_i) ∈ X × Y ∧ y_i = c }_{i=1}^{K}, and the set of support data in the N-way K-shot classification problem as S = ⋃_{c=1}^{N} S_c. Suppose a feature extractor computes a function ϕ : X → ℝ^D, where D is the number of embedding dimensions. ϕ(S_c) is defined by ϕ(S_c) = (1/K) ∑_{x∈S_c} ϕ(x). Let Φ be the space of extractor functions ϕ.
Denote by M : Φ × X × (X × Y)^{NK} → ℝ^N a prototype classifier function that computes the probability of input x belonging to class c as follows: M(ϕ, x, S)_c = p_M(y = c | x, S, ϕ) = exp(−∥ϕ(x) − ϕ(S_c)∥²) / ∑_{l=1}^{N} exp(−∥ϕ(x) − ϕ(S_l)∥²) (1), where ∥v∥² = ∑_{d=1}^{D} (v^{(d)})² and v^{(d)} is the d-th dimension of vector v. The prediction for an input x, denoted by ŷ ∈ Y, is computed by taking the argmax of M(ϕ, x, S), i.e., ŷ = argmax M(ϕ, x, S). We denote by E_{z∼q(z)}[g(z)] the expectation of g(z) over z distributed as q(z), and we simply write E_z[g(z)] when the distribution q(z) is clear from context. We define Var_{z∼q(z)}[g(z)] as the variance of g(z) over z distributed as q(z). With I denoting the indicator function, we define the expected risk R_M of a prototype classifier as R_M(ϕ) = E_{S∼D^{⊗nk}} E_{c∼τ} E_{x∼D_c} [ I[ argmax M(ϕ, x, S) ≠ c ] ] (2). For simplicity, we now discuss the binary classification setting. We show the multi-class case in Appendix A.5 due to lack of space. Let c₁ and c₂ denote any pair of classes sampled from τ. We consider a query datum x belonging to class c₁ and support sets S consisting of class c₁'s support set and c₂'s support set. Then, equation 2 is written as follows: R_M(ϕ) = E_{S∼D^{⊗2k}} E_{c₁∼τ} E_{x∼D_{c₁}} [ I[ argmax M(ϕ, x, S_{c₁} ∪ S_{c₂}) ≠ c₁ ] ] (3).
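Equation (1) is a softmax over negative squared distances to the class prototypes. A direct NumPy transcription (toy sketch, with a max-shift for numerical stability):

```python
import numpy as np

def prototype_probs(phi_x, protos):
    """Eq. (1): p_M(y = c | x, S, ϕ) via softmax over −squared distances.

    phi_x: embedded query ϕ(x), shape (D,); protos: ϕ(S_c) rows, shape (N, D).
    """
    d2 = ((phi_x - protos) ** 2).sum(axis=1)  # ∥ϕ(x) − ϕ(S_c)∥² per class
    logits = -d2
    p = np.exp(logits - logits.max())         # shift for numerical stability
    return p / p.sum()
```

The prediction ŷ is then simply `np.argmax(prototype_probs(phi_x, protos))`.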
This paper analyzes why a prototype classifier works well without fine-tuning or meta-learning. The features are trained with cross-entropy loss and a linear projection layer; at test time, a prototype classifier is used. The paper derives a novel generalization bound for the prototype classifier and shows that focusing on the variance of the norm of the feature vectors can improve performance. The proposed upper bound is a modification of (Cao et al., 2020): it does not require the features to follow any specific distribution, and the covariance matrices do not have to be the same among classes. The authors experimentally investigated several normalization methods (L2-N, V-N, LDA, EST, EST-L2-N) for minimizing the variance of the norm.
SP:f94a94abd88e96b574699924cdedc5c81f9a30ac
A Closer Look at Prototype Classifier for Few-shot Image Classification
1 INTRODUCTION. Few-shot learning is used to adapt quickly to new classes with low annotation cost. Meta-learning is a standard training procedure for tackling the few-shot learning problem, and Prototypical Network (Snell et al., 2017), a.k.a. ProtoNet, is a widely used meta-learning algorithm for few-shot learning. In ProtoNet, a prototype classifier based on meta-learning predicts the classes of unobserved objects by constructing class-specific prototypes, without adjusting hyper-parameters during meta-testing. ProtoNet has the following advantages. (1) Since the nearest neighbor method is applied to query data and class prototypes during the meta-test phase, no hyper-parameters are required in that phase. (2) Since the amount of data in the few-shot setting is small, the inference time is almost negligible. (3) The classifiers can quickly adapt to new environments because they do not have to be re-trained on the support set when new classes appear. The generalization bound of ProtoNet in relation to the number of shots in a support set has been studied (Cao et al., 2020). The bound suggests that the performance of ProtoNet depends on the ratio of the between-class variance to the within-class variance of the features of a support set extracted using the meta-trained model. There have been studies on training a new linear classifier on the features extracted using a pretrained model without meta-learning, which can perform comparably with meta-learned models (Chen et al., 2019; Tian et al., 2020). We call this approach the linear-evaluation-based approach. In these studies, the models are trained on the standard classification problem, i.e., with cross-entropy loss after a linear projection from the embedding space to the class-probability space. The linear-evaluation-based approach has the following advantages over meta-learning. (1) Training converges faster than meta-learning.
(2) Implementation is simpler. (3) Meta-learning decreases in performance if the number of shots does not match between meta-training and meta-testing (Cao et al., 2020); the linear-evaluation-based approach does not need to take this into account. However, the linear-evaluation-based approach requires retraining a linear classifier every time a new class appears. In contrast, a prototype classifier can be applied to any trained feature extractor and does not require model learning in the testing phase. Therefore, a prototype classifier can be a practical and useful first step for few-shot learning problems. In order to avoid meta-learning during the training phase and linear evaluation during the testing phase, we focus on using a prototype classifier in the testing phase while training models in a standard classification manner. As we discuss in section 4, we found that when we directly constructed prototypes from the feature vectors extracted using pretrained models and applied the nearest neighbor method as in the testing phase of ProtoNet, this does not perform as well as the linear-evaluation-based approach. We hypothesize that the reason is the difference between the loss functions of ProtoNet and of the pretrained models. As described in section 3, if we consider a prototype as a pseudo sample average of the features in each class, the loss function of ProtoNet can be considered to have a regularizing effect that pulls features closer to the sample average of their class. Since standard classification training computes cross-entropy loss with an additional linear projection to make the features linearly separable, the loss function does not have such an effect and can cause large within-class variance. Figure 1 shows a scatter plot of the features extracted using a neural network with two-dimensional output trained on CIFAR-10 with ProtoNet (1a) and cross-entropy loss with a linear projection layer (1b).
This figure implies that the features extracted using a model trained in a standard classification manner distribute away from the origin and cause large within-class variance along the direction of the norm of the class mean vectors, while those of ProtoNet are more centered on their class means. This phenomenon is also observed in the face recognition literature (Wen et al., 2016; Liu et al., 2017; Wang et al., 2018; Deng et al., 2019). We now focus on the theoretical analysis of a prototype classifier. A recent study (Cao et al., 2020) analyzed an upper bound on the risk of a prototype classifier. The bound depends on the number of shots of a support set, the between-class variance, and the within-class variance. However, the bound requires the class-conditional distribution of features to be Gaussian and to have the same covariance matrix among classes. In addition, since the bound does not depend on the norm of the feature vectors, it is not clear from the bound what feature-transformation method can lead to performance improvement. Thus, we derive a novel bound for a prototype classifier. Our contributions are threefold. 1. We relax the assumptions; specifically, the bound does not require the features to follow any specific distribution, and the covariance matrices do not have to be the same among classes. 2. We clarify the effect of the variance of the norm of the feature vectors on the performance of a prototype classifier. 3. We investigate the effectiveness of reducing the variance of the norm empirically. 2 RELATED WORK. We summarize related work on prototype classifiers with meta-learning, the linear-evaluation-based approach without meta-learning, and theoretical analyses related to the few-shot learning problem.
A prototype classifier with meta-learning On the basis of the hypothesis that features that are well distinguished in the training phase are also useful for classifying new classes, constructing one or multiple prototypes for classifying unseen examples is a widely used approach (Vinyals et al., 2016; Snell et al., 2017; Pahde et al., 2021; Ji et al., 2021; Sung et al., 2018; Allen et al., 2019; Doersch et al., 2020; Qi et al., 2018). Certain algorithms compute similarities between multiple prototypes and unseen examples with their own modules, such as attention mechanisms (Vinyals et al., 2016; Doersch et al., 2020), relation networks (Sung et al., 2018), reweighting mechanisms that take between-class and within-class interactions into account (Ji et al., 2021), and latent clusters (Allen et al., 2019). Prototypes have also been constructed in a multi-modal way (Pahde et al., 2021). Another line of research transforms the space of extracted features into a more distinguishable space (Simon et al., 2020; Yoon et al., 2019; Das et al., 2020; Das & Lee, 2020) or takes the variance of features into account (Bateni et al., 2020). Because of its convenience, the prototype-classifier approach has been adapted to other domains such as semantic segmentation (Dong & Xing, 2018), text classification (Sun et al., 2019), and speech recognition (Wang et al., 2019a). Update-based meta-learning In contrast to the approach combining a prototype classifier with meta-learning, update-based meta-learning approaches adjust model parameters in the test phase so that the model can adapt to new classes. Model-Agnostic Meta-Learning (MAML) and its variants (Finn et al., 2017; 2018; Rajeswaran et al., 2017) search for good initialization parameters that adapt to new classes with a few labeled data and a few update steps of the parameters.
Another approach involves learning an effective update rule for the parameters of a base-learner model through a sequence of training episodes (Bertinetto et al., 2019; Lee et al., 2019). Both approaches require additional learning of hyper-parameters and additional training time; thus, they prevent quick adaptation to new classes. Linear-evaluation-based approach without meta-learning Interestingly, recent studies have shown that training a new linear classifier on features extracted with a model trained with cross-entropy loss on the base dataset performs comparably with meta-learning-based methods (Chen et al., 2019; Wang et al., 2019b). More effective methods for training a new classifier in few-shot settings have been proposed, such as calibrating the distribution generated by a support set (Yang et al., 2021), self-supervised learning on query data (Phoo & Hariharan, 2021), and distilling knowledge to obtain better embeddings (Tian et al., 2020). Liu et al. (2020) focused on the pretraining phase and found that a negative margin in the cross-entropy loss helps improve performance in few-shot settings. However, similar to update-based meta-learning, these studies require additional hyper-parameters and training to apply the model to new classes; addressing these points would yield an alternative method that is easier and more convenient to use. Theoretical analysis of few-shot learning Even though much improvement has been made empirically in few-shot learning, theoretical analysis is scarce. In the context of meta-learning, Du et al. (2021) provided a risk bound on the meta-testing phase that is related to the number of meta-training data and meta-testing data. Lucas et al. (2021) derived information-theoretic lower bounds on minimax rates of convergence for algorithms that are trained on data from multiple sources and tested on novel data. Cao et al.
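The negative margin mentioned above amounts to shifting the target logit before the softmax. A minimal numpy sketch of this margin-modified cross-entropy (our own illustration; the toy logits and margin values are arbitrary, and the real method applies the margin to cosine or dot-product logits during pretraining):

```python
import numpy as np

def margin_softmax_ce(logits, target, margin):
    """Cross-entropy where the target class logit is shifted by -margin
    before the softmax. margin > 0 is the usual large-margin softmax;
    Liu et al. (2020) report that margin < 0 helps few-shot transfer."""
    z = logits.astype(float).copy()
    z[target] -= margin          # shift only the target class logit
    z -= z.max()                 # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

logits = np.array([2.0, 1.0, 0.1])

# A positive margin makes the loss harder (larger); a negative margin
# makes it easier (smaller) than the plain cross-entropy.
plain = margin_softmax_ce(logits, target=0, margin=0.0)
pos = margin_softmax_ce(logits, target=0, margin=0.5)
neg = margin_softmax_ce(logits, target=0, margin=-0.5)
```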
(2020) derived a new bound for a prototype classifier and theoretically demonstrated that a mismatch in the number of shots in a support set between meta-training and meta-testing degrades the performance of prototypical networks, which had previously been observed only experimentally. However, their bound rests on several assumptions: the class-conditional distributions of features are Gaussian and share the same covariance matrix among classes. In contrast, we derive a novel bound that does not depend on any specific distribution. 3 THEORETICAL ANALYSIS OF PROTOTYPE CLASSIFIER IN TERMS OF VARIANCE OF NORM OF FEATURE VECTORS. In this section, we first formulate our problem setting and point out the drawbacks of the existing theoretical analysis of a prototype classifier. Next, we present our novel bound for a prototype classifier, which is related to the variance of the norm of the feature vectors. Finally, we list several methods that can improve the performance of a prototype classifier based on our bound. 3.1 PROBLEM SETTING. Let $\mathcal{Y}$ be the space of classes, $\tau$ a probability distribution over $\mathcal{Y}$, $\mathcal{X}$ the space of input data, $\mathcal{D}$ a probability distribution over $\mathcal{X}$, and $\mathcal{D}_y$ a probability distribution over $\mathcal{X}$ given a class $y$. We define $\mathcal{D}^{\otimes n}_y$ and $\mathcal{D}^{\otimes nk}$ by $\mathcal{D}^{\otimes n}_y = \prod_{i=1}^{n} \mathcal{D}_y$ and $\mathcal{D}^{\otimes nk} \equiv \prod_{i=1}^{n} \mathcal{D}^{\otimes k}_i$, respectively. We sample $N$ classes from $\tau$ to form the $N$-way classification problem. Denote by $K$ the number of annotated data in each class, and by $x \in \mathcal{X}$, $y \in \mathcal{Y}$ an input datum and its class, respectively. We define a set of support data of class $c$ sampled from $\tau$ as $S_c = \{ x_i \mid (x_i, y_i) \in \mathcal{X} \times \mathcal{Y} \,\wedge\, y_i = c \}_{i=1}^{K}$ and the set of support data in the $N$-way $K$-shot classification problem as $S = \bigcup_{c=1}^{N} S_c$. Suppose a feature extractor computes a function $\phi : \mathcal{X} \to \mathbb{R}^{D}$, where $D$ is the number of embedding dimensions. $\phi(S_c)$ is defined by $\phi(S_c) = \frac{1}{K} \sum_{x \in S_c} \phi(x)$. Let $\Phi$ be the space of extractor functions $\phi$.
Denote by $M : \Phi \times \mathcal{X} \times (\mathcal{X} \times \mathcal{Y})^{NK} \to \mathbb{R}^{N}$ a prototype classifier function that computes the probability of input $x$ belonging to class $c$ as follows:
$$M(\phi, x, S)_c = p_M(y = c \mid x, S, \phi) = \frac{\exp\left(-\|\phi(x) - \phi(S_c)\|^2\right)}{\sum_{l=1}^{N} \exp\left(-\|\phi(x) - \phi(S_l)\|^2\right)}, \quad (1)$$
where $\|v\|^2 = \sum_{d=1}^{D} (v^{(d)})^2$ and $v^{(d)}$ is the $d$-th dimension of vector $v$. The prediction for an input $x$, denoted by $\hat{y} \in \mathcal{Y}$, is computed by taking the argmax of $M(\phi, x, S)$, i.e., $\hat{y} = \operatorname{argmax} M(\phi, x, S)$. We denote by $\mathbb{E}_{z \sim q(z)}[g(z)]$ the operation of taking the expectation of $g(z)$ over $z$ distributed as $q(z)$, and we simply write $\mathbb{E}_{z}[g(z)]$ when the distribution of $z$ is clear from context. We define $\mathrm{Var}_{z \sim q(z)}[g(z)]$ as the operation of taking the variance of $g(z)$ over $z$ distributed as $q(z)$. With $\mathbb{I}$ denoting the indicator function, we define the expected risk $R_M$ of a prototype classifier as
$$R_M(\phi) = \mathbb{E}_{S \sim \mathcal{D}^{\otimes nk}} \mathbb{E}_{c \sim \tau} \mathbb{E}_{x \sim \mathcal{D}_c}\left[ \mathbb{I}\left[ \operatorname{argmax} M(\phi, x, S) \neq c \right] \right]. \quad (2)$$
For simplicity, we now discuss the binary classification setting; we show the multi-class case in Appendix A.5 due to lack of space. Let $c_1$ and $c_2$ denote any pair of classes sampled from $\tau$. We consider that a query datum $x$ belongs to class $c_1$ and that the support set $S$ consists of class $c_1$'s support set and class $c_2$'s support set. Then, equation 2 is written as follows:
$$R_M(\phi) = \mathbb{E}_{S \sim \mathcal{D}^{\otimes 2k}} \mathbb{E}_{c_1 \sim \tau} \mathbb{E}_{x \sim \mathcal{D}_{c_1}}\left[ \mathbb{I}\left[ \operatorname{argmax} M(\phi, x, S_{c_1} \cup S_{c_2}) \neq c_1 \right] \right]. \quad (3)$$
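The prototype classifier of equation (1) is straightforward to implement. A minimal numpy sketch (our own illustration; the toy support features, shot count, and dimensions are arbitrary):

```python
import numpy as np

def prototype_classifier(phi_x, support_feats):
    """Equation (1): softmax over negative squared Euclidean distances
    between a query feature and the per-class prototypes.

    phi_x: (D,) query feature.
    support_feats: dict mapping class label -> (K, D) support features.
    Returns (classes, probs).
    """
    classes = sorted(support_feats)
    # phi(S_c): the prototype is the mean of the class's support features.
    protos = np.stack([support_feats[c].mean(axis=0) for c in classes])
    logits = -((protos - phi_x) ** 2).sum(axis=1)
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return classes, probs

rng = np.random.default_rng(0)
support = {
    0: rng.normal(loc=0.0, scale=0.1, size=(5, 4)),   # 5-shot, D = 4
    1: rng.normal(loc=3.0, scale=0.1, size=(5, 4)),
}
classes, probs = prototype_classifier(np.zeros(4), support)
pred = classes[int(np.argmax(probs))]   # query sits near class 0's prototype
```

Taking the argmax of these probabilities is exactly the nearest-prototype rule, since the softmax is monotone in the negative squared distance.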
The submission introduces a generalization bound for Prototypical Networks that does not depend on assumptions on the class-conditional distributions. This bound decreases as the variance in norm of the feature vectors decreases, and the authors empirically investigate five feature transformation approaches that are meant to lower this variance: L2-normalization (L2-norm), variance-normalization, Linear Discriminant Analysis (LDA), Embedding Space Transformation (EST), and EST+L2-norm. Experimental results are reported on the mini-ImageNet, tiered-ImageNet, CIFAR-FS, and FC100 benchmarks in the 1-shot and 5-shot settings, using ResNet-12 or ResNet-18 as the backbone architecture. The five feature transformation approaches are applied on top of the Baseline and Baseline++ learners before using the extracted features to build a prototypical classifier. The submission compares against the following: - Prototypical Networks - Baseline and Baseline++ with a linear classifier on top of the extracted features - Baseline and Baseline++ with Wang et al.'s centering+L2-norm approach on top of the extracted features. - Baseline and Baseline++ with no additional feature transformation and a prototypical classifier on top of the extracted features. The submission concludes that EST+L2-norm performs best, and that its performance-boosting effect decreases as the number of examples per class increases.
A Closer Look at Prototype Classifier for Few-shot Image Classification
1 INTRODUCTION. Few-shot learning aims to adapt quickly to new classes with low annotation cost. Meta-learning is a standard training procedure for tackling the few-shot learning problem, and Prototypical Networks (Snell et al., 2017), a.k.a. ProtoNet, is a widely used meta-learning algorithm for few-shot learning. ProtoNet uses a meta-learned prototype classifier that predicts the classes of unobserved examples by constructing class-specific prototypes, without adjusting hyper-parameters during meta-testing. ProtoNet has the following advantages. (1) Since the nearest-neighbor method is applied to query data and class prototypes during the meta-test phase, no hyper-parameters are required in that phase. (2) Since the number of few-shot data is small, the inference time is almost negligible. (3) The classifier can quickly adapt to new environments because it does not have to be retrained on the support set when new classes appear. The generalization bound of ProtoNet in relation to the number of shots in a support set has been studied (Cao et al., 2020). The bound suggests that the performance of ProtoNet depends on the ratio of the between-class variance to the within-class variance of support-set features extracted with the meta-trained model. There have been studies on training a new linear classifier on features extracted with a pretrained model without meta-learning, which can perform comparably with meta-learned models (Chen et al., 2019; Tian et al., 2020). We call this the linear-evaluation-based approach. In these studies, the models are trained on a standard classification problem, i.e., with cross-entropy loss after a linear projection from the embedding space to the class-probability space. The linear-evaluation-based approach has the following advantages over meta-learning. (1) Training converges faster than meta-learning.
(2) Implementation is simpler. (3) Meta-learning degrades in performance if the number of shots does not match between meta-training and meta-testing (Cao et al., 2020); the linear-evaluation-based approach does not need to take this into account. However, the linear-evaluation-based approach requires retraining a linear classifier every time a new class appears. In contrast, a prototype classifier can be applied to any trained feature extractor and requires no model learning in the testing phase. Therefore, a prototype classifier can be a practical and useful first step for few-shot learning problems. To avoid both meta-learning during the training phase and linear evaluation during the testing phase, we focus on training models in a standard classification manner and using a prototype classifier in the testing phase. As we discuss in Section 4, we found that directly constructing prototypes from the feature vectors extracted with pretrained models and applying the nearest-neighbor method, as in the testing phase of ProtoNet, does not perform as well as the linear-evaluation-based approach. We hypothesize that the reason is the difference between the loss function of ProtoNet and that of the pretrained models. As described in Section 3, if we regard a prototype as a pseudo sample average of the features in each class, the loss function of ProtoNet can be seen as having a regularizing effect that pulls each feature closer to the sample average of its class. Since standard classification training computes the cross-entropy loss after an additional linear projection that only needs to make the features linearly separable, the loss has no such effect and can cause large within-class variance. Figure 1 shows a scatter plot of the features extracted with a neural network with a two-dimensional output trained on CIFAR-10 with ProtoNet (1a) and with cross-entropy loss through a linear projection layer (1b).
This figure implies that the features extracted with a model trained in a standard classification manner are distributed away from the origin, which causes large within-class variance along the direction of the class mean vectors, whereas the features of ProtoNet are more tightly centered on their class means. This phenomenon has also been observed in the face recognition literature (Wen et al., 2016; Liu et al., 2017; Wang et al., 2018; Deng et al., 2019). We now turn to the theoretical analysis of a prototype classifier. A recent study (Cao et al., 2020) analyzed an upper bound on the risk of a prototype classifier. The bound depends on the number of shots in a support set, the between-class variance, and the within-class variance. However, the bound requires the class-conditional distribution of features to be Gaussian with the same covariance matrix among classes. In addition, since the bound does not depend on the norm of the feature vectors, it is not clear from the bound which feature-transformation methods can lead to performance improvement. Thus, we derive a novel bound for a prototype classifier. Our contributions are threefold. 1. We relax the assumptions; specifically, the bound does not require the features to follow any specific distribution, and the covariance matrices do not have to be the same among classes. 2. We clarify the effect of the variance of the norm of the feature vectors on the performance of a prototype classifier. 3. We empirically investigate the effectiveness of reducing the variance of the norm. 2 RELATED WORK. We summarize related work on prototype classifiers with meta-learning, the linear-evaluation-based approach without meta-learning, and theoretical analyses related to the few-shot learning problem.
A prototype classifier with meta-learning On the basis of the hypothesis that features that are well distinguished in the training phase are also useful for classifying new classes, constructing one or multiple prototypes for classifying unseen examples is a widely used approach (Vinyals et al., 2016; Snell et al., 2017; Pahde et al., 2021; Ji et al., 2021; Sung et al., 2018; Allen et al., 2019; Doersch et al., 2020; Qi et al., 2018). Certain algorithms compute similarities between multiple prototypes and unseen examples with their own modules, such as attention mechanisms (Vinyals et al., 2016; Doersch et al., 2020), relation networks (Sung et al., 2018), reweighting mechanisms that take between-class and within-class interactions into account (Ji et al., 2021), and latent clusters (Allen et al., 2019). Prototypes have also been constructed in a multi-modal way (Pahde et al., 2021). Another line of research transforms the space of extracted features into a more distinguishable space (Simon et al., 2020; Yoon et al., 2019; Das et al., 2020; Das & Lee, 2020) or takes the variance of features into account (Bateni et al., 2020). Because of its convenience, the prototype-classifier approach has been adapted to other domains such as semantic segmentation (Dong & Xing, 2018), text classification (Sun et al., 2019), and speech recognition (Wang et al., 2019a). Update-based meta-learning In contrast to the approach combining a prototype classifier with meta-learning, update-based meta-learning approaches adjust model parameters in the test phase so that the model can adapt to new classes. Model-Agnostic Meta-Learning (MAML) and its variants (Finn et al., 2017; 2018; Rajeswaran et al., 2017) search for good initialization parameters that adapt to new classes with a few labeled data and a few update steps of the parameters.
Another approach involves learning an effective update rule for the parameters of a base-learner model through a sequence of training episodes (Bertinetto et al., 2019; Lee et al., 2019). Both approaches require additional learning of hyper-parameters and additional training time; thus, they prevent quick adaptation to new classes. Linear-evaluation-based approach without meta-learning Interestingly, recent studies have shown that training a new linear classifier on features extracted with a model trained with cross-entropy loss on the base dataset performs comparably with meta-learning-based methods (Chen et al., 2019; Wang et al., 2019b). More effective methods for training a new classifier in few-shot settings have been proposed, such as calibrating the distribution generated by a support set (Yang et al., 2021), self-supervised learning on query data (Phoo & Hariharan, 2021), and distilling knowledge to obtain better embeddings (Tian et al., 2020). Liu et al. (2020) focused on the pretraining phase and found that a negative margin in the cross-entropy loss helps improve performance in few-shot settings. However, similar to update-based meta-learning, these studies require additional hyper-parameters and training to apply the model to new classes; addressing these points would yield an alternative method that is easier and more convenient to use. Theoretical analysis of few-shot learning Even though much improvement has been made empirically in few-shot learning, theoretical analysis is scarce. In the context of meta-learning, Du et al. (2021) provided a risk bound on the meta-testing phase that is related to the number of meta-training data and meta-testing data. Lucas et al. (2021) derived information-theoretic lower bounds on minimax rates of convergence for algorithms that are trained on data from multiple sources and tested on novel data. Cao et al.
(2020) derived a new bound for a prototype classifier and theoretically demonstrated that a mismatch in the number of shots in a support set between meta-training and meta-testing degrades the performance of prototypical networks, which had previously been observed only experimentally. However, their bound rests on several assumptions: the class-conditional distributions of features are Gaussian and share the same covariance matrix among classes. In contrast, we derive a novel bound that does not depend on any specific distribution. 3 THEORETICAL ANALYSIS OF PROTOTYPE CLASSIFIER IN TERMS OF VARIANCE OF NORM OF FEATURE VECTORS. In this section, we first formulate our problem setting and point out the drawbacks of the existing theoretical analysis of a prototype classifier. Next, we present our novel bound for a prototype classifier, which is related to the variance of the norm of the feature vectors. Finally, we list several methods that can improve the performance of a prototype classifier based on our bound. 3.1 PROBLEM SETTING. Let $\mathcal{Y}$ be the space of classes, $\tau$ a probability distribution over $\mathcal{Y}$, $\mathcal{X}$ the space of input data, $\mathcal{D}$ a probability distribution over $\mathcal{X}$, and $\mathcal{D}_y$ a probability distribution over $\mathcal{X}$ given a class $y$. We define $\mathcal{D}^{\otimes n}_y$ and $\mathcal{D}^{\otimes nk}$ by $\mathcal{D}^{\otimes n}_y = \prod_{i=1}^{n} \mathcal{D}_y$ and $\mathcal{D}^{\otimes nk} \equiv \prod_{i=1}^{n} \mathcal{D}^{\otimes k}_i$, respectively. We sample $N$ classes from $\tau$ to form the $N$-way classification problem. Denote by $K$ the number of annotated data in each class, and by $x \in \mathcal{X}$, $y \in \mathcal{Y}$ an input datum and its class, respectively. We define a set of support data of class $c$ sampled from $\tau$ as $S_c = \{ x_i \mid (x_i, y_i) \in \mathcal{X} \times \mathcal{Y} \,\wedge\, y_i = c \}_{i=1}^{K}$ and the set of support data in the $N$-way $K$-shot classification problem as $S = \bigcup_{c=1}^{N} S_c$. Suppose a feature extractor computes a function $\phi : \mathcal{X} \to \mathbb{R}^{D}$, where $D$ is the number of embedding dimensions. $\phi(S_c)$ is defined by $\phi(S_c) = \frac{1}{K} \sum_{x \in S_c} \phi(x)$. Let $\Phi$ be the space of extractor functions $\phi$.
Denote by $M : \Phi \times \mathcal{X} \times (\mathcal{X} \times \mathcal{Y})^{NK} \to \mathbb{R}^{N}$ a prototype classifier function that computes the probability of input $x$ belonging to class $c$ as follows:
$$M(\phi, x, S)_c = p_M(y = c \mid x, S, \phi) = \frac{\exp\left(-\|\phi(x) - \phi(S_c)\|^2\right)}{\sum_{l=1}^{N} \exp\left(-\|\phi(x) - \phi(S_l)\|^2\right)}, \quad (1)$$
where $\|v\|^2 = \sum_{d=1}^{D} (v^{(d)})^2$ and $v^{(d)}$ is the $d$-th dimension of vector $v$. The prediction for an input $x$, denoted by $\hat{y} \in \mathcal{Y}$, is computed by taking the argmax of $M(\phi, x, S)$, i.e., $\hat{y} = \operatorname{argmax} M(\phi, x, S)$. We denote by $\mathbb{E}_{z \sim q(z)}[g(z)]$ the operation of taking the expectation of $g(z)$ over $z$ distributed as $q(z)$, and we simply write $\mathbb{E}_{z}[g(z)]$ when the distribution of $z$ is clear from context. We define $\mathrm{Var}_{z \sim q(z)}[g(z)]$ as the operation of taking the variance of $g(z)$ over $z$ distributed as $q(z)$. With $\mathbb{I}$ denoting the indicator function, we define the expected risk $R_M$ of a prototype classifier as
$$R_M(\phi) = \mathbb{E}_{S \sim \mathcal{D}^{\otimes nk}} \mathbb{E}_{c \sim \tau} \mathbb{E}_{x \sim \mathcal{D}_c}\left[ \mathbb{I}\left[ \operatorname{argmax} M(\phi, x, S) \neq c \right] \right]. \quad (2)$$
For simplicity, we now discuss the binary classification setting; we show the multi-class case in Appendix A.5 due to lack of space. Let $c_1$ and $c_2$ denote any pair of classes sampled from $\tau$. We consider that a query datum $x$ belongs to class $c_1$ and that the support set $S$ consists of class $c_1$'s support set and class $c_2$'s support set. Then, equation 2 is written as follows:
$$R_M(\phi) = \mathbb{E}_{S \sim \mathcal{D}^{\otimes 2k}} \mathbb{E}_{c_1 \sim \tau} \mathbb{E}_{x \sim \mathcal{D}_{c_1}}\left[ \mathbb{I}\left[ \operatorname{argmax} M(\phi, x, S_{c_1} \cup S_{c_2}) \neq c_1 \right] \right]. \quad (3)$$
In this paper, the authors derive a generalization bound for prototypical networks, which provides insights toward improving ProtoNets without fine-tuning, simply by normalizing the feature vectors and reducing the variance of their norm. The derivation relaxes the previous assumptions that the feature distribution and the class covariance matrices take a particular form. To justify the claim, the authors experiment with several feature transformation methods on standard few-shot benchmarks.
Clean Images are Hard to Reblur: Exploiting the Ill-Posed Inverse Task for Dynamic Scene Deblurring
1 INTRODUCTION. Motion blur is a common photographic artifact that occurs when the camera moves and the scene changes during the exposure in dynamic environments. Dynamic scene deblurring is a challenging ill-posed problem, as both the locally-varying blur kernel and the latent sharp image have to be found in a large solution space. Traditional approaches (Hirsch et al., 2011; Whyte et al., 2012; Kim et al., 2013; Kim & Lee, 2014) tried to alleviate the ill-posedness by using prior knowledge of the statistical properties of sharp images, such as gradient sparsity. Instead of using such handcrafted knowledge, recent methods take advantage of large-scale datasets as well as deep neural networks (Nah et al., 2017; Su et al., 2017; Noroozi et al., 2017; Nah et al., 2019; Shen et al., 2019). Usually, the learning is driven by minimizing the pixel-wise distance to the ground truth, e.g., L1 or L2, so that the PSNR between the deblurred output and the sharp reference is maximized. By utilizing modern ConvNet architectures and training techniques, state-of-the-art approaches (Nah et al., 2017; Tao et al., 2017; Gao et al., 2019; Yuan et al., 2020; Park et al., 2020; Chi et al., 2021) have been developed toward higher capacity and deblurring accuracy. Still, most methods tend to suffer from blurry predictions due to the regression-to-mean behavior often witnessed in ill-posed problems with large solution spaces (Ledig et al., 2017; Menon et al., 2020). To overcome the limitations of the conventional objectives, perceptual (Johnson et al., 2016) and adversarial (Ledig et al., 2017; Nah et al., 2017; Kupyn et al., 2018) loss terms from high-level semantic tasks have been introduced to improve the visual quality of deblurred results. Nevertheless, such high-level losses may not serve as optimal goals for blur removal, as low-level structural properties, e.g.
, blurriness, are not the primary features considered in their formulations. As illustrated in Figure 1, results from previous deblurring methods are still blurry to a degree, and the VGG and adversarial losses are not sufficient to obtain perceptually pleasing and sharp images across different architectures (Tao et al., 2018; Gao et al., 2019; Kupyn et al., 2019). While the deblurred images look less blurry compared with the original input, it is still possible to find nontrivial blur kernels with directional motion information. From this observation, we introduce the concept of reblurring, which amplifies the unremoved blur in the given image and reconstructs the original blur. We note that our reblurring operation aims to recover the original motion trajectory in the blurry input, rather than to synthesize arbitrary, e.g., Gaussian, blurs. Therefore, an ideally deblurred clean image is hard to reblur, as no noticeable blur can be found to amplify, making reblurring an ill-posed task. In contrast, it is straightforward to predict the original shape of blur from insufficiently deblurred images, as shown in Figure 1. We propose to use the difference between a non-ideally deblurred image and the ideal sharp image in terms of reblurring feasibility as a new optimization objective, the reblurring loss, for the image deblurring problem. The reblurring loss is realized by jointly training a pair of deblurring and reblurring modules. The reblurring module performs the inverse operation of deblurring, trying to reconstruct the original blurry image from a deblurred output. Using the property that the blurriness of a reblurred image depends on the sharpness of the deblurred result, we construct two types of loss functions. During the joint training, the supervised reblurring loss compares the amplified blur of the deblurred image with that of the sharp image.
Complementing the L1 intensity loss, the supervised reblurring loss guides the deblurring module to focus on and eliminate the remaining blur. While our training strategy is similar to the adversarial training of GANs (Goodfellow et al., 2014) in the sense that our deblurring and reblurring modules play opposite roles, the purposes and effects of the adversary are different. The reblurring loss concentrates on image blurriness regardless of image realism. Furthermore, in contrast to GAN discriminators, which are usually discarded at test time, our reblurring module can be used to facilitate a self-supervised reblurring loss. By making the deblurred image harder to reblur, the deblurring module can adaptively optimize itself without referring to the ground truth. Our reblurring loss functions provide additional optimization directives to the deblurring module and can be generally applied to any learning-based image deblurring method. With the proposed approach, we can derive sharper predictions from existing deblurring methods without modifying their architectures. We summarize our contributions as follows: • Based on the observation that clean images are hard to reblur, we propose novel loss functions for image deblurring. Our reblurring loss reflects the preference for sharper images and contributes to visually pleasing deblurring results. • At test time, the reblurring loss can be computed without a ground-truth image. We perform test-time adaptive inference via self-supervised optimization on each input. • Our method is generally applicable to any learning-based method, jointly with other loss terms. Experiments show that the reblurring loss consistently contributes to achieving state-of-the-art visual sharpness as well as LPIPS and NIQE scores across different model architectures. 2 RELATED WORKS. Image Deblurring. The classical energy optimization framework is formulated with likelihood and prior terms.
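The two loss terms described above can be sketched in a toy numpy illustration of our own. Note the assumptions: the real method uses a trained reblurring network that amplifies residual blur, which we stand in for here with a fixed 1-D box filter, and "images" are 1-D signals for brevity; only the structural properties of the losses carry over.

```python
import numpy as np

def box_reblur(img, width=5):
    # Stand-in for the learned reblurring module (an assumption of this
    # sketch: the real module is a trained network, not a fixed filter).
    kernel = np.ones(width) / width
    return np.convolve(img, kernel, mode="same")

def supervised_reblurring_loss(deblurred, sharp, reblur=box_reblur):
    # Pass both images through the reblurring module and compare:
    # residual blur in `deblurred` survives amplification and shows up
    # as a difference from the reblurred sharp reference.
    return np.abs(reblur(deblurred) - reblur(sharp)).mean()

def self_supervised_reblurring_loss(prediction, reblur=box_reblur):
    # Test-time term, no ground truth needed: a prediction is "hard to
    # reblur" when the (learned) reblurring module leaves it nearly
    # unchanged.  This fixed-point property only holds for the trained
    # module, not for the toy box filter above.
    return np.abs(reblur(prediction) - prediction).mean()

# 1-D "image": a step edge, and a blurry version of it.
sharp = np.repeat([0.0, 1.0], 32)
blurry = box_reblur(sharp, width=9)

perfect = supervised_reblurring_loss(sharp, sharp)   # 0: nothing to amplify
partial = supervised_reblurring_loss(blurry, sharp)  # > 0: residual blur found
```

In the full method, these terms are added to the usual L1 intensity loss during joint training, and the self-supervised term alone drives the test-time adaptation.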
Due to the ill-posedness of the dynamic scene deblurring problem, prior terms have been essential in alleviating the optimization ambiguity, encoding the preference among solutions. Sophisticated prior terms were carefully designed with human knowledge of natural image statistics (Levin, 2006; Cho & Lee, 2009; Hirsch et al., 2011; Whyte et al., 2012; Sun et al., 2013; Xu et al., 2013; Kim et al., 2013; Kim & Lee, 2014; Pan et al., 2016). Recently, in Li et al. (2018), a prior learned from a classifier discriminating blurry and clean images was also shown to be effective. Deep priors were also used for image deconvolution problems (Ren et al., 2020; Nan & Ji, 2020). On the other hand, deep learning methods have benefited from learning on large-scale datasets. The datasets consisting of realistic blur (Nah et al., 2017; Su et al., 2017; Noroozi et al., 2017; Nah et al., 2019; Gao et al., 2019; Jin et al., 2019; Shen et al., 2019) align the temporal centers of the blurry and sharp image pairs with high-speed cameras. Learning from such temporally aligned data relieves the ill-posedness of deblurring compared with the difficult energy optimization framework. Thus, more attention has been paid to designing CNN architectures and datasets than to designing loss terms. In the early work of Schuler et al. (2015), the alternating estimation of the blur kernel and the restored image (Cho & Lee, 2009) was adopted in a CNN architecture. In Sun et al. (2015); Gong et al. (2017), spatially varying blur kernels are estimated by assuming locally linear blur, followed by non-blind deconvolution. Later, end-to-end learning without explicit kernel estimation became popular. Motivated by the coarse-to-fine approach, a multi-scale CNN was proposed (Nah et al., 2017) to expand the receptive field efficiently, followed by scale-recurrent architectures (Tao et al., 2018; Gao et al., 2019). On the other hand, Zhang et al.
(2019); Suin et al. (2020) sequentially stacked network modules. Recently, Park et al. (2020) proposed a multi-temporal model that deblurs an image recursively. To handle spatially varying blur kernels efficiently, spatially non-uniform operations were embedded in neural networks (Zhang et al., 2018a; Yuan et al., 2020). Perceptual Image Restoration. Often, L1 or L2 losses are used at training time to achieve higher PSNR. However, such approaches suffer from blurry and over-smoothed outputs (Johnson et al., 2016; Zhang et al., 2018b; Menon et al., 2020), as the learned models predict an average of all possible solutions under the ill-posedness (Ledig et al., 2017). To deal with this issue, several studies utilize deep features of the pretrained VGG (Simonyan & Zisserman, 2014) and other networks that are more related to human perception (Johnson et al., 2016; Zhang et al., 2018b), or analyze frequency space (Tariq et al., 2020; Czolbe et al., 2020). Recent methods introduce adversarial training (Goodfellow et al., 2014) so that the outputs of the restoration models are indistinguishable from real samples (Nah et al., 2017; Nimisha et al., 2017; Ledig et al., 2017; Kupyn et al., 2018; 2019). There have also been attempts to exploit statistical properties of images and features with the contextual loss (Mechrez et al., 2018) and the projected distribution loss (Delbracio et al., 2021). Nevertheless, an inherent limitation of existing perceptual objectives is that they are not task-specialized for image restoration. For example, the VGG features are learned for high-level visual recognition, while the adversarial loss only contributes to reconstructing realistic images without considering the existence of motion blur. Therefore, blindly optimizing those terms may not yield an optimal solution in terms of image deblurring.
In practice , we observed that those objectives still tend to leave blur footprints unremoved , making it possible to estimate the original blur . Our reblurring loss is explicitly designed to improve the perceptual sharpness of deblurred images by reducing remaining blurriness and thus more suitable for deblurring , acting as a learned prior . Image Blurring . As an image could be blurred in various directions and strength , image blurring is another ill-posed problem without additional information . Thus , intrinsic or extrinsic information is often incorporated . With a non-ideally sharp image , Bae & Durand ( 2007 ) detected the small local blur kernel in the image to magnify the defocus blur for bokeh effect . On the other hand , Chen et al . ( 2018 ) estimated the kernel by computing the optical flow from the neighboring video frames . Similarly , Brooks & Barron ( 2019 ) used multiple video frames to synthesize blur . Without such external information , Zhang et al . ( 2020 ) used a generative model to synthesize many blurry images . In contrast , Bahat et al . ( 2017 ) deliberately blurred an already blurry image in many ways to find the local blur kernel . Our image reblurring concept is similar to Bae & Durand ( 2007 ) in the sense that intrinsic cue in an image is used to amplify blur . Nonetheless , our main goal is to use reblurring to provide a guide to deblurring model so that such blur cues would be better removed .
The paper proposes a novel approach to using deep networks for image deblurring. Since training with simple reconstruction losses usually leads to over-smoothed results, recent works have looked at using perceptual and adversarial losses. The paper proposes an approach similar to an adversarial loss: if an image is not entirely deblurred, it leaves within itself some tell-tale signs of the original blur. The paper proposes learning a “reblurring” network that, given a “deblurred” image, should yield the original “blurry” image, while given a truly sharp image it should output the image itself. The deblurring network is then trained, in a sense, to fool the reblurring network by generating images that would be left untouched by it. The paper also proposes a “test-time adaptation” approach.
SP:45f0ffc5e70b1eb06e541c7b0b3b042e6c54b7a6
Clean Images are Hard to Reblur: Exploiting the Ill-Posed Inverse Task for Dynamic Scene Deblurring
1 INTRODUCTION. Motion blur is a common photographic artifact that occurs when the camera moves and the scene changes during the exposure in dynamic environments. Dynamic scene deblurring is a challenging ill-posed problem, as both the locally varying blur kernel and the latent sharp image have to be found in a large solution space. Traditional approaches (Hirsch et al., 2011; Whyte et al., 2012; Kim et al., 2013; Kim & Lee, 2014) tried to alleviate the ill-posedness by using prior knowledge on the statistical properties of sharp images, such as gradient sparsity. Instead of using such handcrafted knowledge, recent methods take advantage of large-scale datasets as well as deep neural networks (Nah et al., 2017; Su et al., 2017; Noroozi et al., 2017; Nah et al., 2019; Shen et al., 2019). Usually, the learning is driven by minimizing the pixel-wise distance to the ground truth, e.g., L1 or L2, so that the PSNR between the deblurred image and the sharp reference is maximized. By utilizing modern ConvNet architectures and training techniques, state-of-the-art approaches (Nah et al., 2017; Tao et al., 2017; Gao et al., 2019; Yuan et al., 2020; Park et al., 2020; Chi et al., 2021) have been developed toward higher capacity and deblurring accuracy. Still, most methods tend to suffer from blurry predictions due to the regression-to-mean behavior often witnessed in ill-posed problems with large solution spaces (Ledig et al., 2017; Menon et al., 2020). To overcome the limitations of the conventional objectives, concepts of perceptual (Johnson et al., 2016) and adversarial (Ledig et al., 2017; Nah et al., 2017; Kupyn et al., 2018) loss terms from high-level semantic tasks have been introduced to improve the visual quality of the deblurred results. Nevertheless, such high-level losses may not serve as optimal goals for blur removal, as low-level structural properties, e.g., blurriness, are not the primary features considered in their formulations. As illustrated in Figure 1, results from previous deblurring methods are still blurry to a degree, and the VGG and adversarial losses are not sufficient to obtain perceptually pleasing and sharp images across different architectures (Tao et al., 2018; Gao et al., 2019; Kupyn et al., 2019). While the deblurred images look less blurry compared with the original input, it is still possible to find nontrivial blur kernels with directional motion information. From this observation, we introduce the concept of reblurring, which amplifies the unremoved blur in the given image and reconstructs the original blur. We note that our reblurring operation aims to recover the original motion trajectory in the blurry input, rather than to synthesize arbitrary, e.g., Gaussian, blurs. Therefore, an ideally deblurred clean image is hard to reblur, as no noticeable blur can be found to be amplified, making reblurring an ill-posed task. In contrast, it is straightforward to predict the original shape of blur from insufficiently deblurred images, as shown in Figure 1. We propose to use the difference between a non-ideally deblurred image and the ideal sharp image in terms of reblurring feasibility as a new optimization objective, the reblurring loss, for the image deblurring problem. The reblurring loss is realized by jointly training a pair of deblurring and reblurring modules. The reblurring module performs the inverse operation of deblurring, trying to reconstruct the original blurry image from a deblurred output. Using the property that the blurriness of a reblurred image depends on the sharpness of the deblurred result, we construct two types of loss functions. During joint training, the supervised reblurring loss compares the amplified blurs of the deblurred and the sharp image.
Complementing the L1 intensity loss, the supervised reblurring loss guides the deblurring module to focus on and eliminate the remaining blur. While our training strategy is similar to the adversarial training of GANs (Goodfellow et al., 2014) in the sense that our deblurring and reblurring modules play opposite roles, the purposes and effects of the adversary are different. The reblurring loss concentrates on image blurriness regardless of image realism. Furthermore, in contrast to GAN discriminators, which are not often used at test time, our reblurring module can be used to facilitate a self-supervised reblurring loss. By making the deblurred image harder to reblur, the deblurring module can adaptively optimize itself without referring to the ground truth. Our reblurring loss functions provide additional optimization directives to the deblurring module and can be generally applied to any learning-based image deblurring method. With the proposed approach, we can derive sharper predictions from existing deblurring methods without modifying their architectures. We summarize our contributions as follows:
• Based on the observation that clean images are hard to reblur, we propose novel loss functions for image deblurring. Our reblurring loss reflects the preference for sharper images and contributes to visually pleasing deblurring results.
• At test time, the reblurring loss can be computed without a ground-truth image. We perform test-time adaptive inference via self-supervised optimization on each input.
• Our method is generally applicable to any learning-based method, jointly with other loss terms. Experiments show that the reblurring loss consistently contributes to achieving state-of-the-art visual sharpness as well as LPIPS and NIQE across different model architectures.
2 RELATED WORKS. Image Deblurring. The classical energy optimization framework is formulated by likelihood and prior terms.
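For concreteness, the classical likelihood-plus-prior formulation can be sketched as a generic maximum-a-posteriori energy. The form below is a common textbook simplification with a single uniform kernel (dynamic scene deblurring generalizes it to locally varying kernels); the specific prior terms are the illustrative choices, not the paper's formulation:

```latex
\hat{x},\,\hat{k} \;=\; \operatorname*{arg\,min}_{x,\,k}\;
\underbrace{\lVert k \ast x - y \rVert_2^2}_{\text{likelihood (data fidelity)}}
\;+\; \lambda\,\underbrace{\rho(x)}_{\substack{\text{image prior, e.g.}\\ \text{gradient sparsity } \lVert \nabla x \rVert_1}}
\;+\; \mu\,\varphi(k),
```

where $y$ is the observed blurry image, $x$ the latent sharp image, $k$ the blur kernel, and $\rho$ and $\varphi$ encode prior preferences over the image and the kernel, respectively.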
Due to the ill-posedness of the dynamic scene deblurring problem, prior terms have been essential in alleviating the optimization ambiguity, encoding preferences over the solutions. Sophisticated prior terms were carefully designed with human knowledge of natural image statistics (Levin, 2006; Cho & Lee, 2009; Hirsch et al., 2011; Whyte et al., 2012; Sun et al., 2013; Xu et al., 2013; Kim et al., 2013; Kim & Lee, 2014; Pan et al., 2016). Recently, Li et al. (2018) showed that a prior learned from a classifier discriminating blurry from clean images is also effective. Deep priors have also been used for image deconvolution (Ren et al., 2020; Nan & Ji, 2020). On the other hand, deep learning methods have benefited from learning on large-scale datasets. Datasets of realistic blur (Nah et al., 2017; Su et al., 2017; Noroozi et al., 2017; Nah et al., 2019; Gao et al., 2019; Jin et al., 2019; Shen et al., 2019) align the temporal centers of blurry and sharp image pairs with high-speed cameras. Learning from such temporally aligned data relieves the ill-posedness of deblurring compared with the difficult energy optimization framework. Thus, more attention has been paid to designing CNN architectures and datasets than to designing loss terms. In the early work of Schuler et al. (2015), the alternating estimation of blur kernel and restored image (Cho & Lee, 2009) was adopted in a CNN architecture. In Sun et al. (2015); Gong et al. (2017), spatially varying blur kernels are estimated by assuming locally linear blur, followed by non-blind deconvolution. Later, end-to-end learning without explicit kernel estimation became popular. Motivated by the coarse-to-fine approach, a multi-scale CNN was proposed (Nah et al., 2017) to expand the receptive field efficiently, followed by scale-recurrent architectures (Tao et al., 2018; Gao et al., 2019). On the other hand, Zhang et al. (2019); Suin et al. (2020) sequentially stacked network modules. Recently, Park et al. (2020) proposed a multi-temporal model that deblurs an image recursively. To handle spatially varying blur kernels efficiently, spatially non-uniform operations were embedded in neural networks (Zhang et al., 2018a; Yuan et al., 2020).
Perceptual Image Restoration. Often, L1 or L2 losses are used during training to achieve higher PSNR. However, such approaches suffer from blurry and over-smoothed outputs (Johnson et al., 2016; Zhang et al., 2018b; Menon et al., 2020), as the learned models predict an average of all possible solutions under the ill-posedness (Ledig et al., 2017). To deal with this issue, several studies utilize deep features of the pretrained VGG (Simonyan & Zisserman, 2014) and of other networks that are more related to human perception (Johnson et al., 2016; Zhang et al., 2018b), also with analyses in frequency space (Tariq et al., 2020; Czolbe et al., 2020). Recent methods introduce adversarial training (Goodfellow et al., 2014) so that the outputs of the restoration models become indistinguishable from real samples (Nah et al., 2017; Nimisha et al., 2017; Ledig et al., 2017; Kupyn et al., 2018; 2019). There were also attempts to exploit statistical properties of images and features with the contextual loss (Mechrez et al., 2018) and the projected distribution loss (Delbracio et al., 2021). Nevertheless, an inherent limitation of existing perceptual objectives is that they are not task-specialized for image restoration. For example, the VGG features are learned for high-level visual recognition, while the adversarial loss only contributes to reconstructing realistic images without considering the existence of motion blur. Therefore, blindly optimizing those terms may not yield an optimal solution in terms of image deblurring.
In practice, we observed that those objectives still tend to leave blur footprints unremoved, making it possible to estimate the original blur. Our reblurring loss is explicitly designed to improve the perceptual sharpness of deblurred images by reducing the remaining blurriness; it is thus more suitable for deblurring, acting as a learned prior.
Image Blurring. As an image can be blurred in various directions and strengths, image blurring is another ill-posed problem without additional information. Thus, intrinsic or extrinsic information is often incorporated. Given a non-ideally sharp image, Bae & Durand (2007) detected the small local blur kernel in the image to magnify the defocus blur for a bokeh effect. On the other hand, Chen et al. (2018) estimated the kernel by computing the optical flow from neighboring video frames. Similarly, Brooks & Barron (2019) used multiple video frames to synthesize blur. Without such external information, Zhang et al. (2020) used a generative model to synthesize many blurry images. In contrast, Bahat et al. (2017) deliberately blurred an already blurry image in many ways to find the local blur kernel. Our image reblurring concept is similar to Bae & Durand (2007) in the sense that an intrinsic cue in the image is used to amplify blur. Nonetheless, our main goal is to use reblurring to guide the deblurring model so that such blur cues are better removed.
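The directional ambiguity that makes blurring ill-posed is easy to see on arrays: convolving the same sharp image with motion kernels of different orientations yields different blurry observations, so neither the forward blur nor its inverse is determined without extra cues. Below is a minimal numpy sketch; the linear-motion kernel and naive convolution helper are illustrative stand-ins, not any cited method's implementation:

```python
import numpy as np

def motion_kernel(length, horizontal=True):
    """Normalized linear motion-blur kernel of a given length and direction."""
    k = np.ones((1, length) if horizontal else (length, 1))
    return k / k.sum()

def convolve2d_same(img, k):
    """Naive 'same'-size 2-D convolution with zero padding."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh - 1 - kh // 2), (kw // 2, kw - 1 - kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * k)
    return out

# The same sharp image yields different blurry images under different kernels,
# so recovering (or re-applying) the blur is ambiguous without additional cues.
sharp = np.zeros((9, 9))
sharp[4, 4] = 1.0  # a single bright point
blur_h = convolve2d_same(sharp, motion_kernel(5, horizontal=True))
blur_v = convolve2d_same(sharp, motion_kernel(5, horizontal=False))
```

The point source is smeared into a horizontal streak in one case and a vertical one in the other, while both kernels conserve total intensity.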
A novel deblurring method is proposed by introducing the concept of a reblurring loss. Conventional techniques deblur blurry images only to some extent; the unremoved blur content can then be used to reconstruct (reblur) the blurry images. On the other hand, cleanly deblurred images have only sharp content, and it is difficult to reblur them since no blurry cues are left. Based on this observation, a reblurring module and a deblurring module trained with a reblurring loss are proposed. Experimental results show that the proposed framework outperforms SOTA quantitatively and qualitatively on the GOPRO and REDS datasets.
This work addresses the problem of image deblurring by training a deep model in an end-to-end fashion. Similar to recent work, the method makes use of large paired (blurry, sharp) training datasets to train the deblurring network. The main idea of the work is that a correctly deblurred image should not contain any information about the original blur. The method exploits this observation by using an auxiliary network that tries to re-blur the deblurred image so it has the same blur as the original one. If the image were perfectly deblurred, this shouldn't be possible. These two networks end up having a "similar" role as the generator and discriminator in a GAN set-up. But even if there is this superficial connection with GANs, the formalism is completely different: here the reblurring network outputs an image that is compared to another image (i.e., it is reference-based). The work introduces three different loss terms, L_blur, L_sharp and L_reblur, alongside the typical L1 regression loss between deblurred and sharp images:
- L_blur: mainly used to train the re-blurring module;
- L_sharp: used to prevent the re-blurring module from simply applying an average blurring;
- L_reblur: used as a regression loss in the space of re-blurred images.
Additionally, the paper proposes a test-time adaptation where the reblurring network is used to boost the deblurring results on a given image. This is done by approximately inverting the re-blurring network using gradient descent. The work presents several experiments, including ablation studies, on two of the popular (synthetic) datasets used in image/video deblurring (GoPro and REDS), several comparisons with SOTA methods, and some results on a real image dataset (Lai et al., 2016). Different quantitative metrics are given (PSNR, SSIM, LPIPS, NIQE), as well as several figures showing visual comparisons.
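The interplay of the three loss terms can be sketched with a toy differentiable stand-in for the reblurring module. Everything below is illustrative (1-D signals, a fixed box-filter "reblur" instead of the learned network); notably, this naive operator blurs sharp content too, which is exactly the behavior that L_sharp penalizes when training the real module:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, the distance used by all three toy loss terms."""
    return float(np.mean(np.abs(a - b)))

def toy_reblur(x, strength=0.8):
    """Illustrative stand-in for the learned reblurring module:
    amplifies blur by mixing in a box-filtered copy of the input."""
    k = np.array([0.25, 0.5, 0.25])
    return (1 - strength) * x + strength * np.convolve(x, k, mode="same")

# Toy 1-D "images": a sharp step edge, its blurry observation, and an
# imperfectly deblurred estimate that still carries residual blur.
sharp = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
blurry = toy_reblur(toy_reblur(sharp))
deblurred = 0.5 * sharp + 0.5 * blurry  # partial deblurring

# L_blur: trains the reblurring module to reconstruct the original blur.
L_blur = l1(toy_reblur(deblurred), blurry)
# L_sharp: penalizes blurring an already-sharp image (nonzero here because the
# fixed box filter blurs indiscriminately; a learned module would minimize it).
L_sharp = l1(toy_reblur(sharp), sharp)
# L_reblur: regression in the space of re-blurred images; zero only when the
# deblurred estimate matches the sharp reference.
L_reblur = l1(toy_reblur(deblurred), toy_reblur(sharp))
```

In this toy setting, L_reblur vanishes for a perfect estimate and is strictly positive for the partial one, matching the intuition that residual blur remains detectable and penalizable after reblurring.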
Trident Pyramid Networks: The importance of processing at the feature pyramid level for better object detection
1 INTRODUCTION. Many computer vision tasks, such as object detection and instance segmentation, require strong features at both low and high resolution to detect large and small objects, respectively. This is in contrast to the image classification task, where low-resolution features are sufficient, as usually only a single object is present in the center of the image. Networks developed specifically for the image classification task (e.g., Simonyan & Zisserman (2014); He et al. (2016a); Xie et al. (2017)), further denoted as backbones, are therefore insufficient for multi-scale vision tasks. Especially poor performance is to be expected on small objects, as shown in Lin et al. (2017a). To alleviate this problem, named the feature fusion problem, top-down mechanisms are added (Lin et al., 2017a) to propagate semantically strong information from the low-resolution to the high-resolution feature maps, with improved performance on small objects as a result. Additionally, bottom-up mechanisms can be appended (Liu et al., 2018) such that the lower-resolution maps can benefit from the freshly updated higher-resolution maps. These top-down and bottom-up mechanisms can be grouped into a layer, after which multiple of these layers can be concatenated, as done in Tan et al. (2020). We call this part of a computer vision network the core, lying between the backbone and the task-specific head (see Figure 1). In general, we define a core module to be any module taking a feature pyramid as input and outputting an updated feature pyramid. These top-down and bottom-up operations can be regarded as communication-based processing operating on two feature maps, as opposed to content-based self-processing operating on a single feature map. Existing cores such as FPN (Lin et al., 2017a), PANet (Liu et al., 2018) and BiFPN (Tan et al., 2020) mostly focus on communication-based processing, as this nicely supplements the backbone, which merely consists of self-processing. However, with multiple communication-based operations in a row, communication tends to saturate (everyone is up to date) and hence becomes superfluous. We argue it is therefore more effective to alternate communication-based processing with sufficient self-processing, such that feature maps have time to come up with new findings to be communicated. Based on this observation, we design the Trident Pyramid Network (TPN) core, consisting of sequential top-down and bottom-up operations alternated with parallel self-processing mechanisms. The TPN core is equipped with hyperparameters controlling the amount of communication-based processing and self-processing. In the experiments, we empirically investigate the optimal balance between communication-based processing and self-processing (see Subsection 4.3). The TPN core is compared to various baselines on the COCO object detection benchmark (Lin et al., 2014). Specific care is taken to ensure the baselines have similar computational characteristics, such that a fair comparison can be made. Using a ResNet-50 backbone and a simple one-stage detector head, our TPN core peaks at 41.8 AP on the COCO validation set when using the 3x training schedule (see Subsection 4.2). This is a 1.5 AP improvement over a BiFPN core of similar computational expense. When additional compute is available to improve performance, practitioners typically replace their backbone with a heavier one: a ResNet-50+FPN network, for example, gets traded for the heavier ResNet-101+FPN network. Yet, one might wonder whether it is not more beneficial to add the additional computation to the core (i.e., at the feature pyramid level) by using a ResNet-50+TPN network, rather than to the backbone by using a ResNet-101+FPN network.
When comparing both options under similar computational characteristics , we show a 1.7 AP improvement of the ResNet-50+TPN network over the ResNet-101+FPN network . This empirically shows that it is more beneficial to add additional computation into the core , highlighting the importance of performing computation at the feature pyramid level in modern-day object detection systems . We hope this new insight drives researchers to design even better cores in the future . 2 RELATED WORK . In order to obtain multi-scale features , early detectors performed predictions on feature maps directly coming from the backbone , such as MS-CNN ( Cai et al. , 2016 ) and SSD ( Liu et al. , 2016 ) . As the higher resolution maps from the backbone contain relatively weak semantic information , top-down mechanisms were added to propagate semantically strong information from lower resolution maps back to the higher resolution maps as in FPN ( Lin et al. , 2017a ) and TDM ( Shrivastava et al. , 2016 ) . Since , many variants and additions have been proposed : PANet ( Liu et al. , 2018 ) appends bottom-up connections , M2det ( Zhao et al. , 2019 ) uses a U-shape feature interaction architecture , ZigZagNet ( Lin et al. , 2019 ) adds additional pathways between different levels of the top-down and bottom-up hierarchies , NAS-FPN ( Ghiasi et al. , 2019 ) and Hit-Detector ( Guo et al. , 2020 ) use Neural Architecture Search ( NAS ) to automatically design a feature interaction topology , and BiFPN ( Tan et al. , 2020 ) modifies PANet by removing some connections , adding skip connections and using weighted feature map aggregation . All of the above variants focus on improving the communication between the different feature maps . We argue however that to be effective , extra content-based self-processing is needed in between the communication flow . Not all methods use a feature pyramid to deal with scale variation . TridentNet ( Li et al. 
, 2019) applies parallel branches of convolutional blocks with different dilations on a single feature map to obtain scale-aware features. In DetectoRS (Qiao et al., 2021), this idea is combined with feature pyramids by applying switchable atrous convolutions (SAC) inside recursive feature pyramids (RFP). Note that to avoid any name confusion with TridentNet, we refer to our core by its abbreviated name TPN rather than Trident Pyramid Network. Our TPN core is also related to networks typically used in segmentation, such as U-Net (Ronneberger et al., 2015) and stacked hourglass networks (Newell et al., 2016), given that these networks also use a combination of top-down, self-processing, and bottom-up operations. A major difference between these networks and our TPN core, however, is that they do not operate on a feature pyramid, in the sense that lower-resolution maps are only generated and used within a single layer (e.g., within a single hourglass) and are not shared across layers (e.g., across two neighboring hourglasses). Finally, note that some works, such as Guo et al. (2020) and Bochkovskiy et al. (2020), refer to the network part connecting the backbone with the head as the neck (instead of the core). That name implies that the neck is merely a connection piece between the backbone and the head, and is of little importance. Yet we show that this part is in fact an important component of the network, and therefore call it the core instead.

3 METHOD

3.1 TPN CORE ARCHITECTURE

Generally speaking, the core receives a feature pyramid as input and outputs an updated feature pyramid. Here, a feature pyramid is defined as a collection of feature maps, with a feature map defined as a collection of feature vectors (called features) organized in a two-dimensional map.
More specifically, feature map P_l denotes a feature map of level l, which is 2^l times smaller in width and height than the initial image resolution. A popular choice for the feature pyramid (Lin et al., 2017b) is to consider the feature maps {P3, P4, P5, P6, P7}, which we will use as the default setting throughout our discussions and experiments. The core is constructed from three building blocks: top-down operations, self-processing operations, and bottom-up operations (see Figure 2). In this subsection, we focus on how these operations are best combined, independently of their precise implementations. We call this configuration of operations making up a core the core architecture. The specific implementations of the top-down, self-processing, and bottom-up operations will be discussed in Subsection 3.2 and Subsection 3.3. Using these general building blocks, we can recreate the popular FPN (Lin et al., 2017a) and PANet (Liu et al., 2018) core architectures in Figure 3. Note that these architectures differ slightly from those found in the original works (Lin et al., 2017a; Liu et al., 2018), as the input layers are missing. Given that these input layers are meant to transition from backbone feature sizes to core feature sizes, we decided to move these transition layers from the core to the backbone instead, such that multiple core layers can easily be concatenated. Note moreover that Figure 3 only defines the architecture of the core, without specifying the implementation of the top-down and bottom-up operations. These implementations could hence differ from those found in the original works (Lin et al., 2017a; Liu et al., 2018). From the FPN and PANet core architectures in Figure 3, we make the following two observations. First, we can see that the top-down and bottom-up operations are sequential. Secondly, we observe the lack of self-processing operations in both core architectures.
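To make the level convention concrete, here is a tiny sketch (our own illustration, not code from the paper) computing the spatial side length of each default feature map, assuming a hypothetical 1024x1024 input image:

```python
def pyramid_shapes(image_size, levels=(3, 4, 5, 6, 7)):
    """Side length of each feature map P_l: 2**l times smaller than the input."""
    return {f"P{l}": image_size // 2 ** l for l in levels}

print(pyramid_shapes(1024))
# {'P3': 128, 'P4': 64, 'P5': 32, 'P6': 16, 'P7': 8}
```

So for the default {P3, ..., P7} pyramid, resolutions span a 16x range between the finest (P3) and coarsest (P7) maps.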
In what follows, we discuss both aspects in more depth. First, we discuss the trade-off between sequential and parallel operations in greater detail. By sequential operations, we mean that P_l is updated with the new P_{l±1} instead of with the old one, forcing the operations to be performed sequentially, as the new feature maps must first be available. Alternatively, one could opt for parallel operations by relying solely on the old feature maps. The choice between parallel and sequential can be regarded as a trade-off between speed and accuracy, while maintaining a similar memory consumption. Given that the top-down and bottom-up operations can be quite expensive, especially on high-resolution maps, it is important to get the most out of every single operation. We hence believe that the sequential variant should be preferred here for the top-down and bottom-up operations, as found in the FPN and PANet core architectures. Secondly, we discuss the lack of self-processing operations in the FPN and PANet core architectures. Looking at the PANet architecture, we see that the bottom-up operations immediately follow the top-down operations. We argue that this is sub-optimal. Consider, for example, the P4-P3 top-down operation, followed immediately by the P3-P4 bottom-up operation. The P3 map was just updated with information from P4, and now P3 must immediately communicate its content back to P4 before having the possibility to digest and work on this new information. We hence argue that the top-down and bottom-up operations should be separated by self-processing operations. This gives the feature maps the opportunity to work on themselves before communicating back to their peers. By combining the insights from the previous two discussions, we arrive at the Trident Pyramid Network (TPN) core architecture, consisting of sequential top-down and bottom-up operations alternated with self-processing operations (see the lower part of Figure 4).
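The sequential/parallel distinction can be illustrated with a toy top-down pass (our own sketch: plain numbers stand in for feature maps and addition stands in for the top-down operation; nothing here is the paper's actual implementation):

```python
def top_down_sequential(pyramid):
    # P_l is updated with the *new* P_{l+1}: information from the top level
    # reaches every lower level within a single pass.
    levels = sorted(pyramid, reverse=True)
    for hi, lo in zip(levels, levels[1:]):
        pyramid[lo] = pyramid[lo] + pyramid[hi]
    return pyramid

def top_down_parallel(pyramid):
    # P_l is updated with the *old* P_{l+1}: all updates could run at once,
    # but information only travels one level per pass.
    old = dict(pyramid)
    levels = sorted(pyramid, reverse=True)
    for hi, lo in zip(levels, levels[1:]):
        pyramid[lo] = old[lo] + old[hi]
    return pyramid

seq = top_down_sequential({3: 0, 4: 0, 5: 1})
par = top_down_parallel({3: 0, 4: 0, 5: 1})
print(seq[3], par[3])  # 1 0: only the sequential pass propagates P5 down to P3
```

The toy example shows why a sequential pass extracts more from the same number of operations: the parallel pass would need two full passes for the P5 content to reach P3.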
The name is inspired by the top-down, first self-processing, and bottom-up operations together resembling a trident. Note that the TPN self-processing operations happen in parallel. This is not necessary, but we believe it to be the most natural choice.
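Putting the pieces together, the schedule of one TPN core layer can be sketched as follows (a structural sketch under our own naming conventions; in the actual network, the top-down, self-processing, and bottom-up operations are neural network blocks, represented here by plain callables):

```python
def tpn_layer(pyramid, top_down, self_process, bottom_up, num_self=1):
    levels = sorted(pyramid)
    # 1) Sequential top-down pass: each P_l sees the freshly updated P_{l+1}.
    for l in reversed(levels[:-1]):
        pyramid[l] = top_down(pyramid[l], pyramid[l + 1])
    # 2) Self-processing, conceptually parallel across levels: each map
    #    digests the newly received information on its own.
    for _ in range(num_self):
        pyramid = {l: self_process(m) for l, m in pyramid.items()}
    # 3) Sequential bottom-up pass: each P_l sees the freshly updated P_{l-1}.
    for l in levels[1:]:
        pyramid[l] = bottom_up(pyramid[l], pyramid[l - 1])
    return pyramid

# Trace the operations applied to each map with a tiny {P3, P4, P5} pyramid:
out = tpn_layer({3: "p3", 4: "p4", 5: "p5"},
                top_down=lambda cur, hi: f"td({cur},{hi})",
                self_process=lambda m: f"sp({m})",
                bottom_up=lambda cur, lo: f"bu({cur},{lo})")
print(out[3])  # sp(td(p3,td(p4,p5)))
```

The trace makes the design visible: P3 first receives top-down information (which itself already contains P5's content, thanks to the sequential pass), then self-processes it before any bottom-up communication occurs.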
This paper presents a new network architecture for fusing pyramid features in deep neural networks for visual object detection. The authors divide an object detection network into a backbone, a core, and a head. The proposed Trident Pyramid Network, or TPN for short, is a core network structure. The main idea of TPN is that feature processing should be performed alongside aggregation in the core part of the network. Experiments are carried out to justify this design.
SP:713596293b212e0ffee3d3473c07e7fc94a2d140
Trident Pyramid Networks: The importance of processing at the feature pyramid level for better object detection
1 INTRODUCTION

Many computer vision tasks, such as object detection and instance segmentation, require strong features at both low and high resolution to detect large and small objects respectively. This is in contrast to the image classification task, where low-resolution features are sufficient, as usually only a single object is present in the center of the image. Networks developed specifically for the image classification task (e.g., Simonyan & Zisserman (2014); He et al. (2016a); Xie et al. (2017)), further denoted as backbones, are therefore insufficient for multi-scale vision tasks. Especially poor performance is to be expected on small objects, as shown in Lin et al. (2017a). To alleviate this problem, named the feature fusion problem, top-down mechanisms are added (Lin et al., 2017a) to propagate semantically strong information from the low-resolution to the high-resolution feature maps, with improved performance on small objects as a result. Additionally, bottom-up mechanisms can be appended (Liu et al., 2018) such that the lower-resolution maps can benefit from the freshly updated higher-resolution maps. These top-down and bottom-up mechanisms can be grouped into a layer, after which multiple such layers can be concatenated, as done in Tan et al. (2020). We call this part of a computer vision network the core, lying in between the backbone and the task-specific head (see Figure 1). In general, we define a core module to be any module taking a feature pyramid as input and outputting an updated feature pyramid. These top-down and bottom-up operations can be regarded as communication-based processing operating on two feature maps, as opposed to content-based self-processing operating on a single feature map. Existing cores such as FPN (Lin et al., 2017a), PANet (Liu et al., 2018), and BiFPN (Tan et al.
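The backbone/core/head decomposition described here can be summarized in a minimal functional sketch (illustrative names only, not the paper's code); note how the core's signature, feature pyramid in and updated feature pyramid out, is what allows several core layers to be concatenated:

```python
def detect(image, backbone, core, head, num_core_layers=2):
    pyramid = backbone(image)         # single image -> feature pyramid {P3..P7}
    for _ in range(num_core_layers):  # core layers can simply be stacked
        pyramid = core(pyramid)       # updated feature pyramid, same levels
    return head(pyramid)              # task-specific predictions

# Dummy components just to exercise the composition:
preds = detect("img",
               backbone=lambda im: {l: f"{im}:P{l}" for l in (3, 4, 5)},
               core=lambda pyr: {l: m + "*" for l, m in pyr.items()},
               head=lambda pyr: sorted(pyr))
print(preds)  # [3, 4, 5]
```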
Typically, object detection CNNs consist of a backbone for feature extraction, a core for feature refinement, and a head for predictions. The core is usually a simple arrangement of top-down and/or bottom-up convolutional blocks used for effectively combining feature maps at different resolutions; this kind of processing is referred to as communication-based processing. This paper claims that detection performance can be enhanced by a more complex core that enables additional self-processing of individual feature maps (by running them through a series of convolutional layers) before combining them with feature maps of different resolutions. The paper proposes a new core architecture for object detection CNNs and names it the Trident Pyramid Network (TPN). In addition, the paper claims that, given an additional computational budget, it is better to invest it in a heavier core rather than a heavier backbone, contrary to the mainstream approach. The proposed TPN has two tunable parameters, B and L, which control the amount of self-processing (B) and the amount of communication-based processing (L). Experimental studies on the COCO object detection benchmark show that the proposed approach with a ResNet-50 backbone and TPN core outperforms existing methods with ResNet-50/101 backbones and FPN or PANet cores, under similar computational cost.
This paper proposes a new feature pyramid network, the Trident Pyramid Network (TPN). TPN stacks multiple FPNs together, with each FPN processing the features produced by the previous stage. TPN also replaces the skip connections within each FPN with residual modules to further process the features. Experiments varying the number of FPNs and residual modules show that TPN outperforms existing works such as BiFPN and PANet on ResNet-50 under a similar computational budget.
SP:713596293b212e0ffee3d3473c07e7fc94a2d140
Auditing AI models for Verified Deployment under Semantic Specifications
1 INTRODUCTION. Deep learning (DL) models are now ubiquitously deployed in a number of real-world applications, many of which are safety-critical, such as autonomous driving and healthcare (Kendall et al., 2019; Miotto et al., 2018; Senior et al., 2020). As these models are prone to failure, especially under domain shifts, it is important to know when and how they are likely to fail before their deployment, a process we refer to as auditing. Inspired by failure-mode and effects analysis (FMEA) for control systems and software systems (Teng & Ho, 1996), we propose to audit DL models through a sequence of semantically-aligned unit tests, where each unit test verifies whether a pre-defined specification (e.g., accuracy over 95%) is satisfied with respect to controlled and semantically meaningful variations in the input space (e.g., the angle relative to the camera for a face image). Being semantically-aligned is critical for these unit tests to be useful to the auditor of the system in planning the model's deployment.

The main challenge in auditing DL models through semantically-aligned unit tests is that current large-scale DL models mostly lack an interpretable structure. This makes it difficult to quantify how the output varies given controlled, semantically-aligned input variations. While there are works that aim to bring interpretable formal verification to DL models (Henriksen & Lomuscio, 2020; Liu et al., 2019), their scale is still far from the millions, if not billions, of parameters used in contemporary models (He et al., 2016; Iandola et al., 2014; Brown et al., 2020). On the other hand, auditing has taken the form of verified adversarial robustness for DL models (Samangouei et al., 2018; Xiao et al., 2018a; Cohen et al., 2019).
However, this has mostly focused on adversarial perturbations in the pixel space, for example through Interval Bound Propagation (IBP), where the output is guaranteed to be invariant to input pixel perturbations with respect to an Lp norm (Gowal et al., 2018; Zhang et al., 2019b). While these approaches are much more scalable to modern DL architectures, the pixel-space variations are not semantically-aligned, meaning they do not directly relate to semantic changes in the image, unlike in formal verification. Consider a unit test that verifies against the angle of a face relative to the camera. A small variation in the angle (e.g., facing directly at the camera versus 5° to the left) can induce a large variation in the pixel space. Current certified training methods are far from being able to provide guarantees with respect to such large pixel-space variations with reasonable accuracy.

Our Approach. In order to overcome the above limitations, we develop a framework for auditing, AuditAI. We consider a typical machine learning production pipeline (Fig. 1) with three stages: the design and training of the model, its verification, and finally deployment. Verification is crucial in determining whether the model satisfies the necessary specifications before deployment. We address the gap between scalability and interpretability by proposing to verify specifications for variations directly in a semantically-aligned latent space of a generative model. For example, in Fig. 1, unit test 1 verifies whether a given face classification model maintains over 95% accuracy when the face angle is within d°, while unit test 2 checks under what lighting conditions the model has over 86% accuracy. Once the verification is done, the auditor can use the verified specifications to determine whether to use the trained DL model during deployment.
For semantically-aligned latent variations, we create a bridge between the generative model and the DL model such that they share the same latent space. We incorporate a variant of IBP (Zhang et al., 2019b) for verification with respect to perturbations in the latent space. Further, this leads to a tighter and much more practical bound in the output space compared to pixel-based certified training. We also show that AuditAI can verify whether a unit test is satisfied by generating a proof for verification based on bound propagation. Fig. 1 gives an overview of our auditing framework and Fig. 2 elaborates on the generative-model bridge.

Summary of Contributions:
1. We develop a framework, AuditAI, for auditing deep learning models by creating a bridge with a generative model such that they share the same semantically-aligned latent space.
2. We propose unit tests as a semantically-aligned way to quantify specifications that can be audited.
3. We show how IBP can be applied to latent-space variations to provide certifications of semantically-aligned specifications.

We show that AuditAI is applicable to training, verification, and deployment across diverse datasets: ImageNet (Deng et al., 2009), Chest X-Rays (Irvin et al., 2019; Wang et al., 2017), LSUN (Yu et al., 2015), and Flickr-Faces-HQ (FFHQ) (Karras et al., 2019). For ImageNet, we show that AuditAI can train verifiably robust models which tolerate 25% larger pixel-space variations than pixel-based certified-training counterparts at the same overall verified accuracy of 88%. The variations are measured as L2 distances in the pixel space. The respective increases in certifiable pixel-space variations for Chest X-Rays, LSUN, and FFHQ are 22%, 19%, and 24%.
We conclude with a human study of the quality of the generative model for different ranges of latent-space variations, revealing that pixel-space variations up to 62% of the nominal values result in realistic generated images indistinguishable from real images by humans.

2 AUDITAI: A DEEP LEARNING AUDIT FRAMEWORK. In this section, we describe the details of our framework, AuditAI, outlined in Fig. 1, for verifying and testing deep models before deployment. We propose to use unit tests to verify variations of interest that are semantically-aligned. The advantage of AuditAI is that the verified input ranges are given by several semantically-aligned specifications for the end-user to follow during deployment. In the following sub-sections, we formally define unit tests, outline the verification approach, and describe the specifics of AuditAI with the GAN-bridge shown in Fig. 2.

2.1 UNIT TEST DEFINITION. Consider a machine learning model f : X → Y that predicts outputs y ∈ Y from inputs x ∈ X in dataset D. Each of our unit tests can be formulated as providing guarantees such that:

F(x, y) ≤ 0, ∀x s.t. e_i(x) ∈ S_{i,in}, y = f(x)   (1)

Here, i indexes a unit test, the encoder e_i extracts the variation of interest from the input x, and S_{i,in} denotes the range of variations that this unit test verifies. If the condition is satisfied, then our unit test specifies that the output of the model f(x) satisfies the constraint given by F(·,·) ≤ 0. For example, x could be an image of a human face, and f could be a classifier for whether the person is wearing eyeglasses (f ≤ 0 means wearing eyeglasses and f > 0 means not wearing eyeglasses). Then e_i(·) could extract the angle of the face from the image, and S_{i,in} could be the set {e_i(x) : |e_i(x)| < 30°}, constraining the rotation to be smaller than 30°.
And F(x, y) = −f(x) f(x_0°) ≤ 0 says that our classifier output should not change from the corresponding output on x_0°, the face image of x without any rotation. In this case, when the end-user is going to deploy f on a face image, they can first apply e_i(·) to see if the face angle lies within S_{i,in} and thereby decide whether to use the model f.

2.2 VERIFICATION OUTLINE. Given a specification F(x, y) ≤ 0, we next need to answer the following questions: 1. How do we obtain the corresponding components (e_i, S_{i,in}) in Eq. (1)? 2. What can we do to guarantee that F(x, y) ≤ 0 is indeed satisfied?

For the sake of illustration, we first consider a scenario with a less realistic assumption; we will then discuss how this assumption can be relaxed. In this scenario, we assume that we are already given the encoder e_i and a corresponding generator g_i that inverts e_i. Continuing the previous example, e_i extracts the face angle of a given face image, while g_i(x_0°, d°) can synthesize an arbitrary face image x_d° that is the same as x_0° except being rotated by d°. Given the generator g_i and the encoder e_i, we propose to obtain S_{i,in} building on interval bound propagation (IBP) (Gowal et al., 2018); we include a treatment of IBP preliminaries in Appendix A.1. Here, S_{i,in} is the largest set of variations such that the specification F(x, y) ≤ 0 is satisfied. Given a set of variations S_{i,in}, IBP-based methods can be used to obtain a bound on the output of the network, S_{i,out} = {y : l_y ≤ y ≤ u_y}, and it can be checked whether the specification F(x, y) ≤ 0 is satisfied for all values in S_{i,out}. We can start with an initial set S^0_{i,in} and apply g_i to this set. Then, we can first find the corresponding convex bound of the variations of S^0_{i,in} in the image space X.
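The deployment-time use of a unit test from Eq. (1) can be sketched as follows. This is an illustrative toy, not the paper's code: the encoder e_i, the range S_{i,in}, and the eyeglass classifier f are all stand-ins (here x is simply a dict with a known "angle" field), chosen so the face-angle example above can be run end to end.

```python
# Minimal sketch of checking the Eq. (1) unit test at deployment time.
def unit_test_passes(x, e_i, in_range, F, f):
    """Return None if x is outside the verified range S_i,in,
    else whether the specification F(x, f(x)) <= 0 holds."""
    if not in_range(e_i(x)):
        return None                  # model not verified for this input
    return F(x, f(x)) <= 0

# Toy instantiation of the face-angle example (all names are assumptions).
e_i = lambda x: x["angle"]                       # extract the face angle
in_range = lambda a: abs(a) < 30.0               # S_i,in: |angle| < 30 degrees
f = lambda x: -1.0 if x["glasses"] else 1.0      # toy eyeglass classifier
F = lambda x, y: -y * f({**x, "angle": 0.0})     # spec: -f(x) * f(x_0deg) <= 0

ok_in_range = unit_test_passes({"angle": 5.0, "glasses": True},
                               e_i, in_range, F, f)
not_verified = unit_test_passes({"angle": 45.0, "glasses": True},
                                e_i, in_range, F, f)
```

The `None` return mirrors the end-user workflow described above: when e_i(x) falls outside S_{i,in}, the model carries no verified guarantee for that input, so the test neither passes nor fails.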
We can subsequently propagate this bound in X through f to Y to get S^0_{i,out} and check whether F(x, y) ≤ 0 is satisfied. By iteratively searching for the largest set S^j_{i,in} such that F(x, y) ≤ 0 is still satisfied by S^j_{i,out}, we can obtain S_{i,in} given F.

2.3 CERTIFIED TRAINING THROUGH LATENT REPRESENTATION. In the previous section, we were able to find the set S_{i,in} such that Eq. (1) is satisfied, given the specification F, encoder e_i, and generator g_i. However, several challenges limit the practical application of this scenario. First and foremost, while significant progress has been made in generative models, especially controlled generation (Brock et al., 2018; Karras et al., 2017), the assumption of having g_i a priori is still unrealistic. In addition, since the propagated bound is convex, one can imagine that the bound in the high-dimensional image space X would be much larger than needed. This leads to a much narrower estimate of S_{i,in}. While it could be true that Eq. (1) is satisfied, we still need a non-trivial size of S_{i,in} for the framework to be useful; for example, if S_{i,in} constrains face-angle rotations to be within 1°, then it may not be practical for the end-user.

For auditing ML models thoroughly, we require clear separation between the development and testing phases (i.e., black-box verification). However, it might also be desirable in practice to train the model in such a way that the unit tests are likely to be satisfied in the first place. In this case, we do have a tighter connection between the designer and the verifier. We show that by relaxing the black-box constraint, we can simultaneously address the previously listed challenges in a white-box setting. Now that we have access to the internal workings and design of the ML model f, we propose to bypass the image-generation step in our verification process through the use of latent disentangled representations.
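One way to realize the iterative search for the largest verified set S^j_{i,in}, when the set is parameterized by a single radius ε, is a grow-then-bisect loop around the bound-propagation check. The sketch below is an assumption about the search strategy (the paper does not pin down the schedule); `spec_holds` stands in for the full propagate-and-check step, and is replaced here by a toy oracle.

```python
# Hedged sketch of searching for the largest verified range epsilon.
def largest_verified_eps(spec_holds, eps0=0.01, grow=2.0, tol=1e-4):
    """spec_holds(eps) must be a conservative check that the specification
    F <= 0 holds for the whole set of variations of radius eps."""
    lo, hi = 0.0, eps0
    while spec_holds(hi):            # exponential growth phase
        lo, hi = hi, hi * grow
    while hi - lo > tol:             # bisection to refine the boundary
        mid = 0.5 * (lo + hi)
        if spec_holds(mid):
            lo = mid
        else:
            hi = mid
    return lo                        # largest radius known to verify

# Toy oracle: the spec holds exactly for eps < 0.37 (made up for the demo).
eps = largest_verified_eps(lambda e: e < 0.37)
```

Because `spec_holds` is conservative (it may reject a radius that is actually safe, but never accepts an unsafe one), the returned radius remains a sound, if possibly pessimistic, choice for S_{i,in}.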
More specifically, consider an ML model f : X → Z → Y (denote by f_d : Z → Y the downstream task) and a generative model g : X → Z → X̂, such that f and g share a common latent space Z by virtue of having a common encoder e(·) (Fig. 2). Since S_{i,in} is a subset of the range of e, we have S_{i,in} ⊆ Z. This further implies that we do not have to apply IBP through g and the whole of f; instead, we only have to apply it through f_d, which does not involve the expansion to the pixel space X. We show in the experiments (Section 4) that this is important for obtaining a practically useful S_{i,in}. In addition, AuditAI alleviates the need for a perfect generative model, as long as we can learn a latent space Z through an inference mechanism.

Learning the latent space. There are three requirements for the latent space. First, it should be semantically-aligned so that we can specify unit tests in this space. A good example is a disentangled representation (Higgins et al., 2018; Tran et al., 2017), where a set of dimensions of the latent code corresponds to a semantically-aligned variation of the original image. As shown in Figure 2, a latent dimension could correspond to the pose variation, and we can select e_i to be the value of this dimension and S_{i,in} as the proper range that the model will be verified for. Second, our model should be verifiable, where the latent space is learned such that the corresponding S_{i,in} is as large as possible given a specification function F; in this case, AuditAI can be seen as doing certified training (Zhang et al., 2019b) to improve the verifiability of the model. Finally, the latent space should perform well on the downstream task through f_d.
Combining these three criteria, our full training criterion, L = L_task + γ L_spec + δ L_gen, combines three losses, L_gen, L_spec, and L_task, that encourage interpretability through the latent space of a generative model, verifiability, and task performance, respectively. γ and δ are relative weights (between 0 and 1) for the loss terms. We explain each of the losses below.

Verifiability (L_spec). Let f_d be a feedforward model (which can have fully connected, convolutional, or residual layers) with K layers z_{k+1} = h_k(W_k z_k + b_k), k = 0, ..., K − 1, where z_0 = e(x), z_K denotes the output logits, and h_k is the activation function of the k-th layer. The set of variations S_{i,in} could be such that the ℓ_p norm of the i-th latent dimension is bounded by ε, i.e., S_{i,in} = {z : ||z_i − z_{0,i}||_p ≤ ε}. We can bound the output of the network, S_out = {z_K : l_K ≤ z_K ≤ u_K}, through a variant of interval bound propagation (IBP). Let the specification be F(z, y) = c^T z_K + d ≤ 0, ∀z ∈ S_{i,in}, z_K = f_d(z). For a classification task, this specification implies that the output logits z_K should satisfy a linear relationship for each latent-space variation such that the classification outcome argmax_i z_{K,i} remains equal to the true outcome y_true corresponding to z_0. To verify the specification, IBP-based methods search for a counter-example, which in our framework amounts to solving the following optimization problem and checking whether the optimal value is ≤ 0:

max_{z ∈ S_in} F(z, y) = c^T z_K + d   s.t.   z_{k+1} = h_k(W_k z_k + b_k), k = 0, ..., K − 1   (2)

By following the IBP equations for bound propagation, we can obtain the lower and upper bounds [l_K, u_K], which can be used to compute the worst-case logit difference u_{K,y} − l_{K,y_true} between any other class y and the true class y_true. We define ẑ_{K,y} = u_{K,y} (if y ≠ y_true) and ẑ_{K,y_true} = l_{K,y_true} (otherwise). Then, we define L_spec as L_spec = CE(ẑ_K, y_true).
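The bound propagation used for L_spec can be made concrete with a pure-Python sketch of plain IBP through a tiny two-layer network. The weights, latent point, and ε below are made up for illustration; the interval arithmetic for a linear layer (positive weights pair lower-with-lower, negative weights swap the bounds) and the worst-case margin l_{K,true} − u_{K,other} are the actual mechanics being shown.

```python
# Pure-Python sketch of interval bound propagation through f_d.
def ibp_linear(W, b, lo, hi):
    """Propagate the elementwise interval [lo, hi] through z -> W z + b."""
    new_lo, new_hi = [], []
    for i in range(len(W)):
        l = u = b[i]
        for j, w in enumerate(W[i]):
            if w >= 0:
                l += w * lo[j]; u += w * hi[j]
            else:
                l += w * hi[j]; u += w * lo[j]
        new_lo.append(l); new_hi.append(u)
    return new_lo, new_hi

relu = lambda v: [max(0.0, x) for x in v]   # monotone, so apply to both bounds

# Toy two-layer f_d; only latent dim 0 is perturbed by eps (the S_i,in above).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
z0, eps = [0.2, 0.1], 0.05
lo, hi = [z0[0] - eps, z0[1]], [z0[0] + eps, z0[1]]
lo, hi = ibp_linear(W1, b1, lo, hi)
lo, hi = relu(lo), relu(hi)
lo, hi = ibp_linear(W2, b2, lo, hi)        # [l_K, u_K] on the logits
# Worst-case margin of true class 0 against class 1: l_K[0] - u_K[1]
worst_margin = lo[0] - hi[1]
```

A negative `worst_margin` means the interval bound cannot rule out a misclassification inside S_{i,in}; the worst-case logits ẑ_K used in L_spec are exactly these pessimistic values (u for the wrong classes, l for the true class).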
In practice, we do not use plain IBP but the improved approach CROWN-IBP (Zhang et al., 2019b), which provides tighter bounds and more stable training, based on the AutoLiRPA library (Xu et al., 2020); we give details in the Appendix.

Task Performance (L_task). Let CE(·) denote the standard cross-entropy loss. L_task captures the overall accuracy of the downstream task and can be expressed as L_task = CE(z_K, y_true).

Generative Model (L_gen). Finally, we have a loss term for the generative model, which could be a VAE, a GAN, or a variant of these. If the model is a GAN, then we would have an optimization objective for GAN inversion. An assumption is being able to learn a disentangled latent space with semantically-aligned concepts. Let the overall loss be L_gen, which could also encapsulate other losses for more semantically-aligned disentanglement. Typically, we would train this generative model on a large amount of unlabelled data that is not available for training the classifier we are auditing.
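The combined objective L = L_task + γ L_spec + δ L_gen can be assembled as below. The logits, worst-case logits, L_gen value, and the weights γ, δ are all made-up numbers for illustration; the structure (ordinary cross-entropy on the clean logits, the same loss on the worst-case logits ẑ_K, plus a generative term) follows the definitions above.

```python
# Sketch of the combined training objective L = L_task + gamma*L_spec + delta*L_gen.
import math

def cross_entropy(logits, true_idx):
    """Numerically stable cross-entropy of a single example."""
    m = max(logits)
    logsum = m + math.log(sum(math.exp(z - m) for z in logits))
    return logsum - logits[true_idx]

logits = [2.0, 0.5]            # clean logits z_K for L_task (made up)
worst_logits = [0.8, 1.1]      # worst-case logits z_hat_K from bound propagation
L_task = cross_entropy(logits, 0)
L_spec = cross_entropy(worst_logits, 0)
L_gen = 0.3                    # placeholder reconstruction / GAN-inversion loss
gamma, delta = 0.5, 0.1        # relative weights in (0, 1)
L = L_task + gamma * L_spec + delta * L_gen
```

Note that L_spec exceeds L_task whenever the bound propagation widens the logits enough to flip the worst-case prediction, which is precisely the pressure that makes certified training enlarge the verifiable set S_{i,in}.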
This paper proposes a new and useful direction: auditing AI models. The proposed framework is interesting, and the experiments show the effectiveness of the proposed method.
SP:9d19a83c4e16c193b6d3119fb784798cc2433f64
Auditing AI models for Verified Deployment under Semantic Specifications
1 INTRODUCTION . Deep learning ( DL ) models are now ubiquitously deployed in a number of real-world applications , many of which are safety critical such as autonomous driving and healthcare ( Kendall et al. , 2019 ; Miotto et al. , 2018 ; Senior et al. , 2020 ) . As these models are prone to failure , especially under domain shifts , it is important to know when and how they are likely to fail before their deployment , a process we refer to as auditing . Inspired by the failure-mode and effects analysis ( FMEA ) for control systems and software systems ( Teng & Ho , 1996 ) , we propose to audit DL models through a sequence of semantically-aligned unit tests , where each unit test verifies whether a pre-defined specification ( e.g. , accuracy over 95 % ) is satisfied with respect to controlled and semantically meaningful variations in the input space ( e.g. , the angle relative to the camera for a face image ) . Being semantically-aligned is critical for these unit tests to be useful for the auditor of the system to plan the model ’ s deployment . The main challenge for auditing DL models through semantically-aligned unit tests is that the current large-scale DL models mostly lack an interpretable structure . This makes it difficult to quantify how the output varies given controlled semantically-aligned input variations . While there are works that aim to bring interpretable formal verification to DL models ( Henriksen & Lomuscio , 2020 ; Liu et al. , 2019 ) , the scale is still far from the millions if not billions of parameters used in contemporary models ( He et al. , 2016 ; Iandola et al. , 2014 ; Brown et al. , 2020 ) . On the other hand , auditing has taken the form of verified adversarial robustness for DL models ( Samangouei et al. , 2018 ; Xiao et al. , 2018a ; Cohen et al. , 2019 ) . 
However , this has mostly focused on adversarial perturbations in the pixel space , for example , through Interval Bound Propagation ( IBP ) , where the output is guaranteed to be invariant to input pixel perturbations with respect to Lp norm ( Gowal et al. , 2018 ; Zhang et al. , 2019b ) . While these approaches are much more scalable to modern DL architectures , the pixel space variations are not semantically-aligned , meaning they do not directly relate to semantic changes in the image , unlike in formal verification . Consider a unit test that verifies against the angle relative to the camera for a face image . A small variation in the angle ( e.g. , facing directly at the camera versus 5◦ to the left ) can induce a large variation in the pixel space . Current certified training methods are far from being able to provide guarantees with respect to such large variations in the pixel space with reasonable accuracy . Our Approach . In order to overcome the above limitations , we develop a framework for auditing , AuditAI . We consider a typical machine learning production pipeline ( Fig . 1 ) with three stages , the design and training of the model , its verification , and finally deployment . The verification is crucial in determining whether the model satisfies the necessary specifications before deployment . We address the gap between scalability and interpretability by proposing to verify specifications for variations directly in a semantically-aligned latent space of a generative model . For example in Fig . 1 , unit test 1 verifies whether a given face classification model maintains over 95 % accuracy when the face angle is within d◦ , while unit test 2 checks under what lighting condition the model has over 86 % accuracy . Once the verification is done , the auditor can then use the verified specification to determine whether to use the trained DL model during deployment . 
For semantically-aligned latent variations , we create a bridge between the generative model and the DL model such that they share the same latent space . We incorporate a variant of IBP ( Zhang et al. , 2019b ) for verification with respect to perturbations in the latent space . Further this leads to a tighter and a much more practical bound in the output space compared to pixel-based certified training . We also show that AuditAI can verify whether a unit test is satisfied by generating a proof for verification based on bound propagation . Fig . 1 gives an overview of our auditing framework and Fig . 2 elaborates on the generative model bridge . Summary of Contributions : . 1 . We develop a framework , AuditAI for auditing deep learning models by creating a bridge with a generative model such that they share the same semantically-aligned latent space . 2 . We propose unit tests as a semantically-aligned way to quantify specifications that can be audited . 3 . We show how IBP can be applied to latent space variations , to provide certifications of semanticallyaligned specifications . We show that AuditAI is applicable to training , verification , and deployment across diverse datasets : ImageNet ( Deng et al. , 2009 ) , Chest X-Rays ( Irvin et al. , 2019 ; Wang et al. , 2017 ) , LSUN ( Yu et al. , 2015 ) , and Flicker Faces HQ ( FFHQ ) ( Karras et al. , 2019 ) . For ImageNet , we show that AuditAI can train verifiably robust models which can tolerate 25 % larger pixel-space variations compared to pixel-based certified-training counterparts for the same overall verified accuracy of 88 % . The variations are measured as L2 distances in the pixel-space . The respective % increase in pixelspace variations that can be certified for Chest X-Rays , LSUN , and FFHQ are 22 % , 19 % , 24 % . 
We conclude with a human-study of the quality of the generative model for different ranges of latent-space variations revealing that pixel-space variations up to 62 % of the nominal values result in realistic generated images indistinguishable from real images by humans . 2 AUDITAI : A DEEP LEARNING AUDIT FRAMEWORK In this section , we describe the details of our framework , AuditAI outlined in Fig . 1 for verifying and testing deep models before deployment . We propose to use unit tests to verify variations of interest that are semantically-aligned . The advantage of AuditAI is that the verified input ranges are given by several semantically aligned specifications for the end-user to follow during deployment . In the following sub-sections , we formally define unit tests , outline the verification approach , and describe the specifics of AuditAI with a GAN-bridge shown in Fig . 2 . 2.1 UNIT TEST DEFINITION . Consider a machine learning model f : X → Y that predicts outputs y ∈ Y from inputs x ∈ X in dataset D. Each of our unit test can be formulated as providing guarantees such that : F ( x , y ) ≤ 0 ∀x s.t . , ei ( x ) ∈ Si , in , y = f ( x ) ( 1 ) Here , i subscripts an unit test , encoder ei extracts the variation of interest from the input x , and Si , in denotes the set of range of variation that this unit test verifies . If the condition is satisfied , then our unit test specifies that the output of the model f ( x ) would satisfy the constraint given by F ( · , · ) ≤ 0 . For example , x could be an image of a human face , and f could be a classifier for whether the person is wearing eyeglasses ( f ≤ 0 means wearing eyeglasses and f > 0 means not wearing eyeglasses ) . Then ei ( · ) could be extracting the angle of the face from the image , and Si , in could be the set { ei ( x ) | ∀x |ei ( x ) | < 30◦ } constraining the rotation to be smaller than 30◦ . 
And F ( x , y ) = −f ( x ) f ( x0◦ ) ≤ 0 says that our classifier output would not be changed from the corresponding output of x0◦ , the face image of x without any rotation . In this case , when the end-user is going to deploy f on a face image , they can first apply ei ( · ) to see if the face angle lies within Si , in to decide whether to use the model f . 2.2 VERIFICATION OUTLINE . Given a specification F ( x , y ) ≤ 0 , we need to next answer the following questions : 1 . How do we obtain the corresponding components ( ei , Si , in ) in Eq . ( 1 ) ? 2 . What can we do to guarantee that F ( x , y ) ≤ 0 is indeed satisfied in this case ? For the sake of illustration , we first consider a scenario with a less realistic assumption . We will then discuss how we can relax this assumption . In this scenario , we assume that we are already given the encoder ei and a corresponding generator gi that inverts ei . Continuing the previous example , ei can extract the face angle of a given face image . On the other hand , gi ( x0◦ , d◦ ) would be able to synthesize arbitrary face image xd◦ that is the same as x0◦ except being rotated by d◦ . Given the generator gi and the encoder ei , we propose to obtain Si , in building on interval bound propagation ( IBP ) ( Gowal et al. , 2018 ) . We include a treatment of IBP preliminaries in Appendix A.1 . Here , Si , in is the largest set of variations such that the specification F ( x , y ) ≤ 0 is satisfied . Given a set of variations Si , in , IBP-based methods can be used to obtain a bound on the output of the network Si , out = { y : ly ≤ y ≤ uy } and it can be checked whether the specification F ( x , y ) ≤ 0 is satisfied for all values in Si , out . We can start with a initial set S0i , in and apply gi to this set . Then , we can first find out what would be the corresponding convex bound of variations of S0i , in in the image space X . 
We can subsequently propagate this bound in X through f to Y to get S0i , out and check whether F ( x , y ) ≤ 0 is satisfied . By iteratively searching for the largest set Sji , in such that F ( x , y ) ≤ 0 is still satisfied by S j i , out , we can obtain Si , in given F . 2.3 CERTIFIED TRAINING THROUGH LATENT REPRESENTATION . In the previous section , we are able to find the set Si , in such that Eq . ( 1 ) is satisfied given specification F , encoder ei , generator gi . However , there are several challenges that limit the practical application of this scenario . First and foremost , while significant progress has been made in generative models , especially controlled generation ( Brock et al. , 2018 ; Karras et al. , 2017 ) , the assumption of having gi apriori is still unrealistic . In addition , since the propagated bound is convex , one can imagine that the bound in X ( high dimensional image-space ) would be much larger than needed . This leads to a much narrower estimate of Si , in . While it could be true that Eq . ( 1 ) is satisfied , we still need a non-trivial size of Si , in for the framework to be useful . For example , if Si , in constrains face angle rotations to be within 1◦ then it may not be practical for the end-user . For auditing ML models thoroughly , we require clear separation between the development and testing phases ( i.e . black-box verification ) . However , it might also be desirable in practice to train the model in such a way that the unit tests are likely to be satisfied in the first place . In this case , we do have a tighter connection between the designer and the verifier . We show that by relaxing the black-box constraint , we can simultaneously address the previously listed challenges in a white-box setting . Now that we have access to the internal working and design of the ML model f , we propose to bypass the image generation step in our verification process through the use of latent disentangled representations . 
More specifically , consider a ML model mapping f : X → Z → Y ( denote by fd : Z → Y the downstream task ) and a generative model mapping g : X → Z → X̂ , such that f and g share a common latent space Z by virtue of having a common encoder e ( · ) ( Fig . 2 ) . Since Si , in is a subset of the range of e , we have Si , in ⊆ Z . This further implies that we do not have to apply IBP through g and the whole f . Instead we only have to apply it through fd , which does not involve the expansion to the pixel space X . We show in the experiment ( Section 4 ) that this is important to have a practically useful Si , in . In addition , AuditAI also alleviates the need of having a perfect generative model , as long as we can learn a latent space Z through an inference mechanism . Learning the latent space . There are three requirements for the latent space . First , it should be semantically-aligned for us to specify unit tests in this space . A good example is disentangled representation ( Higgins et al. , 2018 ; Tran et al. , 2017 ) , where a set of dimensions of the latent code would correspond to a semantically aligned variation of the original image . As shown in Figure 2 , a latent dimension could correspond to the pose variation , and we can select ei to be the value of this dimension , and Si , in as the proper range that the model would be verified for . Second , our model should be verifiable , where the latent space is learned such that the corresponding Si , in is as large as possible given a specification function F . In this case , AuditAI can be seen as doing certified training ( Zhang et al. , 2019b ) to improve the verifiability of the model . Finally , the latent space should be able to perform well on the downstream task through fd . 
Combining these three criteria , our full training criteria : L = Ltask + γLspec + δLgen combines three losses Lgen , Lspec , and Ltask that would encourage interpretability through the latent space of a generative model , verifiability , and task performance respectively . γ and δ are relative weights ( between 0 and 1 ) for the loss function terms . We explain each of the losses below : Verifiability ( Lspec ) . Let fd be a feedforward model ( can have fully connected / convolutional / residual layers ) with K layers zk+1 = hk ( Wkzk + bk ) , k = 0 , ... , K − 1 , where , z0 = e ( x ) , zK denotes the output logits , and hk is the activation function for the kth layer . The set of variations Si , in could be such that the lp norm for the ith latent dimension is bounded by ϵ , i.e . Si , in = { z : ||zi − z0 , i||p ≤ ϵ } . We can bound the output of the network Sout = { zK : lK ≤ zK ≤ uK } through a variant of interval bound propagation ( IBP ) . Let the specification be F ( z , y ) = cT zK + d ≤ 0 , ∀z ∈ Si , in , zK = fd ( z ) . For a classification task , this specification implies that the output logits zK should satisfy a linear relationship for each latent space variation such that the classification outcome argmaxi zK , i remains equal to the true outcome ytrue corresponding to z0 . To verify the specification , IBP based methods search for a counter-example , which in our framework amounts to solving the following optimization problem , and checking whether the optimal value is ≤ 0. max z∈Sin F ( z , y ) = cT zK + d s.t . zk+1 = hk ( Wkzk + bk ) k = 0 , ... , K − 1 ( 2 ) By following the IBP equations for bound-propagation , we can obtain upper and lower bounds [ zK , zK ] which can be used to compute the worst case logit difference zK , y − zK , ytrue between the true class ytrue and any other class y . We define ẑK = zK , y ( if y ̸= ytrue ) ; ẑK = zK , ytrue ( otherwise ) . Then , we can define Lspec as Lspec = CE ( ẑK , ytrue ) . 
In practice , we do not use IBP but an improved approach , CROWN-IBP ( Zhang et al. , 2019b ) , which provides tighter bounds and more stable training , based on the auto_LiRPA library ( Xu et al. , 2020 ) . We provide details in the Appendix . Task Performance ( L_task ) . Let CE ( · ) denote the standard cross-entropy loss . L_task captures the overall accuracy of the downstream task and can be expressed as L_task = CE ( z_K , y_true ) . Generative Model ( L_gen ) . Finally , we have a loss term for the generative model , which could be a VAE , a GAN , or a variant of these . If the model is a GAN , we would have an optimization objective for GAN inversion . A key assumption is that we can learn a disentangled latent space with semantically aligned concepts . The overall loss L_gen could also encapsulate other losses for more semantically aligned disentanglement . Typically we train this generative model on a large amount of unlabelled data that is not available for training the classifier we are auditing .
This paper presents AuditAI, a system for verifying that a trained model meets a particular specification, and also for certified training. The system uses a generative model to construct embeddings for inputs and then tests whether these embedding vectors belong to a particular set. Membership in the pre-specified set is reduced to satisfying the test. The key technical insight in this work is applying the interval bound propagation technique to embedding/latent codes in order to verify semantic behaviors of trained models.
SP:9d19a83c4e16c193b6d3119fb784798cc2433f64
Auditing AI models for Verified Deployment under Semantic Specifications
1 INTRODUCTION . Deep learning ( DL ) models are now ubiquitously deployed in a number of real-world applications , many of which are safety critical such as autonomous driving and healthcare ( Kendall et al. , 2019 ; Miotto et al. , 2018 ; Senior et al. , 2020 ) . As these models are prone to failure , especially under domain shifts , it is important to know when and how they are likely to fail before their deployment , a process we refer to as auditing . Inspired by the failure-mode and effects analysis ( FMEA ) for control systems and software systems ( Teng & Ho , 1996 ) , we propose to audit DL models through a sequence of semantically-aligned unit tests , where each unit test verifies whether a pre-defined specification ( e.g. , accuracy over 95 % ) is satisfied with respect to controlled and semantically meaningful variations in the input space ( e.g. , the angle relative to the camera for a face image ) . Being semantically-aligned is critical for these unit tests to be useful for the auditor of the system to plan the model ’ s deployment . The main challenge for auditing DL models through semantically-aligned unit tests is that the current large-scale DL models mostly lack an interpretable structure . This makes it difficult to quantify how the output varies given controlled semantically-aligned input variations . While there are works that aim to bring interpretable formal verification to DL models ( Henriksen & Lomuscio , 2020 ; Liu et al. , 2019 ) , the scale is still far from the millions if not billions of parameters used in contemporary models ( He et al. , 2016 ; Iandola et al. , 2014 ; Brown et al. , 2020 ) . On the other hand , auditing has taken the form of verified adversarial robustness for DL models ( Samangouei et al. , 2018 ; Xiao et al. , 2018a ; Cohen et al. , 2019 ) . 
However , this has mostly focused on adversarial perturbations in the pixel space , for example , through Interval Bound Propagation ( IBP ) , where the output is guaranteed to be invariant to input pixel perturbations with respect to Lp norm ( Gowal et al. , 2018 ; Zhang et al. , 2019b ) . While these approaches are much more scalable to modern DL architectures , the pixel space variations are not semantically-aligned , meaning they do not directly relate to semantic changes in the image , unlike in formal verification . Consider a unit test that verifies against the angle relative to the camera for a face image . A small variation in the angle ( e.g. , facing directly at the camera versus 5◦ to the left ) can induce a large variation in the pixel space . Current certified training methods are far from being able to provide guarantees with respect to such large variations in the pixel space with reasonable accuracy . Our Approach . In order to overcome the above limitations , we develop a framework for auditing , AuditAI . We consider a typical machine learning production pipeline ( Fig . 1 ) with three stages , the design and training of the model , its verification , and finally deployment . The verification is crucial in determining whether the model satisfies the necessary specifications before deployment . We address the gap between scalability and interpretability by proposing to verify specifications for variations directly in a semantically-aligned latent space of a generative model . For example in Fig . 1 , unit test 1 verifies whether a given face classification model maintains over 95 % accuracy when the face angle is within d◦ , while unit test 2 checks under what lighting condition the model has over 86 % accuracy . Once the verification is done , the auditor can then use the verified specification to determine whether to use the trained DL model during deployment . 
For semantically-aligned latent variations , we create a bridge between the generative model and the DL model such that they share the same latent space . We incorporate a variant of IBP ( Zhang et al. , 2019b ) for verification with respect to perturbations in the latent space . Further , this leads to a tighter and much more practical bound in the output space compared to pixel-based certified training . We also show that AuditAI can verify whether a unit test is satisfied by generating a proof for verification based on bound propagation . Fig . 1 gives an overview of our auditing framework and Fig . 2 elaborates on the generative-model bridge . Summary of Contributions : 1 . We develop a framework , AuditAI , for auditing deep learning models by creating a bridge with a generative model such that they share the same semantically-aligned latent space . 2 . We propose unit tests as a semantically-aligned way to quantify specifications that can be audited . 3 . We show how IBP can be applied to latent-space variations to provide certifications of semantically-aligned specifications . We show that AuditAI is applicable to training , verification , and deployment across diverse datasets : ImageNet ( Deng et al. , 2009 ) , Chest X-Rays ( Irvin et al. , 2019 ; Wang et al. , 2017 ) , LSUN ( Yu et al. , 2015 ) , and Flickr-Faces-HQ ( FFHQ ) ( Karras et al. , 2019 ) . For ImageNet , we show that AuditAI can train verifiably robust models which can tolerate 25 % larger pixel-space variations compared to pixel-based certified-training counterparts for the same overall verified accuracy of 88 % . The variations are measured as L2 distances in the pixel space . The respective increases in pixel-space variations that can be certified for Chest X-Rays , LSUN , and FFHQ are 22 % , 19 % , and 24 % .
We conclude with a human study of the quality of the generative model for different ranges of latent-space variations , revealing that pixel-space variations up to 62 % of the nominal values result in realistic generated images indistinguishable from real images by humans . 2 AUDITAI : A DEEP LEARNING AUDIT FRAMEWORK . In this section , we describe the details of our framework , AuditAI , outlined in Fig . 1 , for verifying and testing deep models before deployment . We propose to use unit tests to verify variations of interest that are semantically aligned . The advantage of AuditAI is that the verified input ranges are given by several semantically-aligned specifications for the end-user to follow during deployment . In the following sub-sections , we formally define unit tests , outline the verification approach , and describe the specifics of AuditAI with the GAN-bridge shown in Fig . 2 . 2.1 UNIT TEST DEFINITION . Consider a machine learning model f : X → Y that predicts outputs y ∈ Y from inputs x ∈ X in a dataset D . Each of our unit tests can be formulated as providing guarantees such that : F ( x , y ) ≤ 0 ∀x s.t. e_i ( x ) ∈ S_{i,in} , y = f ( x ) ( 1 ) . Here , i indexes a unit test , the encoder e_i extracts the variation of interest from the input x , and S_{i,in} denotes the range of variation that this unit test verifies . If the condition is satisfied , then our unit test specifies that the output of the model f ( x ) satisfies the constraint F ( · , · ) ≤ 0 . For example , x could be an image of a human face , and f could be a classifier for whether the person is wearing eyeglasses ( f ≤ 0 means wearing eyeglasses and f > 0 means not wearing eyeglasses ) . Then e_i ( · ) could extract the angle of the face from the image , and S_{i,in} could be the set { e_i ( x ) | |e_i ( x )| < 30◦ } , constraining the rotation to be smaller than 30◦ .
And F ( x , y ) = −f ( x ) f ( x_{0◦} ) ≤ 0 says that our classifier output should not change from the corresponding output for x_{0◦} , the face image of x without any rotation . In this case , when the end-user is going to deploy f on a face image , they can first apply e_i ( · ) to check whether the face angle lies within S_{i,in} and thereby decide whether to use the model f . 2.2 VERIFICATION OUTLINE . Given a specification F ( x , y ) ≤ 0 , we next need to answer the following questions : 1 . How do we obtain the corresponding components ( e_i , S_{i,in} ) in Eq . ( 1 ) ? 2 . What can we do to guarantee that F ( x , y ) ≤ 0 is indeed satisfied ? For the sake of illustration , we first consider a scenario with a less realistic assumption ; we will then discuss how to relax this assumption . In this scenario , we assume that we are already given the encoder e_i and a corresponding generator g_i that inverts e_i . Continuing the previous example , e_i can extract the face angle of a given face image . On the other hand , g_i ( x_{0◦} , d◦ ) would be able to synthesize an arbitrary face image x_{d◦} that is the same as x_{0◦} except for being rotated by d◦ . Given the generator g_i and the encoder e_i , we propose to obtain S_{i,in} building on interval bound propagation ( IBP ) ( Gowal et al. , 2018 ) . We include a treatment of IBP preliminaries in Appendix A.1 . Here , S_{i,in} is the largest set of variations such that the specification F ( x , y ) ≤ 0 is satisfied . Given a set of variations S_{i,in} , IBP-based methods can be used to obtain a bound on the output of the network S_{i,out} = { y : l_y ≤ y ≤ u_y } , and it can be checked whether the specification F ( x , y ) ≤ 0 is satisfied for all values in S_{i,out} . We can start with an initial set S^0_{i,in} and apply g_i to this set . Then , we can first find the corresponding convex bound of the variations of S^0_{i,in} in the image space X .
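The deployment-time check sketched in this example (apply e_i, test membership in S_{i,in}, and only then trust f) might look like the toy code below; `estimate_angle`, the thresholds, and the stand-in classifier are all hypothetical:

```python
def audited_predict(x, f, unit_tests):
    """Run f(x) only if every audited unit test covers this input.

    unit_tests: list of (e_i, in_range) pairs, where e_i extracts the
    variation of interest and in_range tests membership in S_{i,in}.
    """
    for e_i, in_range in unit_tests:
        if not in_range(e_i(x)):
            return None          # outside the verified envelope: abstain
    return f(x)

# Hypothetical face-angle unit test: verified for |angle| < 30 degrees.
estimate_angle = lambda x: x["angle_deg"]       # stand-in encoder e_i
angle_test = (estimate_angle, lambda a: abs(a) < 30.0)

# Stand-in eyeglasses classifier: f <= 0 means "wearing eyeglasses".
f = lambda x: -1.0 if x["glasses"] else 1.0

print(audited_predict({"angle_deg": 12.0, "glasses": True}, f, [angle_test]))   # -1.0
print(audited_predict({"angle_deg": 45.0, "glasses": True}, f, [angle_test]))   # None
```

Abstaining (returning `None`) outside the verified range is exactly the deployment policy the text describes: the auditor's certificate only covers inputs inside S_{i,in}.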
We can subsequently propagate this bound in X through f to Y to get S^0_{i,out} and check whether F ( x , y ) ≤ 0 is satisfied . By iteratively searching for the largest set S^j_{i,in} such that F ( x , y ) ≤ 0 is still satisfied by S^j_{i,out} , we can obtain S_{i,in} given F . 2.3 CERTIFIED TRAINING THROUGH LATENT REPRESENTATION . In the previous section , we were able to find the set S_{i,in} such that Eq . ( 1 ) is satisfied , given the specification F , the encoder e_i , and the generator g_i . However , several challenges limit the practical application of this scenario . First and foremost , while significant progress has been made in generative models , especially controlled generation ( Brock et al. , 2018 ; Karras et al. , 2017 ) , the assumption of having g_i a priori is still unrealistic . In addition , since the propagated bound is convex , the bound in the high-dimensional image space X can be much larger than needed . This leads to a much narrower estimate of S_{i,in} . While it could be true that Eq . ( 1 ) is satisfied , we still need a non-trivial size of S_{i,in} for the framework to be useful . For example , if S_{i,in} constrains face-angle rotations to be within 1◦ , it may not be practical for the end-user . For auditing ML models thoroughly , we require a clear separation between the development and testing phases ( i.e. , black-box verification ) . However , it might also be desirable in practice to train the model in such a way that the unit tests are likely to be satisfied in the first place . In this case , we do have a tighter connection between the designer and the verifier . We show that by relaxing the black-box constraint , we can simultaneously address the previously listed challenges in a white-box setting . Now that we have access to the internal workings and design of the ML model f , we propose to bypass the image-generation step in our verification process through the use of latent disentangled representations .
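Assuming verification succeeds for small radii and fails beyond some threshold (which holds for nested interval sets), the iterative search for the largest verified set can be sketched as a binary search over the radius, with a `verify` callback standing in for one IBP pass:

```python
def largest_verified_radius(verify, eps_max=10.0, tol=1e-3):
    """Binary-search the largest eps in [0, eps_max] such that verify(eps)
    holds, assuming monotonicity: verification succeeds for small eps and
    fails beyond some threshold. verify(eps) stands in for one IBP check
    of the specification F <= 0 over the radius-eps input set."""
    if not verify(0.0):
        return 0.0               # even the nominal point fails
    lo, hi = 0.0, eps_max
    if verify(hi):
        return hi                # the whole search range is verified
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if verify(mid):
            lo = mid             # mid is verified: grow the set
        else:
            hi = mid             # counter-example possible: shrink
    return lo

# Mock verifier: suppose the specification holds for eps <= 0.37 (hypothetical).
radius = largest_verified_radius(lambda eps: eps <= 0.37)
print(round(radius, 2))   # -> 0.37
```

In the full pipeline each `verify` call would propagate the candidate set through the network and test the specification on the resulting output bounds.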
This paper presents a framework for auditing deep learning models with respect to a specification. The goal is to increase confidence in a trained model. The auditing is performed by generating variations in input data to evaluate the boundaries in the input for an expected output. An evaluation was performed against a pixel-based training approach on 4 different datasets.
SP:9d19a83c4e16c193b6d3119fb784798cc2433f64
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
1 INTRODUCTION . Deep learning has achieved astounding performance in many fields by relying on larger numbers of parameters and increasingly sophisticated optimization algorithms . However , DNNs with far more parameters than training samples are more prone to poor generalization . Generalization is arguably the most fundamental and yet mysterious aspect of deep learning . Several studies have been conducted to better understand the generalization of DNNs and to train DNNs that generalize well across the natural distribution ( Keskar et al. , 2016 ; Neyshabur et al. , 2017 ; Chaudhari et al. , 2019 ; Zhang et al. , 2019 ; Wu et al. , 2020 ; Foret et al. , 2020 ; Zhang et al. , 2021 ) . For example , Keskar et al . ( 2016 ) investigate the effect of batch size on neural networks ' generalization ability . Zhang et al . ( 2019 ) propose an optimizer for training DNNs with improved generalization ability . In particular , Hochreiter & Schmidhuber ( 1995 ) , Li et al . ( 2017 ) and Dinh et al . ( 2017 ) argue that the geometry of the loss landscape affects generalization and that DNNs with a flat minimum generalize better . The recent work by Foret et al . ( 2020 ) proposes an effective training algorithm , the Sharpness-Aware Minimizer ( SAM ) , for obtaining a flat minimum . SAM employs a base optimizer such as Stochastic Gradient Descent ( Nesterov , 1983 ) or Adam ( Kingma & Ba , 2014 ) to minimize both the vanilla training loss and the sharpness . The sharpness , which describes the flatness of a minimum , is characterized using eigenvalues of the Hessian matrix by Keskar et al . ( 2016 ) . SAM quantifies the sharpness as the maximal change of the training loss when a constrained perturbation is added to the current weights . As a result , SAM leads to a flat minimum and significantly improves the generalization ability of the trained DNNs . SAM and its variants have been shown to outperform the state-of-the-art across a variety of deep learning benchmarks ( Kwon et al.
, 2021 ; Chen et al. , 2021 ; Galatolo et al. , 2021 ; Zheng et al. , 2021 ) . Regrettably though , SAM and its variants achieve such remarkable performance at the expense of doubling the computational overhead of the given base optimizers , which minimize the training loss with a single forward and backward propagation step . SAM requires an additional propagation step compared to the base optimizers to resolve the weight perturbation for quantifying the sharpness . The extra propagation step requires the same computational overhead as the single propagation step used by base optimizers , resulting in SAM 's computational overhead being doubled ( 2× ) . As demonstrated in Figure 1 , SAM achieves higher test accuracy ( i.e. , 84.46 % vs. 81.89 % ) at the expense of sacrificing half of the training speed of the base optimizer ( i.e. , 276 imgs/s vs. 557 imgs/s ) . [ Figure 1 : test accuracy ( % ) and training speed ( images/s ) for SGD , SAM , and ESAM . ] ... sharpness of DNNs . As a result , the sharpness calculated over the subsets can serve as an upper bound of SAM 's sharpness , ensuring that SDS 's performance is comparable to SAM 's . We verify the effectiveness of ESAM on the CIFAR10 , CIFAR100 ( Krizhevsky et al. , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets with five different DNN architectures . The experimental results demonstrate that ESAM obtains flat minima at a cost of only 40 % ( vs. SAM 's 100 % ) extra computational overhead over base optimizers . More importantly , ESAM achieves better performance in terms of test accuracy compared to SAM . In a nutshell , our contributions are as follows : • We propose two novel and effective training strategies , Stochastic Weight Perturbation ( SWP ) and Sharpness-sensitive Data Selection ( SDS ) . Both strategies are designed to improve efficiency without sacrificing performance .
The empirical results demonstrate that both of the proposed strategies improve both the efficiency and the effectiveness of SAM . • We introduce ESAM , which integrates SWP and SDS . ESAM improves the generalization ability of DNNs with marginal additional computational cost compared to standard training . The rest of this paper is structured as follows . Section 2.1 introduces SAM and its computational issues . Sections 2.2 and 2.3 discuss how the two proposed training strategies , SWP and SDS , are designed . Section 3 verifies the effectiveness of ESAM across a variety of datasets and DNN architectures . Section 4 presents the related work and Section 5 concludes this paper .
2 METHODOLOGY . We start by recapitulating how SAM achieves a flat minimum with small sharpness , which is quantified by solving a maximization problem . To compute the sharpness , SAM requires additional forward and backward propagation and results in a doubling of the computational overhead compared to base optimizers . Following that , we demonstrate how we derive and propose ESAM , which integrates SWP and SDS , to maximize efficiency while maintaining performance . We introduce SWP and SDS in Sections 2.2 and 2.3 respectively . Algorithm 1 shows the overall proposed ESAM algorithm . Throughout this paper , we denote a neural network f with weight parameters θ as fθ . The weights are contained in the vector θ = ( θ1 , θ2 , ... , θN ) , where N is the number of weight units in the neural network . Given a training dataset S that contains samples drawn i.i.d. from a distribution D , the network is trained to obtain optimal weights θ̂ via empirical risk minimization ( ERM ) , i.e. , θ̂ = argmin_θ { L_S ( fθ ) = ( 1 / |S| ) Σ_{ ( xi , yi ) ∈ S } ℓ ( fθ , xi , yi ) } ( 1 ) , where ℓ can be an arbitrary loss function . We take ℓ to be the cross-entropy loss in this paper . The population loss is defined as L_D ( fθ ) := E_{ ( xi , yi ) ∼ D } [ ℓ ( fθ , xi , yi ) ] . In each training iteration , optimizers sample a mini-batch B ⊂ S of size b to update the parameters .
Algorithm 1 Efficient SAM ( ESAM )
Input : Network fθ , θ = ( θ1 , θ2 , ... , θN ) , training set S , batch size b , learning rate η > 0 , neighborhood size ρ > 0 , iterations A , SWP hyperparameter β , SDS hyperparameter γ .
Output : A flat-minimum solution θ̂ .
1 : for a = 1 to A do
2 : Sample a mini-batch B ⊂ S with size b .
3 : for n = 1 to N do
4 : if θn is chosen with probability β then
5 : εn ← ρ / ( 1 − β ) ∇θn L_B ( fθ ) . SWP in B1
6 : else
7 : εn ← 0
8 : ε̂ ← ( ε1 , ... , εN ) . Assign weight perturbation
9 : Compute ℓ ( f_{θ+ε̂} , xi , yi ) and construct B+ with selection ratio γ ( Equation 6 )
10 : Compute gradients g = ∇θ L_{B+} ( f_{θ+ε̂} ) . SDS in B2
11 : Update weights θ ← θ − ηg
2.1 SHARPNESS-AWARE MINIMIZATION AND ITS COMPUTATIONAL DRAWBACK . To improve the generalization capability of DNNs , Foret et al . ( 2020 ) proposed the SAM training strategy for finding flat minima . SAM trains DNNs by solving the following min-max optimization problem : min_θ max_{ ‖ε‖2 ≤ ρ } L_S ( f_{θ+ε} ) . ( 2 ) Given θ , the inner optimization attempts to find a weight perturbation ε in a Euclidean ball of radius ρ that maximizes the empirical loss . The maximized loss at weights θ is the sum of the empirical loss and the sharpness , which is defined as R_S ( fθ ) = max_{ ‖ε‖2 ≤ ρ } [ L_S ( f_{θ+ε} ) − L_S ( fθ ) ] . This sharpness is quantified by the maximal change of the empirical loss when a constrained perturbation is added to θ . The min-max problem encourages SAM to find flat minima . For a given set of weights θ , Foret et al . ( 2020 ) theoretically justify that the population loss of DNNs can be upper-bounded by the sum of the sharpness , the empirical loss , and a regularization term on the norm of the weights ( refer to Equation 3 ) .
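A toy numpy rendering of the ESAM step in Algorithm 1, on a least-squares model rather than a DNN; the 1/β rescaling is our reading of the expectation-matching described later in the text, and the model, step sizes, and Equation 6's selection rule are simplified stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta, X, y):
    """Per-sample squared error of a linear model (toy stand-in for a DNN)."""
    return (X @ theta - y) ** 2

def grad(theta, X, y):
    """Mean-squared-error gradient over the given samples."""
    return 2 * X.T @ (X @ theta - y) / len(y)

def esam_step(theta, X, y, rho=0.05, eta=0.1, beta=0.5, gamma=0.5):
    g0 = grad(theta, X, y)                           # B1: one backward pass
    # SWP: perturb only a random subset of weights (each kept with prob. beta),
    # rescaled by 1/beta so the perturbation matches SAM's in expectation.
    mask = rng.random(theta.shape) < beta
    eps = rho * g0 / (np.linalg.norm(g0) + 1e-12) * mask / beta
    # SDS: keep the gamma-fraction of samples whose loss rises most under eps.
    rise = loss(theta + eps, X, y) - loss(theta, X, y)
    keep = np.argsort(-rise)[: max(1, int(gamma * len(y)))]
    # B2: gradient step on the sharpness-sensitive subset only.
    return theta - eta * grad(theta + eps, X[keep], y[keep])
```

The savings come from the two subsets: SWP shrinks the backward pass that builds the perturbation, and SDS shrinks the batch used for the final update.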
Thus , by minimizing the sharpness together with the empirical loss , SAM produces optimized solutions for DNNs with flat minima , and the resultant models can thus generalize better ( Foret et al. , 2020 ; Chen et al. , 2021 ; Kwon et al. , 2021 ) . L_D ( fθ ) ≤ R_S ( fθ ) + L_S ( fθ ) + λ‖θ‖2_2 = max_{ ‖ε‖2 ≤ ρ } L_S ( f_{θ+ε} ) + λ‖θ‖2_2 ( 3 ) In practice , SAM first approximately solves the inner optimization with a single gradient step , i.e. , ε̂ = argmax_{ ‖ε‖2 ≤ ρ } L_S ( f_{θ+ε} ) ≈ ρ ∇θ L_S ( fθ ) / ‖∇θ L_S ( fθ )‖2 . ( 4 ) The sharpness at weights θ is approximated by R_S ( fθ ) = L_S ( f_{θ+ε̂} ) − L_S ( fθ ) . Then , a base optimizer , such as SGD ( Nesterov , 1983 ) or Adam ( Kingma & Ba , 2014 ) , updates the DNNs ' weights to minimize L_S ( f_{θ+ε̂} ) . We refer to L_S ( f_{θ+ε̂} ) as the SAM loss . Overall , SAM requires two forward and two backward operations to update the weights once . We refer to the forward and backward propagation for approximating ε̂ as F1 and B1 , and those for updating weights by base optimizers as F2 and B2 , respectively . Although SAM can effectively improve the generalization of DNNs , it additionally requires one forward and one backward operation ( F1 and B1 ) in each training iteration . Thus , SAM results in a doubling of the computational overhead compared to the use of base optimizers . To improve the efficiency of SAM , we propose ESAM , which consists of two strategies , SWP and SDS , to accelerate the sharpness-approximation phase and the weight-updating phase . Specifically , on the one hand , when estimating ε̂ around the weight vector θ , SWP efficiently approximates ε̂ by randomly selecting each parameter with a given probability to form a subset of weights to be perturbed . The reduction in the number of perturbed parameters results in lower computational overhead during the backward propagation .
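The two-pass SAM update around Equation (4) can be sketched on a toy quadratic (ρ, η, and the loss here are illustrative; real SAM perturbs network weights batch-wise):

```python
import numpy as np

def sam_step(theta, L, gradL, rho=0.05, eta=0.1):
    """One SAM update: the first pass finds eps_hat, the second updates theta."""
    g = gradL(theta)                                    # F1/B1
    eps_hat = rho * g / (np.linalg.norm(g) + 1e-12)     # Eq. (4): scaled ascent direction
    sharpness = L(theta + eps_hat) - L(theta)           # approximated R_S(f_theta)
    theta_new = theta - eta * gradL(theta + eps_hat)    # F2/B2: minimize the SAM loss
    return theta_new, sharpness

# Toy quadratic loss with its minimum at theta = (1, -2) (illustrative only).
opt = np.array([1.0, -2.0])
L = lambda th: float(np.sum((th - opt) ** 2))
gradL = lambda th: 2 * (th - opt)

theta = np.zeros(2)
for _ in range(200):
    theta, sharp = sam_step(theta, L, gradL)
```

The two gradient evaluations per step are exactly the F1/B1 and F2/B2 passes described above, which is where SAM's 2× overhead comes from.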
SWP rescales the resultant weight perturbation so as to ensure that the expected weight perturbation equals ε̂ , so the generalization capability will not be significantly degraded . On the other hand , when updating weights via base optimizers , instead of computing the upper bound L_B ( f_{θ+ε̂} ) over a whole batch of samples , SDS selects a subset of samples B+ whose loss values increase the most with respect to the perturbation ε̂ . Optimizing the weights based on a smaller number of samples decreases the computational overhead ( in a linear fashion ) . We further justify that L_B ( f_{θ+ε̂} ) can be upper-bounded by L_{B+} ( f_{θ+ε̂} ) and consequently the generalization capability can be preserved . In general , ESAM works much more efficiently and performs as well as SAM in terms of its generalization capability .
The paper proposes techniques to improve the efficiency of the sharpness-aware minimization (SAM) method: Stochastic Weight Perturbation (perturb only a subset of the parameters at each step) and Sharpness-sensitive Data Selection. Results demonstrate efficacy over SAM at small batch sizes on multiple models.
SP:136530dda64b2bdd8acfd644e5625206e3aeabfc
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
1 INTRODUCTION . Deep learning has achieved astounding performances in many fields by relying on larger numbers of parameters and increasingly sophisticated optimization algorithms . However , DNNs with far more parameters than training samples are more prone to poor generalization . Generalization is arguably the most fundamental and yet mysterious aspect of deep learning . Several studies have been conducted to better understand the generalization of DNNs and to train DNNs that generalize well across the natural distribution ( Keskar et al. , 2016 ; Neyshabur et al. , 2017 ; Chaudhari et al. , 2019 ; Zhang et al. , 2019 ; Wu et al. , 2020 ; Foret et al. , 2020 ; Zhang et al. , 2021 ) . For example , Keskar et al . ( 2016 ) investigate the effect of batch size on neural networks ’ generalization ability . Zhang et al . ( 2019 ) propose an optimizer for training DNNs with improved generalization ability . Specifically , Hochreiter & Schmidhuber ( 1995 ) , Li et al . ( 2017 ) and Dinh et al . ( 2017 ) argue that the geometry of the loss landscape affects generalization and DNNs with a flat minimum can generalize better . The recent work by Foret et al . ( 2020 ) proposes an effective training algorithm Sharpness Aware Minimizer ( SAM ) for obtaining a flat minimum . SAM employs a base optimizer such as Stochastic Gradient Descent ( Nesterov , 1983 ) or Adam ( Kingma & Ba , 2014 ) to minimize both the vanilla training loss and the sharpness . The sharpness , which describes the flatness of a minimum , is characterized using eigenvalues of the Hessian matrix by Keskar et al . ( 2016 ) . SAM quantifies the sharpness as the maximized change of training loss when a constraint perturbation is added to current weights . As a result , SAM leads to a flat minimum and significantly improves the generalization ability of the trained DNNs . SAM and its variants have been shown to outperform the state-of-the-art across a variety of deep learning benchmarks ( Kwon et al. 
, 2021 ; Chen et al. , 2021 ; Galatolo et al. , 2021 ; Zheng et al. , 2021 ) . Regrettably though , SAM and its variants achieve such remarkable performance at the expense of doubling the computational overhead of the given base optimizers , which minimize the training loss with a single forward and backward propagation step . SAM requires an additional propagation step compared to the base optimizers to resolve the weight perturbation for quantifying the sharpness . The extra propagation step requires the same computational overhead as the single propagation step used by base optimizers , resulting in SAM ’ s computational overhead being doubled ( 2× ) . As demonstrated in Figure 1 , SAM achieves higher test accuracy ( i.e. , 84.46 % vs. 81.89 % ) at the expense of sacrificing half of the training speed of the base optimizer ( i.e. , 276 imgs/s vs. 557 imgs/s ) . SGD SAM ESAM 80 81 82 83 84 85 86 A cc ur ac y ( % ) 100 200 300 400 500 600 700 Tr ai ni ng S pe ed ( i m ag es /s ) Accuracy Training Speed sharpness of DNNs . As a result , the sharpness calculated over the subsets can serve as an upper bound of the SAM ’ s sharpness , ensuring that SDS ’ s performance is comparable to that of SAM ’ s . We verify the effectiveness of ESAM on the CIFAR10 , CIFAR100 ( Krizhevsky et al. , 2009 ) and ImageNet ( Deng et al. , 2009 ) datasets with five different DNN architectures . The experimental results demonstrate that ESAM obtains flat minima at a cost of only 40 % ( vs. SAM ’ s 100 % ) extra computational overhead over base optimizers . More importantly , ESAM achieves better performance in terms of the test accuracy compared to SAM . In a nutshell , our contributions are as follows : • We propose two novel and effective training strategies Stochastic Weight Perturbation ( SWP ) and Sharpness-sensitive Data Selection ( SDS ) . Both strategies are designed to improve efficiency without sacrificing performance . 
The empirical results demonstrate that both of the proposed strategies improve both the efficiency and the effectiveness of SAM. • We introduce ESAM, which integrates SWP and SDS. ESAM improves the generalization ability of DNNs with only marginal additional computational cost compared to standard training. The rest of this paper is structured as follows. Section 2.1 introduces SAM and its computational issues. Sections 2.2 and 2.3 discuss how the two proposed training strategies SWP and SDS are designed, respectively. Section 3 verifies the effectiveness of ESAM across a variety of datasets and DNN architectures. Section 4 presents the related work and Section 5 concludes this paper. 2 METHODOLOGY. We start by recapitulating how SAM achieves a flat minimum with small sharpness, which is quantified by solving a maximization problem. To compute the sharpness, SAM requires additional forward and backward propagation, which doubles the computational overhead compared to base optimizers. Following that, we demonstrate how we derive and propose ESAM, which integrates SWP and SDS, to maximize efficiency while maintaining performance. We introduce SWP and SDS in Sections 2.2 and 2.3, respectively. Algorithm 1 shows the overall proposed ESAM algorithm. Throughout this paper, we denote a neural network $f$ with weight parameters $\theta$ as $f_\theta$. The weights are contained in the vector $\theta = (\theta_1, \theta_2, \ldots, \theta_N)$, where $N$ is the number of weight units in the neural network. Given a training dataset $S$ that contains samples drawn i.i.d. from a distribution $D$, Algorithm 1 Efficient SAM (ESAM). Input: Network $f_\theta$, $\theta = (\theta_1, \theta_2, \ldots, \theta_N)$; training set $S$; batch size $b$; learning rate $\eta > 0$; neighborhood size $\rho > 0$; iterations $A$; SWP hyperparameter $\beta$; SDS hyperparameter $\gamma$. Output: A flat-minimum solution $\hat\theta$. 1: for $a = 1$ to $A$ do 2: Sample a mini-batch $B \subset S$ with size $b$.
3: for $n = 1$ to $N$ do 4: if $\theta_n$ is chosen with probability $\beta$ then 5: $\epsilon_n \leftarrow \frac{\rho}{1-\beta}\nabla_{\theta_n} L_B(f_\theta)$ ▷ SWP in B1 6: else 7: $\epsilon_n \leftarrow 0$ 8: $\hat\epsilon \leftarrow (\epsilon_1, \ldots, \epsilon_N)$ ▷ Assign weight perturbation 9: Compute $\ell(f_{\theta+\hat\epsilon}, x_i, y_i)$ and construct $B^+$ with selection ratio $\gamma$ (Equation 6) 10: Compute gradients $g = \nabla_\theta L_{B^+}(f_{\theta+\hat\epsilon})$ ▷ SDS in B2 11: Update weights $\theta \leftarrow \theta - \eta g$. the network is trained to obtain optimal weights $\hat\theta$ via empirical risk minimization (ERM), i.e., $\hat\theta = \arg\min_\theta \big\{ L_S(f_\theta) = \frac{1}{|S|} \sum_{(x_i, y_i) \in S} \ell(f_\theta, x_i, y_i) \big\}$ (1), where $\ell$ can be an arbitrary loss function. We take $\ell$ to be the cross-entropy loss in this paper. The population loss is defined as $L_D(f_\theta) \triangleq \mathbb{E}_{(x_i, y_i) \sim D}[\ell(f_\theta, x_i, y_i)]$. In each training iteration, optimizers sample a mini-batch $B \subset S$ with size $b$ to update parameters. 2.1 SHARPNESS-AWARE MINIMIZATION AND ITS COMPUTATIONAL DRAWBACK. To improve the generalization capability of DNNs, Foret et al. (2020) proposed the SAM training strategy for finding flat minima. SAM trains DNNs by solving the following min-max optimization problem: $\min_\theta \max_{\epsilon : \|\epsilon\|_2 \le \rho} L_S(f_{\theta+\epsilon})$ (2). Given $\theta$, the inner optimization attempts to find a weight perturbation $\epsilon$ in a Euclidean ball with radius $\rho$ that maximizes the empirical loss. The maximized loss at weights $\theta$ is the sum of the empirical loss and the sharpness, which is defined to be $R_S(f_\theta) = \max_{\epsilon : \|\epsilon\|_2 \le \rho} [L_S(f_{\theta+\epsilon}) - L_S(f_\theta)]$. This sharpness is quantified by the maximal change of the empirical loss when a constrained perturbation is added to $\theta$. The min-max problem encourages SAM to find flat minima. For a given set of weights $\theta$, Foret et al. (2020) theoretically justify that the population loss of DNNs can be upper-bounded by the sum of the sharpness, the empirical loss, and a regularization term on the norm of the weights (refer to Equation 3).
Thus, by minimizing the sharpness together with the empirical loss, SAM produces optimized solutions for DNNs with flat minima, and the resultant models can thus generalize better (Foret et al., 2020; Chen et al., 2021; Kwon et al., 2021): $L_D(f_\theta) \le R_S(f_\theta) + L_S(f_\theta) + \lambda\|\theta\|_2^2 = \max_{\epsilon : \|\epsilon\|_2 \le \rho} L_S(f_{\theta+\epsilon}) + \lambda\|\theta\|_2^2$ (3). In practice, SAM first approximately solves the inner optimization by means of a single-step gradient ascent, i.e., $\hat\epsilon = \arg\max_{\epsilon : \|\epsilon\|_2 \le \rho} L_S(f_{\theta+\epsilon}) \approx \rho\,\nabla_\theta L_S(f_\theta)$ (4). The sharpness at weights $\theta$ is approximated by $R_S(f_\theta) = L_S(f_{\theta+\hat\epsilon}) - L_S(f_\theta)$. Then, a base optimizer, such as SGD (Nesterov, 1983) or Adam (Kingma & Ba, 2014), updates the DNNs' weights to minimize $L_S(f_{\theta+\hat\epsilon})$. We refer to $L_S(f_{\theta+\hat\epsilon})$ as the SAM loss. Overall, SAM requires two forward and two backward operations to update the weights once. We refer to the forward and backward propagation for approximating $\hat\epsilon$ as F1 and B1, and those for updating the weights by base optimizers as F2 and B2, respectively. Although SAM can effectively improve the generalization of DNNs, it additionally requires one forward and one backward operation (F1 and B1) in each training iteration. Thus, SAM doubles the computational overhead compared to the use of base optimizers. To improve the efficiency of SAM, we propose ESAM, which consists of two strategies, SWP and SDS, to accelerate the sharpness approximation phase and the weight updating phase, respectively. Specifically, on the one hand, when estimating $\hat\epsilon$ around the weight vector $\theta$, SWP efficiently approximates $\hat\epsilon$ by randomly selecting each parameter with a given probability to form a subset of weights to be perturbed. The reduction in the number of perturbed parameters results in lower computational overhead during the backward propagation.
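For reference, the two-phase SAM update described above (F1/B1 to estimate $\hat\epsilon$, then F2/B2 to descend at the perturbed weights) can be sketched in NumPy on a toy quadratic loss. The normalization of the gradient onto the $\rho$-ball follows common SAM implementations; the function name and toy loss are our own, not the paper's code.

```python
import numpy as np

def sam_step(theta, loss_grad, rho=0.05, lr=0.1):
    # B1: gradient at the current weights, used to build the perturbation
    g = loss_grad(theta)
    # Perturbation of norm rho along the ascent direction (Equation 4,
    # with the gradient normalized onto the rho-ball)
    eps_hat = rho * g / (np.linalg.norm(g) + 1e-12)
    # B2: gradient of the SAM loss, i.e., the loss at theta + eps_hat
    g_sam = loss_grad(theta + eps_hat)
    # Base-optimizer (SGD) update using the SAM gradient
    return theta - lr * g_sam

# Toy quadratic loss L(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta = np.array([1.0])
theta_new = sam_step(theta, lambda t: t)
```

ESAM's two strategies, SWP and SDS, cheapen exactly these two phases: the B1 backward pass and the F2/B2 update.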
SWP rescales the resultant weight perturbation so as to ensure that the expected weight perturbation equals $\hat\epsilon$, so the generalization capability will not be significantly degraded. On the other hand, when updating the weights via base optimizers, instead of computing the upper bound $L_B(f_{\theta+\hat\epsilon})$ over a whole batch of samples, SDS selects a subset of samples, $B^+$, whose loss values increase the most with respect to the perturbation $\hat\epsilon$. Optimizing the weights based on fewer samples decreases the computational overhead (in a linear fashion). We further justify that $L_B(f_{\theta+\hat\epsilon})$ can be upper-bounded by $L_{B^+}(f_{\theta+\hat\epsilon})$, and consequently the generalization capability can be preserved. In general, ESAM works much more efficiently than SAM and performs as well in terms of generalization capability.
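The two strategies just described can be sketched as follows. The function names, the selection-probability parameterization (a single rate rescaled by its inverse so the expectation matches $\rho\,\nabla L$), and the array-based interfaces are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def swp_perturbation(grad, rho=0.05, select_prob=0.5):
    # SWP sketch: perturb each weight only with probability select_prob,
    # rescaled by 1/select_prob so the expected perturbation matches
    # rho * grad (the illustrative analogue of eps_hat).
    mask = rng.random(grad.shape) < select_prob
    return (rho / select_prob) * grad * mask

def sds_select(loss_clean, loss_perturbed, gamma=0.5):
    # SDS sketch: keep the gamma-fraction of samples whose loss rose the
    # most under the perturbation; per the paper's justification, the loss
    # over this subset B+ upper-bounds the full-batch SAM loss.
    rise = loss_perturbed - loss_clean
    k = max(1, int(gamma * rise.size))
    return np.argsort(rise)[-k:]
```

For example, with per-sample losses rising by (0.1, 0.4, 0.2, 0.3) and gamma = 0.5, `sds_select` keeps samples 1 and 3, and only they contribute to the B2 gradient.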
This paper presents a method called ESAM for improving the efficiency of SAM. ESAM has two components: SWP and SDS. SWP accelerates the estimation of $\hat\epsilon$ by randomly sampling a subset of parameters for backpropagation. SDS further improves efficiency by sampling a subset of data points that suffices for calculating the upper bound of $L$. Combining these two techniques, they achieve a speed improvement over SAM while yielding comparable or even better performance.
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
This paper investigates the crucial efficiency issue of the sharpness-aware minimizer (SAM). SAM improves the generalization of DNNs but doubles the training time compared to vanilla training. By analyzing the min-max procedure of SAM, the authors observe computational redundancy and propose a method, ESAM, to improve the efficiency from the data and parameter perspectives. The authors argue that SAM can be approximated properly with fewer computations. Empirical results show that ESAM can reduce the extra training time on the CIFAR10/100 and ImageNet datasets with improved accuracy compared to SAM.
Multitask Prompted Training Enables Zero-Shot Task Generalization
1 INTRODUCTION. Recent work has shown that large language models exhibit the ability to perform reasonable zero-shot generalization to new tasks (Brown et al., 2020; Kim et al., 2021). Despite only being trained on language modeling objectives, these models can perform relatively well at new tasks that they have not been explicitly trained to perform, for instance answering a question about a passage or performing summarization. An influential hypothesis is that large language models generalize to new tasks as a result of an implicit process of multitask learning (Radford et al., 2019). As a byproduct of learning to predict the next word, a language model is forced to learn from a mixture of implicit tasks included in its pretraining corpus. For example, by training on generic text from a web forum, a model might implicitly learn the format and structure of question answering. This gives large language models the ability to generalize to unseen tasks presented as natural language prompts, going beyond most large-scale explicit multitask setups (Khashabi et al., 2020a; Ye et al., 2021). However, this ability requires a sufficiently large model and is sensitive to the wording of its prompts (Perez et al., 2021; Zhao et al., 2021; Reynolds and McDonell, 2021). Yet, it is an open question how implicit this multitask learning really is. Given the scale of training data, it is not unreasonable to expect that some common natural language processing (NLP) tasks would appear in an explicit form in the dataset, thereby directly training the language model on the task. For example, there are many websites that simply contain lists of trivia questions and answers;¹ this data is precisely supervised training data for the task of closed-book question answering (Roberts et al., 2020).
Given the scale of large language models and the datasets they are trained on, this explicit multitask supervision could feasibly play a large role in zero-shot generalization. In this paper, we instead focus on intentionally and explicitly training large language models in a supervised and massively multitask fashion. Our approach uses a multitask training mixture made up of a large set of different tasks specified in natural language prompts. ¹For example, see https://www.quizbreaker.com/trivia-questions, https://www.scarymommy.com/best-trivia-questions-answers/, and https://parade.com/944584/parade/trivia-questions-for-kids/. Under review as a conference paper at ICLR 2022. Our goal is to induce a model to better generalize to unseen tasks without requiring massive scale, as well as to be more robust to the wording choices of the prompts. To convert a large set of natural language tasks into prompted form, we use a simple templating language for structured datasets. We develop an interface for prompt collection from public contributors that facilitated the collection of a large multitask mixture with multiple prompts per dataset. We then train a variant of the T5 encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on a subset of the tasks (each with multiple datasets) and then evaluate on tasks that the model was not trained on. Our experiments study two questions. First, does multitask prompted training improve generalization to unseen tasks? Second, does training on a wider range of prompts improve robustness to prompt wording? For the first question, we find that multitask training enables zero-shot task generalization by showing that our model matches or exceeds the performance of GPT-3 (Brown et al., 2020) on 9 out of 11 held-out datasets, despite being about 16× smaller. We also show that the model improves over a large baseline language model on 13/14 comparable tasks in the BIG-bench benchmark.²
For the second question, we find that training on more prompts per dataset consistently improves the median and decreases the variability of performance on held-out tasks. Training on prompts from a wider range of datasets also generally improves the median but does not decrease the variability. 2 RELATED WORK. In this work, we distinguish implicit multitask learning in language model pretraining from explicit multitask learning (Caruana, 1997), the technique of mixing multiple tasks into a single supervised training process. Models trained with multitask learning have long been shown to have improved performance in NLP (Collobert and Weston, 2008). Since different tasks have different outputs, applying multitask learning requires a shared format, and various formats have been used (Hashimoto et al., 2016; McCann et al., 2018). Several multitask works also explore few-shot and zero-shot generalization to new datasets with large pretrained models (e.g., Vu et al., 2020; Ye et al., 2021). Natural language prompting is the method of reformatting NLP tasks in the format of a natural language response to natural language input. ²https://github.com/google/BIG-bench The development of text-to-text pretrained models such as T5 (Raffel et al., 2020) makes prompts a particularly useful method for multitask learning. For example, Khashabi et al. (2020a) reformat 20 question-answering datasets into a single prompt of question: ... (A) ... (B) ... (C) ... context: ..., while later work such as Zhong et al. (2021) and Wang et al. (2021) cast a range of datasets into a single boolean QA prompt or a single NLI prompt, respectively. Although effective, these single-prompt methods typically do not generalize to new prompts or new tasks inexpressible in their fixed format. More generally, Schick and Schütze (2021) and Brown et al.
(2020) popularized using prompts as a generic method for all NLP tasks. Mishra et al. (2021) further extend this approach to a multitask setup, training on prompts for 61 narrowly defined tasks (e.g., question generation, incorrect answer generation) adapted from 9 datasets' crowdsourcing instructions, whereas we train on and measure generalization across 62 datasets and 12 tasks as traditionally defined in the NLP literature (§3). Additionally, their prompts include examples in addition to instructions, whereas we focus on zero-shot generalization. Finally, concurrent work by Wei et al. (2021) shares a similar research question with us, although we differ in several substantive regards, e.g., prompt diversity, model scale, and held-out-task scheme. We discuss our differences in detail in Section 7. Finally, in explaining the success of prompts, the leading hypothesis is that models learn to understand the prompts as task instructions, which helps them generalize to unseen tasks (Wei et al., 2021; Mishra et al., 2021; Schick and Schütze, 2021; Brown et al., 2020). However, the extent to which this success depends on the semantic meaningfulness of the prompts has been challenged (Webson and Pavlick, 2021; Logan et al., 2021). Thus, in this work, we remain agnostic as to why prompts support generalization. We only claim that prompts serve as a natural format for multitask training which empirically supports generalization to unseen tasks. 3 MEASURING GENERALIZATION TO UNSEEN TASKS. We begin by assuming an underlying partition of NLP datasets into tasks. We use the term "task" to refer to a general NLP ability that is tested by a group of specific datasets. To evaluate zero-shot generalization to new tasks, we train on a subset of tasks and evaluate on a held-out group of tasks. Unfortunately, NLP task categorization is fuzzy, particularly when trying to isolate a unique skill.
For example, many datasets evaluate commonsense knowledge, and some multitask works (e.g., Brown et al., 2020; Wei et al., 2021) define commonsense as a standalone task. However, commonsense datasets differ vastly, ranging from innate knowledge and grade-school science to DIY instructions, US cultural norms, and graduate-level theorems (see Appendix C.1 for more details). Noting that grouping by task is an imperfect heuristic, we err on the side of organizing our task taxonomy based on the task format as opposed to the required skill, largely based on conventions in the literature (Khashabi et al., 2020b; Vu et al., 2020; Ye et al., 2021). We collect all datasets from these papers and exclude those that are not in English (which also excludes programming languages and structured annotations such as parse trees) or that require special domain knowledge (e.g., biomedicine). This yields 12 tasks and 62 datasets with publicly contributed prompts, as of writing, in our training and evaluation mixtures (Figure 2). All experiments use datasets in the Hugging Face datasets library (Lhoest et al., 2021). To test zero-shot generalization, we hold out all constituent datasets of four tasks: natural language inference (NLI), sentence completion, word sense disambiguation, and coreference resolution. We choose NLI as a held-out task because humans also zero-shot generalize to NLI as an unseen task: most humans are never explicitly trained to classify whether a premise sentence entails or contradicts a hypothesis sentence, yet they find it intuitive to perform this task without training (Williams et al., 2020). For the same reason, we also hold out coreference resolution and word sense disambiguation. We further hold out sentence completion because it is a task possibly too similar to NLI (Appendix C.2 discusses this in detail).
Additionally, we do not train our main model on any datasets that GPT-3 used for evaluation, so that our main results will be a fair zero-shot comparison. We verify that data for those tasks is not leaked through the pretraining corpus, as detailed in Appendix E. Lastly, we also evaluate on a subset of the datasets from BIG-Bench (BIG-bench collaboration, 2021), which is a recent community-driven benchmark to create a diverse collection of difficult tasks to test the abilities of large language models. The subset of BIG-Bench comprises a language-oriented selection of tasks for which the BIG-Bench maintainers have prepared preliminary results and which constitute text that is in-vocabulary for the T5 tokenizer (i.e., only contains natural English-language text without emojis or other special characters). All tasks from BIG-Bench are novel tasks that were unseen in our training. 4 A UNIFIED PROMPT FORMAT. All datasets are given to our model in natural language prompted form to enable zero-shot experimentation. To facilitate writing a large collection of prompts, we develop a templating language and an application that make it easy to convert diverse datasets into prompts. We define a prompt as consisting of an input template and a target template, along with a collection of associated metadata. The templates are functions mapping a data example into natural language for the input and target sequences. Practically, the templates allow the user to mix arbitrary text with the data fields, metadata, and other code for rendering and formatting raw fields. For example, in the case of an NLI dataset, the example would include fields for Premise, Hypothesis, and Label. An input template would be If {Premise} is true, is it also true that {Hypothesis}?, whereas a target template can be defined with the label choices {Choices[label]}.
Here Choices is prompt-specific metadata that consists of the options yes, maybe, no corresponding to label being entailment (0), neutral (1), or contradiction (2). Other metadata documents additional properties, such as an evaluation metric. Each data example is materialized with many different prompt templates, as shown in Figure 3. To develop prompts, we built an interface for interactively writing prompts on datasets. We put out an open call in the research community for users to contribute prompts. 36 contributors affiliated with 24 institutions in 8 countries participated. Since our goal was to train a model to be robust to prompt format, and since the question of what makes a prompt effective remains unresolved (Webson and Pavlick, 2021; Logan et al., 2021; Reynolds and McDonell, 2021), we encouraged contributors to be open in their style and create a diverse set of prompts. The main annotation guideline was that prompts needed to be grammatical and understandable by a native English speaker with no prior experience of the tasks. Additionally, prompts that required explicit counting or numerical indexing were removed in favor of natural language variants. For example, instead of predicting indices of a span to extract (e.g., in extractive QA), the model was expected to copy the span's text instead. With these minimal constraints, prompt writers were encouraged to use both formal and creative prompts and various orderings of the data. Most of the prompts correspond directly to a version of the original proposed task, although we also allowed prompts that permuted the original task (for instance, generating a document from its summary) or allowed for ambiguous output (for instance, not indicating a list of available choices).
Such non-original-task prompts are included in our training mixtures for improved diversity, but they are not reported in evaluation since they deviate from the metrics and baselines reported by the original datasets. The details of the prompting language and tool are given in Appendix B, and the prompts themselves are given in Appendix G. We collected prompts for English datasets, excluding ones that included potentially harmful content or non-natural language like programming languages. We refer to this collection as the Public Pool of Prompts (P3). As of writing, P3 contains 1939 prompts for 171 datasets (11.3 prompts per dataset on average). These prompts contain on average 14.4 tokens, not including variables and other elements from the templating language. Prompts used in experiments are sourced from P3, except for BIG-Bench, for which the prompts are provided by its maintainers.
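The template mechanics described in Section 4 can be sketched in a few lines of Python. The wording mirrors the NLI example above; the function names and dictionary layout are illustrative assumptions, not P3's actual templating syntax.

```python
# Label choices for NLI, the prompt-specific metadata from the text:
# entailment (0), neutral (1), contradiction (2).
CHOICES = ["yes", "maybe", "no"]

def input_template(example):
    # Maps the raw fields into the natural-language input sequence.
    return (f"If {example['Premise']} is true, "
            f"is it also true that {example['Hypothesis']}?")

def target_template(example):
    # Maps the label index onto its natural-language answer choice.
    return CHOICES[example["Label"]]

example = {"Premise": "A dog is running in the park",
           "Hypothesis": "An animal is outdoors",
           "Label": 0}
prompt_input = input_template(example)
prompt_target = target_template(example)
```

Materializing the same example with several differently worded template pairs is what yields the multiple prompts per dataset used for training.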
This paper trains a sequence-to-sequence model in a large multi-task setting and tests the model's ability to generalize to unseen tasks (zero-shot). The crux of the contribution lies in the design of the prompt setups, the extensive experiments, and the evaluation. The conclusion of this paper supports a growing trend/consensus in the community that multi-task learning can be a good way to achieve generalization to unseen tasks.