paper_name stringlengths 11 170 | text stringlengths 8.07k 307k | summary stringlengths 152 6.16k | paper_id stringlengths 43 43 |
|---|---|---|---|
Adversarial Support Alignment | 1 INTRODUCTION. Learning tasks often involve estimating properties of distributions from samples or aligning such characteristics across domains. We can align full distributions (adversarial domain alignment), certain statistics (canonical correlation analysis), or the support of distributions (this paper). Much of the recent work has focused on full distributional alignment, for good reasons. In domain adaptation, motivated by theoretical results (Ben-David et al., 2007; 2010), a series of papers (Ajakan et al., 2014; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shen et al., 2018; Pei et al., 2018; Zhao et al., 2018; Li et al., 2018a; Wang et al., 2021; Kumar et al., 2018) seek to align distributions of representations between domains and utilize a shared classifier on the aligned representation space. Alignment in distributions implies alignment in supports. However, when there are additional objectives/constraints to satisfy, the minimizer for a distribution alignment objective does not necessarily minimize a support alignment objective. The example in Figure 1 demonstrates the qualitative distinction between the two minimizers when distribution alignment is not achievable: the distribution alignment objective prefers to keep supports unaligned even when support alignment is achievable. Recent works (Zhao et al., 2019; Li et al., 2020; Tan et al., 2020; Wu et al., 2019b; Tachet des Combes et al., 2020) have demonstrated that a shift in label distributions between source and target leads to a characterizable performance drop when the representations are forced into distribution alignment. The error bound in Johansson et al. (2019) suggests aligning the supports of representations instead. In this paper, we focus on distribution support as the key characteristic to align. 
We introduce a support divergence to measure the support mismatch and algorithms to optimize such alignment. We also position our approach in the spectrum of other alignment methods. Our contributions are as follows (all proofs can be found in Appendix A): 1. In Section 2.1, we measure the differences between supports of distributions. Building on the Hausdorff distance, we introduce a novel support divergence better suited for optimization, which we refer to as the symmetric support difference (SSD) divergence. 2. In Section 2.2, we identify an important property of the discriminator trained for the Jensen–Shannon divergence: support differences in the original space of interest are "preserved" as support differences in its one-dimensional output space. 3. In Section 3, we present our practical algorithm for support alignment, Adversarial Support Alignment (ASA). Essentially, based on the analysis presented in Section 2.2, our solution is to align supports in the discriminator's 1D output space, which is computationally efficient. 4. In Section 4, we place different notions of alignment – distribution alignment, relaxed distribution alignment, and support alignment – within a coherent spectrum from the point of view of optimal transport, characterizing their relationships both theoretically in terms of their objectives and practically in terms of their algorithms. 5. In Section 5, we demonstrate the effectiveness of support alignment in practice in the domain adaptation setting. Compared to other alignment-based baselines, our proposed method is more robust against shifts in label distributions. 2 SSD DIVERGENCE AND SUPPORT ALIGNMENT. Notation. We consider a Euclidean space X = R^n equipped with the Borel sigma algebra B and a metric d : X × X → R (e.g., the Euclidean distance). Let P be the set of probability measures on (X, B). 
For p ∈ P, the support of p is denoted by supp(p) and is defined as the smallest closed set C ⊆ X such that p(C) = 1. f♯p denotes the pushforward measure of p induced by a measurable mapping f. The distance between a point x ∈ X and a subset Y ⊆ X is defined as d(x, Y) = inf_{y∈Y} d(x, y). The symmetric difference of two sets A and B is defined as A△B = (A \ B) ∪ (B \ A). 2.1 DIFFERENCE BETWEEN SUPPORTS. To align the supports of distributions, we first evaluate how different they are. Similar to distribution divergences like the Jensen–Shannon divergence and the Wasserstein distance, we introduce a notion of support divergence. A support divergence1 between two distributions in P is a function D_S(·, ·) : P × P → R satisfying: 1) D_S(p, q) ≥ 0 for all p, q ∈ P; 2) D_S(p, q) = 0 iff supp(p) = supp(q). While a distribution divergence is sensitive to both density and support differences, a support divergence only needs to detect mismatches in supports, which are subsets of the metric space X. An example of a distance between subsets of a metric space is the Hausdorff distance: d_H(X, Y) = max{ sup_{x∈X} d(x, Y), sup_{y∈Y} d(y, X) }. Since it depends only on the greatest distance between a point and a set, minimizing this objective for alignment only provides signal to a single point. To make the optimization less sparse, we consider all points that violate the support alignment criterion and introduce the symmetric support difference (SSD) divergence: D△(p, q) = E_p[d(x, supp(q))] + E_q[d(x, supp(p))]. (1) Proposition 2.1. The SSD divergence D△(p, q) is a support divergence. 1It is not technically a divergence on the space of distributions, since D_S(p, q) = 0 does not imply p = q. 2.2 SUPPORT ALIGNMENT IN ONE-DIMENSIONAL SPACE. Goodfellow et al. 
(2014) showed that the log-loss discriminator f : X → [0, 1], trained to distinguish samples from distributions p and q (sup_f E_{x∼p}[log f(x)] + E_{x∼q}[log(1 − f(x))]), can be used to estimate the Jensen–Shannon divergence between p and q. The closed-form maximizer f* is f*(x) = p(x) / (p(x) + q(x)), ∀x ∈ supp(p) ∪ supp(q). (2) Note that for a point x ∉ supp(p) ∪ supp(q) the value of f*(x) can be set to an arbitrary value in [0, 1], since the log-loss does not depend on f(x) for such x. Using (2), we can establish a connection between the pushforward distributions f*♯p and f*♯q. Proposition 2.2. Let f* be the optimal log-loss discriminator (2) between p and q, and let τp and τq be the probability density functions of the pushforward measures f*♯p and f*♯q respectively2. Then, τp(a) / (τp(a) + τq(a)) = a, ∀a ∈ supp(τp) ∪ supp(τq). (3) Intuitively, this proposition states the following. If the optimal discriminator f* maps x ∈ X to a ∈ [0, 1], i.e., f*(x) = a, then a directly corresponds to the ratio of densities not only in the original space, a = p(x) / (p(x) + q(x)), but also in the 1D discriminator output space, a = τp(a) / (τp(a) + τq(a)), which leads to the fact that p(x) / (p(x) + q(x)) = f*(x) = a = τp(a) / (τp(a) + τq(a)). Using Proposition 2.2, we can prove our main theorem, which characterizes the ability of the log-loss discriminator to identify support misalignment. Theorem 2.1. Let f* be the optimal discriminator (2) between p and q. Then, D△(p, q) = 0 if and only if D△(f*♯p, f*♯q) = 0. Proof sketch. We observe that for any point x ∈ supp(p) \ supp(q) we have f*(x) = 1 and, by Proposition 2.2, τp(1) > 0 and τq(1) = 0. Repeating the same argument in the reverse direction, we see that supp(p) \ supp(q) ≠ ∅ if and only if 1 ∈ supp(f*♯p) \ supp(f*♯q). 
Similarly, supp(q) \ supp(p) ≠ ∅ if and only if 0 ∈ supp(f*♯q) \ supp(f*♯p). In other words, f* outputs the extreme values 1 and 0 if and only if the supports of p and q are misaligned, and each of these extreme values belongs to the support of only one of the pushforward distributions f*♯p or f*♯q. We conclude this section with two technical remarks on Theorem 2.1. Remark 2.1.1. The result of Theorem 2.1 does not necessarily hold for other types of discriminators. For instance, the dual Wasserstein discriminator (Arjovsky et al., 2017; Gulrajani et al., 2017) does not always highlight the support difference in the original space as a support difference in the discriminator output space. This observation is formally stated in the following proposition. Proposition 2.3. Let f*_W be the optimal solution of sup_{f : Lip(f) ≤ 1} E_{x∼p}[f(x)] − E_{x∼q}[f(x)], where Lip(f) is the Lipschitz constant of f. There exist distributions p and q such that supp(p) ≠ supp(q) but supp(f*_W♯p) = supp(f*_W♯q). Remark 2.1.2. We note that in practice the discriminator is typically parameterized as f(x) = σ(g(x)), where g : X → R is realized by a deep neural network and σ(x) = (1 + e^{−x})^{−1} is the sigmoid function. The optimization problem for g is inf_g E_{x∼p}[log(1 + e^{−g(x)})] + E_{x∼q}[log(1 + e^{g(x)})], (4) and the optimal solution is g*(x) = log p(x) − log q(x). Naturally, the result of Theorem 2.1 holds for the discriminator g*, since g*(x) = σ^{−1}(f*(x)) and σ is a bijective mapping from (−∞, ∞) to (0, 1). The main difference of g* from f* is that, in the case of supp(p) ≠ supp(q), optimization of the log-loss makes the values g*(x) infinite (±∞) for x ∈ supp(p) △ supp(q). 2In order to simplify the argument, we assume that all distributions p, q, f*♯p, f*♯q have PDFs. 
| Motivated by label shift between source and target domains, the paper proposes a novel support alignment task with a novel support divergence. They provide theoretical results for these and build up theory that leads to a simple adversarial algorithm for support alignment using a discriminator and nearest neighbor algorithms in 1D. The paper provides extensive empirical results comparing to common domain adaptation methods. | SP:178a08461d9d2b09b0c845d2a76908eb31d874a3 |
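The extreme-value behavior in Theorem 2.1 above is easy to check numerically. The following sketch is illustrative, not from the paper: the two uniform distributions and the closed-form discriminator of Eq. (2) are our own toy choices, picked so that the supports partially overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Densities of two uniform distributions with misaligned supports.
def p_pdf(x):  # Uniform(0, 2)
    return np.where((x >= 0) & (x <= 2), 0.5, 0.0)

def q_pdf(x):  # Uniform(1, 3)
    return np.where((x >= 1) & (x <= 3), 0.5, 0.0)

def f_star(x):
    """Optimal log-loss discriminator, Eq. (2): p(x) / (p(x) + q(x))."""
    num, den = p_pdf(x), p_pdf(x) + q_pdf(x)
    # Outside both supports the value is arbitrary; use 0.5 as a placeholder.
    return np.divide(num, den, out=np.full_like(x, 0.5), where=den > 0)

xp = rng.uniform(0.0, 2.0, 10000)   # samples from p
xq = rng.uniform(1.0, 3.0, 10000)   # samples from q

# Support misalignment in X shows up as extreme values in the 1D output:
# f* = 1 exactly on supp(p) \ supp(q), f* = 0 exactly on supp(q) \ supp(p).
assert np.isclose(f_star(xp).max(), 1.0)   # 1 lies in supp(f*#p)
assert f_star(xq).max() <= 0.5             # ... but not in supp(f*#q)
assert np.isclose(f_star(xq).min(), 0.0)   # 0 lies in supp(f*#q)
assert f_star(xp).min() >= 0.5             # ... but not in supp(f*#p)
```

The value 1 appears only under f*♯p and the value 0 only under f*♯q, which is exactly the misalignment signal exploited in the proof sketch of Theorem 2.1.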
Adversarial Support Alignment | 1 INTRODUCTION. Learning tasks often involve estimating properties of distributions from samples or aligning such characteristics across domains. We can align full distributions (adversarial domain alignment), certain statistics (canonical correlation analysis), or the support of distributions (this paper). Much of the recent work has focused on full distributional alignment, for good reasons. In domain adaptation, motivated by theoretical results (Ben-David et al., 2007; 2010), a series of papers (Ajakan et al., 2014; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shen et al., 2018; Pei et al., 2018; Zhao et al., 2018; Li et al., 2018a; Wang et al., 2021; Kumar et al., 2018) seek to align distributions of representations between domains and utilize a shared classifier on the aligned representation space. Alignment in distributions implies alignment in supports. However, when there are additional objectives/constraints to satisfy, the minimizer for a distribution alignment objective does not necessarily minimize a support alignment objective. The example in Figure 1 demonstrates the qualitative distinction between the two minimizers when distribution alignment is not achievable: the distribution alignment objective prefers to keep supports unaligned even when support alignment is achievable. Recent works (Zhao et al., 2019; Li et al., 2020; Tan et al., 2020; Wu et al., 2019b; Tachet des Combes et al., 2020) have demonstrated that a shift in label distributions between source and target leads to a characterizable performance drop when the representations are forced into distribution alignment. The error bound in Johansson et al. (2019) suggests aligning the supports of representations instead. In this paper, we focus on distribution support as the key characteristic to align. 
We introduce a support divergence to measure the support mismatch and algorithms to optimize such alignment. We also position our approach in the spectrum of other alignment methods. Our contributions are as follows (all proofs can be found in Appendix A): 1. In Section 2.1, we measure the differences between supports of distributions. Building on the Hausdorff distance, we introduce a novel support divergence better suited for optimization, which we refer to as the symmetric support difference (SSD) divergence. 2. In Section 2.2, we identify an important property of the discriminator trained for the Jensen–Shannon divergence: support differences in the original space of interest are "preserved" as support differences in its one-dimensional output space. 3. In Section 3, we present our practical algorithm for support alignment, Adversarial Support Alignment (ASA). Essentially, based on the analysis presented in Section 2.2, our solution is to align supports in the discriminator's 1D output space, which is computationally efficient. 4. In Section 4, we place different notions of alignment – distribution alignment, relaxed distribution alignment, and support alignment – within a coherent spectrum from the point of view of optimal transport, characterizing their relationships both theoretically in terms of their objectives and practically in terms of their algorithms. 5. In Section 5, we demonstrate the effectiveness of support alignment in practice in the domain adaptation setting. Compared to other alignment-based baselines, our proposed method is more robust against shifts in label distributions. 2 SSD DIVERGENCE AND SUPPORT ALIGNMENT. Notation. We consider a Euclidean space X = R^n equipped with the Borel sigma algebra B and a metric d : X × X → R (e.g., the Euclidean distance). Let P be the set of probability measures on (X, B). 
For p ∈ P, the support of p is denoted by supp(p) and is defined as the smallest closed set C ⊆ X such that p(C) = 1. f♯p denotes the pushforward measure of p induced by a measurable mapping f. The distance between a point x ∈ X and a subset Y ⊆ X is defined as d(x, Y) = inf_{y∈Y} d(x, y). The symmetric difference of two sets A and B is defined as A△B = (A \ B) ∪ (B \ A). 2.1 DIFFERENCE BETWEEN SUPPORTS. To align the supports of distributions, we first evaluate how different they are. Similar to distribution divergences like the Jensen–Shannon divergence and the Wasserstein distance, we introduce a notion of support divergence. A support divergence1 between two distributions in P is a function D_S(·, ·) : P × P → R satisfying: 1) D_S(p, q) ≥ 0 for all p, q ∈ P; 2) D_S(p, q) = 0 iff supp(p) = supp(q). While a distribution divergence is sensitive to both density and support differences, a support divergence only needs to detect mismatches in supports, which are subsets of the metric space X. An example of a distance between subsets of a metric space is the Hausdorff distance: d_H(X, Y) = max{ sup_{x∈X} d(x, Y), sup_{y∈Y} d(y, X) }. Since it depends only on the greatest distance between a point and a set, minimizing this objective for alignment only provides signal to a single point. To make the optimization less sparse, we consider all points that violate the support alignment criterion and introduce the symmetric support difference (SSD) divergence: D△(p, q) = E_p[d(x, supp(q))] + E_q[d(x, supp(p))]. (1) Proposition 2.1. The SSD divergence D△(p, q) is a support divergence. 1It is not technically a divergence on the space of distributions, since D_S(p, q) = 0 does not imply p = q. 2.2 SUPPORT ALIGNMENT IN ONE-DIMENSIONAL SPACE. Goodfellow et al. 
(2014) showed that the log-loss discriminator f : X → [0, 1], trained to distinguish samples from distributions p and q (sup_f E_{x∼p}[log f(x)] + E_{x∼q}[log(1 − f(x))]), can be used to estimate the Jensen–Shannon divergence between p and q. The closed-form maximizer f* is f*(x) = p(x) / (p(x) + q(x)), ∀x ∈ supp(p) ∪ supp(q). (2) Note that for a point x ∉ supp(p) ∪ supp(q) the value of f*(x) can be set to an arbitrary value in [0, 1], since the log-loss does not depend on f(x) for such x. Using (2), we can establish a connection between the pushforward distributions f*♯p and f*♯q. Proposition 2.2. Let f* be the optimal log-loss discriminator (2) between p and q, and let τp and τq be the probability density functions of the pushforward measures f*♯p and f*♯q respectively2. Then, τp(a) / (τp(a) + τq(a)) = a, ∀a ∈ supp(τp) ∪ supp(τq). (3) Intuitively, this proposition states the following. If the optimal discriminator f* maps x ∈ X to a ∈ [0, 1], i.e., f*(x) = a, then a directly corresponds to the ratio of densities not only in the original space, a = p(x) / (p(x) + q(x)), but also in the 1D discriminator output space, a = τp(a) / (τp(a) + τq(a)), which leads to the fact that p(x) / (p(x) + q(x)) = f*(x) = a = τp(a) / (τp(a) + τq(a)). Using Proposition 2.2, we can prove our main theorem, which characterizes the ability of the log-loss discriminator to identify support misalignment. Theorem 2.1. Let f* be the optimal discriminator (2) between p and q. Then, D△(p, q) = 0 if and only if D△(f*♯p, f*♯q) = 0. Proof sketch. We observe that for any point x ∈ supp(p) \ supp(q) we have f*(x) = 1 and, by Proposition 2.2, τp(1) > 0 and τq(1) = 0. Repeating the same argument in the reverse direction, we see that supp(p) \ supp(q) ≠ ∅ if and only if 1 ∈ supp(f*♯p) \ supp(f*♯q). 
Similarly, supp(q) \ supp(p) ≠ ∅ if and only if 0 ∈ supp(f*♯q) \ supp(f*♯p). In other words, f* outputs the extreme values 1 and 0 if and only if the supports of p and q are misaligned, and each of these extreme values belongs to the support of only one of the pushforward distributions f*♯p or f*♯q. We conclude this section with two technical remarks on Theorem 2.1. Remark 2.1.1. The result of Theorem 2.1 does not necessarily hold for other types of discriminators. For instance, the dual Wasserstein discriminator (Arjovsky et al., 2017; Gulrajani et al., 2017) does not always highlight the support difference in the original space as a support difference in the discriminator output space. This observation is formally stated in the following proposition. Proposition 2.3. Let f*_W be the optimal solution of sup_{f : Lip(f) ≤ 1} E_{x∼p}[f(x)] − E_{x∼q}[f(x)], where Lip(f) is the Lipschitz constant of f. There exist distributions p and q such that supp(p) ≠ supp(q) but supp(f*_W♯p) = supp(f*_W♯q). Remark 2.1.2. We note that in practice the discriminator is typically parameterized as f(x) = σ(g(x)), where g : X → R is realized by a deep neural network and σ(x) = (1 + e^{−x})^{−1} is the sigmoid function. The optimization problem for g is inf_g E_{x∼p}[log(1 + e^{−g(x)})] + E_{x∼q}[log(1 + e^{g(x)})], (4) and the optimal solution is g*(x) = log p(x) − log q(x). Naturally, the result of Theorem 2.1 holds for the discriminator g*, since g*(x) = σ^{−1}(f*(x)) and σ is a bijective mapping from (−∞, ∞) to (0, 1). The main difference of g* from f* is that, in the case of supp(p) ≠ supp(q), optimization of the log-loss makes the values g*(x) infinite (±∞) for x ∈ supp(p) △ supp(q). 2In order to simplify the argument, we assume that all distributions p, q, f*♯p, f*♯q have PDFs. 
| The authors propose symmetric support difference (SSD) divergence to align supports of distributions. The authors further propose to use the optimal log-loss discriminator to project supports into 1-dimensional space for SSD. The authors also illustrate the advantages of the proposed methods for domain adaptation. | SP:178a08461d9d2b09b0c845d2a76908eb31d874a3 |
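The SSD divergence of Eq. (1) admits a simple sample-based estimate: approximating each support by the observed sample set turns d(x, supp(q)) into a nearest-neighbor distance (the summary above mentions nearest-neighbor computations in 1D). A minimal sketch with illustrative 1D distributions; the function name and toy data are our own:

```python
import numpy as np

def ssd_divergence(xs, ys):
    """Sample-based estimate of the SSD divergence, Eq. (1).

    supp(q) is approximated by the sample set ys, so d(x, supp(q))
    becomes the distance from x to its nearest neighbor in ys
    (and symmetrically for supp(p))."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    d_xy = np.abs(xs[:, None] - ys[None, :]).min(axis=1).mean()
    d_yx = np.abs(ys[:, None] - xs[None, :]).min(axis=1).mean()
    return d_xy + d_yx

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 1.0, 2000)   # samples from p
b = rng.uniform(0.0, 1.0, 2000)   # same support as p, different draw
c = rng.uniform(2.0, 3.0, 2000)   # disjoint support

assert ssd_divergence(a, b) < 0.01   # aligned supports -> near zero
assert ssd_divergence(a, c) > 1.0    # misaligned supports -> large
```

Unlike the Hausdorff distance, which is driven by the single worst-offending point, every sample violating support alignment contributes to this estimate, making the optimization signal denser.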
Learning Pessimism for Robust and Efficient Off-Policy Reinforcement Learning | 1 INTRODUCTION. Sample efficiency and generality are two directions in which reinforcement learning (RL) algorithms are still lacking; yet, they are crucial for tackling complex real-world problems (Mahmood et al., 2018). Consequently, so far, many RL milestones have been achieved through simulating conspicuous amounts of experience and tuning for effective task-specific parameters (Mnih et al., 2013; Silver et al., 2017). Recent off-policy model-free (Lee et al., 2021; Chen et al., 2021) and model-based algorithms (Chua et al., 2018; Janner et al., 2019) pushed forward the state-of-the-art sample efficiency on several benchmark tasks (Brockman et al., 2016). We attribute such improvements to two main linked advances: more expressive models to capture uncertainty, and better strategies to counteract detrimental biases from the learning process. These advances yielded the stability needed to adopt more aggressive optimization procedures, with particular benefits in lower data regimes. Modern off-policy algorithms learn behavior by optimizing the expected performance as predicted by a trained parametric deep model of the environment. Within this process, overestimation bias naturally arises from the maximization performed over the model's performance predictions and, consequently, also over the model's possible errors. In the context of model-free RL, such a model is trained to predict the agent's future returns via temporal difference (TD-) learning and is referred to as the critic. A common strategy to counteract overestimation is to parameterize the critic with multiple, independently-initialized networks and optimize the agent's behavior over the minimum of their respective outputs (Fujimoto et al., 2018). 
Empirically, this strategy consistently yields pessimistic target performance measures, avoiding overestimation bias propagating through the TD-learning target bootstraps. However, this approach directly links the critic's parameterization to bias counteraction, making improvements in each of these components hard to pursue independently. Based on these observations, we propose a more general formulation for counteracting overestimation bias independently. In particular, we compute the critic's target performance predictions by replacing the ubiquitous minimization procedure with an explicit penalty that is agnostic to the critic's parameterization. Our proposed penalty is the output of a linear model of the epistemic uncertainty, computed as the expected Wasserstein distance between the return distributions predicted by the critic. Based on this formulation, we derive Generalized Pessimism Learning (GPL), a new strategy that learns an accurate penalization throughout the RL process. Within this strategy, we propose optimizing the penalty's weight with dual TD-learning, a new procedure minimizing the estimated bias in the critic's performance predictions with dual gradient descent. GPL is the first effective method able to freely learn an unbiased performance objective throughout training. Furthermore, we also extend GPL by introducing a new pessimism annealing procedure, motivated by the principle of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002). This procedure leads the agent to adopt a risk-seeking behavior policy by utilizing a purposely biased estimate of the performance in the initial training stages. Hence, it trades off expected immediate performance for directed exploration, incentivizing the visitation of states with high epistemic uncertainty from which the critic would gain more information. We incorporate GPL with modern implementations of the Soft Actor-Critic (SAC) (Haarnoja et al. 
, 2018a; b) and Data-regularized Q (DrQ) (Yarats et al., 2021b; a) algorithms, yielding GPL-SAC and GPL-DrQ, respectively. On the MuJoCo environments from the OpenAI Gym suite (Todorov et al., 2012; Brockman et al., 2016), GPL-SAC outperforms prior, more expensive, model-based (Janner et al., 2019) and model-free (Chen et al., 2021) state-of-the-art algorithms. For instance, in the Humanoid environment GPL-SAC is able to recover a score of 5000 in less than 100K experience steps, more than nine times faster than regular SAC. Additionally, on pixel-based environments from the DeepMind Control Suite (Tassa et al., 2018), GPL-DrQ provides significant performance improvements over the recent state-of-the-art DrQv2 algorithm. These results highlight the effectiveness and applicability of GPL, while introducing only negligible computational overhead. We release our implementations to facilitate future comparisons and extensions. In summary, we make several contributions towards improving off-policy reinforcement learning: • We propose a novel penalty to counteract overestimation bias, disentangling the critic's parameterization from the enforced pessimism. • We propose the first optimization method to estimate and precisely counteract overestimation bias throughout training with dual gradient descent. • We propose a pessimism annealing strategy that exploits epistemic uncertainty to actively seek informative states in the early training stages. • By integrating our method with SAC and DrQ, we achieve new state-of-the-art performance results with trivial overheads on both proprioceptive and pixel observation tasks. 2 RELATED WORK. Modern model-free off-policy algorithms utilize different strategies to counteract the overestimation bias arising in the critic's TD-targets (Thrun & Schwartz, 1993; Pendrith et al., 1997; Mannor et al., 2007). 
Many approaches combine the predictions of multiple function approximators to estimate the expected returns, for instance, by independently selecting the bootstrap action (Hasselt, 2010). In discrete control, such a technique appears to mitigate the bias of the seminal DQN algorithm (Mnih et al., 2013), consistently improving performance (Van Hasselt et al., 2016; Hessel et al., 2018). In continuous control, similar strategies successfully stabilize algorithms based on the policy gradient theorem (Silver et al., 2014; Lillicrap et al., 2015). Most notably, Fujimoto et al. (2018) proposed to compute the critic's TD-targets by taking the minimum over the outputs of two different action-value models. This particular minimization strategy has become ubiquitous, being employed in many popular follow-up algorithms (Haarnoja et al., 2018b; Yarats et al., 2021b). To better trade off optimism and pessimism, Zhang et al. (2017) proposed using a weighted combination of the original and minimized targets. Instead, Kuznetsov et al. (2020) proposed to parameterize a distributional critic and drop a fixed fraction of the predicted quantiles to compute the targets. Moreover, as in our approach, several works also considered explicit penalties based on heuristic measures of epistemic uncertainty (Lee et al., 2013; Ciosek et al., 2019). Recently, Kumar et al. (2020) proposed to complement these strategies by further reducing bias propagation through actively weighing the TD-loss of different experience samples. Aleatoric uncertainty is an additional source of bias in TD-learning, due to the practical inability to consider multiple transition samples in stochastic environments (Baird, 1995). This phenomenon is known as the double-sample issue, but it has rarely been addressed in the prior off-policy literature (Dai et al., 2018). 
Within model-based RL (Atkeson & Santamaria, 1997), recent works have achieved remarkable sample efficiency by learning large ensembles of dynamics models for better predictions (Chua et al., 2018; Wang & Ba, 2019; Janner et al., 2019). In the model-free framework, prior works used large critic ensembles for more diverse scopes. Anschel et al. (2017) proposed to build an ensemble using several past versions of the value network to reduce the magnitude of the TD-target's bias. Moreover, Lan et al. (2020) introduced a sampling procedure for the critic's ensemble predictions to regulate underestimation in the TD-targets. Their work was later extended to the continuous setting by Chen et al. (2021), who showed that large ensembles combined with a high update-to-data ratio make it possible to outperform the sample efficiency of contemporary model-based methods. Ensembling has also been used to achieve better exploration following the principle of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002) in both discrete (Osband et al., 2016; Chen et al., 2017) and continuous settings (Ciosek et al., 2019). Lee et al. (2021) further showed the effectiveness of combining several of these strategies in a unified framework. In the same spirit as this work, multiple prior methods attempted to learn components and parameters of the underlying RL algorithms. Several works have approached this problem by utilizing expensive meta-learning strategies to obtain new learning objectives based on multi-task performance from low-computation environments (Bechtle et al., 2021; Oh et al., 2020; Xu et al., 2020; Co-Reyes et al., 2021). More related to our work, Moskovitz et al. (2021) recently proposed to use an adaptive binary controller that switches on or off a bias-correction penalty for the TD-targets. 
In particular, they treat the controller optimization task as a multi-armed bandit problem solved throughout different training iterations to maximize immediate performance improvements. Instead, GPL makes effective use of dual gradient descent to minimize bias directly, similarly to how Haarnoja et al. (2018a; b) learn the exploration temperature parameter in the Soft Actor-Critic (SAC) algorithm. 3 PRELIMINARIES. In RL, we aim to autonomously recover optimal agent behavior for performing a particular task. Formally, we describe this problem setting as a Markov Decision Process (MDP), defined as the tuple (S, A, P, p0, r, γ). At each time-step of interaction, the agent observes some state in the state space, s ∈ S, and performs some action in the action space, a ∈ A. The transition dynamics function P : S × A × S → R and the initial state distribution p0 : S → R describe the evolution of the environment as a consequence of the agent's behavior. The reward function r : S × A → R quantifies the effectiveness of each performed action, while the discount factor γ ∈ [0, 1) represents the agent's preference for earlier rewards. A policy π : S × A → R maps each state to a probability distribution over actions and represents the agent's behavior. An episode of interactions between the agent and the environment yields some trajectory τ containing the transitions experienced, τ = (s0, a0, r0, s1, a1, r1, ...). The RL objective is then to find an optimal policy π* that maximizes the expected sum of discounted future rewards: π* = argmax_π E_{pπ(τ)}[ Σ_{t=0}^∞ γ^t r(st, at) ], (1) where pπ(τ) represents the distribution of trajectories stemming from the agent's interaction with the environment. Off-policy RL algorithms commonly utilize some critic model to evaluate the effectiveness of the agent's behavior. 
A straightforward choice for the critic is to represent the policy's action-value function Q^π : S × A → R. This function quantifies the expected sum of discounted future rewards after executing some particular action from a given state: Q^π(s, a) = E_{pπ(τ | s0=s, a0=a)}[ Σ_{t=0}^∞ γ^t r(st, at) ]. (2) Most RL algorithms consider learning parameterized models for both the policy, πθ, and the corresponding action-value function, Q^π_ϕ. In particular, after storing experience transitions (s, a, s′, r) in a replay data buffer D, we learn Q^π_ϕ by iteratively minimizing a squared TD-loss of the form: J_Q(ϕ) = E_{(s, a, s′, r)∼D}[ (Q^π_ϕ(s, a) − y)^2 ], y = r + γ E_{a∼π(s′)}[ Q̂^π_{ϕ′}(s′, a) ]. (3) Here, the TD-targets y are obtained by computing a 1-step bootstrap with a target action-value estimator Q̂^π_{ϕ′}. Usually, Q̂^π_{ϕ′} is a regularized function of action-value predictions from a target critic model using delayed parameters ϕ′. Following the policy gradient theorem (Sutton et al., 2000; Silver et al., 2014), we can then improve our policy by maximizing the expected returns as predicted by the critic, e.g., by minimizing the negated action-value estimates: J_π(θ) = −E_{s∼D, a∼πθ(s)}[ Q̂^π_ϕ(s, a) ]. (4) | This paper addresses the issue of overestimation bias in TD learning by deriving a Generalized Pessimism Learning (GPL) framework. The authors use a dual TD-learning procedure to minimise the estimated bias in the critic's performance and adaptively learn a penalty to recover an unbiased performance objective. The idea has been integrated into SAC and DrQ on various tasks. | SP:4f44a9285670f8355549732887414ec04bcb7399 |
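The clipped double-Q targets attributed to Fujimoto et al. (2018) in the text, i.e., computing the 1-step bootstrap of Eq. (3) with the elementwise minimum over an ensemble of critics, can be sketched in a few lines. The function name and toy values below are illustrative:

```python
import numpy as np

def min_ensemble_target(rewards, next_q_ensemble, gamma=0.99):
    """TD-targets with the clipped double-Q trick: bootstrap from the
    elementwise minimum over the ensemble's next-state value predictions.

    next_q_ensemble has shape (n_critics, batch)."""
    pessimistic_next_q = np.min(next_q_ensemble, axis=0)
    return rewards + gamma * pessimistic_next_q

r = np.array([1.0, 0.0])
# Two critics disagree on the next-state values; the minimum is used,
# so any overestimation by a single critic cannot propagate.
q_next = np.array([[10.0, 2.0],
                   [ 8.0, 3.0]])
print(min_ensemble_target(r, q_next))  # [1 + 0.99*8, 0 + 0.99*2] = [8.92 1.98]
```

Note how the amount of pessimism here is fixed by the ensemble itself, which is exactly the coupling between parameterization and bias counteraction that the paper argues against.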
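GPL instead replaces the ubiquitous ensemble minimum with an explicit, weighted uncertainty penalty whose weight is adapted by dual gradient descent. The following is a loose sketch under our own simplifications: the ensemble standard deviation stands in for the expected Wasserstein distance between predicted return distributions, and the dual update rule is schematic, not the paper's exact procedure:

```python
import numpy as np

def gpl_style_target(rewards, next_q_ensemble, beta, gamma=0.99):
    """Hypothetical GPL-style TD-target: bootstrap from the ensemble mean
    minus beta times an epistemic-uncertainty proxy (here, the ensemble
    standard deviation, standing in for the Wasserstein distance between
    the critics' predicted return distributions)."""
    mean_q = next_q_ensemble.mean(axis=0)
    uncertainty = next_q_ensemble.std(axis=0)
    return rewards + gamma * (mean_q - beta * uncertainty)

def dual_update(beta, estimated_bias, lr=1e-3):
    """Schematic dual-gradient step: raise the penalty weight when value
    estimates look overestimated, lower it when they look pessimistic."""
    return beta + lr * estimated_bias

r = np.array([1.0, 0.0])
q_next = np.array([[10.0, 2.0],
                   [ 8.0, 3.0]])
# mean = [9.0, 2.5], std = [1.0, 0.5] -> penalized next-q = [8.0, 2.0]
print(gpl_style_target(r, q_next, beta=1.0))  # [8.92 1.98]
```

For a two-critic ensemble with beta = 1, mean minus standard deviation coincides with the elementwise minimum on these toy numbers; the point of learning beta is that the degree of pessimism is then decoupled from the critic's parameterization.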
Learning Pessimism for Robust and Efficient Off-Policy Reinforcement Learning | 1 INTRODUCTION. Sample efficiency and generality are two directions in which reinforcement learning (RL) algorithms are still lacking, yet they are crucial for tackling complex real-world problems (Mahmood et al., 2018). Consequently, so far, many RL milestones have been achieved by simulating conspicuous amounts of experience and tuning effective task-specific parameters (Mnih et al., 2013; Silver et al., 2017). Recent off-policy model-free (Lee et al., 2021; Chen et al., 2021) and model-based algorithms (Chua et al., 2018; Janner et al., 2019) pushed forward the state-of-the-art sample efficiency on several benchmark tasks (Brockman et al., 2016). We attribute such improvements to two main linked advances: more expressive models to capture uncertainty, and better strategies to counteract detrimental biases from the learning process. These advances provided the stability needed to adopt more aggressive optimization procedures, with particular benefits in lower data regimes. Modern off-policy algorithms learn behavior by optimizing the expected performance as predicted by a trained parametric deep model of the environment. Within this process, overestimation bias naturally arises from the maximization performed over the model's performance predictions, and consequently, also over the model's possible errors. In the context of model-free RL, such a model is trained to predict the agent's future returns via temporal-difference (TD-) learning and is referred to as the critic. A common strategy to counteract overestimation is to parameterize the critic with multiple, independently initialized networks and optimize the agent's behavior over the minimum of their respective outputs (Fujimoto et al., 2018).
Empirically, this strategy consistently yields pessimistic target performance measures, preventing overestimation bias from propagating through the TD-learning target bootstraps. However, this approach directly links the critic's parameterization to bias counteraction, making improvements in each of these components hard to pursue independently. Based on these observations, we propose a more general formulation for counteracting overestimation bias. In particular, we compute the critic's target performance predictions by replacing the ubiquitous minimization procedure with an explicit penalty that is agnostic to the critic's parameterization. Our proposed penalty is the output of a linear model of the epistemic uncertainty, computed as the expected Wasserstein distance between the return distributions predicted by the critic. Based on this formulation, we derive Generalized Pessimism Learning (GPL), a new strategy that learns an accurate penalization throughout the RL process. Within this strategy, we propose optimizing the penalty's weight with dual TD-learning, a new procedure that minimizes the estimated bias in the critic's performance predictions with dual gradient descent. GPL is the first effective method able to freely learn an unbiased performance objective throughout training. Furthermore, we also extend GPL by introducing a new pessimism annealing procedure, motivated by the principle of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002). This procedure leads the agent to adopt a risk-seeking behavior policy by utilizing a purposely biased estimate of the performance in the initial training stages. Hence, it trades off expected immediate performance for directed exploration, incentivizing the visitation of states with high epistemic uncertainty from which the critic would gain more information. We incorporate GPL with modern implementations of the Soft Actor-Critic (SAC) (Haarnoja et al.
, 2018a; b) and Data-regularized Q (DrQ) (Yarats et al., 2021b; a) algorithms, yielding GPL-SAC and GPL-DrQ, respectively. On the Mujoco environments from the OpenAI Gym suite (Todorov et al., 2012; Brockman et al., 2016), GPL-SAC outperforms prior, more expensive, model-based (Janner et al., 2019) and model-free (Chen et al., 2021) state-of-the-art algorithms. For instance, in the Humanoid environment GPL-SAC is able to recover a score of 5000 in less than 100K experience steps, more than nine times faster than regular SAC. Additionally, on pixel-based environments from the DeepMind Control Suite (Tassa et al., 2018), GPL-DrQ provides significant performance improvements over the recent state-of-the-art DrQv2 algorithm. These results highlight the effectiveness and applicability of GPL, in spite of introducing only negligible computational overhead. We release our implementations to facilitate future comparisons and extensions. In summary, we make several contributions towards improving off-policy reinforcement learning: • We propose a novel penalty to counteract overestimation bias, disentangling the critic's parameterization from the enforced pessimism. • We propose the first optimization method to estimate and precisely counteract overestimation bias throughout training with dual gradient descent. • We propose a pessimism annealing strategy that exploits epistemic uncertainty to actively seek informative states in the early training stages. • By integrating our method with SAC and DrQ, we achieve new state-of-the-art performance results with trivial overheads on both proprioceptive and pixel observation tasks. 2 RELATED WORK. Modern model-free off-policy algorithms utilize different strategies to counteract overestimation bias arising in the critic's TD-targets (Thrun & Schwartz, 1993; Pendrith et al., 1997; Mannor et al., 2007).
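As a concrete illustration of the uncertainty-penalized targets proposed above, the following is a simplified sketch, not the paper's exact formulation: it uses the ensemble standard deviation as the epistemic-uncertainty proxy (GPL itself uses an expected Wasserstein distance between predicted return distributions), and the function and variable names, including `beta` for the learned penalty weight, are ours.

```python
import numpy as np

def penalized_td_target(reward, gamma, target_q_values, beta):
    """Pessimism-penalized TD-target from an ensemble of target-critic
    predictions for the next state-action pair.

    target_q_values: shape (ensemble_size,) -- Q-estimates from
    independently initialized target critics.
    beta: penalty weight (learned via dual TD-learning in GPL).
    """
    mean_q = target_q_values.mean()
    # Epistemic-uncertainty proxy: disagreement among ensemble members.
    # (A simplification; GPL uses a Wasserstein-based estimate instead.)
    uncertainty = target_q_values.std()
    return reward + gamma * (mean_q - beta * uncertainty)

# With beta > 0 the target is pessimistic; beta = 0 recovers the plain
# ensemble-mean bootstrap.
y = penalized_td_target(1.0, 0.99, np.array([10.0, 12.0]), beta=0.5)
```

Note that, unlike the fixed min-over-critics rule, the degree of pessimism here is a single scalar that can be tuned or learned independently of how many critics are used.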
Many approaches combine the predictions of multiple function approximators to estimate the expected returns, for instance, by independently selecting the bootstrap action (Hasselt, 2010). In discrete control, such a technique appears to mitigate the bias of the seminal DQN algorithm (Mnih et al., 2013), consistently improving performance (Van Hasselt et al., 2016; Hessel et al., 2018). In continuous control, similar strategies successfully stabilize algorithms based on the policy gradient theorem (Silver et al., 2014; Lillicrap et al., 2015). Most notably, Fujimoto et al. (2018) proposed to compute the critic's TD-targets by taking the minimum over the outputs of two different action-value models. This particular minimization strategy has become ubiquitous, being employed in many popular follow-up algorithms (Haarnoja et al., 2018b; Yarats et al., 2021b). To better trade off optimism and pessimism, Zhang et al. (2017) proposed using a weighted combination of the original and minimized targets. Instead, Kuznetsov et al. (2020) proposed to parameterize a distributional critic and drop a fixed fraction of the predicted quantiles to compute the targets. Moreover, as in our approach, several works also considered explicit penalties based on heuristic measures of epistemic uncertainty (Lee et al., 2013; Ciosek et al., 2019). Recently, Kumar et al. (2020) proposed to complement these strategies by further reducing bias propagation through actively weighting the TD-loss of different experience samples. Aleatoric uncertainty is an additional source of bias in TD-learning, due to the practical inability to consider multiple transition samples in stochastic environments (Baird, 1995). This phenomenon is known as the double-sample issue, but it has rarely been addressed in the prior off-policy literature (Dai et al., 2018).
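The min-over-two-critics target of Fujimoto et al. (2018) described above amounts to a one-line computation; a minimal sketch (function and argument names are ours, for illustration only):

```python
def clipped_double_q_target(reward, gamma, q1_next, q2_next):
    """Clipped double-Q TD-target: bootstrap with the minimum of two
    independently trained target critics, counteracting overestimation."""
    return reward + gamma * min(q1_next, q2_next)

# The more optimistic critic (12.0) is ignored; the bootstrap uses 10.0.
y = clipped_double_q_target(1.0, 0.99, q1_next=10.0, q2_next=12.0)
```

Because the amount of pessimism is fixed by the two-network minimum, changing it requires changing the critic's architecture, which is the coupling the present paper aims to remove.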
Within model-based RL (Atkeson & Santamaria, 1997), recent works have achieved remarkable sample efficiency by learning large ensembles of dynamics models for better predictions (Chua et al., 2018; Wang & Ba, 2019; Janner et al., 2019). In the model-free framework, prior works used large critic ensembles for more diverse purposes. Anschel et al. (2017) proposed to build an ensemble from several past versions of the value network to reduce the magnitude of the TD-target's bias. Moreover, Lan et al. (2020) introduced a sampling procedure over the critic's ensemble predictions to regulate underestimation in the TD-targets. Their work was later extended to the continuous setting by Chen et al. (2021), who showed that large ensembles combined with a high update-to-data ratio can outperform the sample efficiency of contemporary model-based methods. Ensembling has also been used to achieve better exploration following the principle of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002) in both discrete (Osband et al., 2016; Chen et al., 2017) and continuous settings (Ciosek et al., 2019). Lee et al. (2021) further showed the effectiveness of combining several of these strategies in a unified framework. In the same spirit as this work, multiple prior methods attempted to learn components and parameters of the underlying RL algorithms. Several works have approached this problem by utilizing expensive meta-learning strategies to obtain new learning objectives based on multi-task performance from low-computation environments (Bechtle et al., 2021; Oh et al., 2020; Xu et al., 2020; Co-Reyes et al., 2021). More related to our work, Moskovitz et al. (2021) recently proposed to use an adaptive binary controller that switches a bias-correction penalty for the TD-targets on or off.
In particular, they treat the controller optimization task as a multi-armed bandit problem solved across training iterations to maximize immediate performance improvements. Instead, GPL makes effective use of dual gradient descent to minimize bias directly, similarly to how Haarnoja et al. (2018a; b) learn the exploration temperature parameter in the Soft Actor-Critic (SAC) algorithm. 3 PRELIMINARIES. In RL, we aim to autonomously recover optimal agent behavior for performing a particular task. Formally, we describe this problem setting as a Markov Decision Process (MDP), defined as the tuple $(S, A, P, p_0, r, \gamma)$. At each time-step of interaction, the agent observes some state in the state space, $s \in S$, and performs some action in the action space, $a \in A$. The transition dynamics function $P : S \times A \times S \to \mathbb{R}$ and the initial state distribution $p_0 : S \to \mathbb{R}$ describe the evolution of the environment as a consequence of the agent's behavior. The reward function $r : S \times A \to \mathbb{R}$ quantifies the effectiveness of each performed action, while the discount factor $\gamma \in [0, 1)$ represents the agent's preference for earlier rewards. A policy $\pi : S \times A \to \mathbb{R}$ maps each state to a probability distribution over actions and represents the agent's behavior. An episode of interactions between the agent and the environment yields some trajectory $\tau$ containing the transitions experienced, $\tau = (s_0, a_0, r_0, s_1, a_1, r_1, \ldots)$. The RL objective is then to find an optimal policy $\pi^*$ that maximizes the expected sum of discounted future rewards: $\pi^* = \arg\max_\pi \mathbb{E}_{p_\pi(\tau)}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$, (1) where $p_\pi(\tau)$ represents the distribution of trajectories stemming from the agent's interaction with the environment. Off-policy RL algorithms commonly utilize some critic model to evaluate the effectiveness of the agent's behavior.
A straightforward choice for the critic is to represent the policy's action-value function $Q^\pi : S \times A \to \mathbb{R}$. This function quantifies the expected sum of discounted future rewards after executing some particular action from a given state: $Q^\pi(s, a) = \mathbb{E}_{p_\pi(\tau \mid s_0 = s, a_0 = a)}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$. (2) Most RL algorithms consider learning parameterized models for both the policy, $\pi_\theta$, and the corresponding action-value function, $Q^\pi_\phi$. In particular, after storing experience transitions $(s, a, s', r)$ in a replay data buffer $D$, we learn $Q^\pi_\phi$ by iteratively minimizing a squared TD-loss of the form: $J_Q(\phi) = \mathbb{E}_{(s, a, s', r) \sim D}\left[\left(Q^\pi_\phi(s, a) - y\right)^2\right], \quad y = r + \gamma \mathbb{E}_{a \sim \pi(s')}\left[\hat{Q}^\pi_{\phi'}(s', a)\right]$. (3) Here, the TD-targets $y$ are obtained by computing a 1-step bootstrap with a target action-value estimator $\hat{Q}^\pi_{\phi'}$. Usually, $\hat{Q}^\pi_{\phi'}$ is a regularized function of action-value predictions from a target critic model using delayed parameters $\phi'$. Following the policy gradient theorem (Sutton et al., 2000; Silver et al., 2014), we can then improve our policy by maximizing the expected returns as predicted by the critic, e.g., by minimizing the negated action-value estimates: $J_\pi(\theta) = -\mathbb{E}_{s \sim D, a \sim \pi_\theta(s)}\left[\hat{Q}^\pi_\phi(s, a)\right]$. (4) | This paper introduces GPL, an off-policy RL algorithm that aims to address the overestimation bias in Q-learning. The authors propose to estimate the bias in Q-learning via uncertainty quantification used as a penalization on the estimated Q-functions, and to learn the appropriate penalty by minimizing the bias. In addition, the authors propose to adjust the penalization parameter to conduct exploration. The authors conducted experiments with GPL incorporated into SAC and DrQ; experiments demonstrate better performance than the SAC and DrQ baselines.
| SP:4f44a9285670f8355549732887414ec04bcb7399 |
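The TD-loss of Eq. (3) and the policy objective of Eq. (4) in the preliminaries above can be sketched with generic callables for the critic, target critic, and policy. This is an illustrative, framework-free sketch (all function and variable names are ours, not the paper's), with the inner expectation over next actions approximated by sampling:

```python
import numpy as np

def td_loss(q_fn, target_q_fn, transition, gamma, policy_fn, n_action_samples=8):
    """Squared TD-loss J_Q of Eq. (3) for a single transition: the target y
    is a 1-step bootstrap with the (delayed) target critic, with the inner
    expectation E_{a ~ pi(s')} approximated by sampled actions."""
    s, a, s_next, r = transition
    bootstrap = np.mean(
        [target_q_fn(s_next, policy_fn(s_next)) for _ in range(n_action_samples)]
    )
    y = r + gamma * bootstrap
    return (q_fn(s, a) - y) ** 2

def policy_loss(q_fn, states, policy_fn):
    """Negated value estimate J_pi of Eq. (4), averaged over a batch of states."""
    return -np.mean([q_fn(s, policy_fn(s)) for s in states])

# Toy example with a linear critic and a deterministic policy:
q = lambda s, a: s + a
pi = lambda s: 0.5
jq = td_loss(q, q, (1.0, 2.0, 3.0, 1.0), 0.9, pi)  # (3 - (1 + 0.9 * 3.5))^2
jpi = policy_loss(q, [1.0, 2.0], pi)
```

In practice both losses are minimized by stochastic gradient descent over mini-batches drawn from the replay buffer, with `target_q_fn` held fixed between periodic parameter copies.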
| The paper is motivated by the over/under-estimation bias in TD-learning for deep RL. The paper proposes to dynamically adjust the pessimism level of the learning rule, by adapting a multiplier parameter using dual gradient update. The paper provides some semi-theoretic motivations and empirical performance gains. | SP:4f44a9285670f8355549732887414ec04bcb7399 |
Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions | 1 INTRODUCTION. When artificial intelligence was first studied in the 1950s, researchers sought to answer the question of what the solution to a problem would be if the agent were "perfectly rational". The term "perfectly rational" here refers to decisions made with infinite amounts of computation. However, without taking practical computation time into consideration, one can only solve small-scale problems, since classical search algorithms usually exhibit exponential running time. Recent AI research no longer seeks "perfect rationality", but instead carefully trades off computation against the level of rationality. Computational models like "bounded optimality" have been developed to formalize these settings (Russell & Subramanian, 1994). Increasing the level of rationality under a fixed computational budget has driven many recent AI successes. Notable algorithms include Monte-Carlo sampling algorithms, variational inference algorithms, and the use of neural networks as universal function approximators (Coulom, 2006; Chaslot et al., 2008; Gelly & Silver, 2011; Silver et al., 2016; Hoffman et al., 2013). More recently, MCTS-based RL algorithms have achieved a lot of success, mainly in board games. The most notable achievement is that AlphaGo beat Fan Hui in 2015 (Silver et al., 2016). This was the first time a computer program beat a human professional player. After that, AlphaGo beat two top-ranking human players, Lee Sedol in 2016 and Ke Jie in 2017, the latter of whom ranked first worldwide at the time. Later, MCTS-based RL algorithms were further extended to other board games, as well as to the Atari video games (Schrittwieser et al., 2020). EfficientZero (Ye et al.
, 2021) greatly improves the sample efficiency of MCTS-based RL algorithms, shedding light on future real-world applications such as robotics and self-driving. Despite their impressive performance, MCTS-based RL algorithms require massive computation to train and evaluate. For example, Schrittwieser et al. (2020) used 1000 TPUs for 12 hours of training to learn the game of Go, and even a single Atari game needs 40 TPUs for 12 hours. Compared to previous algorithms on the Atari games benchmark, this is around two orders of magnitude more compute. This prohibitively large computational requirement has slowed down both the further development of MCTS-based RL algorithms and their practical use. Under the hood, MCTS-based RL algorithms are model-based methods that imagine what the future looks like under different action sequences. However, this imagination process is not computationally efficient in current methods. For example, AlphaGo needs to look ahead 1600 game states to place a single stone. By contrast, top human professional players can only think through around 100-200 game states per minute (Silver et al., 2016). Besides being computationally inefficient, the current MCTS algorithm handles easy cases and hard ones with the same computational budget, whereas humans know to spend their thinking time when it is most needed. In this paper, we aim to design new algorithms that save computation time for MCTS-based RL methods. More specifically, we are interested in pushing the Pareto front of the rationality-computation curve. Empirical results show that our method can achieve comparable performance while requiring less than 50% of the simulations on average. 2 RELATED WORK. 2.1 MULTI-ARMED BANDIT PROBLEM. RL algorithms constantly face the exploration-exploitation dilemma.
The multi-armed bandit (MAB) problem (Berry & Fristedt, 1985; Auer et al., 2002; Lattimore & Szepesvári, 2020) is one of the most fundamental and extensively studied instances. The K-armed MAB problem is a sequential game with a collection of K unknown but independent reward distributions, each associated with a corresponding arm. In each round, the learner pulls an arm and receives a reward sampled from the corresponding distribution. The optimal policy for the learner is to maximize the cumulative reward obtained from the sequential decisions. When the cost of pulling arms is small, the learner can rely on trial and error until convergence. A series of upper confidence bound (UCB) algorithms (Auer et al., 2002; Bubeck & Cesa-Bianchi, 2012) have been proposed to solve the stochastic MAB problem, and they come with theoretical guarantees. When each trial has a cost, pure exploration attempts to make the best use of a finite number of trials (Bubeck et al., 2011; Lattimore & Szepesvári, 2020). Kocsis & Szepesvári (2006) proposed UCT to adapt UCB algorithms to tree structures, which is the basis of MCTS. 2.2 REINFORCEMENT LEARNING WITH MCTS. For a long time, Computer Go was regarded as a very challenging game (Bouzy & Cazenave, 2001; Cai & Wunsch, 2007). Researchers attempted to use Monte-Carlo techniques that evaluate the value of a node state through random playouts (Bouzy & Helmstetter, 2004; Gelly & Silver, 2007; 2008; Silver et al., 2016). Afterwards, UCT generally replaced those earlier heuristic methods for Monte-Carlo tree search (MCTS). UCT algorithms (Kocsis & Szepesvári, 2006) apply UCB1 to select an action at each node of the tree. Recently, MCTS-based methods (Silver et al., 2016; 2017; 2018; Schrittwieser et al.
, 2020) have become increasingly popular and achieved super-human performance on board games because of their strong search ability. Modern MCTS-based RL algorithms include four stages in each search iteration (simulation): selection, expansion, evaluation, and backpropagation. The selection stage selects a new leaf node with UCT. The expansion stage expands the selected node and updates the search tree. The evaluation stage evaluates the value of the new node. The backpropagation stage propagates the newly computed value to the nodes along the search path to obtain more accurate Q-values with Bellman backups. However, search is quite time-consuming, which prevents MCTS-based methods from being used in wider scenarios. 2.3 ACCELERATION OF MCTS. There are two kinds of speed bottlenecks in MCTS: the evaluation/selection stage of each iteration and the outer search loop. In previous research, people attempted to evaluate a node's value by random playouts to the end of the game, which makes the evaluation stage quite expensive. In addition, compared to model-free RL methods like PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018), MCTS-based algorithms require much more computation due to the search loop. Therefore, many works have been devoted to accelerating MCTS. Some heuristic pruning methods (Gelly et al., 2006; Wang & Gelly, 2007; Sephton et al., 2014; Baier & Winands, 2014; 2018) have been developed to make selection or evaluation more effective. Lorentz (2015) proposed early playout termination of MCTS (MCTS-EPT) to stop random playouts early and use an evaluation function to assess win or loss, an improvement on the evaluation stage of standard MCTS. Hsueh et al. (2016) applied MCTS-EPT to Chinese dark chess. Afterwards, ideas similar to MCTS-EPT were applied in the evaluation stage of AlphaGo Zero (Silver et al.
, 2017) and later MCTS-based methods (Silver et al., 2018; Schrittwieser et al., 2020; Ye et al., 2021), including our baseline models. They evaluate Q-values through evaluation networks instead of running playouts to the end. However, these methods focus on accelerating a specific stage within the search iteration. We propose Virtual MCTS, which aims to terminate the outer search loop of MCTS adaptively under distinct circumstances without sacrificing the final policy quality. 3 BACKGROUND. The AlphaGo series of work (Silver et al., 2016; 2017; 2018; Schrittwieser et al., 2020) are all MCTS-based reinforcement learning algorithms. These algorithms either assume the environment transition dynamics are known or learn the dynamics. Based on the dynamics, they use Monte-Carlo tree search (MCTS) as the policy improvement operator: taking in the current policy, MCTS returns a better policy via the search algorithm. The systematic search allows MCTS-based RL algorithms to quickly improve the policy and to perform much better in settings where a lot of reasoning is required. MCTS is the core component of algorithms like AlphaGo. 3.1 MCTS. In this part, we give a brief introduction to the MCTS method as implemented in reinforcement learning applications. MCTS takes in the current MDP state and runs a search algorithm guided by the current policy function. It outputs an improved policy for the current state, which is later used to select an action in the environment. In the selection stage, an action is selected by maximizing a UCB score. Specifically, AlphaZero (Silver et al., 2018) and MuZero (Schrittwieser et al., 2020) are developed based on a variant of UCB, P-UCT (Rosin, 2011), and have achieved great success on board games and Atari games.
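As a rough sketch of this P-UCT-style selection at a single node, the rule picks the action maximizing the estimated Q-value plus a prior-weighted exploration bonus over visit counts, and the root visit counts are normalized into the output policy. All Q/P/N values below are illustrative, and the constants c1 and c2 loosely follow MuZero-style defaults:

```python
import math

def puct_select(Q, P, N, c1=1.25, c2=19652.0):
    """Pick the action maximizing Q(s,a) plus a prior-weighted
    exploration bonus over visit counts (P-UCT-style rule)."""
    total = sum(N.values())
    def score(a):
        bonus = P[a] * math.sqrt(total) / (1 + N[a]) * (
            c1 + math.log((total + c2 + 1) / c2))
        return Q[a] + bonus
    return max(Q, key=score)

def visit_count_policy(N):
    """Normalize root visit counts into the output policy pi_k."""
    total = sum(N.values())
    return {a: n / total for a, n in N.items()}

# Toy example with three actions: the rarely visited action "c" with a
# high prior gets a large exploration bonus and is selected.
Q = {"a": 0.5, "b": 0.4, "c": 0.1}
P = {"a": 0.2, "b": 0.3, "c": 0.5}
N = {"a": 10, "b": 8, "c": 1}
print(puct_select(Q, P, N))        # -> "c"
print(visit_count_policy(N))
```

This illustrates the exploration behavior of the rule: as an action's visit count grows, its bonus shrinks and selection shifts toward actions with high priors but few visits.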
The P-UCT formula used in the two methods is given in Eq. (1):

$$a^k = \arg\max_{a \in \mathcal{A}} \left[ Q(s,a) + P(s,a)\,\frac{\sqrt{\sum_b N(s,b)}}{1+N(s,a)}\left(c_1 + \log\left(\frac{\sum_b N(s,b) + c_2 + 1}{c_2}\right)\right) \right], \qquad (1)$$

where $k$ is the index of the search iteration, $\mathcal{A}$ is the action set, $Q(s,a)$ is the estimated Q-value, $P(s,a)$ is the policy prior obtained from neural networks, and $N(s,a)$ is the number of times action $a$ has been selected from state $s$. The output of MCTS is the visit count of each action at the root node. After $N$ search iterations, the final policy $\pi(s)$ is defined as the normalized root visit-count distribution $\pi_N(s)$, where $\pi_k(s,a) = N(s,a)/\sum_{b\in\mathcal{A}} N(s,b) = N(s,a)/k$ for $a \in \mathcal{A}$. For simplicity, we sometimes write $\pi_k$ in place of $\pi_k(s)$. In our method, we propose to approximate the final policy $\pi_N(s)$ with $\hat{\pi}_k(s)$, which we name a policy candidate, through a new expansion method and a termination rule. In this way, the number of iterations in MCTS can be reduced from $N$ to $k$. 3.2 COMPUTATION REQUIREMENT. Most of the computation in MCTS-based RL lies in the MCTS procedure. Each action taken by MCTS requires $N$ neural network evaluations, where $N$ is the number of search iterations in MCTS. Traditional RL algorithms, such as PPO (Schulman et al., 2017) or DQN (Mnih et al., 2015), only need a single neural network evaluation per action. Thus, MCTS-based RL is roughly $N$ times more computationally expensive than traditional RL algorithms. In practice, training a single Atari game takes 12 hours of computation on 40 TPUs (Schrittwieser et al., 2020). The computation required is roughly two orders of magnitude more than traditional RL algorithms (Schulman et al., 2017), although the final performance of MuZero is much better. | This paper proposes an approach for a significant speedup of Monte-Carlo Tree Search (MCTS) at a relatively small cost in playing strength.
The basic idea is that, when the change in the distribution of visit counts between two time points $\frac{k}{2}$ and $k$ is less than some constant $\epsilon$, the remaining change in distributions between $k$ and $N$ (where $N$ is the maximum visit count allowed by some budget) can be shown to be bounded by some value; if we consider this maximum possible error to be sufficiently small, we can simply terminate the search early at time $k$. The paper also proposes a very simple but important idea called Virtual Expansions, which basically consists of running $N - k$ additional iterations of the bandit algorithm used by MCTS solely at the root node, providing the current estimated average values as rewards for every pull (i.e., leaving average reward estimates unchanged), in order to transform the distribution of visits at iteration $k$ into a prediction of the distribution we would end up with after $N$ iterations. Several empirical evaluations in Go ($9\times9$ board) and a few Atari games, including ablation studies, demonstrate that the approach can substantially reduce search times and self-play training times while only decreasing playing strength by a small amount. | SP:5620446769f9eda800a7f515d59ecaabbd7a6d24 |
Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions | 1 INTRODUCTION. When artificial intelligence was first studied in the 1950s, researchers sought to answer what the solution would be if the agent were "perfectly rational". The term "perfectly rational" here refers to decisions made with an infinite amount of computation. However, without taking practical computation time into account, one can only solve small-scale problems, since classical search algorithms usually exhibit exponential running time. Recent AI research no longer seeks "perfect rationality", but instead carefully trades off computation against the level of rationality. Computational models like "bounded optimality" have been developed to capture these settings (Russell & Subramanian, 1994). Increasing the level of rationality under the same computational budget has given us many of today's AI successes. Notable algorithms include Monte-Carlo sampling algorithms, variational inference algorithms, and the use of neural networks as universal function approximators (Coulom, 2006; Chaslot et al., 2008; Gelly & Silver, 2011; Silver et al., 2016; Hoffman et al., 2013). More recently, MCTS-based RL algorithms have achieved great success, mainly in board games. The most notable achievement is that AlphaGo beat Fan Hui in 2015 (Silver et al., 2016), the first time a computer program beat a human professional player. After that, AlphaGo beat two top-ranking human players, Lee Sedol in 2016 and Ke Jie in 2017, the latter of whom ranked first worldwide at the time. Later, MCTS-based RL algorithms were further extended to other board games, as well as to Atari video games (Schrittwieser et al., 2020). EfficientZero (Ye et al.
, 2021) greatly improves the sample efficiency of MCTS-based RL algorithms. | The paper proposes Virtual MCTS, an early-termination rule for MCTS to improve its efficiency.
Roughly speaking, the termination rule prunes the search process when the final policy at the root node is unlikely to change too much from the current policy. This strategy improves the efficiency of AlphaGo Zero-style algorithms. Specifically, the authors showed that Virtual MCTS improves the learning efficiency on $9\times9$ Go. | SP:5620446769f9eda800a7f515d59ecaabbd7a6d24 |
Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions | This paper proposes Virtual MCTS (V-MCTS), a variant of MCTS that mimics human behavior and is 50% more sample efficient, by performing a type of forward pruning.
MCTS is characterized as a model-based reinforcement learning algorithm that imagines what the future would look like, using terminology borrowed from Sutton's description of Dyna, a model-based algorithm. I agree with the planning part, but I am less certain whether I agree with the learning characterization of MCTS. Often, MCTS is used inside a model-based approach as the planning component; MCTS is usually not regarded as a full model-based approach on its own. | SP:5620446769f9eda800a7f515d59ecaabbd7a6d24 |
Few-shot Learning with Big Prototypes | 1 INTRODUCTION. Learning from few examples, i.e., few-shot learning, has received increasing attention in modern deep learning. On the one hand, building cognition of novel concepts from few instances is a crucial way for machines to imitate human intelligence. On the other hand, annotating large-scale supervised datasets is expensive and time-consuming (Lu et al., 2020). Although traditional deep neural models have achieved tremendous success under sufficient supervision, it is still challenging to produce comparable performance when training examples are limited. Hence, a series of studies have been proposed to generalize deep neural networks to low-data scenarios. One crucial branch of them is meta-learning with prototypes (Reed, 1972; Nosofsky, 1986; Snell et al., 2017), where models are trained to quickly adapt to the current task and carry out classification via metric-based comparisons between examples and newly introduced variables, the prototypes of the classes. In general, prototypes are designed to represent abstract class-level information and are calculated by taking the mean output of a few examples belonging to the same class. Thus, prototypes are represented as dense vectors with the same dimension as the embeddings of training examples and can be regarded as the center points of their classes in the embedding space. For a queried example that the model needs to predict, the basic idea is to calculate the distances between the queried example and the prototypes and to classify based on these distances. Originating from Prototypical Networks (Snell et al., 2017), a series of derived prototype-based methods have demonstrated effectiveness in few-shot learning (Ren et al., 2018; Gao et al., 2019a; Allen et al., 2019; Pan et al., 2019; Ding et al., 2021a).
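The prototype-based classification described above can be sketched as follows: a prototype is the mean embedding of one class's support examples, and a query is assigned to the nearest prototype. The class names, dimensions, and embedding values here are hypothetical toy data:

```python
import numpy as np

def compute_prototypes(support_embeddings):
    """Each prototype is the mean embedding of one class's support examples."""
    return {label: np.mean(embs, axis=0)
            for label, embs in support_embeddings.items()}

def classify_query(query, prototypes):
    """Assign the query to the class with the nearest (Euclidean) prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

# Toy 2-D example: two classes, three support examples each.
support = {
    "class_a": np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1]]),
    "class_b": np.array([[3.0, 3.0], [2.8, 3.1], [3.2, 2.9]]),
}
protos = compute_prototypes(support)
print(classify_query(np.array([0.5, 0.3]), protos))  # -> "class_a"
```

In a real few-shot setting the embeddings would come from a learned encoder network rather than being fixed vectors.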
However, prototypes are estimated from a few sampled examples, which cannot capture the overall ground-truth distribution (Yang et al., 2021). Such biased estimates may produce biased prototypes and trigger subsequent classification errors. In other words, this way of modeling prototypes may lack the capacity to express universal class-level information. Hence, to enhance the expressivity of prototypes, we propose to use areas, i.e., tensor fields, rather than points in the embedding space to represent class-level information. In this paper, we propose big prototypes, which use hyperspheres in the feature space to abstract class-level information; the feature points can be distributed inside or around the big prototypes. Such modeling has two obvious advantages: it is easy to model and easy to compute distances with. On the one hand, even if we attempt to use areas rather than points to represent class-level information, it is difficult to explicitly characterize manifolds with complex boundaries in deep learning. But with hypersphere modeling, we can obtain a big prototype through only two sets of parameters: the center and the radius of the hypersphere. On the other hand, hyperspheres are well suited to distance computation in Euclidean space. We can simply calculate the distance from a feature point to the surface of the hypersphere to perform metric-based classification, which is difficult for other manifolds. Moreover, it is easy to combine these two advantages in few-shot learning: the distance from a feature point to the surface of a big prototype can be formalized as the distance from the point to the center of the hypersphere minus the radius. Thus, both the radius and the hypersphere center appear in the loss function and participate in backpropagation during optimization.
Intuitively, for classes with sparse feature distributions, the corresponding radii of their prototypes are large, and the radii are small otherwise. We conduct extensive experiments in both natural language processing (NLP) and computer vision (CV) to evaluate the effectiveness of big prototypes. For NLP, we choose widely used benchmarks in few-shot named entity recognition (Ding et al., 2021b) and relation extraction (Han et al., 2018; Gao et al., 2019b). For CV, we use classical image classification datasets (Vinyals et al., 2016). The results demonstrate that, with only a few additional parameters introduced, such modeling significantly outperforms the baseline. Surprisingly, big prototypes perform extremely well in cross-domain few-shot relation extraction, indicating promising domain-adaptation ability. Given that such small changes can bring large benefits, we hope big prototypes can inspire new ideas in the representation learning community. The source code and checkpoints of our models will be made publicly available for reproducibility. 2 RELATED WORK . This work is related to studies of few-shot learning and meta-learning, whose primary goal is to quickly adapt deep neural models to new tasks with few training examples. To this end, two branches of studies have been proposed: optimization-based methods and metric-based methods. The optimization-based studies (Finn et al., 2017; Franceschi et al., 2018; Ravi & Beatson, 2018) regard few-shot learning as a bi-level optimization process, where a global optimization learns a good initialization across various tasks, and a local optimization quickly adapts the initialized parameters to a specific task with a few steps of gradient descent on few training examples. Compared to these studies, our work is more related to the metric-based meta-learning approaches (Vinyals et al.
, 2016; Snell et al., 2017; Satorras & Estrach, 2018; Sung et al., 2018), whose general idea is to learn a metric to measure the similarity between representations and find the closest labeled example (or a derived prototype) for an unlabeled example. Typically, these methods learn the metric function during episodic optimization. More specifically, we inherit the spirit of using prototypes to abstractly represent class-level information, which can be traced back to cognitive science (Reed, 1972; Rosch et al., 1976; Nosofsky, 1986) and statistical machine learning (Graf et al., 2009), and is similar to the Nearest Mean Classifier (Mensink et al., 2013). In deep learning, Snell et al. (2017) propose the Prototypical Network, which uses the average of example embeddings as a prototype to perform metric-based classification in few-shot learning. In the Prototypical Network, prototypes are estimated from the embeddings of a few instances, and it is hard to find a satisfying location for the prototypes of the entire dataset. Ren et al. (2018) adapt such prototype-based networks to the semi-supervised scenario where the dataset is partially annotated. A set of prototype-based networks have been proposed that concentrate on improving prototype estimation and on applications to various downstream tasks (Allen et al., 2019; Gao et al., 2019a; Li et al., 2019b; Pan et al., 2019; Seth et al., 2019; Ding et al., 2021a; Li et al., 2020c). We discuss big prototypes and some other prototype-enhanced methods in Appendix C. There has also been a series of works that embed prototypes into a non-Euclidean output space (Mettes et al., 2019; Keller-Ressel, 2020; Atigh et al., 2021). It should be noted that those studies regard hyperspheres or other non-Euclidean manifolds as the embedding space, whereas our proposed method uses hyperspheres to represent big prototypes and conducts metric-based classification in Euclidean space.
Therefore, the focus of our proposed big prototypes differs from that of the non-Euclidean prototype-based works. We evaluate the effectiveness of big prototypes on three downstream tasks across NLP and CV: few-shot named entity recognition, few-shot relation extraction, and few-shot image classification. Another technical route to achieving promising results on these downstream few-shot tasks is to exploit large-scale pre-trained models (Brown et al., 2020; Han et al., 2021b), or to build large-scale datasets for designing specific pre-training tasks (Huang et al., 2020; Soares et al., 2019) (which may face the risk of information leakage). These techniques are orthogonal to our contribution. Generally, the big-prototype method is model-agnostic. 3 BIG PROTOTYPES . This section begins with the problem setup of few-shot learning. Then, we introduce the metric, initialization, and learning of big prototypes. Unlike previous prototype-based models that use estimated dense vectors as prototypes, our approach uses hyperspheres to represent the concept-level information of classes. A big prototype is represented by two parameters, the center and the radius of the hypersphere, which are first initialized via estimation and then optimized by gradient descent in an end-to-end fashion. 3.1 PROBLEM SETUP . In this work, we consider the episodic N-way K-shot few-shot learning paradigm. In this setting, we have a large-scale annotated training set Dtrain, and our goal is to learn a model that can predict on a set of new classes Dtest, where only a few examples are labeled. The model is trained on episodes constructed from Dtrain and tested on episodes constructed from Dtest.
Each episode contains a support set for learning, S = {(x_i, y_i)}_{i=1}^{N×K}, with N classes and K examples per class, and a query set for inference, Q = {(x*_j, y*_j)}_{j=1}^{N×K'}, of examples from the same N classes. Each input is a vector x_i ∈ R^L of dimension L, and y_i is the index of its class label. For each input x_i, let h_i = f_ϕ(x_i) ∈ R^D denote the D-dimensional output embedding of a neural network f_ϕ : R^L → R^D parameterized by ϕ. Beyond conventional few-shot classification, we also carry out experiments on few-shot named entity recognition. This is a sequence labeling task where each token in a sequence must be labeled according to whether it is part of a named entity. Because context is extremely important for this task, examples are sampled at the sequence level, so the problem setup differs slightly from traditional N-way K-shot classification. We follow the strategy of Ding et al. (2021b) and sample sequences in an N-way K∼2K-shot manner (see Appendix B). 3.2 METRIC . We now introduce big prototypes, a set of hyperspheres in the feature space that abstractly represent the intrinsic features of classes. Whatever the dimension of the embedding space, one big prototype is represented by p = (z, r), where z ∈ R^D is the center of the hypersphere, with the same dimension as h = f_ϕ(x), and r ∈ R is a scalar denoting the radius of the hypersphere. The central idea is to learn a big prototype for each class under limited episodic supervision; each example (x*_j, y*_j) in the query set is predicted by measuring the distance from its embedding h* to the surfaces of the hyperspheres. The distance between two vectors is given by a metric function d : R^D × R^D → [0, +∞). In Euclidean space, the metric is d(h, z) = ∥h − z∥2.
(1) The distance d̃ from an embedding to a big prototype p is the distance from the point to the center of the hypersphere minus the radius: d̃(x, p) = d(f_ϕ(x), z) − r = ∥h − z∥2 − r. (2) Note that the value of d̃(·) may be negative; geometrically, this means the point lies inside the hypersphere, and it does not affect the calculation of the loss. Although the general idea is to use areas rather than points in the embedding space to model prototypes, hyperspheres have two natural advantages. As stated in § 1, a big prototype is uniquely determined by its center and radius, p = (z, r), whereas characterizing manifolds with complex boundaries in the embedding space is difficult. Furthermore, the metric is easy to compute via Equation 2: we use the distance from a point to the surface as the metric, so the center and the radius automatically enter the loss function and are optimized. Under this geometric interpretation, sparse classes will have larger learned radii, while compact classes will have smaller learned radii. | The paper proposes to represent prototypes by hyperspheres with dynamic sizes. A so-called big prototype is characterized by the center and the radius of the hypersphere. Empirical results are reported on few-shot named entity recognition (NER), few-shot relation extraction (RE), and few-shot image classification. | SP:f7d5271c6dcdaf0cf99d3c6f5e2b517904b93ce3
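To make the metric concrete, here is a minimal sketch of Equation 2 and the resulting nearest-prototype classification, assuming NumPy; the 2-D centers, radii, and query point are illustrative, not taken from the paper.

```python
import numpy as np

def big_prototype_distance(h, center, radius):
    """Distance from embedding h to the surface of a hypersphere prototype
    (Eq. 2): Euclidean distance to the center minus the radius.
    The value may be negative when h lies inside the sphere."""
    return np.linalg.norm(h - center) - radius

def classify(h, centers, radii):
    """Assign h to the class whose big-prototype surface is closest."""
    d = np.array([big_prototype_distance(h, z, r)
                  for z, r in zip(centers, radii)])
    return int(np.argmin(d))

# Two hand-picked 2-D prototypes: a tight class and a wider class.
centers = [np.array([0.0, 0.0]), np.array([5.0, 0.0])]
radii = [1.0, 2.0]
query = np.array([3.0, 0.0])
# Surface distances: 3 - 1 = 2 to class 0, 2 - 2 = 0 to class 1.
print(classify(query, centers, radii))  # -> 1
```

Because the radius is subtracted, a class with a larger learned radius "claims" points farther from its center, which is exactly the sparse-class intuition stated above.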
Few-shot Learning with Big Prototypes | [paper text identical to the row above] | The paper proposes to use areas to model prototypes in FSL. The prototypes, named big prototypes, are represented by hyperspheres with dynamic sizes. Rather than point-based prototypes, the new area-based prototypes in the embedding space can represent the class-level information with more expressivity. | SP:f7d5271c6dcdaf0cf99d3c6f5e2b517904b93ce3
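The paper states that the center and radius are "first initialized via estimation" before gradient-based optimization. A minimal sketch of one plausible estimator, assuming NumPy: the center is the support-embedding mean, as in Prototypical Networks, while the mean-distance radius is an illustrative assumption, since the paper's exact estimator is not given in this excerpt.

```python
import numpy as np

def init_big_prototype(support_embeddings):
    """Estimate an initial big prototype (center, radius) from the K support
    embeddings of one class. Center = embedding mean (Prototypical Networks);
    radius = mean distance of support points to that center (an assumed,
    illustrative choice)."""
    H = np.asarray(support_embeddings)   # shape (K, D)
    center = H.mean(axis=0)
    radius = float(np.mean(np.linalg.norm(H - center, axis=1)))
    return center, radius

# Four points on the unit circle: center at the origin, radius 1.
H = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
z, r = init_big_prototype(H)
print(z, r)  # -> center [0. 0.], radius 1.0
```

Both `z` and `r` would then be treated as learnable parameters and refined by backpropagation through the loss, consistent with §3.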
Few-shot Learning with Big Prototypes | 1 INTRODUCTION . Learning from few examples , i.e. , few-shot learning , receives increasing attention in modern deep learning . On the one hand , constituting cognition of novel concepts with few instances is a crucial way for machines to imitate human intelligence . On the other hand , annotating large-scale supervised datasets is expensive and time-consuming ( Lu et al. , 2020 ) . Although traditional deep neural models have achieved tremendous success under sufficient supervision , it is still challenging to produce comparable performance when training examples are limited . Hence , a series of studies are proposed to generalize deep neural networks to low-data scenarios . One crucial branch of them is meta-learning with prototypes ( Reed , 1972 ; Nosofsky , 1986 ; Snell et al. , 2017 ) , where models are trained to quickly adapt to current tasks and carry out classification via metric-based comparisons between examples and newly introduced variables , prototypes of those classes . In a general way , prototypes are designed to represent abstract class-level information and calculated by taking the mean output of a few examples belonging to one same class . Thus , prototypes are represented as dense vectors with the same dimensions as the embeddings of training examples and can be regarded as center points of those classes in the embedding space . And for the queried examples that the model needs to predict , the basic idea is to calculate the distances between the queried examples and prototypes and conduct classification based on the distances . Originated from Prototypical Network ( Snell et al. , 2017 ) , sets of derivative prototype-based methods demonstrate the effectiveness in few-shot learning ( Ren et al. , 2018 ; Gao et al. , 2019a ; Allen et al. , 2019 ; Pan et al. , 2019 ; Ding et al. , 2021a ) . 
However , prototypes are estimated from a few sampled examples , which can not uncover the overall ground truth distribution ( Yang et al. , 2021 ) . Such biased distributions may generate biased prototypes and trigger sequent classification errors . In other words , this modeling approach of prototypes could lack the capability to express the universal class-level information . Hence , to enhance such expressibility of prototypes , we propose to use areas , i.e. , tensor fields , rather than points in the embedding space to represent the class-level information . In this paper , we propose big prototypes to use hyperspheres in the feature space to abstract the class-level information , and the feature points can be distribued inside or around the big prototypes . Such modeling is equipped with two obvious advantages : easy to model and easy to calculate the distances . On the one hand , even if we attempt to use areas but not points to represent class-level information , it is difficult to explicitly characterize manifolds with complex boundaries in deep learning . But via hyperspheres modeling , we can obtain a big prototype only through two sets of parameters : the center and the radius of hyperspheres . On the other hand , hyperspheres are suitable for calculating the distances in Euclidean space . We can simply calculate the distance from one feature point to the surface of the hypersphere to perform metric-based classification , which is also difficult for other manifolds . Moreover , it is easy to combine these two advantages in few-shot learning , the distances from one feature point to the surface of a big prototype can be formalized as the distance from the point to the center of the hypersphere minus the radius . Thus , both radius and hypersphere center can appear in the loss function and participate in the backward propagation during optimization . 
Intuitively , for the classes with sparse feature distribution , the corresponding radii of their prototypes are large , and the radii are small otherwise . We conduct extensive experiments in both natural language processing ( NLP ) and computer vision ( CV ) to evaluate the effectiveness of big prototypes . Specifically for NLP , we choose widely used benchmarks in the few-shot named entity ( Ding et al. , 2021b ) and relation extraction ( Han et al. , 2018 ; Gao et al. , 2019b ) . For CV , we use classical image classification datasets ( Vinyals et al. , 2016 ) in experiments . The results demonstrate that , with only a few additional parameters introduced , such modeling significantly outperforms the baseline . Surprisingly , big prototypes perform extremely well in cross-domain few-shot relation extraction , indicating the promising ability to domain adaptation . Given that such small changes can bring huge benefits , hopefully , big prototypes can inspire new ideas for the research community of representation learning . The source code and checkpoints of our models in experiments will be publicly available for reproducibility . 2 RELATED WORK . This work is related to studies of few-shot learning and meta-learning , whose primary goal is to quickly adapt deep neural models to new tasks with few training examples . To achieve that , two branches of studies are proposed : optimization-based methods and metric-based methods . The optimization-based studies ( Finn et al. , 2017 ; Franceschi et al. , 2018 ; Ravi & Beatson , 2018 ) regard few-shot learning as a bi-level optimization process , where a global optimization is conducted to learn a good initialization across various tasks , and a local optimization quickly adapts the initialization parameters to specific tasks with few training examples by few steps of gradient descent . Compared to the mentioned studies , our work is more related to the metric-based meta-learning approaches ( Vinyals et al. 
, 2016 ; Snell et al. , 2017 ; Satorras & Estrach , 2018 ; Sung et al. , 2018 ) , whose general idea is to learn a metric to measure the similarity between representations and find the closest labeled example ( or a derived prototype ) for an unlabeled example . Typically , these methods learn a metric function during episodic optimization . More specifically , we inherit the spirit that uses prototypes to abstractly represent class-level information , which could be tracked back to cognitive science ( Reed , 1972 ; Rosch et al. , 1976 ; Nosofsky , 1986 ) , statistical machine learning ( Graf et al. , 2009 ) and similar to the Nearest Mean Classifier ( Mensink et al. , 2013 ) . In the area of deep learning , Snell et al . ( 2017 ) propose the prototypical network to exploit the average of example embeddings as a prototype to perform metric-based classification in few-shot learning . In Prototypical Network , prototypes are estimated by the embeddings of instances , and it is hard to find a satisfying location of the prototypes of the entire dataset . Ren et al . ( 2018 ) adapt such prototype-based networks in the semi-supervised scenario where the dataset is partially annotated . A set of prototype-based networks are proposed concentrating on the improvements of prototype estimations and application to various downstream tasks ( Allen et al. , 2019 ; Gao et al. , 2019a ; Li et al. , 2019b ; Pan et al. , 2019 ; Seth et al. , 2019 ; Ding et al. , 2021a ; Li et al. , 2020c ) . We discuss big prototypes and some other prototype-enhanced methods in Appendix C. There has also been a series of works that embed prototypes into a non-Euclidean output space ( Mettes et al. , 2019 ; Keller-Ressel , 2020 ; Atigh et al. , 2021 ) . It should be noted that these studies regard hyperspheres or other non-Euclidean manifolds as the embedding space , and our proposed method use hyperspheres to represent big prototypes and conduct metric-based classification in the Euclidean space . 
Therefore , the focus of our proposed big prototypes is different from the nonEuclidean prototype-based works . We evaluate the effectiveness of big prototypes in three downstream tasks across NLP and CV , including few-shot named entity recognition , few-shot relation extraction , and few-shot image classification . Another technical route to achieve promising results in these downstream few-shot tasks is to use the ability of large-scale pre-trained models ( Brown et al. , 2020 ; Han et al. , 2021b ) , or build large-scale data sets to design specific pre-training tasks ( Huang et al. , 2020 ; Soares et al. , 2019 ) ( which may face the risk of information leakage ) . These techniques are also orthogonal to our contribution . Generally , the method of big prototypes is model-agnostic . 3 BIG PROTOTYPES . This section begins with the problem setup of few-shot learning . Then , we introduce the metrics , initialization , and learning of big prototypes . Generally , unlike previous prototype-based models that use estimated dense vectors as prototypes , our approach use hyperspheres to represent the conceptlevel information for classes . One big prototype is represented by two parameters : the center and the radius of the hypersphere , which are firstly initialized via estimation and then optimized by gradient descent in an end-to-end fashion . 3.1 PROBLEM SETUP . In this work , we consider the episodic N way K shot few-shot learning paradigm . In this setting , we have a large-scale annotated training setDtrain , and our goal is to learn a model that could predict for a set of new classes Dtest , where only a few examples are labeled . In this setting , the model will be trained in episodes constructed using Dtrain and tested on episodes constructed using Dtest . 
Each episode contains a support set for learning, $S = \{(x_i, y_i)\}_{i=1}^{N \times K}$, with N classes and K examples per class, and a query set for inference, $Q = \{(x^*_j, y^*_j)\}_{j=1}^{N \times K'}$, of examples from the same N classes. Each input is a vector $x_i \in \mathbb{R}^L$ of dimension L, and $y_i$ is a class-label index. For each input $x_i$, let $h_i = f_\phi(x_i) \in \mathbb{R}^D$ denote the D-dimensional output embedding of a neural network $f_\phi : \mathbb{R}^L \to \mathbb{R}^D$ parameterized by $\phi$. Beyond conventional few-shot classification, we also carry out experiments on few-shot named entity recognition. This is a sequence labeling task in which each token in a sequence must be labeled according to whether it is part of a named entity. Because context is extremely important for this task, examples are sampled at the sequence level, so the problem setup differs slightly from traditional N-way K-shot classification. We follow the strategy of Ding et al. (2021b) and sample sequences in an N-way K∼2K-shot manner (see Appendix B). 3.2 METRIC. We now introduce big prototypes, a set of hyperspheres in the feature space that abstractly represent the intrinsic features of classes. Regardless of the dimension of the embedding space, one big prototype is represented by $p = (z, r)$, where $z \in \mathbb{R}^D$ is the center of the hypersphere, with the same dimension as $h = f_\phi(x)$, and $r \in \mathbb{R}$ is a scalar denoting the radius of the hypersphere. The central idea is to learn a big prototype for each class from limited episodic supervision; each example in the query set $(x^*_j, y^*_j)$ is then predicted by measuring the distance from the embedding $h^*$ to the surface of the hyperspheres. The distance between two vectors is calculated by a metric function $d : \mathbb{R}^D \times \mathbb{R}^D \to [0, +\infty)$. In the Euclidean space, the metric is $d(h, z) = \|h - z\|_2$.
(1) The distance $\tilde{d}$ from an embedding to a big prototype $p$ is the distance from the point to the center of the hypersphere minus the radius: $\tilde{d}(x, p) = d(f_\phi(x), z) - r = \|h - z\|_2 - r$. (2) Note that in this case the value of $\tilde{d}(\cdot)$ may be negative; geometrically, this means the point lies inside the hypersphere, and it does not affect the calculation of the loss. Although the general idea is to use regions rather than points in the embedding space to model prototypes, hyperspheres naturally have two obvious advantages. First, as stated in § 1, one big prototype can be uniquely modeled by its center and radius, $p = (z, r)$, whereas characterizing manifolds with complex boundaries in the embedding space is difficult. Second, the metric is easy to calculate via Equation 2. We use the distance from a point to the surface as the metric; in this way, the center and the radius both appear in the loss function and are optimized. Under this geometric interpretation, sparse classes will have larger learned radii, while compact classes will have smaller learned radii. | In few-shot learning, prototypes have been widely used to represent classes, and classification can then be performed by computing distances to the prototype representation of each class. This paper proposes to use hyperspheres to model prototypes in the feature space, instead of vector points, to enhance the expressivity of class-level information. The proposed hypersphere model only needs two sets of parameters (the center and the radius of each hypersphere), so it does not add burden to the optimization of the objective function. Extensive experiments in both NLP and CV are conducted to evaluate the effectiveness of the proposed model. The results show that the proposed model significantly outperforms the baseline, and it performs well in cross-domain few-shot relation extraction. | SP:f7d5271c6dcdaf0cf99d3c6f5e2b517904b93ce3 |
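The big-prototype metric of Eq. (2) can be sketched in a few lines of NumPy. This is a minimal illustration of the signed point-to-surface distance and the resulting nearest-prototype classification rule; the prototype centers and radii below are made-up values, not learned parameters from the paper.

```python
import numpy as np

def big_prototype_distance(h, center, radius):
    """Signed distance from embedding h to the surface of the hypersphere
    prototype p = (center, radius), as in Eq. (2): ||h - z||_2 - r.
    Negative values mean h lies inside the sphere."""
    return np.linalg.norm(h - center) - radius

def classify(h, prototypes):
    """Assign h to the class whose prototype surface is closest."""
    dists = [big_prototype_distance(h, z, r) for z, r in prototypes]
    return int(np.argmin(dists))

# Illustrative prototypes in a 2-D embedding space:
# class 0 is compact (small radius), class 1 is sparse (large radius).
prototypes = [(np.array([0.0, 0.0]), 0.5),
              (np.array([4.0, 0.0]), 2.0)]

h = np.array([2.5, 0.0])
# Distances to the surfaces: 2.5 - 0.5 = 2.0 and 1.5 - 2.0 = -0.5,
# so the query is assigned to class 1 (it lies inside that sphere).
print(classify(h, prototypes))  # -> 1
```

Note how the signed distance lets a query that falls inside a large-radius (sparse) class win over a closer-centered but compact class, which is exactly the geometric behavior the radii are meant to capture.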
Revisiting and Advancing Fast Adversarial Training Through the lens of Bi-Level Optimization | 1 INTRODUCTION. Given that machine learning (ML) models can be easily fooled by tiny adversarial perturbations (also known as adversarial attacks) on the input (Goodfellow et al., 2014; Carlini & Wagner, 2017; Papernot et al., 2016), learning robust deep neural networks (DNNs) is now a major research focus. Nearly all existing effective defense mechanisms (Madry et al., 2018; Zhang et al., 2019b; Shafahi et al., 2019; Wong et al., 2020; Zhang et al., 2019a; Athalye et al., 2018a) are built on the adversarial training (AT) recipe, first developed in (Szegedy et al., 2014) and later formalized in (Madry et al., 2018) using min-max optimization. In contrast to standard model training via empirical risk minimization, AT (Madry et al., 2018) calls for min-max optimization: a minimizer (i.e., the defender) seeks to update model parameters against a maximizer (i.e., the attacker) that aims to worsen the training loss by perturbing each training example. AT-type defenses have been widely adopted in various application domains, including image classification (Goodfellow et al., 2014; Madry et al., 2018; Kurakin et al., 2017), object detection (Zhang & Wang, 2019), natural language processing (Miyato et al., 2016; Zhu et al., 2019), and healthcare (Finlayson et al., 2019; Mahmood et al., 2019). Despite their effectiveness, their min-max optimization nature makes them difficult to scale, because multiple maximization steps (required by an iterative attack generator) are needed at every model training step. The resulting prohibitive computation cost prevents AT from being a feasible solution for enhancing adversarial robustness when computing resources are limited. For example, Xie et al. (2019) used 128 GPUs to make AT practical on ImageNet.
Thereby, how to speed up AT without losing accuracy and robustness is now a grand challenge for adversarial defense. Very recently, some works have attempted to develop computationally efficient alternatives to AT, which we call ‘fast’ versions of AT (Shafahi et al., 2019; Zhang et al., 2019a; Wong et al., 2020; Andriushchenko & Flammarion, 2020). To the best of our knowledge, FAST-AT (Wong et al., 2020) and FAST-AT with gradient alignment (GA) regularization, termed FAST-AT-GA (Andriushchenko & Flammarion, 2020), are the two state-of-the-art (SOTA) ‘fast’ versions of AT, since they achieve a significant reduction in computation complexity while preserving accuracy and robustness to some extent. Specifically, FAST-AT (Wong et al., 2020) replaces the iterative attack generator used in AT with a heuristic single-step attack generation method, so its computation cost is merely comparable to that of standard model training. However, FAST-AT suffers from two main issues: (i) lack of stability, i.e., large variance in performance (Li et al., 2020), and (ii) catastrophic overfitting of robustness, i.e., a large drop in robustness when training with strong adversaries (Andriushchenko & Flammarion, 2020). To alleviate these problems, Andriushchenko & Flammarion (2020) proposed FAST-AT-GA, which penalizes FAST-AT with an explicit robust regularization given by GA. However, we will show that FAST-AT-GA encounters a new problem (iii): it hampers standard accuracy, yielding a poor accuracy-robustness tradeoff at a large attack budget (ε = 16/255), i.e., the improvement in robust accuracy (RA) comes at the cost of a sharp drop in standard accuracy (SA). Given limitations (i)-(iii), we ask: How can one design a theoretically grounded ‘fast’ version of AT with improved stability, mitigated catastrophic overfitting, and an enhanced accuracy-robustness tradeoff?
To address the above question, in this paper we revisit and advance AT through the lens of bi-level optimization (BLO) (Dempe, 2002), casting attack generation as a constrained lower-level optimization problem and defense as an upper-level optimization problem. To the best of our knowledge, this is the first work to make a solid connection between adversarial defense and BLO. Technically, we show that FAST-AT can be interpreted as BLO with a linearized lower-level problem. Delving into the linearization of BLO, we propose a novel, theoretically grounded ‘fast’ AT framework, fast bi-level AT (FAST-BAT). Practically, Table 1 highlights some of the achieved improvements over FAST-AT and FAST-AT-GA: when a stronger train-time attack is adopted (i.e., ε = 16/255 vs. 8/255), FAST-AT suffers a large degradation of robust accuracy (RA) and standard accuracy (SA), together with higher variance than the proposed FAST-BAT. Although FAST-AT-GA outperforms FAST-AT, it still incurs a significant SA loss (over 21%) at ε = 16/255. By contrast, FAST-BAT yields a more graceful SA-RA tradeoff: a 9% improvement in SA without loss of RA. Different from FAST-AT-GA, FAST-BAT achieves the above improvements in stability, RA, and SA without resorting to any extra robust regularization, and thus takes less computation cost. Contributions. We summarize our contributions below. (1) We propose a new formulation of adversarially robust training through the lens of BLO, yielding a novel and theoretically grounded interpretation of FAST-AT. (2) We propose a new, systematic, and effective fast BLO-oriented AT framework, termed FAST-BAT, with rigorously established theory and algorithms. (3) We conduct extensive experiments on FAST-BAT, showing its improved stability, mitigated catastrophic overfitting, and enhanced accuracy-robustness tradeoff; see illustrations in Table 1. 2 RELATED WORK. Adversarial attack.
Adversarial attacks are techniques that generate malicious perturbations imperceptible to humans yet able to mislead machine learning (ML) models (Goodfellow et al., 2014; Carlini & Wagner, 2017; Croce & Hein, 2020; Xu et al., 2019; Athalye et al., 2018b). A popular threat model is the $\ell_p$-norm ball constrained attack (p ∈ {0, 1, 2, ∞}), which is also the focus of this paper. Adversarial attacks have become a major approach to evaluating the robustness of deep neural networks (DNNs) and thus help build safe artificial intelligence in many high-stakes applications such as autonomous driving (Deng et al., 2020; Kumar et al., 2020), surveillance (Thys et al., 2019; Xu et al., 2020), and healthcare (Finlayson et al., 2019). Adversarial defense and robust training at scale. Our work falls into the category of robust training, which is mostly built upon min-max optimization. For example, Madry et al. (2018) first established the AT framework, which has been recognized as one of the most powerful defenses (Athalye et al., 2018a). Extending AT, TRADES (Zhang et al., 2019b) seeks the optimal balance between robustness and generalization ability. Further, AT-type defenses have been generalized to semi-/self-supervised settings (Carmon et al., 2019; Chen et al., 2020) and integrated with certified defense techniques such as randomized smoothing (Salman et al., 2019). Despite the effectiveness of AT and its variants, they incur high computation costs, and how to speed up AT without losing performance remains an open question. Some recent works impose algorithmic simplifications on AT, leading to fast but approximate AT algorithms, such as ‘free’ AT (Shafahi et al., 2019), you only propagate once (YOPO) (Zhang et al., 2019a), FAST-AT (Wong et al.
, 2020), and FAST-AT regularized by gradient alignment (termed FAST-AT-GA) (Andriushchenko & Flammarion, 2020). In particular, FAST-AT and FAST-AT-GA are the baselines most relevant to ours, since they were designed with the least computation complexity. However, their defense performance is far from satisfactory. For example, FAST-AT has poor training stability (Li et al., 2020) and suffers catastrophic overfitting when facing strong attacks (Andriushchenko & Flammarion, 2020). In contrast to FAST-AT, FAST-AT-GA yields improved robustness but a poor accuracy-robustness tradeoff (e.g., Table 1). In this paper, we aim to advance the algorithmic foundation of ‘fast robust training’ through the lens of BLO (bi-level optimization). We will show that the proposed FAST-BAT leads to stable robust learning without catastrophic overfitting and achieves a graceful tradeoff between accuracy and robustness. Bi-level optimization (BLO). BLO is a unified hierarchical learning framework in which the objective and variables of an upper-level problem depend on the optimizer of certain lower-level problems. In its most generic form, BLO is a class of very challenging problems, and thus the design of algorithms and theory for BLO focuses on special cases (Vicente et al., 1994; White & Anandalingam, 1993; Gould et al., 2016; Ghadimi & Wang, 2018; Ji et al., 2020; Hong et al., 2020). In practice, successful applications of BLO to ML have been witnessed in meta-learning (Rajeswaran et al., 2019), data poisoning attack design (Huang et al., 2020), and reinforcement learning (Chen et al., 2019). However, as will be evident later, existing BLO approaches cannot be directly applied to adversarial defense due to the presence of a constrained nonconvex lower-level problem (for attack generation).
To the best of our knowledge, our work makes a rigorous connection between adversarial defense and BLO for the first time. 3 A BI-LEVEL OPTIMIZATION VIEW ON FAST-AT. Preliminaries on FAST-AT. FAST-AT is designed to solve the adversarial training problem (Madry et al., 2018) given below: $\min_{\theta} \ \mathbb{E}_{(x,y) \in D} \big[ \max_{\delta \in C} \ \ell_{\mathrm{tr}}(\theta, x + \delta, y) \big]$, (1) where $\theta \in \mathbb{R}^n$ denotes the model parameters, $D$ is the training set consisting of labeled data pairs with feature $x$ and label $y$, $\delta \in \mathbb{R}^d$ represents adversarial perturbations subject to the perturbation constraint $C$, e.g., $C = \{\delta \mid \|\delta\|_\infty \le \epsilon, \ x + \delta \in [0, 1]\}$ for the $\epsilon$-toleration $\ell_\infty$-norm constrained attack (with inputs normalized to [0, 1]); $(x + \delta)$ is then called an adversarial example, and $\ell_{\mathrm{tr}}(\cdot)$ represents a training loss. The standard solver for problem (1) is known as AT (Madry et al., 2018). However, it has to call an iterative optimization method (e.g., a K-step PGD attack) to solve the inner maximization problem of (1); as a result, AT is computationally intensive. To improve its scalability, FAST-AT, which takes only a single-step PGD attack for the inner maximization, was proposed and successfully implemented in (Wong et al., 2020). The algorithm backbone of FAST-AT is summarized below. | This work focuses on the problem of speeding up adversarial training in the $\ell_\infty$ threat model. The work first describes two previous works designed to speed up adversarial training: - Fast-AT: carefully perform single-step PGD (otherwise known as FGSM) to train $\ell_\infty$ robust models - from Wong, Rice and Kolter 2020 - Fast-AT-GA: single-step PGD + a gradient alignment loss function - from Andriushchenko and Flammarion 2020. The work then lists a problem with Fast-AT: - Fast-AT experiences catastrophic overfitting (i.e.
provides no robustness on held-out data) for $\epsilon = 16/255$ (note: Fast-AT-GA does not have this catastrophic overfitting issue). Finally, the authors present a new method based on bi-level optimization to perform fast $\ell_\infty$-robust learning. They evaluate it and compare with Fast-AT-GA using a 20-epoch schedule, finding that for $\epsilon = 16/255$ the technique provides the same robustness but with standard accuracy ~68% instead of ~59%. | SP:ec087b0c8ba704914274f5d88e3229c113bd59b2 |
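The single-step inner maximization at the core of Fast-AT, as described in the paper text above, can be sketched as follows. This is a hedged NumPy illustration, not the authors' implementation: the toy linear loss, the seed, and the step size are assumptions, and `grad_fn` stands in for a backward pass through a real model.

```python
import numpy as np

def fast_at_perturbation(x, grad_fn, eps, alpha, rng=None):
    """Single-step (FGSM-style) inner maximization in the spirit of
    Fast-AT: random initialization in the eps-ball, one signed-gradient
    ascent step of size alpha, then projection back onto
    C = {delta : ||delta||_inf <= eps, x + delta in [0, 1]}.
    `grad_fn(z)` returns the gradient of the training loss w.r.t. the
    input z (a stand-in for a model backward pass)."""
    if rng is None:
        rng = np.random.default_rng(0)
    delta = rng.uniform(-eps, eps, size=x.shape)         # random start
    delta = delta + alpha * np.sign(grad_fn(x + delta))  # one ascent step
    delta = np.clip(delta, -eps, eps)                    # l_inf projection
    delta = np.clip(x + delta, 0.0, 1.0) - x             # keep x + delta valid
    return delta

# Toy linear "loss" l(z) = w^T z, whose input gradient is the constant w
# (illustrative only; not the paper's model).
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
delta = fast_at_perturbation(x, lambda z: w, eps=8 / 255, alpha=10 / 255)
```

Because only one gradient step is taken per batch, the cost is close to standard training, which is exactly why this recipe is ‘fast’ but also why it can catastrophically overfit at large ε.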
Revisiting and Advancing Fast Adversarial Training Through the lens of Bi-Level Optimization | 1 INTRODUCTION. Given that machine learning (ML) models can be easily fooled by tiny adversarial perturbations (also known as adversarial attacks) on the input (Goodfellow et al., 2014; Carlini & Wagner, 2017; Papernot et al., 2016), learning robust deep neural networks (DNNs) is now a major research focus. Nearly all existing effective defense mechanisms (Madry et al., 2018; Zhang et al., 2019b; Shafahi et al., 2019; Wong et al., 2020; Zhang et al., 2019a; Athalye et al., 2018a) are built on the adversarial training (AT) recipe, first developed in (Szegedy et al., 2014) and later formalized in (Madry et al., 2018) using min-max optimization. In contrast to standard model training via empirical risk minimization, AT (Madry et al., 2018) calls for min-max optimization: a minimizer (i.e., the defender) seeks to update model parameters against a maximizer (i.e., the attacker) that aims to worsen the training loss by perturbing each training example. AT-type defenses have been widely adopted in various application domains, including image classification (Goodfellow et al., 2014; Madry et al., 2018; Kurakin et al., 2017), object detection (Zhang & Wang, 2019), natural language processing (Miyato et al., 2016; Zhu et al., 2019), and healthcare (Finlayson et al., 2019; Mahmood et al., 2019). Despite their effectiveness, their min-max optimization nature makes them difficult to scale, because multiple maximization steps (required by an iterative attack generator) are needed at every model training step. The resulting prohibitive computation cost prevents AT from being a feasible solution for enhancing adversarial robustness when computing resources are limited. For example, Xie et al. (2019) used 128 GPUs to make AT practical on ImageNet.
Thereby, how to speed up AT without losing accuracy and robustness is now a grand challenge for adversarial defense. Very recently, some works have attempted to develop computationally efficient alternatives to AT, which we call ‘fast’ versions of AT (Shafahi et al., 2019; Zhang et al., 2019a; Wong et al., 2020; Andriushchenko & Flammarion, 2020). To the best of our knowledge, FAST-AT (Wong et al., 2020) and FAST-AT with gradient alignment (GA) regularization, termed FAST-AT-GA (Andriushchenko & Flammarion, 2020), are the two state-of-the-art (SOTA) ‘fast’ versions of AT, since they achieve a significant reduction in computation complexity while preserving accuracy and robustness to some extent. Specifically, FAST-AT (Wong et al., 2020) replaces the iterative attack generator used in AT with a heuristic single-step attack generation method, so its computation cost is merely comparable to that of standard model training. However, FAST-AT suffers from two main issues: (i) lack of stability, i.e., large variance in performance (Li et al., 2020), and (ii) catastrophic overfitting of robustness, i.e., a large drop in robustness when training with strong adversaries (Andriushchenko & Flammarion, 2020). To alleviate these problems, Andriushchenko & Flammarion (2020) proposed FAST-AT-GA, which penalizes FAST-AT with an explicit robust regularization given by GA. However, we will show that FAST-AT-GA encounters a new problem (iii): it hampers standard accuracy, yielding a poor accuracy-robustness tradeoff at a large attack budget (ε = 16/255), i.e., the improvement in robust accuracy (RA) comes at the cost of a sharp drop in standard accuracy (SA). Given limitations (i)-(iii), we ask: How can one design a theoretically grounded ‘fast’ version of AT with improved stability, mitigated catastrophic overfitting, and an enhanced accuracy-robustness tradeoff?
To address the above question, in this paper we revisit and advance AT through the lens of bi-level optimization (BLO) (Dempe, 2002), casting attack generation as a constrained lower-level optimization problem and defense as an upper-level optimization problem. To the best of our knowledge, this is the first work to make a solid connection between adversarial defense and BLO. Technically, we show that FAST-AT can be interpreted as BLO with a linearized lower-level problem. Delving into the linearization of BLO, we propose a novel, theoretically grounded ‘fast’ AT framework, fast bi-level AT (FAST-BAT). Practically, Table 1 highlights some of the achieved improvements over FAST-AT and FAST-AT-GA: when a stronger train-time attack is adopted (i.e., ε = 16/255 vs. 8/255), FAST-AT suffers a large degradation of robust accuracy (RA) and standard accuracy (SA), together with higher variance than the proposed FAST-BAT. Although FAST-AT-GA outperforms FAST-AT, it still incurs a significant SA loss (over 21%) at ε = 16/255. By contrast, FAST-BAT yields a more graceful SA-RA tradeoff: a 9% improvement in SA without loss of RA. Different from FAST-AT-GA, FAST-BAT achieves the above improvements in stability, RA, and SA without resorting to any extra robust regularization, and thus takes less computation cost. Contributions. We summarize our contributions below. (1) We propose a new formulation of adversarially robust training through the lens of BLO, yielding a novel and theoretically grounded interpretation of FAST-AT. (2) We propose a new, systematic, and effective fast BLO-oriented AT framework, termed FAST-BAT, with rigorously established theory and algorithms. (3) We conduct extensive experiments on FAST-BAT, showing its improved stability, mitigated catastrophic overfitting, and enhanced accuracy-robustness tradeoff; see illustrations in Table 1. 2 RELATED WORK. Adversarial attack.
Adversarial attacks are techniques that generate malicious perturbations imperceptible to humans yet able to mislead machine learning (ML) models (Goodfellow et al., 2014; Carlini & Wagner, 2017; Croce & Hein, 2020; Xu et al., 2019; Athalye et al., 2018b). A popular threat model is the $\ell_p$-norm ball constrained attack (p ∈ {0, 1, 2, ∞}), which is also the focus of this paper. Adversarial attacks have become a major approach to evaluating the robustness of deep neural networks (DNNs) and thus help build safe artificial intelligence in many high-stakes applications such as autonomous driving (Deng et al., 2020; Kumar et al., 2020), surveillance (Thys et al., 2019; Xu et al., 2020), and healthcare (Finlayson et al., 2019). Adversarial defense and robust training at scale. Our work falls into the category of robust training, which is mostly built upon min-max optimization. For example, Madry et al. (2018) first established the AT framework, which has been recognized as one of the most powerful defenses (Athalye et al., 2018a). Extending AT, TRADES (Zhang et al., 2019b) seeks the optimal balance between robustness and generalization ability. Further, AT-type defenses have been generalized to semi-/self-supervised settings (Carmon et al., 2019; Chen et al., 2020) and integrated with certified defense techniques such as randomized smoothing (Salman et al., 2019). Despite the effectiveness of AT and its variants, they incur high computation costs, and how to speed up AT without losing performance remains an open question. Some recent works impose algorithmic simplifications on AT, leading to fast but approximate AT algorithms, such as ‘free’ AT (Shafahi et al., 2019), you only propagate once (YOPO) (Zhang et al., 2019a), FAST-AT (Wong et al.
, 2020), and FAST-AT regularized by gradient alignment (termed FAST-AT-GA) (Andriushchenko & Flammarion, 2020). In particular, FAST-AT and FAST-AT-GA are the baselines most relevant to ours, since they were designed with the least computation complexity. However, their defense performance is far from satisfactory. For example, FAST-AT has poor training stability (Li et al., 2020) and suffers catastrophic overfitting when facing strong attacks (Andriushchenko & Flammarion, 2020). In contrast to FAST-AT, FAST-AT-GA yields improved robustness but a poor accuracy-robustness tradeoff (e.g., Table 1). In this paper, we aim to advance the algorithmic foundation of ‘fast robust training’ through the lens of BLO (bi-level optimization). We will show that the proposed FAST-BAT leads to stable robust learning without catastrophic overfitting and achieves a graceful tradeoff between accuracy and robustness. Bi-level optimization (BLO). BLO is a unified hierarchical learning framework in which the objective and variables of an upper-level problem depend on the optimizer of certain lower-level problems. In its most generic form, BLO is a class of very challenging problems, and thus the design of algorithms and theory for BLO focuses on special cases (Vicente et al., 1994; White & Anandalingam, 1993; Gould et al., 2016; Ghadimi & Wang, 2018; Ji et al., 2020; Hong et al., 2020). In practice, successful applications of BLO to ML have been witnessed in meta-learning (Rajeswaran et al., 2019), data poisoning attack design (Huang et al., 2020), and reinforcement learning (Chen et al., 2019). However, as will be evident later, existing BLO approaches cannot be directly applied to adversarial defense due to the presence of a constrained nonconvex lower-level problem (for attack generation).
To the best of our knowledge, our work makes a rigorous connection between adversarial defense and BLO for the first time. 3 A BI-LEVEL OPTIMIZATION VIEW ON FAST-AT. Preliminaries on FAST-AT. FAST-AT is designed to solve the adversarial training problem (Madry et al., 2018) given below: $\min_{\theta} \ \mathbb{E}_{(x,y) \in D} \big[ \max_{\delta \in C} \ \ell_{\mathrm{tr}}(\theta, x + \delta, y) \big]$, (1) where $\theta \in \mathbb{R}^n$ denotes the model parameters, $D$ is the training set consisting of labeled data pairs with feature $x$ and label $y$, $\delta \in \mathbb{R}^d$ represents adversarial perturbations subject to the perturbation constraint $C$, e.g., $C = \{\delta \mid \|\delta\|_\infty \le \epsilon, \ x + \delta \in [0, 1]\}$ for the $\epsilon$-toleration $\ell_\infty$-norm constrained attack (with inputs normalized to [0, 1]); $(x + \delta)$ is then called an adversarial example, and $\ell_{\mathrm{tr}}(\cdot)$ represents a training loss. The standard solver for problem (1) is known as AT (Madry et al., 2018). However, it has to call an iterative optimization method (e.g., a K-step PGD attack) to solve the inner maximization problem of (1); as a result, AT is computationally intensive. To improve its scalability, FAST-AT, which takes only a single-step PGD attack for the inner maximization, was proposed and successfully implemented in (Wong et al., 2020). The algorithm backbone of FAST-AT is summarized below. | The paper studies adversarial training as a bi-level optimization problem. The authors show that Fast Adversarial Training can be viewed as a bi-level optimization problem where the lower-level problem is a linearization of the loss in the direction of the sign of the gradient. Motivated by this observation, the authors propose Fast Bi-level Adversarial Training, where the lower-level optimization linearizes the loss in the direction of the gradient (as opposed to the sign of the gradient). The authors then show analytically how to obtain the gradient of the upper-level problem under the assumption that the loss Hessian is equal to zero.
This gradient is then used in an iterative manner similar to standard adversarial training. The proposed method is evaluated on CIFAR10 and ImageNet against multiple baselines including Fast-AT, Fast-AT-GA, and PGD-2. The empirical results show that, compared to the baselines and within the same order of computational cost, the proposed method enjoys improved stability and mitigates the catastrophic overfitting present in other baselines. | SP:ec087b0c8ba704914274f5d88e3229c113bd59b2 |
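For contrast with the single-step recipe, the K-step PGD attack that standard AT uses for the inner maximization of Eq. (1) can be sketched as follows. As before, this is an illustrative NumPy toy: the quadratic loss and its analytic gradient are assumptions standing in for a real model and backward pass.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, alpha, steps):
    """K-step PGD solver for the inner maximization of Eq. (1):
    repeatedly take a signed-gradient ascent step of size alpha and
    project back onto the l_inf ball of radius eps intersected with the
    valid box [0, 1]. Running `steps` such iterations per training batch
    is what makes standard AT roughly `steps` times costlier than
    single-step methods."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad_fn(x + delta))  # ascent step
        delta = np.clip(delta, -eps, eps)                    # l_inf projection
        delta = np.clip(x + delta, 0.0, 1.0) - x             # keep x + delta valid
    return delta

# Toy quadratic loss l(z) = 0.5 * ||z - t||^2 (input gradient z - t);
# ascent pushes x + delta away from t, into a corner of the eps-ball.
t = np.array([1.0, 0.0])
x = np.array([0.5, 0.5])
delta = pgd_attack(x, lambda z: z - t, eps=8 / 255, alpha=2 / 255, steps=10)
# delta saturates at the ball boundary: (-8/255, +8/255)
```

The loop body is exactly the Fast-AT update; the only difference is that it is executed K times per batch, which is the computational bottleneck the ‘fast’ methods remove.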
Revisiting and Advancing Fast Adversarial Training Through the lens of Bi-Level Optimization | 1 INTRODUCTION. Given that machine learning (ML) models can be easily fooled by tiny adversarial perturbations (also known as adversarial attacks) on the input (Goodfellow et al., 2014; Carlini & Wagner, 2017; Papernot et al., 2016), learning robust deep neural networks (DNNs) is now a major research focus. Nearly all existing effective defense mechanisms (Madry et al., 2018; Zhang et al., 2019b; Shafahi et al., 2019; Wong et al., 2020; Zhang et al., 2019a; Athalye et al., 2018a) are built on the adversarial training (AT) recipe, first developed in (Szegedy et al., 2014) and later formalized in (Madry et al., 2018) using min-max optimization. In contrast to standard model training via empirical risk minimization, AT (Madry et al., 2018) calls for min-max optimization: a minimizer (i.e., the defender) seeks to update model parameters against a maximizer (i.e., the attacker) that aims to worsen the training loss by perturbing each training example. AT-type defenses have been widely adopted in various application domains, including image classification (Goodfellow et al., 2014; Madry et al., 2018; Kurakin et al., 2017), object detection (Zhang & Wang, 2019), natural language processing (Miyato et al., 2016; Zhu et al., 2019), and healthcare (Finlayson et al., 2019; Mahmood et al., 2019). Despite their effectiveness, their min-max optimization nature makes them difficult to scale, because multiple maximization steps (required by an iterative attack generator) are needed at every model training step. The resulting prohibitive computation cost prevents AT from being a feasible solution for enhancing adversarial robustness when computing resources are limited. For example, Xie et al. (2019) used 128 GPUs to make AT practical on ImageNet.
Thereby, how to speed up AT without losing accuracy and robustness is now a grand challenge for adversarial defense. Very recently, some works have attempted to develop computationally efficient alternatives to AT, which we call ‘fast’ versions of AT (Shafahi et al., 2019; Zhang et al., 2019a; Wong et al., 2020; Andriushchenko & Flammarion, 2020). To the best of our knowledge, FAST-AT (Wong et al., 2020) and FAST-AT with gradient alignment (GA) regularization, termed FAST-AT-GA (Andriushchenko & Flammarion, 2020), are the two state-of-the-art (SOTA) ‘fast’ versions of AT, since they achieve a significant reduction in computation complexity while preserving accuracy and robustness to some extent. Specifically, FAST-AT (Wong et al., 2020) replaces the iterative attack generator used in AT with a heuristic single-step attack generation method, so its computation cost is merely comparable to that of standard model training. However, FAST-AT suffers from two main issues: (i) lack of stability, i.e., large variance in performance (Li et al., 2020), and (ii) catastrophic overfitting of robustness, i.e., a large drop in robustness when training with strong adversaries (Andriushchenko & Flammarion, 2020). To alleviate these problems, Andriushchenko & Flammarion (2020) proposed FAST-AT-GA, which penalizes FAST-AT with an explicit robust regularization given by GA. However, we will show that FAST-AT-GA encounters a new problem (iii): it hampers standard accuracy, yielding a poor accuracy-robustness tradeoff at a large attack budget (ε = 16/255), i.e., the improvement in robust accuracy (RA) comes at the cost of a sharp drop in standard accuracy (SA). Given limitations (i)-(iii), we ask: How can one design a theoretically grounded ‘fast’ version of AT with improved stability, mitigated catastrophic overfitting, and an enhanced accuracy-robustness tradeoff?
To address the above question, in this paper we revisit and advance AT through the lens of bi-level optimization (BLO) (Dempe, 2002), casting attack generation as a constrained lower-level optimization problem and defense as an upper-level optimization problem. To the best of our knowledge, this is the first work to make a solid connection between adversarial defense and BLO. Technically, we show that FAST-AT can be interpreted as BLO with a linearized lower-level problem. Delving into the linearization of BLO, we propose a novel, theoretically grounded ‘fast’ AT framework, fast bi-level AT (FAST-BAT). Practically, Table 1 highlights some of the achieved improvements over FAST-AT and FAST-AT-GA: when a stronger train-time attack is adopted (i.e., ε = 16/255 vs. 8/255), FAST-AT suffers a large degradation of robust accuracy (RA) and standard accuracy (SA), together with higher variance than the proposed FAST-BAT. Although FAST-AT-GA outperforms FAST-AT, it still incurs a significant SA loss (over 21%) at ε = 16/255. By contrast, FAST-BAT yields a more graceful SA-RA tradeoff: a 9% improvement in SA without loss of RA. Different from FAST-AT-GA, FAST-BAT achieves the above improvements in stability, RA, and SA without resorting to any extra robust regularization, and thus takes less computation cost. Contributions. We summarize our contributions below. (1) We propose a new formulation of adversarially robust training through the lens of BLO, yielding a novel and theoretically grounded interpretation of FAST-AT. (2) We propose a new, systematic, and effective fast BLO-oriented AT framework, termed FAST-BAT, with rigorously established theory and algorithms. (3) We conduct extensive experiments on FAST-BAT, showing its improved stability, mitigated catastrophic overfitting, and enhanced accuracy-robustness tradeoff; see illustrations in Table 1. 2 RELATED WORK. Adversarial attack.
Adversarial attacks are techniques for generating malicious perturbations that are imperceptible to humans but can mislead machine learning (ML) models (Goodfellow et al., 2014; Carlini & Wagner, 2017; Croce & Hein, 2020; Xu et al., 2019; Athalye et al., 2018b). A popular threat model for the adversary is the ℓ_p-norm ball constrained attack (p ∈ {0, 1, 2, ∞}), which is also the focus of this paper. Adversarial attacks have become a major approach to evaluating the robustness of deep neural networks (DNNs) and thus help build safe artificial intelligence in many high-stakes applications such as autonomous driving (Deng et al., 2020; Kumar et al., 2020), surveillance (Thys et al., 2019; Xu et al., 2020), and healthcare (Finlayson et al., 2019). Adversarial defense and robust training at scale. Our work falls into the category of robust training, which is mostly built upon min-max optimization. For example, Madry et al. (2018) established the AT framework for the first time, which has been recognized as one of the most powerful defenses (Athalye et al., 2018a). Extending AT, TRADES (Zhang et al., 2019b) sought the optimal balance between robustness and generalization. Further, AT-type defenses have been generalized to semi-/self-supervised settings (Carmon et al., 2019; Chen et al., 2020) and integrated with certified defense techniques such as randomized smoothing (Salman et al., 2019). Despite the effectiveness of AT and its variants, they incur high computation costs. How to speed up AT without losing performance remains an open question. Some recent works have imposed algorithmic simplifications on AT, leading to fast but approximate AT algorithms, such as 'free' AT (Shafahi et al., 2019), you only propagate once (YOPO) (Zhang et al., 2019a), FAST-AT (Wong et al.
, 2020), and FAST-AT regularized by gradient alignment (termed FAST-AT-GA) (Andriushchenko & Flammarion, 2020). In particular, FAST-AT and FAST-AT-GA are the baselines most relevant to ours, since they were designed with the least computational complexity. However, their defense performance is far from satisfactory. For example, FAST-AT has poor training stability (Li et al., 2020) and suffers catastrophic overfitting when facing strong attacks (Andriushchenko & Flammarion, 2020). In contrast to FAST-AT, FAST-AT-GA yields improved robustness but a poor accuracy-robustness tradeoff (e.g., Table 1). In this paper, we aim to advance the algorithmic foundation of 'fast robust training' through the lens of BLO (bi-level optimization). We will show that the proposed FAST-BAT leads to stable robust learning without catastrophic overfitting and with a graceful tradeoff between accuracy and robustness. Bi-level optimization (BLO). BLO is a unified hierarchical learning framework in which the objective and variables of an upper-level problem depend on the minimizer of certain lower-level problems. The BLO problem in its most generic form is a class of very challenging problems, so the design of algorithms and theory for BLO has focused on special cases (Vicente et al., 1994; White & Anandalingam, 1993; Gould et al., 2016; Ghadimi & Wang, 2018; Ji et al., 2020; Hong et al., 2020). In practice, successful applications of BLO to ML include meta-learning (Rajeswaran et al., 2019), data poisoning attack design (Huang et al., 2020), and reinforcement learning (Chen et al., 2019). However, as will become evident later, existing BLO approaches cannot be directly applied to adversarial defense due to the presence of the constrained nonconvex lower-level problem (for attack generation).
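As a toy illustration of the hierarchical structure just described (this is not FAST-BAT, only a generic BLO instance under illustrative assumptions), consider hyperparameter selection cast as BLO: the lower level solves ridge regression in closed form for a fixed regularization weight, and the upper level searches over that weight to minimize validation loss.

```python
import numpy as np

# Toy bi-level optimization sketch (illustrative only, not the paper's method):
# lower level : w*(lam) = argmin_w ||X_tr w - y_tr||^2 + lam ||w||^2  (closed form)
# upper level : min_lam ||X_val w*(lam) - y_val||^2                   (grid search)

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)
X_tr, X_val = rng.normal(size=(40, d)), rng.normal(size=(30, d))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=40)
y_val = X_val @ w_true + 0.5 * rng.normal(size=30)

def lower_level(lam):
    # closed-form ridge solution: (X^T X + lam I)^{-1} X^T y
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def upper_level(lam):
    w = lower_level(lam)                      # upper objective depends on the
    return np.mean((X_val @ w - y_val) ** 2)  # lower-level minimizer w*(lam)

lams = np.logspace(-3, 2, 20)
best_lam = min(lams, key=upper_level)
```

Here the lower level has a closed-form solution; the difficulty the paper points at is precisely that the attack-generation lower level is constrained and nonconvex, so no such closed form exists.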
To the best of our knowledge, our work makes a rigorous connection between adversarial defense and BLO for the first time.

3 A BI-LEVEL OPTIMIZATION VIEW ON FAST-AT

Preliminaries on FAST-AT. FAST-AT is designed for solving the adversarial training problem (Madry et al., 2018) given below:

$\min_{\theta}\ \mathbb{E}_{(x,y)\in\mathcal{D}}\Big[\max_{\delta\in\mathcal{C}}\ \ell_{\mathrm{tr}}(\theta, x+\delta, y)\Big]$    (1)

where θ ∈ R^n denotes the model parameters, D is the training set consisting of labeled data pairs with feature x and label y, δ ∈ R^d represents an adversarial perturbation subject to the perturbation constraint C, e.g., C = {δ | ‖δ‖_∞ ≤ ε, x + δ ∈ [0, 1]^d} for an ε-tolerant ℓ∞-norm constrained attack (with inputs normalized to [0, 1]); (x + δ) is then called an adversarial example, and ℓ_tr(·) is a training loss. The standard solver for problem (1) is AT (Madry et al., 2018). However, it must call an iterative optimization method (e.g., a K-step PGD attack) to solve the inner maximization of (1); as a result, AT is computationally intensive. To improve scalability, FAST-AT, which takes only a single-step PGD attack for the inner maximization, was proposed and successfully implemented in (Wong et al., 2020). The algorithm backbone of FAST-AT is summarized below.
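To make the contrast concrete, the sketch below (an illustrative NumPy toy with a hand-derived gradient, not the paper's or Wong et al.'s implementation) compares the K-step sign-gradient PGD inner maximizer used by standard AT with a single full-size step in the spirit of FAST-AT, on a logistic loss under an ℓ∞ constraint of radius ε.

```python
import numpy as np

# Inner maximization of eq. (1) on a toy logistic loss
# l(delta) = log(1 + exp(-y * w @ (x + delta))), with C = {|delta|_inf <= eps}.

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
y = 1.0
eps = 0.1

def loss(delta):
    return np.log1p(np.exp(-y * w @ (x + delta)))

def grad(delta):
    s = -y * w @ (x + delta)
    return -y * w * (1.0 / (1.0 + np.exp(-s)))  # analytic d loss / d delta

def pgd_attack(steps, alpha):
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad(delta))  # sign-gradient ascent
        delta = np.clip(delta, -eps, eps)             # project onto C
    return delta

one_step = pgd_attack(steps=1, alpha=eps)    # FAST-AT-style single step
k_step = pgd_attack(steps=10, alpha=eps / 4) # AT-style iterative attack
```

On this loss, which is monotone in the perturbation direction, the single step already reaches a corner of C; the gap between the two attackers only appears for the nonconvex losses of real networks.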
The weighted mean trick – optimization strategies for robustness

1 INTRODUCTION

The most common objective in machine learning is to minimize the average loss. However, doing so implies that all samples are equal. The goal of this work is to convince the reader that, from the perspective of the model, not all samples are equal, and that optimizing a weighted mean is a more flexible approach. For instance, at the end of training, a few samples are left uncaptured by the model and yield large loss values. One might choose to put more weight on those samples to help the model learn (individual fairness), or consider them noise and assign them less weight (robustness). The decision is also reflected in the loss variance; it decreases when hard samples are weighted more and increases otherwise. Before introducing the weighted mean more formally, we first justify theoretically the impact of variance penalization. Using the empirical Bernstein bound (Maurer & Pontil, 2009) for i.i.d. loss values Z with bounded variance, we have with probability 1 − δ:

$\mathbb{E}[Z] - \frac{1}{n}\sum_{i=1}^{n} Z_i \le C_1\sqrt{\frac{2\,V_n[Z]\ln(2/\delta)}{n}} + C_2\,\frac{7\ln(2/\delta)}{3(n-1)}$    (1)

where C_1, C_2 are problem-dependent constants. This inequality reveals two things. First, with high probability the empirical mean is close to the theoretical value. This bound, along with similar PAC-Bayes bounds (Seldin et al., 2012; Tolstikhin & Seldin, 2013), justifies the practical success of empirical risk minimization (ERM), whose objective is to minimize the mean loss value (Namkoong & Duchi, 2017). Secondly, the difference between the two is bounded in terms of the empirical variance. This second implication has led to a large and growing body of studies investigating variance penalization (Maurer & Pontil, 2009; Namkoong & Duchi, 2017; Duchi & Namkoong, 2019; Staib et al., 2019; Lam, 2019; Heinze-Deml & Meinshausen, 2021; Hu et al., 2018).
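The right-hand side of the bound in (1) is directly computable from a sample. The snippet below is a hedged sketch: the constants C_1, C_2 are problem-dependent and are set to 1 here purely for illustration, and the losses are synthetic bounded draws.

```python
import numpy as np

# Empirical Bernstein bound of eq. (1): with probability >= 1 - delta,
# E[Z] - mean(Z_i) <= C1 * sqrt(2 * Var_n[Z] * ln(2/delta) / n)
#                     + C2 * 7 * ln(2/delta) / (3 * (n - 1)).
# C1 = C2 = 1 below is an illustrative choice, not the paper's constants.

rng = np.random.default_rng(1)
Z = rng.uniform(0.0, 1.0, size=500)   # i.i.d. bounded loss values
n, delta = len(Z), 0.05

var_term = np.sqrt(2 * Z.var() * np.log(2 / delta) / n)
range_term = 7 * np.log(2 / delta) / (3 * (n - 1))
bound = var_term + range_term   # high-probability gap, true vs. empirical mean
```

The variance term dominates at moderate n, which is exactly why shrinking V_n[Z] (variance penalization) tightens the generalization gap.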
Moreover, by penalizing the variance, the intrinsic bias-variance tradeoff of ERM can be controlled. Two studies influential for this paper are those of Duchi & Namkoong (2019) and Li et al. (2021). Duchi & Namkoong (2019) proposed taking the expectation with respect to a different distribution, which allows penalizing the variance while preserving the convexity of the objective. Similarly, Li et al. (2021) investigated optimizing a tilted empirical risk, which is equivalent to penalizing all the higher-order moments simultaneously. In summary, previous methods penalize either a single moment or all the higher-order moments at once, but are not flexible enough to penalize any desired combination of the higher-order moments. Inspired by the above works, we propose to optimize a weighted mean and prove that, for certain weights, this is equivalent to penalizing any higher-order moments of the loss distribution, such as variance, skewness, and kurtosis. Our approach generalizes those of Duchi & Namkoong (2019) and Li et al. (2021) while simplifying the optimization procedure and enabling separate penalization of the higher-order moments. In particular, we construct weights w such that optimizing the weighted mean of the loss ℓ is equivalent to applying a variance penalization:

$\mathbb{E}[w\ell] = \mathbb{E}[\ell] + \lambda\,\mathbb{V}[\ell]$    (2)

or a penalization of other central moments (e.g., skewness, kurtosis). The construction of these weights is treated by Theorems 1 and 3 in Section 3 and requires minimal computational resources. Importantly, penalizing the variance directly preserves convexity only for values of λ within a very narrow range (Maurer & Pontil, 2009), whereas the weighted mean method yields a convex objective for any positive value of λ.
In detail, our contributions are as follows:

(C1) We build upon the work of Duchi & Namkoong (2019) and prove that optimizing a weighted mean is equivalent to reducing or amplifying both the variance (Theorem 1) and higher-order moments (Theorem 3).
(C2) We derive the limits of the penalization interval that preserves convexity (Lemma 2) and also show how to penalize with values outside this interval while still maintaining convexity (Lemma 4).
(C3) We connect variance and higher-order moment penalization via the weighted mean to the robustification of loss functions (Lemma 5 and Lemma 6).
(C4) We develop a convex version of the variance-penalized cross-entropy loss, which provides higher accuracy in high-noise scenarios with class-dependent noise.
(C5) We show experimentally that negative variance penalization improves model accuracy when training with noisy labels.

The implications of the weighted mean are much broader than what we investigate in this work. We limit the scope of this paper to classification with deep neural networks trained on noisy labels, but the mathematical framework also covers control of the bias-variance tradeoff and is applicable to regression problems. In the sequel, we introduce the notation in Section 2; in Section 3 we present the moment penalization strategy and the weighted mean trick, a computational technique that makes moment penalization practical; and in Section 4 we illustrate the application of moment penalization using the weighted mean formulation to optimize for robustness when training with noisy labels.

2 NOTATION

In subsequent sections we use the following notation. A training data set is defined as D = {(x_i, y_i)}_{i=1}^n, where x_i ∈ X are the features and y_i ∈ Y = {1, ..., k} are the class labels.
A classifier f(x; θ) is a mapping f : X × Θ → V from the feature space to the probability simplex V, parameterized by θ. A loss function ℓ : V × Y → [0, ∞) gives a penalty ℓ(v, y) when the model predicts the value v and the label y is observed. A weight function w : V × Y → R assigns a weight to each sample. As we are interested in the model output value v that minimizes the penalty, we focus on loss functions ℓ(v, y) that are convex in v and seek to preserve this convexity when optimizing the weighted mean. In ERM, we are interested in finding the model parameters θ that minimize the empirical risk E_D[ℓ(f(x_i; θ), y_i)]; here we consider its weighted form E_D[w(ℓ(f(x_i; θ), y_i)) ℓ(f(x_i; θ), y_i)]. To simplify the notation, we drop the dataset D from the expectation subscript and the arguments of the loss and weight functions, i.e., E[ℓ] ⇔ E_D[ℓ(f(x_i; θ), y_i)] and E[wℓ] ⇔ E_D[w(ℓ(f(x_i; θ), y_i)) ℓ(f(x_i; θ), y_i)]. Similarly, the notation for the minimum is simplified as min ℓ ⇔ min_{(x_i, y_i) ∈ D} ℓ(f(x_i; θ), y_i).

Figure 1: Variance penalization for classification problems. Contour lines of the weight distribution are shown for positive λ on the left and for negative λ in the center plot. The right plot shows the use of variance penalization for outlier suppression or amplification (robustification versus generalization).

3 PENALIZING MOMENTS

In ERM, the impact of a sample on the model parameters is determined by the balance between its loss value and the batch average. This is especially visible in the later stages of training, when the majority of samples have small loss values except for a few that are left uncaptured by the model.
We may then treat these as unlearned samples and amplify their impact, or as outliers and suppress their impact on the model. The weighted mean trick consists of assigning each sample a weight based on its loss value, to either amplify or suppress its impact on training. This is similar to dropout (Hinton et al., 2012), but for losses; however, the weighted mean trick is a deterministic process, and the weights are not restricted to 0 or 1 but can take any non-negative value. In what follows, we apply the weighted mean trick to extend the ERM framework to multi-objective optimization of the mean and higher-order moments of the loss function. First, we show that for suitably chosen weights, optimizing the weighted mean is equivalent to simultaneously optimizing the mean and variance.

Theorem 1 (Variance Expansion). Let ℓ be a loss function with finite E[ℓ] and finite V[ℓ], and let w(v, y) = 1 + λ(ℓ(v, y) − E[ℓ(v, y)]). Then:

$\mathbb{E}[w\ell] = \mathbb{E}[\ell] + \lambda\,\mathbb{V}[\ell]$    (3)

Proof. Replacing w with its definition and using the linearity of expectation along with Proposition 7, we get:

$\mathbb{E}[w\ell] = \mathbb{E}[\ell] + \lambda\,\mathbb{E}[(\ell - \mathbb{E}[\ell])\,\ell] = \mathbb{E}[\ell] + \lambda\,\mathbb{E}[(\ell - \mathbb{E}[\ell])^2] = \mathbb{E}[\ell] + \lambda\,\mathbb{V}[\ell]$

Thus, switching from the mean to the weighted mean allows us to control the bias-variance tradeoff through λ and improve the distributional robustness of the model (Maurer & Pontil, 2009). However, the range of λ values that preserves the convexity of the objective depends on the average and the minimum penalty returned by the loss function ℓ, as shown by the next lemma.

Lemma 2. With w as in Theorem 1, the variance expansion of a loss function ℓ(v, y) convex in v yields a new objective that is also convex in v if λ ∈ [0, λ_max], where λ_max = 1/(E[ℓ] − min ℓ). The proof is provided in Appendix A.
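Theorem 1 and Lemma 2 are easy to check numerically. The sketch below is an illustrative NumPy check (synthetic exponential draws stand in for real per-sample losses, and sample means replace expectations): it builds w = 1 + λ(ℓ − E[ℓ]), verifies E[wℓ] = E[ℓ] + λV[ℓ], and evaluates the Lemma 2 threshold λ_max.

```python
import numpy as np

# Weighted mean trick of Theorem 1, with sample expectations.
rng = np.random.default_rng(2)
losses = rng.exponential(scale=1.0, size=1000)   # toy per-sample loss values
lam = 0.3

weights = 1.0 + lam * (losses - losses.mean())   # w = 1 + lambda*(l - E[l])
weighted_mean = np.mean(weights * losses)        # E[w l]
target = losses.mean() + lam * losses.var()      # E[l] + lambda*V[l]

# Lemma 2: direct variance penalization stays convex only for
# lambda in [0, lambda_max]; note lambda_max shifts as E[l] does.
lam_max = 1.0 / (losses.mean() - losses.min())
```

Note that `losses.var()` uses the population convention (ddof=0), which is what makes the identity exact for sample expectations.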
This lemma shows precisely why directly penalizing the variance does not preserve convexity except when λ lies in a narrow interval. Moreover, when the variance is penalized directly as part of a numerical optimization objective, the upper limit of this interval, λ_max, is not constant and changes with each iteration of the optimization algorithm. Note that the further the minimum value min ℓ is from the average loss value E[ℓ], the narrower the interval. Conversely, when E[ℓ] = min ℓ, it follows that ℓ is constant and V[ℓ] = 0, so the objective is convex for any λ > 0. Using the weighted mean trick, however, the objective remains convex for any positive λ, irrespective of whether V[ℓ] = 0. Figure 1 shows the weight distribution for different values of λ. When λ is positive (left plot), samples closer to the decision boundary (which also means larger loss values) receive more weight. When λ is negative (center plot), samples with larger loss values receive less weight. Of note, for λ < 0 the objective is not convex; however, as we will show later, it still converges to an optimal solution. The right plot shows a binary classification problem where each class consists of two clusters, squares and circles. The parameter λ controls the placement of the decision boundary with respect to these two clusters. Positive λ values place more weight on the cluster of squares, which is closer to the decision boundary, and as a result the boundary is horizontally aligned. Negative λ values place more weight on samples farther from the boundary, here the cluster of circles, which aligns the decision boundary with the diagonally oriented cluster of circles. Variance is not the only central moment that can be penalized using the weighted mean; in fact, any combination of the moments can be penalized. The next result generalizes Theorem 1. Theorem 3 (Moments Expansion).
Let ℓ be a loss function with finite first m central moments, and define $w(v,y) = \sum_{i=1}^{m} \lambda_i\,(\ell(v,y) - \mathbb{E}[\ell(v,y)])^{i-1}$. Then:

$\mathbb{E}[w\ell] = \lambda_1\,\mathbb{E}[\ell] + \sum_{i=2}^{m} \tilde{\lambda}_i\,\mathbb{E}[(\ell - \mathbb{E}[\ell])^i]$    (4)

where $\tilde{\lambda}_i = \lambda_i + \lambda_{i+1}\mathbb{E}[\ell]$ for i < m and $\tilde{\lambda}_m = \lambda_m$. The proof is provided in Appendix A. Notice that penalizing moments higher than the second incurs an additional penalization of the previous moment; for example, penalizing skewness with λ3 incurs a variance penalization of λ3 E[ℓ]. Theorem 3 also has an algebraic interpretation: the formula for the weights is nothing more than a polynomial in ℓ(v, y) translated by E[ℓ]. When penalizing the variance, λ2 controls the slope of a linear equation, and when penalizing the skewness, λ3 controls the curvature of a quadratic equation. Moreover, the penalization factors λ_i also determine the placement of the roots of the polynomial and the convexity of the weighted mean objective.

Lemma 4 (Convexity of Moments Expansion). Let ℓ(v, y) be a loss function convex in v, let p : R → [0, ∞) be a non-negative, differentiable, convex function, and let M ≥ 0. Then the weighted objective w(v, y)ℓ(v, y) with w(v, y) = p(ℓ(v, y) − M) is convex in v if p is non-decreasing. The proof is provided in Appendix A. This result lists two requirements on the polynomial function p for the weighted mean objective to remain convex. First, the function must be non-negative, so negative weights must be clipped to 0. Secondly, where the function takes positive values, p must be non-decreasing and convex. Of note, this result is similar in scope to Lemma 2 but does not generalize it, since the weights here are non-negative. Figure 2 shows several example polynomials. Variance penalization implies that p is an affine function, and thus convex; however, for λ1 > 0 the polynomial is non-decreasing (left plot), while for λ1 < 0 it is non-increasing (center plot).
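The identity in Theorem 3, including the shifted factors $\tilde{\lambda}_i$, can be verified numerically. The sketch below (an illustrative check with synthetic Gamma-distributed losses and m = 3, i.e., mean, variance, and skewness terms) computes both sides of (4) on sample expectations.

```python
import numpy as np

# Moments expansion (Theorem 3): w = sum_i lam_i * (l - E[l])^{i-1},
# then E[w l] = lam_1 E[l] + sum_{i>=2} lam_tilde_i E[(l - E[l])^i],
# with lam_tilde_i = lam_i + lam_{i+1} E[l] for i < m and lam_tilde_m = lam_m.

rng = np.random.default_rng(3)
l = rng.gamma(shape=2.0, scale=0.5, size=2000)   # toy per-sample losses
lam = [1.0, 0.2, 0.05]                           # lam_1, lam_2, lam_3 (m = 3)
mu = l.mean()
c = l - mu                                       # centered losses

w = sum(lam[i] * c ** i for i in range(3))       # weights: powers 0, 1, 2
lhs = np.mean(w * l)                             # E[w l]

lam_t2 = lam[1] + lam[2] * mu                    # lam_tilde_2
lam_t3 = lam[2]                                  # lam_tilde_3 (= lam_m)
rhs = lam[0] * mu + lam_t2 * np.mean(c ** 2) + lam_t3 * np.mean(c ** 3)
```

Both sides agree to machine precision, since sample-centering makes E[c] vanish exactly.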
Thus, only positive variance penalization results in a convex objective. Of note, Lemma 2 upper-bounds λ1 if the weights are not clipped to 0; if clipping is used, λ1 is not upper-bounded. When penalizing skewness, p is a quadratic function (right plot). In this case, if λ2 > 0 then p is convex and non-decreasing only on a restricted interval rather than on the entire real line; convexity can thus be preserved by appropriately adjusting the roots of the polynomial so that p is convex on [min ℓ − E[ℓ], +∞). Clipping negative weights to 0 prevents the optimization objective from switching from minimization to maximization, usually an undesirable behavior. Consequently, samples with a zero weight have reached their maximum contribution to the moment penalization: further increasing the factors λ_i has no effect on those samples, and only samples with non-zero weights participate in training. As a result, the efficiency of the moment penalization falls slightly. Moreover, samples with zero weight are excluded from training, a technique used in the past by multiple studies; Rockafellar & Uryasev (2000), for example, used a similar clipping technique to optimize only over samples in the tail of the distribution. The Moments Expansion Theorem extends the variance expansion proposed by Duchi & Namkoong (2019) to include higher-order moments. However, even when only the variance is penalized, the two methods still differ slightly: the Moments Expansion Theorem computes the weights directly from the loss values, whereas the variance expansion of Duchi & Namkoong (2019) solves a secondary optimization problem to find the weights. The secondary optimization problem has the advantage of penalizing the variance more consistently, at the cost of being more computationally expensive. Conversely, when using the moments expansion, the penalized variance can be slightly lower, depending on the number of weights that are 0; the advantage is that the direct computation of the weights makes the method easier to include in existing analysis frameworks. A similar method extending the optimization objective with penalization factors for higher-order moments was proposed by Li et al. (2021). Their method replaces the ERM objective with a tilted version calculated as $\frac{1}{t}K(t) = \frac{1}{t}\log \mathbb{E}[e^{t\ell}]$, where K(t) is the cumulant-generating function of the loss ℓ. The penalization of the higher-order moments in tilted ERM can be recovered from the power-series expansion of K(t). The distinction between the two is that the moment penalization introduced in this paper generalizes tilted ERM, as it allows any combination of the higher-order moments to be penalized, whereas tilted ERM uses a single parameter that governs all the penalization factors. In summary, moment penalization implemented with the weighted mean trick is more flexible, but at a cost: there are more parameters to tune when penalizing multiple moments, compared to the tilted ERM of Li et al. (2021).

Figure 2: Polynomial functions for moments penalization. Dotted lines show the complete polynomial, whereas solid lines show the clipped version. The left plot shows two convex, non-decreasing polynomials. The center plot shows a convex but non-increasing polynomial, which therefore yields a non-convex objective. The right plot shows a polynomial that is convex and non-decreasing on the interval [−1, +∞), and thus yields a convex objective when min(ℓ) − E[ℓ] ≥ −1.

Convergence and convergence rates.
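The tilted objective of Li et al. (2021) discussed above can be computed directly, and its cumulant expansion makes the implicit moment penalties visible: for small t, $\frac{1}{t}\log \mathbb{E}[e^{t\ell}] \approx \mathbb{E}[\ell] + \frac{t}{2}\mathbb{V}[\ell] + \dots$. The snippet below is an illustrative check of that expansion on synthetic losses, using a log-sum-exp form for numerical stability.

```python
import numpy as np

# Tilted ERM of Li et al. (2021): (1/t) * log E[exp(t * l)].
# Its cumulant expansion penalizes all higher moments at once:
# (1/t) K(t) = E[l] + (t/2) V[l] + O(t^2).

rng = np.random.default_rng(4)
l = rng.uniform(0.0, 2.0, size=5000)   # toy per-sample losses
t = 0.05                               # small tilt parameter

def tilted_risk(l, t):
    # log-mean-exp computed stably, divided by t
    m = np.max(t * l)
    return (m + np.log(np.mean(np.exp(t * l - m)))) / t

approx = l.mean() + 0.5 * t * l.var()  # first two terms of the expansion
```

A single knob t fixes the relative size of all moment penalties at once, which is precisely the flexibility the weighted mean trick recovers by exposing one λ_i per moment.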
The moments penalization problem, along with the variance expansion of Duchi & Namkoong (2019), falls under the class of distributionally robust stochastic programs (DRSP) (Sun & Xu, 2016), a subclass of ambiguity programs (AP) (Royset & Wets, 2017), whose general objective is:

$\mathrm{AP}:\ \min_{\theta \in \Theta}\ \sup_{P \in D(\theta)}\ \varphi(\theta, P)$    (5)

where Θ is the set of model parameters and D(θ) is the ambiguity set. In DRSP, the bivariate function is φ(θ, P) = E_P[ℓ], where ℓ is the loss function, and the ambiguity set D(θ) is a set of probability distributions. In the moments penalization problem, the ambiguity set D(θ) depends on the model parameters and is a singleton, since the weights uniquely transform the empirical distribution; as a result, the optimal value of the inner maximization becomes sup_{P ∈ D(θ)} φ(θ, P) = E_P[ℓ]. Intuitively, a model will converge if changes in its parameters cause only minor changes in the distribution P, and if with each step the distribution approaches the optimal distribution P*. Formally, quantifying changes in the distribution requires a distance or a metric. Sun & Xu (2016) use the total variation metric and a pseudometric to prove uniform convergence, possibly at an exponential rate, provided P converges to P* under the total variation metric and ℓ is uniformly bounded (see Sun & Xu, 2016, Th. 1 and Prop. 3). Royset & Wets (2017) proposed a hypo-distance metric and proved lop-convergence given that the bivariate function φ(θ, P) satisfies certain assumptions (see Royset & Wets, 2017, Def. 4.1). Duchi & Namkoong (2019) provide guarantees for a number of stochastic risk minimization problems when only the variance is penalized and P lies in a local neighborhood of the empirical distribution defined using the χ²-divergence.
We refer the reader to the works of Sun & Xu (2016) and Royset & Wets (2017) and the references therein for additional guarantees when more information about the problem structure is available or when other metrics are used. For the moments penalization problem, the penalization factors λ_i for i ≥ 2 determine how much the distribution P changes when the loss changes. Small penalization factors keep P in a neighborhood of the empirical distribution, whereas large values make the weights sensitive to changes in the loss values, which can cause stability or convergence issues. The exact values depend on the empirical distribution of the data and the choice of model and loss function. Weighted mean trick in practice. To apply the method in practice, the classical batch training algorithm must be extended with one additional step: the weight calculation. Instead of directly averaging the loss, one calculates the loss value for each sample in the batch and then uses the expression from Theorem 3 to compute the weights and the weighted mean. The moment penalization factors λ_i are hyper-parameters, tuned in ascending order with respect to i. However, penalizing higher-order moments may affect the impact of the lower-order ones, so finding the optimal combination may require a few iterations. An implementation of this algorithm in PyTorch (Paszke et al., 2019) is available on GitHub. Since the gradient of the weighted mean is the weighted gradient of its elements, the weights control the impact of each sample on the model parameters. Of note, the theoretical results hold when switching from the expectation to the sample expectation, $\mathbb{E}_n[\ell] = \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i; \theta), y_i)$.
Algorithm 1: Training with Moments Penalization

Input: f(x; θ) – model to be trained; {x_i, y_i}_{i=1}^n – batch of training data; {λ_j}_{j=1}^m – penalization factors; ℓ(v, y) – loss function

while stopping criterion not reached do
    for i ← 1 to n do
        z_i ← ℓ(f(x_i; θ), y_i)                        /* sample loss */
    w ← [ Σ_{j=1}^m λ_j (z − E_n[z])^{j−1} ]_+         /* clipped weights */
    L_w ← (1/n) Σ_{i=1}^n w_i z_i                      /* weighted mean */
    θ ← θ − γ ∇_θ L_w                                  /* update model parameters */
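Algorithm 1 can be sketched outside of PyTorch as well. The following is a minimal NumPy rendition on a toy least-squares problem (model, data, and learning rate are illustrative assumptions, not the released implementation): per-sample losses, polynomial weights clipped at zero as Lemma 4 requires, and a gradient step on the weighted mean with the weights treated as fixed per batch.

```python
import numpy as np

# Algorithm 1 sketched on linear least squares: per-sample losses z_i,
# polynomial weights clipped at zero, weighted-mean gradient step.
# lam[0] weights the mean, lam[1] the variance term (Theorem 3 with m = 2).

rng = np.random.default_rng(5)
n, d = 200, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -1.0, 2.0]) + 0.1 * rng.normal(size=n)

theta = np.zeros(d)
lam = [1.0, 0.1]     # penalization factors lambda_1, lambda_2
gamma = 0.05         # learning rate (illustrative choice)

def step(theta):
    r = X @ theta - y
    z = r ** 2                                       # sample losses z_i
    w = sum(lam[j] * (z - z.mean()) ** j for j in range(len(lam)))
    w = np.maximum(w, 0.0)                           # clip negative weights
    # gradient of L_w = (1/n) sum_i w_i z_i, with weights held
    # fixed for the step, as in the batch algorithm above
    grad = (2.0 / n) * X.T @ (w * r)
    return theta - gamma * grad, np.mean(w * z)

loss0 = step(theta)[1]
for _ in range(200):
    theta, loss = step(theta)
```

With λ2 > 0, large-loss samples are upweighted early in training; as the residuals shrink, the weights flatten toward λ1 and the update reduces to ordinary ERM.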
The weighted mean trick – optimization strategies for robustness | 1 INTRODUCTION . The most common objective in machine learning is to minimize the average loss . However , by doing this it implies that all samples are equal . The goal of this work is to convince the reader that from the perspective of the model , not all samples are equal and optimizing a weighted mean is a more progressive approach . For instance , at the end of the training , few samples are left uncaptured by the model which yield large loss values . One might choose to put more weight on those samples to help the model learn ( individual fairness ) , or consider them noise and assign them less weight ( robustness ) . The decision will also be reflected by the loss variance ; it will decrease when hard samples are weighted more and increase otherwise . Before introducing more formally the weighted mean , we first justify theoretically the impact of variance penalization . Using the empirical Bernstein bound ( Maurer & Pontil , 2009 ) for i.i.d . loss values Z and bounded variance we have with probability 1− δ : E [ Z ] − 1 n n∑ i=1 Zi ≤ C1 √ 2Vn [ Z ] ln 2/δ n + C2 7 ln 2/δ 3 ( n− 1 ) ( 1 ) where C1 , C2 are problem depended constants . This inequality reveals two things . First , we have with high probability that the empirical mean is close to the theoretical value . This bound along with similar PAC-Bayes bounds ( Seldin et al. , 2012 ; Tolstikhin & Seldin , 2013 ) justifies the practical success of the empirical risk minimization ( ERM ) which has the objective to minimize the mean loss value ( Namkoong & Duchi , 2017 ) . Secondly , the difference between the two is bounded in terms of the empirical variance . This second implication led to a large and growing body of studies that investigate variance penalization ( Maurer & Pontil , 2009 ; Namkoong & Duchi , 2017 ; Duchi & Namkoong , 2019 ; Staib et al. , 2019 ; Lam , 2019 ; Heinze-Deml & Meinshausen , 2021 ; Hu et al. , 2018 ) . 
Moreover , by penalizing the variance , the intrinsic bias-variance tradeoff of the ERM can be controlled . Two studies influential for this paper are that of Duchi & Namkoong ( 2019 ) and Li et al . ( 2021 ) . Duchi & Namkoong ( 2019 ) proposed taking the expectation with respect to a different distribution which allowed penalizing variance while preserving the convexity of the objective . Similarly , Li et al . ( 2021 ) investigated optimizing a tilted empirical risk which is equivalent to penalizing all the higher-order moments simultaneously . In summary , previous methods either penalize only one moment or all the higher-order moments but are not flexible enough to penalize any desired combination of the higher-order moments . Inspired by the above works , we propose to optimize a weighted mean and prove that for certain weights , it is equivalent to optimizing any higher-order moments of the loss distribution such as variance , skewness , and kurtosis . Our approach generalizes that of Duchi & Namkoong ( 2019 ) ; Li et al . ( 2021 ) while simplifying the optimization procedure and enabling separate penalization of the higher-order moments . In particular , we will construct the weights w such that optimizing the weighted mean of the loss ` is equivalent to applying a variance penalization : E [ w ` ] = E [ ` ] + λV [ ` ] ( 2 ) or penalization of other central moments ( e.g. , skewness , kurtosis ) . Construction of these weights are treated by Theorems 1 and 3 from section 3 , which require minimal computational resources . Important note , penalizing directly the variance preserves convexity only for values of λ within a very narrow range ( Maurer & Pontil , 2009 ) . However , the weighted mean method yields a convex objective for any positive values of λ . 
In detail , our contributions are as follows : ( C1 ) We build upon the work of Duchi & Namkoong ( 2019 ) and prove that optimizing a weighted mean is equivalent to reducing or amplifying both variance ( Theorem 1 ) and higher-order moments ( Theorem 3 ) . ( C2 ) We derive the limits of the penalization interval which preserves convexity ( Lemma 2 ) and also show how to penalize with values outside this interval while still maintaining convexity ( Lemma 4 ) . ( C3 ) We connect the variance and higher-order moments penalization using weighted mean to the robustification of loss functions ( Lemma 5 and Lemma 6 ) . ( C4 ) We develop a convex version of the variance penalized cross-entropy loss which provides a higher accuracy in high noise scenarios with class dependent noise . ( C5 ) We show experimentally that a negative variance penalization improves model accuracy when training with noisy labels . The implications of the weighted mean are much broader than what we investigate in this work . We limit the scope of this paper to classification using deep neural networks trained with noisy labels . But the mathematical framework also covers the control of the bias-variance trade-off and can also be applicable to regression problems . In the sequel , we start by introducing the notation in section 2 , then in section 3 we present the moment penalization strategy and the weighted mean trick which is a computational technique to make moment penalization practical . Next , in section 4 , we illustrate the application of moment penalization using the weighted mean formulation to optimize for robustness when training with noisy labels . 2 NOTATIONS . In subsequent sections we will use the following notation . A training data set is defined as D = { ( xi , yi ) } ni=1 where xi ∈ X are the features and yi ∈ Y = { 1 , . . . , k } represent the class labels . 
A classifier f(x; θ) is a mapping f : X × Θ → V from the feature space to the probability simplex V, parameterized by θ. A loss function ℓ : V × Y → [0, ∞) gives a penalty ℓ(v, y) when the model predicts the value v and the label y is observed. A weight function w : V × Y → R assigns a weight to each sample. As we are interested in optimizing the model output value v that minimizes the penalty, we focus our investigation on loss functions ℓ(v, y) that are convex in v and seek to preserve this convexity when optimizing the weighted mean. In ERM, we are interested in finding the model parameters θ that minimize the empirical risk, calculated as E_D[ℓ(f(x_i; θ), y_i)], and its weighted form E_D[w(ℓ(f(x_i; θ), y_i)) ℓ(f(x_i; θ), y_i)]. To simplify the notation, we drop the dataset D from the expectation subscript and the arguments of the loss and weight functions, i.e., E[ℓ] ⇔ E_D[ℓ(f(x_i; θ), y_i)] and E[wℓ] ⇔ E_D[w(ℓ(f(x_i; θ), y_i)) ℓ(f(x_i; θ), y_i)]. Similarly, the notation for the minimum is simplified as min ℓ ⇔ min_{(x_i, y_i)∈D} ℓ(f(x_i; θ), y_i). Figure 1: Variance penalization for classification problems. Contour lines of the weight distribution are shown for positive λ on the left plot and for negative λ on the center plot. The right plot shows the use of variance penalization for outlier suppression or amplification (robustification versus generalization). 3 PENALIZING MOMENTS. In ERM, the impact of a sample on the model parameters is determined by the balance between its loss value and the average over the training batch. This is especially visible in the later stages of training, when the majority of samples have small loss values except for a few that are left uncaptured by the model.
We may then either treat them as unlearned samples and amplify their impact, or treat them as outliers and suppress their impact on the model. The weighted mean trick consists in assigning each sample a weight based on its loss value, to either amplify or suppress its impact on training. This is similar to dropout (Hinton et al., 2012), but for losses. However, the weighted mean trick is a deterministic process, and the weights are not restricted to 0 or 1 but can take any non-negative value. In what follows, we apply the weighted mean trick to extend the ERM framework to a multi-objective optimization of the mean and the higher-order moments of the loss function. First, we show that for certain weights, optimizing the weighted mean is equivalent to a simultaneous optimization of the mean and the variance. Theorem 1 (Variance Expansion). Let ℓ be a loss function with finite E[ℓ] and finite V[ℓ], and let w(v, y) = 1 + λ(ℓ(v, y) − E[ℓ(v, y)]). Then E[wℓ] = E[ℓ] + λV[ℓ] (3). Proof. Replacing w with its definition and using the linearity of the expectation along with Proposition 7, we get: E[wℓ] = E[ℓ] + λE[(ℓ − E[ℓ])ℓ] = E[ℓ] + λE[(ℓ − E[ℓ])²] = E[ℓ] + λV[ℓ]. Thus, switching from the mean to the weighted mean allows us to control the bias-variance tradeoff through λ to improve the distributional robustness of the model (Maurer & Pontil, 2009). However, the range of λ values that preserves the convexity of the objective depends on the average and the minimum penalty returned by the loss function ℓ, as shown by the next lemma. Lemma 2. As introduced in Theorem 1, the variance expansion of a loss function ℓ(v, y) convex in v yields a new objective that is also convex in v if λ ∈ [0, λmax], where λmax = 1/(E[ℓ] − min ℓ). Proof is provided in Appendix A.
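As a quick numerical sanity check (illustrative only, with sample moments standing in for expectations), the identity of Theorem 1 and the endpoint of Lemma 2's interval can be inspected on synthetic loss values; the exponential distribution and λ = 0.3 below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.exponential(size=1000)            # synthetic per-sample loss values ℓ
mu = z.mean()

# Theorem 1: with w = 1 + λ(ℓ − E[ℓ]), the weighted mean equals E[ℓ] + λ·V[ℓ]
lam = 0.3
w = 1.0 + lam * (z - mu)
lhs = (w * z).mean()                      # E[wℓ]
rhs = mu + lam * ((z - mu) ** 2).mean()   # E[ℓ] + λ·V[ℓ] (biased sample variance)

# Side observation on Lemma 2: at λmax = 1/(E[ℓ] − min ℓ), the smallest
# weight reaches exactly zero, i.e. λmax marks the point past which
# some weights would turn negative.
lam_max = 1.0 / (mu - z.min())
w_edge = 1.0 + lam_max * (z - mu)
print(lhs, rhs, w_edge.min())
```

The identity holds exactly for any finite sample, since the proof of Theorem 1 uses only the linearity of the (sample) expectation.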
Lemma 2 shows precisely why directly penalizing the variance does not preserve convexity except when λ takes values in a narrow interval. Moreover, when the variance is penalized directly as part of a numerical optimization objective, the upper limit of the interval, λmax, is not constant and changes with each iteration of the optimization algorithm. Note that the further the minimum value min ℓ is from the average loss value E[ℓ], the narrower the interval. Conversely, when E[ℓ] = min ℓ, the loss ℓ is constant and V[ℓ] = 0, so the objective is convex for any λ > 0. With the weighted mean trick, however, the objective remains convex for any positive λ, regardless of whether V[ℓ] = 0. Figure 1 shows the weight distribution for different values of λ. When λ is positive (left plot), samples closer to the decision boundary (which also means samples with larger loss values) receive more weight. On the other hand, when λ is negative (center plot), samples with larger loss values receive less weight. Of note, for λ < 0 the objective is not convex; however, as we will show later, it still converges to an optimal solution. The right plot shows a binary classification problem where each class consists of two clusters, squares and circles. The parameter λ controls the placement of the decision boundary with respect to these two clusters. Positive λ values place more weight on the cluster of squares, which is closer to the decision boundary, and as a result the boundary is aligned horizontally. Negative λ values instead place more weight on samples farther from the boundary, in this case the cluster of circles, and therefore align the decision boundary with the diagonally oriented cluster of circles. Variance is not the only central moment that can be penalized using the weighted mean; in fact, any combination of moments can be penalized. The next result generalizes Theorem 1. Theorem 3 (Moments Expansion).
Let ℓ be a loss function with finite first m central moments, and define w(v, y) = Σ_{i=1}^m λ_i (ℓ(v, y) − E[ℓ(v, y)])^{i−1}. Then E[wℓ] = λ_1 E[ℓ] + Σ_{i=2}^m λ̃_i E[(ℓ − E[ℓ])^i] (4), where λ̃_i = λ_i + λ_{i+1} E[ℓ] for i < m and λ̃_m = λ_m. Proof is provided in Appendix A. We notice that penalizing a moment higher than the second incurs an additional penalization of the previous moment; for example, penalizing the skewness by λ_3 incurs a variance penalization of λ_3 E[ℓ]. Theorem 3 also has an algebraic interpretation. Note that the formula for the weights is nothing more than a polynomial in ℓ(v, y), translated by E[ℓ]. When penalizing the variance, λ_2 controls the slope of the linear function, and when penalizing the skewness, λ_3 controls the curvature of the quadratic function. Moreover, the penalization factors λ_i also determine the placement of the roots of the polynomial and the convexity of the weighted mean objective. Lemma 4 (Convexity of Moments Expansion). Let ℓ(v, y) be a loss function convex in v, let p : R → [0, ∞) be a non-negative, differentiable, convex function, and let M ≥ 0. Then the weighted objective w(v, y)ℓ(v, y) with w(v, y) = p(ℓ(v, y) − M) is convex in v if p is non-decreasing. Proof is provided in Appendix A. This result lists two requirements on the polynomial function p for the weighted mean objective to remain convex. First, the function must be non-negative; thus, negative weights must be clipped to 0. Second, where the function takes positive values, p must be non-decreasing and convex. Of note, this result is similar in scope to Lemma 2 but does not generalize it, as the weights here are non-negative. Figure 2 shows several example polynomials. Variance penalization implies that p is an affine function, and thus convex; however, for λ_2 > 0 the polynomial is non-decreasing (left plot), while for λ_2 < 0 it is non-increasing (center plot).
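Returning to Theorem 3 itself, the moments expansion can be checked numerically with sample moments in place of expectations; the gamma-distributed losses and the λ values below are arbitrary illustrative choices for m = 3:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.gamma(2.0, size=2000)                 # synthetic per-sample losses ℓ
lam = [1.0, 0.5, 0.2]                         # λ1, λ2, λ3 (m = 3, arbitrary)

mu = z.mean()
c = z - mu
w = sum(l * c**j for j, l in enumerate(lam))  # w = Σ_i λ_i (ℓ − E[ℓ])^{i−1}

lhs = (w * z).mean()                          # E[wℓ]
# Right-hand side of eq. (4): λ̃2 = λ2 + λ3·E[ℓ] and λ̃3 = λ3
rhs = lam[0] * mu + (lam[1] + lam[2] * mu) * (c**2).mean() + lam[2] * (c**3).mean()

w_clipped = np.maximum(w, 0.0)                # clipping required by Lemma 4
```

Note that the identity holds for the unclipped weights; once negative weights are clipped to 0 for convexity, the weighted objective no longer matches eq. (4) exactly, trading some penalization strength for convexity.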
As these plots illustrate, only a positive variance penalization results in a convex objective. Of note, Lemma 2 upper-bounds λ_2 if the weights are not clipped to 0; if clipping is used, λ_2 is not upper-bounded. When penalizing the skewness, p is a quadratic function (right plot). In this case, if λ_3 > 0 then p is convex but non-decreasing only on a restricted interval rather than on the entire real line. Convexity can therefore be preserved by appropriately adjusting the roots of the polynomial such that p is convex and non-decreasing on [min ℓ − E[ℓ], +∞). Clipping negative weights to 0 prevents the optimization objective from switching from minimization to maximization, usually an undesirable behavior. Consequently, samples with a zero weight have reached their maximum contribution to the moments penalization: further increasing the factors λ_i has no effect on those samples, and only samples with non-zero weights participate in training. As a result, the efficiency of the moments penalization falls slightly. Moreover, the samples with zero weight are excluded from training, a technique used by multiple previous studies; Rockafellar & Uryasev (2000), for instance, used a similar clipping technique to optimize only over the samples in the tail of the distribution. Figure 2: Polynomial functions for moments penalization. Dotted lines show the complete polynomial, whereas solid lines show the clipped version. The left plot shows two convex and non-decreasing polynomials. The center plot shows a convex but non-increasing polynomial, which results in a non-convex objective. The right plot shows a polynomial that is convex and non-decreasing on the interval [−1, +∞), which yields a convex objective when min ℓ − E[ℓ] ≥ −1. The Moments Expansion Theorem extends the variance expansion proposed by Duchi & Namkoong (2019) to include higher-order moments. However, even when only the variance is penalized, the two methods still differ slightly. The Moments Expansion Theorem computes the weights directly from the loss values, whereas the variance expansion of Duchi & Namkoong (2019) solves a secondary optimization problem to find the weights. The use of a secondary optimization problem has the advantage of penalizing the variance more consistently, despite being more computationally expensive. In contrast, when using the moments expansion, the penalized variance can be slightly lower, depending on the number of weights that are 0. The advantage, however, is that the direct computation of the weights makes the method easier to include in existing analysis frameworks. A similar method that extends the optimization objective with penalization factors for higher-order moments was proposed by Li et al. (2021). Their method replaces the ERM objective with a tilted version, calculated as (1/t)K(t) = (1/t) log E[e^{tℓ}], where K(t) is the cumulant-generating function of the loss ℓ. The penalization of the higher-order moments in tilted ERM can be recovered from the power series expansion of K(t). The distinction between the two is that the moments penalization introduced in this paper generalizes tilted ERM: it allows any combination of the higher-order moments to be penalized, whereas tilted ERM uses a single parameter that governs all the penalization factors. In summary, the moments penalization implemented via the weighted mean trick is more flexible, but this comes at a cost: there are more parameters to tune when penalizing multiple moments than in the tilted ERM of Li et al. (2021). Convergence and convergence rates.
The moments penalization problem, along with the variance expansion of Duchi & Namkoong (2019), falls under the class of distributionally robust stochastic programs (DRSP; Sun & Xu, 2016), which is a subclass of ambiguity programs (AP; Royset & Wets, 2017), with the general objective: AP: min_{θ∈Θ} sup_{P∈D(θ)} ϕ(θ, P) (5), where Θ is the set of model parameters and D(θ) is the ambiguity set. In DRSP, the bivariate function is ϕ(θ, P) = E_P[ℓ], where ℓ is the loss function, and the ambiguity set D(θ) is a set of probability distributions. In the case of the moments penalization problem, the ambiguity set D(θ) depends on the model parameters and is a singleton, since the weights uniquely transform the empirical distribution. As a result, the optimal value of the inner maximization problem becomes sup_{P∈D(θ)} ϕ(θ, P) = E_P[ℓ]. Intuitively, the model will converge if changes in its parameters cause only minor changes in the distribution P, and with each step the distribution approaches the optimal distribution P*. Formally, to quantify changes in the distribution, we need a distance or a metric. Sun & Xu (2016) use the total variation metric and a pseudometric to prove uniform convergence, possibly at an exponential rate, provided P converges to P* under the total variation metric and ℓ is uniformly bounded (see Sun & Xu, 2016, Th. 1 and Prop. 3). Royset & Wets (2017) proposed a hypo-distance metric and proved lop-convergence given that the bivariate function ϕ(θ, P) satisfies certain assumptions (see Royset & Wets, 2017, Def. 4.1). Duchi & Namkoong (2019) provide guarantees for a number of stochastic risk minimization problems when only the variance is penalized and P lies in a local neighborhood of the empirical distribution defined using the χ²-divergence.
We refer the reader to the works of Sun & Xu (2016) and Royset & Wets (2017), and the references therein, for additional guarantees when more information about the problem structure is available or when other metrics are used. For the moments penalization problem, the penalization factors λ_i for i ≥ 2 determine how much the distribution P changes when the loss changes. Small values of the penalization factors keep P in the neighborhood of the empirical distribution, whereas large values make the weights sensitive to changes in the loss values, which can cause stability or convergence issues. The exact values depend on the empirical distribution of the data and on the choice of model and loss function. Weighted mean trick in practice. To apply the method in practice, the classical batch training algorithm must be extended with an additional step: the weight calculation. Instead of directly computing the average loss, the user computes the loss value for each sample in the batch and then uses the expression from Theorem 3 to compute the weights and the weighted mean. The moments penalization factors λ_i are hyper-parameters and are tuned in ascending order of i. However, since penalizing higher-order moments may affect the impact of the lower-order ones, a few iterations may be required to find the optimal combination. An implementation of this algorithm in PyTorch (Paszke et al., 2019) is available on GitHub.1 Since the gradient of the weighted mean is the weighted gradient of its elements, the weights control the impact of each sample on the model parameters. Of note, the theoretical results hold when switching from the expectation to the sample expectation, E_n[ℓ] = (1/n) Σ_{i=1}^n ℓ(f(x_i; θ), y_i).
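As a concrete sketch of this extended batch step, the following NumPy snippet applies the Theorem 3 weights, clipped at zero, inside a plain gradient-descent loop; the linear least-squares model, the data, the λ values, and the learning rate are all hypothetical stand-ins (the paper's actual implementation is the PyTorch one referenced above):

```python
import numpy as np

def moment_weights(z, lams):
    # Theorem 3 weights, clipped at zero as Lemma 4 requires:
    # w_i = [ Σ_j lams[j] · (z_i − mean(z))^j ]_+  (lams[0] is λ1, lams[1] is λ2, ...)
    c = z - z.mean()
    w = sum(l * c**j for j, l in enumerate(lams))
    return np.maximum(w, 0.0)

# Hypothetical stand-in problem: linear model, squared loss, manual gradient.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 5))
theta_true = rng.normal(size=5)
y = X @ theta_true + 0.1 * rng.normal(size=256)

theta = np.zeros(5)
lams = [1.0, 0.1]                      # positive λ2: variance penalization
gamma = 0.05                           # learning rate (γ in the batch step)

for _ in range(200):
    r = X @ theta - y
    z = r ** 2                         # per-sample losses z_i
    w = moment_weights(z, lams)        # weights held constant within the step
    grad = (w * 2 * r) @ X / len(z)    # ∇θ of the weighted mean (1/n) Σ w_i z_i
    theta -= gamma * grad

final_loss = ((X @ theta - y) ** 2).mean()
```

In an autodiff framework, holding the weights constant within a step corresponds to detaching them from the computation graph before forming the weighted mean, so gradients flow only through the per-sample losses.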
Algorithm 1: Training with Moments Penalization.
Input: f(x; θ) – model to be trained; {x_i, y_i}_1^n – batch of training data; {λ}_1^m – penalization factors; ℓ(v, y) – loss function.
while stopping criteria not reached do
    for i ← 1 to n do
        z_i ← ℓ(f(x_i; θ), y_i)                 /* sample loss */
    w ← [Σ_{j=1}^m λ_j (z − E_n[z])^{j−1}]_+    /* weights, clipped at 0 */
    L_w ← (1/n) Σ_{i=1}^n w_i z_i               /* weighted mean */
    θ ← θ − γ ∇_θ L_w                           /* update model parameters */ | The paper studies the role and implications of weighted empirical risk minimization, where one can also choose the weight of each sample instead of fixing it to be equal. The paper then shows that specific choices of weights lead to variance penalization and higher-order moments penalization (Theorems 1 and 3). Lemmas 2 and 4 study the regimes for which these chosen weights also preserve the convexity of the original loss functions. This framework is then empirically validated through multiple experiments. | SP:bbbe0606b16633c4fc439f0b24c546705ed41d6b |
The weighted mean trick – optimization strategies for robustness | 1 INTRODUCTION. The most common objective in machine learning is to minimize the average loss. However, doing so implies that all samples are equal. The goal of this work is to convince the reader that, from the perspective of the model, not all samples are equal, and that optimizing a weighted mean is a more flexible approach. For instance, at the end of training, a few samples are left uncaptured by the model and yield large loss values. One might choose to put more weight on those samples to help the model learn (individual fairness), or to consider them noise and assign them less weight (robustness). The decision is also reflected in the loss variance: it decreases when hard samples are weighted more and increases otherwise. Before introducing the weighted mean more formally, we first justify theoretically the impact of variance penalization. Using the empirical Bernstein bound (Maurer & Pontil, 2009) for i.i.d. loss values Z with bounded variance, we have with probability 1 − δ: E[Z] − (1/n) Σ_{i=1}^n Z_i ≤ C1 √(2 V_n[Z] ln(2/δ) / n) + C2 · 7 ln(2/δ) / (3(n − 1)) (1), where C1 and C2 are problem-dependent constants. This inequality reveals two things. First, with high probability the empirical mean is close to its theoretical value. This bound, along with similar PAC-Bayes bounds (Seldin et al., 2012; Tolstikhin & Seldin, 2013), justifies the practical success of empirical risk minimization (ERM), whose objective is to minimize the mean loss value (Namkoong & Duchi, 2017). Second, the difference between the two is bounded in terms of the empirical variance. This second implication has led to a large and growing body of studies investigating variance penalization (Maurer & Pontil, 2009; Namkoong & Duchi, 2017; Duchi & Namkoong, 2019; Staib et al., 2019; Lam, 2019; Heinze-Deml & Meinshausen, 2021; Hu et al., 2018).
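To make the bound concrete, the right-hand side of eq. (1) can be computed directly; the sketch below treats the problem-dependent constants C1 and C2 as parameters (set to 1 here purely for illustration) and shows that the confidence radius shrinks as n grows:

```python
import math

def bernstein_radius(var_n, n, delta, c1=1.0, c2=1.0):
    # Right-hand side of eq. (1):
    #   C1·sqrt(2·Vn[Z]·ln(2/δ)/n) + C2·7·ln(2/δ)/(3(n−1))
    log_term = math.log(2.0 / delta)
    return (c1 * math.sqrt(2.0 * var_n * log_term / n)
            + c2 * 7.0 * log_term / (3.0 * (n - 1)))

r_small_n = bernstein_radius(var_n=1.0, n=100, delta=0.05)
r_large_n = bernstein_radius(var_n=1.0, n=10_000, delta=0.05)
```

The first (variance) term dominates for moderate n, which is why shrinking the empirical variance V_n[Z] tightens the gap between the empirical and the expected loss.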
Moreover , by penalizing the variance , the intrinsic bias-variance tradeoff of the ERM can be controlled . Two studies influential for this paper are that of Duchi & Namkoong ( 2019 ) and Li et al . ( 2021 ) . Duchi & Namkoong ( 2019 ) proposed taking the expectation with respect to a different distribution which allowed penalizing variance while preserving the convexity of the objective . Similarly , Li et al . ( 2021 ) investigated optimizing a tilted empirical risk which is equivalent to penalizing all the higher-order moments simultaneously . In summary , previous methods either penalize only one moment or all the higher-order moments but are not flexible enough to penalize any desired combination of the higher-order moments . Inspired by the above works , we propose to optimize a weighted mean and prove that for certain weights , it is equivalent to optimizing any higher-order moments of the loss distribution such as variance , skewness , and kurtosis . Our approach generalizes that of Duchi & Namkoong ( 2019 ) ; Li et al . ( 2021 ) while simplifying the optimization procedure and enabling separate penalization of the higher-order moments . In particular , we will construct the weights w such that optimizing the weighted mean of the loss ` is equivalent to applying a variance penalization : E [ w ` ] = E [ ` ] + λV [ ` ] ( 2 ) or penalization of other central moments ( e.g. , skewness , kurtosis ) . Construction of these weights are treated by Theorems 1 and 3 from section 3 , which require minimal computational resources . Important note , penalizing directly the variance preserves convexity only for values of λ within a very narrow range ( Maurer & Pontil , 2009 ) . However , the weighted mean method yields a convex objective for any positive values of λ . 
In detail , our contributions are as follows : ( C1 ) We build upon the work of Duchi & Namkoong ( 2019 ) and prove that optimizing a weighted mean is equivalent to reducing or amplifying both variance ( Theorem 1 ) and higher-order moments ( Theorem 3 ) . ( C2 ) We derive the limits of the penalization interval which preserves convexity ( Lemma 2 ) and also show how to penalize with values outside this interval while still maintaining convexity ( Lemma 4 ) . ( C3 ) We connect the variance and higher-order moments penalization using weighted mean to the robustification of loss functions ( Lemma 5 and Lemma 6 ) . ( C4 ) We develop a convex version of the variance penalized cross-entropy loss which provides a higher accuracy in high noise scenarios with class dependent noise . ( C5 ) We show experimentally that a negative variance penalization improves model accuracy when training with noisy labels . The implications of the weighted mean are much broader than what we investigate in this work . We limit the scope of this paper to classification using deep neural networks trained with noisy labels . But the mathematical framework also covers the control of the bias-variance trade-off and can also be applicable to regression problems . In the sequel , we start by introducing the notation in section 2 , then in section 3 we present the moment penalization strategy and the weighted mean trick which is a computational technique to make moment penalization practical . Next , in section 4 , we illustrate the application of moment penalization using the weighted mean formulation to optimize for robustness when training with noisy labels . 2 NOTATIONS . In subsequent sections we will use the following notation . A training data set is defined as D = { ( xi , yi ) } ni=1 where xi ∈ X are the features and yi ∈ Y = { 1 , . . . , k } represent the class labels . 
A classifier f ( x ; θ ) is a mapping f : X ×Θ→ V from the feature space to the probability simplex V parameterized by θ . A loss function ` : V × Y → [ 0 , ∞ ) gives a penalty ` ( v , y ) when the model predicted the value v and label y was observed . A weight function w : V × Y → R assigns to each sample a weight . As we are interested in optimizing the model output value v that minimizes the penalty , we will focus our investigation on loss functions ` ( v , y ) that are convex in v and seek to preserve the convexity when optimizing the weighted mean . In ERM , we are interested with finding the model parameters θ that minimize the empirical risk calculated as ED [ ` ( f ( xi ; θ ) , yi ) ] and the weighted form ED [ w ( ` ( f ( xi ; θ ) , yi ) ) ` ( f ( xi ; θ ) , yi ) ] . To simplify the notation , we will drop the dataset D from the expectation subscript and the arguments of the loss and the weight function , i.e. , E [ ` ] ⇔ ED [ ` ( f ( xi ; θ ) , yi ) ] and E [ w ` ] ⇔ ED [ w ( ` ( f ( xi ; θ ) , yi ) ) ` ( f ( xi ; θ ) , yi ) ] . Similarly , the notation for minimum is simplified as min ` ⇔ minxi , yi∈D ` ( f ( xi ; θ ) , yi ) . −3 0 3 −3 0 3 0.0 0.0 0.2 0.2 1.0 1.0 2.02.0 3.9 3.9 −3 0 3 0.0 0.0 0.5 0.5 1.0 1.0 1.3 1.3 −3 0 3 λ < 0 λ = 0 λ > 0 λ > 0 λ < 0 1Figure 1 : Variance penalization for classification problems . Contour lines of the weights distribution for positive λ are shown on the left and for negative on the center plot . The right plot shows the use of variance penalization for outlier suppression or amplification ( robustification versus generalization ) . 3 PENALIZING MOMENTS . In ERM , the impact on the model parameters of a sample is determined by the balance between its loss value and the average of the training batch . Especially observed in the latter stages of the training when majority of samples have a small loss value except few which are left uncaptured by the model . 
Case in which we consider them as unlearned samples and amplify their impact or as outliers and suppress their impact on the model . The weighted mean trick consist in assigning weights to each sample based on the loss value to either amplify or suppress their impact on training . This is similar to dropout ( Hinton et al. , 2012 ) but for losses . However , weighted mean trick is a deterministic process and the weights are not restricted to 0 or 1 but can take any non-negative value . In what follows , we apply the weighted mean trick to extend the ERM framework to multi-objective optimization of the mean and higher-order moments of the loss function . First , we show that for some distinct weights , optimizing the weighted mean is equivalent to a simultaneous optimization of the mean and variance . Theorem 1 ( Variance Expansion ) . Let ` be a loss function with finite E [ ` ] and finite V [ ` ] and let w ( v , y ) = 1 + λ ( ` ( v , y ) − E [ ` ( v , y ) ] ) , then we have : E [ w ` ] = E [ ` ] + λV [ ` ] ( 3 ) Proof . Replacing w with the definition and using the linearity property of the expectation along with Proposition 7 we get : E [ w ` ] = E [ ` ] +λE [ ( ` −E [ ` ] ) ` ] = E [ ` ] +λE [ ( ` −E [ ` ] ) 2 ] = E [ ` ] +λV [ ` ] Thus , switching from mean to weighted mean allows us to control the bias-variance tradeoff through λ to improve the distributional robustness of the model ( Maurer & Pontil , 2009 ) . However , the range of λ values that preserve the convexity of the objective depends on the average and the minimum penalty returned by the loss function ` as shown by the next lemma . Lemma 2 . As introduced in Theorem 1 , the variance expansion of a convex loss function ` ( v , y ) in v yields a new objective that is also convex in v if λ ∈ [ 0 , λmax ] , where λmax = 1/ ( E [ ` ] −min ` ) . Proof is provided in Appendix A . 
This lemma precisely shows why directly penalizing the variance does not preserve convexity besides when λ takes values in a narrow interval . Moreover , directly penalizing the variance as part of a numeric optimization objective , the upper limit of the interval , λmax , is not constant and changes with each iteration of the optimization algorithm . Note that the further the minimum value min ` is from the average loss value E [ ` ] , the narrower the interval is . Conversely , when E [ ` ] = min ` results that ` is constant and V [ ` ] = 0 thus the objective is convex for λ > 0 . However , using the weighted mean trick the objective remains convex for any positive λ irrespective if V [ ` ] = 0 . Figure 1 shows the weights distribution for different values of λ . When λ is positive ( left plot ) samples closer to the decision boundary ( which also means with larger loss values ) receive more weight . On the other hand , when λ is negative ( center plot ) samples with larger loss values receive less weight . Of note , for λ < 0 the objective is not convex , however as we will show later will still converge to an optimal solution . The right plot shows a binary classification problem where each class consists of two clusters , squares and circles . The parameter λ controls the placement of the decision boundary with respect to these two clusters . Positive λ values place more weight on the cluster of squares which is closer to the decision boundary and as a result the boundary is horizontally aligned . On the other hand , negative λ values place more weight on samples farther from the boundary and in this case the cluster of circles . Therefore , this aligns the decision boundary with respect to the cluster of circles oriented diagonally . Variance is not the only central moment that can be penalized using the weighted mean . In fact , any combination of the moments can be penalized . The next result generalizes Theorem 1 . Theorem 3 ( Moments Expansion ) . 
Let ` be a loss function with finite first m central moments and define w ( v , y ) = ∑m i=1 λi ( ` ( v , y ) − E [ ` ( v , y ) ] ) i−1 , then we have : E [ w ` ] = λ1E [ ` ] + m∑ i=2 λ̃iE [ ( ` − E [ ` ] ) i ] ( 4 ) where λ̃i = λi + λi+1E [ ` ] for i < m and λ̃m = λm Proof is provided in Appendix A . We notice that penalizing moments higher than two incurs an additional penalization of the previous moment . For example , penalizing skewness by λ3 incurs a variance penalization of λ3E [ ` ] . Theorem 3 also has an algebraic interpretation . Note that the formula for weights is nothing more than a polynomial in ` ( v , y ) translated by E [ ` ] . When penalizing the variance , λ2 controls the slope of the linear equation , and when penalizing the skewness , λ3 controls the curvature of the quadratic equation . Moreover , the penalization factors λi also define the placement of the roots of the polynomial and the convexity of the weighted mean objective . Lemma 4 ( Convexity of Moments Expansion ) . Let ` ( v , y ) be a loss function convex in v and p : R→ [ 0 , ∞ ) be a non negative and differentiable convex function and M ≥ 0 , then the weighted objective w ( v , y ) ` ( v , y ) with w ( v , y ) = p ( ` ( v , y ) −M ) is convex in v if p is non decreasing . Proof is provided in Appendix A . The above result lists two requirements on the polynomial function p such that the weighted mean objective remains convex . First , the function must be non-negative , thus , the negative weights must be clipped to 0 . Secondly , when the function takes positive values , p must be non decreasing and convex . Of note , this result is similar in scope to Lemma 2 but does not generalize it as the weights are non negative . Figure 2 shows several examples of polynomials . Variance penalization implies p is an affine function , and thus , convex . However , for λ1 > 0 the polynomial is non decreasing ( left plot ) and for λ1 < 0 the polynomial is non increasing ( center plot ) . 
Thus , only positive variance penalization will result in a convex objective . Of note , Lemma 2 upper bounds λ1 if weights are not clipped to 0 , however , if clipping is used λ1 is not upper bounded . When penalizing skewness , p is a quadratic function ( right plot ) . In this case , if λ2 > 0 then p is convex and non decreasing only on a restricted interval instead of the entire real line . Thus , convexity can be preserved by appropriately adjusting the roots of the polynomial such that p is convex on [ min ` − E [ ` ] , +∞ ] . Clipping negative weights to 0 prevents the optimization objective to switch from minimization to maximization , usually an undesirable behavior . Consequently , the samples with a corresponding zero weight reach their maximum contribution in the moments penalization . Further increasing the factors λi , will have no effect on those samples and only samples with non-zero weights will participate in the training . As a result , the efficiency of the moments penalization will slightly fall . Moreover , the samples with zero weight will be excluded from training , technique used in the past by multiple studies . Rockafellar & Uryasev ( 2000 ) used a similar clipping technique to optimize only for samples that are part of the tail of the distribution . The Moments Expansion Theorem extends the variance expansion proposed by Duchi & Namkoong ( 2019 ) to include higher-order moments . However , even when only the variance is penalized , the two methods are still slightly different . Moments Expansion Theorem computes the weights directly from the loss values , whereas the variance expansion of Duchi & Namkoong ( 2019 ) solves a secondary optimization problem to find the weights . The use of a secondary optimization problem has the −1 0 1 −1 0 1 2 3 λ2 = 2 λ1 = 1 λ4 = 1 λ3 = 2.5 λ2 = 3 λ1 = 1 −1 0 1 λ2 = −2 λ1 = 1 −1 0 1 λ3 = 1 λ2 = 0.5 λ1 = −0.5 1Figure 2 : Polynomial functions for moments penalization . 
Dotted lines show the complete polynomialwhereas solid lines the clipped version . Left plot shows two convex and non decreasing polynomials . Center plot shows a convex but non increasing polynomial and thus will result in a non convex objective . Right plot shows a convex and non decreasing polynomial on the interval [ −1 , +∞ ] , and thus will yield a convex objective when min ( ` ) − E [ ` ] ≥ −1 . advantage of penalizing the variance more consistently despite being more computationally expensive . On the contrary , when using the moments expansion , the penalized variance can be slightly lower depending on the number of weights that are 0 . However , the advantage is that the direct computation of the weights makes it easier to include the method into existing analysis frameworks . A similar method that extends the optimization objective to include penalization factors for higher-order moments was proposed by Li et al . ( 2021 ) . The proposed method replaces the ERM objective with a tilted version calculated as 1tK ( t ) = 1 t logE [ e t ` ] where K ( t ) is the cumulant-generating function of the loss ` . The penalization of the higher order moments of the tilted ERM can be recovered from the power series expansion of K ( t ) . The distinction between the two is that the moments penalization introduced in this paper represents a generalization of the tilted ERM as it allows any combination of the higher order moments to be penalized whereas tilted ERM uses a single parameter that governs the penalization factors . In summary , the moments penalization implemented using the weighted mean trick is more flexible , however , it comes at a cost as there are more parameters to tune when penalizing multiple moments compared to tilted ERM of Li et al . ( 2021 ) . Convergence and convergence rates . 
The moments penalization problem, along with the variance expansion of Duchi & Namkoong (2019), falls under the class of distributionally robust stochastic programs (DRSP) (Sun & Xu, 2016), which is a subclass of ambiguity programs (AP) (Royset & Wets, 2017), whose general objective is: AP: min_{θ∈Θ} sup_{P∈D(θ)} ϕ(θ, P), (5) where Θ is the set of model parameters and D(θ) is the ambiguity set. In DRSP, the bivariate function is ϕ(θ, P) = E_P[ℓ], where ℓ is the loss function, and the ambiguity set D(θ) is a set of probability distributions. In the case of the moments penalization problem, the ambiguity set D(θ) depends on the model parameters and is a singleton, as the weights uniquely transform the empirical distribution. As a result, the optimal value of the inner maximization problem becomes sup_{P∈D(θ)} ϕ(θ, P) = E_P[ℓ]. Intuitively, a model will converge if changes in its parameters cause minor changes in the distribution P, and with each step the distribution approaches the optimum distribution P*. Formally, to quantify the changes in the distribution, we need a distance or a metric. Sun & Xu (2016) use the total variation metric and a pseudometric to prove uniform convergence, possibly at an exponential rate, if P converges to P* under the total variation metric and ℓ is uniformly bounded (see Sun & Xu, 2016, Th. 1 and Prop. 3). Royset & Wets (2017) proposed a hypo-distance metric and proved lop-convergence given that the bivariate function ϕ(θ, P) satisfies some assumptions (see Royset & Wets, 2017, Def. 4.1). Duchi & Namkoong (2019) provide guarantees for a number of stochastic risk minimization problems when only the variance is penalized and P is in the local neighborhood of the empirical distribution defined using the χ²-divergence.
We refer the reader to the works of Sun & Xu (2016) and Royset & Wets (2017), and the references therein, for additional guarantees if more information about the problem structure is available or if other metrics are used. For the moments penalization problem, the moments penalization factors λi for i ≥ 2 determine how much the distribution P changes when the loss changes. Small values of the penalization factors will keep P in the neighborhood of the empirical distribution, whereas large values will make the weights sensitive to changes in the loss values, which can cause stability or convergence issues. The exact values depend on the empirical distribution of the data and the choice of the model and loss function. Weighted mean trick in practice. To apply the method in practice, the classical batch training algorithm must be extended with one additional step, the weight calculation. Instead of directly calculating the average loss, the user calculates the loss value for each sample in the batch and then uses the expression from Theorem 3 to compute the weights and the weighted mean. The moments penalization factors λi are hyperparameters and are tuned in ascending order with respect to i. However, penalizing higher-order moments might affect the impact of the lower-order ones, and thus it might require a few iterations to find the optimal combination. An implementation of this algorithm in PyTorch (Paszke et al., 2019) is available on GitHub. Since the gradient of the weighted mean is the weighted gradient of the elements, the weights control the impact of each sample on the model parameters. Of note, the theoretical results hold when switching from the expectation to the sample expectation, E_n[ℓ] = (1/n) ∑_{i=1}^n ℓ(f(x_i, θ), y_i).
Algorithm 1: Training with Moments Penalization. Input: f(x; θ) – model to be trained; {x_i, y_i}_1^n – batch of training data; {λ}_1^m – penalization factors; ℓ(v, y) – loss function. While the stopping criterion is not reached: for i ← 1 to n, compute the sample loss z_i ← ℓ(f(x_i, θ), y_i); compute the weights w ← [∑_{j=1}^m λ_j (z − E_n[z])^{j−1}]_+; compute the weighted mean L_w ← (1/n) ∑_{i=1}^n w_i z_i; update the model parameters θ ← θ − γ∇_θ L_w. | In this paper, the authors demonstrate that minimizing the weighted mean results in penalizing higher order moments of the loss distribution. They do this by explicitly demonstrating specific choices for the weights such that the expected weighted loss corresponds to minimizing the original loss regularized by a linear combination of the higher moments. They also note the ranges of the regularization parameters that preserve convexity. The authors state their work in the context of two recent papers [1], [2], which attempt to control the bias-variance tradeoff in different ways, effectively controlling either just the variance or all the higher moments. In contrast, this technique allows them to control any specific combination of higher moments. [1] "Variance-based regularization with convex objectives", Duchi & Namkoong, 2019. [2] "Tilted Empirical Risk Minimization", Li, Beirami, Sanjabi & Smith, 2020. | SP:bbbe0606b16633c4fc439f0b24c546705ed41d6b
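Algorithm 1 above can be sketched in a few lines of Python. This is an illustrative re-implementation under our own assumptions: the scalar toy model, loss, and per-sample gradient callbacks below are placeholders we invented, not the paper's released code.

```python
import numpy as np

def moments_penalized_step(theta, xs, ys, forward, loss_fn, grad_fn, lambdas, lr=0.01):
    """One training step of Algorithm 1 (moments penalization via the weighted mean).

    lambdas = [lambda_1, ..., lambda_m] are the penalization factors; the
    per-sample weights are w_i = [sum_j lambda_j * (z_i - E_n[z])**(j-1)]_+.
    """
    # per-sample losses z_i
    z = np.array([loss_fn(forward(x, theta), y) for x, y in zip(xs, ys)])
    centered = z - z.mean()
    # polynomial weights, clipped at 0 to keep the objective a minimization
    w = np.maximum(sum(lam * centered**j for j, lam in enumerate(lambdas)), 0.0)
    # gradient of the weighted mean = weighted mean of per-sample gradients
    grad = sum(wi * grad_fn(x, y, theta) for wi, x, y in zip(w, xs, ys)) / len(z)
    return theta - lr * grad

# toy scalar regression: f(x; theta) = theta * x with squared loss
forward = lambda x, t: t * x
loss_fn = lambda v, y: (v - y) ** 2
grad_fn = lambda x, y, t: 2.0 * (t * x - y) * x
theta = moments_penalized_step(0.0, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0],
                               forward, loss_fn, grad_fn, lambdas=[1.0, 0.5])
```

With lambdas = [λ1] the weights are all λ1 and the step reduces to plain gradient descent on the mean loss; adding λ2 > 0 up-weights the high-loss samples, as described in the text.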
DIVERSIFY to Generalize: Learning Generalized Representations for Time Series Classification | 1 INTRODUCTION. Time series classification is one of the most challenging problems in the machine learning and statistics communities. Example applications include sensor-based human activity recognition, Parkinson's disease diagnosis, and electronic power consumption (Fawaz et al., 2019). One important property of time series is non-stationarity, which means that their statistical features change over time. For years, there have been tremendous efforts to tackle the time series classification problem, such as hidden Markov models (Fulcher & Jones, 2014), RNN-based methods (Hüsken & Stagge, 2003), and Transformer-based approaches (Li et al., 2019). In this paper, we are specifically interested in modeling time series from the distribution perspective. More precisely, we aim to learn representations for time series that can generalize to unseen distributions. Note that this scenario has been extensively studied in the existing literature on domain generalization (Muandet et al., 2013; Wang et al., 2021a) and out-of-distribution generalization (Krueger et al., 2021), where researchers seek to bridge the gap between known and unknown distributions and thus generalize well. While most of these efforts target image classification, few focus on the time series domain, which is more challenging. Although time series data share a similar goal with image data in domain generalization, they naturally bring more challenges due to their non-stationary property: the distribution keeps changing over time, which yields diverse distribution information that should be harnessed well for better generalization. We show an illustrative example in Figure 1. Domain generalization in image classification often involves several domains whose domain information is known (subfigure (a)).
Thus, we can leverage such domain information to build generalization models. However, in Figure 1(b), we see that in time series data, although the distribution changes dynamically over time, the domain information is not available. This dramatically impedes existing domain generalization algorithms, as they typically assume access to domain information (subfigure (c)). In order to learn a generalized time series model, we propose DIVERSIFY, a domain generalization algorithm that characterizes the latent distributions inside time series data. Concretely, our method consists of a min-max adversarial game: on the one hand, it learns to segment the time series data into several latent sub-domains by maximizing the segment-wise distribution gap to preserve diversity, i.e., the worst-case distribution scenario; on the other hand, it learns domain-invariant representations by reducing the distribution divergence for the worst-case scenario. Such diversification naturally exists in a non-stationary dataset, where the data from multiple people naturally follow several latent distributions. Moreover, it is surprising to find that even the data of one person still exhibits such diversification: it can also be split into several latent distributions. DIVERSIFY can effectively characterize these latent distributions (Figure 1(d)). To summarize, our contributions are three-fold: • Novel problem: For deep learning-based time series classification, we identify the generalized representation problem, which is more challenging than the traditional image classification problem due to the existence of unidentified latent distributions. • New methodology: We propose DIVERSIFY, a theoretically-motivated solution to the generalized representation learning problem that identifies the latent distributions.
• Good performance: Our approach is extensively evaluated on three types of tasks: gesture recognition, speech command recognition, and sensor-based activity recognition. Through qualitative and quantitative analysis, we demonstrate the superiority of DIVERSIFY in several challenging scenarios: difficult tasks, significantly diverse datasets, and limited data. 2 METHODOLOGY. 2.1 PROBLEM FORMULATION. We are given a time-series dataset D_tr = {(x_i, y_i)}_{i=1}^N as the training dataset, where N is the number of samples, x_i ∈ X ⊂ R^p is the p-dimensional instance (sliding window), and y_i ∈ Y = {1, ..., C} is its label. We use P_tr(x, y) on X × Y to denote the joint distribution of the training dataset. Our goal is to learn a generalized model from D_tr to predict well on an unseen target dataset, D_te, which is inaccessible during training. Like D_tr, the time series in D_te are also split into short series. In our problem, the training and test datasets have the same input and output spaces but different distributions, i.e., X_tr = X_te and Y_tr = Y_te, but P_tr(x, y) ≠ P_te(x, y). We aim to train a model h from D_tr to minimize the risk on D_te: min_h E_{(x,y)∼P_te} [h(x) ≠ y]. Note that due to the non-stationary property, the training dataset may be composed of several unknown latent distributions instead of one fixed distribution, i.e., P_tr(x, y) = ∑_{i=1}^K π_i P_i(x, y), where P_i(x, y) is the distribution of the i-th sub-domain of the training data and π_i is its weight. K is the number of sub-domains, which is unknown, and ∑_{i=1}^K π_i = 1. 2.2 MOTIVATION. The labeled time series data can be composed of several latent distributions (domains) that are challenging to characterize, even if the dataset is fully labeled. For instance, data collected by sensors from three persons may belong to two different distributions when considering their similarities.
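As a toy illustration of the mixture assumption P_tr(x, y) = ∑_i π_i P_i(x, y), one can sample from a weighted mixture of sub-domain distributions. The Gaussian sub-domains and their parameters below are invented purely for illustration; in the paper's setting the sub-domain ids are unobserved, and recovering them is exactly DIVERSIFY's job.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, pis, means, stds):
    """Draw n points from sum_i pi_i * P_i, where each latent sub-domain P_i
    is a 1-D Gaussian. Returns the samples and their (normally hidden) ids."""
    ks = rng.choice(len(pis), size=n, p=pis)  # which sub-domain each sample comes from
    x = rng.normal(np.asarray(means)[ks], np.asarray(stds)[ks])
    return x, ks

# two latent sub-domains with mixture weights pi = (0.7, 0.3)
x, ks = sample_mixture(1000, pis=[0.7, 0.3], means=[0.0, 5.0], stds=[1.0, 0.5])
```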
Moreover, even for data from one single person, different segments of one sequence may follow different distributions. In a nutshell, in reality there often exist several sub-domains in one time series dataset. To ensure good generalization performance on the test dataset, it is important to learn distribution-invariant, or domain-invariant, representations from the training dataset by characterizing its latent distributions. These latent distributions may contain both benign and malignant knowledge that influences generalization on the target dataset. In Figure 1(c), we assume the source domain contains two sub-domains (circle and plus points). Directly learning from the entire source domain by treating it as one distribution may generate the black margin. Although the green and red star data points can seemingly be classified easily, the red star data points are misclassified into the green class when predicting on the out-of-distribution domain (star points) with the learned model. While other methods fail, our method can characterize the latent distributions, as introduced below. 2.3 DIVERSIFY. In this paper, we propose DIVERSIFY to learn generalized representations for time series classification. The core of DIVERSIFY is to characterize the latent distributions in a time series dataset and then to minimize the distribution divergence between each pair. To characterize the diverse information for better generalization, DIVERSIFY utilizes an iterative process: it first obtains the worst-case distribution scenario from a given dataset, then bridges the distribution gaps between each pair of latent distributions. Why the worst-case scenario? We argue that the worst-case scenario maximally preserves the diverse information of each latent distribution, thus benefiting generalization. Figure 2 describes the main procedures of DIVERSIFY: 1.
Pre-processing: this step adopts a sliding window to split the entire training dataset into fixed-size windows. We regard the data from one window as the smallest domain unit. 2. Fine-grained feature update: this step updates the feature extractor using the proposed pseudo domain-class labels as supervision. 3. Latent distribution characterization: this step aims to identify the domain label for each instance to acquire the latent distribution information. It tries to maximize the gaps between different distributions to enlarge diversity. 4. Domain-invariant representation learning: this step utilizes the pseudo domain labels from the last step to learn domain-invariant representations and train a generalizable model. Note that steps 2–4 are iterative, as shown in Figure 2. Now we elaborate on their details. Fine-grained Feature Update. Before we characterize the latent distributions, we perform a fine-grained feature update to obtain useful representations. As shown in Figure 2 (blue), in order to fully utilize the knowledge contained in the domains and classes, we propose a new concept: the pseudo domain-class label, which serves as the supervision for the feature extractor. This ensures that the feature extractor can learn fine-grained information, which benefits the later steps. At the first iteration, there is no domain label d′, and we simply initialize d′ = 0 for all samples. We treat each (domain, class) pair as a new class and set its label to s ∈ {1, 2, ..., S}. We have S = K × C, where K is the pre-defined number of latent distributions, which can be tuned in experiments. We perform the following pseudo domain-class label assignment to get discrete values for supervision: s = d′ × C + y.
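The pseudo domain-class label assignment s = d′ × C + y can be sketched as a one-line function (the function name is ours):

```python
def domain_class_label(d_prime, y, C):
    """Combine pseudo domain label d' in {0, ..., K-1} and class label y in
    {1, ..., C} into the pseudo domain-class label s = d' * C + y in {1, ..., K*C}."""
    return d_prime * C + y

# with C = 4 classes and K = 3 latent domains there are S = 12 combined labels
assert domain_class_label(0, 1, C=4) == 1   # first domain, first class
assert domain_class_label(2, 4, C=4) == 12  # last domain, last class
```

Since d′ is initialized to 0 for all samples, the first iteration reduces to ordinary class supervision with s = y.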
Let h_f^(2), h_b^(2), h_c^(2) be the feature extractor, bottleneck, and classifier, respectively (we use superscripts to denote the step number). Then the supervised loss is computed using the cross-entropy loss ℓ: L_super = E_{(x,y)∼P_tr} ℓ(h_c^(2)(h_b^(2)(h_f^(2)(x))), s). (1) Latent Distribution Characterization. This step characterizes the latent distributions contained in one dataset. As shown in Figure 2 (green), we employ an adversarial training strategy to disentangle the domain labels from the class labels. However, no actual domain labels are provided, which hinders such disentanglement. Inspired by Liang et al. (2020), we adopt a self-supervised pseudo-labeling strategy to obtain domain labels. First, we obtain the centroid for each domain with class-invariant features: μ̃_k = [∑_{x_i∈X_tr} δ_k(h_c^(3)(h_b^(3)(h_f^(3)(x_i)))) h_b^(3)(h_f^(3)(x_i))] / [∑_{x_i∈X_tr} δ_k(h_c^(3)(h_b^(3)(h_f^(3)(x_i))))], (2) where h_f^(3), h_b^(3), h_c^(3) are the feature extractor, bottleneck, and classifier, respectively. μ̃_k is the initial centroid of the k-th latent sub-domain, while δ_k is the k-th element of the soft-max output of the logits. Then, we obtain the pseudo domain labels via the nearest-centroid classifier using a distance function D: d̃′_i = argmin_k D(h_b^(3)(h_f^(3)(x_i)), μ̃_k). (3) Next, we recompute the centroids based on the new pseudo labels and obtain the updated pseudo domain labels: μ_k = [∑_{x_i∈X_tr} I(d̃′_i = k) h_b^(3)(h_f^(3)(x_i))] / [∑_{x_i∈X_tr} I(d̃′_i = k)], d′_i = argmin_k D(h_b^(3)(h_f^(3)(x_i)), μ_k), (4) where I(a) = 1 when a is true, and 0 otherwise.
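Eqs. (2)-(4) amount to one soft initialization of the centroids followed by one hard, k-means-style refinement. A numpy sketch follows; Euclidean distance stands in for the unspecified distance function D, and all names are ours:

```python
import numpy as np

def nearest_centroid(feats, centroids):
    # pairwise Euclidean distances (n, K), then argmin over centroids
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

def pseudo_domain_labels(feats, soft, K):
    """feats: (n, d) bottleneck features h_b(h_f(x)); soft: (n, K) soft-max outputs delta_k."""
    # Eq. (2): soft-assignment-weighted initial centroids mu~_k
    centroids = (soft.T @ feats) / soft.sum(axis=0)[:, None]
    # Eq. (3): first nearest-centroid assignment d~'
    labels = nearest_centroid(feats, centroids)
    # Eq. (4): recompute centroids from the hard labels, then the final assignment d'
    for k in range(K):
        if (labels == k).any():
            centroids[k] = feats[labels == k].mean(axis=0)
    return nearest_centroid(feats, centroids)
```

On two well-separated feature clusters this recovers the cluster structure, which is all the pseudo domain labels need to do before the adversarial step takes over.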
After we obtain d′, we can compute the objective of step 3: L_self + L_cls = E_{(x,y)∼P_tr} [ℓ(h_c^(3)(h_b^(3)(h_f^(3)(x))), d′) + ℓ(h_adv^(3)(R_{λ1}(h_b^(3)(h_f^(3)(x)))), y)], (5) where h_adv^(3) is the discriminator for step 3, which contains several linear layers and one classification layer, and R_{λ1} is the gradient reversal layer with hyperparameter λ1 (Ganin et al., 2016). After this step ends, we obtain the pseudo domain label d′ for each x. Domain-invariant Representation Learning. After obtaining the latent distributions, we learn domain-invariant representations for generalization. In fact, this step (Figure 2, purple) is simple: we borrow the idea from DANN (Ganin et al., 2016) and directly use adversarial training to update the classification loss L_cls and the domain classifier loss L_dom using the GRL: L_cls + L_dom = E_{(x,y)∼P_tr} [ℓ(h_c^(4)(h_b^(4)(h_f^(4)(x))), y) + ℓ(h_adv^(4)(R_{λ2}(h_b^(4)(h_f^(4)(x)))), d′)], (6) where ℓ is the cross-entropy loss and R_{λ2} is the gradient reversal layer with hyperparameter λ2 (Ganin et al., 2016). More details of the GRL and adversarial training are presented in Appendix A.1. We repeat the above three steps until convergence or the maximum number of epochs is reached. The final model is selected via a validation dataset split from the source domain (Gulrajani & Lopez-Paz, 2021). For inference, we predict the labels with the modules from the last step. | This paper proposed a time series classification method with a loss function that simultaneously promotes diverse distributions among different sub-domains within each domain and domain-invariant feature representations within the same class. Here the domain is considered at a rather granular level; for example, different persons may be considered different domains in gesture recognition applications. An iterative process is proposed to learn a classification model.
Both theoretical analysis and empirical evaluation on three different applications are provided. The proposed method shows better accuracy than other competing methods, especially in generalizing across different domains. | SP:086dabce4a95956102191b52c4268fa344fc794c
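The gradient reversal layer R_λ used in Eqs. (5) and (6) is conceptually tiny. A framework-free sketch follows; in practice it is implemented as a custom autograd function in a deep learning framework, but this illustrates the forward/backward behavior (class and variable names are ours):

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer (Ganin et al., 2016): identity on the forward
    pass, gradient multiplied by -lambda on the backward pass, so the feature
    extractor is pushed to fool the domain discriminator."""

    def __init__(self, lam):
        self.lam = lam

    def forward(self, x):
        return x  # identity: features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * np.asarray(grad_output)

grl = GradReverse(lam=0.5)
feats = np.array([1.0, -2.0])
out = grl.forward(feats)                         # same as feats
grad_back = grl.backward(np.array([2.0, 2.0]))   # scaled by -0.5 -> [-1.0, -1.0]
```

Because the discriminator minimizes its loss while the reversed gradient makes the feature extractor maximize it, a single backward pass implements the min-max game described above.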
DIVERSIFY to Generalize: Learning Generalized Representations for Time Series Classification | 1 INTRODUCTION. Time series classification is one of the most challenging problems in the machine learning and statistics communities. Example applications include sensor-based human activity recognition, Parkinson's disease diagnosis, and electronic power consumption (Fawaz et al., 2019). One important property of time series is non-stationarity, which means that their statistical features change over time. For years, there have been tremendous efforts to tackle the time series classification problem, such as hidden Markov models (Fulcher & Jones, 2014), RNN-based methods (Hüsken & Stagge, 2003), and Transformer-based approaches (Li et al., 2019). In this paper, we are specifically interested in modeling time series from the distribution perspective. More precisely, we aim to learn representations for time series that can generalize to unseen distributions. Note that this scenario has been extensively studied in the existing literature on domain generalization (Muandet et al., 2013; Wang et al., 2021a) and out-of-distribution generalization (Krueger et al., 2021), where researchers seek to bridge the gap between known and unknown distributions and thus generalize well. While most of these efforts target image classification, few focus on the time series domain, which is more challenging. Although time series data share a similar goal with image data in domain generalization, they naturally bring more challenges due to their non-stationary property: the distribution keeps changing over time, which yields diverse distribution information that should be harnessed well for better generalization. We show an illustrative example in Figure 1. Domain generalization in image classification often involves several domains whose domain information is known (subfigure (a)).
Thus, we can leverage such domain information to build generalization models. However, in Figure 1(b), we see that in time series data, although the distribution changes dynamically over time, the domain information is not available. This dramatically impedes existing domain generalization algorithms, as they typically assume access to domain information (subfigure (c)). In order to learn a generalized time series model, we propose DIVERSIFY, a domain generalization algorithm that characterizes the latent distributions inside time series data. Concretely, our method consists of a min-max adversarial game: on the one hand, it learns to segment the time series data into several latent sub-domains by maximizing the segment-wise distribution gap to preserve diversity, i.e., the worst-case distribution scenario; on the other hand, it learns domain-invariant representations by reducing the distribution divergence for the worst-case scenario. Such diversification naturally exists in a non-stationary dataset, where the data from multiple people naturally follow several latent distributions. Moreover, it is surprising to find that even the data of one person still exhibits such diversification: it can also be split into several latent distributions. DIVERSIFY can effectively characterize these latent distributions (Figure 1(d)). To summarize, our contributions are three-fold: • Novel problem: For deep learning-based time series classification, we identify the generalized representation problem, which is more challenging than the traditional image classification problem due to the existence of unidentified latent distributions. • New methodology: We propose DIVERSIFY, a theoretically-motivated solution to the generalized representation learning problem that identifies the latent distributions.
• Good performance: Our approach is extensively evaluated on three types of tasks: gesture recognition, speech command recognition, and sensor-based activity recognition. Through qualitative and quantitative analysis, we demonstrate the superiority of DIVERSIFY in several challenging scenarios: difficult tasks, significantly diverse datasets, and limited data. 2 METHODOLOGY. 2.1 PROBLEM FORMULATION. We are given a time-series dataset D_tr = {(x_i, y_i)}_{i=1}^N as the training dataset, where N is the number of samples, x_i ∈ X ⊂ R^p is the p-dimensional instance (sliding window), and y_i ∈ Y = {1, ..., C} is its label. We use P_tr(x, y) on X × Y to denote the joint distribution of the training dataset. Our goal is to learn a generalized model from D_tr to predict well on an unseen target dataset, D_te, which is inaccessible during training. Like D_tr, the time series in D_te are also split into short series. In our problem, the training and test datasets have the same input and output spaces but different distributions, i.e., X_tr = X_te and Y_tr = Y_te, but P_tr(x, y) ≠ P_te(x, y). We aim to train a model h from D_tr to minimize the risk on D_te: min_h E_{(x,y)∼P_te} [h(x) ≠ y]. Note that due to the non-stationary property, the training dataset may be composed of several unknown latent distributions instead of one fixed distribution, i.e., P_tr(x, y) = ∑_{i=1}^K π_i P_i(x, y), where P_i(x, y) is the distribution of the i-th sub-domain of the training data and π_i is its weight. K is the number of sub-domains, which is unknown, and ∑_{i=1}^K π_i = 1. 2.2 MOTIVATION. The labeled time series data can be composed of several latent distributions (domains) that are challenging to characterize, even if the dataset is fully labeled. For instance, data collected by sensors from three persons may belong to two different distributions when considering their similarities.
Moreover, even for data from one single person, different segments of one sequence may follow different distributions. In a nutshell, in reality there often exist several sub-domains in one time series dataset. To ensure good generalization performance on the test dataset, it is important to learn distribution-invariant, or domain-invariant, representations from the training dataset by characterizing its latent distributions. These latent distributions may contain both benign and malignant knowledge that influences generalization on the target dataset. In Figure 1(c), we assume the source domain contains two sub-domains (circle and plus points). Directly learning from the entire source domain by treating it as one distribution may generate the black margin. Although the green and red star data points can seemingly be classified easily, the red star data points are misclassified into the green class when predicting on the out-of-distribution domain (star points) with the learned model. While other methods fail, our method can characterize the latent distributions, as introduced below. 2.3 DIVERSIFY. In this paper, we propose DIVERSIFY to learn generalized representations for time series classification. The core of DIVERSIFY is to characterize the latent distributions in a time series dataset and then to minimize the distribution divergence between each pair. To characterize the diverse information for better generalization, DIVERSIFY utilizes an iterative process: it first obtains the worst-case distribution scenario from a given dataset, then bridges the distribution gaps between each pair of latent distributions. Why the worst-case scenario? We argue that the worst-case scenario maximally preserves the diverse information of each latent distribution, thus benefiting generalization. Figure 2 describes the main procedures of DIVERSIFY: 1.
Pre-processing: this step adopts a sliding window to split the entire training dataset into fixed-size windows. We regard the data from one window as the smallest domain unit. 2. Fine-grained feature update: this step updates the feature extractor using the proposed pseudo domain-class labels as supervision. 3. Latent distribution characterization: this step aims to identify the domain label for each instance to acquire the latent distribution information. It tries to maximize the gaps between different distributions to enlarge diversity. 4. Domain-invariant representation learning: this step utilizes the pseudo domain labels from the last step to learn domain-invariant representations and train a generalizable model. Note that steps 2–4 are iterative, as shown in Figure 2. Now we elaborate on their details. Fine-grained Feature Update. Before we characterize the latent distributions, we perform a fine-grained feature update to obtain useful representations. As shown in Figure 2 (blue), in order to fully utilize the knowledge contained in the domains and classes, we propose a new concept: the pseudo domain-class label, which serves as the supervision for the feature extractor. This ensures that the feature extractor can learn fine-grained information, which benefits the later steps. At the first iteration, there is no domain label d′, and we simply initialize d′ = 0 for all samples. We treat each (domain, class) pair as a new class and set its label to s ∈ {1, 2, ..., S}. We have S = K × C, where K is the pre-defined number of latent distributions, which can be tuned in experiments. We perform the following pseudo domain-class label assignment to get discrete values for supervision: s = d′ × C + y.
Let h_f^(2), h_b^(2), h_c^(2) be the feature extractor, bottleneck, and classifier, respectively (we use superscripts to denote the step number). Then the supervised loss is computed using the cross-entropy loss ℓ: L_super = E_{(x,y)∼P_tr} ℓ(h_c^(2)(h_b^(2)(h_f^(2)(x))), s). (1) Latent Distribution Characterization. This step characterizes the latent distributions contained in one dataset. As shown in Figure 2 (green), we employ an adversarial training strategy to disentangle the domain labels from the class labels. However, no actual domain labels are provided, which hinders such disentanglement. Inspired by Liang et al. (2020), we adopt a self-supervised pseudo-labeling strategy to obtain domain labels. First, we obtain the centroid for each domain with class-invariant features: μ̃_k = [∑_{x_i∈X_tr} δ_k(h_c^(3)(h_b^(3)(h_f^(3)(x_i)))) h_b^(3)(h_f^(3)(x_i))] / [∑_{x_i∈X_tr} δ_k(h_c^(3)(h_b^(3)(h_f^(3)(x_i))))], (2) where h_f^(3), h_b^(3), h_c^(3) are the feature extractor, bottleneck, and classifier, respectively. μ̃_k is the initial centroid of the k-th latent sub-domain, while δ_k is the k-th element of the soft-max output of the logits. Then, we obtain the pseudo domain labels via the nearest-centroid classifier using a distance function D: d̃′_i = argmin_k D(h_b^(3)(h_f^(3)(x_i)), μ̃_k). (3) Next, we recompute the centroids based on the new pseudo labels and obtain the updated pseudo domain labels: μ_k = [∑_{x_i∈X_tr} I(d̃′_i = k) h_b^(3)(h_f^(3)(x_i))] / [∑_{x_i∈X_tr} I(d̃′_i = k)], d′_i = argmin_k D(h_b^(3)(h_f^(3)(x_i)), μ_k), (4) where I(a) = 1 when a is true, and 0 otherwise.
After we obtain d′, we can compute the objective of step 3: L_self + L_cls = E_{(x,y)∼P_tr} [ℓ(h_c^(3)(h_b^(3)(h_f^(3)(x))), d′) + ℓ(h_adv^(3)(R_{λ1}(h_b^(3)(h_f^(3)(x)))), y)], (5) where h_adv^(3) is the discriminator for step 3, which contains several linear layers and one classification layer, and R_{λ1} is the gradient reversal layer with hyperparameter λ1 (Ganin et al., 2016). After this step ends, we obtain the pseudo domain label d′ for each x. Domain-invariant Representation Learning. After obtaining the latent distributions, we learn domain-invariant representations for generalization. In fact, this step (Figure 2, purple) is simple: we borrow the idea from DANN (Ganin et al., 2016) and directly use adversarial training to update the classification loss L_cls and the domain classifier loss L_dom using the GRL: L_cls + L_dom = E_{(x,y)∼P_tr} [ℓ(h_c^(4)(h_b^(4)(h_f^(4)(x))), y) + ℓ(h_adv^(4)(R_{λ2}(h_b^(4)(h_f^(4)(x)))), d′)], (6) where ℓ is the cross-entropy loss and R_{λ2} is the gradient reversal layer with hyperparameter λ2 (Ganin et al., 2016). More details of the GRL and adversarial training are presented in Appendix A.1. We repeat the above three steps until convergence or the maximum number of epochs is reached. The final model is selected via a validation dataset split from the source domain (Gulrajani & Lopez-Paz, 2021). For inference, we predict the labels with the modules from the last step. | This paper focuses on time-series classification problems, such as gesture or speech command recognition, where the data for any given sequence comes from one user, but for which a priori you don't know anything about that user. Their goal is to be able to train on data from one set of users ("domains") and evaluate on a different set of users ("domains"). The users (at train and test) are unknown and are modeled by discrete latent variables.
Contributions include an algorithm ("DIVERSIFY") for learning distributions of domains/users. Their approach ultimately results in a model for time-series classification and for latent segmentation of data into discrete domains (aka user profiles). The model is framed using a min-max adversarial formulation. The paper documents results on three datasets (EMG signals, Speech Commands, and Human Activities) that show improvements over other distributional algorithms. | SP:086dabce4a95956102191b52c4268fa344fc794c
DIVERSIFY to Generalize: Learning Generalized Representations for Time Series Classification | 1 INTRODUCTION . Time series classification is one of the most challenging problems in the machine learning and statistics communities. Example applications include sensor-based human activity recognition, Parkinson's disease diagnosis, and electronic power consumption (Fawaz et al., 2019). One important characteristic of time series is its non-stationary property, which means that its statistical features change over time. For years, there have been tremendous efforts to tackle the time series classification problem, such as hidden Markov models (Fulcher & Jones, 2014), RNN-based methods (Hüsken & Stagge, 2003), and Transformer-based approaches (Li et al., 2019). In this paper, we are specifically interested in modeling time series from the distribution perspective. More precisely, we aim to learn representations for time series that can generalize to unseen distributions. Note that this scenario has been extensively studied in the existing literature on domain generalization (Muandet et al., 2013; Wang et al., 2021a) and out-of-distribution generalization (Krueger et al., 2021), where researchers seek to bridge the gap between known and unknown distributions and thus generalize well. While most of these efforts target image classification, few focus on the time series domain, which is more challenging. Although time series share a similar goal as image data in domain generalization, they naturally bring more challenges due to their non-stationary property: the distribution keeps changing over time, and this diverse distribution information should be harnessed well for better generalization. We show an illustrative example in Figure 1. Domain generalization in image classification often involves several domains whose domain information is known (subfigure (a)).
Thus, we can leverage such domain information to build generalization models. However, in Figure 1(b), we see that in time series data, although the distribution changes dynamically over time, its domain information is not available. This dramatically impedes existing domain generalization algorithms, as they typically assume access to domain information (subfigure (c)). In order to learn a generalized time series model, we propose DIVERSIFY, a domain generalization algorithm that characterizes the latent distributions inside time series data. Concretely speaking, our method consists of a min-max adversarial game: on one hand, it learns to segment the time series data into several latent sub-domains by maximizing the segment-wise distribution gap to preserve diversity, i.e., the worst-case distribution scenario; on the other hand, it learns domain-invariant representations by reducing the distribution divergence for this worst-case scenario. Such diversification naturally exists in a non-stationary dataset, where the data from multiple people naturally follow several latent distributions. Moreover, it is also surprising to find that even the data of one person exhibits such diversification: it can also be split into several latent distributions. DIVERSIFY can effectively characterize these latent distributions (Figure 1(d)). To summarize, our contributions are three-fold: • Novel problem: For deep learning-based time series classification, we identify the generalized representation learning problem, which is more challenging than the traditional image classification problem due to the existence of unidentified latent distributions. • New methodology: We propose DIVERSIFY, a theoretically-motivated solution to the generalized representation learning problem that identifies the latent distributions.
• Good performance: Our approach is extensively evaluated on three types of tasks: gesture recognition, speech command recognition, and sensor-based activity recognition. Through qualitative and quantitative analysis, we demonstrate the superiority of DIVERSIFY in several challenging scenarios: difficult tasks, significantly diverse datasets, and limited data. 2 METHODOLOGY . 2.1 PROBLEM FORMULATION . We are given a time-series dataset $D_{tr} = \{(x_i, y_i)\}_{i=1}^{N}$ as the training dataset, where $N$ is the number of samples, $x_i \in \mathcal{X} \subset \mathbb{R}^p$ is the $p$-dimensional instance (sliding window) and $y_i \in \mathcal{Y} = \{1, \dots, C\}$ is its label. We use $P^{tr}(x, y)$ on $\mathcal{X} \times \mathcal{Y}$ to denote the joint distribution of the training dataset. Our goal is to learn a generalized model from $D_{tr}$ that predicts well on an unseen target dataset $D_{te}$, which is inaccessible during training. Like $D_{tr}$, time series in $D_{te}$ can also be split into short series. In our problem, the training and test datasets have the same input and output spaces but different distributions, i.e., $\mathcal{X}^{tr} = \mathcal{X}^{te}$, $\mathcal{Y}^{tr} = \mathcal{Y}^{te}$, but $P^{tr}(x, y) \neq P^{te}(x, y)$. We aim to train a model $h$ from $D_{tr}$ to minimize the risk on $D_{te}$: $\min_h \mathbb{E}_{(x,y)\sim P^{te}}[h(x) \neq y]$. Note that due to the non-stationary property, the training dataset may be composed of several unknown latent distributions instead of one fixed distribution, i.e., $P^{tr}(x, y) = \sum_{i=1}^{K} \pi_i P_i(x, y)$, where $P_i(x, y)$ is the distribution of the $i$-th sub-domain of the training data and $\pi_i$ is its weight. $K$ is the number of sub-domains, which is unknown, and $\sum_{i=1}^{K} \pi_i = 1$. 2.2 MOTIVATION . The labeled time series data can be composed of several latent distributions (domains) that are challenging to characterize, even if the dataset is fully labeled. For instance, data collected by sensors from three persons may belong to two different distributions when considering their similarities.
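The mixture assumption $P^{tr}(x,y) = \sum_{i=1}^{K}\pi_i P_i(x,y)$ can be illustrated with a toy generative sketch. The choices below ($K$, the weights, one-dimensional Gaussian sub-domains) are all hypothetical, made up purely for illustration; the sub-domain index is latent and only the instance is observed:

```python
import random

# Toy illustration: the training distribution is a mixture of K latent
# sub-domains. All concrete numbers here are made-up illustration values.
K = 3
pi = [0.5, 0.3, 0.2]        # mixture weights pi_i, summing to 1
means = [0.0, 5.0, -5.0]    # each P_i is a unit-variance Gaussian

def sample_instance(rng):
    # Pick a latent sub-domain i with probability pi_i, then draw x ~ P_i.
    i = rng.choices(range(K), weights=pi, k=1)[0]
    x = rng.gauss(means[i], 1.0)
    return x, i

rng = random.Random(0)
draws = [sample_instance(rng) for _ in range(10000)]
# Empirical sub-domain frequencies should approximate the weights pi.
freq = [sum(1 for _, i in draws if i == k) / len(draws) for k in range(K)]
```

In the paper's setting neither $K$ nor the sub-domain index is observed, which is exactly what the latent distribution characterization step has to recover.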
Moreover, even for data from one single person, different segments of one sequence may follow different distributions. In a nutshell, in reality, there often exist several sub-domains in one time series dataset. To ensure good generalization performance on the test dataset, it is important to learn distribution-invariant, or domain-invariant, representations from the training dataset by characterizing its latent distributions. These latent distributions may contain both benign and malignant knowledge that influences generalization on the target dataset. In Figure 1(c), we assume the source domain contains two sub-domains (circle and plus points). Directly learning from the entire source domain by treating it as one distribution may generate the black margin. Although green star data points and red star points can be classified easily, red star data points are misclassified to the green class when predicting on the out-of-distribution domain (star points) with the learned model. While other methods fail, our method can characterize the latent distributions, as will be introduced later. 2.3 DIVERSIFY . In this paper, we propose DIVERSIFY to learn generalized representations for time series classification. The core of DIVERSIFY is to characterize the latent distributions in a time series dataset and then to minimize the distribution divergence between each pair of them. To characterize the diverse information for better generalization, DIVERSIFY utilizes an iterative process: it first obtains the worst-case distribution scenario from a given dataset, then bridges the distribution gaps between each pair of latent distributions. Why the worst-case scenario? We argue that the worst-case scenario maximally preserves the diverse information of each latent distribution, thus benefiting generalization. Figure 2 describes the main procedures of DIVERSIFY: 1 .
Pre-processing: this step adopts the sliding window to split the entire training dataset into fixed-size windows. We believe that the data from one window is the smallest domain unit. 2 . Fine-grained feature update: this step updates the feature extractor using the proposed pseudo domain-class labels as supervision. 3 . Latent distribution characterization: this step aims to identify the domain label for each instance to acquire the latent distribution information. It tries to maximize the gaps between different distributions to enlarge diversity. 4 . Domain-invariant representation learning: this step utilizes pseudo domain labels from the last step to learn domain-invariant representations and train a generalizable model. Note that steps 2∼4 are iterative, as shown in Figure 2. Now we elaborate on their details. Fine-grained Feature Update Before we characterize the latent distributions, we perform fine-grained feature update to obtain useful representations. As shown in Figure 2 (blue), in order to fully utilize the knowledge contained in the domains and classes, we propose a new concept: the pseudo domain-class label, which serves as the supervision for the feature extractor. This ensures that the feature extractor can learn fine-grained information which benefits the later steps. At the first iteration, there is no domain label d′ and we simply initialize d′ = 0 for all samples. We treat each (domain, class) pair as a new class, and set its label to $s \in \{1, 2, \cdots, S\}$, where $S = K \times C$ and $K$ is the pre-defined number of latent distributions, which can be tuned in experiments. We perform the following pseudo domain-class label assignment to get discrete values for supervision: $s = d' \times C + y$. Let $h_f^{(2)}, h_b^{(2)}, h_c^{(2)}$ be the feature extractor, bottleneck, and classifier, respectively (we use the superscript to denote the step number); then the supervised loss is computed using the cross-entropy loss $\ell$: $\mathcal{L}_{super} = \mathbb{E}_{(x,y)\sim P^{tr}}\,\ell(h_c^{(2)}(h_b^{(2)}(h_f^{(2)}(x))), s)$. (1) Latent Distribution Characterization This step characterizes the latent distributions contained in one dataset. As shown in Figure 2 (green), we employ an adversarial training strategy to disentangle the domain labels from the class labels. However, no actual domain labels are provided, which hinders such disentanglement. Inspired by (Liang et al., 2020), we adopt a self-supervised pseudo-labeling strategy to obtain domain labels. Firstly, we attain the centroid for each domain with class-invariant features: $\tilde{\mu}_k = \frac{\sum_{x_i \in \mathcal{X}^{tr}} \delta_k(h_c^{(3)}(h_b^{(3)}(h_f^{(3)}(x_i))))\, h_b^{(3)}(h_f^{(3)}(x_i))}{\sum_{x_i \in \mathcal{X}^{tr}} \delta_k(h_c^{(3)}(h_b^{(3)}(h_f^{(3)}(x_i))))}$, (2) where $h_f^{(3)}, h_b^{(3)}, h_c^{(3)}$ are the feature extractor, bottleneck, and classifier, respectively. $\tilde{\mu}_k$ is the initial centroid of the $k$-th latent sub-domain, and $\delta_k$ is the $k$-th element of the soft-max output of the logits. Then, we obtain the pseudo domain labels via the nearest centroid classifier using a distance function $D$: $\tilde{d}'_i = \arg\min_k D(h_b^{(3)}(h_f^{(3)}(x_i)), \tilde{\mu}_k)$. (3) Then, we recompute the centroids based on the new pseudo labels and obtain the updated pseudo domain labels: $\mu_k = \frac{\sum_{x_i \in \mathcal{X}^{tr}} \mathbb{I}(\tilde{d}'_i = k)\, h_b^{(3)}(h_f^{(3)}(x_i))}{\sum_{x_i \in \mathcal{X}^{tr}} \mathbb{I}(\tilde{d}'_i = k)}$, $d'_i = \arg\min_k D(h_b^{(3)}(h_f^{(3)}(x_i)), \mu_k)$, (4) where $\mathbb{I}(a) = 1$ when $a$ is true, and 0 otherwise.
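A small numpy sketch of the pseudo-labeling in Eqs. (2)-(4), using synthetic features and a hand-made stand-in for the classifier's soft-max outputs; the networks $h_f, h_b, h_c$ themselves are not modeled here, and all the toy data are assumptions for illustration:

```python
import numpy as np

# Stand-ins: two synthetic feature clusters play the role of h_b(h_f(x)),
# and a hand-made logit function plays the role of the classifier h_c so
# that the soft weights delta are informative about the clusters.
rng = np.random.default_rng(0)
K, half, dim = 2, 50, 4
feats = np.vstack([rng.normal(0.0, 1.0, (half, dim)),   # toy sub-domain A
                   rng.normal(4.0, 1.0, (half, dim))])  # toy sub-domain B
logits = np.column_stack([-feats.sum(1), feats.sum(1)])
delta = np.exp(logits - logits.max(1, keepdims=True))
delta /= delta.sum(1, keepdims=True)                    # soft-max outputs

# Eq. (2): soft-weighted initial centroids mu~_k.
mu0 = (delta.T @ feats) / delta.sum(0)[:, None]
# Eq. (3): initial pseudo domain labels via nearest centroid (Euclidean D).
d0 = np.linalg.norm(feats[:, None, :] - mu0[None], axis=2).argmin(1)
# Eq. (4): recompute hard centroids and reassign (guard empty clusters).
mu = np.vstack([feats[d0 == k].mean(0) if (d0 == k).any() else mu0[k]
                for k in range(K)])
d = np.linalg.norm(feats[:, None, :] - mu[None], axis=2).argmin(1)
```

With well-separated toy clusters, the hard reassignment in Eq. (4) recovers the two latent sub-domains; in the paper the same two-stage assignment runs on learned bottleneck features instead.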
After we obtain d′, we can compute the objective of step 3: $\mathcal{L}_{self} + \mathcal{L}_{cls} = \mathbb{E}_{(x,y)\sim P^{tr}}\left[\ell(h_c^{(3)}(h_b^{(3)}(h_f^{(3)}(x))), d') + \ell(h_{adv}^{(3)}(R_{\lambda_1}(h_b^{(3)}(h_f^{(3)}(x)))), y)\right]$, (5) where $h_{adv}^{(3)}$ is the discriminator for step 3, consisting of several linear layers and one classification layer, and $R_{\lambda_1}$ is the gradient reversal layer with hyperparameter $\lambda_1$ (Ganin et al., 2016). After this step ends, we obtain the pseudo domain label d′ for each x. Domain-invariant Representation Learning After obtaining the latent distributions, we learn domain-invariant representations for generalization. In fact, this step (Figure 2, purple) is simple: we borrow the idea from DANN (Ganin et al., 2016) and directly use adversarial training to update the classification loss $\mathcal{L}_{cls}$ and the domain classifier loss $\mathcal{L}_{dom}$ using GRL: $\mathcal{L}_{cls} + \mathcal{L}_{dom} = \mathbb{E}_{(x,y)\sim P^{tr}}\left[\ell(h_c^{(4)}(h_b^{(4)}(h_f^{(4)}(x))), y) + \ell(h_{adv}^{(4)}(R_{\lambda_2}(h_b^{(4)}(h_f^{(4)}(x)))), d')\right]$, (6) where $\ell$ is the cross-entropy loss and $R_{\lambda_2}$ is the gradient reversal layer with hyperparameter $\lambda_2$ (Ganin et al., 2016). More details of GRL and adversarial training are presented in Appendix A.1. We repeat the above three steps until convergence or the maximum number of epochs is reached. The final model is selected via a validation dataset split from the source domain (Gulrajani & Lopez-Paz, 2021). For inference, we predict labels with the modules from the last step. | This work proposes a framework to learn a domain-invariant representation of time series. Using the idea of domain generalization, this paper removes the influence of varying domains/distributions across patients/datasets through pseudo domain classification. Although this kind of model has been studied extensively for images, it has received little attention for time series. | SP:086dabce4a95956102191b52c4268fa344fc794c
Stabilized Self-training with Negative Sampling on Few-labeled Graph Data | Specifically, we observe that existing GNNs suffer from an unstable training process on few-labeled graph data, resulting in inferior performance on node classification. Therefore, we propose an effective framework, Stabilized self-training with Negative sampling (SN), which is applicable to existing GNNs to stabilize the training process and enhance the training data, and consequently boost classification accuracy on graphs with few labeled data. In experiments, we apply our SN framework to two existing GNN base models (GCN and DAGNN) to get SNGCN and SNDAGNN, and evaluate the two methods against 13 existing solutions over 4 benchmark datasets. Extensive experiments show that the proposed SN framework is highly effective compared with existing solutions, especially under settings with very few labeled data. In particular, on the benchmark dataset Cora with only 1 labeled node per class, while GCN only has 44.6% accuracy, SNGCN achieves 62.5% accuracy, improving GCN by 17.9%; SNDAGNN has accuracy 66.4%, improving that of the base model DAGNN (59.8%) by 6.6%. 1 INTRODUCTION . A graph is an expressive data model, representing objects and the relationships between objects as nodes and edges, respectively. Graph data are ubiquitous, with a wide range of real-world applications, e.g., social network analysis (Qiu et al., 2018; Li & Goldwasser, 2019), traffic network prediction (Guo et al., 2019; Li et al., 2019), protein interface prediction (Fout et al., 2017), and recommendation systems (Fan et al., 2019; Yang et al., 2020a). Among these applications, an important task is to classify the nodes in a graph into various classes. However, a common and tough situation is the lack of sufficient labeled data, which are also expensive to collect.
To ease the situation , semi-supervised node classification on graphs has attracted much attention from both industry ( Qiu et al. , 2018 ; Li & Goldwasser , 2019 ) and academia ( Defferrard et al. , 2016 ; Hamilton et al. , 2017 ; Velickovic et al. , 2018 ; Liu et al. , 2020 ; Li et al. , 2018 ; Klicpera et al. , 2019 ) . It aims to leverage a small amount of labeled nodes and additionally a large amount of unlabeled nodes in a graph to train an accurate classifier . There exists a collection of graph neural networks for semi-supervised node classification ( Kipf & Welling , 2017 ; Velickovic et al. , 2018 ; Monti et al. , 2017 ; Hamilton et al. , 2017 ; Klicpera et al. , 2019 ; Liu et al. , 2020 ) . For instance , Graph convolution networks ( GCNs ) rely on a message passing scheme called graph convolution that aggregates the neighborhood information of a node , including node features and graph topology , to learn node representations , which can then be used in downstream classification tasks ( Kipf & Welling , 2017 ) . Despite the great success of GCNs , under the extreme cases when very few labels are given ( e.g. , only one labeled node per class ) , the shallow GCN architecture , typically with two layers ( Kipf & Welling , 2017 ) , can not effectively propagate the training labels over the input graph , leading to inferior performance . In particular , as shown in our experiments , on a benchmark dataset Cora with 1 labeled node per class ( Cora-1 ) , GCN is even less accurate than some unsupervised methods , such as DGI ( Velickovic et al. , 2019 ) and G2G ( Bojchevski & Günnemann , 2018 ) . Recently , several latest studies try to improve classification accuracy by designing deeper GNN architectures , e.g. , DAGNN ( Liu et al. , 2020 ) , which also address the over-smoothing issue identified in ( Xu et al. , 2018 ; Li et al. , 2018 ; Chen et al. , 2020a ) . 
However, these deep GNNs are still not directly designed to tackle the scarcity of labeled data, especially when only very few labels are available. After conducting an in-depth study, we have an important finding: existing GNNs suffer from an unstable training process when labeled nodes are few. In particular, on the Cora dataset with 7 classes, for each run, we randomly select 1 labeled node per class as the training data for both GCN and DAGNN, and repeat 100 runs with 300 epochs per run, to get the average percentage of predicted labels per class at each epoch, together with the standard deviation. The statistical results of GCN and DAGNN are shown in Figures 1(a) and 1(b), respectively. The x-axis is the epoch from 0 to 300, and the y-axis is the percentage of a class in the predicted node labels. There are 7 colored lines representing the average percentage of the predicted labels of the respective classes as the epoch increases. The dashed lines are the ground-truth percentage of each class in the Cora dataset. The shaded areas in colors represent the standard deviation. Observe that in Figure 1(a), GCN has high variance across different runs when predicting node labels, and the variance remains large at late epochs, e.g., 300, which indicates that GCN is quite unstable across different runs with 1 training label per class sampled randomly, leading to inferior classification accuracy as illustrated in our experiments. Moreover, as shown in Figure 1(b), DAGNN also suffers from an unstable training process. The variance of DAGNN is relatively smaller than that of GCN, which provides a hint about why DAGNN performs better than GCN. Nevertheless, both GCN and DAGNN yield unstable training processes with large variance.
Since there is only 1 labeled node per class in Cora-1, at different runs, the randomly sampled training nodes can heavily influence the message passing process in GCN and DAGNN, depending on the connectivity of the training nodes to their neighborhoods over the graph topology, which results in the unstable training process observed above. To address the unstable training process of existing GNNs when only very few labeled data are available, we propose a framework, Stabilized self-training with Negative sampling (SN), which is readily applicable to existing GNNs to improve classification accuracy via a stabilized training process. In the proposed SN framework, at each epoch, we select a set of nodes whose predicted labels have high confidence as pseudo labels, and add these pseudo labels into the training data to enhance the training of the next epoch. To tackle the instability of existing GNNs, we develop a stabilizing technique in self-training to balance the training. We then design a negative sampling regularization technique over pseudo labels to further improve node classification accuracy. In experiments, we apply our SN framework to GCN and DAGNN, denoted as SNGCN and SNDAGNN, respectively. Figures 1(c) and 1(d) report the average percentage and standard deviation of the predicted labels per class per epoch of SNGCN and SNDAGNN on Cora-1, respectively. With our stabilized self-training technique, the variance of SNGCN in Figure 1(c) decreases quickly and becomes stable as the epoch increases, compared with Figure 1(a) of GCN. SNDAGNN is also more stable than DAGNN, as shown in Figures 1(d) and 1(b), respectively. As reported later in experiments, with the proposed SN framework, SNGCN achieves 62.5% node classification accuracy on Cora-1, significantly improving GCN (44.6%) by 17.9%, and SNDAGNN obtains 66.4% accuracy on Cora-1, outperforming DAGNN (59.8%) by a substantial margin of 6.6%.
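The general idea of confidence-based pseudo-label selection can be sketched as follows. This is only a generic illustration with made-up toy probabilities, not the paper's exact SN stabilizer or its negative-sampling regularizer, which are defined later; the class-balanced cap per class stands in for the "balance the training" idea:

```python
import numpy as np

def select_pseudo_labels(probs, per_class):
    """Pick the most confident predictions per class as pseudo labels.

    probs: (n_unlabeled, C) soft-max outputs of the current model.
    Returns (node_ids, labels) for the selected pseudo-labeled nodes.
    """
    preds = probs.argmax(1)
    conf = probs.max(1)
    ids, labels = [], []
    for c in range(probs.shape[1]):
        cand = np.where(preds == c)[0]
        top = cand[np.argsort(-conf[cand])][:per_class]  # most confident first
        ids.extend(top.tolist())
        labels.extend([c] * len(top))
    return np.array(ids), np.array(labels)

# Toy soft-max outputs for 4 unlabeled nodes and 2 classes.
probs = np.array([[0.90, 0.10],
                  [0.60, 0.40],
                  [0.20, 0.80],
                  [0.45, 0.55]])
ids, labels = select_pseudo_labels(probs, per_class=1)  # picks nodes 0 and 2
```

The selected (node, label) pairs would then be appended to the training set for the next epoch.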
We conduct extensive experiments on 4 benchmark datasets and compare with 13 existing solutions to evaluate the performance of the proposed SN framework. Experimental results demonstrate that our SN framework is able to significantly improve the classification accuracy of existing GNNs when only few labels are available, and is also effective when training labels are sufficient. 2 RELATED WORK . In the literature, there are two directions to address the scarcity of labeled data for semi-supervised node classification: (i) explore multi-hop graph topological features to propagate the labels in L over the input graph, e.g., GCN (Kipf & Welling, 2017) and DAGNN (Liu et al., 2020); (ii) enhance the training data by pseudo labels (self-training) (Li et al., 2018) or augment the graph data with new edges and features (Kong et al., 2020; Zhao et al., 2021). Note that these two directions are not mutually exclusive, but can work together on few-labeled graph data. Here we review the existing studies that are most relevant to this paper. GNNs . There exists a large collection of GNNs, such as GCN, DAGNN, GAT, MoNet, and APPNP (Kipf & Welling, 2017; Liu et al., 2020; Bruna et al., 2014; Henaff et al., 2015; Defferrard et al., 2016; Velickovic et al., 2018; Monti et al., 2017; Chen et al., 2020b; Klicpera et al., 2019). We introduce the details of GCN (Kipf & Welling, 2017) and DAGNN (Liu et al., 2020) here. GCN learns the representation of each node by iteratively aggregating the representations of its neighbors. Specifically, GCN consists of $k > 0$ layers, each with the same propagation rule defined as follows. At the $\ell$-th layer, the representations $H^{(\ell-1)}$ of the previous layer are aggregated to get $H^{(\ell)}$: $H^{(\ell)} = \sigma(\hat{A} H^{(\ell-1)} W^{(\ell)}), \; \ell = 1, 2, \dots, k$. (1) $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ is the graph Laplacian, where $\tilde{A} = A + I$ is the adjacency matrix of $G$ after adding self-loops ($I$ is the identity matrix) and $\tilde{D}$ is a diagonal matrix with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$. $W^{(\ell)}$ is a trainable weight matrix of the $\ell$-th layer, and $\sigma$ is a nonlinear activation function. Initially, $H^{(0)} = X$. Note that GCN usually achieves superior performance with 1-layer or 2-layer models (Kipf & Welling, 2017). When applying multiple layers to leverage large receptive fields, the performance degrades severely, due to the over-smoothing issue identified in (Xu et al., 2018; Li et al., 2018; Chen et al., 2020a). A recent deep GNN architecture, DAGNN, tackles the over-smoothing issue and achieves state-of-the-art results by decoupling representation transformation and propagation in GNNs (Liu et al., 2020). It then utilizes an adaptive adjustment mechanism to balance the information from local and global neighborhoods of each node. Specifically, the mathematical expression of DAGNN is as follows. DAGNN uses a learnable parameter $s \in \mathbb{R}^{c \times 1}$ to adjust the weight of embeddings at different propagation levels (from 1 to $k$). It processes data in the following way: $Z = \mathrm{MLP}(X) \in \mathbb{R}^{n \times c}$; $H_\ell = \hat{A}^{\ell} \cdot Z \in \mathbb{R}^{n \times c}$, $\ell = 1, 2, \dots, k$; $S_\ell = H_\ell \cdot s \in \mathbb{R}^{n \times 1}$, $\ell = 1, 2, \dots, k$; $\hat{S}_\ell = [S_\ell, S_\ell, \dots, S_\ell] \in \mathbb{R}^{n \times c}$, $\ell = 1, 2, \dots, k$; $X_{out} = \mathrm{softmax}(\sum_{\ell=1}^{k} H_\ell \odot \hat{S}_\ell)$, where $\hat{A}^{\ell}$ is the $\ell$-th power of matrix $\hat{A}$, $\odot$ is the Hadamard product, $\cdot$ is the dot product, MLP is a multilayer perceptron, and the softmax operation is over the second dimension. Data Augmentation . Another way to address the situation of limited labeled data is to add pseudo labels to the training dataset by self-training (Li et al., 2018), or to enhance the graph data by adding new edges and features (Zhao et al., 2021; Kong et al., 2020). Self-training itself is a general methodology (Scudder, 1965) and has been used in various other domains.
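The GCN propagation rule in Eq. (1) above can be sketched directly in numpy; the random weight matrices here stand in for trained parameters, and the tiny path graph is a made-up example:

```python
import numpy as np

def normalized_adjacency(A):
    # A_hat = D~^{-1/2} (A + I) D~^{-1/2}, with D~_ii = sum_j A~_ij.
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, weights):
    # H^(l) = sigma(A_hat @ H^(l-1) @ W^(l)), with sigma = ReLU, H^(0) = X.
    H = X
    A_hat = normalized_adjacency(A)
    for W in weights:
        H = np.maximum(A_hat @ H @ W, 0.0)
    return H

# Toy 3-node path graph 0-1-2 with 2-dim features and a 2-layer GCN.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3, 2)
rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 2)), rng.normal(size=(2, 2))]
H = gcn_forward(A, X, weights)
```

Note that each extra layer widens the receptive field by one hop, which is why a 2-layer GCN cannot propagate a single label far across the graph, the limitation discussed in the introduction.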
It has been used in word-sense disambiguation (Yarowsky, 1995; Hearst, 1991), bootstrapping for information extraction and learning subjective nouns (Riloff & Jones, 1999), and text classification (Nigam et al., 2000). Zhou et al. (2012) suggest that selecting informative unlabeled data using a guided search algorithm can significantly improve performance over the standard self-training framework. Buchnik & Cohen (2018) mainly consider self-training for diffusion-based techniques. Recently, self-training has been adopted for semi-supervised tasks on graphs. For instance, Li et al. (2018) propose self-training and co-training techniques for GCN. Their self-training work selects the top-k confident predicted labels as pseudo labels. The co-training technique co-trains a GCN with a random walk model to handle few-labeled data. Compared with existing self-training work, our framework is different, as shown later. In particular, our framework has a different strategy to select pseudo labels and also has a stabilizer to address the deficiencies of existing GNNs; moreover, we propose a negative sampling regularization technique to further boost accuracy. Besides, in existing work, if a node is selected as a pseudo label, it will never be removed, even if the pseudo label becomes obviously wrong in later epochs. In contrast, in our framework, we update pseudo labels in each epoch to avoid this issue. There also exist studies that augment the original graph data, which is different from self-training. For instance, Zhao et al. (2021) utilize link prediction to promote intra-class edges and demote inter-class edges in a given graph. Kong et al. (2020) iteratively augment node features with gradient-based adversarial perturbations to enhance the performance of GNNs. | This paper proposes a self-training method with negative sampling for node classification on few-labeled graph data.
The proposed method applies data augmentation (i.e., pseudo label) and negative sampling regularization to augment node classification model. Experiments are conducted to show that the proposed method outperforms some baseline methods. | SP:c4417146f33ba5f65ac5590b08380804beeafb89 |
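The contrast drawn in the related-work discussion above, fixed pseudo labels versus pseudo labels recomputed every epoch, can be sketched with a trivial stand-in predictor. The predictor, its confidences, and the threshold are all hypothetical; no GNN is involved, only the bookkeeping:

```python
def refresh_pseudo_labels(predict, unlabeled, threshold):
    # Rebuild the pseudo-label set from scratch each epoch, so a label whose
    # confidence later drops below the threshold is dropped rather than kept.
    return {u: lbl for u in unlabeled
            for lbl, conf in [predict(u)] if conf >= threshold}

def toy_predict_factory(epoch):
    # Hypothetical predictor whose confidence on node 1 changes across epochs.
    def predict(u):
        if u == 1:
            return ("A", 0.95 if epoch == 0 else 0.40)  # confident, then not
        return ("B", 0.90)
    return predict

unlabeled = [0, 1, 2]
history = [refresh_pseudo_labels(toy_predict_factory(e), unlabeled, 0.8)
           for e in range(2)]
# Epoch 0 keeps node 1 as pseudo-labeled "A"; epoch 1 drops it.
```

In classical self-training the epoch-0 entry for node 1 would persist forever; the per-epoch refresh is what lets a stale pseudo label disappear.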
Stabilized Self-training with Negative Sampling on Few-labeled Graph Data | Specifically, we observe that existing GNNs suffer from an unstable training process on few-labeled graph data, resulting in inferior performance on node classification. Therefore, we propose an effective framework, Stabilized self-training with Negative sampling (SN), which is applicable to existing GNNs to stabilize the training process and enhance the training data, and consequently boost classification accuracy on graphs with few labeled data. In experiments, we apply our SN framework to two existing GNN base models (GCN and DAGNN) to get SNGCN and SNDAGNN, and evaluate the two methods against 13 existing solutions over 4 benchmark datasets. Extensive experiments show that the proposed SN framework is highly effective compared with existing solutions, especially under settings with very few labeled data. In particular, on the benchmark dataset Cora with only 1 labeled node per class, while GCN only has 44.6% accuracy, SNGCN achieves 62.5% accuracy, improving GCN by 17.9%; SNDAGNN has accuracy 66.4%, improving that of the base model DAGNN (59.8%) by 6.6%. 1 INTRODUCTION . A graph is an expressive data model, representing objects and the relationships between objects as nodes and edges, respectively. Graph data are ubiquitous, with a wide range of real-world applications, e.g., social network analysis (Qiu et al., 2018; Li & Goldwasser, 2019), traffic network prediction (Guo et al., 2019; Li et al., 2019), protein interface prediction (Fout et al., 2017), and recommendation systems (Fan et al., 2019; Yang et al., 2020a). Among these applications, an important task is to classify the nodes in a graph into various classes. However, a common and tough situation is the lack of sufficient labeled data, which are also expensive to collect.
To ease the situation , semi-supervised node classification on graphs has attracted much attention from both industry ( Qiu et al. , 2018 ; Li & Goldwasser , 2019 ) and academia ( Defferrard et al. , 2016 ; Hamilton et al. , 2017 ; Velickovic et al. , 2018 ; Liu et al. , 2020 ; Li et al. , 2018 ; Klicpera et al. , 2019 ) . It aims to leverage a small amount of labeled nodes and additionally a large amount of unlabeled nodes in a graph to train an accurate classifier . There exists a collection of graph neural networks for semi-supervised node classification ( Kipf & Welling , 2017 ; Velickovic et al. , 2018 ; Monti et al. , 2017 ; Hamilton et al. , 2017 ; Klicpera et al. , 2019 ; Liu et al. , 2020 ) . For instance , Graph convolution networks ( GCNs ) rely on a message passing scheme called graph convolution that aggregates the neighborhood information of a node , including node features and graph topology , to learn node representations , which can then be used in downstream classification tasks ( Kipf & Welling , 2017 ) . Despite the great success of GCNs , under the extreme cases when very few labels are given ( e.g. , only one labeled node per class ) , the shallow GCN architecture , typically with two layers ( Kipf & Welling , 2017 ) , can not effectively propagate the training labels over the input graph , leading to inferior performance . In particular , as shown in our experiments , on a benchmark dataset Cora with 1 labeled node per class ( Cora-1 ) , GCN is even less accurate than some unsupervised methods , such as DGI ( Velickovic et al. , 2019 ) and G2G ( Bojchevski & Günnemann , 2018 ) . Recently , several latest studies try to improve classification accuracy by designing deeper GNN architectures , e.g. , DAGNN ( Liu et al. , 2020 ) , which also address the over-smoothing issue identified in ( Xu et al. , 2018 ; Li et al. , 2018 ; Chen et al. , 2020a ) . 
However, these deep GNNs are still not directly designed to tackle the scarcity of labeled data, especially when only very few labels are available. After conducting an in-depth study, we have an important finding: existing GNNs suffer from an unstable training process when labeled nodes are few. In particular, on the Cora dataset with 7 classes, for each run, we randomly select 1 labeled node per class as the training data for both GCN and DAGNN, and repeat 100 runs with 300 epochs per run, to get the average percentage of predicted labels per class at each epoch, together with the standard deviation. The statistical results of GCN and DAGNN are shown in Figures 1(a) and 1(b), respectively. The x-axis is the epoch from 0 to 300, and the y-axis is the percentage of a class in the predicted node labels. There are 7 colored lines representing the average percentage of the predicted labels of the respective classes as the epoch increases. The dashed lines are the ground-truth percentage of each class in the Cora dataset. The shaded areas in colors represent the standard deviation. Observe that in Figure 1(a), GCN has high variance across different runs when predicting node labels, and the variance remains large at late epochs, e.g., 300, which indicates that GCN is quite unstable across different runs with 1 training label per class sampled randomly, leading to inferior classification accuracy as illustrated in our experiments. Moreover, as shown in Figure 1(b), DAGNN also suffers from an unstable training process. The variance of DAGNN is relatively smaller than that of GCN, which provides a hint about why DAGNN performs better than GCN. Nevertheless, both GCN and DAGNN yield unstable training processes with large variance.
Since there is only 1 labeled node per class in Cora-1, across different runs the randomly sampled training nodes can heavily influence the message passing process in GCN and DAGNN, depending on the connectivity of the training nodes to their neighborhoods over the graph topology, which results in the unstable training process observed above. To address the unstable training process of existing GNNs when only very few labeled data are available, we propose a framework, Stabilized self-training with Negative sampling (SN), which is readily applicable to existing GNNs to improve classification accuracy via a stabilized training process. In the proposed SN framework, at each epoch, we select a set of nodes whose predicted labels have high confidence as pseudo labels and add such pseudo labels into the training data to enhance the training of the next epoch. To tackle the instability of existing GNNs, we develop a stabilizing technique in self-training to balance the training. We then design a negative sampling regularization technique over pseudo labels to further improve node classification accuracy. In experiments, we apply our SN framework to GCN and DAGNN, denoted as SNGCN and SNDAGNN respectively. Figures 1(c) and 1(d) report the average percentage and standard deviation of the predicted labels per class per epoch of SNGCN and SNDAGNN on Cora-1, respectively. With our stabilized self-training technique, the variance of SNGCN in Figure 1(c) clearly decreases quickly and becomes stable as the epoch increases, compared with Figure 1(a) of GCN. SNDAGNN is also more stable than DAGNN, as shown in Figures 1(d) and 1(b) respectively. As reported later in the experiments, with the proposed SN framework, SNGCN achieves 62.5% node classification accuracy on Cora-1, significantly improving GCN (44.6%) by 17.9%, and SNDAGNN obtains 66.4% accuracy on Cora-1 and outperforms DAGNN (59.8%) by a substantial margin of 6.6%.
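The per-epoch pseudo-label step described above can be sketched as follows. The plain confidence threshold used here is an assumption for illustration; the paper's actual selection strategy and stabilizer differ. Note the selection is recomputed from scratch every epoch, mirroring the point that pseudo labels are updated rather than kept permanently.

```python
import numpy as np

def select_pseudo_labels(probs, labeled_mask, threshold=0.9):
    """probs: (n_nodes, n_classes) softmax outputs; labeled_mask: bool array
    marking truly labeled nodes. Returns (node_ids, labels) for unlabeled
    nodes whose top prediction exceeds `threshold`."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    candidates = (~labeled_mask) & (confidence >= threshold)
    return np.flatnonzero(candidates), labels[candidates]

# Toy example: node 0 is labeled, node 1 is low-confidence, node 2 qualifies.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.05, 0.95]])
labeled_mask = np.array([True, False, False])
ids, pl = select_pseudo_labels(probs, labeled_mask)
```

At each epoch the returned `(ids, pl)` pairs would be appended to the training set for the next epoch's loss.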
We conduct extensive experiments on 4 benchmark datasets, and compare with 13 existing solutions, to evaluate the performance of the proposed SN framework. Experimental results demonstrate that our SN framework is able to significantly improve the classification accuracy of existing GNNs when only few labels are available, and is also effective when training labels are sufficient. 2 RELATED WORK. In the literature, there are two directions to address the scarcity of labeled data for semi-supervised node classification: (i) explore multi-hop graph topological features to propagate the labels in L over the input graph, e.g., GCN (Kipf & Welling, 2017) and DAGNN (Liu et al., 2020); (ii) enhance the training data by pseudo labels (self-training) (Li et al., 2018) or augment the graph data with new edges and features (Kong et al., 2020; Zhao et al., 2021). Note that these two directions are not mutually exclusive, but can work together on few-labeled graph data. Here we review the existing studies that are most relevant to this paper. GNNs. There exists a large collection of GNNs, such as GCN, DAGNN, GAT, MoNet, and APPNP (Kipf & Welling, 2017; Liu et al., 2020; Bruna et al., 2014; Henaff et al., 2015; Defferrard et al., 2016; Velickovic et al., 2018; Monti et al., 2017; Chen et al., 2020b; Klicpera et al., 2019). We introduce the details of GCN (Kipf & Welling, 2017) and DAGNN (Liu et al., 2020) here. GCN learns the representation of each node by iteratively aggregating the representations of its neighbors. Specifically, GCN consists of k > 0 layers, each with the same propagation rule defined as follows. At the ℓ-th layer, the representations H^(ℓ−1) of the previous layer are aggregated to obtain H^(ℓ):

H^(ℓ) = σ(Â H^(ℓ−1) W^(ℓ)), ℓ = 1, 2, ..., k.  (1)

Here Â = D̃^{−1/2} Ã D̃^{−1/2} is the symmetrically normalized adjacency matrix, where Ã = A + I is the adjacency matrix of G after adding self-loops (I is the identity matrix) and D̃ is a diagonal matrix with D̃_ii = Σ_j Ã_ij. W^(ℓ) is a trainable weight matrix of the ℓ-th layer, and σ is a nonlinear activation function. Initially, H^(0) = X. Note that GCN usually achieves superior performance with 1-layer or 2-layer models (Kipf & Welling, 2017). When applying multiple layers to leverage large receptive fields, the performance degrades severely, due to the over-smoothing issue identified in (Xu et al., 2018; Li et al., 2018; Chen et al., 2020a). A recent deep GNN architecture, DAGNN, tackles the over-smoothing issue and achieves state-of-the-art results by decoupling representation transformation and propagation in GNNs (Liu et al., 2020). It then utilizes an adaptive adjustment mechanism to balance the information from the local and global neighborhoods of each node. Specifically, the mathematical expression of DAGNN is as follows. DAGNN uses a learnable parameter s ∈ R^{c×1} to adjust the weight of embeddings at different propagation levels (from 1 to k). It processes data in the following way:

Z = MLP(X) ∈ R^{n×c},
H_ℓ = Â^ℓ Z ∈ R^{n×c}, ℓ = 1, 2, ..., k,
S_ℓ = H_ℓ · s ∈ R^{n×1}, ℓ = 1, 2, ..., k,
Ŝ_ℓ = [S_ℓ, S_ℓ, ..., S_ℓ] ∈ R^{n×c}, ℓ = 1, 2, ..., k,
X_out = softmax(Σ_{ℓ=1}^{k} H_ℓ ⊙ Ŝ_ℓ),

where Â^ℓ is the ℓ-th power of the matrix Â, ⊙ is the Hadamard product, · is the dot product, MLP is a multilayer perceptron, and the softmax operates on the second dimension. Data Augmentation. Another way to address the situation of limited labeled data is to add pseudo labels to the training dataset by self-training (Li et al., 2018), or to enhance the graph data by adding new edges and features (Zhao et al., 2021; Kong et al., 2020). Self-training itself is a general methodology (Scudder, 1965) and has been used in various other domains.
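A minimal dense NumPy sketch of the two propagation rules above: GCN's layer update and DAGNN's adaptive readout. Real implementations use sparse operations and trained weights; the helper names and toy inputs here are illustrative assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    """Â = D̃^{-1/2} Ã D̃^{-1/2} with Ã = A + I, as in the GCN propagation rule."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat, H, W):
    """One step H^(ℓ) = σ(Â H^(ℓ-1) W^(ℓ)), taking σ = ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

def dagnn_forward(A_hat, X, mlp, s, k):
    """DAGNN readout: Z = MLP(X); H_ℓ = Â^ℓ Z; per-level scores S_ℓ = H_ℓ s,
    broadcast to Ŝ_ℓ; output softmax(Σ_ℓ H_ℓ ⊙ Ŝ_ℓ) over classes."""
    Z = mlp(X)                       # (n, c)
    out = np.zeros_like(Z)
    H = Z
    for _ in range(k):
        H = A_hat @ H                # H_ℓ = Â^ℓ Z, computed iteratively
        S = H @ s                    # (n, 1) per-level, per-node score
        out += H * S                 # Hadamard product with broadcast Ŝ_ℓ
    e = np.exp(out - out.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy 2-node graph with a single edge, identity "MLP" and all-ones s.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
A_hat = normalized_adjacency(A)
H1 = gcn_layer(A_hat, np.eye(2), np.eye(2))
P = dagnn_forward(A_hat, np.eye(2), lambda X: X, np.ones((2, 1)), k=2)
```

Each row of `P` is a probability distribution over the c classes, matching the softmax over the second dimension described above.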
It is used in word-sense disambiguation (Yarowsky, 1995; Hearst, 1991), bootstrapping for information extraction and learning subjective nouns (Riloff & Jones, 1999), and text classification (Nigam et al., 2000). Zhou et al. (2012) suggest that selecting informative unlabeled data using a guided search algorithm can significantly improve performance over the standard self-training framework. Buchnik & Cohen (2018) mainly consider self-training for diffusion-based techniques. Recently, self-training has been adopted for semi-supervised tasks on graphs. For instance, Li et al. (2018) propose self-training and co-training techniques for GCN. Their self-training work selects the top-k confident predicted labels as pseudo labels. The co-training technique co-trains a GCN with a random walk model to handle few-labeled data. Compared with existing self-training work, our framework is different, as shown later. In particular, our framework has a different strategy to select pseudo labels and also has a stabilizer to address the deficiencies of existing GNNs; moreover, we propose a negative sampling regularization technique to further boost accuracy. Besides, in existing work, once a node is selected as a pseudo label, it is never removed, even if the pseudo label becomes obviously wrong in later epochs. In contrast, in our framework, we update the pseudo labels in each epoch to avoid this issue. There also exist studies that augment the original graph data, which is different from self-training. For instance, Zhao et al. (2021) utilize link prediction to promote intra-class edges and demote inter-class edges in a given graph. Kong et al. (2020) iteratively augment node features with gradient-based adversarial perturbations to enhance the performance of GNNs. | This paper discusses the *unstable* training procedure of graph neural networks when training data is extremely limited.
The authors propose using self-training and negative sampling to mitigate this issue. Specifically, the method uses high-confidence predictions of the model as training seeds and controls their importance by population. The results show improvements on several node classification datasets. | SP:c4417146f33ba5f65ac5590b08380804beeafb89
Stabilized Self-training with Negative Sampling on Few-labeled Graph Data | Specifically, we observe that existing GNNs suffer from an unstable training process on few-labeled graph data, resulting in inferior performance on node classification. Therefore, we propose an effective framework, Stabilized self-training with Negative sampling (SN), which is applicable to existing GNNs to stabilize the training process and enhance the training data, and consequently boost classification accuracy on graphs with few labeled data. In experiments, we apply our SN framework to two existing GNN base models (GCN and DAGNN) to get SNGCN and SNDAGNN, and evaluate the two methods against 13 existing solutions over 4 benchmark datasets. Extensive experiments show that the proposed SN framework is highly effective compared with existing solutions, especially under settings with very few labeled data. In particular, on the benchmark dataset Cora with only 1 labeled node per class, while GCN only has 44.6% accuracy, SNGCN achieves 62.5% accuracy, improving GCN by 17.9%; SNDAGNN has 66.4% accuracy, improving that of the base model DAGNN (59.8%) by 6.6%. 1 INTRODUCTION. A graph is an expressive data model, representing objects and the relationships between objects as nodes and edges respectively. Graph data are ubiquitous, with a wide range of real-world applications, e.g., social network analysis (Qiu et al., 2018; Li & Goldwasser, 2019), traffic network prediction (Guo et al., 2019; Li et al., 2019), protein interface prediction (Fout et al., 2017), and recommendation systems (Fan et al., 2019; Yang et al., 2020a). Among these applications, an important task is to classify the nodes in a graph into various classes. However, a common difficulty is the lack of sufficient labeled data, which is also expensive to collect.
To ease the situation, semi-supervised node classification on graphs has attracted much attention from both industry (Qiu et al., 2018; Li & Goldwasser, 2019) and academia (Defferrard et al., 2016; Hamilton et al., 2017; Velickovic et al., 2018; Liu et al., 2020; Li et al., 2018; Klicpera et al., 2019). It aims to leverage a small number of labeled nodes, together with a large number of unlabeled nodes in a graph, to train an accurate classifier. There exists a collection of graph neural networks for semi-supervised node classification (Kipf & Welling, 2017; Velickovic et al., 2018; Monti et al., 2017; Hamilton et al., 2017; Klicpera et al., 2019; Liu et al., 2020). For instance, graph convolutional networks (GCNs) rely on a message passing scheme called graph convolution that aggregates the neighborhood information of a node, including node features and graph topology, to learn node representations, which can then be used in downstream classification tasks (Kipf & Welling, 2017). Despite the great success of GCNs, in extreme cases where very few labels are given (e.g., only one labeled node per class), the shallow GCN architecture, typically with two layers (Kipf & Welling, 2017), cannot effectively propagate the training labels over the input graph, leading to inferior performance. In particular, as shown in our experiments, on the benchmark dataset Cora with 1 labeled node per class (Cora-1), GCN is even less accurate than some unsupervised methods, such as DGI (Velickovic et al., 2019) and G2G (Bojchevski & Günnemann, 2018). Recently, several studies have tried to improve classification accuracy by designing deeper GNN architectures, e.g., DAGNN (Liu et al., 2020), which also address the over-smoothing issue identified in (Xu et al., 2018; Li et al., 2018; Chen et al., 2020a).
However, these deep GNNs are still not directly designed to tackle the scarcity of labeled data, especially when only very few labels are available. After conducting an in-depth study, we find that existing GNNs suffer from an unstable training process when labeled nodes are few. In particular, on the Cora dataset with 7 classes, for each run, we randomly select 1 labeled node per class as the training data for both GCN and DAGNN, and repeat 100 runs with 300 epochs per run, to obtain the average percentage of predicted labels per class at each epoch, together with the standard deviation. The statistical results of GCN and DAGNN are shown in Figures 1(a) and 1(b) respectively. The x-axis is the epoch from 0 to 300, and the y-axis is the percentage of a class in the predicted node labels. There are 7 colored lines representing the average percentage of the predicted labels of the respective classes as the epoch increases. The dashed lines are the ground-truth percentage of each class in the Cora dataset. The shaded areas in colors represent the standard deviation. Observe that in Figure 1(a), GCN has high variance across different runs when predicting node labels, and the variance remains large at late epochs, e.g., 300, which indicates that GCN is quite unstable across runs with 1 training label per class sampled randomly, leading to inferior classification accuracy as illustrated in our experiments. Moreover, as shown in Figure 1(b), DAGNN also suffers from an unstable training process. The variance of DAGNN is relatively smaller than that of GCN, which provides a hint as to why DAGNN performs better than GCN. Nevertheless, both GCN and DAGNN yield an unstable training process with large variance.
Since there is only 1 labeled node per class in Cora-1, across different runs the randomly sampled training nodes can heavily influence the message passing process in GCN and DAGNN, depending on the connectivity of the training nodes to their neighborhoods over the graph topology, which results in the unstable training process observed above. To address the unstable training process of existing GNNs when only very few labeled data are available, we propose a framework, Stabilized self-training with Negative sampling (SN), which is readily applicable to existing GNNs to improve classification accuracy via a stabilized training process. In the proposed SN framework, at each epoch, we select a set of nodes whose predicted labels have high confidence as pseudo labels and add such pseudo labels into the training data to enhance the training of the next epoch. To tackle the instability of existing GNNs, we develop a stabilizing technique in self-training to balance the training. We then design a negative sampling regularization technique over pseudo labels to further improve node classification accuracy. In experiments, we apply our SN framework to GCN and DAGNN, denoted as SNGCN and SNDAGNN respectively. Figures 1(c) and 1(d) report the average percentage and standard deviation of the predicted labels per class per epoch of SNGCN and SNDAGNN on Cora-1, respectively. With our stabilized self-training technique, the variance of SNGCN in Figure 1(c) clearly decreases quickly and becomes stable as the epoch increases, compared with Figure 1(a) of GCN. SNDAGNN is also more stable than DAGNN, as shown in Figures 1(d) and 1(b) respectively. As reported later in the experiments, with the proposed SN framework, SNGCN achieves 62.5% node classification accuracy on Cora-1, significantly improving GCN (44.6%) by 17.9%, and SNDAGNN obtains 66.4% accuracy on Cora-1 and outperforms DAGNN (59.8%) by a substantial margin of 6.6%.
We conduct extensive experiments on 4 benchmark datasets, and compare with 13 existing solutions, to evaluate the performance of the proposed SN framework. Experimental results demonstrate that our SN framework is able to significantly improve the classification accuracy of existing GNNs when only few labels are available, and is also effective when training labels are sufficient. 2 RELATED WORK. In the literature, there are two directions to address the scarcity of labeled data for semi-supervised node classification: (i) explore multi-hop graph topological features to propagate the labels in L over the input graph, e.g., GCN (Kipf & Welling, 2017) and DAGNN (Liu et al., 2020); (ii) enhance the training data by pseudo labels (self-training) (Li et al., 2018) or augment the graph data with new edges and features (Kong et al., 2020; Zhao et al., 2021). Note that these two directions are not mutually exclusive, but can work together on few-labeled graph data. Here we review the existing studies that are most relevant to this paper. GNNs. There exists a large collection of GNNs, such as GCN, DAGNN, GAT, MoNet, and APPNP (Kipf & Welling, 2017; Liu et al., 2020; Bruna et al., 2014; Henaff et al., 2015; Defferrard et al., 2016; Velickovic et al., 2018; Monti et al., 2017; Chen et al., 2020b; Klicpera et al., 2019). We introduce the details of GCN (Kipf & Welling, 2017) and DAGNN (Liu et al., 2020) here. GCN learns the representation of each node by iteratively aggregating the representations of its neighbors. Specifically, GCN consists of k > 0 layers, each with the same propagation rule defined as follows. At the ℓ-th layer, the representations H^(ℓ−1) of the previous layer are aggregated to obtain H^(ℓ):

H^(ℓ) = σ(Â H^(ℓ−1) W^(ℓ)), ℓ = 1, 2, ..., k.  (1)

Here Â = D̃^{−1/2} Ã D̃^{−1/2} is the symmetrically normalized adjacency matrix, where Ã = A + I is the adjacency matrix of G after adding self-loops (I is the identity matrix) and D̃ is a diagonal matrix with D̃_ii = Σ_j Ã_ij. W^(ℓ) is a trainable weight matrix of the ℓ-th layer, and σ is a nonlinear activation function. Initially, H^(0) = X. Note that GCN usually achieves superior performance with 1-layer or 2-layer models (Kipf & Welling, 2017). When applying multiple layers to leverage large receptive fields, the performance degrades severely, due to the over-smoothing issue identified in (Xu et al., 2018; Li et al., 2018; Chen et al., 2020a). A recent deep GNN architecture, DAGNN, tackles the over-smoothing issue and achieves state-of-the-art results by decoupling representation transformation and propagation in GNNs (Liu et al., 2020). It then utilizes an adaptive adjustment mechanism to balance the information from the local and global neighborhoods of each node. Specifically, the mathematical expression of DAGNN is as follows. DAGNN uses a learnable parameter s ∈ R^{c×1} to adjust the weight of embeddings at different propagation levels (from 1 to k). It processes data in the following way:

Z = MLP(X) ∈ R^{n×c},
H_ℓ = Â^ℓ Z ∈ R^{n×c}, ℓ = 1, 2, ..., k,
S_ℓ = H_ℓ · s ∈ R^{n×1}, ℓ = 1, 2, ..., k,
Ŝ_ℓ = [S_ℓ, S_ℓ, ..., S_ℓ] ∈ R^{n×c}, ℓ = 1, 2, ..., k,
X_out = softmax(Σ_{ℓ=1}^{k} H_ℓ ⊙ Ŝ_ℓ),

where Â^ℓ is the ℓ-th power of the matrix Â, ⊙ is the Hadamard product, · is the dot product, MLP is a multilayer perceptron, and the softmax operates on the second dimension. Data Augmentation. Another way to address the situation of limited labeled data is to add pseudo labels to the training dataset by self-training (Li et al., 2018), or to enhance the graph data by adding new edges and features (Zhao et al., 2021; Kong et al., 2020). Self-training itself is a general methodology (Scudder, 1965) and has been used in various other domains.
It is used in word-sense disambiguation (Yarowsky, 1995; Hearst, 1991), bootstrapping for information extraction and learning subjective nouns (Riloff & Jones, 1999), and text classification (Nigam et al., 2000). Zhou et al. (2012) suggest that selecting informative unlabeled data using a guided search algorithm can significantly improve performance over the standard self-training framework. Buchnik & Cohen (2018) mainly consider self-training for diffusion-based techniques. Recently, self-training has been adopted for semi-supervised tasks on graphs. For instance, Li et al. (2018) propose self-training and co-training techniques for GCN. Their self-training work selects the top-k confident predicted labels as pseudo labels. The co-training technique co-trains a GCN with a random walk model to handle few-labeled data. Compared with existing self-training work, our framework is different, as shown later. In particular, our framework has a different strategy to select pseudo labels and also has a stabilizer to address the deficiencies of existing GNNs; moreover, we propose a negative sampling regularization technique to further boost accuracy. Besides, in existing work, once a node is selected as a pseudo label, it is never removed, even if the pseudo label becomes obviously wrong in later epochs. In contrast, in our framework, we update the pseudo labels in each epoch to avoid this issue. There also exist studies that augment the original graph data, which is different from self-training. For instance, Zhao et al. (2021) utilize link prediction to promote intra-class edges and demote inter-class edges in a given graph. Kong et al. (2020) iteratively augment node features with gradient-based adversarial perturbations to enhance the performance of GNNs. | This paper considers semi-supervised node classification on graph data with only a few node labels available.
In this extreme situation, the paper demonstrates the unstable performance of existing GNNs, such as GCN and DAGNN. To address this instability, the paper proposes two strategies: pseudo-labelling-based self-training and negative-sampling-based regularization. The experiments show that the two strategies work well on graph data with only a few node labels. | SP:c4417146f33ba5f65ac5590b08380804beeafb89
Reinforcement Learning with Efficient Active Feature Acquisition | 1 INTRODUCTION. Recently, machine learning models for automated sequential decision making have shown remarkable success across many application areas, such as visual recognition (Mathe et al., 2016; Das et al., 2017), robotics control (Finn et al., 2016; Zhang et al., 2018), medical diagnosis (Ling et al., 2017; Peng et al., 2018) and computer games (Mnih et al., 2015; Silver et al., 2016). One fundamental reason that drives the success of such models and enables them to outperform classical algorithms is the availability of large amounts of training data. Typically such training data is either fully observed or the features stem from an action-independent observation model (which clearly can depend on the state of the system). However, the assumption that the same features are always readily available during deployment may not hold in many real-world applications. For instance, consider a medical support system for monitoring and treating patients during their stay at hospital, trained on rich historical medical data. To provide the best possible treatment, the system might need to perform several measurements of the patient over time, while some of them could be costly or even pose a health risk. Therefore, during deployment it is preferable for the system to function with minimal features, even though more features might have been available during training. In such cases, we are interested in decision making models that actively take the measurement process, i.e., feature acquisition, into account and only acquire the information relevant for making a decision. In this paper, we consider the challenging problem of learning effective policies when the cost of information acquisition cannot be neglected.
To be successful, we need to learn policies which acquire the information required for solving a task in the cheapest way possible. For simplicity, we can think of the policy as being constituted of an acquisition policy, which actively selects meaningful features to be observed, and a task policy, which selects actions to change the state of the system towards some goal.¹ As such, we consider a partially observable learning problem with the following two distinguishing properties compared to the most commonly studied problems (see also Figure 3.2 for an illustration). (i) By incorporating active feature acquisition, the training of the task policy is based upon subsets of features only, i.e., there are missing features, where the missingness is controlled by the acquisition policy. Thus, the resulting POMDP is different from the conventional POMDPs in the RL literature (Cassandra, 1998), where the partial observability for the latter stems from a fixed and action-independent observation model. Also, the state transitions in conventional POMDPs are determined only by the choice of the task action, whereas in our setting the state transition is affected by both the task action and the feature acquisition choice. (ii) The learning of the acquisition policy introduces an additional dimension to the exploration-exploitation problem: each execution of the policy needs to solve an exploration-exploitation problem, and thus we often need to learn sophisticated policies. Most reinforcement learning research has not taken active feature acquisition into consideration. In this work, we propose a unified approach that jointly learns a policy for optimizing the task reward while performing active feature acquisition. (¹Clearly, these two policies are not independent in general, e.g., acquiring features can change the state of the system.)
Although some prior works have exploited reinforcement learning for sequential feature acquisition tasks (Shim et al., 2018; Zannone et al., 2019), they considered variable-wise information acquisition in a static setting only, corresponding to feature selection for non-time-dependent prediction tasks. In contrast, the setting we consider is truly time-dependent, and feature acquisitions need to be made at each time step while the state of the system evolves simultaneously. As such, both the model dynamics of the underlying MDP and the choice of feature acquisition introduce considerable challenges to the learning of the sequential feature acquisition strategy. Due to the challenge of the exploration-exploitation problem, jointly learning the two policies is a non-trivial task. Conventional end-to-end approaches often result in inferior solutions in complex scenarios. Ideally, policies based on high-quality representations would make it easier for the algorithm to search for better solutions through exploration-exploitation. Therefore, our proposed framework also tackles the joint policy training task from a representation learning perspective. Specifically, we introduce a representation learning model that not only encodes the sequential partially observed information into its latent features, but also efficiently imputes the unobserved features to offer more meaningful information for the policy training. To this end, we formulate a sequential generative model that can efficiently learn model dynamics during representation learning. Overall, the contributions of our paper are three-fold: • We propose an approach for learning sequential decision making policies with active feature acquisition through a unified reinforcement learning framework. Our proposed approach simultaneously learns policies for reward optimization and active feature acquisition.
• We present a novel sequential representation learning approach to account for the encoding of the partially observed states. Our proposed approach is based on variational autoencoders (VAE) with amortized inference. The imputation of the unobserved features is achieved via learning the model dynamics. • We demonstrate that our proposed framework can be applied to various applications. We conduct extensive experiments on an image-based control task as well as a medical simulator fitted from real-life data, where our method shows clear improvements over conventional baselines. 2 RELATED WORK. In this work, we integrate active learning with reinforcement learning to accomplish the policy training task while attempting to acquire as few observed features as possible. We thus review related methods on active feature acquisition and representation learning for POMDPs, respectively. 2.1 ACTIVE FEATURE ACQUISITION. Our work draws motivation from existing instance-wise active feature selection approaches. One category of the instance-wise feature selection methods considers feature acquisition as a one-time effort to select a subset of features as a whole. A typical example is the conventional linear model with a sparsity-inducing prior placed on the model (Tibshirani, 1996). Recently, approaches have also emerged that adopt reinforcement learning to actively find optimal feature subsets (Yoon et al., 2018; Shim et al., 2018; Zannone et al., 2019). Though such attempts have demonstrated certain efficacy in handling non-time-series instance-wise data, they do not suffice for handling sequential datasets. There is also an alternative category that models feature acquisition as Bayesian experimental design (Ma et al., 2019; Gong et al., 2019). However, the sequential decision making there is for variable-wise feature acquisition, and the problems are still non-time-series tasks in nature.
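The bullet above on VAE-based encoding of partially observed states can be illustrated with a toy masked reconstruction loss: only observed entries contribute to the reconstruction term, which is what pushes the model to learn to impute the rest. Everything below (the zero/identity stand-ins for encoder and decoder, the Gaussian assumptions) is a simplified sketch, not the paper's model.

```python
import numpy as np

def masked_vae_loss(x, mask, enc_mu, enc_logvar, decode, rng):
    """One-sample VAE-style objective over partially observed features.
    x: (batch, d) features; mask: (batch, d) with 1 where x was observed.
    enc_mu / enc_logvar: amortized inference networks; decode: generator."""
    mu, logvar = enc_mu(x * mask), enc_logvar(x * mask)        # condition on observed part
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization trick
    x_hat = decode(z)
    recon = ((x - x_hat) ** 2 * mask).sum()                    # observed entries only
    kl = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar).sum() # KL(q(z|x) || N(0, I))
    return recon + kl

# Toy check with trivial encoder (standard normal posterior) and identity decoder.
rng = np.random.default_rng(0)
x = np.ones((2, 3))
mask = np.array([[1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
loss = masked_vae_loss(x, mask, lambda v: v * 0.0, lambda v: v * 0.0, lambda z: z, rng)
```

After training, `decode(z)` evaluated at the unobserved positions acts as the imputation the surrounding text describes.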
The key difference between all the aforementioned approaches and ours is that we tackle active feature acquisition problems with time-series data, where an active feature selection decision needs to be formed at each time step along the multi-step reinforcement learning trajectory. Therefore, the feature acquisition in our work needs to consider more complex information over model dynamics and control, apart from the static instance-wise features. 2.2 REPRESENTATION LEARNING IN POMDP. In complex tasks, policies trained upon different representations can even converge to different performance levels. Most conventional deep reinforcement learning approaches unify the process of representation learning with policy training and result in policies trained in an end-to-end fashion (Mnih et al., 2013; Lillicrap et al., 2016; Mnih et al., 2016). However, to accomplish the representation learning task, such models often involve a considerable number of trainable parameters and can thereby suffer a significant degradation in sample efficiency. When considering problems with POMDPs, where the state space is only partially accessible to the agent, representation learning becomes an important and non-trivial research challenge. Among the existing literature, one prominent line of research tackles representation learning for POMDPs in an off-line fashion, thus resulting in multi-stage reinforcement learning. Higgins et al. (2016; 2017) adopt pretrained VAE models as a representation module to build agents with strong domain adaptation performance. The key difference between their work and ours is that they encode instance-wise image frames from POMDP domains, where each image presents a partial view over the task environment, while our work considers cost-sensitive reinforcement learning with a distinct form of partial observability, i.e., feature-level information is missing at each time step for the agent.
We thus adopt a sequential representation learning approach to infer more representative state information. Recently, several works on sequential representation learning for POMDPs have also emerged (Gregor et al., 2019; Vezzani et al., 2019). However, most of these works utilize VAE training as an auxiliary task to jointly update the representation model with the policy learning loss. In our work, due to the high cost of acquiring feature observations, we adopt an off-line representation learning setting. Also, our proposed representation learning is model-based, where the model learns to impute the missing features; this yields significant benefit in deriving high-quality representations for policy training. 3 METHODOLOGY. 3.1 TASK SETTING. In this section, we formally define the problem settings for the task of jointly learning the task and feature acquisition policies. To this end, we define the active feature acquisition POMDP, a rich class of discrete-time stochastic control processes generalizing standard POMDPs: Definition 1 (AFA-POMDP). The active feature acquisition POMDP is a tuple M = ⟨S, A, T, O, R, C, γ⟩, where S is the state space and A = (A^f, A^c) is a joint action space of feature acquisition actions A^f and control actions A^c. The transition kernel T : S × A^c × A^f → P_S maps any joint action a = (a^f, a^c) in state s ∈ S to a distribution P_S over next states. In each state s, when taking action a^f, the agent observes x_p = x(a^f), i.e., a subset of the features x = (x_p, x_u) ∼ O(s) indicated by a^f, where O(s) is a distribution over possible feature observations for state s and x_u are the features not observed by the agent. When taking a joint action, the agent obtains rewards according to the reward function R : S × A^c → ℝ and pays a cost of C : S × A^f → ℝ₊ for feature acquisition. Rewards and costs are discounted by the discount factor γ ∈ [0, 1).
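To make Definition 1 concrete, here is a toy environment exposing the AFA-POMDP interface: a joint action (a^f, a^c), an observation of only the acquired feature subset, a per-feature cost, and a transition driven by the control action alone (matching the simplifying assumption stated in the text). All internals, names, and dynamics are illustrative assumptions.

```python
import numpy as np

class ToyAFAPOMDP:
    """Minimal illustration of the ⟨S, A, T, O, R, C, γ⟩ interface above."""
    def __init__(self, n_features, cost_per_feature=0.1, seed=0):
        self.n_features = n_features
        self.c = cost_per_feature
        self.rng = np.random.default_rng(seed)
        self.state = self.rng.normal(size=n_features)  # hidden state s

    def step(self, a_f, a_c):
        """a_f: boolean mask over features (the acquisition subset a^f);
        a_c: scalar control action a^c. Returns (x_p, reward - cost)."""
        x = self.state + self.rng.normal(scale=0.01, size=self.n_features)
        x_p = np.where(a_f, x, np.nan)            # unacquired features stay hidden
        reward = -abs(a_c - self.state.mean())    # placeholder task reward R(s, a^c)
        cost = self.c * a_f.sum()                 # C(a^f, s) = c * |a^f|
        self.state = self.state + 0.1 * a_c       # transition depends on a^c only
        return x_p, reward - cost

env = ToyAFAPOMDP(n_features=4)
a_f = np.array([True, False, True, False])
x_p, r = env.step(a_f, 0.0)
```

An agent interacting with this interface would maintain a belief from the `NaN`-masked observations, which is exactly the representation learning problem the paper addresses.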
Simplifying assumptions For simplicity , we assume that x consists of a fixed number of features Nf for all states , that Af = 2^[Nf] is the power set of all the Nf features , and that xp ( af ) consists of all the features in x indicated by the subset af ∈ Af . Note that the feature acquisition action for a specific application can take various forms . For instance , in our experiments in Section 4 , for the Sepsis task , we define feature acquisition as selecting a subset of possible measurement tests , whereas for the Bouncing Ball+ task , we divide an image into four observation regions and let the feature acquisition policy select a subset of observation regions ( rather than raw pixels ) . Please also note that while in a general AFA-POMDP the transition between two states depends on the joint action , we assume in the following that it depends only on the control action , i.e. , T ( s , ac , af′ ) = T ( s , ac , af ) for all af′ , af ∈ Af . While not true for all possible applications , this assumption can be a reasonable approximation , for instance , for medical settings in which tests are non-invasive . For simplicity we furthermore assume that acquiring each feature has the same cost c , i.e. , C ( af , s ) = c · |af| , but our approach can be straightforwardly adapted to different costs for different feature acquisitions . Objective We aim to learn a policy which trades off reward maximization and the cost of feature acquisition by jointly optimizing a task policy πc and a feature acquisition policy πf . That is , we aim to solve the optimization problem max_{πf , πc} E [ ∑_{t=0}^{∞} γ^t ( R ( x_t , a^c_t ) − ∑_{i=1}^{|Af|} c · I ( a^{f ( i )}_t ) ) ] , ( 1 ) where the expectation is over the randomness of the stochastic process and the policies , a^{f ( i )}_t denotes the i-th feature acquisition action at timestep t , and I ( · ) is an indicator function whose value equals 1 if that feature has been acquired .
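The cost-augmented discounted objective in Eq. ( 1 ) can be sketched for a finite trajectory as follows ; the function name, the per-step reward values, and the binary acquisition masks below are illustrative assumptions, not taken from the paper :

```python
def discounted_return(rewards, acquisition_masks, cost_per_feature, gamma):
    """Discounted task reward minus feature-acquisition cost, in the spirit of
    Eq. (1), truncated to a finite trajectory."""
    total = 0.0
    for t, (r, mask) in enumerate(zip(rewards, acquisition_masks)):
        acquisition_cost = cost_per_feature * sum(mask)  # c * |a_f| at step t
        total += (gamma ** t) * (r - acquisition_cost)
    return total

# Example: 3 steps, 4 features; acquire 2 features at t=0, 1 at t=1, 0 at t=2.
ret = discounted_return(
    rewards=[1.0, 0.5, 2.0],
    acquisition_masks=[[1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]],
    cost_per_feature=0.1,
    gamma=0.9,
)
```

Acquiring more features here directly lowers the return, which is the trade-off the joint policies πf and πc must balance.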
Note that the above optimization problem is very challenging : an optimal solution needs to maintain a belief bt over the state of the system at time t , which is a function of the partial observations obtained so far . Both the feature acquisition policy πf ( a^f_t | bt ) and the task policy πc ( a^c_t | bt ) depend on this belief . The information in the belief itself is controlled by the feature acquisition policy through querying subsets of the features x_t , and hence the quality of the task policy strongly depends on the effectiveness of the feature acquisition policy . | This paper proposes a reinforcement learning + representation learning approach for simultaneously learning a control policy and feature acquisition policy in environments where feature observation is costly. The authors formulate an approach for learning time series latent variable models that incorporate information from both observation and action histories. They demonstrate through a series of experiments that their approach leads to better imputation (i.e., filling in missing values) and better rewards. | SP:2949b7383e579d29639b08597ed1808c54123cb4 |
Reinforcement Learning with Efficient Active Feature Acquisition | 1 INTRODUCTION . Recently , machine learning models for automated sequential decision making have shown remarkable success across many application areas , such as visual recognition ( Mathe et al. , 2016 ; Das et al. , 2017 ) , robotics control ( Finn et al. , 2016 ; Zhang et al. , 2018 ) , medical diagnosis ( Ling et al. , 2017 ; Peng et al. , 2018 ) and computer games ( Mnih et al. , 2015 ; Silver et al. , 2016 ) . One fundamental reason that drives the success of such models and enables them to outperform classical algorithms is the availability of large amounts of training data . Typically such training data is either fully observed or the features stem from an action-independent observation model ( which clearly can depend on the state of the system ) . However , the fundamental assumption that the same features are always readily available during deployment may not hold in many real-world applications . For instance , consider a medical support system for monitoring and treating patients during their stay at the hospital , trained on rich historical medical data . To provide the best possible treatment , the system might need to perform several measurements of the patient over time , some of which could be costly or even pose a health risk . Therefore , it is desirable that the system can function with minimal features during deployment , even though more features might have been available during training . In such cases , we are interested in decision making models that actively take the measurement process , i.e. , feature acquisition , into account and only acquire the information relevant for making a decision . In this paper , we consider the challenging problem of learning effective policies when the cost of information acquisition can not be neglected .
To be successful , we need to learn policies which acquire the information required for solving a task in the cheapest way possible . For simplicity , we can think of the policy as being constituted of an acquisition policy , which actively selects meaningful features to be observed , and a task policy , which selects actions to change the state of the system towards some goal.1 As such , we consider a partially observable learning problem with the following two distinguishing properties compared to the most commonly studied problems ( see also Figure 3.2 for an illustration ) . ( i ) By incorporating active feature acquisition , the training of the task policy is based upon subsets of features only , i.e. , there are missing features , where the missingness is controlled by the acquisition policy . ( 1Clearly , these two policies are not independent in general , e.g. , acquiring features can change the state of the system . ) Thus , the resulting POMDP is different from the conventional POMDPs in the RL literature ( Cassandra , 1998 ) , where the partial observability for the latter stems from a fixed and action-independent observation model . Also , the state transitions in conventional POMDPs are only determined by the choice of the task action , whereas in our setting the state transition is affected by both the task action and the feature acquisition choice . ( ii ) The learning of the acquisition policy introduces an additional dimension to the exploration-exploitation problem : each execution of the policy needs to solve an exploration-exploitation problem , and thus we often need to learn sophisticated policies . Most reinforcement learning research has not taken active feature acquisition into consideration . In this work , we propose a unified approach that jointly learns a policy for optimizing the task reward while performing active feature acquisition .
Although some prior works have exploited the use of reinforcement learning for sequential feature acquisition tasks ( Shim et al. , 2018 ; Zannone et al. , 2019 ) , they considered variable-wise information acquisition in a static setting only , corresponding to feature selection for non-time-dependent prediction tasks . In contrast , our setting is truly time-dependent : feature acquisitions need to be made at each time step while the state of the system evolves simultaneously . As such , both the model dynamics of the underlying MDP and the choice of feature acquisition introduce considerable challenges to learning the sequential feature acquisition strategy . Due to the challenging exploration-exploitation problem , jointly learning the two policies is a non-trivial task , and conventional end-to-end approaches often result in inferior solutions in complex scenarios . Ideally , policies based on high-quality representations make it easier to find better solutions through exploration-exploitation . Therefore , our proposed framework also tackles the joint policy training task from a representation learning perspective . Specifically , we introduce a representation learning model that not only encodes the sequential partially observed information into its latent features , but also efficiently imputes the unobserved features to offer more meaningful information for policy training . To this end , we formulate a sequential generative model that can efficiently learn model dynamics during representation learning . Overall , the contributions of our paper are three-fold : • We propose an approach for learning sequential decision making policies with active feature acquisition through a unified reinforcement learning framework . Our proposed approach simultaneously learns policies for reward optimization and active feature acquisition .
• We present a novel sequential representation learning approach to account for the encoding of the partially observed states . Our proposed approach is based on variational autoencoders ( VAE ) with amortized inference . The imputation of the unobserved features is achieved via learning the model dynamics . • We demonstrate that our proposed framework can be applied to various applications . We conduct extensive experiments on an image-based control task as well as a medical simulator fitted from real-life data , where our method shows clear improvements over conventional baselines . 2 RELATED WORK . In this work , we integrate active learning with reinforcement learning to accomplish the policy training task while attempting to acquire as few features as possible . We thus review related methods on active feature acquisition and representation learning for POMDPs , respectively . 2.1 ACTIVE FEATURE ACQUISITION . Our work draws motivation from existing instance-wise active feature selection approaches . One category of instance-wise feature selection methods considers feature acquisition as a one-time effort to select a subset of features as a whole . A typical example is the conventional linear model that imposes a sparsity-inducing prior distribution on the model ( Tibshirani , 1996 ) . Recently , approaches have also emerged that adopt reinforcement learning to actively find optimal feature subsets ( Yoon et al. , 2018 ; Shim et al. , 2018 ; Zannone et al. , 2019 ) . Though such attempts have demonstrated certain efficacy in handling non-time-series instance-wise data , they do not suffice for handling sequential datasets . There is also an alternative category that models feature acquisition as Bayesian experimental design ( Ma et al. , 2019 ; Gong et al. , 2019 ) . However , the sequential decision making there is for variable-wise feature acquisition , and the problems are still non-time-series tasks in nature .
The key difference between all the aforementioned approaches and ours is that we tackle active feature acquisition problems with time-series data , where an active feature selection decision needs to be made at each time step along the multi-step reinforcement learning trajectory . Therefore , the feature acquisition in our work needs to consider more complex information over model dynamics and control , apart from static instance-wise features . 2.2 REPRESENTATION LEARNING IN POMDP . In complex tasks , policies trained upon different representations can converge to markedly different performance levels . Most conventional deep reinforcement learning approaches unify the process of representation learning with policy training and result in policies trained in an end-to-end fashion ( Mnih et al. , 2013 ; Lillicrap et al. , 2016 ; Mnih et al. , 2016 ) . However , to accomplish the representation learning task , such models often involve trainable parameters of considerable size , thereby resulting in significant degradation in sample efficiency . For POMDPs , where the state space is only partially accessible to the agent , representation learning becomes an important and non-trivial research challenge . Among the existing literature , one prominent line of research tackles representation learning for POMDPs in an off-line fashion , thus resulting in multi-stage reinforcement learning . Higgins et al . ( 2016 ; 2017 ) adopt pretrained VAE models as a representation module to build agents with strong domain adaptation performance . The key difference between their work and ours is that they encode instance-wise image frames from POMDP domains , where each image presents a partial view of the task environment , while our work considers cost-sensitive reinforcement learning with a distinct form of partial observability , i.e. , feature-level information is missing at each time step for the agent .
We thus adopt a sequential representation learning approach to infer more representative state information . Recently , several works on sequential representation learning for POMDPs have also emerged ( Gregor et al. , 2019 ; Vezzani et al. , 2019 ) . However , most of these works utilize VAE training as an auxiliary task to jointly update the representation model with the policy learning loss . In our work , due to the high acquisition cost of observing features , we adopt an off-line representation learning setting . Also , our proposed representation learning is model-based : the model learns to impute the missing features , which yields significant benefits in deriving high-quality representations for policy training . 3 METHODOLOGY . 3.1 TASK SETTING . In this section , we formally define the problem setting for the task of jointly learning the task and feature acquisition policies . To this end , we define the active feature acquisition POMDP , a rich class of discrete-time stochastic control processes generalizing standard POMDPs : Definition 1 ( AFA-POMDP ) . The active feature acquisition POMDP is a tuple M = 〈S , A , T , O , R , C , γ〉 , where S is the state space and A = ( Af , Ac ) is a joint action space of feature acquisition actions Af and control actions Ac . The transition kernel T : S × Ac × Af → PS maps any joint action a = ( af , ac ) in state s ∈ S to a distribution PS over next states . In each state s , when taking action af , the agent observes xp = x ( af ) , i.e. , a subset of the features x = ( xp , xu ) ∼ O ( s ) indicated by af , where O ( s ) is a distribution over possible feature observations for state s and xu are the features not observed by the agent . When taking a joint action , the agent obtains rewards according to the reward function R : S × Ac → R and pays a cost C : S × Af → R+ for feature acquisition . Rewards and costs are discounted by the discount factor γ ∈ [ 0 , 1 ) .
Simplifying assumptions For simplicity , we assume that x consists of a fixed number of features Nf for all states , that Af = 2^[Nf] is the power set of all the Nf features , and that xp ( af ) consists of all the features in x indicated by the subset af ∈ Af . Note that the feature acquisition action for a specific application can take various forms . For instance , in our experiments in Section 4 , for the Sepsis task , we define feature acquisition as selecting a subset of possible measurement tests , whereas for the Bouncing Ball+ task , we divide an image into four observation regions and let the feature acquisition policy select a subset of observation regions ( rather than raw pixels ) . Please also note that while in a general AFA-POMDP the transition between two states depends on the joint action , we assume in the following that it depends only on the control action , i.e. , T ( s , ac , af′ ) = T ( s , ac , af ) for all af′ , af ∈ Af . While not true for all possible applications , this assumption can be a reasonable approximation , for instance , for medical settings in which tests are non-invasive . For simplicity we furthermore assume that acquiring each feature has the same cost c , i.e. , C ( af , s ) = c · |af| , but our approach can be straightforwardly adapted to different costs for different feature acquisitions . Objective We aim to learn a policy which trades off reward maximization and the cost of feature acquisition by jointly optimizing a task policy πc and a feature acquisition policy πf . That is , we aim to solve the optimization problem max_{πf , πc} E [ ∑_{t=0}^{∞} γ^t ( R ( x_t , a^c_t ) − ∑_{i=1}^{|Af|} c · I ( a^{f ( i )}_t ) ) ] , ( 1 ) where the expectation is over the randomness of the stochastic process and the policies , a^{f ( i )}_t denotes the i-th feature acquisition action at timestep t , and I ( · ) is an indicator function whose value equals 1 if that feature has been acquired .
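One way to make the observation model O ( s ) and the masking by af concrete is a toy environment wrapper in which the full feature vector exists but only the acquired subset is revealed ; the class, its dynamics, and the reward shape below are all illustrative assumptions, not the paper's implementation :

```python
import random

class MaskedObservationEnv:
    """Toy AFA-POMDP-style environment: the full feature vector x ~ O(s)
    exists, but the agent only sees the subset selected by a_f."""

    def __init__(self, num_features, seed=0):
        self.num_features = num_features
        self.rng = random.Random(seed)
        self.state = 0

    def step(self, control_action, acquisition_set):
        # Transition depends only on the control action (the paper's
        # simplifying assumption T(s, a_c, a_f') = T(s, a_c, a_f)).
        self.state += control_action
        # Full (latent) feature vector for the new state.
        x = [self.state + self.rng.random() for _ in range(self.num_features)]
        # Agent observes x_p = x(a_f); unacquired features are masked as None.
        x_p = [x[i] if i in acquisition_set else None
               for i in range(self.num_features)]
        reward = -abs(self.state)          # hypothetical task reward R
        cost = 0.1 * len(acquisition_set)  # C(a_f, s) = c * |a_f|
        return x_p, reward - cost

env = MaskedObservationEnv(num_features=4)
obs, r = env.step(control_action=1, acquisition_set={0, 2})
# obs carries values at indices 0 and 2 and None elsewhere
```

The `None` entries are exactly the missing features xu that the sequential representation model is later asked to impute.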
Note that the above optimization problem is very challenging : an optimal solution needs to maintain a belief bt over the state of the system at time t , which is a function of the partial observations obtained so far . Both the feature acquisition policy πf ( a^f_t | bt ) and the task policy πc ( a^c_t | bt ) depend on this belief . The information in the belief itself is controlled by the feature acquisition policy through querying subsets of the features x_t , and hence the quality of the task policy strongly depends on the effectiveness of the feature acquisition policy . | The authors study the problem of reinforcement learning in environments where the agent can spend some reward in order to gain access to observations. The authors introduce a generalization of a POMDP, which they call an AFA-POMDP for Active Feature Acquisition, that divides the action into two pieces, an action for control and action for feature acquisition. The authors' solution approach begins with fully observed trajectories that are used to train a sequential VAE as an inference model. Then, using the pre-trained VAE, an RL algorithm jointly learns the control and feature acquisition policies. The experiments are on a synthetic "bouncing ball" task where there are five discrete control actions that hit the ball from different directions and the feature acquisition chooses which quadrants of the space to acquire. There is also a sepsis task with three discrete actions and 8 features that correspond to measurements of the patient. | SP:2949b7383e579d29639b08597ed1808c54123cb4 |
Reinforcement Learning with Efficient Active Feature Acquisition | 1 INTRODUCTION . Recently , machine learning models for automated sequential decision making have shown remarkable success across many application areas , such as visual recognition ( Mathe et al. , 2016 ; Das et al. , 2017 ) , robotics control ( Finn et al. , 2016 ; Zhang et al. , 2018 ) , medical diagnosis ( Ling et al. , 2017 ; Peng et al. , 2018 ) and computer games ( Mnih et al. , 2015 ; Silver et al. , 2016 ) . One fundamental reason that drives the success of such models and enables them to outperform classical algorithms is the availability of large amounts of training data . Typically such training data is either fully observed or the features stem from an action-independent observation model ( which clearly can depend on the state of the system ) . However , the fundamental assumption that the same features are always readily available during deployment may not hold in many real-world applications . For instance , consider a medical support system for monitoring and treating patients during their stay at the hospital , trained on rich historical medical data . To provide the best possible treatment , the system might need to perform several measurements of the patient over time , some of which could be costly or even pose a health risk . Therefore , it is desirable that the system can function with minimal features during deployment , even though more features might have been available during training . In such cases , we are interested in decision making models that actively take the measurement process , i.e. , feature acquisition , into account and only acquire the information relevant for making a decision . In this paper , we consider the challenging problem of learning effective policies when the cost of information acquisition can not be neglected .
To be successful , we need to learn policies which acquire the information required for solving a task in the cheapest way possible . For simplicity , we can think of the policy as being constituted of an acquisition policy , which actively selects meaningful features to be observed , and a task policy , which selects actions to change the state of the system towards some goal.1 As such , we consider a partially observable learning problem with the following two distinguishing properties compared to the most commonly studied problems ( see also Figure 3.2 for an illustration ) . ( i ) By incorporating active feature acquisition , the training of the task policy is based upon subsets of features only , i.e. , there are missing features , where the missingness is controlled by the acquisition policy . ( 1Clearly , these two policies are not independent in general , e.g. , acquiring features can change the state of the system . ) Thus , the resulting POMDP is different from the conventional POMDPs in the RL literature ( Cassandra , 1998 ) , where the partial observability for the latter stems from a fixed and action-independent observation model . Also , the state transitions in conventional POMDPs are only determined by the choice of the task action , whereas in our setting the state transition is affected by both the task action and the feature acquisition choice . ( ii ) The learning of the acquisition policy introduces an additional dimension to the exploration-exploitation problem : each execution of the policy needs to solve an exploration-exploitation problem , and thus we often need to learn sophisticated policies . Most reinforcement learning research has not taken active feature acquisition into consideration . In this work , we propose a unified approach that jointly learns a policy for optimizing the task reward while performing active feature acquisition .
Although some prior works have exploited the use of reinforcement learning for sequential feature acquisition tasks ( Shim et al. , 2018 ; Zannone et al. , 2019 ) , they considered variable-wise information acquisition in a static setting only , corresponding to feature selection for non-time-dependent prediction tasks . In contrast , our setting is truly time-dependent : feature acquisitions need to be made at each time step while the state of the system evolves simultaneously . As such , both the model dynamics of the underlying MDP and the choice of feature acquisition introduce considerable challenges to learning the sequential feature acquisition strategy . Due to the challenging exploration-exploitation problem , jointly learning the two policies is a non-trivial task , and conventional end-to-end approaches often result in inferior solutions in complex scenarios . Ideally , policies based on high-quality representations make it easier to find better solutions through exploration-exploitation . Therefore , our proposed framework also tackles the joint policy training task from a representation learning perspective . Specifically , we introduce a representation learning model that not only encodes the sequential partially observed information into its latent features , but also efficiently imputes the unobserved features to offer more meaningful information for policy training . To this end , we formulate a sequential generative model that can efficiently learn model dynamics during representation learning . Overall , the contributions of our paper are three-fold : • We propose an approach for learning sequential decision making policies with active feature acquisition through a unified reinforcement learning framework . Our proposed approach simultaneously learns policies for reward optimization and active feature acquisition .
• We present a novel sequential representation learning approach to account for the encoding of the partially observed states . Our proposed approach is based on variational autoencoders ( VAE ) with amortized inference . The imputation of the unobserved features is achieved via learning the model dynamics . • We demonstrate that our proposed framework can be applied to various applications . We conduct extensive experiments on an image-based control task as well as a medical simulator fitted from real-life data , where our method shows clear improvements over conventional baselines . 2 RELATED WORK . In this work , we integrate active learning with reinforcement learning to accomplish the policy training task while attempting to acquire as few features as possible . We thus review related methods on active feature acquisition and representation learning for POMDPs , respectively . 2.1 ACTIVE FEATURE ACQUISITION . Our work draws motivation from existing instance-wise active feature selection approaches . One category of instance-wise feature selection methods considers feature acquisition as a one-time effort to select a subset of features as a whole . A typical example is the conventional linear model that imposes a sparsity-inducing prior distribution on the model ( Tibshirani , 1996 ) . Recently , approaches have also emerged that adopt reinforcement learning to actively find optimal feature subsets ( Yoon et al. , 2018 ; Shim et al. , 2018 ; Zannone et al. , 2019 ) . Though such attempts have demonstrated certain efficacy in handling non-time-series instance-wise data , they do not suffice for handling sequential datasets . There is also an alternative category that models feature acquisition as Bayesian experimental design ( Ma et al. , 2019 ; Gong et al. , 2019 ) . However , the sequential decision making there is for variable-wise feature acquisition , and the problems are still non-time-series tasks in nature .
The key difference between all the aforementioned approaches and ours is that we tackle active feature acquisition problems with time-series data , where an active feature selection decision needs to be made at each time step along the multi-step reinforcement learning trajectory . Therefore , the feature acquisition in our work needs to consider more complex information over model dynamics and control , apart from static instance-wise features . 2.2 REPRESENTATION LEARNING IN POMDP . In complex tasks , policies trained upon different representations can converge to markedly different performance levels . Most conventional deep reinforcement learning approaches unify the process of representation learning with policy training and result in policies trained in an end-to-end fashion ( Mnih et al. , 2013 ; Lillicrap et al. , 2016 ; Mnih et al. , 2016 ) . However , to accomplish the representation learning task , such models often involve trainable parameters of considerable size , thereby resulting in significant degradation in sample efficiency . For POMDPs , where the state space is only partially accessible to the agent , representation learning becomes an important and non-trivial research challenge . Among the existing literature , one prominent line of research tackles representation learning for POMDPs in an off-line fashion , thus resulting in multi-stage reinforcement learning . Higgins et al . ( 2016 ; 2017 ) adopt pretrained VAE models as a representation module to build agents with strong domain adaptation performance . The key difference between their work and ours is that they encode instance-wise image frames from POMDP domains , where each image presents a partial view of the task environment , while our work considers cost-sensitive reinforcement learning with a distinct form of partial observability , i.e. , feature-level information is missing at each time step for the agent .
We thus adopt a sequential representation learning approach to infer more representative state information . Recently , several works on sequential representation learning for POMDPs have also emerged ( Gregor et al. , 2019 ; Vezzani et al. , 2019 ) . However , most of these works utilize VAE training as an auxiliary task to jointly update the representation model with the policy learning loss . In our work , due to the high acquisition cost of observing features , we adopt an off-line representation learning setting . Also , our proposed representation learning is model-based : the model learns to impute the missing features , which yields significant benefits in deriving high-quality representations for policy training . 3 METHODOLOGY . 3.1 TASK SETTING . In this section , we formally define the problem setting for the task of jointly learning the task and feature acquisition policies . To this end , we define the active feature acquisition POMDP , a rich class of discrete-time stochastic control processes generalizing standard POMDPs : Definition 1 ( AFA-POMDP ) . The active feature acquisition POMDP is a tuple M = 〈S , A , T , O , R , C , γ〉 , where S is the state space and A = ( Af , Ac ) is a joint action space of feature acquisition actions Af and control actions Ac . The transition kernel T : S × Ac × Af → PS maps any joint action a = ( af , ac ) in state s ∈ S to a distribution PS over next states . In each state s , when taking action af , the agent observes xp = x ( af ) , i.e. , a subset of the features x = ( xp , xu ) ∼ O ( s ) indicated by af , where O ( s ) is a distribution over possible feature observations for state s and xu are the features not observed by the agent . When taking a joint action , the agent obtains rewards according to the reward function R : S × Ac → R and pays a cost C : S × Af → R+ for feature acquisition . Rewards and costs are discounted by the discount factor γ ∈ [ 0 , 1 ) .
Simplifying assumptions For simplicity , we assume that x consists of a fixed number of features Nf for all states , that Af = 2^[Nf] is the power set of all the Nf features , and that xp ( af ) consists of all the features in x indicated by the subset af ∈ Af . Note that the feature acquisition action for a specific application can take various forms . For instance , in our experiments in Section 4 , for the Sepsis task , we define feature acquisition as selecting a subset of possible measurement tests , whereas for the Bouncing Ball+ task , we divide an image into four observation regions and let the feature acquisition policy select a subset of observation regions ( rather than raw pixels ) . Please also note that while in a general AFA-POMDP the transition between two states depends on the joint action , we assume in the following that it depends only on the control action , i.e. , T ( s , ac , af′ ) = T ( s , ac , af ) for all af′ , af ∈ Af . While not true for all possible applications , this assumption can be a reasonable approximation , for instance , for medical settings in which tests are non-invasive . For simplicity we furthermore assume that acquiring each feature has the same cost c , i.e. , C ( af , s ) = c · |af| , but our approach can be straightforwardly adapted to different costs for different feature acquisitions . Objective We aim to learn a policy which trades off reward maximization and the cost of feature acquisition by jointly optimizing a task policy πc and a feature acquisition policy πf . That is , we aim to solve the optimization problem max_{πf , πc} E [ ∑_{t=0}^{∞} γ^t ( R ( x_t , a^c_t ) − ∑_{i=1}^{|Af|} c · I ( a^{f ( i )}_t ) ) ] , ( 1 ) where the expectation is over the randomness of the stochastic process and the policies , a^{f ( i )}_t denotes the i-th feature acquisition action at timestep t , and I ( · ) is an indicator function whose value equals 1 if that feature has been acquired .
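The power-set acquisition space Af = 2^[Nf] and the uniform-cost assumption C ( af , s ) = c · |af| can be sketched as follows ; the function names are illustrative :

```python
from itertools import chain, combinations

def acquisition_actions(num_features):
    """Enumerate A_f = 2^[N_f]: every subset of the feature indices."""
    idx = range(num_features)
    return [frozenset(s)
            for s in chain.from_iterable(combinations(idx, k)
                                         for k in range(num_features + 1))]

def uniform_cost(acquisition_set, c=0.1):
    # The simplifying assumption C(a_f, s) = c * |a_f|: every feature
    # costs the same amount c.
    return c * len(acquisition_set)

actions = acquisition_actions(3)                 # 2^3 = 8 subsets
full_cost = uniform_cost(frozenset({0, 1, 2}))   # acquiring all 3 features
```

Note that the acquisition space grows as 2^Nf, which is presumably why the experiments use a small number of observation regions or tests rather than raw pixels.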
Note that the above optimization problem is very challenging : an optimal solution needs to maintain a belief bt over the state of the system at time t , which is a function of the partial observations obtained so far . Both the feature acquisition policy πf ( a^f_t | bt ) and the task policy πc ( a^c_t | bt ) depend on this belief . The information in the belief itself is controlled by the feature acquisition policy through querying subsets of the features x_t , and hence the quality of the task policy strongly depends on the effectiveness of the feature acquisition policy . | In this paper the authors propose an approach for simultaneously learning how to explore more efficiently in POMDPs via targeted feature acquisition, and learning a reward-maximizing control policy, balancing the cost of feature acquisition with the expected reward. Learning is done via a VAE framework which combines a belief inference model and an observation decoder, with a key innovation being that inference is done as a sequential process. Results comparing this approach to other variational inference approaches show the proposed framework reaches better performance with lower cost (particularly, number of acquired features). | SP:2949b7383e579d29639b08597ed1808c54123cb4 |
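The belief bt that both policies condition on can be sketched in its most naive form as per-feature bookkeeping that carries the last observed value forward ; this is only a toy stand-in for the paper's learned sequential VAE belief, with `None` marking features not acquired at a step :

```python
def update_belief(belief, obs):
    """Naive belief sketch: carry forward the last observed value per feature.
    `obs` uses None for features not acquired at this step."""
    return [o if o is not None else b for b, o in zip(belief, obs)]

b0 = [0.0, 0.0, 0.0]                         # prior belief over 3 features
b1 = update_belief(b0, [1.5, None, -2.0])    # features 0 and 2 acquired
b2 = update_belief(b1, [None, 0.7, None])    # only feature 1 acquired
```

Because the acquisition policy decides which entries of `obs` are `None`, it directly controls how informative the belief is, which is exactly the coupling described above.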
Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs | Q-learning is a popular Reinforcement Learning (RL) algorithm which is widely used in practice with function approximation (Mnih et al., 2015). In contrast, existing theoretical results are pessimistic about Q-learning. For example, Baird (1995) shows that Q-learning does not converge even with linear function approximation for linear MDPs. Furthermore, even for tabular MDPs with synchronous updates, Q-learning was shown to have sub-optimal sample complexity (Li et al., 2021; Azar et al., 2013). The goal of this work is to bridge the gap between the practical success of Q-learning and the relatively pessimistic theoretical results. The starting point of our work is the observation that, in practice, Q-learning is used with two important modifications: (i) training with two networks, called the online network and the target network, simultaneously (online target learning, or OTL), and (ii) experience replay (ER) (Mnih et al., 2015). While these have been observed to play a significant role in the practical success of Q-learning, a thorough theoretical understanding of how the two modifications improve the convergence behavior of Q-learning has been missing in the literature. By carefully combining Q-learning with OTL and reverse experience replay (RER) (a form of experience replay), we present novel methods Q-Rex and Q-RexDaRe (Q-Rex + data reuse). We show that Q-Rex efficiently finds the optimal policy for linear MDPs and provide non-asymptotic bounds on its sample complexity, the first such result for a Q-learning method for linear MDPs under standard assumptions. Furthermore, we demonstrate that Q-RexDaRe in fact achieves near-optimal sample complexity in the tabular setting, improving upon the existing results for vanilla Q-learning. 1 INTRODUCTION.
Reinforcement Learning ( RL ) has been shown to be highly successful for a variety of practical problems in the realm of long term decision making ( Mnih et al. , 2015 ) . Several classical works have studied RL methods like TD-learning , Q-learning , SARSA and their variants for many decades ( Sutton & Barto , 2018 ; Bertsekas , 2011 ; Borkar & Meyn , 2000 ; Sutton , 1988 ; Tsitsiklis & Van Roy , 1997 ; Watkins & Dayan , 1992 ; Watkins , 1989 ) but the guarantees are mostly asymptotic and therefore do not sufficiently answer important questions that are relevant to practitioners who struggle with constraints on the number of data points and the computation power . Recent works provide non-asymptotic results for a variety of important settings ( Kearns & Singh , 1999 ; Even-Dar et al. , 2003 ; Beck & Srikant , 2012 ; Qu & Wierman , 2020 ; Ghavamzadeh et al. , 2011 ; Bhandari et al. , 2018 ; Chen et al. , 2020 ; 2019 ; Dalal et al. , 2018a ; b ; Doan et al. , 2020 ; Gupta et al. , 2019 ; Srikant & Ying , 2019 ; Weng et al. , 2020 ; Xu & Gu , 2020 ; Yang & Wang , 2019 ; Zou et al. , 2019 ) . Despite a large body of work , several aspects of fundamental methods like Q-learning ( Watkins & Dayan , 1992 ) are still ill-understood . Q-learning ’ s simplicity and the ability to learn from off-policy data makes it attractive to the practitioner . However , theoretical analyses show that even with linear function approximation and when the approximation is exact , Q-learning can fail to converge even in simple examples ( Baird , 1995 ; Boyan & Moore , 1995 ; Tsitsiklis & Van Roy , 1996 ) . Furthermore , even in the simple case of tabular RL with synchronous updates , Q-learning is known to have suboptimal sample complexity ( Wainwright , 2019a ; Li et al. , 2021 ) . Despite the negative results , Q-learning has been deployed with tremendous success in practice . 
The practitioners, however, use Q-learning with "heuristic" modifications like experience replay (ER) and online target learning (OTL). ER is used to alleviate the issue that the samples obtained in an episode might be highly dependent on each other, whereas OTL helps stabilize the Q iteration. Mnih et al. (2015) conducted extensive experiments to show that both these techniques, along with neural function approximation, are essential for the success of Q-learning. But existing analyses for ER with Q-learning either require stringent assumptions (Carvalho et al., 2020) to ensure convergence to a good Q value, or assume that ER provides i.i.d. samples, which might not hold in practice (Fan et al., 2020; Carvalho et al., 2020). In this paper, we attempt to bridge the gap between theory and practice by rigorously investigating how Q-learning performs with these practical heuristics. To this end, we introduce two model-free algorithms: Q-Rex and its sample-efficient variant Q-RexDaRe, which combine standard Q-learning with OTL and reverse experience replay (RER). RER is a form of ER which was recently introduced to unravel spurious correlations present while learning from Markovian data in the context of system identification (Rotinov, 2019; Jain et al., 2021b). We show that OTL stabilizes the Q value by essentially serving as a variance reduction technique, and RER unravels the spurious correlations present in the off-policy Markovian data to remove inherent biases introduced in vanilla Q-learning. Inclusion of these simple heuristics has surprisingly far-reaching consequences. Firstly, this allows us to show that, unlike vanilla Q-learning, Q-Rex finds the optimal policy for linear MDPs, and allows us to derive non-asymptotic sample complexity bounds. In the tabular setting, Q-Rex, even with asynchronous data, is able to match the best known bounds for Q-learning with synchronous data.
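The two ingredients described above can be sketched in a few lines of tabular code. This is an illustrative combination of OTL and RER in the spirit of the text, not the paper's exact Q-Rex algorithm; all names are ours.

```python
import numpy as np

def otl_rer_phase(Q_online, Q_target, buffer, alpha, gamma):
    """One phase of tabular Q-learning with OTL + reverse experience replay.

    Q_online, Q_target -- arrays of shape (n_states, n_actions)
    buffer             -- transitions (s, a, r, s_next) in collection order
    """
    # RER: traverse the buffer in reverse collection order.
    for (s, a, r, s_next) in reversed(buffer):
        # OTL: bootstrap against the frozen target table, not the online one.
        td_target = r + gamma * Q_target[s_next].max()
        Q_online[s, a] += alpha * (td_target - Q_online[s, a])
    # OTL: sync the target only once the whole phase is done.
    Q_target = Q_online.copy()
    return Q_online, Q_target

# Tiny 2-state, 2-action example
Q_on = np.zeros((2, 2))
Q_tgt = np.zeros((2, 2))
buffer = [(0, 1, 1.0, 1), (1, 0, 0.0, 0)]
Q_on, Q_tgt = otl_rer_phase(Q_on, Q_tgt, buffer, alpha=0.5, gamma=0.9)
```

Holding the target fixed within a phase removes the "moving target" from the bootstrap, while the reverse traversal ensures each update bootstraps on transitions that occur later in the trajectory.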
Furthermore, we extend Q-Rex to obtain a new method, Q-RexDaRe, that reuses old samples and admits nearly optimal sample complexity for recovering the optimal Q-function in the tabular setting. Previously, only Q-learning methods with explicit variance-reduction techniques (not popular in practice) (Wainwright, 2019b; Li et al., 2020b) or model-based methods (Agarwal et al., 2020; Li et al., 2020a) were known to achieve such a sample complexity bound. Our experiments show that when the algorithmic parameters are chosen carefully, Q-Rex and its variants outperform both vanilla Q-learning and OTL+ER+Q-learning with the same parameters (see Appendix A). To summarize, in this work, we study Q-learning with practical heuristics like ER and OTL, and propose two concrete methods, Q-Rex and Q-RexDaRe, based on OTL and reverse experience replay, a modification of the standard ER used in practice. We show that Q-Rex is able to find the optimal policy for linear MDPs with a strong sample complexity bound, which is the first such result for Q-learning. We also show that Q-RexDaRe obtains nearly optimal sample complexity in the simpler tabular setting despite not using any explicit variance reduction technique. See Table 1 for a comparison of our guarantees against the state-of-the-art results for linear MDPs and the tabular setting. Organization We review related works in the next subsection. In Section 2 we develop the MDP problem which we seek to solve, and we present our algorithm, Q-Rex, in Section 3. The main theoretical results are presented in Section 4. We present a brief overview of the analysis in Section 5 and present our experiments in Section A. Most of the formal proofs are relegated to the appendix. 1.1 RELATED WORKS. Tabular Q-learning Tabular MDPs are the most basic examples of MDPs, where the state space (S) and the action space (A) are both finite and the Q-values are represented by assigning a unique coordinate to each state-action pair.
This setting has been well studied over the last few decades, and convergence guarantees have been derived in both the asymptotic and non-asymptotic regimes for popular model-free and model-based algorithms. Azar et al. (2013) show that the minimax lower bound on the sample complexity of obtaining the optimal Q-function up to error $\epsilon$ is $|S||A|/((1-\gamma)^{3}\epsilon^{2})$, where $\gamma$ is the discount factor. Near sample-optimal estimation is achieved by several model-based algorithms (Agarwal et al., 2020; Li et al., 2020a) and model-free algorithms like variance-reduced Q-learning (Wainwright, 2019b; Li et al., 2020b). Li et al. (2021) also show that vanilla Q-learning with standard step sizes, even in the synchronous data setting, where transitions corresponding to each state-action pair are sampled independently at each step, suffers from a sample complexity of $|S||A|/((1-\gamma)^{4}\epsilon^{2})$, and the best known bound in the asynchronous setting, where data is derived from a Markovian trajectory and only one Q value is updated in each step, is $|S||A|/((1-\gamma)^{5}\epsilon^{2})$. These results seem unsatisfactory since $\gamma \sim 0.99$ (or even 0.999) in most practical applications. In contrast, our algorithm Q-Rex with asynchronous data has a sample complexity that matches the Q-learning bound with synchronous data, and its data-efficient variant Q-RexDaRe has near minimax-optimal sample complexity (see Table 1). For details on model-based algorithms and previous works with suboptimal guarantees, we refer to (Agarwal et al., 2020; Li et al., 2020b). Q-learning with Linear Function Approximation Even though tabular Q-learning is fairly well understood, it is intractable in most practical RL problems due to the large size of the state space S. Therefore, Q-learning is deployed with function approximation. Linear function approximation is the simplest such case, where the Q-function is approximated with a linear function of the 'feature embedding' associated with each state-action pair.
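In display form, the tabular sample-complexity scalings just discussed read as follows (constants and logarithmic factors omitted; the tilde notation is our shorthand, not the paper's):

```latex
\begin{align*}
  \text{minimax lower bound (Azar et al., 2013):} \quad
    & \tilde{\Omega}\!\left(\frac{|S||A|}{(1-\gamma)^{3}\,\epsilon^{2}}\right) \\
  \text{vanilla Q-learning, synchronous (Li et al., 2021):} \quad
    & \tilde{\Theta}\!\left(\frac{|S||A|}{(1-\gamma)^{4}\,\epsilon^{2}}\right) \\
  \text{vanilla Q-learning, asynchronous (best known):} \quad
    & \tilde{O}\!\left(\frac{|S||A|}{(1-\gamma)^{5}\,\epsilon^{2}}\right)
\end{align*}
```

The gap between the rows is entirely in the power of the effective horizon $1/(1-\gamma)$, which is why a discount factor of 0.99 or 0.999 makes the difference so consequential in practice.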
However, Q-learning can be shown to diverge even in the simplest cases, as was first noticed in (Baird, 1995), which also introduced residual gradient methods that converge provably but rather slowly. We will only discuss recent works closest to our work and refer the reader to (Carvalho et al., 2020; Jin et al., 2020; Yang & Wang, 2019) for a full survey of various works in this direction. Yang & Wang (2019) consider MDPs with approximate linear function representation, which is more general than the assumptions in this work, but require additional assumptions like a finite state-action space and the existence of known anchor subsets, which might not hold in practice. Our results, on the other hand, hold with standard assumptions, with asynchronous updates, and can handle infinite state-action spaces (see Theorem 1). Similarly, Chen et al. (2019) consider Q-learning with linear function approximation for finite state-action spaces, where the representation need not be exact. But the result requires a rather restrictive assumption that the offline policy is close to the optimal policy. In contrast, we consider the less general but well-studied case of linear MDPs and provide global convergence without restrictive assumptions on the behaviour policy. Under the most general conditions, Maei et al. (2010) present the Greedy-GQ algorithm, which converges to a point asymptotically instead of diverging. Similar results are obtained by Carvalho et al. (2020) for Coupled Q-learning, a 2-timescale variant of Q-learning which uses a version of OTL and ER¹. This algorithm experimentally resolves the popular counter-examples provided by (Tsitsiklis & Van Roy, 1996; Baird, 1995). Carvalho et al. (2020, Theorem 2) provide value function guarantees for the point to which the algorithm converges (albeit without sample complexity guarantees).
However, the assumptions for this result are very stringent, and even in the case of tabular Q-learning the method might not converge to the optimal policy. Experience Replay and Reverse Experience Replay Reinforcement learning involves learning on-the-go with Markovian data, which are highly correlated. Iterative learning algorithms like Q-learning can sometimes get coupled to the Markov chain, resulting in sub-optimal convergence. Experience replay (ER) was introduced in order to mitigate this drawback (Lin, 1992): here, a large FIFO buffer of a fixed size stores the streaming data, and the learning algorithm samples a data point uniformly at random from this buffer at each step. This makes the samples look roughly i.i.d. due to mixing, thus breaking the harmful correlations. (¹The version of ER used in Carvalho et al. (2020) makes the setting completely synchronous, as opposed to the asynchronous setting considered by us.) Reverse experience replay (RER) is a form of experience replay which stores a buffer just like ER but processes the data points in the reverse of the order in which they were stored. This was introduced in entirely different contexts by (Rotinov, 2019; Jain et al., 2021b;a). In the context of this work, we note that reverse-order traversal endows a supermartingale structure which yields the strong concentration result in Theorem 4, which is not possible with forward-order traversal. Yet another way to look at RER is through the lens of dynamic programming (Bertsekas, 2011), where the value function is evaluated backwards starting from time T to time 1. Similarly, RER bootstraps Q values on future Q values instead of past Q values. Online Target Learning OTL (Mnih et al., 2015) maintains two different Q-values (called the online Q-value and the target Q-value), where the target Q-value is held constant for some time and only the online Q-value is updated by 'bootstrapping' to the target.
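The distinction between standard ER and RER is purely one of access order over the same FIFO buffer. A minimal sketch (names are ours, for illustration):

```python
import random
from collections import deque

class ReplayBuffer:
    """FIFO buffer illustrating ER sampling vs. reverse (RER) traversal."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition when full
        self.data = deque(maxlen=capacity)

    def push(self, transition):
        self.data.append(transition)

    def sample_er(self):
        # Standard ER: draw uniformly at random, so consecutive samples
        # look roughly i.i.d. once the chain has mixed.
        return random.choice(self.data)

    def iterate_rer(self):
        # RER: process the stored transitions in reverse insertion order.
        return list(reversed(self.data))

buf = ReplayBuffer(capacity=4)
for t in range(6):
    buf.push(t)              # pushes 0..5; capacity 4 keeps 2, 3, 4, 5
newest_first = buf.iterate_rer()
```

The reverse pass is what yields the supermartingale structure mentioned above: each update only touches transitions that occurred after the ones processed so far.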
After a number of such iterations, the target Q-value is set to the current online Q-value. OTL thus attempts to mitigate the destabilizing effects of bootstrapping by removing the 'moving target'. This technique has been noted to allow for an unbiased estimation of the Bellman operator (Fan et al., 2020), and when trained with large batch sizes it is similar to the well-known neural fitted Q-iteration (Riedmiller, 2005). | This paper studies the convergence of Q-learning with two popular empirical heuristics: (a) online target learning and (b) experience replay. The analysis in this paper is established in the off-policy setting along with fast-mixing and minimum-reachability assumptions. Main convergence results are provided for the linear MDP, where all relevant quantities (transition, reward, and Q functions) can be represented as linear functions. The authors propose two variations of Q-learning, namely Q-Rex and Q-RexDaRe. Convergence results are also given for the two algorithms, with a sample-optimality guarantee for Q-RexDaRe. | SP:c1ee4177eed2e1e7a2993bb77366db6ab6d761d7 |
Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs | Q-learning is a popular Reinforcement Learning (RL) algorithm which is widely used in practice with function approximation (Mnih et al., 2015). In contrast, existing theoretical results are pessimistic about Q-learning. For example, Baird (1995) shows that Q-learning does not converge even with linear function approximation for linear MDPs. Furthermore, even for tabular MDPs with synchronous updates, Q-learning was shown to have sub-optimal sample complexity (Li et al., 2021; Azar et al., 2013). The goal of this work is to bridge the gap between the practical success of Q-learning and the relatively pessimistic theoretical results. The starting point of our work is the observation that, in practice, Q-learning is used with two important modifications: (i) training with two networks, called the online network and the target network, simultaneously (online target learning, or OTL), and (ii) experience replay (ER) (Mnih et al., 2015). While these have been observed to play a significant role in the practical success of Q-learning, a thorough theoretical understanding of how the two modifications improve the convergence behavior of Q-learning has been missing in the literature. By carefully combining Q-learning with OTL and reverse experience replay (RER) (a form of experience replay), we present novel methods Q-Rex and Q-RexDaRe (Q-Rex + data reuse). We show that Q-Rex efficiently finds the optimal policy for linear MDPs and provide non-asymptotic bounds on its sample complexity, the first such result for a Q-learning method for linear MDPs under standard assumptions. Furthermore, we demonstrate that Q-RexDaRe in fact achieves near-optimal sample complexity in the tabular setting, improving upon the existing results for vanilla Q-learning. 1 INTRODUCTION.
Reinforcement Learning ( RL ) has been shown to be highly successful for a variety of practical problems in the realm of long term decision making ( Mnih et al. , 2015 ) . Several classical works have studied RL methods like TD-learning , Q-learning , SARSA and their variants for many decades ( Sutton & Barto , 2018 ; Bertsekas , 2011 ; Borkar & Meyn , 2000 ; Sutton , 1988 ; Tsitsiklis & Van Roy , 1997 ; Watkins & Dayan , 1992 ; Watkins , 1989 ) but the guarantees are mostly asymptotic and therefore do not sufficiently answer important questions that are relevant to practitioners who struggle with constraints on the number of data points and the computation power . Recent works provide non-asymptotic results for a variety of important settings ( Kearns & Singh , 1999 ; Even-Dar et al. , 2003 ; Beck & Srikant , 2012 ; Qu & Wierman , 2020 ; Ghavamzadeh et al. , 2011 ; Bhandari et al. , 2018 ; Chen et al. , 2020 ; 2019 ; Dalal et al. , 2018a ; b ; Doan et al. , 2020 ; Gupta et al. , 2019 ; Srikant & Ying , 2019 ; Weng et al. , 2020 ; Xu & Gu , 2020 ; Yang & Wang , 2019 ; Zou et al. , 2019 ) . Despite a large body of work , several aspects of fundamental methods like Q-learning ( Watkins & Dayan , 1992 ) are still ill-understood . Q-learning ’ s simplicity and the ability to learn from off-policy data makes it attractive to the practitioner . However , theoretical analyses show that even with linear function approximation and when the approximation is exact , Q-learning can fail to converge even in simple examples ( Baird , 1995 ; Boyan & Moore , 1995 ; Tsitsiklis & Van Roy , 1996 ) . Furthermore , even in the simple case of tabular RL with synchronous updates , Q-learning is known to have suboptimal sample complexity ( Wainwright , 2019a ; Li et al. , 2021 ) . Despite the negative results , Q-learning has been deployed with tremendous success in practice . 
The practitioners, however, use Q-learning with "heuristic" modifications like experience replay (ER) and online target learning (OTL). ER is used to alleviate the issue that the samples obtained in an episode might be highly dependent on each other, whereas OTL helps stabilize the Q iteration. Mnih et al. (2015) conducted extensive experiments to show that both these techniques, along with neural function approximation, are essential for the success of Q-learning. But existing analyses for ER with Q-learning either require stringent assumptions (Carvalho et al., 2020) to ensure convergence to a good Q value, or assume that ER provides i.i.d. samples, which might not hold in practice (Fan et al., 2020; Carvalho et al., 2020). In this paper, we attempt to bridge the gap between theory and practice by rigorously investigating how Q-learning performs with these practical heuristics. To this end, we introduce two model-free algorithms: Q-Rex and its sample-efficient variant Q-RexDaRe, which combine standard Q-learning with OTL and reverse experience replay (RER). RER is a form of ER which was recently introduced to unravel spurious correlations present while learning from Markovian data in the context of system identification (Rotinov, 2019; Jain et al., 2021b). We show that OTL stabilizes the Q value by essentially serving as a variance reduction technique, and RER unravels the spurious correlations present in the off-policy Markovian data to remove inherent biases introduced in vanilla Q-learning. Inclusion of these simple heuristics has surprisingly far-reaching consequences. Firstly, this allows us to show that, unlike vanilla Q-learning, Q-Rex finds the optimal policy for linear MDPs, and allows us to derive non-asymptotic sample complexity bounds. In the tabular setting, Q-Rex, even with asynchronous data, is able to match the best known bounds for Q-learning with synchronous data.
Furthermore, we extend Q-Rex to obtain a new method, Q-RexDaRe, that reuses old samples and admits nearly optimal sample complexity for recovering the optimal Q-function in the tabular setting. Previously, only Q-learning methods with explicit variance-reduction techniques (not popular in practice) (Wainwright, 2019b; Li et al., 2020b) or model-based methods (Agarwal et al., 2020; Li et al., 2020a) were known to achieve such a sample complexity bound. Our experiments show that when the algorithmic parameters are chosen carefully, Q-Rex and its variants outperform both vanilla Q-learning and OTL+ER+Q-learning with the same parameters (see Appendix A). To summarize, in this work, we study Q-learning with practical heuristics like ER and OTL, and propose two concrete methods, Q-Rex and Q-RexDaRe, based on OTL and reverse experience replay, a modification of the standard ER used in practice. We show that Q-Rex is able to find the optimal policy for linear MDPs with a strong sample complexity bound, which is the first such result for Q-learning. We also show that Q-RexDaRe obtains nearly optimal sample complexity in the simpler tabular setting despite not using any explicit variance reduction technique. See Table 1 for a comparison of our guarantees against the state-of-the-art results for linear MDPs and the tabular setting. Organization We review related works in the next subsection. In Section 2 we develop the MDP problem which we seek to solve, and we present our algorithm, Q-Rex, in Section 3. The main theoretical results are presented in Section 4. We present a brief overview of the analysis in Section 5 and present our experiments in Section A. Most of the formal proofs are relegated to the appendix. 1.1 RELATED WORKS. Tabular Q-learning Tabular MDPs are the most basic examples of MDPs, where the state space (S) and the action space (A) are both finite and the Q-values are represented by assigning a unique coordinate to each state-action pair.
This setting has been well studied over the last few decades, and convergence guarantees have been derived in both the asymptotic and non-asymptotic regimes for popular model-free and model-based algorithms. Azar et al. (2013) show that the minimax lower bound on the sample complexity of obtaining the optimal Q-function up to error $\epsilon$ is $|S||A|/((1-\gamma)^{3}\epsilon^{2})$, where $\gamma$ is the discount factor. Near sample-optimal estimation is achieved by several model-based algorithms (Agarwal et al., 2020; Li et al., 2020a) and model-free algorithms like variance-reduced Q-learning (Wainwright, 2019b; Li et al., 2020b). Li et al. (2021) also show that vanilla Q-learning with standard step sizes, even in the synchronous data setting, where transitions corresponding to each state-action pair are sampled independently at each step, suffers from a sample complexity of $|S||A|/((1-\gamma)^{4}\epsilon^{2})$, and the best known bound in the asynchronous setting, where data is derived from a Markovian trajectory and only one Q value is updated in each step, is $|S||A|/((1-\gamma)^{5}\epsilon^{2})$. These results seem unsatisfactory since $\gamma \sim 0.99$ (or even 0.999) in most practical applications. In contrast, our algorithm Q-Rex with asynchronous data has a sample complexity that matches the Q-learning bound with synchronous data, and its data-efficient variant Q-RexDaRe has near minimax-optimal sample complexity (see Table 1). For details on model-based algorithms and previous works with suboptimal guarantees, we refer to (Agarwal et al., 2020; Li et al., 2020b). Q-learning with Linear Function Approximation Even though tabular Q-learning is fairly well understood, it is intractable in most practical RL problems due to the large size of the state space S. Therefore, Q-learning is deployed with function approximation. Linear function approximation is the simplest such case, where the Q-function is approximated with a linear function of the 'feature embedding' associated with each state-action pair.
However, Q-learning can be shown to diverge even in the simplest cases, as was first noticed in (Baird, 1995), which also introduced residual gradient methods that converge provably but rather slowly. We will only discuss recent works closest to our work and refer the reader to (Carvalho et al., 2020; Jin et al., 2020; Yang & Wang, 2019) for a full survey of various works in this direction. Yang & Wang (2019) consider MDPs with approximate linear function representation, which is more general than the assumptions in this work, but require additional assumptions like a finite state-action space and the existence of known anchor subsets, which might not hold in practice. Our results, on the other hand, hold with standard assumptions, with asynchronous updates, and can handle infinite state-action spaces (see Theorem 1). Similarly, Chen et al. (2019) consider Q-learning with linear function approximation for finite state-action spaces, where the representation need not be exact. But the result requires a rather restrictive assumption that the offline policy is close to the optimal policy. In contrast, we consider the less general but well-studied case of linear MDPs and provide global convergence without restrictive assumptions on the behaviour policy. Under the most general conditions, Maei et al. (2010) present the Greedy-GQ algorithm, which converges to a point asymptotically instead of diverging. Similar results are obtained by Carvalho et al. (2020) for Coupled Q-learning, a 2-timescale variant of Q-learning which uses a version of OTL and ER¹. This algorithm experimentally resolves the popular counter-examples provided by (Tsitsiklis & Van Roy, 1996; Baird, 1995). Carvalho et al. (2020, Theorem 2) provide value function guarantees for the point to which the algorithm converges (albeit without sample complexity guarantees).
However, the assumptions for this result are very stringent, and even in the case of tabular Q-learning the method might not converge to the optimal policy. Experience Replay and Reverse Experience Replay Reinforcement learning involves learning on-the-go with Markovian data, which are highly correlated. Iterative learning algorithms like Q-learning can sometimes get coupled to the Markov chain, resulting in sub-optimal convergence. Experience replay (ER) was introduced in order to mitigate this drawback (Lin, 1992): here, a large FIFO buffer of a fixed size stores the streaming data, and the learning algorithm samples a data point uniformly at random from this buffer at each step. This makes the samples look roughly i.i.d. due to mixing, thus breaking the harmful correlations. (¹The version of ER used in Carvalho et al. (2020) makes the setting completely synchronous, as opposed to the asynchronous setting considered by us.) Reverse experience replay (RER) is a form of experience replay which stores a buffer just like ER but processes the data points in the reverse of the order in which they were stored. This was introduced in entirely different contexts by (Rotinov, 2019; Jain et al., 2021b;a). In the context of this work, we note that reverse-order traversal endows a supermartingale structure which yields the strong concentration result in Theorem 4, which is not possible with forward-order traversal. Yet another way to look at RER is through the lens of dynamic programming (Bertsekas, 2011), where the value function is evaluated backwards starting from time T to time 1. Similarly, RER bootstraps Q values on future Q values instead of past Q values. Online Target Learning OTL (Mnih et al., 2015) maintains two different Q-values (called the online Q-value and the target Q-value), where the target Q-value is held constant for some time and only the online Q-value is updated by 'bootstrapping' to the target.
After a number of such iterations, the target Q-value is set to the current online Q-value. OTL thus attempts to mitigate the destabilizing effects of bootstrapping by removing the 'moving target'. This technique has been noted to allow for an unbiased estimation of the Bellman operator (Fan et al., 2020), and when trained with large batch sizes it is similar to the well-known neural fitted Q-iteration (Riedmiller, 2005). | The paper provides sample complexity bounds for a Q-learning based algorithm with a target Q network and experience replay. Contributions: the authors provide an algorithm with proven sample complexity for linear MDPs and the tabular setting, bridging the current state of the art. The analysis relies on common heuristics, an experience replay buffer and a target network, giving these some theoretical grounding. | SP:c1ee4177eed2e1e7a2993bb77366db6ab6d761d7 |
Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs | Q-learning is a popular Reinforcement Learning (RL) algorithm which is widely used in practice with function approximation (Mnih et al., 2015). In contrast, existing theoretical results are pessimistic about Q-learning. For example, Baird (1995) shows that Q-learning does not converge even with linear function approximation for linear MDPs. Furthermore, even for tabular MDPs with synchronous updates, Q-learning was shown to have sub-optimal sample complexity (Li et al., 2021; Azar et al., 2013). The goal of this work is to bridge the gap between the practical success of Q-learning and the relatively pessimistic theoretical results. The starting point of our work is the observation that, in practice, Q-learning is used with two important modifications: (i) training with two networks, called the online network and the target network, simultaneously (online target learning, or OTL), and (ii) experience replay (ER) (Mnih et al., 2015). While these have been observed to play a significant role in the practical success of Q-learning, a thorough theoretical understanding of how the two modifications improve the convergence behavior of Q-learning has been missing in the literature. By carefully combining Q-learning with OTL and reverse experience replay (RER) (a form of experience replay), we present novel methods Q-Rex and Q-RexDaRe (Q-Rex + data reuse). We show that Q-Rex efficiently finds the optimal policy for linear MDPs and provide non-asymptotic bounds on its sample complexity, the first such result for a Q-learning method for linear MDPs under standard assumptions. Furthermore, we demonstrate that Q-RexDaRe in fact achieves near-optimal sample complexity in the tabular setting, improving upon the existing results for vanilla Q-learning. 1 INTRODUCTION.
Reinforcement Learning ( RL ) has been shown to be highly successful for a variety of practical problems in the realm of long term decision making ( Mnih et al. , 2015 ) . Several classical works have studied RL methods like TD-learning , Q-learning , SARSA and their variants for many decades ( Sutton & Barto , 2018 ; Bertsekas , 2011 ; Borkar & Meyn , 2000 ; Sutton , 1988 ; Tsitsiklis & Van Roy , 1997 ; Watkins & Dayan , 1992 ; Watkins , 1989 ) but the guarantees are mostly asymptotic and therefore do not sufficiently answer important questions that are relevant to practitioners who struggle with constraints on the number of data points and the computation power . Recent works provide non-asymptotic results for a variety of important settings ( Kearns & Singh , 1999 ; Even-Dar et al. , 2003 ; Beck & Srikant , 2012 ; Qu & Wierman , 2020 ; Ghavamzadeh et al. , 2011 ; Bhandari et al. , 2018 ; Chen et al. , 2020 ; 2019 ; Dalal et al. , 2018a ; b ; Doan et al. , 2020 ; Gupta et al. , 2019 ; Srikant & Ying , 2019 ; Weng et al. , 2020 ; Xu & Gu , 2020 ; Yang & Wang , 2019 ; Zou et al. , 2019 ) . Despite a large body of work , several aspects of fundamental methods like Q-learning ( Watkins & Dayan , 1992 ) are still ill-understood . Q-learning ’ s simplicity and the ability to learn from off-policy data makes it attractive to the practitioner . However , theoretical analyses show that even with linear function approximation and when the approximation is exact , Q-learning can fail to converge even in simple examples ( Baird , 1995 ; Boyan & Moore , 1995 ; Tsitsiklis & Van Roy , 1996 ) . Furthermore , even in the simple case of tabular RL with synchronous updates , Q-learning is known to have suboptimal sample complexity ( Wainwright , 2019a ; Li et al. , 2021 ) . Despite the negative results , Q-learning has been deployed with tremendous success in practice . 
Practitioners , however , use Q-learning with “ heuristic ” modifications like experience replay ( ER ) and online target learning ( OTL ) . ER is used to alleviate the issue that the samples obtained in an episode might be highly dependent on each other , whereas OTL helps stabilize the Q iteration . Mnih et al . ( 2015 ) conducted extensive experiments to show that both these techniques , along with neural function approximation , are essential for the success of Q-learning . But existing analyses of ER with Q-learning either require stringent assumptions ( Carvalho et al. , 2020 ) to ensure convergence to a good Q value , or assume that ER provides i.i.d . samples , which might not hold in practice ( Fan et al. , 2020 ; Carvalho et al. , 2020 ) . In this paper , we attempt to bridge the gap between theory and practice by rigorously investigating how Q-learning performs with these practical heuristics . To this end , we introduce two model-free algorithms : Q-Rex and its sample-efficient variant Q-RexDaRe , which combine standard Q-learning with OTL and reverse experience replay ( RER ) . RER is a form of ER which was recently introduced to unravel spurious correlations present while learning from Markovian data in the context of system identification ( Rotinov , 2019 ; Jain et al. , 2021b ) . We show that OTL stabilizes the Q value by essentially serving as a variance reduction technique , and RER unravels the spurious correlations present in the off-policy Markovian data to remove inherent biases introduced in vanilla Q-learning . Inclusion of these simple heuristics has surprisingly far-reaching consequences . Firstly , this allows us to show that , unlike vanilla Q-learning , Q-Rex finds the optimal policy for linear MDPs , and it allows us to derive non-asymptotic sample complexity bounds . In the tabular setting , Q-Rex , even with asynchronous data , is able to match the best known bounds for Q-learning with synchronous data .
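The combination described above — a frozen target (OTL) plus reverse-order replay (RER) — can be illustrated in the tabular case. This is a minimal sketch, not the authors' exact Q-Rex algorithm (learning-rate schedules, buffer gaps, and the linear-MDP machinery are omitted); the gym-style `env` and `policy` interfaces are assumptions for illustration.

```python
import numpy as np

def q_rex_sketch(env, num_phases, buffer_size, lr, gamma, policy):
    """Illustrative sketch (not the authors' exact Q-Rex): tabular Q-learning
    combining online target learning (OTL) with reverse experience replay (RER)."""
    nS, nA = env.observation_space.n, env.action_space.n
    q_online = np.zeros((nS, nA))
    q_target = np.zeros((nS, nA))          # held fixed within each phase (OTL)
    s = env.reset()
    for _ in range(num_phases):
        # collect a buffer of consecutive (Markovian) transitions
        buffer = []
        for _ in range(buffer_size):
            a = policy(s)
            s2, r, done, _ = env.step(a)
            buffer.append((s, a, r, s2))
            s = env.reset() if done else s2
        # RER: replay the buffer in *reverse* temporal order,
        # bootstrapping against the frozen target values
        for (st, at, rt, st2) in reversed(buffer):
            td = rt + gamma * q_target[st2].max() - q_online[st, at]
            q_online[st, at] += lr * td
        q_target = q_online.copy()          # OTL: sync target at phase end
    return q_online
```

Within a phase the target values never move, so each RER pass resembles one step of value iteration applied backwards along the observed trajectory.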
Furthermore , we extend Q-Rex to obtain a new method Q-RexDaRe that reuses old samples and admits nearly optimal sample complexity for recovering the optimal Q-function in the tabular setting . Previously , only Q-learning methods with explicit variance-reduction techniques ( not popular in practice ) ( Wainwright , 2019b ; Li et al. , 2020b ) or model-based methods ( Agarwal et al. , 2020 ; Li et al. , 2020a ) were known to achieve such sample complexity bounds . Our experiments show that when the algorithmic parameters are chosen carefully , Q-Rex and its variants outperform both vanilla Q-learning and OTL+ER+Q-learning with the same parameters ( see Appendix A ) . To summarize , in this work we study Q-learning with practical heuristics like ER and OTL , and propose two concrete methods , Q-Rex and Q-RexDaRe , based on OTL and reverse experience replay – a modification of the standard ER used in practice . We show that Q-Rex is able to find the optimal policy for linear MDPs with a strong sample complexity bound , which is the first such result for Q-learning . We also show that Q-RexDaRe obtains nearly optimal sample complexity in the simpler tabular setting despite not using any explicit variance reduction technique . See Table 1 for a comparison of our guarantees against the state-of-the-art results for the linear MDP and tabular settings . Organization We review related works in the next subsection . In Section 2 we formulate the MDP problem which we seek to solve , and we present our algorithm Q-Rex in Section 3 . The main theoretical results are presented in Section 4 . We present a brief overview of the analysis in Section 5 and present our experiments in Appendix A . Most of the formal proofs are relegated to the appendix . 1.1 RELATED WORKS . Tabular Q-learning Tabular MDPs are the most basic examples of MDPs where the state space ( S ) and the action space ( A ) are both finite and the Q-values are represented by assigning a unique coordinate to each state-action pair .
This setting has been well studied over the last few decades and convergence guarantees have been derived in both the asymptotic and non-asymptotic regimes for popular model-free and model-based algorithms . Azar et al . ( 2013 ) show that the minimax lower bound on the sample complexity of obtaining the optimal Q-function up to error $\epsilon$ is $\frac{|S||A|}{(1-\gamma)^{3}\epsilon^{2}}$ , where γ is the discount factor . Near sample-optimal estimation is achieved by several model-based algorithms ( Agarwal et al. , 2020 ; Li et al. , 2020a ) and model-free algorithms like variance-reduced Q-learning ( Wainwright , 2019b ; Li et al. , 2020b ) . Li et al . ( 2021 ) also show that vanilla Q-learning with standard step sizes , even in the synchronous data setting – where transitions corresponding to each state-action pair are sampled independently at each step – suffers from a sample complexity of $\frac{|S||A|}{(1-\gamma)^{4}\epsilon^{2}}$ , and the best known bound in the asynchronous setting – where data is derived from a Markovian trajectory and only one Q value is updated in each step – is $\frac{|S||A|}{(1-\gamma)^{5}\epsilon^{2}}$ . These results seem unsatisfactory since γ ∼ 0.99 ( or even 0.999 ) in most practical applications . In contrast , our algorithm Q-Rex with asynchronous data has a sample complexity that matches the Q-learning bound with synchronous data , and its data-efficient variant Q-RexDaRe has near minimax-optimal sample complexity ( see Table 1 ) . For details on model-based algorithms , and on previous works with suboptimal guarantees , we refer to ( Agarwal et al. , 2020 ; Li et al. , 2020b ) . Q-learning with Linear Function Approximation Even though tabular Q-learning is fairly well understood , it is intractable in most practical RL problems due to the large size of the state space S. Therefore , Q-learning is deployed with function approximation . Linear function approximation is the simplest such case , where the Q-function is approximated with a linear function of the ‘ feature embedding ’ associated with each state-action pair .
However , Q-learning can be shown to diverge even in the simplest cases , as was first noticed in ( Baird , 1995 ) , which also introduced residual gradient methods that converge provably , albeit rather slowly . We will only discuss recent works closest to ours and refer the reader to ( Carvalho et al. , 2020 ; Jin et al. , 2020 ; Yang & Wang , 2019 ) for a full survey of various works in this direction . Yang & Wang ( 2019 ) consider MDPs with approximate linear function representation - which is more general than the assumptions in this work but requires additional assumptions like a finite state-action space and the existence of known anchor subsets , which might not hold in practice . Our results , on the other hand , hold under standard assumptions , with asynchronous updates , and can handle infinite state-action spaces ( see Theorem 1 ) . Similarly , Chen et al . ( 2019 ) consider Q-learning with linear function approximation for finite state-action spaces , where the approximation need not be exact . But the result requires a rather restrictive assumption that the offline policy is close to the optimal policy . In contrast , we consider the less general but well-studied case of linear MDPs and provide global convergence without restrictive assumptions on the behaviour policy . Under the most general conditions , Maei et al . ( 2010 ) present the Greedy-GQ algorithm , which converges to a point asymptotically instead of diverging . Similar results are obtained by Carvalho et al . ( 2020 ) for Coupled Q-learning , a 2-timescale variant of Q-learning which uses a version of OTL and ER1 . This algorithm experimentally resolves the popular counter-examples provided by ( Tsitsiklis & Van Roy , 1996 ; Baird , 1995 ) . Carvalho et al . ( 2020 , Theorem 2 ) provides value function guarantees for the point to which the algorithm converges ( albeit without sample complexity guarantees ) .
However , the assumptions for this result are very stringent , and even in the case of tabular Q-learning the method might not converge to the optimal policy . ( 1The version of ER used in Carvalho et al . ( 2020 ) makes the setting completely synchronous as opposed to the asynchronous setting considered by us . ) Experience Replay and Reverse Experience Replay Reinforcement learning involves learning on-the-go with Markovian data , which are highly correlated . Iterative learning algorithms like Q-learning can sometimes get coupled to the Markov chain , resulting in sub-optimal convergence . Experience replay ( ER ) was introduced in order to mitigate this drawback ( Lin , 1992 ) – here a large FIFO buffer of a fixed size stores the streaming data and the learning algorithm samples a data point uniformly at random from this buffer at each step . This makes the samples look roughly i.i.d . due to mixing , thus breaking the harmful correlations . Reverse experience replay ( RER ) is a form of experience replay which stores a buffer just like ER but processes the data points in the reverse of the order in which they were stored . It was introduced in entirely different contexts by ( Rotinov , 2019 ; Jain et al. , 2021b ; a ) . In the context of this work , we note that reverse-order traversal endows a supermartingale structure which yields the strong concentration result in Theorem 4 , which is not possible with forward-order traversal . Yet another way to look at RER is through the lens of dynamic programming ( Bertsekas , 2011 ) – where the value function is evaluated backwards , from time T to time 1 . Similarly , RER bootstraps Q values on future Q values instead of past Q values . Online Target Learning OTL ( Mnih et al. , 2015 ) maintains two different Q-values ( called the online Q-value and the target Q-value ) where the target Q-value is held constant for some time and only the online Q-value is updated by ‘ bootstrapping ’ to the target .
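The mechanical difference between ER and RER described above comes down to how a stored trajectory is traversed; a toy buffer makes this concrete (the class and method names are illustrative, not from the paper):

```python
import random
from collections import deque

class ReplayBuffer:
    """Toy buffer contrasting standard ER with reverse experience replay (RER)."""
    def __init__(self, capacity):
        self.data = deque(maxlen=capacity)   # FIFO: oldest items fall off

    def push(self, transition):
        self.data.append(transition)

    def sample_er(self, k):
        # standard ER: uniform random draws, roughly i.i.d. once mixed
        return random.choices(list(self.data), k=k)

    def iterate_rer(self):
        # RER: traverse the stored trajectory in reverse temporal order,
        # so each update sees the future before the past (cf. dynamic programming)
        return list(reversed(self.data))

buf = ReplayBuffer(capacity=4)
for t in range(6):
    buf.push(t)                # transitions 2, 3, 4, 5 survive (capacity 4)
print(buf.iterate_rer())       # [5, 4, 3, 2]
```

Only `iterate_rer` preserves the trajectory structure, which is what the supermartingale argument mentioned above exploits; `sample_er` destroys temporal order by design.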
After a number of such iterations , the target Q-value is set to the current online Q-value . OTL thus attempts to mitigate the destabilizing effects of bootstrapping by removing the ‘ moving target ’ . This technique has been noted to allow for an unbiased estimation of the Bellman operator ( Fan et al. , 2020 ) and , when trained with large batch sizes , is similar to the well-known neural fitted Q-iteration ( Riedmiller , 2005 ) . | This paper combines Q-learning with online target learning and reverse experience replay and obtains convergence results for linear MDPs. These results are the first of their kind and are shown to be near-optimal in their respective regimes. In particular, for the asynchronous setting, Q-Rex has sample complexity of order $\frac{|\mathcal{S}||\mathcal{A}|}{\epsilon^{2}(1-\gamma)^{4}}$ and Q-RexDaRe has order $\frac{\max \left(\bar{d}, \frac{1}{\epsilon^{2}}\right)}{\mu_{\min }(1-\gamma)^{3}}$. The latter nearly matches the standard lower bound. | SP:c1ee4177eed2e1e7a2993bb77366db6ab6d761d7
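A minimal sketch of one OTL phase with linear function approximation, in the spirit of the description above: the online weights bootstrap against frozen target weights, which are synced only when the phase ends. The function names and interfaces here are assumptions for illustration, not the paper's code.

```python
import numpy as np

def otl_linear_q_phase(w_online, w_target, phi, transitions, lr, gamma, actions):
    """One OTL phase (sketch) with linear function approximation
    Q(s, a) = <phi(s, a), w>. The online weights are updated by bootstrapping
    against the *frozen* target weights; the target is synced only at the end."""
    for (s, a, r, s2) in transitions:
        # greedy value at s2 under the frozen target weights
        target_value = max(phi(s2, a2) @ w_target for a2 in actions)
        td = r + gamma * target_value - phi(s, a) @ w_online
        w_online = w_online + lr * td * phi(s, a)
    return w_online, w_online.copy()       # new online and synced target weights
```

Because `w_target` never moves inside the phase, each update regresses toward a fixed Bellman backup, which is the variance-reduction effect attributed to OTL above.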
Visual hyperacuity with moving sensor and recurrent neural computations | 1 INTRODUCTION . Biological vision is known to be a dynamical process . Two factors contributing to these dynamics are eye motion and recurrent neuronal connections in the brain . Our eyes move constantly with movements that , kinematically , can be divided into saccades - quick gaze shifts , and drifts - small scanning movements between saccades ( often referred to as “ fixational drift ” ) ( Rucci et al. , 2018 ) . These dynamical aspects of vision are reflected only partially in contemporary computer vision systems . Some works addressed large scale shifts in visual attention resembling saccades ( Mnih et al. , 2014 ) . Others explored properties and benefits of recurrent top down connections ( Nayebi et al. , 2018 ) , reminiscent of top-down processing in biological vision ( Hochstein & Ahissar , 2002 ) . Notably , the dynamics of low-level visual processes , occurring early in the bottom-up visual hierarchy and sensitive to the fixational drift ( Snodderly et al. , 2001 ; Ölveczky et al. , 2003 ; Malevich et al. , 2020 ; Hohl & Lisberger , 2011 ) , remains largely overlooked in models of vision as well as in bio-inspired computer vision systems . In fact , since the seminal studies by Hubel & Wiesel ( 1962 ) , selectivity in primary visual cortex has been traditionally described in terms of static spatial filters ( e.g. , simple and complex spatial fields or Gabors of varying frequency and orientation ) . In convolutional neural networks ( CNNs ) ( Krizhevsky et al. , 2012 ) , which have dominated computer vision over the last decade , features resembling the spatial filters deduced from biological studies emerge spontaneously over the course of training ( Zeiler & Fergus , 2014 ; Lindsey et al. , 2019 ) . In some cases , remarkable correlations were found between spatial neural representations in CNNs and those identified in the biological brain ( Yamins & DiCarlo , 2016 ) . 
On the other hand , temporal dynamics , and sensitivity to temporal features , characterize visual neurons throughout the visual system , from retinal receptors and ganglion cells to thalamic and cortical neurons ( Berry et al. , 1997 ; Chichilnisky , 2001 ; Lee et al. , 1981 ; Levick et al. , 1972 ; Reinagel & Reid , 2000 ; Shimaoka et al. , 2018 ) . Existing evidence suggests that both eye motion ( Snodderly et al. , 2001 ; Ahissar & Arieli , 2001 ; Ölveczky et al. , 2003 ; Malevich et al. , 2020 ; Gruber et al. , 2021 ; Hohl & Lisberger , 2011 ) and recurrent neuronal connectivity ( Bejjanki et al. , 2011 ; Samonds et al. , 2013 ) contribute to this temporal dynamics . Furthermore , it was found that recurrent connections improve correlates of artificial neural networks to neural activity in visual cortical areas ( Kar et al. , 2019 ; Kubilius et al. , 2019 ; Kietzmann et al. , 2019 ) . One niche where spatio-temporal computation is probably necessary is the perception of tiny objects . It is well known that the acuity of biological vision is not limited by the spatial resolution of retinal photoreceptors ( “ visual hyperacuity ” ; Westheimer ( 2009 ) ; Barlow ( 1979 ) ) . Vernier acuity , for example , is dramatically higher than might be expected from pure spatial acuity derived from the photoreceptor density in the retinal mosaic ( Westheimer , 2009 ) . Whether hyperacuity is obtained via spatial , temporal , or spatio-temporal mechanisms is not yet known ( Rucci et al. , 2018 ) . In any case , it is evident that the visual processing allowing hyperacuity , or perception of any tiny stimulus , should cope with the fixational drift ; if it doesn ’ t , the drift , whose amplitude is at least two orders of magnitude larger than the smallest perceivable spatial offsets , would impair acuity ( Ahissar & Arieli , 2001 ; Rucci et al. , 2018 ; Ratnam et al. , 2017 ) . 
The same drift motion could potentially improve acuity if spatio-temporal computations are employed . Such computations can be based on the emphasis of high-frequency spatial details ( Rucci et al. , 2007 ) , temporal coding of spatial offsets ( Ahissar & Arieli , 2001 ; 2012 ) , Bayesian inference ( Anderson et al. , 2020 ) , or on any other derivative of the interactions between ocular motion and the external image . Furthermore , it is reasonable to attribute such spatio-temporal computations to early visual areas , which are known to exhibit faster dynamics and shorter integration windows compared to regions upstream in the visual processing chain ( Gauthier et al. , 2012 ) . Indeed , it has been shown that the recurrent neuronal circuitry in early visual areas could enable countering the blurring from retinal motion ( Burak et al. , 2010 ) . Using the information available from over-sampling low-resolution images has an extensive history in computer vision as part of the field of super-resolution ( Milanfar , 2017 ) . Multi-image super-resolution ( MISR ) ( Farsiu et al. , 2004 ) , distinguished from single-image super-resolution ( Glasner et al. , 2009 ; Dong et al. , 2015 ) , aims to reconstruct high-resolution images from a set of low-resolution ones ( Arefin et al. , 2020 ; Ge et al. , 2018 ; Bhat et al. , 2021 ; Li et al. , 2017 ) . An adjacent field of research , low-resolution object recognition , investigates algorithms that maximize performance on a given task ( Xi et al. , 2020 ; Ge et al. , 2018 ) . Both fields use low-resolution images as input but differ in the goal of training and evaluation . In this paper , we introduce a classifier that exploits spatio-temporal computations in early layers to perceive tiny images . More specifically , we trained a convolutional neural network with recurrent connectivity ( Arefin et al. , 2020 ) introduced into its early layers .
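As a concrete instance of the MISR idea mentioned above, a shift-and-add sketch: when the sub-pixel offsets between low-resolution frames are known (a strong assumption; real pipelines must estimate them by registration), the frames can be placed on a finer grid and averaged. This is a classic illustration, not the method of any of the works cited.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Minimal multi-image super-resolution sketch: place each low-res frame
    on a 'factor'-times finer grid at its known offset, then average.
    frames: list of (h, w) arrays; shifts: matching (dy, dx) offsets given in
    high-resolution pixel units (assumed known here)."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        for i in range(h):
            for j in range(w):
                y, x = i * factor + dy, j * factor + dx
                if 0 <= y < acc.shape[0] and 0 <= x < acc.shape[1]:
                    acc[y, x] += frame[i, j]
                    cnt[y, x] += 1
    return acc / np.maximum(cnt, 1)   # zeros remain where no sample landed
```

With `factor` frames per axis whose shifts tile the fine grid, every high-resolution pixel receives at least one sample, which is exactly the information that sensor drift provides for free.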
The network receives a sequence of low-resolution images generated via sensor motion mimicking ocular drift . We used high-resolution images to obtain a set of features that were then used to facilitate learning in a teacher-student framework ( Hinton et al. , 2015 ) . The outcome is a dynamical classifier that suffers only a small drop in accuracy under a significant decrease in spatial resolution , a decrease that substantially impairs the accuracy of a comparable static feed-forward classifier . Using a novel generative model , we found that our dynamical classifier developed features that were primarily sensitive to spatial changes , others that were primarily sensitive to temporal changes , and a majority that exhibited sensitivity to mixed spatio-temporal patterns . Finally , when examining the correlations between patterns of motion and accuracy of classification , we observed that curved trajectories are favorable for recognition , consistent with recent findings on the curvature of fixational drift trajectories in humans ( Intoy & Rucci , 2020 ; Gruber & Ahissar , 2020 ) . 2 RESULTS . 2.1 TASK AND MODELS . To create a synthetic setting reminiscent of ocular drift , we used images from the popular CiFAR datasets ( Krizhevsky et al. , 2009 ) , embedded in a large ( 200x200 pixel ) scene padded by zeros . Sensor position was defined in units of pixels on the scene and its motion was modeled by a stochastic process that is discussed below . The sensor ’ s frames were obtained by cropping a 32x32-pixel window from the scene around the sensor position . Resolution was then reduced to 8x8 using a standard OpenCV ( Bradski , 2000 ) function ( one that does not include an anti-aliasing filter ) with bi-cubic interpolation ( Fig . 1A ) . A ResNet50 ( He et al. , 2016 ) network pre-trained on ImageNet ( Deng et al. , 2009 ) , which is available as part of the Keras package ( Chollet , 2015 ) , was used as the reference model .
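The sampling pipeline just described can be approximated in a few lines. This sketch substitutes plain stride subsampling for the paper's OpenCV bicubic resize (neither applies an anti-aliasing filter), so it is illustrative rather than exact:

```python
import numpy as np

def sample_frame(scene, cx, cy, window=32, out=8):
    """Sketch of the sampling pipeline described above: crop a window-sized
    patch around the sensor position (cx, cy) from a zero-padded scene, then
    reduce resolution. Stride subsampling stands in for cv2.resize bicubic."""
    half = window // 2
    patch = scene[cy - half: cy + half, cx - half: cx + half]
    step = window // out
    return patch[::step, ::step]          # (out, out) low-resolution frame

# build a 200x200 zero-padded scene with a 32x32 image at its center
scene = np.zeros((200, 200))
scene[84:116, 84:116] = np.random.rand(32, 32)
frame = sample_frame(scene, cx=100, cy=100)
```

A drift trajectory then amounts to calling `sample_frame` with a slowly varying `(cx, cy)`, producing the low-resolution time series fed to the classifier.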
The model was fine-tuned on one of the CiFAR datasets , reaching accuracies of 96.83 and 82.94 percent for CiFAR-10 and CiFAR-100 , respectively ( Table 1 , Standard resolution ( 32x32 ) ) . In order to feed the 32x32 pix CiFAR images to the network , images were upsampled by a factor of 7 to ( 32x7 ) x ( 32x7 ) pix ( i.e . 224x224 pix , which are the dimensions of ImageNet images ) . In order to verify the generality of our conclusions , we tested another , more compact variant of the reference CNN with 3M parameters . This smaller network , which we refer to as Small-net ( Tables S7 , S8 ) , receives 32x32 pix images as input , eliminating the need to upsample the CiFAR images before feeding them into the network . This smaller network also simplified the analysis of internal representations , as explained below . 2.1.1 TRAINING . We applied a feature learning paradigm ( Hinton et al. , 2015 ) while using our reference networks as teachers for the dynamical recurrent classifier ( DRC ) student . Typical CNNs perform a series of spatial pooling operations . Max pooling layers in the reference CNNs effectively reduce spatial resolution while preserving relevant information about the underlying scene . To develop our DRC , we exploited this spatial pooling line-up . We thus took instances of trained CNNs and replaced their bottom layers with recurrent convolutional networks ( Fig . 1B ) . Specifically , we used a stack of ConvGRU layers ( Ballas et al. , 2015 ; Van Valen et al. , 2016 ) without spatial poolings to replace the original network all the way from the input to the point where the CNN ’ s spatial resolution is reduced by the desired factor ( Table S3 ) . In our case , the resolution was decreased by a factor of 4 ; therefore , the appropriate resolution was achieved after the second max-pooling layer .
At this point , considering the 2 DRC systems we have tested , the resolution of the ResNet50-based teacher is ( 8x7 ) x ( 8x7 ) ( the ’ x7 ’ factor is due to the upsampling of the original 32x32 images by a factor of 7 to ( 32x7 ) x ( 32x7 ) ) , while the Small-net-based teacher resolution is 8x8 . We refer to the bottom recurrent part of the DRC as the DRC front-end ( DRC-FE ) . For the rest of the processing stack we reuse the reference ( teacher ) network architecture ( either ResNet50 or Small-net ) . We refer to this reused part of the DRC as the DRC back-end ( DRC-BE ) ( Fig . 1B ) . We trained the DRC in two steps - first , the DRC-FE was trained to reproduce features of the teacher network . Here we used mean-squared loss between the teacher network and the DRC-FE ( other optimization goals , such as mean absolute loss and cosine similarity , resulted in very similar performance and are not shown ) . Positional data were concatenated with the image time series ; see Appendix B.4 for further details . Next , the DRC-BE was fine-tuned using cross-entropy loss . Our model was mostly implemented in the Keras package ( Chollet , 2015 ) , with the convolutional GRU layer adapted from the project of ( Van Valen et al. , 2016 ) 1 . ( 1Anonymized code is provided as supplementary material . ) 2.2 PERFORMANCE . 2.2.1 BASELINE . In order to evaluate the performance improvement which can be attributed to the unique architecture of the DRC , we considered a few baseline solutions . The accuracy of the ResNet50 reference ( teacher ) network when applied to a single low-resolution image was chosen as a simplistic baseline . The performance of this network , shown in Table 1 ( ’ Naive training ’ ) , demonstrates a large degradation of accuracy on both datasets ( see also Table S4 for the network ’ s architecture ) . To facilitate a fair comparison , we also trained such a naive classifier with feature learning , as done for the DRC ; the results did not change significantly .
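The two-step training described above boils down to two losses; a minimal numpy sketch of both follows (the actual optimization over ConvGRU/ResNet weights, omitted here, would be driven by gradients of these quantities):

```python
import numpy as np

def feature_matching_loss(student_feats, teacher_feats):
    """Step 1 (sketch): train the recurrent front-end (DRC-FE) to reproduce
    the teacher's intermediate feature maps via mean-squared error."""
    return np.mean((student_feats - teacher_feats) ** 2)

def cross_entropy_loss(logits, labels):
    """Step 2 (sketch): fine-tune the reused back-end (DRC-BE) with
    cross-entropy on the class labels."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Decoupling the two stages means the back-end starts from feature statistics it was originally trained on, which is the point of reusing the teacher architecture.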
As a more advanced baseline , we considered an averaged prediction ( AP ) of a feed-forward model over the T sampled frames . Namely , the estimated probability $\hat{p}_k$ of a class k is given by $\hat{p}_k = \frac{1}{T}\sum_{t=1}^{T}\hat{p}_k^{t}$ , where $\hat{p}_k^{t}$ are the predictions of the above naive baseline . The situation here is similar to test-time data augmentation ( Perez & Wang , 2017 ) , with sensor motion being the augmenter . Notably , the AP saturated with the number of timesteps while our full system , as described below , kept improving ( Table 1 ) . Here , applying a teacher slightly improves the accuracy , and in the case of CiFAR-100 the improvement is significant ( 1.34 % on average ) . Next , we evaluated models where a convGRU ( resp . GRU ) is connected before ( resp . on top of ) the last global average pooling layer of ResNet ; we denote these as ResNet+convGRU ( resp . ResNet+GRU ) ( Table S6 ) . At their best , these models achieved accuracy lower by approximately 4 % and 7 % for the CiFAR-10 and CiFAR-100 datasets respectively , compared to the 5-step DRC w/o positional information . The fact that these models and the AP achieve approximately equal performance indicates that trainable recurrent connectivity in top layers has little benefit over simplistic integration . This is in contrast to the DRC , where recurrent connectivity is implemented in the low layers . This result is not surprising since convolutional networks tend to develop invariance to small shifts ( Zeiler & Fergus , 2014 ) , such as those that the DRC relies on , albeit with important caveats ( Azulay & Weiss , 2018 ) . Finally , we refer to a recent work ( Xi et al. , 2020 ) that uses a generative adversarial network to enhance feature representation in the CiFAR-10 task at 8x8 resolution . This solution performs slightly better than the DRC without positional information at five timesteps , but underperforms the same DRC setting with ten steps . Furthermore , no results for CiFAR-100 are available in that work .
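The AP baseline's averaging rule can be written directly:

```python
import numpy as np

def averaged_prediction(per_frame_probs):
    """The AP baseline above: average a feed-forward model's class
    probabilities over the T sampled frames, then take the arg-max class.
    per_frame_probs: (T, num_classes) array of per-frame predictions."""
    p_hat = per_frame_probs.mean(axis=0)   # \hat p_k = (1/T) sum_t \hat p_k^t
    return p_hat, int(p_hat.argmax())

probs = np.array([[0.6, 0.4],
                  [0.2, 0.8],
                  [0.1, 0.9]])
p_hat, cls = averaged_prediction(probs)    # p_hat = [0.3, 0.7], cls = 1
```

Because each frame is scored independently, the averaging cannot exploit temporal structure across frames, which is why it saturates with T while the DRC keeps improving.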
| The paper's main claim is that recurrence helps enhance visual acuity in settings with limited resolution, such as that imposed by the limited number of photoreceptors in the retina. The authors therefore build a convolutional network with recurrent connectivity in its early layers (termed DRC) that receives a time series of low-resolution frames and learns representations -- for classification on CIFAR -- from a teacher network receiving full-resolution inputs. The DRC outperforms a low-resolution baseline and approaches standard-resolution performance. Additionally, the paper visualizes the DRC's learned features. | SP:7255c71605fcd36976f7841a41d2229da6bd37dc
Visual hyperacuity with moving sensor and recurrent neural computations | 1 INTRODUCTION . Biological vision is known to be a dynamical process . Two factors contributing to these dynamics are eye motion and recurrent neuronal connections in the brain . Our eyes move constantly with movements that , kinematically , can be divided into saccades - quick gaze shifts , and drifts - small scanning movements between saccades ( often referred to as “ fixational drift ” ) ( Rucci et al. , 2018 ) . These dynamical aspects of vision are reflected only partially in contemporary computer vision systems . Some works addressed large scale shifts in visual attention resembling saccades ( Mnih et al. , 2014 ) . Others explored properties and benefits of recurrent top down connections ( Nayebi et al. , 2018 ) , reminiscent of top-down processing in biological vision ( Hochstein & Ahissar , 2002 ) . Notably , the dynamics of low-level visual processes , occurring early in the bottom-up visual hierarchy and sensitive to the fixational drift ( Snodderly et al. , 2001 ; Ölveczky et al. , 2003 ; Malevich et al. , 2020 ; Hohl & Lisberger , 2011 ) , remains largely overlooked in models of vision as well as in bio-inspired computer vision systems . In fact , since the seminal studies by Hubel & Wiesel ( 1962 ) , selectivity in primary visual cortex has been traditionally described in terms of static spatial filters ( e.g. , simple and complex spatial fields or Gabors of varying frequency and orientation ) . In convolutional neural networks ( CNNs ) ( Krizhevsky et al. , 2012 ) , which have dominated computer vision over the last decade , features resembling the spatial filters deduced from biological studies emerge spontaneously over the course of training ( Zeiler & Fergus , 2014 ; Lindsey et al. , 2019 ) . In some cases , remarkable correlations were found between spatial neural representations in CNNs and those identified in the biological brain ( Yamins & DiCarlo , 2016 ) . 
On the other hand , temporal dynamics , and sensitivity to temporal features , characterize visual neurons throughout the visual system , from retinal receptors and ganglion cells to thalamic and cortical neurons ( Berry et al. , 1997 ; Chichilnisky , 2001 ; Lee et al. , 1981 ; Levick et al. , 1972 ; Reinagel & Reid , 2000 ; Shimaoka et al. , 2018 ) . Existing evidence suggests that both eye motion ( Snodderly et al. , 2001 ; Ahissar & Arieli , 2001 ; Ölveczky et al. , 2003 ; Malevich et al. , 2020 ; Gruber et al. , 2021 ; Hohl & Lisberger , 2011 ) and recurrent neuronal connectivity ( Bejjanki et al. , 2011 ; Samonds et al. , 2013 ) contribute to this temporal dynamics . Furthermore , it was found that recurrent connections improve correlates of artificial neural networks to neural activity in visual cortical areas ( Kar et al. , 2019 ; Kubilius et al. , 2019 ; Kietzmann et al. , 2019 ) . One niche where spatio-temporal computation is probably necessary is the perception of tiny objects . It is well known that the acuity of biological vision is not limited by the spatial resolution of retinal photoreceptors ( “ visual hyperacuity ” ; Westheimer ( 2009 ) ; Barlow ( 1979 ) ) . Vernier acuity , for example , is dramatically higher than might be expected from pure spatial acuity derived from the photoreceptor density in the retinal mosaic ( Westheimer , 2009 ) . Whether hyperacuity is obtained via spatial , temporal , or spatio-temporal mechanisms is not yet known ( Rucci et al. , 2018 ) . In any case , it is evident that the visual processing allowing hyperacuity , or perception of any tiny stimulus , should cope with the fixational drift ; if it doesn ’ t , the drift , whose amplitude is at least two orders of magnitude larger than the smallest perceivable spatial offsets , would impair acuity ( Ahissar & Arieli , 2001 ; Rucci et al. , 2018 ; Ratnam et al. , 2017 ) . 
The same drift motion could potentially improve acuity if spatio-temporal computations are employed . Such computations can be based on the emphasis of high-frequency spatial details ( Rucci et al. , 2007 ) , temporal coding of spatial offsets ( Ahissar & Arieli , 2001 ; 2012 ) , Bayesian inference ( Anderson et al. , 2020 ) , or on any other derivative of the interactions between ocular motion and the external image . Furthermore , it is reasonable to attribute such spatio-temporal computations to early visual areas , which are known to exhibit faster dynamics and shorter integration windows compared to regions upstream in the visual processing chain ( Gauthier et al. , 2012 ) . Indeed , it had been shown that the recurrent neuronal circuitry in early visual areas could enable countering the blurring from retinal motion ( Burak et al. , 2010 ) . Using the information available from over-sampling low-resolution images has an extensive history in computer vision as part of the field of super-resolution ( Milanfar , 2017 ) . Multi-image superresolution ( MISR ) ( Farsiu et al. , 2004 ) , distinguished from single-image super-resolution ( Glasner et al. , 2009 ; Dong et al. , 2015 ) , aims to reconstruct high-resolution images from a set of lowresolution ones ( Arefin et al. , 2020 ; Ge et al. , 2018 ; Bhat et al. , 2021 ; Li et al. , 2017 ) . An adjacent field of research , low-resolution object recognition , investigates algorithms to maximize the performance on a given task ( Xi et al. , 2020 ; Ge et al. , 2018 ) . Both fields use low-resolution images as input but differ in the goal of the training and evaluation . In this paper , we introduce a classifier that exploits spatio-temporal computations in early layers to perceive tiny images . More specifically , we trained a convolutional neural network with recurrent connectivity ( Arefin et al. , 2020 ) introduced to early layers . 
The network receives a sequence of low-resolution images generated via sensor motion mimicking ocular drift . We used high-resolution images to obtain a set of features that were then used to facilitate learning in a teacher-student framework ( Hinton et al. , 2015 ) . The outcome is a dynamical classifier that suffers from only a small drop in accuracy when tasked with a significant decrease in spatial resolution , a decrease that substantially impairs the accuracy of a comparable static feed-forward classifier . Using a novel generative model , we found that our dynamical classifier developed features that were primarily sensitive to spatial changes , others that were primarily sensitive to temporal changes , and a majority that exhibited sensitivity to mixed spatio-temporal patterns . Finally , when examining the correlations between patterns of motion and accuracy of classification , we observed that curved trajectories are favorable for recognition , consistent with recent findings of the curvature of fixational drift trajectories in humans . ( Intoy & Rucci , 2020 ; Gruber & Ahissar , 2020 ) . 2 RESULTS . 2.1 TASK AND MODELS . To create a synthetic setting reminiscent of ocular drift , we used images from popular CiFAR datasets ( Krizhevsky et al. , 2009 ) , embedded in a large ( 200x200 pixel ) scene padded by zeros . Sensor position was defined in units of pixels on the scene and its motion was modeled by a stochastic process that is discussed below . The sensor ’ s frames were obtained by cropping a 32x32 pixels window from the scene around the sensor position . Resolution was then reduced to 8x8 using a standard OpenCV ( Bradski , 2000 ) function ( that does not include anti-aliasing filter ) with bi-cubic interpolation ( Fig . 1A ) . A ResNet50 ( He et al. , 2016 ) network pre-trained on Imagenet ( Deng et al. , 2009 ) , which is available as a part of Keras package ( Chollet , 2015 ) , was used as a model of reference . 
The model was fine-tuned on one of the CiFAR datasets , reaching accuracies of 96.83 and 82.94 percent for CiFAR-10 and CiFAR-100 , respectively ( Table 1 , Standard resolution ( 32x32 ) ) . In order to feed the 32x32 pix CiFAR images to the network , images were upsampled by a factor of 7 to ( 32x7 ) x ( 32x7 ) pix ( i.e. , 224x224 pix , the dimensions of ImageNet images ) . In order to verify the generality of our conclusions , we tested another , more compact reference CNN with 3M parameters . This smaller network , which we refer to as Small-net ( Table S7 , S8 ) , receives 32x32 pix images as input , thereby eliminating the need to upsample the CiFAR images before feeding them into the network . This smaller network also simplified the analysis of internal representations , as explained below . 2.1.1 TRAINING . We applied a feature learning paradigm ( Hinton et al. , 2015 ) , using our reference networks as teachers for the dynamical recurrent classifier ( DRC ) student . Typical CNNs perform a series of spatial pooling operations . Max pooling layers in the reference CNNs effectively reduce spatial resolution while preserving relevant information about the underlying scene . To develop our DRC , we exploited this spatial pooling line-up . We thus took instances of trained CNNs and replaced their bottom layers with recurrent convolutional networks ( Fig . 1B ) . Specifically , we used a stack of ConvGRU layers ( Ballas et al. , 2015 ; Van Valen et al. , 2016 ) without spatial pooling to replace the original network all the way from the input to the point where the CNN ’ s spatial resolution is reduced by the desired factor ( Table S3 ) . In our case , the resolution was decreased by a factor of 4 ; therefore , the appropriate resolution was achieved after the second max-pooling layer .
At this point , considering the two DRC systems we have tested , the resolution of the ResNet50-based teacher is ( 8x7 ) x ( 8x7 ) ( the ’ x7 ’ factor is due to the upsampling of the original 32x32 images by a factor of 7 to ( 32x7 ) x ( 32x7 ) ) , while the Small-net-based teacher resolution is 8x8 . We refer to the bottom recurrent part of the DRC as the DRC-front-end ( DRC-FE ) . For the rest of the processing stack we reuse the reference ( teacher ) network architecture ( either ResNet50 or Small-net ) . We refer to this reused part of the DRC as the DRC-back-end ( DRC-BE ) ( Fig . 1B ) . We trained the DRC in two steps : first , the DRC-FE was trained to reproduce features of the teacher network . Here we used a mean-squared loss between the teacher network and the DRC-FE ( other optimization goals , such as a mean absolute loss and cosine similarity , resulted in very similar performance and are not shown ) . Positional data were concatenated with the image time series ; see Appendix B.4 for further details . Next , the DRC-BE was fine-tuned using a cross-entropy loss . Our model was mostly implemented in the Keras package ( Chollet , 2015 ) , with the convolutional GRU layer adapted from the project of Van Valen et al . ( 2016 ) 1 . 2.2 PERFORMANCE . 2.2.1 BASELINE . In order to evaluate the performance improvement that can be attributed to the unique architecture of the DRC , we considered a few baseline solutions . 1Anonymized code is provided as supplementary material . The accuracy of the ResNet50 reference ( teacher ) network when applied to a single low-resolution image was chosen as a simplistic baseline . The performance of this network , shown in Table 1 ( ’ Naive training ’ ) , demonstrates a large degradation of accuracy in both datasets ( see also Table S4 for the network ’ s architecture ) . To facilitate a fair comparison , we also trained such a naive classifier with feature learning , as done in the DRC . The results here did not change significantly .
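The two training objectives of the DRC can be written down directly. A minimal sketch, not the authors' implementation: flat lists stand in for feature maps, and the toy values are made up. Step 1 distills the teacher's features into the DRC-FE with a mean-squared loss; step 2 fine-tunes the DRC-BE with cross-entropy on the class labels.

```python
import math

def mse_loss(teacher_feats, student_feats):
    """Step 1: mean-squared loss between teacher features and DRC-FE outputs."""
    n = len(teacher_feats)
    return sum((t - s) ** 2 for t, s in zip(teacher_feats, student_feats)) / n

def cross_entropy_loss(class_probs, label):
    """Step 2: cross-entropy loss used to fine-tune the DRC-BE."""
    return -math.log(class_probs[label])

# Toy values for illustration only.
print(mse_loss([1.0, 2.0, 3.0], [1.0, 2.0, 1.0]))     # (0 + 0 + 4) / 3
print(round(cross_entropy_loss([0.25, 0.75], 1), 4))  # -ln(0.75) ≈ 0.2877
```

The gradient-descent loop, batching, and the actual Keras feature tensors are omitted; only the two loss functions driving the two stages are shown.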
As a more advanced baseline , we considered an averaged prediction ( AP ) of a feed-forward model over the T sampled frames . Namely , the estimated probability $\hat{p}_k$ of a class $k$ is given by $\hat{p}_k = \frac{1}{T} \sum_{t=1}^{T} \hat{p}_k^t$ , where $\hat{p}_k^t$ are the predictions of the above naive baseline . The situation here is similar to test-time data augmentation ( Perez & Wang , 2017 ) with sensor motion being the augmenter . Notably , the AP saturated with the number of timesteps while our full system , as described below , kept improving ( Table 1 ) . Here , application of a teacher slightly improves the accuracy , and in the case of CiFAR-100 , the improvement is significant ( 1.34 % on average ) . Next , we evaluated models where a convGRU ( resp . GRU ) is connected before ( resp . on top of ) the last global average pooling layer of ResNet ; we denote them as ResNet+convGRU ( resp . ResNet+GRU ) ( Table S6 ) . At their best , these models achieved accuracy lower by approximately 4 % and 7 % for the CiFAR-10 and CiFAR-100 datasets , respectively , compared to the 5-step DRC w/o positional information . The fact that these models and the AP achieve approximately equal performance indicates that trainable recurrent connectivity in top layers has little benefit over simplistic integration . This is in contrast to the DRC , where recurrent connectivity is implemented in the low layers . This result is not surprising , since convolutional networks tend to develop invariance to small shifts ( Zeiler & Fergus , 2014 ) , such as those that the DRC relies on , albeit with important caveats ( Azulay & Weiss , 2018 ) . Finally , we refer to a recent work ( Xi et al. , 2020 ) that uses a generative adversarial network to enhance feature representation in the CiFAR-10 task at 8x8 resolution . This solution performs slightly better than the DRC without positional information at five timesteps but underperforms the same DRC setting at ten steps . Furthermore , no results for CiFAR-100 are available in this work .
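The AP baseline is just an average of the per-frame class-probability vectors. A minimal sketch with made-up numbers (the frame predictions are hypothetical, not from the paper):

```python
def averaged_prediction(per_frame_probs):
    """AP baseline: p_k = (1/T) * sum_t p_k^t over the T sampled frames."""
    T = len(per_frame_probs)
    K = len(per_frame_probs[0])
    return [sum(p[k] for p in per_frame_probs) / T for k in range(K)]

# Three frames, two classes; individual frames disagree, the average smooths them.
per_frame = [[0.75, 0.25], [0.5, 0.5], [0.25, 0.75]]
print(averaged_prediction(per_frame))  # [0.5, 0.5]
```

Since the combination is a fixed mean with no trainable temporal interaction, it is unsurprising that this baseline saturates with the number of timesteps.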
| This paper takes inspiration from the biological phenomenon of fixation drift, slow, low-amplitude movements during fixation that are believed to result in hyper-resolution in human vision. They hypothesize that this phenomenon can be explained by a model that has a recurrent convolutional front end that integrates over fixation drift, feeding into a well-trained back-end from a conventional model (ResNet 50). They demonstrate that this "Dynamical Recurrent Classifier" (DRC) is capable of restoring performance on 8X8 images to nearly the performance on "high" resolution 32X32 CIFAR images (actually, no one would call 32X32 high resolution!). They analyze the representations learned by the model and show they have strong spatio-temporal features, with some learned features emphasizing spatial features, some emphasizing temporal features, and most combining the two. Finally, they show that using curved trajectories improves performance over more random walks, which can potentially explain recent results in humans. They suggest this model can be useful in AI applications involving limited resolution but with multiple samples over time. | SP:7255c71605fcd36976f7841a41d2229da6bd37dc |
Visual hyperacuity with moving sensor and recurrent neural computations | 1 INTRODUCTION . Biological vision is known to be a dynamical process . Two factors contributing to these dynamics are eye motion and recurrent neuronal connections in the brain . Our eyes move constantly with movements that , kinematically , can be divided into saccades - quick gaze shifts , and drifts - small scanning movements between saccades ( often referred to as “ fixational drift ” ) ( Rucci et al. , 2018 ) . These dynamical aspects of vision are reflected only partially in contemporary computer vision systems . Some works addressed large scale shifts in visual attention resembling saccades ( Mnih et al. , 2014 ) . Others explored properties and benefits of recurrent top down connections ( Nayebi et al. , 2018 ) , reminiscent of top-down processing in biological vision ( Hochstein & Ahissar , 2002 ) . Notably , the dynamics of low-level visual processes , occurring early in the bottom-up visual hierarchy and sensitive to the fixational drift ( Snodderly et al. , 2001 ; Ölveczky et al. , 2003 ; Malevich et al. , 2020 ; Hohl & Lisberger , 2011 ) , remains largely overlooked in models of vision as well as in bio-inspired computer vision systems . In fact , since the seminal studies by Hubel & Wiesel ( 1962 ) , selectivity in primary visual cortex has been traditionally described in terms of static spatial filters ( e.g. , simple and complex spatial fields or Gabors of varying frequency and orientation ) . In convolutional neural networks ( CNNs ) ( Krizhevsky et al. , 2012 ) , which have dominated computer vision over the last decade , features resembling the spatial filters deduced from biological studies emerge spontaneously over the course of training ( Zeiler & Fergus , 2014 ; Lindsey et al. , 2019 ) . In some cases , remarkable correlations were found between spatial neural representations in CNNs and those identified in the biological brain ( Yamins & DiCarlo , 2016 ) . 
On the other hand , temporal dynamics , and sensitivity to temporal features , characterize visual neurons throughout the visual system , from retinal receptors and ganglion cells to thalamic and cortical neurons ( Berry et al. , 1997 ; Chichilnisky , 2001 ; Lee et al. , 1981 ; Levick et al. , 1972 ; Reinagel & Reid , 2000 ; Shimaoka et al. , 2018 ) . Existing evidence suggests that both eye motion ( Snodderly et al. , 2001 ; Ahissar & Arieli , 2001 ; Ölveczky et al. , 2003 ; Malevich et al. , 2020 ; Gruber et al. , 2021 ; Hohl & Lisberger , 2011 ) and recurrent neuronal connectivity ( Bejjanki et al. , 2011 ; Samonds et al. , 2013 ) contribute to this temporal dynamics . Furthermore , it was found that recurrent connections improve correlates of artificial neural networks to neural activity in visual cortical areas ( Kar et al. , 2019 ; Kubilius et al. , 2019 ; Kietzmann et al. , 2019 ) . One niche where spatio-temporal computation is probably necessary is the perception of tiny objects . It is well known that the acuity of biological vision is not limited by the spatial resolution of retinal photoreceptors ( “ visual hyperacuity ” ; Westheimer ( 2009 ) ; Barlow ( 1979 ) ) . Vernier acuity , for example , is dramatically higher than might be expected from pure spatial acuity derived from the photoreceptor density in the retinal mosaic ( Westheimer , 2009 ) . Whether hyperacuity is obtained via spatial , temporal , or spatio-temporal mechanisms is not yet known ( Rucci et al. , 2018 ) . In any case , it is evident that the visual processing allowing hyperacuity , or perception of any tiny stimulus , should cope with the fixational drift ; if it doesn ’ t , the drift , whose amplitude is at least two orders of magnitude larger than the smallest perceivable spatial offsets , would impair acuity ( Ahissar & Arieli , 2001 ; Rucci et al. , 2018 ; Ratnam et al. , 2017 ) . 
The same drift motion could potentially improve acuity if spatio-temporal computations are employed . Such computations can be based on the emphasis of high-frequency spatial details ( Rucci et al. , 2007 ) , temporal coding of spatial offsets ( Ahissar & Arieli , 2001 ; 2012 ) , Bayesian inference ( Anderson et al. , 2020 ) , or on any other derivative of the interactions between ocular motion and the external image . Furthermore , it is reasonable to attribute such spatio-temporal computations to early visual areas , which are known to exhibit faster dynamics and shorter integration windows compared to regions upstream in the visual processing chain ( Gauthier et al. , 2012 ) . Indeed , it has been shown that the recurrent neuronal circuitry in early visual areas could enable countering the blurring from retinal motion ( Burak et al. , 2010 ) . Using the information available from over-sampling low-resolution images has an extensive history in computer vision as part of the field of super-resolution ( Milanfar , 2017 ) . Multi-image super-resolution ( MISR ) ( Farsiu et al. , 2004 ) , distinguished from single-image super-resolution ( Glasner et al. , 2009 ; Dong et al. , 2015 ) , aims to reconstruct high-resolution images from a set of low-resolution ones ( Arefin et al. , 2020 ; Ge et al. , 2018 ; Bhat et al. , 2021 ; Li et al. , 2017 ) . An adjacent field of research , low-resolution object recognition , investigates algorithms to maximize the performance on a given task ( Xi et al. , 2020 ; Ge et al. , 2018 ) . Both fields use low-resolution images as input but differ in the goal of the training and evaluation . In this paper , we introduce a classifier that exploits spatio-temporal computations in early layers to perceive tiny images . More specifically , we trained a convolutional neural network with recurrent connectivity ( Arefin et al. , 2020 ) introduced into its early layers .
The network receives a sequence of low-resolution images generated via sensor motion mimicking ocular drift . We used high-resolution images to obtain a set of features that were then used to facilitate learning in a teacher-student framework ( Hinton et al. , 2015 ) . The outcome is a dynamical classifier that suffers from only a small drop in accuracy when tasked with a significant decrease in spatial resolution , a decrease that substantially impairs the accuracy of a comparable static feed-forward classifier . Using a novel generative model , we found that our dynamical classifier developed features that were primarily sensitive to spatial changes , others that were primarily sensitive to temporal changes , and a majority that exhibited sensitivity to mixed spatio-temporal patterns . Finally , when examining the correlations between patterns of motion and accuracy of classification , we observed that curved trajectories are favorable for recognition , consistent with recent findings on the curvature of fixational drift trajectories in humans ( Intoy & Rucci , 2020 ; Gruber & Ahissar , 2020 ) . 2 RESULTS . 2.1 TASK AND MODELS . To create a synthetic setting reminiscent of ocular drift , we used images from the popular CiFAR datasets ( Krizhevsky et al. , 2009 ) , embedded in a large ( 200x200 pixel ) scene padded with zeros . Sensor position was defined in units of pixels on the scene and its motion was modeled by a stochastic process that is discussed below . The sensor ’ s frames were obtained by cropping a 32x32 pixel window from the scene around the sensor position . Resolution was then reduced to 8x8 using a standard OpenCV ( Bradski , 2000 ) function ( which does not include an anti-aliasing filter ) with bi-cubic interpolation ( Fig . 1A ) . A ResNet50 ( He et al. , 2016 ) network pre-trained on ImageNet ( Deng et al. , 2009 ) , which is available as part of the Keras package ( Chollet , 2015 ) , was used as the reference model .
The model was fine-tuned on one of the CiFAR datasets , reaching accuracies of 96.83 and 82.94 percent for CiFAR-10 and CiFAR-100 , respectively ( Table 1 , Standard resolution ( 32x32 ) ) . In order to feed the 32x32 pix CiFAR images to the network , images were upsampled by a factor of 7 to ( 32x7 ) x ( 32x7 ) pix ( i.e. , 224x224 pix , the dimensions of ImageNet images ) . In order to verify the generality of our conclusions , we tested another , more compact reference CNN with 3M parameters . This smaller network , which we refer to as Small-net ( Table S7 , S8 ) , receives 32x32 pix images as input , thereby eliminating the need to upsample the CiFAR images before feeding them into the network . This smaller network also simplified the analysis of internal representations , as explained below . 2.1.1 TRAINING . We applied a feature learning paradigm ( Hinton et al. , 2015 ) , using our reference networks as teachers for the dynamical recurrent classifier ( DRC ) student . Typical CNNs perform a series of spatial pooling operations . Max pooling layers in the reference CNNs effectively reduce spatial resolution while preserving relevant information about the underlying scene . To develop our DRC , we exploited this spatial pooling line-up . We thus took instances of trained CNNs and replaced their bottom layers with recurrent convolutional networks ( Fig . 1B ) . Specifically , we used a stack of ConvGRU layers ( Ballas et al. , 2015 ; Van Valen et al. , 2016 ) without spatial pooling to replace the original network all the way from the input to the point where the CNN ’ s spatial resolution is reduced by the desired factor ( Table S3 ) . In our case , the resolution was decreased by a factor of 4 ; therefore , the appropriate resolution was achieved after the second max-pooling layer .
At this point , considering the two DRC systems we have tested , the resolution of the ResNet50-based teacher is ( 8x7 ) x ( 8x7 ) ( the ’ x7 ’ factor is due to the upsampling of the original 32x32 images by a factor of 7 to ( 32x7 ) x ( 32x7 ) ) , while the Small-net-based teacher resolution is 8x8 . We refer to the bottom recurrent part of the DRC as the DRC-front-end ( DRC-FE ) . For the rest of the processing stack we reuse the reference ( teacher ) network architecture ( either ResNet50 or Small-net ) . We refer to this reused part of the DRC as the DRC-back-end ( DRC-BE ) ( Fig . 1B ) . We trained the DRC in two steps : first , the DRC-FE was trained to reproduce features of the teacher network . Here we used a mean-squared loss between the teacher network and the DRC-FE ( other optimization goals , such as a mean absolute loss and cosine similarity , resulted in very similar performance and are not shown ) . Positional data were concatenated with the image time series ; see Appendix B.4 for further details . Next , the DRC-BE was fine-tuned using a cross-entropy loss . Our model was mostly implemented in the Keras package ( Chollet , 2015 ) , with the convolutional GRU layer adapted from the project of Van Valen et al . ( 2016 ) 1 . 2.2 PERFORMANCE . 2.2.1 BASELINE . In order to evaluate the performance improvement that can be attributed to the unique architecture of the DRC , we considered a few baseline solutions . 1Anonymized code is provided as supplementary material . The accuracy of the ResNet50 reference ( teacher ) network when applied to a single low-resolution image was chosen as a simplistic baseline . The performance of this network , shown in Table 1 ( ’ Naive training ’ ) , demonstrates a large degradation of accuracy in both datasets ( see also Table S4 for the network ’ s architecture ) . To facilitate a fair comparison , we also trained such a naive classifier with feature learning , as done in the DRC . The results here did not change significantly .
As a more advanced baseline , we considered an averaged prediction ( AP ) of a feed-forward model over the T sampled frames . Namely , the estimated probability $\hat{p}_k$ of a class $k$ is given by $\hat{p}_k = \frac{1}{T} \sum_{t=1}^{T} \hat{p}_k^t$ , where $\hat{p}_k^t$ are the predictions of the above naive baseline . The situation here is similar to test-time data augmentation ( Perez & Wang , 2017 ) with sensor motion being the augmenter . Notably , the AP saturated with the number of timesteps while our full system , as described below , kept improving ( Table 1 ) . Here , application of a teacher slightly improves the accuracy , and in the case of CiFAR-100 , the improvement is significant ( 1.34 % on average ) . Next , we evaluated models where a convGRU ( resp . GRU ) is connected before ( resp . on top of ) the last global average pooling layer of ResNet ; we denote them as ResNet+convGRU ( resp . ResNet+GRU ) ( Table S6 ) . At their best , these models achieved accuracy lower by approximately 4 % and 7 % for the CiFAR-10 and CiFAR-100 datasets , respectively , compared to the 5-step DRC w/o positional information . The fact that these models and the AP achieve approximately equal performance indicates that trainable recurrent connectivity in top layers has little benefit over simplistic integration . This is in contrast to the DRC , where recurrent connectivity is implemented in the low layers . This result is not surprising , since convolutional networks tend to develop invariance to small shifts ( Zeiler & Fergus , 2014 ) , such as those that the DRC relies on , albeit with important caveats ( Azulay & Weiss , 2018 ) . Finally , we refer to a recent work ( Xi et al. , 2020 ) that uses a generative adversarial network to enhance feature representation in the CiFAR-10 task at 8x8 resolution . This solution performs slightly better than the DRC without positional information at five timesteps but underperforms the same DRC setting at ten steps . Furthermore , no results for CiFAR-100 are available in this work .
| Here, the authors attempt to leverage spatio-temporal computations for object recognition on the standard CIFAR-10 and CIFAR-100 datasets. In short, they use a network with a front-end of recurrent units (ConvGRU) to recognize objects given spatially jittered downsampled images – effectively approximating an active sensor. The network is trained in a student-teacher configuration, where weights in a temporal pooling layer after the recurrent layers are trained to match the weights of a feature layer in ResNet50. Next, the network is fine-tuned to increase classification accuracy. Altogether, the authors are asking if spatio-temporal computations are enough to produce a feature layer similar to a larger network trained on full-res images and, in turn, if this feature layer supports object recognition on par with full-res performance of ResNet50. The authors demonstrate that their network is almost as performant as ResNet50 with 4x downsampled images, especially when the down-sampled images are jittered in a spiral formation. They also present analysis demonstrating that the network is in fact performing spatio-temporal calculations. | SP:7255c71605fcd36976f7841a41d2229da6bd37dc |
A theoretically grounded characterization of feature representations | A large body of work has explored how learned feature representations can be useful for a variety of downstream tasks . This is true even when the downstream tasks differ greatly from the actual objective used to ( pre ) train the feature representation . This observation underlies the success of , e.g. , few-shot learning , transfer learning and self-supervised learning , among others . However , very little is understood about why such transfer is successful , and more importantly , how one should choose the pre-training task . As a first step towards this understanding , we ask : what makes a feature representation good for a target task ? We present simple , intuitive measurements of the feature space that are good predictors of downstream task performance . We present theoretical results showing how these measurements can be used to bound the error of the downstream classifiers , and show empirically that these bounds correlate well with actual downstream performance . Finally , we show that our bounds are practically useful for choosing the right pre-trained representation for a target task . 1 INTRODUCTION . Since the ( re ) -discovery of neural networks for visual recognition , a plethora of work has observed that the feature representations trained on ImageNet generalize to many downstream tasks , even to new domains ( Donahue et al. , 2014 ; Kornblith et al. , 2019 ; Kolesnikov et al. , 2020 ; Wallace & Hariharan , 2020 ) . This observation , and the resulting gain in accuracy even with very limited labels , has heralded new research directions into other ways of learning representations , such as self-supervised learning ( Chen et al. , 2020 ) or meta-learning ( Snell et al. , 2017 ) . This growing field of representation learning has yielded ostensibly better and better feature representations . However , a closer look reveals many mysterious results . 
For example , meta-learning methods do not transfer across domains ( Guo et al. , 2020 ) , and self-supervised representations struggle with fine-grained recognition ( Wallace & Hariharan , 2020 ) . These empirical results are extremely valuable , but do not provide a deeper understanding of the corresponding phenomena . On the other side of the spectrum , theoretical work on neural representations is illuminating , but often makes assumptions about models or tasks ( Du et al. , 2020 ; Arora et al. , 2019b ) . We lack a general understanding of which neural representations work well for a given task and why . In this paper , we take a first step towards such an understanding by developing lower and upper bounds on classifier accuracy based on data-driven properties of the feature space . Classical theoretical bounds focus entirely on the complexity of the classifier and ignore the feature space . In contrast , our bounds are based on two intuitive properties of the feature representation ( Figure 1 ) : ( a ) local alignment , which is the degree to which nearby data points share labels , and ( b ) local congregation , which is the degree to which data points embed close to each other . Intuitively , if the feature representation is locally aligned to the task , then any smooth classifier will be able to model the task well . If it is additionally congregated , then most test points will have nearby training instances , so the classifier will generalize well from limited data . We show that our bounds are not only intuitive and theoretically justified , but also predictive of actual performance in practical settings : on a large dataset of realistic few-shot tasks , we can use our bounds to pick the best pre-trained representation without any training . Taken together , our work is a first step towards a general , intuitive characterization of the feature space that is predictive of downstream classifier performance . 2 RELATED WORK .
Analyzing transferability : The motivations of our work lie in understanding the empirical effectiveness of transferring feature representations from supervised ( Donahue et al. , 2014 ; Zhai et al. , 2019 ; Kolesnikov et al. , 2020 ) or self-supervised ( Goyal et al . ; Wallace & Hariharan , 2020 ; Chen et al. , 2020 ) tasks . With these successes on transfer , and with the availability of a large number of pretrained features trained on a wide variety of domains , there has been an increasing interest in predicting transferability , or in selecting the right features for a particular target task . The simplest approach is to train a classifier with every available pre-trained representation and pick the best performer ( Zamir et al. , 2018 ; You et al. , 2021 ) . This is computationally expensive and requires lots of labeled data for the target task . If the pre-training task is a classification task with a known label space , then the conditional distribution of targets given pre-training task labels is informative of transfer performance ( Tran et al. , 2019 ; Nguyen et al. , 2020 ) . However , this approach is difficult to apply if the pre-training task is not classification , or is inaccessible ( as with models trained on proprietary data ) . This inaccessibility of pre-training data is also a problem for approaches that match pre-training and target distributions ( Gao & Chaudhari , 2021 ) , or explicitly adapt the pre-training images or label space ( Cui et al. , 2018 ) . In contrast to these approaches , we focus on directly analyzing the pre-trained feature representation , which allows us to be agnostic to the actual task it was trained on . Our work is most similar in setup and motivation to work on measuring the alignment between feature representations and the target labels ( Huang et al. , 2021 ; Bao et al. , 2019 ) , but in addition to allowing model selection , yields intuitive and general bounds on accuracy .
Our work is orthogonal to work on characterizing tasks and measuring task similarity ; this frequently requires pre-trained features in the first place ( Achille et al. , 2019 ; Wallace et al. , 2021 ; Song et al. , 2020 ; Dwivedi et al. , 2020 ; Dwivedi & Roig , 2019 ) . Analyzing feature representations : Our work is also related to research on understanding feature representations in general . Some of this work has focused on understanding invariance properties of convnet features ( Aubry & Russell , 2015 ) . Others have looked at what individual feature dimensions mean ( Agrawal et al. , 2014 ; Zeiler & Fergus , 2014 ; Szegedy et al. , 2013 ) . Still others have explored if and when features from pre-trained networks transfer well between tasks as a function of the layer chosen ( Yosinski et al. , 2014 ) or the architecture ( Kornblith et al. , 2019 ) . Recently these explorations have been extended to other representation learning techniques , notably self-supervised techniques ( Wang & Isola , 2020 ; Wallace & Hariharan , 2020 ) . The insights from these explorations inspire our results . However , these explorations have been primarily empirical , and are therefore limited by the diversity of real-world benchmarks that they experiment with . On the other end of the spectrum , there is prior work on theoretical analysis of transfer learning . This prior work often follows the framework proposed by Baxter ( 2000 ) , and in so doing makes assumptions about the distributions of the different tasks ( Maurer , 2009 ; Du et al. , 2020 ; Galanti et al. , 2016 ; Pentina & Lampert , 2014 ) . In contrast , our approach eschews these assumptions in favor of data-driven measurements . Data-driven complexity measures have been explored before for analyzing neural network training dynamics and generalization . These measures include eigenvectors of the Gram matrix between data points ( Arora et al.
, 2019a ) , the variance and separation of class-specific manifolds ( Cohen et al. , 2020 ) , the underlying intrinsic dimensionality of the task ( Lampinen & Ganguli , 2018 ) , or the mutual information between representations and the inputs or labels ( Shwartz-Ziv & Tishby , 2017 ) . The corresponding results can be used for analyzing feature representations as well . Our proposed measurements are similar , but are simpler to measure and potentially more intuitive . 3 PROBLEM SETUP . Suppose we are interested in mapping a space of inputs $\mathcal{X}$ to a space of targets $\mathcal{Y}$ . There is an underlying distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$ . We have a feature representation $\phi : \mathcal{X} \rightarrow \mathbb{R}^f$ which might be pre-trained on another dataset or task that is inaccessible to us . We will assume that $\|\phi(x)\| \leq B$ . For ease of exposition in the paper , we focus on the task of binary classification ; the case of multiclass classification is similar . For binary classification , we write the label space as $\mathcal{Y} = \{-1, 1\}$ . Our classifier will use a scoring function that operates on the feature space , $h : \mathbb{R}^f \rightarrow \mathbb{R}$ . The predicted label will then be $\mathrm{sign}(h(\phi(x)))$ , where if $h(\phi(x))$ is 0 , we will arbitrarily assign a label of $-1$ . The set of possible functions $h$ defines the hypothesis class for the classifier ; denote this by $\mathcal{H}$ . For most of our analysis , we primarily care about the smoothness of the functions in $\mathcal{H}$ . We will assume that all functions in $\mathcal{H}$ are Lipschitz continuous with Lipschitz constant less than $W$ . Thus : $|h(\phi(x)) - h(\phi(x'))| \leq W \|\phi(x) - \phi(x')\| \quad \forall h \in \mathcal{H}$ ( 1 ) We note that one such hypothesis class which is commonly used in practice is the class of linear classifiers of bounded norm : $\mathcal{W} = \{ v \mapsto w^T v \; ; \; \|w\| \leq W \}$ . Because the zero-one loss , $l^*(h(\phi(x)), y) = \mathbb{I}[\mathrm{sign}(h(\phi(x))) \neq y]$ , is difficult to analyze , we will use the following continuous upper bound , which is standard in theoretical treatments ( Mohri et al.
, 2018 ) : $l(h(\phi(x)), y) = \min(1, \max(0, 1 - y\,h(\phi(x)))) \geq l^*(h(\phi(x)), y)$ ( 2 ) Our focus in this paper is to understand how the properties of the feature extractor $\phi$ affect the loss $l(h(\phi(x)), y)$ of the classifiers . In particular , we wish to understand how $\phi$ impacts ( a ) the lowest average error that one can achieve , and ( b ) the true error when one generalizes from a small training set . We begin by first identifying the key properties of feature representations . 4 TWO PROPERTIES OF FEATURE REPRESENTATIONS . What properties should we use to characterize feature representations ? First , we should use properties that are easy to measure , potentially with limited labeled data . Second , these properties should be easy for human developers and practitioners to reason about . Third , they should correlate well with the final accuracy of downstream classifiers . In sum , we want simple , intuitive measurements of the feature space that are predictive of downstream accuracy . One kind of intuitive measurement is to look at what the feature representation considers as similar . In particular , we could look at pairs of examples that embed close to each other in feature space and ask if they are indeed similar in terms of their ground truth labels . We call this property local alignment . In particular , we make the following definition . Definition 1 . Suppose $\alpha > 0$ . The local alignment of the feature space $\phi$ , denoted by $p_a^{\phi}(\alpha)$ , is the probability that two data points $(x, y), (x', y') \sim \mathcal{D}$ share a label given that they embed within a distance of $\alpha$ : $p_a^{\phi}(\alpha) \triangleq P\left( y = y' \mid \|\phi(x) - \phi(x')\| \leq \alpha \, , \; (x, y), (x', y') \sim \mathcal{D} \right)$ ( 3 ) $\alpha$ here is a hyperparameter which governs the resolution at which we do the analysis .
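Two quick numerical sanity checks of the problem setup, in a hedged sketch (the random toy vectors and the dimension 5 are ours, not the paper's): the bounded-norm linear class satisfies the Lipschitz condition of Eq. (1) via the Cauchy-Schwarz inequality, and the clipped hinge of Eq. (2) upper-bounds the zero-one loss at every score.

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def zero_one_loss(score, y):
    """l*: 1 if sign(score) != y; a score of exactly 0 is labeled -1."""
    pred = 1 if score > 0 else -1
    return 1 if pred != y else 0

def surrogate_loss(score, y):
    """Clipped hinge of Eq. (2): min(1, max(0, 1 - y * score))."""
    return min(1.0, max(0.0, 1.0 - y * score))

rng = random.Random(0)
for _ in range(1000):
    w, a, b = ([rng.uniform(-1, 1) for _ in range(5)] for _ in range(3))
    # Eq. (1) for h(v) = w.v: |h(a) - h(b)| = |w.(a-b)| <= ||w|| ||a - b||.
    gap = [x - y for x, y in zip(a, b)]
    assert abs(dot(w, a) - dot(w, b)) <= norm(w) * norm(gap) + 1e-9
    # Eq. (2): the surrogate dominates the zero-one loss.
    score, label = dot(w, a), rng.choice((-1, 1))
    assert surrogate_loss(score, label) >= zero_one_loss(score, label)
print("Eqs. (1) and (2) hold on 1000 random draws")
```

The domination in Eq. (2) is easy to see case by case: a misclassified point has margin $y\,h \leq 0$, so the hinge is clipped at 1, while a correctly classified point incurs a surrogate loss of at most 1.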
Note that this notion of local alignment is different from the alignment proposed by Wang & Isola ( 2020 ) : the latter looks at how often two data points that are semantically similar embed close to each other , while the former looks at how often two data points that embed close to each other are semantically similar . A feature space that is locally aligned per our definition may not actually be aligned per the definition of Wang & Isola ( 2020 ) , because two very similar images may be embedded far away from each other . Local alignment alone may be meaningless if data points do not generally embed close to each other . We also need the feature space to be such that data points generally congregate : Definition 2 . Suppose α > 0 . The degree of congregation of the feature space φ , denoted by p^φ_c ( α ) , is the probability that two points x , x′ sampled from D embed within a distance of α : p^φ_c ( α ) ≜ P ( ‖φ ( x ) − φ ( x′ ) ‖ ≤ α | x , x′ ∼ D ) ( 4 ) We have α as a hyperparameter here too . We now use these notions of local alignment and congregation to analyze the downstream error of classifiers . | This work studies the relation between properties of the feature extractor and the downstream performance on binary classification tasks. The authors identify two key properties, namely "local alignment" and "congregation", which can be used to derive a lower bound on the test accuracy and generalization bounds. The theoretical results are verified empirically on CIFAR10 and CUB200. | SP:c894e857dba275f322dbd6d5f6c3c17b4c8bb03a |
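Both probabilities in Definitions 1 and 2 (Eqs. 3–4) can be estimated from a finite labeled sample by enumerating pairs; a minimal Monte-Carlo sketch (ours, not the paper's code) is:

```python
import numpy as np

def alignment_and_congregation(feats, labels, alpha):
    """Pairwise estimates of p_a(alpha) (Def. 1) and p_c(alpha) (Def. 2).

    feats: (n, f) array of embeddings phi(x); labels: (n,) array in {-1, +1}.
    Returns (p_a, p_c); p_a is None when no pair embeds within alpha.
    """
    n = feats.shape[0]
    # pairwise distances ||phi(x) - phi(x')|| over all distinct unordered pairs
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    close = dists[iu] <= alpha                 # event in the conditioning of Eq. (3)
    same = (labels[:, None] == labels[None, :])[iu]
    p_c = float(close.mean())                  # Eq. (4): fraction of close pairs
    p_a = float(same[close].mean()) if close.any() else None  # Eq. (3)
    return p_a, p_c
```

For two well-separated, label-pure clusters this returns p_a = 1 with p_c equal to the fraction of within-cluster pairs, matching the intuition that alignment is only informative when points actually congregate.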
A theoretically grounded characterization of feature representations | A large body of work has explored how learned feature representations can be useful for a variety of downstream tasks . This is true even when the downstream tasks differ greatly from the actual objective used to ( pre ) train the feature representation . This observation underlies the success of , e.g. , few-shot learning , transfer learning and self-supervised learning , among others . However , very little is understood about why such transfer is successful , and more importantly , how one should choose the pre-training task . As a first step towards this understanding , we ask : what makes a feature representation good for a target task ? We present simple , intuitive measurements of the feature space that are good predictors of downstream task performance . We present theoretical results showing how these measurements can be used to bound the error of the downstream classifiers , and show empirically that these bounds correlate well with actual downstream performance . Finally , we show that our bounds are practically useful for choosing the right pre-trained representation for a target task . 1 INTRODUCTION . Since the ( re ) -discovery of neural networks for visual recognition , a plethora of work has observed that the feature representations trained on ImageNet generalize to many downstream tasks , even to new domains ( Donahue et al. , 2014 ; Kornblith et al. , 2019 ; Kolesnikov et al. , 2020 ; Wallace & Hariharan , 2020 ) . This observation , and the resulting gain in accuracy even with very limited labels , has heralded new research directions into other ways of learning representations , such as self-supervised learning ( Chen et al. , 2020 ) or meta-learning ( Snell et al. , 2017 ) . This growing field of representation learning has yielded ostensibly better and better feature representations . However , a closer look reveals many mysterious results . 
For example , meta-learning methods do not transfer across domains ( Guo et al. , 2020 ) and self-supervised representations struggle with fine-grained recognition ( Wallace & Hariharan , 2020 ) . These empirical results are extremely valuable , but do not provide a deeper understanding of the corresponding phenomena . On the other side of the spectrum , theoretical work on neural representations is illuminating , but often makes assumptions about models or tasks ( Du et al. , 2020 ; Arora et al. , 2019b ) . We lack a general understanding of which neural representations work well for a given task and why . In this paper , we take a first step towards such an understanding by developing lower and upper bounds on classifier accuracy based on data-driven properties of the feature space . Classical theoretical bounds focus entirely on the complexity of the classifier and ignore the feature space . In contrast , our bounds are based on two intuitive properties of the feature representation ( Figure 1 ) : ( a ) Local alignment , which is the degree to which nearby data points share labels , and ( b ) local congregation which is the degree to which data points embed close to each other . Intuitively , if the feature representation is locally aligned to the task , then any smooth classifier will be able to model the task well . If it is additionally congregated , then most test points will have nearby training instances , so the classifier will generalize well from limited data . We show that our bounds are not only intuitive and theoretically justified , but also predictive of actual performance in practical settings : on a large dataset of realistic few-shot tasks , we can use our bounds to pick the best pre-trained representation without any training . Taken together , our work is a first step towards a general , intuitive characterization of the feature space that is predictive of downstream classifier performance . 2 RELATED WORK . 
Analyzing transferability : The motivations of our work lie in understanding the empirical effectiveness of transferring feature representations from supervised ( Donahue et al. , 2014 ; Zhai et al. , 2019 ; Kolesnikov et al. , 2020 ) or self-supervised ( Goyal et al . ; Wallace & Hariharan , 2020 ; Chen et al. , 2020 ) tasks . With these successes on transfer , and with the availability of a large number of pretrained features trained on a wide variety of domains , there has been an increasing interest in predicting transferability , or in selecting the right features for a particular target task . The simplest approach is to train a classifier with every available pre-trained representation and pick the best performer ( Zamir et al. , 2018 ; You et al. , 2021 ) . This is both computationally expensive and requires lots of labeled data for the target task . If the pre-training task is a classification task with a known label space then the conditional distribution of targets given pre-training task labels is informative of transfer performance ( Tran et al. , 2019 ; Nguyen et al. , 2020 ) . However , this approach is difficult to apply if the pre-training task is not classification , or is inaccessible ( as with models trained on proprietary data ) . This inaccessibility of pre-training data is also a problem for approaches that match pre-training and target distributions ( Gao & Chaudhari , 2021 ) , or explicitly adapt the pre-training images or label space ( Cui et al. , 2018 ) . In contrast to these approaches , we focus on directly analyzing the pre-trained feature representation , which allows us to be agnostic to the actual task it was trained on . Our work is most similar in setup and motivation to work on measuring the alignment between feature representations and the target labels ( Huang et al. , 2021 ; Bao et al. , 2019 ) , but in addition to allowing model selection , yields intuitive and general bounds on accuracy . 
Our work is orthogonal to work on characterizing tasks and measuring task similarity ; this frequently requires pre-trained features in the first place ( Achille et al. , 2019 ; Wallace et al. , 2021 ; Song et al. , 2020 ; Dwivedi et al. , 2020 ; Dwivedi & Roig , 2019 ) . Analyzing feature representations : Our work is also related to research on understanding feature representations in general . Some of this work has focused on understanding invariance properties of convnet features ( Aubry & Russell , 2015 ) . Others have looked at what individual feature dimensions mean ( Agrawal et al. , 2014 ; Zeiler & Fergus , 2014 ; Szegedy et al. , 2013 ) . Still others have explored if and when features from pre-trained networks transfer well between tasks as a function of the layer chosen ( Yosinski et al. , 2014 ) or the architecture ( Kornblith et al. , 2019 ) . Recently , these explorations have been extended to other representation learning techniques , notably self-supervised techniques ( Wang & Isola , 2020 ; Wallace & Hariharan , 2020 ) . The insights from these explorations inspire our results . However , these explorations have been primarily empirical , and are therefore limited by the diversity of real world benchmarks that they experiment with . On the other end of the spectrum , there is prior work on theoretical analysis of transfer learning . This prior work often follows the framework proposed by Baxter ( 2000 ) , and in so doing makes assumptions about the distributions of the different tasks ( Maurer , 2009 ; Du et al. , 2020 ; Galanti et al. , 2016 ; Pentina & Lampert , 2014 ) . In contrast , our approach eschews these assumptions in favor of data-driven measurements . Data-driven complexity measures have been explored before for analyzing neural network training dynamics and generalization . These measures include eigenvectors of the Gram matrix between data points ( Arora et al.
, 2019a ) , the variance and separation of class-specific manifolds ( Cohen et al. , 2020 ) , the underlying intrinsic dimensionality of the task ( Lampinen & Ganguli , 2018 ) or the mutual information between representations and the inputs or labels ( Shwartz-Ziv & Tishby , 2017 ) . The corresponding results can be used for analyzing feature representations as well . Our proposed measurements are similar , but are simpler to measure and potentially more intuitive . 3 PROBLEM SETUP . Suppose we are interested in mapping a space of inputs X to a space of targets Y . There is an underlying distribution D over X × Y . We have a feature representation φ : X → R^f which might be pre-trained on another dataset or task that is inaccessible to us . We will assume that ‖φ ( x ) ‖ ≤ B . For ease of exposition in the paper , we focus on the task of binary classification ; the case of multiclass classification is similar . For binary classification , we write the label space as Y = { −1 , 1 } . Our classifier will use a scoring function that operates on feature space , h : R^f → R . The predicted label will then be sign ( h ( φ ( x ) ) ) , where if h ( φ ( x ) ) is 0 , we will arbitrarily assign a label of -1 . The set of possible functions h defines the hypothesis class for the classifier ; denote this by H . For most of our analysis , we primarily care about the smoothness of the functions in H . We will assume that all functions in H are Lipschitz continuous with Lipschitz constant less than W . Thus : | h ( φ ( x ) ) − h ( φ ( x′ ) ) | ≤ W ‖φ ( x ) − φ ( x′ ) ‖ ∀ h ∈ H ( 1 ) We note that one such hypothesis class which is commonly used in practice is the class of linear classifiers of bounded norm : H_W = { v ↦ wᵀv ; ‖w‖ ≤ W } . Because the zero-one loss , l∗ ( h ( φ ( x ) ) , y ) = I [ sign ( h ( φ ( x ) ) ) ≠ y ] , is difficult to analyze , we will use the following continuous upper bound which is standard in theoretical treatments ( Mohri et al.
, 2018 ) : l ( h ( φ ( x ) ) , y ) = min ( 1 , max ( 0 , 1 − y h ( φ ( x ) ) ) ) ≥ l∗ ( h ( φ ( x ) ) , y ) ( 2 ) Our focus in this paper is to understand how the properties of the feature extractor φ affect the loss of the classifiers , l ( h ( φ ( x ) ) , y ) . In particular , we wish to understand how φ impacts ( a ) the lowest average error that one can achieve , and ( b ) the true error when one generalizes from a small training set . We begin by first identifying the key properties of feature representations . 4 TWO PROPERTIES OF FEATURE REPRESENTATIONS . What properties should we use to characterize feature representations ? First , we should use properties that are easy to measure , potentially with limited labeled data . Second , these properties should be easy for human developers and practitioners to reason about . Third , they should correlate well with the final accuracy of downstream classifiers . In sum , we want simple , intuitive measurements of the feature space that are predictive of downstream accuracy . One kind of intuitive measurement is to look at what the feature representation considers as similar . In particular , we could look at pairs of examples that embed close to each other in feature space and ask if they are indeed similar in terms of their ground truth labels . We call this property local alignment . In particular , we make the following definition . Definition 1 . Suppose α > 0 . The local alignment of the feature space φ , denoted by p^φ_a ( α ) , is the probability that two data points ( x , y ) , ( x′ , y′ ) ∼ D share a label given that they embed within a distance of α : p^φ_a ( α ) ≜ P ( y = y′ | ‖φ ( x ) − φ ( x′ ) ‖ ≤ α , ( x , y ) , ( x′ , y′ ) ∼ D ) ( 3 ) Here α is a hyperparameter which governs the resolution at which we do the analysis .
Note that this notion of local alignment is different from the alignment proposed by Wang & Isola ( 2020 ) : the latter looks at how often two data points that are semantically similar embed close to each other , while the former looks at how often two data points that embed close to each other are semantically similar . A feature space that is locally aligned per our definition may not actually be aligned per the definition of Wang & Isola ( 2020 ) , because two very similar images may be embedded far away from each other . Local alignment alone may be meaningless if data points do not generally embed close to each other . We also need the feature space to be such that data points generally congregate : Definition 2 . Suppose α > 0 . The degree of congregation of the feature space φ , denoted by p^φ_c ( α ) , is the probability that two points x , x′ sampled from D embed within a distance of α : p^φ_c ( α ) ≜ P ( ‖φ ( x ) − φ ( x′ ) ‖ ≤ α | x , x′ ∼ D ) ( 4 ) We have α as a hyperparameter here too . We now use these notions of local alignment and congregation to analyze the downstream error of classifiers . | The paper addresses the question of what makes a representation suitable or "good" for a particular task. The submission includes three data-dependent bounds: a lower bound on the average accuracy and two upper bounds on generalization. The analysis is based on the simple and intuitive concepts of local alignment and degree of congregation of the data points within a representational space and according to a binary labeling of the points. Two experiments support the theoretical claims. | SP:c894e857dba275f322dbd6d5f6c3c17b4c8bb03a |
A theoretically grounded characterization of feature representations | This paper proposes to study how feature representations are transferable to downstream tasks. It presents a theoretical characterization of such transfer, in terms of relatively intuitive concepts of congregation and alignment. Specifically, it presents various bounds on classifier's expected error, probability of high-error inputs, and Rademacher complexity. The paper validates these theoretical observations by experiments with transfer in visual settings, between supervised tasks, or to few-shot (+unlabeled data) transfer. | SP:c894e857dba275f322dbd6d5f6c3c17b4c8bb03a |
Zero-Cost Operation Scoring in Differentiable Architecture Search | Differentiable neural architecture search ( NAS ) has attracted significant attention in recent years due to its ability to quickly discover promising architectures of deep neural networks even in very large search spaces . Despite its success , many differentiable NAS methods lack robustness and may degenerate to trivial architectures with excessive parameter-free operations such as skip connections , thus leading to inferior performance . In fact , selecting operations based on the magnitude of architectural parameters was recently proven to be fundamentally wrong , showcasing the need to rethink how operation scoring and selection occur in differentiable NAS . To this end , we formalize and analyze a fundamental component of differentiable NAS : local “ operation scoring ” that occurs at each choice of operation . When comparing existing operation scoring functions , we find that existing methods can be viewed as inexact proxies for accuracy . We also find that existing methods perform poorly when analyzed empirically on NAS benchmarks . From this perspective , we introduce new training-free proxies to the context of differentiable NAS , and show that we can significantly speed up the search process while improving accuracy on multiple search spaces . We take inspiration from zero-cost proxies that were recently studied in the context of sample-based NAS but shown to degrade significantly for larger search spaces like DARTS . Our novel “ perturbation-based zero-cost operation scoring ” ( Zero-Cost-PT ) improves searching time and accuracy compared to the best available differentiable architecture search for many search space sizes , including very large ones . Specifically , we are able to improve accuracy compared to the best current method ( DARTS-PT ) on the DARTS CNN search space while being over 40× faster ( total searching time 25 minutes on a single GPU ) .
Our code is available at : https://github.com/avail-upon-acceptance . 1 INTRODUCTION . Since the recent dawn of deep learning , researchers have designed new architectures of neural networks on an unprecedented scale , with more efficient and accurate networks being proposed each year ( Iandola et al. , 2016 ; Howard et al. , 2017 ; Tan & Le , 2019 ; 2021 ) . However , the manual design of better DNN architectures has proven quite challenging , requiring extensive domain knowledge and persistent trial-and-error in search of the optimal hyperparameters ( Sandler et al. , 2018 ; Tan & Le , 2021 ) . Recently , this process has been successfully aided through automated methods , especially neural architecture search ( NAS ) , which can be found behind many of the state-of-the-art deep neural networks ( Real et al. , 2019 ; Wu et al. , 2019 ; Cai et al. , 2020 ; Moons et al. , 2020 ) . However , one of the biggest problems in NAS is the computational cost – even training a single deep network can require enormous computational resources and many NAS methods need to train tens , if not hundreds , of networks in order to converge to a good architecture ( Real et al. , 2019 ; Luo et al. , 2018 ; Dudziak et al. , 2020 ) . A related problem concerns the search space size – a larger search space would typically contain better architectures , but requires longer searching time ( Real et al. , 2019 ) . Differentiable architecture search ( DARTS ) was first proposed to tackle those challenges , showcasing promising results when searching for a network in a set of over 10^18 possible variations ( Liu et al. , 2019 ) . Unfortunately , DARTS has significant robustness issues as demonstrated through many recent works ( Zela et al. , 2020a ; Shu et al. , 2020 ; Yu et al. , 2020 ) . It also requires careful selection of hyperparameters , which makes it somewhat difficult to adapt to a new task . Recently , Wang et al .
( 2021 ) showed that operation selection in DARTS based on the magnitude of architectural parameters ( α ) is fundamentally wrong , and will always simply select skip connections over other more meaningful operations . They proposed an alternative operation selection method based on perturbation , where the importance of an operation is determined by the decrease of the supernet ’ s validation accuracy when it is removed . Then the most important operations are selected by exhaustively comparing them with other alternatives on each single edge of the supernet , until the final architecture is found . In a parallel line of work that aims to speed up NAS , proxies are often used instead of training accuracy to obtain an indication of performance quickly , without performing expensive full training for each searched model . Conventional proxies typically consist of a reduced form of training with fewer epochs , less data or a smaller DNN architecture ( Zhou et al. , 2020 ) . Most recently , zero-cost proxies , which are extreme types of NAS proxies that do not require any training , have gained interest and are shown to empirically outperform conventional training-based proxies and deliver outstanding results on common NAS benchmarks ( Abdelfattah et al. , 2021 ; Mellor et al. , 2021 ) . However , their efficient usage on large search spaces , typical for differentiable NAS , has been shown to be more challenging and thus remains an open problem ( Mellor et al. , 2021 ) . The objective of our paper is to shed some light on the implicit proxies that are used for operation scoring in differentiable NAS , and to propose new proxies in this setting that have the potential of improving both search speed and quality . We decompose differentiable NAS into its two constituent parts : ( 1 ) supernet training and ( 2 ) operation scoring .
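The perturbation-based selection of Wang et al. (2021) can be sketched as follows. This is our simplified rendering, not the DARTS-PT code: `evaluate_with_mask` is an assumed callable that returns the trained supernet's validation accuracy with one candidate operation masked out of the edge under consideration.

```python
def perturbation_scores(candidate_ops, base_acc, evaluate_with_mask):
    """Score each operation on an edge by the validation-accuracy drop
    observed when it is removed from the trained supernet; the operation
    whose removal hurts accuracy the most is kept when discretizing."""
    scores = {op: base_acc - evaluate_with_mask(op) for op in candidate_ops}
    best = max(scores, key=scores.get)   # operation selected for this edge
    return best, scores
```

In the full procedure this comparison is repeated edge by edge (with supernet re-tuning between discretization steps), which is exactly the expensive exhaustive loop the Zero-Cost-PT variant aims to replace with training-free scoring.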
We focus on the second component and formalize the concept of "operation scoring" that happens during local operation selection at each edge in a supernet. Through this lens, we are able to empirically compare the efficacy of existing differentiable NAS operation scoring functions. We find that existing methods act as proxies for accuracy and perform quite poorly on common NAS benchmarks; consequently, we propose new operation scoring functions based on zero-cost proxies that outperform existing methods in both search speed and accuracy. Our main contributions are: 1. Formalize "operation scoring" in differentiable NAS and perform a first-of-its-kind analysis of the implicit proxies that are present in existing differentiable NAS methods. 2. Propose, evaluate and compare perturbation-based zero-cost operation scoring (Zero-Cost-PT) for differentiable NAS, building upon recent work on training-free NAS proxies. 3. Perform a thorough empirical evaluation of Zero-Cost-PT on six search spaces and three datasets, including DARTS, DARTS subspaces S1-S4 and NAS-Bench-201. 2 RELATED WORK. Classic NAS and Proxies. Zoph & Le were among the first to propose an automated method to search for neural network architectures, using a reinforcement learning agent to maximize rewards coming from training different models (Zoph & Le, 2017). Since then, a number of alternative approaches have been proposed to reduce the significant cost introduced by training each proposed model. In general, reduced training can be found in many NAS works (Pham et al., 2018; Zhou et al., 2020), and different proxies have been proposed, e.g., searching for a model on a smaller dataset and then transferring the architecture to the larger target dataset (Real et al., 2019; Mehrotra et al., 2021), or incorporating a predictor into the search process (Wei et al., 2020; Dudziak et al., 2020; Wu et al., 2021; Wen et al., 2019). Zero-cost Proxies.
Very recently, zero-cost proxies (Mellor et al., 2021; Abdelfattah et al., 2021) for NAS emerged from the pruning-at-initialisation literature (Tanaka et al., 2020; Wang et al., 2020; Lee et al., 2019; Turner et al., 2020). Such proxies can be formulated as architecture scoring functions $S(A)$ that evaluate the potential or "saliency" of a given architecture $A$ for achieving high accuracy, at initialization and without the expensive training process. In this paper, we adopt the recently proposed zero-cost proxies (Abdelfattah et al., 2021; Mellor et al., 2021), namely grad_norm, snip, grasp, synflow, fisher and nwot. These metrics either aggregate the saliency of model parameters to compute the score of an architecture (Abdelfattah et al., 2021), or use the overlap of activations between different samples within a minibatch of data as a performance indicator (Mellor et al., 2021). In a similar vein, Chen et al. (2021) proposed training-free scoring of operations based on the neural tangent kernel (Jacot et al., 2018) and the number of linear regions in a DNN; the operations with the lowest score are pruned from the supernet iteratively until a subnetwork is found. Differentiable NAS and Operation Perturbation. Liu et al. (2019) first proposed to search for a neural network's architecture by parameterizing it with continuous values (called architectural parameters α) in a differentiable way. Their method constructs a supernet, i.e., a superposition of all networks in the search space, and optimizes the architectural parameters (α) together with the supernet weights (w). The final architecture is extracted from the supernet by preserving the operations with the largest α. Despite the significant reduction in searching time, the stability and generalizability of DARTS have been challenged, e.g., it may produce trivial models dominated by skip connections (Zela et al., 2020a).
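As a concrete illustration of the saliency-aggregating style of proxy described above, the sketch below computes a grad_norm-style score (the sum of weight-gradient norms from a single forward/backward pass at initialization) for a tiny NumPy MLP. This is a simplified stand-in, not the implementation from Abdelfattah et al. (2021); the network, data and shapes are invented for illustration.

```python
import numpy as np

def grad_norm_score(params, x, y):
    """grad_norm-style proxy: one forward/backward pass at initialization,
    then sum the Euclidean norms of all weight gradients (no training)."""
    W1, W2 = params
    h_pre = x @ W1                      # two-layer MLP, MSE loss
    h = np.maximum(h_pre, 0.0)          # ReLU
    err = h @ W2 - y
    g_out = 2.0 * err / err.size        # d(mean squared error)/d(output)
    gW2 = h.T @ g_out                   # manual backprop through layer 2
    g_h = (g_out @ W2.T) * (h_pre > 0)  # backprop through ReLU
    gW1 = x.T @ g_h
    return float(np.linalg.norm(gW1) + np.linalg.norm(gW2))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))             # one random minibatch
y = rng.normal(size=(8, 1))
# two candidate "architectures" (different hidden widths), scored untrained
narrow = (rng.normal(size=(4, 8)), rng.normal(size=(8, 1)))
wide = (rng.normal(size=(4, 32)), rng.normal(size=(32, 1)))
print(grad_norm_score(narrow, x, y), grad_norm_score(wide, x, y))
```

In a real NAS setting, the same minibatch would be reused across candidates so that scores are comparable.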
SDARTS (Chen & Hsieh, 2020) proposed to overcome such issues by smoothing the loss landscape, while SGAS (Li et al., 2020) considered a greedy algorithm to select and prune operations sequentially. The recent DARTS-PT (Wang et al., 2021) proposed a perturbation-based operation selection strategy, showing promising results on the DARTS space. In DARTS-PT, operations are no longer selected by optimizing the architectural parameters (α), but via a scoring function that evaluates the impact on the supernet's validation accuracy when an operation is removed. 3 RETHINKING OPERATION SCORING IN DIFFERENTIABLE NAS. In the context of differentiable NAS, a supernet contains multiple candidate operations on each edge, as shown in Figure 1. Operation scoring functions assign a score to rank these operations and select the best one. In this section, we empirically quantify the effectiveness of existing operation scoring methods in differentiable NAS, with a specific focus on DARTS (Liu et al., 2019) and the recently-proposed DARTS-PT (Wang et al., 2021). Concretely, we view these scoring functions as proxies for final subnetwork accuracy, and we evaluate them on that basis to quantify how well they perform. We challenge many assumptions made in previous work and show that we can outperform existing methods with lightweight alternatives. 3.1 OPERATION SCORING PRELIMINARIES. For a supernet $A$, we want to be able to start discretizing edges in order to derive a subnetwork. When discretizing, we replace an edge composed of multiple candidate operations and their respective (optional) architectural parameters α with only one operation selected from the candidates. We will denote the discretization of an edge $e$ with operation $o$, given a model $A$, as $A + (e, o)$. Analogously, the perturbation of a supernet $A$ by removing an operation $o$ from an edge $e$ will be denoted as $A - (e, o)$. Figure 1 illustrates discretization and perturbation.
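The two primitives just defined, discretization $A + (e, o)$ and perturbation $A - (e, o)$, can be sketched on a toy supernet in which each edge simply holds a set of candidate operation names. The edges and operation names below are hypothetical stand-ins for real DARTS modules:

```python
from copy import deepcopy

# A toy supernet: each edge holds its set of candidate operations.
supernet = {
    (0, 1): {"skip", "conv3x3", "maxpool"},
    (0, 2): {"skip", "conv3x3", "maxpool"},
    (1, 2): {"skip", "conv3x3", "maxpool"},
}

def discretize(arch, edge, op):
    """A + (e, o): replace the candidate set on `edge` with the single op."""
    new = deepcopy(arch)
    new[edge] = {op}
    return new

def perturb(arch, edge, op):
    """A - (e, o): remove one candidate operation from `edge`."""
    new = deepcopy(arch)
    new[edge] = new[edge] - {op}
    return new

a1 = discretize(supernet, (0, 1), "conv3x3")
a2 = perturb(supernet, (0, 1), "skip")
print(a1[(0, 1)], a2[(0, 1)])
```

Both operations return a fresh copy, so a sequence of discretizations or perturbations never mutates the original supernet.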
Furthermore, we will use $\mathcal{A}$, $E$ and $O$ to refer to the set of all possible network architectures, the edges in the supernet, and the candidate operations, respectively; extra details about the notation can be found in Appendix A.1. NAS can then be performed by iterative discretization of the edges in the supernet, yielding in the process a sequence of partially discretized architectures $A_0, A_1, \ldots, A_{|E|}$, where $A_0$ is the original supernet, $A_{|E|}$ is the final fully-discretized subnetwork (the result of NAS), and $A_t$ is $A_{t-1}$ after discretizing a next edge, i.e., $A_t = A_{t-1} + (e_t, o_t)$, where $t$ is an iteration counter. The problem of finding the sequence of $(e_t, o_t)$ that maximizes the performance of the resulting network $A_{|E|}$ has optimal substructure and can be reduced to the problem of finding the optimal policy $\pi : \mathcal{A} \times E \to O$ that is used to decide on an operation to assign to an edge at each iteration, given the current model (state). This policy function is defined by means of an analogous scoring function $f : \mathcal{A} \times E \times O \to \mathbb{R}$ that assigns scores to the possible values of the policy function; the policy is then defined as an argmax or argmin over $f$, depending on the type of scores produced by $f$.¹ We begin by defining the optimal scoring function that we will later use to assess the quality of different practical approaches. For a given partially-discretized model $A_t$, let us denote the set of all possible fully-discretized networks that can be obtained from $A_t$ after a next edge $e$ is discretized with an operation $o$ as $\mathcal{A}_{t,e,o}$.
Our optimal scoring function can then be defined as:

$$\pi_{\mathrm{best\text{-}acc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; \max_{A_{|E|} \in \mathcal{A}_{t,e,o}} V^*(A_{|E|}) \qquad (1)$$

where $V^*$ is the validation accuracy of a network after full training (we will use $V$ to denote validation accuracy without training).

¹ Since a scoring function clearly defines a relevant policy function, we will sometimes talk about a scoring function even though the context might be directly related to a policy function; in those cases it should be understood as the policy function that follows from the relevant scoring function (and vice versa).

It is easy to see that this policy meets Bellman's principle of optimality (Bellman, 1957); in fact, its definition follows directly from it, and it is therefore the optimal solution to our problem. However, it might be more practical to consider the expected achievable accuracy when an operation is selected, instead of the best. Therefore we also define the function $\pi_{\mathrm{avg\text{-}acc}}$:

$$\pi_{\mathrm{avg\text{-}acc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; \mathbb{E}_{A_{|E|} \in \mathcal{A}_{t,e,o}}\!\left[ V^*(A_{|E|}) \right] \qquad (2)$$

In practice, we are unable to use either $\pi_{\mathrm{best\text{-}acc}}$ or $\pi_{\mathrm{avg\text{-}acc}}$, since we would need the final validation accuracy $V^*$ of all the networks in the search space. There have been many attempts at finding approximate operation scoring functions; in the following, we consider the practical alternatives from DARTS (Liu et al., 2019) and DARTS-PT (Wang et al., 2021):

$$\pi_{\mathrm{darts}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; \alpha_{e,o} \qquad (3)$$

$$\pi_{\mathrm{disc\text{-}acc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; V^*(A_t + (e, o)), \qquad \pi_{\mathrm{darts\text{-}pt}}(A_t, e) = \operatorname*{arg\,min}_{o \in O_e} \; V(A_t - (e, o)) \qquad (4)$$

where $\alpha_{e,o}$ is the architectural parameter assigned to operation $o$ on edge $e$, as presented in DARTS (Liu et al., 2019). $\pi_{\mathrm{disc\text{-}acc}}$ uses the accuracy of the supernet after an operation $o$ is assigned to an edge $e$; this is referred to as "discretization accuracy" in DARTS-PT and is assumed to be a good operation scoring function (Wang et al., 2021). Intuitively, it could approximate $\pi_{\mathrm{avg\text{-}acc}}$.
$\pi_{\mathrm{darts\text{-}pt}}$ is the perturbation-based approach used by DARTS-PT; it is presented as a practical and lightweight alternative to $\pi_{\mathrm{disc\text{-}acc}}$ (Wang et al., 2021). Zero-Cost Operation Scoring. We argue that the scoring functions (3) and (4) are merely proxies for the best achievable accuracy (Equation 1). As such, we see an opportunity to use a new class of training-free proxies that are very fast to compute and have been shown to work well within NAS, albeit not in differentiable NAS, nor within large search spaces. We present the following scoring functions, which use a zero-cost proxy $S$ instead of validation accuracy when discretizing an edge or perturbing an operation. Note that the supernet is randomly initialized and untrained in this case.

$$\pi_{\mathrm{disc\text{-}zc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; S(A_t + (e, o)), \qquad \pi_{\mathrm{zc\text{-}pt}}(A_t, e) = \operatorname*{arg\,min}_{o \in O_e} \; S(A_t - (e, o)) \qquad (5)$$

Note that TE-NAS (Chen et al., 2021) also uses training-free scoring of operations; however, it uses different scoring metrics to prune operations from a supernet, as opposed to discretizing or perturbing operations as shown above. We include a comparison to TE-NAS throughout our paper. | Differentiable neural architecture search (NAS), and more recently perturbation-based operation selection in differentiable NAS, is a popular recent area of study. In parallel, zero-cost proxies are also gaining popularity in NAS. In this paper, the authors combine insights from perturbation-based operation selection and zero-cost proxies to create a new method that sees substantial speedups. Specifically, they formalize "operation scoring" and use zero-cost proxies to score operations within the perturbation paradigm. This leads them to a new NAS method (Zero-Cost-PT) that outperforms existing methods. They evaluate their proposed algorithm on the DARTS search space (and its subsets) and NAS-Bench-201, showing that it can achieve up to 40x speedups.
| SP:54383f60863043b30fb83cf5d5499165fed5ae8a |
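The perturbation-based zero-cost selection $\pi_{\mathrm{zc\text{-}pt}}$ described in the row above, combined with greedy edge-by-edge discretization, can be sketched end-to-end on a toy supernet. The proxy S below is a deliberately trivial stand-in (it just counts edges that still contain a conv3x3 candidate) so the loop's behaviour is easy to predict; a real implementation would score a randomly initialized network with a metric such as synflow or nwot. All edge and operation names are invented.

```python
from copy import deepcopy

OPS = ("skip", "conv3x3", "maxpool")          # hypothetical candidates
supernet = {edge: set(OPS) for edge in [(0, 1), (0, 2), (1, 2)]}

def S(arch):
    # Trivial stand-in for a zero-cost proxy S(A): counts edges that still
    # contain a conv3x3 candidate. A real S would score the network at init.
    return sum(1.0 for ops in arch.values() if "conv3x3" in ops)

def discretize(arch, edge, op):               # A + (e, o)
    new = deepcopy(arch)
    new[edge] = {op}
    return new

def perturb(arch, edge, op):                  # A - (e, o)
    new = deepcopy(arch)
    new[edge] = new[edge] - {op}
    return new

def zc_pt_select(arch, edge):
    # pi_zc-pt: keep the op whose removal lowers S(A - (e, o)) the most
    return min(sorted(arch[edge]), key=lambda o: S(perturb(arch, edge, o)))

def search(arch):
    # Greedy iterative discretization: A_0, A_1, ..., A_|E|
    for edge in sorted(arch):
        arch = discretize(arch, edge, zc_pt_select(arch, edge))
    return arch

final = search(supernet)
print(final)  # every edge ends up discretized to conv3x3
```

Because removing a conv3x3 candidate drops the toy score while removing skip or maxpool does not, the argmin in `zc_pt_select` keeps conv3x3 on every edge.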
Zero-Cost Operation Scoring in Differentiable Architecture Search | Differentiable neural architecture search (NAS) has attracted significant attention in recent years due to its ability to quickly discover promising architectures of deep neural networks, even in very large search spaces. Despite its success, many differentiable NAS methods lack robustness and may degenerate into trivial architectures with excessive parameter-free operations such as skip connections, leading to inferior performance. In fact, selecting operations based on the magnitude of architectural parameters was recently proven to be fundamentally wrong, showcasing the need to rethink how operation scoring and selection occur in differentiable NAS. To this end, we formalize and analyze a fundamental component of differentiable NAS: the local "operation scoring" that occurs at each choice of operation. When comparing existing operation scoring functions, we find that existing methods can be viewed as inexact proxies for accuracy. We also find that existing methods perform poorly when analyzed empirically on NAS benchmarks. From this perspective, we introduce new training-free proxies to the context of differentiable NAS and show that we can significantly speed up the search process while improving accuracy on multiple search spaces. We take inspiration from zero-cost proxies that were recently studied in the context of sample-based NAS but shown to degrade significantly for larger search spaces like DARTS. Our novel "perturbation-based zero-cost operation scoring" (Zero-Cost-PT) improves searching time and accuracy compared to the best available differentiable architecture search methods for many search space sizes, including very large ones. Specifically, we are able to improve accuracy compared to the best current method (DARTS-PT) on the DARTS CNN search space while being over 40× faster (total searching time: 25 minutes on a single GPU).
Our code is available at: https://github.com/avail-upon-acceptance. 1 INTRODUCTION. Since the recent dawn of deep learning, researchers have designed new architectures of neural networks on an unprecedented scale, with more efficient and accurate networks being proposed each year (Iandola et al., 2016; Howard et al., 2017; Tan & Le, 2019; 2021). However, the manual design of better DNN architectures has proven quite challenging, requiring extensive domain knowledge and persistent trial-and-error in search of the optimal hyperparameters (Sandler et al., 2018; Tan & Le, 2021). Recently this process has been successfully aided through automated methods, especially neural architecture search (NAS), which can be found behind many of the state-of-the-art deep neural networks (Real et al., 2019; Wu et al., 2019; Cai et al., 2020; Moons et al., 2020). However, one of the biggest problems in NAS is the computational cost: even training a single deep network can require enormous computational resources, and many NAS methods need to train tens, if not hundreds, of networks in order to converge to a good architecture (Real et al., 2019; Luo et al., 2018; Dudziak et al., 2020). A related problem concerns the search space size: a larger search space would typically contain better architectures, but requires longer searching time (Real et al., 2019). Differentiable architecture search (DARTS) was first proposed to tackle those challenges, showcasing promising results when searching for a network in a set of over $10^{18}$ possible variations (Liu et al., 2019). Unfortunately, DARTS has significant robustness issues, as demonstrated through many recent works (Zela et al., 2020a; Shu et al., 2020; Yu et al., 2020). It also requires careful selection of hyperparameters, which makes it somewhat difficult to adapt to a new task. Recently, Wang et al.
(2021) showed that operation selection in DARTS based on the magnitude of architectural parameters (α) is fundamentally wrong, and will always simply select skip connections over other, more meaningful operations. They proposed an alternative operation selection method based on perturbation, where the importance of an operation is determined by the decrease in the supernet's validation accuracy when it is removed. The most important operations are then selected by exhaustively comparing them with the other alternatives on each single edge of the supernet, until the final architecture is found. In a parallel line of work that aims to speed up NAS, proxies are often used instead of training accuracy to obtain a quick indication of performance without performing expensive full training for each searched model. Conventional proxies typically consist of a reduced form of training, with fewer epochs, less data or a smaller DNN architecture (Zhou et al., 2020). Most recently, zero-cost proxies, which are extreme types of NAS proxies that do not require any training, have gained interest and have been shown to empirically outperform conventional training-based proxies and deliver outstanding results on common NAS benchmarks (Abdelfattah et al., 2021; Mellor et al., 2021). However, their efficient usage on large search spaces, typical for differentiable NAS, has been shown to be more challenging and thus remains an open problem (Mellor et al., 2021). The objective of our paper is to shed some light onto the implicit proxies that are used for operation scoring in differentiable NAS, and to propose new proxies in this setting that have the potential of improving both search speed and quality. We decompose differentiable NAS into its two constituent parts: (1) supernet training and (2) operation scoring.
We focus on the second component and formalize the concept of "operation scoring" that happens during local operation selection at each edge in a supernet. Through this lens, we are able to empirically compare the efficacy of existing differentiable NAS operation scoring functions. We find that existing methods act as proxies for accuracy and perform quite poorly on common NAS benchmarks; consequently, we propose new operation scoring functions based on zero-cost proxies that outperform existing methods in both search speed and accuracy. Our main contributions are: 1. Formalize "operation scoring" in differentiable NAS and perform a first-of-its-kind analysis of the implicit proxies that are present in existing differentiable NAS methods. 2. Propose, evaluate and compare perturbation-based zero-cost operation scoring (Zero-Cost-PT) for differentiable NAS, building upon recent work on training-free NAS proxies. 3. Perform a thorough empirical evaluation of Zero-Cost-PT on six search spaces and three datasets, including DARTS, DARTS subspaces S1-S4 and NAS-Bench-201. 2 RELATED WORK. Classic NAS and Proxies. Zoph & Le were among the first to propose an automated method to search for neural network architectures, using a reinforcement learning agent to maximize rewards coming from training different models (Zoph & Le, 2017). Since then, a number of alternative approaches have been proposed to reduce the significant cost introduced by training each proposed model. In general, reduced training can be found in many NAS works (Pham et al., 2018; Zhou et al., 2020), and different proxies have been proposed, e.g., searching for a model on a smaller dataset and then transferring the architecture to the larger target dataset (Real et al., 2019; Mehrotra et al., 2021), or incorporating a predictor into the search process (Wei et al., 2020; Dudziak et al., 2020; Wu et al., 2021; Wen et al., 2019). Zero-cost Proxies.
Very recently, zero-cost proxies (Mellor et al., 2021; Abdelfattah et al., 2021) for NAS emerged from the pruning-at-initialisation literature (Tanaka et al., 2020; Wang et al., 2020; Lee et al., 2019; Turner et al., 2020). Such proxies can be formulated as architecture scoring functions $S(A)$ that evaluate the potential or "saliency" of a given architecture $A$ for achieving high accuracy, at initialization and without the expensive training process. In this paper, we adopt the recently proposed zero-cost proxies (Abdelfattah et al., 2021; Mellor et al., 2021), namely grad_norm, snip, grasp, synflow, fisher and nwot. These metrics either aggregate the saliency of model parameters to compute the score of an architecture (Abdelfattah et al., 2021), or use the overlap of activations between different samples within a minibatch of data as a performance indicator (Mellor et al., 2021). In a similar vein, Chen et al. (2021) proposed training-free scoring of operations based on the neural tangent kernel (Jacot et al., 2018) and the number of linear regions in a DNN; the operations with the lowest score are pruned from the supernet iteratively until a subnetwork is found. Differentiable NAS and Operation Perturbation. Liu et al. (2019) first proposed to search for a neural network's architecture by parameterizing it with continuous values (called architectural parameters α) in a differentiable way. Their method constructs a supernet, i.e., a superposition of all networks in the search space, and optimizes the architectural parameters (α) together with the supernet weights (w). The final architecture is extracted from the supernet by preserving the operations with the largest α. Despite the significant reduction in searching time, the stability and generalizability of DARTS have been challenged, e.g., it may produce trivial models dominated by skip connections (Zela et al., 2020a).
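The activation-overlap family of proxies mentioned above can be illustrated with a simplified nwot-style score: each sample in a minibatch is hashed by its ReLU activation pattern, a kernel of pattern overlaps is formed, and the log-determinant of that kernel is the score. This is a simplified reading of Mellor et al. (2021), with invented network shapes; it is not their exact formulation.

```python
import numpy as np

def nwot_score(weights, x):
    """Simplified nwot-style proxy: hash each sample by its ReLU activation
    pattern, build the kernel of pattern overlaps, score = log|det K|."""
    codes = []
    h = x
    for W in weights:
        h = h @ W
        codes.append(h > 0)                 # binary pattern per sample
        h = np.maximum(h, 0.0)
    c = np.concatenate(codes, axis=1).astype(float)  # (samples, units)
    n_units = c.shape[1]
    # Hamming distance between every pair of binary activation codes
    hamming = c @ (1.0 - c.T) + (1.0 - c) @ c.T
    K = n_units - hamming                   # overlap kernel, diag = n_units
    _, logdet = np.linalg.slogdet(K)
    return logdet

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 8))                # one random minibatch
net = [rng.normal(size=(8, 32)), rng.normal(size=(32, 32))]
print(nwot_score(net, x))
```

Intuitively, a network whose samples land in many distinct activation regions yields a better-conditioned kernel and a higher log-determinant.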
SDARTS (Chen & Hsieh, 2020) proposed to overcome such issues by smoothing the loss landscape, while SGAS (Li et al., 2020) considered a greedy algorithm to select and prune operations sequentially. The recent DARTS-PT (Wang et al., 2021) proposed a perturbation-based operation selection strategy, showing promising results on the DARTS space. In DARTS-PT, operations are no longer selected by optimizing the architectural parameters (α), but via a scoring function that evaluates the impact on the supernet's validation accuracy when an operation is removed. 3 RETHINKING OPERATION SCORING IN DIFFERENTIABLE NAS. In the context of differentiable NAS, a supernet contains multiple candidate operations on each edge, as shown in Figure 1. Operation scoring functions assign a score to rank these operations and select the best one. In this section, we empirically quantify the effectiveness of existing operation scoring methods in differentiable NAS, with a specific focus on DARTS (Liu et al., 2019) and the recently-proposed DARTS-PT (Wang et al., 2021). Concretely, we view these scoring functions as proxies for final subnetwork accuracy, and we evaluate them on that basis to quantify how well they perform. We challenge many assumptions made in previous work and show that we can outperform existing methods with lightweight alternatives. 3.1 OPERATION SCORING PRELIMINARIES. For a supernet $A$, we want to be able to start discretizing edges in order to derive a subnetwork. When discretizing, we replace an edge composed of multiple candidate operations and their respective (optional) architectural parameters α with only one operation selected from the candidates. We will denote the discretization of an edge $e$ with operation $o$, given a model $A$, as $A + (e, o)$. Analogously, the perturbation of a supernet $A$ by removing an operation $o$ from an edge $e$ will be denoted as $A - (e, o)$. Figure 1 illustrates discretization and perturbation.
Furthermore, we will use $\mathcal{A}$, $E$ and $O$ to refer to the set of all possible network architectures, the edges in the supernet, and the candidate operations, respectively; extra details about the notation can be found in Appendix A.1. NAS can then be performed by iterative discretization of the edges in the supernet, yielding in the process a sequence of partially discretized architectures $A_0, A_1, \ldots, A_{|E|}$, where $A_0$ is the original supernet, $A_{|E|}$ is the final fully-discretized subnetwork (the result of NAS), and $A_t$ is $A_{t-1}$ after discretizing a next edge, i.e., $A_t = A_{t-1} + (e_t, o_t)$, where $t$ is an iteration counter. The problem of finding the sequence of $(e_t, o_t)$ that maximizes the performance of the resulting network $A_{|E|}$ has optimal substructure and can be reduced to the problem of finding the optimal policy $\pi : \mathcal{A} \times E \to O$ that is used to decide on an operation to assign to an edge at each iteration, given the current model (state). This policy function is defined by means of an analogous scoring function $f : \mathcal{A} \times E \times O \to \mathbb{R}$ that assigns scores to the possible values of the policy function; the policy is then defined as an argmax or argmin over $f$, depending on the type of scores produced by $f$.¹ We begin by defining the optimal scoring function that we will later use to assess the quality of different practical approaches. For a given partially-discretized model $A_t$, let us denote the set of all possible fully-discretized networks that can be obtained from $A_t$ after a next edge $e$ is discretized with an operation $o$ as $\mathcal{A}_{t,e,o}$.
Our optimal scoring function can then be defined as:

$$\pi_{\mathrm{best\text{-}acc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; \max_{A_{|E|} \in \mathcal{A}_{t,e,o}} V^*(A_{|E|}) \qquad (1)$$

where $V^*$ is the validation accuracy of a network after full training (we will use $V$ to denote validation accuracy without training).

¹ Since a scoring function clearly defines a relevant policy function, we will sometimes talk about a scoring function even though the context might be directly related to a policy function; in those cases it should be understood as the policy function that follows from the relevant scoring function (and vice versa).

It is easy to see that this policy meets Bellman's principle of optimality (Bellman, 1957); in fact, its definition follows directly from it, and it is therefore the optimal solution to our problem. However, it might be more practical to consider the expected achievable accuracy when an operation is selected, instead of the best. Therefore we also define the function $\pi_{\mathrm{avg\text{-}acc}}$:

$$\pi_{\mathrm{avg\text{-}acc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; \mathbb{E}_{A_{|E|} \in \mathcal{A}_{t,e,o}}\!\left[ V^*(A_{|E|}) \right] \qquad (2)$$

In practice, we are unable to use either $\pi_{\mathrm{best\text{-}acc}}$ or $\pi_{\mathrm{avg\text{-}acc}}$, since we would need the final validation accuracy $V^*$ of all the networks in the search space. There have been many attempts at finding approximate operation scoring functions; in the following, we consider the practical alternatives from DARTS (Liu et al., 2019) and DARTS-PT (Wang et al., 2021):

$$\pi_{\mathrm{darts}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; \alpha_{e,o} \qquad (3)$$

$$\pi_{\mathrm{disc\text{-}acc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; V^*(A_t + (e, o)), \qquad \pi_{\mathrm{darts\text{-}pt}}(A_t, e) = \operatorname*{arg\,min}_{o \in O_e} \; V(A_t - (e, o)) \qquad (4)$$

where $\alpha_{e,o}$ is the architectural parameter assigned to operation $o$ on edge $e$, as presented in DARTS (Liu et al., 2019). $\pi_{\mathrm{disc\text{-}acc}}$ uses the accuracy of the supernet after an operation $o$ is assigned to an edge $e$; this is referred to as "discretization accuracy" in DARTS-PT and is assumed to be a good operation scoring function (Wang et al., 2021). Intuitively, it could approximate $\pi_{\mathrm{avg\text{-}acc}}$.
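To make the distinction between equations (1) and (2) concrete, the toy example below enumerates a space with two edges and two candidate operations per edge, using invented final accuracies V* for the four fully-discretized networks. The op preferred by the best-completion rule differs from the one preferred by the expected-completion rule:

```python
# Invented final accuracies V* for all fully-discretized networks of a
# toy space: 2 edges, candidate ops "a" and "b" on each (numbers made up).
OPS = ("a", "b")
V_star = {("a", "a"): 0.90, ("a", "b"): 0.60,
          ("b", "a"): 0.80, ("b", "b"): 0.85}

def pi_best_acc():
    # Eq. (1): pick the op on edge 0 whose BEST completion has highest V*
    return max(OPS, key=lambda o: max(V_star[(o, o2)] for o2 in OPS))

def pi_avg_acc():
    # Eq. (2): pick the op on edge 0 with highest EXPECTED completion V*
    return max(OPS, key=lambda o: sum(V_star[(o, o2)] for o2 in OPS) / len(OPS))

print(pi_best_acc(), pi_avg_acc())  # prints: a b
```

Here op "a" has the single best completion (0.90) but a worse average (0.75 vs 0.825), so the two optimal-style policies disagree even in this tiny space.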
$\pi_{\mathrm{darts\text{-}pt}}$ is the perturbation-based approach used by DARTS-PT; it is presented as a practical and lightweight alternative to $\pi_{\mathrm{disc\text{-}acc}}$ (Wang et al., 2021). Zero-Cost Operation Scoring. We argue that the scoring functions (3) and (4) are merely proxies for the best achievable accuracy (Equation 1). As such, we see an opportunity to use a new class of training-free proxies that are very fast to compute and have been shown to work well within NAS, albeit not in differentiable NAS, nor within large search spaces. We present the following scoring functions, which use a zero-cost proxy $S$ instead of validation accuracy when discretizing an edge or perturbing an operation. Note that the supernet is randomly initialized and untrained in this case.

$$\pi_{\mathrm{disc\text{-}zc}}(A_t, e) = \operatorname*{arg\,max}_{o \in O_e} \; S(A_t + (e, o)), \qquad \pi_{\mathrm{zc\text{-}pt}}(A_t, e) = \operatorname*{arg\,min}_{o \in O_e} \; S(A_t - (e, o)) \qquad (5)$$

Note that TE-NAS (Chen et al., 2021) also uses training-free scoring of operations; however, it uses different scoring metrics to prune operations from a supernet, as opposed to discretizing or perturbing operations as shown above. We include a comparison to TE-NAS throughout our paper. | In this paper, the authors introduce new training-free proxies to the context of differentiable NAS, which can speed up the search process while improving accuracy. Further, the authors propose, evaluate and compare perturbation-based zero-cost operation scoring (Zero-Cost-PT) for differentiable NAS. Extensive experiments empirically show the effectiveness of Zero-Cost-PT on six search spaces and three datasets. | SP:54383f60863043b30fb83cf5d5499165fed5ae8a |
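Since the section above treats each scoring function as a proxy for final subnetwork accuracy, a standard way to quantify proxy quality is rank correlation between proxy scores and ground-truth accuracies, e.g., Kendall's tau. The sketch below uses a minimal tau implementation and invented numbers; in practice `scipy.stats.kendalltau` would normally be used.

```python
def kendall_tau(a, b):
    # Kendall rank correlation: (concordant - discordant) / total pairs.
    # No tie correction; sufficient for this illustration.
    n = len(a)
    num = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            num += 1 if s > 0 else -1 if s < 0 else 0
    return num / (n * (n - 1) / 2)

# Made-up numbers: per-operation ground-truth accuracies vs. two proxy
# score vectors, one well-correlated and one poorly correlated.
true_acc = [0.91, 0.88, 0.93, 0.85, 0.90]
good_prox = [8.0, 5.0, 9.0, 2.0, 7.0]
bad_prox = [1.0, 9.0, 2.0, 8.0, 3.0]
print(kendall_tau(true_acc, good_prox), kendall_tau(true_acc, bad_prox))
```

A tau near 1 means the proxy ranks operations almost exactly as final accuracy would, which is all an argmax/argmin policy needs.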
Zero-Cost Operation Scoring in Differentiable Architecture Search | Differentiable neural architecture search (NAS) has attracted significant attention in recent years due to its ability to quickly discover promising architectures of deep neural networks, even in very large search spaces. Despite its success, many differentiable NAS methods lack robustness and may degenerate into trivial architectures with excessive parameter-free operations such as skip connections, leading to inferior performance. In fact, selecting operations based on the magnitude of architectural parameters was recently proven to be fundamentally wrong, showcasing the need to rethink how operation scoring and selection occur in differentiable NAS. To this end, we formalize and analyze a fundamental component of differentiable NAS: the local "operation scoring" that occurs at each choice of operation. When comparing existing operation scoring functions, we find that existing methods can be viewed as inexact proxies for accuracy. We also find that existing methods perform poorly when analyzed empirically on NAS benchmarks. From this perspective, we introduce new training-free proxies to the context of differentiable NAS and show that we can significantly speed up the search process while improving accuracy on multiple search spaces. We take inspiration from zero-cost proxies that were recently studied in the context of sample-based NAS but shown to degrade significantly for larger search spaces like DARTS. Our novel "perturbation-based zero-cost operation scoring" (Zero-Cost-PT) improves searching time and accuracy compared to the best available differentiable architecture search methods for many search space sizes, including very large ones. Specifically, we are able to improve accuracy compared to the best current method (DARTS-PT) on the DARTS CNN search space while being over 40× faster (total searching time: 25 minutes on a single GPU).
Our code is available at: https://github.com/avail-upon-acceptance. 1 INTRODUCTION. Since the recent dawn of deep learning, researchers have designed new architectures of neural networks on an unprecedented scale, with more efficient and accurate networks being proposed each year (Iandola et al., 2016; Howard et al., 2017; Tan & Le, 2019; 2021). However, the manual design of better DNN architectures has proven quite challenging, requiring extensive domain knowledge and persistent trial-and-error in search of the optimal hyperparameters (Sandler et al., 2018; Tan & Le, 2021). Recently this process has been successfully aided through automated methods, especially neural architecture search (NAS), which can be found behind many of the state-of-the-art deep neural networks (Real et al., 2019; Wu et al., 2019; Cai et al., 2020; Moons et al., 2020). However, one of the biggest problems in NAS is the computational cost: even training a single deep network can require enormous computational resources, and many NAS methods need to train tens, if not hundreds, of networks in order to converge to a good architecture (Real et al., 2019; Luo et al., 2018; Dudziak et al., 2020). A related problem concerns the search space size: a larger search space would typically contain better architectures, but requires longer searching time (Real et al., 2019). Differentiable architecture search (DARTS) was first proposed to tackle those challenges, showcasing promising results when searching for a network in a set of over $10^{18}$ possible variations (Liu et al., 2019). Unfortunately, DARTS has significant robustness issues, as demonstrated through many recent works (Zela et al., 2020a; Shu et al., 2020; Yu et al., 2020). It also requires careful selection of hyperparameters, which makes it somewhat difficult to adapt to a new task. Recently, Wang et al.
( 2021 ) showed that operation selection in DARTS based on the magnitude of architectural parameters ( α ) is fundamentally wrong , and will always simply select skip connections over other , more meaningful operations . They proposed an alternative operation selection method based on perturbation , where the importance of an operation is determined by the decrease of the supernet ’ s validation accuracy when it is removed . Then the most important operations are selected by exhaustively comparing them with other alternatives on each single edge of the supernet , until the final architecture is found . In a parallel line of work that aims to speed up NAS , proxies are often used instead of training accuracy to obtain an indication of performance quickly without performing expensive full training for each searched model . Conventional proxies typically consist of a reduced form of training with fewer epochs , less data or a smaller DNN architecture ( Zhou et al. , 2020 ) . Most recently , zero-cost proxies , which are extreme types of NAS proxies that do not require any training , have gained interest and have been shown to empirically outperform conventional training-based proxies and deliver outstanding results on common NAS benchmarks ( Abdelfattah et al. , 2021 ; Mellor et al. , 2021 ) . However , their efficient usage on large search spaces , typical for differentiable NAS , has been shown to be more challenging and thus remains an open problem ( Mellor et al. , 2021 ) . The objective of our paper is to shed some light on the implicit proxies that are used for operation scoring in differentiable NAS , and to propose new proxies in this setting that have the potential to improve both search speed and quality . We decompose differentiable NAS into its two constituent parts : ( 1 ) supernet training and ( 2 ) operation scoring .
We focus on the second component and we formalize the concept of “ operation scoring ” that happens during local operation selection at each edge in a supernet . Through this lens , we are able to empirically compare the efficacy of existing differentiable NAS operation scoring functions . We find that existing methods act as proxies for accuracy and perform quite poorly on common NAS benchmarks . Consequently , we propose new operation scoring functions based on zero-cost proxies that outperform existing methods in both search speed and accuracy . Our main contributions are : 1 . Formalize “ operation scoring ” in differentiable NAS and perform a first-of-its-kind analysis of the implicit proxies that are present in existing differentiable NAS methods . 2 . Propose , evaluate and compare perturbation-based zero-cost operation scoring ( Zero-Cost-PT ) for differentiable NAS , building upon recent work on training-free NAS proxies . 3 . Perform a thorough empirical evaluation of Zero-Cost-PT in six search spaces and three datasets , including DARTS , DARTS subspaces S1-S4 and NAS-Bench-201 . 2 RELATED WORK . Classic NAS and Proxies . Zoph & Le were among the first to propose an automated method to search neural network architectures , using a reinforcement learning agent to maximize rewards coming from training different models ( Zoph & Le , 2017 ) . Since then , a number of alternative approaches have been proposed in order to reduce the significant cost introduced by training each proposed model . In general , reduced training can be found in many NAS works ( Pham et al. , 2018 ; Zhou et al. , 2020 ) , and different proxies have been proposed , e.g . searching for a model on a smaller dataset and then transferring the architecture to the larger target dataset ( Real et al. , 2019 ; Mehrotra et al. , 2021 ) or incorporating a predictor into the search process ( Wei et al. , 2020 ; Dudziak et al. , 2020 ; Wu et al. , 2021 ; Wen et al. , 2019 ) . Zero-cost Proxies .
Very recently , zero-cost proxies ( Mellor et al. , 2021 ; Abdelfattah et al. , 2021 ) for NAS emerged from the pruning-at-initialisation literature ( Tanaka et al. , 2020 ; Wang et al. , 2020 ; Lee et al. , 2019 ; Turner et al. , 2020 ) . Such proxies can be formulated as architecture scoring functions S ( A ) that evaluate the potential or “ saliency ” of a given architecture A in achieving accuracy at initialization , without the expensive training process . In this paper , we adopt the recently proposed zero-cost proxies ( Abdelfattah et al. , 2021 ; Mellor et al. , 2021 ) , namely grad_norm , snip , grasp , synflow , fisher and nwot . Those metrics either aggregate the saliency of model parameters to compute the score of an architecture ( Abdelfattah et al. , 2021 ) , or use the overlap of activations between different samples within a minibatch of data as a performance indicator ( Mellor et al. , 2021 ) . In a similar vein , Chen et al . ( 2021 ) proposed the use of training-free scoring for operations based on the neural tangent kernel ( Jacot et al. , 2021 ) and the number of linear regions in a DNN ; the operations with the lowest score are pruned from the supernet iteratively until a subnetwork is found . Differentiable NAS and Operation Perturbation . Liu et al . ( 2019 ) first proposed to search for a neural network ’ s architecture by parameterizing it with continuous values ( called architectural parameters α ) in a differentiable way . Their method constructs a supernet , i.e. , a superposition of all networks in the search space , and optimizes the architectural parameters ( α ) together with supernet weights ( w ) . The final architecture is extracted from the supernet by preserving operations with the largest α . Despite the significant reduction in searching time , the stability and generalizability of DARTS have been challenged , e.g. , it may produce trivial models dominated by skip connections ( Zela et al. , 2020a ) .
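As a minimal illustration of this continuous relaxation and of magnitude-based selection , the sketch below uses scalar stand-ins for the candidate operations ; the operation set and the α values are hypothetical assumptions of ours , and a real implementation would operate on feature maps and learn α jointly with the weights by gradient descent .

```python
import math

# Toy candidate operations on one edge (scalar stand-ins for conv/pool/skip).
OPS = {
    "skip": lambda x: x,
    "conv": lambda x: 0.5 * x + 1.0,   # hypothetical learned transform
    "zero": lambda x: 0.0,
}

def softmax(alphas):
    m = max(alphas.values())
    exps = {o: math.exp(a - m) for o, a in alphas.items()}
    z = sum(exps.values())
    return {o: e / z for o, e in exps.items()}

def mixed_edge(x, alphas):
    """DARTS-style continuous relaxation: a softmax(alpha)-weighted sum
    of all candidate operations' outputs on this edge."""
    w = softmax(alphas)
    return sum(w[o] * OPS[o](x) for o in OPS)

def select_operation(alphas):
    """Magnitude-based selection: keep the operation with the largest alpha
    (the step Wang et al. (2021) argue is fundamentally flawed)."""
    return max(alphas, key=alphas.get)

alphas = {"skip": 1.2, "conv": 0.7, "zero": -0.5}   # hypothetical values
y = mixed_edge(2.0, alphas)
print(select_operation(alphas))  # -> skip (wins on magnitude alone)
```

With these made-up α values the skip connection is selected regardless of how useful the other operations are, which is a small-scale caricature of the skip-connection degeneracy discussed above.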
SDARTS ( Chen & Hsieh , 2020 ) proposed to overcome such issues by smoothing the loss landscape , while SGAS ( Li et al. , 2020 ) considered a greedy algorithm to select and prune operations sequentially . The recent DARTS-PT ( Wang et al. , 2021 ) proposed a perturbation-based operation selection strategy , showing promising results on the DARTS space . In DARTS-PT , operations are no longer selected by optimizing architectural parameters ( α ) , but via a scoring function evaluating the impact on the supernet ’ s validation accuracy when the operation is removed . 3 RETHINKING OPERATION SCORING IN DIFFERENTIABLE NAS . In the context of differentiable NAS , a supernet contains multiple candidate operations on each edge as shown in Figure 1 . Operation scoring functions assign a score to rank operations and select the best one . In this section , we empirically quantify the effectiveness of existing operation scoring methods in differentiable NAS , with a specific focus on DARTS ( Liu et al. , 2019 ) and the recently-proposed DARTS-PT ( Wang et al. , 2021 ) . Concretely , we view these scoring functions as proxies for final subnetwork accuracies and we evaluate them on that basis to quantify how well these functions perform . We challenge many assumptions made in previous work and show that we can outperform existing methods with lightweight alternatives . 3.1 OPERATION SCORING PRELIMINARIES . For a supernet A we want to be able to start discretizing edges in order to derive a subnetwork . When discretizing , we replace an edge composed of multiple candidate operations and their respective ( optional ) architectural parameters α with only one operation selected from the candidates . We will denote the process of discretization of an edge e with operation o , given a model A , as A + ( e , o ) . Analogously , the perturbation of a supernet A by removing an operation o from an edge e will be denoted as A − ( e , o ) . Figure 1 illustrates discretization and perturbation .
Furthermore , we will use A , E and O to refer to the set of all possible network architectures , edges in the supernet and candidate operations , respectively – extra details about notation can be found in Appendix A.1 . NAS can then be performed by iterative discretization of edges in the supernet , yielding in the process a sequence of partially discretized architectures : A0 , A1 , ... , A|E| , where A0 is the original supernet , A|E| is the final fully-discretized subnetwork ( result of NAS ) , and At is At−1 after discretizing the next edge , i.e. , At = At−1 + ( et , ot ) where t is an iteration counter . The problem of finding the sequence of ( et , ot ) that maximizes performance of the resulting network A|E| has an optimal substructure and can be reduced to the problem of finding the optimal policy π : A × E → O that is used to decide on an operation to assign to an edge at each iteration , given the current model ( state ) . This policy function is defined by means of an analogous scoring function f : A × E × O → R , that assigns scores to the possible values of the policy function , and the policy is then defined as argmax or argmin over f , depending on the type of scores produced by f .¹ We begin by defining the optimal scoring function that we will later use to assess the quality of different practical approaches . For a given partially-discretized model At , let us denote the set of all possible fully-discretized networks that can be obtained from At after the next edge e is discretized with an operation o as At,e,o .
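As an illustration , the notation above maps onto a small data-structure sketch : a supernet represented as a mapping from edges to candidate-operation sets , with `discretize` playing the role of A + ( e , o ) , `perturb` the role of A − ( e , o ) , and `search` deriving A|E| from A0 under the policy induced by an arbitrary scoring function f . The representation , the fixed edge order and the toy scores are illustrative assumptions of ours , not the paper ’ s implementation .

```python
def discretize(supernet, edge, op):
    """A + (e, o): replace the candidate set on `edge` with just `op`."""
    new = dict(supernet)
    new[edge] = {op}
    return new

def perturb(supernet, edge, op):
    """A - (e, o): remove candidate `op` from `edge`."""
    new = dict(supernet)
    new[edge] = supernet[edge] - {op}
    return new

def search(supernet, score):
    """Iterative discretization A0, A1, ..., A|E|: at each step, assign the
    next edge the operation maximizing the scoring function f(A_t, e, o)."""
    a_t = dict(supernet)
    for edge in sorted(a_t):  # fixed edge order, for the sketch only
        best = max(a_t[edge], key=lambda op: score(a_t, edge, op))
        a_t = discretize(a_t, edge, best)
    return a_t

# Toy supernet with two edges and a hypothetical lookup-table scoring function.
supernet = {"e1": {"skip", "conv"}, "e2": {"conv", "zero"}}
toy_scores = {("e1", "conv"): 1.0, ("e1", "skip"): 0.2,
              ("e2", "conv"): 0.9, ("e2", "zero"): 0.1}
result = search(supernet, lambda a, e, o: toy_scores[(e, o)])
# result == {"e1": {"conv"}, "e2": {"conv"}}
```

Any of the scoring functions discussed in this section can be plugged in as `score`; only the cost of evaluating it changes.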
Our optimal scoring function can then be defined as :

πbest-acc ( At , e ) = argmax_{ o ∈ Oe } max_{ A|E| ∈ At,e,o } V∗ ( A|E| )   ( 1 )

where V∗ is the validation accuracy of a network after full training ( we will use V to denote validation accuracy without training ) . It is easy to see that this policy meets Bellman ’ s principle of optimality ( Bellman , 1957 ) – in fact its definition follows directly from it – and therefore is the optimal solution to our problem . However , it might be more practical to consider the expected achievable accuracy when an operation is selected , instead of the best . Therefore we also define the function πavg-acc :

πavg-acc ( At , e ) = argmax_{ o ∈ Oe } E_{ A|E| ∈ At,e,o } V∗ ( A|E| )   ( 2 )

In practice , we are unable to use either πbest-acc or πavg-acc since we would need to have the final validation accuracy V∗ of all the networks in the search space . There have been many attempts at finding approximate operation scoring functions ; in the following we consider these practical alternatives from DARTS ( Liu et al. , 2019 ) and DARTS-PT ( Wang et al. , 2021 ) :

πdarts ( At , e ) = argmax_{ o ∈ Oe } αe,o   ( 3 )

πdisc-acc ( At , e ) = argmax_{ o ∈ Oe } V∗ ( At + ( e , o ) ) ,   πdarts-pt ( At , e ) = argmin_{ o ∈ Oe } V ( At − ( e , o ) )   ( 4 )

where αe,o is the architectural parameter assigned to operation o on edge e as presented in DARTS ( Liu et al. , 2019 ) . πdisc-acc uses the accuracy of a supernet after an operation o is assigned to an edge e – this is referred to as “ discretization accuracy ” in DARTS-PT and is assumed to be a good operation scoring function ( Wang et al. , 2021 ) ; most intuitively , it could approximate favg-acc .

¹ Since a scoring function clearly defines a relevant policy function , we will sometimes talk about a scoring function even though the context might be directly related to a policy function – in those cases it should be understood as the policy function that follows from the relevant scoring function ( and vice versa ) .
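Operationally , πdarts-pt in Equation ( 4 ) removes each candidate in turn and keeps the one whose removal hurts validation accuracy the most . The sketch below uses a dict-of-sets supernet and a hypothetical evaluator in place of a real supernet forward pass ; only the argmin-over-removals structure is taken from the text .

```python
def perturb(supernet, edge, op):
    """A - (e, o): the supernet with candidate `op` removed from `edge`."""
    new = dict(supernet)
    new[edge] = supernet[edge] - {op}
    return new

def darts_pt_select(supernet, edge, val_acc):
    """pi_darts-pt: argmin_o V(A - (e, o)), i.e. keep the operation whose
    removal degrades the supernet's validation accuracy the most."""
    return min(supernet[edge],
               key=lambda op: val_acc(perturb(supernet, edge, op)))

supernet = {"e1": {"skip", "conv", "zero"}}

def val_acc(perturbed):
    # Hypothetical V: accuracy collapses whenever "conv" is missing from e1.
    return 0.55 if "conv" not in perturbed["e1"] else 0.90

print(darts_pt_select(supernet, "e1", val_acc))  # -> conv
```

Replacing `val_acc` with a training-free proxy S evaluated on a randomly initialized supernet leaves the selection rule unchanged; that substitution is exactly the zero-cost variant discussed next.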
πdarts-pt is the perturbation-based approach used by DARTS-PT – it is presented as a practical and lightweight alternative to πdisc-acc ( Wang et al. , 2021 ) . Zero-Cost Operation Scoring . We argue that the scoring functions ( 3 ) and ( 4 ) are merely proxies for the best achievable accuracy ( Equation 1 ) . As such , we see an opportunity to use a new class of training-free proxies that are very fast to compute and have been shown to work well within NAS , albeit not in differentiable NAS , nor within large search spaces . We present the following scoring functions that use a zero-cost proxy S instead of validation accuracy when discretizing an edge or perturbing an operation . Note that the supernet is randomly-initialized and untrained in this case .

πdisc-zc ( At , e ) = argmax_{ o ∈ Oe } S ( At + ( e , o ) ) ,   πzc-pt ( At , e ) = argmin_{ o ∈ Oe } S ( At − ( e , o ) )   ( 5 )

Note that TE-NAS ( Chen et al. , 2021 ) also uses training-free scoring of operations ; however , they use different scoring metrics to prune operations from a supernet as opposed to discretizing or perturbing operations as we show above . We include a comparison to TE-NAS throughout our paper . | The paper proposes to use zero-cost proxies for operation selection in differentiable neural architecture search . Specifically , the paper formalizes the selection procedure into perturbation and discretization . By introducing the zero-cost proxy , they first find the proxy could yield a better ranking compared to both DARTS and DARTS-PT on NAS-Bench-201 . They also experiment with different proxies proposed by previous work and find they all perform better than DARTS-PT . In the DARTS CNN and S1-S4 experiments , they select one of the best-performing proxies and find it could yield better accuracy than other baselines . | SP:54383f60863043b30fb83cf5d5499165fed5ae8a
COLA: Consistent Learning with Opponent-Learning Awareness | Optimization problems with multiple , interdependent losses , such as Generative Adversarial Networks ( GANs ) or multi-agent RL , are commonly formalized as differentiable games . Learning with Opponent-Learning Awareness ( LOLA ) introduced opponent shaping to this setting . More specifically , LOLA introduced an augmented learning rule that accounts for the agent ’ s influence on the anticipated learning step of the other agents . However , the original LOLA formulation is inconsistent because LOLA models other agents as naive learners rather than LOLA agents . In previous work , this inconsistency was suggested as a root cause of LOLA ’ s failure to preserve stable fixed points ( SFPs ) . We show that , contrary to claims in previous work , Competitive Gradient Descent ( CGD ) does not solve the consistency problem and does not recover high-order LOLA ( HOLA ) as a series expansion . Working towards a remedy , we formalize consistency and show that HOLA is consistent whenever it converges ; however , it may fail to converge altogether . We propose a new method called Consistent LOLA ( COLA ) which learns update functions that are consistent under mutual opponent shaping . We prove that even such consistent update functions do not preserve SFPs , contradicting the hypothesis that this shortcoming is due to inconsistency . Finally , we empirically compare the performance and consistency of aforementioned algorithms on a range of general-sum learning games . 1 INTRODUCTION . Multi-objective problems can be found in many domains , such as GANs ( Goodfellow et al. , 2014 ) or single- and multi-agent reinforcement learning ( RL ) in the form of imaginative agents ( Racanière et al. , 2017 ) , hierarchical RL ( Barto & Mahadevan , 2002 ) , and intrinsic curiosity ( Schmidhuber , 1991 ) . A popular framework to understand systems with multiple , interdependent losses is differentiable games ( Balduzzi et al. 
, 2018 ) . For example , in the case of GANs , the differentiable game framework models the generator and the discriminator as competing agents , each trying to optimize their respective loss . The action space of the game consists of choosing the respective network parameters ( Balduzzi et al. , 2018 ) . An effective paradigm to improve learning in differentiable games is opponent shaping , where the players use their ability to shape each other ’ s learning steps . LOLA ( Foerster et al. , 2018 ) was the first work to make explicit use of opponent shaping in the differentiable game setting . LOLA is also one of the only general learning methods designed for differentiable games that obtains mutual cooperation with the Tit-for-Tat strategy in the Iterated Prisoner ’ s Dilemma ( IPD ) . The Tit-for-Tat strategy starts out cooperating and retaliates once whenever the opponent does not cooperate . It achieves mutual cooperation and has proven to be successful at IPD tournaments ( Axelrod , 1984 ; Harper et al. , 2017 ) . In contrast , naive gradient descent and other more sophisticated methods typically converge to the mutual defection policy under random initialization ( Letcher et al. , 2019b ) . While LOLA discovers these interesting equilibria , the original LOLA formulation is inconsistent because LOLA agents assume that their opponent is a naive learner . This assumption is clearly violated if two LOLA agents learn together in a game . It has been suggested that this inconsistency is the root cause for LOLA ’ s shortcomings , such as not converging to SFPs in some simple quadratic games ( Letcher 2018 , p. 2 , 26 ; see also Letcher et al . 2019b ) . Contributions . How can LOLA ’ s inconsistency be resolved ? To answer this question , we first revisit the concept of higher-order LOLA ( HOLA ) ( Foerster et al. , 2018 ) in Section 4.1 . 
For example , second-order LOLA assumes that the opponent is a first-order LOLA agent ( which in turn assumes the opponent is a naive learner ) and so on . Assuming that HOLA converges with increasing order , we define infinite-order LOLA ( iLOLA ) as the limit of HOLA whenever it exists . Intuitively , it should follow that two iLOLA agents have a consistent view of each other , meaning they make an accurate assumption about the learning behavior of the opponent under mutual opponent shaping . We introduce a formal definition of consistency and prove that iLOLA is indeed self-consistent under mutual opponent shaping . Previous work has claimed that a series expansion of Competitive Gradient Descent ( CGD ) ( Schäfer & Anandkumar , 2020 ) recovers high-order LOLA . This would imply that CGD corresponds to iLOLA and thus solves the consistency problem . In Section 4.2 , we prove that this is false : CGD does not in general correspond to iLOLA , and , unlike iLOLA , does not resolve the problem of consistency . In particular , we show that , contrary to previous claims , the series expansion of CGD does not correspond to higher-order LOLA . There are a number of problems with addressing consistency using a limiting update ( iLOLA ) : the process may not converge , and requires computation of arbitrarily high derivatives . In Section 4.3 , we propose Consistent LOLA ( COLA ) as a more general and efficient alternative . Instead of repeatedly applying the LOLA learning rule ( iLOLA ) , COLA learns a pair of consistent update functions by explicitly minimizing a consistency loss . By reframing the problem as such , the method only requires up to second-order derivatives , and instead of having a handcrafted update function as for LOLA or CGD , we use the representation power of neural networks to learn the update step . In Section 4.4 , we prove initial results about COLA . First , we show that COLA ’ s solutions are not necessarily unique . 
Second , despite being consistent , COLA does not recover SFPs , contradicting the prior belief that this shortcoming is caused by inconsistency . Third , we provide an example in which COLA converges more robustly , i.e. , under a wider range of learning rates , than LOLA . Finally , in Sections 5 and 6 , we report our experimental setup and results , investigating COLA and HOLA and comparing them to LOLA and CGD in a range of games . We show that , despite its nonuniqueness , COLA tends to find similar solutions in different runs empirically . Moreover , we show that COLA finds the iLOLA solution when HOLA converges but finds different solutions when HOLA diverges . These solutions have lower consistency loss and converge under a broader range of learning rates than LOLA and HOLA . Our experiments also show that , while COLA does not find Tit-for-Tat on the IPD ( unlike LOLA ) , it does learn policies with near-optimal total payoff . 2 RELATED WORK . General-sum learning algorithms and their consequences have been investigated from different perspectives in the reinforcement learning , game theory , and GAN literature ; see , e.g. , ( Schmidhuber , 1991 ; Barto & Mahadevan , 2002 ; Racanière et al. , 2017 ; Goodfellow et al. , 2014 ) . Next , we will highlight a few of the approaches to the mutual opponent shaping problem . Opponent modeling maintains an explicit belief of the opponent , which allows an agent to reason about their strategies and compute optimal responses . Opponent modeling can be divided into different subcategories : there are classification methods , classifying the opponents into pre-defined types ( Weber & Mateas , 2009 ; Synnaeve & Bessiere , 2011 ) , or policy reconstruction methods , where we explicitly predict the actions of the opponent ( Mealing & Shapiro , 2017 ) . Most closely related to opponent shaping is recursive reasoning , where methods model nested beliefs of the opponents ( He et al.
, 2016 ; Albrecht & Stone , 2019 ; Wen et al. , 2019 ) . In comparison , COLA assumes that we have access to the ground-truth model of the opponent , e.g. , the opponent ’ s payoff function , parameters , and gradients , which puts COLA into the framework of differentiable games ( Balduzzi et al. , 2018 ) . Various methods have been proposed , investigating local convergence properties to different solution concepts ( Mescheder et al. , 2018 ; Mazumdar et al. , 2019 ; Letcher et al. , 2019b ; Azizian et al. , 2020 ; Schäfer & Anandkumar , 2020 ; Schäfer et al. , 2020 ; Hutter , 2020 ) . Most of the work in differentiable games has not focused on the issue of opponent shaping and consistency . Mescheder et al . ( 2018 ) and Mazumdar et al . ( 2019 ) focus solely on zero-sum games without shaping . Letcher et al . ( 2019b ) improve on LOLA , but do not investigate the consistency issue . CGD ( Schäfer & Anandkumar , 2020 ) addresses the consistency issue of LOLA for zero-sum games but not for general-sum games . The exact difference between CGD and LOLA is addressed in Section 4.2 . 3 BACKGROUND . 3.1 DIFFERENTIABLE GAMES . The framework of differentiable games has become increasingly popular to model the problem of multi-agent learning . Whereas in the framework of stochastic games we are typically limited to parameters such as action-state probabilities , the differentiable game framework generalizes to any parameters as long as the loss function is differentiable with respect to them ( Balduzzi et al. , 2018 ) . We restrict our attention to two-player games , as is standard in the current differentiable games literature . Definition 1 ( Differentiable games ) . In a two-player differentiable game , players i = 1 , 2 control parameters θi ∈ R^di to minimize twice continuously differentiable losses Li : R^( d1+d2 ) → R. We adopt the convention to write −i to denote the respective other player .
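As a concrete instance of Definition 1 , consider the zero-sum bilinear game L1 ( θ1 , θ2 ) = θ1θ2 , L2 = −θ1θ2 . This toy example is ours , not the paper ’ s : it shows that simultaneous naive gradient descent spirals away from the fixed point at the origin , a standard illustration of why single-loss intuitions fail in differentiable games .

```python
def naive_step(t1, t2, alpha=0.1):
    """Simultaneous naive gradient descent in the two-player bilinear game
    L1(t1, t2) = t1*t2,  L2(t1, t2) = -t1*t2 (each player i descends Li)."""
    g1 = t2    # dL1/dt1
    g2 = -t1   # dL2/dt2
    return t1 - alpha * g1, t2 - alpha * g2

t1, t2 = 1.0, 1.0
for _ in range(100):
    t1, t2 = naive_step(t1, t2)

# Each step multiplies the distance to the origin by sqrt(1 + alpha^2) > 1,
# so the iterates spiral outward instead of settling at the fixed point.
print((t1**2 + t2**2) ** 0.5)  # grows past the initial norm sqrt(2)
```

A short calculation confirms the factor: with update ( t1 − αt2 , t2 + αt1 ) , the cross terms cancel and the squared norm becomes ( 1 + α² ) ( t1² + t2² ) .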
A fundamental challenge of the multi-loss setting is finding a meaningful solution concept . Whereas in the single loss setting the typical solution concept is local minima , in multi-loss settings there are different sensible solution concepts . Most prominently , there are Nash Equilibria ( Osborne & Rubinstein , 1994 ) . However , Nash Equilibria include unstable saddle points that can not be reasonably found via gradient-based learning algorithms ( Letcher et al. , 2019b ) . A more appropriate concept is stable fixed points ( SFPs ) , which could be considered a differentiable game analogue of local minima in single loss optimization . We will omit a formal definition here for brevity and point the interested reader to previous work on the topic ( Letcher et al. , 2019a ) . 3.2 LOLA AND SOS . Consider a differentiable game with two players . A LOLA agent θ1 uses its access to the opponent ’ s parameters θ2 to differentiate through the learning step of the opponent . In other words , agent 1 reformulates their loss to L1 ( θ1 , θ2 + ∆θ2 ) , where ∆θ2 represents the assumed learning step of the opponent . In first-order LOLA , we assume the opponent to be a naive learner : ∆θ2 = −α∇2L2 , which is what makes LOLA inconsistent if the opponent is any other type of learner . Note that ∇2 denotes the gradient with respect to θ2 . Also note that α represents the look-ahead rate , which is the assumed learning rate of the opponent . In the original paper , the loss was approximated using a Taylor expansion L1 + ( ∇2L1 )⊤ ∆θ2 . For agent 1 , their first-order ( Taylor ) LOLA update is then defined as ∆θ1 : = −α ( ∇1L1 + ∇12L1 ∆θ2 + ( ∇1∆θ2 )⊤ ∇2L1 ) . Alternatively , in exact LOLA , the derivative is taken directly with respect to L1 ( θ1 , θ2 + ∆θ2 ) . LOLA has had some empirical success , being one of the first general learning methods to discover Tit-for-Tat like solutions in social dilemmas .
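To make the higher-order construction concrete , the exact-LOLA recursion can be worked out by hand on the toy bilinear game L1 = θ1θ2 , L2 = −θ1θ2 : every HOLA update is a linear function of ( θ1 , θ2 ) , so each order is just a pair of coefficient updates . The game , the recursion and the closed-form fixed point below are our own derivation ( under the exact-LOLA reading above ) , not code from the paper ; the point is that the HOLA coefficients converge geometrically , for a small look-ahead rate α , to a limit : the iLOLA update .

```python
def hola_coeffs(alpha, order):
    """Exact-LOLA recursion for the bilinear game L1 = t1*t2, L2 = -t1*t2.
    Updates are linear: dt1 = r*t1 + s*t2, dt2 = p*t1 + q*t2.
    Order 0 is the naive learner; order n best-responds to order n-1."""
    r, s = 0.0, -alpha   # naive player 1: dt1 = -a * dL1/dt1 = -a*t2
    p, q = alpha, 0.0    # naive player 2: dt2 = -a * dL2/dt2 =  a*t1
    for _ in range(order):
        # Player 1 shapes an order-(n-1) opponent dt2 = p*t1 + q*t2:
        #   L1(t1, t2 + dt2) = p*t1^2 + (1+q)*t1*t2
        #   dt1 = -a * d/dt1 = -a*(2p*t1 + (1+q)*t2)
        r_new, s_new = -2 * alpha * p, -alpha * (1 + q)
        # Player 2 shapes an order-(n-1) opponent dt1 = r*t1 + s*t2:
        #   L2(t1 + dt1, t2) = -(1+r)*t1*t2 - s*t2^2
        #   dt2 = -a * d/dt2 = a*((1+r)*t1 + 2s*t2)
        p_new, q_new = alpha * (1 + r), 2 * alpha * s
        r, s, p, q = r_new, s_new, p_new, q_new
    return r, s, p, q

alpha = 0.1
# iLOLA fixed point of the recursion, solved in closed form:
d = 1 + 2 * alpha**2
fixed = (-2 * alpha**2 / d, -alpha / d, alpha / d, -2 * alpha**2 / d)
approx = hola_coeffs(alpha, 30)
print(max(abs(a - b) for a, b in zip(approx, fixed)))  # ~0: HOLA -> iLOLA
```

The error contracts by a factor of 2α² every two orders, so the limit exists whenever α < 1/√2 in this game; the paper's point is precisely that such convergence is not guaranteed in general.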
However , later work showed that LOLA does not preserve SFPs θ̄ since the rightmost term can be nonzero at θ̄ . In fact , LOLA agents show “ arrogant ” behavior : they assume they can shape the learning of their naive opponents without having to adapt to the shaping of the opponent . Prior work hypothesized that this arrogant behavior is due to LOLA ’ s inconsistent formulation ( Letcher 2018 , p. 2 , 26 ; see also Letcher et al . 2019b ) . To improve upon LOLA , Letcher et al . ( 2019b ) suggested the Stable Opponent Shaping ( SOS ) algorithm . SOS applies a correction to the LOLA update , leading to theoretically guaranteed convergence to SFPs . However , despite its desirable convergence properties , SOS still does not solve the conceptual issue of inconsistent assumptions about the opponent . | This paper deals with the problem of learning in differentiable games . Mainly the paper tackles the problem of learning in games while taking into account the learning of the opponent as well . The main contribution of the paper is to point out a flaw in the existing claims regarding the correspondence between competitive gradient descent and iLOLA . The paper further gives a definition of consistent update rules for differentiable games and , based on this definition , shows that the iLOLA update rule is consistent . The paper proposes a new algorithm , COLA , and shows that it finds more consistent solutions empirically . | SP:aea3f3085d89f2fbee9ea32f27443b153ff7e4b4
COLA: Consistent Learning with Opponent-Learning Awareness | Optimization problems with multiple , interdependent losses , such as Generative Adversarial Networks ( GANs ) or multi-agent RL , are commonly formalized as differentiable games . Learning with Opponent-Learning Awareness ( LOLA ) introduced opponent shaping to this setting . More specifically , LOLA introduced an augmented learning rule that accounts for the agent ’ s influence on the anticipated learning step of the other agents . However , the original LOLA formulation is inconsistent because LOLA models other agents as naive learners rather than LOLA agents . In previous work , this inconsistency was suggested as a root cause of LOLA ’ s failure to preserve stable fixed points ( SFPs ) . We show that , contrary to claims in previous work , Competitive Gradient Descent ( CGD ) does not solve the consistency problem and does not recover high-order LOLA ( HOLA ) as a series expansion . Working towards a remedy , we formalize consistency and show that HOLA is consistent whenever it converges ; however , it may fail to converge altogether . We propose a new method called Consistent LOLA ( COLA ) which learns update functions that are consistent under mutual opponent shaping . We prove that even such consistent update functions do not preserve SFPs , contradicting the hypothesis that this shortcoming is due to inconsistency . Finally , we empirically compare the performance and consistency of aforementioned algorithms on a range of general-sum learning games . 1 INTRODUCTION . Multi-objective problems can be found in many domains , such as GANs ( Goodfellow et al. , 2014 ) or single- and multi-agent reinforcement learning ( RL ) in the form of imaginative agents ( Racanière et al. , 2017 ) , hierarchical RL ( Barto & Mahadevan , 2002 ) , and intrinsic curiosity ( Schmidhuber , 1991 ) . A popular framework to understand systems with multiple , interdependent losses is differentiable games ( Balduzzi et al. 
, 2018 ) . For example , in the case of GANs , the differentiable game framework models the generator and the discriminator as competing agents , each trying to optimize their respective loss . The action space of the game consists of choosing the respective network parameters ( Balduzzi et al. , 2018 ) . An effective paradigm to improve learning in differentiable games is opponent shaping , where the players use their ability to shape each other ’ s learning steps . LOLA ( Foerster et al. , 2018 ) was the first work to make explicit use of opponent shaping in the differentiable game setting . LOLA is also one of the only general learning methods designed for differentiable games that obtains mutual cooperation with the Tit-for-Tat strategy in the Iterated Prisoner ’ s Dilemma ( IPD ) . The Tit-for-Tat strategy starts out cooperating and retaliates once whenever the opponent does not cooperate . It achieves mutual cooperation and has proven to be successful at IPD tournaments ( Axelrod , 1984 ; Harper et al. , 2017 ) . In contrast , naive gradient descent and other more sophisticated methods typically converge to the mutual defection policy under random initialization ( Letcher et al. , 2019b ) . While LOLA discovers these interesting equilibria , the original LOLA formulation is inconsistent because LOLA agents assume that their opponent is a naive learner . This assumption is clearly violated if two LOLA agents learn together in a game . It has been suggested that this inconsistency is the root cause for LOLA ’ s shortcomings , such as not converging to SFPs in some simple quadratic games ( Letcher 2018 , p. 2 , 26 ; see also Letcher et al . 2019b ) . Contributions . How can LOLA ’ s inconsistency be resolved ? To answer this question , we first revisit the concept of higher-order LOLA ( HOLA ) ( Foerster et al. , 2018 ) in Section 4.1 . 
For example , second-order LOLA assumes that the opponent is a first-order LOLA agent ( which in turn assumes the opponent is a naive learner ) and so on . Assuming that HOLA converges with increasing order , we define infinite-order LOLA ( iLOLA ) as the limit of HOLA whenever it exists . Intuitively , it should follow that two iLOLA agents have a consistent view of each other , meaning they make an accurate assumption about the learning behavior of the opponent under mutual opponent shaping . We introduce a formal definition of consistency and prove that iLOLA is indeed self-consistent under mutual opponent shaping . Previous work has claimed that a series expansion of Competitive Gradient Descent ( CGD ) ( Schäfer & Anandkumar , 2020 ) recovers high-order LOLA . This would imply that CGD corresponds to iLOLA and thus solves the consistency problem . In Section 4.2 , we prove that this is false : CGD does not in general correspond to iLOLA , and , unlike iLOLA , does not resolve the problem of consistency . In particular , we show that , contrary to previous claims , the series expansion of CGD does not correspond to higher-order LOLA . There are a number of problems with addressing consistency using a limiting update ( iLOLA ) : the process may not converge , and requires computation of arbitrarily high derivatives . In Section 4.3 , we propose Consistent LOLA ( COLA ) as a more general and efficient alternative . Instead of repeatedly applying the LOLA learning rule ( iLOLA ) , COLA learns a pair of consistent update functions by explicitly minimizing a consistency loss . By reframing the problem as such , the method only requires up to second-order derivatives , and instead of having a handcrafted update function as for LOLA or CGD , we use the representation power of neural networks to learn the update step . In Section 4.4 , we prove initial results about COLA . First , we show that COLA ’ s solutions are not necessarily unique . 
Second , despite being consistent , COLA does not recover SFPs , contradicting the prior belief that this shortcoming is caused by inconsistency . Third , we provide an example in which COLA converges more robustly , i.e. , under a wider range of learning rates , than LOLA . Finally , in Sections 5 and 6 , we report our experimental setup and results , investigating COLA and HOLA and comparing them to LOLA and CGD in a range of games . We show that , despite its nonuniqueness , COLA tends to find similar solutions in different runs empirically . Moreover , we show that COLA finds the iLOLA solution when HOLA converges but finds different solutions when HOLA diverges . These solutions have lower consistency loss and converge under a broader range of learning rates than LOLA and HOLA . Our experiments also show that , while COLA does not find Tit-for-Tat on the IPD ( unlike LOLA ) , it does learn policies with near-optimal total payoff . 2 RELATED WORK . General-sum learning algorithms and their consequences have been investigated from different perspectives in the reinforcement learning , game theory , and GAN literature , see e.g . ( Schmidhuber , 1991 ; Barto & Mahadevan , 2002 ; Racanière et al. , 2017 ; Goodfellow et al. , 2014 ) to name a few . Next , we will highlight a few of the approaches to the mutual opponent shaping problem . Opponent modeling maintains an explicit belief about the opponent , which allows one to reason about their strategies and compute optimal responses . Opponent modeling can be divided into different subcategories : there are classification methods , classifying the opponents into pre-defined types ( Weber & Mateas , 2009 ; Synnaeve & Bessiere , 2011 ) , or policy reconstruction methods , where we explicitly predict the actions of the opponent ( Mealing & Shapiro , 2017 ) . Most closely related to opponent shaping is recursive reasoning , where methods model nested beliefs of the opponents ( He et al.
, 2016 ; Albrecht & Stone , 2019 ; Wen et al. , 2019 ) . In comparison , COLA assumes that we have access to the ground-truth model of the opponent , e.g. , the opponent ’ s payoff function , parameters , and gradients , which puts COLA into the framework of differentiable games ( Balduzzi et al. , 2018 ) . Various methods have been proposed , investigating the local convergence properties to different solution concepts ( Mescheder et al. , 2018 ; Mazumdar et al. , 2019 ; Letcher et al. , 2019b ; Azizian et al. , 2020 ; Schäfer & Anandkumar , 2020 ; Schäfer et al. , 2020 ; Hutter , 2020 ) . Most of the work in differentiable games has not focused on the issue of opponent shaping and consistency . Mescheder et al . ( 2018 ) and Mazumdar et al . ( 2019 ) focus solely on zero-sum games without shaping . Letcher et al . ( 2019b ) improve on LOLA , but do not investigate the consistency issue . CGD ( Schäfer & Anandkumar , 2020 ) addresses the consistency issue of LOLA for zero-sum games but not for general-sum games . The exact difference between CGD and LOLA is addressed in Section 4.2 . 3 BACKGROUND . 3.1 DIFFERENTIABLE GAMES . The framework of differentiable games has become increasingly popular to model the problem of multi-agent learning . Whereas in the framework of stochastic games we are typically limited to parameters such as action-state probabilities , the differentiable game framework generalizes to any parameters as long as the loss function is differentiable with respect to them ( Balduzzi et al. , 2018 ) . We restrict our attention to two-player games , as is standard in the current differentiable games literature . Definition 1 ( Differentiable games ) . In a two-player differentiable game , players i = 1 , 2 control parameters θi ∈ R^di to minimize twice continuously differentiable losses Li : R^ ( d1+d2 ) → R. We adopt the convention to write −i to denote the respective other player .
A fundamental challenge of the multi-loss setting is finding a meaningful solution concept . Whereas in the single loss setting the typical solution concept is local minima , in multi-loss settings there are different sensible solution concepts . Most prominently , there are Nash Equilibria ( Osborne & Rubinstein , 1994 ) . However , Nash Equilibria include unstable saddle points that can not be reasonably found via gradient-based learning algorithms ( Letcher et al. , 2019b ) . A more appropriate concept is stable fixed points ( SFPs ) , which could be considered a differentiable game analogue to local minima in single loss optimization . We will omit a formal definition here for brevity and point the interested reader to previous work on the topic ( Letcher et al. , 2019a ) . 3.2 LOLA AND SOS . Consider a differentiable game with two players . A LOLA agent θ1 uses its access to the opponent ’ s parameters θ2 to differentiate through the learning step of the opponent . In other words , agent 1 reformulates their loss to L1 ( θ1 , θ2 + ∆θ2 ) , where ∆θ2 represents the assumed learning step of the opponent . In first-order LOLA we assume the opponent to be a naive learner : ∆θ2 = −α∇2L2 , which is what makes LOLA inconsistent if the opponent is any other type of learner . Note that ∇2 denotes the gradient with respect to θ2 . Also note that α represents the look-ahead rate , which is the assumed learning rate of the opponent . In the original paper the loss was approximated using a Taylor expansion L1 + ( ∇2L1 ) ⊤∆θ2 . For agent 1 , their first-order ( Taylor ) LOLA update is then defined as ∆θ1 := −α ( ∇1L1 + ∇12L1 ∆θ2 + ( ∇1∆θ2 ) ⊤∇2L1 ) . Alternatively , in exact LOLA , the update is obtained by differentiating L1 ( θ1 , θ2 + ∆θ2 ) directly with respect to θ1 . LOLA has had some empirical success , being one of the first general learning methods to discover Tit-for-Tat-like solutions in social dilemmas .
However , later work showed that LOLA does not preserve SFPs θ̄ since the rightmost term can be nonzero at θ̄ . In fact , LOLA agents show “ arrogant ” behavior : they assume they can shape the learning of their naive opponents without having to adapt to the shaping of the opponent . Prior work hypothesized that this arrogant behavior is due to LOLA ’ s inconsistent formulation ( Letcher 2018 , p. 2 , 26 ; see also Letcher et al . 2019b ) . To improve upon LOLA , Letcher et al . ( 2019b ) suggested the Stable Opponent Shaping ( SOS ) algorithm . SOS applies a correction to the LOLA update , leading to theoretically guaranteed convergence to SFPs . However , despite its desirable convergence properties , SOS still does not solve the conceptual issue of inconsistent assumptions about the opponent . | This paper investigates the inconsistency problem in LOLA: each LOLA agent assumes the other agent is a naive learner, resulting in LOLA agents not converging to SFPs in some games. This paper aims to address this problem with infinite-order LOLA (iLOLA), under which agents can have a consistent view of each other, and empirically observes that HOLA may not resolve LOLA's convergence issues. Instead of HOLA, this paper proposes COLA, which employs neural networks to explicitly minimize the consistency loss. Empirically, COLA finds the consistent solution when HOLA converges and finds more stable solutions when HOLA diverges. | SP:aea3f3085d89f2fbee9ea32f27443b153ff7e4b4
COLA: Consistent Learning with Opponent-Learning Awareness | Optimization problems with multiple , interdependent losses , such as Generative Adversarial Networks ( GANs ) or multi-agent RL , are commonly formalized as differentiable games . Learning with Opponent-Learning Awareness ( LOLA ) introduced opponent shaping to this setting . More specifically , LOLA introduced an augmented learning rule that accounts for the agent ’ s influence on the anticipated learning step of the other agents . However , the original LOLA formulation is inconsistent because LOLA models other agents as naive learners rather than LOLA agents . In previous work , this inconsistency was suggested as a root cause of LOLA ’ s failure to preserve stable fixed points ( SFPs ) . We show that , contrary to claims in previous work , Competitive Gradient Descent ( CGD ) does not solve the consistency problem and does not recover high-order LOLA ( HOLA ) as a series expansion . Working towards a remedy , we formalize consistency and show that HOLA is consistent whenever it converges ; however , it may fail to converge altogether . We propose a new method called Consistent LOLA ( COLA ) which learns update functions that are consistent under mutual opponent shaping . We prove that even such consistent update functions do not preserve SFPs , contradicting the hypothesis that this shortcoming is due to inconsistency . Finally , we empirically compare the performance and consistency of aforementioned algorithms on a range of general-sum learning games . 1 INTRODUCTION . Multi-objective problems can be found in many domains , such as GANs ( Goodfellow et al. , 2014 ) or single- and multi-agent reinforcement learning ( RL ) in the form of imaginative agents ( Racanière et al. , 2017 ) , hierarchical RL ( Barto & Mahadevan , 2002 ) , and intrinsic curiosity ( Schmidhuber , 1991 ) . A popular framework to understand systems with multiple , interdependent losses is differentiable games ( Balduzzi et al. 
, 2018 ) . For example , in the case of GANs , the differentiable game framework models the generator and the discriminator as competing agents , each trying to optimize their respective loss . The action space of the game consists of choosing the respective network parameters ( Balduzzi et al. , 2018 ) . An effective paradigm to improve learning in differentiable games is opponent shaping , where the players use their ability to shape each other ’ s learning steps . LOLA ( Foerster et al. , 2018 ) was the first work to make explicit use of opponent shaping in the differentiable game setting . LOLA is also one of the only general learning methods designed for differentiable games that obtains mutual cooperation with the Tit-for-Tat strategy in the Iterated Prisoner ’ s Dilemma ( IPD ) . The Tit-for-Tat strategy starts out cooperating and retaliates once whenever the opponent does not cooperate . It achieves mutual cooperation and has proven to be successful at IPD tournaments ( Axelrod , 1984 ; Harper et al. , 2017 ) . In contrast , naive gradient descent and other more sophisticated methods typically converge to the mutual defection policy under random initialization ( Letcher et al. , 2019b ) . While LOLA discovers these interesting equilibria , the original LOLA formulation is inconsistent because LOLA agents assume that their opponent is a naive learner . This assumption is clearly violated if two LOLA agents learn together in a game . It has been suggested that this inconsistency is the root cause for LOLA ’ s shortcomings , such as not converging to SFPs in some simple quadratic games ( Letcher 2018 , p. 2 , 26 ; see also Letcher et al . 2019b ) . Contributions . How can LOLA ’ s inconsistency be resolved ? To answer this question , we first revisit the concept of higher-order LOLA ( HOLA ) ( Foerster et al. , 2018 ) in Section 4.1 . 
For example , second-order LOLA assumes that the opponent is a first-order LOLA agent ( which in turn assumes the opponent is a naive learner ) and so on . Assuming that HOLA converges with increasing order , we define infinite-order LOLA ( iLOLA ) as the limit of HOLA whenever it exists . Intuitively , it should follow that two iLOLA agents have a consistent view of each other , meaning they make an accurate assumption about the learning behavior of the opponent under mutual opponent shaping . We introduce a formal definition of consistency and prove that iLOLA is indeed self-consistent under mutual opponent shaping . Previous work has claimed that a series expansion of Competitive Gradient Descent ( CGD ) ( Schäfer & Anandkumar , 2020 ) recovers high-order LOLA . This would imply that CGD corresponds to iLOLA and thus solves the consistency problem . In Section 4.2 , we prove that this is false : CGD does not in general correspond to iLOLA , and , unlike iLOLA , does not resolve the problem of consistency . In particular , we show that , contrary to previous claims , the series expansion of CGD does not correspond to higher-order LOLA . There are a number of problems with addressing consistency using a limiting update ( iLOLA ) : the process may not converge , and requires computation of arbitrarily high derivatives . In Section 4.3 , we propose Consistent LOLA ( COLA ) as a more general and efficient alternative . Instead of repeatedly applying the LOLA learning rule ( iLOLA ) , COLA learns a pair of consistent update functions by explicitly minimizing a consistency loss . By reframing the problem as such , the method only requires up to second-order derivatives , and instead of having a handcrafted update function as for LOLA or CGD , we use the representation power of neural networks to learn the update step . In Section 4.4 , we prove initial results about COLA . First , we show that COLA ’ s solutions are not necessarily unique . 
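The HOLA recursion can be sketched numerically for a simple two-player game. The bilinear losses, look-ahead rate, and finite-difference derivatives below are illustrative assumptions rather than choices from the paper; the key structural point is that an order-k agent differentiates through the opponent's order-(k-1) step:

```python
ALPHA = 0.1  # look-ahead rate (illustrative value)

# A simple zero-sum bilinear game with scalar parameters (assumed example).
def L1(t1, t2): return t1 * t2
def L2(t1, t2): return -t1 * t2

def d(f, x, h=1e-2):
    # Central difference; exact up to rounding for the polynomial losses here.
    return (f(x + h) - f(x - h)) / (2 * h)

def delta(order, player, t1, t2):
    """Order-k LOLA step of `player`; order 0 is a naive gradient step."""
    if order == 0:
        if player == 1:
            return -ALPHA * d(lambda x: L1(x, t2), t1)
        return -ALPHA * d(lambda x: L2(t1, x), t2)
    # An order-k agent differentiates through the opponent's order-(k-1) step.
    if player == 1:
        return -ALPHA * d(lambda x: L1(x, t2 + delta(order - 1, 2, x, t2)), t1)
    return -ALPHA * d(lambda x: L2(t1 + delta(order - 1, 1, t1, x), x), t2)

# In this particular game the HOLA updates converge with increasing order, so
# the iLOLA limit exists here; in other games the recursion may not converge.
steps = [delta(k, 1, 1.0, 1.0) for k in range(1, 5)]
```

In this game successive orders change less and less, consistent with the definition of iLOLA as the limit of HOLA whenever that limit exists.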
Second , despite being consistent , COLA does not recover SFPs , contradicting the prior belief that this shortcoming is caused by inconsistency . Third , we provide an example in which COLA converges more robustly , i.e. , under a wider range of learning rates , than LOLA . Finally , in Sections 5 and 6 , we report our experimental setup and results , investigating COLA and HOLA and comparing them to LOLA and CGD in a range of games . We show that , despite its nonuniqueness , COLA tends to find similar solutions in different runs empirically . Moreover , we show that COLA finds the iLOLA solution when HOLA converges but finds different solutions when HOLA diverges . These solutions have lower consistency loss and converge under a broader range of learning rates than LOLA and HOLA . Our experiments also show that , while COLA does not find Tit-for-Tat on the IPD ( unlike LOLA ) , it does learn policies with near-optimal total payoff . 2 RELATED WORK . General-sum learning algorithms and their consequences have been investigated from different perspectives in the reinforcement learning , game theory , and GAN literature , see e.g . ( Schmidhuber , 1991 ; Barto & Mahadevan , 2002 ; Racanière et al. , 2017 ; Goodfellow et al. , 2014 ) to name a few . Next , we will highlight a few of the approaches to the mutual opponent shaping problem . Opponent modeling maintains an explicit belief about the opponent , which allows one to reason about their strategies and compute optimal responses . Opponent modeling can be divided into different subcategories : there are classification methods , classifying the opponents into pre-defined types ( Weber & Mateas , 2009 ; Synnaeve & Bessiere , 2011 ) , or policy reconstruction methods , where we explicitly predict the actions of the opponent ( Mealing & Shapiro , 2017 ) . Most closely related to opponent shaping is recursive reasoning , where methods model nested beliefs of the opponents ( He et al.
, 2016 ; Albrecht & Stone , 2019 ; Wen et al. , 2019 ) . In comparison , COLA assumes that we have access to the ground-truth model of the opponent , e.g. , the opponent ’ s payoff function , parameters , and gradients , which puts COLA into the framework of differentiable games ( Balduzzi et al. , 2018 ) . Various methods have been proposed , investigating the local convergence properties to different solution concepts ( Mescheder et al. , 2018 ; Mazumdar et al. , 2019 ; Letcher et al. , 2019b ; Azizian et al. , 2020 ; Schäfer & Anandkumar , 2020 ; Schäfer et al. , 2020 ; Hutter , 2020 ) . Most of the work in differentiable games has not focused on the issue of opponent shaping and consistency . Mescheder et al . ( 2018 ) and Mazumdar et al . ( 2019 ) focus solely on zero-sum games without shaping . Letcher et al . ( 2019b ) improve on LOLA , but do not investigate the consistency issue . CGD ( Schäfer & Anandkumar , 2020 ) addresses the consistency issue of LOLA for zero-sum games but not for general-sum games . The exact difference between CGD and LOLA is addressed in Section 4.2 . 3 BACKGROUND . 3.1 DIFFERENTIABLE GAMES . The framework of differentiable games has become increasingly popular to model the problem of multi-agent learning . Whereas in the framework of stochastic games we are typically limited to parameters such as action-state probabilities , the differentiable game framework generalizes to any parameters as long as the loss function is differentiable with respect to them ( Balduzzi et al. , 2018 ) . We restrict our attention to two-player games , as is standard in the current differentiable games literature . Definition 1 ( Differentiable games ) . In a two-player differentiable game , players i = 1 , 2 control parameters θi ∈ R^di to minimize twice continuously differentiable losses Li : R^ ( d1+d2 ) → R. We adopt the convention to write −i to denote the respective other player .
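Definition 1 can be transcribed almost literally into code. The container below and the example bilinear zero-sum game are an illustrative sketch, not an API from the paper:

```python
from dataclasses import dataclass
from typing import Callable, List

Params = List[float]  # θi ∈ R^di, represented as a plain list of floats

@dataclass
class DifferentiableGame:
    """Two players i = 1, 2 control θi ∈ R^di and minimize twice
    continuously differentiable losses Li : R^(d1+d2) -> R."""
    d1: int
    d2: int
    L1: Callable[[Params, Params], float]
    L2: Callable[[Params, Params], float]

# Illustrative game with d1 = d2 = 1: a bilinear zero-sum game.
game = DifferentiableGame(
    d1=1, d2=1,
    L1=lambda t1, t2: t1[0] * t2[0],
    L2=lambda t1, t2: -t1[0] * t2[0],
)

assert game.L1([2.0], [3.0]) == 6.0 and game.L2([2.0], [3.0]) == -6.0
```

The zero-sum case (L1 = -L2) is only a special instance; the definition deliberately allows arbitrary general-sum losses.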
A fundamental challenge of the multi-loss setting is finding a meaningful solution concept . Whereas in the single loss setting the typical solution concept is local minima , in multi-loss settings there are different sensible solution concepts . Most prominently , there are Nash Equilibria ( Osborne & Rubinstein , 1994 ) . However , Nash Equilibria include unstable saddle points that can not be reasonably found via gradient-based learning algorithms ( Letcher et al. , 2019b ) . A more appropriate concept is stable fixed points ( SFPs ) , which could be considered a differentiable game analogue to local minima in single loss optimization . We will omit a formal definition here for brevity and point the interested reader to previous work on the topic ( Letcher et al. , 2019a ) . 3.2 LOLA AND SOS . Consider a differentiable game with two players . A LOLA agent θ1 uses its access to the opponent ’ s parameters θ2 to differentiate through the learning step of the opponent . In other words , agent 1 reformulates their loss to L1 ( θ1 , θ2 + ∆θ2 ) , where ∆θ2 represents the assumed learning step of the opponent . In first-order LOLA we assume the opponent to be a naive learner : ∆θ2 = −α∇2L2 , which is what makes LOLA inconsistent if the opponent is any other type of learner . Note that ∇2 denotes the gradient with respect to θ2 . Also note that α represents the look-ahead rate , which is the assumed learning rate of the opponent . In the original paper the loss was approximated using a Taylor expansion L1 + ( ∇2L1 ) ⊤∆θ2 . For agent 1 , their first-order ( Taylor ) LOLA update is then defined as ∆θ1 := −α ( ∇1L1 + ∇12L1 ∆θ2 + ( ∇1∆θ2 ) ⊤∇2L1 ) . Alternatively , in exact LOLA , the update is obtained by differentiating L1 ( θ1 , θ2 + ∆θ2 ) directly with respect to θ1 . LOLA has had some empirical success , being one of the first general learning methods to discover Tit-for-Tat-like solutions in social dilemmas .
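For a game with scalar parameters, all derivatives in the first-order (Taylor) LOLA update are available in closed form. The bilinear game and the look-ahead value below are illustrative assumptions, not from the paper:

```python
alpha = 0.1  # look-ahead rate, i.e. the assumed learning rate of the opponent

def taylor_lola_step(t1, t2):
    """First-order Taylor LOLA step of agent 1 for L1 = t1*t2, L2 = -t1*t2."""
    g1_L1 = t2       # ∇1 L1
    g2_L1 = t1       # ∇2 L1
    g2_L2 = -t1      # ∇2 L2
    d_t2 = -alpha * g2_L2   # assumed naive opponent step ∆θ2 = −α ∇2 L2
    g12_L1 = 1.0            # mixed second derivative ∇12 L1
    g1_d_t2 = alpha         # ∇1 ∆θ2 = −α ∇12 L2 = α for this game
    # ∆θ1 := −α ( ∇1 L1 + ∇12 L1 ∆θ2 + (∇1 ∆θ2)ᵀ ∇2 L1 )
    return -alpha * (g1_L1 + g12_L1 * d_t2 + g1_d_t2 * g2_L1)

# At (θ1, θ2) = (1, 1): −0.1 * (1 + 0.1 + 0.1) = −0.12
print(taylor_lola_step(1.0, 1.0))
```

The shaping term (∇1∆θ2)ᵀ∇2L1 is what distinguishes LOLA from a naive gradient step −α∇1L1, which at (1, 1) would simply be −0.1.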
However , later work showed that LOLA does not preserve SFPs θ̄ since the rightmost term can be nonzero at θ̄ . In fact , LOLA agents show “ arrogant ” behavior : they assume they can shape the learning of their naive opponents without having to adapt to the shaping of the opponent . Prior work hypothesized that this arrogant behavior is due to LOLA ’ s inconsistent formulation ( Letcher 2018 , p. 2 , 26 ; see also Letcher et al . 2019b ) . To improve upon LOLA , Letcher et al . ( 2019b ) suggested the Stable Opponent Shaping ( SOS ) algorithm . SOS applies a correction to the LOLA update , leading to theoretically guaranteed convergence to SFPs . However , despite its desirable convergence properties , SOS still does not solve the conceptual issue of inconsistent assumptions about the opponent . | The authors tackle the consistency problem of the original LOLA formulation. The paper investigates HOLA convergence, demonstrates that CGD does not correspond to high-order LOLA in general, and proposes COLA to directly address the consistency problem. The proposed method COLA seems more robust to different look-ahead values where HOLA diverges. The authors also find that COLA is still sometimes susceptible to the ‘arrogant’ LOLA behavior, opening questions for future work. | SP:aea3f3085d89f2fbee9ea32f27443b153ff7e4b4
Visual Representation Learning over Latent Domains | 1 INTRODUCTION . Datasets have been a major driving force behind the rapid progress in computer vision research in the last two decades . They provide a testbed for developing new algorithms and comparing them to existing ones . However , datasets can also narrow down the focus of research into overspecialized solutions and impede developing a broader understanding of the world . In recent years this narrow scope of datasets has been widely questioned ( Torralba & Efros , 2011 ; Tommasi et al. , 2017 ; Recht et al. , 2019 ) and addressing some of these limitations has become a very active area of research . Two actively studied themes to investigate broader learning criteria are multi-domain learning ( Nam & Han , 2016 ; Bulat et al. , 2019 ; Schoenauer-Sebag et al. , 2019 ) and domain adaptation ( Ganin et al. , 2016 ; Tzeng et al. , 2017 ; Hoffman et al. , 2018 ; Xu et al. , 2018 ; Peng et al. , 2019a ; Sun et al. , 2019b ) . While multi-domain techniques focus on learning a single model that can generalize over multiple domains , domain adaptation techniques aim to efficiently transfer the representations that are learned in one dataset to another . Related themes have also been studied in domain generalization ( Li et al. , 2018 ; 2019b ; a ; Gulrajani & Lopez-Paz , 2020 ) and continual learning ( Kirkpatrick et al. , 2017 ; Lopez-Paz & Ranzato , 2017 ; Riemer et al. , 2019 ) , where the focus lies on learning representations that can generalize to unseen domains , and on preserving knowledge acquired from previously seen tasks , respectively . While there exists no canonical definition of what exactly a visual domain is , previous works in multi-domain learning assume that different subsets of data exist , with some defining characteristic that allows them to be separated from one another . Each subset , indexed by d = 1 , . . .
, D , is assigned to a pre-defined visual domain , and multi-domain methods in turn use such domain associations to parameterize their representations and learn some pθ ( y|x , d ) . In some cases domains are intuitive and their annotation straightforward . Consider a problem where images have little visual relationship , for example joint learning of Omniglot handwritten symbols ( Lake et al. , 2015 ) and CIFAR-10 objects ( Krizhevsky & Hinton , 2009 ) . In this case , it is safe to assume that encoding an explicit domain-specific identifier into pθ is a good idea , and results in the multi-domain literature provide clear evidence that it is highly beneficial to do so ( Rebuffi et al. , 2018 ; Liu et al. , 2019a ; Guo et al. , 2019a ; Mancini et al. , 2020 ) . The assumption that domain labels are always available has been widely adopted in multi-domain learning ; however , this assumption is not without difficulty . For one , unless the process of domain annotation is automated by combining existing datasets as in e.g . Rebuffi et al . ( 2017 ) , their manual collection , curation , and domain labeling is very laborious . And even if adequate resources exist , it is often difficult to decide the optimal criteria for the annotation of d : some datasets contain sketches , paintings and real world images ( Li et al. , 2017 ) , others images captured during day or night ( Sultani et al. , 2018 ) . Automatically collected datasets ( Thomee et al. , 2016 ; Sun et al. , 2017 ) contain mixtures of low/high resolution images , taken with different cameras by amateurs/professionals . There is no obvious answer as to which of these should form their own distinct domain subset . Moreover , the work of Bouchacourt et al . ( 2018 ) considers semantic groupings of data : they show that when dividing data by subcategories , such as size , shape , etc. , and incorporating this information into the model , this benefits performance .
Should one therefore also encode the number of objects into domains , or their color , shape , and so on ? Given the relatively loose requirement that domains are supposed to be different while related in some sense ( Pan & Yang , 2009 ) , these examples hint at the difficulty of deciding whether domains are needed , and – if the answer to that is yes – what the optimal domain criteria are . And note that even if such assignments are made very carefully for some problem , nothing guarantees that they will transfer effectively to some other task . This paper carefully investigates this ambiguity and studies two central questions : 1 . Are domain labels always optimal for learning multi-domain representations ? 2 . Is it possible to learn models that generalize well over visually diverse domains without domain labels ? To study this problem , we introduce a new setting ( c.f . Fig . 1 ) in which models are learned over multiple domains without domain annotations — or latent domain learning for short . While latent domain learning is a highly practical research problem in the context of transfer learning , it poses multiple challenges that have not been previously investigated in connection with deep visual representation learning . In particular , we find that the removal of domain associations leads to severe performance losses for standard architectures on most domains , due to imbalances in the underlying distribution and different difficulty levels of the associated domain-level tasks . We carry out a rigorous quantitative analysis that includes concepts from multi-domain learning ( Rebuffi et al. , 2018 ; Chang et al. , 2018 ) , and find that their performance benefits do not extend to latent domain learning . 
To account for this lost performance , we formulate a novel method called sparse latent adaptation ( Section 3.2 ) which enables internal feature representations to dynamically adapt to instances from multiple domains in data , without requiring annotations for this . Furthermore , we show that latent domain learning naturally extends to several challenging real world tasks and single domain data , such as fairness problems ( Appendix F ) , and learning over imbalanced distributions ( Appendix G ) . 2 LATENT DOMAIN LEARNING . This section provides an overview of latent domain learning and contrasts it against other types of related learning problems , in particular multi-domain learning . 2.1 PROBLEM SETTING . When learning on multiple domains , the common assumption is that data is sampled i.i.d . from a mixture of distributions Pd with domain indices d = 1 , . . . , D. Together , they constitute the data-generating distribution as P = ∑ d πdPd , where each domain is associated with a relative share πd = Nd/N , with N the total number of samples , and Nd those belonging to the d ’ th domain . In multi-domain learning , domain labels are available for all samples ( Nam & Han , 2016 ; Rebuffi et al. , 2017 ; 2018 ; Bulat et al. , 2019 ) , such that the overall data available for learning consists of DMD = { ( xi , di , yi ) } with i = 1 , . . . , N . In latent domain learning the information associating each sample xi with a domain di is not available . As such , domain-specific labels yi cannot be inferred from sample-domain pairs ( xi , di ) and one is instead forced to learn a single model fθ over the latent domain dataset DLD = { ( xi , yi ) } . While latent domain learning can include mutually exclusive classes and disjoint label spaces Y1 ∪ · · · ∪ YD ( as in long-tailed recognition , see Appendix G ) , we mainly focus on the setting of shared label spaces , i.e . Yd = Yd′ .
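The construction above amounts to dropping the domain index d when forming DLD from DMD. The toy sketch below, with hypothetical stand-in images, labels, and shares, makes the sampling from P = ∑d πd Pd and the removal of domain associations explicit:

```python
import random

random.seed(0)

D, N = 2, 1000
pi = [0.9, 0.1]  # relative domain shares pi_d = N_d / N (illustrative)

# Multi-domain dataset D_MD = {(x_i, d_i, y_i)}: domain labels available.
dataset_md = []
for i in range(N):
    d_i = random.choices(range(D), weights=pi)[0]  # draw domain from mixture
    x_i, y_i = f"img_{i}", i % 2                   # stand-in image and label
    dataset_md.append((x_i, d_i, y_i))

# Latent domain dataset D_LD = {(x_i, y_i)}: the association d_i is dropped,
# so a single model f_theta must be learned over the pooled samples.
dataset_ld = [(x, y) for (x, d_i, y) in dataset_md]

assert len(dataset_ld) == N and all(len(t) == 2 for t in dataset_ld)
```

With shares this skewed, roughly 90% of the pooled samples come from one domain, which is exactly the imbalance the observed/uniform accuracy discussion below is concerned with.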
For example a dataset may contain images of dogs or elephants that can appear as either photos , paintings , or sketches . Latent domains have previously attracted interest in the context of domain adaptation , where the lack of annotations was recovered through hierarchical ( Hoffman et al. , 2012 ) and kernel-based clustering ( Gong et al. , 2013 ) , via exemplar SVMs ( Xu et al. , 2014 ) , or by measuring mutual information ( Xiong et al. , 2014 ) . More recent work corrects batch statistics of domain adaptation layers using Gaussian mixtures ( Mancini et al. , 2018 ) , or studies the shift from some source domain to a target distribution that contains multiple latent domains ( Peng et al. , 2019b ; Matsuura & Harada , 2020 ) . Latent domain learning however differs fundamentally from these works : Table 1 contains a comparison to existing transfer learning settings . A common baseline in multi-domain learning is to finetune D models , one for each individual domain ( Rebuffi et al. , 2018 ; Liu et al. , 2019a ) . Doing so requires learning a large number of parameters and shares no parameters across domains , but can serve as a strong baseline to compare against . We show that in many cases , even when domains were carefully annotated , a dynamic latent domain approach can surpass the performance of such domain-supervised baselines , see Section 4 . 2.2 OBSERVED VS . UNIFORM ACCURACY . Consider a problem in which the data is sampled i.i.d . from P = πaPda + πbPdb , i.e . two hidden domains . Since domain labels are not available in latent domain learning , the best one can do is to treat all samples equally , and measure the observed accuracy : OAcc [ f ] = E ( xi , yi ) ∼P [ 1 { yf ( xi ) = yi } ] , ( 1 ) where yf denotes the class assigned to sample xi by the model f , and yi its corresponding label for training . OAcc has a problematic property : if P consists of two imbalanced domains such that πa ≥ πb , then the performance on da dominates it .
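The dominance of the larger domain in Eq. (1) is easy to reproduce numerically. The sketch below contrasts pooled accuracy with the per-domain average underlying the uniform accuracy of Eq. (2), using synthetic correctness indicators rather than a real model:

```python
def observed_accuracy(correct):
    # Eq. (1): pool all samples equally, so large domains dominate the score.
    return sum(correct) / len(correct)

def uniform_accuracy(correct, domains):
    # Eq. (2): average per-domain accuracies, decoupled from domain shares.
    per_domain = []
    for d in sorted(set(domains)):
        hits = [c for c, dd in zip(correct, domains) if dd == d]
        per_domain.append(sum(hits) / len(hits))
    return sum(per_domain) / len(per_domain)

# Domain a holds 90% of the samples and is classified perfectly, while the
# 10% of samples from domain b are all misclassified.
domains = ["a"] * 90 + ["b"] * 10
correct = [1] * 90 + [0] * 10

print(observed_accuracy(correct))          # 0.9
print(uniform_accuracy(correct, domains))  # 0.5
```

Note that computing uniform accuracy requires ground-truth domain annotations at evaluation time, even though latent domain models are never trained with them.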
For example if da has a 90 % overall share , and the model perfectly classifies this domain while obtaining 0 % accuracy on db , then OAcc would still be 0.9 , hiding the underlying damage to domain db . This motivates alternative formulations for latent domain learning , to anticipate ( and account for ) imbalanced domains in real world data . If it is possible to identify some semantic domain labeling ( as typically included in multi-domain/domain adaptation benchmarks ) , one can compare performances across individual subgroups . This allows picking up on performance losses that just occur on some of them , and which traditional metrics ( such as OAcc ) fail to capture . Where this is possible , we therefore propose to also measure latent domain performance in terms of uniform accuracy , which decouples accuracies from relative ground-truth domain sizes : UAcc [ f ] = ( 1 / D ) ∑_{ d=1 }^{ D } E ( xi , yi ) ∼Pd [ 1 { yf ( xi ) = yi } ] . ( 2 ) Returning to the above example , a uniform measurement reflects the model ’ s lack of performance on db as UAcc = 0.5 . Once again note that while ground-truth domain annotations are required in order to compute uniform accuracy , these are never used to train latent domain models . 3 METHODS . To enable robust learning in the newly proposed setting , we formulate a novel module called sparse latent adaptation , which adaptively accounts for latent domains . Section 3.1 reviews adaptation strategies popular in the multi-domain context , which our method extends ( and generalizes ) . | In this paper, the authors propose latent domain learning for adaptation. Experiments on multiple benchmark datasets show improved results. Several visualizations also illustrate the effectiveness of the proposed approach. | SP:16fc080356fe7672b56be8755b3ebea5aa1c7471
Visual Representation Learning over Latent Domains | 1 INTRODUCTION . Datasets have been a major driving force behind the rapid progress in computer vision research in the last two decades . They provide a testbed for developing new algorithms and comparing them to existing ones . However , datasets can also narrow down the focus of research into overspecialized solutions and impede developing a broader understanding of the world . In recent years this narrow scope of datasets has been widely questioned ( Torralba & Efros , 2011 ; Tommasi et al. , 2017 ; Recht et al. , 2019 ) and addressing some of these limitations has become a very active area of research . Two actively studied themes to investigate broader learning criteria are multi-domain learning ( Nam & Han , 2016 ; Bulat et al. , 2019 ; Schoenauer-Sebag et al. , 2019 ) and domain adaptation ( Ganin et al. , 2016 ; Tzeng et al. , 2017 ; Hoffman et al. , 2018 ; Xu et al. , 2018 ; Peng et al. , 2019a ; Sun et al. , 2019b ) . While multi-domain techniques focus on learning a single model that can generalize over multiple domains , domain adaptation techniques aim to efficiently transfer the representations that are learned in one dataset to another . Related themes have also been studied in domain generalization ( Li et al. , 2018 ; 2019b ; a ; Gulrajani & Lopez-Paz , 2020 ) and continual learning ( Kirkpatrick et al. , 2017 ; Lopez-Paz & Ranzato , 2017 ; Riemer et al. , 2019 ) , where the focus lies on learning representations that can generalize to unseen domains , and on preserving knowledge acquired from previously seen tasks , respectively . While there exists no canonical definition of what exactly a visual domain is , previous works in multi-domain learning assume that different subsets of data exist , with some defining characteristic that allows them to be separated from one another . Each subset , indexed by d = 1 , . . .
, D , is assigned to a pre-defined visual domain , and vice-versa multi-domain methods then use such domain associations to parameterize their representations and learn some pθ ( y|x , d ) . In some cases domains are intuitive and their annotation straightforward . Consider a problem where images have little visual relationship , for example joint learning of Omniglot handwritten symbols ( Lake et al. , 2015 ) and CIFAR-10 objects ( Krizhevsky & Hinton , 2009 ) . In this case , it is safe to assume that encoding an explicit domain-specific identifier into pθ is a good idea , and results in the multi-domain literature provide clear evidence that it is highly beneficial to do so ( Rebuffi et al. , 2018 ; Liu et al. , 2019a ; Guo et al. , 2019a ; Mancini et al. , 2020 ) . The assumption that domain labels are always available has been widely adopted in multi-domain learning ; however this assumption is not without difficulty . For one , unless the process of domain annotation is automated due to combining existing datasets as in e.g . Rebuffi et al . ( 2017 ) , their manual collection , curation , and domain labeling is very laborious . And even if adequate resources exist , it is often difficult to decide the optimal criteria for the annotation of d : some datasets contain sketches , paintings and real world images ( Li et al. , 2017 ) , others images captured during day or night ( Sultani et al. , 2018 ) . Automatically collected datasets ( Thomee et al. , 2016 ; Sun et al. , 2017 ) contain mixtures of low/high resolution images , taken with different cameras by amateurs/professionals . There is no obvious answer which of these should form their own distinct domain subset . Moreover , the work of Bouchacourt et al . ( 2018 ) considers semantic groupings of data : they show that when dividing data by subcategories , such as size , shape , etc. , and incorporating this information into the model , then this benefits performance . 
Should one therefore also encode the number of objects into domains , or their color , shape , and so on ? Given the relatively loose requirement that domains are supposed to be different while related in some sense ( Pan & Yang , 2009 ) , these examples hint at the difficulty of deciding whether domains are needed , and – if the answer to that is yes – what the optimal domain criteria are . And note that even if such assignments are made very carefully for some problem , nothing guarantees that they will transfer effectively to some other task . This paper carefully investigates this ambiguity and studies two central questions : 1 . Are domain labels always optimal for learning multi-domain representations ? 2 . Is it possible to learn models that generalize well over visually diverse domains without domain labels ? To study this problem , we introduce a new setting ( c.f . Fig . 1 ) in which models are learned over multiple domains without domain annotations — or latent domain learning for short . While latent domain learning is a highly practical research problem in the context of transfer learning , it poses multiple challenges that have not been previously investigated in connection with deep visual representation learning . In particular , we find that the removal of domain associations leads to severe performance losses for standard architectures on most domains , due to imbalances in the underlying distribution and different difficulty levels of the associated domain-level tasks . We carry out a rigorous quantitative analysis that includes concepts from multi-domain learning ( Rebuffi et al. , 2018 ; Chang et al. , 2018 ) , and find that their performance benefits do not extend to latent domain learning . 
To account for this lost performance, we formulate a novel method called sparse latent adaptation (Section 3.2), which enables internal feature representations to dynamically adapt to instances from multiple domains in the data, without requiring annotations for this. Furthermore, we show that latent domain learning naturally extends to several challenging real-world tasks and single-domain data, such as fairness problems (Appendix F) and learning over imbalanced distributions (Appendix G). 2 LATENT DOMAIN LEARNING. This section provides an overview of latent domain learning and contrasts it against other related learning problems, in particular multi-domain learning. 2.1 PROBLEM SETTING. When learning on multiple domains, the common assumption is that data is sampled i.i.d. from a mixture of distributions $P_d$ with domain indices $d = 1, \dots, D$. Together, they constitute the data-generating distribution $P = \sum_d \pi_d P_d$, where each domain is associated with a relative share $\pi_d = N_d / N$, with $N$ the total number of samples and $N_d$ those belonging to the $d$'th domain. In multi-domain learning, domain labels are available for all samples (Nam & Han, 2016; Rebuffi et al., 2017; 2018; Bulat et al., 2019), such that the overall data available for learning consists of $\mathcal{D}_{MD} = \{(x_i, d_i, y_i)\}$ with $i = 1, \dots, N$. In latent domain learning, the information associating each sample $x_i$ with a domain $d_i$ is not available. As such, domain-specific labels $y_i$ cannot be inferred from sample-domain pairs $(x_i, d_i)$, and one is instead forced to learn a single model $f_\theta$ over the latent domain dataset $\mathcal{D}_{LD} = \{(x_i, y_i)\}$. While latent domain learning can include mutually exclusive classes and disjoint label spaces $\mathcal{Y}_1 \cup \dots \cup \mathcal{Y}_D$ (as in long-tailed recognition, see Appendix G), we mainly focus on the setting of shared label spaces, i.e. $\mathcal{Y}_d = \mathcal{Y}_{d'}$.
For example, a dataset may contain images of dogs or elephants that can appear as photos, paintings, or sketches. Latent domains have previously attracted interest in the context of domain adaptation, where the lack of annotations was recovered through hierarchical clustering (Hoffman et al., 2012) and kernel-based clustering (Gong et al., 2013), via exemplar SVMs (Xu et al., 2014), or by measuring mutual information (Xiong et al., 2014). More recent work corrects batch statistics of domain adaptation layers using Gaussian mixtures (Mancini et al., 2018), or studies the shift from some source domain to a target distribution that contains multiple latent domains (Peng et al., 2019b; Matsuura & Harada, 2020). Latent domain learning, however, differs fundamentally from these works: Table 1 contains a comparison to existing transfer learning settings. A common baseline in multi-domain learning is to finetune $D$ models, one for each individual domain (Rebuffi et al., 2018; Liu et al., 2019a). Doing so requires learning a large number of parameters and shares no parameters across domains, but can serve as a strong baseline to compare against. We show that in many cases, even when domains were carefully annotated, a dynamic latent domain approach can surpass the performance of such domain-supervised baselines; see Section 4. 2.2 OBSERVED VS. UNIFORM ACCURACY. Consider a problem in which the data is sampled i.i.d. from $P = \pi_a P_{d_a} + \pi_b P_{d_b}$, i.e. two hidden domains. Since domain labels are not available in latent domain learning, the best one can do is to treat all samples equally and measure the observed accuracy: $\mathrm{OAcc}[f] = \mathbb{E}_{(x_i, y_i) \sim P}\left[\mathbb{1}_{y_f(x_i) = y_i}\right]$, (1) where $y_f$ denotes the class assigned to sample $x_i$ by the model $f$, and $y_i$ its corresponding label. OAcc has a problematic property: if $P$ consists of two imbalanced domains such that $\pi_a \geq \pi_b$, then the performance on $d_a$ dominates it.
For example, if $d_a$ has a 90% overall share and the model perfectly classifies this domain while obtaining 0% accuracy on $d_b$, then OAcc would still be 0.9, hiding the underlying damage to domain $d_b$. This motivates alternative formulations for latent domain learning that anticipate (and account for) imbalanced domains in real-world data. If it is possible to identify some semantic domain labeling (as typically included in multi-domain/domain adaptation benchmarks), one can compare performance across individual subgroups. This allows picking up on performance losses that occur only on some of them, and which traditional metrics (such as OAcc) fail to capture. Where this is possible, we therefore propose to also measure latent domain performance in terms of uniform accuracy, which decouples accuracies from relative ground-truth domain sizes: $\mathrm{UAcc}[f] = \frac{1}{D} \sum_{d=1}^{D} \mathbb{E}_{(x_i, y_i) \sim P_d}\left[\mathbb{1}_{y_f(x_i) = y_i}\right]$. (2) Returning to the above example, a uniform measurement reflects the model's lack of performance on $d_b$ as UAcc = 0.5. Note again that while ground-truth domain annotations are required to compute uniform accuracy, these are never used to train latent domain models. 3 METHODS. To enable robust learning in the newly proposed setting, we formulate a novel module called sparse latent adaptation, which adaptively accounts for latent domains. Section 3.1 reviews adaptation strategies popular in the multi-domain context, which our method extends (and generalizes). | The paper addresses the problem of learning a classification model on data from multiple domains, when explicit domain assignment for each data point is not provided. To solve this problem, the paper proposes that the data comes from 1 of K unknown (or latent) domains. A single 1x1 convolutional layer per latent domain is applied to feature maps in each residual layer of ResNet.
A gating function (based on sparsemax) is proposed to decide which of the K convolutional layers will be applied. The entire network is learned jointly by minimizing the classification objective function. | SP:16fc080356fe7672b56be8755b3ebea5aa1c7471 |
Visual Representation Learning over Latent Domains | 1 INTRODUCTION . Datasets have been a major driving force behind the rapid progress in computer vision research in the last two decades . They provide a testbed for developing new algorithms and comparing them to existing ones . However , datasets can also narrow down the focus of research into overspecialized solutions and impede developing a broader understanding of the world . In recent years this narrow scope of datasets has been widely questioned ( Torralba & Efros , 2011 ; Tommasi et al. , 2017 ; Recht et al. , 2019 ) and addressing some of these limitations has become a very active area of research . Two actively studied themes to investigate broader learning criteria are multi-domain learning ( Nam & Han , 2016 ; Bulat et al. , 2019 ; Schoenauer-Sebag et al. , 2019 ) and domain adaptation ( Ganin et al. , 2016 ; Tzeng et al. , 2017 ; Hoffman et al. , 2018 ; Xu et al. , 2018 ; Peng et al. , 2019a ; Sun et al. , 2019b ) . While multi-domain techniques focus on learning a single model that can generalize over multiple domains , domain adaptation techniques aim to efficiently transfer the representations that are learned in one dataset to another . Related themes have also been studied in domain generalization ( Li et al. , 2018 ; 2019b ; a ; Gulrajani & Lopez-Paz , 2020 ) and continual learning ( Kirkpatrick et al. , 2017 ; Lopez-Paz & Ranzato , 2017 ; Riemer et al. , 2019 ) , where the focus lies on learning representations that can generalize to unseen domains , and to preserve knowledge acquired from previously seen tasks , respectively . While there exists no canonical definition for what exactly a visual domain is , previous works in multi-domain learning assume that different subsets of data exist , with some defining characteristic that allows them to be separated from each another . Each subset , indexed by d = 1 , . . . 
, D , is assigned to a pre-defined visual domain , and vice-versa multi-domain methods then use such domain associations to parameterize their representations and learn some pθ ( y|x , d ) . In some cases domains are intuitive and their annotation straightforward . Consider a problem where images have little visual relationship , for example joint learning of Omniglot handwritten symbols ( Lake et al. , 2015 ) and CIFAR-10 objects ( Krizhevsky & Hinton , 2009 ) . In this case , it is safe to assume that encoding an explicit domain-specific identifier into pθ is a good idea , and results in the multi-domain literature provide clear evidence that it is highly beneficial to do so ( Rebuffi et al. , 2018 ; Liu et al. , 2019a ; Guo et al. , 2019a ; Mancini et al. , 2020 ) . The assumption that domain labels are always available has been widely adopted in multi-domain learning ; however this assumption is not without difficulty . For one , unless the process of domain annotation is automated due to combining existing datasets as in e.g . Rebuffi et al . ( 2017 ) , their manual collection , curation , and domain labeling is very laborious . And even if adequate resources exist , it is often difficult to decide the optimal criteria for the annotation of d : some datasets contain sketches , paintings and real world images ( Li et al. , 2017 ) , others images captured during day or night ( Sultani et al. , 2018 ) . Automatically collected datasets ( Thomee et al. , 2016 ; Sun et al. , 2017 ) contain mixtures of low/high resolution images , taken with different cameras by amateurs/professionals . There is no obvious answer which of these should form their own distinct domain subset . Moreover , the work of Bouchacourt et al . ( 2018 ) considers semantic groupings of data : they show that when dividing data by subcategories , such as size , shape , etc. , and incorporating this information into the model , then this benefits performance . 
Should one therefore also encode the number of objects into domains , or their color , shape , and so on ? Given the relatively loose requirement that domains are supposed to be different while related in some sense ( Pan & Yang , 2009 ) , these examples hint at the difficulty of deciding whether domains are needed , and – if the answer to that is yes – what the optimal domain criteria are . And note that even if such assignments are made very carefully for some problem , nothing guarantees that they will transfer effectively to some other task . This paper carefully investigates this ambiguity and studies two central questions : 1 . Are domain labels always optimal for learning multi-domain representations ? 2 . Is it possible to learn models that generalize well over visually diverse domains without domain labels ? To study this problem , we introduce a new setting ( c.f . Fig . 1 ) in which models are learned over multiple domains without domain annotations — or latent domain learning for short . While latent domain learning is a highly practical research problem in the context of transfer learning , it poses multiple challenges that have not been previously investigated in connection with deep visual representation learning . In particular , we find that the removal of domain associations leads to severe performance losses for standard architectures on most domains , due to imbalances in the underlying distribution and different difficulty levels of the associated domain-level tasks . We carry out a rigorous quantitative analysis that includes concepts from multi-domain learning ( Rebuffi et al. , 2018 ; Chang et al. , 2018 ) , and find that their performance benefits do not extend to latent domain learning . 
To account for this lost performance, we formulate a novel method called sparse latent adaptation (Section 3.2), which enables internal feature representations to dynamically adapt to instances from multiple domains in the data, without requiring annotations for this. Furthermore, we show that latent domain learning naturally extends to several challenging real-world tasks and single-domain data, such as fairness problems (Appendix F) and learning over imbalanced distributions (Appendix G). 2 LATENT DOMAIN LEARNING. This section provides an overview of latent domain learning and contrasts it against other related learning problems, in particular multi-domain learning. 2.1 PROBLEM SETTING. When learning on multiple domains, the common assumption is that data is sampled i.i.d. from a mixture of distributions $P_d$ with domain indices $d = 1, \dots, D$. Together, they constitute the data-generating distribution $P = \sum_d \pi_d P_d$, where each domain is associated with a relative share $\pi_d = N_d / N$, with $N$ the total number of samples and $N_d$ those belonging to the $d$'th domain. In multi-domain learning, domain labels are available for all samples (Nam & Han, 2016; Rebuffi et al., 2017; 2018; Bulat et al., 2019), such that the overall data available for learning consists of $\mathcal{D}_{MD} = \{(x_i, d_i, y_i)\}$ with $i = 1, \dots, N$. In latent domain learning, the information associating each sample $x_i$ with a domain $d_i$ is not available. As such, domain-specific labels $y_i$ cannot be inferred from sample-domain pairs $(x_i, d_i)$, and one is instead forced to learn a single model $f_\theta$ over the latent domain dataset $\mathcal{D}_{LD} = \{(x_i, y_i)\}$. While latent domain learning can include mutually exclusive classes and disjoint label spaces $\mathcal{Y}_1 \cup \dots \cup \mathcal{Y}_D$ (as in long-tailed recognition, see Appendix G), we mainly focus on the setting of shared label spaces, i.e. $\mathcal{Y}_d = \mathcal{Y}_{d'}$.
For example, a dataset may contain images of dogs or elephants that can appear as photos, paintings, or sketches. Latent domains have previously attracted interest in the context of domain adaptation, where the lack of annotations was recovered through hierarchical clustering (Hoffman et al., 2012) and kernel-based clustering (Gong et al., 2013), via exemplar SVMs (Xu et al., 2014), or by measuring mutual information (Xiong et al., 2014). More recent work corrects batch statistics of domain adaptation layers using Gaussian mixtures (Mancini et al., 2018), or studies the shift from some source domain to a target distribution that contains multiple latent domains (Peng et al., 2019b; Matsuura & Harada, 2020). Latent domain learning, however, differs fundamentally from these works: Table 1 contains a comparison to existing transfer learning settings. A common baseline in multi-domain learning is to finetune $D$ models, one for each individual domain (Rebuffi et al., 2018; Liu et al., 2019a). Doing so requires learning a large number of parameters and shares no parameters across domains, but can serve as a strong baseline to compare against. We show that in many cases, even when domains were carefully annotated, a dynamic latent domain approach can surpass the performance of such domain-supervised baselines; see Section 4. 2.2 OBSERVED VS. UNIFORM ACCURACY. Consider a problem in which the data is sampled i.i.d. from $P = \pi_a P_{d_a} + \pi_b P_{d_b}$, i.e. two hidden domains. Since domain labels are not available in latent domain learning, the best one can do is to treat all samples equally and measure the observed accuracy: $\mathrm{OAcc}[f] = \mathbb{E}_{(x_i, y_i) \sim P}\left[\mathbb{1}_{y_f(x_i) = y_i}\right]$, (1) where $y_f$ denotes the class assigned to sample $x_i$ by the model $f$, and $y_i$ its corresponding label. OAcc has a problematic property: if $P$ consists of two imbalanced domains such that $\pi_a \geq \pi_b$, then the performance on $d_a$ dominates it.
For example, if $d_a$ has a 90% overall share and the model perfectly classifies this domain while obtaining 0% accuracy on $d_b$, then OAcc would still be 0.9, hiding the underlying damage to domain $d_b$. This motivates alternative formulations for latent domain learning that anticipate (and account for) imbalanced domains in real-world data. If it is possible to identify some semantic domain labeling (as typically included in multi-domain/domain adaptation benchmarks), one can compare performance across individual subgroups. This allows picking up on performance losses that occur only on some of them, and which traditional metrics (such as OAcc) fail to capture. Where this is possible, we therefore propose to also measure latent domain performance in terms of uniform accuracy, which decouples accuracies from relative ground-truth domain sizes: $\mathrm{UAcc}[f] = \frac{1}{D} \sum_{d=1}^{D} \mathbb{E}_{(x_i, y_i) \sim P_d}\left[\mathbb{1}_{y_f(x_i) = y_i}\right]$. (2) Returning to the above example, a uniform measurement reflects the model's lack of performance on $d_b$ as UAcc = 0.5. Note again that while ground-truth domain annotations are required to compute uniform accuracy, these are never used to train latent domain models. 3 METHODS. To enable robust learning in the newly proposed setting, we formulate a novel module called sparse latent adaptation, which adaptively accounts for latent domains. Section 3.1 reviews adaptation strategies popular in the multi-domain context, which our method extends (and generalizes). | This paper introduces a new task called latent domain learning and proposes a baseline that extends an existing multi-domain learning method. Latent domain learning assumes that training data are sampled from different domains yet their domain labels are latent, and aims at learning models that generalize well to the domains.
The proposed baseline deploys multiple parallel feature transform layers that are chosen on the fly through gating variables, with the hope that the gating variables learn to predict the latent domain label of input and choose feature transforms of the domain accordingly. This model demonstrates superior performance to existing multi-domain learning methods in the latent domain learning setting. I recognize that latent domain learning is an interesting problem with great potential, and has many applications such as learning using web-crawled images. However, the proposed method looks limited in terms of novelty and the quality of writing is below the standard. | SP:16fc080356fe7672b56be8755b3ebea5aa1c7471 |
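To make the mechanism described in the reviews concrete, here is a hypothetical sketch of a gated mixture of per-latent-domain 1x1 convolutions (a 1x1 convolution is just a per-pixel linear map over channels). All names and toy shapes are illustrative assumptions; the gates, which the model would predict on the fly (e.g. via sparsemax), are supplied by hand here.

```python
# Hypothetical sketch of a gated mixture of K per-domain 1x1 convolutions.
# Not the authors' code: `gates` is given directly; in the model it would be
# predicted from the input so each sample routes to its latent domain branch.

def conv1x1(x, w):
    """x: list of C_in-dim pixel vectors; w: C_out x C_in weight matrix."""
    return [[sum(wi[c] * px[c] for c in range(len(px))) for wi in w] for px in x]

def sparse_latent_adapt(x, weights, gates):
    """Gated mixture of K 1x1 convs: out = sum_k g_k * conv1x1(x, W_k)."""
    out = None
    for g, w in zip(gates, weights):
        if g == 0.0:  # sparse gates can be exactly zero: skip the branch
            continue
        y = conv1x1(x, w)
        if out is None:
            out = [[g * v for v in px] for px in y]
        else:
            out = [[o + g * v for o, v in zip(opx, ypx)] for opx, ypx in zip(out, y)]
    return out

# Toy example: 2 pixels, 2 channels, K = 2 domain branches.
x = [[1.0, 2.0], [3.0, 4.0]]
W = [
    [[1.0, 0.0], [0.0, 1.0]],  # identity branch
    [[0.0, 1.0], [1.0, 0.0]],  # channel-swap branch
]
gates = [1.0, 0.0]             # sparse gate: only the first branch is active
print(sparse_latent_adapt(x, W, gates))  # [[1.0, 2.0], [3.0, 4.0]]
```

With a fully sparse gate the layer reduces to a single domain-specific transform, while fractional gates blend branches, which is how a shared backbone can serve several latent domains at once.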
Task-driven Discovery of Perceptual Schemas for Generalization in Reinforcement Learning | TASK-DRIVEN DISCOVERY OF PERCEPTUAL SCHEMAS FOR GENERALIZATION IN REINFORCEMENT LEARNING Anonymous authors Paper under double-blind review ABSTRACT Deep reinforcement learning (Deep RL) has recently seen significant progress in developing algorithms for generalization. However, most algorithms target a single type of generalization setting. In this work, we study generalization across three disparate task structures: (a) tasks composed of spatial and temporal compositions of regularly occurring object motions; (b) tasks composed of active perception of and navigation towards regularly occurring 3D objects; and (c) tasks composed of remembering goal-information over sequences of regularly occurring object-configurations. These diverse task structures all share an underlying idea of compositionality: task completion always involves combining recurring segments of task-oriented perception and behavior. We hypothesize that an agent can generalize within a task structure if it can discover representations that capture these recurring task-segments. For our tasks, this corresponds to representations for recognizing individual object motions, for navigation towards 3D objects, and for navigating through object-configurations. Taking inspiration from cognitive science, we term representations for recurring segments of an agent's experience "perceptual schemas". We propose Feature Attending Recurrent Modules (FARM), which learns a state representation where perceptual schemas are distributed across multiple, relatively small recurrent modules. We compare FARM to recurrent architectures that leverage spatial attention, which reduces observation features to a weighted average over spatial positions.
Our experiments indicate that our feature-attention mechanism better enables FARM to generalize across the diverse object-centric domains we study. 1 INTRODUCTION Cognitive scientists theorize that humans generalize broadly with "schemas" they discover for regularly occurring structures within their experience (Minsky, 1979; Rumelhart, 1980; Rumelhart et al., 1986). Schemas are representations that capture common features over diverse aspects of the environment. For example, when we learn to drive, we learn schemas for common car types (such as sedans), common car motions (such as accelerating or stopping), and common car arrangements (such as a row of cars). Importantly, schemas are composable representations over portions of our observations. This allows us to recombine them in novel ways. For example, once we learn schemas for sedans, car motions, and rows of cars, we can recognize many rows of sedans moving in opposite directions, even if we've never seen this before. While substantial progress has been made on developing deep reinforcement learning (deep RL) algorithms which can generalize, algorithms are typically limited to one type of generalization. For example, one algorithm will generalize to novel compositions of familiar shapes and colors (Higgins et al., 2017; Chaplot et al., 2018), whereas another will generalize to longer sequences of observed subtasks (Sohn et al., 2018; 2021; Brooks et al., 2021). In this work, we hypothesize that we can develop a single deep RL architecture that exhibits multiple types of generalization if it can learn schema-like representations for regularly occurring structures within its experience. As a first step, we study learning schemas for perception (i.e. perceptual schemas) that support generalization within a diverse set of tasks and environments.
We study generalization across three diverse environments and task structures, each with their own regularly occurring structures (Figure 1). Across these environments, test tasks are novel compositions of the regularly occurring structures the agent experiences during training. Generalization for the "Ballet task" involves recalling novel spatial and temporal compositions of regularly occurring object motions (Figure 1, a); generalization for the "Place X on Y task" involves generalizing active perception of and navigation towards regularly appearing 3D objects (Figure 1, b); and generalization for the "Keybox task" involves generalizing memory-retention of goal-information to larger environments composed of sequences of regularly occurring object-configurations (Figure 1, c). We hypothesize that discovering perceptual schemas for these regularly occurring structures will facilitate zero-shot generalization to tasks defined over novel compositions of these structures. We propose Feature Attending Recurrent Modules (FARM), a state representation learning architecture for discovering task-relevant perceptual schemas. FARM learns perceptual schemas that are distributed across multiple, smaller recurrent modules. To consider why this might be helpful, consider the benefits of using word embeddings. A word embedding can represent more information than a one-hot encoding of the same dimension because it can represent different patterns of word usage with the same dimension. Analogously, learning multiple modules enables FARM to represent different patterns of an agent's experience, i.e. different perceptual schemas, with the same module. To maximize the expressivity of the patterns a module can represent, each module employs a novel dynamic feature attention mechanism to dynamically attend to important features in the agent's observation.
When combined with spatio-temporal features, our results suggest that the perceptual schemas FARM discovers capture diverse structures including object motions, 3D objects, and spatial relationships between objects. To have the modules coordinate what they attend to, they share information using transformer-style attention (Vaswani et al., 2017). Recent work indicates that spatial attention is a simple inductive bias for strong performance on object-centric vision tasks (Greff et al., 2020; Locatello et al., 2020; Goyal et al., 2020b; a). We compare FARM to recurrent architectures that employ spatial attention and contribute the following: 1. FARM, which combines dynamic feature attention with learning multiple recurrent modules (§3). 2. We show that FARM's components synergistically enable generalizing (a) recall to novel compositions of object motions; (b) active perception of 3D objects to larger environments; and (c) memory of goal-information to longer tasks filled with more distractors (§4). 3. We show that spatial attention, which reduces observation features to a weighted average over spatial positions, can be detrimental to reinforcement learning of our diverse object-centric tasks and can interfere with the benefits that come from learning multiple recurrent modules. 4. Our analysis of the representations learned by FARM provides evidence that it learns perceptual schemas that are flexibly distributed across combinations of recurrent modules (§4.3.1). 2 RELATED WORK ON GENERALIZATION IN DEEP RL While a large body of work has focused on studying generalization in deep RL (Kansky et al., 2017; Witty et al., 2018; Farebrother et al., 2018; Zhang et al., 2018b; Cobbe et al., 2019; Raileanu & Fergus, 2021), there has been less emphasis on studying generalization within diverse environments and task structures.
One research direction has focused on generalizing to longer tasks, e.g. executing longer sequences (Oh et al., 2017; Zhang et al., 2018a; Lampinen et al., 2021) or executing novel subtask structures (Sohn et al., 2018; 2021; Brooks et al., 2021). Another direction has focused on generalizing to tasks with novel features, e.g. novel shape-color combinations (Higgins et al., 2017; Chaplot et al., 2018; Lee et al., 2020; Hill et al., 2020), novel backgrounds (Zhang et al., 2021; Agarwal et al., 2021), and novel distractors (Mott et al., 2019; Goyal et al., 2020b). We attempt to bridge these prior strands of research by developing a single architecture for (a) generalizing recall to novel compositions of object motions; (b) generalizing sequential perception of 3D objects to larger environments; and (c) generalizing memory of goal information to longer task lengths. Task-driven generalization. Recent work has shown that a diverse training curriculum can promote generalization (Tobin et al., 2017; Packer et al., 2018; Hill et al., 2020). This research inspired our task-driven approach to discovering generalizable "schema-like" representations. Additionally, our procedurally-generated KeyBox task follows previous research on using procedural level generation for faster learning and generalization (Justesen et al., 2019; Jiang et al., 2021). Generalizing with feature attention. Most similar to our feature-attention mechanism are the attention mechanisms of Perez et al. (2018) and Chaplot et al. (2018). In particular, Chaplot et al. (2018) showed that mapping language instructions to non-linear feature coefficients enabled generalizing to tasks specified over unseen feature combinations in a 3D environment. While FARM also learns non-linear feature coefficients, our work has two important differences.
First, we develop a multi-head version where individual feature coefficients are produced by their own recurrent modules. This enables FARM to leverage this form of attention in settings where language instructions don't indicate what to attend to (this is true in 2/3 of our tasks). Second, we are the first to show that feature attention facilitates generalizing recall of object dynamics (Figure 4.1) and generalizing memory-retention to larger environments (Figure 4.3). Generalizing with top-down spatial attention. Most similar to FARM are the Attention Augmented Agent (AAA) (Mott et al., 2019) and Recurrent Independent Mechanisms (RIMs) (Goyal et al., 2020b). Both are recurrent architectures that produce top-down spatial attention over observations. Additionally, RIMs also use a modular state representation produced by a set of recurrent modules. AAA showed generalization to unseen distractors, and RIMs to more distractors than trained on. While follow-up work on RIMs has addressed problems such as learning type-specific update rules (Goyal et al., 2020a; Didolkar et al., 2021) and improved information sharing among modules (Mittal et al., 2020; Goyal et al., 2021), there hasn't been an emphasis on which components enable generalization in reinforcement learning across diverse domains. The major difference between AAA, RIMs, and FARM is that FARM attends to an observation with feature attention as opposed to spatial attention. 3 ARCHITECTURE: FEATURE ATTENDING RECURRENT MODULES We study an agent that experiences a partial observation and task description $(x_t, \tau_t) \in \mathcal{X} \times \mathcal{T}$, takes an action $a_t \in \mathcal{A}$, and experiences a resultant reward $r_t \in \mathbb{R}$.
The agent needs a representation for state that enables learning a policy π : S → A that maximizes the expected reward from taking an action at a state Eπ [ ∑ t=0 γ tRt+1|St = a , At = a ] . Since the agent only observes observation-task-actionreward tuples ( xt , τt , at , rt ) , it needs to learn a time-series representation mapping episode histories ( x≤t , τ≤t , a < t , r < t ) to state representations st . The agent learns these representations with Feature Attending Recurrent Modules ( FARM ) . Exploiting structured observation features for generalization . We assume that FARM can generalize if it represents task-relevant regularly occurring perceptual structures , i.e . perceptual schemas . To discovering perceptual schemas , we assume an observation encoder φ ( · ) that produces observation features Zt ∈ Z ∈ Rm×dz where dz features are shared across m rows representing different portions of the observation . For an image , different rows might correspond to different spatial positions ; for audio , to different frequency bands ; or for robotic proprioception , to spatial information about different body parts . In this work , we focus on observations in the form of images . Feature Attending Recurrent Modules . FARM learns representations for perceptual schemas that are distributed over n recurrent modules { η ( i ) } . Module i uses a distinct initial module-state h ( i ) 0 ∈ H ( i ) and the subsequently experienced observation-task-action-reward tuple ( xt , τt , at , rt ) to learn the following mapping , η ( i ) : Z ×T ×H ( i ) ×A×R → H ( i ) . For convenience , we can define a module context c ( i ) t as the concatenation of the previous module-state , the action chosen with it , the resultant reward , and the current task-description as c ( i ) t = [ τt , h ( i ) t−1 , at−1 , rt−1 ] . The agent integrates this context c ( i ) t with Zt to compute st using the collection of outputs from the modules : st = [ η ( 1 ) ( Zt , c ( 1 ) t ) , . . . 
, η ( n ) ( Zt , c ( n ) t ) ] ( 1 ) = [ h ( 1 ) t , . . . , h ( n ) t ] . ( 2 ) Module update rule . Each module will use its context c ( i ) t to ( a ) attend to some aspect of the observation and ( b ) to retrieve information from other modules . This manifests with two functions , ( a ) f ( i ) att and ( b ) f ( i ) share , respectively . Defining f ( i ) update as the module update-function , we model η ( i ) as the following composition of functions : η ( i ) ( Zt , c ( i ) t ) = f ( i ) update ( c ( i ) t , f ( i ) att ( Zt , c ( i ) t ) , ︸ ︷︷ ︸ attend to observation f ( i ) share ( c ( i ) t , { h ( j ) t−1 } n1 ) ︸ ︷︷ ︸ share information ) ( 3 ) Modules capture diverse perceptual structures with dynamic feature attention . Our first insight was that modules can attend to high-level object-dynamics information if FARM learns a recurrent observation encoder Zt = φ ( xt , Zt−1 ) . By doing so , information about how feature-values shift between positions ( i.e . feature dynamics ) can be captured in the features of Zt . Our second insight was that feature attention is an expressive attention mechanism for selecting observation information to update with . Each module predicts its own feature attention coefficients and applies them identically to all spatial positions in Zt ( Perez et al. , 2018 ; Chaplot et al. , 2018 ) . We found it useful to linearly project the features before and after using shared parameters as in Andreas et al . ( 2016 ) ; Hu et al . ( 2018 ) . The operations are summarized below : f ( i ) att ( Zt , c ( i ) t ) = ( ZtW1 σ ( W iattc ( i ) t ) ) W2 ( 4 ) where denotes an element-wise product over the feature dimension and σ is a sigmoid nonlinearity . Equation 4 computes the degree to which important features are expressed across all positions . Since our features capture dynamics information , this allows a module to attend to dynamics ( § 4.1 ) . 
When updating with equation 3 , we flatten the output of equation 4 and give this as input to an RNN . Flattening leads all spatial positions to be treated uniquely and enables a module to represent aspects of the observation that span multiple positions , such as 3D objects ( § 4.2 ) and spatial arrangements of objects ( § 4.3 ) . Since the feature-coefficients for the next time-step are produced with observation features from the current time-step , modules can dynamically shift their attention when task-relevant events occur ( see Figure 6 , c ) . Modules share information to coordinate what they represent . Similar to RIMs Goyal et al . ( 2020b ) , before updating , each module retrieves information from other modules using transformerstyle attention ( Vaswani et al. , 2017 ) . We define the collection of previous module-states as Ht−1 = [ h ( 1 ) t−1 ; . . . ; h ( n ) t−1 ; 0 ] ∈ R ( n+1 ) ×dh , where 0 is a null-vector used to retrieve no information . Each module retrieves information as follows : f ( i ) share ( c ( i ) t , { h ( j ) t−1 } n1 ) = softmax ( c ( i ) t W q i ) ( Ht−1W k i ) > √ dh Ht−1W vi ( 5 ) 4 E X P E R I M E N T S In this section , we study the following questions : 1 . Can FARM enable generalization of memory-retention to novel spatio-temporal compositions of object-dynamics ? 2 . Can FARM generalize sequential active perception of 3D objects to larger environments ? 3 . Can FARM generalize memory-retention of goal-information to longer tasks composed of longer sequences of observed obstacles ? Baselines . Our first baseline is the canonical choice for learning state-representations , a Long Short-term Memory ( LSTM ) ( Hochreiter & Schmidhuber , 1997 ) . Our other two baselines — the Attention Augmented Agent ( AAA ) ( Mott et al. , 2019 ) and Recurrent Independent Mechanisms ( RIMs ) ( Goyal et al. , 2020b ) — also employ top-down attention over observation features . 
However , they both employ transformer-style attention ( Locatello et al. , 2020 ; Vaswani et al. , 2017 ) to dynamically attend to spatial positions in the observation ; whereas , we dynamically attend to features shared across all spatial positions . RIMs , like FARM , composes state with a set of recurrent modules . Implementation details . We implement our recurrent observation encoder , φ , as a ResNet ( He et al. , 2016 ) followed by a Convolutional LSTM ( ConvLSTM ) ( Shi et al. , 2015 ) . We implement the update function of each module with an LSTM . We used multihead-attention ( Vaswani et al. , 2017 ) for f ( i ) share . We trained the architecture with the IMPALA algorithm ( Espeholt et al. , 2018 ) and an Adam optimizer ( Kingma & Ba , 2015 ) . We tune hyperparameters for all architectures with the “ Place X next to Y ” task from the BabyAI environment ( Chevalier-Boisvert et al. , 2019 ) ( § C.2 ) . We keep most hyperparameters fixed across our tasks . Our main change is to make the RNNs employed by each architecture larger for the KeyBox task . For details on hyperparameters , see §B . | This paper proposes Composable Perceptual Schemas (CPS), a modular state representation learning architecture for reinforcement learning that combines modular RNNs as in RIMs (Goyal et al., 2020) with a dynamic feature attention mechanism. It is hypothesized that this lets CPS exhibit multiple different kinds of generalization encountered previously in the literature: (1) generalization to environments given by novel compositions of known object motions, (2) generalizing active perception of 3D objects to larger environments, and (3) generalizing goal-directed behavior and memory retention to larger environments. The idea of CPS is as follows: * A recurrent observation encoder produces observation features for different parses of the input observation (here spatial locations in the image) to yield a feature matrix. Here each row corresponds to a different parse. 
* A number of recurrent modules (here called "subschemas") each processes the observation features + context (action, reward, task encoding) and the previous state of all other modules using dynamic attention. From this, an updated state is computed using an LSTM.
* The dynamic attention for observations + context proceeds by scaling the observation features at all positions with a vector of learned attention coefficients. In this way, feature selection can take place (but shared across positions).
* The dynamic attention over other modules is similar to the implementation in RIMs, using transformer-style attention.

CPS is evaluated on three environments and compared to several baselines. In general, it can be seen to exhibit the desired generalization behavior and outperform the baselines. Finally, some evidence is presented that subschemas are selective for semantically meaningful aspects of the input (e.g., goal information, distractor information, etc.).
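To make the review's contrast concrete, here is a small NumPy sketch (dimensions and variable names are mine, purely illustrative) of the feature-wise gating described above versus the weighted average over positions that spatial attention computes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d_z = 9, 4                      # m spatial positions, d_z features per position
Z = rng.normal(size=(m, d_z))      # observation features, one row per position

# Feature attention: one soft mask of length d_z, applied identically to every row.
feature_mask = 1.0 / (1.0 + np.exp(-rng.normal(size=d_z)))  # sigmoid gate in (0, 1)
Z_feat = Z * feature_mask          # broadcasts over the m positions

# Spatial attention: softmax weights over the m positions, reduced to one vector.
logits = rng.normal(size=m)
w = np.exp(logits - logits.max())
w = w / w.sum()
z_spat = w @ Z                     # weighted average over positions

assert Z_feat.shape == (m, d_z)    # feature gating keeps per-position detail
assert z_spat.shape == (d_z,)      # spatial attention collapses the positions
```

The shape difference is the point the paper leans on: feature gating selects which feature dimensions matter while preserving where they occur, whereas the spatial weighted average discards per-position detail.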
TASK-DRIVEN DISCOVERY OF PERCEPTUAL SCHEMAS FOR GENERALIZATION IN REINFORCEMENT LEARNING

Anonymous authors. Paper under double-blind review.

ABSTRACT

Deep reinforcement learning (deep RL) has recently seen significant progress in developing algorithms for generalization. However, most algorithms target a single type of generalization setting. In this work, we study generalization across three disparate task structures: (a) tasks composed of spatial and temporal compositions of regularly occurring object motions; (b) tasks composed of active perception of and navigation towards regularly occurring 3D objects; and (c) tasks composed of remembering goal-information over sequences of regularly occurring object-configurations. These diverse task structures all share an underlying idea of compositionality: task completion always involves combining recurring segments of task-oriented perception and behavior. We hypothesize that an agent can generalize within a task structure if it can discover representations that capture these recurring task-segments. For our tasks, this corresponds to representations for recognizing individual object motions, for navigation towards 3D objects, and for navigating through object-configurations. Taking inspiration from cognitive science, we term representations for recurring segments of an agent's experience "perceptual schemas". We propose Feature Attending Recurrent Modules (FARM), which learns a state representation where perceptual schemas are distributed across multiple, relatively small recurrent modules. We compare FARM to recurrent architectures that leverage spatial attention, which reduces observation features to a weighted average over spatial positions.
Our experiments indicate that our feature-attention mechanism better enables FARM to generalize across the diverse object-centric domains we study.

1 INTRODUCTION

Cognitive scientists theorize that humans generalize broadly with "schemas" they discover for regularly occurring structures within their experience (Minsky, 1979; Rumelhart, 1980; Rumelhart et al., 1986). Schemas are representations that capture common features over diverse aspects of the environment. For example, when we learn to drive, we learn schemas for common car types (such as sedans), common car motions (such as accelerating or stopping), and common car arrangements (such as a row of cars). Importantly, schemas are composable representations over portions of our observations. This allows us to recombine them in novel ways. For example, once we learn schemas for sedans, car motions, and rows of cars, we can recognize many rows of sedans moving in opposite directions—even if we've never seen this before.

While substantial progress has been made on developing deep reinforcement learning (deep RL) algorithms which can generalize, algorithms are typically limited to one type of generalization. For example, one algorithm will generalize to novel compositions of familiar shapes and colors (Higgins et al., 2017; Chaplot et al., 2018), whereas another algorithm will generalize to longer sequences of observed subtasks (Sohn et al., 2018; 2021; Brooks et al., 2021). In this work, we hypothesize that we can develop a single deep RL architecture that exhibits multiple types of generalization if it can learn schema-like representations for regularly occurring structures within its experience. As a first step, we study learning schemas for perception (i.e., perceptual schemas) that support generalization within a diverse set of tasks and environments.
We study generalization across three diverse environments and task structures, each with their own regularly occurring structures (Figure 1). Across these environments, test tasks are novel compositions of the regularly occurring structures the agent experiences during training. Generalization for the "Ballet task" involves recalling novel spatial and temporal compositions of regularly occurring object motions (Figure 1, a); generalization for the "Place X on Y task" involves generalizing active perception of and navigation towards regularly appearing 3D objects (Figure 1, b); and generalization for the "KeyBox task" involves generalizing memory-retention of goal-information to larger environments composed of sequences of regularly occurring object-configurations (Figure 1, c). We hypothesize that discovering perceptual schemas for these regularly occurring structures will facilitate zero-shot generalization to tasks defined over novel compositions of these structures.

We propose Feature Attending Recurrent Modules (FARM), a state representation learning architecture for discovering task-relevant perceptual schemas. FARM learns perceptual schemas that are distributed across multiple, smaller recurrent modules. To see why this might be helpful, consider the benefits of using word embeddings. A word embedding can represent more information than a one-hot encoding of the same dimension because it can represent different patterns of word usage with the same dimensions. Analogously, learning multiple modules enables FARM to represent different patterns of an agent's experience—i.e., different perceptual schemas—with the same module. To maximize the expressivity of the patterns a module can represent, each module employs a novel dynamic feature attention mechanism to dynamically attend to important features in the agent's observation.
When combined with spatio-temporal features, our results suggest that the perceptual schemas FARM discovers capture diverse structures including object motions, 3D objects, and spatial relationships between objects. To have the modules coordinate what they attend to, they share information using transformer-style attention (Vaswani et al., 2017).

Recent work indicates that spatial attention is a simple inductive bias for strong performance on object-centric vision tasks (Greff et al., 2020; Locatello et al., 2020; Goyal et al., 2020b;a). We compare FARM to recurrent architectures that employ spatial attention and contribute the following:

1. FARM, which combines dynamic feature attention with learning multiple recurrent modules (§3).
2. We show that FARM's components synergistically enable generalizing (a) recall to novel compositions of object motions; (b) active perception of 3D objects to larger environments; and (c) memory of goal-information to longer tasks filled with more distractors (§4).
3. We show that spatial attention—which reduces observation features to a weighted average over spatial positions—can be detrimental to reinforcement learning of our diverse object-centric tasks and can interfere with the benefits that come from learning multiple recurrent modules.
4. Our analysis of the representations learned by FARM provides evidence that it learns perceptual schemas that are flexibly distributed across combinations of recurrent modules (§4.3.1).

2 RELATED WORK ON GENERALIZATION IN DEEP RL

While a large body of work has focused on studying generalization in deep RL (Kansky et al., 2017; Witty et al., 2018; Farebrother et al., 2018; Zhang et al., 2018b; Cobbe et al., 2019; Raileanu & Fergus, 2021), there has been less emphasis on studying generalization within diverse environments and task structures.
One research direction has focused on generalizing to longer tasks, e.g., executing longer sequences (Oh et al., 2017; Zhang et al., 2018a; Lampinen et al., 2021) or executing novel subtask structures (Sohn et al., 2018; 2021; Brooks et al., 2021). Another direction has focused on generalizing to tasks with novel features, e.g., novel shape-color combinations (Higgins et al., 2017; Chaplot et al., 2018; Lee et al., 2020; Hill et al., 2020), novel backgrounds (Zhang et al., 2021; Agarwal et al., 2021), and novel distractors (Mott et al., 2019; Goyal et al., 2020b). We attempt to bridge these prior strands of research by developing a single architecture for (a) generalizing recall to novel compositions of object motions; (b) generalizing sequential perception of 3D objects to larger environments; and (c) generalizing memory of goal information to longer task lengths.

Task-driven generalization. Recent work has shown that a diverse training curriculum can promote generalization (Tobin et al., 2017; Packer et al., 2018; Hill et al., 2020). This research inspired our task-driven approach to discovering generalizable "schema-like" representations. Additionally, our procedurally-generated KeyBox task follows previous research on using procedural level generation for faster learning and generalization (Justesen et al., 2019; Jiang et al., 2021).

Generalizing with feature attention. Most similar to our feature-attention mechanism are the attention mechanisms of Perez et al. (2018) and Chaplot et al. (2018). In particular, Chaplot et al. (2018) showed that mapping language instructions to non-linear feature coefficients enabled generalizing to tasks specified over unseen feature combinations in a 3D environment. While FARM also learns non-linear feature coefficients, our work has two important differences.
First, we develop a multi-head version where individual feature coefficients are produced by their own recurrent modules. This enables FARM to leverage this form of attention in settings where language instructions don't indicate what to attend to (this is true in 2/3 of our tasks). Second, we are the first to show that feature attention facilitates generalizing recall of object dynamics (Figure 4.1) and generalizing memory-retention to larger environments (Figure 4.3).

Generalizing with top-down spatial attention. Most similar to FARM are the Attention Augmented Agent (AAA) (Mott et al., 2019) and Recurrent Independent Mechanisms (RIMs) (Goyal et al., 2020b). Both are recurrent architectures that produce top-down spatial attention over observations. Additionally, RIMs also uses a modular state representation produced by a set of recurrent modules. AAA showed generalization to unseen distractors, and RIMs to more distractors than trained on. While follow-up work on RIMs has addressed problems such as learning type-specific update rules (Goyal et al., 2020a; Didolkar et al., 2021) and improved information sharing among modules (Mittal et al., 2020; Goyal et al., 2021), there hasn't been an emphasis on which components enable generalization in reinforcement learning across diverse domains. The major difference between AAA, RIMs, and FARM is that FARM attends to an observation with feature attention as opposed to spatial attention. Our experiments indicate that spatial attention can be an impediment to reinforcement learning of tasks defined by shape-color-agnostic object dynamics and 3D objects.

3 ARCHITECTURE: FEATURE ATTENDING RECURRENT MODULES

We study an agent that experiences a partial observation and task description (x_t, τ_t) ∈ X × T, takes an action a_t ∈ A, and experiences a resultant reward r_t ∈ R.
The agent needs a representation of state that enables learning a policy π : S → A that maximizes the expected discounted return from taking an action at a state, E_π[∑_{t=0} γ^t R_{t+1} | S_t = s, A_t = a]. Since the agent only observes observation-task-action-reward tuples (x_t, τ_t, a_t, r_t), it needs to learn a time-series representation mapping episode histories (x_{≤t}, τ_{≤t}, a_{<t}, r_{<t}) to state representations s_t. The agent learns these representations with Feature Attending Recurrent Modules (FARM).

Exploiting structured observation features for generalization. We assume that FARM can generalize if it represents task-relevant, regularly occurring perceptual structures, i.e., perceptual schemas. To discover perceptual schemas, we assume an observation encoder φ(·) that produces observation features Z_t ∈ Z ⊆ R^{m×d_z}, where d_z features are shared across m rows representing different portions of the observation. For an image, different rows might correspond to different spatial positions; for audio, to different frequency bands; or for robotic proprioception, to spatial information about different body parts. In this work, we focus on observations in the form of images.

Feature Attending Recurrent Modules. FARM learns representations for perceptual schemas that are distributed over n recurrent modules {η^{(i)}}. Module i uses a distinct initial module-state h_0^{(i)} ∈ H^{(i)} and the subsequently experienced observation-task-action-reward tuples (x_t, τ_t, a_t, r_t) to learn the following mapping: η^{(i)} : Z × T × H^{(i)} × A × R → H^{(i)}. For convenience, we define a module context c_t^{(i)} as the concatenation of the current task-description, the previous module-state, the action chosen with it, and the resultant reward: c_t^{(i)} = [τ_t, h_{t-1}^{(i)}, a_{t-1}, r_{t-1}]. The agent integrates this context c_t^{(i)} with Z_t to compute s_t using the collection of outputs from the modules:

s_t = [η^{(1)}(Z_t, c_t^{(1)}), ..., η^{(n)}(Z_t, c_t^{(n)})]   (1)
    = [h_t^{(1)}, ..., h_t^{(n)}].   (2)

Module update rule. Each module uses its context c_t^{(i)} to (a) attend to some aspect of the observation and (b) retrieve information from other modules. This manifests in two functions, (a) f_att^{(i)} and (b) f_share^{(i)}, respectively. Defining f_update^{(i)} as the module update-function, we model η^{(i)} as the following composition of functions:

η^{(i)}(Z_t, c_t^{(i)}) = f_update^{(i)}( c_t^{(i)}, f_att^{(i)}(Z_t, c_t^{(i)}), f_share^{(i)}(c_t^{(i)}, {h_{t-1}^{(j)}}_{j=1}^{n}) )   (3)

where f_att^{(i)} attends to the observation and f_share^{(i)} shares information across modules.

Modules capture diverse perceptual structures with dynamic feature attention. Our first insight was that modules can attend to high-level object-dynamics information if FARM learns a recurrent observation encoder Z_t = φ(x_t, Z_{t-1}). By doing so, information about how feature-values shift between positions (i.e., feature dynamics) can be captured in the features of Z_t. Our second insight was that feature attention is an expressive attention mechanism for selecting the observation information to update with. Each module predicts its own feature attention coefficients and applies them identically to all spatial positions in Z_t (Perez et al., 2018; Chaplot et al., 2018). We found it useful to linearly project the features before and after with shared parameters, as in Andreas et al. (2016) and Hu et al. (2018). The operations are summarized below:

f_att^{(i)}(Z_t, c_t^{(i)}) = (Z_t W_1 ⊙ σ(W_att^i c_t^{(i)})) W_2   (4)

where ⊙ denotes an element-wise product over the feature dimension and σ is a sigmoid nonlinearity. Equation 4 computes the degree to which important features are expressed across all positions. Since our features capture dynamics information, this allows a module to attend to dynamics (§4.1).
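Equation 4 can be sketched in a few lines of NumPy. The dimensions, the projection size d_p, and the random weights below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
m, d_z, d_c, d_p = 9, 4, 6, 7     # positions, features, context dim, projection dim

Z_t = rng.normal(size=(m, d_z))   # observation features
c_t = rng.normal(size=d_c)        # module context c_t^{(i)}

W1 = rng.normal(size=(d_z, d_p))      # shared pre-projection
W2 = rng.normal(size=(d_p, d_z))      # shared post-projection
W_att = rng.normal(size=(d_p, d_c))   # per-module coefficient head W_att^i

# Eq. (4): f_att(Z_t, c_t) = (Z_t W1 ⊙ σ(W_att c_t)) W2
gate = sigmoid(W_att @ c_t)       # one coefficient per projected feature
out = (Z_t @ W1 * gate) @ W2      # the gate broadcasts identically over all m rows

assert out.shape == (m, d_z)      # spatial layout is preserved
```

Because the sigmoid gate multiplies every row by the same coefficients, a module selects feature dimensions rather than spatial positions, which is the distinction the paper draws against spatial attention.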
When updating with Equation 3, we flatten the output of Equation 4 and give this as input to an RNN. Flattening causes all spatial positions to be treated uniquely and enables a module to represent aspects of the observation that span multiple positions, such as 3D objects (§4.2) and spatial arrangements of objects (§4.3). Since the feature-coefficients for the next time-step are produced with observation features from the current time-step, modules can dynamically shift their attention when task-relevant events occur (see Figure 6, c).

Modules share information to coordinate what they represent. Similar to RIMs (Goyal et al., 2020b), before updating, each module retrieves information from other modules using transformer-style attention (Vaswani et al., 2017). We define the collection of previous module-states as H_{t-1} = [h_{t-1}^{(1)}; ...; h_{t-1}^{(n)}; 0] ∈ R^{(n+1)×d_h}, where 0 is a null-vector used to retrieve no information. Each module retrieves information as follows:

f_share^{(i)}(c_t^{(i)}, {h_{t-1}^{(j)}}_{j=1}^{n}) = softmax( (c_t^{(i)} W_i^q)(H_{t-1} W_i^k)^T / √d_h ) H_{t-1} W_i^v   (5)

4 EXPERIMENTS

In this section, we study the following questions:

1. Can FARM enable generalization of memory-retention to novel spatio-temporal compositions of object-dynamics?
2. Can FARM generalize sequential active perception of 3D objects to larger environments?
3. Can FARM generalize memory-retention of goal-information to longer tasks composed of longer sequences of observed obstacles?

Baselines. Our first baseline is the canonical choice for learning state-representations, a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997). Our other two baselines—the Attention Augmented Agent (AAA) (Mott et al., 2019) and Recurrent Independent Mechanisms (RIMs) (Goyal et al., 2020b)—also employ top-down attention over observation features.
However, they both employ transformer-style attention (Locatello et al., 2020; Vaswani et al., 2017) to dynamically attend to spatial positions in the observation, whereas we dynamically attend to features shared across all spatial positions. RIMs, like FARM, composes state with a set of recurrent modules.

Implementation details. We implement our recurrent observation encoder, φ, as a ResNet (He et al., 2016) followed by a Convolutional LSTM (ConvLSTM) (Shi et al., 2015). We implement the update function of each module with an LSTM. We use multi-head attention (Vaswani et al., 2017) for f_share^{(i)}. We train the architecture with the IMPALA algorithm (Espeholt et al., 2018) and an Adam optimizer (Kingma & Ba, 2015). We tune hyperparameters for all architectures with the "Place X next to Y" task from the BabyAI environment (Chevalier-Boisvert et al., 2019) (§C.2). We keep most hyperparameters fixed across our tasks. Our main change is to make the RNNs employed by each architecture larger for the KeyBox task. For details on hyperparameters, see §B.

This paper proposes the Composable Perceptual Schemas recurrent architecture. This recurrent architecture is decomposed into $n$ LSTMs, each of which the authors call a _schema module_. At each time-step:

1. A recurrent encoder produces a set of $m$ feature vectors of dimension $d_z$, one vector for each spatial location.
2. The task information $\tau_t$, the previous reward $r_{t-1}$, and previous action $a_{t-1}$ get concatenated with the previous hidden state $h_{t-1}^{(i)}$ of module $i$. The authors call this concatenation the _context_ $c_t^{(i)}$.
3. The update rule of each schema module produces the new hidden state $h_{t}^{(i)}$ from three inputs:
   1. The output of $f_{att}$: a single soft mask of length $d_z$ is computed from $c_t^{(i)}$ and is applied to each of the $m$ feature vectors.
   2. $c_t^{(i)}$
   3. The output of a multi-head attention layer, where the queries are the $c_t^{(i)}$, and the keys and values are the previous hidden states of the schemas, along with the null vector: $[h_{t-1}^{(1)}, ..., h_{t-1}^{(n)}, 0]$.

The main difference between CPS and the baseline recurrent architectures (AAA and RIMs) is in how $f_{att}$ is computed in step 1 of the update: instead of attending over the different spatial locations, CPS attends over the same dimensions of all the spatial features. The authors apply their architecture to three generalization scenarios:

1. "Can CPS enable generalization of memory-retention to novel spatial and temporal (spatiotemporal) compositions of object-dynamics?" --> tested on the Ballet task from https://arxiv.org/pdf/2105.14039.pdf. CPS performs slightly better than the LSTM when generalizing to different spatial compositions, and much better than all baselines when generalizing to different spatial and temporal compositions.
2. "Can CPS generalize sequential active perception of 3D objects to larger environments?" --> tested on the "Place X on Y" task from https://arxiv.org/pdf/1910.00571.pdf. CPS learns faster than the baselines.
3. "Can CPS generalize goal-oriented behavior and memory-retention to environments composed of longer sequences of observed object-configurations?" --> tested on a custom KeyBox environment written in the BabyAI framework. CPS performs similarly to the AAA baseline.
Task-driven Discovery of Perceptual Schemas for Generalization in Reinforcement Learning | TASK-DRIVEN DISCOVERY OF PERCEPTUAL SCHEMAS FOR GENERALIZATION IN REINFORCEMENT LEARNING Anonymous authors Paper under double-blind review A B S T R A C T Deep reinforcement learning ( Deep RL ) has recently seen significant progress in developing algorithms for generalization . However , most algorithms target a single type of generalization setting . In this work , we study generalization across three disparate task structures : ( a ) tasks composed of spatial and temporal compositions of regularly occurring object motions ; ( b ) tasks composed of active perception of and navigation towards regularly occurring 3D objects ; and ( c ) tasks composed of remembering goal-information over sequences of regularly occurring object-configurations . These diverse task structures all share an underlying idea of compositionality : task completion always involves combining reccurring segments of task-oriented perception and behavior . We hypothesize that an agent can generalize within a task structure if it can discover representations that capture these reccurring task-segments . For our tasks , this corresponds to representations for recognizing individual object motions , for navigation towards 3D objects , and for navigating through object-configurations . Taking inspiration from cognitive science , we term representations for reccurring segments of an agent ’ s experience , “ perceptual schemas ” . We propose Feature Attending Recurrent Modules ( FARM ) , which learns a state representation where perceptual schemas are distributed across multiple , relatively small recurrent modules . We compare FARM to recurrent architectures that leverage spatial attention , which reduces observation features to a weighted average over spatial positions . 
Our experiments indicate that our feature-attention mechanism better enables FARM to generalize across the diverse object-centric domains we study . 1 I N T R O D U C T I O N Cognitive scientists theorize that humans generalize broadly with “ schemas ” they discover for regularly occurring structures within their experience ( Minsky , 1979 ; Rumelhart , 1980 ; Rumelhart et al. , 1986 ) . Schemas are representations that capture common features over diverse aspects of the environment . For example , when we learn to drive , we learn schemas for common car types ( such as sedans ) , common car motions ( such as accelerating or stopping ) , and common car arrangements ( such as a row of cars ) . Importantly , schemas are composable representations over portions of our observations . This allows us to recombine them in novel ways . For example , once we learn schemas for sedans , car motions , and rows of cars , we can recognize many rows of sedans moving in opposite directions—even if we ’ ve never seen this before . While substantial progress has been made on developing deep reinforcement learning ( deep RL ) algorithms which can generalize , algorithms are typically limited to one type of generalization . For example , one algorithm will generalize to novel compositions of familiar shapes and colors ( Higgins et al. , 2017 ; Chaplot et al. , 2018 ) , whereas another algorithm will generalize to longer sequences of observed subtasks ( Sohn et al. , 2018 ; 2021 ; Brooks et al. , 2021 ) . In this work , we hypothesize that we can develop a single deep RL architecture that can exhibit multiple types of generalization if it can learn schema-like representations for regularly occurring structures within its experience . As a first step , we study learning schemas for perception ( i.e . perceptual schemas ) that support generalization within a diverse set of tasks and environments . 
We study generalization across three diverse environments and task structures , each with their own regularly occurring structures ( Figure 1 ) . Across these environments , test tasks are novel compositions of the regularly occurring structures the agent experiences during training . Generalization for the “ Ballet task ” involves recalling novel spatial and temporal compositions of regularly occurring object motions ( Figure 1 , a ) ; generalization for the “ Place X on Y task ” involves generalizing active perception of and navigation towards regularly appearing 3D objects ( Figure 1 , b ) ; and generalization for the “ Keybox task ” involves generalizing memory-retention of goal-information to larger environments composed of sequences of regularly occurring object-configurations ( Figure 1 , c ) . We hypothesize that discovering perceptual schemas for these regularly occurring structures will facilitate zero-shot generalization to tasks defined over novel compositions of these structures . We propose Feature Attending Recurrent Modules ( FARM ) , a state representation learning architecture for discovering task-relevant perceptual schemas . FARM learns perceptual schemas that are distributed across multiple , smaller recurrent modules . To consider why this might be helpful , consider the benefits of using word embeddings . A word embedding can represent more information than a one-hot encoding of the same dimension because it can represent different patterns of word usage with the same dimension . Analogously , learning multiple modules enables FARM to represent different patterns of an agent ’ s experience—i.e . different perceptual schemas—with the same module . To maximize the expressivity of the patterns a module can represent , each module employs a novel dynamic feature attention mechanism to dynamically attend to important features in the agent ’ s observation .
When combined with spatio-temporal features , our results suggest that the perceptual schemas FARM discovers capture diverse structures including object motions , 3D objects , and spatial-relationships between objects . To have the modules coordinate what they attend to , they share information using transformer-style attention ( Vaswani et al. , 2017 ) . Recent work indicates that spatial attention is a simple inductive bias for strong performance on object-centric vision tasks ( Greff et al. , 2020 ; Locatello et al. , 2020 ; Goyal et al. , 2020b ; a ) . We compare FARM to recurrent architectures that employ spatial attention and contribute the following : 1 . FARM , which combines dynamic feature attention with learning multiple recurrent modules ( §3 ) . 2 . We show that FARM ’ s components synergistically enable generalizing ( a ) recall to novel compositions of object motions ; ( b ) active perception of 3D objects to larger environments ; and ( c ) generalizing memory of goal-information to longer tasks filled with more distractors ( §4 ) . 3 . We show that spatial attention—which reduces observation features to a weighted average over spatial positions—can be detrimental to reinforcement learning of our diverse object-centric tasks and interfere with the benefits that come from learning multiple recurrent modules . 4 . Our analysis of the representations learned by FARM provides evidence that it learns perceptual schemas that are flexibly distributed across combinations of recurrent modules ( §4.3.1 ) . 2 RELATED WORK ON GENERALIZATION IN DEEP RL While a large body of work has focused on studying generalization in deep RL ( Kansky et al. , 2017 ; Witty et al. , 2018 ; Farebrother et al. , 2018 ; Zhang et al. , 2018b ; Cobbe et al. , 2019 ; Raileanu & Fergus , 2021 ) , there has been less emphasis on studying generalization within diverse environments and task structures .
One research direction has focused on generalizing to longer tasks , e.g . executing longer sequences ( Oh et al. , 2017 ; Zhang et al. , 2018a ; Lampinen et al. , 2021 ) or executing novel subtask structures ( Sohn et al. , 2018 ; 2021 ; Brooks et al. , 2021 ) . Another direction has focused on generalizing to tasks with novel features , e.g. , novel shape-color combinations ( Higgins et al. , 2017 ; Chaplot et al. , 2018 ; Lee et al. , 2020 ; Hill et al. , 2020 ) , novel backgrounds ( Zhang et al. , 2021 ; Agarwal et al. , 2021 ) , and novel distractors ( Mott et al. , 2019 ; Goyal et al. , 2020b ) . We attempt to bridge these prior strands of research by developing a single architecture for ( a ) generalizing recall to novel compositions of object motions ; ( b ) generalizing sequential perception of 3D objects to larger environments ; and ( c ) generalizing memory of goal information to longer task lengths . Task-driven generalization . Recent work has shown that a diverse training curriculum can promote generalization ( Tobin et al. , 2017 ; Packer et al. , 2018 ; Hill et al. , 2020 ) . This research inspired our task-driven approach to discovering generalizable “ schema-like ” representations . Additionally , our procedurally-generated KeyBox task follows previous research on using procedural level generation for faster learning and generalization ( Justesen et al. , 2019 ; Jiang et al. , 2021 ) . Generalizing with feature attention . Most similar to our feature-attention mechanism are the attention mechanisms by Perez et al . ( 2018 ) ; Chaplot et al . ( 2018 ) . In particular , Chaplot et al . ( 2018 ) showed that mapping language instructions to non-linear feature coefficients enabled generalizing to tasks specified over unseen feature combinations in a 3D environment . While FARM also learns non-linear feature coefficients , our work has two important differences .
First , we develop a multi-head version where individual feature coefficients are produced by their own recurrent modules . This enables FARM to leverage this form of attention in settings where language instructions don ’ t indicate what to attend to ( this is true in 2/3 of our tasks ) . Second , we are the first to show that feature attention facilitates generalizing recall of object dynamics ( Figure 4.1 ) and generalizing memory-retention to larger environments ( Figure 4.3 ) . Generalizing with top-down spatial attention . Most similar to FARM are the Attention Augmented Agent ( AAA ) ( Mott et al. , 2019 ) and Recurrent Independent Mechanisms ( RIMs ) ( Goyal et al. , 2020b ) . Both are recurrent architectures that produce top-down spatial attention over observations . Additionally , RIMs also uses a modular state representation produced by a set of recurrent modules . AAA showed generalization to unseen distractors and RIMs to more distractors than trained on . While follow-up work on RIMs has addressed problems such as learning type-specific update rules ( Goyal et al. , 2020a ; Didolkar et al. , 2021 ) and improved information sharing among modules ( Mittal et al. , 2020 ; Goyal et al. , 2021 ) , there hasn ’ t been an emphasis on which components enable generalization in reinforcement learning across diverse domains . The major difference between AAA , RIMs , and FARM is that FARM attends to an observation with feature attention as opposed to spatial attention . Our experiments indicate that spatial attention can be an impediment to reinforcement learning of tasks defined by shape-color agnostic object-dynamics and 3D objects . 3 ARCHITECTURE : FEATURE ATTENDING RECURRENT MODULES We study an agent that experiences a partial observation and task description $( x_t , \tau_t ) \in \mathcal{X} \times \mathcal{T}$ , takes an action $a_t \in \mathcal{A}$ , and experiences a resultant reward $r_t \in \mathbb{R}$ .
The agent needs a representation for state that enables learning a policy $\pi : \mathcal{S} \to \mathcal{A}$ that maximizes the expected return from taking an action at a state , $\mathbb{E}_\pi \big[ \sum_{t=0} \gamma^t R_{t+1} \mid S_t = s , A_t = a \big]$ . Since the agent only observes observation-task-action-reward tuples $( x_t , \tau_t , a_t , r_t )$ , it needs to learn a time-series representation mapping episode histories $( x_{\le t} , \tau_{\le t} , a_{<t} , r_{<t} )$ to state representations $s_t$ . The agent learns these representations with Feature Attending Recurrent Modules ( FARM ) . Exploiting structured observation features for generalization . We assume that FARM can generalize if it represents task-relevant regularly occurring perceptual structures , i.e . perceptual schemas . To discover perceptual schemas , we assume an observation encoder $\phi ( \cdot )$ that produces observation features $Z_t \in \mathcal{Z} \subseteq \mathbb{R}^{m \times d_z}$ where $d_z$ features are shared across $m$ rows representing different portions of the observation . For an image , different rows might correspond to different spatial positions ; for audio , to different frequency bands ; or for robotic proprioception , to spatial information about different body parts . In this work , we focus on observations in the form of images . Feature Attending Recurrent Modules . FARM learns representations for perceptual schemas that are distributed over $n$ recurrent modules $\{ \eta^{(i)} \}$ . Module $i$ uses a distinct initial module-state $h^{(i)}_0 \in \mathcal{H}^{(i)}$ and the subsequently experienced observation-task-action-reward tuple $( x_t , \tau_t , a_t , r_t )$ to learn the following mapping , $\eta^{(i)} : \mathcal{Z} \times \mathcal{T} \times \mathcal{H}^{(i)} \times \mathcal{A} \times \mathbb{R} \to \mathcal{H}^{(i)}$ . For convenience , we can define a module context $c^{(i)}_t$ as the concatenation of the previous module-state , the action chosen with it , the resultant reward , and the current task-description as $c^{(i)}_t = [ \tau_t , h^{(i)}_{t-1} , a_{t-1} , r_{t-1} ]$ . The agent integrates this context $c^{(i)}_t$ with $Z_t$ to compute $s_t$ using the collection of outputs from the modules :

$$s_t = [ \eta^{(1)} ( Z_t , c^{(1)}_t ) , \ldots , \eta^{(n)} ( Z_t , c^{(n)}_t ) ] \qquad (1)$$
$$= [ h^{(1)}_t , \ldots , h^{(n)}_t ] . \qquad (2)$$

Module update rule . Each module will use its context $c^{(i)}_t$ to ( a ) attend to some aspect of the observation and ( b ) retrieve information from other modules . This manifests with two functions , ( a ) $f^{(i)}_{\text{att}}$ and ( b ) $f^{(i)}_{\text{share}}$ , respectively . Defining $f^{(i)}_{\text{update}}$ as the module update-function , we model $\eta^{(i)}$ as the following composition of functions :

$$\eta^{(i)} ( Z_t , c^{(i)}_t ) = f^{(i)}_{\text{update}} \Big( c^{(i)}_t , \underbrace{ f^{(i)}_{\text{att}} ( Z_t , c^{(i)}_t ) }_{\text{attend to observation}} , \underbrace{ f^{(i)}_{\text{share}} ( c^{(i)}_t , \{ h^{(j)}_{t-1} \}_{j=1}^{n} ) }_{\text{share information}} \Big) \qquad (3)$$

Modules capture diverse perceptual structures with dynamic feature attention . Our first insight was that modules can attend to high-level object-dynamics information if FARM learns a recurrent observation encoder $Z_t = \phi ( x_t , Z_{t-1} )$ . By doing so , information about how feature-values shift between positions ( i.e . feature dynamics ) can be captured in the features of $Z_t$ . Our second insight was that feature attention is an expressive attention mechanism for selecting observation information to update with . Each module predicts its own feature attention coefficients and applies them identically to all spatial positions in $Z_t$ ( Perez et al. , 2018 ; Chaplot et al. , 2018 ) . We found it useful to linearly project the features before and after using shared parameters as in Andreas et al . ( 2016 ) ; Hu et al . ( 2018 ) . The operations are summarized below :

$$f^{(i)}_{\text{att}} ( Z_t , c^{(i)}_t ) = \big( Z_t W_1 \odot \sigma ( W^{(i)}_{\text{att}} c^{(i)}_t ) \big) W_2 \qquad (4)$$

where $\odot$ denotes an element-wise product over the feature dimension and $\sigma$ is a sigmoid nonlinearity . Equation 4 computes the degree to which important features are expressed across all positions . Since our features capture dynamics information , this allows a module to attend to dynamics ( § 4.1 ) .
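The feature-attention step in Eq. 4 is a FiLM-style gating applied identically at every spatial position. A minimal NumPy sketch (all dimension names and the function signature are hypothetical, not from the paper's released code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_attention(z_t, c_t, w1, w2, w_att):
    """Eq. 4 sketch: gate *features* (not positions), identically at every position.

    z_t:   (m, d_z)   observation features at m spatial positions
    c_t:   (d_c,)     module context
    w1:    (d_z, d_p) and w2: (d_p, d_z)  shared pre/post projections
    w_att: (d_c, d_p) module-specific coefficient head
    """
    coeff = sigmoid(c_t @ w_att)            # (d_p,) feature coefficients
    # coeff broadcasts over all m positions: same gating everywhere
    return ((z_t @ w1) * coeff) @ w2        # (m, d_z)
```

Because the coefficients are broadcast across positions, a module selects *which features* matter everywhere in the observation rather than *where* to look, which is the contrast with spatial attention drawn throughout the paper.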
When updating with equation 3 , we flatten the output of equation 4 and give this as input to an RNN . Flattening causes all spatial positions to be treated uniquely and enables a module to represent aspects of the observation that span multiple positions , such as 3D objects ( § 4.2 ) and spatial arrangements of objects ( § 4.3 ) . Since the feature-coefficients for the next time-step are produced with observation features from the current time-step , modules can dynamically shift their attention when task-relevant events occur ( see Figure 6 , c ) . Modules share information to coordinate what they represent . Similar to RIMs ( Goyal et al. , 2020b ) , before updating , each module retrieves information from other modules using transformer-style attention ( Vaswani et al. , 2017 ) . We define the collection of previous module-states as $H_{t-1} = [ h^{(1)}_{t-1} ; \ldots ; h^{(n)}_{t-1} ; \mathbf{0} ] \in \mathbb{R}^{(n+1) \times d_h}$ , where $\mathbf{0}$ is a null-vector used to retrieve no information . Each module retrieves information as follows :

$$f^{(i)}_{\text{share}} ( c^{(i)}_t , \{ h^{(j)}_{t-1} \}_{j=1}^{n} ) = \mathrm{softmax} \Big( \frac{ ( c^{(i)}_t W^q_i ) ( H_{t-1} W^k_i )^{\top} }{ \sqrt{d_h} } \Big) H_{t-1} W^v_i \qquad (5)$$

4 EXPERIMENTS In this section , we study the following questions : 1 . Can FARM enable generalization of memory-retention to novel spatio-temporal compositions of object-dynamics ? 2 . Can FARM generalize sequential active perception of 3D objects to larger environments ? 3 . Can FARM generalize memory-retention of goal-information to longer tasks composed of longer sequences of observed obstacles ? Baselines . Our first baseline is the canonical choice for learning state-representations , a Long Short-term Memory ( LSTM ) ( Hochreiter & Schmidhuber , 1997 ) . Our other two baselines — the Attention Augmented Agent ( AAA ) ( Mott et al. , 2019 ) and Recurrent Independent Mechanisms ( RIMs ) ( Goyal et al. , 2020b ) — also employ top-down attention over observation features .
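Returning to the architecture for a moment: the retrieval step of Eq. 5 is standard single-query scaled dot-product attention over the stacked previous module states (plus the null row). A minimal NumPy sketch with hypothetical dimensions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def share(c_t, h_prev, w_q, w_k, w_v):
    """Eq. 5 sketch: module i retrieves from the other modules' previous states.

    c_t:    (d_c,)      module-i context, used as the query source
    h_prev: (n+1, d_h)  previous module states stacked with a null row of zeros
    w_q:    (d_c, d_h); w_k, w_v: (d_h, d_h)
    """
    d_h = h_prev.shape[1]
    scores = (c_t @ w_q) @ (h_prev @ w_k).T / np.sqrt(d_h)  # (n+1,) attention logits
    return softmax(scores) @ (h_prev @ w_v)                 # (d_h,) retrieved summary
```

Attending mostly to the null row lets a module retrieve (near) nothing, which is the stated purpose of the extra zero vector in $H_{t-1}$.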
However , they both employ transformer-style attention ( Locatello et al. , 2020 ; Vaswani et al. , 2017 ) to dynamically attend to spatial positions in the observation ; whereas we dynamically attend to features shared across all spatial positions . RIMs , like FARM , composes state with a set of recurrent modules . Implementation details . We implement our recurrent observation encoder , $\phi$ , as a ResNet ( He et al. , 2016 ) followed by a Convolutional LSTM ( ConvLSTM ) ( Shi et al. , 2015 ) . We implement the update function of each module with an LSTM . We used multi-head attention ( Vaswani et al. , 2017 ) for $f^{(i)}_{\text{share}}$ . We trained the architecture with the IMPALA algorithm ( Espeholt et al. , 2018 ) and an Adam optimizer ( Kingma & Ba , 2015 ) . We tune hyperparameters for all architectures with the “ Place X next to Y ” task from the BabyAI environment ( Chevalier-Boisvert et al. , 2019 ) ( § C.2 ) . We keep most hyperparameters fixed across our tasks . Our main change is to make the RNNs employed by each architecture larger for the KeyBox task . For details on hyperparameters , see §B . | The paper proposes a modular state representation learning architecture called Composable Perceptual Schemas to group regularly recurring patterns in its observations into “ schemas ” and process them in a modular and factorized manner . The proposed model is tested for generalization in the deep RL setting along various axes such as varying object types , object numbers , distractors , task solution lengths etc . and shows promising improvements over other recurrent baseline models . | SP:eaeaa324cf6ca9253179d901df14159d0792ebf1
Tackling Oversmoothing of GNNs with Contrastive Learning | 1 INTRODUCTION . Combining the comprehensive relations in graph data with the representation learning ability of neural network models , graph neural networks ( GNNs ) achieve state-of-the-art performances in many real-world applications , such as document classification , natural language processing , computer vision , and recommender systems ( Zhang et al. , 2019 ) . GNNs consist of many variant neural network models with different message-passing mechanisms , to name a few , such as GCN ( Kipf & Welling , 2017 ) , GraphSAGE ( Hamilton et al. , 2017 ) , GAT ( Velickovic et al. , 2018 ) , GIN ( Xu et al. , 2019 ) , and GMNN ( Qu et al. , 2019 ) . In the complex real-world settings of applying GNNs , not every node is lucky enough to have node labels and/or node features . Hence , increasing the depth ( i.e. , the number of layers ) of GNNs is a viable solution to capture more latent knowledge to reduce the uncertainty caused by missing values ( Zhao & Akoglu , 2020 ) . However , as the number of layers increases , the performance of GNNs will decrease to a large degree ( Kipf & Welling , 2017 ) . The reasons may come from many aspects after involving more parameters , like vanishing gradients , overfitting , and oversmoothing . Compared with the first two reasons , oversmoothing of GNNs is recently introduced ( Li et al. , 2018 ; Oono & Suzuki , 2020 ) and widely discussed ( Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) . It is the phenomenon that the learned node representations become indistinguishable as the number of hidden layers increases , thus hurting the performance of downstream tasks like node classification and link prediction . To tackle the oversmoothing problem of GNNs , some nascent research works have been proposed ( Klicpera et al. , 2019 ; Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al.
, 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) . They share the same logic ( i.e. , keeping the divergence between nodes ) but differ in specific methodologies ( i.e. , rescaling divergences of learned representations ( Zhao & Akoglu , 2020 ) , adding the divergence regularizer in the learning process ( Chen et al. , 2020a ; Zhou et al. , 2020 ) , changing input graph structures ( Chen et al. , 2020a ; Rong et al. , 2020 ; Chen et al. , 2020b ) , or personalizing the information aggregation for each specific node ( Klicpera et al. , 2019 ; Liu et al. , 2020 ) ) . Despite their good performance , some drawbacks still exist in those mentioned solutions . By surveying these SOTA de-oversmoothing strategies , we summarize three major metrics to evaluate a de-oversmoothing strategy : 1 ) constant divergence indicator , 2 ) easy-to-determine divergence indicator , and 3 ) model-agnostic de-oversmoothing strategy . ( The detailed discussion could be found in Section 2 ) . We find that no prevalent de-oversmoothing method for GNNs could maintain all of them . To bridge this gap , we propose a Topology-guided Graph Contrastive Layer ( TGCL ) with inspiration from the contrastive learning concept ( van den Oord et al. , 2018 ) , where we contrast node topological information to obtain discriminative node representations after many GNN layers . TGCL is the first de-oversmoothing strategy attempting to maintain all three mentioned metrics . Specifically , we set a constant and easy-to-determine divergence indicator between nodes , which is purely based on the topology of the input graph . With this divergence indicator , we aim to guide latent representations of neighbor node pairs closer and non-neighbor node pairs farther apart to mitigate the oversmoothing of GNNs . Last but not least , the proposed TGCL is model-agnostic , which means TGCL could be incorporated in multiple GNN models .
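The "constant and easy-to-determine" property can be made concrete: because the indicator depends only on the fixed input adjacency structure, it can be precomputed once before training, unlike indicators tied to learned embeddings. A minimal sketch (cosine similarity of adjacency vectors is an assumed instantiation here; the paper's exact similarity function is given in its Section 3):

```python
import numpy as np

def topology_indicator(adj):
    """Pairwise divergence indicator computed from adjacency vectors only.

    adj: (n, n) binary adjacency matrix. Returns an (n, n) matrix of cosine
    similarities between adjacency vectors -- constant throughout training,
    since it never touches learned node representations.
    """
    norms = np.linalg.norm(adj, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # isolated nodes: avoid division by zero
    adj_hat = adj / norms
    return adj_hat @ adj_hat.T       # (n, n), entries in [0, 1]
```

Precomputing this matrix once and reading off pairwise values during training is what makes the indicator both constant (metric 1) and easy to determine (metric 2), independent of which GNN backbone is used (metric 3).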
With theoretical proof and empirical analysis , we show that the proposed TGCL could alleviate the oversmoothing problem of GNNs to a large extent . Our contributions can be summarized as follows : • We survey current de-oversmoothing methods by analyzing the advantages and the disadvantages of each method and summarize three metrics to evaluate a de-oversmoothing method for GNNs . • We propose a topology-guided graph contrastive layer named TGCL to tackle the oversmoothing problem of GNNs , which enjoys all three metrics simultaneously . • We show the effectiveness of the proposed TGCL from both theoretical and empirical aspects with extensive experiments . The rest of this paper is organized as follows . After a brief survey of de-oversmoothing methods in Section 2 , we introduce the proposed TGCL in Section 3 . The empirical evaluation of the proposed TGCL on real-world datasets is presented in Section 4 . Then , we review the related work in Section 5 before we conclude the paper in Section 6 . 2 BACKGROUND . As mentioned above , de-oversmoothing methods ( Klicpera et al. , 2019 ; Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) share the same logic of keeping the divergence between node representations but differ in specific methodologies focusing on different merits . By taking the union of the metrics used in different state-of-the-art methods , we get three metrics to evaluate a de-oversmoothing algorithm comprehensively . There are three metrics as shown in Table 1 , including constant divergence indicator , easy-to-determine divergence indicator , and model-agnostic de-oversmoothing strategy . Divergence indicator is indispensable for guiding the final node representation similarity based on the specified distance measurement . Several de-oversmoothing methods like ( Klicpera et al. , 2019 ; Zhao & Akoglu , 2020 ; Chen et al. , 2020b ; Liu et al.
, 2020 ) achieve the constant divergence indicator , which means the guidance is much more robust and not dependent on the training process of GNNs . However , to guide the node representation similarity reasonably , the divergence indicator is not that easy to determine . For example , PairNorm ( Zhao & Akoglu , 2020 ) is proposed as a normalization layer to keep the divergence of node representations against the original node features . Instead of adding this regularizer directly to the learning objective of GNN models , PairNorm takes an alternative by rescaling the learned node representations with a constant hyperparameter to keep the original node feature divergence . PairNorm achieves two metrics : constant divergence indicator ( i.e. , the constant hyperparameter ) and model-agnostic strategy ( i.e. , PairNorm can be added on different GNN models as a layer ) . However , the selection of that constant hyperparameter heavily depends on prior knowledge of the input graph data , which is hard to determine . ( The discussion of other de-oversmoothing methods can be found in Section 5 . ) As shown in Table 1 , PairNorm is an effective de-oversmoothing method that maintains two metrics but needs prior knowledge to scale the divergence between node pairs . Our proposed TGCL , in contrast , transfers this hard-to-acquire prior knowledge into the topology information of the input graph , where the divergence guidance between nodes is constant and easy to determine . To be specific , our TGCL is the first de-oversmoothing method attempting to maintain these three metrics at the same time . In the next section , we formally introduce the proposed TGCL with theoretical proof of the model effectiveness . Moreover , we prove that the objective of PairNorm is just a special case of our TGCL , which shows the effectiveness of our TGCL from another perspective . 3 PROPOSED METHOD . In this section , we begin with the notation used in this paper .
Then , we prove that the objective of the de-oversmoothing model PairNorm ( Zhao & Akoglu , 2020 ) is just a special case of our Topology-guided Graph Contrastive Layer ( TGCL ) . After analyzing the limitations of PairNorm , we formally introduce our proposed TGCL and show why it could better alleviate the oversmoothing issue in the contrastive learning manner . 3.1 NOTATION . Throughout this paper , we use regular letters to denote scalars ( e.g. , $\alpha$ ) , boldface lowercase letters to denote vectors ( e.g. , $\mathbf{v}$ ) , and boldface uppercase letters to denote matrices ( e.g. , $\mathbf{A}$ ) . We formalize the graph mining problem in the context of an undirected graph $G = ( V , E , X )$ , where $V$ consists of $n$ vertices , $E$ consists of $m$ edges , $X \in \mathbb{R}^{n \times d}$ denotes the feature matrix , and $d$ is the feature dimension . We let $A \in \mathbb{R}^{n \times n}$ denote the adjacency matrix , $D \in \mathbb{R}^{n \times n}$ denote the diagonal matrix of vertex degrees , and $I \in \mathbb{R}^{n \times n}$ denote the identity matrix . For ease of explanation , we denote $v_i$ as node $i$ , $x_i$ as the input feature of node $i$ , $z_i$ as the embedding of node $i$ by any type of GNN , and $A_i$ as the adjacency vector for node $i$ . $N_i$ is a set that contains the neighbors of node $i$ and $\bar{N}_i$ is the complement of $N_i$ , which contains the non-neighbors of node $i$ . 3.2 PRELIMINARY . Each graph convolutional layer can be understood as a smoothing operation ( Li et al. , 2018 ) , but stacking many layers renders the final representation of a node indistinguishable from others . Therefore , how to recover the divergence between node representations while preserving the shared information becomes a vital problem in graph mining . In PairNorm ( Zhao & Akoglu , 2020 ) , the divergence between node pairs is based on a hyper-parameter , which requires prior knowledge of the input graph data and is hard to acquire .
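The contrastive upper bound $\mathcal{L}_1$ derived below ( Eq . 4 ) has the form of an InfoNCE-style loss with similarity $f ( z_i , z_k ) = e^{-\| z_i - z_k \|^2}$ . A minimal per-node sketch (shapes, function name, and the neighbor/non-neighbor split passed as inputs are all hypothetical):

```python
import numpy as np

def l1_for_node(z_i, x_i, z_neighbors, z_non_neighbors):
    """Per-node term of the contrastive upper bound L1 (Eq. 4).

    z_i, x_i:        (d,) embedding and input feature of node i
    z_neighbors:     (num_pos, d) embeddings of i's neighbors (positives)
    z_non_neighbors: (num_neg, d) embeddings of non-neighbors (negatives)
    """
    # similarity f(z, z') = exp(-||z - z'||^2), as in the derivation
    f = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=-1))
    recon = np.sum((z_i - x_i) ** 2)          # reconstruction term
    neg = f(z_i, z_non_neighbors).sum()       # pooled negative similarities
    contrast = 0.0
    for z_j in z_neighbors:
        pos = f(z_i, z_j)
        # softmax-style term: pull neighbor j close, push non-neighbors away
        contrast -= np.log(pos / (pos + neg))
    return recon + contrast
```

Summing this quantity over all nodes gives the full $\mathcal{L}_1$; the contrastive part is non-negative, so the total is always at least the reconstruction error.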
More specifically , PairNorm is proposed as a novel normalization layer to prevent all node embeddings from becoming too similar by minimizing the following objective :

$$\mathcal{L}_p = \sum_{v_i \in V} \| z_i - x_i \|^2 + \sum_{(i,j) \in E} \| z_i - z_j \|^2 - \sum_{(i,k) \notin E} \| z_i - z_k \|^2 \qquad (1)$$

where $z_i$ is the node embedding vector of node $v_i$ and $x_i$ is the original feature vector of node $v_i$ . In the equation above , the first term is the reconstruction error , the second term is responsible for minimizing the difference between two representations of a neighbor node pair , and the last term aims to maximize the difference between two representations of a remote node pair . By reformulating Eq . 1 , we could derive an upper bound of $\mathcal{L}_p$ in the form of a contrastive learning loss term as follows :

$$\mathcal{L}_p = \sum_{v_i \in V} \| z_i - x_i \|^2 + \sum_{v_i \in V} \sum_{v_j \in N_i} \| z_i - z_j \|^2 - \sum_{v_i \in V} \sum_{v_k \notin N_i} \| z_i - z_k \|^2$$
$$= \sum_{v_i \in V} \| z_i - x_i \|^2 - \sum_{v_i \in V} \sum_{v_j \in N_i} \log \big( e^{-\| z_i - z_j \|^2} \big) + \sum_{v_i \in V} \sum_{v_k \notin N_i} \log \big( e^{-\| z_i - z_k \|^2} \big) \qquad (2)$$
$$\le \sum_{v_i \in V} \| z_i - x_i \|^2 - \sum_{v_i \in V} \sum_{v_j \in N_i} \log \big( e^{-\| z_i - z_j \|^2} \big) + \sum_{v_i \in V} \log \Big( \sum_{v_k \notin N_i} e^{-\| z_i - z_k \|^2} \Big) \qquad (3)$$
$$\le \sum_{v_i \in V} \| z_i - x_i \|^2 - \sum_{v_i \in V} \sum_{v_j \in N_i} \log \big( e^{-\| z_i - z_j \|^2} \big) + \sum_{v_i \in V} \sum_{v_j \in N_i} \log \Big( \sum_{v_k \notin N_i} e^{-\| z_i - z_k \|^2} \Big)$$
$$= \sum_{v_i \in V} \| z_i - x_i \|^2 + \sum_{v_i \in V} \sum_{v_j \in N_i} \log \Big( \frac{ \sum_{v_k \notin N_i} e^{-\| z_i - z_k \|^2} }{ e^{-\| z_i - z_j \|^2} } \Big)$$
$$\le \sum_{v_i \in V} \| z_i - x_i \|^2 + \sum_{v_i \in V} \sum_{v_j \in N_i} \log \Big( 1 + \frac{ \sum_{v_k \notin N_i} e^{-\| z_i - z_k \|^2} }{ e^{-\| z_i - z_j \|^2} } \Big)$$
$$= \sum_{v_i \in V} \| z_i - x_i \|^2 - \sum_{v_i \in V} \sum_{v_j \in N_i} \log \Big( \frac{ e^{-\| z_i - z_j \|^2} }{ e^{-\| z_i - z_j \|^2} + \sum_{v_k \notin N_i} e^{-\| z_i - z_k \|^2} } \Big)$$
$$= \sum_{v_i \in V} \| z_i - x_i \|^2 - \sum_{v_i \in V} \sum_{v_j \in N_i} \log \Big( \frac{ f ( z_i , z_j ) }{ f ( z_i , z_j ) + \sum_{v_k \notin N_i} f ( z_i , z_k ) } \Big) = \mathcal{L}_1 \qquad (4)$$

where $f ( z_i , z_k ) = e^{-\| z_i - z_k \|^2}$ . Here , we apply Jensen ’ s inequality to derive Eq . 3 as an upper bound of Eq . 2 since log ( · ) is concave . We observe that $\mathcal{L}_1$ is an upper bound of PairNorm and we could interpret the two regularization terms $\| z_i - z_j \|^2$ and $\| z_i - z_k \|^2$ of PairNorm as a special case of a contrastive learning loss term in $\mathcal{L}_1$ by setting the similarity measurement function $f ( z_i , z_k )$ to be $e^{-\| z_i - z_k \|^2}$ . However , both PairNorm ( Eq .
1 ) and the upper bound of PairNorm ( Eq . 4 ) only consider the first-order neighbor information but neglect the K-hop neighbor information . For example , in a real-world scenario , we are given a remote pair ( vk , vi ) . It is highly possible that vk and vi have similar representations if they share the same label information . However , simply minimizing the third term of PairNorm ( i.e. , $-\| z_i - z_k \|^2$ ) will push zi away from zk , resulting in a sub-optimal solution . In addition , if we are given two remote pairs ( vk1 , vi ) and ( vk2 , vi ) such that node vk1 is far from node vi and node vk2 is near node vi ( e.g. , a 2-hop neighbor ) , the weights imposed on these two remote pairs should be different , as we expect that zk1 should be more different from zi than zk2 due to the topological information in the graph . However , PairNorm and L1 ( Eq . 4 ) assume that all unconnected node pairs ( zi and zk ) have the same weight by setting the weights to be 1 for neighbor pairs and remote pairs . Therefore , if the K-hop neighbors of zi share the same topological structure of zi or the same label information , pushing zi away from the representations of its K-hop neighbors ( K > 1 ) and ignoring the different weights for different remote pairs will result in a sub-optimal solution . Motivated by these observations , we propose to utilize the similarity of the two adjacency vectors of each node pair and embed the global topological structure information into the representation of each node such that GNNs can derive better discriminative representations for all nodes . | In this work , the authors analyzed the oversmoothing issue in GNN learning . To compare with the related works , the authors proposed three metrics : divergence indicator , the intuition to set its value , and the ease of adoption . The authors proposed a new method , Topology-guided Graph Contrastive Layer , which achieves de-oversmoothing while maintaining the proposed metrics .
A lower-bound analysis of the TGCL loss was provided , and the empirical experiments showed an advantage on four graph datasets . | SP:34a76b67449b4cd7b4908b6819d73e40cd670c52
Tackling Oversmoothing of GNNs with Contrastive Learning | 1 INTRODUCTION . Combining the comprehensive relations in graph data with the representation learning ability of neural network models , graph neural networks ( GNNs ) achieve state-of-the-art performances in many real-world applications , such as document classification , natural language processing , computer vision , and recommender systems ( Zhang et al. , 2019 ) . GNNs consist of many variant neural network models with different message-passing mechanisms , to name a few , such as GCN ( Kipf & Welling , 2017 ) , GraphSAGE ( Hamilton et al. , 2017 ) , GAT ( Velickovic et al. , 2018 ) , GIN ( Xu et al. , 2019 ) , and GMNN ( Qu et al. , 2019 ) . In the complex real-world settings of applying GNNs , not every node is lucky enough to have node labels and/or node features . Hence , increasing the depth ( i.e. , the number of layers ) of GNNs is a viable solution to capture more latent knowledge to reduce the uncertainty caused by missing values ( Zhao & Akoglu , 2020 ) . However , as the number of layers increases , the performance of GNNs will decrease to a large degree ( Kipf & Welling , 2017 ) . The reasons may come from many aspects after involving more parameters , like vanishing gradients , overfitting , and oversmoothing . Compared with the first two reasons , oversmoothing of GNNs is recently introduced ( Li et al. , 2018 ; Oono & Suzuki , 2020 ) and widely discussed ( Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) . It is the phenomenon that the learned node representations become indistinguishable as the number of hidden layers increases , thus hurting the performance of downstream tasks like node classification and link prediction . To tackle the oversmoothing problem of GNNs , some nascent research works have been proposed ( Klicpera et al. , 2019 ; Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al.
, 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) . They share the same logic ( i.e. , keeping the divergence between nodes ) but differ in specific methodologies ( i.e. , rescaling divergences of learned representations ( Zhao & Akoglu , 2020 ) , adding the divergence regularizer in the learning process ( Chen et al. , 2020a ; Zhou et al. , 2020 ) , changing input graph structures ( Chen et al. , 2020a ; Rong et al. , 2020 ; Chen et al. , 2020b ) , or personalizing the information aggregation for each specific node ( Klicpera et al. , 2019 ; Liu et al. , 2020 ) ) . Despite their good performance , some drawbacks still exist in those mentioned solutions . By surveying these SOTA de-oversmoothing strategies , we summarize three major metrics to evaluate a de-oversmoothing strategy : 1 ) constant divergence indicator , 2 ) easy-to-determine divergence indicator , and 3 ) model-agnostic de-oversmoothing strategy . ( The detailed discussion could be found in Section 2 ) . We find that no prevalent de-oversmoothing method for GNNs could maintain all of them . To bridge this gap , we propose a Topology-guided Graph Contrastive Layer ( TGCL ) with inspiration from the contrastive learning concept ( van den Oord et al. , 2018 ) , where we contrast node topological information to obtain discriminative node representations after many GNN layers . TGCL is the first de-oversmoothing strategy attempting to maintain all three mentioned metrics . Specifically , we set a constant and easy-to-determine divergence indicator between nodes , which is purely based on the topology of the input graph . With this divergence indicator , we aim to guide latent representations of neighbor node pairs closer and non-neighbor node pairs farther apart to mitigate the oversmoothing of GNNs . Last but not least , the proposed TGCL is model-agnostic , which means TGCL could be incorporated in multiple GNN models .
With theoretical proof and empirical analysis , we show that the proposed TGCL can alleviate the oversmoothing problem of GNNs to a large extent . Our contributions can be summarized as follows : • We survey current de-oversmoothing methods by analyzing the advantages and disadvantages of each method and summarize three metrics for evaluating a de-oversmoothing method for GNNs . • We propose a topology-guided graph contrastive layer named TGCL to tackle the oversmoothing problem of GNNs , which enjoys all three metrics simultaneously . • We show the effectiveness of the proposed TGCL both theoretically and empirically with extensive experiments . The rest of this paper is organized as follows . After a brief survey of de-oversmoothing methods in Section 2 , we introduce the proposed TGCL in Section 3 . The empirical evaluation of the proposed TGCL on real-world datasets is presented in Section 4 . Then , we review the related work in Section 5 before we conclude the paper in Section 6 . 2 BACKGROUND . As mentioned above , de-oversmoothing methods ( Klicpera et al. , 2019 ; Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) share the same logic of keeping the divergence between node representations but differ in specific methodologies focusing on different merits . By taking the union of the metrics used in different state-of-the-art methods , we obtain three metrics to evaluate a de-oversmoothing algorithm comprehensively . These three metrics , shown in Table 1 , are the constant divergence indicator , the easy-to-determine divergence indicator , and the model-agnostic de-oversmoothing strategy . A divergence indicator is indispensable for guiding the final node representation similarity based on the specified distance measurement . Several de-oversmoothing methods like ( Klicpera et al. , 2019 ; Zhao & Akoglu , 2020 ; Chen et al. , 2020b ; Liu et al.
, 2020 ) achieve a constant divergence indicator , which means the guidance is much more robust and does not depend on the training process of GNNs . However , to guide the node representation similarity reasonably , the divergence indicator is not easy to determine . For example , PairNorm ( Zhao & Akoglu , 2020 ) is proposed as a normalization layer that keeps the divergence of the node representations in line with that of the original node features . Instead of adding this regularizer directly to the learning objective of GNN models , PairNorm takes an alternative route by rescaling the learned node representations with a constant hyperparameter to keep the original node feature divergence . PairNorm achieves two metrics : a constant divergence indicator ( i.e. , the constant hyperparameter ) and a model-agnostic strategy ( i.e. , PairNorm can be added to different GNN models as a layer ) . However , the selection of that constant hyperparameter heavily depends on prior knowledge of the input graph data , which is hard to determine . ( The discussion of other de-oversmoothing methods can be found in Section 5 . ) As shown in Table 1 , PairNorm is an effective de-oversmoothing method that maintains two metrics but needs prior knowledge to scale the divergence between node pairs . Our proposed TGCL , in contrast , transfers this hard-to-acquire prior knowledge into the topology information of the input graph , where the divergence guidance between nodes is constant and easy to determine . To be specific , our TGCL is the first de-oversmoothing method attempting to maintain these three metrics at the same time . In the next section , we formally introduce the proposed TGCL with a theoretical proof of the model effectiveness . Moreover , we prove that the objective of PairNorm is just a special case of our TGCL , which shows the effectiveness of our TGCL from another perspective . 3 PROPOSED METHOD . In this section , we begin with the notation used in this paper .
Then , we prove that the objective of the de-oversmoothing model PairNorm ( Zhao & Akoglu , 2020 ) is just a special case of our Topology-guided Graph Contrastive Layer ( TGCL ) . After analyzing the limitations of PairNorm , we formally introduce our proposed TGCL and show why it can better alleviate the oversmoothing issue in a contrastive learning manner . 3.1 NOTATION . Throughout this paper , we use regular letters to denote scalars ( e.g. , α ) , boldface lowercase letters to denote vectors ( e.g. , v ) , and boldface uppercase letters to denote matrices ( e.g. , A ) . We formalize the graph mining problem in the context of an undirected graph G = ( V , E , X ) , where V consists of n vertices , E consists of m edges , X ∈ R^{n×d} denotes the feature matrix , and d is the feature dimension . We let A ∈ R^{n×n} denote the adjacency matrix , D ∈ R^{n×n} denote the diagonal matrix of vertex degrees , and I ∈ R^{n×n} denote the identity matrix . For ease of explanation , we denote v_i as node i , x_i as the input feature of node i , z_i as the embedding of node i produced by any type of GNN , and A_i as the adjacency vector of node i . N_i is the set containing the neighbors of node i , and N̄_i is the complement of N_i , which contains the non-neighbors of node i . 3.2 PRELIMINARY . Each graph convolutional layer can be understood as a smoothing operation ( Li et al. , 2018 ) , but stacking many layers renders the final representation of a node indistinguishable from the others . Therefore , how to recover the divergence between node representations while preserving the shared information becomes a vital problem in graph mining . In PairNorm ( Zhao & Akoglu , 2020 ) , the divergence between node pairs is based on a hyperparameter , which requires prior knowledge of the input graph data and is hard to acquire .
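As a concrete illustration of the notation above, here is a minimal sketch (the toy graph and all variable names are ours, purely for illustration):

```python
import numpy as np

# Toy undirected graph G = (V, E, X): n = 4 nodes, m = 3 edges, d = 2 features.
edges = [(0, 1), (1, 2), (2, 3)]
n, d = 4, 2
X = np.arange(n * d, dtype=float).reshape(n, d)   # feature matrix, n x d

A = np.zeros((n, n))                              # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))                        # diagonal degree matrix
I = np.eye(n)                                     # identity matrix

A_1 = A[1]                                        # adjacency vector of node 1
N_1 = set(np.flatnonzero(A[1]))                   # neighbors of node 1: {0, 2}
N_bar_1 = set(range(n)) - N_1 - {1}               # non-neighbors of node 1: {3}
```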
More specifically , PairNorm is proposed as a novel normalization layer that prevents all node embeddings from becoming too similar by minimizing the following objective :

L_p = ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{(i,j) ∈ E} ‖z_i − z_j‖² − ∑_{(i,k) ∉ E} ‖z_i − z_k‖²    (1)

where z_i is the node embedding vector of node v_i and x_i is the original feature vector of node v_i . In the equation above , the first term is the reconstruction error , the second term is responsible for minimizing the difference between the two representations of a neighbor node pair , and the last term aims to maximize the difference between the two representations of a remote node pair . By reformulating Eq . 1 , we can derive an upper bound of L_p in the form of a contrastive learning loss as follows :

L_p = ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} ‖z_i − z_j‖² − ∑_{v_i ∈ V} ∑_{v_k ∉ N_i} ‖z_i − z_k‖²
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} ) + ∑_{v_i ∈ V} ∑_{v_k ∉ N_i} log ( e^{−‖z_i − z_k‖²} )    (2)
    ≤ ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} ) + ∑_{v_i ∈ V} log ( ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} )    (3)
    ≤ ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} ) + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} )
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} / e^{−‖z_i − z_j‖²} )
    ≤ ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( 1 + ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} / e^{−‖z_i − z_j‖²} )
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} / ( e^{−‖z_i − z_j‖²} + ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} ) )
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( f ( z_i , z_j ) / ( f ( z_i , z_j ) + ∑_{v_k ∉ N_i} f ( z_i , z_k ) ) ) = L_1    (4)

where f ( z_i , z_k ) = e^{−‖z_i − z_k‖²} . Here , we apply Jensen 's inequality to derive Eq . 3 as an upper bound of Eq . 2 since log ( · ) is concave . We observe that L_1 is an upper bound of the PairNorm objective , and we can interpret the two regularization terms ‖z_i − z_j‖² and ‖z_i − z_k‖² of PairNorm as a special case of a contrastive learning loss term in L_1 by setting the similarity measurement function f ( z_i , z_k ) to be e^{−‖z_i − z_k‖²} . However , both PairNorm ( Eq .
1 ) and the upper bound of PairNorm ( Eq . 4 ) only consider the first-order neighbor information but neglect the K-hop neighbor information . For example , in a real-world scenario , we are given a remote pair ( v_k , v_i ) . It is highly possible that v_k and v_i have similar representations if they share the same label information . However , simply minimizing the third term of PairNorm ( i.e. , −‖z_i − z_k‖² ) will push z_i away from z_k , resulting in a sub-optimal solution . In addition , if we are given two remote pairs ( v_{k1} , v_i ) and ( v_{k2} , v_i ) such that node v_{k1} is far from node v_i and node v_{k2} is near node v_i ( e.g. , a 2-hop neighbor ) , the weights imposed on these two remote pairs should be different , as we expect z_{k1} to differ more from z_i than z_{k2} does , due to the topological information in the graph . However , PairNorm and L_1 ( Eq . 4 ) assume that all unconnected node pairs ( z_i and z_k ) have the same weight , setting the weight to 1 for both neighbor pairs and remote pairs . Therefore , if the K-hop neighbors of z_i share the same topological structure as z_i or the same label information , pushing z_i away from the representations of its K-hop neighbors ( K > 1 ) and ignoring the different weights for different remote pairs will result in a sub-optimal solution . Motivated by these observations , we propose to utilize the similarity of the two adjacency vectors of each node pair and embed the global topological structure information into the representation of each node , such that GNNs can derive better discriminative representations for all nodes . | This paper proposes a method named TGCL which aims to deal with the oversmoothing problem in GNNs as the number of layers grows . Different from existing works , the proposed TGCL considers three different concepts : the Constant Divergence Indicator , the Easy-to-Determine Divergence Indicator , and the Model-Agnostic Strategy . Most existing works consider only two of them .
The proposed method can be recognized as an improvement of PairNorm . | SP:34a76b67449b4cd7b4908b6819d73e40cd670c52 |
Tackling Oversmoothing of GNNs with Contrastive Learning | 1 INTRODUCTION . Combining the comprehensive relational structure of graph data with the representation learning ability of neural network models , graph neural networks ( GNNs ) achieve state-of-the-art performance in many real-world applications , such as document classification , natural language processing , computer vision , and recommender systems ( Zhang et al. , 2019 ) . GNNs comprise many neural network variants with different message-passing mechanisms , to name a few , GCN ( Kipf & Welling , 2017 ) , GraphSAGE ( Hamilton et al. , 2017 ) , GAT ( Velickovic et al. , 2018 ) , GIN ( Xu et al. , 2019 ) , and GMNN ( Qu et al. , 2019 ) . In the complex real-world settings where GNNs are applied , not every node is lucky enough to have node labels and/or node features . Hence , increasing the depth ( i.e. , the number of layers ) of GNNs is a viable way to capture more latent knowledge and reduce the uncertainty caused by missing values ( Zhao & Akoglu , 2020 ) . However , as the number of layers increases , the performance of GNNs degrades considerably ( Kipf & Welling , 2017 ) . The causes may be manifold once more parameters are involved , including vanishing gradients , overfitting , and oversmoothing . Compared with the first two , oversmoothing of GNNs was only recently introduced ( Li et al. , 2018 ; Oono & Suzuki , 2020 ) and has been widely discussed ( Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) . It is the phenomenon that the learned node representations become indistinguishable as the number of hidden layers increases , thus hurting the performance of downstream tasks such as node classification and link prediction . To tackle the oversmoothing problem of GNNs , several nascent research works have been proposed ( Klicpera et al. , 2019 ; Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al.
, 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) . They share the same logic ( i.e. , keeping the divergence between nodes ) but differ in specific methodologies ( i.e. , rescaling divergences of learned representations ( Zhao & Akoglu , 2020 ) , adding a divergence regularizer to the learning process ( Chen et al. , 2020a ; Zhou et al. , 2020 ) , changing input graph structures ( Chen et al. , 2020a ; Rong et al. , 2020 ; Chen et al. , 2020b ) , or personalizing the information aggregation for each specific node ( Klicpera et al. , 2019 ; Liu et al. , 2020 ) ) . Despite their good performance , these solutions still have drawbacks . By surveying these SOTA de-oversmoothing strategies , we summarize three major metrics for evaluating a de-oversmoothing strategy : 1 ) constant divergence indicator , 2 ) easy-to-determine divergence indicator , and 3 ) model-agnostic de-oversmoothing strategy . ( The detailed discussion can be found in Section 2 . ) We find that no prevalent de-oversmoothing method for GNNs maintains all of them . To bridge this gap , we propose a Topology-guided Graph Contrastive Layer ( TGCL ) inspired by the contrastive learning concept ( van den Oord et al. , 2018 ) , where we contrast node topological information to obtain discriminative node representations after many GNN layers . TGCL is the first de-oversmoothing strategy attempting to maintain all three mentioned metrics . Specifically , we set a constant and easy-to-determine divergence indicator between nodes , which is purely based on the topology of the input graph . With this divergence indicator , we aim to guide latent representations of neighbor node pairs closer and non-neighbor node pairs farther apart to mitigate the oversmoothing of GNNs . Last but not least , the proposed TGCL is model-agnostic , which means TGCL can be incorporated into multiple GNN models .
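One simple way to realize such a topology-based divergence indicator is the cosine similarity between adjacency vectors, which is constant for a given graph and trivial to compute. This is an illustrative sketch under our own assumptions; the paper's exact similarity function may differ:

```python
import numpy as np

def topology_similarity(A, i, j):
    """Cosine similarity between adjacency vectors A_i and A_j.

    A constant, easy-to-determine quantity: it depends only on the input
    graph topology, never on the GNN's training state.  (Illustrative
    choice; the paper's exact similarity function may differ.)
    """
    ai, aj = A[i], A[j]
    denom = np.linalg.norm(ai) * np.linalg.norm(aj)
    return float(ai @ aj / denom) if denom > 0 else 0.0

# 4-node path graph 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Nodes 1 and 3 share neighbor 2, so their indicator is higher than that
# of nodes 0 and 3, which share no neighbors at all.
s_13 = topology_similarity(A, 1, 3)
s_03 = topology_similarity(A, 0, 3)
```

Such a score could then weight how strongly each non-neighbor pair is pushed apart, instead of the uniform weight used by PairNorm.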
With theoretical proof and empirical analysis , we show that the proposed TGCL can alleviate the oversmoothing problem of GNNs to a large extent . Our contributions can be summarized as follows : • We survey current de-oversmoothing methods by analyzing the advantages and disadvantages of each method and summarize three metrics to evaluate a de-oversmoothing method for GNNs . • We propose a topology-guided graph contrastive layer named TGCL to tackle the oversmoothing problem of GNNs , which enjoys all three metrics simultaneously . • We show the effectiveness of the proposed TGCL both theoretically and empirically with extensive experiments . The rest of this paper is organized as follows . After a brief survey of de-oversmoothing methods in Section 2 , we introduce the proposed TGCL in Section 3 . The empirical evaluation of the proposed TGCL on real-world datasets is presented in Section 4 . Then , we review the related work in Section 5 before we conclude the paper in Section 6 . 2 BACKGROUND . As mentioned above , de-oversmoothing methods ( Klicpera et al. , 2019 ; Chen et al. , 2020a ; Zhao & Akoglu , 2020 ; Rong et al. , 2020 ; Chen et al. , 2020b ; Liu et al. , 2020 ; Zhou et al. , 2020 ) share the same logic of keeping the divergence between node representations but differ in specific methodologies focusing on different merits . By taking the union of the metrics used in different state-of-the-art methods , we obtain three metrics to evaluate a de-oversmoothing algorithm comprehensively . These three metrics , shown in Table 1 , are the constant divergence indicator , the easy-to-determine divergence indicator , and the model-agnostic de-oversmoothing strategy . A divergence indicator is indispensable for guiding the final node representation similarity based on the specified distance measurement . Several de-oversmoothing methods like ( Klicpera et al. , 2019 ; Zhao & Akoglu , 2020 ; Chen et al. , 2020b ; Liu et al.
, 2020 ) achieve a constant divergence indicator , which means the guidance is much more robust and does not depend on the training process of GNNs . However , to guide the node representation similarity reasonably , the divergence indicator is not easy to determine . For example , PairNorm ( Zhao & Akoglu , 2020 ) is proposed as a normalization layer that keeps the divergence of the node representations in line with that of the original node features . Instead of adding this regularizer directly to the learning objective of GNN models , PairNorm takes an alternative route by rescaling the learned node representations with a constant hyperparameter to keep the original node feature divergence . PairNorm achieves two metrics : a constant divergence indicator ( i.e. , the constant hyperparameter ) and a model-agnostic strategy ( i.e. , PairNorm can be added to different GNN models as a layer ) . However , the selection of that constant hyperparameter heavily depends on prior knowledge of the input graph data , which is hard to determine . ( The discussion of other de-oversmoothing methods can be found in Section 5 . ) As shown in Table 1 , PairNorm is an effective de-oversmoothing method that maintains two metrics but needs prior knowledge to scale the divergence between node pairs . Our proposed TGCL , in contrast , transfers this hard-to-acquire prior knowledge into the topology information of the input graph , where the divergence guidance between nodes is constant and easy to determine . To be specific , our TGCL is the first de-oversmoothing method attempting to maintain these three metrics at the same time . In the next section , we formally introduce the proposed TGCL with a theoretical proof of the model effectiveness . Moreover , we prove that the objective of PairNorm is just a special case of our TGCL , which shows the effectiveness of our TGCL from another perspective . 3 PROPOSED METHOD . In this section , we begin with the notation used in this paper .
Then , we prove that the objective of the de-oversmoothing model PairNorm ( Zhao & Akoglu , 2020 ) is just a special case of our Topology-guided Graph Contrastive Layer ( TGCL ) . After analyzing the limitations of PairNorm , we formally introduce our proposed TGCL and show why it can better alleviate the oversmoothing issue in a contrastive learning manner . 3.1 NOTATION . Throughout this paper , we use regular letters to denote scalars ( e.g. , α ) , boldface lowercase letters to denote vectors ( e.g. , v ) , and boldface uppercase letters to denote matrices ( e.g. , A ) . We formalize the graph mining problem in the context of an undirected graph G = ( V , E , X ) , where V consists of n vertices , E consists of m edges , X ∈ R^{n×d} denotes the feature matrix , and d is the feature dimension . We let A ∈ R^{n×n} denote the adjacency matrix , D ∈ R^{n×n} denote the diagonal matrix of vertex degrees , and I ∈ R^{n×n} denote the identity matrix . For ease of explanation , we denote v_i as node i , x_i as the input feature of node i , z_i as the embedding of node i produced by any type of GNN , and A_i as the adjacency vector of node i . N_i is the set containing the neighbors of node i , and N̄_i is the complement of N_i , which contains the non-neighbors of node i . 3.2 PRELIMINARY . Each graph convolutional layer can be understood as a smoothing operation ( Li et al. , 2018 ) , but stacking many layers renders the final representation of a node indistinguishable from the others . Therefore , how to recover the divergence between node representations while preserving the shared information becomes a vital problem in graph mining . In PairNorm ( Zhao & Akoglu , 2020 ) , the divergence between node pairs is based on a hyperparameter , which requires prior knowledge of the input graph data and is hard to acquire .
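The smoothing interpretation above is easy to verify numerically: repeatedly applying the row-normalized aggregation D^{-1}(A + I), a simplified stand-in for stacked GCN layers (weights and nonlinearities omitted), collapses node features toward a common value. A toy sketch:

```python
import numpy as np

# Path graph 0 - 1 - 2 - 3 with self-loops; one step of Z <- D^-1 (A + I) Z
# is mean aggregation over {i} ∪ N_i, i.e. the smoothing performed by a
# simplified graph convolutional layer.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)
P = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalized: D^-1 (A + I)

Z = np.array([[0.0], [1.0], [2.0], [3.0]])     # initial 1-d node features
spread_before = float(Z.max() - Z.min())       # 3.0

for _ in range(50):                            # a "very deep" stack of layers
    Z = P @ Z

spread_after = float(Z.max() - Z.min())        # near 0: nodes indistinguishable
```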
More specifically , PairNorm is proposed as a novel normalization layer that prevents all node embeddings from becoming too similar by minimizing the following objective :

L_p = ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{(i,j) ∈ E} ‖z_i − z_j‖² − ∑_{(i,k) ∉ E} ‖z_i − z_k‖²    (1)

where z_i is the node embedding vector of node v_i and x_i is the original feature vector of node v_i . In the equation above , the first term is the reconstruction error , the second term is responsible for minimizing the difference between the two representations of a neighbor node pair , and the last term aims to maximize the difference between the two representations of a remote node pair . By reformulating Eq . 1 , we can derive an upper bound of L_p in the form of a contrastive learning loss as follows :

L_p = ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} ‖z_i − z_j‖² − ∑_{v_i ∈ V} ∑_{v_k ∉ N_i} ‖z_i − z_k‖²
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} ) + ∑_{v_i ∈ V} ∑_{v_k ∉ N_i} log ( e^{−‖z_i − z_k‖²} )    (2)
    ≤ ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} ) + ∑_{v_i ∈ V} log ( ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} )    (3)
    ≤ ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} ) + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} )
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} / e^{−‖z_i − z_j‖²} )
    ≤ ∑_{v_i ∈ V} ‖z_i − x_i‖² + ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( 1 + ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} / e^{−‖z_i − z_j‖²} )
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( e^{−‖z_i − z_j‖²} / ( e^{−‖z_i − z_j‖²} + ∑_{v_k ∉ N_i} e^{−‖z_i − z_k‖²} ) )
    = ∑_{v_i ∈ V} ‖z_i − x_i‖² − ∑_{v_i ∈ V} ∑_{v_j ∈ N_i} log ( f ( z_i , z_j ) / ( f ( z_i , z_j ) + ∑_{v_k ∉ N_i} f ( z_i , z_k ) ) ) = L_1    (4)

where f ( z_i , z_k ) = e^{−‖z_i − z_k‖²} . Here , we apply Jensen 's inequality to derive Eq . 3 as an upper bound of Eq . 2 since log ( · ) is concave . We observe that L_1 is an upper bound of the PairNorm objective , and we can interpret the two regularization terms ‖z_i − z_j‖² and ‖z_i − z_k‖² of PairNorm as a special case of a contrastive learning loss term in L_1 by setting the similarity measurement function f ( z_i , z_k ) to be e^{−‖z_i − z_k‖²} . However , both PairNorm ( Eq .
1 ) and the upper bound of PairNorm ( Eq . 4 ) only consider the first-order neighbor information but neglect the K-hop neighbor information . For example , in a real-world scenario , we are given a remote pair ( v_k , v_i ) . It is highly possible that v_k and v_i have similar representations if they share the same label information . However , simply minimizing the third term of PairNorm ( i.e. , −‖z_i − z_k‖² ) will push z_i away from z_k , resulting in a sub-optimal solution . In addition , if we are given two remote pairs ( v_{k1} , v_i ) and ( v_{k2} , v_i ) such that node v_{k1} is far from node v_i and node v_{k2} is near node v_i ( e.g. , a 2-hop neighbor ) , the weights imposed on these two remote pairs should be different , as we expect z_{k1} to differ more from z_i than z_{k2} does , due to the topological information in the graph . However , PairNorm and L_1 ( Eq . 4 ) assume that all unconnected node pairs ( z_i and z_k ) have the same weight , setting the weight to 1 for both neighbor pairs and remote pairs . Therefore , if the K-hop neighbors of z_i share the same topological structure as z_i or the same label information , pushing z_i away from the representations of its K-hop neighbors ( K > 1 ) and ignoring the different weights for different remote pairs will result in a sub-optimal solution . Motivated by these observations , we propose to utilize the similarity of the two adjacency vectors of each node pair and embed the global topological structure information into the representation of each node , such that GNNs can derive better discriminative representations for all nodes . | The paper proposes a new layer , TGCL , for GNNs which is used to tackle the oversmoothing problem . This model satisfies three properties : a constant divergence indicator , an easy-to-determine divergence indicator , and a model-agnostic strategy . They show their model alleviates the oversmoothing problem through experiments on 4 datasets . | SP:34a76b67449b4cd7b4908b6819d73e40cd670c52 |
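To make the PairNorm objective (Eq. 1) and its contrastive upper bound L1 (Eq. 4) concrete, both can be transcribed directly on a toy graph. A sketch (variable names are ours; the O(n^2) loops are for clarity, not efficiency):

```python
import numpy as np

def pairnorm_loss(Z, X, A):
    """PairNorm objective Lp (Eq. 1), written with the directed sums over
    neighbors / non-neighbors used from Eq. 2 onward."""
    n = len(Z)
    recon = sum(np.sum((Z[i] - X[i]) ** 2) for i in range(n))
    attract = sum(np.sum((Z[i] - Z[j]) ** 2)
                  for i in range(n) for j in range(n) if A[i, j])
    repel = sum(np.sum((Z[i] - Z[k]) ** 2)
                for i in range(n) for k in range(n) if k != i and not A[i, k])
    return recon + attract - repel

def contrastive_bound(Z, X, A):
    """Upper bound L1 (Eq. 4) with similarity f(zi, zk) = exp(-||zi - zk||^2)."""
    f = lambda u, v: np.exp(-np.sum((u - v) ** 2))
    n = len(Z)
    recon = sum(np.sum((Z[i] - X[i]) ** 2) for i in range(n))
    nce = 0.0
    for i in range(n):
        negs = sum(f(Z[i], Z[k]) for k in range(n) if k != i and not A[i, k])
        for j in range(n):
            if A[i, j]:   # one InfoNCE-style term per neighbor pair
                nce -= np.log(f(Z[i], Z[j]) / (f(Z[i], Z[j]) + negs))
    return recon + nce

# Toy 4-node path graph 0 - 1 - 2 - 3 with identical embeddings.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
Z = np.full((4, 2), 0.5)
Lp = pairnorm_loss(Z, X, A)
L1 = contrastive_bound(Z, X, A)   # L1 >= Lp, as the derivation above states
```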
Learning to Map for Active Semantic Goal Navigation | 1 INTRODUCTION . What enables biological systems to successfully navigate to semantic targets in novel environments ? Consider the example of a dog whose food tray at its own house is situated next to the fridge . Upon entering a new house for the first time , the dog will look for its food tray next to the fridge , even though the new house can largely differ in appearance and layout . This is remarkable , as it suggests that the dog is able to encode spatial associations between semantic entities that can be leveraged when trying to accomplish a navigation task . Humans exhibit the same skills in similar scenarios , albeit more nuanced , since given existing observations we can consciously choose to trust our prior knowledge over the semantic structure of the world , or to continue exploring the environment . In other words , if we have a partial view of a room containing an oven , we can infer that a fridge most likely exists in the unobserved space . In addition , if we are trying to reach the sofa , then we can infer with high certainty that it will be located in a different room . This implies that we have internal mechanisms for quantifying the uncertainty of inferred information from unobserved spaces , which guides our decision making process . Inspired by these observations , in this work , we study the problem of object goal navigation for robotic agents in unseen environments and propose an active learning method for encoding semantic priors in indoor scenes . Our approach involves learning a mapping model than can predict ( hallucinate ) semantics in unobserved regions of the map containing both objects ( e.g . chairs , beds ) and structures ( e.g . floor , wall ) , and during testing uses the uncertainty over these predictions to plan a path towards the target . Contrary to traditional approaches for mapping and navigation Cadena et al . ( 2016 ) ( i.e . 
SLAM ) where the focus is on building accurate 3D metric maps , our uncertainty formulation is designed to capture our lack of confidence about whether a certain object exists at a particular location . This results in a much more meaningful representation , suitable for target-driven tasks . Recently , learned approaches to navigation have been gaining popularity , where initial efforts in addressing target-driven navigation focused on end-to-end reactive approaches that learn to map pixels directly to actions Zhu et al . ( 2017 ) ; Mousavian et al . ( 2019 ) . These methods do not have an explicit representation of the environment and tend to suffer from poor generalization . To remedy this issue , most current methods learn a map representation that enables the encoding of prior information about the geometry and semantics of a scene , acting as an episodic memory Chaplot et al . ( 2020b ; a ) ; Gupta et al . ( 2017 ) ; Georgakis et al . ( 2019 ) . However , maps created by these methods are restricted to contain information only from areas that the agent has directly observed , which led to the introduction of spatial prediction models that either anticipate occupancy Ramakrishnan et al . ( 2020 ) or room layouts Narasimhan et al . ( 2020 ) beyond the agent ’ s field of view and demonstrated improved performance on navigation tasks . Our work differs from these methods in three principled ways : 1 ) We formulate an active training strategy for learning the semantic maps , 2 ) we exploit the uncertainty over the predictions in the planning process , and 3 ) in contrast to predicting occupancy , our model tackles a harder problem which requires learning semantic patterns ( e.g . tables are usually surrounded by chairs ) . In this work we introduce Learning to Map ( L2M ) , a novel framework for object-goal navigation consisting of two parts . 
First , we actively learn an ensemble of two-stage segmentation models by choosing training samples through an information gain objective . The models operate on top-down maps and predict both occupancy and semantic regions . Second , we estimate the model uncertainty through the disagreement formulation of ensemble models from Pathak et al . ( 2019 ) , and show its effectiveness in defining objectives in planners to actively select long-term goals for semantic navigation . In addition , we investigate different information gain objectives during active training and illustrate how the use of model uncertainty can balance exploration with exploitation in finding semantic targets . Our proposed approach demonstrates improved success rates on the object-goal navigation task over competitive baselines in the Matterport3D Chang et al . ( 2017 ) dataset using the Habitat Savva et al . ( 2019 ) simulator . 2 RELATED WORK . Semantic SLAM . Classical approaches for navigation focus on building 3D representations of the environment before considering downstream tasks Cadena et al . ( 2016 ) . While these methods are typically geometric , several SLAM methods have attempted to associate semantic information to the reconstructed geometric map , mainly at the object-level Salas-Moreno et al . ( 2013 ) ; Yang & Scherer ( 2019 ) ; McCormac et al . ( 2018 ) ; Bowman et al . ( 2017 ) ; Kostavelis & Gasteratos ( 2015 ) . For example , in McCormac et al . ( 2018 ) instance segmentations predicted by Mask R-CNN are incorporated to facilitate per-object reconstructions , while the work of Bowman et al . ( 2017 ) proposes a probabilistic formulation to address uncertain object data association . However , SLAM systems rarely consider active exploration as they are not naturally compatible with task-driven learnable representations from deep learning architectures that can encode semantic information . Other recent works Katsumata et al . ( 2020 ) ; Cartillier et al . 
( 2020 ) have sought to build 2D semantic maps and either focused on semantic transfer of a global scene or assumed the environments were accessible beforehand . In contrast , our proposed approach tackles the object goal task in unknown environments by actively learning how to predict semantics in both observed and unobserved areas of the map around the agent . Learning based navigation methods . There has been a recent surge of learning based methods Zhu et al . ( 2017 ) ; Mousavian et al . ( 2019 ) ; Gupta et al . ( 2017 ) ; Chen et al . ( 2019 ) ; Chaplot et al . ( 2020b ) ; Fang et al . ( 2019 ) ; Yang et al . ( 2018 ) ; Ye et al . ( 2021 ) ; Zhang et al . ( 2021 ) ; Chattopadhyay et al . ( 2021 ) for indoor navigation tasks Anderson et al . ( 2018 ) ; Batra et al . ( 2020 ) ; Das et al . ( 2018 ) , propelled by the introduction of high quality simulators such as Gibson Xia et al . ( 2018 ) , Habitat Savva et al . ( 2019 ) , and AI2-THOR Kolve et al . ( 2017 ) . Methods which use explicit task-dependent map representations Parisotto & Salakhutdinov ( 2017 ) ; Gupta et al . ( 2017 ) ; Chaplot et al . ( 2020b ; a ) ; Georgakis et al . ( 2019 ) ; Gordon et al . ( 2018 ) ; Mishkin et al . ( 2019 ) have been shown to generalize better in unknown environments than end-to-end approaches with implicit world representations . For example , in Gupta et al . ( 2017 ) a differentiable mapper learns to predict top-down egocentric views of the scene from RGB images , followed by a differentiable planner , while in Chaplot et al . ( 2020a ) Mask R-CNN is used to build a top-down semantic map of the scene used by a learned policy that predicts goal locations in the map . More conceptually similar to our method are approaches that attempt to encode semantic or layout priors by learning to predict outside the field-of-view of the agent Ramakrishnan et al . ( 2020 ) ; Liang et al . ( 2020 ) ; Narasimhan et al . ( 2020 ) .
In contrast to all these works we formulate an active , target-independent strategy to predict semantic maps and define goal selection objectives . Uncertainty Estimation . Many recent works estimate the uncertainty of deep learning models Gal ( 2016 ) ; Abdar et al . ( 2021 ) . We leverage the approach first proposed by Seung et al . ( 1992 ) to estimate our model 's ( epistemic ) uncertainty with the variance between the outputs of an ensemble of models . This was first used for active exploration in vision-based reinforcement learning by Pathak et al . ( 2019 ) . Maximizing epistemic uncertainty is used as a proxy for maximizing information gain Seung et al . ( 1992 ) ; Pathak et al . ( 2019 ) . We use this uncertainty objective to actively fine-tune our models with a training procedure similar to the active learning approaches of Bucher et al . ( 2020 ) ; Chaplot et al . ( 2020c ) ; Sener & Savarese ( 2017 ) . In particular , Chaplot et al . ( 2020c ) uses this active training method to improve the performance of a semantic segmentation model , leading us to initially hypothesize that analogous results would be possible for our semantic hallucination task . We also use this epistemic uncertainty estimate to construct confidence bounds for our estimated probability distribution , which we use to select goals for target-driven navigation at test time . Both lower Galichet et al . ( 2013 ) and upper Auer et al . ( 2002 ) confidence bound strategies for balancing exploration , exploitation , and safety have been previously proposed in the multi-armed bandit literature and extended for use in MDPs Azar et al . ( 2017 ) and reinforcement learning Chen et al . ( 2017 ) . 3 APPROACH . We present a new framework for object-goal navigation that uses a learned semantic map predictor to select informative goals . In contrast to prior work Ramakrishnan et al . ( 2020 ) ; Chaplot et al .
( 2020a ) , we leverage the predictions outside the field-of-view of the agent to formulate uncertainty-based goal selection policies . Furthermore , we actively collect data for training the map predictor and investigate different information gain objectives . Due to our goal selection policy formulation , our method does not need to be trained specifically to predict goals for every target object , enabling a target-independent learning of the semantic priors . Our method takes as input an RGB-D observation , and predicts semantics in the unobserved areas of the map . This is followed by goal selection based on the estimated uncertainty of the predictions . Finally , a local policy is responsible for reaching the goal . An overview of our pipeline can be seen in Figure 1 . 3.1 SEMANTIC MAP PREDICTION . We describe a method for learning how to map by predicting the semantic information outside the field of view of the agent . We emphasize that this goes beyond traditional mapping ( i.e . accumulating multiple views in an agent ’ s path ) as it relies on prior information encoded as spatial associations between semantic entities in order to hallucinate the missing information . Motivated by the past success of semantic segmentation models in learning contextual information Zhang et al . ( 2018 ) ; Yuan et al . ( 2019 ) , we formulate the semantic map prediction as a two-stage segmentation problem . Our method takes as input an incomplete occupancy region pt ∈ R^{|Co|×h×w} and a ground-projected semantic segmentation ŝt ∈ R^{|Cs|×h×w} at time-step t. The output is a top-down semantic local region m̂t ∈ R^{|Cs|×h×w} , where Co is the set of occupancy classes containing unknown , occupied , and free , Cs is the set of semantic object classes , and h , w are the dimensions of the local crop .
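As a concrete reading of the notation above, the sketch below lays out the input and output tensors of the map predictor with hypothetical sizes (the 27 semantic classes and the 64 × 64 crop are assumptions, not values from the paper):

```python
import numpy as np

# Illustrative shapes for the map predictor's inputs and outputs.
num_occ_classes = 3          # |Co|: unknown, occupied, free
num_sem_classes = 27         # |Cs|: hypothetical number of object classes
h, w = 64, 64                # hypothetical local crop size

# Incomplete egocentric occupancy crop p_t: one score per occupancy
# class and map cell.
p_t = np.zeros((num_occ_classes, h, w))

# Ground-projected single-view semantic segmentation s_hat_t.
s_hat_t = np.zeros((num_sem_classes, h, w))

# Predicted top-down semantic crop m_hat_t: a probability distribution
# over semantic classes at every cell (initialised uniform here).
m_hat_t = np.full((num_sem_classes, h, w), 1.0 / num_sem_classes)

assert m_hat_t.shape == s_hat_t.shape
assert np.allclose(m_hat_t.sum(axis=0), 1.0)  # per-cell distributions
```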
To obtain pt , we use the provided camera intrinsics and depth observation at time t to first get a point cloud , which is then discretized and ground-projected similarly to Ramakrishnan et al . ( 2020 ) . To estimate ŝt , we first train a UNet Ronneberger et al . ( 2015 ) model to predict the semantic segmentation of the RGB observation at time t. All local regions are egocentric , i.e. , the agent is in the middle of the crop looking upwards . Each spatial location in our map ( cell ) has dimensions 10cm × 10cm . The proposed model predicts the hallucinated semantic region in two stages . First , we estimate the missing values for the occupancy crop in the unobserved areas by learning to hallucinate unseen spatial configurations based on what is already observed . Second , given predicted occupancy , we predict the final semantic region m̂t . These steps are realized as two UNet encoder-decoder models , fo that predicts in occupancy space p̂t = fo ( pt ; θo ) , and fs that predicts in semantic space m̂t = fs ( p̂t ⊕ ŝt ; θs ) , where p̂t is the predicted local occupancy crop which includes unobserved regions , ⊕ refers to the concatenation operation , and θo , θs are the randomly initialized weights of the occupancy and semantic networks , respectively . The image segmentation model is trained independently and its ground-projected output ŝt conditions fs on the egocentric single-view observation of the agent . The model is trained end-to-end using pixel-wise cross-entropy losses for both occupancy and semantic classes and predicts a probability distribution over the objects for each map location . We assume that ground-truth semantic information is available such that we can generate egocentric top-down occupancy and semantic examples . This combined objective incentivizes learning to predict plausible semantic regions by having the semantic loss backpropagate gradients that affect both fo and fs .
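The combined objective described above is a sum of pixel-wise cross-entropy losses over occupancy and semantic classes. A minimal numpy sketch of that computation, with random toy predictions and an assumed equal weighting of the two terms:

```python
import numpy as np

def pixelwise_cross_entropy(probs, labels, eps=1e-12):
    """Mean pixel-wise cross-entropy.

    probs:  (C, h, w) predicted per-cell class distributions.
    labels: (h, w) integer ground-truth class per cell.
    """
    c, h, w = probs.shape
    picked = probs[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return float(-np.log(picked + eps).mean())

# Hypothetical combined objective: occupancy loss (on f_o's output)
# plus semantic loss (on f_s's output); equal weighting is an
# assumption here.
rng = np.random.default_rng(0)
occ_probs = rng.dirichlet(np.ones(3), size=(8, 8)).transpose(2, 0, 1)
sem_probs = rng.dirichlet(np.ones(5), size=(8, 8)).transpose(2, 0, 1)
occ_labels = rng.integers(0, 3, size=(8, 8))
sem_labels = rng.integers(0, 5, size=(8, 8))

total_loss = (pixelwise_cross_entropy(occ_probs, occ_labels)
              + pixelwise_cross_entropy(sem_probs, sem_labels))
assert total_loss > 0.0
```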
Also , performing this procedure enables the initial hallucination of unknown areas over a small set of classes Co , before expanding to the more difficult task of predicting semantic object categories Cs . An overview of the semantic map predictor can be seen in Figure 2 . During a navigation episode , the local semantic region is registered to a global map , which is used during planning . Since we predict a probability distribution at each location over the set of objects , the local regions are registered using Bayes ’ theorem . The global map is initialized with a uniform prior probability distribution across all classes . | The paper proposes a method for leveraging semantic predictions in unobserved areas to help agent navigation. The strategy is motivated by how humans leverage priors on house layouts to perform navigation and tasks without having been in the house before. The key idea is to train a semantic segmentation model that can predict plausible class assignments in unseen regions. The uncertainty estimates are then used during navigation to pick goals when trying to reach a particular target category. | SP:ff1cb9d9876aa398771cb4204fecfc32aa14ec4b |
Learning to Map for Active Semantic Goal Navigation | 1 INTRODUCTION . What enables biological systems to successfully navigate to semantic targets in novel environments ? Consider the example of a dog whose food tray at its own house is situated next to the fridge . Upon entering a new house for the first time , the dog will look for its food tray next to the fridge , even though the new house can largely differ in appearance and layout . This is remarkable , as it suggests that the dog is able to encode spatial associations between semantic entities that can be leveraged when trying to accomplish a navigation task . Humans exhibit the same skills in similar scenarios , albeit more nuanced , since given existing observations we can consciously choose to trust our prior knowledge of the semantic structure of the world , or to continue exploring the environment . In other words , if we have a partial view of a room containing an oven , we can infer that a fridge most likely exists in the unobserved space . In addition , if we are trying to reach the sofa , then we can infer with high certainty that it will be located in a different room . This implies that we have internal mechanisms for quantifying the uncertainty of information inferred from unobserved spaces , which guides our decision-making process . Inspired by these observations , in this work , we study the problem of object goal navigation for robotic agents in unseen environments and propose an active learning method for encoding semantic priors in indoor scenes . Our approach involves learning a mapping model that can predict ( hallucinate ) semantics in unobserved regions of the map containing both objects ( e.g . chairs , beds ) and structures ( e.g . floor , wall ) , and during testing uses the uncertainty over these predictions to plan a path towards the target . Contrary to traditional approaches for mapping and navigation Cadena et al . ( 2016 ) ( i.e .
SLAM ) where the focus is on building accurate 3D metric maps , our uncertainty formulation is designed to capture our lack of confidence about whether a certain object exists at a particular location . This results in a much more meaningful representation , suitable for target-driven tasks . Recently , learned approaches to navigation have been gaining popularity , where initial efforts in addressing target-driven navigation focused on end-to-end reactive approaches that learn to map pixels directly to actions Zhu et al . ( 2017 ) ; Mousavian et al . ( 2019 ) . These methods do not have an explicit representation of the environment and tend to suffer from poor generalization . To remedy this issue , most current methods learn a map representation that enables the encoding of prior information about the geometry and semantics of a scene , acting as an episodic memory Chaplot et al . ( 2020b ; a ) ; Gupta et al . ( 2017 ) ; Georgakis et al . ( 2019 ) . However , maps created by these methods are restricted to contain information only from areas that the agent has directly observed , which led to the introduction of spatial prediction models that either anticipate occupancy Ramakrishnan et al . ( 2020 ) or room layouts Narasimhan et al . ( 2020 ) beyond the agent ’ s field of view and demonstrated improved performance on navigation tasks . Our work differs from these methods in three principled ways : 1 ) We formulate an active training strategy for learning the semantic maps , 2 ) we exploit the uncertainty over the predictions in the planning process , and 3 ) in contrast to predicting occupancy , our model tackles a harder problem which requires learning semantic patterns ( e.g . tables are usually surrounded by chairs ) . In this work we introduce Learning to Map ( L2M ) , a novel framework for object-goal navigation consisting of two parts . 
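The first part of the framework described above, actively choosing training samples through an information gain objective over an ensemble, can be sketched as follows. The disagreement score (mean per-class variance across ensemble members) and the top-k selection rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def disagreement(ensemble_probs):
    """Per-cell disagreement of an ensemble of predicted distributions.

    ensemble_probs: (n_models, C, h, w). Returns an (h, w) score map:
    the variance across models, averaged over classes, a common proxy
    for epistemic uncertainty / information gain.
    """
    return ensemble_probs.var(axis=0).mean(axis=0)

def select_samples(candidates, k):
    """Pick the k candidate training crops the ensemble disagrees on
    most (an illustrative stand-in for active data collection)."""
    scores = [float(disagreement(c).mean()) for c in candidates]
    order = np.argsort(scores)[::-1]
    return [int(i) for i in order[:k]]

rng = np.random.default_rng(1)
# Candidate 0: ensemble members agree; candidate 1: they disagree.
agree = np.repeat(rng.dirichlet(np.ones(4), size=(6, 6))
                  .transpose(2, 0, 1)[None], 3, axis=0)
disagree = np.stack([rng.dirichlet(np.ones(4), size=(6, 6)).transpose(2, 0, 1)
                     for _ in range(3)])
chosen = select_samples([agree, disagree], k=1)
assert chosen == [1]
```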
First , we actively learn an ensemble of two-stage segmentation models by choosing training samples through an information gain objective . The models operate on top-down maps and predict both occupancy and semantic regions . Second , we estimate the model uncertainty through the disagreement formulation of ensemble models from Pathak et al . ( 2019 ) , and show its effectiveness in defining objectives in planners to actively select long-term goals for semantic navigation . In addition , we investigate different information gain objectives during active training and illustrate how the use of model uncertainty can balance exploration with exploitation in finding semantic targets . Our proposed approach demonstrates improved success rates on the object-goal navigation task over competitive baselines in the Matterport3D Chang et al . ( 2017 ) dataset using the Habitat Savva et al . ( 2019 ) simulator . 2 RELATED WORK . Semantic SLAM . Classical approaches for navigation focus on building 3D representations of the environment before considering downstream tasks Cadena et al . ( 2016 ) . While these methods are typically geometric , several SLAM methods have attempted to associate semantic information to the reconstructed geometric map , mainly at the object-level Salas-Moreno et al . ( 2013 ) ; Yang & Scherer ( 2019 ) ; McCormac et al . ( 2018 ) ; Bowman et al . ( 2017 ) ; Kostavelis & Gasteratos ( 2015 ) . For example , in McCormac et al . ( 2018 ) instance segmentations predicted by Mask R-CNN are incorporated to facilitate per-object reconstructions , while the work of Bowman et al . ( 2017 ) proposes a probabilistic formulation to address uncertain object data association . However , SLAM systems rarely consider active exploration as they are not naturally compatible with task-driven learnable representations from deep learning architectures that can encode semantic information . Other recent works Katsumata et al . ( 2020 ) ; Cartillier et al . 
( 2020 ) have sought to build 2D semantic maps and focused either on semantic transfer of a global scene or assumed the environments were accessible beforehand . In contrast , our proposed approach tackles the object goal task in unknown environments by actively learning how to predict semantics in both observed and unobserved areas of the map around the agent . Learning based navigation methods . There has been a recent surge of learning based methods Zhu et al . ( 2017 ) ; Mousavian et al . ( 2019 ) ; Gupta et al . ( 2017 ) ; Chen et al . ( 2019 ) ; Chaplot et al . ( 2020b ) ; Fang et al . ( 2019 ) ; Yang et al . ( 2018 ) ; Ye et al . ( 2021 ) ; Zhang et al . ( 2021 ) ; Chattopadhyay et al . ( 2021 ) for indoor navigation tasks Anderson et al . ( 2018 ) ; Batra et al . ( 2020 ) ; Das et al . ( 2018 ) , propelled by the introduction of high quality simulators such as Gibson Xia et al . ( 2018 ) , Habitat Savva et al . ( 2019 ) , and AI2-THOR Kolve et al . ( 2017 ) . Methods which use explicit task-dependent map representations Parisotto & Salakhutdinov ( 2017 ) ; Gupta et al . ( 2017 ) ; Chaplot et al . ( 2020b ; a ) ; Georgakis et al . ( 2019 ) ; Gordon et al . ( 2018 ) ; Mishkin et al . ( 2019 ) have been shown to generalize better in unknown environments than end-to-end approaches with implicit world representations . For example , in Gupta et al . ( 2017 ) a differentiable mapper learns to predict top-down egocentric views of the scene from RGB images , followed by a differentiable planner , while in Chaplot et al . ( 2020a ) Mask R-CNN is used to build a top-down semantic map of the scene used by a learned policy that predicts goal locations in the map . More conceptually similar to our method are approaches that attempt to encode semantic or layout priors by learning to predict outside the field-of-view of the agent Ramakrishnan et al . ( 2020 ) ; Liang et al . ( 2020 ) ; Narasimhan et al . ( 2020 ) .
In contrast to all these works , we formulate an active , target-independent strategy to predict semantic maps and define goal selection objectives . Uncertainty Estimation . Many recent works estimate the uncertainty of deep learning models Gal ( 2016 ) ; Abdar et al . ( 2021 ) . We leverage the approach first proposed by Seung et al . ( 1992 ) to estimate our model ’ s ( epistemic ) uncertainty with the variance between the outputs of an ensemble of models . This was first used for active exploration in vision-based reinforcement learning by Pathak et al . ( 2019 ) . Maximizing epistemic uncertainty is used as a proxy for maximizing information gain Seung et al . ( 1992 ) ; Pathak et al . ( 2019 ) . We use this uncertainty objective to actively fine-tune our models with a training procedure similar to the active learning approaches used during training by Bucher et al . ( 2020 ) ; Chaplot et al . ( 2020c ) ; Sener & Savarese ( 2017 ) . In particular , Chaplot et al . ( 2020c ) uses this active training method to improve the performance of a semantic segmentation model , leading us to initially hypothesize that analogous results would be possible for our semantic hallucination task . We also use this epistemic uncertainty estimate to construct confidence bounds for our estimated probability distribution , which we use to select goals for target-driven navigation at test time . Both lower Galichet et al . ( 2013 ) and upper Auer et al . ( 2002 ) confidence bound strategies for balancing exploration , exploitation , and safety have been previously proposed in the multi-armed bandit literature and extended for use in MDPs Azar et al . ( 2017 ) and reinforcement learning Chen et al . ( 2017 ) . 3 APPROACH . We present a new framework for object-goal navigation that uses a learned semantic map predictor to select informative goals . In contrast to prior work Ramakrishnan et al . ( 2020 ) ; Chaplot et al .
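The confidence bounds mentioned above can be built directly from ensemble statistics: the mean probability of the target class plus or minus a scaled standard deviation gives an upper (exploratory) or lower (conservative) bound. The sketch below is a hedged illustration; the beta scaling and the exact bound form are assumptions, not the paper's formulas:

```python
import numpy as np

def confidence_bound_scores(ensemble_probs, target_class,
                            beta=1.0, upper=True):
    """Score map cells for goal selection with a confidence bound.

    ensemble_probs: (n_models, C, h, w) per-model class distributions.
    Returns an (h, w) score map: mean target-class probability across
    the ensemble, +/- beta times its standard deviation.
    """
    target = ensemble_probs[:, target_class]          # (n_models, h, w)
    mean, std = target.mean(axis=0), target.std(axis=0)
    return mean + beta * std if upper else mean - beta * std

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(5), size=(4, 8, 8)).transpose(0, 3, 1, 2)
ucb = confidence_bound_scores(probs, target_class=2, upper=True)
lcb = confidence_bound_scores(probs, target_class=2, upper=False)
goal = np.unravel_index(np.argmax(ucb), ucb.shape)    # most promising cell
assert np.all(ucb >= lcb)
```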
( 2020a ) , we leverage the predictions outside the field-of-view of the agent to formulate uncertainty-based goal selection policies . Furthermore , we actively collect data for training the map predictor and investigate different information gain objectives . Due to our goal selection policy formulation , our method does not need to be trained specifically to predict goals for every target object , enabling a target-independent learning of the semantic priors . Our method takes as input an RGB-D observation , and predicts semantics in the unobserved areas of the map . This is followed by goal selection based on the estimated uncertainty of the predictions . Finally , a local policy is responsible for reaching the goal . An overview of our pipeline can be seen in Figure 1 . 3.1 SEMANTIC MAP PREDICTION . We describe a method for learning how to map by predicting the semantic information outside the field of view of the agent . We emphasize that this goes beyond traditional mapping ( i.e . accumulating multiple views in an agent ’ s path ) as it relies on prior information encoded as spatial associations between semantic entities in order to hallucinate the missing information . Motivated by the past success of semantic segmentation models in learning contextual information Zhang et al . ( 2018 ) ; Yuan et al . ( 2019 ) , we formulate the semantic map prediction as a two-stage segmentation problem . Our method takes as input an incomplete occupancy region pt ∈ R^{|Co|×h×w} and a ground-projected semantic segmentation ŝt ∈ R^{|Cs|×h×w} at time-step t. The output is a top-down semantic local region m̂t ∈ R^{|Cs|×h×w} , where Co is the set of occupancy classes containing unknown , occupied , and free , Cs is the set of semantic object classes , and h , w are the dimensions of the local crop .
To obtain pt , we use the provided camera intrinsics and depth observation at time t to first get a point cloud , which is then discretized and ground-projected similarly to Ramakrishnan et al . ( 2020 ) . To estimate ŝt , we first train a UNet Ronneberger et al . ( 2015 ) model to predict the semantic segmentation of the RGB observation at time t. All local regions are egocentric , i.e. , the agent is in the middle of the crop looking upwards . Each spatial location in our map ( cell ) has dimensions 10cm × 10cm . The proposed model predicts the hallucinated semantic region in two stages . First , we estimate the missing values for the occupancy crop in the unobserved areas by learning to hallucinate unseen spatial configurations based on what is already observed . Second , given predicted occupancy , we predict the final semantic region m̂t . These steps are realized as two UNet encoder-decoder models , fo that predicts in occupancy space p̂t = fo ( pt ; θo ) , and fs that predicts in semantic space m̂t = fs ( p̂t ⊕ ŝt ; θs ) , where p̂t is the predicted local occupancy crop which includes unobserved regions , ⊕ refers to the concatenation operation , and θo , θs are the randomly initialized weights of the occupancy and semantic networks , respectively . The image segmentation model is trained independently and its ground-projected output ŝt conditions fs on the egocentric single-view observation of the agent . The model is trained end-to-end using pixel-wise cross-entropy losses for both occupancy and semantic classes and predicts a probability distribution over the objects for each map location . We assume that ground-truth semantic information is available such that we can generate egocentric top-down occupancy and semantic examples . This combined objective incentivizes learning to predict plausible semantic regions by having the semantic loss backpropagate gradients that affect both fo and fs .
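The depth-to-map step at the start of this passage (back-project depth with the camera intrinsics, then discretize and ground-project) can be sketched minimally as below. This is a simplified illustration: the real pipeline also handles height thresholds, the agent's pose, and the unknown/free/occupied distinction, and the intrinsics and grid sizes here are toy values:

```python
import numpy as np

def ground_project(depth, fx, fy, cx, cy, cell_size=0.1, grid=24):
    """Back-project a depth image with pinhole intrinsics and bin the
    resulting points into an egocentric top-down count grid."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx          # lateral offset (metres)
    y = (vs - cy) * z / fy          # vertical offset (unused here)
    # Discretise (x, z) into cells; the agent sits near the bottom row.
    col = np.clip((x / cell_size).astype(int) + grid // 2, 0, grid - 1)
    row = np.clip(grid - 1 - (z / cell_size).astype(int), 0, grid - 1)
    counts = np.zeros((grid, grid))
    np.add.at(counts, (row.ravel(), col.ravel()), 1)
    return counts

depth = np.full((4, 4), 1.0)        # toy depth image, everything at 1 m
counts = ground_project(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
assert counts.sum() == depth.size   # every pixel lands in some cell
```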
Also , performing this procedure enables the initial hallucination of unknown areas over a small set of classes Co , before expanding to the more difficult task of predicting semantic object categories Cs . An overview of the semantic map predictor can be seen in Figure 2 . During a navigation episode , the local semantic region is registered to a global map , which is used during planning . Since we predict a probability distribution at each location over the set of objects , the local regions are registered using Bayes ’ theorem . The global map is initialized with a uniform prior probability distribution across all classes . | This paper presents a method to perform ObjectGoal navigation (i.e. goto the table) based on active semantic mapping. Specifically, the proposed approach contains a semantic mapping module that hallucinates unseen areas. This mapping module is initially trained on a set of trajectories that are shortest paths between two points. Then additional trajectories are selected via active learning to maximize the learning signal. ObjectGoal navigation is performed by training class-specific map predictors and selecting goal locations via the upper confidence bound. This method outperforms prior map-based works (SemExp) on the Habitat Challenge ObjectGoal navigation dataset. | SP:ff1cb9d9876aa398771cb4204fecfc32aa14ec4b |
Learning to Map for Active Semantic Goal Navigation | 1 INTRODUCTION . What enables biological systems to successfully navigate to semantic targets in novel environments ? Consider the example of a dog whose food tray at its own house is situated next to the fridge . Upon entering a new house for the first time , the dog will look for its food tray next to the fridge , even though the new house can largely differ in appearance and layout . This is remarkable , as it suggests that the dog is able to encode spatial associations between semantic entities that can be leveraged when trying to accomplish a navigation task . Humans exhibit the same skills in similar scenarios , albeit more nuanced , since given existing observations we can consciously choose to trust our prior knowledge of the semantic structure of the world , or to continue exploring the environment . In other words , if we have a partial view of a room containing an oven , we can infer that a fridge most likely exists in the unobserved space . In addition , if we are trying to reach the sofa , then we can infer with high certainty that it will be located in a different room . This implies that we have internal mechanisms for quantifying the uncertainty of information inferred from unobserved spaces , which guides our decision-making process . Inspired by these observations , in this work , we study the problem of object goal navigation for robotic agents in unseen environments and propose an active learning method for encoding semantic priors in indoor scenes . Our approach involves learning a mapping model that can predict ( hallucinate ) semantics in unobserved regions of the map containing both objects ( e.g . chairs , beds ) and structures ( e.g . floor , wall ) , and during testing uses the uncertainty over these predictions to plan a path towards the target . Contrary to traditional approaches for mapping and navigation Cadena et al . ( 2016 ) ( i.e .
SLAM ) where the focus is on building accurate 3D metric maps , our uncertainty formulation is designed to capture our lack of confidence about whether a certain object exists at a particular location . This results in a much more meaningful representation , suitable for target-driven tasks . Recently , learned approaches to navigation have been gaining popularity , where initial efforts in addressing target-driven navigation focused on end-to-end reactive approaches that learn to map pixels directly to actions Zhu et al . ( 2017 ) ; Mousavian et al . ( 2019 ) . These methods do not have an explicit representation of the environment and tend to suffer from poor generalization . To remedy this issue , most current methods learn a map representation that enables the encoding of prior information about the geometry and semantics of a scene , acting as an episodic memory Chaplot et al . ( 2020b ; a ) ; Gupta et al . ( 2017 ) ; Georgakis et al . ( 2019 ) . However , maps created by these methods are restricted to contain information only from areas that the agent has directly observed , which led to the introduction of spatial prediction models that either anticipate occupancy Ramakrishnan et al . ( 2020 ) or room layouts Narasimhan et al . ( 2020 ) beyond the agent ’ s field of view and demonstrated improved performance on navigation tasks . Our work differs from these methods in three principled ways : 1 ) We formulate an active training strategy for learning the semantic maps , 2 ) we exploit the uncertainty over the predictions in the planning process , and 3 ) in contrast to predicting occupancy , our model tackles a harder problem which requires learning semantic patterns ( e.g . tables are usually surrounded by chairs ) . In this work we introduce Learning to Map ( L2M ) , a novel framework for object-goal navigation consisting of two parts . 
First , we actively learn an ensemble of two-stage segmentation models by choosing training samples through an information gain objective . The models operate on top-down maps and predict both occupancy and semantic regions . Second , we estimate the model uncertainty through the disagreement formulation of ensemble models from Pathak et al . ( 2019 ) , and show its effectiveness in defining objectives in planners to actively select long-term goals for semantic navigation . In addition , we investigate different information gain objectives during active training and illustrate how the use of model uncertainty can balance exploration with exploitation in finding semantic targets . Our proposed approach demonstrates improved success rates on the object-goal navigation task over competitive baselines in the Matterport3D Chang et al . ( 2017 ) dataset using the Habitat Savva et al . ( 2019 ) simulator . 2 RELATED WORK . Semantic SLAM . Classical approaches for navigation focus on building 3D representations of the environment before considering downstream tasks Cadena et al . ( 2016 ) . While these methods are typically geometric , several SLAM methods have attempted to associate semantic information to the reconstructed geometric map , mainly at the object-level Salas-Moreno et al . ( 2013 ) ; Yang & Scherer ( 2019 ) ; McCormac et al . ( 2018 ) ; Bowman et al . ( 2017 ) ; Kostavelis & Gasteratos ( 2015 ) . For example , in McCormac et al . ( 2018 ) instance segmentations predicted by Mask R-CNN are incorporated to facilitate per-object reconstructions , while the work of Bowman et al . ( 2017 ) proposes a probabilistic formulation to address uncertain object data association . However , SLAM systems rarely consider active exploration as they are not naturally compatible with task-driven learnable representations from deep learning architectures that can encode semantic information . Other recent works Katsumata et al . ( 2020 ) ; Cartillier et al . 
( 2020 ) have sought to build 2D semantic maps and focused either on semantic transfer of a global scene or assumed the environments were accessible beforehand . In contrast , our proposed approach tackles the object goal task in unknown environments by actively learning how to predict semantics in both observed and unobserved areas of the map around the agent . Learning based navigation methods . There has been a recent surge of learning based methods Zhu et al . ( 2017 ) ; Mousavian et al . ( 2019 ) ; Gupta et al . ( 2017 ) ; Chen et al . ( 2019 ) ; Chaplot et al . ( 2020b ) ; Fang et al . ( 2019 ) ; Yang et al . ( 2018 ) ; Ye et al . ( 2021 ) ; Zhang et al . ( 2021 ) ; Chattopadhyay et al . ( 2021 ) for indoor navigation tasks Anderson et al . ( 2018 ) ; Batra et al . ( 2020 ) ; Das et al . ( 2018 ) , propelled by the introduction of high quality simulators such as Gibson Xia et al . ( 2018 ) , Habitat Savva et al . ( 2019 ) , and AI2-THOR Kolve et al . ( 2017 ) . Methods which use explicit task-dependent map representations Parisotto & Salakhutdinov ( 2017 ) ; Gupta et al . ( 2017 ) ; Chaplot et al . ( 2020b ; a ) ; Georgakis et al . ( 2019 ) ; Gordon et al . ( 2018 ) ; Mishkin et al . ( 2019 ) have been shown to generalize better in unknown environments than end-to-end approaches with implicit world representations . For example , in Gupta et al . ( 2017 ) a differentiable mapper learns to predict top-down egocentric views of the scene from RGB images , followed by a differentiable planner , while in Chaplot et al . ( 2020a ) Mask R-CNN is used to build a top-down semantic map of the scene used by a learned policy that predicts goal locations in the map . More conceptually similar to our method are approaches that attempt to encode semantic or layout priors by learning to predict outside the field-of-view of the agent Ramakrishnan et al . ( 2020 ) ; Liang et al . ( 2020 ) ; Narasimhan et al . ( 2020 ) .
In contrast to all these works , we formulate an active , target-independent strategy to predict semantic maps and define goal selection objectives . Uncertainty Estimation . Many recent works estimate the uncertainty of deep learning models Gal ( 2016 ) ; Abdar et al . ( 2021 ) . We leverage the approach first proposed by Seung et al . ( 1992 ) to estimate our model ’ s ( epistemic ) uncertainty with the variance between the outputs of an ensemble of models . This was first used for active exploration in vision-based reinforcement learning by Pathak et al . ( 2019 ) . Maximizing epistemic uncertainty is used as a proxy for maximizing information gain Seung et al . ( 1992 ) ; Pathak et al . ( 2019 ) . We use this uncertainty objective to actively fine-tune our models with a training procedure similar to the active learning approaches used during training by Bucher et al . ( 2020 ) ; Chaplot et al . ( 2020c ) ; Sener & Savarese ( 2017 ) . In particular , Chaplot et al . ( 2020c ) uses this active training method to improve the performance of a semantic segmentation model , leading us to initially hypothesize that analogous results would be possible for our semantic hallucination task . We also use this epistemic uncertainty estimate to construct confidence bounds for our estimated probability distribution , which we use to select goals for target-driven navigation at test time . Both lower Galichet et al . ( 2013 ) and upper Auer et al . ( 2002 ) confidence bound strategies for balancing exploration , exploitation , and safety have been previously proposed in the multi-armed bandit literature and extended for use in MDPs Azar et al . ( 2017 ) and reinforcement learning Chen et al . ( 2017 ) . 3 APPROACH . We present a new framework for object-goal navigation that uses a learned semantic map predictor to select informative goals . In contrast to prior work Ramakrishnan et al . ( 2020 ) ; Chaplot et al .
( 2020a ) , we leverage the predictions outside the field-of-view of the agent to formulate uncertainty-based goal selection policies . Furthermore , we actively collect data for training the map predictor and investigate different information gain objectives . Due to our goal selection policy formulation , our method does not need to be trained specifically to predict goals for every target object , enabling a target-independent learning of the semantic priors . Our method takes as input an RGB-D observation , and predicts semantics in the unobserved areas of the map . This is followed by goal selection based on the estimated uncertainty of the predictions . Finally , a local policy is responsible for reaching the goal . An overview of our pipeline can be seen in Figure 1 . 3.1 SEMANTIC MAP PREDICTION . We describe a method for learning how to map by predicting the semantic information outside the field of view of the agent . We emphasize that this goes beyond traditional mapping ( i.e . accumulating multiple views in an agent ’ s path ) as it relies on prior information encoded as spatial associations between semantic entities in order to hallucinate the missing information . Motivated by the past success of semantic segmentation models in learning contextual information Zhang et al . ( 2018 ) ; Yuan et al . ( 2019 ) , we formulate the semantic map prediction as a two-stage segmentation problem . Our method takes as input an incomplete occupancy region pt ∈ R^{|Co|×h×w} and a ground-projected semantic segmentation ŝt ∈ R^{|Cs|×h×w} at time-step t. The output is a top-down semantic local region m̂t ∈ R^{|Cs|×h×w} , where Co is the set of occupancy classes containing unknown , occupied , and free , Cs is the set of semantic object classes , and h , w are the dimensions of the local crop .
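The three-step pipeline described above (predict semantics including unobserved areas, select a goal from prediction uncertainty, hand it to a local policy) can be laid out as a control loop. Every component below is a hypothetical stand-in for the paper's learned models, shown only to make the data flow concrete:

```python
import numpy as np

rng = np.random.default_rng(3)

def predict_semantics(rgbd):
    """Stand-in map predictor: per-cell class distributions plus a
    per-cell uncertainty score (random placeholders here)."""
    probs = rng.dirichlet(np.ones(4), size=(8, 8)).transpose(2, 0, 1)
    uncertainty = probs.var(axis=0)        # toy uncertainty proxy
    return probs, uncertainty

def select_goal(uncertainty):
    """Target-independent goal choice: the most uncertain map cell."""
    return np.unravel_index(np.argmax(uncertainty), uncertainty.shape)

def local_policy(pos, goal):
    """Greedy one-cell step toward the goal."""
    return tuple(int(p + np.sign(g - p)) for p, g in zip(pos, goal))

pos = (0, 0)
for _ in range(20):                        # one short episode
    probs, unc = predict_semantics(rgbd=None)
    goal = select_goal(unc)
    pos = local_policy(pos, goal)
assert 0 <= pos[0] < 8 and 0 <= pos[1] < 8
```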
To obtain pt , we use the provided camera intrinsics and depth observation at time t to first get a point cloud , which is then discretized and ground-projected similarly to Ramakrishnan et al . ( 2020 ) . To estimate ŝt , we first train a UNet Ronneberger et al . ( 2015 ) model to predict the semantic segmentation of the RGB observation at time t. All local regions are egocentric , i.e. , the agent is in the middle of the crop looking upwards . Each spatial location in our map ( cell ) has dimensions 10cm × 10cm . The proposed model predicts the hallucinated semantic region in two stages . First , we estimate the missing values for the occupancy crop in the unobserved areas by learning to hallucinate unseen spatial configurations based on what is already observed . Second , given predicted occupancy , we predict the final semantic region m̂t . These steps are realized as two UNet encoder-decoder models , fo that predicts in occupancy space p̂t = fo ( pt ; θo ) , and fs that predicts in semantic space m̂t = fs ( p̂t ⊕ ŝt ; θs ) , where p̂t is the predicted local occupancy crop which includes unobserved regions , ⊕ refers to the concatenation operation , and θo , θs are the randomly initialized weights of the occupancy and semantic networks , respectively . The image segmentation model is trained independently and its ground-projected output ŝt conditions fs on the egocentric single-view observation of the agent . The model is trained end-to-end using pixel-wise cross-entropy losses for both occupancy and semantic classes and predicts a probability distribution over the objects for each map location . We assume that ground-truth semantic information is available such that we can generate egocentric top-down occupancy and semantic examples . This combined objective incentivizes learning to predict plausible semantic regions by having the semantic loss backpropagate gradients that affect both fo and fs .
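The composition m̂t = fs ( p̂t ⊕ ŝt ; θs ) with p̂t = fo ( pt ; θo ) can be illustrated with toy stand-ins: each "network" below is just a random linear map over channels followed by a per-cell softmax, to show the two-stage wiring and channel concatenation, not to model the actual UNets:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(4)
n_occ, n_sem, h, w = 3, 5, 8, 8            # toy channel/crop sizes
W_o = rng.standard_normal((n_occ, n_occ))
W_s = rng.standard_normal((n_sem, n_occ + n_sem))

def f_o(p_t):
    """Toy occupancy-space predictor (stand-in for the first UNet)."""
    return softmax(np.tensordot(W_o, p_t, axes=1), axis=0)

def f_s(x):
    """Toy semantic-space predictor (stand-in for the second UNet)."""
    return softmax(np.tensordot(W_s, x, axes=1), axis=0)

p_t = rng.standard_normal((n_occ, h, w))
s_hat_t = rng.standard_normal((n_sem, h, w))

p_hat_t = f_o(p_t)                          # completed occupancy
m_hat_t = f_s(np.concatenate([p_hat_t, s_hat_t], axis=0))

assert m_hat_t.shape == (n_sem, h, w)
assert np.allclose(m_hat_t.sum(axis=0), 1.0)
```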
Also, performing this procedure enables initially hallucinating the unknown areas over the small set of occupancy classes $C^o$ before expanding to the more difficult task of predicting the semantic object categories $C^s$. An overview of the semantic map predictor can be seen in Figure 2. During a navigation episode, the local semantic region is registered to a global map, which is used during planning. Since we predict a probability distribution over the object classes at each location, the local regions are registered using Bayes' theorem. The global map is initialized with a uniform prior probability distribution across all classes. | The paper presents a novel framework that learns how to construct spatial and semantic maps for target-driven semantic (ObjectGoal) navigation. The learned scene maps can also hallucinate beyond the immediately observed regions by exploiting semantic priors in the scene during the learning process. The key contribution of this work lies in two different places of the navigation pipeline: (1) an active learning setting for training the mapper that optimizes the selection of maximally informative regions of the environment's state space (metric locations on the floor plan) for the agent to navigate towards, and (2) incorporating the uncertainty of the mapper's predictions while selecting the target semantic object on the map learned so far. The novel insight that connects both contributions is borrowed from the literature on intrinsic rewards via disagreement: the variance between the predictions of an ensemble of multiple mappers serves as a proxy for the model's uncertainty about a specific location and, equivalently, for the informativeness of that location for training the mapper. | SP:ff1cb9d9876aa398771cb4204fecfc32aa14ec4b |
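The Bayesian registration step can be illustrated with a small numpy sketch. This is an assumption-laden reading of the text: the per-cell local prediction is treated as a likelihood, the running global map as the prior, and the product is renormalized per cell; the paper does not spell out the exact normalization, so those details are illustrative.

```python
import numpy as np

def init_global_map(C, H, W):
    # Uniform prior probability distribution across all C classes,
    # as described for the global map initialization.
    return np.full((C, H, W), 1.0 / C)

def register_local_region(global_map, local_probs, eps=1e-8):
    """Fuse a local per-cell class distribution into the global map.

    global_map:  (C, H, W) current per-cell posterior over C classes
    local_probs: (C, H, W) predicted distribution for the same cells
    Returns the updated posterior via Bayes' rule: prior × likelihood,
    renormalized over the class axis at every cell.
    """
    posterior = global_map * local_probs
    posterior = posterior / (posterior.sum(axis=0, keepdims=True) + eps)
    return posterior
```

Repeated registrations of consistent local predictions sharpen the per-cell distribution over time, while the uniform prior leaves never-observed cells maximally uncertain.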
AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning | 1 INTRODUCTION AND RELATED WORK . Over the last decades , reinforcement learning ( RL ) ( Sutton and Barto , 1998 ) has been successful in many tasks ( Mnih et al. , 2013 ; Silver et al. , 2016 ) . Most of these early successes focus on a fixed task in a fixed environment . However , in real applications we often have changing environments , and it has been demonstrated that the optimal policy learned in a specific domain may not be generalized to other domains ( Taylor and Stone , 2009 ) . In contrast , humans are usually good at transferring acquired knowledge to new environments and tasks both efficiently and effectively ( Pearl and Mackenzie , 2018 ) , thanks to the ability to understand the environments . Generally speaking , to achieve reliable , low-cost , and interpretable transfer , it is essential to understand the underlying process—which decision-making factors have changes , where the changes are , and how they change , instead of transferring blindly ( e.g. , transferring the distribution of high-dimensional images directly ) . There are roughly two research lines in transfer RL ( Taylor and Stone , 2009 ; Zhu et al. , 2020 ) : ( 1 ) finding policies that are robust to environment variations , and ( 2 ) adapting policies from the source domain to the target domain as efficiently as possible . For the first line , the focus is on learning policies that are robust to environment variations , e.g. , by maximizing a risk-sensitive objective over a distribution of environments ( Tamar et al. , 2015 ) or by extracting a set of invariant states ( Zhang et al. , 2020a ; 2021a ; Tomar et al. , 2021 ) . A more recent method encodes task-relevant invariances by putting behaviorally equivalent states together , which helps better generalization ( Agarwal et al. , 2021 ) . 
On the other hand, as the number of domains increases, the common part may become even smaller, running counter to the intention of collecting more information from more domains. Moreover, focusing only on the invariant part and disregarding domain-specific information may not be optimal; for instance, in the context of domain adaptation, it has been demonstrated that the variable part also contains information that helps improve prediction accuracy (Zhang et al., 2020b). In this paper, we propose a method along the second line, adapting source policies to the target. Approaches along this line adapt knowledge from source domains and reuse it in the target domain to improve data efficiency, i.e., so that the agent requires less exploration to learn the target policy. For example, an agent could use importance reweighting on samples $\langle s, a, r, s'\rangle$ from the sources (Tirinzoni et al., 2018; 2019), or start from the optimal source policy to initialize a learner in the target domain as a near-optimal initializer (Taylor et al., 2007; Fernández et al., 2010). Another widely used technique is finetuning: a model is pretrained on a source domain and its output layers are finetuned via backpropagation in the target domain (Hinton and Salakhutdinov, 2006; Mesnil et al., 2012). PNNs (Rusu et al., 2016), instead, retain a pool of pretrained models and learn lateral connections from them to extract useful features for a new task. Moreover, a set of approaches focus on sim2real transfer by adapting the parameters (Yu et al., 2017; Peng et al., 2020). However, many of these approaches still require a large amount of exploration and optimization in the target domain. Recently, meta-RL approaches such as MAML (Finn et al., 2017), PEARL (Rakelly et al., 2019), CAVIA (Zintgraf et al., 2019), Meta-Q-Learning (Fakoor et al., 2020), and others (Mendonca et al., 2019; Nagabandi et al., 2018; Duan et al.
, 2016 ) have been successfully applied to learn an inductive bias that accelerates the learning of a new task by training on a large number of tasks . Some of these methods ( e.g. , CAVIA and PEARL ) , as well as some prior work ( e.g. , HiMDPs ( Doshi-Velez and Konidaris , 2016 ) ) and recent follow-ups ( Zhang et al. , 2021b ) , have a similar motivation to our work : in a new environment not all parameters need to be updated , so we can force the model to only adapt a set of context parameters . However , these methods mostly focus on MDPs ( except the Block MDP assumption in Zhang et al . ( 2021b ) ) and model all changes as a black-box , which may be less efficient for adaptation , as opposed to a factorized representation of change factors . Considering these limitations , we propose AdaRL , a transfer RL approach that achieves low-cost , reliable , and interpretable transfer for partially observable Markov decision processes ( POMDPs ) , with MDPs as a special case . In contrast to state-of-the-art approaches , we learn a parsimonious graphical representation that is able to characterize structural relationships among different dimensions of states , change factors , the perception , the reward variable , and the action variable . It allows us to model changes in transition , observation , and reward functions in a component-wise way . This representation is related to Factored MDPs ( Kearns and Koller , 1999 ; Boutilier et al. , 2000 ; Strehl et al. , 2007 ) and Factored POMDPs ( Katt et al. , 2019 ) , but augmented with change factors that represent a low-dimensional embedding of the changes across domains . Our main motivation is that distribution shifts are usually localized – they are often due to the changes of only a few variables in the generative processes , so we can just adapt the distribution of a small portion of variables ( Huang et al. , 2020 ; Schölkopf et al. 
, 2021) and, furthermore, since the model factorizes according to the graph structure, each distribution module can be adapted separately (Schölkopf, 2019; Zhang et al., 2020b). In Fig. 1 we give a motivating example and a general description of AdaRL. In this example, we consider learning policies for Pong (Bellemare et al., 2013) that can easily generalize to different rotations $\omega$ and to images corrupted with white noise. Specifically, given data from $n$ source domains with different rotations and noise variances, we learn a parsimonious latent state representation shared by all domains, denoted by $s_t$, and characterize the changes across domains by a two-dimensional factor $\theta_k$. We identify a set of minimal sufficient representations $(s^{min}_t, \theta^{min}_k)$ for policy transfer. For instance, here only the rotation factor $\omega$ needs adapting (i.e., $\theta^{min}_k = \omega_k$), since the noise factor does not affect the optimal policy. Similarly, as we will show formally in the rest of the paper, not all components $s_{i,t}$ of the state vector $s_t$ are necessary for policy transfer. For example, $s_{2,t} \notin s^{min}_t$, since it never affects the future reward. We learn an optimal policy $\pi^*(\cdot \mid \theta^{min}_k)$ on the source domains.¹ In the target domain, we only need a few samples to quickly estimate the value of the low-dimensional $\theta^{min}_{target}$, and then we can apply $\pi^*(\cdot \mid \theta^{min}_{target})$ directly. Our main contributions are summarized below: • We assume a generative environment model, which explicitly takes into account the structural relationships among the variables in the RL system. Such graphical representations provide a compact way to encode what the changes across domains are and where they occur. • Based on this model, we characterize a minimal set of representations that suffice for policy learning across domains, including the domain-specific change factors and the domain-shared state representations.
With this characterization, we adapt the policy with only a few target samples and without policy optimization in the target domain, achieving low-cost and reliable policy transfer. • By leveraging a compact way to encode the changes, we also benefit from multi-task learning in model estimation. In particular, we propose the Multi-model Structured Sequential Variational Auto-Encoder (MiSS-VAE) for reliable model estimation in general cases. 2 A COMPACT REPRESENTATION OF ENVIRONMENTAL SHIFTS. Suppose there are $n$ source domains and $n'$ target domains. In each source domain, we observe sequences $\{\langle o_t, a_t, r_t\rangle\}_{t=1}^{T}$, where $o_t \in O$ are the perceived signals at time $t$ (e.g., images), $a_t \in A$ is the executed action, and $r_t \in R$ is the reward signal. We denote the underlying latent states by $s_t = (s_{1,t}, \cdots, s_{d,t})^\top$, where $d$ is the dimensionality of the latent states. We assume that the generative process of the environment in the $k$-th domain (with $k = 1, \ldots, n+n'$) can be described in terms of the transition function for each dimension of $s$ and the observation and reward functions as
$$s_{i,t} = f_i\big(\mathbf{c}^{\,s \to s_i} \odot \mathbf{s}_{t-1},\; c^{\,a \to s_i} \cdot a_{t-1},\; \mathbf{c}^{\,\theta_k \to s_i} \odot \theta^s_k,\; \epsilon^s_{i,t}\big), \quad \text{for } i = 1, \cdots, d,$$
$$o_t = g\big(\mathbf{c}^{\,s \to o} \odot \mathbf{s}_t,\; \mathbf{c}^{\,\theta_k \to o} \odot \theta^o_k,\; \epsilon^o_t\big),$$
$$r_t = h\big(\mathbf{c}^{\,s \to r} \odot \mathbf{s}_{t-1},\; c^{\,a \to r} \cdot a_{t-1},\; \mathbf{c}^{\,\theta_k \to r} \odot \theta^r_k,\; \epsilon^r_t\big), \qquad (1)$$
where $\odot$ denotes the element-wise product and the $\epsilon^s_{i,t}, \epsilon^o_t, \epsilon^r_t$ terms are i.i.d. random noises. As explained below, the $\mathbf{c}^{\,\cdot \to \cdot}$ are masks (binary vectors or scalars that represent the structural relationships from one variable to another), and $\theta_k = (\theta^s_k, \theta^o_k, \theta^r_k)$ are the change factors, which have a constant value within each domain but vary across domains in the transition, observation, and reward functions, respectively. The latent states form an MDP: given $s_t$ and $a_t$, $s_{t+1}$ is independent of the previous states and actions. The perceived signals $o_t$ are generated from the underlying states $s_t$.
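The masked, component-wise transition in Eq. 1 can be sketched concretely. The sketch below is a minimal numpy illustration: the mask vectors, the change factor, and the `tanh`-of-sum nonlinearity standing in for each learned $f_i$ are assumptions for demonstration, not the paper's estimated model.

```python
import numpy as np

def step(s_prev, a_prev, theta_s, C_ss, c_as, C_ths, rng):
    """One transition step of the factored model in Eq. 1.

    Each state dimension i depends only on its masked parents
    C_ss[i] ⊙ s_{t-1}, the action (if c_as[i] == 1), and the
    masked change factor C_ths[i] ⊙ theta_s.
    """
    d = s_prev.shape[0]
    s_next = np.empty(d)
    for i in range(d):
        inputs = np.concatenate([
            C_ss[i] * s_prev,          # c^{s -> s_i} ⊙ s_{t-1}
            [c_as[i] * a_prev],        # c^{a -> s_i} · a_{t-1}
            C_ths[i] * theta_s,        # c^{theta_k -> s_i} ⊙ theta^s_k
        ])
        # stand-in for the learned f_i, plus i.i.d. noise eps^s_{i,t}
        s_next[i] = np.tanh(inputs.sum()) + 0.01 * rng.normal()
    return s_next
```

The point of the masks is visible directly: changing a parent that is masked out for dimension $i$ leaves $s_{i,t}$ unchanged, which is what makes the changes across domains localized and interpretable.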
The actions $a_t$ directly influence the latent states $s_{t+1}$, rather than the observed signals $o_t$, and the reward is determined by the latent states and the action. Eq. 1 can also represent MDPs as a special case if the states $s_t$ are directly observed, in which case the observation function for $o_t$ is not needed. Structural relationships and graphs. Often the action variable $a_{t-1}$ does not influence every dimension of $s_t$, and similarly, the reward $r_t$ may not be influenced by every dimension of $s_{t-1}$. Furthermore, there are structural relationships between the different dimensions of $s_{t-1}$ and $s_t$. To characterize these constraints, we explicitly take into account the graph structure $G$ over the variables in the system, characterized by a Dynamic Bayesian Network (Murphy, 2002), and encode its edges with the masks $\mathbf{c}^{\,\cdot \to \cdot}$. In the first equation in Eq. 1, the transition function for the state component $s_i$ uses the mask $\mathbf{c}^{\,s \to s_i} \in \{0,1\}^d$, whose $j$-th entry is 1 if and only if $s_{j,t}$ influences $s_{i,t+1}$ (graphically represented by an edge), while $c^{\,a \to s_i} \in \{0,1\}$ is 1 if and only if the action $a_t$ has any effect on $s_{i,t+1}$. Similarly, the binary vector $\mathbf{c}^{\,\theta_k \to s_i} \in \{0,1\}^p$ encodes which components of the change factor $\theta^s_k = (\theta^s_{1,k}, \ldots, \theta^s_{p,k})^\top$ affect $s_{i,t+1}$. The masks in the observation function $g$ and the reward function $h$ play similar roles. The masks and the parameters of the functions $f$, $g$, and $h$ are invariant; all changes are encoded in $\theta_k$. For simplicity of notation, we collect all the transition mask vectors in the matrices $\mathbf{C}^{s \to s} := [\mathbf{c}^{\,s \to s_i}]_{i=1}^d$ and $\mathbf{C}^{\theta_k \to s} := [\mathbf{c}^{\,\theta_k \to s_i}]_{i=1}^d$, and the scalars in the vector $\mathbf{c}^{\,a \to s} := [c^{\,a \to s_i}]_{i=1}^d$. Characterization of the change factors in a compact way. In practical scenarios, the environment model may change across domains. Moreover, it is often the case that, given a high-dimensional input, only a few factors change, which is known as the minimal change principle (Ghassami et al.
, 2018) or the sparse mechanism shift assumption (Schölkopf et al., 2021). In such a case, instead of learning the distribution shift over the high-dimensional input, we exploit the parsimonious graphical representation and introduce a low-dimensional vector $\theta_k$ to characterize the domain-specific information in a compact way (Zhang et al., 2020b). Specifically, $\theta^o_k$, $\theta^r_k$, and $\theta^s_k$ capture the change factors in the observation function, reward function, and transition dynamics, respectively; each of them can be multi-dimensional, and they are constant within each domain (¹ We consider optimality of the policy w.r.t. the model estimated on the source domains and the AdaRL assumptions.) In general, $\theta_k$ can capture both changes in the influencing strength and changes in the graph structure, e.g., some edges may appear only in some domains. Since we assume that the structural relationships in Eq. 1 are invariant across domains, the masks $\mathbf{c}^{\,\cdot \to \cdot}$ have to encode an edge even if it is present in only one domain; furthermore, since $\theta_k$ encodes the changes, it can switch the edge off in the other domains. Fig. 1 shows an example of the graphical representation of the (estimated) environment model. Specifically, in this example, $\theta^s_k$ only influences $s_{1,t}$, $a_{t-1}$ does not have an edge to $s_{1,t}$, and among the states, only $s_{d,t-1}$ has an edge to $r_t$. In this example, we consider the case where the control signals are random, so there is no edge between $s_t$ and $a_t$. | The paper proposes AdaRL, a transfer method for reinforcement learning across different domains. The method is based on learning a latent representation with domain-shared and domain-specific components. A policy parameterized by the domain-specific components is learned. Transfer is done by collecting some data in the target domain and estimating the domain-specific variables of the target domain.
The authors demonstrate the algorithm's mechanism in both POMDP and MDP settings. The method is evaluated on modified versions of the Cart-Pole and Pong domains. The authors test different settings in which parameters of the environment, as well as the reward functions, are changed within the source domains and the target domain. The authors also evaluate interpolation vs. extrapolation cases. The method is compared with recent transfer learning methods, and the results show that AdaRL performs better than the other methods. | SP:c1c0a7aafffec8e9cf2f2414d40d48db6e32798b |
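The adaptation step the paper describes, estimating only the low-dimensional change factor $\theta$ from a few target-domain samples while keeping the shared model frozen, can be illustrated with a tiny least-squares version. This is a hedged sketch: the frozen dynamics callable `f` and the finite-difference gradient fit stand in for the learned MiSS-VAE model and its estimation procedure, which the sketch does not implement.

```python
import numpy as np

def estimate_theta(transitions, f, theta0, lr=0.1, steps=500):
    """Estimate the domain-specific change factor theta_target from a few
    target-domain transitions, keeping the shared model f frozen.

    transitions: list of (s, a, s_next) samples from the target domain
    f:           frozen one-step model, s_next_pred = f(s, a, theta)
    Uses finite-difference gradient descent on the prediction error.
    """
    theta = np.array(theta0, dtype=float)

    def loss(th):
        return sum((f(s, a, th) - s_next) ** 2
                   for s, a, s_next in transitions) / len(transitions)

    h = 1e-5
    for _ in range(steps):
        grad = np.array([
            (loss(theta + h * e) - loss(theta - h * e)) / (2 * h)
            for e in np.eye(len(theta))])
        theta -= lr * grad
    return theta
```

Once the low-dimensional $\theta_{target}$ is estimated this way, the source-learned policy conditioned on it can be applied directly, with no policy optimization in the target domain.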
AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning | 1 INTRODUCTION AND RELATED WORK . Over the last decades , reinforcement learning ( RL ) ( Sutton and Barto , 1998 ) has been successful in many tasks ( Mnih et al. , 2013 ; Silver et al. , 2016 ) . Most of these early successes focus on a fixed task in a fixed environment . However , in real applications we often have changing environments , and it has been demonstrated that the optimal policy learned in a specific domain may not be generalized to other domains ( Taylor and Stone , 2009 ) . In contrast , humans are usually good at transferring acquired knowledge to new environments and tasks both efficiently and effectively ( Pearl and Mackenzie , 2018 ) , thanks to the ability to understand the environments . Generally speaking , to achieve reliable , low-cost , and interpretable transfer , it is essential to understand the underlying process—which decision-making factors have changes , where the changes are , and how they change , instead of transferring blindly ( e.g. , transferring the distribution of high-dimensional images directly ) . There are roughly two research lines in transfer RL ( Taylor and Stone , 2009 ; Zhu et al. , 2020 ) : ( 1 ) finding policies that are robust to environment variations , and ( 2 ) adapting policies from the source domain to the target domain as efficiently as possible . For the first line , the focus is on learning policies that are robust to environment variations , e.g. , by maximizing a risk-sensitive objective over a distribution of environments ( Tamar et al. , 2015 ) or by extracting a set of invariant states ( Zhang et al. , 2020a ; 2021a ; Tomar et al. , 2021 ) . A more recent method encodes task-relevant invariances by putting behaviorally equivalent states together , which helps better generalization ( Agarwal et al. , 2021 ) . 
On the other hand, as the number of domains increases, the common part may become even smaller, running counter to the intention of collecting more information from more domains. Moreover, focusing only on the invariant part and disregarding domain-specific information may not be optimal; for instance, in the context of domain adaptation, it has been demonstrated that the variable part also contains information that helps improve prediction accuracy (Zhang et al., 2020b). In this paper, we propose a method along the second line, adapting source policies to the target. Approaches along this line adapt knowledge from source domains and reuse it in the target domain to improve data efficiency, i.e., so that the agent requires less exploration to learn the target policy. For example, an agent could use importance reweighting on samples $\langle s, a, r, s'\rangle$ from the sources (Tirinzoni et al., 2018; 2019), or start from the optimal source policy to initialize a learner in the target domain as a near-optimal initializer (Taylor et al., 2007; Fernández et al., 2010). Another widely used technique is finetuning: a model is pretrained on a source domain and its output layers are finetuned via backpropagation in the target domain (Hinton and Salakhutdinov, 2006; Mesnil et al., 2012). PNNs (Rusu et al., 2016), instead, retain a pool of pretrained models and learn lateral connections from them to extract useful features for a new task. Moreover, a set of approaches focus on sim2real transfer by adapting the parameters (Yu et al., 2017; Peng et al., 2020). However, many of these approaches still require a large amount of exploration and optimization in the target domain. Recently, meta-RL approaches such as MAML (Finn et al., 2017), PEARL (Rakelly et al., 2019), CAVIA (Zintgraf et al., 2019), Meta-Q-Learning (Fakoor et al., 2020), and others (Mendonca et al., 2019; Nagabandi et al., 2018; Duan et al.
, 2016 ) have been successfully applied to learn an inductive bias that accelerates the learning of a new task by training on a large number of tasks . Some of these methods ( e.g. , CAVIA and PEARL ) , as well as some prior work ( e.g. , HiMDPs ( Doshi-Velez and Konidaris , 2016 ) ) and recent follow-ups ( Zhang et al. , 2021b ) , have a similar motivation to our work : in a new environment not all parameters need to be updated , so we can force the model to only adapt a set of context parameters . However , these methods mostly focus on MDPs ( except the Block MDP assumption in Zhang et al . ( 2021b ) ) and model all changes as a black-box , which may be less efficient for adaptation , as opposed to a factorized representation of change factors . Considering these limitations , we propose AdaRL , a transfer RL approach that achieves low-cost , reliable , and interpretable transfer for partially observable Markov decision processes ( POMDPs ) , with MDPs as a special case . In contrast to state-of-the-art approaches , we learn a parsimonious graphical representation that is able to characterize structural relationships among different dimensions of states , change factors , the perception , the reward variable , and the action variable . It allows us to model changes in transition , observation , and reward functions in a component-wise way . This representation is related to Factored MDPs ( Kearns and Koller , 1999 ; Boutilier et al. , 2000 ; Strehl et al. , 2007 ) and Factored POMDPs ( Katt et al. , 2019 ) , but augmented with change factors that represent a low-dimensional embedding of the changes across domains . Our main motivation is that distribution shifts are usually localized – they are often due to the changes of only a few variables in the generative processes , so we can just adapt the distribution of a small portion of variables ( Huang et al. , 2020 ; Schölkopf et al. 
, 2021) and, furthermore, since the model factorizes according to the graph structure, each distribution module can be adapted separately (Schölkopf, 2019; Zhang et al., 2020b). In Fig. 1 we give a motivating example and a general description of AdaRL. In this example, we consider learning policies for Pong (Bellemare et al., 2013) that can easily generalize to different rotations $\omega$ and to images corrupted with white noise. Specifically, given data from $n$ source domains with different rotations and noise variances, we learn a parsimonious latent state representation shared by all domains, denoted by $s_t$, and characterize the changes across domains by a two-dimensional factor $\theta_k$. We identify a set of minimal sufficient representations $(s^{min}_t, \theta^{min}_k)$ for policy transfer. For instance, here only the rotation factor $\omega$ needs adapting (i.e., $\theta^{min}_k = \omega_k$), since the noise factor does not affect the optimal policy. Similarly, as we will show formally in the rest of the paper, not all components $s_{i,t}$ of the state vector $s_t$ are necessary for policy transfer. For example, $s_{2,t} \notin s^{min}_t$, since it never affects the future reward. We learn an optimal policy $\pi^*(\cdot \mid \theta^{min}_k)$ on the source domains.¹ In the target domain, we only need a few samples to quickly estimate the value of the low-dimensional $\theta^{min}_{target}$, and then we can apply $\pi^*(\cdot \mid \theta^{min}_{target})$ directly. Our main contributions are summarized below: • We assume a generative environment model, which explicitly takes into account the structural relationships among the variables in the RL system. Such graphical representations provide a compact way to encode what the changes across domains are and where they occur. • Based on this model, we characterize a minimal set of representations that suffice for policy learning across domains, including the domain-specific change factors and the domain-shared state representations.
With this characterization, we adapt the policy with only a few target samples and without policy optimization in the target domain, achieving low-cost and reliable policy transfer. • By leveraging a compact way to encode the changes, we also benefit from multi-task learning in model estimation. In particular, we propose the Multi-model Structured Sequential Variational Auto-Encoder (MiSS-VAE) for reliable model estimation in general cases. 2 A COMPACT REPRESENTATION OF ENVIRONMENTAL SHIFTS. Suppose there are $n$ source domains and $n'$ target domains. In each source domain, we observe sequences $\{\langle o_t, a_t, r_t\rangle\}_{t=1}^{T}$, where $o_t \in O$ are the perceived signals at time $t$ (e.g., images), $a_t \in A$ is the executed action, and $r_t \in R$ is the reward signal. We denote the underlying latent states by $s_t = (s_{1,t}, \cdots, s_{d,t})^\top$, where $d$ is the dimensionality of the latent states. We assume that the generative process of the environment in the $k$-th domain (with $k = 1, \ldots, n+n'$) can be described in terms of the transition function for each dimension of $s$ and the observation and reward functions as
$$s_{i,t} = f_i\big(\mathbf{c}^{\,s \to s_i} \odot \mathbf{s}_{t-1},\; c^{\,a \to s_i} \cdot a_{t-1},\; \mathbf{c}^{\,\theta_k \to s_i} \odot \theta^s_k,\; \epsilon^s_{i,t}\big), \quad \text{for } i = 1, \cdots, d,$$
$$o_t = g\big(\mathbf{c}^{\,s \to o} \odot \mathbf{s}_t,\; \mathbf{c}^{\,\theta_k \to o} \odot \theta^o_k,\; \epsilon^o_t\big),$$
$$r_t = h\big(\mathbf{c}^{\,s \to r} \odot \mathbf{s}_{t-1},\; c^{\,a \to r} \cdot a_{t-1},\; \mathbf{c}^{\,\theta_k \to r} \odot \theta^r_k,\; \epsilon^r_t\big), \qquad (1)$$
where $\odot$ denotes the element-wise product and the $\epsilon^s_{i,t}, \epsilon^o_t, \epsilon^r_t$ terms are i.i.d. random noises. As explained below, the $\mathbf{c}^{\,\cdot \to \cdot}$ are masks (binary vectors or scalars that represent the structural relationships from one variable to another), and $\theta_k = (\theta^s_k, \theta^o_k, \theta^r_k)$ are the change factors, which have a constant value within each domain but vary across domains in the transition, observation, and reward functions, respectively. The latent states form an MDP: given $s_t$ and $a_t$, $s_{t+1}$ is independent of the previous states and actions. The perceived signals $o_t$ are generated from the underlying states $s_t$.
The actions $a_t$ directly influence the latent states $s_{t+1}$, rather than the observed signals $o_t$, and the reward is determined by the latent states and the action. Eq. 1 can also represent MDPs as a special case if the states $s_t$ are directly observed, in which case the observation function for $o_t$ is not needed. Structural relationships and graphs. Often the action variable $a_{t-1}$ does not influence every dimension of $s_t$, and similarly, the reward $r_t$ may not be influenced by every dimension of $s_{t-1}$. Furthermore, there are structural relationships between the different dimensions of $s_{t-1}$ and $s_t$. To characterize these constraints, we explicitly take into account the graph structure $G$ over the variables in the system, characterized by a Dynamic Bayesian Network (Murphy, 2002), and encode its edges with the masks $\mathbf{c}^{\,\cdot \to \cdot}$. In the first equation in Eq. 1, the transition function for the state component $s_i$ uses the mask $\mathbf{c}^{\,s \to s_i} \in \{0,1\}^d$, whose $j$-th entry is 1 if and only if $s_{j,t}$ influences $s_{i,t+1}$ (graphically represented by an edge), while $c^{\,a \to s_i} \in \{0,1\}$ is 1 if and only if the action $a_t$ has any effect on $s_{i,t+1}$. Similarly, the binary vector $\mathbf{c}^{\,\theta_k \to s_i} \in \{0,1\}^p$ encodes which components of the change factor $\theta^s_k = (\theta^s_{1,k}, \ldots, \theta^s_{p,k})^\top$ affect $s_{i,t+1}$. The masks in the observation function $g$ and the reward function $h$ play similar roles. The masks and the parameters of the functions $f$, $g$, and $h$ are invariant; all changes are encoded in $\theta_k$. For simplicity of notation, we collect all the transition mask vectors in the matrices $\mathbf{C}^{s \to s} := [\mathbf{c}^{\,s \to s_i}]_{i=1}^d$ and $\mathbf{C}^{\theta_k \to s} := [\mathbf{c}^{\,\theta_k \to s_i}]_{i=1}^d$, and the scalars in the vector $\mathbf{c}^{\,a \to s} := [c^{\,a \to s_i}]_{i=1}^d$. Characterization of the change factors in a compact way. In practical scenarios, the environment model may change across domains. Moreover, it is often the case that, given a high-dimensional input, only a few factors change, which is known as the minimal change principle (Ghassami et al.
, 2018) or the sparse mechanism shift assumption (Schölkopf et al., 2021). In such a case, instead of learning the distribution shift over the high-dimensional input, we exploit the parsimonious graphical representation and introduce a low-dimensional vector $\theta_k$ to characterize the domain-specific information in a compact way (Zhang et al., 2020b). Specifically, $\theta^o_k$, $\theta^r_k$, and $\theta^s_k$ capture the change factors in the observation function, reward function, and transition dynamics, respectively; each of them can be multi-dimensional, and they are constant within each domain (¹ We consider optimality of the policy w.r.t. the model estimated on the source domains and the AdaRL assumptions.) In general, $\theta_k$ can capture both changes in the influencing strength and changes in the graph structure, e.g., some edges may appear only in some domains. Since we assume that the structural relationships in Eq. 1 are invariant across domains, the masks $\mathbf{c}^{\,\cdot \to \cdot}$ have to encode an edge even if it is present in only one domain; furthermore, since $\theta_k$ encodes the changes, it can switch the edge off in the other domains. Fig. 1 shows an example of the graphical representation of the (estimated) environment model. Specifically, in this example, $\theta^s_k$ only influences $s_{1,t}$, $a_{t-1}$ does not have an edge to $s_{1,t}$, and among the states, only $s_{d,t-1}$ has an edge to $r_t$. In this example, we consider the case where the control signals are random, so there is no edge between $s_t$ and $a_t$. | Instead of implicitly updating the policy using data from the source domain, the method learns a particularly structured latent model together with the factors of variation, and learns a policy that performs well over some set of those factors. At test time, the factors of variation are estimated and provided as input to the policy. I think this is a good paper but it could benefit from some clearer writing, as discussed in the clarity section below.
| SP:c1c0a7aafffec8e9cf2f2414d40d48db6e32798b |
AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning | 1 INTRODUCTION AND RELATED WORK . Over the last decades , reinforcement learning ( RL ) ( Sutton and Barto , 1998 ) has been successful in many tasks ( Mnih et al. , 2013 ; Silver et al. , 2016 ) . Most of these early successes focus on a fixed task in a fixed environment . However , in real applications we often have changing environments , and it has been demonstrated that the optimal policy learned in a specific domain may not be generalized to other domains ( Taylor and Stone , 2009 ) . In contrast , humans are usually good at transferring acquired knowledge to new environments and tasks both efficiently and effectively ( Pearl and Mackenzie , 2018 ) , thanks to the ability to understand the environments . Generally speaking , to achieve reliable , low-cost , and interpretable transfer , it is essential to understand the underlying process—which decision-making factors have changes , where the changes are , and how they change , instead of transferring blindly ( e.g. , transferring the distribution of high-dimensional images directly ) . There are roughly two research lines in transfer RL ( Taylor and Stone , 2009 ; Zhu et al. , 2020 ) : ( 1 ) finding policies that are robust to environment variations , and ( 2 ) adapting policies from the source domain to the target domain as efficiently as possible . For the first line , the focus is on learning policies that are robust to environment variations , e.g. , by maximizing a risk-sensitive objective over a distribution of environments ( Tamar et al. , 2015 ) or by extracting a set of invariant states ( Zhang et al. , 2020a ; 2021a ; Tomar et al. , 2021 ) . A more recent method encodes task-relevant invariances by putting behaviorally equivalent states together , which helps better generalization ( Agarwal et al. , 2021 ) . 
On the other hand, as the number of domains increases, the common part may become even smaller, running counter to the intention of collecting more information from more domains. Moreover, focusing only on the invariant part and disregarding domain-specific information may not be optimal; for instance, in the context of domain adaptation, it has been demonstrated that the variable part also contains information that helps improve prediction accuracy (Zhang et al., 2020b). In this paper, we propose a method along the second line, adapting source policies to the target. Approaches along this line adapt knowledge from source domains and reuse it in the target domain to improve data efficiency, i.e., so that the agent requires less exploration to learn the target policy. For example, an agent could use importance reweighting on samples $\langle s, a, r, s'\rangle$ from the sources (Tirinzoni et al., 2018; 2019), or start from the optimal source policy to initialize a learner in the target domain as a near-optimal initializer (Taylor et al., 2007; Fernández et al., 2010). Another widely used technique is finetuning: a model is pretrained on a source domain and its output layers are finetuned via backpropagation in the target domain (Hinton and Salakhutdinov, 2006; Mesnil et al., 2012). PNNs (Rusu et al., 2016), instead, retain a pool of pretrained models and learn lateral connections from them to extract useful features for a new task. Moreover, a set of approaches focus on sim2real transfer by adapting the parameters (Yu et al., 2017; Peng et al., 2020). However, many of these approaches still require a large amount of exploration and optimization in the target domain. Recently, meta-RL approaches such as MAML (Finn et al., 2017), PEARL (Rakelly et al., 2019), CAVIA (Zintgraf et al., 2019), Meta-Q-Learning (Fakoor et al., 2020), and others (Mendonca et al., 2019; Nagabandi et al., 2018; Duan et al.
, 2016 ) have been successfully applied to learn an inductive bias that accelerates the learning of a new task by training on a large number of tasks . Some of these methods ( e.g. , CAVIA and PEARL ) , as well as some prior work ( e.g. , HiMDPs ( Doshi-Velez and Konidaris , 2016 ) ) and recent follow-ups ( Zhang et al. , 2021b ) , have a similar motivation to our work : in a new environment not all parameters need to be updated , so we can force the model to only adapt a set of context parameters . However , these methods mostly focus on MDPs ( except the Block MDP assumption in Zhang et al . ( 2021b ) ) and model all changes as a black-box , which may be less efficient for adaptation , as opposed to a factorized representation of change factors . Considering these limitations , we propose AdaRL , a transfer RL approach that achieves low-cost , reliable , and interpretable transfer for partially observable Markov decision processes ( POMDPs ) , with MDPs as a special case . In contrast to state-of-the-art approaches , we learn a parsimonious graphical representation that is able to characterize structural relationships among different dimensions of states , change factors , the perception , the reward variable , and the action variable . It allows us to model changes in transition , observation , and reward functions in a component-wise way . This representation is related to Factored MDPs ( Kearns and Koller , 1999 ; Boutilier et al. , 2000 ; Strehl et al. , 2007 ) and Factored POMDPs ( Katt et al. , 2019 ) , but augmented with change factors that represent a low-dimensional embedding of the changes across domains . Our main motivation is that distribution shifts are usually localized – they are often due to the changes of only a few variables in the generative processes , so we can just adapt the distribution of a small portion of variables ( Huang et al. , 2020 ; Schölkopf et al. 
, 2021 ) and , furthermore , factorized according to the graph structure , each distribution module can be adapted separately ( Schölkopf , 2019 ; Zhang et al. , 2020b ) . In Fig . 1 we give a motivating example and a general description of AdaRL . In this example , we consider learning policies for Pong ( Bellemare et al. , 2013 ) that can easily generalize to different rotations ω and to images corrupted with white noise . Specifically , given data from n source domains with different rotations and noise variances , we learn a parsimonious latent state representation shared by all domains , denoted by st , and characterize the changes across domains by a two-dimensional factor θk . We identify a set of minimal sufficient representations ( smint , θ min k ) for policy transfer . For instance , here only the rotation factor ω needs adapting ( i.e. , θmink = ωk ) , since the noise factor does not affect the optimal policy . Similarly , as we will show formally in the rest of the paper , not all components si , t of the state vector st are necessary for policy transfer . For example , s2 , t 6∈ smint , since it never affects the future reward . We learn an optimal policy π∗ ( ·|θmink ) on source domains 1 . In the target domain , we only need a few samples to quickly estimate the value of the low-dimensional θmintarget , and then we can apply π ∗ ( ·|θmintarget ) directly . Our main contributions are summarized below : • We assume a generative environment model , which explicitly takes into account the structural relationships among variables in the RL system . Such graphical representations provide a compact way to encode what and where the changes across domains are . • Based on this model , we characterize a minimal set of representations that suffice for policy learning across domains , including the domain-specific change factors and domain-shared state representations . 
With this characterization , we adapt the policy with only a few target samples and without policy optimization in the target domain , achieving low-cost and reliable policy transfer . • By leveraging a compact way to encode the changes , we also benefit from multi-task learning in model estimation . In particular , we propose the Multi-model Structured Sequential Variational Auto-Encoder ( MiSS-VAE ) for reliable model estimation in general cases . 2 A COMPACT REPRESENTATION OF ENVIRONMENTAL SHIFTS . Suppose there are n source domains and n′ target domains . In each source domain , we observe sequences { 〈ot , at , rt〉 } Tt=1 , where ot ∈ O are the perceived signals at time t ( e.g. , images ) , at ∈ A is the executed action , and rt ∈ R is the reward signal . We denote the underlying latent states by st = ( s1 , t , · · · , sd , t ) > , where d is the dimensionality of latent states . We assume that the generative process of the environment in the k-th domain ( with k = 1 , . . . n+ n′ ) can be described in terms of the transition function for each dimension of s and the observation and reward functions as si , t = fi ( c s ) s i st−1 , ca ) si · at−1 , c θk ) s i θ s k , s i , t ) , for i = 1 , · · · , d , ot = g ( c s ) o st , cθk ) o · θok , ot ) , rt = h ( c s ) r st−1 , ca ) r · at−1 , cθk ) r · θrk , rt ) , ( 1 ) where denotes the element-wise product , the si , t , ot , rt terms are i.i.d . random noises . As explained below , c· ) · are masks ( binary vectors or scalars that represent structural relationships from one variable to the other ) , and θk = ( θsk , θ o k , θ r k ) are the change factors that have a constant value in each domain , but vary across domains in the transition , observation , and reward function , respectively . The latent states st+1 form an MDP : given st and at , st+1 is independent of previous states and actions . The perceived signals ot are generated from the underlying states st . 
The actions at directly influence the latent states st+1 , instead of the observed signals ot , and the reward is determined by the latent states and the action . Eq . 1 can also represent MDPs as a special case if states st are directly observed , in which case the observation function of ot is not needed . Structural relationships and graphs . Often the action variable at−1 does not influence every dimension of st , and similarly , the reward rt may not be influenced by every dimension of st−1 . Furthermore , there are structural relationships between different dimensions of st−1 and st. To characterize these constraints , we explicitly take into account the graph structure G over the variables in the system characterized by a Dynamic Bayesian Network ( Murphy , 2002 ) and encode the edges with masks c· ) · . In the first equation in Eq . 1 the transition function for the state component si , where the jth entry of cs ) si ∈ { 0 , 1 } d is 1 if and only if sj , t influences si , t+1 ( graphically represented by an edge ) , while ca ) si ∈ { 0 , 1 } is 1 if and only if the action at has any effect on si , t+1 . Similarly , the binary vector cθk ) si ∈ { 0 , 1 } p encodes which components of the change factor θ s k = ( θ s 1 , k , . . . , θ s p , k ) > affect si , t+1 . The masks in the observation function g and reward function h have similar functions . The masks and the parameters of the functions f , g , and h , are invariant ; all changes are encoded in θk . For simplicity of notation , we collect all the transition mask vectors in the matrices Cs ) s : = [ cs ) si ] d i=1 and Cθk ) s : = [ cθk ) si ] d i=1 and the scalars in the vector c a ) s : = [ ca ) si ] d i=1 . Characterization of change factors in a compact way . In practical scenarios , the environment model may change across domains . Moreover , it is often the case that given a high-dimensional input , only a few factors may change , which is known as minimal change principle ( Ghassami et al. 
, 2018 ) or sparse mechanism shift assumption ( Schölkopf et al. , 2021 ) . In such a case , instead of learning the distribution shift over the high-dimensional input , thanks to the parsimonious graphical representation , we introduce a low-dimensional vector θk to characterize the domain-specific information in a compact way ( Zhang et al. , 2020b ) . Specifically , θok , θ r k , and θ s k capture the change factors in the observation function , reward function , and transition dynamics , respectively ; each of them can be 1We consider optimality of the policy w.r.t . the model estimated on source domains and AdaRL assumptions . multi-dimensional and that they are constant within each domain . In general , θk can capture both the changes in the influencing strength and those in the graph structure , e.g. , some edges may appear only in some domains . Since we assume that the structural relationships in Eq . 1 are invariant across domains , this means that the masks c· ) · have to encode an edge even if it presents only in one domain , and furthermore , since θk encodes the changes , it can switch the edge off in other domains . Fig . 1 shows an example of the graphical representation of the ( estimated ) environment model . Specifically , in this example , θsk only influences s1 , t , at−1 does not have an edge to s1 , t , and among the states , only sd , t−1 has an edge to rt . In this example , we consider the case when the control signals are random , so there is no edge between st and at . | The submission proposes a method for transfer in reinforcement learning building on estimating a small set of factors describing a system and modelling the agent as a dynamic bayesian network. In detail, the method splits into domain shared and domain dependent factors which are modelled via a new form of structured sequential VAE. 
The VAE is trained to enable dynamics, observation and reward prediction (including reconstruction, prediction, KL and sparsity regularisation losses). Data for VAE training is collected from random policies and model fitting is followed by policy training on source domains to be transferred to target domains by identifying domain specific features for the target domains. The method is evaluated on variations of a cartpole and a pong domain (both from images) and evaluated against a set of recent, competitive baselines. | SP:c1c0a7aafffec8e9cf2f2414d40d48db6e32798b |
Learn the Time to Learn: Replay Scheduling for Continual Learning | 1 INTRODUCTION . Many organizations deploying machine learning systems receive large volumes of data daily where these new data are often associated with new tasks . Although all historical data are stored in the cloud in practice , retraining machine learning systems on a daily basis is prohibitive both in time and cost . In this setting , the systems must continuously adapt to new tasks without forgetting the previously learned abilities . Continual learning methods ( De Lange et al. , 2019 ; McCloskey & Cohen , 1989 ; Parisi et al. , 2019 ) address this challenge where , in particular , replay-based methods ( Chaudhry et al. , 2019 ; Hayes et al. , 2020 ) have shown to be very effective in achieving great prediction performance and retaining knowledge of old tasks . Replay-based methods mitigate catastrophic forgetting by revisiting a small set of samples , which is feasible to process compared to the size of the historical data . In traditional continual learning literature , the replay memory is limited due to the assumption that historical data are not available . In the real-world setting where historical data are in fact always available , the requirement of small memory remains due to processing time and cost issues . Most research on replay-based continual learning has been focused on the sample quality in the memory ( Aljundi et al. , 2019 ; Borsos et al. , 2020 ; Chaudhry et al. , 2019 ; Chrysakis & Moens , 2020 ; Nguyen et al. , 2017 ; Rebuffi et al. , 2017 ; Yoon et al. , 2021 ) or data compression to increase the memory capacity ( Hayes et al. , 2020 ; Iscen et al. , 2020 ; Pellegrini et al. , 2019 ) . Common for these methods is that the memory allocates an equal amount of space for storing samples from old tasks . When learning new tasks , the whole memory is replayed to mitigate catastrophic forgetting . 
However , in life-long learning settings , this simple strategy would be inefficient as the memory must store a large number of tasks . Furthermore , these methods ignore the time to learn old tasks again which is important in human learning . Humans are continual learning systems , and different methods have been developed to enhance memory retention , such as spaced repetition ( Dempster , 1989 ; Ebbinghaus , 2013 ; Landauer & Bjork , 1977 ) which is used often in education . These education methods focus on the scheduling of learning and rehearsal of previous learned knowledge . In this work , we argue that finding the proper schedule of what tasks to replay in the fixed memory setting is critical for continual learning . To demonstrate our claim , we perform a simple experiment on the Split MNIST ( Zenke et al. , 2017 ) dataset where each task consists of learning the digits 0/1 , 2/3 , etc . arriving in sequence . The replay memory contains data from task 1 and can only be replayed at one point in time . Figure 1 shows how the task performances progress over time when the memory is replayed at different time steps . In this example , the best final performance is achieved when the memory is used when learning task 5 . Note that choosing different time points to replay the same memory leads to noticeably different results in the final performance . These results indicate that scheduling the time when to apply replay can influence the final performance significantly of a continual learning system . To this end , we propose learning the time to learn , in which we learn replay schedules of which tasks to replay at different times inspired from human learning ( Dempster , 1989 ) . We illustrate the advantages with replay scheduling by using Monte Carlo tree search ( MCTS ) ( Coulom , 2006 ) to learn policies for replay . 
More specifically , we train a neural network on the current task dataset mixed with the scheduled replay samples and measure the final performance of the network to evaluate the replay schedules selected by MCTS . In summary , our contributions are : • We demonstrate the importance of replay scheduling in continual learning and propose to learn the time to learn which tasks to replay ( Section 3.1 and 3.2 ) . • We use MCTS as an example method to illustrate how replay schedules for continual learning can be learned by establishing a finite set of memory compositions that can be replayed at every task ( Section 3.3 ) . • We demonstrate with several benchmark datasets that learned scheduling can improve the continual learning performance significantly in the fixed size memory setting ( Section 4.1 and 4.4 ) . Furthermore , we show that our method can be combined with any other memory selection methods ( Section 4.3 ) , as well as being efficient in situations where the memory size is even smaller than the number of classes ( Section 4.5 ) . 2 RELATED WORK . In this section , we give a brief overview of continual learning methods , essentially replay-based methods , as well as spaced repetition techniques for human continual learning . Continual Learning . Traditional continual learning can be divided into three main areas , namely regularization-based , architecture-based , and replay-based approaches . Regularization-based methods aim to mitigate catastrophic forgetting by protecting parameters influencing the predictive performance from wide changes and use the rest of the parameters for learning the new tasks ( Adel et al. , 2019 ; Chaudhry et al. , 2018a ; Kirkpatrick et al. , 2017 ; Li & Hoiem , 2017 ; Nguyen et al. , 2017 ; Rannen et al. , 2017 ; Schwarz et al. , 2018 ; Zenke et al. , 2017 ) . Architecture-based methods isolate task-specific parameters by either increasing network capacity ( Rusu et al. , 2016 ; Yoon et al. 
, 2019 ; 2017 ) or freezing parts of the network ( Mallya & Lazebnik , 2018 ; Serra et al. , 2018 ) to maintain good performance on previous tasks . Replay-based methods mix samples from old tasks with the current dataset to mitigate catastrophic forgetting , where the replay samples are either stored in an external memory ( Chaudhry et al. , 2019 ; Hayes et al. , 2020 ; Isele & Cosgun , 2018 ; Lopez-Paz & Ranzato , 2017 ) or generated using a generative model ( Shin et al. , 2017 ; van de Ven & Tolias , 2018 ) . Regularization-based approaches and dynamic architectures have been combined with replay-based approaches to methods to overcome their limitations ( Chaudhry et al. , 2018a ; b ; Douillard et al. , 2020 ; Ebrahimi et al. , 2020 ; Joseph & Balasubramanian , 2020 ; Mirzadeh et al. , 2020 ; Nguyen et al. , 2017 ; Pan et al. , 2020 ; Pellegrini et al. , 2019 ; Rolnick et al. , 2018 ; von Oswald et al. , 2019 ) . Our work relates most to replay-based methods with external memory which we spend more time on describing in the next paragraph . Replay-based Continual Learning . A commonly used memory selection strategy of replay samples is random selection . Much research effort has focused on selecting higher quality samples to store in memory ( Aljundi et al. , 2019 ; Borsos et al. , 2020 ; Chaudhry et al. , 2019 ; Chrysakis & Moens , 2020 ; Hayes et al. , 2019 ; Isele & Cosgun , 2018 ; Lopez-Paz & Ranzato , 2017 ; Nguyen et al. , 2017 ; Rebuffi et al. , 2017 ; Yoon et al. , 2021 ) . Chaudhry et al . ( 2019 ) reviews several selection strategies in scenarios with tiny memory capacity , e.g. , reservoir sampling ( Vitter , 1985 ) , first-in first-out buffer ( Lopez-Paz & Ranzato , 2017 ) , k-Means , and Mean-of-Features ( Rebuffi et al. , 2017 ) . However , more elaborate selection strategies have been shown to give little benefit over random selection for image classification problems ( Chaudhry et al. , 2018a ; Hayes et al. , 2020 ) . 
More recently , there has been work on compressing raw images to feature representations to increase the number of memory examples for replay ( Hayes et al. , 2020 ; Iscen et al. , 2020 ; Pellegrini et al. , 2019 ) . Our approach differs from the above mentioned works since we focus on learning to select which tasks to replay at the current task rather than improving memory selection or compression quality of the samples in the memory . Replay scheduling can however be combined with any selection strategy as well as storing feature representations . Human Continual Learning . Humans are continual learning systems in the sense of learning tasks and concepts sequentially . Furthermore , humans have an impressive ability to memorize experiences but can forget learned knowledge gradually rather than catastrophically ( French , 1999 ) . Different learning techniques have been suggested for humans to memorize better ( Dunlosky et al. , 2013 ; Willis , 2007 ) . An example is spaced repetition which gradually increases time-intervals between rehearsals for retaining long-term memory ( Dempster , 1989 ) . This technique has been studied frequently and was inspired from the works of Ebbinghaus ( 2013 ) on memory retention . For example , Landauer & Bjork ( 1977 ) demonstrated that memory training schedules using adjusted spaced repetition were better at preserving memory than uniformly spaced training . Hawley et al . ( 2008 ) studies the efficacy of spaced repetition on adults with probable Alzheimer ’ s disease for learning face-name association . Several works in continual learning with neural networks are inspired by or have a connection to human learning techniques , including spaced repetition ( Amiri , 2019 ; Amiri et al. , 2017 ; Feng et al. , 2019 ; Smolen et al. , 2016 ) , mechanisms of sleep ( Ball et al. , 2020 ; Mallya & Lazebnik , 2018 ; Schwarz et al. , 2018 ) , and reactivation of memories ( Hayes et al. , 2020 ; van de Ven et al. , 2020 ) . 
Our replay scheduling method is inspired by spaced repetition ; we learn schedules of which memory samples to use for replay at different time steps . 3 METHOD . In this section , we describe our method for learning replay schedules for continual learning . The idea is to learn schedules of which memory examples the network should rehearse at different times . We use Monte Carlo tree search ( MCTS ) ( Browne et al. , 2012 ; Coulom , 2006 ) to learn a scheduling policy by encouraging searches for promising replay schedules based on the classification accuracy . 3.1 PROBLEM SETTING . We focus on the setting considering the real-world continual learning needs where all historical data are available but are prohibitively large . Therefore , only a small amount of historical data can be used when adapting the model to new data due to processing capability consideration . Thus , the goal is to learn how to select subsets of historical data to efficiently mitigate catastrophic forgetting when learning new tasks . We refer to these subsets of historical data as the replay memory throughout the paper , where the size of the replay memory affects the processing time when learning a new task t. Moreover , we focus on composing the replay memory based on the seen tasks in the historical data rather than single stored instances . Next , we introduce the notation of our problem setting which resembles the traditional continual learning setting for image classification . We let a neural network fθ , parameterized by θ , learn T tasks sequentially given their corresponding task datasets D1 , . . . , DT arriving in order . The t-th datasetDt = { ( x ( i ) t , y ( i ) t ) } Nt i=1 consists of Nt samples where x ( i ) t and y ( i ) t are the i-th data point and class label respectively . The training objective at task t is given by min θ Nt∑ i=1 ` ( fθ ( x ( i ) t ) , y ( i ) t ) , ( 1 ) where ` ( · ) is the loss function , e.g. , cross-entropy loss in our case . 
Since Dt is only accessible at time step t , the network fθ is at risk of catastrophically forgetting the previous t − 1 tasks when learning the current task . Replay-based continual learning methods mitigate the forgetting of old tasks by storing old examples in an external replay memory , that is mixed with the current task dataset during training . Next , we describe our method for constructing this replay memory . We assume that historical data from old tasks are accessible at any time step t. However , since the historical data is prohibitively large , we can only fill a small replay memoryM with M historical samples for replay due to processing time constraints . The challenge is how to fill the replay memory withM samples that efficiently retain the knowledge of old tasks when learning new tasks . We focus on selecting the samples on task-level by deciding on the task proportion ( a1 , . . . , at−1 ) of samples to fetch from each task , where ai ≥ 0 is the proportion ofM examples from task i and ∑t−1 i=1 ai = 1 . Consequently , we need a method for choosing these task proportions of which old tasks to replay . To simplify this selection , we construct a discrete set of choices for possible task proportions telling how many samples from each task to use when constructing the replay memoryM . | The key motivation of this work is that the bottleneck of replay in continual learning is the processing time in each training cycle and not storage space for the historical dataset. Hence, this work has been approached from the angle of fixed-sized memory allowance for each experience training cycle. The main research question of this work is: Can the replay schedule (i.e. which task is replayed is which time) significantly affect the training process over time? After demonstrating the effect of the replay schedule, a monte carlo tree search is introduced as a methodology to learn an optimal replay schedule for a series of tasks. 
This approach shows improvement over naive selection process of replay memory as well as gives the flexibility to apply any selection process for individual sample points for each task. Finally, the efficiency of this approach is tested in extreme scenarios where the replay memory is smaller than the number of classes (i.e. in the training cycles, samples of some classes will not be sampled). Experiments on 6 datasets (varying over 5, 10, and 20 tasks) is used as empirical proof of performance. | SP:673e72d300d7e740b99c79b84783bda6616b1337 |
Learn the Time to Learn: Replay Scheduling for Continual Learning | 1 INTRODUCTION . Many organizations deploying machine learning systems receive large volumes of data daily where these new data are often associated with new tasks . Although all historical data are stored in the cloud in practice , retraining machine learning systems on a daily basis is prohibitive both in time and cost . In this setting , the systems must continuously adapt to new tasks without forgetting the previously learned abilities . Continual learning methods ( De Lange et al. , 2019 ; McCloskey & Cohen , 1989 ; Parisi et al. , 2019 ) address this challenge where , in particular , replay-based methods ( Chaudhry et al. , 2019 ; Hayes et al. , 2020 ) have shown to be very effective in achieving great prediction performance and retaining knowledge of old tasks . Replay-based methods mitigate catastrophic forgetting by revisiting a small set of samples , which is feasible to process compared to the size of the historical data . In traditional continual learning literature , the replay memory is limited due to the assumption that historical data are not available . In the real-world setting where historical data are in fact always available , the requirement of small memory remains due to processing time and cost issues . Most research on replay-based continual learning has been focused on the sample quality in the memory ( Aljundi et al. , 2019 ; Borsos et al. , 2020 ; Chaudhry et al. , 2019 ; Chrysakis & Moens , 2020 ; Nguyen et al. , 2017 ; Rebuffi et al. , 2017 ; Yoon et al. , 2021 ) or data compression to increase the memory capacity ( Hayes et al. , 2020 ; Iscen et al. , 2020 ; Pellegrini et al. , 2019 ) . Common for these methods is that the memory allocates an equal amount of space for storing samples from old tasks . When learning new tasks , the whole memory is replayed to mitigate catastrophic forgetting . 
However , in life-long learning settings , this simple strategy would be inefficient as the memory must store a large number of tasks . Furthermore , these methods ignore the time to learn old tasks again which is important in human learning . Humans are continual learning systems , and different methods have been developed to enhance memory retention , such as spaced repetition ( Dempster , 1989 ; Ebbinghaus , 2013 ; Landauer & Bjork , 1977 ) which is used often in education . These education methods focus on the scheduling of learning and rehearsal of previous learned knowledge . In this work , we argue that finding the proper schedule of what tasks to replay in the fixed memory setting is critical for continual learning . To demonstrate our claim , we perform a simple experiment on the Split MNIST ( Zenke et al. , 2017 ) dataset where each task consists of learning the digits 0/1 , 2/3 , etc . arriving in sequence . The replay memory contains data from task 1 and can only be replayed at one point in time . Figure 1 shows how the task performances progress over time when the memory is replayed at different time steps . In this example , the best final performance is achieved when the memory is used when learning task 5 . Note that choosing different time points to replay the same memory leads to noticeably different results in the final performance . These results indicate that scheduling the time when to apply replay can influence the final performance significantly of a continual learning system . To this end , we propose learning the time to learn , in which we learn replay schedules of which tasks to replay at different times inspired from human learning ( Dempster , 1989 ) . We illustrate the advantages with replay scheduling by using Monte Carlo tree search ( MCTS ) ( Coulom , 2006 ) to learn policies for replay . 
More specifically , we train a neural network on the current task dataset mixed with the scheduled replay samples and measure the final performance of the network to evaluate the replay schedules selected by MCTS . In summary , our contributions are : • We demonstrate the importance of replay scheduling in continual learning and propose to learn the time to learn which tasks to replay ( Section 3.1 and 3.2 ) . • We use MCTS as an example method to illustrate how replay schedules for continual learning can be learned by establishing a finite set of memory compositions that can be replayed at every task ( Section 3.3 ) . • We demonstrate with several benchmark datasets that learned scheduling can improve the continual learning performance significantly in the fixed size memory setting ( Section 4.1 and 4.4 ) . Furthermore , we show that our method can be combined with any other memory selection methods ( Section 4.3 ) , as well as being efficient in situations where the memory size is even smaller than the number of classes ( Section 4.5 ) . 2 RELATED WORK . In this section , we give a brief overview of continual learning methods , essentially replay-based methods , as well as spaced repetition techniques for human continual learning . Continual Learning . Traditional continual learning can be divided into three main areas , namely regularization-based , architecture-based , and replay-based approaches . Regularization-based methods aim to mitigate catastrophic forgetting by protecting parameters influencing the predictive performance from wide changes and use the rest of the parameters for learning the new tasks ( Adel et al. , 2019 ; Chaudhry et al. , 2018a ; Kirkpatrick et al. , 2017 ; Li & Hoiem , 2017 ; Nguyen et al. , 2017 ; Rannen et al. , 2017 ; Schwarz et al. , 2018 ; Zenke et al. , 2017 ) . Architecture-based methods isolate task-specific parameters by either increasing network capacity ( Rusu et al. , 2016 ; Yoon et al. 
, 2019 ; 2017 ) or freezing parts of the network ( Mallya & Lazebnik , 2018 ; Serra et al. , 2018 ) to maintain good performance on previous tasks . Replay-based methods mix samples from old tasks with the current dataset to mitigate catastrophic forgetting , where the replay samples are either stored in an external memory ( Chaudhry et al. , 2019 ; Hayes et al. , 2020 ; Isele & Cosgun , 2018 ; Lopez-Paz & Ranzato , 2017 ) or generated using a generative model ( Shin et al. , 2017 ; van de Ven & Tolias , 2018 ) . Regularization-based approaches and dynamic architectures have been combined with replay-based approaches to methods to overcome their limitations ( Chaudhry et al. , 2018a ; b ; Douillard et al. , 2020 ; Ebrahimi et al. , 2020 ; Joseph & Balasubramanian , 2020 ; Mirzadeh et al. , 2020 ; Nguyen et al. , 2017 ; Pan et al. , 2020 ; Pellegrini et al. , 2019 ; Rolnick et al. , 2018 ; von Oswald et al. , 2019 ) . Our work relates most to replay-based methods with external memory which we spend more time on describing in the next paragraph . Replay-based Continual Learning . A commonly used memory selection strategy of replay samples is random selection . Much research effort has focused on selecting higher quality samples to store in memory ( Aljundi et al. , 2019 ; Borsos et al. , 2020 ; Chaudhry et al. , 2019 ; Chrysakis & Moens , 2020 ; Hayes et al. , 2019 ; Isele & Cosgun , 2018 ; Lopez-Paz & Ranzato , 2017 ; Nguyen et al. , 2017 ; Rebuffi et al. , 2017 ; Yoon et al. , 2021 ) . Chaudhry et al . ( 2019 ) reviews several selection strategies in scenarios with tiny memory capacity , e.g. , reservoir sampling ( Vitter , 1985 ) , first-in first-out buffer ( Lopez-Paz & Ranzato , 2017 ) , k-Means , and Mean-of-Features ( Rebuffi et al. , 2017 ) . However , more elaborate selection strategies have been shown to give little benefit over random selection for image classification problems ( Chaudhry et al. , 2018a ; Hayes et al. , 2020 ) . 
More recently, there has been work on compressing raw images into feature representations to increase the number of memory examples available for replay (Hayes et al., 2020; Iscen et al., 2020; Pellegrini et al., 2019). Our approach differs from the above-mentioned works since we focus on learning to select which tasks to replay at the current task rather than improving the memory selection or the compression quality of the samples in the memory. Replay scheduling can, however, be combined with any selection strategy, as well as with storing feature representations. Human Continual Learning. Humans are continual learning systems in the sense of learning tasks and concepts sequentially. Furthermore, humans have an impressive ability to memorize experiences but forget learned knowledge gradually rather than catastrophically (French, 1999). Different learning techniques have been suggested to help humans memorize better (Dunlosky et al., 2013; Willis, 2007). An example is spaced repetition, which gradually increases the time intervals between rehearsals to retain long-term memory (Dempster, 1989). This technique has been studied frequently and was inspired by the work of Ebbinghaus (2013) on memory retention. For example, Landauer & Bjork (1977) demonstrated that memory training schedules using adjusted spaced repetition were better at preserving memory than uniformly spaced training. Hawley et al. (2008) studied the efficacy of spaced repetition for learning face-name associations in adults with probable Alzheimer's disease. Several works in continual learning with neural networks are inspired by or have a connection to human learning techniques, including spaced repetition (Amiri, 2019; Amiri et al., 2017; Feng et al., 2019; Smolen et al., 2016), mechanisms of sleep (Ball et al., 2020; Mallya & Lazebnik, 2018; Schwarz et al., 2018), and reactivation of memories (Hayes et al., 2020; van de Ven et al., 2020).
Our replay scheduling method is inspired by spaced repetition; we learn schedules of which memory samples to use for replay at different time steps. 3 METHOD. In this section, we describe our method for learning replay schedules for continual learning. The idea is to learn schedules of which memory examples the network should rehearse at different times. We use Monte Carlo tree search (MCTS) (Browne et al., 2012; Coulom, 2006) to learn a scheduling policy by encouraging searches for promising replay schedules based on the classification accuracy. 3.1 PROBLEM SETTING. We focus on a setting that reflects real-world continual learning needs, where all historical data are available but prohibitively large. Therefore, only a small amount of historical data can be used when adapting the model to new data, due to processing capability considerations. Thus, the goal is to learn how to select subsets of historical data that efficiently mitigate catastrophic forgetting when learning new tasks. We refer to these subsets of historical data as the replay memory throughout the paper, where the size of the replay memory affects the processing time when learning a new task $t$. Moreover, we focus on composing the replay memory based on the seen tasks in the historical data rather than on single stored instances. Next, we introduce the notation of our problem setting, which resembles the traditional continual learning setting for image classification. We let a neural network $f_\theta$, parameterized by $\theta$, learn $T$ tasks sequentially given their corresponding task datasets $\mathcal{D}_1, \ldots, \mathcal{D}_T$ arriving in order. The $t$-th dataset $\mathcal{D}_t = \{(x_t^{(i)}, y_t^{(i)})\}_{i=1}^{N_t}$ consists of $N_t$ samples, where $x_t^{(i)}$ and $y_t^{(i)}$ are the $i$-th data point and class label, respectively. The training objective at task $t$ is given by $$\min_\theta \sum_{i=1}^{N_t} \ell\big(f_\theta(x_t^{(i)}), y_t^{(i)}\big), \qquad (1)$$ where $\ell(\cdot)$ is the loss function, e.g., the cross-entropy loss in our case.
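As a concrete illustration of the objective in Eq. (1) combined with replay, the following sketch performs one gradient step on a current-task batch mixed with replay samples. It is a hedged simplification: the paper trains a neural network, whereas this uses a plain linear softmax classifier, and the function names are our own; only the cross-entropy objective and the current-data/replay-memory mixing follow the text above.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable sum of per-sample cross-entropy losses, as in Eq. (1).
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].sum()

def replay_training_step(W, X_t, y_t, X_mem, y_mem, lr=0.1):
    """One gradient step of a linear classifier with logits X @ W on the
    current task batch mixed with the scheduled replay samples (a toy
    stand-in for one step of neural-network training)."""
    X = np.vstack([X_t, X_mem])            # mix current data with replay memory
    y = np.concatenate([y_t, y_mem])
    z = X @ W
    z -= z.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0     # d(cross-entropy)/d(logits)
    grad_W = X.T @ probs
    return W - lr * grad_W / len(y)
```

In this sketch, scheduling amounts to choosing which samples populate `X_mem` and `y_mem` at each task.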
Since $\mathcal{D}_t$ is only accessible at time step $t$, the network $f_\theta$ is at risk of catastrophically forgetting the previous $t-1$ tasks when learning the current task. Replay-based continual learning methods mitigate the forgetting of old tasks by storing old examples in an external replay memory that is mixed with the current task dataset during training. Next, we describe our method for constructing this replay memory. We assume that historical data from old tasks are accessible at any time step $t$. However, since the historical data are prohibitively large, we can only fill a small replay memory $\mathcal{M}$ with $M$ historical samples for replay, due to processing time constraints. The challenge is how to fill the replay memory with $M$ samples that efficiently retain the knowledge of old tasks when learning new tasks. We focus on selecting the samples at the task level by deciding on the task proportions $(a_1, \ldots, a_{t-1})$ of samples to fetch from each task, where $a_i \geq 0$ is the proportion of the $M$ examples taken from task $i$ and $\sum_{i=1}^{t-1} a_i = 1$. Consequently, we need a method for choosing these task proportions, i.e., which old tasks to replay. To simplify this selection, we construct a discrete set of choices for the possible task proportions, telling how many samples from each task to use when constructing the replay memory $\mathcal{M}$. | The paper proposes a new scheduling technique for constructing the replay buffer during training. Unlike existing baselines, which use a task-equal selection, the authors suggest a dynamic selection of the past tasks' replay exemplars, based on the observation that the time at which past tasks are revisited affects the averaged performance of the continual learner. The technique is simple yet seems effective compared to a task-equal selection schedule. | SP:673e72d300d7e740b99c79b84783bda6616b1337
Learn the Time to Learn: Replay Scheduling for Continual Learning | 1 INTRODUCTION. Many organizations deploying machine learning systems receive large volumes of data daily, and these new data are often associated with new tasks. Although all historical data are stored in the cloud in practice, retraining machine learning systems on a daily basis is prohibitive in both time and cost. In this setting, the systems must continuously adapt to new tasks without forgetting previously learned abilities. Continual learning methods (De Lange et al., 2019; McCloskey & Cohen, 1989; Parisi et al., 2019) address this challenge, where, in particular, replay-based methods (Chaudhry et al., 2019; Hayes et al., 2020) have been shown to be very effective at achieving strong prediction performance and retaining knowledge of old tasks. Replay-based methods mitigate catastrophic forgetting by revisiting a small set of samples, which is feasible to process compared to the size of the historical data. In the traditional continual learning literature, the replay memory is limited due to the assumption that historical data are not available. In the real-world setting, where historical data are in fact always available, the requirement of a small memory remains due to processing time and cost issues. Most research on replay-based continual learning has focused on the sample quality in the memory (Aljundi et al., 2019; Borsos et al., 2020; Chaudhry et al., 2019; Chrysakis & Moens, 2020; Nguyen et al., 2017; Rebuffi et al., 2017; Yoon et al., 2021) or on data compression to increase the memory capacity (Hayes et al., 2020; Iscen et al., 2020; Pellegrini et al., 2019). Common to these methods is that the memory allocates an equal amount of space for storing samples from each old task. When learning new tasks, the whole memory is replayed to mitigate catastrophic forgetting.
However, in life-long learning settings, this simple strategy would be inefficient, as the memory must store a large number of tasks. Furthermore, these methods ignore the time at which old tasks are learned again, which is important in human learning. Humans are continual learning systems, and different methods have been developed to enhance memory retention, such as spaced repetition (Dempster, 1989; Ebbinghaus, 2013; Landauer & Bjork, 1977), which is often used in education. These educational methods focus on the scheduling of learning and the rehearsal of previously learned knowledge. In this work, we argue that finding the proper schedule of which tasks to replay in the fixed-memory setting is critical for continual learning. To demonstrate our claim, we perform a simple experiment on the Split MNIST (Zenke et al., 2017) dataset, where each task consists of learning the digits 0/1, 2/3, etc., arriving in sequence. The replay memory contains data from task 1 and can only be replayed at one point in time. Figure 1 shows how the task performances progress over time when the memory is replayed at different time steps. In this example, the best final performance is achieved when the memory is used when learning task 5. Note that choosing different time points to replay the same memory leads to noticeably different final performance. These results indicate that scheduling the time when replay is applied can significantly influence the final performance of a continual learning system. To this end, we propose learning the time to learn, in which we learn replay schedules of which tasks to replay at different times, inspired by human learning (Dempster, 1989). We illustrate the advantages of replay scheduling by using Monte Carlo tree search (MCTS) (Coulom, 2006) to learn policies for replay.
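The Figure 1 experiment enumerates the single time step at which the task-1 memory may be replayed and compares the resulting final performance. The sketch below mimics only the structure of that protocol: `final_task1_accuracy` is an invented toy forgetting model with arbitrary constants, a stand-in for actually retraining the network, not the paper's experiment.

```python
def final_task1_accuracy(replay_step, n_tasks=5, decay=0.6, recovered=0.95):
    """Toy forgetting model: task-1 accuracy is eroded by a constant factor
    after each new task, and replaying the task-1 memory while learning
    task `replay_step` restores it."""
    acc = 0.99
    for t in range(2, n_tasks + 1):
        acc *= decay                 # learning task t erodes task-1 knowledge
        if t == replay_step:
            acc = recovered          # replaying the memory restores task 1
    return acc

# Enumerate every possible single replay time and keep the best choice.
best_step = max(range(2, 6), key=final_task1_accuracy)
```

In this toy, as in the reported experiment, replaying while learning the last task yields the best final task-1 accuracy, and the spread across choices illustrates why the replay time matters.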
More specifically, we train a neural network on the current task dataset mixed with the scheduled replay samples and measure the final performance of the network to evaluate the replay schedules selected by MCTS. In summary, our contributions are: • We demonstrate the importance of replay scheduling in continual learning and propose to learn the time to learn which tasks to replay (Sections 3.1 and 3.2). • We use MCTS as an example method to illustrate how replay schedules for continual learning can be learned by establishing a finite set of memory compositions that can be replayed at every task (Section 3.3). • We demonstrate on several benchmark datasets that learned scheduling can improve continual learning performance significantly in the fixed-size memory setting (Sections 4.1 and 4.4). Furthermore, we show that our method can be combined with any other memory selection method (Section 4.3), and that it remains efficient in situations where the memory size is even smaller than the number of classes (Section 4.5). 2 RELATED WORK. In this section, we give a brief overview of continual learning methods, especially replay-based methods, as well as spaced repetition techniques for human continual learning. Continual Learning. Traditional continual learning can be divided into three main areas, namely regularization-based, architecture-based, and replay-based approaches. Regularization-based methods aim to mitigate catastrophic forgetting by protecting the parameters that influence predictive performance from large changes and using the remaining parameters for learning new tasks (Adel et al., 2019; Chaudhry et al., 2018a; Kirkpatrick et al., 2017; Li & Hoiem, 2017; Nguyen et al., 2017; Rannen et al., 2017; Schwarz et al., 2018; Zenke et al., 2017). Architecture-based methods isolate task-specific parameters by either increasing network capacity (Rusu et al., 2016; Yoon et al.
, 2019; 2017) or freezing parts of the network (Mallya & Lazebnik, 2018; Serra et al., 2018) to maintain good performance on previous tasks. Replay-based methods mix samples from old tasks with the current dataset to mitigate catastrophic forgetting, where the replay samples are either stored in an external memory (Chaudhry et al., 2019; Hayes et al., 2020; Isele & Cosgun, 2018; Lopez-Paz & Ranzato, 2017) or generated using a generative model (Shin et al., 2017; van de Ven & Tolias, 2018). Regularization-based approaches and dynamic architectures have been combined with replay-based approaches to overcome their limitations (Chaudhry et al., 2018a;b; Douillard et al., 2020; Ebrahimi et al., 2020; Joseph & Balasubramanian, 2020; Mirzadeh et al., 2020; Nguyen et al., 2017; Pan et al., 2020; Pellegrini et al., 2019; Rolnick et al., 2018; von Oswald et al., 2019). Our work relates most closely to replay-based methods with an external memory, which we describe in more detail in the next paragraph. Replay-based Continual Learning. A commonly used memory selection strategy for replay samples is random selection. Much research effort has focused on selecting higher-quality samples to store in memory (Aljundi et al., 2019; Borsos et al., 2020; Chaudhry et al., 2019; Chrysakis & Moens, 2020; Hayes et al., 2019; Isele & Cosgun, 2018; Lopez-Paz & Ranzato, 2017; Nguyen et al., 2017; Rebuffi et al., 2017; Yoon et al., 2021). Chaudhry et al. (2019) review several selection strategies in scenarios with tiny memory capacity, e.g., reservoir sampling (Vitter, 1985), a first-in first-out buffer (Lopez-Paz & Ranzato, 2017), k-Means, and Mean-of-Features (Rebuffi et al., 2017). However, more elaborate selection strategies have been shown to give little benefit over random selection for image classification problems (Chaudhry et al., 2018a; Hayes et al., 2020).
More recently, there has been work on compressing raw images into feature representations to increase the number of memory examples available for replay (Hayes et al., 2020; Iscen et al., 2020; Pellegrini et al., 2019). Our approach differs from the above-mentioned works since we focus on learning to select which tasks to replay at the current task rather than improving the memory selection or the compression quality of the samples in the memory. Replay scheduling can, however, be combined with any selection strategy, as well as with storing feature representations. Human Continual Learning. Humans are continual learning systems in the sense of learning tasks and concepts sequentially. Furthermore, humans have an impressive ability to memorize experiences but forget learned knowledge gradually rather than catastrophically (French, 1999). Different learning techniques have been suggested to help humans memorize better (Dunlosky et al., 2013; Willis, 2007). An example is spaced repetition, which gradually increases the time intervals between rehearsals to retain long-term memory (Dempster, 1989). This technique has been studied frequently and was inspired by the work of Ebbinghaus (2013) on memory retention. For example, Landauer & Bjork (1977) demonstrated that memory training schedules using adjusted spaced repetition were better at preserving memory than uniformly spaced training. Hawley et al. (2008) studied the efficacy of spaced repetition for learning face-name associations in adults with probable Alzheimer's disease. Several works in continual learning with neural networks are inspired by or have a connection to human learning techniques, including spaced repetition (Amiri, 2019; Amiri et al., 2017; Feng et al., 2019; Smolen et al., 2016), mechanisms of sleep (Ball et al., 2020; Mallya & Lazebnik, 2018; Schwarz et al., 2018), and reactivation of memories (Hayes et al., 2020; van de Ven et al., 2020).
Our replay scheduling method is inspired by spaced repetition; we learn schedules of which memory samples to use for replay at different time steps. 3 METHOD. In this section, we describe our method for learning replay schedules for continual learning. The idea is to learn schedules of which memory examples the network should rehearse at different times. We use Monte Carlo tree search (MCTS) (Browne et al., 2012; Coulom, 2006) to learn a scheduling policy by encouraging searches for promising replay schedules based on the classification accuracy. 3.1 PROBLEM SETTING. We focus on a setting that reflects real-world continual learning needs, where all historical data are available but prohibitively large. Therefore, only a small amount of historical data can be used when adapting the model to new data, due to processing capability considerations. Thus, the goal is to learn how to select subsets of historical data that efficiently mitigate catastrophic forgetting when learning new tasks. We refer to these subsets of historical data as the replay memory throughout the paper, where the size of the replay memory affects the processing time when learning a new task $t$. Moreover, we focus on composing the replay memory based on the seen tasks in the historical data rather than on single stored instances. Next, we introduce the notation of our problem setting, which resembles the traditional continual learning setting for image classification. We let a neural network $f_\theta$, parameterized by $\theta$, learn $T$ tasks sequentially given their corresponding task datasets $\mathcal{D}_1, \ldots, \mathcal{D}_T$ arriving in order. The $t$-th dataset $\mathcal{D}_t = \{(x_t^{(i)}, y_t^{(i)})\}_{i=1}^{N_t}$ consists of $N_t$ samples, where $x_t^{(i)}$ and $y_t^{(i)}$ are the $i$-th data point and class label, respectively. The training objective at task $t$ is given by $$\min_\theta \sum_{i=1}^{N_t} \ell\big(f_\theta(x_t^{(i)}), y_t^{(i)}\big), \qquad (1)$$ where $\ell(\cdot)$ is the loss function, e.g., the cross-entropy loss in our case.
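The method above states that MCTS searches over replay schedules scored by classification accuracy. As a hedged illustration, here is a minimal, generic UCB1 tree search over one discrete choice per task; it is not the paper's implementation, and `reward_fn` stands in for training the network under a schedule and measuring its final accuracy.

```python
import math, random

def mcts_best_schedule(choices_per_task, reward_fn, n_iter=2000, c=1.4):
    """Minimal UCB1 tree search over replay schedules (one discrete choice
    per task). `reward_fn` maps a complete schedule (tuple of indices) to a
    scalar, standing in for the final accuracy of a trained network."""
    stats = {}  # schedule prefix -> [visit count, summed reward]

    def select(prefix, depth):
        # Walk down the tree, expanding unvisited children first, otherwise
        # following the child with the highest UCB1 score.
        if depth == len(choices_per_task):
            return prefix
        options = [prefix + (a,) for a in range(choices_per_task[depth])]
        unvisited = [o for o in options if o not in stats]
        if unvisited:
            child = random.choice(unvisited)
        else:
            n_parent = sum(stats[o][0] for o in options)
            child = max(options, key=lambda o: stats[o][1] / stats[o][0]
                        + c * math.sqrt(math.log(n_parent) / stats[o][0]))
        return select(child, depth + 1)

    for _ in range(n_iter):
        schedule = select((), 0)
        r = reward_fn(schedule)
        for k in range(1, len(schedule) + 1):   # back up along the path
            node = stats.setdefault(schedule[:k], [0, 0.0])
            node[0] += 1
            node[1] += r

    # Extract the greedy schedule by following the most-visited child.
    best = ()
    for depth, n_options in enumerate(choices_per_task):
        visited = [best + (a,) for a in range(n_options) if best + (a,) in stats]
        best = max(visited, key=lambda o: stats[o][0])
    return best
```

A schedule here is one index per task into the discrete set of memory compositions; the paper's reward is the classification accuracy of the network trained under that schedule.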
Since $\mathcal{D}_t$ is only accessible at time step $t$, the network $f_\theta$ is at risk of catastrophically forgetting the previous $t-1$ tasks when learning the current task. Replay-based continual learning methods mitigate the forgetting of old tasks by storing old examples in an external replay memory that is mixed with the current task dataset during training. Next, we describe our method for constructing this replay memory. We assume that historical data from old tasks are accessible at any time step $t$. However, since the historical data are prohibitively large, we can only fill a small replay memory $\mathcal{M}$ with $M$ historical samples for replay, due to processing time constraints. The challenge is how to fill the replay memory with $M$ samples that efficiently retain the knowledge of old tasks when learning new tasks. We focus on selecting the samples at the task level by deciding on the task proportions $(a_1, \ldots, a_{t-1})$ of samples to fetch from each task, where $a_i \geq 0$ is the proportion of the $M$ examples taken from task $i$ and $\sum_{i=1}^{t-1} a_i = 1$. Consequently, we need a method for choosing these task proportions, i.e., which old tasks to replay. To simplify this selection, we construct a discrete set of choices for the possible task proportions, telling how many samples from each task to use when constructing the replay memory $\mathcal{M}$. | This paper proposes a new continual learning method that learns to select samples for the replay process. The replay memory is filled with samples from previous tasks according to a specific proportion corresponding to the action performed at the current task. The paper proposes to find the optimal sequence of actions by using Monte Carlo tree search (MCTS), with the reward being the average accuracy over all tasks after seeing the final task. The experiments demonstrate that MCTS improves continual learning performance, especially when the memory size is small, and that it is compatible with several sample-selection strategies. | SP:673e72d300d7e740b99c79b84783bda6616b1337
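The task-proportion construction described in the problem setting above can be sketched as follows. This is only an illustrative reading of the text: the function name is ours, sampling within each task is uniform here, and, as the paper notes, any selection strategy could be substituted.

```python
import numpy as np

def compose_replay_memory(history, proportions, M, rng):
    """Fill a replay memory of M samples from per-task historical data.
    `history[i]` holds (X, y) for task i+1, and `proportions` are the a_i
    with a_i >= 0 and sum(a_i) = 1 from the problem setting."""
    assert all(a >= 0 for a in proportions)
    assert abs(sum(proportions) - 1.0) < 1e-9
    counts = [int(round(a * M)) for a in proportions]
    counts[-1] += M - sum(counts)          # absorb any rounding error
    Xs, ys = [], []
    for (X, y), n in zip(history, counts):
        idx = rng.choice(len(X), size=n, replace=False)  # uniform selection
        Xs.append(X[idx])
        ys.append(y[idx])
    return np.vstack(Xs), np.concatenate(ys)
```

Each discrete scheduling action then corresponds to one choice of `proportions` at the current task.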
Closed-Loop Control of Additive Manufacturing via Reinforcement Learning | 1 INTRODUCTION. A critical component of manufacturing is identifying process parameters that consistently produce high-quality structures. In commercial devices, this is typically achieved by expensive trial-and-error experimentation (Gao et al., 2015). To make such an optimization feasible, a critical assumption is made: there exists a set of parameters for which the relationship between process parameters and process outcome is predictable. However, such an assumption does not hold in practice because all manufacturing processes are stochastic in nature. Specifically in additive manufacturing, variability in both materials and intrinsic process parameters can cause geometric errors, leading to imprecision that can compromise the functional properties of the final prints. Therefore, a transition to closed-loop control is indispensable for the industrial adoption of additive manufacturing (Wang et al., 2020). Recently, we have seen promising progress in learning policies for interaction with amorphous materials (Li et al., 2019b; Zhang et al., 2020). Unfortunately, in the context of additive manufacturing, discovering effective control strategies is significantly more challenging. The deposition parameters have a non-linear coupling to the dynamic material properties. To assess the severity of deposition errors, we need to observe the material over long time horizons. Available simulators either lack predictive power (Mozaffar et al., 2018) or are too complex for learning (Tang et al., 2018; Yan et al., 2018). Moreover, learning on hardware is intractable, as we would require tens of thousands of printed samples. These challenges are further exacerbated by the limited perception of printing hardware, where typically only a small in-situ view is available to assess the deposition quality.
In this work, we propose the first closed-loop controller for additive manufacturing based on reinforcement learning deployed on real hardware. To achieve this, we formulate a custom numerical model of the deposition process. Motivated by the limited hardware perception, we make a key assumption: to learn closed-loop control, it is sufficient to model the deposition only qualitatively. This allows us to replace physically accurate but prohibitively slow simulations with efficient approximations. To reduce the sim-to-real gap, we enhance the simulation with a data-driven noise distribution on the spread of the deposited material. We further show that careful selection of the input and action spaces is necessary for hardware transfer. Lastly, we leverage privileged information about the deposition process to formulate a reward function that encourages policies that account for material changes over long horizons. Thanks to the above advancements, our control policy can be trained exclusively in simulation with a minimal sim-to-real gap. We demonstrate that our policy outperforms baseline deposition methods both in simulation and on physical hardware, with low- and high-viscosity materials. Furthermore, our numerical model can serve as an essential building block for future research in optimal material deposition, and we plan to make the source code available. 2 RELATED WORK. To identify process parameters for additive manufacturing, it is important to understand the complex interaction between a material and a deposition process. This is typically done through trial-and-error experimentation (Kappes et al., 2018; Wang et al., 2018; Baturynska et al., 2018). Recently, optimal experiment design and, more specifically, Gaussian processes have become a tool for the efficient use of samples to understand the deposition problem (Erps et al., 2021).
However, even though Gaussian processes model the deposition variance, they do not offer tools to adjust the deposition on the fly. Another approach to improve the printing process is to design closed-loop controllers. One of the first designs, proposed by Sitthi-Amorn et al. (2015), monitors each layer deposited by a printing process to compute an adjustment layer. Liu et al. (2017) built upon this idea and trained a discriminator that can identify the type and magnitude of observed defects. A similar approach was proposed by Yao et al. (2018), which uses handcrafted features to identify when a print significantly drops in quality. The main disadvantage of these methods is that they rely on collecting in-situ observations to propose one corrective step by adjusting the process parameters. However, this means that the prints continue with sub-optimal parameters, and it can take several layers to adjust the deposition. In contrast, our system runs in-process and reacts to the in-situ views immediately. This ensures high-quality deposition and adaptability to material changes. Recently, machine learning techniques have sparked new interest in the design of adaptive control policies (Mnih et al., 2015). A particularly successful approach for high-quality in-process control is to adopt the model predictive control (MPC) paradigm (Gu et al., 2016; Silver et al., 2017; Oh et al., 2017; Srinivas et al., 2018; Nagabandi et al., 2018). The control scheme of MPC relies on an observation of the current state and a short-horizon prediction of future states. By manipulating the process parameters, we observe the changes in future predictions and can pick a future with desirable characteristics. It is particularly useful to utilize deep models to generate differentiable predictors that provide derivatives with respect to control changes (de Avila Belbute-Peres et al., 2018; Schenck & Fox, 2018; Toussaint et al.
, 2018; Li et al., 2019a). However, addressing the uncertainties of the deposition process with MPC is challenging. In a noisy environment, we can rely only on the expected prediction of the deposition. This leads to a conservative control policy that effectively executes the mean action. Moreover, reacting to material changes over time requires optimizing actions over long time horizons, which is a known weakness of the MPC paradigm (Garcia et al., 1989). As a result, MPC is not suitable for in-process control in noisy environments. Another option for deriving control policies is to leverage deep reinforcement learning (Rajeswaran et al., 2017; Liu & Hodgins, 2018; Peng et al., 2018; Yu et al., 2019; Lee et al., 2019; Akkaya et al., 2019). The key challenge in the design of such controllers is formulating an efficient numerical model that captures the governing physical phenomena. As a consequence, it is most commonly applied to rigid body dynamics and rigid robots, where such models are readily available (Todorov et al., 2012; Bender et al., 2014; Coumans & Bai, 2016; Lee et al., 2018). In contrast, learning with non-rigid objects is significantly more challenging, as the computation time for deformable materials is higher and learning relies on some prior knowledge of the task (Clegg et al., 2018; Elliott & Cakmak, 2018; Ma et al., 2018; Wu et al., 2019). Recently, Zhang et al. (2020) proposed a numerical model for training control policies where a rigid object interacts with amorphous materials. Similarly, in our work a rigid printing nozzle interacts with the fluid-like printing material. However, our model is specialized for the printing hardware and models not only the deposition but also its variance. We demonstrate that this is an important component in minimizing the sim-to-real gap and in designing control policies that are readily applicable to the physical hardware. 3 HARDWARE PRELIMINARIES.
The choice of additive manufacturing technology constrains the subsequent numerical modeling. To keep the applicability of our developed system as wide as possible, we opted for a direct-write needle deposition system mounted on a 3-axis Cartesian robot (see inset figure; labeled parts: camera, material, nozzle). The robot allows us to freely control the acceleration and position of the dispenser. The dispenser can process a wide range of viscous materials, and the deposition is very similar to fused deposition modeling. We further enhance the apparatus with two camera modules. The cameras lie on opposite sides of the nozzle to allow our apparatus to perceive the region around the deposition. It is this locality of the in-situ view that we will leverage to formulate our numerical model. 3.1 BASELINE CONTROLLER. (Figure 1: Baseline slicer; labels: material width, outline path, target, infill path.) To control the printing apparatus, we employ a baseline slicer. The input to the slicer is a three-dimensional object. The output is a series of locations the printing head visits to reproduce the model as closely as possible. To generate a single slice of the object, we start by intersecting the 3D model with a Z-axis-aligned plane (please note that this does not affect generalizability, since the input can be arbitrarily rotated). The slice is represented by a polygon that marks the outline of the printout (Figure 1, gray). To generate the printing path, we assume a constant width of deposition (Figure 1, red) that acts as a convolution on the printing path. The printing path (Figure 1, blue) is created by offsetting the print boundary by half the width of the material using the Clipper algorithm (Johnson, 2015). The infill pattern is generated by tracing a zig-zag line through the area of the print (Figure 1, green). 4 REINFORCEMENT LEARNING FOR ADDITIVE MANUFACTURING. The baseline control strictly relies on a constant width of the material.
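The baseline slicer above can be illustrated for the special case of an axis-aligned rectangle. A real slicer offsets arbitrary polygons with the Clipper algorithm; this hedged sketch only shows the two ingredients named in the text, the outline inset by half the (assumed constant) material width and a zig-zag infill.

```python
def rectangle_toolpath(x0, y0, x1, y1, width):
    """Toy slicer for an axis-aligned rectangle: the outline path is the
    boundary inset by half the material width, and the infill is a zig-zag
    polyline whose rows are spaced one material width apart."""
    h = width / 2.0
    ix0, iy0, ix1, iy1 = x0 + h, y0 + h, x1 - h, y1 - h   # inset corners
    outline = [(ix0, iy0), (ix1, iy0), (ix1, iy1), (ix0, iy1), (ix0, iy0)]
    infill, y, flip = [], iy0 + width, False
    while y <= iy1 - width:
        xa, xb = (ix1, ix0) if flip else (ix0, ix1)       # alternate direction
        infill += [(xa, y), (xb, y)]
        y += width
        flip = not flip
    return outline, infill
```

The constant-width assumption baked into this path generation is exactly what the learned closed-loop policy relaxes.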
To discover policies that can adapt to the in-situ observations, we formulate the search in a reinforcement learning framework. The control problem is described by a Markov decision process $(\mathcal{S}, \mathcal{A}, P, R)$, where $\mathcal{S}$ is the observation space, $\mathcal{A}$ is a continuous action space, $P = P(s' \mid s, a)$ is the transition function, and $R: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function. To learn a control policy, we take a model-free approach by learning directly from printing. Unfortunately, learning on a physical device is challenging. The interaction between various process parameters can lead to deposition errors that require manual attention. As such, discovering control policies directly on the hardware has a sample complexity too steep to be practical. A potential solution is to learn the control behavior in simulation and transfer it to the physical device. However, transfer from simulation to the real world is a notoriously hard problem that hinges on the applicability of the learned knowledge. In this work, we propose a framework for preparing numerical models for additive manufacturing that facilitate the sim-to-real transfer. Our model has three key components that facilitate the generalization of the learned control policies. The first component is the design of the observation space. To facilitate the transfer of learning between simulation and a physical device, we rely on an abstraction of the observation space (Kaufmann et al., 2020). Rather than using the direct appearance feed from our camera module, we process the signal into a heightmap. A heightmap is a 2D image where each pixel stores the height of the deposited material. For each heightmap location, the height is measured as the distance from the building plate to the deposited material. This allows our system to generalize to many different sensors, such as cameras, depth sensors, or laser profilometers. However, unlike Kaufmann et al. (2020), we do not extract the feature vectors manually.
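The heightmap observation described above, with each pixel storing the height of deposited material above the build plate, can be derived from any depth-like sensor reading. A minimal sketch follows; the function name and the clipping of sub-plate noise are our assumptions, not details from the paper.

```python
import numpy as np

def depth_to_heightmap(depth, plate_distance, clip_negative=True):
    """Convert a raw depth image (sensor-to-surface distance) into the
    heightmap observation: per-pixel height of material above the plate.
    Sensor-agnostic, so cameras, depth sensors, or laser profilometers
    can all feed the same observation space."""
    height = plate_distance - np.asarray(depth, dtype=float)
    if clip_negative:
        height = np.clip(height, 0.0, None)   # measurement noise below plate
    return height
```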
Instead, similarly to OpenAI et al. (2018), we learn the features directly from the heightmap. In contrast to OpenAI et al. (2018), we do not randomize the observation domain. Additional randomization is not necessary in our case thanks to the controlled observation conditions of the physical apparatus. A key insight of our approach is that an engineered observation space coupled with learned features can significantly help with policy learning. A careful design of the observation space can facilitate the sim-to-real transfer, make the hardware design more flexible by enabling the use of a range of sensors that compute similar observations, and remove the need to hand-craft the features. It is therefore worthwhile to invest in the design of observation spaces. The second component of our system is the design of the action space. Instead of directly controlling the motors of the printer, we rely on a high-level control scheme and tune coupled parameters such as the velocity or the offset from the printing path. This idea is similar in spirit to OpenAI et al. (2018). OpenAI et al. (2018) suggest not using direct sensory inputs from the mechanical hand as observations due to their noisiness and lack of generalization across environments. Instead, they use image data to track the robotic hand. Similarly, but in the action space instead, we do not control the printer by directly inputting the typically noisy and hardware-specific voltages that actuate the motors of the apparatus. Instead, we control the printer by setting the desired velocity and offset and letting the apparatus match them to the best of its capabilities. This translation layer allows us to utilize the controller on a broader range of devices without per-device training. This idea could also be generalized to other robotic tasks, for example, by applying a hierarchical divide-and-conquer approach to the action space.
The control policies could output only high-level actions , such as desired locations for a robot's actuators or deviations from a baseline behavior . Low-level controllers could then execute these higher-level actions . Such a control hierarchy can facilitate training by decoupling the higher-level goals from low-level inputs and transferring existing control policies to new devices through specialized low-level controllers . The third and last component of our system is an approximative transition function . Rather than modelling the deposition process exactly , we propose to approximate it qualitatively . A qualitative approximation allows us to design an efficient computational model . To facilitate the transfer of the simulated model to the physical device , we reintroduce the device uncertainty in a data-driven fashion . This is similar to OpenAI et al . ( 2018 ) , but instead of covering a large array of options , we specialize the randomization . Inspired by Chebotar et al . ( 2019 ) , we designed a data-driven LPC filter that matches the statistical distribution of variations observed during a typical printing process . This noise enables our control policies to adapt to changing environments and , to some extent , to changes in material properties such as viscosity . Our approximative transition function shows that it is not necessary to reproduce the physical world in simulation perfectly . A qualitative approximation is sufficient as long as we learn behavior patterns that translate to real-world experiences . This is an important observation for any task where we manipulate objects and elastic or frictional forces dominate the behavior . Relying on computationally more affordable simulations allows for applying existing learning algorithms to a broader range of problems where precise numerical modeling has prohibitive computational complexity . 
Moreover , by leveraging a numerical model , it is possible to utilize privileged information that would be challenging , if not impossible , to collect in the real world . For a full description of our methods , please see Appendix A . | This paper demonstrates the feasibility of obtaining a closed-loop control policy using reinforcement learning for additive manufacturing, also known as 3D printing. The paper proposes a sim-to-real approach that relies on a novel simulated environment that allows for synthesising successful off-the-shelf RL policies capable of improving upon existing state-of-the-art print deposition controllers. An underlying assumption of this work is to rely on qualitative information only, assuming that the difference in colour between the background and the applied deposition is sufficiently large. The work is evaluated in both simulation and sim-to-real and shows that the proposed method works well under different circumstances in simulation such as using static or dynamic depositing pressure variations and depositing material with different viscosity. In addition, the paper provides a comprehensive ablation study over the choice of observation and action spaces as well as the choice of reward. Finally, this work demonstrates the applicability of the approach on a sim-to-real task with no additional fine-tuning, showcasing its ability to achieve a higher offset improvement than a baseline controller. I think that this work proposes a reasonable solution to an interesting problem that can potentially impact the 3D printing industry. Although there was not necessarily a novel contribution from a learning perspective, I think this work proposes a novel solution to a curious application and is therefore worthy of consideration. However, I have additional questions and concerns that prevent me from recommending this work for acceptance yet. I detail those below. | SP:35d60063e24da659cbdbee06cbae828683ac244b |
Closed-Loop Control of Additive Manufacturing via Reinforcement Learning | 1 INTRODUCTION . A critical component of manufacturing is identifying process parameters that consistently produce high-quality structures . In commercial devices , this is typically achieved by expensive trial-and-error experimentation ( Gao et al. , 2015 ) . To make such an optimization feasible , a critical assumption is made : there exists a set of parameters for which the relationship between process parameters and process outcome is predictable . However , such an assumption does not hold in practice because all manufacturing processes are stochastic in nature . Specifically in additive manufacturing , variability in both materials and intrinsic process parameters can cause geometric errors , leading to imprecision that can compromise the functional properties of the final prints . Therefore , the transition to closed-loop control is indispensable for the industrial adoption of additive manufacturing ( Wang et al. , 2020 ) . Recently , we have seen promising progress in learning policies for interaction with amorphous materials ( Li et al. , 2019b ; Zhang et al. , 2020 ) . Unfortunately , in the context of additive manufacturing , discovering effective control strategies is significantly more challenging . The deposition parameters have a non-linear coupling to the dynamic material properties . To assess the severity of deposition errors , we need to observe the material over long time horizons . Available simulators either lack predictive power ( Mozaffar et al. , 2018 ) or are too complex for learning ( Tang et al. , 2018 ; Yan et al. , 2018 ) . Moreover , learning on hardware is intractable as we require tens of thousands of printed samples . These challenges are further exacerbated by the limited perception of printing hardware , where typically , only a small in-situ view is available to assess the deposition quality . 
In this work , we propose the first closed-loop controller for additive manufacturing based on reinforcement learning deployed on real hardware . To achieve this , we formulate a custom numerical model of the deposition process . Motivated by the limited hardware perception , we make a key assumption : to learn closed-loop control , it is sufficient to model the deposition only qualitatively . This allows us to replace physically accurate but prohibitively slow simulations with efficient approximations . To ameliorate the sim-to-real gap , we enhance the simulation with a data-driven noise distribution on the spread of the deposited material . We further show that careful selection of the input and action spaces is necessary for hardware transfer . Lastly , we leverage the privileged information about the deposition process to formulate a reward function that encourages policies that account for material changes over long horizons . Thanks to the above advancements , our control policy can be trained exclusively in simulation with a minimal sim-to-real gap . We demonstrate that our policy outperforms baseline deposition methods in simulation and on physical hardware with low- or high-viscosity materials . Furthermore , our numerical model can serve as an essential building block for future research in optimal material deposition , and we plan to make the source code available . 2 RELATED WORK . To identify process parameters for additive manufacturing , it is important to understand the complex interaction between a material and a deposition process . This is typically done through trial-and-error experimentation ( Kappes et al. , 2018 ; Wang et al. , 2018 ; Baturynska et al. , 2018 ) . Recently , optimal experiment design and , more specifically , Gaussian processes have become a tool for efficient use of the samples to understand the deposition problem ( Erps et al. , 2021 ) . 
However , even though Gaussian processes model the deposition variance , they do not offer tools to adjust the deposition on-the-fly . Another approach to improve the printing process is to design closed-loop controllers . One of the first designs , proposed by Sitthi-Amorn et al . ( 2015 ) , monitors each layer deposited by a printing process to compute an adjustment layer . Liu et al . ( 2017 ) built upon the idea and trained a discriminator that can identify the type and magnitude of observed defects . A similar approach was proposed by Yao et al . ( 2018 ) , which uses handcrafted features to identify when a print significantly drops in quality . The main disadvantage of these methods is that they rely on collecting the in-situ observations to propose one corrective step by adjusting the process parameters . However , this means that the prints continue with sub-optimal parameters , and it can take several layers to adjust the deposition . In contrast , our system runs in-process and reacts to the in-situ views immediately . This ensures high-quality deposition and adaptability to material changes . Recently , machine learning techniques have sparked a new interest in the design of adaptive control policies ( Mnih et al. , 2015 ) . A particularly successful approach for high-quality in-process control is to adopt the Model Predictive Control ( MPC ) paradigm ( Gu et al. , 2016 ; Silver et al. , 2017 ; Oh et al. , 2017 ; Srinivas et al. , 2018 ; Nagabandi et al. , 2018 ) . The control scheme of MPC relies on an observation of the current state and a short-horizon prediction of the future states . By manipulating the process parameters , we observe the changes in future predictions and can pick a future with desirable characteristics . It is particularly useful to utilize deep models to generate differentiable predictors that provide derivatives with respect to control changes ( de Avila Belbute-Peres et al. , 2018 ; Schenck & Fox , 2018 ; Toussaint et al. 
, 2018 ; Li et al. , 2019a ) . However , addressing the uncertainties of the deposition process with MPC is challenging . In a noisy environment , we can rely only on the expected prediction of the deposition . This leads to a conservative control policy that effectively executes the mean action . Moreover , reacting to material changes over time requires optimizing actions over long time horizons , which is a known weakness of the MPC paradigm ( Garcia et al. , 1989 ) . As a result , MPC is not suitable for in-process control in noisy environments . Another option to derive control policies is to leverage deep reinforcement learning ( Rajeswaran et al. , 2017 ; Liu & Hodgins , 2018 ; Peng et al. , 2018 ; Yu et al. , 2019 ; Lee et al. , 2019 ; Akkaya et al. , 2019 ) . The key challenge in the design of such controllers is formulating an efficient numerical model that captures the governing physical phenomena . As a consequence , it is most commonly applied to rigid body dynamics and rigid robots , where such models are readily available ( Todorov et al. , 2012 ; Bender et al. , 2014 ; Coumans & Bai , 2016 ; Lee et al. , 2018 ) . In contrast , learning with non-rigid objects is significantly more challenging , as the computation time for deformable materials is higher and relies on some prior knowledge of the task ( Clegg et al. , 2018 ; Elliott & Cakmak , 2018 ; Ma et al. , 2018 ; Wu et al. , 2019 ) . Recently , Zhang et al . ( 2020 ) proposed a numerical model for training control policies where a rigid object interacts with amorphous materials . Similarly , in our work , a rigid printing nozzle interacts with the fluid-like printing material . However , our model is specialized for the printing hardware and models not only the deposition but also its variance . We demonstrate that this is an important component in minimizing the sim-to-real gap and in designing control policies that are readily applicable to the physical hardware . 3 HARDWARE PRELIMINARIES . 
The choice of additive manufacturing technology constrains the subsequent numerical modeling . To keep the applicability of our developed system as wide as possible , we opted for a direct-write needle deposition system mounted on a 3-axis Cartesian robot ( see the inset figure , which labels the camera , material , and nozzle ) . The robot allows us to freely control the acceleration and position of the dispenser . The dispenser can process a wide range of viscous materials , and the deposition is very similar to fused deposition modeling . We further enhance the apparatus with two camera modules . The cameras lie on the opposite sides of the nozzle to allow our apparatus to perceive the location around the deposition . It is this locality of the in-situ view that we will leverage to formulate our numerical model . 3.1 BASELINE CONTROLLER . ( Figure 1 : Baseline slicer , with labels for the material width , outline , printing path , target , and infill path . ) To control the printing apparatus , we employ a baseline slicer . The input to the slicer is a three-dimensional object . The output is a series of locations the printing head visits to reproduce the model as closely as possible . To generate a single slice of the object , we start by intersecting the 3D model with a Z-axis-aligned plane ( please note that this does not affect the generalizability since the input can be arbitrarily rotated ) . The slice is represented by a polygon that marks the outline of the printout ( Figure 1 , gray ) . To generate the printing path , we assume a constant width of deposition ( Figure 1 , red ) that acts as a convolution on the printing path . The printing path ( Figure 1 , blue ) is created by offsetting the print boundary by half the width of the material using the Clipper algorithm ( Johnson , 2015 ) . The infill pattern is generated by tracing a zig-zag line through the area of the print ( Figure 1 , green ) . 4 REINFORCEMENT LEARNING FOR ADDITIVE MANUFACTURING . The baseline control strictly relies on a constant width of the material . 
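The boundary-offset and zig-zag infill steps can be illustrated with a toy sketch. The actual system uses the Clipper algorithm on arbitrary polygons; for brevity, this example handles only an axis-aligned rectangular outline, and all function names and dimensions are hypothetical:

```python
def inset_rectangle(outline, material_width):
    """Offset an axis-aligned rectangular outline inward by half the
    material width, mimicking the slicer's boundary-offset step.

    `outline` is (xmin, ymin, xmax, ymax); the returned rectangle is
    the printing path along which the nozzle centre travels so the
    deposited bead just reaches the target boundary. General polygons
    would need a polygon-offsetting routine such as Clipper.
    """
    xmin, ymin, xmax, ymax = outline
    half = material_width / 2.0
    if xmax - xmin < material_width or ymax - ymin < material_width:
        raise ValueError("outline too small for this material width")
    return (xmin + half, ymin + half, xmax - half, ymax - half)

def zigzag_infill(path_rect, spacing):
    """Trace a zig-zag line through the interior at fixed spacing,
    returning the waypoints of the infill pattern."""
    xmin, ymin, xmax, ymax = path_rect
    points, y, left_to_right = [], ymin, True
    while y <= ymax:
        xs = (xmin, xmax) if left_to_right else (xmax, xmin)
        points += [(xs[0], y), (xs[1], y)]
        left_to_right = not left_to_right
        y += spacing
    return points

path = inset_rectangle((0.0, 0.0, 10.0, 10.0), material_width=2.0)
# → (1.0, 1.0, 9.0, 9.0)
infill = zigzag_infill(path, spacing=4.0)
```

The constant `material_width` baked into these paths is exactly the assumption the learned controller relaxes in the next section.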
To discover policies that can adapt to the in-situ observations , we formulate the search in a reinforcement learning framework . The control problem is described by a Markov decision process ( S , A , P , R ) , where S is the observation space , A is a continuous action space , P = P ( s′|s , a ) is the transition function , and R ( s , a ) → R is the reward function . To learn a control policy , we take a model-free approach by learning directly from printing . Unfortunately , learning on a physical device is challenging . The interaction between various process parameters can lead to deposition errors that require manual attention . As such , discovering control policies directly on the hardware has too steep a sample complexity to be practical . A potential solution is to learn the control behavior in simulation and transfer it to the physical device . However , transfer from simulation to the real world is a notoriously hard problem that hinges on the applicability of the learned knowledge . In this work , we propose a framework for preparing numerical models for additive manufacturing that facilitate the sim-to-real transfer . Our model has three key components that facilitate the generalization of the learned control policies . The first component is the design of the observation space . To facilitate the transfer of learning between simulation and a physical device , we rely on an abstraction of the observation space ( Kaufmann et al. , 2020 ) . Rather than using the direct appearance feed from our camera module , we process the signal into a heightmap . A heightmap is a 2D image where each pixel stores the height of the deposited material . For each heightmap location , the height is measured as the distance from the building plate to the deposited material . This allows our system to generalize to many different sensors such as cameras , depth sensors , or laser profilometers . However , unlike Kaufmann et al . ( 2020 ) , we do not extract the feature vectors manually . 
Instead , similarly to OpenAI et al . ( 2018 ) , we learn the features directly from the heightmap . In contrast to OpenAI et al . ( 2018 ) , we do not randomize the observation domain . Additional randomization is not necessary in our case thanks to the controlled observation conditions of the physical apparatus . A key insight of our approach is that the engineered observation space coupled with learned features can significantly help with policy learning . A careful design of the observation space can facilitate the sim-to-real transfer , make the hardware design more flexible by enabling the use of a range of sensors that compute similar observations , and remove the need to hand-craft the features . It is therefore worthwhile to invest in the design of observation spaces . The second component of our system is the design of the action space . Instead of directly controlling the motors of the printer , we rely on a high-level control scheme and tune coupled parameters such as velocity or offset from the printing path . This idea is similar in spirit to OpenAI et al . ( 2018 ) , who suggest not using direct sensory inputs from the mechanical hand as observations due to their noisiness and lack of generalization across environments . Instead , they use image data to track the robotic hand . Similarly , but in the action space , we do not control the printer by directly inputting the typically noisy and hardware-specific voltages that actuate the motors of the apparatus . Instead , we control the printer by setting the desired velocity and offset and letting the apparatus match them to the best of its capabilities . This translation layer allows us to utilize the controller on a broader range of devices without per-device training . This idea could also be generalized to other robotic tasks , for example , by applying a hierarchical divide-and-conquer approach to the action space . 
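The velocity/offset translation layer described above might look like the following sketch. The normalized action range and the parameter bounds are illustrative assumptions, not values from the paper:

```python
class ActionTranslator:
    """Translate high-level policy actions into device setpoints.

    The policy outputs a normalized action in [-1, 1] for each tuned
    parameter (printing velocity and lateral offset from the planned
    path). A device-specific backend is then responsible for matching
    the setpoints as well as its motors allow, so the same policy can
    drive different printers without per-device retraining.
    """

    def __init__(self, v_min=5.0, v_max=25.0, offset_max=0.5):
        self.v_min, self.v_max = v_min, v_max      # mm/s (assumed)
        self.offset_max = offset_max               # mm (assumed)

    def __call__(self, action):
        # Clamp to the valid normalized range before scaling.
        a_vel, a_off = (max(-1.0, min(1.0, a)) for a in action)
        velocity = self.v_min + (a_vel + 1.0) / 2.0 * (self.v_max - self.v_min)
        offset = a_off * self.offset_max
        return {"velocity": velocity, "offset": offset}

translate = ActionTranslator()
setpoint = translate((0.0, -1.0))
# → {"velocity": 15.0, "offset": -0.5}
```

Only this thin class needs to change when the controller is moved to a printer with different motor characteristics; the policy itself stays fixed.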
The control policies could output only high-level actions , such as desired locations for a robot's actuators or deviations from a baseline behavior . Low-level controllers could then execute these higher-level actions . Such a control hierarchy can facilitate training by decoupling the higher-level goals from low-level inputs and transferring existing control policies to new devices through specialized low-level controllers . The third and last component of our system is an approximative transition function . Rather than modelling the deposition process exactly , we propose to approximate it qualitatively . A qualitative approximation allows us to design an efficient computational model . To facilitate the transfer of the simulated model to the physical device , we reintroduce the device uncertainty in a data-driven fashion . This is similar to OpenAI et al . ( 2018 ) , but instead of covering a large array of options , we specialize the randomization . Inspired by Chebotar et al . ( 2019 ) , we designed a data-driven LPC filter that matches the statistical distribution of variations observed during a typical printing process . This noise enables our control policies to adapt to changing environments and , to some extent , to changes in material properties such as viscosity . Our approximative transition function shows that it is not necessary to reproduce the physical world in simulation perfectly . A qualitative approximation is sufficient as long as we learn behavior patterns that translate to real-world experiences . This is an important observation for any task where we manipulate objects and elastic or frictional forces dominate the behavior . Relying on computationally more affordable simulations allows for applying existing learning algorithms to a broader range of problems where precise numerical modeling has prohibitive computational complexity . 
Moreover , by leveraging a numerical model , it is possible to utilize privileged information that would be challenging , if not impossible , to collect in the real world . For a full description of our methods , please see Appendix A . | This paper presents a method for using RL for adjustment of process parameters in additive manufacturing. This allows for closed-loop control that outperforms the state-of-the-art in terms of printing quality. Multiple experiments showed this approach outperforming baselines, and it was also shown that the approach could be applied directly on physical hardware. | SP:35d60063e24da659cbdbee06cbae828683ac244b |
Closed-Loop Control of Additive Manufacturing via Reinforcement Learning | 1 INTRODUCTION . A critical component of manufacturing is identifying process parameters that consistently produce high-quality structures . In commercial devices , this is typically achieved by expensive trial-and-error experimentation ( Gao et al. , 2015 ) . To make such an optimization feasible , a critical assumption is made : there exists a set of parameters for which the relationship between process parameters and process outcome is predictable . However , such an assumption does not hold in practice because all manufacturing processes are stochastic in nature . Specifically in additive manufacturing , variability in both materials and intrinsic process parameters can cause geometric errors , leading to imprecision that can compromise the functional properties of the final prints . Therefore , the transition to closed-loop control is indispensable for the industrial adoption of additive manufacturing ( Wang et al. , 2020 ) . Recently , we have seen promising progress in learning policies for interaction with amorphous materials ( Li et al. , 2019b ; Zhang et al. , 2020 ) . Unfortunately , in the context of additive manufacturing , discovering effective control strategies is significantly more challenging . The deposition parameters have a non-linear coupling to the dynamic material properties . To assess the severity of deposition errors , we need to observe the material over long time horizons . Available simulators either lack predictive power ( Mozaffar et al. , 2018 ) or are too complex for learning ( Tang et al. , 2018 ; Yan et al. , 2018 ) . Moreover , learning on hardware is intractable as we require tens of thousands of printed samples . These challenges are further exacerbated by the limited perception of printing hardware , where typically , only a small in-situ view is available to assess the deposition quality . 
In this work , we propose the first closed-loop controller for additive manufacturing based on reinforcement learning deployed on real hardware . To achieve this , we formulate a custom numerical model of the deposition process . Motivated by the limited hardware perception , we make a key assumption : to learn closed-loop control , it is sufficient to model the deposition only qualitatively . This allows us to replace physically accurate but prohibitively slow simulations with efficient approximations . To ameliorate the sim-to-real gap , we enhance the simulation with a data-driven noise distribution on the spread of the deposited material . We further show that careful selection of the input and action spaces is necessary for hardware transfer . Lastly , we leverage the privileged information about the deposition process to formulate a reward function that encourages policies that account for material changes over long horizons . Thanks to the above advancements , our control policy can be trained exclusively in simulation with a minimal sim-to-real gap . We demonstrate that our policy outperforms baseline deposition methods in simulation and on physical hardware with low- or high-viscosity materials . Furthermore , our numerical model can serve as an essential building block for future research in optimal material deposition , and we plan to make the source code available . 2 RELATED WORK . To identify process parameters for additive manufacturing , it is important to understand the complex interaction between a material and a deposition process . This is typically done through trial-and-error experimentation ( Kappes et al. , 2018 ; Wang et al. , 2018 ; Baturynska et al. , 2018 ) . Recently , optimal experiment design and , more specifically , Gaussian processes have become a tool for efficient use of the samples to understand the deposition problem ( Erps et al. , 2021 ) . 
However , even though Gaussian processes model the deposition variance , they do not offer tools to adjust the deposition on-the-fly . Another approach to improve the printing process is to design closed-loop controllers . One of the first designs , proposed by Sitthi-Amorn et al . ( 2015 ) , monitors each layer deposited by a printing process to compute an adjustment layer . Liu et al . ( 2017 ) built upon the idea and trained a discriminator that can identify the type and magnitude of observed defects . A similar approach was proposed by Yao et al . ( 2018 ) , which uses handcrafted features to identify when a print significantly drops in quality . The main disadvantage of these methods is that they rely on collecting the in-situ observations to propose one corrective step by adjusting the process parameters . However , this means that the prints continue with sub-optimal parameters , and it can take several layers to adjust the deposition . In contrast , our system runs in-process and reacts to the in-situ views immediately . This ensures high-quality deposition and adaptability to material changes . Recently , machine learning techniques have sparked a new interest in the design of adaptive control policies ( Mnih et al. , 2015 ) . A particularly successful approach for high-quality in-process control is to adopt the Model Predictive Control ( MPC ) paradigm ( Gu et al. , 2016 ; Silver et al. , 2017 ; Oh et al. , 2017 ; Srinivas et al. , 2018 ; Nagabandi et al. , 2018 ) . The control scheme of MPC relies on an observation of the current state and a short-horizon prediction of the future states . By manipulating the process parameters , we observe the changes in future predictions and can pick a future with desirable characteristics . It is particularly useful to utilize deep models to generate differentiable predictors that provide derivatives with respect to control changes ( de Avila Belbute-Peres et al. , 2018 ; Schenck & Fox , 2018 ; Toussaint et al. 
, 2018 ; Li et al. , 2019a ) . However , addressing the uncertainties of the deposition process with MPC is challenging . In a noisy environment , we can rely only on the expected prediction of the deposition . This leads to a conservative control policy that effectively executes the mean action . Moreover , reacting to material changes over time requires optimizing actions over long time horizons , which is a known weakness of the MPC paradigm ( Garcia et al. , 1989 ) . As a result , MPC is not suitable for in-process control in noisy environments . Another option to derive control policies is to leverage deep reinforcement learning ( Rajeswaran et al. , 2017 ; Liu & Hodgins , 2018 ; Peng et al. , 2018 ; Yu et al. , 2019 ; Lee et al. , 2019 ; Akkaya et al. , 2019 ) . The key challenge in the design of such controllers is formulating an efficient numerical model that captures the governing physical phenomena . As a consequence , it is most commonly applied to rigid body dynamics and rigid robots , where such models are readily available ( Todorov et al. , 2012 ; Bender et al. , 2014 ; Coumans & Bai , 2016 ; Lee et al. , 2018 ) . In contrast , learning with non-rigid objects is significantly more challenging , as the computation time for deformable materials is higher and relies on some prior knowledge of the task ( Clegg et al. , 2018 ; Elliott & Cakmak , 2018 ; Ma et al. , 2018 ; Wu et al. , 2019 ) . Recently , Zhang et al . ( 2020 ) proposed a numerical model for training control policies where a rigid object interacts with amorphous materials . Similarly , in our work , a rigid printing nozzle interacts with the fluid-like printing material . However , our model is specialized for the printing hardware and models not only the deposition but also its variance . We demonstrate that this is an important component in minimizing the sim-to-real gap and in designing control policies that are readily applicable to the physical hardware . 3 HARDWARE PRELIMINARIES . 
The choice of additive manufacturing technology constrains the subsequent numerical modeling . To keep the applicability of our developed system as wide as possible , we opted for a direct-write needle deposition system mounted on a 3-axis Cartesian robot ( see the inset figure , which labels the camera , material , and nozzle ) . The robot allows us to freely control the acceleration and position of the dispenser . The dispenser can process a wide range of viscous materials , and the deposition is very similar to fused deposition modeling . We further enhance the apparatus with two camera modules . The cameras lie on the opposite sides of the nozzle to allow our apparatus to perceive the location around the deposition . It is this locality of the in-situ view that we will leverage to formulate our numerical model . 3.1 BASELINE CONTROLLER . ( Figure 1 : Baseline slicer , with labels for the material width , outline , printing path , target , and infill path . ) To control the printing apparatus , we employ a baseline slicer . The input to the slicer is a three-dimensional object . The output is a series of locations the printing head visits to reproduce the model as closely as possible . To generate a single slice of the object , we start by intersecting the 3D model with a Z-axis-aligned plane ( please note that this does not affect the generalizability since the input can be arbitrarily rotated ) . The slice is represented by a polygon that marks the outline of the printout ( Figure 1 , gray ) . To generate the printing path , we assume a constant width of deposition ( Figure 1 , red ) that acts as a convolution on the printing path . The printing path ( Figure 1 , blue ) is created by offsetting the print boundary by half the width of the material using the Clipper algorithm ( Johnson , 2015 ) . The infill pattern is generated by tracing a zig-zag line through the area of the print ( Figure 1 , green ) . 4 REINFORCEMENT LEARNING FOR ADDITIVE MANUFACTURING . The baseline control strictly relies on a constant width of the material . 
To discover policies that can adapt to the in-situ observations , we formulate the search in a reinforcement learning framework . The control problem is described by a Markov decision process ( S , A , P , R ) , where S is the observation space , A is a continuous action space , P = P ( s′|s , a ) is the transition function , and R ( s , a ) → R is the reward function . To learn a control policy , we take a model-free approach by learning directly from printing . Unfortunately , learning on a physical device is challenging . The interaction between various process parameters can lead to deposition errors that require manual attention . As such , discovering control policies directly on the hardware has too steep a sample complexity to be practical . A potential solution is to learn the control behavior in simulation and transfer it to the physical device . However , transfer from simulation to the real world is a notoriously hard problem that hinges on the applicability of the learned knowledge . In this work , we propose a framework for preparing numerical models for additive manufacturing that facilitate the sim-to-real transfer . Our model has three key components that facilitate the generalization of the learned control policies . The first component is the design of the observation space . To facilitate the transfer of learning between simulation and a physical device , we rely on an abstraction of the observation space ( Kaufmann et al. , 2020 ) . Rather than using the direct appearance feed from our camera module , we process the signal into a heightmap . A heightmap is a 2D image where each pixel stores the height of the deposited material . For each heightmap location , the height is measured as the distance from the building plate to the deposited material . This allows our system to generalize to many different sensors such as cameras , depth sensors , or laser profilometers . However , unlike Kaufmann et al . ( 2020 ) , we do not extract the feature vectors manually . 
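A gym-style skeleton of the ( S , A , P , R ) formulation described above is sketched below. The transition and reward here are crude placeholders standing in for the paper's qualitative deposition model and its privileged-information reward; every constant is an assumption for illustration:

```python
import random

class PrintingEnvSketch:
    """Minimal gym-style skeleton of the (S, A, P, R) formulation.

    Observations stand in for heightmap patches around the nozzle,
    actions are continuous (velocity, offset) adjustments, the
    transition applies a toy deposition rule plus stochastic process
    noise, and the reward penalizes deviation of the deposited-bead
    width from the target. None of this is the paper's actual model.
    """

    def __init__(self, target_width=1.0, horizon=100, seed=0):
        self.target_width = target_width
        self.horizon = horizon
        self.rng = random.Random(seed)

    def reset(self):
        self.t = 0
        self.width = self.target_width  # deposited-bead width state
        return self._observe()

    def step(self, action):
        velocity, offset = action
        # Toy qualitative transition: faster motion thins the bead,
        # plus Gaussian noise standing in for the data-driven filter.
        self.width += -0.05 * velocity + self.rng.gauss(0.0, 0.01)
        reward = -abs(self.width - self.target_width)
        self.t += 1
        done = self.t >= self.horizon
        return self._observe(), reward, done, {}

    def _observe(self):
        # Stand-in for the heightmap patch around the nozzle.
        return [self.width]

env = PrintingEnvSketch()
obs = env.reset()
obs, reward, done, info = env.step((0.0, 0.0))
```

Any off-the-shelf model-free algorithm that consumes a reset/step interface could be trained against such an environment before the policy is moved to the hardware.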
Instead , similarly to OpenAI et al . ( 2018 ) , we learn the features directly from the heightmap . In contrast to OpenAI et al . ( 2018 ) , we do not randomize the observation domain . Additional randomization is not necessary in our case thanks to the controlled observation conditions of the physical apparatus . A key insight of our approach is that the engineered observation space coupled with learned features can significantly help with policy learning . A careful design of the observation space can facilitate the sim-to-real transfer , make the hardware design more flexible by enabling the use of a range of sensors that compute similar observations , and remove the need to hand-craft the features . It is therefore worthwhile to invest in the design of observation spaces . The second component of our system is the design of the action space . Instead of directly controlling the motors of the printer , we rely on a high-level control scheme and tune coupled parameters such as velocity or offset from the printing path . This idea is similar in spirit to OpenAI et al . ( 2018 ) , who suggest not using direct sensory inputs from the mechanical hand as observations due to their noisiness and lack of generalization across environments . Instead , they use image data to track the robotic hand . Similarly , but in the action space , we do not control the printer by directly inputting the typically noisy and hardware-specific voltages that actuate the motors of the apparatus . Instead , we control the printer by setting the desired velocity and offset and letting the apparatus match them to the best of its capabilities . This translation layer allows us to utilize the controller on a broader range of devices without per-device training . This idea could also be generalized to other robotic tasks , for example , by applying a hierarchical divide-and-conquer approach to the action space . 
The control policies could output only high-level actions, such as desired locations for robot actuators or deviations from a baseline behavior. Low-level controllers could then execute these higher-level actions. Such a control hierarchy can facilitate training by decoupling the higher-level goals from low-level inputs, and can transfer existing control policies to new devices through specialized low-level controllers. The third and last component of our system is an approximate transition function. Rather than modeling the deposition process exactly, we propose to approximate it qualitatively. A qualitative approximation allows us to design an efficient computational model. To facilitate the transfer of the simulated model to the physical device, we reintroduce the device uncertainty in a data-driven fashion. This is similar to OpenAI et al. (2018), but instead of covering a large array of options, we specialize the randomization. Inspired by Chebotar et al. (2019), we designed a data-driven LPC filter that matches the statistical distribution of variations observed during a typical printing process. This noise enables our control policies to adapt to changing environments and, to some extent, to changes in material properties such as viscosity. Our approximate transition function shows that it is not necessary to reproduce the physical world in simulation perfectly. A qualitative approximation is sufficient as long as we learn behavior patterns that translate to real-world experience. This is an important observation for any task where we manipulate objects and elastic or frictional forces dominate the behavior. Relying on computationally more affordable simulations allows existing learning algorithms to be applied to a broader range of problems where precise numerical modeling has prohibitive computational complexity.
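One simple way to realize a data-driven autoregressive (LPC-style) noise model is to fit AR coefficients to recorded deviations by least squares and then synthesize correlated noise from them. This sketch is illustrative and uses synthetic stand-in data; the paper's exact filter design, order, and fitting procedure are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ar(x, order=4):
    """Least-squares AR(order) fit: x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.stack([x[order - 1 - k : len(x) - 1 - k] for k in range(order)], axis=1)
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, (y - X @ a).std()          # coefficients and residual scale

def synthesize(a, sigma, n, rng):
    """Generate correlated noise by driving the fitted AR filter with white noise."""
    order = len(a)
    out = np.zeros(n + order)
    for t in range(order, n + order):
        out[t] = out[t - 1 - np.arange(order)] @ a + rng.normal(0.0, sigma)
    return out[order:]

# Fit on 'recorded' height deviations (synthetic stand-in for real print logs).
recorded = np.cumsum(rng.normal(size=2000)) * 0.01
a, sigma = fit_ar(recorded, order=4)
noise = synthesize(a, sigma, n=500, rng=rng)
```

Injecting this noise into the simulated heightmap exposes the policy to realistically correlated deposition variation during training.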
Moreover, by leveraging a numerical model it is possible to utilize privileged information that would be challenging, if not impossible, to collect in the real world. For a full description of our methods, please see Appendix A. | This paper proposes an approach to learning a closed-loop controller for additive manufacturing using reinforcement learning. The sensorimotor policy is trained exclusively in simulation and evaluated on a physical system in the real world without any fine-tuning on real-world data. To account for the difficulty of modeling the complex material deposition process, the approach builds a simplified simulator that captures the process only qualitatively. Interestingly, even though it is trained only on the simple model, the approach generalizes to a real-world system. The training methodology is described and evaluated in detail and compared against a non-adaptive baseline controller. | SP:35d60063e24da659cbdbee06cbae828683ac244b |
Comparing Distributions by Measuring Differences that Affect Decision Making | 1 Introduction. Quantifying the difference between two probability distributions is a fundamental problem in machine learning. Modelers choose different types of discrepancies (or probability divergences) to encode their prior knowledge about which aspects are relevant to evaluating the difference. Integral probability metrics (IPMs, Müller (1997)) and f-divergences (Csiszár, 1964) are widely used discrepancies in machine learning. IPMs, such as the Wasserstein distance and the maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012), are based on the idea that if two distributions are identical, any function should have the same expectation under both distributions. IPMs are used to define training objectives for generative models (Arjovsky et al., 2017), to perform independence tests (Doran et al., 2014), and for robust optimization (Esfahani & Kuhn, 2018), among many other applications. f-divergences, such as the KL divergence and the Jensen Shannon divergence, are based on the idea that if two distributions are identical, they assign the same likelihood to every point. One can then define a discrepancy based on how different the likelihood ratio is from one. The KL divergence underlies some of the most commonly used training objectives for both supervised and unsupervised machine learning algorithms, such as the cross-entropy loss. We propose a third category of divergences, called H-divergences, that overlaps with but also extends the set of integral probability metrics and the set of f-divergences. Intuitively, an H-divergence compares two distributions in terms of the optimal loss for a certain decision task. This optimal loss corresponds to a generalized notion of entropy (DeGroot et al., 1962).
Instead of measuring the best average code length of any encoding scheme (Shannon entropy), the generalized entropy uses an arbitrary loss function (rather than code length) and set of actions (rather than encoding schemes), and is defined as the best expected loss among the set of actions. In particular, given two distributions p and q, we compare the generalized entropy of the mixture distribution (p + q)/2 with the generalized entropy of p and q individually. Intuitively, if p and q are different, it is more difficult to minimize expected loss under the mixture distribution (p + q)/2, and hence the mixture distribution should have higher generalized entropy; if p and q are identical, then the mixture distribution is identical to p or q, and hence should have the same generalized entropy. Our divergence strictly generalizes the maximum mean discrepancy family and the Jensen Shannon divergence, which can be obtained with specific choices of the loss function. We illustrate this via the Venn diagram in Figure 1. Our formulation allows us to choose alternative losses to leverage inductive biases and machine learning models from different problem domains. For example, if we choose the generalized entropy as the maximum log likelihood of deep generative models, we are able to leverage recent progress in modeling high-dimensional images. We demonstrate the effectiveness of the H-divergence in two-sample tests, i.e., deciding whether two sets of samples come from the same distribution or not. A test based on a probability discrepancy declares two sets of samples different if their discrepancy exceeds some threshold. We use H-divergences based on generalized entropy defined by the log likelihood of off-the-shelf generative models. Compared to state-of-the-art tests based on MMD with deep kernels (Liu et al., 2020), tests based on the H-divergence achieve better test power (given identical type I error) on a large set of benchmarks.
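The discrepancy-plus-threshold scheme described above can be made concrete with a generic kernel MMD permutation test (a baseline of the kind the paper compares against, not the paper's H-divergence test; bandwidth, sample sizes, and the permutation count below are assumed for illustration):

```python
import numpy as np

def rbf_mmd2(X, Y, bw=1.0):
    """Biased (plug-in) estimate of MMD^2 with an RBF kernel."""
    def mean_k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bw**2)).mean()
    return mean_k(X, X) + mean_k(Y, Y) - 2.0 * mean_k(X, Y)

def two_sample_test(X, Y, n_perm=200, alpha=0.05, seed=0):
    """Reject (declare the samples different) if the observed discrepancy
    exceeds the (1 - alpha) quantile of its permutation null distribution."""
    rng = np.random.default_rng(seed)
    observed = rbf_mmd2(X, Y)
    pooled = np.concatenate([X, Y])
    n = len(X)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        null.append(rbf_mmd2(pooled[perm[:n]], pooled[perm[n:]]))
    return observed > np.quantile(null, 1.0 - alpha)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(100, 1))
Y = rng.normal(1.5, 1.0, size=(100, 1))
print(bool(two_sample_test(X, Y)))  # True: the mean shift is detected
```

An H-divergence test has the same outer structure; only the discrepancy statistic changes.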
More importantly, scientists and policy makers are often interested not only in whether two distributions are different, but in how they are different and whether the differences affect decision making. Typical divergence measures (such as KL) or two-sample tests only quantify whether two distributions are different, while we show that the H-divergence is a useful tool for quantifying how distributions are different, with three application examples: studying the effect of climate change, feature selection, and sample quality evaluation. In each of these examples, we compare different aspects of the distributions by choosing specific decision loss functions. For example, climate change (Figure 3) might impact agriculture in a region but not energy production, or vice versa. By choosing suitable loss functions (related to agriculture, energy, etc.) we can quantify and test whether the change in climate distribution impacts different economic activities. 2 Background. 2.1 Probability Divergences. Let $\mathcal{X}$ denote a finite set or a finite-dimensional vector space, and $\mathcal{P}(\mathcal{X})$ denote the set of probability distributions on $\mathcal{X}$ that have a density. We consider the problem of defining a probability divergence between any two distributions in $\mathcal{P}(\mathcal{X})$, where a probability divergence is any function $D: \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{X}) \to \mathbb{R}$ that satisfies $D(p\|q) \geq 0$ and $D(p\|p) = 0$ for all $p, q \in \mathcal{P}(\mathcal{X})$. We call the divergence $D$ "strict" if $D(p\|q) > 0$ for all $p \neq q$, and "non-strict" otherwise. In this paper we consider both types of divergences. Integral Probability Metrics. Let $\mathcal{F}$ denote a set of functions $\mathcal{X} \to \mathbb{R}$. An integral probability metric is defined as $\mathrm{IPM}_{\mathcal{F}}(p\|q) = \sup_{f \in \mathcal{F}} \left| \mathbb{E}_p[f(X)] - \mathbb{E}_q[f(X)] \right|$. Several important divergences belong to the integral probability metrics. Examples include the Wasserstein distance, where $\mathcal{F}$ is the set of 1-Lipschitz functions, and the total variation distance, where $\mathcal{F}$ is the set of functions $\mathcal{X} \to [-1, 1]$.
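For a finite alphabet, the IPM with $\mathcal{F}$ the $[-1,1]$-valued functions has a closed form: the supremum is attained by $f = \mathrm{sign}(p - q)$. A small sketch (illustrative; note that with this function class the value is twice the common $\frac{1}{2}\sum_x |p(x)-q(x)|$ convention for total variation):

```python
import numpy as np

def ipm_bounded(p, q):
    """IPM with F = {f : X -> [-1, 1]} on a finite alphabet.
    The supremum over f is attained by f = sign(p - q), giving
    sum_x |p(x) - q(x)|."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.abs(p - q).sum()

print(ipm_bounded([0.5, 0.5], [1.0, 0.0]))  # 1.0
```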
The maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012) chooses a kernel function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ and is defined by $\mathrm{MMD}^2(p\|q) = \mathbb{E}_{p,p}[k(X,Y)] + \mathbb{E}_{q,q}[k(X,Y)] - 2\,\mathbb{E}_{p,q}[k(X,Y)]$. MMD is an IPM where $\mathcal{F}$ is the set of unit-norm functions in the reproducing kernel Hilbert space (RKHS) associated with the kernel $k$. f-Divergences. Given any convex continuous function $f: \mathbb{R}_+ \to \mathbb{R}$ such that $f(1) = 0$, the f-divergence is defined (assuming densities exist) as $D_f(p\|q) = \mathbb{E}_q[f(p(X)/q(X))]$. Examples include the KL divergence, where $f: t \mapsto t \log t$, and the Jensen Shannon divergence, where $f: t \mapsto (t+1)\log\frac{2}{t+1} + t \log t$. 2.2 H-Entropy. For any action space $\mathcal{A}$ and any loss function $\ell: \mathcal{X} \times \mathcal{A} \to \mathbb{R}$, the H-entropy (DeGroot et al., 1962; DeGroot, 2005; Grünwald et al., 2004) is defined as $H_\ell(p) = \inf_{a \in \mathcal{A}} \mathbb{E}_p[\ell(X, a)]$. In words, the H-entropy is the Bayes-optimal loss of a decision maker who must select some action $a$ not for a particular $x$, but in expectation for a random $x$ drawn from $p(x)$. H-entropy generalizes several important notions of uncertainty. Examples include: Shannon entropy, where $\mathcal{A}$ is the set of probabilities $\mathcal{P}(\mathcal{X})$ and $\ell(x, a) = -\log a(x)$; variance, where $\mathcal{A} = \mathcal{X}$ and $\ell(x, a) = \|x - a\|_2^2$; and predictive V-entropy, where $\mathcal{A} \subset \mathcal{P}(\mathcal{X})$ is some subset of distributions and $\ell(x, a) = -\log a(x)$ (Xu et al., 2020). A key property we will use is that H-entropy is concave (DeGroot et al., 1962). Lemma 1. For any choice of $\ell: \mathcal{X} \times \mathcal{A} \to \mathbb{R}$, $H_\ell$ is a concave function. This lemma can be proved by observing that an infimum of linear functions is concave: it is always better to pick an optimal action for $p$ and $q$ separately rather than a single one for both.
$H_\ell(\alpha p + (1-\alpha) q) = \inf_a \left( \alpha\, \mathbb{E}_p[\ell(X,a)] + (1-\alpha)\, \mathbb{E}_q[\ell(X,a)] \right) \geq \alpha \inf_a \mathbb{E}_p[\ell(X,a)] + (1-\alpha) \inf_a \mathbb{E}_q[\ell(X,a)] = \alpha H_\ell(p) + (1-\alpha) H_\ell(q)$. This lemma reflects why $H_\ell$ can be thought of as a measurement of entropy or uncertainty: if the distribution is more uncertain (e.g., a mixture of $p$ and $q$, rather than $p$ or $q$ separately), then decisions made under higher uncertainty will suffer a higher loss. 3 Definition and Theoretical Properties. 3.1 H-Jensen Shannon Divergence. As a warm-up, we present a special case of our divergence. Definition 1 (H-Jensen Shannon divergence). $D^{\mathrm{JS}}_\ell(p, q) = H_\ell\!\left(\frac{p+q}{2}\right) - \frac{1}{2}\left(H_\ell(p) + H_\ell(q)\right)$ (1). $D^{\mathrm{JS}}_\ell$ is always non-negative because H-entropy is concave (Lemma 1), and clearly $D^{\mathrm{JS}}_\ell(p, q) = 0$ whenever $p = q$. Therefore, $D^{\mathrm{JS}}_\ell$ is a valid probability divergence. In particular, if we choose $H_\ell$ as the Shannon entropy, Definition 1 recovers the Jensen Shannon divergence. Other special loss function choices recover definitions in (Burbea & Rao, 1982). 3.2 General H-divergence. In addition to the H-Jensen Shannon divergence, there are other functions based on the H-entropy that satisfy the requirements of a divergence. For example, $D^{\mathrm{Min}}_\ell = H_\ell\!\left(\frac{p+q}{2}\right) - \min\left(H_\ell(p), H_\ell(q)\right)$ (2) is also a valid divergence (this will be proved later as a special case of Lemma 2). We can define a general set of divergences that includes the above two with the following definition. Definition 2 (H-divergence). For two distributions $p, q$ on $\mathcal{X}$, given any continuous function $\phi: \mathbb{R}^2 \to \mathbb{R}$ such that $\phi(\theta, \lambda) > 0$ whenever $\theta + \lambda > 0$ and $\phi(0, 0) = 0$, define $D^\phi_\ell(p\|q) = \phi\!\left(H_\ell\!\left(\frac{p+q}{2}\right) - H_\ell(p),\; H_\ell\!\left(\frac{p+q}{2}\right) - H_\ell(q)\right)$. Intuitively, $H_\ell\!\left(\frac{p+q}{2}\right) - H_\ell(p)$ and $H_\ell\!\left(\frac{p+q}{2}\right) - H_\ell(q)$ measure how much more difficult it is to minimize loss on the mixture distribution $(p+q)/2$ than on $p$ and $q$, respectively.
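Definition 1 can be made concrete with the variance-type loss $\ell(x, a) = \|x - a\|^2$, for which $H_\ell(p)$ is the variance of $p$ (the optimal action is the mean). In that case a short calculation gives $D^{\mathrm{JS}}_\ell(p, q) = \|\mu_p - \mu_q\|^2 / 4$. An empirical sketch (illustrative; equal-size sample sets are assumed so that concatenation represents the mixture $(p+q)/2$):

```python
import numpy as np

def h_entropy_variance(x):
    """H-entropy under the loss l(x, a) = (x - a)^2: the minimizing
    action is the mean, so H_l(p) is the (population) variance."""
    return np.var(x)

def h_js_variance(x, y):
    """H-Jensen Shannon divergence with the variance loss.
    Concatenating two equal-size sample sets represents the mixture (p+q)/2."""
    mix = np.concatenate([x, y])
    return h_entropy_variance(mix) - 0.5 * (h_entropy_variance(x) + h_entropy_variance(y))

x = np.array([0.0, 1.0, 2.0])   # mean 1
y = np.array([4.0, 5.0, 6.0])   # mean 5
print(h_js_variance(x, y))       # (5 - 1)^2 / 4 = 4.0
```

Richer losses (e.g., negative log likelihood of a generative model) plug into the same two-term formula.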
$\phi$ is a general class of functions that map these differences into a scalar divergence while satisfying some desirable properties described in the next section. The following proposition shows that the H-divergence generalizes the previous definitions (1) and (2); therefore, any property of the H-divergence is inherited by, e.g., the H-Jensen Shannon divergence. Proposition 1. If $\phi(\theta, \lambda) = \frac{\theta + \lambda}{2}$, then $D^\phi_\ell(p, q)$ is the H-Jensen Shannon divergence in Eq. (1). If $\phi(\theta, \lambda) = \max(\theta, \lambda)$, then $D^\phi_\ell(p, q)$ is the H-Min divergence in Eq. (2). | This paper presents a new class of discrepancies between two continuous probability distributions. The proposed class, called the H-divergence, contains an extended class of Jensen Shannon divergences called H-Jensen Shannon divergences and another class (2), called the H-Min divergences, as special cases. The conditions under which two probability distributions have non-negative H-divergence are given. It is seen that the set of H-Jensen Shannon divergences includes the set of squared maximum mean discrepancies as a subset. Estimation and convergence of the H-divergence are discussed. The H-divergence is applied to propose two-sample tests, and experiments suggest that the proposed tests outperform some existing tests in terms of power. The proposed methods are applied to climate data for decision making in agriculture and energy production. | SP:203342f544f71d617c3b7573ee78419a01e187ee |
Comparing Distributions by Measuring Differences that Affect Decision Making | This paper proposes a new category of divergences, called H-divergence, for measuring the discrepancy between two probability distributions. H-divergence is based on the optimal loss with respect to a chosen decision task and is therefore decision dependent. Further, it generalizes a few well-known divergences such as the Jensen Shannon divergence and the maximum mean discrepancy family. The paper proves a few properties of H-divergence, including a convergence result when H-divergence is estimated using a finite set of samples. Further, three examples are used to illustrate the applications of H-divergence, including two-sample tests, assessing climate change, and feature selection, which demonstrate the advantage of H-divergence compared with other commonly used discrepancies. | SP:203342f544f71d617c3b7573ee78419a01e187ee |
Comparing Distributions by Measuring Differences that Affect Decision Making | 1 Introduction . Quantifying the difference between two probability distributions is a fundamental problem in machine learning . Modelers choose different types of discrepancies ( or probability divergences ) to encode their prior knowledge about which aspects are relevant to evaluate the difference . Integral probability metrics ( IPMs , Müller ( 1997 ) ) and f -divergences ( Csiszár , 1964 ) are widely used discrepancies in machine learning . IPMs , such as the Wasserstein distance , maximum mean discrepancy ( MMD ) ( Rao , 1982 ; Burbea & Rao , 1984 ; Gretton et al. , 2012 ) , are based on the idea that if two distributions are identical , any function should have the same expectation under both distributions . IPMs are used to define training objectives for generative models ( Arjovsky et al. , 2017 ) , perform independence tests ( Doran et al. , 2014 ) , robust optimization ( Esfahani & Kuhn , 2018 ) among many other applications . f -divergences , such as the KL divergence and the Jensen Shannon divergence , are based on the idea that if two distributions are identical , they assign the same likelihood to every point . One can then define a discrepancy based on how different the likelihood ratio is from one . KL divergence underlies some of the most commonly used training objectives for both supervised and unsupervised machine learning algorithms , such as cross entropy loss . We propose a third category of divergences called H-divergences that overlaps with but also extends the set of integral probability metrics or the set f -divergences . Intuitively , H-divergence compares two distributions in terms of the optimal loss for a certain decision task . This optimal loss corresponds to a generalized notion of entropy ( DeGroot et al. , 1962 ) . 
Instead of measuring the best average code length of any encoding scheme ( Shannon entropy ) , the generalized entropy uses arbitrary loss function ( rather than code length ) and set of actions ( rather than encoding schemes ) , and is defined as the best expected loss among the set of actions . In particular , given two distribution p and q , we compare the generalized entropy of the mixture distribution ( p + q ) /2 and the generalized entropy of p and q individually . Intuitively , if p and q are different , it is more difficult to minimize expected loss under the mixture distribution ( p+ q ) /2 , and hence the mixture distribution should have higher generalized entropy ; if p and q are identical , then the mixture distribution is identical to p or q , and hence should have the same generalized entropy . Our divergence strictly generalizes the maximum mean discrepancy family and the Jensen Shannon divergence , which can be obtained with specific choices of the loss function . We illustrate this via the Venn diagram in Figure 1 . Our formulation allows us to choose alternative losses to leverage inductive biases and machine learning models from different problem domains . For example , if we choose the generalized entropy as the maximum log likelihood of deep generative models , we are able to leverage recent progress in modeling high dimensional images . We demonstrate the effectiveness of H-divergence in two sample tests , i.e . to decide whether two sets of samples come from the same distribution or not . A test based on a probability discrepancy declares two sets of samples different if their discrepancy exceeds some threshold . We use H-divergences based on generalized entropy defined by the log likelihood of offthe-shelf generative models . Compared to state-of-the-art tests based on MMD with deep kernels ( Liu et al. , 2020 ) , tests based on the H-divergence achieve better test power ( given identical type I error ) on a large set of benchmarks . 
More importantly , scientists and policy makers are often interested not only in if two distributions are different , but how two distributions are different and whether the differences affect decision making . Typical divergence measures ( such as KL ) or two sample tests only quantify if two distributions are different , while we show that H-divergence is a useful tool for quantifying how distributions are different with three application examples : studying the effect of climate change , feature selection , and sample quality evaluation . In each of these examples , we compare different aspects of the distributions by choosing specific decision loss functions . For example , climate change ( Figure 3 ) might impact agriculture in a region but not energy production , or vice versa . By choosing suitable loss functions ( related to agriculture , energy , etc ) we can quantify and test if the change in climate distribution impact different economic activities . 2 Background . 2.1 Probability Divergences . Let X denote a finite set or a finite dimensional vector space , and P ( X ) denote the set of probability distributions on X that have a density . We consider the problem of defining a probability divergence between any two distributions in P ( X ) , where a probability divergence is any function D : P ( X ) × P ( X ) → R that satisfies D ( p‖q ) ≥ 0 , D ( p‖p ) = 0 , ∀p , q ∈ P ( X ) . We call the divergence D “ strict ” if D ( p‖q ) > 0 ∀p 6= q , and “ non-strict “ otherwise . In this paper we consider both types of divergences . Integral Probability Metrics Let F denote a set of functions X → R. An integral probability metric is defined as IPMF ( p‖q ) = supf∈F |Ep [ f ( X ) ] −Eq [ f ( X ) ] | . Several important divergences belong to integral probability metrics . Examples include the Wasserstein distance , where F is the set of 1-Lipschitz functions ; the total variation distance , where F is the set of functions X → [ −1 , 1 ] . 
The maximum mean discrepancy ( MMD ) ( Rao , 1982 ; Burbea & Rao , 1984 ; Gretton et al. , 2012 ) chooses a kernel function k : X × X → R+ and is defined by MMD2 ( p‖q ) = Ep , pk ( X , Y ) + Eq , qk ( X , Y ) − 2Ep , qk ( X , Y ) MMD is an IPM where F is the unit norm functions in the reproducing kernel Hilbert space ( RKHS ) associated with the kernel k. f-Divergences Given any convex continuous function f : R+ → R such that f ( 1 ) = 0 , the f -Divergence is defined as ( assuming densities exist ) Df ( p‖q ) = Eq [ f ( p ( X ) /q ( X ) ) ] . Examples include the KL divergence , where f : t 7→ t log t and the Jensen Shannon divergence , where f : t 7→ ( t+ 1 ) log ( 2 t+1 ) + t log t . 2.2 H-Entropy . For any action space A and any loss function ` : X ×A → R , the H-entropy ( DeGroot et al. , 1962 ; DeGroot , 2005 ; Grünwald et al. , 2004 ) is defined as H ` ( p ) = inf a∈A Ep [ ` ( X , a ) ] In words , H-entropy is the Bayes optimal loss of a decision maker who must select some action a not for a particular x , but in expectation for a random x drawn from p ( x ) . Hentropy generalizes several important notions of uncertainty . Examples include : Shannon Entropy , where A as the set of probabilities P ( X ) , and ` ( x , a ) = − log a ( x ) ; Variance where A = X , and ` ( x , a ) = ‖x − a‖22 ; Predictive V-entropy , where A ⊂ P ( X ) is some subset of distributions , and ` ( x , a ) = − log a ( x ) ( Xu et al. , 2020 ) . A key property we will use is that H-entropy is concave ( DeGroot et al. , 1962 ) . Lemma 1 . For any choice of ` : X ×A → R , H ` is a concave function . This Lemma can be proved by observing that inf is a concave function : it is always better to pick an optimal action for p and q separately rather than a single one for both . 
H_ℓ(αp + (1−α)q) = inf_a (α E_p[ℓ(X, a)] + (1−α) E_q[ℓ(X, a)]) ≥ α inf_a E_p[ℓ(X, a)] + (1−α) inf_a E_q[ℓ(X, a)] = α H_ℓ(p) + (1−α) H_ℓ(q). This Lemma reflects why H_ℓ can be thought of as a measurement of entropy or uncertainty. If the distribution is more uncertain (e.g., a mixture of p and q, rather than p or q separately), then decisions made under higher uncertainty will suffer a higher loss. 3 Definition and Theoretical Properties. 3.1 H-Jensen Shannon Divergence. As a warm-up, we present a special case of our divergence. Definition 1 (H-Jensen Shannon divergence). D^JS_ℓ(p, q) = H_ℓ((p+q)/2) − (1/2)(H_ℓ(p) + H_ℓ(q)). (1) D^JS_ℓ is always non-negative because H-entropy is concave (Lemma 1), and clearly D^JS_ℓ(p, q) = 0 whenever p = q. Therefore, D^JS_ℓ is a valid probability divergence. In particular, if we choose H_ℓ as the Shannon entropy, Definition 1 recovers the Jensen Shannon divergence. Other special loss function choices can recover definitions in (Burbea & Rao, 1982). 3.2 General H-divergence. In addition to the H-Jensen Shannon divergence, there are other functions based on the H-entropy that satisfy the requirements of a divergence. For example, D^Min_ℓ(p, q) = H_ℓ((p+q)/2) − min(H_ℓ(p), H_ℓ(q)) (2) is also a valid divergence (this will be proved later as a special case of Lemma 2). We can define a general set of divergences that includes the above two divergences with the following definition: Definition 2 (H-divergence). For two distributions p, q on X, given any continuous function φ : R^2 → R such that φ(θ, λ) > 0 whenever θ + λ > 0 and φ(0, 0) = 0, define D^φ_ℓ(p‖q) = φ(H_ℓ((p+q)/2) − H_ℓ(p), H_ℓ((p+q)/2) − H_ℓ(q)). Intuitively, H_ℓ((p+q)/2) − H_ℓ(p) and H_ℓ((p+q)/2) − H_ℓ(q) measure how much more difficult it is to minimize loss on the mixture distribution (p+q)/2 than on p and q respectively.
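As a numerical illustration (not from the paper): with the squared loss ℓ(x, a) = (x − a)^2, H_ℓ is the variance, since the infimum over actions a is attained at the mean. The H-Jensen Shannon divergence of Definition 1 and the H-Min divergence of Eq. (2) can then be estimated from samples, representing the equal-weight mixture (p+q)/2 by concatenating equally sized sample sets. The Gaussian parameters below are arbitrary assumptions.

```python
import numpy as np

def h_entropy_sq(samples):
    """H-entropy under the squared loss l(x, a) = (x - a)^2: the infimum
    over actions a is attained at a = mean, so H_l(p) is the variance."""
    return np.var(samples)

def h_js(p, q):
    """H-Jensen Shannon divergence: H((p+q)/2) - (H(p) + H(q)) / 2."""
    mix = np.concatenate([p, q])  # equal-weight mixture via equal-size sets
    return h_entropy_sq(mix) - 0.5 * (h_entropy_sq(p) + h_entropy_sq(q))

def h_min(p, q):
    """H-Min divergence: H((p+q)/2) - min(H(p), H(q))."""
    mix = np.concatenate([p, q])
    return h_entropy_sq(mix) - min(h_entropy_sq(p), h_entropy_sq(q))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, 5000)
q = rng.normal(2.0, 1.0, 5000)
```

Here h_js and h_min correspond to φ(θ, λ) = (θ + λ)/2 and φ(θ, λ) = max(θ, λ) applied to the two mixture gaps of Definition 2; both are non-negative by the concavity argument, and zero when the two sample sets coincide.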
φ is a general class of functions that map these differences into a scalar divergence while satisfying some desirable properties described in the next section. The following proposition shows that the H-divergence generalizes the previous definitions (1) and (2). Therefore, any property of H-divergence is inherited by, e.g., the H-Jensen Shannon divergence. Proposition 1. If φ(θ, λ) = (θ + λ)/2, then D^φ_ℓ(p, q) is the H-Jensen Shannon divergence in Eq. (1). If φ(θ, λ) = max(θ, λ), then D^φ_ℓ(p, q) is the H-Min divergence in Eq. (2). | This paper proposes H-divergence, a new type of divergence based on H-entropy, to compare two probability distributions. This new divergence includes some of the commonly used integral probability metrics and f-divergences, such as the Jensen Shannon divergence and MMD, as special cases. A crucial property of H-divergence is that it takes into account the decision loss; namely, it compares two distributions in a way that distinguishes them based on the optimal decision loss, i.e., "two distributions are different if the optimal decision loss is higher on their mixture than on each individual distribution." The paper also provides an empirical estimator for H-divergence and studies its convergence properties. It studies several use cases of H-divergence, including two-sample tests. Experiments demonstrate that H-divergence achieves higher test power than tests based on MMD under the same type I error rate. Another important use case of H-divergence is understanding whether differences between distributions are significant enough to affect decision making (from the viewpoint of minimizing loss) in different experiments, including how climate change affects various economic activities. | SP:203342f544f71d617c3b7573ee78419a01e187ee |
A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion | 1 INTRODUCTION. With the rapid development of 3D sensors, 3D point clouds have become an important data format for capturing 3D information, owing to their ease of acquisition and efficiency in storage. Unfortunately, point clouds scanned in the real world are often incomplete due to partial observation and self-occlusion. It is important to recover the complete shape by inferring the missing parts for many downstream tasks such as 3D reconstruction, augmented reality and scene understanding. To tackle this problem, many learning-based methods (Yuan et al., 2018; Yang et al., 2018; Tchapmi et al., 2019; Xie et al., 2020; Liu et al., 2020; Pan et al., 2021) have been proposed, which are supervised by using either the Chamfer Distance (CD) or Earth Mover's Distance (EMD) to penalize the discrepancies between the generated complete point cloud and the ground truth. However, the CD loss is not sensitive to the overall density distribution, and thus networks trained with the CD loss can generate non-uniform point cloud completion results (see Figures 10 and 11 in the Appendix). EMD is more discriminative for measuring density distributions, but it is too expensive to compute during training. The absence of an effective and efficient training loss severely limits the capabilities of many existing point cloud completion networks. We find that denoising diffusion probabilistic models (DDPM) (Sohl-Dickstein et al., 2015; Ho et al., 2020) can potentially generate uniform and high-quality point clouds with an effective and efficient loss function. A DDPM iteratively moves a set of Gaussian noise points towards a complete and clean point cloud. DDPM defines a one-to-one pointwise mapping between two consecutive point clouds in the diffusion process, which enables it to use a simple mean squared error loss function for training.
This loss function is efficient to compute and explicitly requires the generated point cloud to be uniform, as a one-to-one point mapping is naturally established between the generated point cloud and the ground truth. The point cloud completion task can be treated as a conditional generation problem in the framework of DDPM (Zhou et al., 2021; Luo & Hu, 2021). Indeed, we find the complete point clouds generated by a conditional DDPM often have a good overall distribution that uniformly covers the shape of the object. Nonetheless, due to the probabilistic nature of DDPM and the lack of a suitable network architecture to train the conditional DDPM for 3D point cloud completion in previous works, we find DDPM-completed point clouds often lack smooth surfaces and sharp details (see Figure 1 and Appendix Figure 12), which is also reflected by their high CD loss compared with state-of-the-art point cloud completion methods in our experiments. Another problem with DDPM is its inefficiency in the inference phase. It usually takes several hundred and even up to one thousand forward steps to generate a single point cloud. Several methods (Song et al., 2020; Nichol & Dhariwal, 2021; Kong & Ping, 2021) have been proposed to accelerate DDPM by skipping steps without retraining networks, which, however, leads to an obvious performance drop when a small number of diffusion steps is used. In this work, we propose the Conditional Point Diffusion-Refinement (PDR) paradigm to generate both uniform and high-quality complete point clouds. As shown in Figure 1, our PDR paradigm performs point cloud completion in a coarse-to-fine fashion. Firstly, we use the Conditional Generation Network (CGNet) to generate a coarse complete point cloud with the DDPM conditioned on the partial point cloud. It iteratively moves a set of Gaussian noise points towards a complete point cloud.
Subsequently, the ReFinement Network (RFNet) further refines the coarse complete point cloud generated by the Conditional Generation Network with the help of the partial point clouds. In addition, RFNet can be used to refine the low-quality point clouds generated by an accelerated DDPM, so that we can enjoy an acceleration of up to 50 times while minimizing the performance drop. In this way, the completion results generated by our PDR paradigm demonstrate both a good overall density distribution (i.e., uniformity) and sharp local details. Both CGNet and RFNet have a novel dual-path network architecture, shown in Figure 2, which is composed of two parallel sub-networks: a Denoise subnet and a Condition Feature Extraction subnet for the noisy point clouds and the partial point clouds, respectively. Specifically, we propose the Point Adaptive Deconvolution (PA-Deconv) operation for upsampling, which can effectively manipulate the spatial locations of 3D points. Furthermore, we propose the Feature Transfer (FT) module to directly transmit encoded point features at different scales from the Condition Feature Extraction subnet to the corresponding hierarchy in the Denoise subnet. Extensive experimental results show that our PDR paradigm provides new state-of-the-art performance for point cloud completion. Our key contributions can be summarized as: 1) We identify conditional DDPM as a good model with an effective and efficient loss function to generate uniform point clouds in the point cloud completion task. 2) By using RFNet to refine the coarse point clouds, our PDR paradigm can generate complete point clouds with both a good overall density distribution (i.e., uniform) and sharp local details. 3) We design novel point learning modules, including the PA-Deconv and Feature Transfer modules, for constructing the CGNet in DDPM and the RFNet, which effectively and efficiently utilize multi-level features extracted from incomplete point clouds for point cloud completion.
4) With the help of our proposed RFNet, we can accelerate the generation process of DDPM up to 50 times without a significant drop in point cloud quality. 2 PROBLEM STATEMENT. In this paper, we focus on the 3D point cloud completion task. A 3D point cloud is represented by N points in the 3D space: X = {x_j | 1 ≤ j ≤ N}, where each x_j ∈ R^3 is the 3D coordinates of the j-th point. We assume the dataset is composed of M data pairs {(X_i, C_i) | 1 ≤ i ≤ M}, where X_i is the i-th ground-truth point cloud, and C_i is the incomplete point cloud from a partial observation of X_i. The goal is to develop a model that completes the partial observation C_i and outputs a point cloud as close to the ground truth X_i as possible. For algebraic convenience, we let x ∈ R^{3N} be the vector form of a point cloud X, and similarly c be the vector form of C. 3 METHODOLOGY. We consider the point cloud completion task as a conditional generation problem, where the incomplete point cloud C serves as the conditioner. We use the powerful generative model called denoising diffusion probabilistic models (DDPM) (Sohl-Dickstein et al., 2015; Ho et al., 2020) to first generate a coarse completion of the partial observation. Then we use another network to refine the coarse point cloud to improve its visual quality. Our point cloud completion pipeline is shown in Figure 1. We first briefly introduce the theory of DDPM in Section 3.1, and then describe the detailed architecture of the Conditional Generation Network (CGNet) and ReFinement Network (RFNet) in Section 3.2 and Section 3.3. 3.1 BACKGROUND ON CONDITIONAL DENOISING DIFFUSION PROBABILISTIC MODELS. We assume p_data to be the distribution of the complete point clouds x_i in the dataset, and p_latent = N(0_{3N}, I_{3N×3N}) to be the latent distribution, where N is the Gaussian distribution. Then, the conditional DDPM consists of two Markov chains called the diffusion process and the reverse process.
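As an aside, the Chamfer Distance discussed in the introduction can be sketched numerically using the point-set notation X = {x_j} above. This is an assumed, standard squared-distance, mean-reduced variant (CD conventions vary), not the paper's implementation.

```python
import numpy as np

def chamfer_distance(x, y):
    """Symmetric Chamfer Distance between point sets x (N, 3) and y (M, 3):
    for each point, the squared distance to its nearest neighbor in the
    other set, averaged within each set and summed over both directions."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
pts = rng.standard_normal((256, 3))        # a toy "ground truth" X
shifted = pts + np.array([0.5, 0.0, 0.0])  # a toy completion result
```

Because each point contributes only its nearest-neighbor distance, a completion with very uneven density can still score a low CD against a uniform ground truth, which is the density-insensitivity discussed in the introduction.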
Both processes have length equal to T. We set T = 1000 in this paper. The Diffusion Process. The diffusion process is a Markov process that adds Gaussian noise into the clean data distribution p_data until the output distribution is close to p_latent. The diffusion process is independent of the conditioner, the incomplete point cloud c_i. Formally, let x^0 ~ p_data. We use the superscript to denote the diffusion step t. For conciseness, we omit the subscript i in the following discussion. The diffusion process from clean data x^0 to x^T is defined as q(x^1, ..., x^T | x^0) = ∏_{t=1}^T q(x^t | x^{t−1}), where q(x^t | x^{t−1}) = N(x^t; √(1−β_t) x^{t−1}, β_t I). (1) The hyperparameters β_t are pre-defined, small positive constants (see details in Appendix Section A.1). According to Ho et al. (2020), there is a closed-form expression for q(x^t | x^0). We first define the constants α_t = 1 − β_t and ᾱ_t = ∏_{i=1}^t α_i. Then, we have q(x^t | x^0) = N(x^t; √ᾱ_t x^0, (1 − ᾱ_t) I). Therefore, when T is large enough, ᾱ_T goes to 0, and q(x^T | x^0) becomes close to the latent distribution p_latent(x^T). Note that x^t can be directly sampled through the following equation: x^t = √ᾱ_t x^0 + √(1 − ᾱ_t) ε, where ε is a Gaussian noise. (2) We emphasize that q(x^t | x^{t−1}) can be seen as a one-to-one pointwise mapping, as x^t can be sampled through the equation x^t = √(1−β_t) x^{t−1} + √β_t ε. Therefore, the order of points in x^0 is preserved in the diffusion process. However, it does not matter in what order we input the points in x^0. That is because when T is large enough, x^T becomes a Gaussian distribution. Every point in a Gaussian distribution is equivalent, and there is no way to distinguish one point from another. The Reverse Process. The reverse process is a Markov process that predicts and eliminates the noise added in the diffusion process. The reverse process is conditioned on the conditioner, the incomplete point cloud c. Let x^T ~ p_latent be a latent variable.
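The closed-form sampling in equation (2) can be checked numerically. The sketch below is illustrative only: the paper's actual β_t schedule is specified in its Appendix A.1, and a standard linear schedule is assumed here instead.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)  # assumed linear schedule for beta_t
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)     # alpha_bar_t = prod_{i <= t} alpha_i

def q_sample(x0, t, rng):
    """Draw x^t ~ q(x^t | x^0) = N(sqrt(abar_t) x^0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(3 * 2048)  # a flattened point cloud, 3N dimensions
xT = q_sample(x0, T - 1, rng)       # after T steps: close to N(0, I)
```

With this schedule ᾱ_T is vanishingly small, so x^T is statistically indistinguishable from a standard Gaussian, as the text argues.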
The reverse process from the latent x^T to clean data x^0 is defined as p_θ(x^0, ..., x^{T−1} | x^T, c) = ∏_{t=1}^T p_θ(x^{t−1} | x^t, c), where p_θ(x^{t−1} | x^t, c) = N(x^{t−1}; μ_θ(x^t, c, t), σ_t^2 I). (3) The mean μ_θ(x^t, c, t) is a neural network parameterized by θ, and the variance σ_t^2 is a time-step-dependent constant. To generate a sample conditioned on c, we first sample x^T ~ N(0_{3N}, I_{3N×3N}), then draw x^{t−1} ~ p_θ(x^{t−1} | x^t, c) for t = T, T−1, ..., 1, and finally output x^0. Training. DDPM is trained via variational inference. Ho et al. (2020) introduced a parameterization for μ_θ that largely simplifies the training objective: σ_t^2 = ((1 − ᾱ_{t−1})/(1 − ᾱ_t)) β_t, and μ_θ(x^t, c, t) = (1/√α_t)(x^t − (β_t/√(1 − ᾱ_t)) ε_θ(x^t, c, t)), where ε_θ is a neural network taking the noisy point cloud x^t ~ q(x^t | x^0) in equation (2), the diffusion step t, and the conditioner c as inputs. Then, the simplified training objective becomes L(θ) = E_{i~U([M]), t~U([T]), ε~N(0,I)} ‖ε − ε_θ(√ᾱ_t x_i^0 + √(1 − ᾱ_t) ε, c_i, t)‖^2, (4) where U([M]) is the uniform distribution over {1, 2, ..., M}. The neural network ε_θ learns to predict the noise added to the clean point cloud x^0, which can be used to denoise the noisy point cloud x^t = √ᾱ_t x^0 + √(1 − ᾱ_t) ε. Note that the traditional CD loss or EMD loss is NOT present in Equation 4. The reason that we are able to use the simple mean squared error is that DDPM naturally defines a one-to-one pointwise mapping between two consecutive point clouds in the diffusion process, as shown in Equation 1. Note that at each training step, we not only need to sample a pair of point clouds x_i, c_i, but also a diffusion step t and a Gaussian noise ε. | The paper tackles the problem of shape-level point cloud completion in a fully supervised setting.
The big picture of the proposed method is to use a conditional DDPM to generate a noisy but complete point cloud given the incomplete input, and then use another refinement network, also conditioned on the incomplete input, to further refine the noisy point cloud. Both networks adopt a novel dual-path network architecture, based on PointNet++, to allow localized guidance from the incomplete point cloud. Thanks to the two-stage approach, up to 50x acceleration of inference speed is achievable using step jumping, with a moderate performance drop. The model is evaluated on the MVP, MVP-40 and Completion3D datasets. | SP:d1c45bec49322fb29ab7d1251479d45e5297000d |
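A hedged sketch of the DDPM machinery described above: one Monte Carlo sample of the simplified training objective in Eq. (4), and ancestral sampling from the reverse process in Eq. (3). The linear β schedule is an assumption, and eps_theta is a trivial stand-in for the CGNet noise predictor (a real model would be a trained network); everything here is illustrative, not the paper's code.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)  # assumed linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_theta(xt, c, t):
    """Stand-in for the CGNet noise predictor (here it just predicts zero)."""
    return np.zeros_like(xt)

def training_loss(x0, c, rng):
    """One Monte Carlo sample of the simplified objective ||eps - eps_theta||^2."""
    t = rng.integers(T)  # t ~ U([T]) (0-indexed here)
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return np.mean((eps - eps_theta(xt, c, t)) ** 2)

def reverse_sample(c, dim, rng):
    """Ancestral sampling: x^T ~ N(0, I), then x^{t-1} ~ p_theta(x^{t-1}|x^t, c)."""
    x = rng.standard_normal(dim)
    for t in range(T - 1, -1, -1):  # paper's t = T..1, 0-indexed here
        mu = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_theta(x, c, t)) \
             / np.sqrt(alphas[t])
        if t > 0:
            sigma = np.sqrt((1.0 - alpha_bars[t - 1]) / (1.0 - alpha_bars[t]) * betas[t])
            x = mu + sigma * rng.standard_normal(dim)
        else:
            x = mu  # no noise is added on the final step
    return x
```

With the zero predictor the expected loss is E‖ε‖^2 per dimension, i.e. about 1, which is a handy sanity check before plugging in a real network.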
A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion | 1 INTRODUCTION. With the rapid development of 3D sensors, 3D point clouds have become an important data format for capturing 3D information, owing to their ease of acquisition and efficiency in storage. Unfortunately, point clouds scanned in the real world are often incomplete due to partial observation and self-occlusion. It is important to recover the complete shape by inferring the missing parts for many downstream tasks such as 3D reconstruction, augmented reality and scene understanding. To tackle this problem, many learning-based methods (Yuan et al., 2018; Yang et al., 2018; Tchapmi et al., 2019; Xie et al., 2020; Liu et al., 2020; Pan et al., 2021) have been proposed, which are supervised by using either the Chamfer Distance (CD) or Earth Mover's Distance (EMD) to penalize the discrepancies between the generated complete point cloud and the ground truth. However, the CD loss is not sensitive to the overall density distribution, and thus networks trained with the CD loss can generate non-uniform point cloud completion results (see Figures 10 and 11 in the Appendix). EMD is more discriminative for measuring density distributions, but it is too expensive to compute during training. The absence of an effective and efficient training loss severely limits the capabilities of many existing point cloud completion networks. We find that denoising diffusion probabilistic models (DDPM) (Sohl-Dickstein et al., 2015; Ho et al., 2020) can potentially generate uniform and high-quality point clouds with an effective and efficient loss function. A DDPM iteratively moves a set of Gaussian noise points towards a complete and clean point cloud. DDPM defines a one-to-one pointwise mapping between two consecutive point clouds in the diffusion process, which enables it to use a simple mean squared error loss function for training.
This loss function is efficient to compute and explicitly requires the generated point cloud to be uniform, as a one-to-one point mapping is naturally established between the generated point cloud and the ground truth. The point cloud completion task can be treated as a conditional generation problem in the framework of DDPM (Zhou et al., 2021; Luo & Hu, 2021). Indeed, we find the complete point clouds generated by a conditional DDPM often have a good overall distribution that uniformly covers the shape of the object. Nonetheless, due to the probabilistic nature of DDPM and the lack of a suitable network architecture to train the conditional DDPM for 3D point cloud completion in previous works, we find DDPM-completed point clouds often lack smooth surfaces and sharp details (see Figure 1 and Appendix Figure 12), which is also reflected by their high CD loss compared with state-of-the-art point cloud completion methods in our experiments. Another problem with DDPM is its inefficiency in the inference phase. It usually takes several hundred and even up to one thousand forward steps to generate a single point cloud. Several methods (Song et al., 2020; Nichol & Dhariwal, 2021; Kong & Ping, 2021) have been proposed to accelerate DDPM by skipping steps without retraining networks, which, however, leads to an obvious performance drop when a small number of diffusion steps is used. In this work, we propose the Conditional Point Diffusion-Refinement (PDR) paradigm to generate both uniform and high-quality complete point clouds. As shown in Figure 1, our PDR paradigm performs point cloud completion in a coarse-to-fine fashion. Firstly, we use the Conditional Generation Network (CGNet) to generate a coarse complete point cloud with the DDPM conditioned on the partial point cloud. It iteratively moves a set of Gaussian noise points towards a complete point cloud.
Subsequently, the ReFinement Network (RFNet) further refines the coarse complete point cloud generated by the Conditional Generation Network with the help of the partial point clouds. In addition, RFNet can be used to refine the low-quality point clouds generated by an accelerated DDPM, so that we can enjoy an acceleration of up to 50 times while minimizing the performance drop. In this way, the completion results generated by our PDR paradigm demonstrate both a good overall density distribution (i.e., uniformity) and sharp local details. Both CGNet and RFNet have a novel dual-path network architecture, shown in Figure 2, which is composed of two parallel sub-networks: a Denoise subnet and a Condition Feature Extraction subnet for the noisy point clouds and the partial point clouds, respectively. Specifically, we propose the Point Adaptive Deconvolution (PA-Deconv) operation for upsampling, which can effectively manipulate the spatial locations of 3D points. Furthermore, we propose the Feature Transfer (FT) module to directly transmit encoded point features at different scales from the Condition Feature Extraction subnet to the corresponding hierarchy in the Denoise subnet. Extensive experimental results show that our PDR paradigm provides new state-of-the-art performance for point cloud completion. Our key contributions can be summarized as: 1) We identify conditional DDPM as a good model with an effective and efficient loss function to generate uniform point clouds in the point cloud completion task. 2) By using RFNet to refine the coarse point clouds, our PDR paradigm can generate complete point clouds with both a good overall density distribution (i.e., uniform) and sharp local details. 3) We design novel point learning modules, including the PA-Deconv and Feature Transfer modules, for constructing the CGNet in DDPM and the RFNet, which effectively and efficiently utilize multi-level features extracted from incomplete point clouds for point cloud completion.
4) With the help of our proposed RFNet, we can accelerate the generation process of DDPM up to 50 times without a significant drop in point cloud quality. 2 PROBLEM STATEMENT. In this paper, we focus on the 3D point cloud completion task. A 3D point cloud is represented by N points in the 3D space: X = {x_j | 1 ≤ j ≤ N}, where each x_j ∈ R^3 is the 3D coordinates of the j-th point. We assume the dataset is composed of M data pairs {(X_i, C_i) | 1 ≤ i ≤ M}, where X_i is the i-th ground-truth point cloud, and C_i is the incomplete point cloud from a partial observation of X_i. The goal is to develop a model that completes the partial observation C_i and outputs a point cloud as close to the ground truth X_i as possible. For algebraic convenience, we let x ∈ R^{3N} be the vector form of a point cloud X, and similarly c be the vector form of C. 3 METHODOLOGY. We consider the point cloud completion task as a conditional generation problem, where the incomplete point cloud C serves as the conditioner. We use the powerful generative model called denoising diffusion probabilistic models (DDPM) (Sohl-Dickstein et al., 2015; Ho et al., 2020) to first generate a coarse completion of the partial observation. Then we use another network to refine the coarse point cloud to improve its visual quality. Our point cloud completion pipeline is shown in Figure 1. We first briefly introduce the theory of DDPM in Section 3.1, and then describe the detailed architecture of the Conditional Generation Network (CGNet) and ReFinement Network (RFNet) in Section 3.2 and Section 3.3. 3.1 BACKGROUND ON CONDITIONAL DENOISING DIFFUSION PROBABILISTIC MODELS. We assume p_data to be the distribution of the complete point clouds x_i in the dataset, and p_latent = N(0_{3N}, I_{3N×3N}) to be the latent distribution, where N is the Gaussian distribution. Then, the conditional DDPM consists of two Markov chains called the diffusion process and the reverse process.
Both processes have length equal to T. We set T = 1000 in this paper. The Diffusion Process. The diffusion process is a Markov process that adds Gaussian noise into the clean data distribution p_data until the output distribution is close to p_latent. The diffusion process is independent of the conditioner, the incomplete point cloud c_i. Formally, let x^0 ~ p_data. We use the superscript to denote the diffusion step t. For conciseness, we omit the subscript i in the following discussion. The diffusion process from clean data x^0 to x^T is defined as q(x^1, ..., x^T | x^0) = ∏_{t=1}^T q(x^t | x^{t−1}), where q(x^t | x^{t−1}) = N(x^t; √(1−β_t) x^{t−1}, β_t I). (1) The hyperparameters β_t are pre-defined, small positive constants (see details in Appendix Section A.1). According to Ho et al. (2020), there is a closed-form expression for q(x^t | x^0). We first define the constants α_t = 1 − β_t and ᾱ_t = ∏_{i=1}^t α_i. Then, we have q(x^t | x^0) = N(x^t; √ᾱ_t x^0, (1 − ᾱ_t) I). Therefore, when T is large enough, ᾱ_T goes to 0, and q(x^T | x^0) becomes close to the latent distribution p_latent(x^T). Note that x^t can be directly sampled through the following equation: x^t = √ᾱ_t x^0 + √(1 − ᾱ_t) ε, where ε is a Gaussian noise. (2) We emphasize that q(x^t | x^{t−1}) can be seen as a one-to-one pointwise mapping, as x^t can be sampled through the equation x^t = √(1−β_t) x^{t−1} + √β_t ε. Therefore, the order of points in x^0 is preserved in the diffusion process. However, it does not matter in what order we input the points in x^0. That is because when T is large enough, x^T becomes a Gaussian distribution. Every point in a Gaussian distribution is equivalent, and there is no way to distinguish one point from another. The Reverse Process. The reverse process is a Markov process that predicts and eliminates the noise added in the diffusion process. The reverse process is conditioned on the conditioner, the incomplete point cloud c. Let x^T ~ p_latent be a latent variable.
The reverse process from the latent x^T to clean data x^0 is defined as p_θ(x^0, ..., x^{T−1} | x^T, c) = ∏_{t=1}^T p_θ(x^{t−1} | x^t, c), where p_θ(x^{t−1} | x^t, c) = N(x^{t−1}; μ_θ(x^t, c, t), σ_t^2 I). (3) The mean μ_θ(x^t, c, t) is a neural network parameterized by θ, and the variance σ_t^2 is a time-step-dependent constant. To generate a sample conditioned on c, we first sample x^T ~ N(0_{3N}, I_{3N×3N}), then draw x^{t−1} ~ p_θ(x^{t−1} | x^t, c) for t = T, T−1, ..., 1, and finally output x^0. Training. DDPM is trained via variational inference. Ho et al. (2020) introduced a parameterization for μ_θ that largely simplifies the training objective: σ_t^2 = ((1 − ᾱ_{t−1})/(1 − ᾱ_t)) β_t, and μ_θ(x^t, c, t) = (1/√α_t)(x^t − (β_t/√(1 − ᾱ_t)) ε_θ(x^t, c, t)), where ε_θ is a neural network taking the noisy point cloud x^t ~ q(x^t | x^0) in equation (2), the diffusion step t, and the conditioner c as inputs. Then, the simplified training objective becomes L(θ) = E_{i~U([M]), t~U([T]), ε~N(0,I)} ‖ε − ε_θ(√ᾱ_t x_i^0 + √(1 − ᾱ_t) ε, c_i, t)‖^2, (4) where U([M]) is the uniform distribution over {1, 2, ..., M}. The neural network ε_θ learns to predict the noise added to the clean point cloud x^0, which can be used to denoise the noisy point cloud x^t = √ᾱ_t x^0 + √(1 − ᾱ_t) ε. Note that the traditional CD loss or EMD loss is NOT present in Equation 4. The reason that we are able to use the simple mean squared error is that DDPM naturally defines a one-to-one pointwise mapping between two consecutive point clouds in the diffusion process, as shown in Equation 1. Note that at each training step, we not only need to sample a pair of point clouds x_i, c_i, but also a diffusion step t and a Gaussian noise ε. | In this paper, the authors propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
The proposed method effectively and efficiently extracts multi-level features from partially observed point clouds to guide completion. Moreover, it accurately manipulates the spatial locations of 3D points to obtain smooth surfaces and sharp details. Experimental results demonstrate its state-of-the-art performance on mainstream datasets and benchmarks. | SP:d1c45bec49322fb29ab7d1251479d45e5297000d |
A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion | 1 INTRODUCTION. With the rapid development of 3D sensors, 3D point clouds have become an important data format for capturing 3D information, owing to their ease of acquisition and efficiency in storage. Unfortunately, point clouds scanned in the real world are often incomplete due to partial observation and self-occlusion. It is important to recover the complete shape by inferring the missing parts for many downstream tasks such as 3D reconstruction, augmented reality and scene understanding. To tackle this problem, many learning-based methods (Yuan et al., 2018; Yang et al., 2018; Tchapmi et al., 2019; Xie et al., 2020; Liu et al., 2020; Pan et al., 2021) have been proposed, which are supervised by using either the Chamfer Distance (CD) or Earth Mover's Distance (EMD) to penalize the discrepancies between the generated complete point cloud and the ground truth. However, the CD loss is not sensitive to the overall density distribution, and thus networks trained with the CD loss can generate non-uniform point cloud completion results (see Figures 10 and 11 in the Appendix). EMD is more discriminative for measuring density distributions, but it is too expensive to compute during training. The absence of an effective and efficient training loss severely limits the capabilities of many existing point cloud completion networks. We find that denoising diffusion probabilistic models (DDPM) (Sohl-Dickstein et al., 2015; Ho et al., 2020) can potentially generate uniform and high-quality point clouds with an effective and efficient loss function. A DDPM iteratively moves a set of Gaussian noise points towards a complete and clean point cloud. DDPM defines a one-to-one pointwise mapping between two consecutive point clouds in the diffusion process, which enables it to use a simple mean squared error loss function for training.
This loss function is efficient to compute and explicitly requires the generated point cloud to be uniform, as a one-to-one point mapping is naturally established between the generated point cloud and the ground truth. The point cloud completion task can be treated as a conditional generation problem in the framework of DDPM (Zhou et al., 2021; Luo & Hu, 2021). Indeed, we find the complete point clouds generated by a conditional DDPM often have a good overall distribution that uniformly covers the shape of the object. Nonetheless, due to the probabilistic nature of DDPM and the lack of a suitable network architecture to train the conditional DDPM for 3D point cloud completion in previous works, we find DDPM-completed point clouds often lack smooth surfaces and sharp details (see Figure 1 and Appendix Figure 12), which is also reflected by their high CD loss compared with state-of-the-art point cloud completion methods in our experiments. Another problem with DDPM is its inefficiency in the inference phase. It usually takes several hundred and even up to one thousand forward steps to generate a single point cloud. Several methods (Song et al., 2020; Nichol & Dhariwal, 2021; Kong & Ping, 2021) have been proposed to accelerate DDPM by skipping steps without retraining networks, which, however, leads to an obvious performance drop when a small number of diffusion steps is used. In this work, we propose the Conditional Point Diffusion-Refinement (PDR) paradigm to generate both uniform and high-quality complete point clouds. As shown in Figure 1, our PDR paradigm performs point cloud completion in a coarse-to-fine fashion. Firstly, we use the Conditional Generation Network (CGNet) to generate a coarse complete point cloud with the DDPM conditioned on the partial point cloud. It iteratively moves a set of Gaussian noise points towards a complete point cloud.
Subsequently , the ReFinement Network ( RFNet ) further refines the coarse complete point cloud generated by the Conditional Generation Network with the help of partial point clouds . In addition , RFNet can be used to refine the low-quality point clouds generated by an accelerated DDPM , so that we could enjoy an acceleration of up to 50 times while minimizing the performance drop . In this way , the completion results generated by our PDR paradigm demonstrate both good overall density distribution ( i.e . uniform ) and sharp local details . Both CGNet and RFNet have a novel dual-path network architecture shown in Figure 2 , which is composed of two parallel sub-networks , a Denoise subnet and a Condition Feature Extraction subnet for noisy point clouds and partial point clouds , respectively . Specifically , we propose the Point Adaptive Deconvolution ( PA-Deconv ) operation for upsampling , which can effectively manipulate spatial locations of 3D points . Furthermore , we propose the Feature Transfer ( FT ) module to directly transmit encoded point features at different scales from the Condition Feature Extraction subnet to the corresponding hierarchy in the Denoise subnet . Extensive experimental results show that our PDR paradigm can provide new state-of-the-art performance for point cloud completion . Our key contributions can be summarized as follows : 1 ) We identify conditional DDPM to be a good model with an effective and efficient loss function to generate uniform point clouds in the point cloud completion task . 2 ) By using RFNet to refine the coarse point clouds , our PDR paradigm can generate complete point clouds with both good overall density distribution ( i.e . uniform ) and sharp local details . 3 ) We design novel point learning modules , including the PA-Deconv and Feature Transfer modules , for constructing CGNet in DDPM and RFNet , which effectively and efficiently utilize multi-level features extracted from incomplete point clouds for point cloud completion .
4 ) With the help of our proposed RFNet , we can accelerate the generation process of DDPM up to 50 times without a significant drop in point cloud quality . 2 PROBLEM STATEMENT . In this paper , we focus on the 3D point cloud completion task . A 3D point cloud is represented by N points in the 3D space : X = { xj | 1 ≤ j ≤ N } , where each xj ∈ R3 is the 3D coordinates of the j-th point . We assume the dataset is composed of M data pairs { ( Xi , Ci ) | 1 ≤ i ≤ M } , where Xi is the i-th ground-truth point cloud , and Ci is the incomplete point cloud from a partial observation of Xi . The goal is to develop a model that completes the partial observation Ci and outputs a point cloud as close to the ground truth Xi as possible . For algebraic convenience , we let x ∈ R3N be the vector form of a point cloud X , and similarly c be the vector form of C . 3 METHODOLOGY . We consider the point cloud completion task as a conditional generation problem , where the incomplete point cloud C serves as the conditioner . We use the powerful generative model called the denoising diffusion probabilistic model ( DDPM ) ( Sohl-Dickstein et al. , 2015 ; Ho et al. , 2020 ) to first generate a coarse completion of the partial observation . Then we use another network to refine the coarse point cloud to improve its visual quality . Our point cloud completion pipeline is shown in Figure 1 . We first briefly introduce the theory of DDPM in Section 3.1 , and then describe the detailed architectures of the Conditional Generation Network ( CGNet ) and the ReFinement Network ( RFNet ) in Section 3.2 and Section 3.3 , respectively . 3.1 BACKGROUND ON CONDITIONAL DENOISING DIFFUSION PROBABILISTIC MODELS . We assume pdata to be the distribution of the complete point clouds xi in the dataset , and platent = N ( 03N , I3N×3N ) to be the latent distribution , where N denotes the Gaussian distribution . Then , the conditional DDPM consists of two Markov chains called the diffusion process and the reverse process .
Both processes have length equal to T . We set T = 1000 in this paper . The Diffusion Process . The diffusion process is a Markov process that adds Gaussian noise to the clean data distribution pdata until the output distribution is close to platent . The diffusion process is independent of the conditioner , the incomplete point cloud ci . Formally , let $x^0 \sim p_{\text{data}}$ . We use the superscript to denote the diffusion step t . For conciseness , we omit the subscript i in the following discussion . The diffusion process from clean data $x^0$ to $x^T$ is defined as $q(x^1 , \cdots , x^T \mid x^0) = \prod_{t=1}^{T} q(x^t \mid x^{t-1})$ , where $q(x^t \mid x^{t-1}) = \mathcal{N}(x^t ; \sqrt{1-\beta_t}\, x^{t-1} , \beta_t I)$ . ( 1 ) The hyperparameters βt are pre-defined , small positive constants ( see details in Appendix Section A.1 ) . According to Ho et al . ( 2020 ) , there is a closed-form expression for $q(x^t \mid x^0)$ . We first define the constants $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$ . Then , we have $q(x^t \mid x^0) = \mathcal{N}(x^t ; \sqrt{\bar{\alpha}_t}\, x^0 , (1-\bar{\alpha}_t) I)$ . Therefore , when T is large enough , $\bar{\alpha}_T$ goes to 0 , and $q(x^T \mid x^0)$ becomes close to the latent distribution $p_{\text{latent}}(x^T)$ . Note that $x^t$ can be directly sampled through the following equation : $x^t = \sqrt{\bar{\alpha}_t}\, x^0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$ , where $\epsilon$ is a Gaussian noise . ( 2 ) We emphasize that $q(x^t \mid x^{t-1})$ can be seen as a one-to-one pointwise mapping , as $x^t$ can be sampled through the equation $x^t = \sqrt{1-\beta_t}\, x^{t-1} + \sqrt{\beta_t}\, \epsilon$ . Therefore , the order of points in $x^0$ is preserved in the diffusion process . However , the order in which we input the points of $x^0$ does not matter . That is because when T is large enough , $x^T$ becomes a Gaussian distribution , in which every point is equivalent and there is no way to distinguish one point from another . The Reverse Process . The reverse process is a Markov process that predicts and eliminates the noise added in the diffusion process . The reverse process is conditioned on the conditioner , the incomplete point cloud c . Let $x^T \sim p_{\text{latent}}$ be a latent variable .
The reverse process from latent $x^T$ to clean data $x^0$ is defined as $p_\theta(x^0 , \cdots , x^{T-1} \mid x^T , c) = \prod_{t=1}^{T} p_\theta(x^{t-1} \mid x^t , c)$ , where $p_\theta(x^{t-1} \mid x^t , c) = \mathcal{N}(x^{t-1} ; \mu_\theta(x^t , c , t) , \sigma_t^2 I)$ . ( 3 ) The mean $\mu_\theta(x^t , c , t)$ is a neural network parameterized by θ and the variance $\sigma_t^2$ is a time-step dependent constant . To generate a sample conditioned on c , we first sample $x^T \sim \mathcal{N}(0_{3N} , I_{3N \times 3N})$ , then draw $x^{t-1} \sim p_\theta(x^{t-1} \mid x^t , c)$ for $t = T , T-1 , \cdots , 1$ , and finally output $x^0$ . Training . DDPM is trained via variational inference . Ho et al . ( 2020 ) introduced a certain parameterization for $\mu_\theta$ that can largely simplify the training objective . The parameterization is $\sigma_t^2 = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t} \beta_t$ and $\mu_\theta(x^t , c , t) = \frac{1}{\sqrt{\alpha_t}} \left( x^t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x^t , c , t) \right)$ , where $\epsilon_\theta$ is a neural network taking the noisy point cloud $x^t \sim q(x^t \mid x^0)$ in equation ( 2 ) , the diffusion step t , and the conditioner c as inputs . Then , the simplified training objective becomes $L(\theta) = \mathbb{E}_{i \sim U([M]) ,\, t \sim U([T]) ,\, \epsilon \sim \mathcal{N}(0 , I)} \, \| \epsilon - \epsilon_\theta( \sqrt{\bar{\alpha}_t}\, x_i^0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon , c_i , t ) \|^2$ , ( 4 ) where $U([M])$ is the uniform distribution over $\{ 1 , 2 , \cdots , M \}$ . The neural network $\epsilon_\theta$ learns to predict the noise $\epsilon$ added to the clean point cloud $x^0$ , which can be used to denoise the noisy point cloud $x^t = \sqrt{\bar{\alpha}_t}\, x^0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$ . Note that the traditional CD loss or EMD loss is NOT present in Equation 4 . The reason that we are able to use the simple mean squared error is that DDPM naturally defines a one-to-one pointwise mapping between two consecutive point clouds in the diffusion process , as shown in Equation 1 . Note that at each training step , we not only need to sample a pair of point clouds $x_i , c_i$ , but also a diffusion step t and a Gaussian noise $\epsilon$ . | This paper proposed a novel way to utilize DDPMs for point cloud completion. They use a conditional DDPM to generate a plausible "complete" point cloud from pure Gaussian noise, conditioned on a partial subset of the point cloud points.
Their main contribution is in the design of the conditional feature extraction and denoising subnets, which essentially form dual-path connected U-Net-type structures with internal modules based on an improved PointNet++ design. Additionally, the authors propose a refinement network to mitigate the computational burden of needing so many diffusion steps, which seems to offer an acceptable performance tradeoff for increased computational efficiency. The method convincingly demonstrates superior performance over its competitors, with crisp details preserved on the datasets tested. | SP:d1c45bec49322fb29ab7d1251479d45e5297000d
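The forward-sampling identity in Equation (2) and the simplified objective in Equation (4) above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's implementation: the linear β schedule and the toy point cloud are assumptions (the paper defers its actual schedule to its Appendix A.1), and in practice the predicted noise would come from the conditional network ε_θ(x^t, c, t) rather than being supplied directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule over T = 1000 steps (an assumption for illustration).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # alpha_bar_t = prod_{i<=t} alpha_i

def q_sample(x0, t, eps):
    """Direct sampling of x^t from x^0, Eq. (2):
    x^t = sqrt(alpha_bar_t) * x^0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def loss(eps_pred, eps):
    """Simplified DDPM training objective, Eq. (4): a plain MSE between the
    true noise and the predicted noise -- no CD or EMD loss is needed, because
    the pointwise correspondence is fixed by the diffusion process itself."""
    return np.mean((eps - eps_pred) ** 2)

# Toy "clean point cloud" with N = 2048 points in R^3.
x0 = rng.standard_normal((2048, 3))
eps = rng.standard_normal(x0.shape)
xt = q_sample(x0, T - 1, eps)  # at t = T, alpha_bar_T ~ 0, so x^T ~ N(0, I)
```

Since `alpha_bars[-1]` is nearly zero, the final diffused sample is essentially pure Gaussian noise, matching the claim that q(x^T | x^0) approaches the latent distribution.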
Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models | Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent ’ s latent state space . However , learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging . One source of difficulty is the presence of irrelevant but hard-to-model background distractions , and unimportant visual details of task-relevant entities . We address this issue by learning a recurrent latent dynamics model which contrastively predicts the next observation . This simple model leads to surprisingly robust robotic control even with simultaneous camera , background , and color distractions . We outperform alternatives such as bisimulation methods which impose state-similarity measures derived from divergence in future reward or future optimal actions . We obtain state-of-the-art results on the Distracting Control Suite , a challenging benchmark for pixel-based robotic control . 1 INTRODUCTION . For a robot , predicting the future conditioned on its actions is a rich source of information about itself and the world that it lives in . The gradient-rich training signal from future prediction can be used to shape a robot ’ s internal latent representation of the world . These models can be used to generate imagined interactions , facilitate model-predictive control , and dynamically attend to mispredicted events . Video prediction models ( Finn et al. , 2016 ) have been shown to be effective for planning robot actions . Model-based Reinforcement Learning ( MBRL ) methods such as Dreamer ( Hafner et al. , 2020 ) and SLAC ( Lee et al. , 2020 ) have been shown to reduce the sample complexity of learning control tasks from pixel-based observations . These methods learn an action-conditioned model of the observations received by the robot .
This works well when observations are clean , backgrounds are stationary , and only task-relevant information is contained in the observations . However , in a real-world unconstrained environment , observations are likely to include extraneous information that is irrelevant to the task ( see Figure 1a ) . Extraneous information can include independently moving objects , changes in lighting , colors , as well as changes due to camera motion . High-frequency details , even in task-relevant entities , such as the texture of a door or the shininess of a door handle that a robot is trying to open are irrelevant . Modeling the observations at a pixel-level requires spending capacity to explain all of these attributes , which is inefficient . The prevalence of extraneous information poses an immediate challenge to reconstruction based methods , e.g . Dreamer . One promising approach is to impose a metric on the latent state space which groups together states that are indistinguishable with respect to future reward sequences ( DBC , Zhang et al . ( 2021 ) ) or future action sequences ( PSE , Agarwal et al . ( 2021 ) ) when running the optimal policy . However , using rewards alone for grounding is sample inefficient , especially in tasks with sparse rewards . Similarly , actions may not provide enough training signal , especially if they are low-dimensional . On the other hand , observations are usually high-dimensional and carry more bits of information . One way to make use of this signal while avoiding pixel-level reconstruction is to maximize the mutual information between the observation and its learned representation ( Hjelm et al. , 2019 ) . This objective can be approximated using contrastive learning models where the task is to predict a learned representation which matches the encoding of the true future observation ( positive pair ) , but does not match the encoding of other observations ( negative pairs ) . 
However , the performance of contrastive variants of Dreamer and DBC has been shown to be inferior , for example , Fig . 8 in Hafner et al . ( 2020 ) and Fig . 3 in Zhang et al . ( 2021 ) . In this work , we show that contrastive learning can in fact lead to surprisingly strong robustness to severe distractions , provided that a recurrent state-space model is used . We call our model CoRe : Contrastive Recurrent State-Space Model . The key intuition is that recurrent models such as GRUs and LSTMs maintain temporal smoothness in their hidden state because they propagate their state using gating and incremental modifications . When used along with contrastive learning , this smoothness ensures the presence of informative hard negatives in the same mini-batch of training . Using CoRe , we get state-of-the-art results on the challenging Distracting Control Suite benchmark ( Stone et al. , 2021 ) which includes background , camera , and color distractions applied simultaneously . Video results from our model can be seen at supplementary/index.html . 2 MODEL DESCRIPTION . We formulate the robotics control problem from visual observations as a discrete-time partially observable Markov decision process ( POMDP ) . At any time step t , the robot agent has an internal state denoted by zt . It receives an observation ot , takes action at , and moves to state zt+1 , receiving a reward rt+1 . We present our model with continuous-valued states , actions , and observations . A straightforward extension can be made to discrete states . The objective is to find a policy that maximizes the expected sum of future rewards $\sum_{t=0}^{\infty} \gamma^t r_t$ , where γ is a discount factor .
, 2019 ) which encodes the learned dynamics , a behavior model based on Soft-Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) which is used to control the robot , and an observation encoder . Observation encoder : This component is a convolutional network f which takes an observation as input and outputs its encoding xt = f ( ot ) . LayerNorm ( Ba et al. , 2016 ) is applied in the last layer of the encoder . Recurrent State Space Model ( RSSM ) : This component consists of the following modules , Future predictor p ( ẑt|zt−1 , at−1 ) , Observation representation decoder yt = g ( ẑt ) , Correction Model q ( zt|ẑt , xt ) , Inverse Dynamics model ât−1 = a ( zt , zt−1 ) , Reward predictor r̂t = r ( zt ) . Each module is parameterized as a fully-connected MLP , except the future predictor which uses GRU cells ( Cho et al. , 2014 ) . The future predictor outputs a prior distribution ẑt over the next state , given the previous state and action . This is decoded into yt which corresponds to an encoding of the observation that the agent expects to see next . The correction model q is given access to the current observation encoding xt , along with the prior ẑt and outputs the posterior distribution zt , i.e. , it corrects the prior belief given the new observation . The posterior state zt is used to predict the reward , and also feeds into the actor and critic models that control the agent ’ s behavior . The inverse dynamics model predicts the action that took the agent from state zt−1 to zt . The RSSM is similar to the one used in Dreamer and SLAC except that it includes an inverse dynamics model and that the observation representation is predicted from the prior latent state instead of the posterior latent state . These differences lead to slightly better results and a different probabilistic interpretation of the underlying model which we discuss in Section 2.2 . 
Similar to Dreamer , the latent state zt consists of a deterministic and a stochastic component ( more details in Appendix A ) . Behavior Model : This component consists of an actor πφ ( at|zt ) and two critic networks $Q_{\theta_i}(z_t , a)$ , $i \in \{ 1 , 2 \}$ , which are standard for training using the SAC algorithm ( Haarnoja et al. , 2018 ) . 2.2 REPRESENTATION LEARNING WITH CORE . Our goal is to represent the agent ’ s latent state by extracting task-relevant information from the observations , while ignoring the distractions . We formulate this representation learning problem as dynamics-regularized mutual information maximization between the observations ot and the model ’ s prediction of the observation ’ s encoding yt . Mutual information maximization ( Hjelm et al. , 2019 ) has been used extensively for representation learning . In our case , it can be approximated using a contrastive learning objective defined over a mini-batch of sequences , in a way similar to Contrastive Predictive Coding ( CPC ) ( van den Oord et al. , 2018 ) . At time-step t in example i in the mini-batch , let $x_{i,t}$ and $y_{i,t}$ denote the real observation ’ s encoding and the predicted observation encoding , respectively . The loss function is $L_c = -\sum_{i,t} \log \left( \frac{\exp(\lambda x_{i,t}^{\top} y_{i,t})}{\sum_{i',t'} \exp(\lambda x_{i,t}^{\top} y_{i',t'})} \right) - \sum_{i,t} \log \left( \frac{\exp(\lambda x_{i,t}^{\top} y_{i,t})}{\sum_{i',t'} \exp(\lambda x_{i',t'}^{\top} y_{i,t})} \right)$ , ( 1 ) where λ is a learned inverse temperature parameter and i′ and t′ index over all sequences and time-steps in the mini-batch , respectively . Intuitively , the loss function makes the predicted observation encoding match the corresponding real observation ’ s encoding , but not match other real observation encodings , and vice-versa . In addition , the predicted and corrected latent state distributions should match , which can be formulated as minimizing a KL-divergence term $L_{KL} = \mathrm{KL}\left( q(z_t \mid \hat{z}_t , x_t) \,\|\, p(\hat{z}_t \mid z_{t-1} , a_{t-1}) \right)$ .
( 2 ) An additional source of training signal is the reward prediction loss , $(r_t - \hat{r}_t)^2$ . Furthermore , modeling the inverse dynamics , i.e . predicting $a_t$ from $z_t$ and $z_{t+1}$ , is a different way of representing the transition dynamics . This does not provide any additional information because we already model forward dynamics . However , we found that it helps in making faster training progress . We incorporate this by adding an action prediction loss . The combined loss is $J_M(\Theta) = L_c + \beta L_{KL} + \alpha_r (r_t - \hat{r}_t)^2 + \alpha_a \| a_t - \hat{a}_t \|^2$ , where β , αr , αa are tunable hyperparameters and Θ is the combined set of model parameters , which includes the parameters of the observation encoder , the RSSM , and the inverse temperature λ . Relationship with Action-Conditioned Sequential VAEs . The above objective function bears resemblance to the ELBO of a Conditional Sequential VAE , which is used as the underlying probabilistic model in Dreamer and SLAC . The objective function there is the sum of an observation reconstruction term $\| o_t - \hat{o}_t \|^2$ and $L_{KL}$ scaled by β , corresponding to a β-VAE formulation ( Higgins et al. , 2017 ) . This formulation relies on decoding the posterior latent state to compute ôt . Since the posterior latent state distribution q ( zt|ẑt , xt ) is conditioned on the true observation xt , observation reconstruction is an auto-encoding task . Only the KL-term corresponds to the future prediction task . On the other hand , in our case the prior latent state is decoded to a representation of the observation , which already constitutes a future prediction task . The KL-term only provides additional regularization . This makes the model less reliant on careful tuning of β ( empirically validated in Appendix C.3 ) . In fact , setting it to zero also leads to reasonable performance in our experiments , though not the best . | The paper shows a latent-planning RL model to learn control policies directly from pixels.
To learn a better representation, it uses a recurrent-model contrastive learning approach, which enhances the representation learning performance of single-frame contrastive methods. This was tested on robotic control suites with challenging distracting backgrounds. The main contribution of the work is the addition of recurrence to the model, together with extensive testing and an explanation of why it tends to work better. | SP:e598aee34b4adb898d8495153982746795b7be62
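The symmetric contrastive objective of Equation (1) above can be written compactly over a flattened mini-batch. The following is a minimal sketch, assuming plain (B, D) encoding matrices and a fixed scalar inverse temperature; in the paper, λ is learned and positives are indexed by (sequence, time-step) pairs rather than a single batch axis.

```python
import numpy as np

def core_contrastive_loss(x, y, lam=1.0):
    """Symmetric InfoNCE-style objective of Eq. (1).

    x : (B, D) encodings of real observations, with the batch axis flattened
        over (sequence i, time-step t).
    y : (B, D) predicted observation encodings; x[k] and y[k] form the
        positive pair, every other pairing acts as a negative.
    lam : inverse temperature (fixed scalar here for illustration).
    """
    logits = lam * x @ y.T  # logits[k, k'] = lam * <x_k, y_k'>
    diag = np.diag(logits)
    # Row direction: each real encoding x_k must identify its prediction y_k.
    row = diag - np.log(np.exp(logits).sum(axis=1))
    # Column direction: each prediction y_k must identify its observation x_k.
    col = diag - np.log(np.exp(logits).sum(axis=0))
    return -(row.sum() + col.sum())

# Well-aligned positive pairs give a near-zero loss ...
x = 10.0 * np.eye(4)
aligned = core_contrastive_loss(x, x.copy())
# ... while mismatched (shuffled) pairs are penalized heavily.
shuffled = core_contrastive_loss(x, x[::-1].copy())
```

A production implementation would use a numerically stable log-sum-exp; the intent here is only to make the two softmax directions of the loss explicit.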
Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models | Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent ’ s latent state space . However , learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging . One source of difficulty is the presence of irrelevant but hard-to-model background distractions , and unimportant visual details of task-relevant entities . We address this issue by learning a recurrent latent dynamics model which contrastively predicts the next observation . This simple model leads to surprisingly robust robotic control even with simultaneous camera , background , and color distractions . We outperform alternatives such as bisimulation methods which impose state-similarity measures derived from divergence in future reward or future optimal actions . We obtain state-of-the-art results on the Distracting Control Suite , a challenging benchmark for pixel-based robotic control . 1 INTRODUCTION . For a robot , predicting the future conditioned on its actions is a rich source of information about itself and the world that it lives in . The gradient-rich training signal from future prediction can be used to shape a robot ’ s internal latent representation of the world . These models can be used to generate imagined interactions , facilitate model-predictive control , and dynamically attend to mispredicted events . Video prediction models ( Finn et al. , 2016 ) have been shown to be effective for planning robot actions . Model-based Reinforcement Learning ( MBRL ) methods such as Dreamer ( Hafner et al. , 2020 ) and SLAC ( Lee et al. , 2020 ) have been shown to reduce the sample complexity of learning control tasks from pixel-based observations . These methods learn an action-conditioned model of the observations received by the robot .
This works well when observations are clean , backgrounds are stationary , and only task-relevant information is contained in the observations . However , in a real-world unconstrained environment , observations are likely to include extraneous information that is irrelevant to the task ( see Figure 1a ) . Extraneous information can include independently moving objects , changes in lighting , colors , as well as changes due to camera motion . High-frequency details , even in task-relevant entities , such as the texture of a door or the shininess of a door handle that a robot is trying to open are irrelevant . Modeling the observations at a pixel-level requires spending capacity to explain all of these attributes , which is inefficient . The prevalence of extraneous information poses an immediate challenge to reconstruction based methods , e.g . Dreamer . One promising approach is to impose a metric on the latent state space which groups together states that are indistinguishable with respect to future reward sequences ( DBC , Zhang et al . ( 2021 ) ) or future action sequences ( PSE , Agarwal et al . ( 2021 ) ) when running the optimal policy . However , using rewards alone for grounding is sample inefficient , especially in tasks with sparse rewards . Similarly , actions may not provide enough training signal , especially if they are low-dimensional . On the other hand , observations are usually high-dimensional and carry more bits of information . One way to make use of this signal while avoiding pixel-level reconstruction is to maximize the mutual information between the observation and its learned representation ( Hjelm et al. , 2019 ) . This objective can be approximated using contrastive learning models where the task is to predict a learned representation which matches the encoding of the true future observation ( positive pair ) , but does not match the encoding of other observations ( negative pairs ) . 
However , the performance of contrastive variants of Dreamer and DBC has been shown to be inferior , for example , Fig . 8 in Hafner et al . ( 2020 ) and Fig . 3 in Zhang et al . ( 2021 ) . In this work , we show that contrastive learning can in fact lead to surprisingly strong robustness to severe distractions , provided that a recurrent state-space model is used . We call our model CoRe : Contrastive Recurrent State-Space Model . The key intuition is that recurrent models such as GRUs and LSTMs maintain temporal smoothness in their hidden state because they propagate their state using gating and incremental modifications . When used along with contrastive learning , this smoothness ensures the presence of informative hard negatives in the same mini-batch of training . Using CoRe , we get state-of-the-art results on the challenging Distracting Control Suite benchmark ( Stone et al. , 2021 ) which includes background , camera , and color distractions applied simultaneously . Video results from our model can be seen at supplementary/index.html . 2 MODEL DESCRIPTION . We formulate the robotics control problem from visual observations as a discrete-time partially observable Markov decision process ( POMDP ) . At any time step t , the robot agent has an internal state denoted by zt . It receives an observation ot , takes action at , and moves to state zt+1 , receiving a reward rt+1 . We present our model with continuous-valued states , actions , and observations . A straightforward extension can be made to discrete states . The objective is to find a policy that maximizes the expected sum of future rewards $\sum_{t=0}^{\infty} \gamma^t r_t$ , where γ is a discount factor .
, 2019 ) which encodes the learned dynamics , a behavior model based on Soft-Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) which is used to control the robot , and an observation encoder . Observation encoder : This component is a convolutional network f which takes an observation as input and outputs its encoding xt = f ( ot ) . LayerNorm ( Ba et al. , 2016 ) is applied in the last layer of the encoder . Recurrent State Space Model ( RSSM ) : This component consists of the following modules , Future predictor p ( ẑt|zt−1 , at−1 ) , Observation representation decoder yt = g ( ẑt ) , Correction Model q ( zt|ẑt , xt ) , Inverse Dynamics model ât−1 = a ( zt , zt−1 ) , Reward predictor r̂t = r ( zt ) . Each module is parameterized as a fully-connected MLP , except the future predictor which uses GRU cells ( Cho et al. , 2014 ) . The future predictor outputs a prior distribution ẑt over the next state , given the previous state and action . This is decoded into yt which corresponds to an encoding of the observation that the agent expects to see next . The correction model q is given access to the current observation encoding xt , along with the prior ẑt and outputs the posterior distribution zt , i.e. , it corrects the prior belief given the new observation . The posterior state zt is used to predict the reward , and also feeds into the actor and critic models that control the agent ’ s behavior . The inverse dynamics model predicts the action that took the agent from state zt−1 to zt . The RSSM is similar to the one used in Dreamer and SLAC except that it includes an inverse dynamics model and that the observation representation is predicted from the prior latent state instead of the posterior latent state . These differences lead to slightly better results and a different probabilistic interpretation of the underlying model which we discuss in Section 2.2 . 
Similar to Dreamer , the latent state zt consists of a deterministic and a stochastic component ( more details in Appendix A ) . Behavior Model : This component consists of an actor πφ ( at|zt ) and two critic networks $Q_{\theta_i}(z_t , a)$ , $i \in \{ 1 , 2 \}$ , which are standard for training using the SAC algorithm ( Haarnoja et al. , 2018 ) . 2.2 REPRESENTATION LEARNING WITH CORE . Our goal is to represent the agent ’ s latent state by extracting task-relevant information from the observations , while ignoring the distractions . We formulate this representation learning problem as dynamics-regularized mutual information maximization between the observations ot and the model ’ s prediction of the observation ’ s encoding yt . Mutual information maximization ( Hjelm et al. , 2019 ) has been used extensively for representation learning . In our case , it can be approximated using a contrastive learning objective defined over a mini-batch of sequences , in a way similar to Contrastive Predictive Coding ( CPC ) ( van den Oord et al. , 2018 ) . At time-step t in example i in the mini-batch , let $x_{i,t}$ and $y_{i,t}$ denote the real observation ’ s encoding and the predicted observation encoding , respectively . The loss function is $L_c = -\sum_{i,t} \log \left( \frac{\exp(\lambda x_{i,t}^{\top} y_{i,t})}{\sum_{i',t'} \exp(\lambda x_{i,t}^{\top} y_{i',t'})} \right) - \sum_{i,t} \log \left( \frac{\exp(\lambda x_{i,t}^{\top} y_{i,t})}{\sum_{i',t'} \exp(\lambda x_{i',t'}^{\top} y_{i,t})} \right)$ , ( 1 ) where λ is a learned inverse temperature parameter and i′ and t′ index over all sequences and time-steps in the mini-batch , respectively . Intuitively , the loss function makes the predicted observation encoding match the corresponding real observation ’ s encoding , but not match other real observation encodings , and vice-versa . In addition , the predicted and corrected latent state distributions should match , which can be formulated as minimizing a KL-divergence term $L_{KL} = \mathrm{KL}\left( q(z_t \mid \hat{z}_t , x_t) \,\|\, p(\hat{z}_t \mid z_{t-1} , a_{t-1}) \right)$ .
( 2 ) An additional source of training signal is the reward prediction loss , $(r_t - \hat{r}_t)^2$ . Furthermore , modeling the inverse dynamics , i.e . predicting $a_t$ from $z_t$ and $z_{t+1}$ , is a different way of representing the transition dynamics . This does not provide any additional information because we already model forward dynamics . However , we found that it helps in making faster training progress . We incorporate this by adding an action prediction loss . The combined loss is $J_M(\Theta) = L_c + \beta L_{KL} + \alpha_r (r_t - \hat{r}_t)^2 + \alpha_a \| a_t - \hat{a}_t \|^2$ , where β , αr , αa are tunable hyperparameters and Θ is the combined set of model parameters , which includes the parameters of the observation encoder , the RSSM , and the inverse temperature λ . Relationship with Action-Conditioned Sequential VAEs . The above objective function bears resemblance to the ELBO of a Conditional Sequential VAE , which is used as the underlying probabilistic model in Dreamer and SLAC . The objective function there is the sum of an observation reconstruction term $\| o_t - \hat{o}_t \|^2$ and $L_{KL}$ scaled by β , corresponding to a β-VAE formulation ( Higgins et al. , 2017 ) . This formulation relies on decoding the posterior latent state to compute ôt . Since the posterior latent state distribution q ( zt|ẑt , xt ) is conditioned on the true observation xt , observation reconstruction is an auto-encoding task . Only the KL-term corresponds to the future prediction task . On the other hand , in our case the prior latent state is decoded to a representation of the observation , which already constitutes a future prediction task . The KL-term only provides additional regularization . This makes the model less reliant on careful tuning of β ( empirically validated in Appendix C.3 ) . In fact , setting it to zero also leads to reasonable performance in our experiments , though not the best .
| This paper presents CoRe, a Contrastive Recurrent state-space model, for robust model-based reinforcement learning in robotic control. Standard reconstruction-based state-space models are less robust in unstructured real-world scenarios because they must model high-frequency details. Instead, CoRe learns the state-space model with contrastive learning, which greatly improves robustness. In addition, a policy is learned with SAC. Experiments on the Distracting Control Suite and several robotic control tasks demonstrate the improved robustness of CoRe. | SP:e598aee34b4adb898d8495153982746795b7be62
Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models | Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent's latent state space. However, learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging. One source of difficulty is the presence of irrelevant but hard-to-model background distractions, and unimportant visual details of task-relevant entities. We address this issue by learning a recurrent latent dynamics model which contrastively predicts the next observation. This simple model leads to surprisingly robust robotic control even with simultaneous camera, background, and color distractions. We outperform alternatives such as bisimulation methods, which impose state-similarity measures derived from divergence in future rewards or future optimal actions. We obtain state-of-the-art results on the Distracting Control Suite, a challenging benchmark for pixel-based robotic control. 1 INTRODUCTION. For a robot, predicting the future conditioned on its actions is a rich source of information about itself and the world that it lives in. The gradient-rich training signal from future prediction can be used to shape a robot's internal latent representation of the world. These models can be used to generate imagined interactions, facilitate model-predictive control, and dynamically attend to mispredicted events. Video prediction models (Finn et al., 2016) have been shown to be effective for planning robot actions. Model-based Reinforcement Learning (MBRL) methods such as Dreamer (Hafner et al., 2020) and SLAC (Lee et al., 2020) have been shown to reduce the sample complexity of learning control tasks from pixel-based observations. These methods learn an action-conditioned model of the observations received by the robot.
This works well when observations are clean , backgrounds are stationary , and only task-relevant information is contained in the observations . However , in a real-world unconstrained environment , observations are likely to include extraneous information that is irrelevant to the task ( see Figure 1a ) . Extraneous information can include independently moving objects , changes in lighting , colors , as well as changes due to camera motion . High-frequency details , even in task-relevant entities , such as the texture of a door or the shininess of a door handle that a robot is trying to open are irrelevant . Modeling the observations at a pixel-level requires spending capacity to explain all of these attributes , which is inefficient . The prevalence of extraneous information poses an immediate challenge to reconstruction based methods , e.g . Dreamer . One promising approach is to impose a metric on the latent state space which groups together states that are indistinguishable with respect to future reward sequences ( DBC , Zhang et al . ( 2021 ) ) or future action sequences ( PSE , Agarwal et al . ( 2021 ) ) when running the optimal policy . However , using rewards alone for grounding is sample inefficient , especially in tasks with sparse rewards . Similarly , actions may not provide enough training signal , especially if they are low-dimensional . On the other hand , observations are usually high-dimensional and carry more bits of information . One way to make use of this signal while avoiding pixel-level reconstruction is to maximize the mutual information between the observation and its learned representation ( Hjelm et al. , 2019 ) . This objective can be approximated using contrastive learning models where the task is to predict a learned representation which matches the encoding of the true future observation ( positive pair ) , but does not match the encoding of other observations ( negative pairs ) . 
However, the performance of contrastive variants of Dreamer and DBC has been shown to be inferior; see, for example, Fig. 8 in Hafner et al. (2020) and Fig. 3 in Zhang et al. (2021). In this work, we show that contrastive learning can in fact lead to surprisingly strong robustness to severe distractions, provided that a recurrent state-space model is used. We call our model CoRe: Contrastive Recurrent State-Space Model. The key intuition is that recurrent models such as GRUs and LSTMs maintain temporal smoothness in their hidden state because they propagate their state using gating and incremental modifications. When used along with contrastive learning, this smoothness ensures the presence of informative hard negatives in the same mini-batch of training. Using CoRe, we get state-of-the-art results on the challenging Distracting Control Suite benchmark (Stone et al., 2021), which includes background, camera, and color distractions applied simultaneously. Video results from our model can be seen at supplementary/index.html. 2 MODEL DESCRIPTION. We formulate the robotics control problem from visual observations as a discrete-time partially observable Markov decision process (POMDP). At any time step $t$, the robot agent has an internal state denoted by $z_t$. It receives an observation $o_t$, takes action $a_t$, and moves to state $z_{t+1}$, receiving a reward $r_{t+1}$. We present our model with continuous-valued states, actions, and observations. A straightforward extension can be made to discrete states. The objective is to find a policy that maximizes the expected sum of future rewards $\sum_{t=0}^{\infty} \gamma^t r_t$, where $\gamma$ is a discount factor. 2.1 MODEL COMPONENTS. As shown in Figure 1b, our model consists of three components: a recurrent state space model (RSSM) (Hafner et al.
, 2019 ) which encodes the learned dynamics , a behavior model based on Soft-Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) which is used to control the robot , and an observation encoder . Observation encoder : This component is a convolutional network f which takes an observation as input and outputs its encoding xt = f ( ot ) . LayerNorm ( Ba et al. , 2016 ) is applied in the last layer of the encoder . Recurrent State Space Model ( RSSM ) : This component consists of the following modules , Future predictor p ( ẑt|zt−1 , at−1 ) , Observation representation decoder yt = g ( ẑt ) , Correction Model q ( zt|ẑt , xt ) , Inverse Dynamics model ât−1 = a ( zt , zt−1 ) , Reward predictor r̂t = r ( zt ) . Each module is parameterized as a fully-connected MLP , except the future predictor which uses GRU cells ( Cho et al. , 2014 ) . The future predictor outputs a prior distribution ẑt over the next state , given the previous state and action . This is decoded into yt which corresponds to an encoding of the observation that the agent expects to see next . The correction model q is given access to the current observation encoding xt , along with the prior ẑt and outputs the posterior distribution zt , i.e. , it corrects the prior belief given the new observation . The posterior state zt is used to predict the reward , and also feeds into the actor and critic models that control the agent ’ s behavior . The inverse dynamics model predicts the action that took the agent from state zt−1 to zt . The RSSM is similar to the one used in Dreamer and SLAC except that it includes an inverse dynamics model and that the observation representation is predicted from the prior latent state instead of the posterior latent state . These differences lead to slightly better results and a different probabilistic interpretation of the underlying model which we discuss in Section 2.2 . 
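The data flow through the RSSM modules listed above can be sketched as a single time-step of computation. This is a hypothetical, deterministic sketch with random stand-in weights, not the paper's implementation; in the actual model the prior and posterior are distributions with deterministic and stochastic parts, and all dimensions below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    """Stand-in for a learned fully-connected module (random fixed weights here)."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda v: np.tanh(v @ W)

STATE, ACT, OBS = 8, 2, 16
future_predictor = mlp(STATE + ACT, STATE)   # prior  p(ẑ_t | z_{t-1}, a_{t-1})
decoder          = mlp(STATE, OBS)           # y_t = g(ẑ_t)
correction       = mlp(STATE + OBS, STATE)   # posterior q(z_t | ẑ_t, x_t)
reward_head      = mlp(STATE, 1)             # r̂_t = r(z_t)
inv_dynamics     = mlp(2 * STATE, ACT)       # â_{t-1} = a(z_t, z_{t-1})

def rssm_step(z_prev, a_prev, x_t):
    """One CoRe-style RSSM step (distributions replaced by point estimates)."""
    z_hat = future_predictor(np.concatenate([z_prev, a_prev]))  # prior state
    y_t = decoder(z_hat)          # predicted observation encoding (contrastive target)
    z_t = correction(np.concatenate([z_hat, x_t]))  # posterior, sees real encoding x_t
    r_hat = reward_head(z_t)
    a_hat = inv_dynamics(np.concatenate([z_t, z_prev]))
    return z_t, y_t, r_hat, a_hat
```

Note that, as the text emphasizes, the contrastive target `y_t` is decoded from the prior state `z_hat` (a genuine future prediction), while the reward and the actor/critic consume the posterior `z_t`.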
Similar to Dreamer, the latent state $z_t$ consists of a deterministic and a stochastic component (more details in Appendix A). Behavior Model: This component consists of an actor $\pi_\phi(a_t|z_t)$ and two critic networks $Q_{\theta_i}(z_t, a)$, $i \in \{1, 2\}$, which are standard for training with the SAC algorithm (Haarnoja et al., 2018). 2.2 REPRESENTATION LEARNING WITH CORE. Our goal is to represent the agent's latent state by extracting task-relevant information from the observations, while ignoring the distractions. We formulate this representation learning problem as dynamics-regularized mutual information maximization between the observations $o_t$ and the model's prediction of the observation's encoding $y_t$. Mutual information maximization (Hjelm et al., 2019) has been used extensively for representation learning. In our case, it can be approximated using a contrastive learning objective defined over a mini-batch of sequences, in a way similar to Contrastive Predictive Coding (CPC) (van den Oord et al., 2018). At time-step $t$ in example $i$ in the mini-batch, let $x_{i,t}$ and $y_{i,t}$ denote the real observation's encoding and the predicted observation encoding, respectively. The loss function is $\mathcal{L}_c = -\sum_{i,t} \log \frac{\exp(\lambda x_{i,t}^{\top} y_{i,t})}{\sum_{i',t'} \exp(\lambda x_{i,t}^{\top} y_{i',t'})} - \sum_{i,t} \log \frac{\exp(\lambda x_{i,t}^{\top} y_{i,t})}{\sum_{i',t'} \exp(\lambda x_{i',t'}^{\top} y_{i,t})}$, (1) where $\lambda$ is a learned inverse temperature parameter and $i'$ and $t'$ index over all sequences and time-steps in the mini-batch, respectively. Intuitively, the loss function makes the predicted observation encoding match the corresponding real observation's encoding, but not match other real observation encodings, and vice versa. In addition, the predicted and corrected latent state distributions should match, which can be formulated as minimizing a KL-divergence term $\mathcal{L}_{KL} = \mathrm{KL}(q(z_t|\hat{z}_t, x_t) \,\|\, p(\hat{z}_t|z_{t-1}, a_{t-1}))$.
(2) An additional source of training signal is the reward prediction loss $(r_t - \hat{r}_t)^2$. Furthermore, modeling the inverse dynamics, i.e. predicting $a_t$ from $z_t$ and $z_{t+1}$, is a different way of representing the transition dynamics. In principle this does not provide any additional information, because we already model the forward dynamics; however, we found that it helps in making faster training progress. We incorporate this by adding an action prediction loss. The combined loss is $J_M(\Theta) = \mathcal{L}_c + \beta \mathcal{L}_{KL} + \alpha_r (r_t - \hat{r}_t)^2 + \alpha_a \|a_t - \hat{a}_t\|^2$, where $\beta$, $\alpha_r$, $\alpha_a$ are tunable hyperparameters and $\Theta$ is the combined set of model parameters, which includes the parameters of the observation encoder, the RSSM, and the inverse temperature $\lambda$. Relationship with Action-Conditioned Sequential VAEs: The above objective function bears resemblance to the ELBO of a conditional sequential VAE, which is the underlying probabilistic model in Dreamer and SLAC. The objective function there is the sum of an observation reconstruction term $\|o_t - \hat{o}_t\|^2$ and $\mathcal{L}_{KL}$ scaled by $\beta$, corresponding to a $\beta$-VAE formulation (Higgins et al., 2017). That formulation relies on decoding the posterior latent state to compute $\hat{o}_t$. Since the posterior latent state distribution $q(z_t|\hat{z}_t, x_t)$ is conditioned on the true observation's encoding $x_t$, observation reconstruction is an auto-encoding task; only the KL term corresponds to the future prediction task. In our case, on the other hand, the prior latent state is decoded into a representation of the observation, which already constitutes a future prediction task, and the KL term only provides additional regularization. This makes the model less reliant on careful tuning of $\beta$ (empirically validated in Appendix C.3). In fact, setting it to zero also leads to reasonable performance in our experiments, though not the best. | The paper proposes a recurrent state-space model that learns robust representations for robotic control.
The proposed method builds on top of prior works on world-models which learn a latent dynamics model of the agent which can be used for planning and action selection. Different from prior work such as Dreamer and SLAC which rely on pixel-based observation reconstruction, this paper highlights that a simpler contrastive loss for the next-observation prediction achieves better results **if** a recurrent state-space model is used for the latent space. Results are presented on the Distracting Control Suite benchmark and show strong improvements over prior approaches. | SP:e598aee34b4adb898d8495153982746795b7be62 |
Adam is no better than normalized SGD: Dissecting how adaptivity improves GAN performance | 1 INTRODUCTION. It is commonly accepted that adaptive algorithms are required to train modern neural network architectures in various deep learning tasks. This includes minimization problems that arise in natural language processing (Vaswani et al., 2017) and fMRI (Zbontar et al., 2018), as well as min-max problems such as generative adversarial network (GAN) training (Goodfellow et al., 2014). Indeed, it has been empirically observed that Adam (Kingma & Ba, 2014) yields solutions with better generalization than stochastic gradient descent (SGD) in these problems (Choi et al., 2019). Several works have attempted to explain this phenomenon in the minimization case. Common explanations are that adaptive methods train faster (Zhou et al., 2018), escape very flat saddle-point-like plateaus faster (Orvieto et al., 2021), or deal better with heavy-tailed stochastic gradients (Zhang et al., 2019b). However, much less is known regarding min-max problems such as GANs. In this paper, we investigate why GANs trained with adaptive methods outperform those trained using stochastic gradient descent ascent with momentum (SGDA). Some prior works attribute this outperformance to the superior convergence speed of adaptive methods. For instance, Liu et al. (2019) show that a variant of Optimistic Gradient Descent (Daskalakis et al., 2017) converges faster than SGDA for a class of non-convex non-concave min-max problems. However, contrary to the minimization setting, convergence to a stationary point is not guaranteed, and not even a requirement to ensure satisfactory GAN performance. Indeed, Mescheder et al. (2018) empirically show that popular architectures such as Wasserstein GANs (WGANs) (Arjovsky et al., 2017) do not always converge, and yet produce realistic images. We support this observation through the following experiment.
We train a DCGAN (Radford et al., 2015) using Adam, the most popular adaptive method, and denote the generator (G) and discriminator (D) step-sizes by $\eta_G$ and $\eta_D$, respectively. Note that D is usually trained faster than G, i.e. $\eta_D \geq \eta_G$. Figure 1(a) displays the GAN convergence, measured by the ratio of gradient norms, and the GAN's performance, measured in FID score (Heusel et al., 2017). We observe that when $\eta_D/\eta_G$ is close to 1, the algorithm does not converge and yet the model produces high-quality solutions. On the other hand, when $\eta_D/\eta_G \gg 1$, the model converges to an equilibrium; a similar statement has been proved by Jin et al. (2020) and Fiez & Ratliff (2020) in the case of SGDA. However, the GAN produces low-quality solutions at this equilibrium. Thus, simply comparing the convergence speed of adaptive methods and SGDA cannot explain the GAN performance obtained with adaptive methods. This observation motivates the central question in this paper: What factors explain that Adam produces better quality solutions than SGDA when training GANs? To address this question, we dissect Adam following the approach of Agarwal et al. (2020). They frame a generic optimizer's update as $W^{(t+1)} = W^{(t)} - \eta\, a^{(t)} G^{(t)}$, where $W^{(t)} \in \mathbb{R}^d$ is the iterate, $G^{(t)} \in \mathbb{R}^d$ with $\|G^{(t)}\|_2 = 1$ is the optimizer's direction, and $a^{(t)} \geq 0$ is the optimizer's magnitude. Therefore, a first step in our paper is to understand whether Adam outperforms SGDA mainly due to its direction or to its magnitude. As detailed in Section 2, we train a GAN using i) AdaLR, an algorithm that updates in the direction of SGDA but with the magnitude of Adam, and ii) AdaDir, which uses the direction of Adam but the magnitude of SGDA. We empirically show that AdaLR significantly outperforms not only AdaDir and SGDA, but Adam itself.
This observation encourages us to conclude that: Adam produces higher quality solutions relative to SGDA in GANs mainly due to its adaptive magnitude and not to its adaptive direction. In Section 2, we empirically analyze the adaptive magnitude of AdaLR and observe that it stays approximately constant throughout training. This observation eventually encourages the study of AdaLR with a constant step-size. Such an algorithm actually corresponds to normalized SGDA (nSGDA). Compared to SGDA, nSGDA has the same direction but differs in magnitude, since we divide the gradient by its norm. Intuitively, this normalization forces D and G to be updated by vectors with constant magnitudes no matter how different the norms of D's and G's gradients are. Motivated by the aforementioned observations, this paper studies the performance of GANs trained with nSGDA. We believe that this is a first step to formally understand the role of adaptive methods in GANs. Our contributions are divided as follows: – In Section 3, we experimentally confirm that nSGDA consistently competes with Adam and outperforms SGDA when using different GAN architectures on a wide range of datasets. – In Section 4, we provide a theoretical explanation of why GANs trained with nSGDA outperform those trained with SGDA. More precisely, we devise a data generation problem where the target distribution D is made of multiple modes. The model trained with nSGDA provably recovers all the modes in the target distribution, while the SGDA-trained one fails to do so under any step-size configuration: we prove that even when SGDA converges to a locally optimal min-max equilibrium, the model still suffers from mode collapse and fails to recover the modes separately. The key insight of our theoretical analysis is that no matter how the step-sizes are prescribed, D and G necessarily update at very different speeds when we use SGDA.
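A single nSGDA step, as described above, can be sketched as follows. This is a minimal illustration assuming per-player gradient oracles (the function names are made up), not the authors' training code; the small epsilon guarding against division by zero is a common stabilizing assumption.

```python
import numpy as np

def nsgda_step(w_d, w_g, grad_d, grad_g, eta_d, eta_g, eps=1e-8):
    """One normalized SGDA step: each player's gradient is divided by its
    own norm, so D and G move by fixed magnitudes eta_d and eta_g regardless
    of how different the raw gradient norms are."""
    g_d = grad_d(w_d, w_g)
    g_g = grad_g(w_d, w_g)
    w_d = w_d + eta_d * g_d / (np.linalg.norm(g_d) + eps)  # gradient ascent for D
    w_g = w_g - eta_g * g_g / (np.linalg.norm(g_g) + eps)  # gradient descent for G
    return w_d, w_g
```

Plain SGDA would use `w_d + eta_d * g_d` directly, so when D's gradient norm dwarfs G's (or vice versa), the two players update at very different speeds; the normalization above removes that imbalance.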
Therefore, i) either D updates its weights too fast and thus learns a weighted average of the modes of D, which makes G learn this weighted average of modes, or ii) D does not update its weights fast enough and thus G aligns its weights with those of D, which forces D to converge to a locally optimal min-max equilibrium that classifies any instance as "fake". On the other hand, by normalizing the gradients as done in nSGDA, we force D and G to update at the same speed throughout training. Thus, whenever D learns a mode of the distribution, G learns it right after, which makes both of them learn all the modes of the distribution separately. Our paper advocates for the use of balanced updates in GAN training, i.e. the ratio of D vs. G updates should remain close to constant. To our knowledge, we are the first to theoretically show the importance of these balanced updates. This insight contrasts with the related work that analyzes GANs, and more generally zero-sum differentiable games, in the regime where D is updated much faster than G, i.e. $\eta_D/\eta_G \gg 1$ (Fiez & Ratliff, 2020; Jin et al., 2020; Fiez et al., 2020). RELATED WORK Adaptive methods in games optimization. Several works designed adaptive algorithms and analyzed their convergence to show their benefits relative to SGDA. For variational inequality problems, Gasnikov et al. (2019); Antonakopoulos et al. (2019); Bach & Levy (2019); Antonakopoulos et al. (2020) propose adaptive algorithms that reach optimal convergence rates under regularity assumptions. For a class of non-convex non-concave min-max problems, Liu et al. (2019); Barazandeh et al. (2021) design algorithms that converge faster than SGDA. On the other hand, Heusel et al. (2017) show that Adam locally converges to a Nash equilibrium in the regime where the step-size of the discriminator is much larger than that of the generator.
Our work differs from these papers, as we analyze Adam and do not focus on convergence properties but rather on the fit of the trained model to the true (and not empirical) data distribution. Besides, contrary to some of the aforementioned papers, our work is not in the two-time-scale learning-rate regime, which does not correspond to what is mostly done in practice. Importance of balanced updates in GANs. The importance of balanced updates for GANs has been noticed in the literature. The most popular GAN architectures (Radford et al., 2015; Arjovsky et al., 2017; Brock et al., 2018) set the step-sizes such that $\eta_D/\eta_G$ is constant. On the other hand, Berthelot et al. (2017) introduced a control variable to ensure that G and D update at the same speed. In this work, we do not modify the GAN objective loss to enforce balancedness. Instead, we empirically and theoretically investigate how Adam and nSGDA enforce balanced updates. Statistical results in GANs. Early works investigate whether GANs memorize the training data or actually learn the distribution (Arora et al., 2017; 2018; Dumoulin et al., 2016). Zhang et al. (2017); Bai et al. (2018) then show that for specific GANs, the model learns some distributions with non-exponential sample complexity (Liang, 2017; Feizi et al., 2017). Recently, Li & Dou (2020); Allen-Zhu & Li (2021) further characterized the distributions learned by the generator. On the other hand, some works attempted to explain GAN performance through the optimization lens. Lei et al. (2020); Balaji et al. (2021) show that GAN models trained with SGDA converge to a global saddle point when the generator is a one-layer neural network and the discriminator is a specific quadratic/linear function.
Our contribution significantly differs from these two works as i) we construct a setting where SGDA converges to a locally optimal min-max equilibrium and yet suffers from mode collapse, while conversely, nSGDA does not necessarily converge and yet recovers the true distribution, and ii) our setting is more challenging since we need at least a degree-3 discriminator to learn the distribution; see Section 4 for a justification. Normalized gradient descent. Introduced by Nesterov (1984), normalized gradient descent has been widely used in the minimization setting. Indeed, it has been observed that normalizing the gradient alleviates the 'slow crawling' problem of gradient descent and prevents the iterates from getting stuck in flat regions, such as spurious local minima or saddle points (Hazan et al., 2015; Levy, 2016; Murray et al., 2019). Normalized gradient descent and its variants outperform their non-normalized counterparts in multi-agent coordination (Cortés, 2006) and deep learning tasks (You et al., 2017; 2019; Cutkosky & Mehta, 2020; Liu et al., 2021). Our work instead considers the min-max setting and shows that nSGDA performs better than SGDA because it forces the discriminator and generator to update at the same rate. | This paper studies how adaptive methods help performance in GANs. The study empirically finds that SGDA with the same update vector norm as Adam reaches similarly good performance. Based on this observation, normalized SGDA (nSGDA) is proposed as a simpler alternative to Adam. nSGDA is evaluated on several datasets and the results demonstrate that nSGDA is more stable than SGDA. | SP:465903c777ca2961502e3d8726e1cfa9b725d1a3
Adam is no better than normalized SGD: Dissecting how adaptivity improves GAN performance | 1 INTRODUCTION. It is commonly accepted that adaptive algorithms are required to train modern neural network architectures in various deep learning tasks. This includes minimization problems that arise in natural language processing (Vaswani et al., 2017) and fMRI (Zbontar et al., 2018), as well as min-max problems such as generative adversarial network (GAN) training (Goodfellow et al., 2014). Indeed, it has been empirically observed that Adam (Kingma & Ba, 2014) yields solutions with better generalization than stochastic gradient descent (SGD) in these problems (Choi et al., 2019). Several works have attempted to explain this phenomenon in the minimization case. Common explanations are that adaptive methods train faster (Zhou et al., 2018), escape very flat saddle-point-like plateaus faster (Orvieto et al., 2021), or deal better with heavy-tailed stochastic gradients (Zhang et al., 2019b). However, much less is known regarding min-max problems such as GANs. In this paper, we investigate why GANs trained with adaptive methods outperform those trained using stochastic gradient descent ascent with momentum (SGDA). Some prior works attribute this outperformance to the superior convergence speed of adaptive methods. For instance, Liu et al. (2019) show that a variant of Optimistic Gradient Descent (Daskalakis et al., 2017) converges faster than SGDA for a class of non-convex non-concave min-max problems. However, contrary to the minimization setting, convergence to a stationary point is not guaranteed, and not even a requirement to ensure satisfactory GAN performance. Indeed, Mescheder et al. (2018) empirically show that popular architectures such as Wasserstein GANs (WGANs) (Arjovsky et al., 2017) do not always converge, and yet produce realistic images. We support this observation through the following experiment.
We train a DCGAN (Radford et al., 2015) using Adam, the most popular adaptive method, and denote the generator (G) and discriminator (D) step-sizes by $\eta_G$ and $\eta_D$, respectively. Note that D is usually trained faster than G, i.e. $\eta_D \geq \eta_G$. Figure 1(a) displays the GAN convergence, measured by the ratio of gradient norms, and the GAN's performance, measured in FID score (Heusel et al., 2017). We observe that when $\eta_D/\eta_G$ is close to 1, the algorithm does not converge and yet the model produces high-quality solutions. On the other hand, when $\eta_D/\eta_G \gg 1$, the model converges to an equilibrium; a similar statement has been proved by Jin et al. (2020) and Fiez & Ratliff (2020) in the case of SGDA. However, the GAN produces low-quality solutions at this equilibrium. Thus, simply comparing the convergence speed of adaptive methods and SGDA cannot explain the GAN performance obtained with adaptive methods. This observation motivates the central question in this paper: What factors explain that Adam produces better quality solutions than SGDA when training GANs? To address this question, we dissect Adam following the approach of Agarwal et al. (2020). They frame a generic optimizer's update as $W^{(t+1)} = W^{(t)} - \eta\, a^{(t)} G^{(t)}$, where $W^{(t)} \in \mathbb{R}^d$ is the iterate, $G^{(t)} \in \mathbb{R}^d$ with $\|G^{(t)}\|_2 = 1$ is the optimizer's direction, and $a^{(t)} \geq 0$ is the optimizer's magnitude. Therefore, a first step in our paper is to understand whether Adam outperforms SGDA mainly due to its direction or to its magnitude. As detailed in Section 2, we train a GAN using i) AdaLR, an algorithm that updates in the direction of SGDA but with the magnitude of Adam, and ii) AdaDir, which uses the direction of Adam but the magnitude of SGDA. We empirically show that AdaLR significantly outperforms not only AdaDir and SGDA, but Adam itself.
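The magnitude/direction decomposition and the AdaLR/AdaDir hybrids described above can be sketched as follows. This is a hypothetical illustration: it assumes the raw per-step Adam and SGDA updates are given as vectors, and the helper names are invented.

```python
import numpy as np

def decompose(update, eps=1e-12):
    """Split a raw optimizer update u into magnitude a = ||u||_2 and
    unit direction G = u / ||u||_2, following W(t+1) = W(t) - eta * a * G."""
    a = np.linalg.norm(update)
    return a, update / (a + eps)

def hybrid_step(w, adam_update, sgd_update, eta, mode="AdaLR"):
    """AdaLR: Adam's magnitude paired with SGDA's direction.
    AdaDir: SGDA's magnitude paired with Adam's direction."""
    a_adam, g_adam = decompose(adam_update)
    a_sgd, g_sgd = decompose(sgd_update)
    if mode == "AdaLR":
        return w - eta * a_adam * g_sgd
    return w - eta * a_sgd * g_adam
```

Running both hybrids on the same pair of raw updates isolates whether the adaptive magnitude or the adaptive direction drives the behavior, which is exactly the comparison the paper makes.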
This observation encourages us to conclude that: Adam produces higher quality solutions relative to SGDA in GANs mainly due to its adaptive magnitude and not to its adaptive direction. In Section 2, we empirically analyze the adaptive magnitude of AdaLR and observe that it stays approximately constant throughout training. This observation eventually encourages the study of AdaLR with a constant step-size. Such an algorithm actually corresponds to normalized SGDA (nSGDA). Compared to SGDA, nSGDA has the same direction but differs in magnitude, since we divide the gradient by its norm. Intuitively, this normalization forces D and G to be updated by vectors with constant magnitudes no matter how different the norms of D's and G's gradients are. Motivated by the aforementioned observations, this paper studies the performance of GANs trained with nSGDA. We believe that this is a first step to formally understand the role of adaptive methods in GANs. Our contributions are divided as follows: – In Section 3, we experimentally confirm that nSGDA consistently competes with Adam and outperforms SGDA when using different GAN architectures on a wide range of datasets. – In Section 4, we provide a theoretical explanation of why GANs trained with nSGDA outperform those trained with SGDA. More precisely, we devise a data generation problem where the target distribution D is made of multiple modes. The model trained with nSGDA provably recovers all the modes in the target distribution, while the SGDA-trained one fails to do so under any step-size configuration: we prove that even when SGDA converges to a locally optimal min-max equilibrium, the model still suffers from mode collapse and fails to recover the modes separately. The key insight of our theoretical analysis is that no matter how the step-sizes are prescribed, D and G necessarily update at very different speeds when we use SGDA.
Therefore, i) either D updates its weights too fast and thus learns a weighted average of the modes of D, which makes G learn this weighted average of modes, or ii) D does not update its weights fast enough and thus G aligns its weights with those of D, which forces D to converge to a locally optimal min-max equilibrium that classifies any instance as "fake". On the other hand, by normalizing the gradients as done in nSGDA, we force D and G to update at the same speed throughout training. Thus, whenever D learns a mode of the distribution, G learns it right after, which makes both of them learn all the modes of the distribution separately. Our paper advocates for the use of balanced updates in GAN training, i.e. the ratio of D vs. G updates should remain close to constant. To our knowledge, we are the first to theoretically show the importance of these balanced updates. This insight contrasts with the related work that analyzes GANs, and more generally zero-sum differentiable games, in the regime where D is updated much faster than G, i.e. $\eta_D/\eta_G \gg 1$ (Fiez & Ratliff, 2020; Jin et al., 2020; Fiez et al., 2020). RELATED WORK Adaptive methods in games optimization. Several works designed adaptive algorithms and analyzed their convergence to show their benefits relative to SGDA. For variational inequality problems, Gasnikov et al. (2019); Antonakopoulos et al. (2019); Bach & Levy (2019); Antonakopoulos et al. (2020) propose adaptive algorithms that reach optimal convergence rates under regularity assumptions. For a class of non-convex non-concave min-max problems, Liu et al. (2019); Barazandeh et al. (2021) design algorithms that converge faster than SGDA. On the other hand, Heusel et al. (2017) show that Adam locally converges to a Nash equilibrium in the regime where the step-size of the discriminator is much larger than that of the generator.
Our work differs from these papers, as we analyze Adam and do not focus on convergence properties but rather on the fit of the trained model to the true (and not empirical) data distribution. Besides, contrary to some of the aforementioned papers, our work is not in the two-time-scale learning-rate regime, which does not correspond to what is mostly done in practice. Importance of balanced updates in GANs. The importance of balanced updates for GANs has been noticed in the literature. The most popular GAN architectures (Radford et al., 2015; Arjovsky et al., 2017; Brock et al., 2018) set the step-sizes such that $\eta_D/\eta_G$ is constant. On the other hand, Berthelot et al. (2017) introduced a control variable to ensure that G and D update at the same speed. In this work, we do not modify the GAN objective loss to enforce balancedness. Instead, we empirically and theoretically investigate how Adam and nSGDA enforce balanced updates. Statistical results in GANs. Early works investigate whether GANs memorize the training data or actually learn the distribution (Arora et al., 2017; 2018; Dumoulin et al., 2016). Zhang et al. (2017); Bai et al. (2018) then show that for specific GANs, the model learns some distributions with non-exponential sample complexity (Liang, 2017; Feizi et al., 2017). Recently, Li & Dou (2020); Allen-Zhu & Li (2021) further characterized the distributions learned by the generator. On the other hand, some works attempted to explain GAN performance through the optimization lens. Lei et al. (2020); Balaji et al. (2021) show that GAN models trained with SGDA converge to a global saddle point when the generator is a one-layer neural network and the discriminator is a specific quadratic/linear function.
Our contribution significantly differs from these two works as i ) we construct a setting where SGDA converges to a locally optimal min-max equilibrium and yet suffers from mode collapse , while conversely , nSGDA does not necessarily converge and yet recovers the true distribution , and ii ) our setting is more challenging since we need at least a degree-3 discriminator to learn the distribution – see Section 4 for a justification . Normalized gradient descent . Introduced by Nesterov ( 1984 ) , normalized gradient descent has been widely used in the minimization setting . Indeed , it has been observed that normalizing the gradient alleviates the ' slow crawling ' problem of gradient descent and prevents the iterates from getting stuck in flat regions – such as spurious local minima or saddle points – ( Hazan et al. , 2015 ; Levy , 2016 ; Murray et al. , 2019 ) . Normalized gradient descent and its variants outperform their non-normalized counterparts in multi-agent coordination ( Cortés , 2006 ) and deep learning tasks ( You et al. , 2017 ; 2019 ; Cutkosky & Mehta , 2020 ; Liu et al. , 2021 ) . Our work instead considers the min-max setting and shows that nSGDA performs better than SGDA as it forces the discriminator and generator to update at the same rate . | In this manuscript the authors investigate the effect of normalized and unnormalized gradient updates on the convergence of GANs. In a first experiment a DCGAN model is trained with Adam and it is shown that the best models evaluated with the FID have balanced learning rates but have not converged, while models trained with extremely unbalanced learning rates converge while having high FIDs, i.e., they did not learn the training distribution. In the second experiment it is shown that an adaptive gradient magnitude helps to train a good model compared to an adaptive gradient direction.
In a next step, using the WGAN-GP model on CIFAR-10, LSUN Churches, STL-10, and CelebA-HQ datasets, the optimizers Adam, normalized gradient updates, and unnormalized SGDA were compared. The normalized optimizers, including Adam, outperformed SGDA. It was also shown that SGDA is more sensitive to the batch size. In a final experiment on a small toy dataset it was shown that SGDA suffers from mode collapse while normalized gradient updates guide the generator to learn the modes of the dataset. | SP:465903c777ca2961502e3d8726e1cfa9b725d1a3 |
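The magnitude-versus-direction dissection mentioned in the summary above relies on decomposing any optimizer update as W(t+1) = W(t) − η·a(t)·G(t) with ‖G(t)‖₂ = 1, where a(t) is the magnitude and G(t) the unit direction. The sketch below is only illustrative: `decompose` and `hybrid_step` are hypothetical names, and in practice the SGDA and Adam updates would come from the respective optimizers rather than toy vectors.

```python
import numpy as np

def decompose(update):
    """Split a raw optimizer update into a unit direction G(t) and a
    non-negative magnitude a(t), so that update = a(t) * G(t)."""
    a = np.linalg.norm(update)
    if a == 0.0:
        return np.zeros_like(update), 0.0
    return update / a, a

def hybrid_step(w, sgda_update, adam_update, eta, adalr=True):
    """adalr=True  -> AdaLR : direction of SGDA, magnitude of Adam.
       adalr=False -> AdaDir: direction of Adam, magnitude of SGDA."""
    g_sgda, a_sgda = decompose(sgda_update)
    g_adam, a_adam = decompose(adam_update)
    if adalr:
        return w - eta * a_adam * g_sgda
    return w - eta * a_sgda * g_adam
```

For example, with an SGDA update of `[3, 0]` and an Adam update of `[0, 2]`, AdaLR moves along `[1, 0]` with step length `2 * eta`, while AdaDir moves along `[0, 1]` with step length `3 * eta`.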
Adam is no better than normalized SGD: Dissecting how adaptivity improves GAN performance | 1 INTRODUCTION . It is commonly accepted that adaptive algorithms are required to train modern neural network architectures in various deep learning tasks . This includes minimization problems that arise in natural language processing ( Vaswani et al. , 2017 ) and fMRI ( Zbontar et al. , 2018 ) or min-max problems such as generative adversarial network ( GAN ) training ( Goodfellow et al. , 2014 ) . Indeed , it has been empirically observed that Adam ( Kingma & Ba , 2014 ) yields a solution with better generalization than stochastic gradient descent ( SGD ) in these problems ( Choi et al. , 2019 ) . Several works have attempted to explain this phenomenon in the minimization case . Common explanations are that adaptive methods train faster ( Zhou et al. , 2018 ) , escape very flat , saddle-point-like plateaus faster ( Orvieto et al. , 2021 ) , or deal better with heavy-tailed stochastic gradients ( Zhang et al. , 2019b ) . However , much less is known regarding min-max problems such as GANs . In this paper , we investigate why GANs trained with adaptive methods outperform those trained using stochastic gradient descent ascent with momentum ( SGDA ) . Some prior works attribute this outperformance to the superior convergence speed of adaptive methods . For instance , Liu et al . ( 2019 ) show that a variant of Optimistic Gradient Descent ( Daskalakis et al. , 2017 ) converges faster than SGDA for a class of non-convex non-concave min-max problems . However , contrary to the minimization setting , convergence to a stationary point is not guaranteed and not even a requirement to ensure satisfactory GAN performance . Indeed , Mescheder et al . ( 2018 ) empirically show that popular architectures such as Wasserstein GANs ( WGANs ) ( Arjovsky et al. , 2017 ) do not always converge , and yet produce realistic images . We support this observation through the following experiment .
We train a DCGAN ( Radford et al. , 2015 ) using Adam – the most popular adaptive method – and set the generator ( G ) and discriminator ( D ) step-sizes as ηG and ηD respectively . Note that D is usually trained faster than G , i.e. , ηD ≥ ηG . Figure 1 ( a ) displays the GAN convergence measured by the ratio of gradient norms , and the GAN's performance measured in FID score ( Heusel et al. , 2017 ) . We observe that when ηD/ηG is close to 1 , the algorithm does not converge and yet , the model produces high-quality solutions . On the other hand , when ηD/ηG ≫ 1 , the model converges to an equilibrium – a similar statement has been proved by Jin et al . ( 2020 ) and Fiez & Ratliff ( 2020 ) in the case of SGDA . However , the GAN produces low-quality solutions at this equilibrium . Thus , simply comparing the convergence speed of adaptive methods and SGDA cannot explain the GAN's performance obtained with adaptive methods . This observation motivates the central question in this paper : What factors explain that Adam produces better quality solutions than SGDA when training GANs ? To address this question , we dissect Adam following the approach of Agarwal et al . ( 2020 ) . They frame a generic optimizer's update as W ( t+1 ) = W ( t ) − η a ( t ) G ( t ) , where W ( t ) ∈ Rd is the iterate , G ( t ) ∈ Rd such that ‖G ( t ) ‖2 = 1 is the optimizer's direction and a ( t ) ≥ 0 is the optimizer's magnitude . Therefore , a first step in our paper is to understand whether Adam outperforms SGDA mainly due to its direction or to its magnitude . As detailed in Section 2 , we train a GAN using i ) AdaLR , an algorithm that updates in the direction of SGDA but with the magnitude of Adam , and ii ) AdaDir , which uses the direction of Adam but the magnitude of SGDA . We empirically show that AdaLR significantly outperforms not only AdaDir and SGDA , but also Adam itself .
This observation encourages us to conclude that : Adam produces higher quality solutions relative to SGDA in GANs mainly due to its adaptive magnitude and not to its adaptive direction . In Section 2 , we empirically analyze the adaptive magnitude of AdaLR and observe that it stays approximately constant throughout training . This observation eventually encourages the study of AdaLR with a constant step-size . Such an algorithm actually corresponds to normalized SGDA ( nSGDA ) . Compared to SGDA , nSGDA has the same direction but differs in magnitude since we divide the gradient by its norm . Intuitively , this normalization forces D and G to be updated by vectors with constant magnitudes no matter how different the norms of D's and G's gradients are . Motivated by the aforementioned observations , this paper studies the performance of GANs trained with nSGDA . We believe that this is a first step to formally understand the role of adaptive methods in GANs . Our contributions are divided as follows : – In Section 3 , we experimentally confirm that nSGDA consistently competes with Adam and outperforms SGDA when using different GAN architectures on a wide range of datasets . – In Section 4 , we provide a theoretical explanation of why GANs trained with nSGDA outperform those trained with SGDA . More precisely , we devise a data generation problem where the target distribution D is made of multiple modes . The model trained with nSGDA provably recovers all the modes in the target distribution while the one trained with SGDA fails to do so under any step-size configuration : we prove that even when SGDA converges to a locally optimal min-max equilibrium , the model still suffers from mode collapse and fails to recover the modes separately . The key insight of our theoretical analysis is that no matter how the step-sizes are prescribed , D and G necessarily update at very different speeds when we use SGDA .
Therefore , either i ) D updates its weights too fast and thus learns a weighted average of the modes of the target distribution , which makes G learn this weighted average of modes , or ii ) D does not update its weights fast enough and thus G aligns its weights with those of D. This forces D to converge to a locally optimal min-max equilibrium that classifies any instance as "fake" . On the other hand , by normalizing the gradients as done in nSGDA , we force D and G to update at the same speed throughout training . Thus , whenever D learns a mode of the distribution , G learns it right after , which makes both of them learn all the modes of the distribution separately . Our paper advocates for the use of balanced updates in GAN training , i.e. , the ratio of D vs. G updates should remain close to constant . To our knowledge , we are the first to theoretically show the importance of these balanced updates . This insight contrasts with the related work that analyzes GANs , and more generally zero-sum differentiable games , in the regime where D is updated much faster than G , i.e. , ηD/ηG ≫ 1 ( Fiez & Ratliff , 2020 ; Jin et al. , 2020 ; Fiez et al. , 2020 ) . RELATED WORK Adaptive methods in games optimization . Several works designed adaptive algorithms and analyzed their convergence to show their benefits relative to SGDA . For variational inequality problems , Gasnikov et al . ( 2019 ) ; Antonakopoulos et al . ( 2019 ) ; Bach & Levy ( 2019 ) ; Antonakopoulos et al . ( 2020 ) propose adaptive algorithms that reach optimal convergence rates under regularity assumptions . For a class of non-convex non-concave min-max problems , Liu et al . ( 2019 ) ; Barazandeh et al . ( 2021 ) design algorithms that converge faster than SGDA . On the other hand , Heusel et al . ( 2017 ) show that Adam locally converges to a Nash equilibrium in the regime where the step-size of the discriminator is much larger than that of the generator .
Our work differs from these papers as we analyze Adam and do not focus on convergence properties but rather on the fit of the trained model to the true ( and not empirical ) data distribution . Besides , contrary to some of the aforementioned papers , our work is not in the two-time-scale learning-rates regime , which does not correspond to what is mostly done in practice . Importance of balanced updates in GANs . The importance of balanced updates for GANs has been noticed in the literature . The most popular GAN architectures ( Radford et al. , 2015 ; Arjovsky et al. , 2017 ; Brock et al. , 2018 ) set the step-sizes such that ηD/ηG is constant . On the other hand , Berthelot et al . ( 2017 ) introduced a control variable to ensure that G and D update at the same speed . In this work , we do not modify the GAN objective loss to enforce balancedness . Instead , we empirically and theoretically investigate how Adam and nSGDA enforce balanced updates . Statistical results in GANs . Early works investigate whether GANs memorize the training data or actually learn the distribution ( Arora et al. , 2017 ; 2018 ; Dumoulin et al. , 2016 ) . Zhang et al . ( 2017 ) ; Bai et al . ( 2018 ) then show that for specific GANs , the model learns some distributions with non-exponential sample complexity ( Liang , 2017 ; Feizi et al. , 2017 ) . Recently , Li & Dou ( 2020 ) ; Allen-Zhu & Li ( 2021 ) further characterized the distributions learned by the generator . On the other hand , some works attempted to explain GAN performance through the optimization lens . Lei et al . ( 2020 ) ; Balaji et al . ( 2021 ) show that GAN models trained with SGDA converge to a global saddle point when the generator is a one-layer neural network and the discriminator is a specific quadratic/linear function .
Our contribution significantly differs from these two works as i ) we construct a setting where SGDA converges to a locally optimal min-max equilibrium and yet suffers from mode collapse , while conversely , nSGDA does not necessarily converge and yet recovers the true distribution , and ii ) our setting is more challenging since we need at least a degree-3 discriminator to learn the distribution – see Section 4 for a justification . Normalized gradient descent . Introduced by Nesterov ( 1984 ) , normalized gradient descent has been widely used in the minimization setting . Indeed , it has been observed that normalizing the gradient alleviates the ' slow crawling ' problem of gradient descent and prevents the iterates from getting stuck in flat regions – such as spurious local minima or saddle points – ( Hazan et al. , 2015 ; Levy , 2016 ; Murray et al. , 2019 ) . Normalized gradient descent and its variants outperform their non-normalized counterparts in multi-agent coordination ( Cortés , 2006 ) and deep learning tasks ( You et al. , 2017 ; 2019 ; Cutkosky & Mehta , 2020 ; Liu et al. , 2021 ) . Our work instead considers the min-max setting and shows that nSGDA performs better than SGDA as it forces the discriminator and generator to update at the same rate . | This paper dissects the success of Adam in GAN training by swapping the gradient direction/magnitude with that of SGDA. By doing so, the authors find that Adam produces higher quality solutions relative to SGDA in GANs mainly due to its adaptive magnitude and not its adaptive direction. Inspired by this observation, combining the direction of SGDA with the magnitude of Adam yields normalized SGDA (nSGDA) for GAN training, which consistently competes with Adam. | SP:465903c777ca2961502e3d8726e1cfa9b725d1a3 |
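The nSGDA update central to the paper above divides each player's gradient by its norm before the step, so that D and G move at comparable rates regardless of their raw gradient scales. A one-step sketch, under the usual convention that D ascends and G descends (function and variable names are illustrative):

```python
import numpy as np

def nsgda_step(w_d, w_g, grad_d, grad_g, eta_d, eta_g, eps=1e-8):
    """One nSGDA step: each player moves by a step whose length is (nearly)
    its own step-size, along its own gradient direction, no matter how
    different the raw gradient norms of D and G are."""
    w_d = w_d + eta_d * grad_d / (np.linalg.norm(grad_d) + eps)  # ascent for D
    w_g = w_g - eta_g * grad_g / (np.linalg.norm(grad_g) + eps)  # descent for G
    return w_d, w_g
```

Even if D's gradient is a million times larger than G's, both updates have length close to their respective step-sizes, which is exactly the balancedness the analysis relies on.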
Specialized Transformers: Faster, Smaller and more Accurate NLP Models | 1 INTRODUCTION . Transformer models such as BERT ( Devlin et al. , 2019 ) , GPT-2 ( Radford et al. , 2019 ) and GPT-3 ( Brown et al. , 2020 ) have revolutionized the field of Natural Language Processing ( NLP ) , greatly advancing the state-of-the-art in many NLP tasks . Models that achieve good performance on these tasks are of high practical significance , finding their place in commercial applications such as social media monitoring ( sentiment analysis ) , AI chat assistants ( question answering ) , automated summarization tools ( analyzing sentence similarity ) , etc . Therefore , there is a strong interest in creating more accurate and efficient Transformer models for these tasks . Transformers are first pre-trained on very large datasets , and subsequently fine-tuned for different downstream tasks . However , the current method of pre-training and fine-tuning Transformers has two major drawbacks . First , fine-tuning the pre-trained Transformers leads to models that are highly over-parameterized for the downstream tasks , especially since many of these tasks have very limited training data . This could lead to unstable models ( Dodge et al. , 2020 ) with sub-optimal generalization ability ( Michel et al. , 2019 ) . Second , these large fine-tuned models present high computation and storage requirements for inference . This problem is further exacerbated by the trend towards larger and more accurate models over time . For instance , increasing the number of parameters from 1.5B to 175B enabled a reduction in perplexity for Language Modelling ( on the Penn Treebank dataset ) from 35.8 in GPT-2 ( 2019 ) to 20.5 in GPT-3 ( 2020 ) .
In this work , we address both these challenges , taking advantage of the over-parameterized nature of pre-trained models to create individually Specialized models for the different downstream tasks that are smaller , faster and more accurate than the conventional fine-tuned models . Prior research efforts have explored approximation techniques , such as quantization , pruning and distillation , for improving the inference efficiency of various classes of neural networks . However , these techniques invariably involve a trade-off between accuracy and efficiency . In addition , the vast majority of these techniques require re-training or additional fine-tuning . This becomes especially problematic for Transformers , since these large models require significant time and energy to train and fine-tune . Fine-tuning is usually performed on a very small data set , and as a result , conventional iterative prune-and-retrain methods quickly end up overfitting when applied to Transformer models . In contrast , our Specialization framework utilizes the unique characteristics of Transformer training and deployment to enable substantial gains in both accuracy and efficiency . Our framework also does not require any additional re-training or fine-tuning , and can be applied in a plug-and-play manner to any Transformer model that is fine-tuned for any downstream task . During pre-training , Transformers capture rich linguistic knowledge and gain a deep understanding of the structure of the target language . Fine-tuning refines this knowledge for a specific downstream task by training a task-specific final layer . We observe that , due to the nature of their design process , Transformer models are not only over-parameterized but also contain parts that are , in fact , harmful to performance on downstream tasks .
In order to exploit this observation in a principled manner , we introduce a framework to identify and prune the harmful elements of a Transformer ( parameters , grouped at different levels of granularity , i.e. , self-attention blocks , feed-forward neural network blocks , attention heads and neurons ) , with the goal of maximizing accuracy on the downstream task . In contrast with prior pruning methods that prune elements with little-or-no impact on the network output , the proposed method prunes elements that have a considerable impact on the output , leading to the highest positive impact on accuracy . In order to reduce the large pruning space , we analyze the different elements of the fine-tuned Transformer in a hierarchical manner , starting with entire self-attention or feed-forward neural network blocks , followed by attention heads , and neurons , and prune the harmful elements . The core of the Transformer is self-attention , where each token in the input builds its representation based on the extent of attention it places on all the other tokens . However , we observe that in some cases , restricting the attention span of each token to only focus on the relevant tokens ( in certain layers ) leads to better information flow inside the model . Hence , our framework is also equipped with the ability to identify the appropriate layers and replace the "soft" self-attention with "hard" self-attention in these layers . We also introduce Transformer-specific heuristics to minimize the run-time of our framework , thereby enabling it to scale to large Transformer models . We summarize our main contributions as follows : • We introduce a Specialization framework that optimizes Transformer models for specific downstream tasks through the use of accuracy-driven pruning and selective hard attention .
• We incorporate multiple heuristics in the framework , such as hierarchical processing , model-driven insights , and run-time based ordering of elements , in order to minimize the overheads . • We propose a significance analysis technique to identify the importance of each element of the fine-tuned Transformer for a given downstream task . We use this technique to prune elements that are harmful to performance on the downstream task . • We propose the selective replacement of the "soft" self-attention with hard attention in the appropriate layers , helping the model focus only on the relevant parts of the input to build better representations . • Across a suite of different Transformer networks , we demonstrate that Specialized models are consistently more accurate and stable , while also being significantly faster and smaller than their conventional fine-tuned non-Specialized counterparts . 2 RELATED WORK . Task-agnostic optimizations . Given the effectiveness and popularity of Transformer models , several techniques have been proposed to overcome their computational and memory challenges , and to accelerate inference using these models . A vast majority of these works introduce task-agnostic optimizations , using popular approximation techniques such as knowledge distillation ( Sanh et al. , 2019 ; Sun et al. , 2020 ; Wang et al. , 2020 ) , early exit / depth modulation ( Elbayad et al. , 2020 ; Xin et al. , 2020 ; Zhou et al. , 2020 ) , quantization ( Zafrir et al. , 2019 ) , attention head pruning ( Zhang et al. , 2020 ) and parameter sharing ( Lan et al. , 2020 ) . In addition , Fan et al . ( 2020 ) randomly drop layers during pre-training , thereby enabling them to be dropped during inference ; Khetan & Karnin ( 2020 ) learn the optimal sizes of the BERT elements during pre-training , and Wu et al . ( 2020 ) use Long-Short Range Attention to speed up the self-attention operation .
Using DistilBERT and Q8BERT as examples , we demonstrate that our techniques are complementary to these works , and can be applied while fine-tuning for a specific task to create accurate , ultra-efficient models for specific downstream tasks . Task-specific optimizations . Hou et al . ( 2020 ) , Shen et al . ( 2020 ) , Jiao et al . ( 2019 ) , Wang et al . ( 2020 ) , Lagunas et al . ( 2021 ) and Sajjad et al . ( 2020 ) introduce task-specific optimizations , but the gain in efficiency comes at the cost of degradation in accuracy on the downstream task . In contrast , our framework improves both accuracy as well as efficiency , and hence appeals to a wider range of users . We also demonstrate that task-specific optimizations are more effective when applied to Specialized models compared to conventional fine-tuned models . Pruning techniques . Finally , Structured Pruning has been applied to various classes of neural networks ( Anwar et al. , 2017 ; Molchanov et al. , 2017 ) , and greedy pruning strategies have also been explored to identify weights and parameters that the output is least sensitive to ( Zhuang et al. , 2018 ; Ye et al. , 2020 ) . In contrast , our method is designed to identify and prune parameters that have the most detrimental effect on the output . 3 METHODOLOGY TO SPECIALIZE TRANSFORMERS . We propose a framework for producing Specialized Transformer models that are optimized for a specific downstream task , illustrated in Algorithm 1 . Our framework performs two main optimizations : ( 1 ) It identifies and prunes elements that hinder performance on the downstream task at hand . ( 2 ) It selectively replaces soft self-attention with hard self-attention to help the model focus only on the relevant parts of the input . 3.1 ACCURACY-DRIVEN PRUNING . The problem of identifying an optimal set of elements to prune is challenging , and this is especially true for Transformers .
In order to optimize a given model , we would ideally want to characterize the significance of each and every parameter in the model , rank them in order of importance , and finally prune only the least significant parameters . However , Transformers have billions of parameters , making this process computationally infeasible . In addition , previously proposed techniques that can efficiently estimate the importance of each parameter , such as using Taylor expansion , are not applicable . This is because the { approximate , fine-tune , approximate } cycle does not work for Transformers during fine-tuning , since they very quickly overfit the limited training data for the downstream tasks ( usually within 5 epochs ) . We address both these issues through the use of a hierarchical greedy algorithm that does not require any additional training or fine-tuning . To determine the significance of each Transformer element , we first fine-tune the original Transformer model for the given downstream task to obtain the baseline loss . Then , for the element under consideration in each iteration of the framework , we compute the loss of the current Transformer model with the element removed . We prune the element under consideration if the validation loss when it is removed is less than the minimum loss seen thus far during the optimization process , since the goal is to find a model with minimum loss . Also , in order to prevent overfitting to the validation set , we introduce a generalization constraint in addition to the aforementioned loss condition . This constraint ensures that an element is pruned only if it decreases ( or retains ) the loss of at least a certain number ( N ) of samples in the validation set over the current best solution ( computed by num samples helped function in Alg . 
1 ) , where N is the number of elements in the validation dataset whose loss is less than the average loss of the misclassified samples ( we consider samples whose loss is greater than the average loss of the misclassified samples to be outliers ) . Therefore , elements are pruned only if a vast majority of the samples in the validation set benefit from their removal , resulting in improved generalization performance . If the loss with the element removed is greater than the minimum loss seen so far but less than the baseline loss , we inspect the element at a finer granularity , and prune only parts of the element that hinder performance ( rather than pruning the entire element ) .

Algorithm 1 : Transformer Specialization
Input : Fine-tuned ( for the given downstream task ) Transformer T , Validation set D
Output : Specialized Transformer for the given downstream task T
Function analyze element ( element E ) :
    Tpruned = T − E
    New Loss = Evaluate ( Tpruned , D )
    if New Loss < Min Loss and num samples helped > N then
        Min Loss = New Loss
        T = Tpruned
Function num samples helped : computes the number of samples in the validation set that benefit from the pruning of an element ; must be > N for an element to be pruned
Baseline Loss = Evaluate ( T , D )
Min Loss = Baseline Loss
Q = Order elements for inspection ( T , D )
for each layer L in T do
    Replace soft self-attention in L with hard self-attention
    New Loss = Evaluate ( T , D )
    if New Loss < Min Loss and num samples helped > N then
        Min Loss = New Loss
    else
        Restore soft self-attention in L
while Q is not empty do
    TrialElement = Q.pop ( )
    analyze element ( TrialElement )
    if TrialElement has not been pruned from T and New Loss < Baseline Loss then
        if TrialElement is an attention block then
            for each attention head h in TrialElement do analyze element ( h )
        else if TrialElement is a feed-forward block then
            for each neuron w in TrialElement do analyze element ( w )
return T

Hierarchical processing of elements .
It is computationally prohibitive to analyze every single parameter in large Transformers using the method described in Alg . 1 . Since the framework iterates through the entries of the queue sequentially , its efficacy is dependent on both the total number of elements under consideration and the time required to analyze each element . We take advantage of the inherently hierarchical structure of Transformers and consider the elements in a hierarchical manner , ordered by increasing granularity . Specifically , we analyze entire feed-forward and self-attention blocks first , and inspect them at finer granularity ( attention heads and neurons ) only when required . Through this ordering , we are able to quickly eliminate large numbers of parameters from further consideration . In addition , due to the over-parameterized nature of Transformers , it is likely that time-consuming blocks are pruned from the Transformer earlier in the process , thereby speeding up future iterations of the framework . For example , eliminating a single feed-forward block in the BERT-Base model removes 5.6 % of all parameters under consideration , and speeds up future iterations by 1.15× . To further reduce the number of elements under consideration , we also dynamically remove elements if they are encompassed by a high-importance block . For example , if a given self-attention block is determined to be of high importance ( the validation loss with the block removed is greater than the baseline loss ) , we remove all heads within that block from further consideration . Creating an ordered queue of elements for inspection . Since our framework performs greedy pruning of highly over-parameterized models , it is essential to know where the harmful elements are likely to be . Some elements may appear to be harmful for the downstream task ( especially in early iterations of the framework ) , but this often ends up being an artifact of over-parameterization .
As a result , when elements are not carefully ordered for inspection , our framework lands in local minima of the validation loss function , leading to inefficient models with sub-optimal generalization ability . Our solution to this problem utilizes the unique linguistic properties captured by the different Transformer layers ( Jawahar et al. , 2019 ) to guide the ordering of elements for inspection . For example , it was found that BERT captures phrase-level information in the lower layers , mapping related tokens together . The lower layers also capture surface features , while the middle layers capture syntactic features and higher layers capture semantic features . It was also observed that BERT requires deeper layers only when long-range dependency information is required . Different tasks require different types of linguistic knowledge . For example , sentiment analysis requires only local context , and long-range information often ends up confusing the model , since sentiments often change rapidly ; it is also unlikely that syntactic and semantic information is needed . Hence , we place the final layer at the front of the queue , and work our way backwards towards the first layer , since blocks in the final layers are more likely to hinder performance on sentiment analysis . This ordering of elements ensures that elements that are pruned early in our framework ( when the model is most over-parameterized ) do not lead the system into bad local minima . | This paper proposes a new framework for pruning a pretrained Transformer on downstream tasks for better performance and efficiency. The resulting model is 2.5x faster and 3.2x smaller than a fine-tuned model. The authors test the effectiveness on GLUE and SQuAD with BERT-base, Q8BERT, DistilBERT and XLNet. | SP:25948bee751010ad32f6cfef2217f7dd3fb3e556 |
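The greedy accuracy-driven loop of Algorithm 1 above can be condensed into a minimal executable sketch. This is heavily simplified and uses hypothetical stand-ins: the "model" is just a list of elements, `evaluate` plays the role of the validation loss, and `helps_enough` stands in for the num samples helped > N generalization constraint.

```python
def specialize(elements, evaluate, helps_enough):
    """Greedily try removing each element; keep a removal only if it strictly
    lowers the best validation loss seen so far AND passes the
    generalization constraint."""
    kept = list(elements)
    min_loss = evaluate(kept)  # baseline loss of the fine-tuned model
    for e in list(kept):
        trial = [x for x in kept if x is not e]
        new_loss = evaluate(trial)
        if new_loss < min_loss and helps_enough(e):
            min_loss, kept = new_loss, trial
    return kept, min_loss

# Toy run: pruning element 'b' lowers the loss; removing 'a' or 'c' does not.
loss = lambda kept: 2.0 if 'b' in kept else 1.0
kept, best = specialize(['a', 'b', 'c'], loss, lambda e: True)
```

In the toy run, only 'b' is pruned, mirroring how Algorithm 1 retains every element whose removal does not strictly improve on the running minimum loss.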
Specialized Transformers: Faster, Smaller and more Accurate NLP Models | 1 INTRODUCTION . Transformer models such as BERT ( Devlin et al. , 2019 ) , GPT-2 ( Radford et al. , 2019 ) and GPT-3 ( Brown et al. , 2020 ) have revolutionized the field of Natural Language Processing ( NLP ) , greatly advancing the state-of-the-art in many NLP tasks . Models that achieve good performance on these tasks are of high practical significance , finding their place in commercial applications such as social media monitoring ( sentiment analysis ) , AI chat assistants ( question answering ) , automated summarization tools ( analyzing sentence similarity ) , etc . Therefore , there is a strong interest in creating more accurate and efficient Transformer models for these tasks . Transformers are first pre-trained on very large datasets , and subsequently fine-tuned for different downstream tasks . However , the current method of pre-training and fine-tuning Transformers has two major drawbacks . First , fine-tuning the pre-trained Transformers leads to models that are highly over-parameterized for the downstream tasks , especially since many of these tasks have very limited training data . This could lead to unstable models ( Dodge et al. , 2020 ) with sub-optimal generalization ability ( Michel et al. , 2019 ) . Second , these large fine-tuned models present high computation and storage requirements for inference . This problem is further exacerbated by the trend towards larger and more accurate models over time . For instance , increasing the number of parameters from 1.5B to 175B enabled a reduction in perplexity for Language Modelling ( on the Penn Treebank dataset ) from 35.8 in GPT-2 ( 2019 ) to 20.5 in GPT-3 ( 2020 ) .
In this work, we address both of these challenges, taking advantage of the over-parameterized nature of pre-trained models to create individually Specialized models for the different downstream tasks that are smaller, faster and more accurate than the conventional fine-tuned models. Prior research efforts have explored approximation techniques, such as quantization, pruning and distillation, for improving the inference efficiency of various classes of neural networks. However, these techniques invariably involve a trade-off between accuracy and efficiency. In addition, the vast majority of these techniques require re-training or additional fine-tuning. This becomes especially problematic for Transformers, since these large models require significant time and energy to train and fine-tune. Fine-tuning is usually performed on a very small dataset, and as a result, conventional iterative prune-and-retrain methods quickly end up overfitting when applied to Transformer models. In contrast, our Specialization framework utilizes the unique characteristics of Transformer training and deployment to enable substantial gains in both accuracy and efficiency. Our framework also does not require any additional re-training or fine-tuning, and can be applied in a plug-and-play manner to any Transformer model that is fine-tuned for any downstream task. During pre-training, Transformers capture rich linguistic knowledge and gain a deep understanding of the structure of the target language. Fine-tuning refines this knowledge for a specific downstream task by training a task-specific final layer. We observe that, due to the nature of their design process, Transformer models are not only over-parameterized but also contain parts that are, in fact, harmful to performance on downstream tasks.
In order to exploit this observation in a principled manner, we introduce a framework to identify and prune the harmful elements of a Transformer (parameters, grouped at different levels of granularity, i.e., self-attention blocks, feed-forward neural network blocks, attention heads and neurons), with the goal of maximizing accuracy on the downstream task. In contrast with prior pruning methods that prune elements with little or no impact on the network output, the proposed method prunes elements that have a considerable impact on the output, leading to the highest positive impact on accuracy. In order to reduce the large pruning space, we analyze the different elements of the fine-tuned Transformer in a hierarchical manner, starting with entire self-attention or feed-forward neural network blocks, followed by attention heads and neurons, and prune the harmful elements. The core of the Transformer is self-attention, where each token in the input builds its representation based on the extent of attention it places on all the other tokens. However, we observe that in some cases, restricting the attention span of each token to focus only on the relevant tokens (in certain layers) leads to better information flow inside the model. Hence, our framework is also equipped with the ability to identify the appropriate layers and replace the "soft" self-attention with "hard" self-attention in these layers. We also introduce Transformer-specific heuristics to minimize the run-time of our framework, thereby enabling it to scale to large Transformer models. We summarize our main contributions as follows: • We introduce a Specialization framework that optimizes Transformer models for specific downstream tasks through the use of accuracy-driven pruning and selective hard attention.
• We incorporate multiple heuristics in the framework, such as hierarchical processing, model-driven insights, and run-time-based ordering of elements, in order to minimize the overheads.
• We propose a significance analysis technique to identify the importance of each element of the fine-tuned Transformer for a given downstream task. We use this technique to prune elements that are harmful to performance on the downstream task.
• We propose the selective replacement of "soft" self-attention with hard attention in the appropriate layers, helping the model focus only on the relevant parts of the input to build better representations.
• Across a suite of different Transformer networks, we demonstrate that Specialized models are consistently more accurate and stable, while also being significantly faster and smaller than their conventional fine-tuned non-Specialized counterparts.
2 RELATED WORK. Task-agnostic optimizations. Given the effectiveness and popularity of Transformer models, several techniques have been proposed to overcome their computational and memory challenges, and to accelerate inference using these models. A vast majority of these works introduce task-agnostic optimizations, using popular approximation techniques such as knowledge distillation (Sanh et al., 2019; Sun et al., 2020; Wang et al., 2020), early exit / depth modulation (Elbayad et al., 2020; Xin et al., 2020; Zhou et al., 2020), quantization (Zafrir et al., 2019), attention head pruning (Zhang et al., 2020) and parameter sharing (Lan et al., 2020). In addition, Fan et al. (2020) randomly drop layers during pre-training, thereby enabling their dropping during inference; Khetan & Karnin (2020) learn the optimal sizes of the BERT elements during pre-training, and Wu et al. (2020) use Long-Short Range Attention to speed up the self-attention operation.
Using DistilBERT and Q8BERT as examples, we demonstrate that our techniques are complementary to these works, and can be applied while fine-tuning for a specific task to create accurate, ultra-efficient models for specific downstream tasks. Task-specific optimizations. Hou et al. (2020), Shen et al. (2020), Jiao et al. (2019), Wang et al. (2020), Lagunas et al. (2021) and Sajjad et al. (2020) introduce task-specific optimizations, but the gain in efficiency comes at the cost of degradation in accuracy on the downstream task. In contrast, our framework improves both accuracy and efficiency, and hence appeals to a wider range of users. We also demonstrate that task-specific optimizations are more effective when applied to Specialized models compared to conventional fine-tuned models. Pruning techniques. Finally, structured pruning has been applied to various classes of neural networks (Anwar et al., 2017; Molchanov et al., 2017), and greedy pruning strategies have also been explored to identify weights and parameters that the output is least sensitive to (Zhuang et al., 2018; Ye et al., 2020). In contrast, our method is designed to identify and prune the parameters that have the most detrimental effect on the output. 3 METHODOLOGY TO SPECIALIZE TRANSFORMERS. We propose a framework for producing Specialized Transformer models that are optimized for a specific downstream task, illustrated in Algorithm 1. Our framework performs two main optimizations: (1) it identifies and prunes elements that hinder performance on the downstream task at hand; (2) it selectively replaces soft self-attention with hard self-attention to help the model focus only on the relevant parts of the input. 3.1 ACCURACY-DRIVEN PRUNING. The problem of identifying an optimal set of elements to prune is challenging, and this is especially true for Transformers.
In order to optimize a given model, we would ideally want to characterize the significance of each and every parameter in the model, rank them in order of importance, and finally prune only the least significant parameters. However, Transformers have billions of parameters, making this process computationally infeasible. In addition, previously proposed techniques that can efficiently estimate the importance of each parameter, such as using Taylor expansion, are not applicable. This is because the {approximate, fine-tune, approximate} cycle does not work for Transformers during fine-tuning, since they very quickly overfit the limited training data for the downstream tasks (usually within 5 epochs). We address both of these issues through the use of a hierarchical greedy algorithm that does not require any additional training or fine-tuning. To determine the significance of each Transformer element, we first fine-tune the original Transformer model for the given downstream task to obtain the baseline loss. Then, for the element under consideration in each iteration of the framework, we compute the loss of the current Transformer model with the element removed. We prune the element under consideration if the validation loss when it is removed is less than the minimum loss seen thus far during the optimization process, since the goal is to find a model with minimum loss. Also, in order to prevent overfitting to the validation set, we introduce a generalization constraint in addition to the aforementioned loss condition. This constraint ensures that an element is pruned only if it decreases (or retains) the loss of at least a certain number (N) of samples in the validation set over the current best solution (computed by the num_samples_helped function in Alg.
1), where N is the number of samples in the validation dataset whose loss is less than the average loss of the misclassified samples (we consider samples whose loss is greater than the average loss of the misclassified samples to be outliers). Therefore, elements are pruned only if a vast majority of the samples in the validation set benefit from their removal, resulting in improved generalization performance. If the loss with the element removed is greater than the minimum loss seen so far but less than the baseline loss, we inspect the element at a finer granularity, and prune only the parts of the element that hinder performance (rather than pruning the entire element).
Algorithm 1: Transformer Specialization
  Input: Transformer T, fine-tuned for the given downstream task; validation set D
  Output: Specialized Transformer T for the given downstream task
  Function analyze_element(element E):
      T_pruned = T − E
      New_Loss = Evaluate(T_pruned, D)
      if New_Loss < Min_Loss and num_samples_helped > N then
          Min_Loss = New_Loss; T = T_pruned
  Function num_samples_helped:
      computes the number of samples in the validation set that benefit from the pruning of an element; must be > N for an element to be pruned
  Baseline_Loss = Evaluate(T, D); Min_Loss = Baseline_Loss
  Q = Order_elements_for_inspection(T, D)
  for each layer L in T do
      Replace soft self-attention in L with hard self-attention
      New_Loss = Evaluate(T, D)
      if New_Loss < Min_Loss and num_samples_helped > N then
          Min_Loss = New_Loss
      else
          Restore soft self-attention in L
  while Q is not empty do
      TrialElement = Q.pop()
      analyze_element(TrialElement)
      if TrialElement has not been pruned from T and New_Loss < Baseline_Loss then
          if TrialElement is an attention block then
              for each attention head h in TrialElement do analyze_element(h)
          else if TrialElement is a feed-forward block then
              for each neuron w in TrialElement do analyze_element(w)
  return T
Hierarchical processing of elements.
It is computationally prohibitive to analyze every single parameter in large Transformers using the method described in Alg. 1. Since the framework iterates through the entries of the queue sequentially, its efficacy depends on both the total number of elements under consideration and the time required to analyze each element. We take advantage of the inherently hierarchical structure of Transformers and consider the elements in a hierarchical manner, ordered by increasing granularity. Specifically, we analyze entire feed-forward and self-attention blocks first, and inspect them at finer granularity (attention heads and neurons) only when required. Through this ordering, we are able to quickly eliminate large numbers of parameters from further consideration. In addition, due to the over-parameterized nature of Transformers, it is likely that time-consuming blocks are pruned from the Transformer earlier in the process, thereby speeding up future iterations of the framework. For example, eliminating a single feed-forward block in the BERT-Base model removes 5.6% of all parameters under consideration, and speeds up future iterations by 1.15×. To further reduce the number of elements under consideration, we also dynamically remove elements if they are encompassed by a high-importance block. For example, if a given self-attention block is determined to be of high importance (the validation loss with the block removed is greater than the baseline loss), we remove all heads within that block from further consideration. Creating an ordered queue of elements for inspection. Since our framework performs greedy pruning of highly over-parameterized models, it is essential to know where the harmful elements are likely to be. Some elements may appear to be harmful for the downstream task (especially in early iterations of the framework), but this often ends up being an artifact of over-parameterization.
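As a back-of-the-envelope check of the 5.6% figure, assuming standard BERT-Base dimensions (hidden size 768, feed-forward size 3072, 12 layers) and counting only encoder block weights (layer norms and embeddings omitted):

```python
hidden, inter, layers = 768, 3072, 12

# One feed-forward block: two linear maps with biases.
ffn = hidden * inter + inter + inter * hidden + hidden
# One self-attention block: Q, K, V and output projections with biases.
attn = 4 * (hidden * hidden + hidden)

total = layers * (ffn + attn)
share = ffn / total
print(f"one FFN block ≈ {share:.1%} of parameters under consideration")  # prints ≈ 5.6%
```

The approximate count (~4.7M parameters out of ~85M under consideration per feed-forward block) matches the quoted fraction.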
As a result , when elements are not carefully ordered for inspection , our framework lands in local minima of the validation loss function , leading to inefficient models with sub-optimal generalization ability . Our solution to this problem utilizes the unique linguistic properties captured by the different Transformer layers ( Jawahar et al. , 2019 ) to guide the ordering of elements for inspection . For example , it was found that BERT captures phrase-level information in the lower layers , mapping related tokens together . The lower layers also capture surface features , while the middle layers capture syntactic features and higher layers capture semantic features . It was also observed that BERT requires deeper layers only when long-range dependency information is required . Different tasks require different types of linguistic knowledge . For example , sentiment analysis requires only local context , and long-range information often ends up confusing the model , since sentiments often change rapidly ; it is also unlikely that syntactic and semantic information are needed . Hence , we place the final layer at the front of the queue , and work our way backwards towards the first layer , since blocks in the final layers are more likely to hinder performance on sentiment analysis . This ordering of elements ensures that elements that are pruned early in our framework ( when the model is most over-parameterized ) do not lead the system into bad local minima . | The authors propose a Specialization framework to create optimized transformer models for a given downstream task. The framework systematically uses accuracy-driven pruning. The authors proposed two ways to reduce model parameters, 1) Hierarchical pruning. Start from analyzing entire feed-forward and self-attention blocks, and inspect them at finer granularity (attention heads and neurons) only when required. 2) Replacing soft-attention with hard-attention. 
The proposed method significantly improves benchmark models, BERT, Q8BERT and DistilBERT. | SP:25948bee751010ad32f6cfef2217f7dd3fb3e556 |
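The accuracy-driven pruning loop with its generalization constraint (Section 3.1 above) can be sketched in miniature as follows. The toy elements and per-sample loss model are purely illustrative; real usage would evaluate the actual Transformer on the validation set:

```python
def specialize(elements, per_sample_loss, n):
    """Greedy accuracy-driven pruning (toy sketch).
    elements: ids of prunable elements; per_sample_loss(kept) returns
    the validation loss of each sample for the model keeping `kept`."""
    kept = set(elements)
    best = per_sample_loss(kept)        # per-sample losses of the best model so far
    min_loss = sum(best) / len(best)
    for e in elements:                  # inspection queue (ordering heuristic omitted)
        trial = kept - {e}
        losses = per_sample_loss(trial)
        new_loss = sum(losses) / len(losses)
        # generalization constraint: removal must help (or not hurt) > n samples
        helped = sum(l <= b for l, b in zip(losses, best))
        if new_loss < min_loss and helped > n:
            kept, best, min_loss = trial, losses, new_loss
    return kept

# Toy model: element 'b' hurts every sample, 'a' helps, 'c' is neutral.
def per_sample_loss(kept):
    return [1.0 + (0.5 if 'b' in kept else 0.0) + (0.3 if 'a' not in kept else 0.0)
            for _ in range(10)]

print(specialize(['a', 'b', 'c'], per_sample_loss, n=8))
```

Here only the uniformly harmful element 'b' is pruned: removing 'a' raises the loss, and removing 'c' fails the strict loss-improvement condition.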
Specialized Transformers: Faster, Smaller and more Accurate NLP Models | This paper introduces a framework to prune parameters in transformer-based structures. The authors claim that the proposed method leads to a smaller model with faster and more accurate performance. It first replaces soft self-attention with hard self-attention by checking whether the replacement leads to a smaller loss and benefits more than N samples in the validation set. In a similar way, it further prunes attention blocks, feed-forward blocks, and their neurons.
| SP:25948bee751010ad32f6cfef2217f7dd3fb3e556 |
Stability Regularization for Discrete Representation Learning | 1 INTRODUCTION Neural networks are universal approximators of continuous functions . Often , however , discrete computations are desirable , whether for the intermediate neurons and their representations ( Oord et al. , 2017 ) , the parameters ( Courbariaux et al . ) , or the outputs . Current methods for training neural networks require differentiability which means that it is not straightforward to train neural networks with discrete variables . This has led to the development of several approximate methods ( Williams , 1992 ; Jang et al. , 2017 ; Bengio et al. , 2013 ; Tucker et al. , 2017 ; Pervez et al. , 2020 ) with various trade-offs of bias , variance , and complexity . In this work we focus on neural networks with discrete intermediate representations . Building upon techniques from the analysis of functions in Gaussian spaces ( Janson et al. , 1997 ) , and specifically the notion of stability of Gaussian functions , we propose a novel regularization strategy on representations that yields precise and hassle-free discrete representations . Several approaches have been introduced in the literature for learning discrete representations with backpropagation . The simplest approach is the Straight-Through estimator ( Bengio et al. , 2013 ) , which essentially ignores the intermediate discrete function allowing the gradients to flow . Another popular choice is the Gumbel-Softmax ( Maddison et al. , 2017 ; Jang et al. , 2017 ) , which replaces the discrete categorical variables with relaxed stochastic continuous ones . In both approaches the discrete variables are replaced by approximations and the model is biased with respect to the original discrete objective . When employed with complex architectures , Straight-Through and Gumbel-Softmax estimators often underperform due to this bias , as in figure 1 . 
The reason is that with continuous relaxation methods there is a tension between obtaining better optima and objective function values , and obtaining discrete representations . Importantly , the more complex the optimization ( or the model ) is , the greater the pressure towards non-discrete solutions , thus increasing bias further . Adding to the complexity of obtaining discrete representations , with current methods there is no direct incentive for the optimization procedure to obtain discrete representations : with Gumbel-Softmax , the extent to which the obtained representation is close to discrete is controlled by a temperature variable , which must be manually tuned . Unbiased estimators like REINFORCE ( Williams , 1992 ) – and reduced variance extensions like REBAR ( Tucker et al. , 2017 ) and RELAX ( Grathwohl et al. , 2018 ) – have also been explored . However , these methods tend to be computationally expensive , which limits their usefulness for complex models . All in all , whether due to bias , high variance , high computational complexity , or the need for manual tuning , there remains a need for alternative methods for obtaining hassle-free discrete representations with neural networks , especially when increasing their complexity . In this work we present a regularization procedure for discrete representations , which can be used either as a standalone method , or in combination with existing continuous relaxation or straight-through estimators . In its standalone form , the method replaces a discrete variable by a parameterized continuous function whose output ( say a sigmoid or softmax function ) corresponds to the discrete variable , and which is then regularized to produce discrete outputs . In combination with continuous relaxations such as Gumbel-Softmax , the method can be used to regularize the logits input to the sampling procedure , serving as an implicit temperature control by making the logits noise stable .
We achieve this by resting upon the notion of noise stability developed in the analysis of Gaussian functions ( Borell , 1985 ; Mossel & Neeman , 2012 ) . Roughly speaking , the noise stability of a Gaussian function is a measure of its resilience to noise . Given a Gaussian function f and correlated Gaussian variables ε , ε′ ∈ Rd , the noise stability of f is defined as Stab [ f ] = Eε,ε′ [ f ( ε ) f ( ε′ ) ] . Borell ’ s isoperimetric theorem ( Borell , 1985 ) states that for bounded functions of some fixed volume with range [ 0 , 1 ] , noise stability is maximized by functions that are indicator functions of half spaces . Given that half space indicators maximize noise stability in Gaussian space , we suggest that optimizing stability is a very simple and effective method of transforming Gaussian inputs to binary vectors , thus simplifying the process of obtaining discrete representations . In summary , we demonstrate how the concept of noise stability can be used to regularize stochastic neural networks with Gaussian variables to train hassle-free neural networks with discrete ( Bernoulli or categorical ) variables . In the following , we first give a short introduction to noise stability in Gaussian analysis . We then motivate our proposal for using noise stability to regularize Gaussian functions for learning discrete representations . We validate by experiments in the Neural Variational Inference framework for learning graph structured latent spaces , learning discrete ( deterministic ) autoencoders , clustering with Gaussian Mixture VAEs , gating ResNets , and structured prediction . 2 NOISE STABILITY OF GAUSSIAN FUNCTIONS . 2.1 STABILITY AND GAUSSIAN ISOPERIMETRY . Noise Stability of a Gaussian function f : Rn → R is defined for a noise parameter ρ ∈ ( 0 , 1 ) as Stabρ [ f ] = Eε,ε′ [ f ( ε ) f ( ε′ ) ] , ( 1 ) ε′ = ρ ε + √ ( 1 − ρ2 ) ε′′ , ( 2 ) where ε , ε′ are called ρ-correlated Gaussian pairs and ε , ε′′ ∼ N ( 0 , 1 ) are samples from the standard normal distribution .
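The ρ-correlated sampling in equation ( 2 ) and the Monte Carlo estimate of equation ( 1 ) can be sketched in a few lines of NumPy. This is an illustration, not the paper's code; the sanity check uses the known closed form for the orthant probability of ρ-correlated standard Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho_correlated(rho, size):
    """Sample a rho-correlated pair: eps' = rho*eps + sqrt(1 - rho^2)*eps'' (eq. 2)."""
    eps = rng.standard_normal(size)
    eps_pp = rng.standard_normal(size)
    return eps, rho * eps + np.sqrt(1.0 - rho ** 2) * eps_pp

def stability(f, rho, n_samples=200_000, dim=2):
    """Monte Carlo estimate of Stab_rho[f] = E[f(eps) f(eps')] (eq. 1)."""
    eps, eps_p = rho_correlated(rho, (n_samples, dim))
    return float(np.mean(f(eps) * f(eps_p)))

# Half-space indicator f = 1[x_1 > 0] (Gaussian volume 1/2). Its stability
# has the closed form Stab_rho = 1/4 + arcsin(rho) / (2*pi).
half_space = lambda x: (x[:, 0] > 0).astype(float)
```

For ρ = 0.5 the estimate lands close to 1/4 + arcsin(0.5)/(2π) = 1/3, consistent with half spaces being the stability maximizers among sets of that volume.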
Stability is defined here in terms of standard normal Gaussian variables , but it is easily extended to any distribution of independent Gaussian variables by reparameterization , given the mean and standard deviation . For the special case where f = 1A , the indicator function of a set A , stability measures the probability that both ε and ε′ remain within A : Stabρ [ f ] = P [ ε ∈ A ∧ ε′ ∈ A ] . ( 3 ) By Borell ’ s Gaussian isoperimetric theorem ( Borell , 1985 ; Mossel & Neeman , 2012 ) , stability is related to the Gaussian isoperimetric inequality . According to the Gaussian isoperimetric inequality , geometric objects with minimum boundary ( i.e. , surface area ) in Gaussian space with fixed Gaussian volume ( i.e. , E [ f ] for f the object ’ s indicator function ) are half spaces . Theorem 1 ( Borell Isoperimetric Theorem ( Borell , 1985 ) ) . For fixed ρ ∈ ( 0 , 1 ) and f ∈ L2 ( Rn ) in Gaussian space with range [ 0 , 1 ] and fixed volume E [ f ] = α , Stabρ [ f ] is maximized by f = 1H where 1H is the indicator function of a half space with volume α . As a consequence , given a parameterized bounded Gaussian function , maximizing the stability makes the function f approach the indicator function of some half space as illustrated in figure 2 . For further details on noise stability in the context of Gaussian analysis of functions we refer the interested reader to ( O ’ Donnell , 2014 ) . 3 STABILITY REGULARIZATION . Stability regularization can be used either as a standalone method or in combination with Gumbel-Softmax style continuous relaxation . 3.1 REGULARIZATION FOR DISCRETE VARIABLES . We start from a continuous model , say a neural network with L layers or modules , f = fL ◦ · · · ◦ f1 ( x ) . Next , we describe how to employ stability regularization so that any arbitrary intermediate function fl ( z ; θ ) with bounded output learns to output discrete variables .
Given input z for fl we estimate the stability of fl and , thereafter , maximize it by adding it to the loss objective as a regularizing term . For a single input vector z ∈ Rk , following the definition of stability in equation ( 1 ) , we sample ρ-correlated Gaussian variables ε , ε′ as in equation ( 2 ) . We then evaluate fl twice : once for z + ε and once for z + ε′ . The expectation of their product , Eε,ε′ [ fl ( z + ε ) fl ( z + ε′ ) ] , is the stability Stabρ [ fl ( z ) ] . By maximizing the stability Stabρ [ fl ( z ) ] for a single input z and a fixed ρ ∈ ( 0 , 1 ) , the function fl approaches an indicator as described by Borell ’ s theorem . In a batch setting , we compute a Monte Carlo estimate of the expected stability over the input , that is Ez [ Stabρ [ fl ( z ) ] ] , by sampling one ρ-correlated Gaussian pair per batch element . Given z , ε , ε′ ∈ Rn×k , so that εi and ε′i are ρ-correlated Gaussian pairs , the estimate is computed as Ez [ Stabρ [ fl ( z ) ] ] ≈ ( 1 / n ) ∑i fl ( zi + εi ) fl ( zi + ε′i ) , ( 4 ) where n is the batch size and the arithmetic operations are done element-wise . To maximize stability we sum or average the estimate in equation ( 4 ) across dimensions and add the result as an additional regularization term to the loss function with which we train the model . 3.2 MEAN-CENTERED STABILITY REGULARIZATION . The regularization makes the function stable relative to correlated Gaussian noise by moving the inputs zi further apart . In some cases the inputs can become too far separated , which can hurt optimization if left uncontrolled . For such problematic cases we introduce mean-centered stability regularization , which preserves the expected value of fl given the input zi , ensuring that the induced separation remains limited . The idea behind mean-centered stability regularization is to compute the stability of fl − E [ fl ] so that maximizing stability reorients the separating hyperplane without changing the Gaussian volume of the corresponding halfspace .
According to Borell ’ s theorem , maximizing stability of fl for any fixed E [ fl ] causes it to approach a Gaussian halfspace indicator ( figure 2 ) . Since it can be expensive to compute the expectation of a neural network , we maximize the difference of two stability computations : given parameters ρ1 , ρ2 with ρ2 < ρ1 we optimize Stabρ1 [ fl ] − Stabρ2 [ fl ] . We can show that this objective is equal to Stabρ1−ρ2 [ fl − E [ fl ] ] to first order , with an error that is quadratic in ρ1 , ρ2 . Proposition 1 . Given a Gaussian function f : Rn → [ 0 , 1 ] and parameters ρ1 , ρ2 ∈ ( 0 , 1 ) , Stabρ1−ρ2 [ f − E [ f ] ] = Stabρ1 [ f ] − Stabρ2 [ f ] + O ( ρ2 ( ρ2 − ρ1 ) ) . Algorithm 1 Stability Regularization Require : Input z ∈ Rn×k ; stability layer fl with range ( 0 , 1 )m ; noise parameter ρ ∈ ( 0 , 1 ) ; stability constraint α ∈ ( 0 , 1 ) 1 : Sample ρ-correlated Gaussian vectors ε , ε′ ∈ Rn×k . 2 : Compute y1 = fl ( z + ε ) , y2 = fl ( z + ε′ ) . 3 : Estimate average stability over the batch per dimension as S = ( 1 / n ) ∑i y1,i y2,i . 4 : Apply the stability constraint per dimension : S = clip ( S , 0 , α ) . 5 : Sum S across dimensions and optimize by gradient descent . Algorithm 2 Stability Regularization with Mean Centering Require : Input z ∈ Rn×k ; stability layer fl with range ( 0 , 1 )m ; noise parameters ρ1 , ρ2 ∈ ( 0 , 1 ) , ρ2 < ρ1 1 : Sample ( ε1 , ε′1 ) ρ1-correlated and ( ε2 , ε′2 ) ρ2-correlated Gaussian vectors from Rn×k . 2 : Compute y1 = fl ( z + ε1 ) fl ( z + ε′1 ) , y2 = fl ( z + ε2 ) fl ( z + ε′2 ) . 3 : Estimate average stability over the batch per dimension as S = ( 1 / n ) ∑i ( y1,i − y2,i ) . 4 : Sum S across dimensions and optimize by gradient descent . See appendix A for a proof . For stability regularization without mean centering we clip the stability at a maximum value to prevent the network output from becoming overly saturated . In figure 2 this would correspond to the points becoming far from the boundary , leading to saturation and slowdown of optimization .
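Algorithm 1 is simple enough to sketch directly. The NumPy version below is only an illustration (a real implementation would live in an autodiff framework such as PyTorch so the stability term can be maximized by gradient descent, and names like `stability_term` are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stability_term(f_l, z, rho=0.9, alpha=0.95):
    """Algorithm 1: per-dimension batch stability, clipped at alpha, summed.

    f_l maps R^{n x k} into (0, 1)^m and z has shape (n, k). The returned
    scalar would be added (negated) to the training loss, so minimizing the
    loss maximizes stability up to the constraint alpha.
    """
    n, k = z.shape
    eps = rng.standard_normal((n, k))                                  # step 1
    eps_p = rho * eps + np.sqrt(1.0 - rho ** 2) * rng.standard_normal((n, k))
    y1, y2 = f_l(z + eps), f_l(z + eps_p)                              # step 2
    S = (y1 * y2).mean(axis=0)                                         # step 3
    S = np.clip(S, 0.0, alpha)                                         # step 4
    return S.sum()                                                     # step 5
```

The clip at α is what prevents the saturation failure mode described above: once a dimension's stability reaches α, pushing points further from the boundary yields no additional reward.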
Without mean centering , a constraint on stability limits how far the points can be from the boundary , improving optimization . Borell ’ s theorem guarantees that the function f will converge to a halfspace for any ρ . We did not observe the method to be sensitive to ρ in our experiments . The precise procedures are described in algorithms 1 and 2 . Stability Regularized Layers . The stability regularized neural network layers can be any arbitrary bounded-output neural network layer , possibly even a layer of activations without learned parameters . We use sigmoid activations for Bernoulli and softmax for categorical variables . Probabilistic Models and Gumbel-Softmax . We use stability regularization alongside Gumbel noise in probabilistic models such as VAEs where it is important to be able to compute log probabilities of obtained samples . Given a block of layers fl with a Gumbel softmax ( or Gumbel sigmoid ) activation , i.e. , fl = GumbelSoftmax ( logits ) , we compute stability using a standard softmax or sigmoid without adding the Gumbel noise , as Stabρ [ Softmax ( logits ) ] , and use the Gumbel softmax output as input to the downstream network . The optimization procedure with continuous relaxations provides no incentive to encourage discrete representations . The consequence is that such methods work better when there is little pressure from the optimization to be non-discrete , as happens with larger latent space dimension . With smaller bottleneck latent spaces , however , there is greater optimization pressure to be continuous , and the optimization with continuous relaxations becomes harder because of the need to manually tune the temperature . With stability regularization the regularization procedure is a form of implicit temperature control , and the extent of how discrete a representation becomes is controlled by the extent of regularization . Computational Complexity .
Stability regularization is easy to implement and adds some extra computations due to the extra evaluations for correlated Gaussians . We emphasize that any extra computation is local to the stability layer fl . The rest of the network is unaffected . Depending on the application , there usually exist only a few such layers in a large model , in which case the stability computation is a small fraction of the total cost and we do not observe a noticeable increase in computational cost in our experiments . | The paper proposes a method for regularizing neural networks with boolean and categorical stochastic variables. The proposed stability regularization is based on the Gaussian noise stability, which is maximized by the indicator functions of half spaces. Experiments with various benchmarks show promising results comparing to existing methods. | SP:74862f9ddc95a89390b8b10c5cbce49655173852 |
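The Gumbel-Softmax combination described above (stability computed on the noise-free Softmax ( logits ) while the downstream network consumes the Gumbel-Softmax sample) might look like the following minimal sketch. The function name, temperature handling, and return convention are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gumbel_softmax_with_stability(logits, rho=0.9, tau=1.0):
    """Return (relaxed categorical sample, stability of the clean softmax)."""
    n, k = logits.shape
    # Standard Gumbel-Softmax sample for the downstream network.
    g = -np.log(-np.log(rng.uniform(size=(n, k))))
    sample = softmax((logits + g) / tau)
    # Stab_rho[Softmax(logits)]: perturb the logits with a rho-correlated
    # Gaussian pair, adding no Gumbel noise in this branch.
    eps = rng.standard_normal((n, k))
    eps_p = rho * eps + np.sqrt(1.0 - rho ** 2) * rng.standard_normal((n, k))
    stab = (softmax(logits + eps) * softmax(logits + eps_p)).mean(axis=0).sum()
    return sample, stab
```

Maximizing the `stab` term pushes the logits toward noise-stable (near one-hot) configurations, which is why it acts as the implicit temperature control discussed above.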
Stability Regularization for Discrete Representation Learning | 1 INTRODUCTION Neural networks are universal approximators of continuous functions . Often , however , discrete computations are desirable , whether for the intermediate neurons and their representations ( Oord et al. , 2017 ) , the parameters ( Courbariaux et al . ) , or the outputs . Current methods for training neural networks require differentiability which means that it is not straightforward to train neural networks with discrete variables . This has led to the development of several approximate methods ( Williams , 1992 ; Jang et al. , 2017 ; Bengio et al. , 2013 ; Tucker et al. , 2017 ; Pervez et al. , 2020 ) with various trade-offs of bias , variance , and complexity . In this work we focus on neural networks with discrete intermediate representations . Building upon techniques from the analysis of functions in Gaussian spaces ( Janson et al. , 1997 ) , and specifically the notion of stability of Gaussian functions , we propose a novel regularization strategy on representations that yields precise and hassle-free discrete representations . Several approaches have been introduced in the literature for learning discrete representations with backpropagation . The simplest approach is the Straight-Through estimator ( Bengio et al. , 2013 ) , which essentially ignores the intermediate discrete function allowing the gradients to flow . Another popular choice is the Gumbel-Softmax ( Maddison et al. , 2017 ; Jang et al. , 2017 ) , which replaces the discrete categorical variables with relaxed stochastic continuous ones . In both approaches the discrete variables are replaced by approximations and the model is biased with respect to the original discrete objective . When employed with complex architectures , Straight-Through and Gumbel-Softmax estimators often underperform due to this bias , as in figure 1 . 
The reason is that with continuous relaxation methods there is a tension between obtaining better optima and objective function values , and obtaining discrete representations . Importantly , the more complex the optimization ( or the model ) is , the greater the pressure towards non-discrete solutions , thus increasing bias further . Adding to the complexity of obtaining discrete representations , with current methods there is no direct incentive for the optimization procedure to obtain discrete representations : with Gumbel-Softmax , the extent to which the obtained representation is close to discrete is controlled by a temperature variable , which must be manually tuned . Unbiased estimators like REINFORCE ( Williams , 1992 ) – and reduced variance extensions like REBAR ( Tucker et al. , 2017 ) and RELAX ( Grathwohl et al. , 2018 ) – have also been explored . However , these methods tend to be computationally expensive , which limits their usefulness for complex models . All in all , whether due to bias , high variance , high computational complexity , or the need for manual tuning , there remains a need for alternative methods for obtaining hassle-free discrete representations with neural networks , especially when increasing their complexity . In this work we present a regularization procedure for discrete representations , which can be used either as a standalone method , or in combination with existing continuous relaxation or straight-through estimators . In its standalone form , the method replaces a discrete variable by a parameterized continuous function whose output ( say a sigmoid or softmax function ) corresponds to the discrete variable , and which is then regularized to produce discrete outputs . In combination with continuous relaxations such as Gumbel-Softmax , the method can be used to regularize the logits input to the sampling procedure , serving as an implicit temperature control by making the logits noise stable .
We achieve this by resting upon the notion of noise stability developed in the analysis of Gaussian functions ( Borell , 1985 ; Mossel & Neeman , 2012 ) . Roughly speaking , the noise stability of a Gaussian function is a measure of its resilience to noise . Given a Gaussian function f and correlated Gaussian variables ε , ε′ ∈ Rd , the noise stability of f is defined as Stab [ f ] = Eε,ε′ [ f ( ε ) f ( ε′ ) ] . Borell ’ s isoperimetric theorem ( Borell , 1985 ) states that for bounded functions of some fixed volume with range [ 0 , 1 ] , noise stability is maximized by functions that are indicator functions of half spaces . Given that half space indicators maximize noise stability in Gaussian space , we suggest that optimizing stability is a very simple and effective method of transforming Gaussian inputs to binary vectors , thus simplifying the process of obtaining discrete representations . In summary , we demonstrate how the concept of noise stability can be used to regularize stochastic neural networks with Gaussian variables to train hassle-free neural networks with discrete ( Bernoulli or categorical ) variables . In the following , we first give a short introduction to noise stability in Gaussian analysis . We then motivate our proposal for using noise stability to regularize Gaussian functions for learning discrete representations . We validate by experiments in the Neural Variational Inference framework for learning graph structured latent spaces , learning discrete ( deterministic ) autoencoders , clustering with Gaussian Mixture VAEs , gating ResNets , and structured prediction . 2 NOISE STABILITY OF GAUSSIAN FUNCTIONS . 2.1 STABILITY AND GAUSSIAN ISOPERIMETRY . Noise Stability of a Gaussian function f : Rn → R is defined for a noise parameter ρ ∈ ( 0 , 1 ) as Stabρ [ f ] = Eε,ε′ [ f ( ε ) f ( ε′ ) ] , ( 1 ) ε′ = ρ ε + √ ( 1 − ρ2 ) ε′′ , ( 2 ) where ε , ε′ are called ρ-correlated Gaussian pairs and ε , ε′′ ∼ N ( 0 , 1 ) are samples from the standard normal distribution .
Stability is defined here in terms of standard normal Gaussian variables , but it is easily extended to any distribution of independent Gaussian variables by reparameterization , given the mean and standard deviation . For the special case where f = 1A , the indicator function of a set A , stability measures the probability that both ε and ε′ remain within A : Stabρ [ f ] = P [ ε ∈ A ∧ ε′ ∈ A ] . ( 3 ) By Borell ’ s Gaussian isoperimetric theorem ( Borell , 1985 ; Mossel & Neeman , 2012 ) , stability is related to the Gaussian isoperimetric inequality . According to the Gaussian isoperimetric inequality , geometric objects with minimum boundary ( i.e. , surface area ) in Gaussian space with fixed Gaussian volume ( i.e. , E [ f ] for f the object ’ s indicator function ) are half spaces . Theorem 1 ( Borell Isoperimetric Theorem ( Borell , 1985 ) ) . For fixed ρ ∈ ( 0 , 1 ) and f ∈ L2 ( Rn ) in Gaussian space with range [ 0 , 1 ] and fixed volume E [ f ] = α , Stabρ [ f ] is maximized by f = 1H where 1H is the indicator function of a half space with volume α . As a consequence , given a parameterized bounded Gaussian function , maximizing the stability makes the function f approach the indicator function of some half space as illustrated in figure 2 . For further details on noise stability in the context of Gaussian analysis of functions we refer the interested reader to ( O ’ Donnell , 2014 ) . 3 STABILITY REGULARIZATION . Stability regularization can be used either as a standalone method or in combination with Gumbel-Softmax style continuous relaxation . 3.1 REGULARIZATION FOR DISCRETE VARIABLES . We start from a continuous model , say a neural network with L layers or modules , f = fL ◦ · · · ◦ f1 ( x ) . Next , we describe how to employ stability regularization so that any arbitrary intermediate function fl ( z ; θ ) with bounded output learns to output discrete variables .
Given input z for fl we estimate the stability of fl and , thereafter , maximize it by adding it to the loss objective as a regularizing term . For a single input vector z ∈ Rk , following the definition of stability in equation ( 1 ) , we sample ρ-correlated Gaussian variables ε , ε′ as in equation ( 2 ) . We then evaluate fl twice : once for z + ε and once for z + ε′ . The expectation of their product , Eε,ε′ [ fl ( z + ε ) fl ( z + ε′ ) ] , is the stability Stabρ [ fl ( z ) ] . By maximizing the stability Stabρ [ fl ( z ) ] for a single input z and a fixed ρ ∈ ( 0 , 1 ) , the function fl approaches an indicator as described by Borell ’ s theorem . In a batch setting , we compute a Monte Carlo estimate of the expected stability over the input , that is Ez [ Stabρ [ fl ( z ) ] ] , by sampling one ρ-correlated Gaussian pair per batch element . Given z , ε , ε′ ∈ Rn×k , so that εi and ε′i are ρ-correlated Gaussian pairs , the estimate is computed as Ez [ Stabρ [ fl ( z ) ] ] ≈ ( 1 / n ) ∑i fl ( zi + εi ) fl ( zi + ε′i ) , ( 4 ) where n is the batch size and the arithmetic operations are done element-wise . To maximize stability we sum or average the estimate in equation ( 4 ) across dimensions and add the result as an additional regularization term to the loss function with which we train the model . 3.2 MEAN-CENTERED STABILITY REGULARIZATION . The regularization makes the function stable relative to correlated Gaussian noise by moving the inputs zi further apart . In some cases the inputs can become too far separated , which can hurt optimization if left uncontrolled . For such problematic cases we introduce mean-centered stability regularization , which preserves the expected value of fl given the input zi , ensuring that the induced separation remains limited . The idea behind mean-centered stability regularization is to compute the stability of fl − E [ fl ] so that maximizing stability reorients the separating hyperplane without changing the Gaussian volume of the corresponding halfspace .
According to Borell ’ s theorem , maximizing stability of fl for any fixed E [ fl ] causes it to approach a Gaussian halfspace indicator ( figure 2 ) . Since it can be expensive to compute the expectation of a neural network , we maximize the difference of two stability computations : given parameters ρ1 , ρ2 with ρ2 < ρ1 we optimize Stabρ1 [ fl ] − Stabρ2 [ fl ] . We can show that this objective is equal to Stabρ1−ρ2 [ fl − E [ fl ] ] to first order , with an error that is quadratic in ρ1 , ρ2 . Proposition 1 . Given a Gaussian function f : Rn → [ 0 , 1 ] and parameters ρ1 , ρ2 ∈ ( 0 , 1 ) , Stabρ1−ρ2 [ f − E [ f ] ] = Stabρ1 [ f ] − Stabρ2 [ f ] + O ( ρ2 ( ρ2 − ρ1 ) ) . Algorithm 1 Stability Regularization Require : Input z ∈ Rn×k ; stability layer fl with range ( 0 , 1 )m ; noise parameter ρ ∈ ( 0 , 1 ) ; stability constraint α ∈ ( 0 , 1 ) 1 : Sample ρ-correlated Gaussian vectors ε , ε′ ∈ Rn×k . 2 : Compute y1 = fl ( z + ε ) , y2 = fl ( z + ε′ ) . 3 : Estimate average stability over the batch per dimension as S = ( 1 / n ) ∑i y1,i y2,i . 4 : Apply the stability constraint per dimension : S = clip ( S , 0 , α ) . 5 : Sum S across dimensions and optimize by gradient descent . Algorithm 2 Stability Regularization with Mean Centering Require : Input z ∈ Rn×k ; stability layer fl with range ( 0 , 1 )m ; noise parameters ρ1 , ρ2 ∈ ( 0 , 1 ) , ρ2 < ρ1 1 : Sample ( ε1 , ε′1 ) ρ1-correlated and ( ε2 , ε′2 ) ρ2-correlated Gaussian vectors from Rn×k . 2 : Compute y1 = fl ( z + ε1 ) fl ( z + ε′1 ) , y2 = fl ( z + ε2 ) fl ( z + ε′2 ) . 3 : Estimate average stability over the batch per dimension as S = ( 1 / n ) ∑i ( y1,i − y2,i ) . 4 : Sum S across dimensions and optimize by gradient descent . See appendix A for a proof . For stability regularization without mean centering we clip the stability at a maximum value to prevent the network output from becoming overly saturated . In figure 2 this would correspond to the points becoming far from the boundary , leading to saturation and slowdown of optimization .
Without mean centering , a constraint on stability limits how far the points can be from the boundary , improving optimization . Borell ’ s theorem guarantees that the function f will converge to a halfspace for any ρ . We did not observe the method to be sensitive to ρ in our experiments . The precise procedures are described in algorithms 1 and 2 . Stability Regularized Layers . The stability regularized neural network layers can be any arbitrary bounded-output neural network layer , possibly even a layer of activations without learned parameters . We use sigmoid activations for Bernoulli and softmax for categorical variables . Probabilistic Models and Gumbel-Softmax . We use stability regularization alongside Gumbel noise in probabilistic models such as VAEs where it is important to be able to compute log probabilities of obtained samples . Given a block of layers fl with a Gumbel softmax ( or Gumbel sigmoid ) activation , i.e. , fl = GumbelSoftmax ( logits ) , we compute stability using a standard softmax or sigmoid without adding the Gumbel noise , as Stabρ [ Softmax ( logits ) ] , and use the Gumbel softmax output as input to the downstream network . The optimization procedure with continuous relaxations provides no incentive to encourage discrete representations . The consequence is that such methods work better when there is little pressure from the optimization to be non-discrete , as happens with larger latent space dimension . With smaller bottleneck latent spaces , however , there is greater optimization pressure to be continuous , and the optimization with continuous relaxations becomes harder because of the need to manually tune the temperature . With stability regularization the regularization procedure is a form of implicit temperature control , and the extent of how discrete a representation becomes is controlled by the extent of regularization . Computational Complexity .
Stability regularization is easy to implement and adds some extra computations due to the extra evaluations for correlated Gaussians . We emphasize that any extra computation is local to the stability layer fl . The rest of the network is unaffected . Depending on the application , there usually exist only a few such layers in a large model , in which case the stability computation is a small fraction of the total cost and we do not observe a noticeable increase in computational cost in our experiments . | The paper proposes an interesting regularization method for encouraging function values to be discrete. The main idea is very interesting: with two correlated random vectors, the function is encouraged to maximize the correlation of outputs from the two vectors. The function that maximizes the correlation is the indicator function of a half-space. Combining this objective and original training objectives, a learning model can output random values that are near 0 or 1. The model plays the same role as the Gumbel-softmax trick. The submission has done extensive experiments comparing the proposed technique against Gumbel-softmax. The results indicate that the new technique improves the performance in several learning tasks, though the improvement is not consistent. | SP:74862f9ddc95a89390b8b10c5cbce49655173852 |
Stability Regularization for Discrete Representation Learning | 1 INTRODUCTION Neural networks are universal approximators of continuous functions . Often , however , discrete computations are desirable , whether for the intermediate neurons and their representations ( Oord et al. , 2017 ) , the parameters ( Courbariaux et al . ) , or the outputs . Current methods for training neural networks require differentiability which means that it is not straightforward to train neural networks with discrete variables . This has led to the development of several approximate methods ( Williams , 1992 ; Jang et al. , 2017 ; Bengio et al. , 2013 ; Tucker et al. , 2017 ; Pervez et al. , 2020 ) with various trade-offs of bias , variance , and complexity . In this work we focus on neural networks with discrete intermediate representations . Building upon techniques from the analysis of functions in Gaussian spaces ( Janson et al. , 1997 ) , and specifically the notion of stability of Gaussian functions , we propose a novel regularization strategy on representations that yields precise and hassle-free discrete representations . Several approaches have been introduced in the literature for learning discrete representations with backpropagation . The simplest approach is the Straight-Through estimator ( Bengio et al. , 2013 ) , which essentially ignores the intermediate discrete function allowing the gradients to flow . Another popular choice is the Gumbel-Softmax ( Maddison et al. , 2017 ; Jang et al. , 2017 ) , which replaces the discrete categorical variables with relaxed stochastic continuous ones . In both approaches the discrete variables are replaced by approximations and the model is biased with respect to the original discrete objective . When employed with complex architectures , Straight-Through and Gumbel-Softmax estimators often underperform due to this bias , as in figure 1 . 
The reason is that with continuous relaxation methods there is a tension between obtaining better optima and objective function values , and obtaining discrete representations . Importantly , the more complex the optimization ( or the model ) is , the greater the pressure towards non-discrete solutions , thus increasing bias further . Adding to the complexity of obtaining discrete representations , with current methods there is no direct incentive for the optimization procedure to obtain discrete representations : with Gumbel-Softmax , the extent to which the obtained representation is close to discrete is controlled by a temperature variable , which must be manually tuned . Unbiased estimators like REINFORCE ( Williams , 1992 ) – and reduced variance extensions like REBAR ( Tucker et al. , 2017 ) and RELAX ( Grathwohl et al. , 2018 ) – have also been explored . However , these methods tend to be computationally expensive , which limits their usefulness for complex models . All in all , whether due to bias , high variance , high computational complexity , or the need for manual tuning , there remains a need for alternative methods for obtaining hassle-free discrete representations with neural networks , especially when increasing their complexity . In this work we present a regularization procedure for discrete representations , which can be used either as a standalone method , or in combination with existing continuous relaxation or straight-through estimators . In its standalone form , the method replaces a discrete variable by a parameterized continuous function whose output ( say a sigmoid or softmax function ) corresponds to the discrete variable , and which is then regularized to produce discrete outputs . In combination with continuous relaxations such as Gumbel-Softmax , the method can be used to regularize the logits input to the sampling procedure , serving as an implicit temperature control by making the logits noise stable .
We achieve this by building upon the notion of noise stability developed in the analysis of Gaussian functions (Borell, 1985; Mossel & Neeman, 2012). Roughly speaking, the noise stability of a Gaussian function is a measure of its resilience to noise. Given a Gaussian function f and ρ-correlated Gaussian variables ε, ε′ ∈ R^d, the noise stability of f is defined as Stab_ρ[f] = E_{ε,ε′}[f(ε)f(ε′)]. Borell’s isoperimetric theorem (Borell, 1985) states that among bounded functions with range [0, 1] and some fixed volume, noise stability is maximized by functions that are indicator functions of half spaces. Given that half-space indicators maximize noise stability in Gaussian space, we suggest that optimizing stability is a very simple and effective method of transforming Gaussian inputs into binary vectors, thus simplifying the process of obtaining discrete representations. In summary, we demonstrate how the concept of noise stability can be used to regularize stochastic neural networks with Gaussian variables in order to train hassle-free neural networks with discrete (Bernoulli or categorical) variables. In the following, we first give a short introduction to noise stability in Gaussian analysis. We then motivate our proposal for using noise stability to regularize Gaussian functions for learning discrete representations. We validate with experiments in the Neural Variational Inference framework for learning graph-structured latent spaces, learning discrete (deterministic) autoencoders, clustering with Gaussian Mixture VAEs, gating ResNets, and structured prediction. 2 NOISE STABILITY OF GAUSSIAN FUNCTIONS. 2.1 STABILITY AND GAUSSIAN ISOPERIMETRY. The noise stability of a Gaussian function f : R^n → R is defined for a noise parameter ρ ∈ (0, 1) as Stab_ρ[f] = E_{ε,ε′}[f(ε)f(ε′)], (1) with ε′ = ρε + √(1 − ρ²) ε″, (2) where ε, ε′ are called a ρ-correlated Gaussian pair and ε, ε″ ∼ N(0, 1) are samples from the standard normal distribution.
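To make the definition concrete, Stab_ρ can be estimated by Monte Carlo (a minimal pure-Python sketch, not the paper's implementation) and checked against a known closed form: for the half-space indicator f = 1[x₁ > 0], Sheppard's formula gives Stab_ρ[f] = 1/4 + arcsin(ρ)/(2π), i.e. 1/3 at ρ = 0.5.

```python
import math
import random

def stability(f, rho, d=1, n_samples=100_000, seed=0):
    """Monte Carlo estimate of Stab_rho[f] = E[f(eps) f(eps')], where
    eps' = rho * eps + sqrt(1 - rho^2) * eps'' and eps, eps'' ~ N(0, I_d)."""
    rng = random.Random(seed)
    c = math.sqrt(1.0 - rho * rho)
    total = 0.0
    for _ in range(n_samples):
        eps = [rng.gauss(0.0, 1.0) for _ in range(d)]
        eps_pp = [rng.gauss(0.0, 1.0) for _ in range(d)]
        eps_p = [rho * a + c * b for a, b in zip(eps, eps_pp)]
        total += f(eps) * f(eps_p)
    return total / n_samples

# Half-space indicator f = 1[x_1 > 0]; exact stability is
# 1/4 + arcsin(rho) / (2*pi) by Sheppard's formula.
halfspace = lambda x: 1.0 if x[0] > 0.0 else 0.0
est = stability(halfspace, rho=0.5)
```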
Stability is defined here in terms of standard normal Gaussian variables, but it is easily extended to any distribution of independent Gaussian variables by reparameterization, given the mean and standard deviation. For the special case where f = 1_A, the indicator function of a set A, stability measures the probability that both ε and ε′ remain within A: Stab_ρ[f] = P[ε ∈ A ∧ ε′ ∈ A]. (3) By Borell’s Gaussian isoperimetric theorem (Borell, 1985; Mossel & Neeman, 2012), stability is related to the Gaussian isoperimetric inequality. According to the Gaussian isoperimetric inequality, geometric objects with minimum boundary (i.e., surface area) in Gaussian space with fixed Gaussian volume (i.e., E[f] for f the object’s indicator function) are half spaces. Theorem 1 (Borell Isoperimetric Theorem (Borell, 1985)). For fixed ρ ∈ (0, 1) and f ∈ L²(R^n) in Gaussian space with range [0, 1] and fixed volume E[f] = α, Stab_ρ[f] is maximized by f = 1_H, where 1_H is an indicator function of a half space with volume α. As a consequence, given a parameterized bounded Gaussian function, maximizing the stability makes the function f approach the indicator function of some half space, as illustrated in figure 2. For further details on noise stability in the context of Gaussian analysis of functions, we refer the interested reader to O’Donnell (2014). 3 STABILITY REGULARIZATION. Stability regularization can be used either as a standalone method or in combination with Gumbel-Softmax style continuous relaxation. 3.1 REGULARIZATION FOR DISCRETE VARIABLES. We start from a continuous model, say a neural network with L layers or modules, f = f_L ∘ ··· ∘ f_1(x). Next, we describe how to employ stability regularization so that any arbitrary intermediate function f_l(z; θ) with bounded output learns to output discrete variables.
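Borell's theorem can also be checked numerically (a background sketch, not the paper's code): at equal Gaussian volume 0.5, a half space is more stable than a centred interval under ρ-correlated noise.

```python
import math
import random

def gauss_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def set_stability(indicator, rho, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P[eps in A and eps' in A] for a
    rho-correlated pair of standard Gaussians (equation (3))."""
    rng = random.Random(seed)
    c = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n_samples):
        e = rng.gauss(0.0, 1.0)
        e_p = rho * e + c * rng.gauss(0.0, 1.0)
        if indicator(e) and indicator(e_p):
            hits += 1
    return hits / n_samples

# Two sets of equal Gaussian volume 0.5: the half space {x <= 0} and the
# centred interval [-a, a] with Phi(a) = 0.75 (a found by bisection).
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gauss_cdf(mid) < 0.75 else (lo, mid)
a = 0.5 * (lo + hi)

s_half = set_stability(lambda x: x <= 0.0, rho=0.5)
s_interval = set_stability(lambda x: -a <= x <= a, rho=0.5)
```

The half-space estimate also matches the closed form 1/4 + arcsin(ρ)/(2π) = 1/3 at ρ = 0.5.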
Given the input z of f_l, we estimate the stability of f_l and thereafter maximize it by adding it to the loss objective as a regularizing term. For a single input vector z ∈ R^k, following the definition of stability in equation (1), we sample ρ-correlated Gaussian variables ε, ε′ as in equation (2). We then evaluate f_l twice: once for z + ε and once for z + ε′. The expectation of their product, E_{ε,ε′}[f_l(z + ε) f_l(z + ε′)], is the stability Stab_ρ[f_l(z)]. By maximizing the stability Stab_ρ[f_l(z)] of a single input z for a fixed ρ ∈ (0, 1), the function f_l approaches an indicator, as described by Borell’s theorem. In a batch setting, we compute a Monte Carlo estimate of the expected stability over the input, that is, E_z[Stab_ρ[f_l(z)]], by sampling one ρ-correlated Gaussian pair per batch element. Given z, ε, ε′ ∈ R^{n×k}, such that ε_i and ε′_i are ρ-correlated Gaussians, the estimate is computed as E_z[Stab_ρ[f_l(z)]] ≈ (1/n) Σ_i f_l(z_i + ε_i) f_l(z_i + ε′_i), (4) where n is the batch size and the arithmetic operations are done element-wise. To maximize stability, we sum or average the estimate in equation (4) across dimensions and add the result as an additional regularization term to the loss function with which we train the model. 3.2 MEAN-CENTERED STABILITY REGULARIZATION. The regularization makes the function stable relative to correlated Gaussian noise by moving the inputs z_i further apart. In some cases the inputs can become too far separated, which can hurt optimization if left uncontrolled. For such problematic cases we introduce mean-centered stability regularization, which preserves the expected value of f_l given the input z_i, ensuring that the induced separation remains limited. The idea behind mean-centered stability regularization is to compute the stability of f_l − E[f_l], so that maximizing stability reorients the separating hyperplane without changing the Gaussian volume of the corresponding half space.
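The batch estimate in equation (4) can be sketched as follows, assuming a plain sigmoid layer (illustrative only, not the authors' implementation): one ρ-correlated noise pair per batch element, an element-wise product of the two noisy evaluations, and an average over the batch.

```python
import math
import random

def sigmoid(v):
    # numerically stable logistic
    if v >= 0:
        return 1.0 / (1.0 + math.exp(-v))
    ev = math.exp(v)
    return ev / (1.0 + ev)

def batch_stability(z, rho, f=sigmoid, seed=0):
    """Per-dimension Monte Carlo estimate of E_z[Stab_rho[f(z)]] (eq. (4)).
    z is an n x k list of lists; returns a length-k list of stabilities."""
    rng = random.Random(seed)
    n, k = len(z), len(z[0])
    c = math.sqrt(1.0 - rho * rho)
    stab = [0.0] * k
    for zi in z:
        for j in range(k):
            e = rng.gauss(0.0, 1.0)
            e_p = rho * e + c * rng.gauss(0.0, 1.0)
            stab[j] += f(zi[j] + e) * f(zi[j] + e_p)
    return [s / n for s in stab]
```

Inputs far from the decision boundary give stability near 1 (the unit is effectively discrete), while inputs near zero give a much lower value, which is what the regularizer pushes up.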
According to Borell’s theorem, maximizing the stability of f_l for any fixed E[f_l] causes it to approach a Gaussian half-space indicator (figure 2). Since it can be expensive to compute the expectation of a neural network, we instead maximize the difference of two stability computations: given parameters ρ1, ρ2 with ρ2 < ρ1, we optimize Stab_ρ1[f_l] − Stab_ρ2[f_l]. We can show that this objective equals Stab_{ρ1−ρ2}[f_l − E[f_l]] to first order, with an error that is quadratic in ρ1, ρ2. Proposition 1. Given a Gaussian function f : R^n → [0, 1] and parameters ρ1, ρ2 ∈ (0, 1), Stab_{ρ1−ρ2}[f − E[f]] = Stab_ρ1[f] − Stab_ρ2[f] + O(ρ2(ρ2 − ρ1)).

Algorithm 1: Stability Regularization
Require: input z ∈ R^{n×k}; stability layer f_l with range (0, 1)^m; noise parameter ρ ∈ (0, 1); stability constraint α ∈ (0, 1)
1: Sample ε, ε′ ∈ R^{n×k}, ρ-correlated Gaussian vectors.
2: Compute y1 = f_l(z + ε), y2 = f_l(z + ε′).
3: Estimate the average stability over the batch per dimension as S = (1/n) Σ_i y1,i y2,i.
4: Apply the stability constraint per dimension: S = clip(S, 0, α).
5: Sum S across dimensions and optimize by gradient descent.

Algorithm 2: Stability Regularization with Mean Centering
Require: input z ∈ R^{n×k}; stability layer f_l with range (0, 1)^m; noise parameters ρ1, ρ2 ∈ (0, 1) with ρ2 < ρ1
1: Sample (ε1, ε′1) ρ1-correlated and (ε2, ε′2) ρ2-correlated Gaussian vectors from R^{n×k}.
2: Compute y1 = f_l(z + ε1) f_l(z + ε′1), y2 = f_l(z + ε2) f_l(z + ε′2).
3: Estimate the average stability over the batch per dimension as S = (1/n) Σ_i (y1,i − y2,i).
4: Sum S across dimensions and optimize by gradient descent.

See appendix A for a proof. For stability regularization without mean centering, we clip the stability at a maximum value to prevent the network output from becoming overly saturated. In figure 2 this would correspond to the points moving far from the boundary, leading to saturation and a slowdown of optimization.
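Algorithm 1 can be sketched in pure Python (a forward-computation illustration only; in practice f_l is a differentiable layer and the clipped sum is maximized by gradient ascent):

```python
import math
import random

def sigmoid(v):
    # numerically stable logistic
    if v >= 0:
        return 1.0 / (1.0 + math.exp(-v))
    ev = math.exp(v)
    return ev / (1.0 + ev)

def stability_regularizer(z, f=sigmoid, rho=0.5, alpha=0.9, seed=0):
    """Sketch of Algorithm 1 (clipped, no mean centering): per-dimension
    batch stability, clipped to [0, alpha] so saturated units stop receiving
    pressure, then summed across dimensions. The returned scalar is to be
    *maximized*, e.g. subtracted from the training loss."""
    rng = random.Random(seed)
    n, k = len(z), len(z[0])
    c = math.sqrt(1.0 - rho * rho)
    S = [0.0] * k
    for zi in z:
        for j in range(k):
            e = rng.gauss(0.0, 1.0)
            e_p = rho * e + c * rng.gauss(0.0, 1.0)
            S[j] += f(zi[j] + e) * f(zi[j] + e_p)   # steps 1-3
    S = [min(max(s / n, 0.0), alpha) for s in S]    # step 4: clip(S, 0, alpha)
    return sum(S)                                   # step 5: sum over dims
```

With heavily saturated inputs, every dimension hits the clip value α, so the regularizer contributes no further gradient, which is the intended effect of the constraint.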
Without mean centering, a constraint on stability limits how far the points can be from the boundary, improving optimization. Borell’s theorem guarantees that the function f will converge to a half space for any ρ, and we did not observe the method to be sensitive to ρ in our experiments. The precise procedures are described in algorithms 1 and 2. Stability Regularized Layers. The stability regularized layers can be any arbitrary bounded-output neural network layers, possibly even a layer of activations without learned parameters. We use sigmoid activations for Bernoulli and softmax for categorical variables. Probabilistic Models and Gumbel-Softmax. We use stability regularization alongside Gumbel noise in probabilistic models such as VAEs, where it is important to be able to compute log probabilities of obtained samples. Given a block of layers f_l with a Gumbel softmax (or Gumbel sigmoid) activation, i.e., f_l = GumbelSoftmax(logits), we compute stability using a standard softmax or sigmoid without adding the Gumbel noise, i.e., Stab_ρ[Softmax(logits)], and use the Gumbel softmax output as input to the downstream network. The optimization procedure with continuous relaxations provides no incentive to encourage discrete representations. The consequence is that such methods work better when there is little optimization pressure to be non-discrete, as happens with a larger latent space dimension. With smaller bottleneck latent spaces, however, there is greater optimization pressure to be continuous, and optimization with continuous relaxations becomes harder because of the need to manually tune the temperature. With stability regularization, the regularization procedure is a form of implicit temperature control, and the extent to which a representation becomes discrete is controlled by the strength of the regularization. Computational Complexity.
Stability regularization is easy to implement and adds some extra computations due to the extra evaluations for correlated Gaussians . We emphasize that any extra computation is local to the stability layer fl . The rest of the network is unaffected . Depending on the application , there usually exist only a few such layers in a large model , in which case the stability computation is a small fraction of the total cost and we do not observe a noticeable increase in computational cost in our experiments . | The paper presents a new method to train neural networks with stochastic discrete variables called stability regularisation. The method pushes the outputs of functions of Gaussian random variables to be close to discrete, and unlike other methods used with discrete variables, such as Gumbel Softmax which requires temperature annealing, it is easy to tune. The method is demonstrated on a very rich variety of tasks and models and shows state of the art performance. | SP:74862f9ddc95a89390b8b10c5cbce49655173852 |
Pretext Tasks Selection for Multitask Self-Supervised Speech Representation Learning | Through solving pretext tasks, self-supervised learning leverages unlabeled data to extract useful latent representations replacing traditional input features in the downstream task. In audio/speech signal processing, a wide range of features were engineered through decades of research efforts. As it turns out, learning to predict such features (a.k.a. pseudo-labels) has proven to be a particularly relevant pretext task, leading to useful self-supervised representations that are effective on downstream tasks. However, methods and common practices for combining such pretext tasks for better performance on the downstream task have not been explored and understood properly. In fact, the process relies almost exclusively on a computationally heavy experimental procedure, which becomes intractable as the number of pretext tasks increases. This paper introduces a method to select a group of pretext tasks among a set of candidates. The method we propose estimates calibrated weights for the partial losses corresponding to the considered pretext tasks during the self-supervised training process. The experiments conducted on automatic speech recognition, speaker and emotion recognition validate our approach, as the groups selected and weighted with our method perform better than classic baselines, thus facilitating the selection and combination of relevant pseudo-labels for self-supervised representation learning. 1 INTRODUCTION. Self-supervised learning (SSL) methods usually rely on supervision obtained from the data itself through solving specific pretext tasks leveraging the underlying structure of the considered data (Doersch et al., 2016; Arandjelovic & Zisserman, 2018). This technique is used in various domains including image processing (Misra & Maaten, 2020; Jing & Tian, 2020; Grill et al.
, 2020), natural language understanding (Chen et al., 2020b; Du et al., 2020; Lan et al., 2019) or speech and audio processing (Baevski et al., 2020b; Liu et al., 2020; Jiang et al., 2020). It offers numerous advantages, such as independence from labeled data, stronger performance on downstream tasks, more robust models, and an easier transfer to low-resource setups (e.g., low-resource languages) (Baevski et al., 2020b; Jing & Tian, 2020). The numerous existing SSL approaches are characterized by the nature of the pretext tasks they solve. For instance, common techniques include predictive coding (Baevski et al., 2020b; Liu et al., 2020; Song et al., 2020; Zhang et al., 2020; Hsu et al., 2021), pseudo-label learning (Pascual et al., 2019; Ravanelli et al., 2020), auto-encoding (Renshaw et al., 2015; Algayres et al., 2020), triplet-loss learning (Shor et al., 2020; Peplinski et al., 2020), generative modelling (Khurana et al., 2020) or contrastive learning (Saeed et al., 2020; Jiang et al., 2020). More precisely, these pretext tasks may be defined through the choice of pretext labels, hereafter referred to as pseudo-labels. The automatic extraction of pseudo-labels for SSL (i.e., from the data itself) is common in many application domains, such as computer vision (Noroozi & Favaro, 2017; Gidaris et al., 2018), music processing (Hung et al., 2019; Wu et al., 2021) and speech processing (Pascual et al., 2019; Shukla et al., 2020), and is commonly referred to as multitask self-supervised learning. In the specific context of speech processing, the process of designing pseudo-labels may benefit from decades of research in signal processing. For instance, potential candidates are pitch estimators, energy-based features, voicing state and many more. As demonstrated by Pascual et al.
(2019), multitask speech representation learning is a powerful tool to build representations that are beneficial for a wide range of distinct downstream tasks, by combining different pseudo-labels which “intuitively” correspond to these tasks. Unfortunately, there is no clear understanding of how these pseudo-labels may interact when optimised together, and therefore no common practice for selecting groups of pseudo-labels to obtain better performance on a known downstream task. As a matter of fact, this design process has been essentially driven by empirical validation, and there is therefore no evidence that the obtained model is even the best one. This empirical approach can rapidly become intractable with modern SSL architectures, which may contain hundreds of millions or billions of parameters trained on thousands of hours of speech, not to mention the carbon footprint of such pseudo-label searches. For instance, the self-supervised training of a single state-of-the-art large wav2vec 2.0 model (Baevski et al., 2020b) on 53.2k hours of speech requires 128 GPUs for 5.2 days. This work aims at providing a clear, efficient and theoretically motivated procedure for pseudo-label group selection and weighting based on conditional independence (CI). The presented method allows one to design, ahead of training, the multitask self-supervised speech representation learning model best suited to the considered downstream tasks. Such an approach may also enable researchers to save a substantial amount of time and computation usually devoted to pseudo-label search. Hence, the contributions of this work are fourfold: 1. Introduce a theoretically motivated and computationally efficient method for the selection of pseudo-label groups among a set of candidates and with respect to the considered downstream tasks (Sections 3 and 4). 2.
Empirically validate the proposed approach with a first SSL model relying on different sets of pseudo-labels corresponding to the ones obtained for the three considered speech tasks (Section 5). 3. Extend our method to the SOTA wav2vec 2.0 to enhance its performance (Section 6). 4. Release the code base developed with SpeechBrain (Ravanelli et al., 2021) for replication and to encourage further investigations.1 The conducted experiments demonstrate that the proposed method allows for a more intelligent, i.e., better informed, pseudo-label group selection for multitask SSL settings. Indeed, we find that the models built with the proposed method obtain a word error rate and an equal error rate, respectively, 31.6% and 27.4% lower than the baseline, without the need for any empirical search. 2 RELATED WORKS AND MOTIVATIONS. SSL recently became a key component in achieving good performance on downstream tasks, especially in low-resource setups, whether in speech (Baevski et al., 2020b; Conneau et al., 2020), natural language processing (Lan et al., 2019; Chen et al., 2020b) or computer vision (Gidaris et al., 2019; Misra & Maaten, 2020; Jing & Tian, 2020). Due to its very nature, SSL relies on large amounts of unlabeled data used to train large deep neural networks over long periods of time. It is thus crucial to understand properly what makes a good SSL model, in order to lower the amount of computation and time needed to obtain the best downstream performance. SSL for Speech. Self-supervised learning for speech has recently enabled researchers to reach state-of-the-art results on various speech processing tasks (Fan et al., 2021). The most successful models rely on predictive and contrastive objectives (Baevski et al., 2020b; Chung et al., 2019; Hsu et al., 2021; Shor et al., 2021), performing well across the different tasks even in low-resource settings.
This led to the design of different benchmarks evaluating the self-supervised representations in different languages (Yang et al., 2021; Evain et al., 2021). However, in contrast to this proposition, these works have not tried to motivate beforehand the different choices made in the self-supervision pipeline. Understanding SSL. A few works have tried to shed some theoretical light on the mainly empirical field of self-supervised learning. Following the different paradigms in SSL, various tracks have been followed to understand what makes for a good self-supervised representation, exploring different approaches (Lee et al., 2020; Arora et al., 2019; Wei et al., 2020). [1: https://github.com/iclrsubmitter22/iclr_2022_submission] On the one hand, contrastive learning (Oord et al., 2018; Chen et al., 2020a) has been advocated both theoretically and empirically to achieve a balance in the mutual information between alternative representations of the data, keeping just enough shared information to retain the class-related content (Tschannen et al., 2020; Tian et al., 2020; Bachman et al., 2019). In a recent work from Li et al. (2021), independence testing has been used to produce better transformations in contrastive learning settings for image representations. Predictive learning, on the other hand, requires the model to predict masked elements in sequential data. This technique is powerful on downstream tasks that can be reduced to a masking problem, as suggested by research on language modeling (Saunshi et al., 2020). However, all these works have been focusing on computer vision or text-related applications, and none of them addressed the multitask self-supervision problem. Multi-task self-supervised learning.
While the literature on multi-tasking in self-supervised learning remains scarce, it has been shown in classic supervised learning settings that, through estimates of similarity between tasks or thorough empirical testing, several tasks can take advantage of being solved with a common encoder (Zamir et al., 2018; Dwivedi & Roig, 2019; Shafey et al., 2019; Chen et al., 2015). More specifically, combining pretext tasks with SSL has been mainly explored in computer vision and speech (Pascual et al., 2019; Ravanelli et al., 2020). Pretext tasks such as Jigsaw (Doersch et al., 2016), colourisation and rotation (Gidaris et al., 2018) have been combined successfully to improve downstream performance (Kim et al., 2018; Shin’ya Yamaguchi et al.). The two closest works to our line of research are from Lee et al. (2020) and Doersch et al. (2017). The former shows that a theoretical link can be established between conditional independence and an improvement of the performance on the downstream task, while the latter proposes to select layers from a multitask self-supervised encoder according to the pretext task to be solved. However, in both cases, the studies do not offer practical and theoretical solutions for selecting groups of pseudo-labels to build an adapted SSL model that will perform well on the considered downstream tasks. Group feature selection. Finally, feature selection, and especially group feature selection, is another close and inspiring field given the problem we consider. The relationships and interactions between features have been largely investigated in the supervised learning literature (Guyon & Elisseeff, 2003). This led to multiple solutions to the feature group selection problem, including LASSO-based techniques (Yuan & Lin, 2006) or multiple kernel formulations (Sonnenburg et al., 2006; Rakotomamonjy et al., 2007).
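One standard kernel measure of statistical (in)dependence that such independence-based selection procedures can build on is the Hilbert-Schmidt Independence Criterion (HSIC). As background only, and not as the authors' implementation, the textbook biased empirical HSIC estimator for two scalar samples can be sketched in pure Python:

```python
import math
import random

def rbf_gram(xs, gamma=1.0):
    """RBF kernel Gram matrix K_ij = exp(-gamma * (x_i - x_j)^2)."""
    n = len(xs)
    return [[math.exp(-gamma * (xs[i] - xs[j]) ** 2) for j in range(n)]
            for i in range(n)]

def center(K):
    """Double centering HKH with H = I - (1/n) 11^T."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def hsic(xs, ys, gamma=1.0):
    """Biased empirical HSIC estimate, (1/n^2) tr(HKH HLH); larger values
    indicate stronger statistical dependence between the two samples."""
    Kc, Lc = center(rbf_gram(xs, gamma)), center(rbf_gram(ys, gamma))
    n = len(xs)
    return sum(Kc[i][j] * Lc[j][i] for i in range(n) for j in range(n)) / (n * n)
```

A perfectly dependent pair (y = x) scores well above an independent pair, which is the property a selection criterion needs.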
However , these works do not involve any self-supervision , and links between feature selection and self-supervision design and pretext-task selection are yet to be proved . We will further consider these lines of works for concurrent baselines . With this work , we aim at shortening the process of designing SSL models while giving insights on the pseudo-label importance and the underlying mechanisms between pretext and downstream tasks at the same time . We decided to experiment with speech due to the lack of literature on this domain for multitask SSL , and for the various pseudo-labels available , which are based on decades of signal processing research . The whole pipeline starting from the acoustic feature extraction to the downstream task scoring follows three major steps summarized in Figure 1 . First , for every downstream task , our method produces a pretext task selection and weighting . Then , a SSL model is trained , before being used as a feature extractor front-end to one or many downstream tasks . | This paper proposes a method to select a group of pretext tasks from a given set of tasks for optimising the network training during self-supervised learning phase. The weights for the given set of tasks are learned based on Hilbert Schmidt Independence Criterion (HSIC) using a few data samples. Using 2 downstream tasks, the authors show that their approach could benefit the learning of features relevant for the downstream tasks. Thus, improves the accuracy of these downstream tasks. | SP:7e79ce46dc3e4421878c395346d690aa77284dd7 |
Pretext Tasks Selection for Multitask Self-Supervised Speech Representation Learning | Through solving pretext tasks, self-supervised learning leverages unlabeled data to extract useful latent representations replacing traditional input features in the downstream task. In audio/speech signal processing, a wide range of features were engineered through decades of research efforts. As it turns out, learning to predict such features (a.k.a. pseudo-labels) has proven to be a particularly relevant pretext task, leading to useful self-supervised representations that are effective on downstream tasks. However, methods and common practices for combining such pretext tasks for better performance on the downstream task have not been explored and understood properly. In fact, the process relies almost exclusively on a computationally heavy experimental procedure, which becomes intractable as the number of pretext tasks increases. This paper introduces a method to select a group of pretext tasks among a set of candidates. The method we propose estimates calibrated weights for the partial losses corresponding to the considered pretext tasks during the self-supervised training process. The experiments conducted on automatic speech recognition, speaker and emotion recognition validate our approach, as the groups selected and weighted with our method perform better than classic baselines, thus facilitating the selection and combination of relevant pseudo-labels for self-supervised representation learning. 1 INTRODUCTION. Self-supervised learning (SSL) methods usually rely on supervision obtained from the data itself through solving specific pretext tasks leveraging the underlying structure of the considered data (Doersch et al., 2016; Arandjelovic & Zisserman, 2018). This technique is used in various domains including image processing (Misra & Maaten, 2020; Jing & Tian, 2020; Grill et al.
, 2020), natural language understanding (Chen et al., 2020b; Du et al., 2020; Lan et al., 2019) or speech and audio processing (Baevski et al., 2020b; Liu et al., 2020; Jiang et al., 2020). It offers numerous advantages, such as independence from labeled data, stronger performance on downstream tasks, more robust models, and an easier transfer to low-resource setups (e.g., low-resource languages) (Baevski et al., 2020b; Jing & Tian, 2020). The numerous existing SSL approaches are characterized by the nature of the pretext tasks they solve. For instance, common techniques include predictive coding (Baevski et al., 2020b; Liu et al., 2020; Song et al., 2020; Zhang et al., 2020; Hsu et al., 2021), pseudo-label learning (Pascual et al., 2019; Ravanelli et al., 2020), auto-encoding (Renshaw et al., 2015; Algayres et al., 2020), triplet-loss learning (Shor et al., 2020; Peplinski et al., 2020), generative modelling (Khurana et al., 2020) or contrastive learning (Saeed et al., 2020; Jiang et al., 2020). More precisely, these pretext tasks may be defined through the choice of pretext labels, hereafter referred to as pseudo-labels. The automatic extraction of pseudo-labels for SSL (i.e., from the data itself) is common in many application domains, such as computer vision (Noroozi & Favaro, 2017; Gidaris et al., 2018), music processing (Hung et al., 2019; Wu et al., 2021) and speech processing (Pascual et al., 2019; Shukla et al., 2020), and is commonly referred to as multitask self-supervised learning. In the specific context of speech processing, the process of designing pseudo-labels may benefit from decades of research in signal processing. For instance, potential candidates are pitch estimators, energy-based features, voicing state and many more. As demonstrated by Pascual et al.
(2019), multitask speech representation learning is a powerful tool to build representations that are beneficial for a wide range of distinct downstream tasks, by combining different pseudo-labels which “intuitively” correspond to these tasks. Unfortunately, there is no clear understanding of how these pseudo-labels may interact when optimised together, and therefore no common practice for selecting groups of pseudo-labels to obtain better performance on a known downstream task. As a matter of fact, this design process has been essentially driven by empirical validation, and there is therefore no evidence that the obtained model is even the best one. This empirical approach can rapidly become intractable with modern SSL architectures, which may contain hundreds of millions or billions of parameters trained on thousands of hours of speech, not to mention the carbon footprint of such pseudo-label searches. For instance, the self-supervised training of a single state-of-the-art large wav2vec 2.0 model (Baevski et al., 2020b) on 53.2k hours of speech requires 128 GPUs for 5.2 days. This work aims at providing a clear, efficient and theoretically motivated procedure for pseudo-label group selection and weighting based on conditional independence (CI). The presented method allows one to design, ahead of training, the multitask self-supervised speech representation learning model best suited to the considered downstream tasks. Such an approach may also enable researchers to save a substantial amount of time and computation usually devoted to pseudo-label search. Hence, the contributions of this work are fourfold: 1. Introduce a theoretically motivated and computationally efficient method for the selection of pseudo-label groups among a set of candidates and with respect to the considered downstream tasks (Sections 3 and 4). 2.
Empirically validate the proposed approach with a first SSL model relying on different sets of pseudo-labels corresponding to the ones obtained for the three considered speech tasks (Section 5). 3. Extend our method to the SOTA wav2vec 2.0 to enhance its performance (Section 6). 4. Release the code base developed with SpeechBrain (Ravanelli et al., 2021) for replication and to encourage further investigations.1 The conducted experiments demonstrate that the proposed method allows for a more intelligent, i.e., better informed, pseudo-label group selection for multitask SSL settings. Indeed, we find that the models built with the proposed method obtain a word error rate and an equal error rate, respectively, 31.6% and 27.4% lower than the baseline, without the need for any empirical search. 2 RELATED WORKS AND MOTIVATIONS. SSL recently became a key component in achieving good performance on downstream tasks, especially in low-resource setups, whether in speech (Baevski et al., 2020b; Conneau et al., 2020), natural language processing (Lan et al., 2019; Chen et al., 2020b) or computer vision (Gidaris et al., 2019; Misra & Maaten, 2020; Jing & Tian, 2020). Due to its very nature, SSL relies on large amounts of unlabeled data used to train large deep neural networks over long periods of time. It is thus crucial to understand properly what makes a good SSL model, in order to lower the amount of computation and time needed to obtain the best downstream performance. SSL for Speech. Self-supervised learning for speech has recently enabled researchers to reach state-of-the-art results on various speech processing tasks (Fan et al., 2021). The most successful models rely on predictive and contrastive objectives (Baevski et al., 2020b; Chung et al., 2019; Hsu et al., 2021; Shor et al., 2021), performing well across the different tasks even in low-resource settings.
This led to the design of different benchmarks evaluating the self-supervised representations in different languages (Yang et al., 2021; Evain et al., 2021). However, in contrast to this proposition, these works have not tried to motivate beforehand the different choices made in the self-supervision pipeline. Understanding SSL. A few works have tried to shed some theoretical light on the mainly empirical field of self-supervised learning. Following the different paradigms in SSL, various tracks have been followed to understand what makes for a good self-supervised representation, exploring different approaches (Lee et al., 2020; Arora et al., 2019; Wei et al., 2020). [1: https://github.com/iclrsubmitter22/iclr_2022_submission] On the one hand, contrastive learning (Oord et al., 2018; Chen et al., 2020a) has been advocated both theoretically and empirically to achieve a balance in the mutual information between alternative representations of the data, keeping just enough shared information to retain the class-related content (Tschannen et al., 2020; Tian et al., 2020; Bachman et al., 2019). In a recent work from Li et al. (2021), independence testing has been used to produce better transformations in contrastive learning settings for image representations. Predictive learning, on the other hand, requires the model to predict masked elements in sequential data. This technique is powerful on downstream tasks that can be reduced to a masking problem, as suggested by research on language modeling (Saunshi et al., 2020). However, all these works have been focusing on computer vision or text-related applications, and none of them addressed the multitask self-supervision problem. Multi-task self-supervised learning.
While the literature on multi-tasking in self-supervised learning remains scarce , it has been shown in classic supervised learning settings that , through estimates of similarity between tasks or thorough empirical testing , several tasks can take advantage of being solved with a common encoder ( Zamir et al. , 2018 ; Dwivedi & Roig , 2019 ; Shafey et al. , 2019 ; Chen et al. , 2015 ) . More specifically , combining pretext tasks with SSL has been mainly explored in computer vision and speech ( Pascual et al. , 2019 ; Ravanelli et al. , 2020 ) . Pretext tasks such as Jigsaw ( Doersch et al. , 2016 ) , colourisation and rotation ( Gidaris et al. , 2018 ) have been combined successfully to improve downstream performance ( Kim et al. , 2018 ; Shin'ya Yamaguchi et al. ) . The two closest works to our line of research are from Lee et al . ( 2020 ) and Doersch et al . ( 2017 ) . The former shows that a theoretical link can be established between conditional independence and an improvement of the performance on the downstream task , while the latter proposes to select layers from a multitask self-supervised encoder according to the pretext task to be solved . However , in both cases , the studies do not offer practical and theoretical solutions to select groups of pseudo-labels to build an adapted SSL model that will perform well on the considered downstream tasks . Group feature selection . Finally , feature selection , and especially group feature selection , is another close and inspiring field given the problem we consider . The relationship and interactions between features have been largely investigated in the supervised learning literature ( Guyon & Elisseeff , 2003 ) . This led to multiple solutions to the feature group selection problem , including LASSO-based techniques ( Yuan & Lin , 2006 ) , or multiple kernel formulations ( Sonnenburg et al. , 2006 ; Rakotomamonjy et al. , 2007 ) .
However , these works do not involve any self-supervision , and links between feature selection and self-supervision design and pretext-task selection are yet to be proved . We will further consider these lines of work as concurrent baselines . With this work , we aim at shortening the process of designing SSL models while giving insights into the pseudo-label importance and the underlying mechanisms between pretext and downstream tasks at the same time . We decided to experiment with speech due to the lack of literature in this domain for multitask SSL , and for the various pseudo-labels available , which are based on decades of signal processing research . The whole pipeline starting from the acoustic feature extraction to the downstream task scoring follows three major steps summarized in Figure 1 . First , for every downstream task , our method produces a pretext task selection and weighting . Then , an SSL model is trained , before being used as a feature extractor front-end to one or many downstream tasks . | This paper describes the authors' proposal for selecting pretext tasks (pseudo-labels) for multitask self-supervised learning, to improve self-supervised learning for speech representations. It leveraged the Hilbert Schmidt Independence Criterion (HSIC) for selecting pseudo-label loss weights and used softmax and its sparsity version to realize it. Experimental results on speech, speaker and emotion recognition showed effectiveness of the proposed approach. | SP:7e79ce46dc3e4421878c395346d690aa77284dd7 |
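The weighting scheme described in this row, calibrated weights over the partial losses of candidate pseudo-labels during multitask SSL training, can be sketched as follows. All names, the choice of mean-squared error per pseudo-label, and the example weights are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def weighted_pseudo_label_loss(predictions, targets, weights):
    """Combine per-pseudo-label regression losses with calibrated weights.

    predictions, targets: dicts mapping pseudo-label name -> array of values.
    weights: dict mapping pseudo-label name -> non-negative weight
             (e.g. obtained from a conditional-independence estimate).
    """
    total = 0.0
    for name, w in weights.items():
        mse = np.mean((predictions[name] - targets[name]) ** 2)
        total += w * mse
    return total

# Hypothetical example: three classic signal-processing pseudo-labels.
rng = np.random.default_rng(0)
preds = {k: rng.normal(size=100) for k in ("f0", "energy", "voicing")}
targs = {k: rng.normal(size=100) for k in ("f0", "energy", "voicing")}
weights = {"f0": 0.5, "energy": 0.3, "voicing": 0.2}  # softmax-style, sums to 1
loss = weighted_pseudo_label_loss(preds, targs, weights)
```

In practice each pseudo-label loss would be backpropagated through a shared encoder; the sketch only shows how the calibrated weights enter the combined objective.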
Pretext Tasks Selection for Multitask Self-Supervised Speech Representation Learning | Through solving pretext tasks , self-supervised learning leverages unlabeled data to extract useful latent representations replacing traditional input features in the downstream task . In audio/speech signal processing , a wide range of features were engineered through decades of research efforts . As it turns out , learning to predict such features ( a.k.a. pseudo-labels ) has proven to be a particularly relevant pretext task , leading to useful self-supervised representations that are effective for downstream tasks . However , methods and common practices for combining such pretext tasks for better performance on the downstream task have not been explored and understood properly . In fact , the process relies almost exclusively on a computationally heavy experimental procedure , which becomes intractable with the increase of the number of pretext tasks . This paper introduces a method to select a group of pretext tasks among a set of candidates . The method we propose estimates calibrated weights for the partial losses corresponding to the considered pretext tasks during the self-supervised training process . The experiments conducted on automatic speech recognition , speaker and emotion recognition validate our approach , as the groups selected and weighted with our method perform better than classic baselines , thus facilitating the selection and combination of relevant pseudo-labels for self-supervised representation learning . 1 INTRODUCTION . Self-supervised learning ( SSL ) methods usually rely on a supervision obtained from the data itself through solving specific pretext tasks leveraging the underlying structure of the considered data ( Doersch et al. , 2016 ; Arandjelovic & Zisserman , 2018 ) . This technique is used in various domains including image processing ( Misra & Maaten , 2020 ; Jing & Tian , 2020 ; Grill et al.
, 2020 ) , natural language understanding ( Chen et al. , 2020b ; Du et al. , 2020 ; Lan et al. , 2019 ) or speech and audio processing ( Baevski et al. , 2020b ; Liu et al. , 2020 ; Jiang et al. , 2020 ) . It offers numerous advantages , such as the independence from labeled data , stronger performance on downstream tasks , more robust models and an easier transfer to low-resource setups ( e.g. , low-resource languages ) ( Baevski et al. , 2020b ; Jing & Tian , 2020 ) . The numerous existing SSL approaches are characterized by the nature of the pretext tasks they solve . For instance , common techniques include predictive coding ( Baevski et al. , 2020b ; Liu et al. , 2020 ; Song et al. , 2020 ; Zhang et al. , 2020 ; Hsu et al. , 2021 ) , pseudo-label learning ( Pascual et al. , 2019 ; Ravanelli et al. , 2020 ) , auto-encoding ( Renshaw et al. , 2015 ; Algayres et al. , 2020 ) , triplet-loss learning ( Shor et al. , 2020 ; Peplinski et al. , 2020 ) , generative modelling ( Khurana et al. , 2020 ) or contrastive learning ( Saeed et al. , 2020 ; Jiang et al. , 2020 ) . More precisely , these pretext tasks may be defined through the choice of pretext labels , hereafter referred to as pseudo-labels . The automatic extraction of pseudo-labels for SSL ( i.e . from the data itself ) is common in many application domains , such as computer vision ( Noroozi & Favaro , 2017 ; Gidaris et al. , 2018 ) , music processing ( Hung et al. , 2019 ; Wu et al. , 2021 ) and speech processing ( Pascual et al. , 2019 ; Shukla et al. , 2020 ) , and is commonly referred to as multitask self-supervised learning . In the specific context of speech processing , the process of designing pseudo-labels may benefit from decades of research in signal processing . For instance , potential candidates are pitch estimators , energy-based features , voicing state and many more . As demonstrated by Pascual et al .
( 2019 ) , multitask speech representation learning is a powerful tool to build representations that are beneficial for a wide range of distinct downstream tasks , by combining different pseudo-labels which “ intuitively ” correspond to these tasks . Unfortunately , there is no clear understanding of how these pseudo-labels may interact when optimised together , and therefore , no common practice of how to select groups of pseudo-labels to obtain better performance on a known downstream task . As a matter of fact , this design process has been essentially driven by empirical validation and there is therefore no evidence that the obtained model is even the best one . This empirical approach can rapidly become intractable with modern SSL architectures which may contain hundreds of millions or billions of parameters trained on thousands of hours of speech , not to mention the carbon footprint of such pseudo-label searches . For instance , the self-supervised training of a single state-of-the-art large wav2vec 2.0 model ( Baevski et al. , 2020b ) on 53.2k hours of speech requires 128 GPUs for 5.2 days . This work aims at providing a clear , efficient and theoretically motivated procedure for pseudo-label group selection and weighting based on conditional independence ( CI ) . The method presented allows one to design , ahead of training , the multitask self-supervised speech representation learning model best suited to the considered downstream tasks . Such an approach may also enable researchers to save a substantial amount of time and computation usually devoted to pseudo-label search . Hence , the contributions of this work are fourfold : 1 . Introduce a theoretically motivated and computationally efficient method for the selection of pseudo-label groups among a set of candidates and with respect to the considered downstream tasks ( Sections 3 and 4 ) . 2 .
Validate empirically the proposed approach with a first SSL model relying on different sets of pseudo-labels corresponding to those obtained for the three considered speech tasks ( Section 5 ) . 3 . Extend our method to the SOTA wav2vec 2.0 to enhance its performance ( Section 6 ) . 4 . Release the code base developed with SpeechBrain ( Ravanelli et al. , 2021 ) for replication and to encourage further investigations.1 The conducted experiments demonstrate that the proposed method allows for a more intelligent , i.e . better informed , pseudo-label group selection for multitask SSL settings . Indeed , we find that the models built with the proposed method obtain a word error rate and an equal error rate , respectively , 31.6 % and 27.4 % lower than the baseline , without the need for any empirical search . 2 RELATED WORKS AND MOTIVATIONS . SSL recently became a key component to achieve good performance on downstream tasks especially with low-resource setups , either in speech ( Baevski et al. , 2020b ; Conneau et al. , 2020 ) , natural language processing ( Lan et al. , 2019 ; Chen et al. , 2020b ) or computer vision ( Gidaris et al. , 2019 ; Misra & Maaten , 2020 ; Jing & Tian , 2020 ) . Due to its very nature , SSL relies on large amounts of unlabeled data used to train large deep neural networks over long periods of time . It is thus crucial to understand properly what makes a good SSL model to lower the amount of computation and time needed to obtain the best downstream performance . SSL for Speech . Self-supervised learning for speech has recently enabled researchers to reach state-of-the-art results on various speech processing tasks ( Fan et al. , 2021 ) . The most successful models rely on predictive and contrastive objectives ( Baevski et al. , 2020b ; Chung et al. , 2019 ; Hsu et al. , 2021 ; Shor et al. , 2021 ) performing well across the different tasks even in low-resource settings .
This led to the design of different benchmarks evaluating the self-supervised representations in different languages ( Yang et al. , 2021 ; Evain et al. , 2021 ) . However , in contrast to the present work , these works have not tried to motivate beforehand the different choices made in the self-supervision pipeline . Understanding SSL . A few works have tried to shed some theoretical light on the mainly empirical field of self-supervised learning . Following the different paradigms in SSL , various tracks have been followed to understand what makes for a good self-supervised representation , exploring different approaches ( Lee et al. , 2020 ; Arora et al. , 2019 ; Wei et al. , 2020 ) . ( Footnote 1 : https://github.com/iclrsubmitter22/iclr_2022_submission ) On the one hand , contrastive learning ( Oord et al. , 2018 ; Chen et al. , 2020a ) has been advocated both theoretically and empirically to achieve a balance in the mutual information between alternative representations of the data , keeping just enough shared information to keep the class-related content ( Tschannen et al. , 2020 ; Tian et al. , 2020 ; Bachman et al. , 2019 ) . In a recent work from Li et al . ( 2021 ) , independence testing has been used to produce better transformations in contrastive learning settings for image representations . Predictive learning , on the other hand , requires the model to predict masked elements in sequential data . This technique is powerful on downstream tasks that can be reduced to a masking problem , as suggested by research on language modeling ( Saunshi et al. , 2020 ) . However , all these works have been focusing on computer vision or text-related applications , and none of them addressed the multi-task self-supervision problem . Multi-task self-supervised learning .
While the literature on multi-tasking in self-supervised learning remains scarce , it has been shown in classic supervised learning settings that , through estimates of similarity between tasks or thorough empirical testing , several tasks can take advantage of being solved with a common encoder ( Zamir et al. , 2018 ; Dwivedi & Roig , 2019 ; Shafey et al. , 2019 ; Chen et al. , 2015 ) . More specifically , combining pretext tasks with SSL has been mainly explored in computer vision and speech ( Pascual et al. , 2019 ; Ravanelli et al. , 2020 ) . Pretext tasks such as Jigsaw ( Doersch et al. , 2016 ) , colourisation and rotation ( Gidaris et al. , 2018 ) have been combined successfully to improve downstream performance ( Kim et al. , 2018 ; Shin'ya Yamaguchi et al. ) . The two closest works to our line of research are from Lee et al . ( 2020 ) and Doersch et al . ( 2017 ) . The former shows that a theoretical link can be established between conditional independence and an improvement of the performance on the downstream task , while the latter proposes to select layers from a multitask self-supervised encoder according to the pretext task to be solved . However , in both cases , the studies do not offer practical and theoretical solutions to select groups of pseudo-labels to build an adapted SSL model that will perform well on the considered downstream tasks . Group feature selection . Finally , feature selection , and especially group feature selection , is another close and inspiring field given the problem we consider . The relationship and interactions between features have been largely investigated in the supervised learning literature ( Guyon & Elisseeff , 2003 ) . This led to multiple solutions to the feature group selection problem , including LASSO-based techniques ( Yuan & Lin , 2006 ) , or multiple kernel formulations ( Sonnenburg et al. , 2006 ; Rakotomamonjy et al. , 2007 ) .
However , these works do not involve any self-supervision , and links between feature selection and self-supervision design and pretext-task selection are yet to be proved . We will further consider these lines of work as concurrent baselines . With this work , we aim at shortening the process of designing SSL models while giving insights into the pseudo-label importance and the underlying mechanisms between pretext and downstream tasks at the same time . We decided to experiment with speech due to the lack of literature in this domain for multitask SSL , and for the various pseudo-labels available , which are based on decades of signal processing research . The whole pipeline starting from the acoustic feature extraction to the downstream task scoring follows three major steps summarized in Figure 1 . First , for every downstream task , our method produces a pretext task selection and weighting . Then , an SSL model is trained , before being used as a feature extractor front-end to one or many downstream tasks . | This paper attempts to improve self-supervised (SSL) speech representations by determining what linear combination of SSL targets will perform best for downstream tasks (audio SSL targets like fundamental frequency or MFCC). The technique learns to weight these targets according to a Conditional Independence criterion: a target is more valuable if the SSL target is more independent of the data given a label. There are two main experimental results on which the usefulness of this work rests. The first is a “sanity check”, which demonstrates that this weighting scheme outperforms two previous ones, from 2002 and 2005, on two tasks: ASR in LibriSpeech and speaker ID in Voxceleb1. The second is that they manage to improve LibriSpeech, Voxceleb, and IEMOCAP performance over a wav2vec2 model by adding SSL targets weighted according to their algorithm. | SP:7e79ce46dc3e4421878c395346d690aa77284dd7 |
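The review above mentions that the pseudo-label weights derive from the Hilbert-Schmidt Independence Criterion (HSIC). As a hedged illustration, here is a standard biased empirical HSIC estimator with RBF kernels; the paper's exact conditional-independence computation and kernel choices are not reproduced here:

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    """RBF (Gaussian) kernel matrix for a 2-D array of samples."""
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate between paired samples x and y.

    HSIC_b = trace(K H L H) / (n - 1)^2, with H the centering matrix.
    Larger values indicate stronger statistical dependence.
    """
    n = x.shape[0]
    K, L = rbf_kernel(x, sigma), rbf_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
dep = hsic(x, 2 * x + 0.1 * rng.normal(size=(200, 1)))  # dependent pair
ind = hsic(x, rng.normal(size=(200, 1)))                # independent pair
```

A dependent pair yields a markedly larger HSIC value than an independent one, which is what makes the criterion usable as a relevance score for candidate pseudo-labels.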
On Incorporating Inductive Biases into VAEs | 1 INTRODUCTION . VAEs provide a rich class of deep generative models ( DGMs ) with many variants ( Kingma & Welling , 2014 ; Rezende & Mohamed , 2015 ; Burda et al. , 2016 ; Gulrajani et al. , 2016 ; Vahdat & Kautz , 2020 ) . Based on an encoder-decoder structure , VAEs encode datapoints into latent embeddings before decoding them back to data space . By parameterizing the encoder and decoder using expressive neural networks , VAEs provide a powerful basis for learning both generative models and representations . The standard VAE framework assumes an isotropic Gaussian prior . However , this can cause issues , such as when one desires the learned representations to exhibit some properties of interest , for example sparsity ( Tonolini et al. , 2020 ) or clustering ( Dilokthanakul et al. , 2016 ) , or when the data distribution has very different topological properties from a Gaussian , for example multi-modality ( Shi et al. , 2020 ) or group structure ( Falorsi et al. , 2018 ) . Therefore , a variety of recent works have looked to use non-Gaussian priors ( van den Oord et al. , 2017 ; Tomczak & Welling , 2018 ; Casale et al. , 2018 ; Razavi et al. , 2019 ; Bauer & Mnih , 2019 ) , often with the motivation of adding inductive biases into the model ( Davidson et al. , 2018b ; Mathieu et al. , 2019b ; Nagano et al. , 2019 ; Skopek et al. , 2019 ) . In this work , we argue that this approach of using non-Gaussian priors can be a problematic , and even ineffective , mechanism for adding inductive biases into VAEs . Firstly , non-Gaussian priors will often necessitate complex encoder models to maintain consistency with the prior ’ s shape and dependency structure ( Webb et al. , 2018 ) , which typically no longer permit simple parameterization . Secondly , the latent encodings are still not guaranteed to follow the desired structure because the ‘ prior ’ only appears in the training objective as a regularizer on the encoder . 
Indeed , Mathieu et al . ( 2019b ) find that changing the prior is typically insufficient in practice to learn the desired representations at a population level , with mismatches occurring between the data distribution and learned model . To provide an alternative , more effective , approach that does not suffer from these pathologies , we introduce Intermediary Latent Space VAEs ( InteL-VAEs ) , an extension to the standard VAE framework that allows a wide range of powerful inductive biases to be incorporated while maintaining an isotropic Gaussian prior . This is achieved by introducing an intermediary set of latent variables that deal with the stochasticity of the encoding process before incorporating the desired inductive biases via a parametric function that maps these intermediary latents to the latent representation itself , with the decoder taking this final representation as input . See Fig . 1 for an example . [ 1 Department of Statistics , University of Oxford ; 2 University of Edinburgh . * Correspondence to : Ning Miao <ning.miao@stats.ox.ac.uk> , Tom Rainforth <rainforth@stats.ox.ac.uk> ] The InteL-VAE framework provides a variety of advantages over directly replacing the prior . Firstly , it directly enforces our inductive biases on the representations , rather than relying on the regularizing effect of the prior to encourage this implicitly . Secondly , it provides a natural congruence between the generative and representational models via sharing of the mapping function , side-stepping the issues that non-Gaussian priors can cause for the inference model . Finally , it allows for more general and more flexible inductive biases to be incorporated , by removing the need to express them with an explicit density function and allowing for parts of the mapping to be learned during training .
We further introduce a number of novel specific realizations of the InteL-VAE framework , showing how they can be used to incorporate various inductive biases , enforcing latent representations that are , for example , multiply connected , multi-modal , sparse , or hierarchical . Experimental results show their superiority compared with baseline methods in both generation and feature quality , most notably providing state-of-the-art performance for learning sparse representations in the VAE framework . To summarize , we a ) highlight the need for inductive biases in VAEs and explain why directly changing the prior is a suboptimal means for incorporating them ; b ) propose InteL-VAEs as a simple but effective general framework to introduce inductive biases ; and c ) introduce specific InteL-VAE variants which can learn improved generative models and representations over existing baselines on a number of tasks . Accompanying code is provided at https://github.com/NingMiao/InteL-VAE . 2 THE NEED FOR INDUCTIVE BIASES IN VAES . Variational auto-encoders ( VAEs ) are deep stochastic auto-encoders that can be used for learning both deep generative models and low-dimensional representations of complex data . Their key components are an encoder , qϕ ( z|x ) , which probabilistically maps from data x ∈ X to latents z ∈ Z ; a decoder , pθ ( x|z ) , which probabilistically maps from latents to data ; and a prior , p ( z ) , that completes the generative model , p ( z ) pθ ( x|z ) , and regularizes the encoder during training . The encoder and decoder are parameterized by deep neural networks and are simultaneously trained using a dataset { x1 , x2 , ... , xN } and a variational lower bound on the log-likelihood , most commonly , L ( x , θ , ϕ ) : = Ez∼qϕ ( z|x ) [ log pθ ( x|z ) ] − DKL ( qϕ ( z|x ) ∥ p ( z ) ) . ( 1 ) Namely , we optimize L ( θ , ϕ ) : = Ex∼pdata ( x ) [ L ( x , θ , ϕ ) ] , where pdata ( x ) represents the empirical data distribution .
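The variational lower bound in Eq. (1) can be sketched numerically. The closed-form KL term below assumes the usual diagonal-Gaussian encoder and standard-normal prior; `decode_log_prob` is a hypothetical stand-in for the decoder likelihood, not the paper's model:

```python
import numpy as np

def gaussian_kl_to_standard(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def elbo_single_sample(x, mu, log_var, decode_log_prob, rng):
    """One-sample Monte Carlo estimate of Eq. (1), reparameterization trick."""
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps  # z ~ q(z|x)
    return decode_log_prob(x, z) - gaussian_kl_to_standard(mu, log_var)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])
mu, log_var = np.array([0.1, 0.2]), np.array([-1.0, -1.0])
# Hypothetical decoder: unit-variance Gaussian likelihood centered at z.
decode_log_prob = lambda x, z: -0.5 * np.sum((x - z) ** 2)
bound = elbo_single_sample(x, mu, log_var, decode_log_prob, rng)
```

The KL term vanishes exactly when the encoder matches the prior (mu = 0, log_var = 0), which is the regularizing pull discussed in the following paragraphs.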
Here the prior is typically fixed to a standard Gaussian , i.e . p ( z ) = N ( z ; 0 , I ) . While it is well documented that this standard VAE setup with a ‘ Gaussian ’ latent space can be suboptimal ( Davidson et al. , 2018a ; Mathieu et al. , 2019b ; Tomczak & Welling , 2018 ; Bauer & Mnih , 2019 ; Tonolini et al. , 2020 ) , there is perhaps less of a unified high-level view on exactly when , why , and how one should change it to incorporate inductive biases . Note here that the prior does not play the same role as in a Bayesian model : because the latents themselves are somewhat arbitrary and the model is learned from data , it does not encapsulate our initial beliefs in the way one might expect . We argue that there are two core reasons why inductive biases can be important for VAEs : ( a ) standard VAEs can fail to encourage , and even prohibit , desired structure in the representations we learn ; and ( b ) standard VAEs do not allow one to impart prior information or desired topological characteristics into the generative model . Considering the former , one often has some a priori desired characteristics , or constraints , on the representations learned ( Bengio et al. , 2013 ) . For example , sparse features can be desirable because they can improve data efficiency ( Yip & Sussman , 1997 ) , and provide robustness to noise ( Wright et al. , 2009 ; Ahmad & Scheinkman , 2019 ) and attacks ( Gopalakrishnan et al. , 2018 ) . In other settings one might desire clustered ( Jiang et al. , 2017 ) , disentangled ( Ansari & Soh , 2019 ; Kim & Mnih , 2018 ; Higgins et al. , 2018 ) or hierarchical representations ( Song & Li , 2013 ; Sønderby et al. , 2016 ; Zhao et al. , 2017 ) . The KL-divergence term in Eq .
( 1 ) regularizes the encoding distribution towards the prior and , as a standard Gaussian distribution typically does not exhibit our desired characteristics , this regularization can significantly hinder our ability to learn representations with the desired properties . Not only can this be problematic at an individual sample level , it can cause even more pronounced issues at the population level : desired structural characteristics of our representations often relate to the pushforward distribution of the data in the latent space , qϕ ( z ) : = Epdata ( x ) [ qϕ ( z|x ) ] , which is both difficult to control and only implicitly regularized to the prior ( Hoffman & Johnson , 2016 ) . Inductive biases can also be essential to the generation quality of VAEs : because the generation process of standard VAEs is essentially pushing-forward the Gaussian prior on Z to data space X by a ‘ smooth ’ decoder , there is an underlying inductive bias that standard VAEs prefer sample distributions with similar topology structures to Gaussians . As a result , VAEs can perform poorly when the data manifold exhibits certain different topological properties ( Caterini et al. , 2020 ) . For example , they can struggle when data is clustered into unconnected components as shown in Fig . 2 , or when data is not simply-connected . This renders learning effective mappings using finite datasets and conventional architectures ( potentially prohibitively ) difficult . In particular , it can necessitate large Lipschitz constants in the decoder , causing knock-on issues like unstable training and brittle models ( Scaman & Virmaux , 2018 ) , as well as posterior collapse ( van den Oord et al. , 2017 ; Alemi et al. , 2018 ) . In short , the Gaussian prior of a standard VAE can induce fundamental topological differences to the true data distribution ( Falorsi et al. , 2018 ; Shi et al. , 2020 ) . 3 SHORTFALLS OF VAES WITH NON-GAUSSIAN PRIORS .
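The pushforward (aggregate posterior) qϕ(z) = Epdata[qϕ(z|x)] discussed above can be estimated by Monte Carlo: encode each datapoint and sample from its encoding distribution. The toy scalar encoder below is purely illustrative; with clustered data the resulting q(z) is visibly multi-modal, unlike the Gaussian prior:

```python
import numpy as np

def aggregate_posterior_samples(data, encode, rng, samples_per_point=10):
    """Monte Carlo samples from q(z) = E_{p_data}[q(z|x)]: for each datapoint,
    draw from its (Gaussian) encoding distribution N(mu(x), sigma(x)^2)."""
    zs = []
    for x in data:
        mu, sigma = encode(x)
        zs.append(rng.normal(mu, sigma, size=(samples_per_point, np.size(mu))))
    return np.concatenate(zs, axis=0)

# Hypothetical clustered data and a trivial encoder that embeds points near
# their own value: the aggregate posterior then inherits the two clusters.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 0.1, 50), rng.normal(3, 0.1, 50)])
encode = lambda x: (np.array([x]), np.array([0.1]))
z = aggregate_posterior_samples(data, encode, rng)
frac_negative = np.mean(z < 0)  # roughly half, one mode per cluster
```

This is exactly the distribution that the KL term only implicitly regularizes toward the prior, which is why population-level structure is hard to control by changing the prior alone.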
Though directly replacing the Gaussian prior with a different prior sounds like a simple solution , effectively introducing inductive biases can , unfortunately , be more complicated . Firstly , the only influence of the prior during training is as a regularizer on the encoder through the DKL ( qϕ ( z|x ) ∥ p ( z ) ) term . This regularization is always competing with the need for effective reconstructions and only has an indirect influence on qϕ ( z ) . As such , simply replacing the prior can be an ineffective way of inducing desired structure at the population level ( Mathieu et al. , 2019b ) , particularly if p ( z ) is a complex distribution that is difficult to fit ( see , e.g. , Fig . 3a ) . Mismatches between qϕ ( z ) and p ( z ) can also have further deleterious effects on the learned generative model : the former represents the distribution of the data in latent space during training , while the latter is what is used by the learned generative model , leading to unrepresentative generations if there is a mismatch . Secondly , it can be extremely difficult to construct appropriate encoder mappings and distributions for non-Gaussian priors . While the typical choice of a mean-field Gaussian for the encoder distribution is simple , easy to train , and often effective for Gaussian priors , it is often inappropriate for other choices of prior . For example , in Fig . 3 , we consider replacement with a sparse prior . A VAE with a Gaussian encoder struggles to encode points in a manner that even remotely matches the prior . One might suggest replacing the encoder distribution as well , but this has its own issues , most notably that other distributions can be hard to effectively parameterize or train . In particular , the form of the required encoding noise might become heavily spatially variant ; in our sparse example , the noise must be elongated in a particular direction depending on where the mean embedding is .
If the prior has constraints or topological properties distinct from the data , it can even be difficult to learn a mean encoder mapping that respects these , due to the continuous nature of neural networks . | The paper proposes to incorporate some inductive biases into VAEs by modifying decoders. The idea is to first transform stochastic latent variables using some specific mapping and then apply a deep neural network. The expected result is that the encoder will be forced to assign z’s in such a way that the mapping can turn them into a manifold of a given form. | SP:04fb345a18b48db4cdbf9119aaf07838a4e4e973 |
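A minimal sketch of the InteL-VAE generation path described in this row: intermediary latents are drawn from the retained Gaussian prior and pushed through a mapping that enforces the desired structure before decoding. The soft-thresholding map below is our illustrative choice of sparsity-promoting mapping, not necessarily the one used in the paper, and the decoder is omitted:

```python
import numpy as np

def soft_threshold(z, lam=0.5):
    """A sparsity-promoting mapping g: shrinks small coordinates to exactly zero.
    (Illustrative choice; the paper's specific mappings may differ.)"""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def intel_vae_generate(n, latent_dim, mapping, rng):
    """InteL-VAE-style generation: sample intermediary latents from N(0, I),
    push them through the mapping g, then (in a full model) decode g(z)."""
    z_tilde = rng.normal(size=(n, latent_dim))  # isotropic Gaussian prior kept
    return mapping(z_tilde)

rng = np.random.default_rng(0)
z = intel_vae_generate(1000, 8, soft_threshold, rng)
sparsity = np.mean(z == 0.0)  # fraction of exactly-zero coordinates
```

The point of the construction is visible even in this toy: the prior stays a standard Gaussian, yet the representations fed to the decoder are sparse by construction rather than by regularization.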
On Incorporating Inductive Biases into VAEs | 1 INTRODUCTION . VAEs provide a rich class of deep generative models ( DGMs ) with many variants ( Kingma & Welling , 2014 ; Rezende & Mohamed , 2015 ; Burda et al. , 2016 ; Gulrajani et al. , 2016 ; Vahdat & Kautz , 2020 ) . Based on an encoder-decoder structure , VAEs encode datapoints into latent embeddings before decoding them back to data space . By parameterizing the encoder and decoder using expressive neural networks , VAEs provide a powerful basis for learning both generative models and representations . The standard VAE framework assumes an isotropic Gaussian prior . However , this can cause issues , such as when one desires the learned representations to exhibit some properties of interest , for example sparsity ( Tonolini et al. , 2020 ) or clustering ( Dilokthanakul et al. , 2016 ) , or when the data distribution has very different topological properties from a Gaussian , for example multi-modality ( Shi et al. , 2020 ) or group structure ( Falorsi et al. , 2018 ) . Therefore , a variety of recent works have looked to use non-Gaussian priors ( van den Oord et al. , 2017 ; Tomczak & Welling , 2018 ; Casale et al. , 2018 ; Razavi et al. , 2019 ; Bauer & Mnih , 2019 ) , often with the motivation of adding inductive biases into the model ( Davidson et al. , 2018b ; Mathieu et al. , 2019b ; Nagano et al. , 2019 ; Skopek et al. , 2019 ) . In this work , we argue that this approach of using non-Gaussian priors can be a problematic , and even ineffective , mechanism for adding inductive biases into VAEs . Firstly , non-Gaussian priors will often necessitate complex encoder models to maintain consistency with the prior ’ s shape and dependency structure ( Webb et al. , 2018 ) , which typically no longer permit simple parameterization . Secondly , the latent encodings are still not guaranteed to follow the desired structure because the ‘ prior ’ only appears in the training objective as a regularizer on the encoder . 
Indeed , Mathieu et al . ( 2019b ) find that changing the prior is typically insufficient in practice to learn the desired representations at a population level , with mismatches occurring between the data distribution and learned model . To provide an alternative , more effective , approach that does not suffer from these pathologies , we introduce Intermediary Latent Space VAEs ( InteL-VAEs ) , an extension to the standard VAE framework that allows a wide range of powerful inductive biases to be incorporated while maintaining an isotropic Gaussian prior . This is achieved by introducing an intermediary set of latent variables that deal with the stochasticity of the encoding process before incorporating the desired inductive biases via a parametric function that maps these intermediary latents to the latent representation itself , with the decoder taking this final representation as input . See Fig . 1 for an example . The InteL-VAE framework provides a variety of advantages over directly replacing the prior . Firstly , it directly enforces our inductive biases on the representations , rather than relying on the regularizing effect of the prior to encourage this implicitly . Secondly , it provides a natural congruence between the generative and representational models via sharing of the mapping function , side-stepping the issues that non-Gaussian priors can cause for the inference model . Finally , it allows for more general and more flexible inductive biases to be incorporated , by removing the need to express them with an explicit density function and allowing for parts of the mapping to be learned during training .
We further introduce a number of novel specific realizations of the InteL-VAE framework , showing how they can be used to incorporate various inductive biases , enforcing latent representations that are , for example , multiply connected , multi-modal , sparse , or hierarchical . Experimental results show their superiority compared with baseline methods in both generation and feature quality , most notably providing state-of-the-art performance for learning sparse representations in the VAE framework . To summarize , we a ) highlight the need for inductive biases in VAEs and explain why directly changing the prior is a suboptimal means for incorporating them ; b ) propose InteL-VAEs as a simple but effective general framework to introduce inductive biases ; and c ) introduce specific InteL-VAE variants which can learn improved generative models and representations over existing baselines on a number of tasks . Accompanying code is provided at https://github.com/NingMiao/InteL-VAE . 2 THE NEED FOR INDUCTIVE BIASES IN VAES . Variational auto-encoders ( VAEs ) are deep stochastic auto-encoders that can be used for learning both deep generative models and low-dimensional representations of complex data . Their key components are an encoder , qϕ ( z|x ) , which probabilistically maps from data x ∈ X to latents z ∈ Z ; a decoder , pθ ( x|z ) , which probabilistically maps from latents to data ; and a prior , p ( z ) , that completes the generative model , p ( z ) pθ ( x|z ) , and regularizes the encoder during training . The encoder and decoder are parameterized by deep neural networks and are simultaneously trained using a dataset { x1 , x2 , ... , xN } and a variational lower bound on the log-likelihood , most commonly , L ( x , θ , ϕ ) : = E_{z∼qϕ ( z|x ) } [ log pθ ( x|z ) ] − DKL ( qϕ ( z|x ) ∥ p ( z ) ) . ( 1 ) Namely , we optimize L ( θ , ϕ ) : = E_{x∼pdata ( x ) } [ L ( x , θ , ϕ ) ] , where pdata ( x ) represents the empirical data distribution .
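The objective in Eq. ( 1 ) is easy to compute in the common setting described here: a mean-field Gaussian encoder with a standard normal prior, where the KL term has a closed form. The sketch below is a minimal, framework-free illustration in plain Python; the reconstruction term log pθ ( x|z ) is passed in as a pre-computed number rather than produced by an actual decoder network.

```python
import math

def gaussian_kl_to_std_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions."""
    return sum(0.5 * (m * m + s * s - 1.0) - math.log(s) for m, s in zip(mu, sigma))

def elbo(log_px_given_z, mu, sigma):
    """Single-sample estimate of Eq. (1): reconstruction term minus KL regularizer."""
    return log_px_given_z - gaussian_kl_to_std_normal(mu, sigma)

# When the encoder already matches the prior, the KL penalty vanishes.
print(gaussian_kl_to_std_normal([0.0, 0.0], [1.0, 1.0]))  # 0.0
```

The closed-form KL is one reason the Gaussian prior/encoder pairing is so convenient; replacing the prior typically forfeits it, which is part of the argument developed in the following sections.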
Here the prior is typically fixed to a standard Gaussian , i.e . p ( z ) = N ( z ; 0 , I ) . While it is well documented that this standard VAE setup with a ‘ Gaussian ’ latent space can be suboptimal ( Davidson et al. , 2018a ; Mathieu et al. , 2019b ; Tomczak & Welling , 2018 ; Bauer & Mnih , 2019 ; Tonolini et al. , 2020 ) , there is perhaps less of a unified high-level view on exactly when , why , and how one should change it to incorporate inductive biases . Note here that the prior does not play the same role as in a Bayesian model : because the latents themselves are somewhat arbitrary and the model is learned from data , it does not encapsulate our initial beliefs in the way one might expect . We argue that there are two core reasons why inductive biases can be important for VAEs : ( a ) standard VAEs can fail to encourage , and even prohibit , desired structure in the representations we learn ; and ( b ) standard VAEs do not allow one to impart prior information or desired topological characteristics into the generative model . Considering the former , one often has some a priori desired characteristics , or constraints , on the representations learned ( Bengio et al. , 2013 ) . For example , sparse features can be desirable because they can improve data efficiency ( Yip & Sussman , 1997 ) , and provide robustness to noise ( Wright et al. , 2009 ; Ahmad & Scheinkman , 2019 ) and attacks ( Gopalakrishnan et al. , 2018 ) . In other settings one might desire clustered ( Jiang et al. , 2017 ) , disentangled ( Ansari & Soh , 2019 ; Kim & Mnih , 2018 ; Higgins et al. , 2018 ) or hierarchical representations ( Song & Li , 2013 ; Sønderby et al. , 2016 ; Zhao et al. , 2017 ) . The KL-divergence term in Eq .
( 1 ) regularizes the encoding distribution towards the prior and , as a standard Gaussian distribution typically does not exhibit our desired characteristics , this regularization can significantly hinder our ability to learn representations with the desired properties . Not only can this be problematic at an individual sample level , it can cause even more pronounced issues at the population level : desired structural characteristics of our representations often relate to the pushforward distribution of the data in the latent space , qϕ ( z ) : = E_{pdata ( x ) } [ qϕ ( z|x ) ] , which is both difficult to control and only implicitly regularized to the prior ( Hoffman & Johnson , 2016 ) . Inductive biases can also be essential to the generation quality of VAEs : because the generation process of standard VAEs is essentially pushing-forward the Gaussian prior on Z to data space X by a ‘ smooth ’ decoder , there is an underlying inductive bias that standard VAEs prefer sample distributions with topological structure similar to that of a Gaussian . As a result , VAEs can perform poorly when the data manifold exhibits certain different topological properties ( Caterini et al. , 2020 ) . For example , they can struggle when data is clustered into unconnected components as shown in Fig . 2 , or when data is not simply-connected . This renders learning effective mappings using finite datasets and conventional architectures ( potentially prohibitively ) difficult . In particular , it can necessitate large Lipschitz constants in the decoder , causing knock-on issues like unstable training and brittle models ( Scaman & Virmaux , 2018 ) , as well as posterior collapse ( van den Oord et al. , 2017 ; Alemi et al. , 2018 ) . In short , the Gaussian prior of a standard VAE can induce fundamental topological differences relative to the true data distribution ( Falorsi et al. , 2018 ; Shi et al. , 2020 ) . 3 SHORTFALLS OF VAES WITH NON-GAUSSIAN PRIORS .
Though directly replacing the Gaussian prior with a different prior sounds like a simple solution , effectively introducing inductive biases can , unfortunately , be more complicated . Firstly , the only influence of the prior during training is as a regularizer on the encoder through the DKL ( qϕ ( z|x ) ∥ p ( z ) ) term . This regularization is always competing with the need for effective reconstructions and only has an indirect influence on qϕ ( z ) . As such , simply replacing the prior can be an ineffective way of inducing desired structure at the population level ( Mathieu et al. , 2019b ) , particularly if p ( z ) is a complex distribution that is difficult to fit ( see , e.g. , Fig . 3a ) . Mismatches between qϕ ( z ) and p ( z ) can also have further deleterious effects on the learned generative model : the former represents the distribution of the data in latent space during training , while the latter is what is used by the learned generative model , leading to unrepresentative generations if there is a mismatch . Secondly , it can be extremely difficult to construct appropriate encoder mappings and distributions for non-Gaussian priors . While the typical choice of a mean-field Gaussian for the encoder distribution is simple , easy to train , and often effective for Gaussian priors , it is often inappropriate for other choices of prior . For example , in Fig . 3 , we consider replacement with a sparse prior . A VAE with a Gaussian encoder struggles to encode points in a manner that even remotely matches the prior . One might suggest replacing the encoder distribution as well , but this has its own issues , most notably that other distributions can be hard to effectively parameterize or train . In particular , the form of the required encoding noise might become heavily spatially variant ; in our sparse example , the noise must be elongated in a particular direction depending on where the mean embedding is .
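To make the point about the DKL ( qϕ ( z|x ) ∥ p ( z ) ) regularizer concrete, the sketch below estimates this KL by Monte Carlo for a mean-field Gaussian encoder against a Laplace prior, used here purely as an illustrative stand-in for a 'sparse' prior (an assumption for illustration; the paper's actual sparse prior in Fig. 3 may differ). The closed form available for a Gaussian prior is lost, and the penalty grows quickly as the encoder mean moves away from the prior's sharp peak.

```python
import math, random

def log_normal(z, mu, sigma):
    """Log density of N(mu, sigma^2) at z."""
    return -0.5 * math.log(2 * math.pi * sigma * sigma) - (z - mu) ** 2 / (2 * sigma * sigma)

def log_laplace(z, b=1.0):
    """Log density of Laplace(0, b) at z -- our stand-in 'sparse' prior."""
    return -math.log(2 * b) - abs(z) / b

def mc_kl_gaussian_to_laplace(mu, sigma, n=50000, seed=0):
    """Monte Carlo estimate of KL( N(mu, sigma^2) || Laplace(0, 1) )."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu, sigma)
        total += log_normal(z, mu, sigma) - log_laplace(z)
    return total / n

# A Gaussian posterior centred off-axis pays a much larger KL price against
# the sharply peaked Laplace prior than one centred at its mode.
print(mc_kl_gaussian_to_laplace(0.0, 1.0) < mc_kl_gaussian_to_laplace(3.0, 1.0))  # True
```

The mismatch between the unimodal Gaussian encoder shape and the spiked prior is exactly the spatially variant noise problem described above: no single diagonal-Gaussian noise shape fits the prior everywhere.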
If the prior has constraints or topological properties distinct from the data , it can even be difficult to learn a mean encoder mapping that respects these , due to the continuous nature of neural networks . | The paper presents a novel method for introducing inductive biases into variational auto-encoders (VAEs). The inductive biases are specified through a deterministic transformation in the latent space, which is shared between the encoder and the decoder. Through experiments on toy data, the authors demonstrate how to design appropriate inductive biases by matching the topological properties of the transformed representations to those of the real data distributions. They further quantify the model's ability to learn informative representations through experiments on MNIST, Fashion-MNIST, and CelebA, and show that it outperforms competitors both in terms of the generation quality with and without sparsity constraints as well as in terms of downstream classification based on the representations. | SP:04fb345a18b48db4cdbf9119aaf07838a4e4e973
On Incorporating Inductive Biases into VAEs | 1 INTRODUCTION . VAEs provide a rich class of deep generative models ( DGMs ) with many variants ( Kingma & Welling , 2014 ; Rezende & Mohamed , 2015 ; Burda et al. , 2016 ; Gulrajani et al. , 2016 ; Vahdat & Kautz , 2020 ) . Based on an encoder-decoder structure , VAEs encode datapoints into latent embeddings before decoding them back to data space . By parameterizing the encoder and decoder using expressive neural networks , VAEs provide a powerful basis for learning both generative models and representations . The standard VAE framework assumes an isotropic Gaussian prior . However , this can cause issues , such as when one desires the learned representations to exhibit some properties of interest , for example sparsity ( Tonolini et al. , 2020 ) or clustering ( Dilokthanakul et al. , 2016 ) , or when the data distribution has very different topological properties from a Gaussian , for example multi-modality ( Shi et al. , 2020 ) or group structure ( Falorsi et al. , 2018 ) . Therefore , a variety of recent works have looked to use non-Gaussian priors ( van den Oord et al. , 2017 ; Tomczak & Welling , 2018 ; Casale et al. , 2018 ; Razavi et al. , 2019 ; Bauer & Mnih , 2019 ) , often with the motivation of adding inductive biases into the model ( Davidson et al. , 2018b ; Mathieu et al. , 2019b ; Nagano et al. , 2019 ; Skopek et al. , 2019 ) . In this work , we argue that this approach of using non-Gaussian priors can be a problematic , and even ineffective , mechanism for adding inductive biases into VAEs . Firstly , non-Gaussian priors will often necessitate complex encoder models to maintain consistency with the prior ’ s shape and dependency structure ( Webb et al. , 2018 ) , which typically no longer permit simple parameterization . Secondly , the latent encodings are still not guaranteed to follow the desired structure because the ‘ prior ’ only appears in the training objective as a regularizer on the encoder . 
Indeed , Mathieu et al . ( 2019b ) find that changing the prior is typically insufficient in practice to learn the desired representations at a population level , with mismatches occurring between the data distribution and learned model . To provide an alternative , more effective , approach that does not suffer from these pathologies , we introduce Intermediary Latent Space VAEs ( InteL-VAEs ) , an extension to the standard VAE framework that allows a wide range of powerful inductive biases to be incorporated while maintaining an isotropic Gaussian prior . This is achieved by introducing an intermediary set of latent variables that deal with the stochasticity of the encoding process before incorporating the desired inductive biases via a parametric function that maps these intermediary latents to the latent representation itself , with the decoder taking this final representation as input . See Fig . 1 for an example . The InteL-VAE framework provides a variety of advantages over directly replacing the prior . Firstly , it directly enforces our inductive biases on the representations , rather than relying on the regularizing effect of the prior to encourage this implicitly . Secondly , it provides a natural congruence between the generative and representational models via sharing of the mapping function , side-stepping the issues that non-Gaussian priors can cause for the inference model . Finally , it allows for more general and more flexible inductive biases to be incorporated , by removing the need to express them with an explicit density function and allowing for parts of the mapping to be learned during training .
We further introduce a number of novel specific realizations of the InteL-VAE framework , showing how they can be used to incorporate various inductive biases , enforcing latent representations that are , for example , multiply connected , multi-modal , sparse , or hierarchical . Experimental results show their superiority compared with baseline methods in both generation and feature quality , most notably providing state-of-the-art performance for learning sparse representations in the VAE framework . To summarize , we a ) highlight the need for inductive biases in VAEs and explain why directly changing the prior is a suboptimal means for incorporating them ; b ) propose InteL-VAEs as a simple but effective general framework to introduce inductive biases ; and c ) introduce specific InteL-VAE variants which can learn improved generative models and representations over existing baselines on a number of tasks . Accompanying code is provided at https://github.com/NingMiao/InteL-VAE . 2 THE NEED FOR INDUCTIVE BIASES IN VAES . Variational auto-encoders ( VAEs ) are deep stochastic auto-encoders that can be used for learning both deep generative models and low-dimensional representations of complex data . Their key components are an encoder , qϕ ( z|x ) , which probabilistically maps from data x ∈ X to latents z ∈ Z ; a decoder , pθ ( x|z ) , which probabilistically maps from latents to data ; and a prior , p ( z ) , that completes the generative model , p ( z ) pθ ( x|z ) , and regularizes the encoder during training . The encoder and decoder are parameterized by deep neural networks and are simultaneously trained using a dataset { x1 , x2 , ... , xN } and a variational lower bound on the log-likelihood , most commonly , L ( x , θ , ϕ ) : = E_{z∼qϕ ( z|x ) } [ log pθ ( x|z ) ] − DKL ( qϕ ( z|x ) ∥ p ( z ) ) . ( 1 ) Namely , we optimize L ( θ , ϕ ) : = E_{x∼pdata ( x ) } [ L ( x , θ , ϕ ) ] , where pdata ( x ) represents the empirical data distribution .
Here the prior is typically fixed to a standard Gaussian , i.e . p ( z ) = N ( z ; 0 , I ) . While it is well documented that this standard VAE setup with a ‘ Gaussian ’ latent space can be suboptimal ( Davidson et al. , 2018a ; Mathieu et al. , 2019b ; Tomczak & Welling , 2018 ; Bauer & Mnih , 2019 ; Tonolini et al. , 2020 ) , there is perhaps less of a unified high-level view on exactly when , why , and how one should change it to incorporate inductive biases . Note here that the prior does not play the same role as in a Bayesian model : because the latents themselves are somewhat arbitrary and the model is learned from data , it does not encapsulate our initial beliefs in the way one might expect . We argue that there are two core reasons why inductive biases can be important for VAEs : ( a ) standard VAEs can fail to encourage , and even prohibit , desired structure in the representations we learn ; and ( b ) standard VAEs do not allow one to impart prior information or desired topological characteristics into the generative model . Considering the former , one often has some a priori desired characteristics , or constraints , on the representations learned ( Bengio et al. , 2013 ) . For example , sparse features can be desirable because they can improve data efficiency ( Yip & Sussman , 1997 ) , and provide robustness to noise ( Wright et al. , 2009 ; Ahmad & Scheinkman , 2019 ) and attacks ( Gopalakrishnan et al. , 2018 ) . In other settings one might desire clustered ( Jiang et al. , 2017 ) , disentangled ( Ansari & Soh , 2019 ; Kim & Mnih , 2018 ; Higgins et al. , 2018 ) or hierarchical representations ( Song & Li , 2013 ; Sønderby et al. , 2016 ; Zhao et al. , 2017 ) . The KL-divergence term in Eq .
( 1 ) regularizes the encoding distribution towards the prior and , as a standard Gaussian distribution typically does not exhibit our desired characteristics , this regularization can significantly hinder our ability to learn representations with the desired properties . Not only can this be problematic at an individual sample level , it can cause even more pronounced issues at the population level : desired structural characteristics of our representations often relate to the pushforward distribution of the data in the latent space , qϕ ( z ) : = E_{pdata ( x ) } [ qϕ ( z|x ) ] , which is both difficult to control and only implicitly regularized to the prior ( Hoffman & Johnson , 2016 ) . Inductive biases can also be essential to the generation quality of VAEs : because the generation process of standard VAEs is essentially pushing-forward the Gaussian prior on Z to data space X by a ‘ smooth ’ decoder , there is an underlying inductive bias that standard VAEs prefer sample distributions with topological structure similar to that of a Gaussian . As a result , VAEs can perform poorly when the data manifold exhibits certain different topological properties ( Caterini et al. , 2020 ) . For example , they can struggle when data is clustered into unconnected components as shown in Fig . 2 , or when data is not simply-connected . This renders learning effective mappings using finite datasets and conventional architectures ( potentially prohibitively ) difficult . In particular , it can necessitate large Lipschitz constants in the decoder , causing knock-on issues like unstable training and brittle models ( Scaman & Virmaux , 2018 ) , as well as posterior collapse ( van den Oord et al. , 2017 ; Alemi et al. , 2018 ) . In short , the Gaussian prior of a standard VAE can induce fundamental topological differences relative to the true data distribution ( Falorsi et al. , 2018 ; Shi et al. , 2020 ) . 3 SHORTFALLS OF VAES WITH NON-GAUSSIAN PRIORS .
Though directly replacing the Gaussian prior with a different prior sounds like a simple solution , effectively introducing inductive biases can , unfortunately , be more complicated . Firstly , the only influence of the prior during training is as a regularizer on the encoder through the DKL ( qϕ ( z|x ) ∥ p ( z ) ) term . This regularization is always competing with the need for effective reconstructions and only has an indirect influence on qϕ ( z ) . As such , simply replacing the prior can be an ineffective way of inducing desired structure at the population level ( Mathieu et al. , 2019b ) , particularly if p ( z ) is a complex distribution that is difficult to fit ( see , e.g. , Fig . 3a ) . Mismatches between qϕ ( z ) and p ( z ) can also have further deleterious effects on the learned generative model : the former represents the distribution of the data in latent space during training , while the latter is what is used by the learned generative model , leading to unrepresentative generations if there is a mismatch . Secondly , it can be extremely difficult to construct appropriate encoder mappings and distributions for non-Gaussian priors . While the typical choice of a mean-field Gaussian for the encoder distribution is simple , easy to train , and often effective for Gaussian priors , it is often inappropriate for other choices of prior . For example , in Fig . 3 , we consider replacement with a sparse prior . A VAE with a Gaussian encoder struggles to encode points in a manner that even remotely matches the prior . One might suggest replacing the encoder distribution as well , but this has its own issues , most notably that other distributions can be hard to effectively parameterize or train . In particular , the form of the required encoding noise might become heavily spatially variant ; in our sparse example , the noise must be elongated in a particular direction depending on where the mean embedding is .
If the prior has constraints or topological properties distinct from the data , it can even be difficult to learn a mean encoder mapping that respects these , due to the continuous nature of neural networks . | This work introduces a new method to embed user-designed inductive biases into VAE architectures. The basic idea is straightforward: a fixed user-defined deterministic transformation is used to map the latent z into a "structured latent" y. The transformation is used to induce topological and/or sparsity constraints. Technically, this is identical to using the transformation as the first layer of the decoder architecture. Beyond the general idea, the authors introduce a variety of simple but clever transformations used to enforce latent constraints. The approach is shown to offer an interpretable latent space and attractive performance in a large class of density estimation tasks. | SP:04fb345a18b48db4cdbf9119aaf07838a4e4e973
PF-GNN: Differentiable particle filtering based approximation of universal graph representations | 1 INTRODUCTION . In recent years , Graph Neural Networks ( GNNs ) have emerged as learning models of choice on graph structured data . GNNs operate on a message passing paradigm ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Veličković et al. , 2017 ; Gilmer et al. , 2017 ) , where nodes maintain latent embeddings which are updated iteratively based on their neighborhood . This way of representation learning on graphs provides the necessary inductive bias to encode the structural information of graphs into the node embeddings . The process of message passing in GNNs is equivalent to the vertex color-refinement procedure or the 1-dimensional Weisfeiler-Lehman ( WL ) test used to distinguish non-isomorphic graphs ( Xu et al. , 2018 ; Morris et al. , 2019 ) . Consequently , GNNs suffer from the limitations of 1-WL color-refinement in their expressive power . In each step of 1-WL color-refinement , two vertices get different colors if the colors of neighboring vertices are different . The procedure stabilizes after a few steps when the colors can not be further refined . Due to the symmetry in graph structures , certain non-isomorphic graphs induce the same colors upon 1-WL refinement . Higher-order WL refinements and their neural k-GNN versions break some of the symmetry by operating on k-tuples of nodes . They are more expressive but require exponentially increasing computation time and hence , are not practical for large k. Motivated by the fact that a fully expressive graph representation learning model should be able to produce embeddings that can distinguish any two non-isomorphic graphs ( Chen et al. , 2019b ) , we turn to exact graph isomorphism solvers for better inductive biases in our learning algorithm .
Most of the practical graph isomorphism solvers use 1-WL in combination with the traditional technique of individualization and refinement ( IR ) ( McKay & Piperno , 2014 ; Junttila & Kaski , 2011 ) for coloring the graph . Individualization is the process of artificially introducing asymmetry by recoloring a vertex and thereby , distinguishing it from the rest of the vertices . Refinement refers to 1-WL refinement which can propagate this information to recolor the rest of the graph . The two graphs shown in Fig . 1 are not distinguishable after 1-WL refinement but induce different colorings after one IR step . The IR process is repeated for each refinement until every vertex gets a unique color . However , in order to maintain permutation-invariance , whenever a vertex is individualized , other vertices that have the same color need to be individualized as well and thereafter refined . This process generates a search tree with colorings as nodes , and can grow exponentially in the worst case . In this work , we propose to learn graph representations with the inductive bias of the search tree generated by the graph-isomorphism solvers . However , generating the complete search tree is computationally expensive . Isomorphism-solvers prune the search tree by detecting automorphisms on the fly as they generate the tree . Nevertheless , detecting automorphisms is non-trivial from the perspective of end-to-end discriminative learning and hence , we need to approximate it . To this end , we first define a universal neural graph representation based on the search tree of colorings . Then we take a probabilistic view and approximate it by sampling multiple paths from root to leaves of the search tree .
We observe that the process of sampling a vertex , individualizing it and the subsequent refinement resembles the state transition in sequential state estimation problems . Hence , we model our sampling approach on the principled technique of Sequential Monte Carlo or Particle Filtering . We introduce Particle Filter Graph Neural Networks ( PF-GNN ) , an end-to-end learnable algorithm which maintains a weighted belief distribution over a set of K graph colorings/embeddings . With each step of IR , PF-GNN transitions to a new set of embeddings . It then updates the belief by re-weighting each particle with a discriminatively learned function that measures the quality of the refinement induced after the transition . With this inductive bias , the belief evolves over time to be more discriminative of the input graph . After a few steps , we can use the belief along with the embeddings to generate the final representation of the graph . Our approach is simple , efficient , parallelizable , easy to implement and can learn representations to distinguish non-isomorphic graphs beyond 1-WL GNNs . We evaluate PF-GNN over a diverse set of datasets on tasks which are provably not learnable with 1-WL equivalent GNNs . Furthermore , our experiments on real-world benchmark datasets show the strong performance of PF-GNN over other more expressive GNNs . 2 RELATED WORK . It was established that GNNs are limited in expressive power , and can not go beyond the 1-WL test for graph isomorphism by Xu et al . ( 2018 ) and Morris et al . ( 2019 ) . Later analyses of GNNs have shown other limits of GNNs , like counting substructures and detecting graph properties ( Arvind et al. , 2020 ; Chen et al. , 2020 ; Loukas , 2019 ; Dehmamy et al. , 2019 ; Srinivasan & Ribeiro , 2019 ) . Chen et al . ( 2019b ) further formalizes the intuition that there is equivalence between learning universal graph representations and solving the graph isomorphism problem .
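The transition/reweight/resample loop that PF-GNN borrows from particle filtering, described above, can be sketched generically. Everything below is a simplified stand-in: the particles are scalars rather than graph colorings/embeddings, and PF-GNN's learned transition (individualize-and-refine) and scoring networks are replaced by hand-coded toy functions.

```python
import math, random

def particle_filter_step(particles, weights, transition, reweight, rng):
    """One PF-style update: resample by belief, transition each particle,
    then score and renormalize so the belief remains a distribution."""
    K = len(particles)
    # Resample K particles proportionally to the current belief.
    resampled = rng.choices(particles, weights=weights, k=K)
    # Transition: in PF-GNN this would individualize a sampled vertex and refine.
    moved = [transition(p, rng) for p in resampled]
    # Reweight with a scoring function (learned in PF-GNN) and renormalize.
    scores = [reweight(p) for p in moved]
    total = sum(scores)
    return moved, [s / total for s in scores]

rng = random.Random(0)
particles = [0.0] * 8                    # stand-ins for K graph colorings/embeddings
weights = [1.0 / 8] * 8
transition = lambda p, r: p + r.gauss(0, 1)
reweight = lambda p: math.exp(-abs(p))   # toy score favouring particles near 0
for _ in range(3):
    particles, weights = particle_filter_step(particles, weights, transition, reweight, rng)
print(len(particles), round(sum(weights), 6))  # 8 1.0
```

The number of particles K stays fixed and the weights always sum to one, which is what lets the final graph representation be read off as a weighted aggregate of the K embeddings.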
Thereafter , many models have been proposed that are more expressive than 1-WL . Prominent ones are k-GNNs and their equivalent models ( Morris et al. , 2019 ; Maron et al. , 2019 ; Vignac et al. , 2020 ; Morris et al. , 2020 ) , but they are difficult to scale beyond 3-WL . Other methods need to preprocess the graph to find substructures ( Bouritsas et al. , 2020 ; Li et al. , 2020 ) , which are not task-specific and may incur more computation time . Another way to improve expressivity of GNNs is to introduce randomness ( You et al. , 2019 ; Sato et al. , 2020 ; Abboud et al. , 2020 ; Zambon et al. , 2020 ) . However , adding uniform randomness interferes with the learning task , and hence these models have not shown good performance on real-world datasets . In PF-GNN , randomness is controlled as only a subset of nodes are sampled . Furthermore , since all components are learned discriminatively , the representation is tuned towards the end task . Our approach of learning with particle filter updates is inspired by recent works which make particle filters differentiable ( Karkus et al. , 2018 ; Ma et al. , 2019 ; 2020 ) . 3 PRELIMINARIES . Let Gn be the set of all graphs of n vertices with vertex set V = { 1 , 2 , . . . , n } and edge set E . An isomorphism between any two graphs G , H ∈ Gn is a bijection f : VG → VH such that ( u , v ) ∈ EG ⇐⇒ ( f ( u ) , f ( v ) ) ∈ EH . An automorphism of G is an isomorphism that maps G onto itself . One way of identifying non-isomorphic graphs is by generating unique colorings for graphs based on their structures in a permutation-invariant way and then , comparing them . A colouring of the graph G ∈ Gn is a surjective function π : V → N , which assigns each vertex to a natural number . The number of colors is denoted by |π| . The set of vertices sharing the same color form a color cell in the coloring . We denote the set of colored vertices with π = { p1 , p2 , . . . , pk } where pi is a color cell .
A coloring in which every vertex gets a distinct color is called a discrete colouring , i.e . |π| = n. For any two colorings π , π′ , we say that π′ is finer than or equal to π , written π′ ⪯ π , if π ( v ) < π ( w ) ⇒ π′ ( v ) < π′ ( w ) for all v , w ∈ V , i.e . each cell of π′ is a subset of a cell of π , but the converse is not true . Coloring is also loosely called a partition , since it partitions V into cells . A coloring is an equitable partition when any two vertices of the same color are adjacent to the same number of vertices of each color . The 1-dimensional Weisfeiler Lehman test is a simple and fast procedure to color graphs . It starts with the same initial color for all vertices . Then , it iteratively refines the coloring of the graph by mapping the tuple of the color of a vertex and its neighbors to a distinct new color , i.e . at step t , π_{t+1} ( v ) = HASH ( π_t ( v ) , { { π_t ( u ) : u ∈ N ( v ) } } ) , where { { } } denotes a multiset and N ( v ) is the set of adjacent vertices of v. The algorithm terminates when π forms an equitable partition . 3.1 SEARCH TREE OF COLORINGS Equitable coloring can not be further refined due to the symmetry in the graph structure . To break the symmetry , exact isomorphism solvers employ the technique of individualization-refinement to generate further refined colorings . Individualization is the technique of breaking the symmetry in the graph by distinguishing a vertex with a new unique color . Once a vertex is individualized , 1-WL refinement is used to refine the coloring until a further refined equitable partition is reached . However , this comes at a cost . In order to maintain permutation-invariance , we have to individualize and refine all the vertices with the same color . As a result , we get as many refined colorings as the number of vertices with the chosen color .
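The 1-WL procedure just defined fits in a few lines. In this sketch the HASH step is replaced by explicit relabeling of (color, neighbor-color multiset) signatures, which is equivalent up to color names; it also reproduces the classic failure case from this literature, where a 6-cycle and two disjoint triangles receive identical stable colorings.

```python
def wl_refine(adj):
    """1-WL colour refinement; adj maps vertex -> list of neighbours."""
    colors = {v: 0 for v in adj}
    for _ in range(len(adj)):  # stabilizes in at most |V| rounds
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colors:  # equitable partition reached
            break
        colors = new
    return colors

def color_histogram(colors):
    """Multiset of colours -- the graph-level summary 1-WL can compare."""
    return sorted(colors.values())

# C6 (one 6-cycle) vs 2xC3 (two disjoint triangles): non-isomorphic, yet
# both are 2-regular, so 1-WL gives every vertex the same colour in each.
c6  = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
c33 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(color_histogram(wl_refine(c6)) == color_histogram(wl_refine(c33)))  # True
```

This stalled state, where the equitable partition hides real structural differences, is precisely what the individualization step in Section 3.1 is designed to break.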
This procedure can be repeated iteratively for each of the refined colorings and the process takes the shape of a rooted search tree of equitable colorings . The search tree has the initial 1-WL coloring at the root . Then a non-singleton color cell called the target cell is chosen and all vertices of the target cell are individualized in parallel and thereafter , colorings are refined . The refined equitable colorings form the child nodes of the root . After sufficient repetitions , we get a set of discrete colorings at the leaves . This search tree is unique to the isomorphism class of graphs , i.e . all non-isomorphic graphs produce distinct search trees , and consequently , discrete colorings at the leaves . An example of a search tree is shown in Fig . 2 . 4 PROPOSED METHOD . In this section , we first define a universal representation of any n-vertex graph , then propose a practical approximation of the representation . For any arbitrary n-vertex graph G , we aim to learn its neural representation f ( G ) that can uniquely identify G. To achieve this , we can construct a computation graph which mimics the search tree of colorings . We propose to design a neural equivalent of the search tree . To do this , we can model the target-cell selector using a learnable function that outputs scalar scores on the vertices based on their embeddings , and then individualize those which have the maximum score . Note that node embeddings in a GNN are analogous to colors in 1-WL refinement . We can then adopt a GNN to produce a refined set of embeddings . If we have discrete embeddings after T iterations , then f ( G ) can be computed as , f ( G ) = ρ ( ∑_I ψ ( G , π_T^I ) ) ( 1 ) where I is a sequence of vertices individualized iteratively ( I identifies a single path from root to a leaf in the search tree ) and the summation is over all such paths . π_T^I is the discrete coloring ( discrete embeddings ) of G after individualizing and refining with vertices in I .
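A one-level version of the path enumeration behind Eqn. ( 1 ) can be sketched directly: individualize each vertex in turn, refine with 1-WL, and pool the resulting colorings with an order-independent aggregation. Sorting stands in for the learned ψ and ρ, which this sketch does not attempt to model. On the 6-cycle vs. two-triangles pair that plain 1-WL cannot separate, a single individualization-refinement level already distinguishes the two graphs.

```python
def refine(adj, colors):
    """1-WL refinement starting from an arbitrary initial colouring."""
    for _ in range(len(adj)):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colors:
            return colors
        colors = new
    return colors

def ir_signature(adj):
    """Individualize each vertex in turn, refine, and pool the resulting colour
    histograms as a multiset -- a one-level, hand-rolled analogue of the
    permutation-invariant sum over paths in Eqn. (1)."""
    sigs = []
    for v in adj:
        colors = {u: 0 for u in adj}
        colors[v] = 1                       # individualization: a unique new colour
        colors = refine(adj, colors)        # refinement propagates the asymmetry
        sigs.append(tuple(sorted(colors.values())))
    return tuple(sorted(sigs))              # permutation-invariant pooling

c6  = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
c33 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(ir_signature(c6) != ir_signature(c33))  # True: one IR level separates them
```

Full IR solvers recurse on each refined coloring down to discrete leaves and prune using automorphisms; this one-level sketch only illustrates why individualization breaks the symmetry that stalls 1-WL, and why pooling over all individualization choices keeps the result permutation-invariant.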
ψ is a multiset function approximator and ρ is an MLP . Theorem 1 presents the expressive power of f ( G ) . Theorem 1 . Consider any n-vertex graph G with no distinct node attributes . Assume we use universal multiset function approximators for target-cell selection and the graph pooling function ψ , and a GNN with 1-WL-equivalent expressive power for color refinement ; then the representation f ( G ) in Eqn . 1 is permutation-invariant and can uniquely identify G. The proof of Theorem 1 is provided in Appendix A.3 . | The paper introduces PF-GNN, a new method for individualizing node representations with the aim of increasing GNN expressiveness. It is inspired by graph isomorphism algorithms' individualize-and-refine (IR) approaches, which individualize individual nodes, then refine colorings, and then aggregate all colorings across the search tree of possible individualization paths. Specifically, PF-GNN approximately emulates IR by repeating a random sampling k times and returning an aggregate coloring from each sample. For every sample, T iterations are made, such that a sample node is selected for individualization, and then representations are updated using a GNN. However, PF-GNN introduces learnable components to this mechanism, such that affinities for node selection (yielding a distribution over all possible individualization candidate nodes) are learned, rather than simple uniform sampling. Furthermore, beliefs for sampling are iteratively updated in keeping with a particle filtering approach (PF), and node individualization itself is performed using a learnable function, to enable more flexibility and data-driven individualization. Finally, the model is empirically evaluated on a series of datasets and compared with existing standard GNNs and individualization techniques, e.g., random node initialization (RNI).
In these experiments, PF-GNN is shown to improve model expressiveness, to be more resilient than RNI for larger numbers of nodes, and to achieve strong performance on real-world datasets. | SP:99978853ae41e1d55321156280021149627fb8da
PF-GNN: Differentiable particle filtering based approximation of universal graph representations | 1 INTRODUCTION . In recent years , Graph Neural Networks ( GNNs ) have emerged as the learning models of choice on graph-structured data . GNNs operate on a message passing paradigm ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Veličković et al. , 2017 ; Gilmer et al. , 2017 ) , where nodes maintain latent embeddings which are updated iteratively based on their neighborhood . This way of representation learning on graphs provides the necessary inductive bias to encode the structural information of graphs into the node embeddings . The process of message passing in GNNs is equivalent to the vertex color-refinement procedure or the 1-dimensional Weisfeiler-Lehman ( WL ) test used to distinguish non-isomorphic graphs ( Xu et al. , 2018 ; Morris et al. , 2019 ) . Consequently , GNNs suffer from the limitations of 1-WL color refinement in their expressive power . In each step of 1-WL color refinement , two vertices get different colors if the colors of their neighboring vertices are different . The procedure stabilizes after a few steps when the colors can not be further refined . Due to the symmetry in graph structures , certain non-isomorphic graphs induce the same colors upon 1-WL refinement . Higher-order WL refinements and their neural k-GNN versions break some of the symmetry by operating on k-tuples of nodes . They are more expressive but require exponentially increasing computation time and hence are not practical for large k. Motivated by the fact that a fully expressive graph representation learning model should be able to produce embeddings that can distinguish any two non-isomorphic graphs ( Chen et al. , 2019b ) , we turn to exact graph isomorphism solvers for better inductive biases in our learning algorithm .
Most of the practical graph isomorphism solvers use 1-WL in combination with the traditional technique of individualization and refinement ( IR ) ( McKay & Piperno , 2014 ; Junttila & Kaski , 2011 ) for coloring the graph . Individualization is the process of artificially introducing asymmetry by recoloring a vertex and thereby distinguishing it from the rest of the vertices . Refinement refers to 1-WL refinement , which can propagate this information to recolor the rest of the graph . The two graphs shown in Fig . 1 are not distinguishable after 1-WL refinement but induce different colorings after one IR step . The IR process is repeated after each refinement until every vertex gets a unique color . However , in order to maintain permutation-invariance , whenever a vertex is individualized , other vertices that have the same color need to be individualized as well and thereafter refined . This process generates a search tree with colorings as nodes , and can grow exponentially in the worst case . In this work , we propose to learn graph representations with the inductive bias of the search tree generated by graph-isomorphism solvers . However , generating the complete search tree is computationally expensive . Isomorphism solvers prune the search tree by detecting automorphisms on the fly as they generate the tree . Nevertheless , detecting automorphisms is non-trivial from the perspective of end-to-end discriminative learning and hence , we need to approximate it . To this end , we first define a universal neural graph representation based on the search tree of colorings . Then we take a probabilistic view and approximate it by sampling multiple paths from the root to the leaves of the search tree .
We observe that the process of sampling a vertex , individualizing it and the subsequent refinement resembles the state transition in sequential state estimation problems . Hence , we model our sampling approach on the principled technique of Sequential Monte Carlo or Particle Filtering . We introduce Particle Filter Graph Neural Networks ( PF-GNN ) , an end-to-end learnable algorithm which maintains a weighted belief distribution over a set of K graph colorings/embeddings . With each step of IR , PF-GNN transitions to a new set of embeddings . It then updates the belief by re-weighting each particle with a discriminatively learned function that measures the quality of the refinement induced after the transition . With this inductive bias , the belief evolves over time to be more discriminative of the input graph . After a few steps , we can use the belief along with the embeddings to generate the final representation of the graph . Our approach is simple , efficient , parallelizable , easy to implement and can learn representations to distinguish non-isomorphic graphs beyond 1-WL GNNs . We evaluate PF-GNN over a diverse set of datasets on tasks which are provably not learnable with 1-WL-equivalent GNNs . Furthermore , our experiments on real-world benchmark datasets show the strong performance of PF-GNN over other more expressive GNNs . 2 RELATED WORK . It was established by Xu et al . ( 2018 ) and Morris et al . ( 2019 ) that GNNs are limited in expressive power and can not go beyond the 1-WL test for graph isomorphism . Later analyses of GNNs have shown other limits of GNNs , such as counting substructures and detecting graph properties ( Arvind et al. , 2020 ; Chen et al. , 2020 ; Loukas , 2019 ; Dehmamy et al. , 2019 ; Srinivasan & Ribeiro , 2019 ) . Chen et al . ( 2019b ) further formalizes the intuition that there is an equivalence between learning universal graph representations and solving the graph isomorphism problem .
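The generic Sequential Monte Carlo step that PF-GNN builds on (transition, reweight, resample) can be sketched as follows. This is a schematic with numpy, not the paper's model: `transition` and `score` stand in for the learned individualization-refinement step and the learned particle-reweighting function.

```python
import numpy as np

def pf_step(particles, weights, transition, score, rng):
    """One Sequential Monte Carlo step over K particles.

    particles: array of K states; weights: normalized belief over them.
    """
    K = len(particles)
    particles = np.array([transition(p, rng) for p in particles])  # propose
    weights = weights * np.array([score(p) for p in particles])    # reweight
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < K / 2:
        idx = rng.choice(K, size=K, p=weights)
        particles, weights = particles[idx], np.full(K, 1.0 / K)
    return particles, weights

rng = np.random.default_rng(0)
ps = np.zeros(8)
ws = np.full(8, 1.0 / 8)
# Toy model: random-walk transition; the score peaks near state value 1.
ps, ws = pf_step(ps, ws,
                 lambda p, r: p + r.normal(),
                 lambda p: np.exp(-(p - 1.0) ** 2), rng)
print(ps.shape, np.isclose(ws.sum(), 1.0))
```

In PF-GNN the "state" of each particle is a whole set of node embeddings rather than a scalar, but the belief update follows this same propose-reweight-resample template.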
Thereafter , many models have been proposed that are more expressive than 1-WL . Prominent ones are k-GNNs and their equivalent models ( Morris et al. , 2019 ; Maron et al. , 2019 ; Vignac et al. , 2020 ; Morris et al. , 2020 ) , but they are difficult to scale beyond 3-WL . Other methods need to preprocess the graph to find substructures ( Bouritsas et al. , 2020 ; Li et al. , 2020 ) , which are not task-specific and may incur more computation time . Another way to improve the expressivity of GNNs is to introduce randomness ( You et al. , 2019 ; Sato et al. , 2020 ; Abboud et al. , 2020 ; Zambon et al. , 2020 ) . However , adding uniform randomness interferes with the learning task , and hence these models have not shown good performance on real-world datasets . In PF-GNN , randomness is controlled as only a subset of nodes are sampled . Furthermore , since all components are learned discriminatively , the representation is tuned towards the end task . Our approach of learning with particle filter updates is inspired by recent works which make particle filters differentiable ( Karkus et al. , 2018 ; Ma et al. , 2019 ; 2020 ) . 3 PRELIMINARIES . Let Gn be the set of all graphs of n vertices with vertex set V = { 1 , 2 , . . . , n } and edge set E . An isomorphism between any two graphs G , H ∈ Gn is a bijection f : VG → VH such that ( u , v ) ∈ EG ⇐⇒ ( f ( u ) , f ( v ) ) ∈ EH . An automorphism of G is an isomorphism that maps G onto itself . One way of identifying non-isomorphic graphs is by generating unique colorings for graphs based on their structures in a permutation-invariant way and then comparing them . A colouring of the graph G ∈ Gn is a surjective function π : V → N , which assigns each vertex to a natural number . The number of colors is denoted by |π| . The set of vertices sharing the same color forms a color cell in the coloring . We denote the coloring with π = { p1 , p2 , . . . , pk } where pi is a color cell .
A coloring in which every vertex gets a distinct color is called a discrete colouring , i.e . |π| = n. For any two colorings π , π′ , we say that π′ is finer than or equal to π , written π′ ⪯ π , if π ( v ) < π ( w ) ⇒ π′ ( v ) < π′ ( w ) for all v , w ∈ V , i.e . each cell of π′ is a subset of a cell of π , but the converse is not true . A coloring is also loosely called a partition , since it partitions V into cells . A coloring is an equitable partition when any two vertices of the same color are adjacent to the same number of vertices of each color . The 1-dimensional Weisfeiler-Lehman test is a simple and fast procedure to color graphs . It starts with the same initial color for all vertices . Then , it iteratively refines the coloring of the graph by mapping the tuple of the color of a vertex and its neighbors to a distinct new color , i.e . at step t , πt+1 ( v ) = HASH ( πt ( v ) , { { πt ( u ) , u ∈ N ( v ) } } ) , where { { } } denotes a multiset and N ( v ) is the set of adjacent vertices of v. The algorithm terminates when π forms an equitable partition . 3.1 SEARCH TREE OF COLORINGS An equitable coloring can not be further refined due to the symmetry in the graph structure . To break the symmetry , exact isomorphism solvers employ the technique of individualization-refinement to generate further refined colorings . Individualization is the technique of breaking the symmetry in the graph by distinguishing a vertex with a new unique color . Once a vertex is individualized , 1-WL refinement is used to refine the coloring until a further refined equitable partition is reached . However , this comes at a cost . In order to maintain permutation-invariance , we have to individualize and refine all the vertices with the same color . As a result , we get as many refined colorings as the number of vertices with the chosen color .
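The equitable-partition condition above (any two same-colored vertices see the same number of neighbors of each color) can be checked directly. A small sketch, assuming the graph is given as an adjacency list; names are illustrative:

```python
from collections import Counter

def is_equitable(adj, colors):
    """True iff any two same-colored vertices have identical
    per-color neighbor counts."""
    profile = {}
    for v in adj:
        counts = Counter(colors[u] for u in adj[v])  # neighbor-color multiset
        c = colors[v]
        if c in profile and profile[c] != counts:
            return False
        profile.setdefault(c, counts)
    return True

# A path a-b-c: one uniform color is NOT equitable (the endpoints have one
# neighbor, the middle vertex has two), but {ends: 0, middle: 1} is.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(is_equitable(path, {'a': 0, 'b': 0, 'c': 0}))  # -> False
print(is_equitable(path, {'a': 0, 'b': 1, 'c': 0}))  # -> True
```

1-WL refinement terminates exactly when this predicate becomes true for the current coloring.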
This procedure can be repeated iteratively for each of the refined colorings and the process takes the shape of a rooted search tree of equitable colorings . The search tree has the initial 1-WL coloring at the root . Then a non-singleton color cell called the target cell is chosen and all vertices of the target cell are individualized in parallel and thereafter , colorings are refined . The refined equitable colorings form the child nodes of the root . After sufficient repetitions , we get a set of discrete colorings at the leaves . This search tree is unique to the isomorphism class of graphs , i.e . all non-isomorphic graphs produce distinct search trees and , consequently , distinct discrete colorings at the leaves . An example of a search tree is shown in Fig . 2 . 4 PROPOSED METHOD . In this section , we first define a universal representation of any n-vertex graph , then propose a practical approximation of the representation . For any arbitrary n-vertex graph G , we aim to learn its neural representation f ( G ) that can uniquely identify G. To achieve this , we can construct a computation graph which mimics the search tree of colorings . We propose to design a neural equivalent of the search tree . To do this , we can model the target-cell selector using a learnable function that outputs scalar scores on the vertices based on their embeddings , and then individualize those which have the maximum score . Note that node embeddings in a GNN are analogous to colors in 1-WL refinement . We can then adopt a GNN to produce a refined set of embeddings . If we have discrete embeddings after T iterations , then f ( G ) can be computed as , f ( G ) = ρ ( ∑_I ψ ( G , π^I_T ) ) ( 1 ) where I is a sequence of vertices individualized iteratively ( I identifies a single path from the root to a leaf in the search tree ) and the summation is over all such paths . π^I_T is the discrete coloring ( discrete embeddings ) of G after individualizing and refining with the vertices in I .
ψ is a multiset function approximator and ρ is an MLP . Theorem 1 presents the expressive power of f ( G ) . Theorem 1 . Consider any n-vertex graph G with no distinct node attributes . Assume we use universal multiset function approximators for target-cell selection and the graph pooling function ψ , and a GNN with 1-WL-equivalent expressive power for color refinement ; then the representation f ( G ) in Eqn . 1 is permutation-invariant and can uniquely identify G. The proof of Theorem 1 is provided in Appendix A.3 . | This paper presents a neural version of the individualization-refinement (IR) architecture for improving the expressiveness of GNNs in terms of the isomorphism test. As IR is the dominant approach in practical graph isomorphism testing, adapting IR to GNNs is a novel and important idea. As IR suffers from an exponential number of branches, the paper adapts a particle filtering algorithm to sample K paths to approximate the full version of the IR algorithm. Simulation and real-world datasets are used to demonstrate the improvement over base GNNs. | SP:99978853ae41e1d55321156280021149627fb8da
PF-GNN: Differentiable particle filtering based approximation of universal graph representations | 1 INTRODUCTION . In recent years , Graph Neural Networks ( GNNs ) have emerged as the learning models of choice on graph-structured data . GNNs operate on a message passing paradigm ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Veličković et al. , 2017 ; Gilmer et al. , 2017 ) , where nodes maintain latent embeddings which are updated iteratively based on their neighborhood . This way of representation learning on graphs provides the necessary inductive bias to encode the structural information of graphs into the node embeddings . The process of message passing in GNNs is equivalent to the vertex color-refinement procedure or the 1-dimensional Weisfeiler-Lehman ( WL ) test used to distinguish non-isomorphic graphs ( Xu et al. , 2018 ; Morris et al. , 2019 ) . Consequently , GNNs suffer from the limitations of 1-WL color refinement in their expressive power . In each step of 1-WL color refinement , two vertices get different colors if the colors of their neighboring vertices are different . The procedure stabilizes after a few steps when the colors can not be further refined . Due to the symmetry in graph structures , certain non-isomorphic graphs induce the same colors upon 1-WL refinement . Higher-order WL refinements and their neural k-GNN versions break some of the symmetry by operating on k-tuples of nodes . They are more expressive but require exponentially increasing computation time and hence are not practical for large k. Motivated by the fact that a fully expressive graph representation learning model should be able to produce embeddings that can distinguish any two non-isomorphic graphs ( Chen et al. , 2019b ) , we turn to exact graph isomorphism solvers for better inductive biases in our learning algorithm .
Most of the practical graph isomorphism solvers use 1-WL in combination with the traditional technique of individualization and refinement ( IR ) ( McKay & Piperno , 2014 ; Junttila & Kaski , 2011 ) for coloring the graph . Individualization is the process of artificially introducing asymmetry by recoloring a vertex and thereby distinguishing it from the rest of the vertices . Refinement refers to 1-WL refinement , which can propagate this information to recolor the rest of the graph . The two graphs shown in Fig . 1 are not distinguishable after 1-WL refinement but induce different colorings after one IR step . The IR process is repeated after each refinement until every vertex gets a unique color . However , in order to maintain permutation-invariance , whenever a vertex is individualized , other vertices that have the same color need to be individualized as well and thereafter refined . This process generates a search tree with colorings as nodes , and can grow exponentially in the worst case . In this work , we propose to learn graph representations with the inductive bias of the search tree generated by graph-isomorphism solvers . However , generating the complete search tree is computationally expensive . Isomorphism solvers prune the search tree by detecting automorphisms on the fly as they generate the tree . Nevertheless , detecting automorphisms is non-trivial from the perspective of end-to-end discriminative learning and hence , we need to approximate it . To this end , we first define a universal neural graph representation based on the search tree of colorings . Then we take a probabilistic view and approximate it by sampling multiple paths from the root to the leaves of the search tree .
We observe that the process of sampling a vertex , individualizing it and the subsequent refinement resembles the state transition in sequential state estimation problems . Hence , we model our sampling approach on the principled technique of Sequential Monte Carlo or Particle Filtering . We introduce Particle Filter Graph Neural Networks ( PF-GNN ) , an end-to-end learnable algorithm which maintains a weighted belief distribution over a set of K graph colorings/embeddings . With each step of IR , PF-GNN transitions to a new set of embeddings . It then updates the belief by re-weighting each particle with a discriminatively learned function that measures the quality of the refinement induced after the transition . With this inductive bias , the belief evolves over time to be more discriminative of the input graph . After a few steps , we can use the belief along with the embeddings to generate the final representation of the graph . Our approach is simple , efficient , parallelizable , easy to implement and can learn representations to distinguish non-isomorphic graphs beyond 1-WL GNNs . We evaluate PF-GNN over a diverse set of datasets on tasks which are provably not learnable with 1-WL-equivalent GNNs . Furthermore , our experiments on real-world benchmark datasets show the strong performance of PF-GNN over other more expressive GNNs . 2 RELATED WORK . It was established by Xu et al . ( 2018 ) and Morris et al . ( 2019 ) that GNNs are limited in expressive power and can not go beyond the 1-WL test for graph isomorphism . Later analyses of GNNs have shown other limits of GNNs , such as counting substructures and detecting graph properties ( Arvind et al. , 2020 ; Chen et al. , 2020 ; Loukas , 2019 ; Dehmamy et al. , 2019 ; Srinivasan & Ribeiro , 2019 ) . Chen et al . ( 2019b ) further formalizes the intuition that there is an equivalence between learning universal graph representations and solving the graph isomorphism problem .
Thereafter , many models have been proposed that are more expressive than 1-WL . Prominent ones are k-GNNs and their equivalent models ( Morris et al. , 2019 ; Maron et al. , 2019 ; Vignac et al. , 2020 ; Morris et al. , 2020 ) , but they are difficult to scale beyond 3-WL . Other methods need to preprocess the graph to find substructures ( Bouritsas et al. , 2020 ; Li et al. , 2020 ) , which are not task-specific and may incur more computation time . Another way to improve the expressivity of GNNs is to introduce randomness ( You et al. , 2019 ; Sato et al. , 2020 ; Abboud et al. , 2020 ; Zambon et al. , 2020 ) . However , adding uniform randomness interferes with the learning task , and hence these models have not shown good performance on real-world datasets . In PF-GNN , randomness is controlled as only a subset of nodes are sampled . Furthermore , since all components are learned discriminatively , the representation is tuned towards the end task . Our approach of learning with particle filter updates is inspired by recent works which make particle filters differentiable ( Karkus et al. , 2018 ; Ma et al. , 2019 ; 2020 ) . 3 PRELIMINARIES . Let Gn be the set of all graphs of n vertices with vertex set V = { 1 , 2 , . . . , n } and edge set E . An isomorphism between any two graphs G , H ∈ Gn is a bijection f : VG → VH such that ( u , v ) ∈ EG ⇐⇒ ( f ( u ) , f ( v ) ) ∈ EH . An automorphism of G is an isomorphism that maps G onto itself . One way of identifying non-isomorphic graphs is by generating unique colorings for graphs based on their structures in a permutation-invariant way and then comparing them . A colouring of the graph G ∈ Gn is a surjective function π : V → N , which assigns each vertex to a natural number . The number of colors is denoted by |π| . The set of vertices sharing the same color forms a color cell in the coloring . We denote the coloring with π = { p1 , p2 , . . . , pk } where pi is a color cell .
A coloring in which every vertex gets a distinct color is called a discrete colouring , i.e . |π| = n. For any two colorings π , π′ , we say that π′ is finer than or equal to π , written π′ ⪯ π , if π ( v ) < π ( w ) ⇒ π′ ( v ) < π′ ( w ) for all v , w ∈ V , i.e . each cell of π′ is a subset of a cell of π , but the converse is not true . A coloring is also loosely called a partition , since it partitions V into cells . A coloring is an equitable partition when any two vertices of the same color are adjacent to the same number of vertices of each color . The 1-dimensional Weisfeiler-Lehman test is a simple and fast procedure to color graphs . It starts with the same initial color for all vertices . Then , it iteratively refines the coloring of the graph by mapping the tuple of the color of a vertex and its neighbors to a distinct new color , i.e . at step t , πt+1 ( v ) = HASH ( πt ( v ) , { { πt ( u ) , u ∈ N ( v ) } } ) , where { { } } denotes a multiset and N ( v ) is the set of adjacent vertices of v. The algorithm terminates when π forms an equitable partition . 3.1 SEARCH TREE OF COLORINGS An equitable coloring can not be further refined due to the symmetry in the graph structure . To break the symmetry , exact isomorphism solvers employ the technique of individualization-refinement to generate further refined colorings . Individualization is the technique of breaking the symmetry in the graph by distinguishing a vertex with a new unique color . Once a vertex is individualized , 1-WL refinement is used to refine the coloring until a further refined equitable partition is reached . However , this comes at a cost . In order to maintain permutation-invariance , we have to individualize and refine all the vertices with the same color . As a result , we get as many refined colorings as the number of vertices with the chosen color .
This procedure can be repeated iteratively for each of the refined colorings and the process takes the shape of a rooted search tree of equitable colorings . The search tree has the initial 1-WL coloring at the root . Then a non-singleton color cell called the target cell is chosen and all vertices of the target cell are individualized in parallel and thereafter , colorings are refined . The refined equitable colorings form the child nodes of the root . After sufficient repetitions , we get a set of discrete colorings at the leaves . This search tree is unique to the isomorphism class of graphs , i.e . all non-isomorphic graphs produce distinct search trees and , consequently , distinct discrete colorings at the leaves . An example of a search tree is shown in Fig . 2 . 4 PROPOSED METHOD . In this section , we first define a universal representation of any n-vertex graph , then propose a practical approximation of the representation . For any arbitrary n-vertex graph G , we aim to learn its neural representation f ( G ) that can uniquely identify G. To achieve this , we can construct a computation graph which mimics the search tree of colorings . We propose to design a neural equivalent of the search tree . To do this , we can model the target-cell selector using a learnable function that outputs scalar scores on the vertices based on their embeddings , and then individualize those which have the maximum score . Note that node embeddings in a GNN are analogous to colors in 1-WL refinement . We can then adopt a GNN to produce a refined set of embeddings . If we have discrete embeddings after T iterations , then f ( G ) can be computed as , f ( G ) = ρ ( ∑_I ψ ( G , π^I_T ) ) ( 1 ) where I is a sequence of vertices individualized iteratively ( I identifies a single path from the root to a leaf in the search tree ) and the summation is over all such paths . π^I_T is the discrete coloring ( discrete embeddings ) of G after individualizing and refining with the vertices in I .
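The neural target-cell selection described above (score vertices from their embeddings, then individualize the argmax) can be sketched as follows. This is a schematic with numpy, not the paper's model: `W` stands in for the learnable scoring function, and individualization is rendered as appending a one-hot indicator channel to the chosen vertex's embedding.

```python
import numpy as np

def individualize_step(emb, W):
    """Score vertices from embeddings and individualize the top-scoring one.

    emb: (n, d) node embeddings; W: (d,) weights of a linear scorer.
    Returns the embeddings with an extra indicator channel marking the
    individualized vertex (its artificial "new unique color").
    """
    scores = emb @ W                      # scalar score per vertex
    chosen = int(np.argmax(scores))       # deterministic argmax selection
    flag = np.zeros((emb.shape[0], 1))
    flag[chosen] = 1.0                    # unique marker for the chosen vertex
    return np.concatenate([emb, flag], axis=1), chosen

emb = np.array([[0.1, 0.2], [0.9, 0.1], [0.4, 0.4]])
new_emb, chosen = individualize_step(emb, np.array([1.0, 0.0]))
print(chosen, new_emb.shape)  # -> 1 (3, 3)
```

In the full model a GNN then refines the augmented embeddings, playing the role of 1-WL refinement after individualization.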
ψ is a multiset function approximator and ρ is an MLP . Theorem 1 presents the expressive power of f ( G ) . Theorem 1 . Consider any n-vertex graph G with no distinct node attributes . Assume we use universal multiset function approximators for target-cell selection and the graph pooling function ψ , and a GNN with 1-WL-equivalent expressive power for color refinement ; then the representation f ( G ) in Eqn . 1 is permutation-invariant and can uniquely identify G. The proof of Theorem 1 is provided in Appendix A.3 . | The authors propose PF-GNN for graph-level tasks. Their design is based on exact isomorphism solvers. They propose a sampling process with particle filter updates to alleviate the high-complexity issue. | SP:99978853ae41e1d55321156280021149627fb8da
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning | 1 INTRODUCTION . Deep neural networks have achieved great success in visual recognition . However , their ability for visual relational reasoning , i.e . reasoning with entities and their relationships in a visual scene , still falls short of human-level performance , especially in real-world domains . The challenges of common visual relational reasoning tasks , e.g . the HICO and GQA benchmarks ( Chao et al. , 2015 ; Hudson & Manning , 2019 ) , are manifested in three aspects : 1 ) object-centric learning : to identify objects ( including humans ) as well as their visual properties ; 2 ) relational reasoning : to infer all pairwise relationships between the object entities ; and 3 ) systematic generalization : to reason with visual entities and relations on novel object-relation combinations and extrapolate to longer reasoning hops ( Bahdanau et al. , 2018 ; Hupkes et al. , 2020 ) . While existing models have leveraged pre-trained object detectors ( Ren et al. , 2015 ; Jiang et al. , 2020 ) and/or explicit symbolic reasoning methods ( Yi et al. , 2018 ) to tackle these challenges , they leave ample space for improvement . More recently , vision transformers ( ViTs ) have become the new paradigm for visual recognition and have made great strides in a broad range of visual recognition tasks ( Dosovitskiy et al. , 2020 ; Wang et al. , 2021a ; Liu et al. , 2021 ) . Several properties of ViT make it a compelling model choice for visual relational reasoning . First , the self-attention mechanism in ViT offers a strong relational inductive bias , explicitly modeling the relations between input entities . Second , the design of images as patches facilitates the learning of object-centric representations , as evidenced by recent works , e.g . DINO and EsViT ( Caron et al. , 2021 ; Li et al.
, 2021 ) , that demonstrate that ViTs trained with self-supervised learning ( SSL ) capture objects in the image without label annotations . To investigate the efficacy of the ViT backbone for visual relational reasoning , in particular on systematic generalization , we introduce new systematic splits to canonical benchmarks and compare the ViT backbone with the CNN backbone . Results on GQA show that switching to ViTs in the MCAN model ( Yu et al. , 2019 ) brings an immediate 11 % gain in accuracy . However , the performance gap between the original GQA testing split and the new systematic split remains considerable ( 15 % in accuracy ) for both backbones . It suggests that generic ViTs can still be improved to tackle the reasoning task , especially on systematic generalization . Recent works have shown that neural networks can learn representations with better generalization by learning certain auxiliary tasks of predicting human-specified concepts ( Hill et al. , 2020 ; Koh et al. , 2020 ) . A natural question emerges : can we exploit these concepts to improve the reasoning ability of ViTs ? Our approach is to make better use of concepts ( e.g . the labels in the original training dataset ) in the ViT training for better relational reasoning . To this end , we first introduce a novel concept-feature dictionary , where each key is a concept and its value is a queue of image features with the same concept , as shown in Figure 1 . It allows dynamic and flexible image feature retrieval during training . Based on this dictionary , we then augment the canonical ViT training pipeline with two auxiliary tasks : To facilitate high-level reasoning about relationships , we design a global task that helps cluster images with the same concept together to produce semantically consistent relational representations .
To learn better object-centric representations , we develop a local task that guides the model to discover object-centric semantic correspondence across images ( Liu et al. , 2010 ) . Thanks to the plug-and-play nature of our concept-feature dictionary , our auxiliary tasks can be easily incorporated into the existing ViT training pipeline without additional input pre-processing . We term the resulting model concept-guided vision transformer ( or RelViT for short ) . We evaluate our method on two standard visual relational reasoning benchmarks : HICO and GQA . Beyond the original independent and identically distributed ( I.I.D . ) training-testing split , we introduce a systematic split for each dataset to examine the ability of systematic generalization , i.e. , recognizing novel object-relation combinations . Our results show that RelViT significantly outperforms previous approaches . On HICO , it improves the best baseline by 16 % , 43 % , and 7 % on the original non-systematic and two new systematic splits , respectively , as shown in Figure 2 . On GQA , it further closes the gap in overall accuracy between models using visual backbone features only and models using additional bounding box features ( obtained from pre-trained object detectors ) by 13 % and 18 % on the two splits . We also show that our method is compatible with various ViT variants and robust to hyperparameters . Finally , our qualitative inspection indicates that RelViT does improve ViTs on learning relational and object-centric representations . Our main contributions are summarized as follows : • We propose RelViT , by incorporating visual relational concepts into the ViT training with the newly-introduced concept-guided global and local auxiliary tasks , where a concept-feature dictionary is proposed to enable dynamic and flexible image feature retrieval with the concept keys .
• In extensive experiments on the original non-systematic and new systematic splits of the HICO and GQA datasets , we demonstrate the advantages of RelViT over various strong baselines for visual relational reasoning . • We perform ablation studies on RelViT to show the contributions of its key components , its compatibility with various ViT architectures , and its robustness to hyper-parameters . We provide qualitative results to confirm our improved learning of relational and object-centric representations . 2 METHODOLOGY . 2.1 BACKGROUND . Vision transformers . Here we briefly review the architecture of multi-staged ViTs ( Dosovitskiy et al. , 2020 ) . Given an image I ∈ R^{H×W×C} , a ViT model g first tokenizes the input into N image tokens ( patches ) with a resolution of ( T , T ) : tokenize ( I ) = [ t_1 , · · · , t_N ] , t_i ∈ R^{T²×C} , N = HW/T² , where ( H , W ) and C denote the original resolution and the number of channels of the image , respectively . Then in each stage , a patch embedding and a multi-head self-attention ( MHSA ) module are applied to these tokens to produce the input for the next stage . The final output of the ViT , g ( I ) , is a sequence of tokens [ z_1 , · · · , z_N ] that correspond to the aforementioned input tokens . For global prediction tasks , e.g . image categorization , a summary of the input image can be obtained by either inserting an extra [ CLS ] token into the input sequence of image tokens or performing an extra pooling operation over the output tokens ( Zhai et al. , 2021 ) . Self-supervised learning with DINO and EsViT . Our method is developed upon the recently proposed self-supervised learning ( SSL ) approach self-distillation with no labels ( DINO ) ( Caron et al. , 2021 ) and its follow-up EsViT ( Li et al. , 2021 ) . As shown in Figure 1 , their main idea is to encourage the output consistency between a teacher network gt and a student network gs , parameterized by θt and θs , respectively .
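The patch tokenization just described ( N = HW/T² tokens , each covering a T×T×C patch ) can be sketched with a reshape . The function below is an illustrative sketch , not the authors' implementation ; it flattens each t_i ∈ R^{T²×C} into a vector of length T·T·C :

```python
import numpy as np

def tokenize(image: np.ndarray, T: int) -> np.ndarray:
    """Split an (H, W, C) image into N = H*W / T^2 patches, each flattened
    to length T*T*C (the paper's t_i in R^(T^2 x C), flattened here)."""
    H, W, C = image.shape
    assert H % T == 0 and W % T == 0, "H and W must be divisible by the patch size T"
    # (H//T, T, W//T, T, C) -> (H//T, W//T, T, T, C) -> (N, T*T*C)
    patches = image.reshape(H // T, T, W // T, T, C).swapaxes(1, 2)
    return patches.reshape(-1, T * T * C)
```

For a 224×224×3 image with T = 16 this yields N = 196 tokens , matching N = HW/T² .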
Given an input image I , both networks map it to probability distributions Pt ( I ) = ht ( gt ( I ) ) and Ps ( I ) = hs ( gs ( I ) ) via an extra projection head h ( · ) . The teacher and student networks are updated alternately according to the following two rules : ( 1 ) For the student network : θs ← argmin_θs LGlobal , where LGlobal = −Pt ( I ) · logPs ( I ) ; ( 2 ) For the teacher network , θt is updated using an exponential moving average ( EMA ) of θs : θt ← λθt + ( 1 − λ ) θs , where λ controls the updating momentum . In practice , multiple views of the input image I are generated via data augmentation , and the teacher and student networks receive different views , preventing the task from being trivialized . EsViT further extends the image-level loss LGlobal to the patch level by applying dense SSL ( Wang et al. , 2021c ) to learn correspondence between the different views , enhancing performance on dense prediction . Readers are encouraged to refer to Caron et al . ( 2021 ) and Li et al . ( 2021 ) for more details about these two works . 2.2 RELVIT . RelViT is a concept-guided ViT that makes better use of concepts during ViT training for better relational reasoning . In this section , we first introduce a concept-feature dictionary to store and retrieve image features with their concept keys . We then augment the canonical ViT training pipeline with two auxiliary tasks : a global-level task and a local-level task , both concept-guided via the concept-feature dictionary . Intuitively , the global task helps cluster images with the same concept together to produce semantically consistent relational features , while the local task guides the model to discover object-centric semantic correspondence across images . Concept-feature dictionary . We assume the total number of concepts is M , and denote the set of all concepts by C = { c1 , · · · , cM } .
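The two alternating update rules above can be illustrated with a toy sketch in which the networks gt , gs and their heads are collapsed into single linear maps followed by a softmax . This is only a schematic of the alternation , with shapes , learning rate , and momentum chosen for illustration :

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def dino_step(theta_t, theta_s, x1, x2, lr=0.1, lam=0.996):
    """One alternating DINO-style update.
    (1) Student: gradient step on L_Global = -P_t(x1) . log P_s(x2).
    (2) Teacher: EMA  theta_t <- lam * theta_t + (1 - lam) * theta_s."""
    p_t = softmax(theta_t @ x1)                    # teacher output on view 1
    p_s = softmax(theta_s @ x2)                    # student output on view 2
    loss = -float(np.sum(p_t * np.log(p_s + 1e-12)))
    grad = np.outer(p_s - p_t, x2)                 # exact gradient for linear logits
    theta_s = theta_s - lr * grad                  # rule (1): student descends L_Global
    theta_t = lam * theta_t + (1 - lam) * theta_s  # rule (2): teacher EMA of the student
    return theta_t, theta_s, loss
```

Note the teacher receives no gradient ; it only tracks the student , which together with feeding the two networks different augmented views keeps the task from being trivialized .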
A concept-feature dictionary is denoted by D = { ( c1 , Q1 ) , · · · , ( cM , QM ) } , where each concept ci is associated with a queue Qi of image features . During training , each image I may come with multiple concepts , which we denote by CI ⊂ C . For instance , there may exist several human-object interactions in an image from the HICO dataset , each of which may correspond to a concept . As shown in Figure 1 , whenever a new image-concept pair ( I , CI ) arrives , we uniformly draw a concept code c from CI , look up the queue Q in the dictionary that corresponds to c , and then retrieve an image feature f from Q . Meanwhile , we pass the input image I through the teacher network gt to get the new image feature f ′ = gt ( I ) and enqueue it to Q . Note that if Q is already full , we first dequeue the oldest image feature from Q . During training , we use the retrieved image feature f for the two auxiliary tasks below , rather than the input image feature f ′ . Furthermore , the sampling strategy , i.e . how the image feature f is retrieved from Q , plays an important role in the overall performance of our method . We consider the following two sampling strategies :

Algorithm 1 RelViT : Concept-guided Vision Transformer
Input : A set of training images with concepts { ( I1 , C1 ) , · · · } , an image augmentation function aug ( · ) , momentum update factor λ , loss weight α , a concept-feature dictionary D , teacher and student ViTs gt and gs , parameterized by θt and θs , respectively .
1 : for ( Ii , Ci ) in { ( I1 , C1 ) , · · · } do
2 :   I(1)_i , I(2)_i = aug ( Ii ) , aug ( Ii )
3 :   Uniformly draw a concept code c ∼ Ci .
4 :   Retrieve Q from D with c .
5 :   if Q is not empty then
6 :     Sample a feature f ∼ Q , following one of the sampling tactics .
7 :     Laux = LGlobal ( f , gs ( I(2)_i ) ) + LLocal ( f , gs ( I(2)_i ) )
8 :     Insert the feature gt ( I(1)_i ) into Q ; if Q is full , remove the oldest feature .
9 :   else
10 :    Laux = LGlobal ( gt ( I(1)_i ) , gs ( I(2)_i ) ) + LLocal ( gt ( I(1)_i ) , gs ( I(2)_i ) )
11 :  end if
12 :  Update θs with the loss function L = Lmain + αLaux .
13 :  Update θt using an EMA : θt ← λθt + ( 1 − λ ) θs .
14 : end for

• Uniform sampling . Each image feature is drawn with equal probability from the queue , i.e . if there are N features in the queue , the probability of each feature being sampled is 1/N . This tactic encourages diversity among the retrieved image features , benefiting the overall performance . However , some older features in the queue may lag far behind the current model if the teacher network gt is updated quickly , eliciting unstable training . • “ Most-recent ” sampling . The sampling probability mass is allocated based on the freshness of the image features , so the most recent feature has the highest chance of being retrieved . Specifically , suppose we sample among the N newest features in the queue Q ( |Q| ≥ N ) . For the i-th newest feature , we define its weight wi = N − i + 1 ; the probability of the i-th newest feature being sampled is then wi / ( w1 + · · · + wN ) . This tactic ensures we retrieve more up-to-date features and thereby stabilizes the learning . But it may hurt the overall performance due to a lack of feature diversity , as the chance of older features being sampled is small . Note that the feature queue is empty at the beginning of training . In this case , we simply use the input image feature f ′ for the auxiliary tasks , and also enqueue it to the Q that corresponds to the concept of the input image , as shown in Algorithm 1 . As we show next , in this case our proposed global and local tasks reduce to DINO ( Caron et al. , 2021 ) and EsViT ( Li et al. , 2021 ) , respectively . Concept-guided global task . Suppose we have two views { I(1) , I(2) } of an image I . The main idea of our concept-guided global task is to replace I(1) in the DINO loss ( Caron et al.
, 2021 ) with the image feature f sampled from the concept-feature dictionary , which becomes LGlobal = −ht ( f ) · log hs ( gs ( I(2) ) ) , ( 1 ) where ht and hs are the projection heads of the teacher and student networks , respectively . Intuitively , minimizing the global loss is equivalent to encouraging the similarity of any two different image features with the same concept . Hence , it can help produce more semantically consistent relational representations , in particular when the concepts stored in the concept-feature dictionary are themselves relational . Similar inter-class representation learning techniques have been explored before ( Wang et al. , 2017 ; Caron et al. , 2018 ) . However , these approaches require a rather complex pre-processing stage , e.g . the images have to be split by concept before training , making them not directly applicable to existing training pipelines . In contrast , with our proposed concept-feature dictionary , which dynamically saves and retrieves image features from running storage , our concept-guided global task becomes a plug-and-play addition to existing training pipelines . Besides , our dictionary also allows replaying features by concept , which can help tackle the long-tailed data distribution issue ( Shen et al. , 2018 ) in many real-world visual reasoning tasks . Concept-guided local task . As we mentioned earlier , our concept-guided local task aims at facilitating object-centric learning by means of correspondence learning ( Liu et al. , 2010 ; Wang et al. , 2019 ) . Recent studies have unveiled the possibility of learning correspondence with SSL ( Wang et al. , 2021c ; Li et al. , 2021 ) . However , only low-level correspondence between two augmented ( e.g . rotated ) views of an image can be discovered , while the semantic information of objects is missing .
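Eq. ( 1 ) above is a cross-entropy between the teacher head applied to the retrieved feature f and the student head applied to gs ( I(2) ) . A minimal sketch follows , with the projection heads modeled as plain linear maps followed by a softmax ; this is an assumption for illustration ( in DINO the heads are MLPs with temperature and centering ) :

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def global_loss(f, z2, W_t, W_s):
    """L_Global = -h_t(f) . log h_s(g_s(I^(2))), as in Eq. (1).
    f:  feature retrieved from the concept-feature dictionary.
    z2: student backbone output g_s(I^(2)).
    W_t, W_s: linear stand-ins for the teacher/student projection heads."""
    p_t = softmax(W_t @ f)
    p_s = softmax(W_s @ z2)
    return -float(np.sum(p_t * np.log(p_s + 1e-12)))
```

Minimizing this over pairs of features that share a concept pulls their head outputs together , which is the clustering effect described above ; the loss is smallest when the student's distribution matches the teacher's .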
To remedy this , we bring concepts to these methods , endowing them with the capability of learning semantic correspondence from images . Specifically , suppose we have two views { I(1) , I(2) } of an image I , and we tokenize each view into a sequence of N local image tokens . Then at the output of the ViT , we obtain gt ( I(1) ) = [ z(1)_1 , · · · , z(1)_N ] and gs ( I(2) ) = [ z(2)_1 , · · · , z(2)_N ] , where z denotes a local feature . Prior work , such as EsViT ( Li et al. , 2021 ) , relies on the local features gt ( I(1) ) and gs ( I(2) ) for the local task . Instead , we replace gt ( I(1) ) with the image feature f retrieved from the concept-feature dictionary with the concept of I . We then split f into multiple local features , i.e . f = [ z(f)_1 , · · · , z(f)_N ] , and our concept-guided local loss becomes LLocal = − ( 1/N ) ∑_{i=1}^{N} ht ( z(f)_{j⋆} ) log hs ( z(2)_i ) , j⋆ = argmax_j CosineDistance ( z(f)_j , z(2)_i ) , ( 2 ) where ht ( · ) , hs ( · ) are the projection heads that map local features to probability distributions . Intuitively , it greedily matches the outputs of the two local regions with the closest features , bootstrapping object-level semantic correspondence among images with the same concept . Overall loss . By combining the global and local tasks , we add an auxiliary task loss Laux to the main loss Lmain ( e.g . the cross-entropy loss of the reasoning task ) . The eventual objective is L = Lmain + αLaux , Laux = LGlobal + LLocal , ( 3 ) where the trade-off weight α is added for better flexibility . As we mentioned above , our method reduces to EsViT , a baseline with non-concept auxiliary tasks , when we use the current input features gt ( I(1) ) instead of f sampled from our dictionary for computing LGlobal and LLocal . | The paper proposes a new concept-guided approach for visual relational reasoning .
Particularly, it uses a concept-feature dictionary to store and retrieve the features of an image by the corresponding visual concepts. Then, the concept-guided features are integrated into the training process with the global and local auxiliary tasks. Experimental results on HICO and GQA in both original and systematic settings demonstrate the effectiveness of the proposed method against existing works. | SP:73538d37acd0fd293e098d84bda05e3f5b537837 |
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning | 1 INTRODUCTION . Deep neural networks have achieved great success in visual recognition . However , their ability for visual relational reasoning , i.e . reasoning with entities and their relationships in a visual scene , still falls short of human-level performances , especially in real-world domains . The challenges of common visual relational reasoning tasks , e.g . HICO and GQA benchmarks ( Chao et al. , 2015 ; Hudson & Manning , 2019 ) are manifested in three aspects : 1 ) object-centric learning : to identify objects ( including humans ) as well as their visual properties ; 2 ) relational reasoning : to infer all pairwise relationships between the object entities ; and 3 ) systematic generalization : to reason with visual entities and relations on novel object-relation combinations and extrapolate to longer reasoning hops ( Bahdanau et al. , 2018 ; Hupkes et al. , 2020 ) . While existing models have leveraged pre-trained object detectors ( Ren et al. , 2015 ; Jiang et al. , 2020 ) and/or explicit symbolic reasoning methods ( Yi et al. , 2018 ) to tackle these challenges , they leave ample space for improvement . More recently , vision transformers ( ViTs ) have become the new paradigm for visual recognition and have made great strides in a broad range of visual recognition tasks ( Dosovitskiy et al. , 2020 ; Wang et al. , 2021a ; Liu et al. , 2021 ) . Several properties of ViT make it a compelling model choice for visual relational reasoning . First , the self-attention mechanism in ViT offers a strong relational inductive bias , explicitly modeling the relations between input entities . Second , the design of image as patches facilitates the learning of object-centric representations , as evidenced by recent works , e.g . DINO and EsViT ( Caron et al. , 2021 ; Li et al. 
, 2021 ) , that demonstrate ViTs trained with self-supervised learning ( SSL ) capture objects in the image without label annotations . To investigate the efficacy of the ViT backbone for visual relational reasoning , in particular on systematic generalization , we introduce new systematic splits to canonical benchmarks and compare the ViT backbone with the CNN backbone . Results on GQA show that switching to ViTs in MCAN model ( Yu et al. , 2019 ) brings an immediate 11 % gain in accuracy . However , the performance gap between the original GQA testing split and the new systematic split remains considerable ( 15 % in accuracy ) for both backbones . It suggests that generic ViTs can still be improved to tackle the reasoning task , especially on systematic generalization . Recent works have shown that neural networks can learn representations with better generalization , by learning certain auxiliary tasks of predicting human-specified concepts ( Hill et al. , 2020 ; Koh et al. , 2020 ) . A natural question emerges : can we exploit these concepts to improve the reasoning ability of ViTs ? Our approach is to make better use of concepts ( e.g . the labels in the original training dataset ) in the ViT training for better relational reasoning . To this end , we first introduce a novel concept-feature dictionary , where each key is a concept and its value is a queue of image features with the same concept , as shown in Figure 1 . It allows dynamic and flexible training-time image feature retrieval during training . Based on this dictionary , we then augment the canonical ViT training pipeline with two auxiliary tasks : To facilitate high-level reasoning about relationships , we design a global task that helps cluster images with the same concept together to produce semantically consistent relational representations . 
To learn better object-centric representations , we develop a local task that guides the model to discover object-centric semantic correspondence across images ( Liu et al. , 2010 ) . Thanks to the plug-and-play feature of our concept-feature dictionary , our auxiliary tasks can be easily incorporated into the existing ViT training pipeline without additional input pre-processing . We term the resulting model concept-guided vision transformer ( or RelViT for short ) . We evaluate our method on two standard visual relational reasoning benchmarks : HICO and GQA . Beyond the original independent and identically distributed ( I.I.D . ) training-testing split , we introduce a systematic split for each dataset to examine the ability of systematic generalization , i.e. , recognizing novel object-relation combinations . Our results show that RelViT significantly outperforms previous approaches . On HICO , it improves the best baseline by 16 % , 43 % , and 7 % on the original non-systematic and two new systematic splits , respectively , as shown in Figure 2 . On GQA , it further closes the gap of overall accuracy between models using visual backbone feature only and models using additional bounding box features ( obtained from pre-trained object detectors ) by 13 % and 18 % on the two splits . We also show that our method is compatible with various ViT variants and robust to hyperparameters . Finally , our qualitative inspection indicates that RelViT does improve ViTs on learning relational and object-centric representations . Our main contributions are summarized as follows : • We propose RelViT , by incorporating visual relational concepts to the ViT training with the newlyintroduced concept-guided global and local auxiliary tasks , where a concept-feature dictionary is proposed to enable dynamic and flexible image feature retrieval with the concept keys . 
• In extensive experiments on the original non-systematic and new systematic splits of the HICO and GQA datasets , we demonstrate the advantages of RelViT over various strong baselines for visual relational reasoning . • We perform ablation studies on RelViT to show the contributions of its key components , its compatibility with various ViT architectures , and its robustness to hyper-parameters . We provide qualitative results to confirm our improved learning of relational and object-centric representations . 2 METHODOLOGY . 2.1 BACKGROUND . Vision transformers . Here we briefly review the architecture of multi-staged ViTs ( Dosovitskiy et al. , 2020 ) . Given an image I ∈ R^{H×W×C} , a ViT model g first tokenizes the input into N image tokens ( patches ) with a resolution of ( T , T ) : tokenize ( I ) = [ t_1 , · · · , t_N ] , t_i ∈ R^{T²×C} , N = HW/T² , where ( H , W ) and C denote the original resolution and the number of channels of the image , respectively . Then in each stage , a patch embedding and a multi-head self-attention ( MHSA ) module are applied to these tokens to produce the input for the next stage . The final output of the ViT , g ( I ) , is a sequence of tokens [ z_1 , · · · , z_N ] that correspond to the aforementioned input tokens . For global prediction tasks , e.g . image categorization , a summary of the input image can be obtained by either inserting an extra [ CLS ] token into the input sequence of image tokens or performing an extra pooling operation over the output tokens ( Zhai et al. , 2021 ) . Self-supervised learning with DINO and EsViT . Our method is developed upon the recently proposed self-supervised learning ( SSL ) approach self-distillation with no labels ( DINO ) ( Caron et al. , 2021 ) and its follow-up EsViT ( Li et al. , 2021 ) . As shown in Figure 1 , their main idea is to encourage the output consistency between a teacher network gt and a student network gs , parameterized by θt and θs , respectively .
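As noted above , a global summary of the image can come from an extra [ CLS ] token or from pooling the output tokens . The pooling variant is simply a reduction over the token axis ; a minimal sketch , where the choice of mean pooling is an assumption for illustration :

```python
import numpy as np

def pooled_summary(tokens: np.ndarray) -> np.ndarray:
    """Summarize ViT output tokens [z_1, ..., z_N] (shape (N, d)) into one
    d-dimensional vector by average pooling, as an alternative to a [CLS] token."""
    assert tokens.ndim == 2, "expected (num_tokens, feature_dim)"
    return tokens.mean(axis=0)
```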
Given an input image I , both networks map it to probability distributions Pt ( I ) = ht ( gt ( I ) ) and Ps ( I ) = hs ( gs ( I ) ) via an extra projection head h ( · ) . The teacher and student networks are updated alternately according to the following two rules : ( 1 ) For the student network : θs ← argmin_θs LGlobal , where LGlobal = −Pt ( I ) · logPs ( I ) ; ( 2 ) For the teacher network , θt is updated using an exponential moving average ( EMA ) of θs : θt ← λθt + ( 1 − λ ) θs , where λ controls the updating momentum . In practice , multiple views of the input image I are generated via data augmentation , and the teacher and student networks receive different views , preventing the task from being trivialized . EsViT further extends the image-level loss LGlobal to the patch level by applying dense SSL ( Wang et al. , 2021c ) to learn correspondence between the different views , enhancing performance on dense prediction . Readers are encouraged to refer to Caron et al . ( 2021 ) and Li et al . ( 2021 ) for more details about these two works . 2.2 RELVIT . RelViT is a concept-guided ViT that makes better use of concepts during ViT training for better relational reasoning . In this section , we first introduce a concept-feature dictionary to store and retrieve image features with their concept keys . We then augment the canonical ViT training pipeline with two auxiliary tasks : a global-level task and a local-level task , both concept-guided via the concept-feature dictionary . Intuitively , the global task helps cluster images with the same concept together to produce semantically consistent relational features , while the local task guides the model to discover object-centric semantic correspondence across images . Concept-feature dictionary . We assume the total number of concepts is M , and denote the set of all concepts by C = { c1 , · · · , cM } .
A concept-feature dictionary is denoted by D = { ( c1 , Q1 ) , · · · , ( cM , QM ) } , where each concept ci is associated with a queue Qi of image features . During training , each image I may come with multiple concepts , which we denote by CI ⊂ C . For instance , there may exist several human-object interactions in an image from the HICO dataset , each of which may correspond to a concept . As shown in Figure 1 , whenever a new image-concept pair ( I , CI ) arrives , we uniformly draw a concept code c from CI , look up the queue Q in the dictionary that corresponds to c , and then retrieve an image feature f from Q . Meanwhile , we pass the input image I through the teacher network gt to get the new image feature f ′ = gt ( I ) and enqueue it to Q . Note that if Q is already full , we first dequeue the oldest image feature from Q . During training , we use the retrieved image feature f for the two auxiliary tasks below , rather than the input image feature f ′ . Furthermore , the sampling strategy , i.e . how the image feature f is retrieved from Q , plays an important role in the overall performance of our method . We consider the following two sampling strategies :

Algorithm 1 RelViT : Concept-guided Vision Transformer
Input : A set of training images with concepts { ( I1 , C1 ) , · · · } , an image augmentation function aug ( · ) , momentum update factor λ , loss weight α , a concept-feature dictionary D , teacher and student ViTs gt and gs , parameterized by θt and θs , respectively .
1 : for ( Ii , Ci ) in { ( I1 , C1 ) , · · · } do
2 :   I(1)_i , I(2)_i = aug ( Ii ) , aug ( Ii )
3 :   Uniformly draw a concept code c ∼ Ci .
4 :   Retrieve Q from D with c .
5 :   if Q is not empty then
6 :     Sample a feature f ∼ Q , following one of the sampling tactics .
7 :     Laux = LGlobal ( f , gs ( I(2)_i ) ) + LLocal ( f , gs ( I(2)_i ) )
8 :     Insert the feature gt ( I(1)_i ) into Q ; if Q is full , remove the oldest feature .
9 :   else
10 :    Laux = LGlobal ( gt ( I(1)_i ) , gs ( I(2)_i ) ) + LLocal ( gt ( I(1)_i ) , gs ( I(2)_i ) )
11 :  end if
12 :  Update θs with the loss function L = Lmain + αLaux .
13 :  Update θt using an EMA : θt ← λθt + ( 1 − λ ) θs .
14 : end for

• Uniform sampling . Each image feature is drawn with equal probability from the queue , i.e . if there are N features in the queue , the probability of each feature being sampled is 1/N . This tactic encourages diversity among the retrieved image features , benefiting the overall performance . However , some older features in the queue may lag far behind the current model if the teacher network gt is updated quickly , eliciting unstable training . • “ Most-recent ” sampling . The sampling probability mass is allocated based on the freshness of the image features , so the most recent feature has the highest chance of being retrieved . Specifically , suppose we sample among the N newest features in the queue Q ( |Q| ≥ N ) . For the i-th newest feature , we define its weight wi = N − i + 1 ; the probability of the i-th newest feature being sampled is then wi / ( w1 + · · · + wN ) . This tactic ensures we retrieve more up-to-date features and thereby stabilizes the learning . But it may hurt the overall performance due to a lack of feature diversity , as the chance of older features being sampled is small . Note that the feature queue is empty at the beginning of training . In this case , we simply use the input image feature f ′ for the auxiliary tasks , and also enqueue it to the Q that corresponds to the concept of the input image , as shown in Algorithm 1 . As we show next , in this case our proposed global and local tasks reduce to DINO ( Caron et al. , 2021 ) and EsViT ( Li et al. , 2021 ) , respectively . Concept-guided global task . Suppose we have two views { I(1) , I(2) } of an image I . The main idea of our concept-guided global task is to replace I(1) in the DINO loss ( Caron et al.
, 2021 ) with the image feature f sampled from the concept-feature dictionary , which becomes LGlobal = −ht ( f ) · log hs ( gs ( I(2) ) ) , ( 1 ) where ht and hs are the projection heads of the teacher and student networks , respectively . Intuitively , minimizing the global loss is equivalent to encouraging the similarity of any two different image features with the same concept . Hence , it can help produce more semantically consistent relational representations , in particular when the concepts stored in the concept-feature dictionary are themselves relational . Similar inter-class representation learning techniques have been explored before ( Wang et al. , 2017 ; Caron et al. , 2018 ) . However , these approaches require a rather complex pre-processing stage , e.g . the images have to be split by concept before training , making them not directly applicable to existing training pipelines . In contrast , with our proposed concept-feature dictionary , which dynamically saves and retrieves image features from running storage , our concept-guided global task becomes a plug-and-play addition to existing training pipelines . Besides , our dictionary also allows replaying features by concept , which can help tackle the long-tailed data distribution issue ( Shen et al. , 2018 ) in many real-world visual reasoning tasks . Concept-guided local task . As we mentioned earlier , our concept-guided local task aims at facilitating object-centric learning by means of correspondence learning ( Liu et al. , 2010 ; Wang et al. , 2019 ) . Recent studies have unveiled the possibility of learning correspondence with SSL ( Wang et al. , 2021c ; Li et al. , 2021 ) . However , only low-level correspondence between two augmented ( e.g . rotated ) views of an image can be discovered , while the semantic information of objects is missing .
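The concept-feature dictionary and the two sampling tactics described earlier can be sketched as follows . The queue capacity and the use of Python deques are assumptions for illustration , not details from the paper :

```python
import random
from collections import deque

class ConceptFeatureDictionary:
    """D = {(c_1, Q_1), ..., (c_M, Q_M)}: one bounded FIFO queue of image
    features per concept; a full queue drops its oldest feature on enqueue."""

    def __init__(self, concepts, capacity=128):
        self.queues = {c: deque(maxlen=capacity) for c in concepts}

    def enqueue(self, concept, feature):
        self.queues[concept].append(feature)  # deque(maxlen=...) evicts the oldest

    def retrieve(self, concept, most_recent=True, rng=random):
        q = self.queues[concept]
        if not q:
            return None  # empty at the start of training: caller falls back to f'
        n = len(q)
        if most_recent:
            # "most-recent" tactic: the i-th newest feature gets weight w_i = n - i + 1
            weights = [n - i + 1 for i in range(1, n + 1)]
            newest_first = list(reversed(q))  # index 0 = newest feature
            return rng.choices(newest_first, weights=weights, k=1)[0]
        return q[rng.randrange(n)]  # uniform sampling tactic
```

In training , retrieval would be followed by enqueuing the fresh teacher feature gt ( I(1) ) , mirroring lines 4 to 8 of Algorithm 1 .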
To remedy this , we bring concepts to these methods , endowing them with the capability of learning semantic correspondence from images . Specifically , suppose we have two views { I(1) , I(2) } of an image I , and we tokenize each view into a sequence of N local image tokens . Then at the output of the ViT , we obtain gt ( I(1) ) = [ z(1)_1 , · · · , z(1)_N ] and gs ( I(2) ) = [ z(2)_1 , · · · , z(2)_N ] , where z denotes a local feature . Prior work , such as EsViT ( Li et al. , 2021 ) , relies on the local features gt ( I(1) ) and gs ( I(2) ) for the local task . Instead , we replace gt ( I(1) ) with the image feature f retrieved from the concept-feature dictionary with the concept of I . We then split f into multiple local features , i.e . f = [ z(f)_1 , · · · , z(f)_N ] , and our concept-guided local loss becomes LLocal = − ( 1/N ) ∑_{i=1}^{N} ht ( z(f)_{j⋆} ) log hs ( z(2)_i ) , j⋆ = argmax_j CosineDistance ( z(f)_j , z(2)_i ) , ( 2 ) where ht ( · ) , hs ( · ) are the projection heads that map local features to probability distributions . Intuitively , it greedily matches the outputs of the two local regions with the closest features , bootstrapping object-level semantic correspondence among images with the same concept . Overall loss . By combining the global and local tasks , we add an auxiliary task loss Laux to the main loss Lmain ( e.g . the cross-entropy loss of the reasoning task ) . The eventual objective is L = Lmain + αLaux , Laux = LGlobal + LLocal , ( 3 ) where the trade-off weight α is added for better flexibility . As we mentioned above , our method reduces to EsViT , a baseline with non-concept auxiliary tasks , when we use the current input features gt ( I(1) ) instead of f sampled from our dictionary for computing LGlobal and LLocal . | The paper extends the training objective of EsViT to make it better suited to visual relationship detection .
This is done by replacing the teacher's features used in both local and global losses with features taken from a queue. The queue is populated with teacher features extracted from previous images. Furthermore, the queue is partitioned by "concepts", e.g. HOI labels or VQA concepts, so that the current student image can be compared with a previous teacher image that contains the same concept instead of any random image. The novel queue-based formulation is tested in various experiments, both for visual relationship detection and visual question answering. Individual components of the method are ablated and discussed, namely the dictionary of queues, the EsViT-style local loss, and other hyperparameters. | SP:73538d37acd0fd293e098d84bda05e3f5b537837 |
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning | 1 INTRODUCTION . Deep neural networks have achieved great success in visual recognition . However , their ability for visual relational reasoning , i.e . reasoning with entities and their relationships in a visual scene , still falls short of human-level performances , especially in real-world domains . The challenges of common visual relational reasoning tasks , e.g . HICO and GQA benchmarks ( Chao et al. , 2015 ; Hudson & Manning , 2019 ) are manifested in three aspects : 1 ) object-centric learning : to identify objects ( including humans ) as well as their visual properties ; 2 ) relational reasoning : to infer all pairwise relationships between the object entities ; and 3 ) systematic generalization : to reason with visual entities and relations on novel object-relation combinations and extrapolate to longer reasoning hops ( Bahdanau et al. , 2018 ; Hupkes et al. , 2020 ) . While existing models have leveraged pre-trained object detectors ( Ren et al. , 2015 ; Jiang et al. , 2020 ) and/or explicit symbolic reasoning methods ( Yi et al. , 2018 ) to tackle these challenges , they leave ample space for improvement . More recently , vision transformers ( ViTs ) have become the new paradigm for visual recognition and have made great strides in a broad range of visual recognition tasks ( Dosovitskiy et al. , 2020 ; Wang et al. , 2021a ; Liu et al. , 2021 ) . Several properties of ViT make it a compelling model choice for visual relational reasoning . First , the self-attention mechanism in ViT offers a strong relational inductive bias , explicitly modeling the relations between input entities . Second , the design of image as patches facilitates the learning of object-centric representations , as evidenced by recent works , e.g . DINO and EsViT ( Caron et al. , 2021 ; Li et al. 
, 2021 ) , that demonstrate ViTs trained with self-supervised learning ( SSL ) capture objects in the image without label annotations . To investigate the efficacy of the ViT backbone for visual relational reasoning , in particular on systematic generalization , we introduce new systematic splits to canonical benchmarks and compare the ViT backbone with the CNN backbone . Results on GQA show that switching to ViTs in MCAN model ( Yu et al. , 2019 ) brings an immediate 11 % gain in accuracy . However , the performance gap between the original GQA testing split and the new systematic split remains considerable ( 15 % in accuracy ) for both backbones . It suggests that generic ViTs can still be improved to tackle the reasoning task , especially on systematic generalization . Recent works have shown that neural networks can learn representations with better generalization , by learning certain auxiliary tasks of predicting human-specified concepts ( Hill et al. , 2020 ; Koh et al. , 2020 ) . A natural question emerges : can we exploit these concepts to improve the reasoning ability of ViTs ? Our approach is to make better use of concepts ( e.g . the labels in the original training dataset ) in the ViT training for better relational reasoning . To this end , we first introduce a novel concept-feature dictionary , where each key is a concept and its value is a queue of image features with the same concept , as shown in Figure 1 . It allows dynamic and flexible training-time image feature retrieval during training . Based on this dictionary , we then augment the canonical ViT training pipeline with two auxiliary tasks : To facilitate high-level reasoning about relationships , we design a global task that helps cluster images with the same concept together to produce semantically consistent relational representations . 
To learn better object-centric representations , we develop a local task that guides the model to discover object-centric semantic correspondence across images ( Liu et al. , 2010 ) . Thanks to the plug-and-play feature of our concept-feature dictionary , our auxiliary tasks can be easily incorporated into the existing ViT training pipeline without additional input pre-processing . We term the resulting model concept-guided vision transformer ( or RelViT for short ) . We evaluate our method on two standard visual relational reasoning benchmarks : HICO and GQA . Beyond the original independent and identically distributed ( I.I.D . ) training-testing split , we introduce a systematic split for each dataset to examine the ability of systematic generalization , i.e. , recognizing novel object-relation combinations . Our results show that RelViT significantly outperforms previous approaches . On HICO , it improves the best baseline by 16 % , 43 % , and 7 % on the original non-systematic and two new systematic splits , respectively , as shown in Figure 2 . On GQA , it further closes the gap of overall accuracy between models using visual backbone features only and models using additional bounding box features ( obtained from pre-trained object detectors ) by 13 % and 18 % on the two splits . We also show that our method is compatible with various ViT variants and robust to hyperparameters . Finally , our qualitative inspection indicates that RelViT does improve ViTs on learning relational and object-centric representations . Our main contributions are summarized as follows : • We propose RelViT , by incorporating visual relational concepts into the ViT training with the newly introduced concept-guided global and local auxiliary tasks , where a concept-feature dictionary is proposed to enable dynamic and flexible image feature retrieval with the concept keys .
• In extensive experiments on the original non-systematic and new systematic splits of the HICO and GQA datasets , we demonstrate the advantages of RelViT over various strong baselines for visual relational reasoning . • We perform ablation studies on RelViT to show the contributions of its key components , its compatibility with various ViT architectures , and its robustness to hyper-parameters . We provide qualitative results to confirm our improved learning of relational and object-centric representations . 2 METHODOLOGY . 2.1 BACKGROUND . Vision transformers . Here we briefly review the architecture of multi-staged ViTs ( Dosovitskiy et al. , 2020 ) . Given an image I ∈ R^{H×W×C} , a ViT model g first tokenizes the input into N image tokens ( patches ) with a resolution of ( T , T ) : tokenize ( I ) = [ t_1 , · · · , t_N ] , t_i ∈ R^{T^2×C} , N = HW/T^2 , where ( H , W ) and C denote the original resolution and the number of channels of the image , respectively . Then in each stage , a patch embedding and a multi-head self-attention ( MHSA ) module are applied to these tokens to produce the input for the next stage . The final output of the ViT , g ( I ) , is a sequence of tokens [ z_1 , · · · , z_N ] that correspond to the aforementioned input tokens . For global prediction tasks , e.g . image categorization , a summary of the input image can be obtained by either inserting an extra [ CLS ] token into the input sequence of image tokens or performing an extra pooling operation over the output tokens ( Zhai et al. , 2021 ) . Self-supervised learning with DINO and EsViT . Our method is developed upon the recently proposed self-supervised learning ( SSL ) approach self-distillation with no labels ( DINO ) ( Caron et al. , 2021 ) and its follow-up EsViT ( Li et al. , 2021 ) . As shown in Figure 1 , their main idea is to encourage the output consistency between a teacher network g_t and a student network g_s , parameterized by θ_t and θ_s , respectively .
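The patch tokenization described above , tokenize ( I ) = [ t_1 , · · · , t_N ] with N = HW/T^2 , can be sketched with plain NumPy ; the learned patch embedding and MHSA stages are omitted , and the function name is illustrative :

```python
import numpy as np

def tokenize(image, T):
    """Split an H x W x C image into N = HW / T^2 non-overlapping
    patches, each flattened to a T^2 x C token (sketch of the ViT
    tokenizer; learned patch embeddings are not included)."""
    H, W, C = image.shape
    assert H % T == 0 and W % T == 0
    # carve the image into an (H/T, W/T) grid of T x T patches
    grid = image.reshape(H // T, T, W // T, T, C)
    patches = grid.transpose(0, 2, 1, 3, 4)   # (H/T, W/T, T, T, C)
    return patches.reshape(-1, T * T, C)      # (N, T^2, C)
```

For example , a 224 x 224 x 3 image with T = 16 yields N = 196 tokens of shape 256 x 3 .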
Given an input image I , both networks map it to a probability distribution P_t ( I ) = h_t ( g_t ( I ) ) and P_s ( I ) = h_s ( g_s ( I ) ) via an extra projection head h ( · ) . The teacher and student networks are updated alternately by following these two rules : ( 1 ) for the student network , θ_s ← argmin_{θ_s} L_Global , where L_Global = −P_t ( I ) log P_s ( I ) ; ( 2 ) for the teacher network , θ_t is updated using an exponential moving average ( EMA ) of θ_s : θ_t ← λ θ_t + ( 1 − λ ) θ_s , where λ controls the updating momentum . In practice , multiple views of the input image I will be generated via data augmentation and the teacher and student networks will receive different views , preventing the task from being trivialized . EsViT further extends the image-level loss L_Global to the patch level by applying dense SSL ( Wang et al. , 2021c ) for learning correspondence between the different views , enhancing the performance on dense prediction . Readers are encouraged to refer to Caron et al . ( 2021 ) and Li et al . ( 2021 ) for more details about these two works . 2.2 RELVIT . RelViT is a concept-guided ViT that makes better use of the concepts in the ViT training for better relational reasoning . In this section , we first introduce a concept-feature dictionary to store and retrieve image features with their concept keys . We then augment the canonical ViT training pipeline with two auxiliary tasks : a global-level task and a local-level task , both of which are concept-guided via the concept-feature dictionary . Intuitively , the global task helps cluster images with the same concept together to produce semantically consistent relational features , while the local task guides the model to discover object-centric semantic correspondence across images . Concept-feature dictionary . We assume the total number of concepts is M , and the set of all concepts is C = { c_1 , · · · , c_M } .
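The two DINO-style update rules above , the cross-entropy global loss for the student and the EMA update for the teacher , can be sketched as follows ; this is a toy NumPy version , and DINO's centering and sharpening of the teacher outputs are omitted :

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_loss(teacher_logits, student_logits):
    """Cross-entropy between teacher and student output distributions,
    L_Global = -P_t(I) log P_s(I). A sketch: the centering/sharpening
    tricks used by DINO in practice are not included."""
    p_t = softmax(teacher_logits)
    log_p_s = np.log(softmax(student_logits))
    return -(p_t * log_p_s).sum()

def ema_update(theta_t, theta_s, lam=0.99):
    """Teacher parameters follow the student via an exponential moving
    average: theta_t <- lam * theta_t + (1 - lam) * theta_s."""
    return lam * theta_t + (1 - lam) * theta_s
```

Only the student is updated by gradient descent on L_Global ; the teacher moves slowly toward the student through the EMA .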
A concept-feature dictionary is denoted by D = { ( c_1 , Q_1 ) , · · · , ( c_M , Q_M ) } , where each concept c_i is associated with a queue Q_i of image features . During training , each image I may come with multiple concepts , which we denote by C_I ⊂ C . For instance , there may exist several human-object interactions in an image from the HICO dataset , each of which may correspond to a concept . As shown in Figure 1 , whenever a new image-concept pair ( I , C_I ) arrives , we uniformly draw a concept code c from C_I , pick the queue Q from the dictionary that corresponds to c , and then retrieve an image feature f from Q . Meanwhile , we pass the input image I to the teacher network g_t to get the new image feature f′ = g_t ( I ) and enqueue it to Q . Note that if Q is already full , we first dequeue the oldest image feature from Q . During training , we use the retrieved image feature f for the two auxiliary tasks below , rather than the input image feature f′ . Furthermore , the sampling strategy , i.e . how to retrieve the image feature f from Q , plays an important role in the overall performance of our method . We consider the following two sampling strategies :

Algorithm 1 RelViT : Concept-guided Vision Transformer
Input : A set of training images with concepts { ( I_1 , C_1 ) , · · · } , an image augmentation function aug ( · ) , momentum update factor λ , loss weight α , a concept-feature dictionary D , teacher and student ViTs g_t and g_s , parameterized by θ_t and θ_s , respectively .
1 : for ( I_i , C_i ) in { ( I_1 , C_1 ) , · · · } do
2 : I_i^(1) , I_i^(2) = aug ( I_i ) , aug ( I_i )
3 : Uniformly draw a concept code c ∼ C_i .
4 : Retrieve Q from D with c .
5 : if Q is not empty then
6 : Sample feature f ∼ Q , following one of the sampling tactics .
7 : L_aux = L_Global ( f , g_s ( I_i^(2) ) ) + L_Local ( f , g_s ( I_i^(2) ) )
8 : Insert feature g_t ( I_i^(1) ) into Q ; if it is full , remove the oldest feature .
9 : else
10 : L_aux = L_Global ( g_t ( I_i^(1) ) , g_s ( I_i^(2) ) ) + L_Local ( g_t ( I_i^(1) ) , g_s ( I_i^(2) ) )
11 : end if
12 : Update θ_s with the loss function L = L_main + α L_aux .
13 : Update θ_t using an EMA : θ_t ← λ θ_t + ( 1 − λ ) θ_s .
14 : end for

• Uniform sampling . Each image feature is drawn with equal probability from the queue , i.e . if there are N features in the queue , the probability of each feature being sampled is 1/N . This tactic encourages the diversity of the retrieved image features , benefiting the overall performance . However , some older features in the queue may lag far behind the current model if the teacher network g_t is updated quickly , eliciting unstable training .
• “ Most-recent ” sampling . The sampling probability mass is allocated based on the freshness of image features , and the most recent feature has the highest chance of being retrieved . Specifically , suppose we have N features in the queue Q ( |Q| ≥ N ) . Then for the i-th newest feature f , we define its weight w_i = N − i + 1 . Finally , the probability of the i-th newest feature being sampled is w_i / ∑_{j=1}^{N} w_j . This tactic ensures we retrieve more up-to-date features and thereby stabilizes the learning . But it may hurt the overall performance due to a lack of feature diversity , as the chance of older features being sampled is small .

Note that the feature queue is empty at the beginning of training . In this case , we simply use the input image feature f′ for the auxiliary tasks , and also enqueue it to the Q that corresponds to the concept of the input image , as shown in Algorithm 1 . As we will show next , our proposed global and local tasks then reduce to DINO ( Caron et al. , 2021 ) and EsViT ( Li et al. , 2021 ) , respectively . Concept-guided global task . Suppose we have two views { I^(1) , I^(2) } of an image I ; the main idea of our concept-guided global task is to replace I^(1) in the DINO loss ( Caron et al.
, 2021 ) with the image feature f sampled from the concept-feature dictionary , which becomes L_Global = − h_t ( f ) log h_s ( g_s ( I^(2) ) ) , ( 1 ) where h_t and h_s are the projection heads of the teacher and student networks , respectively , and g_s is the student network . Intuitively , minimizing the global loss is equivalent to encouraging the similarity of any two different image features with the same concept . Hence , it can help produce more semantically consistent relational representations , in particular when the concepts stored in the concept-feature dictionary are themselves relational . Similar inter-class representation learning techniques have been explored before ( Wang et al. , 2017 ; Caron et al. , 2018 ) . However , these approaches require a rather complex pre-processing stage , e.g . the images have to be grouped by concept before training , making them not directly applicable to existing training pipelines . Rather , with our proposed concept-feature dictionary that dynamically saves and retrieves image features from the running storage , our concept-guided global task becomes a plug-and-play addition to existing training pipelines . Besides , our dictionary also allows replaying features relationally , which can help tackle the long-tailed data distribution issue ( Shen et al. , 2018 ) in many real-world visual reasoning tasks . Concept-guided local task . As we mentioned earlier , our concept-guided local task aims at facilitating object-centric learning , by means of correspondence learning ( Liu et al. , 2010 ; Wang et al. , 2019 ) . Recent studies have unveiled the possibility of learning correspondence with SSL ( Wang et al. , 2021c ; Li et al. , 2021 ) . However , only low-level correspondence between two augmented ( e.g . rotated ) views of an image can be discovered , while the semantic information of objects is missing .
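The two sampling tactics described earlier for the concept-feature dictionary can be sketched as follows ; the oldest-first queue ordering and the function name are illustrative assumptions :

```python
import random

def sample_feature(queue, tactic="uniform"):
    """Sketch of the two sampling tactics for retrieving a feature from
    a concept queue. `queue` is assumed ordered oldest-first; under
    "most_recent", the i-th newest of N features gets weight
    w_i = N - i + 1, so the newest has the highest probability."""
    N = len(queue)
    if N == 0:
        return None
    if tactic == "uniform":
        weights = [1] * N                      # each feature: prob 1/N
    elif tactic == "most_recent":
        # oldest entry (rank i = N) gets weight 1, newest (i = 1) gets N
        weights = [N - i + 1 for i in range(N, 0, -1)]
    else:
        raise ValueError(tactic)
    return random.choices(queue, weights=weights, k=1)[0]
```

Normalizing the weights by their sum gives exactly the probabilities w_i / ∑_j w_j of the "most-recent" tactic .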
To remedy this , we bring concepts to these methods , endowing them with the capability of learning semantic correspondence from images . Specifically , suppose we have two views { I^(1) , I^(2) } of an image I , and we also tokenize the image feature into a sequence of N local image tokens . Then at the output of the ViT , we obtain g_t ( I^(1) ) = [ z_1^(1) , · · · , z_N^(1) ] and g_s ( I^(2) ) = [ z_1^(2) , · · · , z_N^(2) ] , where z denotes a local feature . Prior work , such as EsViT ( Li et al. , 2021 ) , relies on the local features g_t ( I^(1) ) and g_s ( I^(2) ) for the local task . Instead , we replace g_t ( I^(1) ) with the image feature f retrieved from the concept-feature dictionary with the concept of I . We then split f into multiple local features , i.e . f = [ z_1^(f) , · · · , z_N^(f) ] , and our concept-guided local loss becomes L_Local = − ( 1/N ) ∑_{i=1}^{N} h_t ( z_{j⋆}^{(f)} ) log h_s ( z_i^{(2)} ) , j⋆ = argmax_j CosineDistance ( z_j^{(f)} , z_i^{(2)} ) , ( 2 ) where h_t ( · ) , h_s ( · ) are the projection heads that map local features to probability distributions . Intuitively , it greedily matches the output between the two local regions that are closest in feature space , bootstrapping the object-level semantic correspondence among images with the same concept . Overall loss . By combining the global and local tasks , we add an auxiliary task loss L_aux to the main loss L_main ( e.g . the cross-entropy loss of the reasoning task ) . The eventual objective is L = L_main + α L_aux , L_aux = L_Global + L_Local , ( 3 ) where a trade-off weight α is added for better flexibility . As we mentioned above , our method will reduce to EsViT , a baseline with non-concept auxiliary tasks , when we use the current input features g_t ( I^(1) ) instead of f sampled from our dictionary for computing L_Global and L_Local . | This work studies visual relationship reasoning and proposes a model based on the recent contrastive learning methods DINO and EsViT .
Compared with the two predecessors, a feature dictionary is proposed to provide the online updated concept features for the teacher model instead. Correspondingly, two auxiliary tasks are introduced to drive semantic concept learning. Experiments on two large benchmarks show the efficacy of the proposed method. And extensive ablations are also made to probe the model components. | SP:73538d37acd0fd293e098d84bda05e3f5b537837 |
Learning and controlling the source-filter representation of speech with a variational autoencoder | 1 INTRODUCTION . High-dimensional data such as natural images or speech signals exhibit some form of regularity which prevents their dimensions from varying independently from each other . This suggests that there exists a latent representation of smaller dimension from which the high-dimensional observed data were generated . Discovering the hidden properties of complex data is the goal of representation learning , and deep latent-variable generative models have emerged as promising unsupervised approaches ( Goodfellow et al. , 2014 ; Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Chen et al. , 2016 ; Higgins et al. , 2017 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ) . The variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) , which is equipped with both a generative and inference model , can be used not only for data generation but also for analysis and transformation . As an explicit model of a probability density function ( pdf ) , the VAE can also be used as a learned prior for solving inverse problems such as compressed sensing ( Bora et al. , 2017 ) , speech enhancement ( Bando et al. , 2018 ; Leglaive et al. , 2018 ) , or source separation ( Kameoka et al. , 2019 ; Jayaram & Thickstun , 2020 ) . Making sense of the latent representation learned by a VAE and controlling the underlying continuous factors of variation in the data are important challenges to build more expressive and interpretable generative models and probabilistic priors . Previous works on representation learning with deep generative models , in particular VAEs , have mostly focused on images ( Higgins et al. , 2017 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ; Locatello et al. , 2019 ; 2020 ) . Yet , it is not always easy to define the ground-truth latent factors of variation involved in the generation of natural images . 
For speech data , the latent factors of variation can be directly related to the anatomical mechanisms of speech production . This makes speech data interesting for investigating the disentangled representation learning capabilities of VAEs , complementary to studies dealing with images . A key concept for characterizing the structure of speech signals is deduced from the source-filter model proposed by Fant ( 1970 ) . This model , described in more detail in Section 2.2 , implies that a speech signal is mainly characterized by a few continuous latent factors of variation corresponding to the vibration of the vocal folds ( i.e. , the source ) , which defines the fundamental frequency , and the resonances of the vocal tract ( i.e. , the filter ) , which define the formants . The source-filter model is at the core of various fundamental speech processing techniques such as cepstral representations and linear predictive coding ( LPC ) ( Rabiner & Schafer , 2010 ) . Valin & Skoglund ( 2019 ) ; Wang et al . ( 2019 ) and Juvela et al . ( 2019 ) have recently shown that the efficiency of neural speech vocoders can be largely improved by leveraging the sourcefilter model . Other works investigating the interaction between the source-filter model and neural networks include Lee et al . ( 2019 ) and Choi et al . ( 2021 ) . All these studies illustrate the interest of combining deep learning techniques with more traditional signal processing models and algorithms . In this work , we interpret and control the latent space of a VAE from the perspective of the source-filter model of speech production , which can be beneficial for various applications in speech analysis , transformation , and synthesis . We first train a VAE on a dataset of about 25 hours of unlabeled speech signals . 
Then , using only a few seconds of labeled speech signals generated with an artificial speech synthesizer , we propose a method to analyze and control the fundamental frequency and the formant frequencies in the latent representation of the previously trained VAE . Our contributions are the following : ( i ) We experimentally demonstrate that the fundamental frequency and the frequency of the first three formants are encoded in orthogonal subspaces of the VAE latent space . This shows that a vanilla VAE trained in an unsupervised fashion is able to learn a representation that is compliant with the source-filter model of speech production . ( ii ) We develop a weakly-supervised method to precisely and independently control the source-filter continuous latent factors of speech variation within the learned subspaces . We put in evidence the orthogonality of these subspaces , which allows us to perform speech transformations in a disentangled manner ( i.e. , modifying one of the factors does not affect the others ) . ( iii ) Without requiring additional information such as text or human-labeled data , we propose a deep generative model of speech spectrograms conditioned on the fundamental frequency and formant frequencies . To the best of our knowledge , this is the first study showing the link between the classical source-filter model of speech production and the representation learned in the latent space of a VAE . Thanks to this link , we propose a principled method to generate speech data controlled with interpretable trajectories ( of e.g. , fundamental frequency and formant frequencies ) . 2 BACKGROUND . 2.1 VARIATIONAL AUTOENCODER . Generative modeling consists in learning a probabilistic model of an observable random variable x ∈ X ⊂ RD . Let D = { x1 , ... , xN ∈ X } be a dataset of N = # D independent and identically distributed ( i.i.d . ) observations of x . 
The empirical distribution of x is defined by p̂ ( x ) = ( 1/N ) ∑_{x_n ∈ D} δ ( x − x_n ) , where δ is the Dirac delta function , which integrates to 1 and is null everywhere except at 0 . The variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) attempts to approximate p̂ ( x ) with a pdf pθ ( x ) parametrized by θ . High-dimensional data such as natural images or speech signals exhibit some form of regularity which prevents the D dimensions of x from varying independently from each other . We can thus assume that there exists a latent variable z ∈ R^L , with L ≪ D , from which the observed data were generated . Accordingly , the model distribution in the VAE is defined by marginalizing the joint distribution of the latent and observed data , pθ ( x ) = ∫ pθ ( x|z ) p ( z ) dz . In this work , the observed data vector x ∈ R^D_+ denotes the power spectrum of a short frame of speech signal ( i.e. , a column of the short-time Fourier transform ( STFT ) power spectrogram ) . Its entries are non-negative and its dimension D equals the number of frequency bins . We use the Itakura-Saito VAE ( IS-VAE ) ( Bando et al. , 2018 ; Leglaive et al. , 2018 ; Girin et al. , 2019 ) defined by p ( z ) = N ( z ; 0 , I ) , pθ ( x|z ) = ∏_{d=1}^{D} Exp ( [ x ]_d ; [ vθ ( z ) ]_d^{−1} ) , ( 1 ) where N and Exp denote the densities of the multivariate Gaussian and univariate exponential distributions , respectively , and [ v ]_d denotes the d-th entry of v . The inverse scale parameters of pθ ( x|z ) are provided by a neural network called the decoder , parametrized by θ and taking z as input .
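A toy sketch of this generative model ( a standard Gaussian latent , a decoder producing positive scales , and an exponential observation model per frequency bin ) , with an illustrative linear map standing in for the real decoder network :

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_frame(z, W):
    """Sketch of the IS-VAE generative model: the decoder maps z to
    positive scale parameters v_theta(z), and each power-spectrum bin
    [x]_d is exponential with mean [v_theta(z)]_d, i.e. inverse scale
    1/[v_theta(z)]_d. W is an illustrative linear decoder, not the
    paper's deep network."""
    v = np.exp(W @ z)                 # positive scales, shape (D,)
    x = rng.exponential(scale=v)      # one sampled power-spectrum frame
    return x, v

L, D = 4, 8                           # illustrative latent / spectrum sizes
W = rng.standard_normal((D, L)) * 0.1
z = rng.standard_normal(L)            # z ~ N(0, I)
x, v = generate_frame(z, W)
```

The exponential likelihood is what gives the model its Itakura-Saito flavor : its negative log-likelihood matches the Itakura-Saito divergence between x and vθ ( z ) up to an additive constant .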
The marginal likelihood pθ ( x ) and the posterior distribution pθ ( z|x ) are intractable due to the nonlinearities of the decoder , so it is necessary to introduce an inference model qφ ( z|x ) ≈ pθ ( z|x ) , which in the VAE is usually defined by qφ ( z|x ) = N ( z ; µφ ( x ) , diag { vφ ( x ) } ) , ( 2 ) where the mean and variance parameters are provided by a neural network called the encoder , parametrized by φ and taking x as input . Then , the VAE training consists in maximizing a lower bound of ln pθ ( x ) , called the evidence lower bound ( ELBO ) and defined by L ( θ , φ ) = E_{p̂(x)} [ E_{qφ(z|x)} [ ln pθ ( x|z ) ] − D_KL ( qφ ( z|x ) ‖ p ( z ) ) ] . During training , the generative and inference model parameters θ and φ are jointly estimated by maximizing the ELBO , using ( variants of ) stochastic gradient descent with the so-called reparameterization trick ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) . 2.2 SOURCE-FILTER MODEL OF SPEECH PRODUCTION . The source-filter model of speech production ( Fant , 1970 ) is at the basis of many speech processing systems . It considers that the production of speech results from the interaction of a source signal with a linear filter . In voiced speech , the source originates from the vibration of the vocal folds , which produces a quasi-periodic glottal sound wave whose fundamental frequency defines the pitch . In unvoiced speech , a noise source is produced by a turbulent airflow or an acoustic impulse . The source signal is modified by the vocal tract , which is assumed to act as a linear filter . The cavities of the vocal tract give rise to resonances , which are called the formants and are characterized by their frequency , amplitude and bandwidth . By moving the speech articulators such as the tongue , lips , and jaw , humans modify the shape of their vocal tract , which results in a change of the acoustic filter and the associated resonances .
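Returning to the ELBO above , it is typically estimated by Monte Carlo with the reparameterization trick ; a one-sample sketch for the exponential ( Itakura-Saito ) likelihood , with illustrative shapes and a toy decoder , is :

```python
import numpy as np

rng = np.random.default_rng(1)

def elbo_estimate(x, mu, log_var, decode):
    """One-sample Monte-Carlo estimate of the ELBO with the
    reparameterization trick. `decode` maps z to the scales v_theta(z);
    the single-sample estimate and shapes are illustrative."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps        # z ~ q_phi(z|x), reparameterized
    v = decode(z)
    # ln p(x|z) for Exp([x]_d ; 1/v_d): sum_d ( -ln v_d - x_d / v_d )
    log_lik = np.sum(-np.log(v) - x / v)
    # KL( N(mu, diag(var)) || N(0, I) ), available in closed form
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return log_lik - kl
```

The reparameterization z = µ + σ ⊙ ε makes the sample differentiable with respect to the encoder parameters , so both networks can be trained by stochastic gradient ascent on this estimate .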
This is how the different elementary speech sounds called phonemes are produced to form syllables , words and sentences . The power spectra and the spectral envelopes of two French vowels are displayed in Figure 1 . The spectral envelopes show that the formant frequencies are different for the two vowels . In this example however , the harmonic structure of the spectra shows that the fundamental frequency is the same for the two vowels . Formant frequencies are important distinctive features of vowels . In a first approximation , they can be related to the opening of the mouth , the front/rear position of the tongue , and the rounding of the lips for the first , second , and third formant respectively . For voiced phonemes , humans are able to control the formants independently of the pitch ( i.e. , to change the filter independently of the source ( Fant , 1970 ) ) and of each other ( MacDonald et al. , 2011 ) . The independence of the source and filter characteristics makes the speech signals an interesting material for representation learning methods , especially with deep generative latent-variable models . In the present study , in addition to the pre-trained IS-VAE speech spectrogram model , we also assume the availability of an artificial speech synthesizer allowing for an accurate and independent control of the fundamental frequency and formants . In this work , we use Soundgen ( Anikin , 2019 ) , a parametric synthesizer based on the source-filter model of speech production . For a given speech sound , the voiced component of the source signal is generated by a sum of sine waves , the noise component by a filtered white noise , and both components are then summed and passed through a linear filter simulating the effect of the human vocal tract . Importantly , this synthesizer allows us to easily generate artificial speech data labeled with the fundamental frequency and formant frequency values . 
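A toy source-filter synthesis in the spirit of this description ( not Soundgen's implementation ) : a harmonic source at f0 is passed through cascaded two-pole resonators , one per formant frequency ; the bandwidth is an assumed fixed value :

```python
import numpy as np

def synthesize_vowel(f0, formants, sr=16000, dur=0.1):
    """Toy source-filter synthesis: a sum of harmonics of f0 (the
    voiced source) filtered by one second-order resonator per formant
    (the vocal tract). Bandwidths are illustrative fixed values."""
    t = np.arange(int(sr * dur)) / sr
    # voiced source: sum of sine harmonics below the Nyquist frequency
    source = sum(np.sin(2 * np.pi * k * f0 * t)
                 for k in range(1, int(sr / 2 / f0)))
    y = source
    for fc in formants:                   # cascaded vocal-tract resonators
        bw = 80.0                         # assumed formant bandwidth in Hz
        r = np.exp(-np.pi * bw / sr)      # pole radius from the bandwidth
        a1 = -2 * r * np.cos(2 * np.pi * fc / sr)
        a2 = r * r
        out = np.zeros_like(y)
        for n in range(len(y)):           # y[n] = x[n] - a1*y[n-1] - a2*y[n-2]
            out[n] = y[n]
            if n >= 1:
                out[n] -= a1 * out[n - 1]
            if n >= 2:
                out[n] -= a2 * out[n - 2]
        y = out
    return y / np.max(np.abs(y))          # normalized waveform
```

Changing f0 while keeping the formant list fixed alters the pitch without moving the spectral envelope , which is exactly the source/filter independence the paper exploits .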
| The paper proposes a method for utilizing labeled synthetic data in order to characterize and control the latent space of a VAE trained on individual frames of speech spectrograms. Key properties of the data which one might want explicit control over are identified, i.e., pitch and formant frequencies, and a parametric speech synthesizer is used to generate synthetic datasets for each property, where the property in question is varied but all others are kept fixed. These labeled data are used to identify subspaces of a VAE latent which correspond to each property, essentially by a principal components analysis of the latent vectors from each point in the synthetic dataset for that property. The degree of disentanglement of the latent representation can be characterized by how orthogonal the bases are across subspaces. Furthermore, individual properties can be directly controlled in isolation by learning a simple linear regression model mapping from the quantity in question (e.g., fundamental frequency in Hertz) to the subspace basis using supervision from the corresponding synthetic data. | SP:d8208bdbadc2e0abd51397b2d4d048645949e9e5 |
Learning and controlling the source-filter representation of speech with a variational autoencoder | 1 INTRODUCTION . High-dimensional data such as natural images or speech signals exhibit some form of regularity which prevents their dimensions from varying independently from each other . This suggests that there exists a latent representation of smaller dimension from which the high-dimensional observed data were generated . Discovering the hidden properties of complex data is the goal of representation learning , and deep latent-variable generative models have emerged as promising unsupervised approaches ( Goodfellow et al. , 2014 ; Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Chen et al. , 2016 ; Higgins et al. , 2017 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ) . The variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) , which is equipped with both a generative and inference model , can be used not only for data generation but also for analysis and transformation . As an explicit model of a probability density function ( pdf ) , the VAE can also be used as a learned prior for solving inverse problems such as compressed sensing ( Bora et al. , 2017 ) , speech enhancement ( Bando et al. , 2018 ; Leglaive et al. , 2018 ) , or source separation ( Kameoka et al. , 2019 ; Jayaram & Thickstun , 2020 ) . Making sense of the latent representation learned by a VAE and controlling the underlying continuous factors of variation in the data are important challenges to build more expressive and interpretable generative models and probabilistic priors . Previous works on representation learning with deep generative models , in particular VAEs , have mostly focused on images ( Higgins et al. , 2017 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ; Locatello et al. , 2019 ; 2020 ) . Yet , it is not always easy to define the ground-truth latent factors of variation involved in the generation of natural images . 
For speech data , the latent factors of variation can be directly related to the anatomical mechanisms of speech production . This makes speech data interesting for investigating the disentangled representation learning capabilities of VAEs , complementary to studies dealing with images . A key concept for characterizing the structure of speech signals is deduced from the source-filter model proposed by Fant ( 1970 ) . This model , described in more detail in Section 2.2 , implies that a speech signal is mainly characterized by a few continuous latent factors of variation corresponding to the vibration of the vocal folds ( i.e. , the source ) , which defines the fundamental frequency , and the resonances of the vocal tract ( i.e. , the filter ) , which define the formants . The source-filter model is at the core of various fundamental speech processing techniques such as cepstral representations and linear predictive coding ( LPC ) ( Rabiner & Schafer , 2010 ) . Valin & Skoglund ( 2019 ) ; Wang et al . ( 2019 ) and Juvela et al . ( 2019 ) have recently shown that the efficiency of neural speech vocoders can be largely improved by leveraging the sourcefilter model . Other works investigating the interaction between the source-filter model and neural networks include Lee et al . ( 2019 ) and Choi et al . ( 2021 ) . All these studies illustrate the interest of combining deep learning techniques with more traditional signal processing models and algorithms . In this work , we interpret and control the latent space of a VAE from the perspective of the source-filter model of speech production , which can be beneficial for various applications in speech analysis , transformation , and synthesis . We first train a VAE on a dataset of about 25 hours of unlabeled speech signals . 
Then , using only a few seconds of labeled speech signals generated with an artificial speech synthesizer , we propose a method to analyze and control the fundamental frequency and the formant frequencies in the latent representation of the previously trained VAE . Our contributions are the following : ( i ) We experimentally demonstrate that the fundamental frequency and the frequency of the first three formants are encoded in orthogonal subspaces of the VAE latent space . This shows that a vanilla VAE trained in an unsupervised fashion is able to learn a representation that is compliant with the source-filter model of speech production . ( ii ) We develop a weakly-supervised method to precisely and independently control the source-filter continuous latent factors of speech variation within the learned subspaces . We put in evidence the orthogonality of these subspaces , which allows us to perform speech transformations in a disentangled manner ( i.e. , modifying one of the factors does not affect the others ) . ( iii ) Without requiring additional information such as text or human-labeled data , we propose a deep generative model of speech spectrograms conditioned on the fundamental frequency and formant frequencies . To the best of our knowledge , this is the first study showing the link between the classical source-filter model of speech production and the representation learned in the latent space of a VAE . Thanks to this link , we propose a principled method to generate speech data controlled with interpretable trajectories ( of e.g. , fundamental frequency and formant frequencies ) . 2 BACKGROUND . 2.1 VARIATIONAL AUTOENCODER . Generative modeling consists in learning a probabilistic model of an observable random variable x ∈ X ⊂ RD . Let D = { x1 , ... , xN ∈ X } be a dataset of N = # D independent and identically distributed ( i.i.d . ) observations of x . 
The empirical distribution of x is defined by p̂ ( x ) = 1N ∑ xn∈D δ ( x − xn ) , where δ is the Dirac delta function , which is null everywhere except in 0 where it takes the value 1 . The variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) attempts to approximate p̂ ( x ) with a pdf pθ ( x ) parametrized by θ. High-dimensional data such as natural images or speech signals exhibit some form of regularity which prevents theD dimensions of x from varying independently from each other . We can thus assume that there exists a latent variable z ∈ RL , with L D , from which the observed data were generated . Accordingly , the model distribution in the VAE is defined by marginalizing the joint distribution of the latent and observed data , pθ ( x ) =∫ pθ ( x|z ) p ( z ) dz . In this work , the observed data vector x ∈ RD+ denotes the power spectrum of a short frame of speech signal ( i.e. , a column of the short-time Fourier transform ( STFT ) power spectrogram ) . Its entries are non negative and its dimension D equals the number of frequency bins . We use the Itakura-Saito VAE ( IS-VAE ) ( Bando et al. , 2018 ; Leglaive et al. , 2018 ; Girin et al. , 2019 ) defined by p ( z ) = N ( z ; 0 , I ) , pθ ( x|z ) = ∏D d=1 Exp ( [ x ] d ; [ vθ ( z ) ] −1 d ) , ( 1 ) whereN and Exp denote the densities of the multivariate Gaussian and univariate exponential distributions , respectively , and [ v ] d denotes the d-th entry of v. The inverse scale parameters of pθ ( x|z ) are provided by a neural network called the decoder , parametrized by θ and taking z as input . 
The marginal likelihood pθ(x) and the posterior distribution pθ(z|x) are intractable due to the nonlinearities of the decoder, so it is necessary to introduce an inference model qφ(z|x) ≈ pθ(z|x), which in the VAE is usually defined by qφ(z|x) = N(z; µφ(x), diag{vφ(x)}), (2) where the mean and variance parameters are provided by a neural network called the encoder, parametrized by φ and taking x as input. Then, VAE training consists of maximizing a lower bound of ln pθ(x), called the evidence lower bound (ELBO) and defined by L(θ, φ) = E_{p̂(x)}[ E_{qφ(z|x)}[ln pθ(x|z)] − D_KL(qφ(z|x) ‖ p(z)) ]. During training, the generative and inference model parameters θ and φ are jointly estimated by maximizing the ELBO, using (variants of) stochastic gradient descent with the so-called reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). 2.2 SOURCE-FILTER MODEL OF SPEECH PRODUCTION . The source-filter model of speech production (Fant, 1970) is at the basis of many speech processing systems. It considers that the production of speech results from the interaction of a source signal with a linear filter. In voiced speech, the source originates from the vibration of the vocal folds, which produces a quasi-periodic glottal sound wave whose fundamental frequency defines the pitch. In unvoiced speech, a noise source is produced by a turbulent airflow or an acoustic impulse. The source signal is modified by the vocal tract, which is assumed to act as a linear filter. The cavities of the vocal tract give rise to resonances, which are called the formants and are characterized by their frequency, amplitude and bandwidth. By moving the speech articulators such as the tongue, lips, and jaw, humans modify the shape of their vocal tract, which results in a change of the acoustic filter and the associated resonances.
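The ELBO defined above can be estimated per sample in a few lines. The sketch below assumes placeholder encoder outputs and a toy reconstruction term (not the paper's networks); it shows the two ingredients the text mentions: the closed-form Gaussian KL term and the reparameterization trick.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8  # latent dimension, illustrative

# Placeholder encoder outputs for one observation x (stand-ins for the
# network outputs mu_phi(x) and log v_phi(x)).
mu = rng.normal(size=L)
log_var = rng.normal(scale=0.1, size=L)

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(v)) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I): the sample becomes a
    deterministic function of (mu, sigma), so gradients can flow through."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def log_p_x_given_z(z):
    """Toy reconstruction term; in the IS-VAE this would be the exponential
    log-likelihood computed from the decoder output."""
    return -0.5 * np.sum(z ** 2)

# One-sample Monte Carlo estimate of the ELBO for this observation.
z = reparameterize(mu, log_var, rng)
elbo = log_p_x_given_z(z) - kl_to_standard_normal(mu, log_var)
```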
This is how the different elementary speech sounds, called phonemes, are produced to form syllables, words and sentences. The power spectra and the spectral envelopes of two French vowels are displayed in Figure 1. The spectral envelopes show that the formant frequencies are different for the two vowels. In this example, however, the harmonic structure of the spectra shows that the fundamental frequency is the same for the two vowels. Formant frequencies are important distinctive features of vowels. To a first approximation, they can be related to the opening of the mouth, the front/rear position of the tongue, and the rounding of the lips for the first, second, and third formant, respectively. For voiced phonemes, humans are able to control the formants independently of the pitch (i.e., to change the filter independently of the source (Fant, 1970)) and of each other (MacDonald et al., 2011). The independence of the source and filter characteristics makes speech signals an interesting material for representation learning methods, especially with deep generative latent-variable models. In the present study, in addition to the pre-trained IS-VAE speech spectrogram model, we also assume the availability of an artificial speech synthesizer allowing for an accurate and independent control of the fundamental frequency and formants. In this work, we use Soundgen (Anikin, 2019), a parametric synthesizer based on the source-filter model of speech production. For a given speech sound, the voiced component of the source signal is generated by a sum of sine waves, the noise component by a filtered white noise, and both components are then summed and passed through a linear filter simulating the effect of the human vocal tract. Importantly, this synthesizer allows us to easily generate artificial speech data labeled with the fundamental frequency and formant frequency values.
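A toy version of this source-filter synthesis can be sketched as follows. This is not Soundgen's actual implementation: the source is a sum of harmonics of f0 and the vocal-tract filter is approximated by Gaussian spectral bumps at illustrative formant frequencies.

```python
import numpy as np

sr = 16000                             # sample rate (Hz)
f0 = 120.0                             # fundamental frequency: the source
formants = [500.0, 1500.0, 2500.0]     # formant frequencies: the filter (illustrative)
t = np.arange(int(0.1 * sr)) / sr      # 100 ms of signal

# Voiced source: a sum of sine waves at the harmonics of f0, with 1/k decay.
n_harm = int((sr / 2) // f0)
source = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, n_harm + 1))

# Filter: shape the source spectrum with Gaussian bumps at the formants,
# a crude stand-in for the resonances of the vocal tract.
spec = np.fft.rfft(source)
freqs = np.fft.rfftfreq(len(source), 1 / sr)
envelope = sum(np.exp(-0.5 * ((freqs - f) / 100.0) ** 2) for f in formants)
speech = np.fft.irfft(spec * envelope, n=len(source))
```

Changing f0 moves the harmonic comb while leaving the envelope in place, and moving a formant shifts an envelope bump while leaving the comb in place, which is exactly the source/filter independence the text describes.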
| This paper analyzes VAE latent embeddings to extract subspaces that relate to pitch (f0) and formant frequencies (f1 through fN). This is done by first training a frame-synchronous IS-VAE model on clean speech data. The authors pass controlled synthesized speech through the model and obtain the embeddings corresponding to the synthesized data, randomly varying one fj component while keeping the others constant. Each embedding corresponds to a frame of spectral data. The set of embeddings is then analyzed by PCA to find, for each fj, the principal eigenvectors explaining 80% of the total energy. The mapping between f0, f1, ..., fN values and the subspace coefficients is done through a linear regression learned independently for each fj, also from synthetic data. Through simple examples, the authors show that they can change individual components (f0, f1, ..., fN) by modifying the embeddings corresponding to a signal, to some extent. They compare their results with rule-based vocoders (WORLD, TD-PSOLA) and a VAE baseline that does not use subspaces. | SP:d8208bdbadc2e0abd51397b2d4d048645949e9e5 |
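The subspace-extraction procedure described in this review can be sketched on synthetic data as follows (toy latent vectors instead of real VAE embeddings; dimensions, noise level and the 80% threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy latent vectors: 200 frames in which only one factor (say f0) varies
# along a fixed direction. In the paper these would come from the VAE
# encoder applied to labeled synthetic speech.
n, L = 200, 16
f0 = rng.uniform(100.0, 300.0, size=n)
direction = rng.normal(size=L)
direction /= np.linalg.norm(direction)
Z = np.outer(f0 / 300.0, direction) + 0.01 * rng.normal(size=(n, L))

# PCA: keep the leading components explaining at least 80% of the variance.
Zc = Z - Z.mean(axis=0)
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
var_ratio = S ** 2 / np.sum(S ** 2)
k = int(np.searchsorted(np.cumsum(var_ratio), 0.80)) + 1
basis = Vt[:k]                    # rows span the subspace for this factor

# Linear regression from the f0 values to the subspace coefficients.
coeffs = Zc @ basis.T             # (n, k) projections onto the subspace
A = np.vstack([f0, np.ones(n)]).T
weights, *_ = np.linalg.lstsq(A, coeffs, rcond=None)
```

With this toy data a single principal direction carries nearly all of the variation, so the recovered subspace is one-dimensional and the regression maps an f0 value to its coordinate along it.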
Learning and controlling the source-filter representation of speech with a variational autoencoder | 1 INTRODUCTION . High-dimensional data such as natural images or speech signals exhibit some form of regularity which prevents their dimensions from varying independently from each other . This suggests that there exists a latent representation of smaller dimension from which the high-dimensional observed data were generated . Discovering the hidden properties of complex data is the goal of representation learning , and deep latent-variable generative models have emerged as promising unsupervised approaches ( Goodfellow et al. , 2014 ; Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Chen et al. , 2016 ; Higgins et al. , 2017 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ) . The variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) , which is equipped with both a generative and inference model , can be used not only for data generation but also for analysis and transformation . As an explicit model of a probability density function ( pdf ) , the VAE can also be used as a learned prior for solving inverse problems such as compressed sensing ( Bora et al. , 2017 ) , speech enhancement ( Bando et al. , 2018 ; Leglaive et al. , 2018 ) , or source separation ( Kameoka et al. , 2019 ; Jayaram & Thickstun , 2020 ) . Making sense of the latent representation learned by a VAE and controlling the underlying continuous factors of variation in the data are important challenges to build more expressive and interpretable generative models and probabilistic priors . Previous works on representation learning with deep generative models , in particular VAEs , have mostly focused on images ( Higgins et al. , 2017 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ; Locatello et al. , 2019 ; 2020 ) . Yet , it is not always easy to define the ground-truth latent factors of variation involved in the generation of natural images . 
For speech data, the latent factors of variation can be directly related to the anatomical mechanisms of speech production. This makes speech data interesting for investigating the disentangled representation learning capabilities of VAEs, complementary to studies dealing with images. A key concept for characterizing the structure of speech signals is deduced from the source-filter model proposed by Fant (1970). This model, described in more detail in Section 2.2, implies that a speech signal is mainly characterized by a few continuous latent factors of variation corresponding to the vibration of the vocal folds (i.e., the source), which defines the fundamental frequency, and the resonances of the vocal tract (i.e., the filter), which define the formants. The source-filter model is at the core of various fundamental speech processing techniques such as cepstral representations and linear predictive coding (LPC) (Rabiner & Schafer, 2010). Valin & Skoglund (2019), Wang et al. (2019) and Juvela et al. (2019) have recently shown that the efficiency of neural speech vocoders can be largely improved by leveraging the source-filter model. Other works investigating the interaction between the source-filter model and neural networks include Lee et al. (2019) and Choi et al. (2021). All these studies illustrate the interest of combining deep learning techniques with more traditional signal processing models and algorithms. In this work, we interpret and control the latent space of a VAE from the perspective of the source-filter model of speech production, which can be beneficial for various applications in speech analysis, transformation, and synthesis. We first train a VAE on a dataset of about 25 hours of unlabeled speech signals.
Then, using only a few seconds of labeled speech signals generated with an artificial speech synthesizer, we propose a method to analyze and control the fundamental frequency and the formant frequencies in the latent representation of the previously trained VAE. Our contributions are the following: (i) We experimentally demonstrate that the fundamental frequency and the frequencies of the first three formants are encoded in orthogonal subspaces of the VAE latent space. This shows that a vanilla VAE trained in an unsupervised fashion is able to learn a representation that is compliant with the source-filter model of speech production. (ii) We develop a weakly-supervised method to precisely and independently control the source-filter continuous latent factors of speech variation within the learned subspaces. We demonstrate the orthogonality of these subspaces, which allows us to perform speech transformations in a disentangled manner (i.e., modifying one of the factors does not affect the others). (iii) Without requiring additional information such as text or human-labeled data, we propose a deep generative model of speech spectrograms conditioned on the fundamental frequency and formant frequencies. To the best of our knowledge, this is the first study showing the link between the classical source-filter model of speech production and the representation learned in the latent space of a VAE. Thanks to this link, we propose a principled method to generate speech data controlled with interpretable trajectories (e.g., of fundamental frequency and formant frequencies). 2 BACKGROUND . 2.1 VARIATIONAL AUTOENCODER . Generative modeling consists of learning a probabilistic model of an observable random variable x ∈ X ⊂ R^D. Let D = {x1, ..., xN ∈ X} be a dataset of N = #D independent and identically distributed (i.i.d.) observations of x.
The empirical distribution of x is defined by p̂(x) = (1/N) Σ_{xn ∈ D} δ(x − xn), where δ is the Dirac delta function, which is zero everywhere except at 0, where its integral equals 1. The variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) attempts to approximate p̂(x) with a pdf pθ(x) parametrized by θ. High-dimensional data such as natural images or speech signals exhibit some form of regularity which prevents the D dimensions of x from varying independently of each other. We can thus assume that there exists a latent variable z ∈ R^L, with L ≪ D, from which the observed data were generated. Accordingly, the model distribution in the VAE is defined by marginalizing the joint distribution of the latent and observed data, pθ(x) = ∫ pθ(x|z) p(z) dz. In this work, the observed data vector x ∈ R^D_+ denotes the power spectrum of a short frame of speech signal (i.e., a column of the short-time Fourier transform (STFT) power spectrogram). Its entries are non-negative and its dimension D equals the number of frequency bins. We use the Itakura-Saito VAE (IS-VAE) (Bando et al., 2018; Leglaive et al., 2018; Girin et al., 2019) defined by p(z) = N(z; 0, I), pθ(x|z) = ∏_{d=1}^{D} Exp([x]_d; [vθ(z)]_d^{−1}), (1) where N and Exp denote the densities of the multivariate Gaussian and univariate exponential distributions, respectively, and [v]_d denotes the d-th entry of v. The inverse scale parameters of pθ(x|z) are provided by a neural network called the decoder, parametrized by θ and taking z as input.
The marginal likelihood pθ(x) and the posterior distribution pθ(z|x) are intractable due to the nonlinearities of the decoder, so it is necessary to introduce an inference model qφ(z|x) ≈ pθ(z|x), which in the VAE is usually defined by qφ(z|x) = N(z; µφ(x), diag{vφ(x)}), (2) where the mean and variance parameters are provided by a neural network called the encoder, parametrized by φ and taking x as input. Then, VAE training consists of maximizing a lower bound of ln pθ(x), called the evidence lower bound (ELBO) and defined by L(θ, φ) = E_{p̂(x)}[ E_{qφ(z|x)}[ln pθ(x|z)] − D_KL(qφ(z|x) ‖ p(z)) ]. During training, the generative and inference model parameters θ and φ are jointly estimated by maximizing the ELBO, using (variants of) stochastic gradient descent with the so-called reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). 2.2 SOURCE-FILTER MODEL OF SPEECH PRODUCTION . The source-filter model of speech production (Fant, 1970) is at the basis of many speech processing systems. It considers that the production of speech results from the interaction of a source signal with a linear filter. In voiced speech, the source originates from the vibration of the vocal folds, which produces a quasi-periodic glottal sound wave whose fundamental frequency defines the pitch. In unvoiced speech, a noise source is produced by a turbulent airflow or an acoustic impulse. The source signal is modified by the vocal tract, which is assumed to act as a linear filter. The cavities of the vocal tract give rise to resonances, which are called the formants and are characterized by their frequency, amplitude and bandwidth. By moving the speech articulators such as the tongue, lips, and jaw, humans modify the shape of their vocal tract, which results in a change of the acoustic filter and the associated resonances.
This is how the different elementary speech sounds, called phonemes, are produced to form syllables, words and sentences. The power spectra and the spectral envelopes of two French vowels are displayed in Figure 1. The spectral envelopes show that the formant frequencies are different for the two vowels. In this example, however, the harmonic structure of the spectra shows that the fundamental frequency is the same for the two vowels. Formant frequencies are important distinctive features of vowels. To a first approximation, they can be related to the opening of the mouth, the front/rear position of the tongue, and the rounding of the lips for the first, second, and third formant, respectively. For voiced phonemes, humans are able to control the formants independently of the pitch (i.e., to change the filter independently of the source (Fant, 1970)) and of each other (MacDonald et al., 2011). The independence of the source and filter characteristics makes speech signals an interesting material for representation learning methods, especially with deep generative latent-variable models. In the present study, in addition to the pre-trained IS-VAE speech spectrogram model, we also assume the availability of an artificial speech synthesizer allowing for an accurate and independent control of the fundamental frequency and formants. In this work, we use Soundgen (Anikin, 2019), a parametric synthesizer based on the source-filter model of speech production. For a given speech sound, the voiced component of the source signal is generated by a sum of sine waves, the noise component by a filtered white noise, and both components are then summed and passed through a linear filter simulating the effect of the human vocal tract. Importantly, this synthesizer allows us to easily generate artificial speech data labeled with the fundamental frequency and formant frequency values.
| This paper shows that fundamental frequency and formant frequency information is encoded in a speech VAE model. This can be found by using an artificially controlled/generated dataset. After finding how to manipulate the latent space, one can control arbitrary speech samples in a desirable way, such as controlling the fundamental frequency or formant frequencies. The authors also show that the harmonic parts can be removed from the original speech and reconstructed as a whispered voice using only the spectral envelope part. Experimental results show that the proposed method can control formant frequencies, and that it controls them better than the method proposed in previous work. The proposed method shows on-par or worse performance on the fundamental frequency control experiment when compared to traditional DSP vocoders, such as TD-PSOLA or WORLD. | SP:d8208bdbadc2e0abd51397b2d4d048645949e9e5 |
Planning in Stochastic Environments with a Learned Model | Model-based reinforcement learning has proven highly successful. However, learning a model in isolation from its use during planning is problematic in complex environments. To date, the most effective techniques have instead combined value-equivalent model learning with powerful tree-search methods. This approach is exemplified by MuZero, which has achieved state-of-the-art performance in a wide range of domains, from board games to visually rich environments, with discrete and continuous action spaces, in online and offline settings. However, previous instantiations of this approach were limited to the use of deterministic models. This limits their performance in environments that are inherently stochastic, partially observed, or so large and complex that they appear stochastic to a finite agent. In this paper we extend this approach to learn and plan with stochastic models. Specifically, we introduce a new algorithm, Stochastic MuZero, that learns a stochastic model incorporating afterstates, and uses this model to perform a stochastic tree search. Stochastic MuZero matched or exceeded the state of the art in a set of canonical single- and multi-agent environments, including 2048 and backgammon, while maintaining the superhuman performance of standard MuZero in the game of Go. 1 INTRODUCTION . Constructing plans and executing them is an important feature of human and animal behaviour. In the field of artificial intelligence there has been a great amount of research into adding planning capabilities to intelligent agents. Tree-based planning algorithms have shown a lot of success in a wide variety of environments such as card games (Moravčík et al., 2017), board games (Campbell et al., 2002; Silver et al., 2016) and more recently video games (Schrittwieser et al., 2020) and continuous control tasks (Hubert et al., 2021).
Most tree search methods assume that the agent has access to a perfect simulator of the environment, whereas real-world environments are typically unknown. Model-based reinforcement learning algorithms combine a model-learning component, which estimates the dynamics of the environment, with a planning component, using the learned model as a simulator. However, learning a model in isolation from its use during planning has proven to be problematic in complex environments (van Hasselt et al., 2019). Instead, value-equivalent model-learning methods (Silver et al., 2017; Farahmand et al., 2017; Oh et al., 2017; Grimm et al., 2020) identify a model that reconstructs only those quantities required for planning. The most successful method, MuZero (Schrittwieser et al., 2020), learns a model that reconstructs reward, value and policy, and uses this model to perform a powerful Monte Carlo tree search. MuZero achieved superhuman results in Go, chess, shogi and Atari without any prior knowledge of the rules, and has also achieved state-of-the-art performance in large and continuous action spaces (Hubert et al., 2021) and offline reinforcement learning (Schrittwieser et al., 2021). However, value-equivalent methods such as MuZero have in practice been limited to a deterministic class of models, which severely limits their applicability. Many environments are inherently stochastic and may be poorly approximated by a deterministic model. Partially observed environments may also be perceived by the agent as stochastic, whenever aliased states cannot be disambiguated. Similarly, large and complex environments may appear stochastic to a small agent with finite capacity. In this paper we introduce the first empirically effective approach for handling stochasticity in value-equivalent model learning and planning.
The model is factored to first transition deterministically from a state to an afterstate, and then to branch stochastically from the afterstate to the next state. This factored model is trained end-to-end so as to maintain value equivalence for both the state value function and the action value function, and a stochastic planning method is applied to the model. We apply these ideas to MuZero, using a discrete generative network to represent the model, and modifying the Monte Carlo tree search to effectively use the factored model. We apply our method, Stochastic MuZero, to several environments in which handling stochasticity is important. First, we consider the popular stochastic puzzle game 2048, in which the prior state of the art exploits a perfect simulator and significant handcrafted domain knowledge. In our experiments, Stochastic MuZero achieved better results without any domain knowledge. Second, we consider the classic stochastic two-player game of backgammon, in which near-optimal play has been achieved using a perfect simulator. Stochastic MuZero matches this performance without any prior knowledge of the game rules. Finally, we evaluated our method in the deterministic board game of Go. There our method matched the performance of MuZero, demonstrating that Stochastic MuZero extends MuZero without sacrificing performance. 2 RELATED WORK . Observation models (Oh et al., 2015; Chiappa et al., 2017; Łukasz Kaiser et al., 2020) explicitly learn the dynamics of an environment by fitting a model of observations and rewards to observed transitions. Subsequently, these models can be combined with a model-free learning rule in a Dyna fashion (Sutton, 1991).
However, modeling high-dimensional image observations can be computationally prohibitive, prone to high error accumulation as the model is unrolled for multiple steps, and limiting, since the capacity of the model may be spent on background features that are not helpful for the problem at hand. These issues make such models poorly suited to planning. Finally, van Hasselt et al. (2019) argue that Dyna-based methods are unlikely to outperform model-free approaches that use a replay buffer. Latent models (Schrittwieser et al., 2020; Oh et al., 2017; Hafner et al., 2021; Henaff et al., 2017) attempt to overcome the limitations of observation models by learning recurrent networks that operate on latent states. In this framework, the model is conditioned on the current observation and future actions and is unrolled for k steps. Subsequently, it is trained to make predictions about rewards, values, policies or observations at each timestep based on the current latent state. The model can then be combined with a tree-based planning algorithm or used to generate synthetic trajectories. Recently, MuZero has shown that it is possible to use this approach to achieve state-of-the-art performance in many challenging domains (Hubert et al., 2021) while using less data (Schrittwieser et al., 2021). However, most approaches, including MuZero, use a deterministic function to model the environment dynamics, which limits their applicability to deterministic or weakly stochastic environments. Stochastic latent models are stochastic models of the environment dynamics that operate on latent states. In ( Hafner et al.
, 2021 ) the authors propose a recurrent state-space model which consists of three main modules: a recurrent module which generates the deterministic recurrent state ht; a representation model which combines ht with the current observation xt to generate a distribution over stochastic states st and plays the role of the posterior; and a transition predictor which depends only on ht and acts as the prior of the model. By combining the deterministic and stochastic states ht and st, the model is trained to predict the current observation ot, the transition reward rt and the discount dt. The next deterministic recurrent state is generated using ht, st and the action at. The stochastic states st are modeled as multidimensional multinomial variables. The learned model is then used to generate synthetic data which are used to train an actor-critic model-free agent. The authors show that their approach outperforms pure model-free methods, but it fails to achieve the performance of MuZero, which combines its learned model with planning. In (Ozair et al., 2021) the authors learn a stochastic transition model using a VQ-VAE generative network (van den Oord et al., 2017) and subsequently combine it with MCTS. They show that their method can match the performance of MuZero in chess, while viewing the problem as a single-player task and implicitly learning to model the behaviour of the opponent. Despite its promise, their approach was only applied in a supervised setting using expert data, and did not address the challenges of learning a stochastic model in the reinforcement learning setting. Moreover, the learned model was trained to explicitly predict the observation at every step, which can be a limiting factor in terms of computation and model efficiency when dealing with high-dimensional observations.
Finally, the authors used a two-stage training process: first, a model learns latent representations of the observations; then these representations are used to learn a transition model. This makes it hard to apply this approach in the reinforcement learning setting. 3 BACKGROUND . 3.1 MuZero . MuZero is a model-based general reinforcement learning agent which combines a learned model of the environment dynamics with a Monte Carlo tree search planning algorithm. The model is conditioned on the history of observations o_{≤t} at timestep t and a sequence of future actions a_{t:t+K}, and it is trained to predict the search policies π_{t:t+K}, values v^π_{t:t+K} and intermediate rewards r_{t:t+K} at each future timestep. MuZero uses deterministic functions for its model, and thus implicitly assumes that the underlying environment dynamics are also deterministic. MuZero uses its dynamics model to plan ahead at each time step; the outcome of its MCTS search is used both to select an action and as a target for its policy improvement operator. Model: MuZero's learned model consists of 3 functions: a representation function h, a dynamics function g and a prediction function f. The representation function maps the current history of observations o_{≤t} into a latent state s_t^0. The dynamics function g receives the previous latent state s_t^k and combines it with an action a_{t+k} to produce the next latent state s_t^{k+1} and the reward r_t^k. Finally, the prediction function f receives each latent state s_t^k as input and computes the policy p_t^k and value v_t^k. Given a sequence of policy π_{t:T}, value z_{t:T}, and reward u_{t:T} targets, the model is trained to minimize the loss shown in Equation 1: L_MuZero = Σ_{k=0}^{K} l^p(π_{t+k}, p_t^k) + Σ_{k=0}^{K} l^v(z_{t+k}, v_t^k) + Σ_{k=1}^{K} l^r(u_{t+k}, r_t^k) (1). The policy targets π_{t+k} correspond to the MCTS policy that was generated when searching from observation o_{≤t+k}.
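The loss in Equation 1 can be written schematically as below. The linear networks are random placeholders (not MuZero's architecture), and the per-term losses are one common instantiation: cross-entropy for the policy, squared error for value and reward.

```python
import numpy as np

rng = np.random.default_rng(4)
K, A, L = 3, 4, 8   # unroll length, action count, latent size (illustrative)

# Random linear maps standing in for the dynamics g and prediction f networks.
W_g = rng.normal(scale=0.2, size=(L, L + A))
W_p = rng.normal(scale=0.2, size=(A, L))
w_v = rng.normal(scale=0.2, size=L)
w_r = rng.normal(scale=0.2, size=L)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamics(s, a):
    """g: (latent state, action) -> (next latent state, reward)."""
    s_next = np.tanh(W_g @ np.concatenate([s, np.eye(A)[a]]))
    return s_next, float(w_r @ s_next)

def prediction(s):
    """f: latent state -> (policy, value)."""
    return softmax(W_p @ s), float(w_v @ s)

def muzero_loss(s0, actions, pi_targets, z_targets, u_targets):
    """Unroll K steps and sum the per-step losses of Equation 1."""
    s, loss = s0, 0.0
    for k in range(K + 1):
        p, v = prediction(s)
        loss += -np.sum(pi_targets[k] * np.log(p + 1e-12))  # l^p: cross-entropy
        loss += (z_targets[k] - v) ** 2                     # l^v: squared error
        if k < K:
            s, r = dynamics(s, actions[k])
            loss += (u_targets[k] - r) ** 2                 # l^r, k = 1..K
    return loss

# Random targets in place of MCTS policies, n-step returns and real rewards.
s0 = rng.standard_normal(L)
actions = [int(rng.integers(A)) for _ in range(K)]
pi_targets = [softmax(rng.normal(size=A)) for _ in range(K + 1)]
loss = muzero_loss(s0, actions, pi_targets, rng.normal(size=K + 1), rng.normal(size=K))
```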
The value targets z_{t+k} are computed using n-step returns (Sutton & Barto, 2018). Finally, the reward targets u_{t+k} correspond to the real instantaneous rewards observed when this sequence was generated. Search: MuZero uses a variant of the MCTS tree-based algorithm first proposed in Silver et al. (2018). The tree is constructed recursively through a number of simulations. Each simulation consists of 3 phases: selection, expansion and backpropagation. During the selection phase, the tree is traversed starting from the root node until a leaf edge is reached. At each internal node s, the algorithm selects the action a which maximizes the upper confidence bound proposed in Silver et al. (2016), shown in Equation 2: a = argmax_a [ Q(s, a) + P(a | s) · ( √(Σ_b N(s, b)) / (1 + N(s, a)) ) · ( α1 + log( (Σ_b N(s, b) + α2 + 1) / α2 ) ) ] (2). Here, Q(s, a) is the value estimate for action a, N(s, a) the visit count, P(a | s) the prior probability of selecting action a, and α1, α2 are constants which control the relative importance of the Q(s, ·) estimates and the prior probabilities P(· | s). In the next phase, expansion, the leaf edge is expanded by querying the MuZero model and a new node is added to the tree. Finally, during the backpropagation phase, the value estimate of the newly added edge is backpropagated up the tree using the n-step return estimate. | This paper aims to extend previous work on value-equivalent MBRL, such as MuZero, to stochastic environments. In contrast to conventional work in MBRL that fits transition models to be consistent with environmental observations, this line of work fits transition models to improve the accuracy / utility of a downstream value / policy. To this end the authors consider the MuZero algorithm and advocate learning a stochastic model with a VQ-VAE, and they modify MCTS so that it can be used with their stochastic model. The authors propose Stochastic MuZero.
Their algorithm makes use of a stochastic model to predict future values, policies and rewards. The authors suggest utilizing "afterstates": an intermediate state that is the result of taking an action, before the environment responds with an actual next state. In 2048, for example, an afterstate could be the state reached after applying a tile-moving action but before a number "2" tile appears in a random place. As illustrated in Figure 1, the stochastic model consists of 5 functions, in contrast to 3 functions in MuZero. The notable addition is the incorporation of afterstates into these functions, which allows the model to represent chance outcomes. There is an afterstate dynamics function that predicts a latent afterstate given a state and an action. The typical dynamics function then still predicts the next actual state and reward, but its inputs are a latent afterstate and a chance outcome. There is also an afterstate prediction function for the value, and a distribution prediction, where the distribution is over chance outcomes given an afterstate. That distribution can then be used for sampling chance outcomes at inference time. To adapt MCTS to this model, search starts from a state and then alternates at every level between afterstates and states, using the corresponding dynamics function to reach each type of state. | SP:0e4c5480365af54778341e8a492b55da1eea43ef |
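Equation 2 from the background section translates directly into code. The sketch below uses the constants commonly reported for MuZero (α1 = 1.25, α2 = 19652) and a toy node whose visit counts, value estimates and priors are stored in dictionaries.

```python
import math

def puct_select(node, alpha1=1.25, alpha2=19652.0):
    """Return the action maximizing Q(s,a) + P(a|s) * sqrt(sum_b N(s,b)) /
    (1 + N(s,a)) * (alpha1 + log((sum_b N(s,b) + alpha2 + 1) / alpha2))."""
    total = sum(node["N"].values())

    def score(a):
        bonus = (node["P"][a] * math.sqrt(total) / (1 + node["N"][a])
                 * (alpha1 + math.log((total + alpha2 + 1) / alpha2)))
        return node["Q"][a] + bonus

    return max(node["N"], key=score)

# With equal value estimates, the rarely visited, high-prior action wins.
node = {"N": {0: 50, 1: 2}, "Q": {0: 0.4, 1: 0.4}, "P": {0: 0.3, 1: 0.7}}
best = puct_select(node)
```

The exploration bonus shrinks as an action's visit count grows, so the search gradually shifts from the prior policy toward the value estimates accumulated in the tree.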
Planning in Stochastic Environments with a Learned Model | Model-based reinforcement learning has proven highly successful. However, learning a model in isolation from its use during planning is problematic in complex environments. To date, the most effective techniques have instead combined value-equivalent model learning with powerful tree-search methods. This approach is exemplified by MuZero, which has achieved state-of-the-art performance in a wide range of domains, from board games to visually rich environments, with discrete and continuous action spaces, in online and offline settings. However, previous instantiations of this approach were limited to the use of deterministic models. This limits their performance in environments that are inherently stochastic, partially observed, or so large and complex that they appear stochastic to a finite agent. In this paper we extend this approach to learn and plan with stochastic models. Specifically, we introduce a new algorithm, Stochastic MuZero, that learns a stochastic model incorporating afterstates, and uses this model to perform a stochastic tree search. Stochastic MuZero matched or exceeded the state of the art in a set of canonical single- and multi-agent environments, including 2048 and backgammon, while maintaining the superhuman performance of standard MuZero in the game of Go. 1 INTRODUCTION . Constructing plans and executing them is an important feature of human and animal behaviour. In the field of artificial intelligence there has been a great amount of research into adding planning capabilities to intelligent agents. Tree-based planning algorithms have shown a lot of success in a wide variety of environments such as card games (Moravčík et al., 2017), board games (Campbell et al., 2002; Silver et al., 2016) and more recently video games (Schrittwieser et al., 2020) and continuous control tasks (Hubert et al., 2021).
Most tree search methods assume that the agent has access to a perfect simulator of the environment , whereas real-world environments are typically unknown . Model-based reinforcement learning algorithms combine a model-learning component , which estimates the dynamics of the environment , with a planning component , using the learned model as a simulator . However , learning a model in isolation from its use during planning has proven to be problematic in complex environments ( van Hasselt et al. , 2019 ) . Instead , value-equivalent model-learning methods ( Silver et al. , 2017 ; Farahmand et al. , 2017 ; Oh et al. , 2017 ; Grimm et al. , 2020 ) identify a model that reconstructs only those quantities required for planning . The most successful method , MuZero ( Schrittwieser et al. , 2020 ) , learns a model that reconstructs reward , value and policy , and uses this model to perform a powerful Monte Carlo tree search . MuZero achieved superhuman results in Go , chess , shogi and Atari without any prior knowledge of the rules , and has also achieved state-of-the-art performance in large and continuous action spaces ( Hubert et al. , 2021 ) and offline reinforcement learning ( Schrittwieser et al. , 2021 ) . However , value-equivalent methods such as MuZero have in practice been limited to a deterministic class of models , which severely limits their applicability . Many environments are inherently stochastic and may be poorly approximated by a deterministic model . Partially observed environments may also be perceived by the agent as stochastic , whenever aliased states can not be disambiguated . Similarly , large and complex environments may appear stochastic to a small agent with finite capacity . In this paper we introduce the first empirically effective approach for handling stochasticity in value-equivalent model learning and planning .
The model is factored to first transition deterministically from state to an afterstate , and then to branch stochastically from the afterstate to the next state . This factored model is trained end-to-end so as to maintain value equivalence for both state value function and action value function respectively , and a stochastic planning method is applied to the model . We apply these ideas to MuZero , using a discrete generative network to represent the model , and modifying the Monte Carlo tree search to effectively use the factored model . We apply our method , Stochastic MuZero , to several environments in which handling stochasticity is important . First , we consider the popular stochastic puzzle game 2048 , in which the prior state of the art exploits a perfect simulator and significant handcrafted domain knowledge . In our experiments , Stochastic MuZero achieved better results without any domain knowledge . Second , we consider the classic stochastic two-player game of backgammon , in which near-optimal play has been achieved using a perfect simulator . Stochastic MuZero matches this performance without any prior knowledge of the game rules . Finally , we evaluated our method in the deterministic board game of Go . There our method matched the performance of MuZero , demonstrating that Stochastic MuZero extends MuZero without sacrificing performance . 2 RELATED WORK . Observation models ( Oh et al. , 2015 ; Chiappa et al. , 2017 ; Łukasz Kaiser et al. , 2020 ) explicitly learn the dynamics of an environment by fitting a model of observations and rewards to observed transitions . Subsequently , these models can be combined with a model-free learning rule in a Dyna fashion ( Sutton , 1991 ) . 
However , modeling high dimensional image observations can be computationally prohibitive , prone to high error accumulation as the model is unrolled for multiple steps , and limiting since the capacity of the model could be spent on background features which are not helpful for the problem at hand . These issues make such models unconducive for planning . Finally , van Hasselt et al . ( 2019 ) argues that Dyna-based methods are unlikely to outperform model-free approaches that use a replay buffer . Latent models ( Schrittwieser et al. , 2020 ; Oh et al. , 2017 ; Hafner et al. , 2021 ; Henaff et al. , 2017 ) attempt to overcome the limitations of observation models by learning recurrent networks that operate on latent states . In this framework , the model is conditioned on the current observation and future actions and is unrolled for k steps . Subsequently , it is trained to make predictions about rewards , values , policies or observations at each timestep based on the current latent state . The model can then be combined with a tree-based planning algorithm or used to generate synthetic trajectories . Recently , MuZero has shown that it is possible to use this approach to achieve state-of-the-art performance in many challenging domains ( Hubert et al. , 2021 ) while using less data ( Schrittwieser et al. , 2021 ) . However , most approaches , including MuZero , use a deterministic function to model the environment dynamics , which limits their applicability to deterministic or weakly stochastic environments . Stochastic latent models are stochastic models of the environment dynamics that operate on latent states . In ( Hafner et al. 
, 2021 ) the authors propose a recurrent state-space model which consists of three main modules , a recurrent module which generates the deterministic recurrent state ht , a representation model which combines ht with the current observation xt to generate a distribution over stochastic states st and plays the role of the posterior , and a transition predictor which depends only on ht and acts as the prior of the model . By combining the deterministic and stochastic states ht and st the model is trained to predict the current observation ot , the transition reward rt and the discount dt . The next deterministic recurrent state is generated using ht , st and action at . The stochastic states st are modeled as multidimensional multinomial variables . The learned model is then used to generate synthetic data which are used to train an actor-critic model-free agent . The authors show that their approach outperforms pure model-free methods but it fails to achieve the performance of MuZero which combines its learned model with planning . In ( Ozair et al. , 2021 ) the authors learn a stochastic transition model using a VQ-VAE generative network ( van den Oord et al. , 2017 ) and subsequently combine it with MCTS . They show that their method can match the performance of MuZero in chess , while viewing the problem as a singleplayer task and implicitly learning to model the behaviour of the opponent . Despite its promise their approach was only applied in a supervised setting using expert data , and did not address the challenges of learning a stochastic model in the reinforcement learning setting . Moreover , the learned model was trained to explicitly predict the observation at every step , which can be a limiting factor in terms of computation and model efficiency when dealing with high dimensional observations . 
Finally , the authors used a two-stage training process : first , a model learns latent representations of the observations , then these representations are used to learn a transition model . This makes it hard to apply this approach in the reinforcement learning setting . 3 BACKGROUND 3.1 MuZero MuZero is a model-based general reinforcement learning agent which combines a learned model of the environment dynamics with a Monte Carlo tree search planning algorithm . The model is conditioned on the history of observations $o_{\leq t}$ at timestep $t$ and a sequence of future actions $a_{t:t+K}$ , and it is trained to predict the search policies $\pi_{t:t+K}$ , values $v^{\pi}_{t:t+K}$ and intermediate rewards $r_{t:t+K}$ at each future timestep . MuZero uses deterministic functions for its model , and thus it implicitly assumes that the underlying environment dynamics are also deterministic . MuZero uses its dynamics model to plan ahead at each time step and the outcome of its MCTS search to select an action and as targets for its policy improvement operator . Model MuZero ’ s learned model consists of 3 functions : a representation function $h$ , a dynamics function $g$ and a prediction function $f$ . The representation function maps the current history of observations $o_{\leq t}$ into a latent state $s^{0}_{t}$ . The dynamics function $g$ receives the previous latent state $s^{k}_{t}$ and combines it with an action $a_{t+k}$ to produce the next latent state $s^{k+1}_{t}$ and the reward $r^{k}_{t}$ . Finally , the prediction function $f$ receives each latent state $s^{k}_{t}$ as an input and computes the policy $p^{k}_{t}$ and value $v^{k}_{t}$ . Given a sequence of policy $\pi_{t:T}$ , value $z_{t:T}$ , and reward $u_{t:T}$ targets , the model is trained to minimize the loss shown in equation 1 . $$\mathcal{L}^{MuZero} = \sum_{k=0}^{K} l^{p}(\pi_{t+k}, p^{k}_{t}) + \sum_{k=0}^{K} l^{v}(z_{t+k}, v^{k}_{t}) + \sum_{k=1}^{K} l^{r}(u_{t+k}, r^{k}_{t}) \quad (1)$$ The policy targets $\pi_{t+k}$ correspond to the MCTS policy that was generated when searching from observation $o_{\leq t+k}$ .
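The three-term loss in equation 1 can be sketched numerically; the per-term losses below (cross-entropy for the policy, squared error for value and reward) are one common instantiation and are an assumption here, since MuZero in practice uses categorical representations for the scalar targets:

```python
import numpy as np

def policy_loss(target, pred):
    # Cross-entropy between the MCTS visit-count policy and the predicted policy.
    return -float(np.sum(target * np.log(pred + 1e-12)))

def scalar_loss(target, pred):
    # Squared error for value/reward targets (a simple assumed choice).
    return float((target - pred) ** 2)

def muzero_loss(pi_targets, p_preds, z_targets, v_preds, u_targets, r_preds):
    """L = sum_{k=0..K} l_p + sum_{k=0..K} l_v + sum_{k=1..K} l_r (equation 1)."""
    K = len(pi_targets) - 1
    loss = sum(policy_loss(pi_targets[k], p_preds[k]) for k in range(K + 1))
    loss += sum(scalar_loss(z_targets[k], v_preds[k]) for k in range(K + 1))
    # Reward term starts at k=1: the reward at the root observation is not predicted.
    loss += sum(scalar_loss(u_targets[k], r_preds[k]) for k in range(1, K + 1))
    return loss
```

Note the different index ranges: policy and value terms run over k = 0..K, while the reward term runs over k = 1..K, matching equation 1.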
The value targets $z_{t+k}$ are computed using n-step returns ( Sutton & Barto , 2018 ) . Finally , the reward targets $u_{t+k}$ correspond to the real instantaneous rewards observed when this sequence was generated . Search MuZero uses a variant of the MCTS tree-based algorithm first proposed in ( Silver et al. , 2018 ) . The tree is constructed recursively through a number of simulations . Each simulation consists of 3 phases : selection , expansion and backpropagation . During the selection phase the tree is traversed starting from the root node until a leaf edge is reached . At each internal node $s$ the algorithm selects the action $a$ which maximizes the upper confidence bound proposed in ( Silver et al. , 2016 ) and shown in equation 2 . $$a = \arg\max_{a} \left[ Q(s,a) + P(a \mid s) \cdot \frac{\sqrt{\sum_{b} N(s,b)}}{1 + N(s,a)} \left( \alpha_1 + \log \left( \frac{\sum_{b} N(s,b) + \alpha_2 + 1}{\alpha_2} \right) \right) \right] \quad (2)$$ Here , $Q(s,a)$ is the value estimate for action $a$ , $N(s,a)$ the visit count , $P(a \mid s)$ the prior probability of selecting action $a$ , and $\alpha_1 , \alpha_2$ are constants which control the relative importance of the $Q(s,\cdot)$ estimates and prior probabilities $P(\cdot \mid s)$ . In the next phase , expansion , the leaf edge is expanded by querying the MuZero model and a new node is added to the tree . Finally , during the backpropagation phase the value estimate of the newly added edge is backpropagated up the tree using the n-step return estimate . | The submission proposes an algorithm (called Stochastic MuZero) that combines VQ-VAEs with MuZero. Unlike MuZero, Stochastic MuZero can handle settings with stochasticity in a principled way (in terms of value equivalence). The submission shows that Stochastic MuZero can perform comparably to AlphaZero in 2048 and Backgammon. Additionally, it shows that Stochastic MuZero can perform comparably to MuZero in Go in a setting in which its computation budget is twice as large. | SP:0e4c5480365af54778341e8a492b55da1eea43ef
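The selection rule of equation 2 can be sketched as follows; the dictionary-based node statistics are an assumption for illustration, and the default α1, α2 values shown are the constants commonly reported for AlphaZero-style search, used here only as an example:

```python
import math

def select_action(Q, P, N, a1=1.25, a2=19652):
    """Pick the action maximizing the pUCT bound of equation 2.

    Q, P, N are dicts keyed by action: value estimate, prior probability,
    and visit count at the current node.
    """
    total_n = sum(N.values())

    def ucb(a):
        # Exploration bonus: high for low-visit actions with a large prior.
        exploration = P[a] * math.sqrt(total_n) / (1 + N[a])
        exploration *= a1 + math.log((total_n + a2 + 1) / a2)
        return Q[a] + exploration

    return max(Q, key=ucb)
```

With equal value estimates, the bonus steers the search toward the less-visited action; with equal visit counts, the value estimates decide.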
Planning in Stochastic Environments with a Learned Model | Model-based reinforcement learning has proven highly successful . However , learning a model in isolation from its use during planning is problematic in complex environments . To date , the most effective techniques have instead combined value-equivalent model learning with powerful tree-search methods . This approach is exemplified by MuZero , which has achieved state-of-the-art performance in a wide range of domains , from board games to visually rich environments , with discrete and continuous action spaces , in online and offline settings . However , previous instantiations of this approach were limited to the use of deterministic models . This limits their performance in environments that are inherently stochastic , partially observed , or so large and complex that they appear stochastic to a finite agent . In this paper we extend this approach to learn and plan with stochastic models . Specifically , we introduce a new algorithm , Stochastic MuZero , that learns a stochastic model incorporating afterstates , and uses this model to perform a stochastic tree search . Stochastic MuZero matched or exceeded the state of the art in a set of canonical single and multi-agent environments , including 2048 and backgammon , while maintaining the superhuman performance of standard MuZero in the game of Go . 1 INTRODUCTION . Constructing plans and executing them is an important feature of human and animal behaviour . In the field of artificial intelligence there has been a great amount of research into adding planning capabilities to intelligent agents . Tree-based planning algorithms have shown a lot of success in a wide variety of environments such as card games ( Moravčík et al. , 2017 ) , board games ( Campbell et al. , 2002 ; Silver et al. , 2016 ) and more recently video games ( Schrittwieser et al. , 2020 ) and continuous control tasks ( Hubert et al. , 2021 ) .
Most tree search methods assume that the agent has access to a perfect simulator of the environment , whereas real-world environments are typically unknown . Model-based reinforcement learning algorithms combine a model-learning component , which estimates the dynamics of the environment , with a planning component , using the learned model as a simulator . However , learning a model in isolation from its use during planning has proven to be problematic in complex environments ( van Hasselt et al. , 2019 ) . Instead , value-equivalent model-learning methods ( Silver et al. , 2017 ; Farahmand et al. , 2017 ; Oh et al. , 2017 ; Grimm et al. , 2020 ) identify a model that reconstructs only those quantities required for planning . The most successful method , MuZero ( Schrittwieser et al. , 2020 ) , learns a model that reconstructs reward , value and policy , and uses this model to perform a powerful Monte Carlo tree search . MuZero achieved superhuman results in Go , chess , shogi and Atari without any prior knowledge of the rules , and has also achieved state-of-the-art performance in large and continuous action spaces ( Hubert et al. , 2021 ) and offline reinforcement learning ( Schrittwieser et al. , 2021 ) . However , value-equivalent methods such as MuZero have in practice been limited to a deterministic class of models , which severely limits their applicability . Many environments are inherently stochastic and may be poorly approximated by a deterministic model . Partially observed environments may also be perceived by the agent as stochastic , whenever aliased states can not be disambiguated . Similarly , large and complex environments may appear stochastic to a small agent with finite capacity . In this paper we introduce the first empirically effective approach for handling stochasticity in value-equivalent model learning and planning .
The model is factored to first transition deterministically from state to an afterstate , and then to branch stochastically from the afterstate to the next state . This factored model is trained end-to-end so as to maintain value equivalence for both state value function and action value function respectively , and a stochastic planning method is applied to the model . We apply these ideas to MuZero , using a discrete generative network to represent the model , and modifying the Monte Carlo tree search to effectively use the factored model . We apply our method , Stochastic MuZero , to several environments in which handling stochasticity is important . First , we consider the popular stochastic puzzle game 2048 , in which the prior state of the art exploits a perfect simulator and significant handcrafted domain knowledge . In our experiments , Stochastic MuZero achieved better results without any domain knowledge . Second , we consider the classic stochastic two-player game of backgammon , in which near-optimal play has been achieved using a perfect simulator . Stochastic MuZero matches this performance without any prior knowledge of the game rules . Finally , we evaluated our method in the deterministic board game of Go . There our method matched the performance of MuZero , demonstrating that Stochastic MuZero extends MuZero without sacrificing performance . 2 RELATED WORK . Observation models ( Oh et al. , 2015 ; Chiappa et al. , 2017 ; Łukasz Kaiser et al. , 2020 ) explicitly learn the dynamics of an environment by fitting a model of observations and rewards to observed transitions . Subsequently , these models can be combined with a model-free learning rule in a Dyna fashion ( Sutton , 1991 ) . 
However , modeling high dimensional image observations can be computationally prohibitive , prone to high error accumulation as the model is unrolled for multiple steps , and limiting since the capacity of the model could be spent on background features which are not helpful for the problem at hand . These issues make such models unconducive for planning . Finally , van Hasselt et al . ( 2019 ) argues that Dyna-based methods are unlikely to outperform model-free approaches that use a replay buffer . Latent models ( Schrittwieser et al. , 2020 ; Oh et al. , 2017 ; Hafner et al. , 2021 ; Henaff et al. , 2017 ) attempt to overcome the limitations of observation models by learning recurrent networks that operate on latent states . In this framework , the model is conditioned on the current observation and future actions and is unrolled for k steps . Subsequently , it is trained to make predictions about rewards , values , policies or observations at each timestep based on the current latent state . The model can then be combined with a tree-based planning algorithm or used to generate synthetic trajectories . Recently , MuZero has shown that it is possible to use this approach to achieve state-of-the-art performance in many challenging domains ( Hubert et al. , 2021 ) while using less data ( Schrittwieser et al. , 2021 ) . However , most approaches , including MuZero , use a deterministic function to model the environment dynamics , which limits their applicability to deterministic or weakly stochastic environments . Stochastic latent models are stochastic models of the environment dynamics that operate on latent states . In ( Hafner et al. 
, 2021 ) the authors propose a recurrent state-space model which consists of three main modules , a recurrent module which generates the deterministic recurrent state ht , a representation model which combines ht with the current observation xt to generate a distribution over stochastic states st and plays the role of the posterior , and a transition predictor which depends only on ht and acts as the prior of the model . By combining the deterministic and stochastic states ht and st the model is trained to predict the current observation ot , the transition reward rt and the discount dt . The next deterministic recurrent state is generated using ht , st and action at . The stochastic states st are modeled as multidimensional multinomial variables . The learned model is then used to generate synthetic data which are used to train an actor-critic model-free agent . The authors show that their approach outperforms pure model-free methods but it fails to achieve the performance of MuZero which combines its learned model with planning . In ( Ozair et al. , 2021 ) the authors learn a stochastic transition model using a VQ-VAE generative network ( van den Oord et al. , 2017 ) and subsequently combine it with MCTS . They show that their method can match the performance of MuZero in chess , while viewing the problem as a singleplayer task and implicitly learning to model the behaviour of the opponent . Despite its promise their approach was only applied in a supervised setting using expert data , and did not address the challenges of learning a stochastic model in the reinforcement learning setting . Moreover , the learned model was trained to explicitly predict the observation at every step , which can be a limiting factor in terms of computation and model efficiency when dealing with high dimensional observations . 
Finally , the authors used a two-stage training process : first , a model learns latent representations of the observations , then these representations are used to learn a transition model . This makes it hard to apply this approach in the reinforcement learning setting . 3 BACKGROUND 3.1 MuZero MuZero is a model-based general reinforcement learning agent which combines a learned model of the environment dynamics with a Monte Carlo tree search planning algorithm . The model is conditioned on the history of observations $o_{\leq t}$ at timestep $t$ and a sequence of future actions $a_{t:t+K}$ , and it is trained to predict the search policies $\pi_{t:t+K}$ , values $v^{\pi}_{t:t+K}$ and intermediate rewards $r_{t:t+K}$ at each future timestep . MuZero uses deterministic functions for its model , and thus it implicitly assumes that the underlying environment dynamics are also deterministic . MuZero uses its dynamics model to plan ahead at each time step and the outcome of its MCTS search to select an action and as targets for its policy improvement operator . Model MuZero ’ s learned model consists of 3 functions : a representation function $h$ , a dynamics function $g$ and a prediction function $f$ . The representation function maps the current history of observations $o_{\leq t}$ into a latent state $s^{0}_{t}$ . The dynamics function $g$ receives the previous latent state $s^{k}_{t}$ and combines it with an action $a_{t+k}$ to produce the next latent state $s^{k+1}_{t}$ and the reward $r^{k}_{t}$ . Finally , the prediction function $f$ receives each latent state $s^{k}_{t}$ as an input and computes the policy $p^{k}_{t}$ and value $v^{k}_{t}$ . Given a sequence of policy $\pi_{t:T}$ , value $z_{t:T}$ , and reward $u_{t:T}$ targets , the model is trained to minimize the loss shown in equation 1 . $$\mathcal{L}^{MuZero} = \sum_{k=0}^{K} l^{p}(\pi_{t+k}, p^{k}_{t}) + \sum_{k=0}^{K} l^{v}(z_{t+k}, v^{k}_{t}) + \sum_{k=1}^{K} l^{r}(u_{t+k}, r^{k}_{t}) \quad (1)$$ The policy targets $\pi_{t+k}$ correspond to the MCTS policy that was generated when searching from observation $o_{\leq t+k}$ .
The value targets $z_{t+k}$ are computed using n-step returns ( Sutton & Barto , 2018 ) . Finally , the reward targets $u_{t+k}$ correspond to the real instantaneous rewards observed when this sequence was generated . Search MuZero uses a variant of the MCTS tree-based algorithm first proposed in ( Silver et al. , 2018 ) . The tree is constructed recursively through a number of simulations . Each simulation consists of 3 phases : selection , expansion and backpropagation . During the selection phase the tree is traversed starting from the root node until a leaf edge is reached . At each internal node $s$ the algorithm selects the action $a$ which maximizes the upper confidence bound proposed in ( Silver et al. , 2016 ) and shown in equation 2 . $$a = \arg\max_{a} \left[ Q(s,a) + P(a \mid s) \cdot \frac{\sqrt{\sum_{b} N(s,b)}}{1 + N(s,a)} \left( \alpha_1 + \log \left( \frac{\sum_{b} N(s,b) + \alpha_2 + 1}{\alpha_2} \right) \right) \right] \quad (2)$$ Here , $Q(s,a)$ is the value estimate for action $a$ , $N(s,a)$ the visit count , $P(a \mid s)$ the prior probability of selecting action $a$ , and $\alpha_1 , \alpha_2$ are constants which control the relative importance of the $Q(s,\cdot)$ estimates and prior probabilities $P(\cdot \mid s)$ . In the next phase , expansion , the leaf edge is expanded by querying the MuZero model and a new node is added to the tree . Finally , during the backpropagation phase the value estimate of the newly added edge is backpropagated up the tree using the n-step return estimate . | The paper proposes an extension of MuZero to stochastic environments. The stochasticity of the environment is handled by using afterstates, as_t, and chance outcomes, c_t. This decomposes the modelling of the stochastic environment dynamics into a deterministic model s_{t+1}, r_{t+1} = M(as_t, c_t) and modelling chance outcomes p(c_t | as_t). The chance outcomes are modeled as a discrete categorical variable (1 of M) and learned using a VQ-VAE-like setup.
The paper shows how the proposed model achieves ~SOTA on two stochastic environments: 2048 and backgammon, and retains SOTA performance on a single non-stochastic environment, Go, although using twice the computational budget. | SP:0e4c5480365af54778341e8a492b55da1eea43ef
Collaborate to Defend Against Adversarial Attacks | Adversarially robust learning methods require invariant predictions to a small neighborhood of their natural inputs , thus often encountering insufficient model capacity . Learning multiple sub-models in an ensemble can mitigate this insufficiency , further improving both generalization and robustness . However , an ensemble still wastes the limited capacity of multiple models . To optimally utilize the limited capacity , this paper proposes to learn a collaboration among multiple sub-models . Compared with the ensemble , the collaboration enables the possibility of correct predictions even if there exists a single correct sub-model . Besides , learning a collaboration could enable every sub-model to fit its vulnerability area and reserve the rest of the sub-models to fit other vulnerability areas . To implement the idea , we propose a collaboration framework—CDA2 , the abbreviation for Collaborate to Defend against Adversarial Attacks . CDA2 could effectively minimize the vulnerability overlap of all sub-models and then choose a representative sub-model to make correct predictions . Empirical experiments verify that CDA2 outperforms various ensemble methods against black-box and white-box adversarial attacks . 1 INTRODUCTION . Safety-critical applications ( such as in medicine and finance ) require the adversarial robustness of deep models ( Goodfellow et al. , 2015 ; Szegedy et al. , 2014 ) . An adversarially robust learning method requires invariant predictions to a small neighborhood of its natural inputs , thus often encountering insufficient model capacity ( Zhang et al. , 2021 ; Yu et al. , 2021a ) . This limits the further improvement of robustness and causes an undesirable degradation of generalization ( Madry et al. , 2018 ) . Learning multiple sub-models in an ensemble ( Breiman , 1996 ; Freund et al. , 1996 ) can mitigate this insufficiency ( Pang et al. , 2019 ; Kariyappa & Qureshi , 2019 ; Yang et al.
, 2020a ) . Remarkably , Pang et al . ( 2019 ) , Kariyappa & Qureshi ( 2019 ) and Yang et al . ( 2020a ) minimized the vulnerability overlaps between each pair of sub-models and improved both robustness and generalization over a single model . However , an ensemble wastes the limited capacity of multiple models . In the example of three sub-models ( see Figure 1 ( b ) ) , the adversarial input that lies in the black areas can fool the ensemble successfully , i.e. , the ensemble makes a correct prediction only if more than half of the sub-models correctly classify the adversarial input . Therefore , the ensemble ’ s voting-based strategy excludes the possibility that true predictions remain with the minority . Besides , learning an ensemble requires more than half of the sub-models to fit the same vulnerability areas , which leaves the following question unanswered : whether we could only leverage a single sub-model to fit a vulnerability area and reserve the rest of the sub-models to fit other vulnerability areas . To optimally utilize the limited capacity , this paper proposes to learn a collaboration among multiple sub-models . As shown in Figure 1 ( c ) , the adversarial input that lies in the vulnerability overlaps of all sub-models can undoubtedly fool the collaboration . Compared with the ensemble in Figure 1 ( b ) , collaboration enables the possibility of correct predictions even if there merely exists a single correct sub-model . Besides , learning a collaboration could enable every sub-model to fit its vulnerability area , which could collectively fix broader vulnerability areas than the ensemble does . Then , sub-models could collaboratively choose trustworthy ones to make the final predictions . To realize the idea , we propose a collaboration framework—Collaborate to Defend against Adversarial Attacks ( CDA2 ) ( Algorithms 1 and 2 ) .
In CDA2 , each sub-model has dual heads : one outputs a vector of predicted probability $f_{\theta}(\cdot)$ ; another outputs a scalar that measures the posterior probability density ( PPD ) of the prediction . In the training phase , given a natural or adversarial input x , each sub-model chooses an easy one ( s ) to feed itself . The PPD head is meanwhile updated by comparing the predicted probability on the true label , $f^{y}_{\theta}(\cdot)$ ( a scalar ) . In the inference phase , given an input , CDA2 chooses a sub-model with the largest PPD value as the representative to output the prediction . We highlight our key contributions as follows . • We provide a new perspective on learning multiple sub-models for defending against adversarial attacks . We theoretically show the collaboration makes better decisions than the ensemble , which implies collaboration may fix broader vulnerability areas . • We propose a novel collaboration framework—CDA2 ( see Section 3.2 ) . In the training phase , CDA2 could effectively minimize the vulnerability overlap of all sub-models ; in the inference phase , CDA2 could effectively choose a representative sub-model to make correct predictions . We also provide a comprehensive analysis illustrating the rationale of CDA2 . • Empirical experiments verify that CDA2 outperforms various ensemble methods against black-box and white-box adversarial attacks . 2 RELATED WORKS . Adversarial attack Adversarial attacks aim to craft human-imperceptible adversarial inputs to fool deep models . Adversarial attacks could be roughly divided into white-box attacks in which the adversary is fully aware of the model ’ s structures ( Goodfellow et al. , 2015 ; Moosavi-Dezfooli et al. , 2016 ; Carlini & Wagner , 2017b ; Chen et al. , 2018 ; Athalye et al. , 2018 ; Xiao et al. , 2018 ; Zheng et al. , 2019 ; Wong et al. , 2019 ; Mopuri et al. , 2019 ; Alaifari et al. , 2019 ; Sriramanan et al. , 2020 ; Wu et al. , 2020b ; Croce & Hein , 2020 ; Yu et al.
, 2021b ) and black-box attacks in which the deep models are treated as black boxes to the adversary ( Cheng et al. , 2019 ; 2020 ; Wu et al. , 2020a ; Chen et al. , 2020a ; Li et al. , 2020a ; Rahmati et al. , 2020 ; Yan et al. , 2021b ; Hendrycks et al. , 2021 ; Dong et al. , 2018 ; Xie et al. , 2019 ) . This paper focuses on building an effective defense and selects both white-box and black-box attack methods as our robustness evaluation metrics . Adversarial defense Defending against adversarial attacks is a challenging task and researchers have proposed various solutions . Certified defense tries to learn provably robust deep models against norm-bounded ( e.g. , $\ell_2$ and $\ell_\infty$ ) perturbations ( Wong & Kolter , 2018 ; Tsuzuku et al. , 2018 ; Weng et al. , 2018 ; Mirman et al. , 2018 ; Hein & Andriushchenko , 2017 ; Lécuyer et al. , 2019 ; Xiao et al. , 2019 ; Cohen et al. , 2019 ; Balunovic & Vechev , 2020a ; Zhang et al. , 2020a ; Singla & Feizi , 2020 ; Balunovic & Vechev , 2020b ; Zou et al. , 2021 ) . Empirical defense leverages adversarial data to build an effective defense such as adversary detection ( Metzen et al. , 2017 ; Li & Li , 2017 ; Carlini & Wagner , 2017a ; Tian et al. , 2018 ; Ma et al. , 2018b ; Lee et al. , 2018 ; Pang et al. , 2018 ; Smith & Gal , 2018 ; Roth et al. , 2019 ; Liu et al. , 2019 ; Yin & Rohde , 2020 ; Sperl et al. , 2020 ; Cohen et al. , 2020 ; Sheikholeslami et al. , 2021 ; Chen et al. , 2021a ; Yang et al. , 2020b ; Qin et al. , 2020 ; Tian et al. , 2021 ; Wu et al. , 2021 ) and adversarial training ( AT ) , of which AT stands out as the most effective defense . Researchers have investigated various aspects of AT , such as improving AT ’ s robustness or generalization ( Madry et al. , 2018 ; Yan et al. , 2018 ; Wu et al. , 2018 ; Cai et al. , 2018 ; Najafi et al. , 2019 ; Alayrac et al. , 2019 ; Carmon et al. , 2019 ; Farnia et al. , 2019 ; Song et al. , 2019 ; Zhang et al. , 2019b ; Wang et al.
, 2019 ; Tramèr & Boneh , 2019 ; Zhang & Wang , 2019 ; Stutz et al. , 2020 ; Pang et al. , 2020 ; Gan et al. , 2020 ; Dong et al. , 2020 ; Zhang et al. , 2020b ; Chen et al. , 2020b ; Song et al. , 2020 ; Ding et al. , 2020 ; Wang et al. , 2020b ; Zhang et al. , 2021 ) , fixing AT ’ s undesirable robust overfitting ( Rice et al. , 2020 ; Chen et al. , 2021b ) , improving AT ’ s training efficiency ( Zhang et al. , 2019a ; Shafahi et al. , 2019 ; Zheng et al. , 2020 ; B.S . & Babu , 2020 ; Andriushchenko & Flammarion , 2020 ; Wong et al. , 2020 ) , understanding/interpreting AT ’ s unique traits ( Nakkiran , 2019 ; Yin et al. , 2019 ; Gao et al. , 2019 ; Cranko et al. , 2019 ; Zhang et al. , 2019c ; Liu et al. , 2020 ; Roth et al. , 2020 ; Wang et al. , 2020a ; Zhang et al. , 2020c ; Li et al. , 2020b ; Zou et al. , 2021 ; Mehrabi et al. , 2021 ; Xu et al. , 2021 ) , etc . Besides , researchers have also actively investigated robust-structured models ( Cisse et al. , 2017 ; Xie et al. , 2020 ; Moosavi-Dezfooli et al. , 2019 ; Xie & Yuille , 2020 ; Yan et al. , 2021a ; Du et al. , 2021 ; Pang et al. , 2021 ) . Nevertheless , the above research thoroughly investigated a single model ; this paper focuses on the collaboration among multiple models for adversarial defense . Ensemble methods for adversarial robustness The most relevant studies are the ensemble methods . Ensemble methods such as bagging ( Breiman , 1996 ) and boosting ( Freund et al. , 1996 ) have been investigated for significantly improving the model ’ s generalization . Motivated by the benefits of ensemble methods in improving generalization , researchers introduced an ensemble to improve the model robustness ( Yang et al. , 2020a ; Kariyappa & Qureshi , 2019 ; Pang et al. , 2019 ; Tramèr et al. , 2018 ) . Tramèr et al . ( 2018 ) proposed to reduce the adversarial transferability by training a single model with adversarial examples from multiple pretrained sub-models . Pang et al .
(2019) introduced a regularization method—ADP—to encourage high diversity in the non-maximal predictions of sub-models. Kariyappa & Qureshi (2019) improved ensemble diversity by maximizing their proposed cosine distance between the gradients of sub-models with respect to the input. Yang et al. (2020a) proposed to distill non-robust features in the input and diversify the adversarial vulnerability. These methods reduce the overlaps of vulnerability areas between sub-models (Yang et al., 2020a). To further improve ensembles, mixture-of-experts (MOE) assumes that the problem space can be divided into multiple sub-problems through a gate module; the gate module assigns each sub-model to a specific sub-problem (Jacobs et al., 1991; Ma et al., 2018a). Nevertheless, to the best of our knowledge, MOE-based methods have not been applied to improve adversarial robustness. Inspired by MOE, we propose the collaboration framework to defend against adversarial attacks. | This paper first analyzes prior adversarial defense methods that use an ensemble strategy and claims that such methods can waste model capacity. To improve the utilization of (multiple) model capacity, the authors propose an interesting collaboration strategy, $CDA^2$, to defend against adversarial attacks. Specifically, the authors develop a dual-head model structure: one head makes a prediction and the other predicts the posterior probability of the input. During training, each model can address the adversarial attacks of the other sub-models, which improves the robustness of the collaboration. The experimental results partially verify the superiority of the proposed methods. | SP:ded3b9c368b22beec3fb0ea5361dab09fd2b48d0
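The gradient-diversity idea discussed above (Kariyappa & Qureshi, 2019) can be sketched as a regularizer that penalizes high cosine similarity between the sub-models' input-gradients. This is an illustrative sketch only, not the authors' code: the tiny linear "sub-models", shapes, and data below are placeholders.

```python
import torch
import torch.nn.functional as F

def gradient_diversity_penalty(models, x, y):
    """Average pairwise cosine similarity between input-gradients of the
    sub-models' losses; adding this to the training loss discourages
    sub-models from sharing the same adversarial directions."""
    grads = []
    for m in models:
        x_ = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(m(x_), y)
        g, = torch.autograd.grad(loss, x_)
        grads.append(g.flatten(1))  # one gradient vector per example
    penalty, n_pairs = 0.0, 0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
            n_pairs += 1
    return penalty / n_pairs

# Toy usage: two tiny linear "sub-models" on random data (placeholders).
torch.manual_seed(0)
models = [torch.nn.Linear(4, 3) for _ in range(2)]
x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))
p = gradient_diversity_penalty(models, x, y)
print(p.item())  # a scalar in [-1, 1]
```

In practice such a penalty would be weighted and added to the sum of the sub-models' classification losses.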
Collaborate to Defend Against Adversarial Attacks | Adversarially robust learning methods require predictions that are invariant within a small neighborhood of the natural inputs, and thus often encounter insufficient model capacity. Learning multiple sub-models in an ensemble can mitigate this insufficiency, further improving both generalization and robustness. However, an ensemble still wastes the limited capacity of multiple models. To optimally utilize the limited capacity, this paper proposes to learn a collaboration among multiple sub-models. Compared with the ensemble, the collaboration enables the possibility of correct predictions even if there exists only a single correct sub-model. Besides, learning a collaboration could enable every sub-model to fit its own vulnerability area and reserve the rest of the sub-models to fit other vulnerability areas. To implement the idea, we propose a collaboration framework—CDA2, the abbreviation for Collaborate to Defend against Adversarial Attacks. CDA2 can effectively minimize the vulnerability overlap of all sub-models and then choose a representative sub-model to make correct predictions. Empirical experiments verify that CDA2 outperforms various ensemble methods against black-box and white-box adversarial attacks. 1 INTRODUCTION. Safety-critical applications (such as in medicine and finance) require the adversarial robustness of deep models (Goodfellow et al., 2015; Szegedy et al., 2014). An adversarially robust learning method requires predictions that are invariant within a small neighborhood of the natural inputs, and thus often encounters insufficient model capacity (Zhang et al., 2021; Yu et al., 2021a). This limits further improvement of robustness and causes undesirable degradation of generalization (Madry et al., 2018). Learning multiple sub-models in an ensemble (Breiman, 1996; Freund et al., 1996) can mitigate this insufficiency (Pang et al., 2019; Kariyappa & Qureshi, 2019; Yang et al.
, 2020a). Remarkably, Pang et al. (2019), Kariyappa & Qureshi (2019) and Yang et al. (2020a) minimized the vulnerability overlaps between each pair of sub-models and improved both robustness and generalization over a single model. However, an ensemble wastes the limited capacity of multiple models. In the example of three sub-models (see Figure 1(b)), an adversarial input that lies in the black areas can fool the ensemble successfully, i.e., the ensemble makes a correct prediction only if more than half of the sub-models correctly classify the adversarial input. Therefore, the ensemble's voting-based strategy excludes the possibility that true predictions remain with the minority. Besides, learning an ensemble requires more than half of the sub-models to fit the same vulnerability areas, which leaves the following question unanswered: could we leverage only a single sub-model to fit a vulnerability area and reserve the rest of the sub-models to fit other vulnerability areas? To optimally utilize the limited capacity, this paper proposes to learn a collaboration among multiple sub-models. As shown in Figure 1(c), an adversarial input that lies in the vulnerability overlap of all sub-models can undoubtedly fool the collaboration. Compared with the ensemble in Figure 1(b), the collaboration enables the possibility of correct predictions even if merely a single correct sub-model exists. Besides, learning a collaboration could enable every sub-model to fit its own vulnerability area, which could collectively fix broader vulnerability areas than the ensemble does. Then, sub-models could collaboratively choose trustworthy ones to make the final predictions. To realize the idea, we propose a collaboration framework—Collaborate to Defend against Adversarial Attacks (CDA2) (Algorithms 1 and 2).
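The contrast between majority voting and representative selection can be made concrete with a toy numeric example. All numbers below are hypothetical, and the per-model confidence score merely stands in for a selection mechanism like the paper's PPD head:

```python
import numpy as np

# Three sub-models classify one adversarial input; two are fooled, one is correct.
probs = np.array([
    [0.9, 0.1],   # sub-model 0: fooled, predicts class 0
    [0.8, 0.2],   # sub-model 1: fooled, predicts class 0
    [0.2, 0.8],   # sub-model 2: correct, predicts class 1
])
true_label = 1

# Ensemble: average the probability vectors and take the argmax.
# The two fooled sub-models outvote the single correct one.
ensemble_pred = probs.mean(axis=0).argmax()

# Collaboration: a per-model confidence score (hypothetical values)
# selects one representative sub-model to answer.
confidence = np.array([0.3, 0.4, 0.9])
collab_pred = probs[confidence.argmax()].argmax()

print(ensemble_pred)  # 0 -> wrong
print(collab_pred)    # 1 -> correct
```

The example shows why selection can succeed where voting fails: the collaboration is only fooled when every sub-model with high confidence is wrong, rather than when a majority is wrong.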
In CDA2, each sub-model has dual heads: one outputs a vector of predicted probabilities fθ(·); the other outputs a scalar that measures the posterior probability density (PPD) of the prediction. In the training phase, given a natural or adversarial input x, each sub-model chooses the easy input(s) to feed itself. The PPD head is meanwhile updated by comparing against the predicted probability on the true label—fyθ(·) (a scalar). In the inference phase, given an input, CDA2 chooses the sub-model with the largest PPD value as the representative to output the prediction. We highlight our key contributions as follows. • We provide a new perspective on learning multiple sub-models for defending against adversarial attacks. We theoretically show that the collaboration makes better decisions than the ensemble, which implies the collaboration may fix broader vulnerability areas. • We propose a novel collaboration framework—CDA2 (see Section 3.2). In the training phase, CDA2 can effectively minimize the vulnerability overlap of all sub-models; in the inference phase, CDA2 can effectively choose a representative sub-model to make correct predictions. We also provide a comprehensive analysis illustrating the rationale of CDA2. • Empirical experiments verify that CDA2 outperforms various ensemble methods against black-box and white-box adversarial attacks. 2 RELATED WORKS. Adversarial attack Adversarial attacks aim to craft human-imperceptible adversarial inputs to fool deep models. Adversarial attacks can be roughly divided into white-box attacks, in which the adversary is fully aware of the model's structure (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017b; Chen et al., 2018; Athalye et al., 2018; Xiao et al., 2018; Zheng et al., 2019; Wong et al., 2019; Mopuri et al., 2019; Alaifari et al., 2019; Sriramanan et al., 2020; Wu et al., 2020b; Croce & Hein, 2020; Yu et al.
, 2021b) and black-box attacks, in which the deep models are treated as black boxes by the adversary (Cheng et al., 2019; 2020; Wu et al., 2020a; Chen et al., 2020a; Li et al., 2020a; Rahmati et al., 2020; Yan et al., 2021b; Hendrycks et al., 2021; Dong et al., 2018; Xie et al., 2019). This paper focuses on building an effective defense and selects both white-box and black-box attack methods as our robustness evaluation metrics. Adversarial defense Defending against adversarial attacks is a challenging task, and researchers have proposed various solutions. Certified defense tries to learn provably robust deep models against norm-bounded (e.g., ℓ2 and ℓ∞) perturbations (Wong & Kolter, 2018; Tsuzuku et al., 2018; Weng et al., 2018; Mirman et al., 2018; Hein & Andriushchenko, 2017; Lécuyer et al., 2019; Xiao et al., 2019; Cohen et al., 2019; Balunovic & Vechev, 2020a; Zhang et al., 2020a; Singla & Feizi, 2020; Balunovic & Vechev, 2020b; Zou et al., 2021). Empirical defense leverages adversarial data to build effective defenses such as adversary detection (Metzen et al., 2017; Li & Li, 2017; Carlini & Wagner, 2017a; Tian et al., 2018; Ma et al., 2018b; Lee et al., 2018; Pang et al., 2018; Smith & Gal, 2018; Roth et al., 2019; Liu et al., 2019; Yin & Rohde, 2020; Sperl et al., 2020; Cohen et al., 2020; Sheikholeslami et al., 2021; Chen et al., 2021a; Yang et al., 2020b; Qin et al., 2020; Tian et al., 2021; Wu et al., 2021) and adversarial training (AT), among which AT stands out as the most effective defense. Researchers have investigated various aspects of AT, such as improving AT's robustness or generalization (Madry et al., 2018; Yan et al., 2018; Wu et al., 2018; Cai et al., 2018; Najafi et al., 2019; Alayrac et al., 2019; Carmon et al., 2019; Farnia et al., 2019; Song et al., 2019; Zhang et al., 2019b; Wang et al.
, 2019; Tramèr & Boneh, 2019; Zhang & Wang, 2019; Stutz et al., 2020; Pang et al., 2020; Gan et al., 2020; Dong et al., 2020; Zhang et al., 2020b; Chen et al., 2020b; Song et al., 2020; Ding et al., 2020; Wang et al., 2020b; Zhang et al., 2021), fixing AT's undesirable robust overfitting (Rice et al., 2020; Chen et al., 2021b), improving AT's training efficiency (Zhang et al., 2019a; Shafahi et al., 2019; Zheng et al., 2020; B.S. & Babu, 2020; Andriushchenko & Flammarion, 2020; Wong et al., 2020), understanding/interpreting AT's unique traits (Nakkiran, 2019; Yin et al., 2019; Gao et al., 2019; Cranko et al., 2019; Zhang et al., 2019c; Liu et al., 2020; Roth et al., 2020; Wang et al., 2020a; Zhang et al., 2020c; Li et al., 2020b; Zou et al., 2021; Mehrabi et al., 2021; Xu et al., 2021), etc. Besides, researchers have also actively investigated robust-structured models (Cisse et al., 2017; Xie et al., 2020; Moosavi-Dezfooli et al., 2019; Xie & Yuille, 2020; Yan et al., 2021a; Du et al., 2021; Pang et al., 2021). Nevertheless, the above research investigates the robustness of a single model; this paper focuses on the collaboration among multiple models for adversarial defense. Ensemble methods for adversarial robustness The most relevant studies are the ensemble methods. Ensemble methods such as bagging (Breiman, 1996) and boosting (Freund et al., 1996) have been investigated for significantly improving a model's generalization. Motivated by the benefits of ensemble methods in improving generalization, researchers introduced ensembles to improve model robustness (Yang et al., 2020a; Kariyappa & Qureshi, 2019; Pang et al., 2019; Tramèr et al., 2018). Tramèr et al. (2018) proposed to reduce adversarial transferability by training a single model with adversarial examples from multiple pretrained sub-models. Pang et al.
(2019) introduced a regularization method—ADP—to encourage high diversity in the non-maximal predictions of sub-models. Kariyappa & Qureshi (2019) improved ensemble diversity by maximizing their proposed cosine distance between the gradients of sub-models with respect to the input. Yang et al. (2020a) proposed to distill non-robust features in the input and diversify the adversarial vulnerability. These methods reduce the overlaps of vulnerability areas between sub-models (Yang et al., 2020a). To further improve ensembles, mixture-of-experts (MOE) assumes that the problem space can be divided into multiple sub-problems through a gate module; the gate module assigns each sub-model to a specific sub-problem (Jacobs et al., 1991; Ma et al., 2018a). Nevertheless, to the best of our knowledge, MOE-based methods have not been applied to improve adversarial robustness. Inspired by MOE, we propose the collaboration framework to defend against adversarial attacks. | This paper presents a new paradigm for defending against adversarial attacks with multiple sub-models. Different from an ensemble, in the proposed collaboration paradigm a representative sub-model is chosen to make the decision, instead of letting all sub-models vote. The proposed method has been validated on the CIFAR-10 dataset against both white-box and transferability-based black-box attacks. | SP:ded3b9c368b22beec3fb0ea5361dab09fd2b48d0
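The mixture-of-experts gating mechanism described above can be sketched as follows. This is a minimal illustration of softmax gating in the spirit of Jacobs et al. (1991), not the authors' method: the gate and expert parameters are random placeholders, and each "expert" is just a linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

d, k, n_experts = 4, 3, 2
x = rng.normal(size=(5, d))                     # batch of 5 inputs
W_gate = rng.normal(size=(d, n_experts))        # gate parameters (assumed)
W_experts = rng.normal(size=(n_experts, d, k))  # one linear expert each

gate = softmax(x @ W_gate)                          # (5, n_experts) mixing weights
expert_out = np.einsum('bd,edk->bek', x, W_experts) # each expert's logits
mixed = np.einsum('be,bek->bk', gate, expert_out)   # gate-weighted combination
probs = softmax(mixed)

print(probs.shape)        # (5, 3)
print(probs.sum(axis=1))  # each row sums to 1
```

The gate learns to route each input toward the expert(s) that handle its sub-problem best, which is the specialization property the collaboration framework takes inspiration from.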