| paper_name | text | summary | paper_id |
|---|---|---|---|
Boundary-aware Pre-training for Video Scene Segmentation | 1 INTRODUCTION. Video scene segmentation is the task of identifying scene boundaries in a video, where a scene is defined as a semantic unit of a story and is composed of a series of semantically cohesive shots—each shot being a set of frames captured by the same camera during an uninterrupted period of time—in the same context. Localizing scene boundaries is a significant step towards high-level video understanding, because dividing a long video into a set of meaningful scenes enables models to inspect individual incidents within a complex story. One of the biggest challenges with this temporal semantic segmentation is that it cannot be achieved simply by detecting changes in visual cues. In Figure 1(a), we present an example of nine shots, all of which belong to the same scene, where two characters are talking on the phone. The overall visual cues within the scene do not stay the same but rather change repeatedly as each character appears. On the other hand, as presented in Figure 1(b), the other example shows two different scenes which contain visually similar shots (highlighted in blue) where the same character appears in the same place. Therefore, two adjacent scenes that share shots with similar visual cues need to be contextually discriminated. From this observation, it is important for the video scene segmentation task to model the contextual relations between shots by maximizing 1) intra-scene similarity (i.e., shots belonging to the same scene should be close to each other), and 2) inter-scene discrimination across two adjacent scenes (i.e., two neighboring shots across a scene boundary should be distinguishable). Supervised learning approaches (e.g., Rao et al. (2020)) are clearly limited due to the lack of large-scale datasets with reliable ground-truth annotations. Recently, self-supervision (Chen et al.
, 2020a; Caron et al., 2020; He et al., 2020; Roh et al., 2021) has been spotlighted for its effectiveness in learning in-domain representations without relying on costly ground-truth annotations. Self-supervised learning methods (Chen et al., 2021; Feichtenhofer et al., 2021; Dave et al., 2021; Qian et al., 2021) in the video domain are often designed to learn spatio-temporal patterns in short clips (e.g., shots in movies). This kind of learned representation is generic and can be applied to many video understanding tasks (e.g., action classification). However, such representations are not sufficient for video scene segmentation, because this task requires not only a good representation of individual shots but also a contextual representation that considers neighboring shots at a higher level, as illustrated in Figure 1. Motivated by this, our main goal is to design effective self-supervised objectives (i.e., pretext tasks) that maximize intra-scene similarity as well as discriminate shots from different scenes. This raises a penetrating question: how can we design boundary-relevant pretext tasks without access to ground-truth boundary annotations? We introduce a novel Boundary-aware Self-Supervised Learning (BaSSL) framework. The main idea of BaSSL is to localize a pseudo-boundary, obtained by dividing the input sequence of shots into two semantically disjoint sub-sequences, and to use it to define pretext tasks that are beneficial to the video scene segmentation task. On top of the two discovered sub-sequences and the pseudo-boundary, three boundary-aware pretext tasks are proposed: 1) Shot-Scene Matching (SSM); 2) Contextual Group Matching (CGM); and 3) Pseudo-boundary Prediction (PP). SSM and CGM encourage the model to maximize intra-scene similarity and inter-scene discrimination, while PP enables the model to learn the capability of identifying transitional moments.
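As an illustration of the pseudo-boundary idea, the sketch below splits a window of shot features at the point that minimizes within-group scatter. This is a hypothetical simplification of, not a reproduction of, the paper's DTW-based discovery procedure; `find_pseudo_boundary` and the synthetic features are illustrative assumptions.

```python
import numpy as np

def find_pseudo_boundary(shot_feats):
    """Split a shot window into two contiguous, maximally coherent groups.
    A simplified stand-in for BaSSL's DTW-based discovery: for each
    candidate split b, score the within-group scatter around each group
    centroid and keep the split with the smallest total scatter."""
    n = len(shot_feats)
    best_b, best_cost = 1, np.inf
    for b in range(1, n):                      # boundary placed after shot b-1
        left, right = shot_feats[:b], shot_feats[b:]
        cost = (((left - left.mean(0)) ** 2).sum()
                + ((right - right.mean(0)) ** 2).sum())
        if cost < best_cost:
            best_b, best_cost = b, cost
    return best_b

# Two synthetic "scenes": shots clustered around different feature centers.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (6, 8)),
                        rng.normal(3.0, 0.1, (5, 8))])
print(find_pseudo_boundary(feats))  # → 6
```

The recovered split index can then serve as the self-supervision signal for the PP task, with the two sub-sequences feeding SSM and CGM.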
In addition, we perform a Masked Shot Modeling (MSM) task, inspired by Sun et al. (2019a), to further learn temporal relationships between shots. A comprehensive analysis demonstrates the effectiveness of the proposed framework (i.e., pre-training of contextual relationships between shots) as well as the contribution of its individual components (i.e., the algorithm for pseudo-boundary discovery and the boundary-aware pretext tasks). Our main contributions are summarized as follows: (i) we introduce a novel boundary-aware pre-training framework which adopts the dynamic time warping (DTW) algorithm to identify pseudo-boundaries and uses them as self-supervision to facilitate pre-training; (ii) we propose three boundary-aware pretext tasks, carefully designed to learn essential capabilities required for the video scene segmentation task; (iii) we perform extensive ablations to demonstrate the effectiveness of the proposed framework, including the observation that our framework is complementary to existing frameworks; (iv) we achieve a new state-of-the-art on the MovieNet-SSeg benchmark while outperforming existing self-supervised learning-based methods by large margins. 2 RELATED WORK. Video scene segmentation approaches formulate the task as a problem of temporal grouping of shots. In this formulation, the optimal grouping can be achieved by clustering-based (Rui et al., 1998; Rasheed & Shah, 2003; 2005; Chasanis et al., 2008), dynamic programming-based (Han & Wu, 2011; Tapaswi et al., 2014; Rotman et al., 2017) or multi-modal input-based (Liang et al., 2009; Sidiropoulos et al., 2011) methods. However, the aforementioned methods have been trained and evaluated on small-scale datasets such as OVSD (Rotman et al., 2016) and BBC (Baraldi et al., 2015), which can produce poorly generalized models. Recently, Huang et al. (2020) introduce a large-scale video scene segmentation dataset (i.e.
, MovieNet-SSeg) that contains hundreds of movies. Training with large-scale data, Rao et al. (2020) propose a strong supervised baseline model that performs shot-level binary classification followed by grouping using the prediction scores. In addition, Chen et al. (2021) propose a shot contrastive pre-training method that learns shot-level representations. We found ShotCoL (Chen et al., 2021) to be the work most similar to our method. However, our method differs from ShotCoL in that we specifically focus on learning contextual representations by considering the relationships between shots. We refer interested readers to the supplementary material for a more detailed analysis. Action segmentation in videos is related to video scene segmentation: it identifies action labels of individual frames and thus divides a video into a series of action segments. Supervised methods (Lea et al., 2016; Farha & Gall, 2019) proposed CNN-based architectures to effectively capture temporal relationships between frames in order to address the over-segmentation issue. As frame-level annotations are prohibitively costly to acquire, weakly supervised methods (Chang et al., 2019; Li et al., 2019; Li & Todorovic, 2020; Souri et al., 2021; Shen et al., 2021; Zhukov et al., 2019; Fried et al., 2020) have been proposed that use an ordered list of actions occurring in a video as supervision. Most of these methods are trained to find (temporal) semantic alignment between frames and a given action list using an HMM-based architecture (Kuehne et al., 2018), a dynamic programming-based assignment algorithm (Fried et al., 2020) or a DTW-based temporal alignment method (Chang et al., 2019). Recently, unsupervised methods (Kumar et al., 2021; Wang et al., 2021; Kukleva et al., 2019; Li & Todorovic, 2021; VidalMata et al.
, 2021) have also been proposed; in a nutshell, clustering-based prototypes are discovered from unlabeled videos, and the videos are then segmented by assigning prototypes (each corresponding to one of the actions) to frames. In contrast to the action segmentation task, which is limited to localizing segments each representing a single action within an activity, video scene segmentation requires localizing more complex segments, each of which may be composed of two or more actions (or activities). Self-supervised learning in videos has been actively studied in recent years, with approaches proposing various pretext tasks such as future frame prediction (Srivastava et al., 2015; Vondrick et al., 2016; Ahsan et al., 2018), temporal ordering of frames (Misra et al., 2016; Lee et al., 2017; Xu et al., 2019), geometric transformation prediction (Jing & Tian, 2018), colorization of videos (Vondrick et al., 2018) and contrastive prediction (Feichtenhofer et al., 2021; Qian et al., 2021; Dave et al., 2021). In addition, CBT (Sun et al., 2019a;b) proposes a pretext task of masked frame modeling to learn temporal dependencies between frames (or clips). Since most of these methods are designed for classification tasks, they are likely sub-optimal for the video scene segmentation task. On the other hand, BSP (Xu et al., 2020) proposes boundary-sensitive pretext tasks based on synthesized pseudo-boundaries obtained by concatenating two clips sampled from different videos. However, strictly speaking, BSP is not a self-supervised learning algorithm, since it requires video-level class labels to synthesize pseudo-boundaries; its pretext tasks are not applicable to videos, such as movies, for which semantic labels are hard to define. We also empirically show that pseudo-boundaries identified by our method are more effective for pre-training than synthesized ones.
3 BOUNDARY-AWARE SELF-SUPERVISED LEARNING (BASSL). In this section, we introduce our proposed approach, Boundary-aware Self-Supervised Learning (BaSSL). We start with the problem formulation, followed by the model overview. Then, we describe our novel boundary-aware pretext tasks for pre-training. 3.1 PROBLEM FORMULATION. Terminologies. A video (e.g., a documentary, TV episode or movie) is a sequence of scenes, each defined as a semantic unit of a story. A scene is a series of shots, where a shot is a set of frames physically captured by the same camera during an uninterrupted period of time. Problem Definition. Given a video containing a series of N shots {s_1, ..., s_N} with class labels {y_1, ..., y_N}, where y_i ∈ {0, 1} indicates whether shot i is at a scene boundary (more precisely, whether it is the last shot of a scene), the video scene segmentation task is formulated as a simple binary classification problem at the individual shot level. By definition, a scene boundary is where the semantics of a shot differ considerably from those of its (one-way) neighbors. Thus, it is inherently important to capture and leverage the contextual transition across scenes. Consequently, it is common practice to leverage the information of neighboring shots when determining scene boundaries. With this formulation, existing supervised learning approaches typically train a model with parameters θ by maximizing the expected log-likelihood: θ* = arg max_θ E[log p_θ(y_n | S_n)], (1) where S_n = {s_{n−K}, ..., s_n, ..., s_{n+K}} is a set of 2K + 1 shots centered at the n-th shot s_n, and K is the number of neighboring shots before and after s_n. Note that each shot s is given by a set of N_k keyframes, resulting in a tensor of size (N_k, C, H, W), where C, H and W are the RGB channels, the frame height and the frame width, respectively.
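To make the formulation concrete, here is a small sketch of building the contextual inputs S_n from precomputed shot features. The helper name and the edge-replication padding at the sequence ends are assumptions for illustration; the paper does not specify how boundary shots are padded.

```python
import numpy as np

def build_shot_windows(shots, K):
    """Build the contextual input S_n = {s_{n-K}, ..., s_n, ..., s_{n+K}}
    for every shot n, padding at the sequence ends by edge replication."""
    N = len(shots)
    padded = np.concatenate([np.repeat(shots[:1], K, axis=0),
                             shots,
                             np.repeat(shots[-1:], K, axis=0)], axis=0)
    return np.stack([padded[n:n + 2 * K + 1] for n in range(N)])

# 10 shots, each encoded as a 4-d feature vector (a stand-in for the
# (Nk, C, H, W) keyframe tensor after encoding)
feats = np.random.randn(10, 4)
windows = build_shot_windows(feats, K=2)   # shape (10, 5, 4)
```

Each `windows[n]` is the set S_n over which the classifier predicts y_n, the per-shot boundary label.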
| This paper introduces a self-supervised learning method for the video scene segmentation task, which needs to temporally localize scene boundaries in a video. The proposed method defines a pseudo-boundary in a sequence of shots by splitting it into two sub-sequences, and trains a model to predict the pseudo-boundary with three novel tasks: Shot-Scene Matching (SSM), Contextual Group Matching (CGM), and Pseudo-boundary Prediction (PP). Experiments on pre-training and transfer learning show that this method is critical to improving video scene segmentation performance, and it achieves a new state-of-the-art on the MovieNet-SSeg benchmark. | SP:6254bbdb33e5ff8cee7a06f1c44f3653408de7cb |
Boundary-aware Pre-training for Video Scene Segmentation |
| This paper presents a boundary-aware self-supervised learning framework for the task of video scene segmentation. During the pre-training stage, the authors first apply DTW to find a pseudo-boundary in a given video sequence, and then apply Shot-Scene Matching and Contextual Group Matching to optimize the intra-scene and inter-scene distribution of features. Moreover, Pseudo-boundary Prediction and Masked Shot Modeling are utilized to help the model identify transitional moments. | SP:6254bbdb33e5ff8cee7a06f1c44f3653408de7cb |
Mapping conditional distributions for domain adaptation under generalized target shift | 1 INTRODUCTION. Unsupervised Domain Adaptation (UDA) methods (Pan & Yang, 2010) train a classifier with labelled samples from a source domain S such that its risk on an unlabelled target domain T is low. This problem is ill-posed, and simplifying assumptions have been considered. Initial contributions focused on two settings which decompose the joint distribution over X × Y differently: covariate shift (CoS), where input marginals differ across domains while class-posteriors are unchanged, i.e., p_S(Y|X) = p_T(Y|X), p_S(X) ≠ p_T(X); and target shift (TarS), where label distributions differ while conditionals are unchanged, i.e., p_S(Y) ≠ p_T(Y), p_S(X|Y) = p_T(X|Y). These restrictive assumptions were recently extended to model shift, i.e., p_S(Y|X) ≠ p_T(Y|X), p_S(X) ≠ p_T(X) (Wang & Schneider, 2014; 2015), and generalized target shift (GeTarS), i.e., p_S(X|Y) ≠ p_T(X|Y), p_S(Y) ≠ p_T(Y), which are more realistic in real-world applications. Our work addresses GeTarS, for which a key challenge is to learn how to map the source domain onto the target one so as to minimize both conditional and label shift, without using target labels. The current state-of-the-art (Gong et al., 2016; Combes et al., 2020; Rakotomamonjy et al., 2020; Shui et al., 2021) learns domain-invariant representations and uses, as importance weights in the training objectives, the estimated class-ratios between domains. However, this approach has several limitations. First, to transfer representations, the domain-invariance constraint breaks the original problem structure, and it has been shown that this may degrade the discriminativity of target representations (Liu et al., 2019). Some approaches propose to better enforce target discriminativity when learning invariant representations, e.g., Xiao et al. (2019); Li et al.
(2020); Chen et al. (2019), yet they were not applied to the GeTarS setting. Second, generalization guarantees are derived under strong assumptions, detailed in Section 2.3, which may not hold in practice. In this paper, we address these limitations with a new general approach, named Optimal Sample Transformation and Reweight (OSTAR), which maps pretrained representations using Optimal Transport (OT). OSTAR proposes an alternative to constraining representation invariance and jointly performs three operations: given a pretrained encoder, (i) it learns an OT map, implemented as a neural network (NN), between encoded source and target conditionals; (ii) it estimates target proportions for sample reweighting; and (iii) it learns a classifier for the target domain using source labels. OSTAR has several benefits: (i) it is flexible, scalable and preserves target discriminativity, and (ii) it provides strong theoretical guarantees under mild assumptions. In summary, our contributions are: • We propose an approach, OSTAR, to align pretrained representations under GeTarS. Without constraining representation invariance, OSTAR jointly learns a classifier for inference on the target domain and an OT map, which maps representations of source conditionals to those of target ones under class-reweighting. OSTAR preserves target discriminativity and experimentally challenges the state-of-the-art for GeTarS. • OSTAR implements its OT map as a NN shared across classes. Our approach is thus flexible and has native regularization biases for stability. Moreover, it is scalable and generalizes beyond training samples, unlike standard linear-programming-based OT approaches. • OSTAR has strong theoretical guarantees under mild assumptions: its solution is unique, recovers target proportions and correctly matches source and target conditionals at the optimum. It also explicitly controls the target risk with a new Wasserstein-based bound.
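As a toy illustration of the second contribution, a single OT map φ shared across classes can be parametrized as a residual network, whose identity skip connection provides the kind of stability bias mentioned above. The architecture and sizes below are hypothetical, and the weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

class ResidualMap:
    """Toy one-hidden-layer residual network phi(z) = z + W2 tanh(z W1):
    one map shared across all classes, as in OSTAR's NN-based OT map
    (illustrative only; weights here are random, not learned)."""
    def __init__(self, d, h=16):
        self.W1 = rng.normal(0, 0.1, (d, h))
        self.W2 = rng.normal(0, 0.1, (h, d))

    def __call__(self, z):
        return z + np.tanh(z @ self.W1) @ self.W2

phi = ResidualMap(d=4)
z = rng.normal(size=(5, 4))
mapped = phi(z)            # same shape as the input latent batch
```

The residual form initializes φ near the identity, so early in training the mapped source conditionals stay close to the originals instead of collapsing.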
Our paper is organized as follows. In Section 2, we define our problem, approach and assumptions. In Section 3, we derive theoretical results. In Section 4, we describe our implementation. In Section 5, we report experimental results and ablation studies. In Section 6, we present related work. 2 PROPOSED APPROACH. In this section, we successively define our problem, present our method OSTAR and its main ideas, and introduce the assumptions used to provide theoretical guarantees for our method. 2.1 PROBLEM DEFINITION. Denoting X the input space and Y = {1, ..., K} the label space, we consider UDA between a source S = (X_S, Y_S, p_S(X, Y)) with labelled samples Ŝ = {(x_S^(i), y_S^(i))}_{i=1}^n ∈ (X_S × Y_S)^n and a target T = (X_T, Y_T, p_T(X, Y)) with unlabelled samples T̂ = {x_T^(i)}_{i=1}^m ∈ X_T^m. We denote Z ⊂ R^d a latent space and g : X → Z an encoder from X to Z. Z_S and Z_T are the encoded source and target input domains, Z_Ŝ and Z_T̂ the corresponding training sets, and Z a random variable in this space. The latent marginal probability induced by g on D ∈ {S, T} is defined as ∀A ⊂ Z, p_D^g(A) := g_#(p_D(A)).¹ For convenience, p_D^Y ∈ R^K denotes the label marginal p_D(Y), and p_D(Z) := p_D^g(Z). In general, conditional distributions in this latent space and label marginals differ across domains; this is the GeTarS assumption (Definition 1), illustrated in Figure 1a. This assumption, made in the feature space Z rather than the input space X, states that latent representations of a given class differ across the two domains, with different label proportions. Operating in the latent space has several practical advantages, such as improved discriminativity and dimension reduction. Definition 1 (GeTarS). GeTarS is characterized by conditional mismatch across domains, i.e., ∃j ∈ {1, ..., K}, p_S(Z|Y = j) ≠ p_T(Z|Y = j), and label shift, i.e., p_S^Y ≠ p_T^Y.
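A minimal synthetic instance of Definition 1 can be generated by shifting both the class-conditionals and the label marginal between domains. The Gaussian conditionals, means, and proportions below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def sample_domain(n, label_probs, class_means, rng):
    """Draw (z, y) pairs with Gaussian class-conditionals p(Z|Y) centred
    at class_means and label marginal p(Y) = label_probs."""
    y = rng.choice(len(label_probs), size=n, p=label_probs)
    z = class_means[y] + rng.normal(0, 0.2, (n, class_means.shape[1]))
    return z, y

rng = np.random.default_rng(1)
src_means = np.array([[0., 0.], [2., 2.]])
tgt_means = src_means + 1.0                     # conditional shift: p_S(Z|Y) != p_T(Z|Y)
zs, ys = sample_domain(500, [0.5, 0.5], src_means, rng)
zt, yt = sample_domain(500, [0.2, 0.8], tgt_means, rng)   # label shift: p_S(Y) != p_T(Y)
```

Here both GeTarS conditions hold by construction: the conditionals are translated between domains and the class proportions change from (0.5, 0.5) to (0.2, 0.8).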
Our goal is to learn a classifier in Z with low target risk, using source labels. This is challenging as (i) target labels are unknown and (ii) there are two shifts to handle. We will show that this can be achieved with pretrained representations if we recover two key properties: (i) a map which matches source and target conditional distributions, and (ii) the target proportions, used to reweight samples by class-ratios and thus account for label shift. Our approach, OSTAR, achieves this objective. 2.2 MAPPING CONDITIONAL DISTRIBUTIONS UNDER LABEL SHIFT. We now present the components of OSTAR, their objectives and the various training steps. Components. The main components of OSTAR, illustrated in Figure 1b, are detailed below. These components are learned and estimated using the algorithm detailed in Section 4. They include: • a fixed encoder g : X → Z, defined in Section 2.1; • a mapping φ : Z → Z, acting on source samples encoded by g; • a label proportion vector p_N^Y on the simplex Δ_K; • a classifier f_N : Z → {1, ..., K} for the target domain, in a hypothesis class H over Z. (¹ f_#ρ denotes the push-forward measure f_#ρ(B) = ρ(f^{−1}(B)), for every measurable set B.) Objective. g encodes source and target samples in a latent space such that it preserves rich information about the target task and such that the risk on the source domain is small. g is fixed throughout training to preserve target discriminativity. φ should map the encoded source conditionals in Z_S onto the corresponding encoded target ones in Z_T to account for conditional shift; Z_N denotes the mapped space. p_N^Y should estimate the target proportions p_T^Y to account for label shift. The components (φ, p_N^Y) define a new labelled domain in latent space, N = (Z_N, Y_N, p_N(Z, Y)), through a Sample Transformation And Reweight operation on the encoded S domain, as illustrated in Figure 1b.
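The Sample Transformation And Reweight operation can be sketched as follows: map the encoded source samples with φ and attach class-ratio importance weights p_N(Y=k)/p_S(Y=k), so that weighted statistics over the mapped samples follow domain N's marginal. The helper name and the toy φ are assumptions for illustration:

```python
import numpy as np

def transform_and_reweight(zs, ys, phi, p_y_n):
    """Build domain N from the encoded source: map every sample with the
    (hypothetical) OT map phi, and attach the class-ratio importance
    weight p_N(Y=k) / p_S(Y=k), where p_S(Y) is estimated empirically."""
    p_y_s = np.bincount(ys) / len(ys)
    weights = p_y_n[ys] / p_y_s[ys]
    return phi(zs), weights

phi = lambda z: z + 1.0                      # toy stand-in for the learned map
zs = np.zeros((8, 2))
ys = np.array([0, 0, 0, 0, 1, 1, 1, 1])     # empirical p_S(Y) = (0.5, 0.5)
zn, w = transform_and_reweight(zs, ys, phi, np.array([0.25, 0.75]))
```

Note that the weighted empirical label marginal of the mapped samples equals p_N^Y, which is exactly the reweighting that the marginal of domain N requires.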
Indeed, the pushforward by φ of the encoded source conditionals defines the conditionals in domain N, p_N^φ(Z|Y):

∀k, p_N^φ(Z|Y = k) ≜ φ#(p_S(Z|Y = k))

Then, p_N^Y weights each conditional in N. This yields a marginal distribution in N, p_N^φ(Z):

p_N^φ(Z) ≜ Σ_{k=1}^{K} p_N^{Y=k} p_N^φ(Z|Y = k)    (1)

Finally, the classifier f_N is trained on labelled samples from domain N. This is possible as each sample in N is a projection of a labelled sample from S. f_N can then be used for inference on T. We will show that it has low target risk when the components φ and p_N^Y achieve their objectives detailed above.

Training We train OSTAR's components in two stages. First, we train g along with a source classifier f_S from scratch by minimizing a source classification loss; alternatively, g can be tailored to specific problems with pretraining. Second, we jointly learn (f_N, φ, p_N^Y) to minimize a classification loss in domain N and to match target conditionals and proportions with those in domain N. As target conditionals and proportions are unknown, we propose a proxy problem for (φ, p_N^Y) to match instead the latent marginals p_T(Z) and p_N^φ(Z) of Eq. (1). We solve this proxy problem under a least-action principle measured by a Monge transport cost (Santambrogio, 2015), denoted C(φ), as in problem (OT):

min_{φ, p_N^Y ∈ ∆_K} C(φ) ≜ Σ_{k=1}^{K} ∫_{z∈Z} ||φ(z) − z||_2^2 p_S(z|Y = k) dz   subject to   p_N^φ(Z) = p_T(Z)    (OT)

For any function φ, e.g., parametrized by a NN, C(φ) is the transport cost of the encoded source conditionals by φ. It uses a cost function c(x, y) = ||x − y||_2^p, where without loss of generality p = 2. The optimal C(φ) is the sum of Wasserstein-2 distances between the source conditionals and their mappings. Problem (OT) seeks to minimize C(φ) under marginal matching.
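The transport cost C(φ) and the reweighted marginal of Eq. (1) can both be estimated from samples. A hedged numpy sketch, with hypothetical Gaussian source conditionals and a hand-picked translation standing in for the learned map φ:

```python
import numpy as np

rng = np.random.default_rng(2)
K, d, n = 2, 2, 5000

# Hypothetical source conditionals p_S(z | Y=k): unit Gaussians at mu_k
mu = np.array([[0.0, 0.0], [4.0, 0.0]])

def phi(z):
    return z + np.array([1.0, 0.0])  # a simple candidate transport map

# Monte Carlo estimate of C(phi) = sum_k E_{z ~ p_S(z|Y=k)} ||phi(z) - z||^2
C = 0.0
for k in range(K):
    z = mu[k] + rng.standard_normal((n, d))
    C += np.mean(np.sum((phi(z) - z) ** 2, axis=1))
print("estimated C(phi):", C)  # here each class contributes ||shift||^2 = 1

# Sampling the reweighted marginal p_N^phi(Z) of Eq. (1):
# draw a class from p_N^Y, then a mapped source sample from that class.
pYN = np.array([0.3, 0.7])
y = rng.choice(K, size=n, p=pYN)
zN = phi(mu[y] + rng.standard_normal((n, d)))
print("mean of p_N^phi(Z):", zN.mean(axis=0))
```

Problem (OT) would then search over φ and p_N^Y so that samples of p_N^φ(Z) match those of p_T(Z) while keeping C(φ) small.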
We provide some further background on optimal transport in Appendix C and discuss there the differences between our OT problem (OT) and the standard Monge OT problem between p_S(Z) and p_T(Z). Our OT formulation is key to the approach. First, it is at the basis of our theoretical analysis. Under Assumption 2, defined later, the optimal transport cost is the sum of the Wasserstein-2 distances between source and matched target conditionals. We can then provide conditions on these distances in Assumption 3 to formally define when the solution to problem (OT) correctly matches conditionals. Second, it allows us to learn the OT map with a NN. This NN approach: (i) generalizes beyond training samples and scales up with the number of samples, unlike linear-programming-based OT approaches (Courty et al., 2017b), and (ii) introduces useful stability biases which make learning less prone to numerical instabilities, as highlighted in de Bézenac et al. (2021); Karkar et al. (2020). | This paper proposes a method for unsupervised domain adaptation under conditional and label shift, one of the most challenging versions of DA. At the core of this method is an optimal transport problem that seeks a mapping between source and target domains *and* a class proportion vector that minimize a total transportation cost between the domains. Based on this formulation, and a set of 4 sensible assumptions, the paper provides theoretical results in the form of uniqueness of the solution and a classic generalization bound on the target domain risk. The implemented algorithm is inspired by (though not identical to) this objective, which is extended with a few other terms: a constraint on the label proportions based on a confusion matrix, a classification loss for the mapped source samples, a relaxed version of the OT's pushforward constraint, and finally, two objectives that seek to improve the discriminativity of target representations.
The paper then proceeds to validate the proposed method on a wide array of benchmark DA datasets, with various levels of class imbalance, whereby it shows that the method repeatedly and significantly outperforms SOTA UDA alternatives. | SP:96d683c9273977ab73c817e7a44a3e9b57daa370 |
Mapping conditional distributions for domain adaptation under generalized target shift | 1 INTRODUCTION

Unsupervised Domain Adaptation (UDA) methods (Pan & Yang, 2010) train a classifier with labelled samples from a source domain S such that its risk on an unlabelled target domain T is low. This problem is ill-posed and simplifying assumptions have been considered. Initial contributions focused on two settings which decompose differently the joint distribution over X × Y: covariate shift (CoS), when input marginals differ across domains while class-posteriors are unchanged, i.e., p_S(Y|X) = p_T(Y|X), p_S(X) ≠ p_T(X), and target shift (TarS), when label distributions differ while conditionals are unchanged, i.e., p_S(Y) ≠ p_T(Y), p_S(X|Y) = p_T(X|Y). These restrictive assumptions were recently extended into model shift, i.e., p_S(Y|X) ≠ p_T(Y|X), p_S(X) ≠ p_T(X) (Wang & Schneider, 2014; 2015), and generalized target shift (GeTarS), i.e., p_S(X|Y) ≠ p_T(X|Y), p_S(Y) ≠ p_T(Y), which are more realistic in real-world applications. Our work addresses GeTarS, for which a key challenge is to learn how to map the source domain onto the target one to minimize both conditional and label shifts, without using target labels. The current state of the art in Gong et al. (2016); Combes et al. (2020); Rakotomamonjy et al. (2020); Shui et al. (2021) learns domain-invariant representations and uses, as importance weights in the training objectives, the estimated class-ratios between domains. However, this approach has several limitations. First, to transfer representations, the domain-invariance constraint breaks the original problem structure, and it was shown that this may degrade the discriminativity of target representations (Liu et al., 2019). Some approaches proposed to better enforce target discriminativity when learning invariant representations, e.g., Xiao et al. (2019); Li et al. (2020); Chen et al. (2019), yet they were not applied to the GeTarS setting. Second, generalization guarantees are derived under strong assumptions, detailed in Section 2.3, which may not hold in practice. In this paper, we address these limitations with a new general approach, named Optimal Sample Transformation and Reweight (OSTAR), which maps pretrained representations using Optimal Transport (OT). OSTAR proposes an alternative to constraining representation invariance and performs three operations jointly: given a pretrained encoder, (i) it learns an OT map, implemented as a neural network (NN), between encoded source and target conditionals, (ii) it estimates target proportions for sample reweighting, and (iii) it learns a classifier for the target domain using source labels. OSTAR has several benefits: (i) it is flexible, scalable and preserves target discriminativity, and (ii) it provides strong theoretical guarantees under mild assumptions. In summary, our contributions are:
• We propose an approach, OSTAR, to align pretrained representations under GeTarS. Without constraining representation invariance, OSTAR jointly learns a classifier for inference on the target domain and an OT map, which maps representations of source conditionals to those of target ones under class reweighting. OSTAR preserves target discriminativity and experimentally challenges the state of the art for GeTarS.
• OSTAR implements its OT map as a NN shared across classes. Our approach is thus flexible and has native regularization biases for stability. Moreover, it is scalable and generalizes beyond training samples, unlike standard linear-programming-based OT approaches.
• OSTAR has strong theoretical guarantees under mild assumptions: its solution is unique, recovers target proportions and correctly matches source and target conditionals at the optimum. It also explicitly controls the target risk with a new Wasserstein-based bound.
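As background for the importance-weighting strategy mentioned above, the following sketch illustrates why, under pure target shift (shared conditionals), reweighting per-class source losses by the class-ratios p_T(Y=k)/p_S(Y=k) recovers the target risk; all proportions and losses below are made up:

```python
import numpy as np

# Under pure target shift (TarS), conditionals are shared across domains,
# so the target risk equals a class-reweighted source risk with importance
# weights w_k = p_T(Y=k) / p_S(Y=k). Hypothetical label marginals:
pY_S = np.array([0.6, 0.3, 0.1])
pY_T = np.array([0.2, 0.3, 0.5])
w = pY_T / pY_S
print("class-ratios:", w)

# Reweighting per-class source losses recovers the target risk exactly:
per_class_loss = np.array([0.10, 0.25, 0.40])  # hypothetical E[loss | Y=k]
source_risk = np.dot(pY_S, per_class_loss)
target_risk = np.dot(pY_T, per_class_loss)
reweighted = np.dot(pY_S, w * per_class_loss)
print(source_risk, target_risk, reweighted)
```

Under GeTarS this identity no longer holds on its own, since the conditionals also differ; this is why OSTAR additionally maps conditionals with φ.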
| In this paper, the authors consider unsupervised domain adaptation (UDA) where both the label-conditional and marginal distributions differ between the source and target domains (GeTarS). The authors propose an approach, OSTAR, to align pretrained representations under GeTarS. A new theoretical bound under some assumptions is derived. Experimental studies also show the effectiveness of OSTAR. | SP:96d683c9273977ab73c817e7a44a3e9b57daa370 |
| The paper proposes an approach for Generalized Target Shift (GeTarS), where both conditional and label shift are present in the target domain. The proposed approach, Optimal Sample Transformation and Reweight (OSTAR), uses optimal transport to transform the latent space and uses information maximization to refine the classifier's decision boundaries. Theoretical analysis shows the proposed approach minimizes the components in the upper bound on the target risk. Empirical studies are carried out on three datasets and ablation studies were carried out to analyze the impact of MI. | SP:96d683c9273977ab73c817e7a44a3e9b57daa370 |
Semantically Controllable Generation of Physical Scenes with Explicit Knowledge | Deep Generative Models (DGMs) are known for their superior capability in generating realistic data. Extending purely data-driven approaches, recent specialized DGMs satisfy additional controllability requirements, such as embedding a traffic sign in a driving scene, by manipulating patterns implicitly at the neuron or feature level. In this paper, we introduce a novel method to incorporate domain knowledge explicitly in the generation process to achieve semantically controllable generation of physical scenes. We first categorize our knowledge into two types, the properties of objects and the relationships among objects, to be consistent with the composition of natural scenes. We then propose a tree-structured generative model to learn hierarchical scene representations, whose nodes and edges naturally correspond to the two types of knowledge, respectively. Consequently, explicit knowledge integration enables semantically controllable generation by imposing semantic rules on the properties of nodes and edges in the tree structure. We construct a synthetic example to illustrate the controllability and explainability of our method in a succinct setting. We further extend the synthetic example to realistic environments for autonomous vehicles and conduct extensive experiments: our method efficiently identifies adversarial physical scenes against different state-of-the-art 3D point cloud segmentation models while satisfying the traffic rules specified as explicit knowledge.

1 INTRODUCTION

The recent breakthrough in machine learning enables us to learn the complex distributions behind data with sophisticated models. These models help us understand the data generation process so as to realize controllable data generation [1, 65, 16].
Deep Generative Models (DGMs) [23, 34, 17], which approximate the data distribution with neural networks (NN), are representative methods to generate data targeting a specific style, category, or attribute. However, existing controllable generative models focus on manipulating implicit patterns at the neuron or feature level. For instance, [7] dissects DGMs to build the relationship between neurons and generated data, while [55] interpolates in the latent space to obtain vectors that control the poses of objects. One main limitation of these existing models is that they cannot explicitly incorporate unseen semantic rules, which may lead to meaningless generated data that violates common sense. For example, to build diverse physical scenes for evaluating autonomous vehicles, the generated cars should follow semantic traffic rules and physical laws, which cannot be enforced by directly manipulating neurons. In light of the limitations of previous work, we aim to develop a structured generative framework that integrates explicit knowledge [15] during the generation process and thus controls the generated scenes to be compliant with semantic rules. Natural scenes can be described with objects and their various relationships [5]. Thus, in this paper, we categorize the semantic knowledge that describes scenes into two types, where the first type, denoted node-level knowledge, represents the properties of single objects, and the second type, denoted edge-level knowledge, represents the relationships among objects. We also observe that the tree structure is highly consistent with this categorization for constructing scenes, where the nodes of the tree represent objects and the edges the relationships. By automatically controlling the tree structure during generation, we explicitly integrate the node-level and edge-level knowledge.
In detail, we propose a general framework, Semantically Controllable Generation (SCG), which consists of two stages, as shown in Figure 1. In stage one, we train a tree-structured generative model that parameterizes the nodes and edges of trees with NNs to learn the representation of structured data. In stage two, explicit knowledge is applied to different levels of the tree to achieve semantically controllable generation for downstream tasks, such as satisfying certain conditions or reducing the performance of recognition algorithms. To verify the proposed SCG, we first construct a synthetic scene reconstruction example to illustrate the advantages of SCG and provide analysis of its controllability and explainability. With SCG, it is possible to generate natural scenes that follow semantic rules, e.g., boxes with the same color should be positioned close to each other. To demonstrate the practicality of SCG, we conduct extensive experiments on adversarial LiDAR scene generation against state-of-the-art 3D segmentation models. We show that our generated safety-critical physical scenes successfully attack victim models while following the specified traffic rules. In addition, compared with traditional attack methods, scenes generated by our method achieve stronger adversarial transferability across different victim models. Our technical contributions are summarized below:
• We propose a semantically controllable generative framework (SCG) that integrates explicit knowledge, and categorize the knowledge into two types according to the composition of scenes.
• We propose a tree-structured generative model based on our knowledge categorization and construct a synthetic example to demonstrate the effectiveness of our knowledge integration.
• We propose the first semantic adversarial point cloud attack based on SCG, named Scene Attack, against state-of-the-art segmentation algorithms, demonstrating several essential properties.
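The two-stage idea can be caricatured with a toy, self-contained sketch. Everything here is hypothetical: the "decoder" is the identity, the downstream loss and the semantic rule are hand-picked, and the latent search is plain random search rather than the paper's actual optimization; the point is only that applying knowledge to every candidate keeps all generated scenes rule-compliant:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stage one (assumed already trained): a decoder mapping a latent code z
# to a scene; here the "scene" is just two object x-positions and the
# decoder is the identity, purely for illustration.
def decode(z):
    return z

# Downstream task objective L_t: move object 0 toward x = 5 (hypothetical).
def task_loss(scene):
    return (scene[0] - 5.0) ** 2

# Explicit knowledge K_t as a hard semantic rule: the two objects must lie
# within 1.0 of each other. We enforce the rule by projection.
def apply_knowledge(scene):
    mid = scene.mean()
    return np.clip(scene, mid - 0.5, mid + 0.5)

def objective(z):
    return task_loss(apply_knowledge(decode(z)))

# Stage two: random search in latent space; knowledge is applied to every
# candidate, so all generated scenes satisfy the rule by construction.
z = rng.standard_normal(2)
loss0 = objective(z)
for _ in range(200):
    cand = z + 0.1 * rng.standard_normal(2)
    if objective(cand) < objective(z):
        z = cand

scene = apply_knowledge(decode(z))
print("scene:", scene, "loss:", objective(z))
```

Because only improving candidates are accepted, the downstream loss never increases, and the projection guarantees the semantic rule holds for every emitted scene.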
2 SEMANTICALLY CONTROLLABLE GENERATION . We define the scene x ∈ X in the data space and the latent code z ∈ Z in the latent space . This paper aims to generate a scene x that satisfies certain semantic rules Kt , which are related to the downstream task t ∈ T . The scene x will be used to solve the downstream task t by minimizing the corresponding objective function Lt ( x ) . In this section , we first describe the tree-based generative model for learning the hierarchical representation of x , which is necessary for applying knowledge to achieve semantic controllability . Then we explain the two types of knowledge to be integrated into the generative model , together with the generation algorithm that uses explicit knowledge Kt as guidance for downstream tasks . 2.1 TREE-STRUCTURED VARIATIONAL AUTO-ENCODER ( T-VAE ) . VAE [ 34 ] is a powerful DGM that combines an auto-encoder with variational inference [ 9 ] . It estimates a mapping between data x and latent code z to find the low-dimensional manifold of the data space . The objective of training a VAE is to maximize a lower bound of the log-likelihood of the training data , the so-called Evidence Lower Bound ( ELBO ) : ELBO = E_{q ( z|x ; φ )} [ log p ( x|z ; θ ) ] − KL ( q ( z|x ; φ ) || p ( z ) ) ( 1 ) where KL is the Kullback–Leibler ( KL ) divergence , q ( z|x ; φ ) is the encoder with parameters φ , and p ( x|z ; θ ) is the decoder with parameters θ . The prior distribution of the latent code p ( z ) is usually a Gaussian distribution to simplify the KL divergence calculation . One typical characteristic of natural scenes is the variable data dimension caused by the variable number of objects , which is very challenging to represent with a fixed number of parameters as in traditional models [ 34 ] . Besides , many existing structured generative models [ 64 , 40 ] neither consider the hierarchy of natural scenes nor have the capability to incorporate explicit knowledge .
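To make Eq. ( 1 ) concrete: when both the approximate posterior and the prior are diagonal Gaussians, the KL term has a closed form, and the ELBO is the reconstruction log-likelihood minus that KL. The snippet below is a minimal illustrative sketch, not the paper's implementation; the function names and the log-variance parameterization are our own choices.

```python
import math

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions,
    # with sigma^2 = exp(log_var). This is the KL term of the ELBO in Eq. (1).
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def elbo(recon_log_lik, mu, log_var):
    # ELBO = E_q[ log p(x|z) ] - KL( q(z|x) || p(z) ); in practice the
    # expectation is approximated by a single reparameterized sample of z.
    return recon_log_lik - gaussian_kl(mu, log_var)
```

When the posterior matches the prior ( mu = 0 , log_var = 0 ) the KL term vanishes, so maximizing the ELBO then reduces to maximizing the reconstruction likelihood alone.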
In general , graphs are commonly used to represent structured data [ 41 ] , but they are sometimes too complicated to describe the hierarchy and inefficient to generate . As a special case of graphs , trees naturally embed hierarchical information via recursive generation with depth-first-search traversal [ 29 , 48 ] . This hierarchy is not only highly consistent with natural physical scenes , but also makes it easier to apply explicit knowledge , as supported by previous works in the cognition literature [ 46 ] . In this work , we propose a novel tree-structured generative model , which is inspired by the stick-breaking approach [ 58 ] : assume we have a stick with length W and we iteratively break it into segments w ( n , i ) with W = w ( 1,1 ) = w ( 2,1 ) + w ( 2,2 ) = · · · = Σ_{i=1}^{Kn} w ( n , i ) ( 2 ) where ( n , i ) denotes the i-th segment of the n-th level and Kn is the total number of segments in the n-th level . The index starts from 1 and the entire stick has index ( 1,1 ) . The recursive function for breaking the stick follows w ( n+1 , j ) = w ( n , i ) α ( n , i ) , w ( n+1 , j+1 ) = w ( n , i ) ( 1 − α ( n , i ) ) ( 3 ) where α ( n , i ) ∈ [ 0 , 1 ] is the splitting ratio for w ( n , i ) and j is the index in the ( n+1 ) -th level . Intuitively , this breaking process creates a tree structure where the n-th level corresponds to the n-th layer of the tree and segments are nodes in the tree . We extend the above division to 2D space with two parameters α and β . A 2D plane example is illustrated in Figure 2 , where each node has 4 child nodes in the next layer .
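As a sanity check on Eqs. ( 2 ) and ( 3 ), the sketch below simulates the 1D stick-breaking process with a fixed splitting ratio; the total length is conserved at every level, which is exactly the invariant stated in Eq. ( 2 ). The function and argument names are illustrative assumptions, and a fixed alpha stands in for the per-segment ratios α ( n , i ).

```python
def break_stick(total_length, alpha, depth):
    # levels[n] holds the segments of level n+1 of Eq. (2); levels[0] = [W].
    levels = [[total_length]]
    for _ in range(depth):
        next_level = []
        for w in levels[-1]:
            # Eq. (3): each segment splits into alpha*w and (1 - alpha)*w.
            next_level += [w * alpha, w * (1.0 - alpha)]
        levels.append(next_level)
    return levels
```

For any depth, the segments of each level sum back to W, matching Eq. ( 2 ); the 2D variant in the paper applies the same idea along two axes with ratios α and β.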
Assuming there are M kinds of nodes in the scene , we define a batch of encoders Em and decoders Dm for all m ∈ M and n ∈ { 1 , · · · , N − 1 } : f ( n , j ) = Em ( [ f ( n+1 , i ) , · · · , f ( n+1 , i+lm ) , g ( n+1 , i ) ] ; φm ) , [ f ( n+1 , j ) , · · · , f ( n+1 , j+lm ) , ĝ ( n+1 , j ) ] = Dm ( f ( n , i ) ; θm ) ( 4 ) where f ( n , i ) is the feature vector that passes messages through the tree structure , and g ( n , i ) is the property vector of node ( n , i ) that stores specific properties , such as the color of the object generated by node ( n , i ) . ĝ ( n+1 , j ) is the predicted property vector and lm is the number of children of node type m. Besides the encoders and decoders , we also define a Classifier to determine the child node type and a Sampler to infer the posterior distribution of the latent code : ĉ ( n , i ) = Classifier ( f ( n , i ) ; θc ) , [ zµ , zσ ] = Sampler ( f ( 1,1 ) ; φs ) ( 5 ) where ĉ ( n , i ) is the predicted node type and [ zµ , zσ ] is used to calculate the latent code z with the reparameterization trick [ 9 ] . θc and φs are the model parameters of the Classifier and Sampler . Parameters of the encoders q ( z|x ; φ ) and decoders p ( x|z ; θ ) are denoted respectively as φ = { φ1 , · · · , φM , φs } and θ = { θ1 , · · · , θM , θc } . The final structures of the encoder and decoder depend on the tree structure of the data point and vary across the dataset . We follow Recursive Neural Networks ( RvNN ) [ 61 ] to build the tree structure recursively . Finally , the tree x is summarized as x = { c , g } = { c ( 1,1 ) , · · · , c ( N , KN ) , g ( 1,1 ) , · · · , g ( N , KN ) } ( 6 ) where c represents the node types . Following ( 1 ) , the ELBO of T-VAE to be maximized is ELBO = E_q [ log p ( c|z ; θ ) ] + E_q [ log p ( g|z ; θ ) ] − KL ( N ( zµ , zσ ) ‖ N ( 0 , I ) ) = −LC ( ĉ , c ) − LR ( ĝ , g ) − KL ( N ( zµ , zσ ) ‖ N ( 0 , I ) ) ( 7 ) where the first two terms correspond to the classification loss LC and the reconstruction loss LR defined below , and the factorization holds because c and g are conditionally independent given z .
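The recursive message passing of Eq. ( 4 ) can be sketched structurally without any neural-network machinery: each node folds its children's feature vectors and its own property vector into a single feature, bottom-up, until the root feature f ( 1,1 ) summarizes the whole tree. The Node class and the toy encoders below are hypothetical stand-ins for the learned Em.

```python
class Node:
    def __init__(self, node_type, prop, children=()):
        self.node_type = node_type   # node type c_(n,i), one of the M kinds
        self.prop = prop             # property vector g_(n,i), e.g. color or position
        self.children = list(children)

def encode(node, encoders):
    # Bottom-up recursion mirroring Eq. (4): the encoder for this node's type
    # consumes the child features together with the node's property vector.
    child_feats = [encode(child, encoders) for child in node.children]
    return encoders[node.node_type](child_feats, node.prop)

# Toy encoders: a leaf emits its property; an internal node sums its child
# features with its own property. A trained E_m would be a small NN instead.
toy_encoders = {
    "leaf": lambda feats, prop: prop,
    "internal": lambda feats, prop: sum(feats) + prop,
}
```

The value returned for the root then plays the role of f ( 1,1 ), which the Sampler maps to [ zµ , zσ ] in Eq. ( 5 ).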
The LC term is the cross-entropy loss ( CE ) over all nodes in p ( x|z ; θ ) : LC ( ĉ , c ) = ( 1 / Σ_{n=1}^{N} Kn ) Σ_{n=1}^{N} Σ_{i=1}^{Kn} CE ( ĉ ( n , i ) , c ( n , i ) ; p ( c ) ) ( 8 ) where the prior distribution of node types p ( c ) is calculated from the training dataset and serves as class weights . To make the reconstructed tree have the same structure as the original one , we use Teacher Forcing [ 67 ] during the training stage ; in the generation stage , however , we take the node with the maximum probability as the child to expand the tree . The LR term uses the mean square error ( MSE ) to approximate the log-likelihood of node properties : LR ( ĝ , g ) = Σ_{m=1}^{M} ( 1 / Nm ) Σ_{n=1}^{N} Σ_{i=1}^{Kn} 1 [ c ( n , i ) = m ] ‖ ĝ ( n , i ) − g ( n , i ) ‖₂² ( 9 ) where Nm is the number of times that node type m appears in the tree and 1 [ · ] is the indicator function . In ( 9 ) , we normalize the MSE by Nm instead of Σ_{n=1}^{N} Kn to avoid the influence of imbalanced node types in the tree . Please refer to Appendix B for the detailed model definition and generative process . The advantage of this hierarchical structure is that we only need to store global information in the root node and local information in the other nodes , making it easier for the model to capture features at different scales in the scene . Moreover , this tree structure makes it possible to explicitly apply semantic knowledge in Stage 2 , which will be explained in Section 2.2 . | This paper proposes a method to incorporate domain knowledge into the physical scene generation process. They extend the synthetic example to realistic environments. Apart from this, they also propose the semantic point cloud attack against state-of-the-art segmentation methods. | SP:5bd70fd49dc0c10d0554b4077e5a4f11d61446e4 |
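Eqs. ( 8 ) and ( 9 ) can be sketched in a few lines. This is an illustrative reading, not the authors' code: we assume the prior p ( c ) acts as a per-class weight on the cross-entropy, and the squared property error is normalized by the per-type count Nm as stated in the text.

```python
import math
from collections import Counter

def tvae_losses(pred_logits, node_types, pred_props, true_props, prior):
    # L_C (Eq. 8): prior-weighted cross-entropy over node types,
    # averaged over all sum_n K_n nodes of the tree.
    l_c = 0.0
    for logits, c in zip(pred_logits, node_types):
        log_z = math.log(sum(math.exp(v) for v in logits))
        l_c += prior[c] * (log_z - logits[c])
    l_c /= len(node_types)
    # L_R (Eq. 9): squared property error, normalized by N_m (the number of
    # nodes of each type) to counter imbalanced node types in the tree.
    n_m = Counter(node_types)
    l_r = 0.0
    for g_hat, g, c in zip(pred_props, true_props, node_types):
        l_r += (g_hat - g) ** 2 / n_m[c]
    return l_c, l_r
```

Minimizing l_c + l_r together with the KL term of Eq. ( 7 ) is then equivalent to maximizing the T-VAE ELBO.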
Semantically Controllable Generation of Physical Scenes with Explicit Knowledge | Deep Generative Models ( DGMs ) are known for their superior capability in generating realistic data . Extending purely data-driven approaches , recent specialized DGMs satisfy additional controllable requirements , such as embedding a traffic sign in a driving scene , by manipulating patterns implicitly at the neuron or feature level . In this paper , we introduce a novel method to incorporate domain knowledge explicitly in the generation process to achieve semantically controllable generation of physical scenes . We first categorize our knowledge into two types , the properties of objects and the relationships among objects , to be consistent with the composition of natural scenes . We then propose a tree-structured generative model to learn hierarchical scene representations , whose nodes and edges naturally correspond to the two types of knowledge , respectively . Consequently , explicit knowledge integration enables semantically controllable generation by imposing semantic rules on the properties of nodes and edges in the tree structure . We construct a synthetic example to illustrate the controllability and explainability of our method in a succinct setting . We further extend the synthetic example to realistic environments for autonomous vehicles and conduct extensive experiments : our method efficiently identifies adversarial physical scenes against different state-of-the-art 3D point cloud segmentation models while satisfying the traffic rules specified as explicit knowledge . 1 INTRODUCTION . The recent breakthrough in machine learning enables us to learn complex distributions behind data with sophisticated models . These models help us understand the data generation process so as to realize controllable data generation [ 1 , 65 , 16 ] .
| The paper proposes a way to incorporate explicitly defined (e.g. rule based) knowledge into generative models for scene generation. Through the addition of explicit knowledge the approach can ensure that generated scenes follow specific requirements, e.g. physical rules. The approach is evaluated on a synthetic dataset and as a way to generate adversarial examples for scene based segmentation models. | SP:5bd70fd49dc0c10d0554b4077e5a4f11d61446e4 |
Semantically Controllable Generation of Physical Scenes with Explicit Knowledge | This paper proposes the use of tree-structured VAE as a mechanism for encoding several forms of constraint knowledge. The VAEs create samples of scenes that aim to conform to the constraints. A synthetic scene context was used to demonstrate and explore the approach. A more realistic LIDAR segmentation scenario was also explored, where the goal was to generate realistic but adversarial scenes. | SP:5bd70fd49dc0c10d0554b4077e5a4f11d61446e4 |
Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations | 1 INTRODUCTION . Human knowledge about what makes a good representation and the abundance of unlabeled data has enabled the learning of useful representations via self-supervised learning ( SSL ) pretext tasks . State-of-the-art SSL methods encourage the representations not to contain information about the way the inputs are transformed , i.e . to be invariant to a set of manually chosen transformations . One such method is contrastive learning , which sets up a binary classification problem to learn invariant features . Given a set of datapoints ( say images ) , different transformations of the same data point constitute positive examples , whereas transformations of other data points constitute the negatives ( He et al. , 2020 ; Chen et al. , 2020 ) . Beyond contrastive learning , many SSL methods also rely on learning representations by encouraging invariance ( Grill et al. , 2020 ; Chen & He , 2021 ; Caron et al. , 2021 ; Zbontar et al. , 2021 ) . Here , we refer to such methods as Invariant-SSL ( I-SSL ) . The natural question in I-SSL is to what transformations should the representations be insensitive ( Chen et al. , 2020 ; Tian et al. , 2020 ; Xiao et al. , 2020 ) . Chen et al . ( 2020 ) highlighted the importance of transformations and empirically evaluated which transformations are useful for contrastive learning ( e.g. , see Figure 5 in their paper ) . Some transformations , such as four-fold rotations , despite preserving semantic information , were shown to be harmful for contrastive learning . This does not mean that four-fold rotations are not useful for I-SSL at all . In fact , predicting four-fold rotations is a good proxy task for evaluating the representations produced with contrastive learning ( Reed et al. , 2021 ) . Furthermore , instead of being insensitive to rotations ( invariance ) , training a neural network to predict them , i.e . 
to be sensitive to four-fold rotations , results in good image representations ( Gidaris et al. , 2018 ; 2019 ) . These results indicate that the choice of making features sensitive or insensitive to a particular group of transformations can have a substantial effect on the performance of downstream tasks . However , the prior work in SSL has exclusively focused on being either entirely insensitive ( Grill et al. , 2020 ; Chen & He , 2021 ; Caron et al. , 2021 ; Zbontar et al. , 2021 ) or sensitive ( Agrawal et al. , 2015 ; Doersch et al. , 2015 ; Zhang et al. , 2016 ; Noroozi & Favaro , 2016 ; Gidaris et al. , 2018 ) to a set of transformations . In particular , the I-SSL literature has proposed to simply remove transformations that hurt performance when applied as invariance . To understand how sensitivity/insensitivity to a particular transformation affects the resulting features , we ran a series of experiments summarized in Figure 1 . We trained and tested a simple I-SSL baseline , SimCLR ( Chen et al. , 2020 ) , on CIFAR-10 using only the random resized cropping transformation ( solid yellow line ) . The test accuracy is calculated as the retrieval accuracy of a k-nearest neighbors ( kNN ) classifier with a memory bank consisting of the representations on the training set obtained after pre-training for 800 epochs . Next , in addition to being invariant to resized cropping , we additionally encouraged the model to be either sensitive ( shown in pink ) or insensitive ( shown in blue ) to a second transformation . We encourage insensitivity by adding the transformation to the SimCLR data augmentation , and sensitivity by predicting it ( see Section 4 ) . We varied the choice of this second transformation . We found that for some transformations , such as horizontal flips and grayscale , insensitivity results in better features , but it is detrimental for transformations such as four-fold rotations , vertical flips , 2x2 jigsaws ( 4 !
= 24 classes ) , four-fold Gaussian blurs ( 4 levels of blurring ) and color inversions . When we encourage sensitivity to these transformations , the trend is reversed . In summary , we observe that if invariance to a particular transformation hurts feature learning , then imposing sensitivity to the same transformation may improve performance . This leads us to conjecture that instead of choosing the features to be only invariant or only sensitive as done in prior work , it may be possible to learn better features by imposing invariance to certain transformations ( e.g. , cropping ) and sensitivity to other transformations ( e.g. , four-fold rotations ) . The concepts of sensitivity and insensitivity are both captured by the mathematical idea of equivariance ( Agrawal et al. , 2015 ; Jayaraman & Grauman , 2015 ; Cohen & Welling , 2016 ) . Let G be a group of transformations . For any g ∈ G , let Tg ( x ) denote the function with which g transforms an input image x . For instance , if G is the group of four-fold rotations , then Tg ( x ) rotates the image x by a multiple of π/2 . Let f be the encoder network that computes the feature representation f ( x ) . I-SSL encourages the property of “ invariance to G , ” which states f ( Tg ( x ) ) = f ( x ) , i.e . the output representation f ( x ) does not vary with Tg . Equivariance , a generalization of invariance , is defined as ∀x : f ( Tg ( x ) ) = T′g ( f ( x ) ) , where T′g is a fixed transformation ( i.e. , without any parameters ) . Intuitively , equivariance encourages the feature representation to change in a well-defined manner in response to the transformation applied to the input . Thus , invariance is a trivial instance of equivariance , where T′g is the identity function , i.e . T′g ( f ( x ) ) = f ( x ) . While there are many possible choices for T′g ( Cohen & Welling , 2016 ; Bronstein et al. , 2021 ) , I-SSL uses only the trivial choice that encourages f to be insensitive to G.
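The invariance and equivariance conditions above can be checked numerically on a toy example . The two encoders below are illustrative stand-ins , not the networks used in the paper : global average pooling is exactly invariant to 90-degree rotations , while the identity feature map is non-trivially equivariant with T′g chosen as the same rotation .

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))          # toy single-channel "image"
T_g = lambda img: np.rot90(img)          # g = 90-degree rotation from the four-fold group

# Invariant encoder: global average pooling discards spatial layout,
# so f(T_g(x)) = f(x) (trivial equivariance, T'_g = identity).
f_inv = lambda img: img.mean()
assert np.isclose(f_inv(T_g(x)), f_inv(x))

# Non-trivially equivariant encoder: identity features with T'_g = rot90
# satisfy f(T_g(x)) = T'_g(f(x)) while remaining sensitive to g.
f_eq = lambda img: img                   # feature map keeps spatial structure
T_g_prime = lambda feat: np.rot90(feat)
assert np.allclose(f_eq(T_g(x)), T_g_prime(f_eq(x)))
```

The same check fails for f_inv with a non-identity T′g , which is exactly why I-SSL objectives built on invariance cannot express sensitivity to the rotation group .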
In contrast , if T′g is not the identity , then f will be sensitive to G and we say that the “ equivariance to G ” will be non-trivial . Therefore , in order to encourage potentially more useful equivariance properties , we generalize SSL to an Equivariant Self-Supervised Learning ( E-SSL ) framework . In our experiments on standard computer vision data , such as the small-scale CIFAR-10 ( Torralba et al. , 2008 ; Krizhevsky , 2009 ) and the large-scale ImageNet ( Deng et al. , 2009 ) , we show that extending I-SSL to E-SSL by also predicting four-fold rotations improves the semantic quality of the representations . We show that this approach works for other transformations too , such as vertical flips , 2x2 jigsaws , four-fold Gaussian blurs and color inversions , but focus on four-fold rotations as the most promising improvement we obtain with initial E-SSL experiments in Figure 1 . We also note that the applications of E-SSL in this paper are task specific , meaning that the representations from E-SSL may work best for a particular downstream task that benefits from equivariance dictated by the available data . E-SSL can be further extended to applications in science ; in particular , we focus on predictive tasks using ( unlabelled and labelled ) data collected via experiments or simulations . The downstream tasks in prediction problems in science are often fixed and can be aided by incorporating scientific insights . Here , we also explore the generality of E-SSL beyond computer vision , on a different application : regression problems in photonics science and demonstrate examples where E-SSL is effective over I-SSL . Our contributions can be summarized as follows : • We introduce E-SSL , a generalization of popular SSL methods that highlights the complementary nature of invariance and equivariance . To our knowledge , we are the first to create a method that benefits from such complementarity .
• We improve state-of-the-art SSL methods on CIFAR-10 and ImageNet by encouraging equivariance to four-fold rotations . We also show that E-SSL is more general and works for many other transformations , previously unexplored in related works . • We demonstrate the usefulness of E-SSL beyond computer vision with experiments on regression problems in photonics science . We also show that our method works both for finite and infinite groups . The rest of the paper is organized as follows . In Section 3 we introduce our experimental method for E-SSL . In Section 4 we present our main experiments in computer vision . In Section 2 we elaborate on related work . In Section 5 we provide a discussion around our work that extends our study beyond computer vision . In Section 6 we conclude and point to future work . 2 RELATED WORK . To encourage non-trivial equivariance , we observe that a simple task that predicts the synthetic transformation applied to the input works well and already improves I-SSL ; some prediction tasks create representations that can be transferred to other tasks of interest , such as classification , object detection and segmentation . While prediction tasks alone have been realized successfully before in SSL ( Agrawal et al. , 2015 ; Doersch et al. , 2015 ; Zhang et al. , 2016 ; Misra et al. , 2016 ; Noroozi & Favaro , 2016 ; Zamir et al. , 2016 ; Lee et al. , 2017 ; Mundhenk et al. , 2018 ; Gidaris et al. , 2018 ; Zhang et al. , 2019 ; Zhang , 2020 ) , to our knowledge we are the first to combine simple predictive objectives of synthetic transformations with I-SSL , and successfully improve the semantic quality of representations . We found that the notion of equivariance captures the generality of our method . To improve representations with pretext tasks , Gidaris et al . ( 2018 ) use four-fold rotations prediction as a pretext task for learning useful visual representations via a new model named RotNet . Feng et al .
( 2019 ) learn decoupled representations : one part trained with four-fold rotations prediction and another with non-parametric instance discrimination ( Wu et al. , 2018 ) and invariance to four-fold rotations . Yamaguchi et al . ( 2021 ) use a joint training objective between four-fold rotations prediction and image enhancement prediction . Xiao et al . ( 2020 ) propose to learn representations as follows : for each atomic augmentation from the contrastive learning ’ s augmentation policy , they leave it out and project to a new space on which I-SSL encourages invariance to all augmentations but the left-out one . The resulting representation could either be a concatenation of all projected left-out views ’ representations , or the representation in the shared space , before the individual projections . Our method differs from the above contributions in that E-SSL is the only hybrid framework that encourages both insensitive representations for some transformations and sensitive representations for others , and does not require representations to be sensitive and insensitive to a particular transformation at the same time . Thus , what distinguishes our work is the complementary nature of invariance and equivariance for multiple transformations , including finite and infinite groups . To obtain performance gains from transformations , Tian et al . ( 2020 ) study which transformations are the best for contrastive learning through the lens of mutual information . Reed et al . ( 2021 ) use four-fold rotations prediction as an evaluation measure to tune optimal augmentations for contrastive learning . Wang & Qi ( 2021 ) use strong augmentations to improve contrastive learning by matching the distributions of strongly and weakly augmented views ’ representation similarities to a memory bank . A growing body of work encourages invariance to domain-agnostic transformations ( Tamkin et al. , 2021 ; Lee et al. , 2021 ; Verma et al.
, 2021 ) or strengthens invariance with regularization ( Foster et al. , 2021 ) . Our framework is different from the above works because we work with transformations that encourage equivariance beyond invariance . To understand and improve equivariant properties of neural networks , Lenc & Vedaldi ( 2015 ) study emerging equivariant properties of neural networks , and Cohen & Welling ( 2016 ) and Bronstein et al . ( 2021 ) construct equivariant neural networks . In contrast , our work does not enforce strict equivariance , but only encourages equivariant properties for the encoder network through the choice of the loss function . While strict equivariance is concerned with groups , some of the transformations , such as random resized cropping and Gaussian blurs , may not even form groups , but they can still be analyzed in the E-SSL framework . Thus , ours is a flexible framework , which allows us to consider a variety of transformations and how the encoder might exhibit equivariant properties to them . | The paper advocates Equivariance Self-Supervised Learning (E-SSL) as a more general framework than the Invariance SSL (I-SSL), which is common in SoTA vision SSL methods, such as SimCLR and Barlow Twins. The paper starts with empirical evidence (Fig-1) that adding equivariance objectives (e.g. 4-fold rotation and vertical flips) to SimCLR can improve performance. The proposed E-SSL framework boils down to adding an additional equivariance objective (mainly 4-fold rotations) to popular I-SSL methods (Eqn. 1 & 2). The empirical results show encouraging results in CIFAR-10 and ImageNet. Finally, the paper applies the proposed method in two datasets of photonic data (regression task from 2D square periodic unit cells). | SP:6d4f60dfa8532f2c8ceaa655c9f4d297fdf2cf1c |
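The hybrid objective described above — an invariance term between augmented views plus a four-fold-rotation prediction term — can be sketched as follows . The encoder , the prediction head , and the loss weight lam are illustrative placeholders , and the invariance term is a simple squared-distance stand-in for the actual SimCLR loss , not the paper's exact formulation .

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(img):
    # Placeholder encoder: a fixed random projection of the flattened image.
    return W @ img.ravel()

def cross_entropy(logits, label):
    # Numerically stable cross-entropy for a single logit vector.
    logits = logits - logits.max()
    return -logits[label] + np.log(np.exp(logits).sum())

W = rng.standard_normal((16, 64)) * 0.1      # encoder weights (illustrative)
W_rot = rng.standard_normal((4, 16)) * 0.1   # 4-way rotation-prediction head

x = rng.standard_normal((8, 8))              # toy image
view_a = x + 0.01 * rng.standard_normal((8, 8))  # stand-in augmented view
view_b = x
k = rng.integers(4)                          # sample one of the four-fold rotations
x_rot = np.rot90(x, k)

# Invariance term (stand-in for the SimCLR loss): pull the two views together.
z_a, z_b = encoder(view_a), encoder(view_b)
loss_inv = np.sum((z_a - z_b) ** 2)

# Equivariance term: predict which of the 4 rotations was applied.
loss_rot = cross_entropy(W_rot @ encoder(x_rot), k)

lam = 0.4                                    # assumed weight balancing the two terms
loss = loss_inv + lam * loss_rot
```

Minimizing loss_inv alone encourages insensitivity to the augmentations , while loss_rot forces the features to remain sensitive to the rotation group — the complementarity the row above argues for .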
Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations | 1 INTRODUCTION . Human knowledge about what makes a good representation and the abundance of unlabeled data has enabled the learning of useful representations via self-supervised learning ( SSL ) pretext tasks . State-of-the-art SSL methods encourage the representations not to contain information about the way the inputs are transformed , i.e . to be invariant to a set of manually chosen transformations . One such method is contrastive learning , which sets up a binary classification problem to learn invariant features . Given a set of datapoints ( say images ) , different transformations of the same data point constitute positive examples , whereas transformations of other data points constitute the negatives ( He et al. , 2020 ; Chen et al. , 2020 ) . Beyond contrastive learning , many SSL methods also rely on learning representations by encouraging invariance ( Grill et al. , 2020 ; Chen & He , 2021 ; Caron et al. , 2021 ; Zbontar et al. , 2021 ) . Here , we refer to such methods as Invariant-SSL ( I-SSL ) . The natural question in I-SSL is to what transformations should the representations be insensitive ( Chen et al. , 2020 ; Tian et al. , 2020 ; Xiao et al. , 2020 ) . Chen et al . ( 2020 ) highlighted the importance of transformations and empirically evaluated which transformations are useful for contrastive learning ( e.g. , see Figure 5 in their paper ) . Some transformations , such as four-fold rotations , despite preserving semantic information , were shown to be harmful for contrastive learning . This does not mean that four-fold rotations are not useful for I-SSL at all . In fact , predicting four-fold rotations is a good proxy task for evaluating the representations produced with contrastive learning ( Reed et al. , 2021 ) . Furthermore , instead of being insensitive to rotations ( invariance ) , training a neural network to predict them , i.e . 
to be sensitive to four-fold rotations , results in good image representations ( Gidaris et al. , 2018 ; 2019 ) . These results indicate that the choice of making features sensitive or insensitive to a particular group of transformations can have a substantial effect on the performance of downstream tasks . However , the prior work in SSL has exclusively focused on being either entirely insensitive ( Grill et al. , 2020 ; Chen & He , 2021 ; Caron et al. , 2021 ; Zbontar et al. , 2021 ) or sensitive ( Agrawal et al. , 2015 ; Doersch et al. , 2015 ; Zhang et al. , 2016 ; Noroozi & Favaro , 2016 ; Gidaris et al. , 2018 ) to a set of transformations . In particular , the I-SSL literature has proposed to simply remove transformations that hurt performance when applied as invariance . To understand how sensitivity/insensitivity to a particular transformation affects the resulting features , we ran a series of experiments summarized in Figure 1 . We trained and tested a simple I-SSL baseline , SimCLR ( Chen et al. , 2020 ) , on CIFAR-10 using only the random resized cropping transformation ( solid yellow line ) . The test accuracy is calculated as the retrieval accuracy of a k-nearest neighbors ( kNN ) classifier with a memory bank consisting of the representations on the training set obtained after pre-training for 800 epochs . Next , in addition to being invariant to resized cropping , we additionally encouraged the model to be either sensitive ( shown in pink ) or insensitive ( shown in blue ) to a second transformation . We encourage insensitivity by adding the transformation to the SimCLR data augmentation , and sensitivity by predicting it ( see Section 4 ) . We varied the choice of this second transformation . We found that for some transformations , such as horizontal flips and grayscale , insensitivity results in better features , but it is detrimental for transformations such as four-fold rotations , vertical flips , 2x2 jigsaws ( 4 !
= 24 classes ) , four-fold Gaussian blurs ( 4 levels of blurring ) and color inversions . When we encourage sensitivity to these transformations , the trend is reversed . In summary , we observe that if invariance to a particular transformation hurts feature learning , then imposing sensitivity to the same transformation may improve performance . This leads us to conjecture that instead of choosing the features to be only invariant or only sensitive as done in prior work , it may be possible to learn better features by imposing invariance to certain transformations ( e.g. , cropping ) and sensitivity to other transformations ( e.g. , four-fold rotations ) . The concepts of sensitivity and insensitivity are both captured by the mathematical idea of equivariance ( Agrawal et al. , 2015 ; Jayaraman & Grauman , 2015 ; Cohen & Welling , 2016 ) . Let G be a group of transformations . For any g ∈ G , let Tg ( x ) denote the function with which g transforms an input image x . For instance , if G is the group of four-fold rotations , then Tg ( x ) rotates the image x by a multiple of π/2 . Let f be the encoder network that computes the feature representation f ( x ) . I-SSL encourages the property of “ invariance to G , ” which states f ( Tg ( x ) ) = f ( x ) , i.e . the output representation f ( x ) does not vary with Tg . Equivariance , a generalization of invariance , is defined as ∀x : f ( Tg ( x ) ) = T′g ( f ( x ) ) , where T′g is a fixed transformation ( i.e. , without any parameters ) . Intuitively , equivariance encourages the feature representation to change in a well-defined manner in response to the transformation applied to the input . Thus , invariance is a trivial instance of equivariance , where T′g is the identity function , i.e . T′g ( f ( x ) ) = f ( x ) . While there are many possible choices for T′g ( Cohen & Welling , 2016 ; Bronstein et al. , 2021 ) , I-SSL uses only the trivial choice that encourages f to be insensitive to G.
In contrast , if T′g is not the identity , then f will be sensitive to G and we say that the “ equivariance to G ” will be non-trivial . Therefore , in order to encourage potentially more useful equivariance properties , we generalize SSL to an Equivariant Self-Supervised Learning ( E-SSL ) framework . In our experiments on standard computer vision data , such as the small-scale CIFAR-10 ( Torralba et al. , 2008 ; Krizhevsky , 2009 ) and the large-scale ImageNet ( Deng et al. , 2009 ) , we show that extending I-SSL to E-SSL by also predicting four-fold rotations improves the semantic quality of the representations . We show that this approach works for other transformations too , such as vertical flips , 2x2 jigsaws , four-fold Gaussian blurs and color inversions , but focus on four-fold rotations as the most promising improvement we obtain with initial E-SSL experiments in Figure 1 . We also note that the applications of E-SSL in this paper are task specific , meaning that the representations from E-SSL may work best for a particular downstream task that benefits from equivariance dictated by the available data . E-SSL can be further extended to applications in science ; in particular , we focus on predictive tasks using ( unlabelled and labelled ) data collected via experiments or simulations . The downstream tasks in prediction problems in science are often fixed and can be aided by incorporating scientific insights . Here , we also explore the generality of E-SSL beyond computer vision , on a different application : regression problems in photonics science and demonstrate examples where E-SSL is effective over I-SSL . Our contributions can be summarized as follows : • We introduce E-SSL , a generalization of popular SSL methods that highlights the complementary nature of invariance and equivariance . To our knowledge , we are the first to create a method that benefits from such complementarity .
• We improve state-of-the-art SSL methods on CIFAR-10 and ImageNet by encouraging equivariance to four-fold rotations . We also show that E-SSL is more general and works for many other transformations , previously unexplored in related works . • We demonstrate the usefulness of E-SSL beyond computer vision with experiments on regression problems in photonics science . We also show that our method works both for finite and infinite groups . The rest of the paper is organized as follows . In Section 3 we introduce our experimental method for E-SSL . In Section 4 we present our main experiments in computer vision . In Section 2 we elaborate on related work . In Section 5 we provide a discussion around our work that extends our study beyond computer vision . In Section 6 we conclude and point to future work . 2 RELATED WORK . To encourage non-trivial equivariance , we observe that a simple task that predicts the synthetic transformation applied to the input works well and already improves I-SSL ; some prediction tasks create representations that can be transferred to other tasks of interest , such as classification , object detection and segmentation . While prediction tasks alone have been realized successfully before in SSL ( Agrawal et al. , 2015 ; Doersch et al. , 2015 ; Zhang et al. , 2016 ; Misra et al. , 2016 ; Noroozi & Favaro , 2016 ; Zamir et al. , 2016 ; Lee et al. , 2017 ; Mundhenk et al. , 2018 ; Gidaris et al. , 2018 ; Zhang et al. , 2019 ; Zhang , 2020 ) , to our knowledge we are the first to combine simple predictive objectives of synthetic transformations with I-SSL , and successfully improve the semantic quality of representations . We found that the notion of equivariance captures the generality of our method . To improve representations with pretext tasks , Gidaris et al . ( 2018 ) use four-fold rotations prediction as a pretext task for learning useful visual representations via a new model named RotNet . Feng et al .
( 2019 ) learn decoupled representations : one part trained with four-fold rotations prediction and another with non-parametric instance discrimination ( Wu et al. , 2018 ) and invariance to four-fold rotations . Yamaguchi et al . ( 2021 ) use a joint training objective between four-fold rotations prediction and image enhancement prediction . Xiao et al . ( 2020 ) propose to learn representations as follows : for each atomic augmentation from the contrastive learning ’ s augmentation policy , they leave it out and project to a new space on which I-SSL encourages invariance to all augmentations but the left-out one . The resulting representation could either be a concatenation of all projected left-out views ’ representations , or the representation in the shared space , before the individual projections . Our method differs from the above contributions in that E-SSL is the only hybrid framework that encourages both insensitive representations for some transformations and sensitive representations for others , and does not require representations to be sensitive and insensitive to a particular transformation at the same time . Thus , what distinguishes our work is the complementary nature of invariance and equivariance for multiple transformations , including finite and infinite groups . To obtain performance gains from transformations , Tian et al . ( 2020 ) study which transformations are the best for contrastive learning through the lens of mutual information . Reed et al . ( 2021 ) use four-fold rotations prediction as an evaluation measure to tune optimal augmentations for contrastive learning . Wang & Qi ( 2021 ) use strong augmentations to improve contrastive learning by matching the distributions of strongly and weakly augmented views ’ representation similarities to a memory bank . A growing body of work encourages invariance to domain-agnostic transformations ( Tamkin et al. , 2021 ; Lee et al. , 2021 ; Verma et al.
, 2021 ) or strengthens invariance with regularization ( Foster et al. , 2021 ) . Our framework is different from the above works because we work with transformations that encourage equivariance beyond invariance . To understand and improve equivariant properties of neural networks , Lenc & Vedaldi ( 2015 ) study emerging equivariant properties of neural networks , and Cohen & Welling ( 2016 ) and Bronstein et al . ( 2021 ) construct equivariant neural networks . In contrast , our work does not enforce strict equivariance , but only encourages equivariant properties for the encoder network through the choice of the loss function . While strict equivariance is concerned with groups , some of the transformations , such as random resized cropping and Gaussian blurs , may not even form groups , but they can still be analyzed in the E-SSL framework . Thus , ours is a flexible framework , which allows us to consider a variety of transformations and how the encoder might exhibit equivariant properties to them . | This work presents a framework for learning representations with invariances (insensitivity) to some transformations and sensitivities to others. Previous work has only considered representations with insensitivity or sensitivity but not a combination. The authors use the concept of invariance or equivariance to symmetry group transformations to learn such representations. The advantage of the proposed method is demonstrated relative to SimCLR and SimSiam on Cifar-10, ImageNet, and a new scientific application domain: learning frequency responses of photonic crystals. The authors also prove a theoretical characterization of when their method works. | SP:6d4f60dfa8532f2c8ceaa655c9f4d297fdf2cf1c |
Equivariant Self-Supervised Learning: Encouraging Equivariance in Representations | 1 INTRODUCTION . Human knowledge about what makes a good representation and the abundance of unlabeled data has enabled the learning of useful representations via self-supervised learning ( SSL ) pretext tasks . State-of-the-art SSL methods encourage the representations not to contain information about the way the inputs are transformed , i.e . to be invariant to a set of manually chosen transformations . One such method is contrastive learning , which sets up a binary classification problem to learn invariant features . Given a set of datapoints ( say images ) , different transformations of the same data point constitute positive examples , whereas transformations of other data points constitute the negatives ( He et al. , 2020 ; Chen et al. , 2020 ) . Beyond contrastive learning , many SSL methods also rely on learning representations by encouraging invariance ( Grill et al. , 2020 ; Chen & He , 2021 ; Caron et al. , 2021 ; Zbontar et al. , 2021 ) . Here , we refer to such methods as Invariant-SSL ( I-SSL ) . The natural question in I-SSL is to what transformations should the representations be insensitive ( Chen et al. , 2020 ; Tian et al. , 2020 ; Xiao et al. , 2020 ) . Chen et al . ( 2020 ) highlighted the importance of transformations and empirically evaluated which transformations are useful for contrastive learning ( e.g. , see Figure 5 in their paper ) . Some transformations , such as four-fold rotations , despite preserving semantic information , were shown to be harmful for contrastive learning . This does not mean that four-fold rotations are not useful for I-SSL at all . In fact , predicting four-fold rotations is a good proxy task for evaluating the representations produced with contrastive learning ( Reed et al. , 2021 ) . Furthermore , instead of being insensitive to rotations ( invariance ) , training a neural network to predict them , i.e . 
to be sensitive to four-fold rotations , results in good image representations ( Gidaris et al. , 2018 ; 2019 ) . These results indicate that the choice of making features sensitive or insensitive to a particular group of transformations can have a substantial effect on the performance of downstream tasks . However , the prior work in SSL has exclusively focused on being either entirely insensitive ( Grill et al. , 2020 ; Chen & He , 2021 ; Caron et al. , 2021 ; Zbontar et al. , 2021 ) or sensitive ( Agrawal et al. , 2015 ; Doersch et al. , 2015 ; Zhang et al. , 2016 ; Noroozi & Favaro , 2016 ; Gidaris et al. , 2018 ) to a set of transformations . In particular , the I-SSL literature has proposed to simply remove transformations that hurt performance when applied as invariance . To understand how sensitivity/insensitivity to a particular transformation affects the resulting features , we ran a series of experiments summarized in Figure 1 . We trained and tested a simple I-SSL baseline , SimCLR ( Chen et al. , 2020 ) , on CIFAR-10 using only the random resized cropping transformation ( solid yellow line ) . The test accuracy is calculated as the retrieval accuracy of a k-nearest neighbors ( kNN ) classifier with a memory bank consisting of the representations on the training set obtained after pre-training for 800 epochs . Next , in addition to being invariant to resized cropping , we additionally encouraged the model to be either sensitive ( shown in pink ) or insensitive ( shown in blue ) to a second transformation . We encourage insensitivity by adding the transformation to the SimCLR data augmentation , and sensitivity by predicting it ( see Section 4 ) . We varied the choice of this second transformation . We found that for some transformations , such as horizontal flips and grayscale , insensitivity results in better features , but it is detrimental for transformations such as four-fold rotations , vertical flips , 2x2 jigsaws ( 4 !
= 24 classes ) , four-fold Gaussian blurs ( 4 levels of blurring ) and color inversions . When we encourage sensitivity to these transformations , the trend is reversed . In summary , we observe that if invariance to a particular transformation hurts feature learning , then imposing sensitivity to the same transformation may improve performance . This leads us to conjecture that instead of choosing the features to be only invariant or only sensitive as done in prior work , it may be possible to learn better features by imposing invariance to certain transformations ( e.g. , cropping ) and sensitivity to other transformations ( e.g. , four-fold rotations ) . The concepts of sensitivity and insensitivity are both captured by the mathematical idea of equivariance ( Agrawal et al. , 2015 ; Jayaraman & Grauman , 2015 ; Cohen & Welling , 2016 ) . Let G be a group of transformations . For any g ∈ G , let Tg ( x ) denote the function with which g transforms an input image x . For instance , if G is the group of four-fold rotations , then Tg ( x ) rotates the image x by a multiple of π/2 . Let f be the encoder network that computes the feature representation f ( x ) . I-SSL encourages the property of “ invariance to G , ” which states f ( Tg ( x ) ) = f ( x ) , i.e . the output representation f ( x ) does not vary with Tg . Equivariance , a generalization of invariance , is defined as ∀x : f ( Tg ( x ) ) = T′g ( f ( x ) ) , where T′g is a fixed transformation ( i.e. , without any parameters ) . Intuitively , equivariance encourages the feature representation to change in a well-defined manner in response to the transformation applied to the input . Thus , invariance is a trivial instance of equivariance , where T′g is the identity function , i.e . T′g ( f ( x ) ) = f ( x ) . While there are many possible choices for T′g ( Cohen & Welling , 2016 ; Bronstein et al. , 2021 ) , I-SSL uses only the trivial choice that encourages f to be insensitive to G.
In contrast , if T′_g is not the identity , then f will be sensitive to G and we say that the “ equivariance to G ” will be non-trivial . Therefore , in order to encourage potentially more useful equivariance properties , we generalize SSL to an Equivariant Self-Supervised Learning ( E-SSL ) framework . In our experiments on standard computer vision data , such as the small-scale CIFAR-10 ( Torralba et al. , 2008 ; Krizhevsky , 2009 ) and the large-scale ImageNet ( Deng et al. , 2009 ) , we show that extending I-SSL to E-SSL by also predicting four-fold rotations improves the semantic quality of the representations . We show that this approach works for other transformations too , such as vertical flips , 2x2 jigsaws , four-fold Gaussian blurs and color inversions , but focus on four-fold rotations as the most promising improvement we obtain with initial E-SSL experiments in Figure 1 . We also note that the applications of E-SSL in this paper are task-specific , meaning that the representations from E-SSL may work best for a particular downstream task that benefits from equivariance dictated by the available data . E-SSL can be further extended to applications in science ; in particular , we focus on predictive tasks using ( unlabelled and labelled ) data collected via experiments or simulations . The downstream tasks in prediction problems in science are often fixed and can be aided by incorporating scientific insights . Here , we also explore the generality of E-SSL beyond computer vision , on a different application : regression problems in photonics science , and demonstrate examples where E-SSL is effective over I-SSL . Our contributions can be summarized as follows : • We introduce E-SSL , a generalization of popular SSL methods that highlights the complementary nature of invariance and equivariance . To our knowledge , we are the first to create a method that benefits from such complementarity .
• We improve state-of-the-art SSL methods on CIFAR-10 and ImageNet by encouraging equivariance to four-fold rotations . We also show that E-SSL is more general and works for many other transformations , previously unexplored in related works . • We demonstrate the usefulness of E-SSL beyond computer vision with experiments on regression problems in photonics science . We also show that our method works both for finite and infinite groups . The rest of the paper is organized as follows . In Section 3 we introduce our experimental method for E-SSL . In Section 4 we present our main experiments in computer vision . In Section 2 we elaborate on related work . In Section 5 we provide a discussion around our work that extends our study beyond computer vision . In Section 6 we conclude and point to future work . 2 RELATED WORK . To encourage non-trivial equivariance , we observe that a simple task that predicts the synthetic transformation applied to the input works well and improves I-SSL already ; some prediction tasks create representations that can be transferred to other tasks of interest , such as classification , object detection and segmentation . While prediction tasks alone have been realized successfully before in SSL ( Agrawal et al. , 2015 ; Doersch et al. , 2015 ; Zhang et al. , 2016 ; Misra et al. , 2016 ; Noroozi & Favaro , 2016 ; Zamir et al. , 2016 ; Lee et al. , 2017 ; Mundhenk et al. , 2018 ; Gidaris et al. , 2018 ; Zhang et al. , 2019 ; Zhang , 2020 ) , to our knowledge we are the first to combine simple predictive objectives of synthetic transformations with I-SSL , and successfully improve the semantic quality of representations . We found that the notion of equivariance captures the generality of our method . To improve representations with pretext tasks , Gidaris et al . ( 2018 ) use four-fold rotations prediction as a pretext task for learning useful visual representations via a new model named RotNet . Feng et al .
( 2019 ) learn decoupled representations : one part trained with four-fold rotations prediction and another with non-parametric instance discrimination ( Wu et al. , 2018 ) and invariance to four-fold rotations . Yamaguchi et al . ( 2021 ) use a joint training objective between four-fold rotations prediction and image enhancement prediction . Xiao et al . ( 2020 ) propose to learn representations as follows : for each atomic augmentation from the contrastive learning ' s augmentation policy , they leave it out and project to a new space on which I-SSL encourages invariance to all augmentations but the left-out one . The resulting representation could either be a concatenation of all projected left-out views ' representations , or the representation in the shared space , before the individual projections . Our method differs from the above contributions in that E-SSL is the only hybrid framework that encourages both insensitive representations for some transformations and sensitive representations for others , and it does not require representations to be sensitive and insensitive to a particular transformation at the same time . Thus , what distinguishes our work is the complementary nature of invariance and equivariance for multiple transformations , including finite and infinite groups . To obtain performance gains from transformations , Tian et al . ( 2020 ) study which transformations are the best for contrastive learning through the lens of mutual information . Reed et al . ( 2021 ) use four-fold rotations prediction as an evaluation measure to tune optimal augmentations for contrastive learning . Wang & Qi ( 2021 ) use strong augmentations to improve contrastive learning by matching the distributions of strongly and weakly augmented views ' representation similarities to a memory bank . A growing body of work encourages invariance to domain-agnostic transformations ( Tamkin et al. , 2021 ; Lee et al. , 2021 ; Verma et al.
, 2021 ) or strengthens invariance with regularization ( Foster et al. , 2021 ) . Our framework is different from the above works , because we work with transformations that encourage equivariance beyond invariance . To understand and improve equivariant properties of neural networks , Lenc & Vedaldi ( 2015 ) study emerging equivariant properties of neural networks and ( Cohen & Welling , 2016 ; Bronstein et al. , 2021 ) construct equivariant neural networks . In contrast , our work does not enforce strict equivariance , but only encourages equivariant properties for the encoder network through the choice of the loss function . While strict equivariance is concerned with groups , some of the transformations , such as random resized cropping and Gaussian blurs , may not even form groups , but they can still be analyzed in the E-SSL framework . Thus , ours is a flexible framework , which allows us to consider a variety of transformations and how the encoder might exhibit equivariant properties to them . | This paper proposes a framework which generalizes self-supervised learning (SSL) to also learn equivariant behavior. A family of SSL methods (which the authors name invariant SSL) encourages representations that are invariant to certain transformations, e.g., horizontal flips. As invariance is a special case of equivariance, this paper explores whether encouraging equivariant representations is beneficial. Specifically, the authors focused on four-fold rotations and showed that, combined with existing SSL methods, this leads to improvements in classification performance on CIFAR10 and ImageNet. The authors also showcase an application to photonics science. | SP:6d4f60dfa8532f2c8ceaa655c9f4d297fdf2cf1c
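The four-fold rotation prediction objective that recurs in the paper above (RotNet, E-SSL) amounts to self-labelling: apply T_g for a random g ∈ {0, 1, 2, 3} and train a classifier head to recover g. A minimal sketch of the data side, assuming square images; the encoder and cross-entropy head are omitted and all names here are illustrative.

```python
import numpy as np

def rotation_prediction_batch(images, seed=0):
    """Self-labelled batch for four-fold rotation prediction: rotate each
    image by a random multiple of 90 degrees; the rotation index g is the
    classification target the prediction head must recover."""
    rng = np.random.default_rng(seed)
    gs = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k=int(g)) for img, g in zip(images, gs)])
    return rotated, gs

imgs = np.arange(2 * 8 * 8, dtype=float).reshape(2, 8, 8)
batch, labels = rotation_prediction_batch(imgs)
```

In E-SSL this predictive loss is added alongside the usual invariance (e.g., SimCLR) loss rather than replacing it.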
Path Integral Sampler: A Stochastic Control Approach For Sampling | 1 INTRODUCTION . We are interested in drawing samples from a target density µ̂ = Zµ known up to a normalizing constant Z . Although it has been widely studied in machine learning and statistics , generating asymptotically unbiased samples from such an unnormalized distribution can still be challenging ( Talwar , 2019 ) . In practice , variational inference ( VI ) and Monte Carlo ( MC ) methods are two popular frameworks for sampling . Variational inference employs a density model q , from which samples are easy and efficient to draw , to approximate the target density ( Rezende & Mohamed , 2015 ; Wu et al. , 2020 ) . Two important ingredients for variational inference sampling include a distance metric between q and µ̂ to identify a good q and the importance weight to account for the mismatch between the two distributions . Thus , in variational inference , one needs to access the explicit density of q , which restricts the possible parameterization of q . Indeed , explicit density models that provide samples and probability density , such as autoregressive models and normalizing flows , are widely used in density estimation ( Gao et al. , 2020a ; Nicoli et al. , 2020 ) . However , such models impose special structural constraints on the representation of q . For instance , the expressive power of normalizing flows ( Rezende & Mohamed , 2015 ) is constrained by the requirements that the induced map has to be bijective and its Jacobian needs to be easy to compute ( Wu et al. , 2020 ; Cornish et al. , 2020 ; Grathwohl et al. , 2018 ) . Most MC methods generate samples by iteratively simulating a well-designed Markov chain ( MCMC ) or sampling ancestrally ( MacKay , 2003 ) . Among them , Sequential Monte Carlo and its variants augmented with the annealing trick are regarded as state-of-the-art in certain sampling tasks ( Chopin & Papaspiliopoulos , 2020 ) .
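The constraint that a normalizing flow must be bijective with an easy-to-compute Jacobian comes from the change-of-variables formula log q(x) = log p(z) − log |det ∂f/∂z|, where x = f(z). A one-dimensional affine-flow sketch (illustrative, not from the paper):

```python
import math

def affine_flow_logq(x, a=2.0, b=1.0):
    """log-density of x = a * z + b under a standard normal base z ~ N(0, 1):
    change of variables gives log q(x) = log N(z; 0, 1) - log|a|, z = (x - b) / a."""
    z = (x - b) / a
    log_base = -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)
    return log_base - math.log(abs(a))
```

Because the map is affine, the Jacobian log-determinant is just log|a|; deeper flows stack such invertible layers and sum the per-layer log-determinants, which is exactly the structural constraint the paper contrasts with implicit models.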
Despite their popularity , MCMC methods may suffer from long mixing times . The short-run performance of MCMC can be difficult to analyze and samples often get stuck in local minima ( Nijkamp et al. , 2019 ; Gao et al. , 2020b ) . There are some recent works exploring the possibility of incorporating neural networks to improve MCMC ( Spanbauer et al. , 2020 ; Li et al. , 2020b ) . However , evaluating existing MCMC empirically , let alone designing an objective loss function to train network-powered MCMC , is difficult ( Liu et al. , 2016 ; Gorham & Mackey , 2017 ) . Most existing works in this direction focus only on designing data-aware proposals ( Song et al. , 2017 ; Titsias & Dellaportas , 2019 ) , and training such networks can be challenging without expert knowledge in sampling . In this work , we propose an efficient sampler termed Path Integral Sampler ( PIS ) to generate samples by simulating a stochastic differential equation ( SDE ) in finite steps . Our algorithm is built on the Schrödinger bridge problem ( Pavon , 1989 ; Dai Pra , 1991 ; Léonard , 2014 ; Chen et al. , 2021 ) whose original goal was to infer the most likely evolution of a diffusion given its marginal distributions at two time points . With a proper prior diffusion model , this Schrödinger bridge framework can be adopted for the sampling task . Moreover , it can be reformulated as a stochastic control problem ( Chen et al. , 2016 ) whose terminal cost depends on the target density µ̂ so that the diffusion under optimal control has terminal distribution µ̂ . We model the control policy with a network and develop a method to train it gradually and efficiently . The discrepancy of the learned policy from the optimal policy also provides an evaluation metric for sampling performance . Furthermore , PIS can be made unbiased even with a sub-optimal control policy via the path integral theorem to compute the importance weights of samples .
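The unbiasing mechanism described above, reweighting samples drawn from an imperfect sampler, reduces in the static (non-path) case to self-normalized importance sampling. A sketch with an assumed Gaussian proposal and unnormalized Gaussian target, for illustration only:

```python
import numpy as np

def snis_mean(log_mu_hat, log_q, samples):
    """Self-normalized importance sampling estimate of E_mu[x]:
    weights w_i are proportional to mu_hat(x_i) / q(x_i), so the
    unknown normalization constant Z cancels."""
    logw = log_mu_hat(samples) - log_q(samples)
    w = np.exp(logw - logw.max())           # stabilized in log space
    w /= w.sum()
    return float(np.sum(w * samples))

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 2.0, size=200_000)                 # proposal q = N(0, 4)
log_q = lambda x: -x ** 2 / 8.0 - 0.5 * np.log(8.0 * np.pi)
log_mu_hat = lambda x: -(x - 1.0) ** 2 / 2.0            # unnormalized N(1, 1)
mean_est = snis_mean(log_mu_hat, log_q, xs)             # should be close to 1
```

In PIS the weight of a sample is computed from the whole trajectory via the path integral theorem rather than from a single density ratio, but the cancellation of Z works the same way.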
Compared with VI that uses explicit density models , PIS uses an implicit model and has the advantage of free-form network design . Explicit density models have weaker expressive power and flexibility compared with implicit models , both theoretically and empirically ( Cornish et al. , 2020 ; Chen et al. , 2019 ; Kingma & Welling , 2013 ; Mohamed & Lakshminarayanan , 2016 ) . Compared with MCMC , PIS is more efficient and is able to generate high-quality samples with fewer steps . Besides , the behavior of PIS over finite steps can be analyzed and quantified . We show guaranteed sampling quality in terms of Wasserstein distance from the target density for any given sub-optimal policy . Our algorithm is based on Tzen & Raginsky ( 2019 ) , where the authors establish the connections between generative models with latent diffusion and stochastic control and justify the expressiveness of such models theoretically . How to realize this model with networks and how the method performs on real datasets are unclear in Tzen & Raginsky ( 2019 ) . Other closely related works are Wu et al . ( 2020 ) and Arbel et al . ( 2021 ) , which extend Sequential Monte Carlo ( SMC ) by combining deterministic normalizing flow blocks with stochastic MCMC blocks . To be able to evaluate the importance weights efficiently , the MCMC blocks need to be chosen carefully based on annealed target distributions . In contrast , in PIS one can design expressive architectures freely and train the model end-to-end without the burden of tuning MCMC kernels , resampling , or annealing schedules . An illustration of the advantages of PIS is presented in Fig 1 . We summarize our contributions as follows . 1 . We propose the Path Integral Sampler ( PIS ) , a generic sampler that generates samples by simulating a target-dependent SDE and can be trained with free-form network architectures .
We derive a performance guarantee in terms of the Wasserstein distance to the target density based on the optimality of the learned SDE . 2 . An evaluation metric is provided to quantify the performance of the learned PIS . By minimizing this evaluation metric , PIS can be trained end-to-end . This metric also provides an estimate of the normalization constants of target distributions . 3 . PIS can generate samples without bias even with sub-optimal SDEs by assigning importance weights using path integral theory . 4 . Empirically , PIS achieves state-of-the-art sampling performance in several sampling tasks . 2 SAMPLING AND STOCHASTIC CONTROL PROBLEMS . We begin with a brief introduction to the sampling problem and the stochastic control problem . Throughout , we denote by τ = { x_t , 0 ≤ t ≤ T } a continuous-time stochastic trajectory . 2.1 SAMPLING PROBLEM . We are interested in drawing samples from a target distribution µ(x) = µ̂(x)/Z in R^d where Z is the normalization constant . Many sampling algorithms rely on constructing a stochastic process that drives random particles from an initial distribution ν that is easy to sample from to the target distribution µ . In the variational inference framework , one seeks to construct a parameterized stochastic process to achieve this goal . Denote by Ω = C([0, T]; R^d) the path space consisting of all possible trajectories and by P the measure over Ω induced by a stochastic process with terminal distribution µ at time T . Let Q be the measure induced by a parameterized stochastic process and denote its marginal distribution at T by µ^Q . Then , by the data processing inequality , the Kullback-Leibler ( KL ) divergence between the marginal distributions µ^Q and µ can be bounded as D_KL(µ^Q ‖ µ) ≤ D_KL(Q ‖ P) := ∫_Ω dQ log(dQ/dP) . ( 1 ) Thus , D_KL(Q ‖ P) serves as a performance metric for the sampler , and a small D_KL(Q ‖ P) value corresponds to a good sampler . 2.2 STOCHASTIC CONTROL .
Consider a model characterized by the stochastic differential equation ( SDE ) ( Särkkä & Solin , 2019 ) dx_t = f(t, x_t) dt + g(t, x_t)(u_t dt + dw_t) , x_0 ∼ ν ( 2 ) where the drift term f : R^d → R^d is a vector-valued function , the diffusion coefficient g is a matrix-valued function , x_t and u_t denote the state and the control input respectively , and w_t denotes standard Brownian motion . In stochastic control , the goal is to find a feedback control strategy that minimizes a given cost function . The standard stochastic control problem can be associated with any cost . In this paper , we only consider costs of the form E[ ∫_0^T (1/2) ‖u_t‖² dt + Ψ(x_T) | x_0 ∼ ν ] , ( 3 ) where Ψ represents the terminal cost . The corresponding optimal control problem can be solved via dynamic programming ( Bertsekas et al. , 2000 ) , which amounts to solving the Hamilton-Jacobi-Bellman ( HJB ) equation ( Evans , 1998 ) ∂V_t/∂t + f·∇V_t − (1/2) ∇V_t′ g g′ ∇V_t + (1/2) Tr(g g′ ∇²V_t) = 0 , V_T(·) = Ψ(·) . ( 4 ) The space-time function V_t(x) is known as the cost-to-go function or value function . The optimal policy can be inferred from V_t(x) as ( Pavon , 1989 ) u*_t(x) = −g(t, x)′ ∇V_t(x) . ( 5 ) 3 PATH INTEGRAL SAMPLER . It turns out that , with a proper choice of initial distribution ν and terminal loss function Ψ , the stochastic control problem coincides with the sampling problem , and the optimal policy drives samples from ν to µ perfectly . The process under optimal control can be viewed as the posterior of the uncontrolled dynamics conditioned on the target distribution , as illustrated in Fig 2 . Throughout , we denote by Q^u the path measure associated with control policy u . We also denote by µ_0 the terminal distribution of the uncontrolled process Q^0 . For ease of presentation , we begin with sampling from a normalized density µ , and then generalize the results to the unnormalized µ̂ in Section 3.4 . 3.1 PATH INTEGRAL AND VALUE FUNCTION .
Due to the special cost structure , the nonlinear HJB eq ( 4 ) can be transformed into a linear partial differential equation ( PDE ) ∂φ_t/∂t + f·∇φ_t + (1/2) Tr(g g′ ∇²φ_t) = 0 , φ_T(·) = exp{−Ψ(·)} ( 6 ) by the logarithmic transformation ( Särkkä & Solin , 2019 ) V_t(x) = −log φ_t(x) . By the celebrated Feynman-Kac formula ( Øksendal , 2003 ) , the above has the solution φ_t(x) = E_{Q^0}[ exp(−Ψ(x_T)) | x_t = x ] . ( 7 ) We remark that eq ( 7 ) implies that the optimal value function can be evaluated without knowing the optimal policy , since the above expectation is with respect to the uncontrolled process Q^0 . This is exactly the Path Integral control theory ( Thijssen & Kappen , 2015 ) . Furthermore , the optimal control at ( t , x ) is u*_t(x) = g(t, x)′ ∇log φ_t(x) = lim_{s↘t} E_{Q^0}[ exp{−Ψ(x_T)} ∫_t^s dw_t | x_t = x ] / ( (s − t) E_{Q^0}[ exp{−Ψ(x_T)} | x_t = x ] ) , ( 8 ) meaning that u*_t(x) can also be estimated from uncontrolled trajectories . | The paper presents a control approach to sampling. It is proposed to use a controlled diffusion initialized at time t=0 to sample at a given time t=T from a given unnormalized target distribution. This is achieved by minimizing a forward KL between two suitable diffusions. The resulting expression is simple as it is given by the integrated squared control plus a final cost term. The authors propose to parameterize the control by a neural network and minimize the KL using stochastic gradient techniques. In experiments, this novel method appears to outperform some recent alternatives. | SP:6e003c72a3418108df2b67a022293d1479a95cbc
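The controlled SDE in eq. (2) of the paper above can be simulated with an Euler-Maruyama scheme. A minimal sketch with scalar state and scalar g (the paper allows vector states and matrix-valued g):

```python
import numpy as np

def euler_maruyama(f, g, u, x0, T=1.0, n_steps=100, rng=None):
    """Simulate dx_t = f(t, x_t) dt + g(t, x_t) (u_t dt + dw_t) on [0, T]
    with a scalar state, returning x_T."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    x = float(x0)
    for i in range(n_steps):
        t = i * dt
        dw = rng.normal(scale=np.sqrt(dt))      # Brownian increment
        x += f(t, x) * dt + g(t, x) * (u(t, x) * dt + dw)
    return x

# Sanity check: with g = 0 the dynamics reduce to the ODE dx/dt = f(t, x),
# so f(t, x) = -x starting from x0 = 1 should land near exp(-1) at T = 1.
x_T = euler_maruyama(lambda t, x: -x, lambda t, x: 0.0, lambda t, x: 0.0, x0=1.0)
```

In PIS the control u would be a trained network; here it is a placeholder function of (t, x).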
Path Integral Sampler: A Stochastic Control Approach For Sampling | 1 INTRODUCTION . We are interested in drawing samples from a target density µ̂ = Zµ known up to a normalizing constant Z . Although it has been widely studied in machine learning and statistics , generating asymptotically unbiased samples from such an unnormalized distribution can still be challenging ( Talwar , 2019 ) . In practice , variational inference ( VI ) and Monte Carlo ( MC ) methods are two popular frameworks for sampling . Variational inference employs a density model q , from which samples are easy and efficient to draw , to approximate the target density ( Rezende & Mohamed , 2015 ; Wu et al. , 2020 ) . Two important ingredients for variational inference sampling include a distance metric between q and µ̂ to identify a good q and the importance weight to account for the mismatch between the two distributions . Thus , in variational inference , one needs to access the explicit density of q , which restricts the possible parameterization of q . Indeed , explicit density models that provide samples and probability density , such as autoregressive models and normalizing flows , are widely used in density estimation ( Gao et al. , 2020a ; Nicoli et al. , 2020 ) . However , such models impose special structural constraints on the representation of q . For instance , the expressive power of normalizing flows ( Rezende & Mohamed , 2015 ) is constrained by the requirements that the induced map has to be bijective and its Jacobian needs to be easy to compute ( Wu et al. , 2020 ; Cornish et al. , 2020 ; Grathwohl et al. , 2018 ) . Most MC methods generate samples by iteratively simulating a well-designed Markov chain ( MCMC ) or sampling ancestrally ( MacKay , 2003 ) . Among them , Sequential Monte Carlo and its variants augmented with the annealing trick are regarded as state-of-the-art in certain sampling tasks ( Chopin & Papaspiliopoulos , 2020 ) .
Despite their popularity , MCMC methods may suffer from long mixing times . The short-run performance of MCMC can be difficult to analyze and samples often get stuck in local minima ( Nijkamp et al. , 2019 ; Gao et al. , 2020b ) . There are some recent works exploring the possibility of incorporating neural networks to improve MCMC ( Spanbauer et al. , 2020 ; Li et al. , 2020b ) . However , evaluating existing MCMC empirically , let alone designing an objective loss function to train network-powered MCMC , is difficult ( Liu et al. , 2016 ; Gorham & Mackey , 2017 ) . Most existing works in this direction focus only on designing data-aware proposals ( Song et al. , 2017 ; Titsias & Dellaportas , 2019 ) , and training such networks can be challenging without expert knowledge in sampling . In this work , we propose an efficient sampler termed Path Integral Sampler ( PIS ) to generate samples by simulating a stochastic differential equation ( SDE ) in finite steps . Our algorithm is built on the Schrödinger bridge problem ( Pavon , 1989 ; Dai Pra , 1991 ; Léonard , 2014 ; Chen et al. , 2021 ) whose original goal was to infer the most likely evolution of a diffusion given its marginal distributions at two time points . With a proper prior diffusion model , this Schrödinger bridge framework can be adopted for the sampling task . Moreover , it can be reformulated as a stochastic control problem ( Chen et al. , 2016 ) whose terminal cost depends on the target density µ̂ so that the diffusion under optimal control has terminal distribution µ̂ . We model the control policy with a network and develop a method to train it gradually and efficiently . The discrepancy of the learned policy from the optimal policy also provides an evaluation metric for sampling performance . Furthermore , PIS can be made unbiased even with a sub-optimal control policy via the path integral theorem to compute the importance weights of samples .
Compared with VI that uses explicit density models , PIS uses an implicit model and has the advantage of free-form network design . Explicit density models have weaker expressive power and flexibility compared with implicit models , both theoretically and empirically ( Cornish et al. , 2020 ; Chen et al. , 2019 ; Kingma & Welling , 2013 ; Mohamed & Lakshminarayanan , 2016 ) . Compared with MCMC , PIS is more efficient and is able to generate high-quality samples with fewer steps . Besides , the behavior of PIS over finite steps can be analyzed and quantified . We show guaranteed sampling quality in terms of Wasserstein distance from the target density for any given sub-optimal policy . Our algorithm is based on Tzen & Raginsky ( 2019 ) , where the authors establish the connections between generative models with latent diffusion and stochastic control and justify the expressiveness of such models theoretically . How to realize this model with networks and how the method performs on real datasets are unclear in Tzen & Raginsky ( 2019 ) . Other closely related works are Wu et al . ( 2020 ) and Arbel et al . ( 2021 ) , which extend Sequential Monte Carlo ( SMC ) by combining deterministic normalizing flow blocks with stochastic MCMC blocks . To be able to evaluate the importance weights efficiently , the MCMC blocks need to be chosen carefully based on annealed target distributions . In contrast , in PIS one can design expressive architectures freely and train the model end-to-end without the burden of tuning MCMC kernels , resampling , or annealing schedules . An illustration of the advantages of PIS is presented in Fig 1 . We summarize our contributions as follows . 1 . We propose the Path Integral Sampler ( PIS ) , a generic sampler that generates samples by simulating a target-dependent SDE and can be trained with free-form network architectures .
We derive a performance guarantee in terms of the Wasserstein distance to the target density based on the optimality of the learned SDE . 2 . An evaluation metric is provided to quantify the performance of the learned PIS . By minimizing this evaluation metric , PIS can be trained end-to-end . This metric also provides an estimate of the normalization constants of target distributions . 3 . PIS can generate samples without bias even with sub-optimal SDEs by assigning importance weights using path integral theory . 4 . Empirically , PIS achieves state-of-the-art sampling performance in several sampling tasks . 2 SAMPLING AND STOCHASTIC CONTROL PROBLEMS . We begin with a brief introduction to the sampling problem and the stochastic control problem . Throughout , we denote by τ = { x_t , 0 ≤ t ≤ T } a continuous-time stochastic trajectory . 2.1 SAMPLING PROBLEM . We are interested in drawing samples from a target distribution µ(x) = µ̂(x)/Z in R^d where Z is the normalization constant . Many sampling algorithms rely on constructing a stochastic process that drives random particles from an initial distribution ν that is easy to sample from to the target distribution µ . In the variational inference framework , one seeks to construct a parameterized stochastic process to achieve this goal . Denote by Ω = C([0, T]; R^d) the path space consisting of all possible trajectories and by P the measure over Ω induced by a stochastic process with terminal distribution µ at time T . Let Q be the measure induced by a parameterized stochastic process and denote its marginal distribution at T by µ^Q . Then , by the data processing inequality , the Kullback-Leibler ( KL ) divergence between the marginal distributions µ^Q and µ can be bounded as D_KL(µ^Q ‖ µ) ≤ D_KL(Q ‖ P) := ∫_Ω dQ log(dQ/dP) . ( 1 ) Thus , D_KL(Q ‖ P) serves as a performance metric for the sampler , and a small D_KL(Q ‖ P) value corresponds to a good sampler . 2.2 STOCHASTIC CONTROL .
Consider a model characterized by the stochastic differential equation ( SDE ) ( Särkkä & Solin , 2019 ) dx_t = f(t, x_t) dt + g(t, x_t)(u_t dt + dw_t) , x_0 ∼ ν ( 2 ) where the drift term f : R^d → R^d is a vector-valued function , the diffusion coefficient g is a matrix-valued function , x_t and u_t denote the state and the control input respectively , and w_t denotes standard Brownian motion . In stochastic control , the goal is to find a feedback control strategy that minimizes a given cost function . The standard stochastic control problem can be associated with any cost . In this paper , we only consider costs of the form E[ ∫_0^T (1/2) ‖u_t‖² dt + Ψ(x_T) | x_0 ∼ ν ] , ( 3 ) where Ψ represents the terminal cost . The corresponding optimal control problem can be solved via dynamic programming ( Bertsekas et al. , 2000 ) , which amounts to solving the Hamilton-Jacobi-Bellman ( HJB ) equation ( Evans , 1998 ) ∂V_t/∂t + f·∇V_t − (1/2) ∇V_t′ g g′ ∇V_t + (1/2) Tr(g g′ ∇²V_t) = 0 , V_T(·) = Ψ(·) . ( 4 ) The space-time function V_t(x) is known as the cost-to-go function or value function . The optimal policy can be inferred from V_t(x) as ( Pavon , 1989 ) u*_t(x) = −g(t, x)′ ∇V_t(x) . ( 5 ) 3 PATH INTEGRAL SAMPLER . It turns out that , with a proper choice of initial distribution ν and terminal loss function Ψ , the stochastic control problem coincides with the sampling problem , and the optimal policy drives samples from ν to µ perfectly . The process under optimal control can be viewed as the posterior of the uncontrolled dynamics conditioned on the target distribution , as illustrated in Fig 2 . Throughout , we denote by Q^u the path measure associated with control policy u . We also denote by µ_0 the terminal distribution of the uncontrolled process Q^0 . For ease of presentation , we begin with sampling from a normalized density µ , and then generalize the results to the unnormalized µ̂ in Section 3.4 . 3.1 PATH INTEGRAL AND VALUE FUNCTION .
Due to the special cost structure , the nonlinear HJB eq ( 4 ) can be transformed into a linear partial differential equation ( PDE ) ∂φ_t/∂t + f·∇φ_t + (1/2) Tr(g g′ ∇²φ_t) = 0 , φ_T(·) = exp{−Ψ(·)} ( 6 ) by the logarithmic transformation ( Särkkä & Solin , 2019 ) V_t(x) = −log φ_t(x) . By the celebrated Feynman-Kac formula ( Øksendal , 2003 ) , the above has the solution φ_t(x) = E_{Q^0}[ exp(−Ψ(x_T)) | x_t = x ] . ( 7 ) We remark that eq ( 7 ) implies that the optimal value function can be evaluated without knowing the optimal policy , since the above expectation is with respect to the uncontrolled process Q^0 . This is exactly the Path Integral control theory ( Thijssen & Kappen , 2015 ) . Furthermore , the optimal control at ( t , x ) is u*_t(x) = g(t, x)′ ∇log φ_t(x) = lim_{s↘t} E_{Q^0}[ exp{−Ψ(x_T)} ∫_t^s dw_t | x_t = x ] / ( (s − t) E_{Q^0}[ exp{−Ψ(x_T)} | x_t = x ] ) , ( 8 ) meaning that u*_t(x) can also be estimated from uncontrolled trajectories . | The paper proposes an algorithm called Path Integral Sampler (PIS) for sampling from unnormalized distributions by parameterizing the control policy in the Schrödinger bridge problem with neural networks and supplying it with the gradient information from the target density. Performance guarantees are provided, and experiments on several illustrative toy problems as well as on sampling molecular structures and latent space of VAEs are conducted. The method is shown to produce high-quality samples from the posterior, demonstrating state-of-the-art performance. | SP:6e003c72a3418108df2b67a022293d1479a95cbc
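Eq. (7) in the text above says the optimal value function can be estimated by rolling out the uncontrolled process. A Monte Carlo sketch for the simplest uncontrolled dynamics f = 0, g = 1, so that x_T given x_t = x is N(x, T − t); with Ψ(y) = y²/2 the expectation has the closed form (1 + T − t)^(−1/2) · exp(−x²/(2(1 + T − t))):

```python
import numpy as np

def phi_mc(x, t, psi, T=1.0, n_paths=20_000, seed=0):
    """Monte Carlo estimate of phi_t(x) = E_{Q0}[exp(-Psi(x_T)) | x_t = x]
    for the uncontrolled process dx = dw, where x_T | x_t = x ~ N(x, T - t)."""
    rng = np.random.default_rng(seed)
    x_T = x + rng.normal(scale=np.sqrt(T - t), size=n_paths)
    return float(np.mean(np.exp(-psi(x_T))))

est = phi_mc(0.0, 0.0, lambda y: y ** 2 / 2.0)   # closed form gives 1/sqrt(2)
```

For general f and g the terminal sample would come from simulating the uncontrolled SDE step by step rather than from a single Gaussian draw.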
Path Integral Sampler: A Stochastic Control Approach For Sampling | 1 INTRODUCTION . We are interested in drawing samples from a target density µ̂ = Zµ known up to a normalizing constant Z . Although it has been widely studied in machine learning and statistics , generating asymptotically unbiased samples from such an unnormalized distribution can still be challenging ( Talwar , 2019 ) . In practice , variational inference ( VI ) and Monte Carlo ( MC ) methods are two popular frameworks for sampling . Variational inference employs a density model q , from which samples are easy and efficient to draw , to approximate the target density ( Rezende & Mohamed , 2015 ; Wu et al. , 2020 ) . Two important ingredients for variational inference sampling include a distance metric between q and µ̂ to identify a good q and the importance weight to account for the mismatch between the two distributions . Thus , in variational inference , one needs to access the explicit density of q , which restricts the possible parameterization of q . Indeed , explicit density models that provide samples and probability density , such as autoregressive models and normalizing flows , are widely used in density estimation ( Gao et al. , 2020a ; Nicoli et al. , 2020 ) . However , such models impose special structural constraints on the representation of q . For instance , the expressive power of normalizing flows ( Rezende & Mohamed , 2015 ) is constrained by the requirements that the induced map has to be bijective and its Jacobian needs to be easy to compute ( Wu et al. , 2020 ; Cornish et al. , 2020 ; Grathwohl et al. , 2018 ) . Most MC methods generate samples by iteratively simulating a well-designed Markov chain ( MCMC ) or sampling ancestrally ( MacKay , 2003 ) . Among them , Sequential Monte Carlo and its variants augmented with the annealing trick are regarded as state-of-the-art in certain sampling tasks ( Chopin & Papaspiliopoulos , 2020 ) .
Despite their popularity, MCMC methods may suffer from long mixing times. The short-run performance of MCMC can be difficult to analyze, and samples often get stuck in local minima (Nijkamp et al., 2019; Gao et al., 2020b). There are some recent works exploring the possibility of incorporating neural networks to improve MCMC (Spanbauer et al., 2020; Li et al., 2020b). However, evaluating existing MCMC methods empirically, not to mention designing an objective loss function to train network-powered MCMC, is difficult (Liu et al., 2016; Gorham & Mackey, 2017). Most existing works in this direction focus only on designing data-aware proposals (Song et al., 2017; Titsias & Dellaportas, 2019), and training such networks can be challenging without expert knowledge in sampling. In this work, we propose an efficient sampler, termed Path Integral Sampler (PIS), that generates samples by simulating a stochastic differential equation (SDE) in finitely many steps. Our algorithm is built on the Schrödinger bridge problem (Pavon, 1989; Dai Pra, 1991; Léonard, 2014; Chen et al., 2021), whose original goal was to infer the most likely evolution of a diffusion given its marginal distributions at two time points. With a proper prior diffusion model, this Schrödinger bridge framework can be adopted for the sampling task. Moreover, it can be reformulated as a stochastic control problem (Chen et al., 2016) whose terminal cost depends on the target density µ̂, so that the diffusion under optimal control has terminal distribution µ̂. We model the control policy with a network and develop a method to train it gradually and efficiently. The discrepancy of the learned policy from the optimal policy also provides an evaluation metric for sampling performance. Furthermore, PIS can be made unbiased even with a sub-optimal control policy, via the path integral theorem, by computing importance weights for the samples.
Compared with VI approaches that use explicit density models, PIS uses an implicit model and has the advantage of free-form network design. Explicit density models have weaker expressive power and flexibility compared with implicit models, both theoretically and empirically (Cornish et al., 2020; Chen et al., 2019; Kingma & Welling, 2013; Mohamed & Lakshminarayanan, 2016). Compared with MCMC, PIS is more efficient and is able to generate high-quality samples with fewer steps. Besides, the behavior of PIS over finite steps can be analyzed and quantified. We show guaranteed sampling quality in terms of Wasserstein distance from the target density for any given sub-optimal policy. Our algorithm is based on Tzen & Raginsky (2019), where the authors establish the connections between generative models with latent diffusion and stochastic control, and theoretically justify the expressiveness of such models. How to realize this model with networks, and how the method performs on real datasets, are unclear in Tzen & Raginsky (2019). Other closely related works are Wu et al. (2020); Arbel et al. (2021), which extend Sequential Monte Carlo (SMC) by combining deterministic normalizing flow blocks with stochastic MCMC blocks. To be able to evaluate the importance weights efficiently, the MCMC blocks need to be chosen carefully based on annealed target distributions. In contrast, in PIS one can design expressive architectures freely and train the model end-to-end without the burden of tuning MCMC kernels, resampling, or annealing schedules. An illustration of the advantages of PIS is presented in Fig. 1. We summarize our contributions as follows. 1. We propose Path Integral Sampler (PIS), a generic sampler that generates samples through simulating a target-dependent SDE and that can be trained with free-form architecture network design.
We derive performance guarantees in terms of the Wasserstein distance to the target density, based on the optimality of the learned SDE. 2. An evaluation metric is provided to quantify the performance of the learned PIS. By minimizing this evaluation metric, PIS can be trained end-to-end. This metric also provides estimates of the normalization constants of target distributions. 3. PIS can generate samples without bias even with sub-optimal SDEs by assigning importance weights using path integral theory. 4. Empirically, PIS achieves state-of-the-art sampling performance in several sampling tasks. 2 SAMPLING AND STOCHASTIC CONTROL PROBLEMS. We begin with a brief introduction to the sampling problem and the stochastic control problem. Throughout, we denote by τ = {x_t, 0 ≤ t ≤ T} a continuous-time stochastic trajectory. 2.1 SAMPLING PROBLEM. We are interested in drawing samples from a target distribution µ(x) = µ̂(x)/Z in R^d, where Z is the normalization constant. Many sampling algorithms rely on constructing a stochastic process that drives the random particles from an initial distribution ν, which is easy to sample from, to the target distribution µ. In the variational inference framework, one seeks to construct a parameterized stochastic process that achieves this goal. Denote by Ω = C([0, T]; R^d) the path space consisting of all possible trajectories, and by P the measure over Ω induced by a stochastic process with terminal distribution µ at time T. Let Q be the measure induced by a parameterized stochastic process and denote its marginal distribution at T by µ^Q. Then, by the data processing inequality, the Kullback-Leibler (KL) divergence between the marginal distributions µ^Q and µ can be bounded by
$$D_{\mathrm{KL}}(\mu^Q \| \mu) \le D_{\mathrm{KL}}(Q \| P) := \int_\Omega \mathrm{d}Q \log \frac{\mathrm{d}Q}{\mathrm{d}P}. \quad (1)$$
Thus, D_KL(Q‖P) serves as a performance metric for the sampler, and a small D_KL(Q‖P) value corresponds to a good sampler. 2.2 STOCHASTIC CONTROL.
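The bound in eq. (1) is an instance of the data processing inequality, and it follows in one line from the chain rule of relative entropy: decomposing each path measure into its time-T marginal and the conditional law of the path given x_T gives

```latex
D_{\mathrm{KL}}(Q \,\|\, P)
  = D_{\mathrm{KL}}(\mu^{Q} \,\|\, \mu)
  + \mathbb{E}_{x_T \sim \mu^{Q}}\!\left[
      D_{\mathrm{KL}}\big( Q(\cdot \mid x_T) \,\big\|\, P(\cdot \mid x_T) \big)
    \right]
  \;\ge\; D_{\mathrm{KL}}(\mu^{Q} \,\|\, \mu),
```

where the inequality holds because the second term, being an expectation of KL divergences, is nonnegative.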
Consider a model characterized by the stochastic differential equation (SDE) (Särkkä & Solin, 2019)
$$\mathrm{d}x_t = f(t, x_t)\,\mathrm{d}t + g(t, x_t)(u_t\,\mathrm{d}t + \mathrm{d}w_t), \qquad x_0 \sim \nu, \quad (2)$$
where the drift term f : R^d → R^d is a vector-valued function, the diffusion coefficient g is a matrix-valued function, x_t, u_t denote the state and control input respectively, and w_t denotes standard Brownian motion. In stochastic control, the goal is to find a feedback control strategy that minimizes a given cost function. The standard stochastic control problem can be associated with any cost. In this paper, we only consider costs of the form
$$\mathbb{E}\!\left[\int_0^T \frac{1}{2}\|u_t\|^2\,\mathrm{d}t + \Psi(x_T) \,\middle|\, x_0 \sim \nu\right], \quad (3)$$
where Ψ represents the terminal cost. The corresponding optimal control problem can be solved via dynamic programming (Bertsekas et al., 2000), which amounts to solving the Hamilton-Jacobi-Bellman (HJB) equation (Evans, 1998)
$$\frac{\partial V_t}{\partial t} + f \cdot \nabla V_t - \frac{1}{2}\nabla V_t' gg' \nabla V_t + \frac{1}{2}\operatorname{Tr}\!\left(gg'\nabla^2 V_t\right) = 0, \qquad V_T(\cdot) = \Psi(\cdot). \quad (4)$$
The space-time function V_t(x) is known as the cost-to-go function or value function. The optimal policy can be inferred from V_t(x) as (Pavon, 1989)
$$u_t^*(x) = -g(t, x)'\nabla V_t(x). \quad (5)$$
3 PATH INTEGRAL SAMPLER. It turns out that, with a proper choice of initial distribution ν and terminal loss function Ψ, the stochastic control problem coincides with the sampling problem, and the optimal policy drives samples from ν to µ perfectly. The process under optimal control can be viewed as the posterior of the uncontrolled dynamics conditioned on the target distribution, as illustrated in Fig. 2. Throughout, we denote by Q^u the path measure associated with control policy u. We also denote by µ_0 the terminal distribution of the uncontrolled process Q^0. For ease of presentation, we begin with sampling from a normalized density µ, and then generalize the results to an unnormalized µ̂ in Section 3.4. 3.1 PATH INTEGRAL AND VALUE FUNCTION.
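For concreteness, the controlled dynamics (2) and the cost (3) can be discretized with a simple Euler-Maruyama scheme. The sketch below assumes a scalar (or diagonal) diffusion coefficient g for simplicity; it illustrates the control problem being set up here, not the paper's training code, and all names are placeholders.

```python
import numpy as np

def simulate_controlled_sde(policy, f, g, psi, x0, T=1.0, n_steps=100, rng=None):
    """Euler-Maruyama discretization of the controlled SDE (2),
    dx_t = f dt + g (u dt + dw_t), accumulating the control cost (3).
    Assumes a scalar (or diagonal) diffusion coefficient g for simplicity."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    cost = 0.0
    for k in range(n_steps):
        t = k * dt
        u = policy(t, x)                              # feedback control u_t(x)
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        cost += 0.5 * float(u @ u) * dt               # running cost 1/2 ||u_t||^2
        x = x + f(t, x) * dt + g(t, x) * (u * dt + dw)
    cost += psi(x)                                    # terminal cost Psi(x_T)
    return x, cost
```

Averaging the returned cost over many simulated trajectories gives a Monte Carlo estimate of the objective (3) for a candidate policy.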
Due to the special cost structure, the nonlinear HJB equation (4) can be transformed into a linear partial differential equation (PDE)
$$\frac{\partial \phi_t}{\partial t} + f \cdot \nabla \phi_t + \frac{1}{2}\operatorname{Tr}\!\left(gg'\nabla^2 \phi_t\right) = 0, \qquad \phi_T(\cdot) = \exp\{-\Psi(\cdot)\} \quad (6)$$
by the logarithmic transformation (Särkkä & Solin, 2019) $V_t(x) = -\log \phi_t(x)$. By the celebrated Feynman-Kac formula (Øksendal, 2003), the above has solution
$$\phi_t(x) = \mathbb{E}_{Q^0}\!\left[\exp(-\Psi(x_T)) \mid x_t = x\right]. \quad (7)$$
We remark that eq. (7) implies that the optimal value function can be evaluated without knowing the optimal policy, since the above expectation is with respect to the uncontrolled process $Q^0$. This is exactly the Path Integral control theory (Thijssen & Kappen, 2015). Furthermore, the optimal control at $(t, x)$ is
$$u_t^*(x) = g(t,x)'\nabla \log \phi_t(x) = \lim_{s \searrow t} \frac{\mathbb{E}_{Q^0}\!\left\{\exp\{-\Psi(x_T)\}\int_t^s \mathrm{d}w_t \,\middle|\, x_t = x\right\}}{(s-t)\,\mathbb{E}_{Q^0}\!\left\{\exp\{-\Psi(x_T)\} \mid x_t = x\right\}}, \quad (8)$$
meaning that $u_t^*(x)$ can also be estimated by uncontrolled trajectories. | The authors propose a Path Integral Sampler based on the Schrödinger bridge. They turn the sampling problem into a stochastic control problem and use a neural network to parameterize the control policy. In order to find the optimal control policy, several approximation techniques are introduced. The authors further propose an importance sampling scheme to handle the case of a suboptimal policy. They provide convergence guarantees under the Wasserstein distance and demonstrate the proposed method on synthetic datasets, alanine dipeptide, and binary MNIST. | SP:6e003c72a3418108df2b67a022293d1479a95cbc |
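Eq. (7) suggests a direct Monte Carlo estimator of the value function: average exp(-Ψ(x_T)) over uncontrolled trajectories started at x. The sketch below takes the simplest uncontrolled prior, f = 0 and g = I, for which x_T given x_t = x is exactly Gaussian and no time-stepping is needed; the names and this choice of prior are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def phi_hat(x, psi, t=0.0, T=1.0, n_paths=10_000, rng=None):
    """Monte Carlo estimate of phi_t(x) = E_{Q0}[exp(-Psi(x_T)) | x_t = x] (eq. 7)
    for the uncontrolled prior dx_t = dw_t, i.e. f = 0, g = I.
    Under this prior, x_T | x_t = x is N(x, (T - t) I), so x_T is sampled exactly."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x, dtype=float))
    xT = x + np.sqrt(T - t) * rng.normal(size=(n_paths, x.size))  # exact prior samples
    return float(np.exp(-psi(xT)).mean())
```

As a sanity check, for Ψ(x) = x²/2 in one dimension and t = 0, T = 1, the estimator targets E[exp(-Z²/2)] with Z ~ N(0, 1), which equals 1/√2 ≈ 0.707.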
CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability | 1 INTRODUCTION. Despite the indisputable successes of machine learning, recent concerns have surfaced over the field heading towards a potential reproducibility crisis (Henderson et al., 2018), as previously identified in other scientific disciplines (Baker, 2016). Although replication may largely be assured through modern software tools, moving beyond pure replication towards reproducibility with a factual interpretation of results is accompanied by tremendous remaining challenges. Specifically for machine learning, recent reproducibility initiatives (Pineau et al., 2021) nicely summarize how differences in used data, mis- or under-specification of training and evaluation metrics, along with frequent over-claims of conclusions beyond gathered empirical evidence, impose persisting obstacles in our current literature. Similar conclusions have been reached in related works focused on specifics of reinforcement learning (Li & Talwalkar, 2019), neural architecture search (Lindauer & Hutter, 2020), human-centered machine learning model cards (Mitchell et al., 2019), or general dataset sheet specifications (Bender & Friedman, 2018; Gebru et al., 2018), all of which make valuable propositions to overcome existing gaps through the creation of standardized best-practice (check-)lists. It should thus come as no surprise that the emerging work in continual learning is no stranger to the above challenges. On the surface, continual learning has the intuitive objective of accumulating information and learning concepts over time, typically without the ability to revisit previous experiences; it is frequently also referred to as lifelong learning (Chen & Liu, 2018).
However, there is no unique agreed-upon formal definition beyond the idea to continuously observe data, where the time component holds some practical implication on changes in the objective, the evolution of concept labels, or general statistical shifts in the data distribution. The majority of modern surveys ambiguously conflate these factors as a sequence of tasks (Parisi et al., 2019; Lesort et al., 2019; Hadsell et al., 2020; Lesort et al., 2020; Biesialska et al., 2021). Much in contrast to prevalent static benchmarks, the question of reproducibility, interpretation of results, and overall comparability now becomes an even more complex function of nuances in employed data, training, and evaluation protocols. The latter fact has sparked various attempts at consolidation or critique. On the one hand, several works have made important suggestions for continual learning desiderata with respect to evaluation protocols (Farquhar & Gal, 2018; Díaz-Rodríguez et al., 2018; Kemker et al., 2018) and categorization of incremental training set-ups (van de Ven & Tolias, 2019; Lesort et al., 2021). On the other hand, empirical assessments have demonstrated that performance and comparability break down rapidly if often unexposed protocol aspects deviate (Pfülb & Gepperth, 2019; Delange et al., 2021). Following the broader exposition of the recent reviews of Mundt et al. (2020a) and Delange et al. (2021), evaluation becomes convoluted because intricate combinations of elements originating from various related machine learning paradigms affect continual learning practice. As the number of publications across these paradigms increases, see Figure 1, reproducibility, comparability, and interpretation of results thus also become increasingly difficult.
In this work, we follow in the footsteps of previous reproducibility works to promote transparency and comparability of reported results for the non-trivial continual learning case. Rather than adding to the ongoing discussions on desiderata or violation of assumptions, we posit that the development of distinct applications warrants the existence of numerous continual scenarios. Based on respectively highlighted evaluation nuances and their implications when absorbed into continual learning, we derive the Continual Learning EValuation Assessment (CLEVA) Compass. The CLEVA-Compass provides a compact visual representation with a unique two-fold function: 1. it presents an intuitive chart to identify a work's priorities and context in the broader literature landscape; 2. it enables a direct way to determine how methods differ in terms of practically reported metrics, where they resemble each other, and what elements would be missing towards a fairer comparison. In the remainder of the paper, we start by sketching the scope of continual learning in Section 2, first by outlining the differences when going from static benchmarks to continual learning, followed by an exposition of evaluation nuances emanating from related machine learning paradigms. In Section 3, we then proceed to introduce the CLEVA-Compass and illustrate its necessity and utility using several continual learning works. Before concluding, we summarize auxiliary best practices proposed in related prior works and finally discuss limitations and unintended use of the CLEVA-Compass.
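The compass's second function, determining where methods differ, where they resemble each other, and what is missing for a fairer comparison, can be mimicked with a plain data structure. The sketch below is an illustration only; the dimension names in the example are invented placeholders, not the official CLEVA-Compass axes.

```python
def compare_compasses(a, b):
    """Compare two 'compass' annotations, each a dict mapping an evaluation
    dimension to a reported level (0 = not reported, higher = more thorough).
    Returns dimensions shared at an equal non-zero level, dimensions where the
    methods differ, and dimensions missing (unreported) for either method."""
    dims = sorted(set(a) | set(b))
    shared = [d for d in dims if a.get(d, 0) == b.get(d, 0) != 0]
    differ = {d: (a.get(d, 0), b.get(d, 0)) for d in dims if a.get(d, 0) != b.get(d, 0)}
    missing = [d for d in dims if a.get(d, 0) == 0 or b.get(d, 0) == 0]
    return shared, differ, missing
```

The `missing` list corresponds to the compass's role of flagging what would still need to be reported before two methods can be compared fairly.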
To encourage general adoption, both for prospective authors to add methods and for application-oriented practitioners to identify suitable methods, we supply various utilities: a template for direct inclusion into LaTeX, a Python script, and the CLEVA-Compass Graphical User Interface (GUI), together with a repository to aggregate methods' compasses, all detailed in Appendix C and publicly available at https://github.com/ml-research/CLEVA-Compass . 2 THE SCOPE AND CHALLENGES OF CONTINUAL LEARNING EVALUATION. As already argued, there is no unique agreed-upon formal definition of continual learning. One of the few common denominators across the continual machine learning literature is the understanding that catastrophic interference imposes a persisting threat to training models continually over time (McCloskey & Cohen, 1989; Ratcliff, 1990). That is, continuously employing stochastic optimization algorithms to train models on sequential (sub-)sets of varying data instances, without being able to constantly revisit older experiences, comes with the challenge of previously learned parameters being overwritten. As this undesired phenomenon is in stark contrast with the accumulation of knowledge over time, preventing model forgetting is thus typically placed at the center of continual learning. Consequently, the focus has been placed on a model-centric perspective, e.g. by proposing algorithms to alleviate forgetting through rehearsal of data prototypes in training, employing various forms of generative replay, imposing parameter and loss penalties over time, isolating task-specific parameters, regularizing the entire model's functional, or treating it from a perspective of dynamically adjustable capacity. We defer to review papers for taxonomies and details of algorithms (Parisi et al., 2019; Lesort et al., 2019; Hadsell et al., 2020; Lesort et al., 2020; Biesialska et al., 2021).
In practice , however , alleviating forgetting is but one of a myriad of challenges in real-world formulations of continual learning . Several orthogonal questions inevitably emerge , which either receive less attention across literature assessment or are frequently not made sufficiently explicit . A fair assessment and factual interpretation of results is rendered increasingly difficult . To provide the necessary background behind the statement , we briefly discuss newly arising questions when shifting from a static benchmark to a continual perspective and then proceed to contextualize conceivable evaluation protocol nuances in anticipation of our CLEVA-Compass . 2.1 FROM STATIC TO CONTINUAL MACHINE LEARNING WORKFLOW . To highlight the additional challenges in continual learning consider our visualization in Figure 2 , depicting the benchmark inspired machine learning workflow as advocated by Google Cloud ( 2021 ) . In the center , we find the six well-known sequential steps going from the preparation of data , to designing and tuning our ML model , down to the deployment of a model version to use for prediction . Naturally , these steps already contain various non-trivial questions , some of which we have highlighted in the surrounding gray boxes of the diagram . When considering popular benchmarks such as ImageNet ( Deng et al. , 2009 ) , a considerable amount of effort has been made for each individual workflow step . For instance , assembling , cleaning and pre-processing the dataset has required substantial resources , a decade of work has been attributed to the design of models and their optimization algorithms , and plenty of solutions have been developed to facilitate efficient computation or deployment . It is commonplace to treat these aspects in isolation in the literature . 
In other words , it is typical for approaches to be validated within train-val-test splits , where either a model-centric approach investigates optimizer variants and new model architectures , or alternatively , a data-centric approach analyzes how algorithms for data curation or selection can be improved for given models . Much in contrast to any of the prevalent static benchmarks , establishing a similar benchmark-driven way of conducting research becomes genuinely difficult for continual learning . Instinctively , this is because already partially intertwined elements of the workflow now become inherently codependent and inseparable . Once more , we provide a non-exhaustive list of additional questions in the orange boxes in the diagram of Figure 2 . Here , the boundaries between the steps are now blurred . To give a few examples : train and test sets evolve over time , we need to repeatedly determine what data to include next , an ongoing stream of data may be noisy or contain unknown elements , models might require new inductive biases and need to be extended , acquired knowledge needs to be protected but should also aid in future learning , and deployment and versions become continuous . In turn , setting a specific research focus on one of these aspects or attributing increased importance to only a portion allows for an abundance of conceivable , yet incomparable , implementations and investigations , even when the overall goal of continual learning is shared on the surface . 2.2 EVALUATION IN THE CONTEXT OF RELATED MACHINE LEARNING PARADIGMS . Although a full approach to continual learning should ideally include all of the underlying aspects illustrated in Figure 2 , many of these factors have been subject to prior isolated treatment in the literature . 
We posit that these related machine learning paradigms have a fundamental and historically grown influence on present “ continual learning ” practice , as they are in themselves comprised of components that are continuous . More specifically , we believe that choices in continual learning can largely be mapped back onto various related paradigms from which continual scenarios have drawn inspiration : multi-task learning ( Caruana , 1997 ) , transfer learning and domain adaptation ( Pan & Yang , 2010 ) , few-shot learning ( Fink , 2005 ; Fei-Fei et al. , 2006 ) , curriculum learning ( Bengio et al. , 2009 ) , active learning ( Settles , 2009 ) , open world learning ( Bendale & Boult , 2015 ) , online learning ( Heskes & Kappen , 1993 ; Bottou , 1999 ) , federated learning ( McMahan et al. , 2017 ; Kairouz et al. , 2021 ) , and meta-learning ( Thrun & Pratt , 1998 ) . We capture the relationship with respect to set-up and evaluation between these related paradigms and continual learning in the diagram of Figure 3 . For convenience , we have added quotes of the paradigm literature definitions in the figure . On the arrows of the diagram , we indicate the main evaluation difference and respectively how paradigms can be connected to each other . As each paradigm comes with its own set of assumptions towards training and evaluation protocols , it becomes apparent why the quest for a strict set of desiderata can be considered as ill-posed for the continual learning hypernym . In the next section , we thus introduce the CLEVA-Compass , as an alternative that emphasizes transparency and comparability over strict desiderata . To further clarify this with an easy example , let us consider one of the most popular ways to construct protocols in the academic continual learning literature . Here , a continual investigation is set up by defining a sequence based on extracting splits of existing benchmark datasets and introducing them sequentially ( Lesort et al. 
, 2020; Biesialska et al., 2021; Delange et al., 2021). Such a scenario generally consists of individually introduced classes in image datasets like ImageNet (Deng et al., 2009), MNIST (LeCun et al., 1998), CIFAR (Krizhevsky, 2009), or Core50 (Lomonaco & Maltoni, 2017), learning unique sounds (Gemmeke et al., 2017) or skills (Mandlekar et al., 2018; Fan et al., 2018) in sequence, or simply creating a chain across multiple datasets for natural language processing (McCann et al., 2018; Wang et al., 2019a;b) and games in reinforcement learning (Bellemare et al., 2013). Arguably, such a set-up is immediately derived from conventional transfer learning practice. Following the description of Pan & Yang (2010), the distinction between transfer and continual learning can essentially be brought down to the fact that both consider more than one task in sequence, but transfer learning focuses solely on leveraging prior knowledge to improve the new target task, typically a unique dataset or a set of classes, whereas continual learning generally intends to maximize performance on both prior and new tasks. Even though these popular continual benchmarks already simplify the continuous data perspective to seemingly enable comparison akin to static benchmarks, an unfortunate amount of training, evaluation, and result interpretation ambiguity persists. For instance, Farquhar & Gal (2018); Pfülb & Gepperth (2019) argue that simply the knowledge of whether and when a new task | This paper argues that the goal of precisely formulating abstract desiderata for continual learning (CL) is ill-posed because different applications may always favour distinct scenarios. Following this logic, the paper proposes a categorisation system which aims to make CL research comparisons more structured.
The paper goes on to review the challenges of meaningful evaluation of CL algorithms, particularly in terms of systematic comparisons of different works with somewhat different sets of assumptions, using the notion of "catastrophic forgetting" as a working example. Aspects which are sometimes overlooked in other machine learning research, e.g. dataset preprocessing and balancing, become important aspects in CL research and cannot be ignored. Furthermore, many machine learning paradigms may have natural extensions to non-i.i.d. learning scenarios, or at least various multi-task formulations of interest. Since many of these satellite formulations inherit metrics and assumptions of their respective paradigms, this naturally creates a problem formulation challenge for CL research. The paper proposes a multi-objective radar/spider-web chart (CLEVA-Compass) as a classification heuristic for CL research evaluation setups. The diagram is based on a well-conceived collection of conceptual and practical desiderata for CL algorithms. The paper proposes how such a diagram could be used by future CL research and anticipates unintended uses. Limitations of the proposed classification strategy, as well as complementary challenges of dataset design, are discussed towards the end of the paper. | SP:23ede5330bdedacb2e38d6a5761e87e112064600 |
CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability | 1 INTRODUCTION. Despite the indisputable successes of machine learning, recent concerns have surfaced over the field heading towards a potential reproducibility crisis (Henderson et al., 2018), as previously identified in other scientific disciplines (Baker, 2016). Although replication may largely be assured through modern software tools, moving beyond pure replication towards reproducibility with a factual interpretation of results is accompanied by tremendous remaining challenges. Specifically for machine learning, recent reproducibility initiatives (Pineau et al., 2021) nicely summarize how differences in used data, mis- or under-specification of training and evaluation metrics, along with frequent over-claims of conclusions beyond gathered empirical evidence, impose persisting obstacles in our current literature. Similar conclusions have been reached in related works focused on specifics of reinforcement learning (Li & Talwalkar, 2019), neural architecture search (Lindauer & Hutter, 2020), human-centered machine learning model cards (Mitchell et al., 2019), or general dataset sheet specifications (Bender & Friedman, 2018; Gebru et al., 2018), all of which make valuable propositions to overcome existing gaps through the creation of standardized best-practice (check-)lists. It should thus come as no surprise that the emerging work in continual learning is no stranger to the above challenges. On the surface, continual learning has the intuitive objective of accumulating information and learning concepts over time, typically without the ability to revisit previous experiences; it is frequently also referred to as lifelong learning (Chen & Liu, 2018).
However, there is no unique agreed-upon formal definition beyond the idea to continuously observe data, where the time component holds some practical implication on changes in the objective, the evolution of concept labels, or general statistical shifts in the data distribution. The majority of modern surveys ambiguously conflate these factors as a sequence of tasks (Parisi et al., 2019; Lesort et al., 2019; Hadsell et al., 2020; Lesort et al., 2020; Biesialska et al., 2021). Much in contrast to prevalent static benchmarks, the question of reproducibility, interpretation of results, and overall comparability now becomes an even more complex function of nuances in employed data, training, and evaluation protocols. The latter fact has sparked various attempts at consolidation or critique. On the one hand, several works have made important suggestions for continual learning desiderata with respect to evaluation protocols (Farquhar & Gal, 2018; Díaz-Rodríguez et al., 2018; Kemker et al., 2018) and categorization of incremental training set-ups (van de Ven & Tolias, 2019; Lesort et al., 2021). On the other hand, empirical assessments have demonstrated that performance and comparability break down rapidly if often unexposed protocol aspects deviate (Pfülb & Gepperth, 2019; Delange et al., 2021). Following the broader exposition of the recent reviews of Mundt et al. (2020a) and Delange et al. (2021), evaluation becomes convoluted because intricate combinations of elements originating from various related machine learning paradigms affect continual learning practice. As the number of publications across these paradigms increases, see Figure 1, reproducibility, comparability, and interpretation of results thus also become increasingly difficult.
In this work, we follow in the footsteps of previous reproducibility works to promote transparency and comparability of reported results for the non-trivial continual learning case. Rather than adding to the ongoing discussions on desiderata or violation of assumptions, we posit that the development of distinct applications warrants the existence of numerous continual scenarios. Based on respectively highlighted evaluation nuances and their implications when absorbed into continual learning, we derive the Continual Learning EValuation Assessment (CLEVA) Compass. The CLEVA-Compass provides a compact visual representation with a unique two-fold function: 1. it presents an intuitive chart to identify a work's priorities and context in the broader literature landscape; 2. it enables a direct way to determine how methods differ in terms of practically reported metrics, where they resemble each other, and what elements would be missing towards a fairer comparison. In the remainder of the paper, we start by sketching the scope of continual learning in Section 2, first by outlining the differences when going from static benchmarks to continual learning, followed by an exposition of evaluation nuances emanating from related machine learning paradigms. In Section 3, we then proceed to introduce the CLEVA-Compass and illustrate its necessity and utility using several continual learning works. Before concluding, we summarize auxiliary best practices proposed in related prior works and finally discuss limitations and unintended use of the CLEVA-Compass.
To encourage general adoption, both for prospective authors to add methods and for application-oriented practitioners to identify suitable methods, we supply various utilities: a template for direct inclusion into LaTeX, a Python script, and the CLEVA-Compass Graphical User Interface (GUI), together with a repository to aggregate methods' compasses, all detailed in Appendix C and publicly available at https://github.com/ml-research/CLEVA-Compass . 2 THE SCOPE AND CHALLENGES OF CONTINUAL LEARNING EVALUATION. As already argued, there is no unique agreed-upon formal definition of continual learning. One of the few common denominators across the continual machine learning literature is the understanding that catastrophic interference imposes a persisting threat to training models continually over time (McCloskey & Cohen, 1989; Ratcliff, 1990). That is, continuously employing stochastic optimization algorithms to train models on sequential (sub-)sets of varying data instances, without being able to constantly revisit older experiences, comes with the challenge of previously learned parameters being overwritten. As this undesired phenomenon is in stark contrast with the accumulation of knowledge over time, preventing model forgetting is thus typically placed at the center of continual learning. Consequently, the focus has been placed on a model-centric perspective, e.g. by proposing algorithms to alleviate forgetting through rehearsal of data prototypes in training, employing various forms of generative replay, imposing parameter and loss penalties over time, isolating task-specific parameters, regularizing the entire model's functional, or treating it from a perspective of dynamically adjustable capacity. We defer to review papers for taxonomies and details of algorithms (Parisi et al., 2019; Lesort et al., 2019; Hadsell et al., 2020; Lesort et al., 2020; Biesialska et al., 2021).
In practice , however , alleviating forgetting is but one of a myriad of challenges in real-world formulations of continual learning . Several orthogonal questions inevitably emerge , which either receive less attention across literature assessment or are frequently not made sufficiently explicit . A fair assessment and factual interpretation of results is rendered increasingly difficult . To provide the necessary background behind the statement , we briefly discuss newly arising questions when shifting from a static benchmark to a continual perspective and then proceed to contextualize conceivable evaluation protocol nuances in anticipation of our CLEVA-Compass . 2.1 FROM STATIC TO CONTINUAL MACHINE LEARNING WORKFLOW . To highlight the additional challenges in continual learning consider our visualization in Figure 2 , depicting the benchmark inspired machine learning workflow as advocated by Google Cloud ( 2021 ) . In the center , we find the six well-known sequential steps going from the preparation of data , to designing and tuning our ML model , down to the deployment of a model version to use for prediction . Naturally , these steps already contain various non-trivial questions , some of which we have highlighted in the surrounding gray boxes of the diagram . When considering popular benchmarks such as ImageNet ( Deng et al. , 2009 ) , a considerable amount of effort has been made for each individual workflow step . For instance , assembling , cleaning and pre-processing the dataset has required substantial resources , a decade of work has been attributed to the design of models and their optimization algorithms , and plenty of solutions have been developed to facilitate efficient computation or deployment . It is commonplace to treat these aspects in isolation in the literature . 
In other words , it is typical for approaches to be validated within train-val-test splits , where either a model-centric approach investigates optimizer variants and new model architectures , or alternatively , a data-centric approach analyzes how algorithms for data curation or selection can be improved for given models . Much in contrast to any of the prevalent static benchmarks , establishing a similar benchmark-driven way of conducting research becomes genuinely difficult for continual learning . Instinctively , this is because already partially intertwined elements of the workflow now become inherently codependent and inseparable . Once more , we provide a non-exhaustive list of additional questions in the orange boxes in the diagram of Figure 2 . Here , the boundaries between the steps are now blurred . To give a few examples : train and test sets evolve over time , we need to repeatedly determine what data to include next , an ongoing stream of data may be noisy or contain unknown elements , models might require new inductive biases and need to be extended , acquired knowledge needs to be protected but should also aid in future learning , and deployment and versions become continuous . In turn , setting a specific research focus on one of these aspects or attributing increased importance to only a portion allows for an abundance of conceivable , yet incomparable , implementations and investigations , even when the overall goal of continual learning is shared on the surface . 2.2 EVALUATION IN THE CONTEXT OF RELATED MACHINE LEARNING PARADIGMS . Although a full approach to continual learning should ideally include all of the underlying aspects illustrated in Figure 2 , many of these factors have been subject to prior isolated treatment in the literature . 
We posit that these related machine learning paradigms have a fundamental and historically grown influence on present “ continual learning ” practice , as they are in themselves comprised of components that are continuous . More specifically , we believe that choices in continual learning can largely be mapped back onto various related paradigms from which continual scenarios have drawn inspiration : multi-task learning ( Caruana , 1997 ) , transfer learning and domain adaptation ( Pan & Yang , 2010 ) , few-shot learning ( Fink , 2005 ; Fei-Fei et al. , 2006 ) , curriculum learning ( Bengio et al. , 2009 ) , active learning ( Settles , 2009 ) , open world learning ( Bendale & Boult , 2015 ) , online learning ( Heskes & Kappen , 1993 ; Bottou , 1999 ) , federated learning ( McMahan et al. , 2017 ; Kairouz et al. , 2021 ) , and meta-learning ( Thrun & Pratt , 1998 ) . We capture the relationship with respect to set-up and evaluation between these related paradigms and continual learning in the diagram of Figure 3 . For convenience , we have added quotes of the paradigm literature definitions in the figure . On the arrows of the diagram , we indicate the main evaluation difference and respectively how paradigms can be connected to each other . As each paradigm comes with its own set of assumptions towards training and evaluation protocols , it becomes apparent why the quest for a strict set of desiderata can be considered as ill-posed for the continual learning hypernym . In the next section , we thus introduce the CLEVA-Compass , as an alternative that emphasizes transparency and comparability over strict desiderata . To further clarify this with an easy example , let us consider one of the most popular ways to construct protocols in the academic continual learning literature . Here , a continual investigation is set up by defining a sequence based on extracting splits of existing benchmark datasets and introducing them sequentially ( Lesort et al. 
, 2020 ; Biesialska et al. , 2021 ; Delange et al. , 2021 ) . Such a scenario generally consists of individually introduced classes in image datasets like ImageNet ( Deng et al. , 2009 ) , MNIST ( LeCun et al. , 1998 ) , CIFAR ( Krizhevsky , 2009 ) , Core50 ( Lomonaco & Maltoni , 2017 ) , learning unique sounds ( Gemmeke et al. , 2017 ) or skills ( Mandlekar et al. , 2018 ; Fan et al. , 2018 ) in sequence , or simply creating a chain across multiple datasets for natural language processing ( McCann et al. , 2018 ; Wang et al. , 2019a ; b ) and games in reinforcement learning ( Bellemare et al. , 2013 ) . Arguably , such a set-up is immediately derived from conventional transfer learning practice . Following the description of Pan & Yang ( 2010 ) , the distinction between transfer and continual learning can essentially be brought down to the fact that both consider more than one task in sequence , but transfer learning focuses solely on leveraging prior knowledge to improve the new target task , typically a unique dataset or a set of classes , whereas continual learning generally intends to maximize performance on both prior and new tasks . Even though these popular continual benchmarks already simplify the continuous data perspective to seemingly enable comparison akin to static benchmarks , there already persists an unfortunate amount of training , evaluation , and result interpretation ambiguity . For instance , Farquhar & Gal ( 2018 ) ; Pfülb & Gepperth ( 2019 ) argue that simply the knowledge of whether and when a new task | This work asks how different methods in Continual Learning can be evaluated in a better way. The authors explain the difficulty of evaluating Continual Learning correctly, describing the complex scenario that Continual Learning confronts by sitting at the intersection of multiple related areas. 
With this motivation, the authors present the CLEVA-Compass (Continual Learning EValuation Assessment Compass), a visual representation that improves the comparability and transparency of different methods in Continual Learning. The CLEVA-Compass includes two levels: the inner level shows which paradigms influence a method, and the outer level shows the set-up and evaluation measures reported by the method. | SP:23ede5330bdedacb2e38d6a5761e87e112064600 |
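As a rough illustration (not part of the paper itself), the two-level structure just described could be encoded as a small data structure: an inner level of influencing paradigms and an outer level of reported measures, from which missing elements for a fairer comparison can be read off. All class, method, paradigm, and measure names below are hypothetical, not the official compass schema.

```python
from dataclasses import dataclass

@dataclass
class CompassEntry:
    """Hypothetical encoding of one method's CLEVA-Compass."""
    method: str
    inner: frozenset   # inner level: influencing paradigms
    outer: frozenset   # outer level: reported set-up/evaluation measures

def comparison_gaps(a: CompassEntry, b: CompassEntry) -> dict:
    """Outer-level measures one method reports but the other does not,
    i.e. what would be missing for a fairer direct comparison."""
    return {a.method: a.outer - b.outer, b.method: b.outer - a.outer}

# Illustrative entries (paradigm/measure assignments are made up here):
ewc = CompassEntry("EWC", frozenset({"multi-task", "transfer"}),
                   frozenset({"accuracy", "forgetting"}))
gem = CompassEntry("GEM", frozenset({"multi-task", "online"}),
                   frozenset({"accuracy", "forward-transfer"}))
gaps = comparison_gaps(ewc, gem)
```

This mirrors the compass's second function: a direct, mechanical way to see where two works' reported metrics overlap and diverge.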
CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability | 1 INTRODUCTION . Despite the indisputable successes of machine learning , recent concerns have surfaced over the field heading towards a potential reproducibility crisis ( Henderson et al. , 2018 ) , as previously identified in other scientific disciplines ( Baker , 2016 ) . Although replication may largely be assured through modern software tools , moving beyond pure replication towards reproducibility with a factual interpretation of results is accompanied by tremendous remaining challenges . Specifically for machine learning , recent reproducibility initiatives ( Pineau et al. , 2021 ) nicely summarize how differences in used data , mis- or under-specification of training and evaluation metrics , along with frequent over-claims of conclusions beyond gathered empirical evidence impose persisting obstacles in our current literature . Similar conclusions have been reached in related works focused on specifics of reinforcement learning ( Li & Talwalkar , 2019 ) , neural architecture search ( Lindauer & Hutter , 2020 ) , human-centered machine learning model cards ( Mitchell et al. , 2019 ) , or general dataset sheet specifications ( Bender & Friedman , 2018 ; Gebru et al. , 2018 ) , which all make valuable propositions to overcome existing gaps through the creation of standardized best-practice ( check- ) lists . It should thus come as no surprise that the emerging work in continual learning is no stranger to the above challenges . Superficially , continual learning has the intuitive objective of accumulating information and learning concepts over time , typically without the ability to revisit previous experiences , frequently also referred to as lifelong learning ( Chen & Liu , 2018 ) . 
However , there is no unique agreed-upon formal definition beyond the idea to continuously observe data , where the time component holds some practical implication on changes in the objective , the evolution of concept labels , or general statistical shifts in the data distribution . The majority of modern surveys ambiguously conflate these factors as a sequence of tasks ( Parisi et al. , 2019 ; Lesort et al. , 2019 ; Hadsell et al. , 2020 ; Lesort et al. , 2020 ; Biesialska et al. , 2021 ) . Much in contrast to prevalent static benchmarks , the question of reproducibility , interpretation of results , and overall comparability now becomes an even more complex function of nuances in employed data , training and evaluation protocols . The latter fact has sparked various attempts at consolidation or critique . On the one hand , several works have made important suggestions for continual learning desiderata with respect to evaluation protocols ( Farquhar & Gal , 2018 ; Díaz-Rodríguez et al. , 2018 ; Kemker et al. , 2018 ) and categorization of incremental training set-ups ( van de Ven & Tolias , 2019 ; Lesort et al. , 2021 ) . On the other hand , empirical assessments have demonstrated that performance and comparability break down rapidly if often unexposed protocol aspects deviate ( Pfülb & Gepperth , 2019 ; Delange et al. , 2021 ) . Following the broader exposition of the recent reviews of Mundt et al . ( 2020a ) and Delange et al . ( 2021 ) , evaluation becomes convoluted because intricate combinations of elements originating from various related machine learning paradigms affect continual learning practice . As the number of publications across these paradigms increases , see Figure 1 , reproducibility , comparability , and interpretation of results thus also become increasingly difficult . 
In this work , we follow in the footsteps of previous reproducibility works to promote transparency and comparability of reported results for the non-trivial continual learning case . Rather than adding to the ongoing discussions on desiderata or violation of assumptions , we posit that the development of distinct applications warrants the existence of numerous continual scenarios . Based on respectively highlighted evaluation nuances and their implications when absorbed into continual learning , we derive the Continual Learning EValuation Assessment ( CLEVA ) Compass . The CLEVA-Compass provides a compact visual representation with a unique two-fold function : 1. it presents an intuitive chart to identify a work ’ s priorities and context in the broader literature landscape , 2. it enables a direct way to determine how methods differ in terms of practically reported metrics , where they resemble each other , or what elements would be missing towards a fairer comparison . In the remainder of the paper , we start by sketching the scope of continual learning in Section 2 , first by outlining the differences when going from static benchmarks to continual learning , followed by an exposition of evaluation nuances emanating from related machine learning paradigms . In Section 3 , we then proceed to introduce the CLEVA-Compass and illustrate its necessity and utility at the hand of several continual learning works . Before concluding , we summarize auxiliary best-practices proposed in related prior works and finally discuss limitations and unintended use of the CLEVA-Compass . 
To encourage general adoption , both for prospective authors to add methods and for application-oriented practitioners to identify suitable methods , we supply various utilities : a template for direct inclusion into LaTeX , a Python script , and the CLEVA-Compass Graphical User Interface ( GUI ) , together with a repository to aggregate methods ’ compasses , all detailed in Appendix C and publicly available at https://github.com/ml-research/CLEVA-Compass . 2 THE SCOPE AND CHALLENGES OF CONTINUAL LEARNING EVALUATION . As already argued , there is no unique agreed-upon formal definition of continual learning . One of the few common denominators across the continual machine learning literature is the understanding that catastrophic interference imposes a persisting threat to training models continually over time ( McCloskey & Cohen , 1989 ; Ratcliff , 1990 ) . That is , continuously employing stochastic optimization algorithms to train models on sequential ( sub- ) sets of varying data instances , without being able to constantly revisit older experiences , comes with the challenge of previously learned parameters being overwritten . As this undesired phenomenon is in stark contrast with accumulation of knowledge over time , preventing model forgetting is thus typically placed at the center of continual learning . Consequently , the focus has been placed on a model-centric perspective , e.g . by proposing algorithms to alleviate forgetting through rehearsal of data prototypes in training , employing various forms of generative replay , imposing parameter and loss penalties over time , isolating task-specific parameters , regularizing the entire model ’ s functional or treating it from a perspective of dynamically adjustable capacity . We defer to review papers for taxonomies and details of algorithms ( Parisi et al. , 2019 ; Lesort et al. , 2019 ; Hadsell et al. , 2020 ; Lesort et al. , 2020 ; Biesialska et al. , 2021 ) . 
In practice , however , alleviating forgetting is but one of a myriad of challenges in real-world formulations of continual learning . Several orthogonal questions inevitably emerge , which either receive less attention across literature assessment or are frequently not made sufficiently explicit . A fair assessment and factual interpretation of results is rendered increasingly difficult . To provide the necessary background behind the statement , we briefly discuss newly arising questions when shifting from a static benchmark to a continual perspective and then proceed to contextualize conceivable evaluation protocol nuances in anticipation of our CLEVA-Compass . 2.1 FROM STATIC TO CONTINUAL MACHINE LEARNING WORKFLOW . To highlight the additional challenges in continual learning consider our visualization in Figure 2 , depicting the benchmark inspired machine learning workflow as advocated by Google Cloud ( 2021 ) . In the center , we find the six well-known sequential steps going from the preparation of data , to designing and tuning our ML model , down to the deployment of a model version to use for prediction . Naturally , these steps already contain various non-trivial questions , some of which we have highlighted in the surrounding gray boxes of the diagram . When considering popular benchmarks such as ImageNet ( Deng et al. , 2009 ) , a considerable amount of effort has been made for each individual workflow step . For instance , assembling , cleaning and pre-processing the dataset has required substantial resources , a decade of work has been attributed to the design of models and their optimization algorithms , and plenty of solutions have been developed to facilitate efficient computation or deployment . It is commonplace to treat these aspects in isolation in the literature . 
In other words , it is typical for approaches to be validated within train-val-test splits , where either a model-centric approach investigates optimizer variants and new model architectures , or alternatively , a data-centric approach analyzes how algorithms for data curation or selection can be improved for given models . Much in contrast to any of the prevalent static benchmarks , establishing a similar benchmark-driven way of conducting research becomes genuinely difficult for continual learning . Instinctively , this is because already partially intertwined elements of the workflow now become inherently codependent and inseparable . Once more , we provide a non-exhaustive list of additional questions in the orange boxes in the diagram of Figure 2 . Here , the boundaries between the steps are now blurred . To give a few examples : train and test sets evolve over time , we need to repeatedly determine what data to include next , an ongoing stream of data may be noisy or contain unknown elements , models might require new inductive biases and need to be extended , acquired knowledge needs to be protected but should also aid in future learning , and deployment and versions become continuous . In turn , setting a specific research focus on one of these aspects or attributing increased importance to only a portion allows for an abundance of conceivable , yet incomparable , implementations and investigations , even when the overall goal of continual learning is shared on the surface . 2.2 EVALUATION IN THE CONTEXT OF RELATED MACHINE LEARNING PARADIGMS . Although a full approach to continual learning should ideally include all of the underlying aspects illustrated in Figure 2 , many of these factors have been subject to prior isolated treatment in the literature . 
We posit that these related machine learning paradigms have a fundamental and historically grown influence on present “ continual learning ” practice , as they are in themselves comprised of components that are continuous . More specifically , we believe that choices in continual learning can largely be mapped back onto various related paradigms from which continual scenarios have drawn inspiration : multi-task learning ( Caruana , 1997 ) , transfer learning and domain adaptation ( Pan & Yang , 2010 ) , few-shot learning ( Fink , 2005 ; Fei-Fei et al. , 2006 ) , curriculum learning ( Bengio et al. , 2009 ) , active learning ( Settles , 2009 ) , open world learning ( Bendale & Boult , 2015 ) , online learning ( Heskes & Kappen , 1993 ; Bottou , 1999 ) , federated learning ( McMahan et al. , 2017 ; Kairouz et al. , 2021 ) , and meta-learning ( Thrun & Pratt , 1998 ) . We capture the relationship with respect to set-up and evaluation between these related paradigms and continual learning in the diagram of Figure 3 . For convenience , we have added quotes of the paradigm literature definitions in the figure . On the arrows of the diagram , we indicate the main evaluation difference and respectively how paradigms can be connected to each other . As each paradigm comes with its own set of assumptions towards training and evaluation protocols , it becomes apparent why the quest for a strict set of desiderata can be considered as ill-posed for the continual learning hypernym . In the next section , we thus introduce the CLEVA-Compass , as an alternative that emphasizes transparency and comparability over strict desiderata . To further clarify this with an easy example , let us consider one of the most popular ways to construct protocols in the academic continual learning literature . Here , a continual investigation is set up by defining a sequence based on extracting splits of existing benchmark datasets and introducing them sequentially ( Lesort et al. 
, 2020 ; Biesialska et al. , 2021 ; Delange et al. , 2021 ) . Such a scenario generally consists of individually introduced classes in image datasets like ImageNet ( Deng et al. , 2009 ) , MNIST ( LeCun et al. , 1998 ) , CIFAR ( Krizhevsky , 2009 ) , Core50 ( Lomonaco & Maltoni , 2017 ) , learning unique sounds ( Gemmeke et al. , 2017 ) or skills ( Mandlekar et al. , 2018 ; Fan et al. , 2018 ) in sequence , or simply creating a chain across multiple datasets for natural language processing ( McCann et al. , 2018 ; Wang et al. , 2019a ; b ) and games in reinforcement learning ( Bellemare et al. , 2013 ) . Arguably , such a set-up is immediately derived from conventional transfer learning practice . Following the description of Pan & Yang ( 2010 ) , the distinction between transfer and continual learning can essentially be brought down to the fact that both consider more than one task in sequence , but transfer learning focuses solely on leveraging prior knowledge to improve the new target task , typically a unique dataset or a set of classes , whereas continual learning generally intends to maximize performance on both prior and new tasks . Even though these popular continual benchmarks already simplify the continuous data perspective to seemingly enable comparison akin to static benchmarks , there already persists an unfortunate amount of training , evaluation , and result interpretation ambiguity . For instance , Farquhar & Gal ( 2018 ) ; Pfülb & Gepperth ( 2019 ) argue that simply the knowledge of whether and when a new task | This work proposes the CLEVA-Compass, which provides visual means to easily compare different continual learning works and provides a checklist to promote results reproducibility. Indeed, existing variations in the problem setup and evaluation of continual learning make the direct comparison of works in this field challenging. 
CLEVA tries to summarize each work on two different levels: 1) each work's influential paradigms and items, and 2) the measures reported in the work that can be used for reproducibility. | SP:23ede5330bdedacb2e38d6a5761e87e112064600 |
Learning Towards The Largest Margins | 1 INTRODUCTION . Recent years have witnessed the great success of deep neural networks ( DNNs ) in a variety of tasks , especially for visual classification ( Simonyan & Zisserman , 2014 ; Szegedy et al. , 2015 ; He et al. , 2016 ; Howard et al. , 2017 ; Zoph et al. , 2018 ; Touvron et al. , 2019 ; Brock et al. , 2021 ; Dosovitskiy et al. , 2021 ) . The improvement in accuracy is attributed not only to the use of DNNs , but also to the elaborated losses that encourage well-separated feature embeddings ( Musgrave et al. , 2020 ) . In general , the loss is expected to promote the learned features to have maximized intra-class compactness and inter-class separability simultaneously , so as to boost the feature discriminativeness . Softmax loss , which is the combination of a linear layer , a softmax function , and a cross-entropy loss , is the most commonly-used ingredient in deep learning-based classification . However , the softmax loss only learns separable features that are not discriminative enough ( Liu et al. , 2017 ) . To remedy the limitation of softmax loss , many variants have been proposed . Liu et al . ( 2016 ) proposed a generalized large-margin softmax loss , which incorporates a preset constant m multiplying with the angle between samples and the classifier weight of the ground truth class , leading to potentially larger angular separability between learned features . SphereFace ( Liu et al. , 2017 ) further improved the performance of L-Softmax by normalizing the prototypes in the last inner-product layer . Subsequently , ( Wang et al. , 2017 ) exhibited the usefulness of feature normalization when using feature vector dot products in the softmax function . Coincidentally , in the field of contrastive learning , Chen et al . ( 2020 ) also showed that normalization of outputs leads to superior representations . 
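To make the margin-based softmax variants just described concrete, the interplay of normalization and an angular/additive margin can be sketched in a CosFace/AM-Softmax style: features and prototypes are l2-normalized so logits become cosines, a margin is subtracted from the true-class cosine, and a scale is applied. This is a generic illustration, not this paper's proposed method; the scale `s` and margin `m` below are common defaults in the literature, not values prescribed here.

```python
import numpy as np

def additive_margin_logits(Z, W, y, s=30.0, m=0.35):
    """CosFace/AM-Softmax-style logits: l2-normalize features (rows of Z)
    and prototypes (rows of W) so logits are cosine similarities, subtract
    an additive margin m from the true-class cosine, then scale by s."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = Zn @ Wn.T                        # entries in [-1, 1]
    cos[np.arange(len(y)), y] -= m         # tighten the decision boundary
    return s * cos                         # feed these to cross-entropy

# Illustrative shapes: N=4 samples, d=8 feature dims, k=3 classes.
rng = np.random.default_rng(0)
Z, W = rng.normal(size=(4, 8)), rng.normal(size=(3, 8))
y = np.array([0, 1, 2, 0])
logits = additive_margin_logits(Z, W, y)
```

The margin only shifts the target-class logit, which is what forces extra intra-class compactness before the cross-entropy loss is satisfied.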
Due to its effectiveness , normalization on either features or prototypes or both becomes a standard procedure in margin-based losses , such as SphereFace ( Liu et al. , 2017 ) , CosFace/AM-Softmax ( Wang et al. , 2018b ; a ) and ArcFace ( Deng et al. , 2019 ) . However , there is no theoretical guarantee provided yet . ∗This work was done as an intern at Peng Cheng Laboratory . †Correspondence to : Xianming Liu <csxm@hit.edu.cn> Despite their effectiveness and popularity , the existing margin-based losses were developed through heuristic means , as opposed to rigorous mathematical principles , modeling and analysis . Although they offer geometric interpretations , which are helpful to understand the underlying intuition , the theoretical explanation and analysis that can guide the design and optimization is still vague . Some critical issues are unclear , e.g. , why is the normalization of features and prototypes necessary ? How can the loss be further improved or adapted to new tasks ? Therefore , it naturally raises a fundamental question : how to develop a principled mathematical framework for better understanding and design of margin-based loss functions ? The goal of this work is to address these questions by formulating the objective as learning towards the largest margins and offering rigorous theoretical analysis as well as extensive empirical results to support this point . To obtain an optimizable objective , firstly , we should define measures of intra-class compactness and inter-class separability . To this end , we propose to employ the class margin as the measure of inter-class separability , which is defined as the minimal pairwise angle distance between prototypes that reflects the angular margin of the two closest prototypes . Moreover , we define the sample margin following the classic approach in ( Koltchinskii et al. , 2002 , Sec . 
5 ) , which denotes the similarity difference of a sample to the prototype of the class it belongs to and to the nearest prototype of other classes and thus measures the intra-class compactness . We provide a rigorous theoretical guarantee that maximizing the minimal sample margin over the entire dataset leads to maximizing the class margin regardless of feature dimension , class number , and class balancedness . This indicates that the sample margin also has the power of measuring inter-class separability . According to the defined measures , we claim that , to encourage discriminative representation of features , the loss function should promote the largest possible margins for both classes and samples , which also serves to tighten the margin-based generalization bound in ( Kakade et al. , 2008 ; Cao et al. , 2019 ) . The main contributions of this work are highlighted as follows : • For a better understanding of margin-based losses , we provide a rigorous analysis about the necessity of normalization on prototypes and features . Moreover , we propose a generalized margin softmax loss ( GM-Softmax ) , which can be derived to cover most existing margin-based losses . We prove that , for the class-balanced case , learning with the GM-Softmax loss leads to maximizing both class margin and sample margin under mild conditions . • We show that learning with existing margin-based loss functions , such as SphereFace , NormFace , CosFace , AM-Softmax and ArcFace , would share the same optimal solution . In other words , all of them attempt to learn towards the largest margins , even though they are tailored to obtain different desired margins with explicit decision boundaries . However , these losses do not always maximize margins under different hyper-parameter settings . 
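A minimal numeric sketch of the sample margin as described above: the similarity of a sample to its own class prototype minus its largest similarity to any other prototype. The choice of cosine similarity after normalizing features and prototypes is an assumption made for this illustration.

```python
import numpy as np

def sample_margins(Z, W, y):
    """Per-sample margin: cosine similarity to the true-class prototype
    minus the largest cosine similarity to any other prototype. Positive
    values mean the sample is classified correctly with room to spare."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = Zn @ Wn.T                    # (N, k) similarities
    n = np.arange(len(y))
    own = cos[n, y].copy()
    cos[n, y] = -np.inf                # exclude the true class from the max
    return own - cos.max(axis=1)

# Features perfectly aligned with orthonormal prototypes attain margin 1.
W = np.eye(3)                          # k=3 prototypes as rows
Z = np.eye(3)                          # each sample equals its prototype
y = np.array([0, 1, 2])
margins = sample_margins(Z, W, y)
```

The minimal value of `margins` over a dataset is the quantity whose maximization, per the guarantee above, also drives the class margin to its maximum.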
Instead , we propose an explicit sample margin regularization term and a novel largest margin softmax loss ( LM-Softmax ) derived from the minimal sample margin , which significantly improve the class margin and the sample margin . • We consider the class-imbalanced case , in which the margins are severely affected . We provide a sufficient condition , which reveals that , if the centroid of prototypes is equal to zero , learning with GM-Softmax will provide the largest margins . Accordingly , we propose a simple but effective zero-centroid regularization term , which can be combined with commonly-used losses to mitigate class imbalance . • Extensive experimental results are offered to demonstrate that the strategy of learning towards the largest margins can significantly improve the performance in accuracy and class/sample margins for various tasks , including visual classification , imbalanced classification , person re-identification , and face verification . 2 MEASURES OF INTRA-CLASS COMPACTNESS AND INTER-CLASS SEPARABILITY . With a labeled dataset $D = \{(x_i, y_i)\}_{i=1}^{N}$ ( where $x_i$ denotes a training example with label $y_i$ , and $y_i \in [1, k] = \{1, 2, \ldots, k\}$ ) , the softmax loss for a k-classification problem is formulated as $$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N} -\log\frac{\exp(w_{y_i}^{T} z_i)}{\sum_{j=1}^{k}\exp(w_{j}^{T} z_i)} = \frac{1}{N}\sum_{i=1}^{N} -\log\frac{\exp(\|w_{y_i}\|_2 \|z_i\|_2 \cos(\theta_{i y_i}))}{\sum_{j=1}^{k}\exp(\|w_j\|_2 \|z_i\|_2 \cos(\theta_{ij}))} , \quad (2.1)$$ where $z_i = \phi_\Theta(x_i) \in \mathbb{R}^d$ ( usually $k \le d+1$ ) is the learned feature representation vector ; $\phi_\Theta$ denotes the feature extraction sub-network ; $W = (w_1, \ldots, w_k) \in \mathbb{R}^{d \times k}$ denotes the linear classifier which is implemented with a linear layer at the end of the network ( some works omit the bias and use an inner-product layer ) ; $\theta_{ij}$ denotes the angle between $z_i$ and $w_j$ ; and $\|\cdot\|_2$ denotes the Euclidean norm , where $w_1, \ldots, w_k$ can be regarded as the class centers or prototypes ( Mettes et al. , 2019 ) . 
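A small self-contained sketch of the softmax loss of Eq. (2.1), with the bias omitted; it also checks numerically that the two forms of the equation coincide, since $w^T z = \|w\|_2 \|z\|_2 \cos(\theta)$. Shapes and data are illustrative.

```python
import numpy as np

def softmax_loss(Z, W, y):
    """Eq. (2.1), bias omitted: average negative log-probability of the
    true class under a softmax over inner products w_j^T z_i
    (rows of Z are features z_i, rows of W are prototypes w_j)."""
    logits = Z @ W.T                                      # (N, k)
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

rng = np.random.default_rng(1)
Z, W = rng.normal(size=(5, 4)), rng.normal(size=(3, 4))  # N=5, d=4, k=3
y = np.array([0, 1, 2, 0, 1])
loss = softmax_loss(Z, W, y)

# Second form of Eq. (2.1): same logits via magnitudes and cosines.
cosines = (Z / np.linalg.norm(Z, axis=1, keepdims=True)) @ \
          (W / np.linalg.norm(W, axis=1, keepdims=True)).T
angle_logits = np.linalg.norm(Z, axis=1, keepdims=True) * \
               np.linalg.norm(W, axis=1) * cosines
```

The angular rewriting is what the margin-based variants exploit: they modify the cosine term rather than the raw inner product.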
For simplicity , we use prototypes to denote the weight vectors in the last inner-product layer . The softmax loss intuitively encourages the learned feature representation $z_i$ to be similar to the corresponding prototype $w_{y_i}$ , while pushing $z_i$ away from the other prototypes . Recently , some works ( Liu et al. , 2016 ; 2017 ; Deng et al. , 2019 ) aim to achieve better performance by modifying the softmax loss with explicit decision boundaries to enforce extra intra-class compactness and inter-class separability . However , they do not provide the theoretical explanation and analysis about the newly designed losses . In this paper , we claim that a loss function to obtain better inter-class separability and intra-class compactness should learn towards the largest class and sample margin , and offer rigorous theoretical analysis as support . All proofs can be found in the Appendix A . In the following , we define class margin and sample margin as the measures of inter-class separability and intra-class compactness , respectively , which serve as the base for our further derivation . 2.1 CLASS MARGIN . With prototypes $w_1, \ldots, w_k \in \mathbb{R}^d$ , the class margin is defined as the minimal pairwise angle distance : $$m_c(\{w_i\}_{i=1}^{k}) = \min_{i \neq j} \angle(w_i, w_j) = \arccos\Big[\max_{i \neq j} \frac{w_i^{T} w_j}{\|w_i\|_2 \|w_j\|_2}\Big] , \quad (2.2)$$ where $\angle(w_i, w_j)$ denotes the angle between the vectors $w_i$ and $w_j$ . Note that we omit the magnitudes of the prototypes in the definition , since the magnitudes tend to be very close according to the symmetry property . To verify this , we compute the ratio between the maximum and minimum magnitudes , which tends to be close to 1 on different datasets , as shown in Fig . 1 . To obtain better inter-class separability , we seek the largest class margin , which can be formulated as $$\max_{\{w_i\}_{i=1}^{k}} m_c(\{w_i\}_{i=1}^{k}) = \max_{\{w_i\}_{i=1}^{k}} \min_{i \neq j} \angle(w_i, w_j) .$$ 
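The class margin of Eq. (2.2) is straightforward to compute numerically: normalize the prototypes and take the arccos of the largest off-diagonal cosine similarity. A minimal sketch, with prototypes stored as rows:

```python
import numpy as np

def class_margin(W):
    """Eq. (2.2): minimal pairwise angle (radians) between prototypes
    (rows of W), i.e. arccos of the largest off-diagonal cosine."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = Wn @ Wn.T
    k = len(W)
    max_off = cos[~np.eye(k, dtype=bool)].max()
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return float(np.arccos(np.clip(max_off, -1.0, 1.0)))

# Orthogonal prototypes: every pairwise angle is 90 degrees (pi/2).
margin = class_margin(np.eye(3))
```

As the definition notes, prototype magnitudes are irrelevant here; only directions enter the margin.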
Since magnitudes do not affect the solution of the max-min problem, we perform $\ell_2$ normalization on each $w_i$ to restrict the prototypes to the unit sphere $S^{d-1}$ centered at the origin. Under this constraint, maximizing the class margin is equivalent to configuring $k$ points on $S^{d-1}$ so as to maximize their minimum pairwise distance:
$$\arg\max_{\{w_i\}_{i=1}^{k} \subset S^{d-1}} \min_{i \neq j} \angle(w_i, w_j) = \arg\max_{\{w_i\}_{i=1}^{k} \subset S^{d-1}} \min_{i \neq j} \|w_i - w_j\|_2. \qquad (2.3)$$
The right-hand side is well known as the $k$-points best-packing problem on spheres (often called the Tammes problem), whose solution leads to the optimal separation of points (Borodachov et al., 2019). The best-packing problem turns out to be the limiting case of the minimal Riesz energy:
$$\arg\min_{\{w_i\}_{i=1}^{k} \subset S^{d-1}} \lim_{t \to \infty} \sum_{i \neq j} \frac{1}{\|w_i - w_j\|_2^{t}} = \arg\max_{\{w_i\}_{i=1}^{k} \subset S^{d-1}} \min_{i \neq j} \|w_i - w_j\|_2. \qquad (2.4)$$
Interestingly, Liu et al. (2018) utilized the minimum hyperspherical energy as a generic regularization for neural networks to reduce undesired representation redundancy. When $w_1, \ldots, w_k \in S^{d-1}$, $k \le d+1$, and $t > 0$, the best-packing configurations coincide with the minimizers of the Riesz $t$-energy: Lemma 2.1. For any $w_1, \ldots, w_k \in S^{d-1}$, $d \ge 2$, and $2 \le k \le d+1$, the minimal Riesz $t$-energy and $k$-points best-packing configurations are uniquely given by the vertices of regular $(k-1)$-simplices inscribed in $S^{d-1}$. Furthermore, $w_i^T w_j = -\frac{1}{k-1}$, $\forall i \neq j$. This lemma shows that the maximum of $m_c(\{w_i\}_{i=1}^{k})$ is $\arccos(-\frac{1}{k-1})$ when $k \le d+1$, which is analytical and can be constructed artificially. However, when $k > d+1$, the optimal $k$-point configurations on the sphere $S^{d-1}$ have no generic analytical solution, and are known explicitly only for a handful of cases, even for $d = 3$. | This paper analyzes large-margin based loss functions.
The authors formulate two types of margins, class- and sample-margins, and then analyze lower bounds of various margin-based losses in a unified framework to show that those losses are minimized by the optimizers of the class-/sample-margins. While inspecting the existing margin-based losses, they also formulate two practical methods, a sample-margin regularization and a largest margin softmax loss, to further enhance the margins of classifiers. Experimental results on several image classification tasks demonstrate that those methods contribute to performance improvement, comparing favorably with other methods that induce large-margin classification. | SP:b2b55e3381b0ab12e42e9498701645749764ccd8 |
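The simplex configuration described in Lemma 2.1 can be verified numerically. Below is a sketch using the standard construction of a regular simplex by centering and renormalizing the basis vectors of $\mathbb{R}^k$ (the construction and function name are ours):

```python
import numpy as np

def simplex_prototypes(k):
    """k unit vectors forming the vertices of a regular (k-1)-simplex centered
    at the origin, built by subtracting the centroid of the standard basis
    e_1, ..., e_k and renormalizing. By Lemma 2.1, such a configuration attains
    the maximal class margin arccos(-1/(k-1)) whenever k <= d + 1."""
    V = np.eye(k) - 1.0 / k                            # center the basis vectors
    return V / np.linalg.norm(V, axis=1, keepdims=True)
```

Every pair of distinct rows then has inner product exactly $-\frac{1}{k-1}$, matching the lemma; for $k = 5$ the pairwise cosines are all $-0.25$.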
Learning Towards The Largest Margins | 1 INTRODUCTION. Recent years have witnessed the great success of deep neural networks (DNNs) in a variety of tasks, especially visual classification (Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Howard et al., 2017; Zoph et al., 2018; Touvron et al., 2019; Brock et al., 2021; Dosovitskiy et al., 2021). The improvement in accuracy is attributed not only to the use of DNNs, but also to elaborated losses that encourage well-separated feature embeddings (Musgrave et al., 2020). In general, the loss is expected to promote learned features that simultaneously maximize intra-class compactness and inter-class separability, so as to boost feature discriminativeness. Softmax loss, which is the combination of a linear layer, a softmax function, and a cross-entropy loss, is the most commonly used ingredient in deep learning-based classification. However, the softmax loss only learns separable features that are not discriminative enough (Liu et al., 2017). To remedy this limitation, many variants have been proposed. Liu et al. (2016) proposed a generalized large-margin softmax loss (L-Softmax), which multiplies the angle between a sample and the classifier weight of its ground-truth class by a preset constant m, leading to potentially larger angular separability between learned features. SphereFace (Liu et al., 2017) further improved the performance of L-Softmax by normalizing the prototypes in the last inner-product layer. Subsequently, Wang et al. (2017) exhibited the usefulness of feature normalization when using feature-vector dot products in the softmax function. Coincidentally, in the field of contrastive learning, Chen et al. (2020) also showed that normalization of outputs leads to superior representations.
Due to its effectiveness, normalization of features, prototypes, or both has become a standard procedure in margin-based losses, such as SphereFace (Liu et al., 2017), CosFace/AM-Softmax (Wang et al., 2018b;a), and ArcFace (Deng et al., 2019). However, no theoretical guarantee has been provided yet. ∗This work was done during an internship at Peng Cheng Laboratory. †Correspondence to: Xianming Liu <csxm@hit.edu.cn> Despite their effectiveness and popularity, the existing margin-based losses were developed through heuristic means, as opposed to rigorous mathematical principles, modeling, and analysis. Although they offer geometric interpretations, which help to understand the underlying intuition, the theoretical explanation and analysis that can guide design and optimization remain vague. Some critical issues are unclear, e.g., why is the normalization of features and prototypes necessary? How can the loss be further improved or adapted to new tasks? This naturally raises a fundamental question: how can one develop a principled mathematical framework for better understanding and design of margin-based loss functions? The goal of this work is to address these questions by formulating the objective as learning towards the largest margins and offering rigorous theoretical analysis as well as extensive empirical results to support this point. To obtain an optimizable objective, we first define measures of intra-class compactness and inter-class separability. To this end, we propose to employ the class margin as the measure of inter-class separability, defined as the minimal pairwise angular distance between prototypes, which reflects the angular margin of the two closest prototypes. Moreover, we define the sample margin following the classic approach in (Koltchinskii et al., 2002, Sec. 5), which denotes the difference between a sample's similarity to the prototype of its own class and its similarity to the nearest prototype of the other classes, and thus measures intra-class compactness. We provide a rigorous theoretical guarantee that maximizing the minimal sample margin over the entire dataset leads to maximizing the class margin, regardless of feature dimension, class number, and class balancedness. This indicates that the sample margin can also measure inter-class separability. According to the defined measures, we claim that, to encourage discriminative feature representations, the loss function should promote the largest possible margins for both classes and samples, which also serves to tighten the margin-based generalization bounds in (Kakade et al., 2008; Cao et al., 2019). The main contributions of this work are highlighted as follows: • For a better understanding of margin-based losses, we provide a rigorous analysis of the necessity of normalization of prototypes and features. Moreover, we propose a generalized margin softmax loss (GM-Softmax), from which most existing margin-based losses can be derived. We prove that, in the class-balanced case, learning with the GM-Softmax loss leads to maximizing both the class margin and the sample margin under mild conditions. • We show that learning with existing margin-based loss functions, such as SphereFace, NormFace, CosFace, AM-Softmax, and ArcFace, would share the same optimal solution. In other words, all of them attempt to learn towards the largest margins, even though they are tailored to obtain different desired margins with explicit decision boundaries. However, these losses do not always maximize margins under different hyper-parameter settings.
| This paper developed a principled mathematical framework for a better understanding and design of loss functions.
Based on the class and sample margins, the proposed method formulates the objective as learning towards the largest margins, and offers rigorous theoretical analysis as support. For class-balanced cases, this paper proposes an explicit sample margin regularization term and a novel largest margin softmax loss; for class-imbalanced cases, it proposes a simple but effective zero-centroid regularization term, which restricts the centroid of prototypes to be zero. Extensive experimental results demonstrate that the proposed strategy significantly improves performance in accuracy and margins for various tasks. | SP:b2b55e3381b0ab12e42e9498701645749764ccd8 |
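The sample margin described above (similarity to the true-class prototype minus similarity to the nearest rival prototype) can be sketched as follows. This is one natural cosine-similarity instantiation of the verbal definition; the paper's exact formula, following Koltchinskii et al. (2002), may differ in detail:

```python
import numpy as np

def sample_margins(Z, W, y):
    """Per-sample margin on l2-normalized features and prototypes:
    similarity to the true-class prototype minus the similarity to the
    nearest rival prototype. Positive values mean confidently correct
    predictions; the minimum over the dataset is the quantity maximized
    in the paper's analysis."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    cos = Zn @ Wn                                 # (N, k) cosine similarities
    idx = np.arange(len(y))
    true_sim = cos[idx, y]
    rival = cos.copy()
    rival[idx, y] = -np.inf                       # exclude the true class
    return true_sim - rival.max(axis=1)
```

With orthonormal prototypes, a feature aligned with its own class prototype has margin 1, while a feature aligned with a rival prototype has a negative margin.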
Learning Towards The Largest Margins
| This paper develops a principled mathematical framework for better understanding and design of margin-based loss functions, where the principled optimization objective is formulated as learning towards the largest margins. In the proposed method, the class margin and the sample margin are defined as measures of inter-class separability and intra-class compactness, respectively. Furthermore, sample-margin regularization and zero-centroid regularization are introduced for the class-balanced case and the class-imbalanced case, respectively. Experimental results show the effectiveness of the proposed method on imbalanced classification, person re-identification, and face verification. | SP:b2b55e3381b0ab12e42e9498701645749764ccd8 |
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation | 1 INTRODUCTION. Deep neural networks achieve impressive performance on test data that has the same distribution as the training data. Nevertheless, they often exhibit a large performance drop on test (target) data that differs from the training (source) data; this effect is known as data shift (Quiñonero-Candela et al., 2009) and can be caused, for instance, by image corruptions. Different methods exist to improve the robustness of a model during training (Geirhos et al., 2019; Hendrycks et al., 2019; Tzeng et al., 2017). However, generalization to different data shifts is limited, since it is infeasible to include sufficiently many augmentations during training to cover the excessively wide range of potential data shifts (Mintun et al., 2021a). Alternatively, in order to generalize to the data shift at hand, the model can be adapted at test time. Unsupervised domain adaptation methods such as Vu et al. (2019) use both source and target data to improve model performance at test time. In general, source data might not be available during inference, e.g., due to legal constraints (privacy or profit). Therefore, we focus on the fully test-time adaptation setting (Wang et al., 2020): the model is adapted to the target data during test time, given only the parameters of an arbitrarily pretrained model and unlabeled target data that shares the same label space as the source data. We extend the work of Wang et al. (2020) by introducing a novel loss function, using a diversity regularizer, and prepending a parametrized input transformation module to the network. We show that our approach outperforms previous works and makes pretrained models robust against common corruptions on image classification benchmarks such as ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020). Sun et al.
(2020) investigate test-time adaptation using a self-supervision task. Wang et al. (2020) and Liang et al. (2020) use an entropy minimization loss, which uses maximization of prediction confidence as the self-supervision signal during test-time adaptation. Wang et al. (2020) have shown that such a loss yields better adaptation than a proxy task (Sun et al., 2020). When using entropy minimization, however, high-confidence predictions no longer contribute significantly to the loss and thus provide little self-supervision. This is a drawback, since high-confidence samples provide the most trustworthy self-supervision. We mitigate this by introducing two novel loss functions that ensure that gradients of samples with high-confidence predictions do not vanish and that learning based on self-supervision from these samples continues. Our losses do not focus on minimizing entropy but on minimizing the negative log-likelihood ratio between classes; the two variants differ in using either soft or hard pseudo-labels. In contrast to entropy minimization, the proposed loss functions provide non-saturating gradients, even for highly confident predictions. Figure 1 provides an illustration of the losses and the resulting gradients. Using these new loss functions, we are able to improve network performance under data shifts in both online and offline adaptation settings. In general, self-supervision by confidence maximization can lead to collapsed trivial solutions, i.e., the network predicting only a single class or a subset of classes independent of the input. To overcome this issue, a diversity regularizer (Liang et al., 2020; Wu et al., 2020) can be used, which acts on a batch of samples. It encourages the network to make diverse class predictions on different samples.
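The saturation issue with entropy minimization can already be seen in a two-class toy calculation. This is only an illustrative binary analogue (our own, not the paper's multi-class formulation): the gradient of the binary entropy vanishes for confident predictions, while a negative log-likelihood-ratio loss keeps a constant gradient:

```python
import numpy as np

def entropy_grad(u):
    """d/du of the binary entropy H(sigmoid(u)), i.e. the self-supervision
    signal of entropy minimization; it vanishes as |u| grows (confident
    predictions stop contributing)."""
    p = 1.0 / (1.0 + np.exp(-u))
    return p * (1.0 - p) * np.log((1.0 - p) / p)

def log_ratio_grad(u):
    """d/du of the negative log-likelihood ratio -log(p/(1-p)) = -u:
    a constant, non-saturating gradient even for confident predictions."""
    return -np.ones_like(np.asarray(u, dtype=float))
```

At logit u = 10 (prediction confidence above 0.9999), the entropy gradient is already three orders of magnitude smaller than near the decision boundary, whereas the log-ratio gradient stays at −1.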
We extend this regularizer with a moving average, in order to include the history of previous batches, and show that this stabilizes the adaptation of the network to unlabeled test samples. Furthermore, we introduce a parametrized input transformation module, which we prepend to the network. The module is trained in a fully test-time adaptation manner using the proposed loss function, without using source data or target labels. It aims to partially undo the data shift at hand and helps to further improve performance on image classification benchmarks with corruptions. Since our method does not change the training process, it allows the use of any pretrained model. This is beneficial because any well-performing pretrained network can be readily reused, e.g., a network trained on proprietary data not available to the public. We show that our method significantly improves the performance of different pretrained models that were trained on clean ImageNet data. In summary, our main contributions are as follows: we propose non-saturating losses based on the negative log-likelihood ratio, such that gradients from high-confidence predictions still contribute to test-time adaptation. We extend the diversity regularizer to a moving average that includes the history of previous batches to prevent the model from collapsing to trivial solutions. We also introduce an input transformation module, which partially undoes the data shift at hand. We show that the performance of different pretrained models can be significantly improved on ImageNet-C and ImageNet-R. 2 RELATED WORK. Common image corruptions are potentially stochastic image transformations motivated by real-world effects that can be used for evaluating a model's robustness. One such benchmark, ImageNet-C (Hendrycks & Dietterich, 2019), contains simulated corruptions such as noise, blur, weather effects, and digital image transformations. Additionally, Hendrycks et al.
(2020) proposed three datasets containing real-world distribution shifts, including ImageNet-R. Most proposals for improving robustness involve special training protocols, requiring time and additional resources. This includes data augmentation with Gaussian noise (Ford et al., 2019; Lopes et al., 2019; Hendrycks et al., 2020), CutMix (Yun et al., 2019), AugMix (Hendrycks et al., 2019), training on stylized images (Geirhos et al., 2019; Kamann et al., 2020), or training against adversarial noise distributions (Rusak et al., 2020a). Mintun et al. (2021b) pointed out that many improvements on ImageNet-C are due to data augmentations that are too similar to the test corruptions, that is, overfitting to ImageNet-C occurs. Thus, the model might be less robust to corruptions not included in the test set of ImageNet-C. Unsupervised domain adaptation methods train a joint model of source and target domain with cross-domain losses to find more general and robust features, e.g., by optimizing feature alignment between domains (Quiñonero-Candela et al., 2008; Sun et al., 2017), adversarial invariance (Ganin & Lempitsky, 2015; Tzeng et al., 2017; Ganin et al., 2016; Hoffman et al., 2018), or shared proxy tasks (Sun et al., 2019), or by adapting entropy minimization via an adversarial loss (Vu et al., 2019). While these approaches are effective, they require explicit access to source and target data at the same time, which may not always be feasible. Our approach works with any pretrained model and only needs target data. Test-time adaptation is a setting where training (source) data is unavailable at test time. It is related to source-free adaptation, where several works use generative models, alter training (Kundu et al., 2020; Li et al., 2020b; Kurmi et al., 2021; Yeh et al., 2021), and require several thousand epochs to adapt to the target data (Li et al., 2020b; Yeh et al., 2021).
Besides, there is another line of work (Sun et al., 2020; Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021; Wang et al., 2020) that interprets common corruptions as data shift and aims to improve model robustness against these corruptions with an efficient test-time adaptation strategy that facilitates online adaptation. Such settings spare the cost of additional computational overhead. Our work also falls into this line of research and aims to adapt the model to common corruptions efficiently with both online and offline adaptation. Sun et al. (2020) update feature extractor parameters at test time via a self-supervised proxy task (predicting image rotations). However, Sun et al. (2020) alter the training procedure by including the proxy loss into the optimization objective as well; hence, arbitrary pretrained models cannot be used directly for test-time adaptation. Inspired by domain adaptation strategies (Maria Carlucci et al., 2017; Li et al., 2016), several works (Schneider et al., 2020; Nado et al., 2021; Benz et al., 2021) replace the estimates of Batch Normalization (BN) activation statistics with the statistics of the corrupted test images. Fully test-time adaptation, studied by Wang et al. (2020) (TENT), uses entropy minimization to update the channel-wise affine parameters of BN layers on corrupted data along with the batch statistics estimates. SHOT (Liang et al., 2020) also uses entropy minimization and a diversity regularizer to avoid collapsed solutions. SHOT modifies the model from the standard setting by adopting weight normalization at the fully connected classifier layer during training to facilitate its pseudo-labeling technique. Hence, SHOT is not readily applicable to arbitrary pretrained models. We show that pure entropy minimization (Wang et al., 2020; Liang et al., 2020) as well as alternatives such as the max square loss (Chen et al., 2019) and the Charbonnier penalty (Yang & Soatto, 2020) results in vanishing gradients for high-confidence predictions, thus inhibiting learning. Our work addresses this issue by proposing a novel non-saturating loss that provides non-vanishing gradients for high-confidence predictions. We show that our proposed loss function improves network performance through test-time adaptation. In particular, performance on corruptions of higher severity improves significantly. Furthermore, we add and extend the diversity regularizer (Liang et al., 2020; Wu et al., 2020) to avoid collapse to trivial, high-confidence solutions. Existing diversity regularizers (Liang et al., 2020; Wu et al., 2020) act on a batch of samples; hence, the number of classes has to be smaller than the batch size. We mitigate this problem by extending the regularizer to a moving-average version. Li et al. (2020a) also use a moving average to estimate the entropy of the unconditional class distribution, but source data is used to estimate the gradient of the entropy. In contrast, our work does not need access to the source data, since the gradient is estimated using only target data. Prior work (Tzeng et al., 2017; Rusak et al., 2020b; Talebi & Milanfar, 2021) transformed inputs by an additional module to overcome domain shift, obtain robust models, or learn to resize. In our work, we prepend an input transformation module to the model, but in contrast to former works, this module is trained purely at test time to partially undo the data shift at hand and aid the adaptation. | Test-time adaptation by entropy minimization can help models adapt to dataset shifts like corruptions without altering training. This work extends TENT, an entropy minimization method, by proposing alternative non-saturating losses, adding a diversity regularizer, and adapting the input data along with the model parameters.
The input is adapted by applying a convolutional image transformation model between the input and the classification model. These extensions do not need more optimization iterations or supervision than the baselines: the method adapts online and efficiently without auxiliary supervision. Experiments on the corruption benchmark ImageNet-C and the newer benchmark ImageNet-R report reduced generalization error. The improvements are present but marginal, and they are consistent across multiple baseline architectures (ResNet, DenseNet, MobileNet, etc.). However, clean accuracy is reduced, so the proposed method does not strictly dominate prior work. | SP:b4f4d7fa65a97351a51241ef5aca4f8315c35c5b |
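The saturation issue this paper targets can be illustrated in the binary case with a few lines of plain Python (an illustrative sketch, not the paper's multi-class soft/hard pseudo-label losses): with p = sigmoid(z), the logit-space gradient of the entropy loss vanishes for confident predictions, while the negative log likelihood ratio reduces to -z, whose gradient is constant.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def entropy_grad(z):
    # Entropy loss H(p) = -p*log(p) - (1-p)*log(1-p) with p = sigmoid(z).
    # dH/dz = -z * p * (1-p): the factor p*(1-p) decays exponentially in |z|,
    # so confident samples contribute almost no gradient.
    p = sigmoid(z)
    return -z * p * (1.0 - p)

def soft_lr_grad(z):
    # Negative log likelihood ratio, binary case:
    # L = -log(p / (1-p)) = -z, so dL/dz = -1: non-saturating everywhere.
    return -1.0
```

At z = 12 (a very confident prediction) the entropy gradient is below 1e-4 in magnitude, while the likelihood-ratio gradient is unchanged, matching the argument that confident samples should keep providing self-supervision.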
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation | 1 INTRODUCTION. Deep neural networks achieve impressive performance on test data that has the same distribution as the training data. Nevertheless, they often exhibit a large performance drop on test (target) data that differs from the training (source) data; this effect is known as data shift (Quionero-Candela et al., 2009) and can be caused, for instance, by image corruptions. There exist different methods to improve the robustness of the model during training (Geirhos et al., 2019; Hendrycks et al., 2019; Tzeng et al., 2017). However, generalization to different data shifts is limited, since it is infeasible to include sufficiently many augmentations during training to cover the excessively wide range of potential data shifts (Mintun et al., 2021a). Alternatively, in order to generalize to the data shift at hand, the model can be adapted during test time. Unsupervised domain adaptation methods such as Vu et al. (2019) use both source and target data to improve model performance at test time. In general, source data might not be available at inference time, e.g., due to legal constraints (privacy or profit). Therefore, we focus on the fully test-time adaptation setting (Wang et al., 2020): the model is adapted to the target data during test time given only arbitrarily pretrained model parameters and unlabeled target data that share the same label space as the source data. We extend the work of Wang et al. (2020) by introducing a novel loss function, using a diversity regularizer, and prepending a parametrized input transformation module to the network. We show that our approach outperforms previous works and makes pretrained models robust against common corruptions on image classification benchmarks such as ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020). | The paper proposes a new loss to improve test-time BN adaptation for domain adaptation. The proposed loss consists of two components: the diversity maximization loss and the confidence maximization loss. Specifically, they use a running estimate for the diversity loss based on KL divergence.
They propose the hard and soft likelihood ratio for the confidence loss which has large gradients for high confidence predictions. | SP:b4f4d7fa65a97351a51241ef5aca4f8315c35c5b |
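The "running estimate for the diversity loss" can be sketched as follows (plain Python; the class name, momentum value, and exact KL form are illustrative assumptions, not the paper's precise formulation): an exponential moving average of the predicted class marginal is penalized by its KL divergence to the uniform distribution, so the history of previous batches constrains the marginal even when the batch size is smaller than the number of classes.

```python
import math

class MovingAverageDiversityRegularizer:
    """Sketch of a moving-average diversity regularizer: minimizing the
    returned penalty pushes the running class marginal toward uniform,
    discouraging collapse to a single predicted class."""

    def __init__(self, num_classes, momentum=0.9):
        self.num_classes = num_classes
        self.momentum = momentum
        self.running_marginal = [1.0 / num_classes] * num_classes  # start uniform

    def __call__(self, batch_probs):
        # batch_probs: list of per-sample class probability lists.
        n = len(batch_probs)
        batch_marginal = [sum(p[c] for p in batch_probs) / n
                          for c in range(self.num_classes)]
        # EMA over batches carries history, so batches smaller than the
        # number of classes can still constrain the marginal.
        m = self.momentum
        self.running_marginal = [m * r + (1 - m) * b
                                 for r, b in zip(self.running_marginal, batch_marginal)]
        # KL(running marginal || uniform): zero iff perfectly diverse.
        u = 1.0 / self.num_classes
        return sum(r * math.log(r / u) for r in self.running_marginal if r > 0)
```

On a batch of uniform predictions the penalty is zero; feeding repeated collapsed batches (all mass on one class) makes the penalty grow as the running marginal drifts away from uniform.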
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation | In the spirit of full disclosure: I have recently reviewed this paper, and several parts of my previous review are still applicable, thus I am copying in these parts when appropriate. This paper presents a method for test time adaptation based on several techniques.
These include a self-supervised adaptation objective based on log likelihood ratios, an additional regularizing objective to encourage diverse predictions, and an input transformation module that is also trained with the aforementioned objectives. Together, these techniques lead to better performance on ImageNet-C and ImageNet-R compared to Tent, a recent and similar test time adaptation method based on entropy minimization. | SP:b4f4d7fa65a97351a51241ef5aca4f8315c35c5b |
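TENT-style methods additionally replace the stored Batch Normalization statistics with those of the current test batch. A minimal 1-D sketch (plain Python, illustrative only) shows why this helps: normalizing with the current batch's statistics removes a global mean shift, so downstream layers see the same activations for clean and shifted inputs.

```python
def bn_adapt(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Test-time BN sketch: normalize a 1-D feature batch with statistics
    computed on the current (possibly corrupted) batch, instead of the
    running statistics stored at training time."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [gamma * (v - mean) / (var + eps) ** 0.5 + beta for v in x]

clean = [1.0, 2.0, 3.0, 4.0]
shifted = [v + 10.0 for v in clean]  # a simple "corruption": a global shift
out_clean, out_shifted = bn_adapt(clean), bn_adapt(shifted)
# Both batches normalize to the same activations, so downstream layers
# are insulated from the shift.
```

TENT goes one step further and also optimizes the affine parameters (gamma, beta here) by entropy minimization, while the rest of the network stays frozen.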
Image-to-Image MLP-mixer for Image Reconstruction | 1 INTRODUCTION. Deep neural networks have emerged as highly successful tools for image and signal reconstruction, restoration, and manipulation. They achieve state-of-the-art image quality on tasks like denoising, super-resolution, image reconstruction from few and noisy measurements, and image generation. Current state-of-the-art image reconstruction networks are convolutional. Convolutional neural networks (CNNs) achieve better denoising image quality than classical methods such as BM3D (Zhang et al., 2017; Brooks et al., 2019). They also perform excellently on many other imaging problems, including computed tomography (McCann et al., 2017) and accelerated magnetic resonance imaging (MRI) (Zbontar et al., 2018). For example, all top-performing methods at the FastMRI competition, a challenge for accelerated magnetic resonance imaging (Zbontar et al., 2018; Knoll et al., 2020), are CNNs. For the related problem of image classification, CNNs are also state-of-the-art. However, recent work has shown that new non-convolutional networks can perform comparably when trained on huge datasets. For instance, the vision transformer (Dosovitskiy et al., 2021) is an attention-based architecture without convolutions that achieves excellent classification accuracy when pre-trained on very large datasets. Most recently, networks solely based on multi-layer perceptrons (MLPs) were proposed, including the MLP-mixer (Tolstikhin et al., 2021; Liu et al., 2021a; Chen et al., 2021). Trained on a huge dataset, the MLP-mixer performs almost as well as the best convolutional architectures while having lower computational costs at inference. Non-convolutional architectures such as the ViT and the MLP-mixer impose a lower inductive bias than CNNs.
This inductive bias enables CNNs to perform well when little to moderate amounts of training data are available, but might limit performance if abundant data is available. Motivated by this development, and by the simplicity of the MLP-mixer, we propose and study a variant of the MLP-mixer for image reconstruction tasks, with the premise that such a network can give better image quality than convolutional networks if trained on sufficient data. The architecture of the image-to-image MLP-mixer is depicted in Figure 1. The image-to-image MLP-mixer differs from the MLP-mixer in that it retains the relative positions of the patches, which leads to significantly better performance on image reconstruction tasks. Our results show that the image-to-image mixer can outperform a state-of-the-art image reconstruction architecture, the U-net (Ronneberger et al., 2015), by a small margin. We show that the gap in performance between the image-to-image mixer and a U-net increases with the number of training images and the model size (see Figures 2 and 3). We also show that, even in the regime of relatively few training images, the image-to-image MLP-mixer slightly outperforms a U-net of similar size in image quality, for denoising images perturbed with Gaussian noise, for denoising images perturbed by real-world camera noise, and for compressed-sensing reconstruction in magnetic resonance imaging. Phrased differently, to achieve the same denoising performance, the image-to-image MLP-mixer requires fewer parameters (see Figure 2). The image-to-image MLP-mixer also outperforms, at denoising, a vision transformer tailored to image-to-image tasks, as well as BM3D, a classical untrained denoising algorithm. 2 IMAGE-TO-IMAGE MLP-MIXER NETWORK ARCHITECTURE. In this section, we introduce an image-to-image MLP-mixer architecture that builds on the original MLP-mixer (Tolstikhin et al., 2021).
The image-to-image MLP-mixer operates on linearly transformed image patches, just like the MLP-mixer, as illustrated in Figure 1. However, contrary to the MLP-mixer, the image-to-image mixer imposes some structure by retaining the spatial order of image patches, which turns out to be critical for image reconstruction performance. We start by splitting the image into non-overlapping patches of size P × P × 3 (our default choice is P = 4). Each patch is viewed as a vector of dimension 3P² that is linearly transformed with the same trainable matrix to a space of arbitrary embedding dimension C. This patch-embedding step thus transforms an image of dimension H × W × 3 (or H × W × 1 for greyscale images) to a volume of dimension H/P × W/P × C. The patch-embedding step retains the relative positions of the patches in the image. The MLP-mixer and the vision transformer (Tolstikhin et al., 2021; Dosovitskiy et al., 2021) also split an image into patches and linearly project the patches, as do several other architectures, for example the Swin transformer (Liu et al., 2021b). We then apply an MLP-mixer layer inspired by the original MLP-mixer module. This MLP-mixer layer mixes the tensor in the height dimension, then in the width dimension, and finally in the channel dimension. Mixing in the channel dimension means viewing the tensor of dimension H/P × W/P × C as a collection of H/P · W/P vectors of dimension C and passing each of them through the same MLP, consisting of a linear layer, followed by a GeLU non-linearity, and then another linear layer. The hidden-layer dimension is the input dimension of the respective vector multiplied by a factor of f. We also add skip connections and layer norms to help with the optimization. A mixer layer does not alter the dimensions of the input volume. After N such mixer layers, the volume is transformed back to an image via a patch expansion step.
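The mixing order described above (height, then width, then channels, each via a shared two-layer MLP with a skip connection) can be sketched at the shape level as follows. This is an illustrative NumPy sketch, not the authors' implementation: the helper names `mlp` and `mixer_layer` are hypothetical, the weights are random rather than trained, and layer norm is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, d_hidden):
    """Two-layer MLP with a GeLU non-linearity, applied along the last axis.
    Weights are random here; in training they would be learned and shared."""
    d_in = x.shape[-1]
    w1 = rng.standard_normal((d_in, d_hidden)) * 0.02
    w2 = rng.standard_normal((d_hidden, d_in)) * 0.02
    h = x @ w1
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))  # tanh GeLU approx
    return h @ w2

def mixer_layer(x, f=2):
    """One image-to-image mixer layer on a volume of shape (H/P, W/P, C):
    mix along height, then width, then channels, each with a skip connection."""
    hp, wp, c = x.shape
    # height mixing: move axis 0 to the end, apply the MLP over vectors of length H/P
    x = x + np.moveaxis(mlp(np.moveaxis(x, 0, -1), f * hp), -1, 0)
    # width mixing: same idea over vectors of length W/P
    x = x + np.moveaxis(mlp(np.moveaxis(x, 1, -1), f * wp), -1, 1)
    # channel mixing: the MLP acts directly on the last (channel) axis
    x = x + mlp(x, f * c)
    return x

vol = rng.standard_normal((64, 64, 32))  # e.g. a 256 x 256 image with P = 4, C = 32
out = mixer_layer(vol)
print(out.shape)  # (64, 64, 32): a mixer layer does not alter the input dimensions
```

Note how each mixing step is just an axis permutation followed by the same shared MLP, which is what keeps the layer's output shape identical to its input shape.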
The patch expansion step transforms the volume consisting of flattened patches, each of dimension C, back to an image of dimension H × W × 3 as follows: First, we linearly transform each patch of dimension C to a patch of dimension CP² using a shared linear transformation. This maps the volume of shape H/P × W/P × C to a volume of shape H/P × W/P × CP². Second, we reshape the volume to a volume of shape H × W × C, and finally transform this volume to an image of shape H × W × 3 by linearly combining the layers (which can be implemented with a 1 × 1 convolution). A similar patch expansion step has been used by the Swin U-net transformer (Cao et al., 2021). The main difference between our image-to-image MLP-mixer architecture and the original MLP-mixer is that we transform the image to a 3D tensor instead of a 2D tensor, and the mixer layer is modified to act on a 3D volume. This modification retains the relative location of the patches in the 3D volume, which induces an inductive bias enabling the image-to-image MLP-mixer to perform very well when trained on relatively few images. As we show later in Section 3.4, the inductive bias is less than that of a convolutional network, but more than that of the original MLP-mixer. A further difference of the image-to-image mixer over the original MLP-mixer is the scaling of the number of parameters: the trainable parameters of the token mixing in the original mixer scale as O(H²W²), while those of the height and width mixing of the image-to-image MLP-mixer scale as O(H² + W²). The linear scaling in image resolution of the image-to-image MLP-mixer keeps the total number of trainable parameters low and the architecture memory-efficient. 3 EXPERIMENTS. We evaluate the performance of the image-to-image mixer for a variety of image reconstruction problems.
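The O(H²W²) versus O(H² + W²) scaling can be made concrete with a small count, under the simplifying assumption that we compare only the sizes of the token-mixing weight matrices and ignore hidden-layer widths and constant factors:

```python
# The original MLP-mixer mixes all (H/P)*(W/P) tokens jointly, so its token-mixing
# matrix has (number of tokens)^2 entries, i.e. O(H^2 W^2). The image-to-image
# mixer mixes height and width separately, i.e. O(H^2 + W^2).
P = 4
for H in (256, 512, 1024):
    W = H
    n_tokens = (H // P) * (W // P)
    joint = n_tokens ** 2                        # original mixer
    separable = (H // P) ** 2 + (W // P) ** 2    # image-to-image mixer
    print(f"{H}x{W}: joint={joint:,}  separable={separable:,}  ratio={joint // separable:,}")
```

Already at 256 × 256 resolution the separable mixing matrices are thousands of times smaller, and the gap widens quadratically with resolution, which is why the image-to-image mixer stays memory-efficient.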
We focus on image denoising as it is considered to be a fundamental image reconstruction problem, for its practical importance, and since a good denoiser typically serves as a building block for other tasks such as recovering images from few and noisy measurements. For example, a state-of-the-art approach for reconstructing an image from few and noisy linear measurements is a so-called variational network, which uses a denoiser as a building block (Sriram et al., 2020). Reconstructing an image from few and noisy linear measurements is an important inverse problem that arises in accelerated magnetic resonance imaging and sparse-view computed tomography. Baseline methods: We compare the denoising performance of the image-to-image MLP-mixer to three baselines. BM3D (Dabov et al., 2007) is a standard and well-performing denoising algorithm that does not rely on any training data. The U-net (Ronneberger et al., 2015) is a standard image-to-image convolutional network that is a go-to architecture for image reconstruction problems; the U-net performs slightly better than a standard multi-layer convolutional network for image denoising (Brooks et al., 2019), for example better than the multi-layer convolutional network proposed by Zhang et al. (2017). We also compare to the vision transformer (Dosovitskiy et al., 2021), which we adapted for image recovery tasks as follows: we discarded the classification token and replaced the classification head by a linear layer that maps each element of the transformer output to a corresponding image patch. All networks (the image-to-image MLP-mixer, U-net, and ViT) are trained in the same fashion, as described next. 3.1 GAUSSIAN DENOISING. We first consider the problem of removing Gaussian noise from ImageNet color images (Deng et al., 2009). We constructed a dataset as follows: we collected images of different classes from ImageNet and center-cropped them to a size of 256 × 256 × 3.
We then added zero-mean Gaussian noise of standard deviation σ = 30 to each image channel independently, resulting in a dataset consisting of pairs of a noisy image y_i = x_i + z_i and the corresponding clean image x_i. Here, z_i is the Gaussian noise. The noisy images have a peak signal-to-noise ratio (PSNR) of 19 dB. We trained the image-to-image MLP-mixer f_θ with trainable parameters θ (and the baseline architectures) to map the noisy image to the noise by minimizing the loss function L(θ) = (1/n) ∑_{i=1}^{n} (1/2) ‖y_i − f_θ(y_i) − x_i‖₂². Here, n is the total number of training images. At inference, we are given a noisy image y and estimate a clean image by subtracting the estimated residual from the noisy observation as x̂ = y − f_θ(y). This is referred to as residual learning (Zhang et al., 2017), because the network learns to predict the residual. Training the network directly to map a noisy image to a clean image also works, but performs worse than residual learning for all architectures considered here. We split the dataset into train and test sets and ensured that images from the same ImageNet class do not occur in both sets simultaneously. This guarantees that the network is not just learning to denoise a specific class. In Figure 2, we depict the denoising performance of the different architectures as a function of the number of training examples, ranging from 1,000 to 100,000 training images, with constant model size, and as a function of the number of parameters, with constant training-set size. The plots show that even in the regime of small training data (left: 4,000 images) and small model size (middle: 3 million parameters), the image-to-image mixer can outperform the U-net. Thus, the image-to-image MLP-mixer is more parameter-efficient in that it outperforms the U-net with fewer parameters; for example, a 3M-parameter version of the image-to-image mixer performs slightly better than a 12M-parameter version of the U-net.
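The residual-learning objective and the inference step can be sketched as below. The predictors here are placeholders for illustration (an oracle that returns the exact noise, and a zero predictor), not trained networks, and the helper names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_loss(f, noisy, clean):
    """L(theta) = (1/n) * sum_i 0.5 * ||y_i - f(y_i) - x_i||_2^2:
    the network f predicts the noise (the residual), not the clean image."""
    n = len(noisy)
    return sum(0.5 * np.sum((y - f(y) - x) ** 2) for y, x in zip(noisy, clean)) / n

def denoise(f, y):
    """At inference, subtract the predicted residual from the noisy observation."""
    return y - f(y)

x = rng.uniform(0.0, 1.0, (16, 16))     # toy 'clean' image with values in [0, 1]
z = rng.normal(0.0, 30 / 255, x.shape)  # Gaussian noise, sigma = 30 on a 255 scale
y = x + z                               # noisy observation

oracle = lambda y_: z                    # predicts the exact noise
zero = lambda y_: np.zeros_like(y_)      # predicts no noise at all

print(residual_loss(oracle, [y], [x]))   # ~0, up to floating-point error
print(residual_loss(zero, [y], [x]))     # positive: 0.5 * ||z||^2
print(np.allclose(denoise(oracle, y), x))  # True
```

The oracle drives the loss to zero and recovers the clean image exactly, which is the target a trained residual network approximates.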
Most importantly, Figure 2 shows that the image-to-image mixer scales better than the U-net when both the dataset size and the model size grow. Moreover, the right panel in Figure 2 shows that for large models (24M) the gap in performance between the image-to-image mixer and the U-net increases: the U-net shows a relatively smaller accuracy improvement when increasing the training-set size from 10k to 100k. Thus, we expect even larger improvements when moving to even larger datasets. In the experiment, the model parameters of the image-to-image mixer are varied by changing the number of channels and the hidden dimension of the MLPs. The exact hyperparameter configurations are in Table 3 in the appendix. For the U-net, we increased the model size by increasing the number of channels, and for the ViT we increased the model size by increasing both its depth and width. | The paper proposes the use of a modified MLP-mixer for image restoration. It largely follows the original MLP-mixer architecture, except that the token-mixing layers (which go from all tokens to all tokens) are replaced with a pair of separable height-wise and width-wise mixing layers. The final token/patch embeddings are then reshaped back to a pixel grid and used to reconstruct intensities with a 1x1 conv. The paper compares the performance of using such an architecture for denoising (and compressive reconstruction) vs. ViT and U-nets. | SP:d6a0881fb2cb128b8b27f1c2c2e3ffe33d50120b
Image-to-Image MLP-mixer for Image Reconstruction | [same paper text as the first row for this paper] | This paper introduces an MLP-mixer based solution for the image inverse problem. The architecture is mostly inspired by the MLP-mixer and expanded its usage to the image to image domain. The paper applied the architecture to the image denoising problem and compared it with U-net, ViT, and BM3D. | SP:d6a0881fb2cb128b8b27f1c2c2e3ffe33d50120b
Image-to-Image MLP-mixer for Image Reconstruction | 1 INTRODUCTION . Deep neural networks have emerged as highly successful tools for image and signal reconstruction , restoration , and manipulation . They achieve state-of-the-art image quality on tasks like denoising , super-resolution , image reconstruction from few and noisy measurements , and image generation . Current state-of-the-art image reconstruction networks are convolutional . Convolutional neural networks ( CNNs ) achieve better denoising image quality than classical methods such as BM3D ( Zhang et al. , 2017 ; Brooks et al. , 2019 ) . They also perform excellent on many other imaging problems including computed tomography ( McCann et al. , 2017 ) and accelerated magnetic resonance imaging ( MRI ) ( Zbontar et al. , 2018 ) . For example , all top-performing methods at the FastMRI competition , a challenge for accelerated magnetic resonance imaging ( Zbontar et al. , 2018 ; Knoll et al. , 2020 ) , are CNNs . For the related problem of image classification CNNs are also state-of-the-art . However , recent work has shown that new non-convolutional networks can perform comparable when trained on huge datasets . For instance , the vision transformer ( Dosovitskiy et al. , 2021 ) is an attention-based architecture without convolutions that achieves excellent classification accuracy when pre-trained on very large datasets . Most recently , networks solely based on multi-layer perceptrons ( MLPs ) were proposed , including the MLP-mixer ( Tolstikhin et al. , 2021 ; Liu et al. , 2021a ; Chen et al. , 2021 ) . Trained on a huge dataset , the MLP-mixer performs almost as well as the best convolutional architectures while having lower computational costs at inference . Non-convolutional architectures such as the ViT and MLP-mixer impose a lower inductive bias than CNNs . 
This inductive bias enables CNNs to perform well when little to moderate amounts of training data are available , but might limit performance if abundant data is available . Motivated by this development , and by the simplicity of the MLP-mixer , we propose and study a variant of the MLP-mixer for image reconstruction tasks , with the premise that such a network can give better image quality than convolutional networks if trained on sufficient data . The architecture of the image-to-image MLP-mixer is depicted in Figure 1 . The image-to-image MLP-mixer differs to the MLP mixer in that it retains the relative positions of the patches , which leads to significantly better performance for image reconstruction tasks . Our results show the image-to-image mixer can outperform a state-of-the-art image reconstruction architecture , the U-net Ronneberger et al . ( 2015 ) , by a small margin . We show that the gap in performance between the image-to-image mixer and a U-net increases with the number of training images and the model size ( see Figures 2 and 3 ) . We also show that , even in the regime of relatively few training images , the image-to-image MLP-mixer slightly outperforms a U-net of similar size in image quality , both for denoising images perturbed with Gaussian noise , denoising images perturbed by real-world camera noise , and for compressed sensing reconstruction in magnetic resonance imaging . Phrased differently , to achieve the same denoising performance , the image-to-image MLP-mixer requires fewer parameters ( see Figure 2 ) . The MLP-mixer also outperforms a vision transformer tailored to image-to-image tasks , and BM3D , a classical un-trained denoising algorithm at denoising . 2 IMAGE-TO-IMAGE MLP-MIXER NETWORK ARCHITECTURE . In this section , we introduce an image-to-image MLP-mixer architecture that builds on the original MLP-mixer ( Tolstikhin et al. , 2021 ) . 
The image-to-image MLP-mixer operates on linearly transformed image patches , just like the MLP-mixer , as illustrated in Figure 1 . However , contrary to the MLP-mixer , the image-to-image mixer imposes some structure by retaining the spacial order of image patches , which turns out to be critical for image reconstruction performance . We start by splitting the image into non-overlapping patches of size P × P × 3 ( our default choice is P = 4 ) . Each patch is viewed as a vector of dimension 3P 2 that is linearly transformed with the same trainable matrix to a space of arbitrary embedding dimension C. This patch embedding step thus transforms an image of dimension H ×W × 3 ( or H ×W × 1 for greyscale images ) to a volume of dimension H/P ×W/P × C. The patch embedding step retains the relative positions of the patches in the image . The MLP-mixer and the vision transformer ( Tolstikhin et al. , 2021 ; Dosovitskiy et al. , 2021 ) also split an image into patches and linearly project the patches , and so do several other architectures for example the swin transformer ( Liu et al. , 2021b ) . We then apply an MLP-mixer layer inspired by the original MLP-mixer module . This MLP-mixer layer mixes the tensor in height dimension , then in width dimension , and finally in channel dimension . Mixing in channel dimension means viewing the tensor of dimension H/P ×W/P × C as a collection of H/P ·W/P vectors of dimension C and passing each of them through the same MLP consisting of a linear layer , followed by a GeLU-non-linearity and then another linear layer . The hidden layer dimension is the input dimension of the respective vector multiplied by a factor of f . We also add skip connections and layer norms to help with the optimization . A mixer layer does not alter the dimensions of the input volume . After N many such mixer layers , the volume is transformed back to an image via a patch expansion step . 
The patch expansion step transforms the volume consisting of flattened patches , each of dimension C , back to an image of dimensionH/P ×W/P ×3 as follows : First , we linearly transform each patch of dimension C to a patch of dimension CP 2 using a shared linear transformation . This maps the volume of shape H/P ×W/P × C to a volume of shape H/P ×W/P × CP 2 . Second , we reshape the volume to a volume of shape H ×W × C , and finally transform this volume to an image of shape H ×W × 3 by linearly combining the layers ( which can be implemented with a 1 × 1 convolution ) . A similar patch expansion step has been used by the Swin U-net Transformer ( Cao et al. , 2021 ) . The main difference between our image-to-image MLP-mixer architecture and the original MLPmixer is that we transform the image to a 3D tensor instead of a 2D tensor , and the mixer layer is modified to act on a 3D volume . This modification retains the relative location of the patches in the 3D volume which induces an inductive bias enabling the image-to-image MLP-mixer to perform 1 very well when trained on relatively few images . As we show later in Section 3.4 , the inductive bias is less than that of a convolutional network , but more than the original MLP-mixer . A further difference of the image-to-image-Mixer over the original MLP-mixer is the scaling of the number of parameters : The trainable parameters of the token mixing in the original mixer are O ( H2W 2 ) , while the height- and width mixing of the image-to-image MLP-mixer are O ( H2 +W 2 ) . The linear scaling in image resolution of the image-to-image MLP-mixer keeps the total number of trainable parameters low and the architecture memory efficient . 3 EXPERIMENTS . We evaluate the performance of the image-to-image mixer for a variety of image reconstruction problems . 
We focus on image denoising as it is considered to be a fundamental image reconstruction problem , for its practical importance , and since a good denoiser typically serves as a building block for other tasks such as recovering images from few and noisy measurements . For example , a state-ofthe-art approach for reconstructing an image from few and noisy linear measurements is a so-called variational network which uses a denoiser as a building block ( Sriram et al. , 2020 ) . Reconstructing an image from few and noisy linear measurements is an important inverse problem that arises in accelerated magnetic resonance imaging and sparse-view computed tomography . Baseline methods : We compare the denoising performance of the image-to-image MLP mixer to three baselines : BM3D ( Dabov et al. , 2007 ) , a standard and well performing denoising algorithm that does not rely on any training data . The U-net ( Ronneberger et al. , 2015 ) , a standard image- to-image convolutional network that is a go-to for image reconstruction problems . The U-net performs slightly better than a standard multi-layer convolutional network for image denoising ( Brooks et al. , 2019 ) ( for example better than the multi-layer convolutional network proposed by Zhang et al . ( 2017 ) ) . We also compare to the vision transformer ( Dosovitskiy et al. , 2021 ) , which we adapted for image recovery tasks as follows . We disposed the classification token and replaced the classification head by a linear layer that maps each element of the transformer output to a corresponding image patch . All networks ( the image-to-image MLP-mixer , U-net , and ViT ) are all trained in the same fashion as described next . 3.1 GAUSSIAN DENOISING . We first consider the problem of removing Gaussian noise from ImageNet color images ( Deng et al. , 2009 ) . We constructed a dataset as follows : We collected images of different classes from ImageNet and center-cropped them to a size of 256 × 256 × 3 . 
We then added zero-mean Gaussian noise of standard deviation σ = 30 to each image channel independently, resulting in a dataset consisting of pairs of a noisy image $y_i = x_i + z_i$ and the corresponding clean image $x_i$. Here, $z_i$ is the Gaussian noise. The noisy images have a peak signal-to-noise ratio (PSNR) of 19 dB. We trained the image-to-image MLP-mixer $f_\theta$ with trainable parameters θ (and the baseline architectures) to map the noisy image to the noise by minimizing the loss function $L(\theta) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{2}\left\| y_i - f_\theta(y_i) - x_i \right\|_2^2$. Here, n is the total number of training images. At inference, we are given a noisy image y and estimate a clean image by subtracting the estimated residual from the noisy observation as $\hat{x} = y - f_\theta(y)$. This is referred to as residual learning (Zhang et al., 2017), because the network learns to predict the residual. Training the network directly to map a noisy image to a clean image also works, but performs worse than residual learning for all architectures considered here. We split the dataset into train and test sets and ensured that images from the same ImageNet class do not appear in both sets simultaneously. This guarantees that the network is not just learning to denoise a specific class. In Figure 2, we depict the denoising performance of the different architectures as a function of the number of training examples, ranging from 1,000 to 100,000 training images with constant model size, and as a function of the number of parameters with constant training set size. The plots show that even in the regime of small training data (left: 4,000 images) and small model size (middle: 3 million parameters), the image-to-image mixer can outperform the U-net. Thus, the image-to-image MLP-mixer is more parameter-efficient in that it outperforms the U-net with fewer parameters, i.e., a 3M version of the image-to-image mixer performs slightly better than a 12M version of the U-net.
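The residual-learning setup above can be sketched as follows; `f_theta` is a hypothetical placeholder network (the real architectures are the mixer, U-net, or ViT) used only to show the loss and the inference rule.

```python
import numpy as np

# Residual learning sketch: the network predicts the noise, and the clean
# image is recovered by subtracting the prediction. f_theta is a hypothetical
# stand-in for any image-to-image network.
rng = np.random.default_rng(0)
x = rng.uniform(size=(4, 16, 16, 3))                 # clean images in [0, 1]
z = (30.0 / 255.0) * rng.standard_normal(x.shape)    # Gaussian noise, sigma = 30/255
y = x + z                                            # noisy observations

def f_theta(y):
    # trivial untrained stand-in: predicts a zero residual
    # (a trained network would predict the noise z)
    return np.zeros_like(y)

# training loss from the text: mean over images of 0.5 * ||y_i - f(y_i) - x_i||^2
loss = np.mean(0.5 * np.sum((y - f_theta(y) - x) ** 2, axis=(1, 2, 3)))

# inference: subtract the predicted residual from the noisy observation
x_hat = y - f_theta(y)
```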
Most importantly, Figure 2 shows that the image-to-image mixer scales better than the U-net when both the dataset size and the model size grow. Moreover, the right panel in Figure 2 shows that for large models (24M) the gap in performance between the image-to-image mixer and the U-net increases: the U-net shows a relatively smaller accuracy improvement when increasing the training set size from 10k to 100k. Thus, we expect even larger improvements when moving to even larger datasets. In the experiment, the model parameters of the image-to-image mixer are varied by changing the number of channels and the hidden dimension of the MLPs. The exact hyperparameter configurations are in Table 3 in the appendix. For the U-net, we increased the model size by increasing the number of channels, and for ViT we increased the model size by increasing both its depth and width. | The paper adapts the recently popular MLP-mixer architecture to the task of image reconstruction. The paper claims that the proposed approach (I2I-mixer) gives better or comparable performance to other SOTA approaches (U-net and ViT) while providing two important advantages: it requires far fewer parameters and learns from fewer training examples. To demonstrate this, experiments have been performed on three image reconstruction tasks: Gaussian denoising, real-world camera noise denoising, and MRI reconstruction. | SP:d6a0881fb2cb128b8b27f1c2c2e3ffe33d50120b |
Unsupervised Disentanglement with Tensor Product Representations on the Torus | 1 INTRODUCTION. Unsupervised learning methods of disentangled representations attempt to recover the explanatory factors z of variation in the data. The recovered representation, c, is expected to be: (i) disentangled, i.e., each element in c should vary only one of the generative factors in z; (ii) complete, in the sense that all generative factors z can be controlled by the latent representation c; and (iii) informative, that is, a low-capacity model (e.g., a linear model) should be able to relate the generative factors z and the latent representation c. It has been shown that solving this task without further assumptions is impossible, since there is an infinite number of bijective functions f that map between c and z = f(c) (Locatello et al., 2019), with all corresponding generative models having the same marginal distributions of the observations. In this work, we propose to use representations that are a tensor product of elements taking values on unit circles S^1, claiming that this structure is suitable for the effective recovery of the underlying modes of variation. This claim follows from entropy considerations. One wishes to have a representation with a low entanglement entropy. The entanglement entropy is measured by evaluating the number of non-zero eigenvalues of the Schmidt decomposition of the representation. If a representation can be described by only one term that is an outer product of orthogonal basis functions, it has the lowest possible entanglement entropy, zero. If a representation requires more than one term in the decomposition, it is entangled, and its entanglement entropy can be quantified, e.g., by the Von Neumann entropy constructed from the non-zero eigenvalues. The tensor product of n unit circles S^1 takes the shape of an n-torus, T^n = (S^1)^n, and has a low-entanglement property.
This representation has the advantage that it can capture the periodic structure of the generative factors, where each circle controls a different aspect of the original distribution. Unlike other generative models, which rely on the Gaussian distribution as a prior, T^n is a compact manifold, and therefore any function that acts on it, such as a decoder, only has to interpolate, not extrapolate, when generating new instances. In this work, we present the TD-VAE, a Variational Auto Encoder whose latent space resides on the torus manifold T^D. In an extensive set of experiments with four different datasets, we compare the torus latent representations with others proposed in the literature. We show a clear advantage of the torus representation in terms of various measures of disentanglement, completeness, and informativeness, and we propose a new metric, the DC-score, which assesses the combined disentanglement and completeness performance. We present a detailed quantitative and qualitative analysis that supports our method. 2 RELATED WORK. Generative models aim to create new instances that are indistinguishable from a given distribution. Auto encoders (Kramer, 1991) achieve this by constructing the identity function as a composition of two components, an encoder and a decoder. The encoder is tasked with compressing the input into a latent representation of dimension smaller than the input's, whereas the decoder is responsible for reconstructing the original input from the latent space. Such auto encoders usually overfit their training data and are very limited when tasked with generating new samples. Instead of mapping each input instance to a constant vector, Variational Auto Encoders (VAE) (Kingma & Welling, 2013) map each instance to a pre-defined distribution, by minimizing both the encoder-decoder reconstruction loss and a KL-divergence term that depends on the prior distribution.
VAEs are capable of generating new instances by sampling a latent vector from the predefined distribution; however, the interpretation of the latent space components is typically obscure, and the mapping between them and the dataset properties may be complex. In order to increase the latent space interpretability, various modifications of the original VAE were introduced. These variants try to achieve a factorization of the latent space components, with each component corresponding to a specific simple characteristic of the dataset. In β-VAE (Higgins et al., 2016), the weight of the KL-divergence term is increased w.r.t. the reconstruction term, which results in better factorization of the latent space. However, this introduces a new hyper-parameter and departs from the theoretical derivation of the ELBO term. For example, large β parameters may prevent the conditional distribution from modeling the data faithfully. DIP-VAE (Kumar et al., 2017) introduces a regularizer that constrains the covariance matrix of the posterior distribution. This term is added to the original VAE loss function, along with two hyper-parameters, and helps achieve better factorization. Factor-VAE (Kim & Mnih, 2018) adds a Total Correlation (TC) approximation penalty over the produced latent codes and introduces an additional hyper-parameter. 2.1 DISENTANGLEMENT METRICS. In order to evaluate the expressiveness of different generative models, a plethora of disentanglement definitions and metrics have been suggested in the literature (Do & Tran, 2019; Eastwood & Williams, 2018); see Locatello et al. (2019) for a comprehensive review. In this work, we adopt the definitions and metrics introduced in Eastwood & Williams (2018), which provide a successful set of disentanglement metrics.
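To make the β-VAE modification concrete, a minimal sketch of its objective (reconstruction plus a β-weighted KL term against a standard normal prior) follows; the diagonal-Gaussian posterior and all tensors here are illustrative assumptions, not any paper's implementation.

```python
import numpy as np

# beta-VAE objective sketch: reconstruction loss plus beta * KL to N(0, I).
def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

rng = np.random.default_rng(0)
x = rng.uniform(size=(4, 10))                    # illustrative data batch
x_recon = x + 0.01 * rng.standard_normal(x.shape)  # stand-in decoder output
mu = 0.1 * rng.standard_normal((4, 2))           # stand-in encoder means
log_var = 0.1 * rng.standard_normal((4, 2))      # stand-in encoder log-variances

beta = 4.0   # beta > 1 strengthens the KL pressure, encouraging factorization
loss = np.mean(np.sum((x - x_recon) ** 2, axis=-1)
               + beta * kl_to_standard_normal(mu, log_var))
```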
The authors of Eastwood & Williams (2018) introduce three metrics, Disentanglement, Completeness and Informativeness (DCI), to quantify the disentanglement properties of the latent space and its ability to characterize the generative factors of a dataset. Assume a dataset generated using a set of K generative factors, $\vec{z} \in \mathbb{R}^K$, from which it is required to learn latent codes $\vec{c} \in \mathbb{R}^D$ of a D-dimensional representation. Ideally, in an interpretable representation, each generative factor $z_i$ would correspond to only one latent code $c_a$. It is also beneficial if the mapping is linear, as the correlation between generative factors and codes can then be easily assessed. Generating new instances that are indistinguishable from the original dataset distribution requires that the latent codes cover the whole range of the generative factors. Furthermore, such a representation provides the ability to modify a specific property of a generated instance by directly tuning the corresponding latent code. The DCI metrics aim to quantify the relationship between codes and generating factors via a single number that characterizes the relative importance of each code,¹ $c_a$, in predicting a factor $z_i$, which in turn defines an importance matrix $R_{ai}$. To construct the importance matrix, K regressors are trained to find a mapping between $z_i$ and $\vec{c}$, $\hat{z}_i = f_i(\vec{c})$. In this work, we follow Eastwood & Williams (2018) and infer the importance matrix from a lasso linear regressor's weights $W_{ia}$, setting $R_{ia} = |W_{ia}|$. ¹Throughout the paper we use a, b, c, ... for indices of the code components $\vec{c}$ and i, j, k, ... for the factor components $\vec{z}$. Once the importance matrix $R_{ai}$ is obtained, the DCI metrics can be defined explicitly. The disentanglement is given by
$$D = \frac{\operatorname{rank}(R)}{K} \sum_{a=1}^{D} \rho_a D_a, \quad (1)$$
where
$$D_a = 1 - H_K(P_a) = 1 + \sum_{k=1}^{K} P_{ak} \log_K P_{ak}, \quad P_{aj} = \frac{R_{aj}}{\sum_{k=1}^{K} R_{ak}}, \quad \rho_a = \frac{\sum_j R_{aj}}{\sum_{b,k} R_{bk}}. \quad (2)$$
High disentanglement means that each entry of the latent vector $\vec{c}$ corresponds to only one element in $\vec{z}$ (in a linear sense), that is, each code element $c_a$ affects one generating factor. The disentanglement metric defined above differs from Eastwood & Williams (2018) by the correction factor $\operatorname{rank}(R)/K$. When the rank of R is equal to the generative factors' dimension, this correction equals 1 and does not affect the metric. However, it does make a difference when the number of codes is smaller than what is needed to account for all generating factors. Typically, one assumes that there are at least as many expressive codes as factors. When this is not the case, it means that even if we have disentanglement among the expressive codes, their number is not sufficient, and our correction accounts for this. Note that the correction has no influence if there are irrelevant code elements in addition to a sufficient number of expressive ones. The ρ factors handle cases where some codes depend only very weakly on the factors (namely, irrelevant codes), while other codes depend mainly on one factor and hence should not be of equal importance. The completeness is defined by
$$C = \frac{1}{K} \sum_{j=1}^{K} C_j, \quad C_j = 1 - H_D(\tilde{P}_j) = 1 + \sum_{a=1}^{D} \tilde{P}_{aj} \log_D \tilde{P}_{aj}, \quad \tilde{P}_{aj} = \frac{R_{aj}}{\sum_{b=1}^{D} R_{bj}}. \quad (3)$$
In contrast to the disentanglement metric, which considers a weighted mean over the codes, motivated by the fact that irrelevant units in $\vec{c}$ should be ignored, here all codes are treated equally. A situation in which each factor $z_i$ is explained by only one code element $c_a$ results in high completeness. The informativeness is the MSE between the ground-truth factors z and the predicted values $f_j(\vec{c})$,
$$I = \frac{1}{K} \sum_{j=1}^{K} \mathbb{E}_{\text{dataset}}\left[ |z_j - f_j(\vec{c})|^2 \right]. \quad (4)$$
Disentanglement and completeness values range between zero and one, where one is the best value, while the best value for informativeness is zero.
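The DCI definitions above can be computed directly from an importance matrix R; the sketch below uses a random R purely for illustration (in the paper, R comes from lasso regression weights).

```python
import numpy as np

# DCI metrics from an importance matrix R with D_codes rows (codes) and
# K columns (factors); R is random here purely for illustration.
rng = np.random.default_rng(0)
D_codes, K = 5, 4
R = np.abs(rng.standard_normal((D_codes, K)))

# disentanglement: per-code score D_a = 1 - H_K(P_a), weighted by rho_a,
# with the rank(R)/K correction described in the text
P = R / R.sum(axis=1, keepdims=True)
D_a = 1.0 + np.sum(P * np.log(P) / np.log(K), axis=1)
rho = R.sum(axis=1) / R.sum()
disentanglement = (np.linalg.matrix_rank(R) / K) * np.sum(rho * D_a)

# completeness: per-factor score C_j = 1 - H_D(P~_j), averaged over factors
P_t = R / R.sum(axis=0, keepdims=True)
C_j = 1.0 + np.sum(P_t * np.log(P_t) / np.log(D_codes), axis=0)
completeness = C_j.mean()
```

A random R spreads importance over all entries, so both scores come out low; a permutation-like R (one dominant entry per row and column) would push both toward one.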
Disentanglement and completeness alone do not provide sufficient information on whether a representation factorizes properly. For example, consider a representation in which only one generative factor is described by all the codes. While this representation is completely disentangled, it is not complete, and is therefore meaningless. In order to have a meaningful representation, both disentanglement and completeness need to be high. Thus, we introduce a new score, called the DC-score, which accounts for both disentanglement and completeness. We define the DC-score as the geometric mean of the two metrics, $\text{DC-score} = \sqrt{D \cdot C}$. This way, only cases where both scores are high result in a high DC-score. Furthermore, the score favors cases where disentanglement and completeness are comparable over cases where they differ while having the same arithmetic mean. 3 METHOD. Assume a pre-determined set of vectors sampled from an unknown distribution χ, each vector containing K independent factors, $\vec{z} \in \mathbb{R}^K$. The training dataset contains samples $x_I = F(\vec{z}_I)$, I = 1, ..., N, generated using a generative function F. In unsupervised disentanglement, the learner has no access to $\vec{z}$ nor to F, and receives only the set of samples $\{x_I\}_{I=1}^N$. The learner has to recover a set of representation vectors $\vec{c}_I$ that are linked to $\vec{z}_I$ in a way that is bijective and disentangled, see Section 2.1. Furthermore, in order to generate new samples, it is also necessary to be able to sample new representation vectors $\vec{c}_{\text{new}}$ and to obtain a generative function G such that the newly generated samples come from the same distribution as $F(\vec{z})$, where $\vec{z} \sim \chi$. In this work, we view the latent representation vector $\vec{c}$ as the angles associated with a list of D two-dimensional unit vectors $m_a \in \mathbb{R}^2$, $1 \le a \le D$, i.e., $\forall a, \|m_a\| = 1$. Using the two-dimensional vectors we define
$$v_{\text{prod}} \equiv \operatorname{vec}\left(v_{\text{prod}}^{\alpha_1 \dots \alpha_D}\right), \quad v_{\text{prod}}^{\alpha_1 \dots \alpha_D} = m_1^{\alpha_1} \otimes \cdots \otimes m_D^{\alpha_D}, \quad v_{\text{orient}} \equiv \left(m_1^0, \dots, m_D^0\right), \quad (5)$$
where ⊗ is the outer product operator, $\alpha_a \in \{0, 1\}$, and vec is the vectorization operation (equivalent to flattening the tensor). Next, define the operator V that, given $m_1, \dots, m_D$, concatenates $v_{\text{prod}}$ together with $v_{\text{orient}}$: $V(m_1, \dots, m_D) = [v_{\text{prod}}; v_{\text{orient}}]$. The vector $v = V(m_1, \dots, m_D)$ resides in a vector subspace of $\mathbb{R}^{2^D + D}$, defined by only a set of D parameters. The additional D elements of $v_{\text{orient}}$ are required to ensure that the mapping V is bijective. For example, this can be seen by examining the case of D = 2: let $m_1 \equiv (\cos\theta_1, \sin\theta_1)$, $m_2 \equiv (\cos\theta_2, \sin\theta_2)$, $m'_1 \equiv -m_1$ and $m'_2 \equiv -m_2$. Then $m_1 \otimes m_2 = m'_1 \otimes m'_2$. A natural way of acquiring a random point on the circle S^1 is by sampling two independent Gaussians, $\hat{m}_a^{\alpha} \sim N(\mu_a^{\alpha}, \sigma_a^{\alpha})$. Each tuple of these samples is then normalized to have a unit norm,
$$m_a^{\alpha} = \frac{\hat{m}_a^{\alpha}}{\sqrt{(\hat{m}_a^0)^2 + (\hat{m}_a^1)^2}}. \quad (6)$$
If the elements $\hat{m}_a^{\alpha}$ follow the normal distribution with zero mean and standard deviation 1, $\hat{m}_a^{\alpha} \sim N(0, 1)$, then the vectors $m_a$ follow the uniform distribution on S^1. To obtain the distribution parameters of $\hat{m}_a^{\alpha}$, the encoder e is applied to an instance x,
$$e(x) = \left[\mu_1^0, \mu_1^1, \sigma_1^0, \sigma_1^1, \dots, \mu_D^0, \mu_D^1, \sigma_D^0, \sigma_D^1\right]. \quad (7)$$
The reparametrization trick (Kingma & Welling, 2013) is then used to obtain a set of normally distributed vectors. Denoting by S the sampling operator, we sample the coding vector as
$$\vec{M}_I \equiv \left\{(m_1^0, m_1^1), \dots, (m_D^0, m_D^1)\right\} = S(e(x_I)), \quad (8)$$
where each component $(a, \alpha)$ of m, $a = 1, \dots, D$, $\alpha = 0, 1$, is sampled from $N(\mu_a^{\alpha}, \sigma_a^{\alpha})$. The decoder G then acts on $V(m_1, \dots, m_D)$ to generate a new instance, x̃. The requirement that the generated instance be indistinguishable from the original distribution F(χ) translates to minimizing the ELBO loss function.
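A minimal sketch of the construction in Eqs. (5)-(8): sample 2D Gaussians, normalize each pair onto the unit circle S^1, then form V(m) = [v_prod; v_orient]. The value of D and the standard-normal parameters are illustrative assumptions.

```python
import numpy as np

# Torus latent construction sketch; D is illustrative.
rng = np.random.default_rng(0)
D = 3
m_hat = rng.standard_normal((D, 2))                        # \hat{m}_a ~ N(0, I)
m = m_hat / np.linalg.norm(m_hat, axis=1, keepdims=True)   # unit vectors on S^1

# v_prod: flattened outer product m_1 ⊗ ... ⊗ m_D (2^D entries)
v_prod = m[0]
for a in range(1, D):
    v_prod = np.outer(v_prod, m[a]).ravel()

v_orient = m[:, 0]                                         # first component of each m_a
v = np.concatenate([v_prod, v_orient])                     # lives in R^(2^D + D)
```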
First, it maximizes the log-likelihood of x̃ being similar to the training sample it reconstructs, via an l2 loss term. Second, to encourage a uniform distribution on the torus, it contains a KL-divergence term with respect to $N(0, 1_D)$. The overall expression for the loss function used during the optimization of the networks e and G is
$$L = \sum_I \left[ \left\| G\left(V(\vec{M}_I)\right) - x_I \right\|^2 + \beta\, D_{KL}\left(p(\hat{m}) \,\|\, r_I\right) \right], \quad (9)$$
where β is the hyperparameter introduced in β-VAE (Higgins et al., 2016) and $r_I \sim N(0, 1_D)$. As the latent representation resides on the torus, the 2D components of the sub-vectors $m_a$ may be described by D angles, $\theta_a \in [0, 2\pi]$ (the angles $\theta_a$ are identified with the codes $c_a$). In order to generate new instances, identify $m_a^0 = \cos\theta_a$, $m_a^1 = \sin\theta_a$, and apply the decoder G(V(m)) to these elements. The torus topology $T^D$ is both compact and periodic. Since it is compact, for every sampled $\theta_a$ there is a nearby $\theta'_a$ from the training set; thus, the network G can be viewed as an interpolator. This is in contrast to vectors sampled over $\mathbb{R}^D$, for which G has to extrapolate. Furthermore, being periodic, this topology is able to exploit a periodic structure in the generative factors. Consider a common generative factor associated with rotations. A rotation cannot be represented by a single non-compact variable; therefore, encoding a rotation in a latent space requires the entanglement of two components. However, on the torus, a single compact dimension can be used to identify this generative factor. | The paper presents a novel method for disentangling latent representations, specifically for use with variational auto-encoders. This is accomplished by treating the latent codes as a tensor product of 1-spheres, which results in their forming an n-dimensional torus where n is the number of latent factors. Additionally, the authors propose a new metric for analysing the quality of the decompositions.
These methods are well-founded and make intuitive sense, and the authors supplement this theoretical motivation with an array of tests comparing against several existing methods on several different datasets. The results demonstrate that the methods work well both qualitatively and quantitatively. | SP:f90a4a89fc4a3d0fea5627f316f461fb05a57a7e |
Unsupervised Disentanglement with Tensor Product Representations on the Torus | 1 INTRODUCTION . Unsupervised learning methods of disentangled representations attempt to recover the explanatory factors z of variation in the data . The recovered representation , c , is expected to be : ( i ) disentangled , i.e. , each element in c should vary one of the generative factors in z ( ii ) complete in the sense that all generative factors z can be controlled by the latent representation c , and ( iii ) informative , that is the ability to relate via a low capacity model ( e.g. , a linear model ) the generative factors z and the latent representation c. It has been shown that solving this task without further assumptions is impossible , since there is an infinite number of bijective functions f that map between c and z = f ( c ) ( Locatello et al. , 2019 ) with all corresponding generative models having the same marginal distributions of the observations . In this work , we propose to use representations that are a tensor product of elements , which take values on unit circles S1 , claiming that this structure is suitable for the effective recovery of the underlying modes of variation . This claim follows from entropy considerations . One wishes to have a representation that has a low entropy of entanglement . The entropy of entanglement is measured by evaluating the number of non-zero eigenvalues of the Schmidt decomposition of the representation . If a representation can be described by only one term , which is an outer product of the orthogonal basis functions , it has the lowest possible entanglement entropy , zero . If a representation requires the use of more than one term in the decomposition , it is entangled and its entanglement entropy can be quantified , e.g . by the Von Neumann entropy constructed from non-zero eigenvalues . The tensor product of n unit circles , S1 , takes the shape of an n-torus Tn = ( S1 ) n , and has a low entanglement property . 
This representation has the advantage that it can capture the periodicity structure of the generative factors , where each circle controls a different aspect of the original distribution . Unlike other generative models , which rely on the Gaussian distribution as a prior , Tn is a compact manifold , and therefore any function that acts on it , such as a decoder , has to interpolate only , but not extrapolate when generating new instances . In this work , we present the TD-VAE , a Variational Auto Encoder whose latent space resides on the TD torus manifold . In an extensive set of experiments with four different datasets , we compare the torus latent representations with others proposed in the literature . We show a clear advantage of the torus representation in terms of various measures of disentanglement , completeness , informativeness and propose a new metric , the DC-score , which assesses the combined disentanglement and com- pleteness performance . We present a detailed quantitative and qualitative analysis that supports our method . 2 RELATED WORK . Generative models aim to create new instances that are indistinguishable from a given distribution . Auto encoders ( Kramer , 1991 ) achieve this by constructing the identity function using a composition of two components , an encoder and a decoder . The encoder is tasked with compressing the input into a latent representation with a dimension smaller than the input ’ s , whereas the decoder is responsible for reconstructing the original input from the latent space . Such auto encoders usually overfit to their training data and are very limited when tasked with generating new samples . Instead of mapping each input instance to a constant vector , Variational Auto Encoders ( Kingma & Welling , 2013 ) ( VAE ) map each instance to a pre-defined distribution , by minimizing both the encoder-decoder reconstruction loss and a KL-divergence term that depends on the prior distribution . 
VAE ’ s are capable of generating new instances by sampling a latent vector from the predefined distribution ; however , the interpretation of latent space components is typically obscure and the mapping between them and the dataset properties may be complex . In order to increase the latent space interoperability , various modifications of the original VAE were introduced . These variants try to achieve a factorization of the latent space components with each component corresponding to a specific simple characteristic of the dataset . In β-VAE ( Higgins et al. , 2016 ) , the weight of the KL-divergence term is increased w.r.t . the reconstruction term , which results in better factorization of the latent space . However , this introduces a new hyper-parameter and departs from the theoretical derivation of the ELBO term . For example , large β parameters may prevent the conditional distribution from modeling the data faithfully . DIP-VAE ( Kumar et al. , 2017 ) introduces a regularizer that constrains the covariance matrix of the posterior distribution . This term is added to the original VAE loss function , along with two hyper-parameters , and helps achieve better factorization . Factor-VAE ( Kim & Mnih , 2018 ) adds a Total Correlation ( TC ) approximation penalty over the produced latent codes and introduces an additional hyper-parameter . 2.1 DISENTANGLEMENT METRICS . In order to evaluate the expressiveness of different generative models , a plethora of disentanglement definitions and metrics were suggested in the literature ( Do & Tran , 2019 ; Eastwood & Williams , 2018 ) , see Locatello et al . ( 2019 ) for a comprehensive review . In this work , we adopt the definitions and metrics introduced in Eastwood & Williams ( 2018 ) which provides a successful set of disentanglement metrics . 
The authors of Eastwood & Williams ( 2018 ) introduce three metrics , Disentanglement , Completeness and Informativeness ( DCI ) , to quantify the disentanglement properties of the latent space and its ability to characterize the generative factors of a dataset . Given a dataset that was generated using a set of K generative factors , ~z ∈ RK , and that it is required to learn latent codes , ~c ∈ RD of D-dimensional representation . Ideally , in an interpretable representation , each generative factor zi would correspond to only one latent code ca . It is also beneficial if the mapping is linear , as the correlation between generative factors and codes can then be easily assessed . Generating new instances that are indistinguishable from the original dataset distribution requires that the latent codes cover the whole range of the generative factors . Furthermore , such a representation provides the ability to modify a specific property of a generated instance by directly tuning the corresponding latent code . The DCI metrics aim to quantify the relationship between codes and generating factors by having a single number that characterizes the relative importance of each code,1 ca , in predicting a factor , zi , which in turn defines an importance matrix Rai . To construct the importance matrix , K regressors are trained to find a mapping between zi and ~c , ẑi = fi ( ~c ) . In this work , we follow Eastwood & Williams ( 2018 ) and infer the importance matrix using a lasso linear regressor ’ s weights , Wia , by Ria = |Wia| . 1throughout the paper we use a , b , c , . . . and i , j , k , . . . letters for indices of the codes ~c and the factors ~z components respectively . Once the importance matrix Rai is obtained , the DCI metrics can be defined explicitly . The disentanglement is given by D = rank ( R ) K D∑ a=1 ρaDa , ( 1 ) where Da = 1−HK ( Pa ) = 1 + K∑ k=1 Pak logK Pak , Paj = Raj∑K k=1Rak , ρa = ∑ j Raj∑ bk Rbk . 
( 2 ) High disentanglement means that each entry of the latent vector ~c corresponds to only one element in ~z ( in a linear sense ) , that is , each code element , ca , affects one generating factor . The disentanglement metric defined above differs from Eastwood & Williams ( 2018 ) by a correction factor rank ( R ) K . When the rank of R is equal to the generative factor ’ s dimension , this correction equals 1 , and does not affect the metric . However , it does make a difference when the number of codes is smaller than what is needed to account for all generating factors . Typically , one assumes that there are at least as many expressive codes as the number of factors . When this is not the case , it means that even if we have disentanglement among the expressive codes , their number is not sufficient , and our correction accounts for this . Note that the correction has no influence if there are irrelevant code elements in addition to a sufficient number of expressive ones . The ρ factors handle cases where some of the code dependence on the factors is very weak ( namely irrelevant codes ) , while other codes depend mainly on one factor and hence should not be of equal importance . The completeness is defined by C = 1 K K∑ j=1 Cj , Cj = 1−HD ( P̃j ) = 1 + D∑ a=1 P̃aj logD P̃aj , P̃aj = Raj∑D b=1Rbj . ( 3 ) In contrast to the disentanglement metric , which considers a weighted mean over the generative factors , motivated by the fact that irrelevant units in ~c should be ignored , here all codes are treated equally . A situation in which each factor zi is explained by only one element of the code ca results in high completeness . The informativeness is the MSE between the ground-truth factors , z , and the predicted values , fj ( ~c ) , I = 1 K K∑ j=1 Edataset [ |zj − fj ( ~c ) |2 ] . ( 4 ) Disentanglement and completeness values range between zero and one , where one is the best value , while the best value for informativeness is zero . 
Disentanglement and completeness alone do not provide sufficient information on whether a representation factorizes properly . For example , consider the case of a representation where only one generative factor is described by all the codes . While this representation is completely disentangled , it is not complete , and is therefore meaningless . In order to have a meaningful representation , both disentanglement and completeness need to be high . Thus , we introduce a new score , called the DC-score , which accounts for both disentanglement and completeness . We define the DC-score as the geometric mean of the two metrics , DC-score = √ DC . This way , only cases where both scores are high will result in a high DC-score . Furthermore , the score will favor cases where both disentanglement and completeness are comparable rather than having different values while having the same arithmetic mean . 3 METHOD . Assume a pre-determined set of vectors sampled from an unknown distribution χ , each vector containing K independent factors , ~z ∈ RK . The training dataset contains samples xI = F ( ~zI ) , I = 1 , . . . , N that are generated using a generative function F . In unsupervised disentanglement , the learner has no access to ~z nor to F , and receives only the set of samples , { xI } NI=1 . The learner has to recover a set of representation vectors ~cI that are linked to ~zI in a way that is bijective and disentangled , see Section 2.1 . Furthermore , in order to generate new samples , it is also necessary to be able to sample new representation vectors ~cnew , and to obtain a generative function G such that the new generated samples are from the same distribution of F ( ~z ) , where ~z ∼ χ . In this work , we view the latent representation vector ~c as the angles associated with a list of D two-dimensional unit vectors ma ∈ R2 , 1 ≤ a ≤ D , i.e. , ∀a , ‖ma‖ = 1 . Using the two-dimensional vectors we define vprod ≡ vec ( vα1 ... αDprod ) , vα1 ... 
αDprod = m α1 1 ⊗ · · · ⊗mαDD , vorient ≡ ( m01 , . . . , m 0 D ) , ( 5 ) where ⊗ is the outer product operator , αa ∈ { 0 , 1 } and vec is the vectorization operation ( equivalent to flattening the tensor ) . Next , define the operator V , that given m1 , . . . , mD , concatenates vprod together with vorient , V ( m1 , . . . , mD ) = [ vprod ; vorient ] . The vector v = V ( m1 , . . . , mD ) resides in a vector subspace of R2D+D , defined by only a set of D parameters . The additional D elements of vorient are required to ensure that the mapping V is bijective . For example , this can be seen by examining the case of D = 2 : let m1 ≡ ( cos θ1 , sin θ1 ) , m2 ≡ ( cos θ2 , sin θ2 ) , m′1 ≡ −m1 and m′2 ≡ −m2 . Then m1 ⊗m2 = m′1 ⊗m′2 . A natural way of acquiring a random point on the circle , S1 , is by sampling two independent Gaussians , m̂αka ∼ N ( µαka , σαka ) . Each tuple of these vectors is then normalized to have a unit norm , mαka = m̂αka√ ( m̂0a ) 2 + ( m̂1a ) 2 . ( 6 ) Assume that the elements , m̂αka , follow the normal distribution with a zero mean and a standard deviation of 1 , m̂αka ∼ N ( 0 , 1 ) , then the vectors ma follow the uniform distribution on S1 . For the purpose of obtaining the distribution parameters of m̂αka , the encoder e is applied on an instance , x , e ( x ) = [ µ01 µ11 , σ01 σ11 . . . µ0D µ1D , σ0D σ1D ] . ( 7 ) The reparametization trick ( Kingma & Welling , 2013 ) is then used to obtain a set of normally distributed vectors . Denote by S the sampling operator , we sample the coding vector as ~MI ≡ { ( m01 m11 ) , . . . , ( m0D m1D ) } = S ( e ( xI ) ) , ( 8 ) where each component , ( a , αa ) of m , a = 1 . . . D , α = 0 , 1 is sampled from N ( µαa , σαa ) . The decoder G then acts on V ( m1 , . . . , mD ) to generate a new instance , x̃ . The requirement that the generated instance is indistinguishable from the original distribution F ( χ ) translates to minimizing the ELBO loss function . 
First, it maximizes the log-likelihood of $\tilde x$ being similar to the training sample it reconstructs, via an $\ell_2$ loss term. Second, to encourage a uniform distribution on the torus, it contains a KL-divergence term with respect to $\mathcal{N}(0, \mathbb{1}_D)$. The overall loss function used during the optimization of the networks $e$ and $G$ is $L = \sum_I \left[ \left\| G(V(\vec M_I)) - x_I \right\|^2 - \beta D_{KL}(p(\hat m) \,\|\, r_I) \right]$, (9) where $\beta$ is a hyperparameter introduced in β-VAE (Higgins et al., 2016) and $r_I \sim \mathcal{N}(0, \mathbb{1}_D)$. As the latent representation resides on the torus, the $2D$ components of the sub-vectors $m_a$ may be described by $D$ angles, $\theta_a \in [0, 2\pi]$ (the angles $\theta_a$ are identified with the codes $c_a$). In order to generate new instances, identify $m_a^0 = \cos\theta_a$, $m_a^1 = \sin\theta_a$, and apply the decoder $G(V(m))$ to these elements. The torus topology, $T^D$, is both compact and periodic. Since it is compact, for every sampled $\theta_a$ there is a nearby $\theta'_a$ from the training set; thus, the network $G$ can be viewed as an interpolator. This is in contrast to vectors sampled over $\mathbb{R}^D$, for which $G$ has to extrapolate. Furthermore, being periodic, this topology is able to exploit a periodic structure in the generative factors. Consider a common generative factor associated with rotations. A rotation cannot be represented by a single non-compact variable; therefore, encoding a rotation in a latent space requires the entanglement of two components. On the torus, however, a single compact dimension suffices to identify this generative factor. | The authors propose an autoencoder where the latent space is defined on a torus. A D-dimensional torus is represented by the tensor product of D unit circles. They argue the torus induces disentanglement, by analogy with entanglement entropy in quantum physics. Using 5 datasets, they show both the reconstruction performance and the disentanglement score (Eastwood & Williams, 2018) are high.
| SP:f90a4a89fc4a3d0fea5627f316f461fb05a57a7e |
Unsupervised Disentanglement with Tensor Product Representations on the Torus | 1 INTRODUCTION . Unsupervised learning methods of disentangled representations attempt to recover the explanatory factors z of variation in the data. The recovered representation, c, is expected to be: (i) disentangled, i.e., each element in c should vary one of the generative factors in z; (ii) complete, in the sense that all generative factors z can be controlled by the latent representation c; and (iii) informative, that is, the generative factors z should be related to the latent representation c via a low-capacity model (e.g., a linear model). It has been shown that solving this task without further assumptions is impossible, since there is an infinite number of bijective functions f that map between c and z = f(c) (Locatello et al., 2019), with all corresponding generative models having the same marginal distributions of the observations. In this work, we propose to use representations that are a tensor product of elements which take values on unit circles $S^1$, claiming that this structure is suitable for the effective recovery of the underlying modes of variation. This claim follows from entropy considerations. One wishes to have a representation that has a low entropy of entanglement. The entropy of entanglement is measured by evaluating the number of non-zero eigenvalues of the Schmidt decomposition of the representation. If a representation can be described by only one term, which is an outer product of the orthogonal basis functions, it has the lowest possible entanglement entropy, zero. If a representation requires the use of more than one term in the decomposition, it is entangled, and its entanglement entropy can be quantified, e.g., by the Von Neumann entropy constructed from the non-zero eigenvalues. The tensor product of n unit circles $S^1$ takes the shape of an n-torus, $T^n = (S^1)^n$, and has a low entanglement property.
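The entanglement-entropy argument above can be made concrete. The sketch below is our own illustration (not code from the paper): a bipartite vector is reshaped into a matrix, the Schmidt coefficients are read off its singular values, and the Von Neumann entropy is computed; a single outer-product term has zero entropy, while a generic vector does not.

```python
import numpy as np

def entanglement_entropy(v, shape):
    """Von Neumann entropy of the Schmidt decomposition of a bipartite vector.
    Reshaping v into a matrix, the squared (normalized) singular values play
    the role of the Schmidt coefficients."""
    s = np.linalg.svd(v.reshape(shape), compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-12]                       # drop numerically-zero coefficients
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4), rng.standard_normal(4)
product = np.outer(a, b).ravel()           # one outer-product term: disentangled
generic = rng.standard_normal(16)          # a generic vector: entangled

assert entanglement_entropy(product, (4, 4)) < 1e-9
assert entanglement_entropy(generic, (4, 4)) > 0.1
```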
This representation has the advantage that it can capture the periodicity structure of the generative factors, where each circle controls a different aspect of the original distribution. Unlike other generative models, which rely on the Gaussian distribution as a prior, $T^n$ is a compact manifold, and therefore any function that acts on it, such as a decoder, has to interpolate only, but not extrapolate, when generating new instances. In this work, we present the TD-VAE, a Variational Autoencoder whose latent space resides on the $T^D$ torus manifold. In an extensive set of experiments with four different datasets, we compare the torus latent representations with others proposed in the literature. We show a clear advantage of the torus representation in terms of various measures of disentanglement, completeness and informativeness, and propose a new metric, the DC-score, which assesses the combined disentanglement and completeness performance. We present a detailed quantitative and qualitative analysis that supports our method. 2 RELATED WORK . Generative models aim to create new instances that are indistinguishable from a given distribution. Autoencoders (Kramer, 1991) achieve this by constructing the identity function as a composition of two components, an encoder and a decoder. The encoder is tasked with compressing the input into a latent representation with a dimension smaller than the input's, whereas the decoder is responsible for reconstructing the original input from the latent space. Such autoencoders usually overfit to their training data and are very limited when tasked with generating new samples. Instead of mapping each input instance to a constant vector, Variational Autoencoders (VAE) (Kingma & Welling, 2013) map each instance to a pre-defined distribution, by minimizing both the encoder-decoder reconstruction loss and a KL-divergence term that depends on the prior distribution.
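For reference, the KL term of a Gaussian VAE posterior against a standard-normal prior has a familiar closed form; a minimal sketch of our own (not tied to the TD-VAE's torus prior):

```python
import numpy as np

def kl_gauss_std(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions."""
    return float(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var))

assert kl_gauss_std(np.zeros(3), np.zeros(3)) == 0.0       # posterior equals prior
assert kl_gauss_std(np.array([1.0, 0.0]), np.zeros(2)) > 0 # any mismatch is penalized
```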
VAE ’ s are capable of generating new instances by sampling a latent vector from the predefined distribution ; however , the interpretation of latent space components is typically obscure and the mapping between them and the dataset properties may be complex . In order to increase the latent space interoperability , various modifications of the original VAE were introduced . These variants try to achieve a factorization of the latent space components with each component corresponding to a specific simple characteristic of the dataset . In β-VAE ( Higgins et al. , 2016 ) , the weight of the KL-divergence term is increased w.r.t . the reconstruction term , which results in better factorization of the latent space . However , this introduces a new hyper-parameter and departs from the theoretical derivation of the ELBO term . For example , large β parameters may prevent the conditional distribution from modeling the data faithfully . DIP-VAE ( Kumar et al. , 2017 ) introduces a regularizer that constrains the covariance matrix of the posterior distribution . This term is added to the original VAE loss function , along with two hyper-parameters , and helps achieve better factorization . Factor-VAE ( Kim & Mnih , 2018 ) adds a Total Correlation ( TC ) approximation penalty over the produced latent codes and introduces an additional hyper-parameter . 2.1 DISENTANGLEMENT METRICS . In order to evaluate the expressiveness of different generative models , a plethora of disentanglement definitions and metrics were suggested in the literature ( Do & Tran , 2019 ; Eastwood & Williams , 2018 ) , see Locatello et al . ( 2019 ) for a comprehensive review . In this work , we adopt the definitions and metrics introduced in Eastwood & Williams ( 2018 ) which provides a successful set of disentanglement metrics . 
The authors of Eastwood & Williams (2018) introduce three metrics, Disentanglement, Completeness and Informativeness (DCI), to quantify the disentanglement properties of the latent space and its ability to characterize the generative factors of a dataset. Consider a dataset generated using a set of $K$ generative factors, $\vec z \in \mathbb{R}^K$, for which one must learn a $D$-dimensional latent code, $\vec c \in \mathbb{R}^D$. Ideally, in an interpretable representation, each generative factor $z_i$ would correspond to only one latent code $c_a$. It is also beneficial if the mapping is linear, as the correlation between generative factors and codes can then be easily assessed. Generating new instances that are indistinguishable from the original dataset distribution requires that the latent codes cover the whole range of the generative factors. Furthermore, such a representation provides the ability to modify a specific property of a generated instance by directly tuning the corresponding latent code. The DCI metrics aim to quantify the relationship between codes and generating factors by having a single number that characterizes the relative importance of each code $c_a$ in predicting a factor $z_i$, which in turn defines an importance matrix $R_{ai}$. (Footnote 1: throughout the paper, we use the letters $a, b, c, \ldots$ and $i, j, k, \ldots$ for indices of the code $\vec c$ and factor $\vec z$ components, respectively.) To construct the importance matrix, $K$ regressors are trained to find a mapping between $z_i$ and $\vec c$, $\hat z_i = f_i(\vec c)$. In this work, we follow Eastwood & Williams (2018) and infer the importance matrix from a lasso linear regressor's weights $W_{ia}$, by $R_{ia} = |W_{ia}|$. Once the importance matrix is obtained, the DCI metrics can be defined explicitly. The disentanglement is given by $D = \frac{\mathrm{rank}(R)}{K} \sum_{a=1}^{D} \rho_a D_a$, (1) where $D_a = 1 - H_K(P_a) = 1 + \sum_{k=1}^{K} P_{ak} \log_K P_{ak}$, $P_{aj} = \frac{R_{aj}}{\sum_{k=1}^{K} R_{ak}}$, $\rho_a = \frac{\sum_j R_{aj}}{\sum_{b,k} R_{bk}}$. (2) High disentanglement means that each entry of the latent vector $\vec c$ corresponds to only one element of $\vec z$ (in a linear sense), that is, each code element $c_a$ affects one generating factor. The disentanglement metric defined above differs from Eastwood & Williams (2018) by a correction factor $\frac{\mathrm{rank}(R)}{K}$. When the rank of $R$ is equal to the generative factors' dimension, this correction equals 1 and does not affect the metric. However, it does make a difference when the number of codes is smaller than what is needed to account for all generating factors. Typically, one assumes that there are at least as many expressive codes as the number of factors. When this is not the case, it means that even if we have disentanglement among the expressive codes, their number is not sufficient, and our correction accounts for this. Note that the correction has no influence if there are irrelevant code elements in addition to a sufficient number of expressive ones. The $\rho$ factors handle cases where some codes' dependence on the factors is very weak (namely, irrelevant codes), while other codes depend mainly on one factor and hence should not be of equal importance. The completeness is defined by $C = \frac{1}{K} \sum_{j=1}^{K} C_j$, $C_j = 1 - H_D(\tilde P_j) = 1 + \sum_{a=1}^{D} \tilde P_{aj} \log_D \tilde P_{aj}$, $\tilde P_{aj} = \frac{R_{aj}}{\sum_{b=1}^{D} R_{bj}}$. (3) In contrast to the disentanglement metric, which considers a weighted mean over the codes, motivated by the fact that irrelevant units in $\vec c$ should be ignored, here all codes are treated equally. A situation in which each factor $z_i$ is explained by only one element of the code $c_a$ results in high completeness. The informativeness is the MSE between the ground-truth factors $z$ and the predicted values $f_j(\vec c)$: $I = \frac{1}{K} \sum_{j=1}^{K} \mathbb{E}_{\text{dataset}}\left[ |z_j - f_j(\vec c)|^2 \right]$. (4) Disentanglement and completeness values range between zero and one, where one is the best value, while the best value for informativeness is zero.
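Putting Eqs. (1)–(3) together, here is a compact implementation of the disentanglement and completeness scores, including the rank(R)/K correction and the DC-score geometric mean. This is our own sketch under the convention that R has one row per code and one column per factor.

```python
import numpy as np

def entropy(p, base):
    """Shannon entropy of a distribution p in the given log base."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p) / np.log(base)))

def dci_scores(R):
    """Disentanglement D and completeness C (Eqs. (1)-(3)) from an importance
    matrix R of shape (num_codes, num_factors)."""
    Dn, K = R.shape
    P = R / R.sum(axis=1, keepdims=True)           # per-code distribution over factors
    Da = 1.0 - np.array([entropy(P[a], K) for a in range(Dn)])
    rho = R.sum(axis=1) / R.sum()                  # relative weight of each code
    disent = (np.linalg.matrix_rank(R) / K) * np.sum(rho * Da)
    Pt = R / R.sum(axis=0, keepdims=True)          # per-factor distribution over codes
    Cj = 1.0 - np.array([entropy(Pt[:, j], Dn) for j in range(K)])
    return disent, float(np.mean(Cj))

# A permutation-like importance matrix is perfectly disentangled and complete.
d, c = dci_scores(np.eye(3))
dc_score = float(np.sqrt(d * c))                   # the DC-score of Section 2.1
assert abs(d - 1.0) < 1e-9 and abs(c - 1.0) < 1e-9 and abs(dc_score - 1.0) < 1e-9
```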
Disentanglement and completeness alone do not provide sufficient information on whether a representation factorizes properly. For example, consider the case of a representation where only one generative factor is described by all the codes. While this representation is completely disentangled, it is not complete, and is therefore meaningless. In order to have a meaningful representation, both disentanglement and completeness need to be high. Thus, we introduce a new score, called the DC-score, which accounts for both disentanglement and completeness. We define the DC-score as the geometric mean of the two metrics, $\text{DC-score} = \sqrt{D \cdot C}$. This way, only cases where both scores are high will result in a high DC-score. Furthermore, the score will favor cases where disentanglement and completeness are comparable rather than having different values with the same arithmetic mean. 3 METHOD . Assume a pre-determined set of vectors sampled from an unknown distribution $\chi$, each vector containing $K$ independent factors, $\vec z \in \mathbb{R}^K$. The training dataset contains samples $x_I = F(\vec z_I)$, $I = 1, \ldots, N$, that are generated using a generative function $F$. In unsupervised disentanglement, the learner has no access to $\vec z$ nor to $F$, and receives only the set of samples $\{x_I\}_{I=1}^N$. The learner has to recover a set of representation vectors $\vec c_I$ that are linked to $\vec z_I$ in a way that is bijective and disentangled, see Section 2.1. Furthermore, in order to generate new samples, it is also necessary to be able to sample new representation vectors $\vec c_{\mathrm{new}}$, and to obtain a generative function $G$ such that the newly generated samples follow the same distribution as $F(\vec z)$, where $\vec z \sim \chi$. In this work, we view the latent representation vector $\vec c$ as the angles associated with a list of $D$ two-dimensional unit vectors $m_a \in \mathbb{R}^2$, $1 \le a \le D$, i.e., $\forall a, \|m_a\| = 1$. Using the two-dimensional vectors we define $v_{\mathrm{prod}} \equiv \mathrm{vec}(v_{\mathrm{prod}}^{\alpha_1 \ldots \alpha_D})$, $v_{\mathrm{prod}}^{\alpha_1 \ldots \alpha_D} = m_1^{\alpha_1} \otimes \cdots \otimes m_D^{\alpha_D}$, $v_{\mathrm{orient}} \equiv (m_1^0, \ldots, m_D^0)$, (5) where $\otimes$ is the outer product operator, $\alpha_a \in \{0, 1\}$, and $\mathrm{vec}$ is the vectorization operation (equivalent to flattening the tensor). Next, define the operator $V$ that, given $m_1, \ldots, m_D$, concatenates $v_{\mathrm{prod}}$ together with $v_{\mathrm{orient}}$: $V(m_1, \ldots, m_D) = [v_{\mathrm{prod}}; v_{\mathrm{orient}}]$. The vector $v = V(m_1, \ldots, m_D)$ resides in a vector subspace of $\mathbb{R}^{2^D + D}$, defined by only a set of $D$ parameters. The additional $D$ elements of $v_{\mathrm{orient}}$ are required to ensure that the mapping $V$ is bijective. For example, this can be seen by examining the case of $D = 2$: let $m_1 \equiv (\cos\theta_1, \sin\theta_1)$, $m_2 \equiv (\cos\theta_2, \sin\theta_2)$, $m'_1 \equiv -m_1$ and $m'_2 \equiv -m_2$. Then $m_1 \otimes m_2 = m'_1 \otimes m'_2$. A natural way of acquiring a random point on the circle $S^1$ is by sampling two independent Gaussians, $\hat m_a^{\alpha_a} \sim \mathcal{N}(\mu_a^{\alpha_a}, \sigma_a^{\alpha_a})$. Each tuple of these vectors is then normalized to have a unit norm, $m_a^{\alpha_a} = \hat m_a^{\alpha_a} / \sqrt{(\hat m_a^0)^2 + (\hat m_a^1)^2}$. (6) If the elements $\hat m_a^{\alpha_a}$ follow the normal distribution with zero mean and a standard deviation of 1, $\hat m_a^{\alpha_a} \sim \mathcal{N}(0, 1)$, then the vectors $m_a$ follow the uniform distribution on $S^1$. To obtain the distribution parameters of $\hat m_a^{\alpha_a}$, the encoder $e$ is applied to an instance $x$: $e(x) = [\mu_1^0\,\mu_1^1, \sigma_1^0\,\sigma_1^1, \ldots, \mu_D^0\,\mu_D^1, \sigma_D^0\,\sigma_D^1]$. (7) The reparametrization trick (Kingma & Welling, 2013) is then used to obtain a set of normally distributed vectors. Denoting by $S$ the sampling operator, we sample the coding vector as $\vec M_I \equiv \{(m_1^0\,m_1^1), \ldots, (m_D^0\,m_D^1)\} = S(e(x_I))$, (8) where each component $(a, \alpha_a)$ of $m$, $a = 1, \ldots, D$, $\alpha = 0, 1$, is sampled from $\mathcal{N}(\mu_a^\alpha, \sigma_a^\alpha)$. The decoder $G$ then acts on $V(m_1, \ldots, m_D)$ to generate a new instance, $\tilde x$. The requirement that the generated instance is indistinguishable from the original distribution $F(\chi)$ translates to minimizing the ELBO loss function.
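The bijectivity role of $v_{\mathrm{orient}}$ in Eq. (5) can be verified directly. The following sketch (our own illustration) builds the operator $V$ for $D = 2$ and shows the sign ambiguity of the bare tensor product that the orientation part resolves.

```python
import numpy as np
from functools import reduce

def V(ms):
    """Concatenate the flattened D-fold outer product with the first components
    (v_orient), as in Eq. (5). v_orient makes the map injective: flipping the
    signs of a pair of vectors leaves the outer product unchanged, but not v_orient."""
    vprod = reduce(np.multiply.outer, ms).ravel()   # shape (2,)*D, flattened to 2^D
    vorient = np.array([m[0] for m in ms])          # the D first components m_a^0
    return np.concatenate([vprod, vorient])

t1, t2 = 0.3, 1.1
m1 = np.array([np.cos(t1), np.sin(t1)])
m2 = np.array([np.cos(t2), np.sin(t2)])

# The outer product alone is sign-ambiguous ...
assert np.allclose(np.outer(m1, m2), np.outer(-m1, -m2))
# ... but V separates the two configurations, and lives in R^(2^D + D).
assert not np.allclose(V([m1, m2]), V([-m1, -m2]))
assert V([m1, m2]).shape == (2**2 + 2,)
```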
First, it maximizes the log-likelihood of $\tilde x$ being similar to the training sample it reconstructs, via an $\ell_2$ loss term. Second, to encourage a uniform distribution on the torus, it contains a KL-divergence term with respect to $\mathcal{N}(0, \mathbb{1}_D)$. The overall loss function used during the optimization of the networks $e$ and $G$ is $L = \sum_I \left[ \left\| G(V(\vec M_I)) - x_I \right\|^2 - \beta D_{KL}(p(\hat m) \,\|\, r_I) \right]$, (9) where $\beta$ is a hyperparameter introduced in β-VAE (Higgins et al., 2016) and $r_I \sim \mathcal{N}(0, \mathbb{1}_D)$. As the latent representation resides on the torus, the $2D$ components of the sub-vectors $m_a$ may be described by $D$ angles, $\theta_a \in [0, 2\pi]$ (the angles $\theta_a$ are identified with the codes $c_a$). In order to generate new instances, identify $m_a^0 = \cos\theta_a$, $m_a^1 = \sin\theta_a$, and apply the decoder $G(V(m))$ to these elements. The torus topology, $T^D$, is both compact and periodic. Since it is compact, for every sampled $\theta_a$ there is a nearby $\theta'_a$ from the training set; thus, the network $G$ can be viewed as an interpolator. This is in contrast to vectors sampled over $\mathbb{R}^D$, for which $G$ has to extrapolate. Furthermore, being periodic, this topology is able to exploit a periodic structure in the generative factors. Consider a common generative factor associated with rotations. A rotation cannot be represented by a single non-compact variable; therefore, encoding a rotation in a latent space requires the entanglement of two components. On the torus, however, a single compact dimension suffices to identify this generative factor. | This paper proposes a new autoencoder architecture that achieves better recovery of the latent factor structure in artificial datasets according to previously established metrics: disentanglement, completeness, and informativeness.
The key idea is to nonlinearly project a small number of latent factors into a higher-dimensional space, based on the topology of the torus, in such a way that prevents an arbitrary linear rotation of the latent factors from yielding an equivalent representation. The approach is compared against existing approaches for variational autoencoders focused on learning disentangled representation, and achieves higher disentanglement and completeness, as well as competitive or best informativeness (reconstruction) on a majority of datasets considered. | SP:f90a4a89fc4a3d0fea5627f316f461fb05a57a7e |
Gradient Step Denoiser for convergent Plug-and-Play | 1 INTRODUCTION . Image restoration (IR) problems can be formulated as inverse problems of the form $x^* \in \arg\min_x f(x) + \lambda g(x)$, (1) where $f$ is a term measuring the fidelity to a degraded observation $y$, and $g$ is a regularization term weighted by a parameter $\lambda \ge 0$. Generally, the degradation of a clean image $\hat x$ can be modeled by a linear operation $y = A\hat x + \xi$, where $A$ is a degradation matrix and $\xi$ a white Gaussian noise. In this context, the maximum a posteriori (MAP) derivation relates the data-fidelity term to the likelihood, $f(x) = -\log p(y|x) = \frac{1}{2\sigma^2} \|Ax - y\|^2$, while the regularization term is related to the chosen prior. Regularization is crucial since it tackles the ill-posedness of the IR task by bringing a priori knowledge on the solution. A lot of research has been dedicated to designing accurate priors $g$. Among the most classical priors, one can single out total variation (Rudin et al., 1992), wavelet sparsity (Mallat, 2009) or patch-based Gaussian mixtures (Zoran & Weiss, 2011). Designing a relevant prior $g$ is a difficult task, and recent approaches rather apply deep learning techniques to directly learn a prior from a database of clean images (Lunz et al., 2018; Prost et al., 2021; González et al., 2021). Generally, problem (1) does not have a closed-form solution, and an optimization algorithm is required. First-order proximal splitting algorithms (Combettes & Pesquet, 2011) operate individually on $f$ and $g$ via the proximity operator $\mathrm{Prox}_f(x) = \arg\min_z \frac{1}{2}\|x - z\|^2 + f(z)$. (2) Among them, half-quadratic splitting (HQS) (Geman & Yang, 1995) alternately applies the proximal operators of $f$ and $g$. Proximal methods are particularly useful when either $f$ or $g$ is nonsmooth. Plug-and-Play (PnP) methods (Venkatakrishnan et al.
, 2013) build on proximal splitting algorithms by replacing the proximity operator of $g$ with a generic denoiser, e.g., a pretrained deep network. (∗Corresponding author: samuel.hurault@math.u-bordeaux.fr) These methods achieve state-of-the-art results (Buzzard et al., 2018; Ahmad et al., 2020; Yuan et al., 2020; Zhang et al., 2021) in various IR problems. However, since a generic denoiser cannot generally be expressed as a proximal mapping (Moreau, 1965), convergence results, which stem from the properties of the proximal operator, are difficult to obtain. Moreover, the regularizer $g$ is only made implicit via the denoising operation. Therefore, PnP algorithms do not seek the minimization of an explicit objective functional, which strongly limits their interpretation and numerical control. In order to keep the tractability of a minimization problem, Romano et al. (2017) proposed, with regularization by denoising (RED), an explicit prior $g$ that exploits a given generic denoiser $D$ in the form $g(x) = \frac{1}{2}\langle x, x - D(x)\rangle$. Under strong assumptions on the denoiser (in particular a symmetric Jacobian assumption), they show that it verifies $\nabla_x g(x) = x - D(x)$. (3) Such a denoiser is then plugged into gradient-based minimization schemes. Despite RED having shown very good results on various image restoration tasks, existing deep denoisers lack Jacobian symmetry, as later pointed out by Reehorst & Schniter (2018) and Saremi (2019). Hence, RED does not minimize an explicit functional and is not guaranteed to converge. Contributions. In this work, we develop a PnP scheme with novel theoretical convergence guarantees and state-of-the-art IR performance. Departing from the PnP-HQS framework, we plug in a denoiser that inherently satisfies equation (3) without sacrificing denoising performance. The resulting fixed-point algorithm is guaranteed to converge to a stationary point of an explicit functional.
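As a concrete instance of the proximity operator of Eq. (2): for $f(z) = \lambda \|z\|_1$ the arg min has the classic soft-thresholding closed form. This is our own illustrative example (the paper's $f$ is a data-fidelity term, not an $\ell_1$ penalty):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of f(z) = lam * ||z||_1 (cf. Eq. (2)): soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([2.0, -0.3, 0.0, 1.2])
p = prox_l1(x, 0.5)
assert np.allclose(p, [1.5, 0.0, 0.0, 0.7])

# p indeed achieves a lower value of the prox objective than x itself:
obj = lambda z: 0.5 * np.sum((x - z) ** 2) + 0.5 * np.sum(np.abs(z))
assert obj(p) <= obj(x)
```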
This convergence guarantee does not require strong convexity of the data-fidelity term, thus encompassing ill-posed IR tasks like deblurring, super-resolution or inpainting. 2 RELATED WORKS . PnP methods have been successfully applied in the literature with various splitting schemes: HQS (Zhang et al., 2017b; 2021), ADMM (Romano et al., 2017; Ryu et al., 2019), Proximal Gradient Descent (PGD) (Terris et al., 2020). First used with classical non-deep denoisers such as BM3D (Chan et al., 2016) and pseudo-linear denoisers (Nair et al., 2021; Gavaskar et al., 2021), more recent PnP approaches (Meinhardt et al., 2017; Ryu et al., 2019) rely on efficient off-the-shelf deep denoisers such as DnCNN (Zhang et al., 2017a). State-of-the-art IR results are currently obtained with denoisers that are specifically designed to be integrated in PnP schemes, like IRCNN (Zhang et al., 2017b) or DRUNET (Zhang et al., 2021). Though providing excellent restorations, such schemes are not guaranteed to converge for all kinds of denoisers or IR tasks. Designing convergence proofs for PnP algorithms is an active research topic. Sreehari et al. (2016) used the proximal theorem of Moreau (Moreau, 1965) to give sufficient conditions for the denoiser to be an explicit proximal map, which are applied to a pseudo-linear denoiser. Convergence with pseudo-linear denoisers has been extensively studied (Gavaskar & Chaudhury, 2020; Nair et al., 2021; Chan, 2019). However, state-of-the-art PnP results are obtained with deep denoisers. Various assumptions have been made to ensure the convergence of the related PnP schemes. With a “bounded denoiser” assumption, Chan et al. (2016) and Gavaskar & Chaudhury (2019) showed convergence of PnP-ADMM with stepsizes decreasing to 0. RED (Romano et al., 2017) and RED-PRO (Cohen et al.
, 2021) respectively consider the classes of denoisers with symmetric Jacobians or demicontractive mappings, but these conditions are either too restrictive or hard to verify in practice. In Appendix A.3, more details are given on RED-based methods. Many works focus on Lipschitz properties of PnP operators. Depending on the splitting algorithm in use, convergence can be obtained by assuming the denoiser is averaged (Sun et al., 2019b), firmly nonexpansive (Sun et al., 2021; Terris et al., 2020) or simply nonexpansive (Reehorst & Schniter, 2018; Liu et al., 2021). These settings are unrealistic, as deep denoisers do not generally satisfy such properties. Ryu et al. (2019) and Terris et al. (2020) propose different ways to train deep denoisers with constrained Lipschitz constants, in order to fit the technical properties required for convergence. But imposing hard Lipschitz constraints on the network alters its denoising performance (Bohra et al., 2021; Hertrich et al., 2021). Yet, Ryu et al. (2019) manages to get a convergent PnP scheme without assuming the nonexpansiveness of $D$. This comes at the cost of imposing strong convexity on the data-fidelity term $f$, which excludes many IR tasks like deblurring, super-resolution or inpainting. Hence, given the ill-posedness of IR problems, looking for a unique solution via contractive operators is a restrictive assumption. In this work, we do not impose contractiveness, but still obtain convergence results under realistic hypotheses. One can relate the ideal deep denoiser to the “true” natural image prior $p$ via Tweedie's identity. In (Efron, 2011), it is indeed shown that the Minimum Mean Square Error (MMSE) denoiser $D^*_\sigma$ (at noise level $\sigma$) verifies $D^*_\sigma(x) = x + \sigma^2 \nabla_x \log p_\sigma(x)$, where $p_\sigma$ is the convolution of $p$ with the density of $\mathcal{N}(0, \sigma^2 \mathrm{Id})$. In a recent line of research (Bigdeli et al., 2017; Xu et al., 2020; Laumont et al.
, 2021; Kadkhodaie & Simoncelli, 2020), this relation is used to plug a denoiser into gradient-based dynamics. In practice, the MMSE denoiser cannot be computed explicitly, and Tweedie's identity does not hold for deep approximations of the MMSE. In order to be as exhaustive as possible, we detail the addressed limitations of existing PnP methods in Appendix A.1. 3 THE GRADIENT STEP PLUG-AND-PLAY . The proposed method is based on the PnP version of half-quadratic splitting (PnP-HQS), which amounts to replacing the proximity operator of the prior $g$ with an off-the-shelf denoiser $D_\sigma$. In order to define a convergent PnP scheme, we first set up in Section 3.1 a Gradient Step (GS) denoiser. We then introduce the Gradient Step PnP (GS-PnP) algorithm in Section 3.2. 3.1 GRADIENT STEP DENOISER . We propose to plug in a denoising operator $D_\sigma$ that takes the form of a gradient descent step $D_\sigma = \mathrm{Id} - \nabla g_\sigma$, (4) with $g_\sigma : \mathbb{R}^n \to \mathbb{R}$. Contrary to Romano et al. (2017), our denoiser exactly represents a conservative vector field. The choice of the parameterization of $g_\sigma$ is fundamental for the denoising performance. As already noticed in Salimans & Ho (2021), we experimentally found that directly modeling $g_\sigma$ as a neural network (e.g., a standard network used for classification) leads to poor denoising performance. In order to keep the strength of state-of-the-art unconstrained denoisers, we rather use $g_\sigma(x) = \frac{1}{2}\|x - N_\sigma(x)\|^2$, (5) which leads to $D_\sigma(x) = x - \nabla g_\sigma(x) = N_\sigma(x) + J_{N_\sigma}(x)^T (x - N_\sigma(x))$, (6) where $N_\sigma : \mathbb{R}^n \to \mathbb{R}^n$ is parameterized by a neural network and $J_{N_\sigma}(x)$ is the Jacobian of $N_\sigma$ at point $x$. As discussed in Appendix A.2, the formulation (5) for $g_\sigma$ was proposed in (Romano et al., 2017, Section 5.2) and (Bigdeli & Zwicker, 2017) for a distinct but related purpose, and not exploited for convergence analysis.
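Equations (4)–(6) can be checked on a toy example. In the sketch below, $N_\sigma$ is replaced by a small linear map (a stand-in of our own, not the paper's network), so its Jacobian is explicit, and we verify that Eq. (6) is indeed one gradient step on $g_\sigma$ via finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))        # toy linear "network" N(x) = W x
N = lambda x: W @ x
g = lambda x: 0.5 * np.sum((x - N(x)) ** 2)  # g_sigma of Eq. (5)

def D(x):
    """Gradient step denoiser, Eq. (6): D(x) = N(x) + J_N(x)^T (x - N(x)).
    For a linear N, the Jacobian is simply W."""
    return N(x) + W.T @ (x - N(x))

x = rng.standard_normal(n)
# Check D(x) = x - grad g(x) against a central finite-difference gradient of g.
eps = 1e-6
grad_fd = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps) for e in np.eye(n)])
assert np.allclose(D(x), x - grad_fd, atol=1e-5)
```

In the paper, $N_\sigma$ is a deep network and the Jacobian-vector product in (6) would be obtained by automatic differentiation rather than an explicit matrix.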
Thanks to our definition (6) for $D_\sigma$, we can parameterize $N_\sigma$ with any differentiable neural network architecture $\mathbb{R}^n \to \mathbb{R}^n$ that has proven efficient for image denoising. Although the representation power of the denoiser is limited by the particular form (6), we show (see Section 5.1) that such a parameterization still yields state-of-the-art denoising results. We train the denoiser $D_\sigma$ for Gaussian noise by minimizing the MSE loss function $L(D_\sigma) = \mathbb{E}_{x \sim p,\, \xi_\sigma \sim \mathcal{N}(0, \sigma^2 I)}\left[ \|D_\sigma(x + \xi_\sigma) - x\|^2 \right]$, (7) or $L(g_\sigma) = \mathbb{E}_{x \sim p,\, \xi_\sigma \sim \mathcal{N}(0, \sigma^2 I)}\left[ \|\nabla g_\sigma(x + \xi_\sigma) - \xi_\sigma\|^2 \right]$, (8) when written in terms of $g_\sigma$ using equation (4). Remark 1. By definition, the optimal solution $g^*_\sigma \in \arg\min L$ is related to the MMSE denoiser $D^*_\sigma$, that is, the best non-linear predictor of $x$ given $x + \xi_\sigma$. Therefore, it satisfies Tweedie's formula and $\nabla g^*_\sigma = -\sigma^2 \nabla \log p_\sigma$ (Efron, 2011), i.e., $g^*_\sigma = -\sigma^2 \log p_\sigma + C$, for some $C \in \mathbb{R}$. Hence, approximating the MMSE denoiser with a denoiser parameterized as (4) is related to approximating the logarithm of the smoothed image prior $p_\sigma$ with $-\frac{1}{\sigma^2} g_\sigma$. This relation was used for image generation with “Denoising Score Matching” by Saremi & Hyvarinen (2019) and Bigdeli et al. (2020). | This paper makes an extension of the plug-and-play framework by formulating an explicit regularizer $g(x)=\frac{1}{2}||x - N_\sigma(x)||_2^2$ whose gradient $\nabla g$ corresponds to the noise residual $x-D_\sigma(x)$. By replacing the proximal operator of the regularizer with this gradient step denoiser, the authors propose the GS-PnP algorithm, based on the half-quadratic splitting algorithm. Since the explicit regularizer is known, a convergence analysis is established by assuming the Lipschitz continuity of $\nabla g$. This paper is closely related to the recent trend of learning a regularizer functional by using deep neural networks.
The difference between the existing literature and this paper is that the former trains a deep neural network to directly output a scalar value, while the latter trains a neural network to output an image-size vector and then envelops it with an $\ell_2$-norm. In fact, the proposed regularizer shares the same formulation as the one stated in Sec 5.2 *"An Alternative Prior"* in Romano's RED paper. However, the difference between the two is not clearly stated in the paper. **Strength** 1. An extension of the PnP framework with an explicit regularizer formulated. 2. (Non-convex) convergence analysis under the common assumption of the Lipschitz continuity of $\nabla g$ 3. Extensive validation on image denoising, deblurring, and super-resolution. | SP:db6157c2243f1adb0f5f9b81d1e86354b6fe8089 |
Gradient Step Denoiser for convergent Plug-and-Play | 1 INTRODUCTION . Image restoration (IR) problems can be formulated as inverse problems of the form $x^* \in \arg\min_x f(x) + \lambda g(x)$, (1) where $f$ is a term measuring the fidelity to a degraded observation $y$, and $g$ is a regularization term weighted by a parameter $\lambda \ge 0$. Generally, the degradation of a clean image $\hat x$ can be modeled by a linear operation $y = A\hat x + \xi$, where $A$ is a degradation matrix and $\xi$ a white Gaussian noise. In this context, the maximum a posteriori (MAP) derivation relates the data-fidelity term to the likelihood, $f(x) = -\log p(y|x) = \frac{1}{2\sigma^2} \|Ax - y\|^2$, while the regularization term is related to the chosen prior. Regularization is crucial since it tackles the ill-posedness of the IR task by bringing a priori knowledge on the solution. A lot of research has been dedicated to designing accurate priors $g$. Among the most classical priors, one can single out total variation (Rudin et al., 1992), wavelet sparsity (Mallat, 2009) or patch-based Gaussian mixtures (Zoran & Weiss, 2011). Designing a relevant prior $g$ is a difficult task, and recent approaches rather apply deep learning techniques to directly learn a prior from a database of clean images (Lunz et al., 2018; Prost et al., 2021; González et al., 2021). Generally, problem (1) does not have a closed-form solution, and an optimization algorithm is required. First-order proximal splitting algorithms (Combettes & Pesquet, 2011) operate individually on $f$ and $g$ via the proximity operator $\mathrm{Prox}_f(x) = \arg\min_z \frac{1}{2}\|x - z\|^2 + f(z)$. (2) Among them, half-quadratic splitting (HQS) (Geman & Yang, 1995) alternately applies the proximal operators of $f$ and $g$. Proximal methods are particularly useful when either $f$ or $g$ is nonsmooth. Plug-and-Play (PnP) methods (Venkatakrishnan et al.
, 2013 ) build on proximal splitting algorithms by replacing the proximity operator of g with a generic denoiser , e.g . a pretrained deep network . ∗Corresponding author : samuel.hurault @ math.u-bordeaux.fr These methods achieve state-of-the-art results ( Buzzard et al. , 2018 ; Ahmad et al. , 2020 ; Yuan et al. , 2020 ; Zhang et al. , 2021 ) in various IR problems . However , since a generic denoiser cannot generally be expressed as a proximal mapping ( Moreau , 1965 ) , convergence results , which stem from the properties of the proximal operator , are difficult to obtain . Moreover , the regularizer g is only made implicit via the denoising operation . Therefore , PnP algorithms do not seek the minimization of an explicit objective functional , which strongly limits their interpretation and numerical control . In order to keep the tractability of a minimization problem , Romano et al . ( 2017 ) proposed , with regularization by denoising ( RED ) , an explicit prior g that exploits a given generic denoiser D in the form $g(x) = \frac{1}{2}\langle x , x - D(x) \rangle$ . With strong assumptions on the denoiser ( in particular a symmetric Jacobian assumption ) , they show that it verifies $\nabla_x g(x) = x - D(x)$ . ( 3 ) Such a denoiser is then plugged in gradient-based minimization schemes . Despite having shown very good results on various image restoration tasks , as later pointed out by Reehorst & Schniter ( 2018 ) or Saremi ( 2019 ) , existing deep denoisers lack Jacobian symmetry . Hence , RED does not minimize an explicit functional and is not guaranteed to converge . Contributions . In this work , we develop a PnP scheme with novel theoretical convergence guarantees and state-of-the-art IR performance . Departing from the PnP-HQS framework , we plug in a denoiser that inherently satisfies equation ( 3 ) without sacrificing denoising performance . The resulting fixed-point algorithm is guaranteed to converge to a stationary point of an explicit functional .
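The Jacobian-symmetry caveat above can be made concrete with a toy linear denoiser $D(x) = Wx$: the exact gradient of the RED prior $g(x) = \frac{1}{2}\langle x , x - D(x) \rangle$ is $x - \frac{1}{2}(W + W^T)x$, which equals the residual $x - D(x)$ only when $W$ is symmetric. A minimal plain-Python check (the small matrices are hypothetical):

```python
def red_grad(W, x):
    # Exact gradient of g(x) = 0.5 * <x, x - W x>:
    # grad g(x) = x - 0.5 * (W + W^T) x
    n = len(x)
    return [x[i] - 0.5 * sum((W[i][j] + W[j][i]) * x[j] for j in range(n))
            for i in range(n)]

def residual(W, x):
    # x - D(x) with D(x) = W x; equals red_grad only for symmetric W.
    n = len(x)
    return [x[i] - sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
```

For a symmetric `W` the two agree; for a non-symmetric one they differ, which is exactly why RED needs the symmetric-Jacobian assumption.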
This convergence guarantee does not require strong convexity of the data-fidelity term , thus encompassing ill-posed IR tasks like deblurring , super-resolution or inpainting . 2 RELATED WORKS . PnP methods have been successfully applied in the literature with various splitting schemes : HQS ( Zhang et al. , 2017b ; 2021 ) , ADMM ( Romano et al. , 2017 ; Ryu et al. , 2019 ) , Proximal Gradient Descent ( PGD ) ( Terris et al. , 2020 ) . First used with classical non-deep denoisers such as BM3D ( Chan et al. , 2016 ) and pseudo-linear denoisers ( Nair et al. , 2021 ; Gavaskar et al. , 2021 ) , more recent PnP approaches ( Meinhardt et al. , 2017 ; Ryu et al. , 2019 ) rely on efficient off-the-shelf deep denoisers such as DnCNN ( Zhang et al. , 2017a ) . State-of-the-art IR results are currently obtained with denoisers that are specifically designed to be integrated in PnP schemes , like IRCNN ( Zhang et al. , 2017b ) or DRUNET ( Zhang et al. , 2021 ) . Though providing excellent restorations , such schemes are not guaranteed to converge for all kinds of denoisers or IR tasks . Designing convergence proofs for PnP algorithms is an active research topic . Sreehari et al . ( 2016 ) used the proximal theorem of Moreau ( Moreau , 1965 ) to give sufficient conditions for the denoiser to be an explicit proximal map , which are applied to a pseudo-linear denoiser . Convergence with pseudo-linear denoisers has been extensively studied ( Gavaskar & Chaudhury , 2020 ; Nair et al. , 2021 ; Chan , 2019 ) . However , state-of-the-art PnP results are obtained with deep denoisers . Various assumptions have been made to ensure the convergence of the related PnP schemes . With a “ bounded denoiser ” assumption , Chan et al . ( 2016 ) ; Gavaskar & Chaudhury ( 2019 ) showed convergence of PnP-ADMM with stepsizes decreasing to 0 . RED ( Romano et al. , 2017 ) and RED-PRO ( Cohen et al.
, 2021 ) respectively consider the classes of denoisers with symmetric Jacobian or demicontractive mappings , but these conditions are either too restrictive or hard to verify in practice . In Appendix A.3 , more details are given on RED-based methods . Many works focus on Lipschitz properties of PnP operators . Depending on the splitting algorithm in use , convergence can be obtained by assuming the denoiser averaged ( Sun et al. , 2019b ) , firmly nonexpansive ( Sun et al. , 2021 ; Terris et al. , 2020 ) or simply nonexpansive ( Reehorst & Schniter , 2018 ; Liu et al. , 2021 ) . These settings are unrealistic as deep denoisers do not generally satisfy such properties . Ryu et al . ( 2019 ) ; Terris et al . ( 2020 ) propose different ways to train deep denoisers with constrained Lipschitz constants , in order to fit the technical properties required for convergence . But imposing hard Lipschitz constraints on the network alters its denoising performance ( Bohra et al. , 2021 ; Hertrich et al. , 2021 ) . Yet , Ryu et al . ( 2019 ) manages to get a convergent PnP scheme without assuming the nonexpansiveness of D . This comes at the cost of imposing strong convexity on the data-fidelity term f , which excludes many IR tasks like deblurring , super-resolution or inpainting . Hence , given the ill-posedness of IR problems , looking for a unique solution via contractive operators is a restrictive assumption . In this work , we do not impose contractiveness , but still obtain convergence results with realistic hypotheses . One can relate the ideal deep denoiser to the “ true ” natural image prior p via Tweedie ’ s Identity . In ( Efron , 2011 ) , it is indeed shown that the Minimum Mean Square Error ( MMSE ) denoiser $D^*_\sigma$ ( at noise level σ ) verifies $D^*_\sigma(x) = x + \sigma^2 \nabla_x \log p_\sigma(x)$ where $p_\sigma$ is the convolution of p with the density of $\mathcal{N}(0 , \sigma^2 \mathrm{Id})$ . In a recent line of research ( Bigdeli et al. , 2017 ; Xu et al. , 2020 ; Laumont et al.
, 2021 ; Kadkhodaie & Simoncelli , 2020 ) , this relation is used to plug a denoiser in gradient-based dynamics . In practice , the MMSE denoiser cannot be computed explicitly and Tweedie ’ s Identity does not hold for deep approximations of the MMSE . In order to be as exhaustive as possible , we detail in Appendix A.1 the limitations of existing PnP methods that we address . 3 THE GRADIENT STEP PLUG-AND-PLAY . The proposed method is based on the PnP version of half-quadratic splitting ( PnP-HQS ) that amounts to replacing the proximity operator of the prior g with an off-the-shelf denoiser $D_\sigma$ . In order to define a convergent PnP scheme , we first set up in Section 3.1 a Gradient Step ( GS ) denoiser . We then introduce the Gradient Step PnP ( GS-PnP ) algorithm in Section 3.2 . 3.1 GRADIENT STEP DENOISER . We propose to plug a denoising operator $D_\sigma$ that takes the form of a gradient descent step $D_\sigma = \mathrm{Id} - \nabla g_\sigma$ , ( 4 ) with $g_\sigma : \mathbb{R}^n \to \mathbb{R}$ . Contrary to Romano et al . ( 2017 ) , our denoiser exactly represents a conservative vector field . The choice of the parameterization of $g_\sigma$ is fundamental for the denoising performance . As already noticed in Salimans & Ho ( 2021 ) , we experimentally found that directly modeling $g_\sigma$ as a neural network ( e.g . a standard network used for classification ) leads to poor denoising performance . In order to keep the strength of state-of-the-art unconstrained denoisers , we rather use $g_\sigma(x) = \frac{1}{2}\|x - N_\sigma(x)\|^2$ , ( 5 ) which leads to $D_\sigma(x) = x - \nabla g_\sigma(x) = N_\sigma(x) + J_{N_\sigma}(x)^T (x - N_\sigma(x))$ , ( 6 ) where $N_\sigma : \mathbb{R}^n \to \mathbb{R}^n$ is parameterized by a neural network and $J_{N_\sigma}(x)$ is the Jacobian of $N_\sigma$ at point x . As discussed in Appendix A.2 , the formulation ( 5 ) for $g_\sigma$ has been proposed in ( Romano et al. , 2017 , Section 5.2 ) and ( Bigdeli & Zwicker , 2017 ) for a distinct but related purpose , and not exploited for convergence analysis .
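Identity ( 6 ) can be sanity-checked in closed form with a toy linear network $N_\sigma(x) = Wx + b$, whose Jacobian is simply $W$ (in the paper $N_\sigma$ is a deep network and $\nabla g_\sigma$ comes from automatic differentiation; the small matrices below are hypothetical):

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def linear_N(W, b, x):
    # Toy "network" N(x) = W x + b, whose Jacobian is exactly W.
    return [wi + bi for wi, bi in zip(mat_vec(W, x), b)]

def gs_denoiser(W, b, x):
    # Left side of (6): D(x) = x - grad g(x) with g(x) = 0.5 ||x - N(x)||^2,
    # so grad g(x) = (I - W)^T (x - N(x)) for the linear N above.
    n = len(x)
    r = [xi - ni for xi, ni in zip(x, linear_N(W, b, x))]
    ImWT = [[(1.0 if i == j else 0.0) - W[j][i] for j in range(n)] for i in range(n)]
    return [xi - gi for xi, gi in zip(x, mat_vec(ImWT, r))]

def rhs_eq6(W, b, x):
    # Right side of (6): N(x) + J_N(x)^T (x - N(x)).
    n = len(x)
    Nx = linear_N(W, b, x)
    r = [xi - ni for xi, ni in zip(x, Nx)]
    WT = [[W[j][i] for j in range(n)] for i in range(n)]
    return [ni + ti for ni, ti in zip(Nx, mat_vec(WT, r))]
```

Both sides agree to machine precision, mirroring the algebra behind ( 6 ).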
Thanks to our definition ( 6 ) for $D_\sigma$ , we can parameterize $N_\sigma$ with any differentiable neural network architecture $\mathbb{R}^n \to \mathbb{R}^n$ that has proven efficient for image denoising . Although the representation power of the denoiser is limited by the particular form ( 6 ) , we show ( see Section 5.1 ) that such a parameterization still yields state-of-the-art denoising results . We train the denoiser $D_\sigma$ for Gaussian noise by minimizing the MSE loss function $\mathcal{L}(D_\sigma) = \mathbb{E}_{x \sim p , \xi_\sigma \sim \mathcal{N}(0 , \sigma^2 I)}\left[\|D_\sigma(x + \xi_\sigma) - x\|^2\right]$ , ( 7 ) or $\mathcal{L}(g_\sigma) = \mathbb{E}_{x \sim p , \xi_\sigma \sim \mathcal{N}(0 , \sigma^2 I)}\left[\|\nabla g_\sigma(x + \xi_\sigma) - \xi_\sigma\|^2\right]$ , ( 8 ) when written in terms of $g_\sigma$ using equation ( 4 ) . Remark 1 . By definition , the optimal solution $g^*_\sigma \in \arg\min \mathcal{L}$ is related to the MMSE denoiser $D^*_\sigma$ , that is , the best non-linear predictor of x given $x + \xi_\sigma$ . Therefore , it satisfies Tweedie ’ s formula and $\nabla g^*_\sigma = -\sigma^2 \nabla \log p_\sigma$ ( Efron , 2011 ) i.e . $g^*_\sigma = -\sigma^2 \log p_\sigma + C$ , for some $C \in \mathbb{R}$ . Hence approximating the MMSE denoiser with a denoiser parameterized as ( 4 ) is related to approximating the logarithm of the smoothed image prior $p_\sigma$ with $-\frac{1}{\sigma^2} g_\sigma$ . This relation was used for image generation with “ Denoising Score Matching ” by Saremi & Hyvarinen ( 2019 ) ; Bigdeli et al . ( 2020 ) . | The paper presents a new form of the plug-and-play (PnP) half-quadratic splitting algorithm with provable convergence. In contrast to directly formulating the denoiser as in previous works, they propose to parameterize the denoiser with a learnable score-based function. This builds a bridge between the recently emerged score-based generative models and plug-and-play methods. Even more surprisingly, they show that such a new formulation within the PnP framework leads to a very strong theoretical convergence guarantee under mild, realistic assumptions.
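For intuition, the PnP-HQS scheme that GS-PnP builds on can be sketched generically: alternate a plugged-in denoiser with the closed-form prox of the quadratic data-fidelity term (here $A = \mathrm{Id}$, so the prox is a convex combination with $y$). The 3-tap moving average below is a toy stand-in denoiser, not the paper's GS denoiser:

```python
def box_denoiser(x):
    # Stand-in for a learned denoiser D_sigma: 3-tap moving average
    # with replicated boundaries.
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def pnp_hqs(y, sigma2=0.1, tau=1.0, iters=20):
    # PnP-HQS sketch: the denoiser replaces Prox_g, then the prox of
    # f(x) = (1 / (2*sigma2)) ||x - y||^2 with weight tau has the closed
    # form x = (sigma2 * z + tau * y) / (sigma2 + tau).
    x = list(y)
    for _ in range(iters):
        z = box_denoiser(x)
        x = [(sigma2 * zi + tau * yi) / (sigma2 + tau) for zi, yi in zip(z, y)]
    return x
```

On a noisy 1-D signal the iterates stay close to the data while the spread of values shrinks, which is the qualitative behavior the splitting is designed to produce.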
They also demonstrate good empirical results on three image restoration tasks, i.e., deblurring, super-resolution and inpainting, verifying that the empirical convergence rate is usually faster than $\mathcal{O}(\frac{1}{k})$ -- the worst-case rate established theoretically. | SP:db6157c2243f1adb0f5f9b81d1e86354b6fe8089 |
Gradient Step Denoiser for convergent Plug-and-Play | 1 INTRODUCTION . Image restoration ( IR ) problems can be formulated as inverse problems of the form $x^* \in \arg\min_x f(x) + \lambda g(x)$ ( 1 ) where f is a term measuring the fidelity to a degraded observation y , and g is a regularization term weighted by a parameter $\lambda \geq 0$ . Generally , the degradation of a clean image $\hat{x}$ can be modeled by a linear operation $y = A\hat{x} + \xi$ , where A is a degradation matrix and $\xi$ a white Gaussian noise . In this context , the maximum a posteriori ( MAP ) derivation relates the data-fidelity term to the likelihood $f(x) = -\log p(y|x) = \frac{1}{2\sigma^2}\|Ax - y\|^2$ , while the regularization term is related to the chosen prior . Regularization is crucial since it tackles the ill-posedness of the IR task by bringing a priori knowledge on the solution . A lot of research has been dedicated to designing accurate priors g. Among the most classical priors , one can single out total variation ( Rudin et al. , 1992 ) , wavelet sparsity ( Mallat , 2009 ) or patch-based Gaussian mixtures ( Zoran & Weiss , 2011 ) . Designing a relevant prior g is a difficult task and recent approaches rather apply deep learning techniques to directly learn a prior from a database of clean images ( Lunz et al. , 2018 ; Prost et al. , 2021 ; González et al. , 2021 ) . Generally , the problem ( 1 ) does not have a closed-form solution , and an optimization algorithm is required . First-order proximal splitting algorithms ( Combettes & Pesquet , 2011 ) operate individually on f and g via the proximity operator $\mathrm{Prox}_f(x) = \arg\min_z \frac{1}{2}\|x - z\|^2 + f(z)$ . ( 2 ) Among them , half-quadratic splitting ( HQS ) ( Geman & Yang , 1995 ) alternately applies the proximal operators of f and g. Proximal methods are particularly useful when either f or g is nonsmooth . Plug-and-Play ( PnP ) methods ( Venkatakrishnan et al.
, 2013 ) build on proximal splitting algorithms by replacing the proximity operator of g with a generic denoiser , e.g . a pretrained deep network . ∗Corresponding author : samuel.hurault @ math.u-bordeaux.fr These methods achieve state-of-the-art results ( Buzzard et al. , 2018 ; Ahmad et al. , 2020 ; Yuan et al. , 2020 ; Zhang et al. , 2021 ) in various IR problems . However , since a generic denoiser cannot generally be expressed as a proximal mapping ( Moreau , 1965 ) , convergence results , which stem from the properties of the proximal operator , are difficult to obtain . Moreover , the regularizer g is only made implicit via the denoising operation . Therefore , PnP algorithms do not seek the minimization of an explicit objective functional , which strongly limits their interpretation and numerical control . In order to keep the tractability of a minimization problem , Romano et al . ( 2017 ) proposed , with regularization by denoising ( RED ) , an explicit prior g that exploits a given generic denoiser D in the form $g(x) = \frac{1}{2}\langle x , x - D(x) \rangle$ . With strong assumptions on the denoiser ( in particular a symmetric Jacobian assumption ) , they show that it verifies $\nabla_x g(x) = x - D(x)$ . ( 3 ) Such a denoiser is then plugged in gradient-based minimization schemes . Despite having shown very good results on various image restoration tasks , as later pointed out by Reehorst & Schniter ( 2018 ) or Saremi ( 2019 ) , existing deep denoisers lack Jacobian symmetry . Hence , RED does not minimize an explicit functional and is not guaranteed to converge . Contributions . In this work , we develop a PnP scheme with novel theoretical convergence guarantees and state-of-the-art IR performance . Departing from the PnP-HQS framework , we plug in a denoiser that inherently satisfies equation ( 3 ) without sacrificing denoising performance . The resulting fixed-point algorithm is guaranteed to converge to a stationary point of an explicit functional .
This convergence guarantee does not require strong convexity of the data-fidelity term , thus encompassing ill-posed IR tasks like deblurring , super-resolution or inpainting . 2 RELATED WORKS . PnP methods have been successfully applied in the literature with various splitting schemes : HQS ( Zhang et al. , 2017b ; 2021 ) , ADMM ( Romano et al. , 2017 ; Ryu et al. , 2019 ) , Proximal Gradient Descent ( PGD ) ( Terris et al. , 2020 ) . First used with classical non-deep denoisers such as BM3D ( Chan et al. , 2016 ) and pseudo-linear denoisers ( Nair et al. , 2021 ; Gavaskar et al. , 2021 ) , more recent PnP approaches ( Meinhardt et al. , 2017 ; Ryu et al. , 2019 ) rely on efficient off-the-shelf deep denoisers such as DnCNN ( Zhang et al. , 2017a ) . State-of-the-art IR results are currently obtained with denoisers that are specifically designed to be integrated in PnP schemes , like IRCNN ( Zhang et al. , 2017b ) or DRUNET ( Zhang et al. , 2021 ) . Though providing excellent restorations , such schemes are not guaranteed to converge for all kinds of denoisers or IR tasks . Designing convergence proofs for PnP algorithms is an active research topic . Sreehari et al . ( 2016 ) used the proximal theorem of Moreau ( Moreau , 1965 ) to give sufficient conditions for the denoiser to be an explicit proximal map , which are applied to a pseudo-linear denoiser . Convergence with pseudo-linear denoisers has been extensively studied ( Gavaskar & Chaudhury , 2020 ; Nair et al. , 2021 ; Chan , 2019 ) . However , state-of-the-art PnP results are obtained with deep denoisers . Various assumptions have been made to ensure the convergence of the related PnP schemes . With a “ bounded denoiser ” assumption , Chan et al . ( 2016 ) ; Gavaskar & Chaudhury ( 2019 ) showed convergence of PnP-ADMM with stepsizes decreasing to 0 . RED ( Romano et al. , 2017 ) and RED-PRO ( Cohen et al.
, 2021 ) respectively consider the classes of denoisers with symmetric Jacobian or demicontractive mappings , but these conditions are either too restrictive or hard to verify in practice . In Appendix A.3 , more details are given on RED-based methods . Many works focus on Lipschitz properties of PnP operators . Depending on the splitting algorithm in use , convergence can be obtained by assuming the denoiser averaged ( Sun et al. , 2019b ) , firmly nonexpansive ( Sun et al. , 2021 ; Terris et al. , 2020 ) or simply nonexpansive ( Reehorst & Schniter , 2018 ; Liu et al. , 2021 ) . These settings are unrealistic as deep denoisers do not generally satisfy such properties . Ryu et al . ( 2019 ) ; Terris et al . ( 2020 ) propose different ways to train deep denoisers with constrained Lipschitz constants , in order to fit the technical properties required for convergence . But imposing hard Lipschitz constraints on the network alters its denoising performance ( Bohra et al. , 2021 ; Hertrich et al. , 2021 ) . Yet , Ryu et al . ( 2019 ) manages to get a convergent PnP scheme without assuming the nonexpansiveness of D . This comes at the cost of imposing strong convexity on the data-fidelity term f , which excludes many IR tasks like deblurring , super-resolution or inpainting . Hence , given the ill-posedness of IR problems , looking for a unique solution via contractive operators is a restrictive assumption . In this work , we do not impose contractiveness , but still obtain convergence results with realistic hypotheses . One can relate the ideal deep denoiser to the “ true ” natural image prior p via Tweedie ’ s Identity . In ( Efron , 2011 ) , it is indeed shown that the Minimum Mean Square Error ( MMSE ) denoiser $D^*_\sigma$ ( at noise level σ ) verifies $D^*_\sigma(x) = x + \sigma^2 \nabla_x \log p_\sigma(x)$ where $p_\sigma$ is the convolution of p with the density of $\mathcal{N}(0 , \sigma^2 \mathrm{Id})$ . In a recent line of research ( Bigdeli et al. , 2017 ; Xu et al. , 2020 ; Laumont et al.
, 2021 ; Kadkhodaie & Simoncelli , 2020 ) , this relation is used to plug a denoiser in gradient-based dynamics . In practice , the MMSE denoiser cannot be computed explicitly and Tweedie ’ s Identity does not hold for deep approximations of the MMSE . In order to be as exhaustive as possible , we detail in Appendix A.1 the limitations of existing PnP methods that we address . 3 THE GRADIENT STEP PLUG-AND-PLAY . The proposed method is based on the PnP version of half-quadratic splitting ( PnP-HQS ) that amounts to replacing the proximity operator of the prior g with an off-the-shelf denoiser $D_\sigma$ . In order to define a convergent PnP scheme , we first set up in Section 3.1 a Gradient Step ( GS ) denoiser . We then introduce the Gradient Step PnP ( GS-PnP ) algorithm in Section 3.2 . 3.1 GRADIENT STEP DENOISER . We propose to plug a denoising operator $D_\sigma$ that takes the form of a gradient descent step $D_\sigma = \mathrm{Id} - \nabla g_\sigma$ , ( 4 ) with $g_\sigma : \mathbb{R}^n \to \mathbb{R}$ . Contrary to Romano et al . ( 2017 ) , our denoiser exactly represents a conservative vector field . The choice of the parameterization of $g_\sigma$ is fundamental for the denoising performance . As already noticed in Salimans & Ho ( 2021 ) , we experimentally found that directly modeling $g_\sigma$ as a neural network ( e.g . a standard network used for classification ) leads to poor denoising performance . In order to keep the strength of state-of-the-art unconstrained denoisers , we rather use $g_\sigma(x) = \frac{1}{2}\|x - N_\sigma(x)\|^2$ , ( 5 ) which leads to $D_\sigma(x) = x - \nabla g_\sigma(x) = N_\sigma(x) + J_{N_\sigma}(x)^T (x - N_\sigma(x))$ , ( 6 ) where $N_\sigma : \mathbb{R}^n \to \mathbb{R}^n$ is parameterized by a neural network and $J_{N_\sigma}(x)$ is the Jacobian of $N_\sigma$ at point x . As discussed in Appendix A.2 , the formulation ( 5 ) for $g_\sigma$ has been proposed in ( Romano et al. , 2017 , Section 5.2 ) and ( Bigdeli & Zwicker , 2017 ) for a distinct but related purpose , and not exploited for convergence analysis .
Thanks to our definition ( 6 ) for $D_\sigma$ , we can parameterize $N_\sigma$ with any differentiable neural network architecture $\mathbb{R}^n \to \mathbb{R}^n$ that has proven efficient for image denoising . Although the representation power of the denoiser is limited by the particular form ( 6 ) , we show ( see Section 5.1 ) that such a parameterization still yields state-of-the-art denoising results . We train the denoiser $D_\sigma$ for Gaussian noise by minimizing the MSE loss function $\mathcal{L}(D_\sigma) = \mathbb{E}_{x \sim p , \xi_\sigma \sim \mathcal{N}(0 , \sigma^2 I)}\left[\|D_\sigma(x + \xi_\sigma) - x\|^2\right]$ , ( 7 ) or $\mathcal{L}(g_\sigma) = \mathbb{E}_{x \sim p , \xi_\sigma \sim \mathcal{N}(0 , \sigma^2 I)}\left[\|\nabla g_\sigma(x + \xi_\sigma) - \xi_\sigma\|^2\right]$ , ( 8 ) when written in terms of $g_\sigma$ using equation ( 4 ) . Remark 1 . By definition , the optimal solution $g^*_\sigma \in \arg\min \mathcal{L}$ is related to the MMSE denoiser $D^*_\sigma$ , that is , the best non-linear predictor of x given $x + \xi_\sigma$ . Therefore , it satisfies Tweedie ’ s formula and $\nabla g^*_\sigma = -\sigma^2 \nabla \log p_\sigma$ ( Efron , 2011 ) i.e . $g^*_\sigma = -\sigma^2 \log p_\sigma + C$ , for some $C \in \mathbb{R}$ . Hence approximating the MMSE denoiser with a denoiser parameterized as ( 4 ) is related to approximating the logarithm of the smoothed image prior $p_\sigma$ with $-\frac{1}{\sigma^2} g_\sigma$ . This relation was used for image generation with “ Denoising Score Matching ” by Saremi & Hyvarinen ( 2019 ) ; Bigdeli et al . ( 2020 ) . | The paper considers a novel and interesting idea: designing a deep neural network denoiser that makes Plug-and-Play priors (PnP) convergence analysis clean and simple, motivated by PnP-HQS [Zhang et al., 2017b] and regularization by denoising (RED) [Romano et al., 2017]. Existing works have proved the convergence of PnP and RED with contractive and nonexpansive denoisers. Designing deep neural net denoisers that ensure PnP/RED convergence is an active area. Specifically, this paper focuses on designing denoisers that correspond to a gradient descent step on the $L$-smooth regularizer function $g_\sigma(x)$.
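Remark 1's use of Tweedie's formula can be checked exactly in one dimension: for a standard Gaussian prior $p = \mathcal{N}(0, 1)$ the smoothed density is $p_\sigma = \mathcal{N}(0, 1 + \sigma^2)$, and its score recovers the posterior-mean (MMSE) denoiser $y / (1 + \sigma^2)$. A self-contained check:

```python
def mmse_denoiser_gaussian(y, sigma):
    # Closed-form posterior mean E[x | y] for prior x ~ N(0, 1) and
    # observation y = x + N(0, sigma^2): E[x | y] = y / (1 + sigma^2).
    return y / (1.0 + sigma ** 2)

def tweedie_rhs(y, sigma):
    # y + sigma^2 * d/dy log p_sigma(y), with p_sigma = N(0, 1 + sigma^2),
    # whose score is d/dy log p_sigma(y) = -y / (1 + sigma^2).
    score = -y / (1.0 + sigma ** 2)
    return y + sigma ** 2 * score
```

The two expressions coincide for every `y` and `sigma`, which is exactly the identity $D^*_\sigma(x) = x + \sigma^2 \nabla \log p_\sigma(x)$ in this toy case.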
By using such a denoising step within PnP, followed by the proximal operator of the data fidelity $f$, one can guarantee convergence via traditional non-convex optimization arguments on the objective function $F(x)$. Since an explicit $F(x)$ is available, a standard backtracking scheme is used in this work to ensure convergence without the need to track the exact Lipschitz constant of $\nabla g_\sigma$. Finally, the performance and stability of the proposed method are evaluated on three image inverse problems, namely deblurring, super-resolution and inpainting, with satisfactory results compared to existing methods based on PnP and RED. | SP:db6157c2243f1adb0f5f9b81d1e86354b6fe8089 |
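The backtracking idea mentioned in the review can be sketched generically: shrink the stepsize until a sufficient-decrease condition on $F$ holds, so no Lipschitz constant needs to be known in advance. A minimal sketch (a generic gradient step with Armijo-style backtracking, not the exact GS-PnP update):

```python
def backtrack_step(F, grad_F, x, tau0=1.0, gamma=0.5, c=1e-4, max_tries=50):
    # Halve the stepsize until F decreases enough; this avoids tracking
    # the exact Lipschitz constant of the gradient.
    g = grad_F(x)
    g2 = sum(gi * gi for gi in g)
    tau = tau0
    for _ in range(max_tries):
        x_new = [xi - tau * gi for xi, gi in zip(x, g)]
        if F(x_new) <= F(x) - c * tau * g2:   # sufficient decrease
            return x_new, tau
        tau *= gamma
    return x, 0.0   # no acceptable step found
```

On a simple quadratic objective the very first trial step is already accepted, while on harder objectives `tau` shrinks automatically.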
Spatial Graph Attention and Curiosity-driven Policy for Antiviral Drug Discovery | 1 INTRODUCTION . This work aims to address the challenge of establishing an automated process for the design of objects with connected components , such as molecules , that optimize specific properties . Achieving this goal is particularly desirable in drug development and materials science , where manual discovery remains a time-consuming and expensive process ( Hughes et al. , 2011 ; Schneider et al. , 2020 ) . However , there are two major difficulties that have long impeded rapid progress . Firstly , the chemical space is discrete and massive ( Polishchuk et al. , 2013 ) , presenting a complicated environment for an Artificial Intelligence ( AI ) approach to efficiently and effectively explore . Secondly , it is not trivial to compress such connected objects into feature representations that preserve most of the information , while also being highly computable for Deep Learning ( DL ) methods to exploit . We introduce Distilled Graph Attention Policy Networks ( DGAPN ) , a framework that advances prior work in addressing both of these challenges . We present a Reinforcement Learning ( RL ) architecture that is efficiently encouraged to take innovative actions with an environment that is able to construct a dynamic and chemically valid fragment-based action space . We also propose a hybrid Graph Neural Network ( GNN ) that comprehensively encodes graph objects ’ attributes and spatial structures in addition to adjacency structures . The following paragraphs discuss how we addressed limitations of prior work and its relevance to antiviral drug discovery . For more descriptions of key prior methodologies that we used as benchmarks in this paper , see Section 5 . 
Graph Representation Learning Despite their spatial efficiency , string representations of molecules acquired by the simplified molecular-input line-entry system ( SMILES ) ( Weininger , 1988 ) suffer from significant information loss and poor robustness ( Liu et al. , 2017 ) . Graph representations have become predominant and preferable for their ability to efficiently encode an object ’ s scaffold structure and attributes . Graph representations are particularly ideal for RL since intermediate representations can be decoded and evaluated for reward assignments . While GNNs such as Graph Convolutional Networks ( GCNs ) ( Kipf & Welling , 2016 ) and Graph Attention Networks ( GATs ) ( Veličković et al. , 2017 ) have demonstrated impressive performance on many DL tasks , further exploitation of the richer information contained in graph-structured data is needed to faithfully represent the complexity of chemical space ( Morris et al. , 2019 ; Wang et al. , 2019 ; Chen et al. , 2020 ) . In this work , we made improvements over previous studies on attribute encoding and structural encoding . For structural encoding , previous studies have covered adjacency distance encoding ( Li et al. , 2020 ) , spatial cutoff ( Pei et al. , 2020 ) and coordinates encoding ( Schütt et al. , 2017 ; Danel et al. , 2020 ) . Our work presents an alternative approach to spatial structure encoding similar to Gilmer et al . ( 2017 ) , which does not rely on node coordinates but differs in the embedding and updating scheme . Distinct from Danel et al . ( 2020 ) and Chen & Chen ( 2021 ) , we extended attentional embedding to be edge-featured , while remaining node-centric for message passing efficiency . Reinforcement Learning A variety of graph generative models have been used in prior work , predominantly Variational Autoencoders ( VAEs ) ( Simonovsky & Komodakis , 2018 ; Samanta et al. , 2020 ; Liu et al. , 2018 ; Ma et al. , 2018 ; Jin et al.
, 2018 ) and Generative Adversarial Networks ( GANs ) ( De Cao & Kipf , 2018 ) . While some of these have a recurrent structure ( Li et al. , 2018 ; You et al. , 2018b ) , RL and other search algorithms that interact dynamically with the environment excel in sequential generation due to their ability to resist overfitting on training data . Both policy learning ( You et al. , 2018a ) and value function learning ( Zhou et al. , 2019 ) have been adopted for molecule generation : however , they generate molecules node-by-node and edge-by-edge . In comparison , an action space consisting of molecular fragments , i.e. , a collection of chemically valid components and realizable synthesis paths , is favorable since different atom types and bonds are defined by the local molecular environment . Fragment-by-fragment sequential generation has been used in VAE ( Jin et al. , 2018 ) and search algorithms ( Jin et al. , 2020 ; Xie et al. , 2021 ) , but has not been utilized in a deep graph RL framework . In this work , we designed our environment with the Chemically Reasonable Mutations ( CReM ) ( Polishchuk , 2020 ) library to realize a valid fragment-based action space . Furthermore , we enhanced exploration by employing a simple and efficient technique , adapting Random Network Distillation ( RND ) ( Burda et al. , 2018 ) to GNNs and proposing surrogate innovation rewards for intermediate states . Antiviral Drug Discovery — A Timely Challenge The severity of the COVID-19 pandemic highlighted the major role of computational workflows to characterize the viral machinery and identify druggable targets for the rapid development of novel antivirals . Particularly , the synergistic use of DL methods and structural knowledge via molecular docking simulations is at the cutting edge of molecular biology — consolidating such integrative protocols to accelerate drug discovery is of paramount importance ( Yang et al. , 2021 ; Jeon & Kim , 2020 ; Thomas et al. , 2021 ) . 
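The RND adaptation mentioned above can be sketched in a few lines: a predictor is trained online to match a fixed, randomly initialized target network, and its prediction error serves as a surrogate innovation (curiosity) reward that decays as similar states are revisited. A minimal linear stand-in (not the paper's GNN-based distillation):

```python
import random

random.seed(0)
DIM = 8

# Fixed, randomly initialized target network (a frozen linear map here).
target_w = [random.gauss(0.0, 1.0) for _ in range(DIM)]
# Trainable predictor, updated online to match the target.
pred_w = [0.0] * DIM

def innovation_reward(state, lr=0.05):
    # RND-style curiosity: the predictor's squared error on the target
    # output is large for novel states and shrinks on revisits.
    t = sum(w * s for w, s in zip(target_w, state))
    p = sum(w * s for w, s in zip(pred_w, state))
    err = (p - t) ** 2
    for i in range(DIM):            # one SGD step on the squared error
        pred_w[i] -= lr * 2.0 * (p - t) * state[i]
    return err
```

Calling `innovation_reward` twice on the same state yields a strictly smaller reward the second time, which is the intended novelty signal.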
Here we experimentally examined our architecture on the task of discovering novel inhibitors targeting the SARS-CoV-2 non-structural protein endoribonuclease ( NSP15 ) , which is critical for viral evasion of host defense systems ( Pillon et al. , 2021 ) . Structural information about the putative protein-ligand complexes was integrated into this framework with AutoDock-GPU ( Santos-Martins et al. , 2021 ) , which leverages the GPU resources from leadership-class computing facilities , including the Summit supercomputer , for high-throughput molecular docking ( Legrand et al. , 2020 ) . We show that our results outperformed state-of-the-art generation models in finding molecules with high affinity to the target and reasonable synthetic accessibility . 2 PROBLEM FORMULATIONS . Our goal is to establish a set of decision rules to generate graph-structured data that maximizes compound objectives under certain constraints . Similar to prior formulations , the generating process is defined as a time-homogeneous Markov Decision Process ( MDP ) . We give a formal definition of this process in Appendix A . Under this setting , the action policies and state transition dynamics at step t can be factorized according to the Markov property : $P(a_t | s_0 , a_0 , s_1 , a_1 , \ldots , s_t) = P(a_t | s_t) := \pi(a_t | s_t)$ ( 1 ) $P(s_{t+1} | s_0 , a_0 , s_1 , a_1 , \ldots , s_t , a_t) = P(s_{t+1} | s_t , a_t) := \rho(s_{t+1} | s_t , a_t)$ ( 2 ) where $\{s_t , a_t\}_t$ are state-action sequences . A reward function $r(s , a)$ is used to assess an action a taken at a given state s. The process terminates at an optional stopping time T and $s_T$ is then proposed as the final product of the current generating cycle . We aim to estimate the optimal policy π in terms of various objectives to be constructed later in the experiment section . 3 PROPOSED METHOD . 3.1 ENVIRONMENT SETTINGS . In the case of molecular graphs , single-atom or single-bond additions are often not realizable by known biochemical reactions .
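The factorizations ( 1 )-( 2 ) make generation a plain policy rollout. A generic sketch, with hypothetical stand-ins for the learned policy, the transition and the reward:

```python
def rollout(policy, transition, reward, s0, T):
    # Sample a trajectory under the Markov factorizations (1)-(2):
    # a_t = policy(s_t), s_{t+1} = transition(s_t, a_t) (deterministic
    # transitions here); return visited states and cumulative reward.
    s, total, states = s0, 0.0, [s0]
    for _ in range(T):
        a = policy(s)
        s_next = transition(s, a)
        total += reward(s, a)
        s, states = s_next, states + [s_next]
    return states, total
```

For instance, with integer states, a policy that always picks `+1`, and reward equal to the next-state value, `rollout(lambda s: 1, lambda s, a: s + a, lambda s, a: float(s + a), 0, 3)` visits `[0, 1, 2, 3]` and accumulates `6.0`.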
Rather than employing abstract architectures such as GANs to suggest synthetic accessibility , we use the chemical library CReM ( Polishchuk , 2020 ) to construct our environment such that all next possible molecules can be obtained by one step of interchanging chemical fragments with the current molecule . This explicit approach is considerably more reliable and interpretable compared to DL approaches . A detailed description of the CReM library can be found in Appendix B.1 . At each time step t , we use CReM to sample a set of valid molecules $v_{t+1}$ as the candidates for the next state $s_{t+1}$ based on $s_t$ . Under this setting , the transition dynamics are deterministic , the underlying set A of the action space can be defined as equal to S of the state space , and action $a_t$ is induced by the direct selection of $s_{t+1}$ . With an abuse of notation , we let $r(s_{t+1}) := r(s_t , a_t)$ . 3.2 SPATIAL GRAPH ATTENTION . We introduce a graph embedding mechanism called Spatial Graph Attention ( sGAT ) in an attempt to faithfully extract feature vectors $h_t \in \mathbb{R}^{d_h}$ representing graph-structured objects such as molecules . Two different types of information graphs constructed from a connected object are heterogeneous and thus handled differently in forward passes , as described in the following sections . See Figure 1 for an overview . 3.2.1 ATTENTION ON ATTRIBUTION GRAPHS . The attribution graph of a molecule with n atoms and e bonds is given by the triple ( A , N , E ) , where $A \in \{0 , 1\}^{n \times n}$ is the node adjacency matrix , N is the node attribution matrix of dimension $n \times d_n$ and E is the edge attribution matrix of dimension $e \times d_e$ . Each entry $a_{ij}$ of A is 1 if a bond exists between atom i and j , and 0 otherwise . Each row vector $n_i$ of N is a concatenation of the properties of atom i , including its atomic number , mass , etc. , with the categorical properties being one-hot encoded . E is formed similarly to N , but with bond attributes .
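The environment above (action space equal to state space, deterministic transitions) can be sketched as follows; `candidate_states` is a toy stand-in for CReM's fragment interchange, not a real library call:

```python
def candidate_states(s):
    # Stand-in for CReM sampling: the set v_{t+1} of valid one-fragment
    # mutations of the current "molecule" (here just a tuple of fragment ids).
    return [s + (f,) for f in ("F1", "F2", "F3")]

def step(s, reward):
    # Action space equals state space: acting means directly selecting
    # the next state, so the transition is deterministic and
    # r(s_{t+1}) := r(s_t, a_t).
    cands = candidate_states(s)
    s_next = max(cands, key=reward)   # greedy stand-in for the learned policy
    return s_next, reward(s_next)
```

With a reward counting occurrences of the hypothetical fragment "F2", `step(("C",), ...)` selects `("C", "F2")` and returns its reward.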
We denote a row vector of E as $e_{ij}$ if it corresponds to the bond between atom i and j . We proceed to define a multi-head forward propagation that handles this rich graph information : let $h_{n_k} \in \mathbb{R}^{1 \times d_{h_n}}$ denote a given representation for $n_k$ , $h_{e_{ij}} \in \mathbb{R}^{1 \times d_{h_e}}$ denote a representation for $e_{ij}$ ; then the m-th head attention $\alpha^m_{ij}$ from node j to node i ( $i \neq j$ ) is given by $\alpha^m_{ij} = \mathrm{softmax}_j \left( \bigcup_{k : a_{ik} = 1} \left\{ \sigma \left( \left[ h_{n_i} W_{n,m} \, \| \, h_{e_{ik}} W_{e,m} \, \| \, h_{n_k} W_{n,m} \right] \cdot \mathrm{att}_m^T \right) \right\} \right)$ ( 3 ) where $\mathrm{softmax}_j$ is the softmax score of node j ; $\|$ is column concatenation ; σ is some non-linear activation ; $W_{n,m} \in \mathbb{R}^{d_{h_n} \times d_{w_n}}$ , $W_{e,m} \in \mathbb{R}^{d_{h_e} \times d_{w_e}}$ are the m-th head weight matrices for nodes and edges respectively ; $\mathrm{att}_m \in \mathbb{R}^{1 \times (2 d_{w_n} + d_{w_e})}$ is the m-th head attention weight . The representations after a feed-forward operation are consequently given as follows : $h'_{n_i} = \mathrm{aggr}_{1 \leq m \leq n_m} \left\{ \sigma \left( \left[ \sum_{j : a_{ij} = 1} \alpha^m_{ij} \cdot h_{n_j} + h_{n_i} \right] W_{n,m} \right) \right\}$ ( 4 ) $h'_{e_{ij}} = \mathrm{aggr}_{1 \leq m \leq n_m} \left\{ \sigma \left( \left[ h_{n_i} W_{n,m} \, \| \, h_{e_{ij}} W_{e,m} \, \| \, h_{n_j} W_{n,m} \right] \cdot W_{h,m} \right) \right\}$ ( 5 ) where $W_{h,m} \in \mathbb{R}^{(2 d_{w_n} + d_{w_e}) \times d_{w_e}}$ ; $n_m$ is the total number of attention heads and $\mathrm{aggr}$ denotes an aggregation method , most commonly mean , sum , or concat ( Hamilton et al. , 2017 ) . We note that we have not found significant differences across these methods and have used mean for all aggregations in our experiments . In principle , a single-head operation on nodes is essentially graph convolution with the adjacency matrix $\hat{A} = \tilde{A} + I$ , where $\tilde{A}$ is attention-regularized according to ( 3 ) . This approach sufficiently embeds edge attributes while still being a node-centric convolution mechanism , for which efficient frameworks like Pytorch-Geometric ( Fey & Lenssen , 2019 ) have been well established . | The authors propose a new fragment-based method for molecule generation. The model, called DGAPN, uses chemical fragments extracted from a public compound library with their chemical context (atom neighborhoods to which these fragments are attached).
This way, all modifications made by the model should produce synthetically accessible compounds with high probability. Additionally, the authors introduce a new class of graph neural networks to predict chemical properties that employ an attention mechanism over atoms and chemical bonds. These models are used to guide the generative MDP trained with reinforcement learning. Inhibition of a SARS-CoV-2 antiviral target, NSP15, is used as an example task for the proposed model. The experimental section shows the results of single- and multi-objective compound generation, in which DGAPN obtains better docking scores of the generated molecules than other methods. | SP:1ed95cc628f9f7ffecfd0290865337f621acff03 |
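The single-head case of the sGAT propagation in Eqs. (3)-(5) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the leaky-ReLU activation, random weights, and the tiny two-node demo graph are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, a=0.2):
    return np.where(x > 0, x, a * x)

def sgat_head(A, Hn, He, Wn, We, att, Wh):
    """Single-head sGAT forward pass, a sketch of Eqs. (3)-(5).
    A: (n, n) adjacency; Hn: (n, d_hn) node features;
    He: dict {(i, j): (d_he,)} edge features (symmetric keys);
    Wn: (d_hn, d_wn); We: (d_he, d_we);
    att: (2*d_wn + d_we,); Wh: (2*d_wn + d_we, d_we)."""
    n = A.shape[0]
    Zn = Hn @ Wn                              # projected node features
    alpha = np.zeros((n, n))
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        # Eq. (3): attention logits over neighbours k with a_ik = 1
        logits = np.array([
            leaky_relu(np.concatenate([Zn[i], He[(i, k)] @ We, Zn[k]]) @ att)
            for k in nbrs])
        e = np.exp(logits - logits.max())
        alpha[i, nbrs] = e / e.sum()          # softmax over the neighbourhood
    # Eq. (4): attention-weighted convolution with self-connection
    Hn_new = leaky_relu((alpha @ Hn + Hn) @ Wn)
    # Eq. (5): edge update from the two endpoint node representations
    He_new = {ij: leaky_relu(
        np.concatenate([Zn[ij[0]], He[ij] @ We, Zn[ij[1]]]) @ Wh)
        for ij in He}
    return Hn_new, He_new

# tiny two-node demo with random weights (shapes only)
A = np.array([[0, 1], [1, 0]])
Hn = rng.normal(size=(2, 3))
He = {(0, 1): rng.normal(size=2), (1, 0): rng.normal(size=2)}
Wn, We = rng.normal(size=(3, 4)), rng.normal(size=(2, 4))
att, Wh = rng.normal(size=12), rng.normal(size=(12, 4))
Hn2, He2 = sgat_head(A, Hn, He, Wn, We, att, Wh)
```

A multi-head version would run this head n_m times with separate weights and aggregate the outputs (mean, as in the paper's experiments).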
Spatial Graph Attention and Curiosity-driven Policy for Antiviral Drug Discovery | 1 INTRODUCTION. This work aims to address the challenge of establishing an automated process for the design of objects with connected components, such as molecules, that optimize specific properties. Achieving this goal is particularly desirable in drug development and materials science, where manual discovery remains a time-consuming and expensive process (Hughes et al., 2011; Schneider et al., 2020). However, there are two major difficulties that have long impeded rapid progress. Firstly, the chemical space is discrete and massive (Polishchuk et al., 2013), presenting a complicated environment for an Artificial Intelligence (AI) approach to explore efficiently and effectively. Secondly, it is not trivial to compress such connected objects into feature representations that preserve most of the information while also being highly computable for Deep Learning (DL) methods to exploit. We introduce Distilled Graph Attention Policy Networks (DGAPN), a framework that advances prior work in addressing both of these challenges. We present a Reinforcement Learning (RL) architecture that is efficiently encouraged to take innovative actions within an environment that constructs a dynamic and chemically valid fragment-based action space. We also propose a hybrid Graph Neural Network (GNN) that comprehensively encodes graph objects' attributes and spatial structures in addition to adjacency structures. The following paragraphs discuss how we addressed limitations of prior work and its relevance to antiviral drug discovery. For more descriptions of key prior methodologies that we used as benchmarks in this paper, see Section 5.
Graph Representation Learning Despite their spatial efficiency, string representations of molecules, such as those produced by the simplified molecular-input line-entry system (SMILES) (Weininger, 1988), suffer from significant information loss and poor robustness (Liu et al., 2017). Graph representations have become predominant and preferable for their ability to efficiently encode an object's scaffold structure and attributes. Graph representations are particularly well suited to RL, since intermediate representations can be decoded and evaluated for reward assignments. While GNNs such as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016) and Graph Attention Networks (GATs) (Veličković et al., 2017) have demonstrated impressive performance on many DL tasks, further exploitation of the richer information contained in graph-structured data is needed to faithfully represent the complexity of chemical space (Morris et al., 2019; Wang et al., 2019; Chen et al., 2020). In this work, we made improvements over previous studies on attribute encoding and structural encoding. For structural encoding, previous studies have covered adjacency distance encoding (Li et al., 2020), spatial cutoffs (Pei et al., 2020), and coordinate encoding (Schütt et al., 2017; Danel et al., 2020). Our work presents an alternative approach to spatial structure encoding, similar to Gilmer et al. (2017) in that it does not rely on node coordinates, but different in its embedding and updating scheme. Distinct from Danel et al. (2020) and Chen & Chen (2021), we extended attentional embedding to be edge-featured while remaining node-centric for message-passing efficiency. Reinforcement Learning A variety of graph generative models have been used in prior work, predominantly Variational Autoencoders (VAEs) (Simonovsky & Komodakis, 2018; Samanta et al., 2020; Liu et al., 2018; Ma et al., 2018; Jin et al.
, 2018) and Generative Adversarial Networks (GANs) (De Cao & Kipf, 2018). While some of these have a recurrent structure (Li et al., 2018; You et al., 2018b), RL and other search algorithms that interact dynamically with the environment excel in sequential generation due to their ability to resist overfitting on training data. Both policy learning (You et al., 2018a) and value function learning (Zhou et al., 2019) have been adopted for molecule generation; however, they generate molecules node-by-node and edge-by-edge. In comparison, an action space consisting of molecular fragments, i.e., a collection of chemically valid components and realizable synthesis paths, is favorable, since different atom types and bonds are defined by the local molecular environment. Fragment-by-fragment sequential generation has been used in VAEs (Jin et al., 2018) and search algorithms (Jin et al., 2020; Xie et al., 2021), but has not been utilized in a deep graph RL framework. In this work, we designed our environment with the Chemically Reasonable Mutations (CReM) (Polishchuk, 2020) library to realize a valid fragment-based action space. Furthermore, we enhanced exploration by employing a simple and efficient technique: adapting Random Network Distillation (RND) (Burda et al., 2018) to GNNs and proposing surrogate innovation rewards for intermediate states. Antiviral Drug Discovery — A Timely Challenge The severity of the COVID-19 pandemic highlighted the major role of computational workflows in characterizing the viral machinery and identifying druggable targets for the rapid development of novel antivirals. In particular, the synergistic use of DL methods and structural knowledge via molecular docking simulations is at the cutting edge of molecular biology; consolidating such integrative protocols to accelerate drug discovery is of paramount importance (Yang et al., 2021; Jeon & Kim, 2020; Thomas et al., 2021).
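The RND-based innovation reward mentioned above can be illustrated with a minimal sketch. Here simple linear maps stand in for the GNN encoders used in the paper (an assumption for brevity): a frozen random target network is distilled by a trained predictor, and the prediction error serves as a novelty bonus that decays for revisited states.

```python
import numpy as np

rng = np.random.default_rng(1)

class RND:
    """Minimal Random Network Distillation sketch (Burda et al., 2018).
    Both networks are single linear maps over a state embedding; in DGAPN
    they would be graph neural network encoders."""
    def __init__(self, d_in, d_out, lr=1e-2):
        self.target = rng.normal(size=(d_in, d_out))   # fixed, never trained
        self.pred = np.zeros((d_in, d_out))            # trained online
        self.lr = lr

    def intrinsic_reward(self, x):
        # novelty = squared prediction error against the frozen target
        err = x @ self.pred - x @ self.target
        return float((err ** 2).mean())

    def update(self, x):
        # one SGD step on the mean squared prediction error
        err = x @ self.pred - x @ self.target
        self.pred -= self.lr * np.outer(x, err) * (2.0 / err.size)

rnd = RND(d_in=8, d_out=4)
x = rng.normal(size=8)            # embedding of a visited molecule
r0 = rnd.intrinsic_reward(x)      # high novelty on first visit
for _ in range(500):
    rnd.update(x)
r1 = rnd.intrinsic_reward(x)      # novelty decays for revisited states
```

In the paper's setting this bonus would be a surrogate innovation reward added to the extrinsic (e.g., docking-based) reward of intermediate states.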
Here we experimentally examined our architecture on the task of discovering novel inhibitors targeting the SARS-CoV-2 non-structural protein endoribonuclease (NSP15), which is critical for viral evasion of host defense systems (Pillon et al., 2021). Structural information about the putative protein-ligand complexes was integrated into this framework with AutoDock-GPU (Santos-Martins et al., 2021), which leverages the GPU resources of leadership-class computing facilities, including the Summit supercomputer, for high-throughput molecular docking (Legrand et al., 2020). We show that our results outperformed state-of-the-art generation models in finding molecules with high affinity to the target and reasonable synthetic accessibility. 2 PROBLEM FORMULATIONS. Our goal is to establish a set of decision rules to generate graph-structured data that maximizes compound objectives under certain constraints. Similar to prior formulations, the generating process is defined as a time-homogeneous Markov Decision Process (MDP). We give a formal definition of this process in Appendix A. Under this setting, the action policies and state transition dynamics at step t can be factorized according to the Markov property:

P(a_t | s_0, a_0, s_1, a_1, ..., s_t) = P(a_t | s_t) := π(a_t | s_t)   (1)

P(s_{t+1} | s_0, a_0, s_1, a_1, ..., s_t, a_t) = P(s_{t+1} | s_t, a_t) := ρ(s_{t+1} | s_t, a_t)   (2)

where {s_t, a_t}_t are state-action sequences. A reward function r(s, a) is used to assess an action a taken at a given state s. The process terminates at an optional stopping time T, and s_T is then proposed as the final product of the current generating cycle. We aim to estimate the optimal policy π with respect to various objectives to be constructed later in the experiment section. 3 PROPOSED METHOD. 3.1 ENVIRONMENT SETTINGS. In the case of molecular graphs, single-atom or single-bond additions are often not realizable by known biochemical reactions.
With the goal of generating molecules that bind to functional sites of SARS-CoV-2 protein, the paper proposed a reinforcement learning model with a fragment-based framework for chemical structure modification.
In the network part, spatial graph attention and spatial convolution are utilized to extract more structural information from the input graph into the representations of nodes or graphs. Based on the actor-critic algorithm, the reinforcement learning part is designed to find the state with the best docking score computed by docking programs, and approaches such as PPO and RND are used to train the model more effectively. In the experiments, the model shows strong performance while also reducing the complexity of chemical synthesis. | SP:1ed95cc628f9f7ffecfd0290865337f621acff03
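The deterministic fragment-swap environment of Section 3.1 can be sketched as a tiny state machine in which the action is the direct selection of the next state from a candidate set. Here `crem_like_candidates` is a hypothetical stand-in for CReM's fragment mutations (its dummy string edits are not real chemistry), and the length-based reward is purely illustrative; a docking score would take its place in practice.

```python
def crem_like_candidates(smiles):
    """Hypothetical stand-in for CReM's fragment mutation: in the real
    environment each candidate is one chemically reasonable fragment swap
    of the current molecule."""
    return [smiles + "C", smiles + "O", smiles + "N"]  # dummy SMILES edits

class FragmentEnv:
    """Deterministic fragment-swap MDP: the action at step t is the direct
    selection of the next state s_{t+1} from the candidate set, so that
    r(s_{t+1}) := r(s_t, a_t)."""
    def __init__(self, reward_fn, candidate_fn=crem_like_candidates,
                 init_state="CCO"):
        self.reward_fn, self.candidate_fn = reward_fn, candidate_fn
        self.state = init_state

    def actions(self):
        # A == S: the action space is the set of valid next molecules
        return self.candidate_fn(self.state)

    def step(self, next_state):
        assert next_state in self.actions(), "invalid fragment swap"
        self.state = next_state
        return next_state, self.reward_fn(next_state)

env = FragmentEnv(reward_fn=len)          # toy reward: SMILES string length
s1, r1 = env.step(env.actions()[0])
```

Swapping `crem_like_candidates` for a call into the actual CReM library (and `reward_fn` for a docking surrogate) would recover the setting described in the paper.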
This paper proposes a method to generate molecular graphs with optimized properties. Molecular graphs are constructed by the iterative addition of molecular fragments in a deep reinforcement learning framework.
The method is benchmarked against a set of baselines on a task to generate molecules that maximize the docking score to a protein (NSP15) from the SARS-CoV-2 virus, and shows good performance. | SP:1ed95cc628f9f7ffecfd0290865337f621acff03
Xi-learning: Successor Feature Transfer Learning for General Reward Functions | 1 INTRODUCTION. Reinforcement Learning (RL) has successfully addressed many complex problems, such as playing computer games, chess, and even Go with superhuman performance (Mnih et al., 2015; Silver et al., 2018). These impressive results are possible thanks to a vast number of interactions of the RL agent with its environment/task. Such a strategy is unsuitable in settings where the agent has to perform and learn at the same time. Consider, for example, a caregiver robot in a hospital that has to learn a new task, such as a new route to deliver meals. In such a setting, the agent cannot collect a vast amount of training samples but has to adapt quickly instead. Transfer learning aims to provide mechanisms to quickly adapt agents in such settings (Taylor and Stone, 2009; Lazaric, 2012; Zhu et al., 2020). The rationale is to use knowledge from previously encountered source tasks on a new target task to improve the learning performance on the target task. The previous knowledge can help reduce the number of interactions required to learn the new optimal behavior. For example, the caregiver robot could reuse knowledge about the layout of the hospital it learned in previous source tasks (e.g., guiding a person) to learn to deliver meals. The Successor Feature (SF) and Generalized Policy Improvement (GPI) framework (Barreto et al., 2020) is a prominent transfer learning mechanism for tasks where only the reward function differs. Its basic premise is that the rewards which the RL agent tries to maximize are defined based on a low-dimensional feature descriptor φ ∈ R^n. For our caregiver robot, these could be the IDs of beds or rooms that it is visiting, in contrast to its high-dimensional visual state input from a camera. The rewards are then computed not based on its visual input but on the IDs of the beds or rooms that it visits.
The expected cumulative discounted successor features (ψ) are learned for each behavior that the robot has learned in the past. They represent the feature-space dynamics that the agent experiences for a behavior; this corresponds to the rooms or beds the caregiver agent would visit when using the behavior. This representation of feature dynamics is independent of the reward function. A behavior learned in a previous task and described by this SF representation can therefore be directly re-evaluated for a different reward function. In a new task, i.e., for a new reward function, the GPI procedure re-evaluates the behaviors learned in previous tasks. It then selects, at each state, the behavior of a previous task if it improves the expected reward. This allows the agent to reuse behaviors learned in previous source tasks for a new target task. (Source code at https://tinyurl.com/3xuzxff3) A similar transfer strategy can also be observed in the behavior of humans (Momennejad et al., 2017; Momennejad, 2020; Tomov et al., 2021). The classical SF & GPI framework (Barreto et al., 2017; 2018) makes the assumption that rewards r are a linear composition of the features φ ∈ R^n via a reward weight vector w_i ∈ R^n that depends on the task i: r_i = φ^T w_i. This assumption allows one to effectively separate the feature dynamics of a behavior from the rewards and thus to re-evaluate previous behaviors given a new reward function, i.e., a new weight vector w_j. Nonetheless, this assumption also restricts successful application of SF & GPI to problems where such a linear decomposition is possible. We investigate the application of the SF & GPI framework to general reward functions r_i = R_i(φ) over the feature space. We propose to learn the cumulative discounted probability over the successor features, named the ξ-function, and refer to the proposed framework as ξ-learning. Our work is related to Janner et al.
(2020) and Touati and Ollivier (2021), and brings two important additional contributions. First, we provide a mathematical proof of the convergence of ξ-learning. Second, we demonstrate how ξ-learning can be used for meta-RL, using the ξ-function to re-evaluate behaviors learned in previous tasks for a new reward function R_j. Furthermore, ξ-learning can also be used to transfer knowledge to new tasks using GPI. The contribution of our paper is three-fold: • We introduce a new RL algorithm, ξ-learning, based on a cumulative discounted probability of successor features, and two variants of its update operator. • We provide theoretical proofs of the convergence of ξ-learning to the optimal policy and a guarantee of its transfer learning performance under the GPI procedure. • We experimentally compare ξ-learning to standard Q-learning and the classical SF framework on tasks with linear and general reward functions, and on tasks with discrete and continuous features, demonstrating the interest and advantage of ξ-learning. 2 BACKGROUND. 2.1 REINFORCEMENT LEARNING. RL investigates algorithms to solve multi-step decision problems, aiming to maximize the sum over future rewards (Sutton and Barto, 2018). RL problems are modeled as Markov Decision Processes (MDPs), which are defined as a tuple M ≡ (S, A, p, R, γ), where S and A are the state and action sets. An agent transitions from a state s_t to another state s_{t+1} using action a_t at time point t, collecting a reward r_t. This process is stochastic, and the transition probability p(s_{t+1} | s_t, a_t) describes which state s_{t+1} is reached. The reward function R defines the scalar reward r_t = R(s_t, a_t, s_{t+1}) ∈ R for the transition. The goal in an MDP is to maximize the expected return G_t = E[ ∑_{k=0}^∞ γ^k R_{t+k} ], where R_t = R(S_t, A_t, S_{t+1}). The discount factor γ ∈ [0, 1) weights collected rewards, discounting future rewards more strongly.
RL provides algorithms to learn a policy π : S → A, defining which action to take in which state to maximize G_t. Value-based RL methods use the concept of value functions to learn the optimal policy. The state-action value function, called the Q-function, is defined as the expected future return when taking action a_t in s_t and then following policy π:

Q^π(s_t, a_t) = E_π{ r_t + γ r_{t+1} + γ^2 r_{t+2} + ... } = E_π{ r_t + γ max_{a_{t+1}} Q^π(S_{t+1}, a_{t+1}) }.   (1)

The Q-function can be recursively defined following the Bellman equation, such that the current Q-value Q^π(s_t, a_t) depends on the maximum Q-value of the next state Q^π(s_{t+1}, a_{t+1}). The optimal policy for an MDP can then be expressed based on the Q-function by taking at every step the maximizing action: π*(s) ∈ argmax_a Q*(s, a). The optimal Q-function can be learned using a temporal difference method such as Q-learning (Watkins and Dayan, 1992). Given a transition (s_t, a_t, r_t, s_{t+1}), the Q-value is updated according to:

Q_{k+1}(s_t, a_t) = Q_k(s_t, a_t) + α_k ( r_t + γ max_{a_{t+1}} Q_k(s_{t+1}, a_{t+1}) − Q_k(s_t, a_t) ),   (2)

where α_k ∈ (0, 1] is the learning rate at iteration k. 2.2 TRANSFER LEARNING AND THE SF & GPI FRAMEWORK. We are interested in the transfer learning setting where the agent has to solve a set of tasks M = {M_1, M_2, ..., M_m} that, in our case, differ only in their reward functions. The Successor Feature (SF) framework provides a principled way to perform transfer learning (Barreto et al., 2017; 2018). SF assumes that the reward function can be decomposed into a linear combination of features φ ∈ Φ ⊂ R^n and a reward weight vector w_i ∈ R^n that is defined for a task M_i:

r_i(s_t, a_t, s_{t+1}) ≡ φ(s_t, a_t, s_{t+1})^T w_i.   (3)

We refer to such reward functions as linear reward functions. Since the various tasks differ only in their reward functions, the features are the same for all tasks in M.
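The tabular Q-learning update of Eq. (2) amounts to a one-line temporal-difference step. The two-state toy chain below is an illustrative assumption, not an environment from the paper:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step of Eq. (2) on a tabular Q-function."""
    td_target = r + gamma * Q[s_next].max()   # r_t + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (td_target - Q[s, a])  # move Q(s, a) toward the target
    return Q

# toy chain: taking action 1 in state 0 reaches state 1 with reward 1
Q = np.zeros((2, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
```

Since Q was initialized to zero, the single update moves Q[0, 1] to alpha * r = 0.1; repeated sweeps under a suitable learning-rate schedule converge to Q* (Watkins and Dayan, 1992).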
Given the decomposition above, it is also possible to rewrite the Q-function as an expected discounted sum over future features, $\psi^{\pi_i}(s, a)$, multiplied by the reward weight vector $w_i$:

$$Q^{\pi_i}_i(s, a) = \mathbb{E}\{r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots\} = \mathbb{E}\{\phi_t^\top w_i + \gamma \phi_{t+1}^\top w_i + \gamma^2 \phi_{t+2}^\top w_i + \ldots\} = \mathbb{E}\left\{\sum_{k=0}^{\infty} \gamma^k \phi_{t+k}\right\}^{\!\top} w_i \equiv \psi^{\pi_i}(s, a)^\top w_i. \quad (4)$$

This decouples the dynamics of the policy $\pi_i$ in the feature space of the MDP from the expected rewards for such features. It is thus possible to evaluate the policy $\pi_i$ in a different task $M_j$ by a simple multiplication of the weight vector $w_j$ with the ψ-function: $Q^{\pi_i}_j(s, a) = \psi^{\pi_i}(s, a)^\top w_j$. Interestingly, the ψ-function also follows the Bellman equation:

$$\psi^\pi(s, a) = \mathbb{E}\{\phi_{t+1} + \gamma \psi^\pi(s_{t+1}, \pi(s_{t+1})) \mid s_t, a_t\}, \quad (5)$$

and can therefore be learned with conventional RL methods. Moreover, Lehnert and Littman (2019) showed the equivalence of SF-learning to Q-learning. In a new task $M_j$, Generalized Policy Improvement (GPI) can be used to select, over all policies learned so far, the action that behaves best:

$$\pi(s) \in \arg\max_a \max_i Q^{\pi_i}_j(s, a) = \arg\max_a \max_i \psi^{\pi_i}(s, a)^\top w_j. \quad (6)$$

Barreto et al. (2018) proved that, under the appropriate conditions on the optimal policy approximations, the policy constructed in (6) is close to the optimal one, and their difference is upper-bounded:

$$\|Q^* - Q^\pi\|_\infty \le \frac{2}{1 - \gamma} \left( \|r - r_i\|_\infty + \min_j \|r_i - r_j\|_\infty + \epsilon \right), \quad (7)$$

where $\|f - g\|_\infty = \max_{s,a} |f(s, a) - g(s, a)|$. For an arbitrary reward function $r$, the result can be interpreted in the following manner. Given the arbitrary task $M$, we identify the theoretically closest possible linear-reward task $M_i$ with reward $r_i$. For this theoretically closest task, we search for the linear task $M_j$ in our set of tasks $\mathcal{M}$ (from which we also construct the GPI policy (6)) that is closest to it.
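The SF re-evaluation $Q^{\pi_i}_j(s,a) = \psi^{\pi_i}(s,a)^\top w_j$ and the GPI action selection (6) reduce to a dot product and two maximizations. The sketch below is illustrative only: the ψ tables and weight vector are invented numbers, assuming two stored policies, one state, two actions, and two features.

```python
import numpy as np

# psi[i, s, a, :] holds the successor features psi^{pi_i}(s, a) of policy pi_i.
psi = np.array([
    [[[1.0, 0.0], [0.5, 0.5]]],   # psi^{pi_0}
    [[[0.0, 1.0], [0.2, 0.8]]],   # psi^{pi_1}
])                                 # shape: (policies, states, actions, features)

w_new = np.array([0.0, 1.0])       # reward weights w_j of the new task

Q = psi @ w_new                    # Q^{pi_i}_j(s, a) = psi^{pi_i}(s, a) . w_j
# GPI (6): for state 0, take argmax over actions of the max over policies.
gpi_action = int(np.argmax(Q.max(axis=0)[0]))
print(gpi_action)
```

Note that no policy is re-trained: only the final contraction with $w_j$ changes between tasks, which is exactly what makes the linear SF transfer cheap.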
The upper bound between $Q^*$ and $Q^\pi$ is then defined by 1) the difference between the task $M$ and the theoretically closest possible linear task $M_i$: $\|r - r_i\|_\infty$; and 2) the difference between the theoretical task $M_i$ and the closest task $M_j$: $\min_j \|r_i - r_j\|_\infty$. If the new task $M$ is itself linear, then $r = r_i$ and the first term in (7) vanishes. Very importantly, this result shows that the SF framework will only provide a good approximation of the true Q-function if the reward function of a task can be represented by a linear decomposition. If this is not the case, the error of the approximation increases with the distance between the true reward function $r$ and its best linear approximation $r_i$, as stated by $\|r - r_i\|_\infty$.

3 METHOD: ξ-LEARNING

3.1 DEFINITION AND FOUNDATIONS OF ξ-LEARNING

The goal of this paper is to investigate the application of SF & GPI to tasks with general reward functions $R: \Phi \mapsto \mathbb{R}$ over state features $\phi \in \Phi$:

$$r(s_t, a_t, s_{t+1}) \equiv R(\phi(s_t, a_t, s_{t+1})) = R(\phi_t), \quad (8)$$

where we define $\phi_t \equiv \phi(s_t, a_t, s_{t+1})$. Under this assumption, the Q-function cannot be linearly decomposed into a part that describes feature dynamics and one that describes the rewards, as in the linear SF framework (4). To overcome this issue, we propose to define the expected cumulative discounted probability of successor features, or ξ-function, which is going to be the central mathematical object of the paper:

$$\xi^\pi(s, a, \phi) = \sum_{k=0}^{\infty} \gamma^k \, p(\phi_{t+k} = \phi \mid s_t = s, a_t = a; \pi), \quad (9)$$

where $p(\phi_{t+k} = \phi \mid s_t = s, a_t = a; \pi)$, or in short $p(\phi_{t+k} = \phi \mid s_t, a_t; \pi)$, is the probability density function of the features at time $t + k$, following policy $\pi$ and conditioned on $s$ and $a$ being the state and action at time $t$, respectively. Note that $\xi^\pi$ depends not only on the policy $\pi$ but also on the state transition probabilities (constant throughout the paper).
With the definition of the ξ-function, the Q-function rewrites as (this is compatible with SFQL in the linear reward case, see Appendix A.6):

$$Q^\pi(s_t, a_t) = \sum_{k=0}^{\infty} \gamma^k \, \mathbb{E}_{p(\phi_{t+k}\mid s_t, a_t; \pi)}\{R(\phi_{t+k})\} = \sum_{k=0}^{\infty} \gamma^k \int_\Phi p(\phi_{t+k} = \phi \mid s_t, a_t; \pi)\, R(\phi) \, d\phi = \int_\Phi R(\phi) \sum_{k=0}^{\infty} \gamma^k p(\phi_{t+k} = \phi \mid s_t, a_t; \pi) \, d\phi = \int_\Phi R(\phi) \, \xi^\pi(s_t, a_t, \phi) \, d\phi. \quad (10)$$

Depending on the reward function $R$, several ξ-functions may correspond to the same Q-function. Formally, this is an equivalence relationship, and the quotient space has a one-to-one correspondence with the Q-function space.

Proposition 1. (Equivalence between the functions ξ and Q) Let $\mathcal{Q} = \{Q: S \times A \to \mathbb{R} \text{ s.t. } \|Q\|_\infty < \infty\}$. Let $\sim$ be defined as $\xi_1 \sim \xi_2 \Leftrightarrow \int_\Phi R\,\xi_1 = \int_\Phi R\,\xi_2$. Then $\sim$ is an equivalence relationship, and there is a bijective correspondence between the quotient space $\Xi_\sim$ and $\mathcal{Q}$.

Corollary 1. The bijection between $\Xi_\sim$ and $\mathcal{Q}$ allows us to induce a norm $\|\cdot\|_\sim$ on $\Xi_\sim$ from the supremum norm on $\mathcal{Q}$, with which $\Xi_\sim$ is a Banach space (since $\mathcal{Q}$ is Banach with $\|\cdot\|_\infty$):

$$\|\xi\|_\sim = \sup_{s,a} \left| \int_\Phi R(\phi) \, \xi(s, a, \phi) \, d\phi \right| = \sup_{s,a} |Q(s, a)| = \|Q\|_\infty. \quad (11)$$

Similar to the Bellman equation for the Q-function, we can define a Bellman operator for the ξ-function, denoted by $T_\xi$:

$$T_\xi(\xi^\pi) = p(\phi_t = \phi \mid s_t, a_t) + \gamma \, \mathbb{E}_{p(s_{t+1}, a_{t+1}\mid s_t, a_t; \pi)}\{\xi^\pi(s_{t+1}, a_{t+1}, \phi)\}. \quad (12)$$

As in the case of the Q-function, we can use $T_\xi$ to construct a contractive operator:

Proposition 2. (ξ-learning has a fixed point) The operator $T_\xi$ is well-defined w.r.t. the equivalence $\sim$, and therefore induces an operator $T_\sim$ defined over $\Xi_\sim$. $T_\sim$ is contractive w.r.t. $\|\cdot\|_\sim$. Since $\Xi_\sim$ is Banach, $T_\sim$ has a unique fixed point, and iterating $T_\sim$ starting from anywhere converges to that point.
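For discrete features, the integral in (10) becomes a sum, $Q(s,a) = \sum_\phi R(\phi)\,\xi(s,a,\phi)$, which can be checked numerically on a toy rollout. The sketch below is an assumption-laden illustration: a single deterministic feature sequence stands in for the feature distribution, so each $p(\phi_{t+k} = \phi)$ is 0 or 1.

```python
import numpy as np

gamma = 0.9
features = [0, 1, 0]           # observed phi_t, phi_{t+1}, phi_{t+2} (made up)
R = np.array([1.0, 2.0])       # general reward table R(phi) over 2 features

# Build xi(s, a, phi) for the single start (s, a): accumulate gamma^k per
# observed feature, i.e. gamma^k * p(phi_{t+k} = phi) with p in {0, 1}.
xi = np.zeros(2)
for k, phi in enumerate(features):
    xi[phi] += gamma ** k

q_via_xi = float(R @ xi)                                   # (10), discrete form
q_direct = sum(gamma ** k * R[phi] for k, phi in enumerate(features))
print(q_via_xi, q_direct)      # both equal 1 + 0.9*2 + 0.81*1 = 3.61
```

The agreement of the two quantities is exactly the interchange of sum and expectation performed in (10).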
In other words, successive applications of the operator $T_\sim$ converge towards the class of optimal ξ-functions $[\xi^*]$, or equivalently to an optimal ξ-function defined up to an additive function $k$ satisfying $\int_\Phi k(s, a, \phi) R(\phi) \, d\phi = 0, \; \forall (s, a) \in S \times A$ (i.e., $k \in \mathrm{Ker}(\xi \to \int_\Phi R\,\xi)$). While these two results establish the theoretical links to standard Q-learning formulations (see Appendix A for the proofs), the $T_\xi$ operator defined in (12) is not usable in practice because of the expectation. In the next section, we define the optimisation iterate, prove its convergence, and provide two variants to perform the ξ-updates.

3.2 ξ-LEARNING ALGORITHMS

In order to learn the ξ-function, we introduce the ξ-learning update operator, an off-policy temporal difference method analogous to Q-learning. Given a transition $(s_t, a_t, s_{t+1}, \phi_t)$, the ξ-learning update operator is defined as:

$$\xi^\pi_{k+1}(s_t, a_t, \phi) \leftarrow \xi^\pi_k(s_t, a_t, \phi) + \alpha_k \left[ p(\phi_t = \phi \mid s_t, a_t) + \gamma \, \xi^\pi_k(s_{t+1}, \bar{a}_{t+1}, \phi) - \xi^\pi_k(s_t, a_t, \phi) \right], \quad (13)$$

where $\bar{a}_{t+1} = \arg\max_a \int_\Phi R(\phi) \, \xi^\pi(s_{t+1}, a, \phi) \, d\phi$. The following is one of the main results of the manuscript, stating the convergence of ξ-learning:

Theorem 1. (Convergence of ξ-learning) For a sequence of state-action-feature transitions $\{s_t, a_t, s_{t+1}, \phi_t\}_{t=0}^\infty$, consider the ξ-learning update given in (13). If the sequence visits each state-action pair infinitely often, and if the learning rate $\alpha_k$ is an adapted sequence satisfying the Robbins-Monro conditions:

$$\sum_{k=1}^{\infty} \alpha_k = \infty, \qquad \sum_{k=1}^{\infty} \alpha_k^2 < \infty, \quad (14)$$

then the sequence of function classes corresponding to the iterates converges to the optimum, which corresponds to the optimal Q-function to which standard Q-learning updates would converge:

$$[\xi_n] \to [\xi^*] \quad \text{with} \quad Q^*(s, a) = \int_\Phi R(\phi) \, \xi^*(s, a, \phi) \, d\phi. \quad (15)$$

The proof is provided in Appendix A and follows the same flow as for Q-learning.
The previous theorem provides convergence guarantees under the assumption that either $p(\phi_t = \phi \mid s_t, a_t; \pi)$ is known or an unbiased estimate of it can be constructed. We propose two different ways to approximate $p(\phi_t = \phi \mid s_t, a_t; \pi)$ from a given transition $(s_t, a_t, s_{t+1}, \phi_t)$ so as to perform the ξ-update (13). The first instance is a model-free version, detailed in the following section. A second instance uses a one-step SF model, called One-Step Model-Based (MB) ξ-learning, which is further described in Sec. B.

Model-free (MF) ξ-learning: MF ξ-learning uses the same principle as standard model-free temporal difference learning methods. For a given transition $(s_t, a_t, s_{t+1}, \phi_t)$, the update assumes that the probability of the observed feature is $p(\phi = \phi_t \mid s_t, a_t) = 1$, whereas for all other features ($\forall \phi' \in \Phi, \phi' \neq \phi_t$) the probability is $p(\phi' = \phi_t \mid s_t, a_t) = 0$ (see Appendix D for continuous features). The resulting updates are:

$$\phi = \phi_t: \quad \xi^\pi(s_t, a_t, \phi) \leftarrow (1 - \alpha)\, \xi^\pi(s_t, a_t, \phi) + \alpha \left( 1 + \gamma \, \xi^\pi(s_{t+1}, \bar{a}_{t+1}, \phi) \right)$$
$$\phi' \neq \phi_t: \quad \xi^\pi(s_t, a_t, \phi') \leftarrow (1 - \alpha)\, \xi^\pi(s_t, a_t, \phi') + \alpha \gamma \, \xi^\pi(s_{t+1}, \bar{a}_{t+1}, \phi'). \quad (16)$$

Due to the stochastic update of the ξ-function, and provided the learning rate $\alpha \in (0, 1]$ decays over time, the ξ-update learns the true probability $p(\phi = \phi_t \mid s_t, a_t)$. A potential problem with the MF procedure is that it might induce a high variance when the true feature probabilities are not binary.

3.3 META ξ-LEARNING

After discussing ξ-learning on a single task and showing its theoretical convergence, we can now investigate how it can be applied to transfer learning. Similar to the linear SF framework, the ξ-function allows us to re-evaluate a policy learned for task $M_i$, via $\xi^{\pi_i}$, in a new environment $M_j$:

$$Q^{\pi_i}_j(s, a) = \int_\Phi R_j(\phi) \, \xi^{\pi_i}(s, a, \phi) \, d\phi.$$
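The two MF branches of (16) can be applied in one vectorized step for discrete features. The sketch below is a hedged illustration, not the authors' code: the table shapes, the toy transition, and all numbers are assumptions made for the example.

```python
import numpy as np

# One MF xi-update (16): the observed feature phi_t gets target
# 1 + gamma * xi(s', a_bar, phi); every other feature gets gamma * xi(...).
# a_bar maximises sum_phi R(phi) * xi(s', a, phi), as in (13).
def mf_xi_update(xi, R, s, a, s_next, phi_t, alpha, gamma):
    a_bar = int(np.argmax(xi[s_next] @ R))      # greedy next action
    target = gamma * xi[s_next, a_bar]          # phi' != phi_t branch
    target = target.copy()
    target[phi_t] += 1.0                        # phi = phi_t branch adds the 1
    xi[s, a] = (1 - alpha) * xi[s, a] + alpha * target
    return xi

n_states, n_actions, n_feat = 2, 2, 2
xi = np.zeros((n_states, n_actions, n_feat))    # xi(s, a, phi) table
R = np.array([0.0, 1.0])                        # toy reward table over features
xi = mf_xi_update(xi, R, s=0, a=0, s_next=1, phi_t=1, alpha=0.5, gamma=0.9)
print(xi[0, 0])  # [0.0, 0.5]
```

Averaging this binary target over many visits of $(s_t, a_t)$ is what lets the table recover non-binary feature probabilities, at the cost of the variance noted above.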
(17)

This allows us to apply GPI (6) to arbitrary reward functions, in a manner similar to what was proposed for linear reward functions by Barreto et al. (2018). We extend the GPI result to the ξ-learning framework as follows:

Theorem 2. (Generalised policy improvement in ξ-learning) Let $\mathcal{M}$ be the set of tasks, each one associated with a (possibly different) weighting function $R_i \in L_1(\Phi)$. Let $\xi^{\pi^*_i}$ be a representative of the optimal class of ξ-functions for task $M_i$, $i \in \{1, \ldots, I\}$, and let $\tilde{\xi}^{\pi_i}$ be an approximation of the optimal ξ-function, $\|\xi^{\pi^*_i} - \tilde{\xi}^{\pi_i}\|_{R_i} \le \varepsilon, \; \forall i$. Then, for another task $M$ with weighting function $R$, the policy defined as:

$$\pi(s) = \arg\max_a \max_i \int_\Phi R(\phi) \, \tilde{\xi}^{\pi_i}(s, a, \phi) \, d\phi \quad (18)$$

satisfies:

$$\|\xi^* - \xi^\pi\|_R \le \frac{2}{1 - \gamma} \left( \min_i \|R - R_i\|_{p(\phi|s,a)} + \varepsilon \right), \quad (19)$$

where $\|f\|_g = \sup_{s,a} \int_\Phi |f \cdot g| \, d\phi$. The proof is provided in Appendix A. | This paper proposes a new successor feature learning algorithm, called ξ-learning. Previous SF work assumes the reward function is a linear composition of SF and reward weights; this paper extends SF to a setting with a general reward function over features. Based on this, ξ-learning estimates the cumulative discounted probability of SF and provides two update operators, model-free ξ-learning and model-based ξ-learning. The convergence of ξ-learning is proved, and empirical results on environments with discrete and continuous state spaces show it outperforms previous SF methods. | SP:ea0dace09781799c8261d6fe588bf6b858c950ff |
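The GPI rule (18) over stored ξ-functions differs from the linear case (6) only in how each policy is scored: the contraction is with an arbitrary reward table over features rather than a weight vector. The sketch below is illustrative only, assuming discrete features, one state, two actions, and two invented stored ξ tables.

```python
import numpy as np

# xi[i, s, a, phi]: stored xi^{pi_i} tables for two previous policies.
xi = np.array([
    [[[2.0, 0.5], [1.0, 1.0]]],     # xi^{pi_0}
    [[[0.2, 2.0], [0.5, 1.5]]],     # xi^{pi_1}
])                                   # shape: (policies, states, actions, features)

R_new = np.array([0.0, 1.0])         # new reward R(phi); need not be linear in phi

# (18): Q^{pi_i}(s, a) = sum_phi R(phi) * xi^{pi_i}(s, a, phi),
# then act greedily over policies and actions for state 0.
Q = xi @ R_new
action = int(np.argmax(Q.max(axis=0)[0]))
print(action)
```

Because $R$ enters only through this final contraction, a new task requires no relearning of the stored ξ tables, mirroring the linear SF transfer but without the linear-reward restriction.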
Xi-learning: Successor Feature Transfer Learning for General Reward Functions | (paper text identical to the row above) | The authors extend the linear successor feature (SF) formulation to a general non-linear formulation. The new method allows the reward to be an arbitrary composition of features. The proposed method is based on learning the cumulative discounted probability of SF rather than the cumulative discounted sum of SF as in the linear SF framework. The authors provide theoretical proofs about convergence and propose two practical methods. Experiments are conducted in two toy environments. | SP:ea0dace09781799c8261d6fe588bf6b858c950ff |
Xi-learning: Successor Feature Transfer Learning for General Reward Functions | 1 INTRODUCTION . Reinforcement Learning ( RL ) successfully addressed many complex problems such as playing computer games , chess , and even Go with superhuman performance ( Mnih et al. , 2015 ; Silver et al. , 2018 ) . These impressive results are possible thanks to a vast amount of interactions of the RL agent with its environment/task . Such strategy is unsuitable in settings where the agent has to perform and learn at the same time . Consider , for example , a care giver robot in a hospital that has to learn a new task , such as a new route to deliver meals . In such a setting , the agent can not collect a vast amount of training samples but has to adapt quickly instead . Transfer learning aims to provide mechanisms quickly to adapt agents in such settings ( Taylor and Stone , 2009 ; Lazaric , 2012 ; Zhu et al. , 2020 ) . The rationale is to use knowledge from previously encountered source tasks for a new target task to improve the learning performance on the target task . The previous knowledge can help reducing the amount of interactions required to learn the new optimal behavior . For example , the care giver robot could reuse knowledge about the layout of the hospital it learned in previous source tasks ( e.g . guiding a person ) to learn to deliver meals . The Successor Feature ( SF ) and General Policy Improvement ( GPI ) framework ( Barreto et al. , 2020 ) is a prominent transfer learning mechanism for tasks where only the reward function differs . Its basic premise is that the rewards which the RL agent tries to maximize are defined based on a lowdimensional feature descriptor φ ∈ Rn . For our care-giver robot this could be ID ’ s of beds or rooms that it is visiting , in difference to its high-dimensional visual state input from a camera . The rewards are then computed not based on its visual input but on the ID ’ s of the beds or rooms that it visits . 
The expected cumulative discounted successor features ( ψ ) are learned for each behavior that the robot learned in the past . It represents the dynamics in the feature space that the agent experiences for a behavior . This corresponds to the rooms or beds the care-giver agent would visit if using the behavior . This representation of feature dynamics is independent from the reward function . A behavior learned in a previous task and described by this SF representation can be directly re-evaluated for a different reward function . In a new task , i.e . for a new reward function , the GPI procedure re-evaluates the behaviors learned in previous tasks for it . It then selects at each state the behavior of a previous task if it improves the expected reward . This allows to reuse behaviors learned in previous source tasks Source code at https : //tinyurl.com/3xuzxff3 for a new target task . A similar transfer strategy can also be observed in the behavior of humans ( Momennejad et al. , 2017 ; Momennejad , 2020 ; Tomov et al. , 2021 ) . The classical SF & GPI framework ( Barreto et al. , 2017 ; 2018 ) makes the assumption that rewards r are a linear composition of the features φ ∈ Rn via a reward weight vector wi ∈ Rn that depends on the task i : ri = φ > wi . This assumption allows to effectively separate the feature dynamics of a behavior from the rewards and thus to re-evaluate previous behaviors given a new reward function , i.e . a new weight vector wj . Nonetheless , this assumption also restricts successful application of SF & GPI only to problems where such a linear decomposition is possible . We investigate the application of the SF & GPI framework to general reward functions : ri = Ri ( φ ) over the feature space . We propose to learn the cumulative discounted probability over the successor features , named ξ-function , and refer to the proposed framework as ξ-learning . Our work is related to Janner et al . 
(2020); Touati and Ollivier (2021), and brings two important additional contributions. First, we provide a mathematical proof of the convergence of $\xi$-learning. Second, we demonstrate how $\xi$-learning can be used for meta-RL, using the $\xi$-function to re-evaluate behaviors learned in previous tasks for a new reward function $R_j$. Furthermore, $\xi$-learning can also be used to transfer knowledge to new tasks using GPI. The contribution of our paper is three-fold: • We introduce a new RL algorithm, $\xi$-learning, based on a cumulative discounted probability of successor features, and two variants of its update operator. • We provide theoretical proofs of the convergence of $\xi$-learning to the optimal policy and a guarantee of its transfer learning performance under the GPI procedure. • We experimentally compare $\xi$-learning to standard Q-learning and the classical SF framework on tasks with linear and general reward functions, and on tasks with discrete and continuous features, demonstrating the interest and advantage of $\xi$-learning. 2 BACKGROUND. 2.1 REINFORCEMENT LEARNING. RL investigates algorithms to solve multi-step decision problems, aiming to maximize the sum over future rewards (Sutton and Barto, 2018). RL problems are modeled as Markov Decision Processes (MDPs), defined as a tuple $M \equiv (S, A, p, R, \gamma)$, where $S$ and $A$ are the state and action sets. An agent transitions from a state $s_t$ to another state $s_{t+1}$ using action $a_t$ at time point $t$, collecting a reward $r_t$: $s_t \xrightarrow{a_t,\, r_t} s_{t+1}$. This process is stochastic, and the transition probability $p(s_{t+1} \,|\, s_t, a_t)$ describes which state $s_{t+1}$ is reached. The reward function $R$ defines the scalar reward $r_t = R(s_t, a_t, s_{t+1}) \in \mathbb{R}$ for the transition. The goal in an MDP is to maximize the expected return $G_t = \mathbb{E}\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k}\right]$, where $R_t = R(S_t, A_t, S_{t+1})$. The discount factor $\gamma \in [0, 1)$ weights collected rewards by discounting future rewards more strongly.
RL provides algorithms to learn a policy $\pi: S \to A$ defining which action to take in which state to maximize $G_t$. Value-based RL methods use the concept of value functions to learn the optimal policy. The state-action value function, called the Q-function, is defined as the expected future return when taking action $a_t$ in $s_t$ and then following policy $\pi$: $Q^\pi(s_t, a_t) = \mathbb{E}_\pi\{r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots\} = \mathbb{E}_\pi\{r_t + \gamma \max_{a_{t+1}} Q^\pi(S_{t+1}, a_{t+1})\}$. (1) The Q-function can be recursively defined following the Bellman equation, such that the current Q-value $Q^\pi(s_t, a_t)$ depends on the maximum Q-value of the next state. The optimal policy for an MDP can then be expressed based on the Q-function, by taking at every step the maximizing action: $\pi^*(s) \in \arg\max_a Q^*(s, a)$. The optimal Q-function can be learned using a temporal difference method such as Q-learning (Watkins and Dayan, 1992). Given a transition $(s_t, a_t, r_t, s_{t+1})$, the Q-value is updated according to: $Q_{k+1}(s_t, a_t) = Q_k(s_t, a_t) + \alpha_k\big(r_t + \gamma \max_{a_{t+1}} Q_k(s_{t+1}, a_{t+1}) - Q_k(s_t, a_t)\big)$, (2) where $\alpha_k \in (0, 1]$ is the learning rate at iteration $k$. 2.2 TRANSFER LEARNING AND THE SF & GPI FRAMEWORK. We are interested in the transfer learning setting where the agent has to solve a set of tasks $\mathcal{M} = \{M_1, M_2, \dots, M_m\}$ that, in our case, differ only in their reward functions. The Successor Feature (SF) framework provides a principled way to perform transfer learning (Barreto et al., 2017; 2018). SF assumes that the reward function can be decomposed into a linear combination of features $\phi \in \Phi \subset \mathbb{R}^n$ and a reward weight vector $w_i \in \mathbb{R}^n$ that is defined for a task $M_i$: $r_i(s_t, a_t, s_{t+1}) \equiv \phi(s_t, a_t, s_{t+1})^\top w_i$. (3) We refer to such reward functions as linear reward functions. Since the various tasks differ only in their reward functions, the features are the same for all tasks in $\mathcal{M}$.
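As a concrete illustration of the tabular Q-learning update (2), here is a minimal sketch in Python; the state and action indices and the hyperparameter values are arbitrary toy choices for this example, not from the paper:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy example: 2 states, 2 actions, Q initialized to zero.
Q = np.zeros((2, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.5 * (1.0 + 0.9 * 0 - 0) = 0.5
```

Repeating such updates over transitions that visit every state-action pair, with a Robbins-Monro step size, drives $Q_k$ towards $Q^*$.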
Given the decomposition above, it is also possible to rewrite the Q-function as an expected discounted sum over future features $\psi^{\pi_i}(s, a)$ times the reward weight vector $w_i$: $Q_i^{\pi_i}(s, a) = \mathbb{E}\{r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots\} = \mathbb{E}\{\phi_t^\top w_i + \gamma \phi_{t+1}^\top w_i + \gamma^2 \phi_{t+2}^\top w_i + \dots\} = \mathbb{E}\{\sum_{k=0}^{\infty} \gamma^k \phi_{t+k}\}^\top w_i \equiv \psi^{\pi_i}(s, a)^\top w_i$. (4) This decouples the dynamics of the policy $\pi_i$ in the feature space of the MDP from the expected rewards for such features. Thus, it is now possible to evaluate the policy $\pi_i$ in a different task $M_j$ using a simple multiplication of the weight vector $w_j$ with the $\psi$-function: $Q_j^{\pi_i}(s, a) = \psi^{\pi_i}(s, a)^\top w_j$. Interestingly, the $\psi$-function also follows a Bellman equation: $\psi^\pi(s, a) = \mathbb{E}\{\phi_t + \gamma \psi^\pi(s_{t+1}, \pi(s_{t+1})) \,|\, s_t, a_t\}$, (5) and can therefore be learned with conventional RL methods. Moreover, Lehnert and Littman (2019) showed the equivalence of SF-learning to Q-learning. In a new task $M_j$, Generalized Policy Improvement (GPI) can be used to select, over all policies learned so far, the action that behaves best: $\pi(s) \in \arg\max_a \max_i Q_j^{\pi_i}(s, a) = \arg\max_a \max_i \psi^{\pi_i}(s, a)^\top w_j$. (6) Barreto et al. (2018) proved that, under appropriate conditions on the optimal policy approximations, the policy constructed in (6) is close to the optimal one, and their difference is upper-bounded by: $\|Q^* - Q^\pi\|_\infty \le \frac{2}{1-\gamma}\left(\|r - r_i\|_\infty + \min_j \|r_i - r_j\|_\infty + \epsilon\right)$, (7) where $\|f - g\|_\infty = \max_{s,a} |f(s, a) - g(s, a)|$. For an arbitrary reward function $r$, the result can be interpreted in the following manner. Given the arbitrary task $M$, we identify the theoretically closest possible linear reward task $M_i$ with reward $r_i$. For this theoretically closest task, we search for the linear task $M_j$ in our set of tasks $\mathcal{M}$ (from which we also construct the GPI policy (6)) which is closest to it.
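To make the re-evaluation $Q_j^{\pi_i}(s, a) = \psi^{\pi_i}(s, a)^\top w_j$ and the GPI action selection (6) concrete, here is a minimal sketch at a single fixed state; the $\psi$ values and weight vector are made-up illustrative numbers, not from the paper:

```python
import numpy as np

# Hypothetical setup: 2 previously learned policies, 3 actions, feature dim n = 2.
# psi[i][a] plays the role of psi^{pi_i}(s, a) at one fixed state s.
psi = np.array([
    [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]],   # policy pi_1
    [[0.2, 0.8], [0.9, 0.1], [0.4, 0.4]],   # policy pi_2
])
w_new = np.array([0.0, 1.0])  # new task: reward is the second feature

# Re-evaluate every old policy on the new task: Q_j^{pi_i}(s,a) = psi^{pi_i}(s,a)^T w_j.
Q = psi @ w_new                          # shape (num_policies, num_actions)
# GPI: act greedily w.r.t. the best previous policy at this state.
action = int(np.argmax(Q.max(axis=0)))
print(action)  # action 2: under pi_1 it yields feature (0, 1), value 1.0
```

Only the cheap inner products are recomputed for the new task; the $\psi$ tables themselves were learned once in the source tasks.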
The upper bound between $Q^*$ and $Q^\pi$ is then determined by 1) the difference between task $M$ and the theoretically closest possible linear task $M_i$: $\|r - r_i\|_\infty$; and 2) the difference between the theoretical task $M_i$ and the closest task $M_j$: $\min_j \|r_i - r_j\|_\infty$. If our new task $M$ is also linear, then $r = r_i$ and the first term in (7) vanishes. Very importantly, this result shows that the SF framework will only provide a good approximation of the true Q-function if the reward function of a task can be represented using a linear decomposition. If this is not the case, then the error in the approximation increases with the distance between the true reward function $r$ and its best linear approximation $r_i$, as stated by $\|r - r_i\|_\infty$. 3 METHOD: $\xi$-LEARNING. 3.1 DEFINITION AND FOUNDATIONS OF $\xi$-LEARNING. The goal of this paper is to investigate the application of SF & GPI to tasks with general reward functions $R: \Phi \mapsto \mathbb{R}$ over state features $\phi \in \Phi$: $r(s_t, a_t, s_{t+1}) \equiv R(\phi(s_t, a_t, s_{t+1})) = R(\phi_t)$, (8) where we define $\phi_t \equiv \phi(s_t, a_t, s_{t+1})$. Under this assumption, the Q-function cannot be linearly decomposed into a part that describes feature dynamics and one that describes the rewards as in the linear SF framework (4). To overcome this issue, we propose the expected cumulative discounted probability of successor features, or $\xi$-function, which is the central mathematical object of the paper: $\xi^\pi(s, a, \phi) = \sum_{k=0}^{\infty} \gamma^k p(\phi_{t+k} = \phi \,|\, s_t = s, a_t = a; \pi)$, (9) where $p(\phi_{t+k} = \phi \,|\, s_t = s, a_t = a; \pi)$, or in short $p(\phi_{t+k} = \phi \,|\, s_t, a_t; \pi)$, is the probability density function of the features at time $t+k$, following policy $\pi$ and conditioned on $s$ and $a$ being the state and action at time $t$, respectively. Note that $\xi^\pi$ depends not only on the policy $\pi$ but also on the state transition probabilities (constant throughout the paper).
With the definition of the $\xi$-function, the Q-function can be rewritten (this is compatible with SFQL in the linear reward case, see Appendix A.6): $Q^\pi(s_t, a_t) = \sum_{k=0}^{\infty} \gamma^k \mathbb{E}_{p(\phi_{t+k}|s_t, a_t; \pi)}\{R(\phi_{t+k})\} = \sum_{k=0}^{\infty} \gamma^k \int_\Phi p(\phi_{t+k} = \phi \,|\, s_t, a_t; \pi)\, R(\phi)\, d\phi = \int_\Phi R(\phi) \sum_{k=0}^{\infty} \gamma^k p(\phi_{t+k} = \phi \,|\, s_t, a_t; \pi)\, d\phi = \int_\Phi R(\phi)\, \xi^\pi(s_t, a_t, \phi)\, d\phi$. (10) Depending on the reward function $R$, several $\xi$-functions may correspond to the same Q-function. Formally, this is an equivalence relationship, and the quotient space has a one-to-one correspondence with the Q-function space. Proposition 1. (Equivalence between $\xi$- and Q-functions) Let $\mathcal{Q} = \{Q: S \times A \to \mathbb{R} \text{ s.t. } \|Q\|_\infty < \infty\}$. Let $\sim$ be defined by $\xi_1 \sim \xi_2 \Leftrightarrow \int_\Phi R\,\xi_1 = \int_\Phi R\,\xi_2$. Then $\sim$ is an equivalence relationship, and there is a bijective correspondence between the quotient space $\Xi_\sim$ and $\mathcal{Q}$. Corollary 1. The bijection between $\Xi_\sim$ and $\mathcal{Q}$ allows inducing a norm $\|\cdot\|_\sim$ on $\Xi_\sim$ from the supremum norm on $\mathcal{Q}$, with which $\Xi_\sim$ is a Banach space (since $\mathcal{Q}$ is Banach with $\|\cdot\|_\infty$): $\|\xi\|_\sim = \sup_{s,a} \left|\int_\Phi R(\phi)\, \xi(s, a, \phi)\, d\phi\right| = \sup_{s,a} |Q(s, a)| = \|Q\|_\infty$. (11) Similar to the Bellman equation for the Q-function, we can define a Bellman operator for the $\xi$-function, denoted by $T_\xi$: $T_\xi(\xi^\pi) = p(\phi_t = \phi \,|\, s_t, a_t) + \gamma\, \mathbb{E}_{p(s_{t+1}, a_{t+1} | s_t, a_t; \pi)}\{\xi^\pi(s_{t+1}, a_{t+1}, \phi)\}$. (12) As in the case of the Q-function, we can use $T_\xi$ to construct a contractive operator: Proposition 2. ($\xi$-learning has a fixed point) The operator $T_\xi$ is well-defined w.r.t. the equivalence $\sim$, and therefore induces an operator $T_\sim$ defined over $\Xi_\sim$. $T_\sim$ is contractive w.r.t. $\|\cdot\|_\sim$. Since $\Xi_\sim$ is Banach, $T_\sim$ has a unique fixed point, and iterating $T_\sim$ starting from anywhere converges to that point.
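For discrete features, the identity (10) reduces to a finite sum, $Q^\pi(s, a) = \sum_\phi R(\phi)\,\xi^\pi(s, a, \phi)$, and can be checked on a single sampled trajectory; the following is a hedged toy sketch where the trajectory and the reward values are made up:

```python
import numpy as np

gamma = 0.9
traj = [0, 1, 0]                 # discrete feature indices observed at t, t+1, t+2
R = np.array([0.5, 2.0])         # a (here tabular) general reward function R(phi)

# Single-trajectory estimate of xi: the discounted occupancy of each feature value.
xi = np.zeros(2)
for k, phi in enumerate(traj):
    xi[phi] += gamma ** k        # xi = [1 + 0.81, 0.9] = [1.81, 0.9]

# The Q-value via (10) equals the ordinary discounted return sum_k gamma^k R(phi_{t+k}).
q_via_xi = xi @ R
q_return = sum(gamma ** k * R[phi] for k, phi in enumerate(traj))
print(q_via_xi, q_return)  # both 0.5*1.81 + 2.0*0.9 = 2.705
```

The same $\xi$ vector can be paired with any other reward function $R'$ to re-evaluate the policy without new environment interactions, which is exactly the transfer mechanism the paper exploits.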
In other words, successive applications of the operator $T_\sim$ converge towards the class of optimal $\xi$-functions $[\xi^*]$, or equivalently to an optimal $\xi$-function defined up to an additive function $k$ satisfying $\int_\Phi k(s, a, \phi)\, R(\phi)\, d\phi = 0$ for all $(s, a) \in S \times A$ (i.e., $k \in \mathrm{Ker}(\xi \mapsto \int_\Phi R\,\xi)$). While these two results establish (see Appendix A for the proofs) the theoretical links to standard Q-learning formulations, the $T_\xi$ operator defined in (12) is not usable in practice because of the expectation. In the next section, we define the optimization iterate, prove its convergence, and provide two variants to perform the $\xi$ updates. 3.2 $\xi$-LEARNING ALGORITHMS. In order to learn the $\xi$-function, we introduce the $\xi$-learning update operator, an off-policy temporal difference method analogous to Q-learning. Given a transition $(s_t, a_t, s_{t+1}, \phi_t)$, the $\xi$-learning update operator is defined as: $\xi^\pi_{k+1}(s_t, a_t, \phi) \leftarrow \xi^\pi_k(s_t, a_t, \phi) + \alpha_k\left[p(\phi_t = \phi \,|\, s_t, a_t) + \gamma\, \xi^\pi_k(s_{t+1}, \bar{a}_{t+1}, \phi) - \xi^\pi_k(s_t, a_t, \phi)\right]$, (13) where $\bar{a}_{t+1} = \arg\max_a \int_\Phi R(\phi)\, \xi^\pi(s_{t+1}, a, \phi)\, d\phi$. The following is one of the main results of the manuscript, stating the convergence of $\xi$-learning: Theorem 1. (Convergence of $\xi$-learning) For a sequence of state-action-feature tuples $\{s_t, a_t, s_{t+1}, \phi_t\}_{t=0}^{\infty}$, consider the $\xi$-learning update given in (13). If the sequence visits each state-action pair infinitely often, and if the learning rate $\alpha_k$ is an adapted sequence satisfying the Robbins-Monro conditions: $\sum_{k=1}^{\infty} \alpha_k = \infty$, $\sum_{k=1}^{\infty} \alpha_k^2 < \infty$, (14) then the sequence of function classes corresponding to the iterates converges to the optimum, which corresponds to the optimal Q-function to which standard Q-learning updates would converge: $[\xi_n] \to [\xi^*]$ with $Q^*(s, a) = \int_\Phi R(\phi)\, \xi^*(s, a, \phi)\, d\phi$. (15) The proof is provided in Appendix A and follows the same flow as for Q-learning.
The previous theorem provides convergence guarantees under the assumption that either $p(\phi_t = \phi \,|\, s_t, a_t; \pi)$ is known or an unbiased estimate can be constructed. We propose two different ways to approximate $p(\phi_t = \phi \,|\, s_t, a_t; \pi)$ from a given transition $(s_t, a_t, s_{t+1}, \phi_t)$ so as to perform the $\xi$-update (13). The first instance is a model-free version, detailed in the following section. A second instance uses a one-step SF model, called One-Step Model-Based (MB) $\xi$-learning, which is further described in Sec. B. Model-Free (MF) $\xi$-Learning: MF $\xi$-learning uses the same principle as standard model-free temporal difference learning methods. For a given transition $(s_t, a_t, s_{t+1}, \phi_t)$, the update assumes that the probability of the observed feature is $p(\phi_t = \phi \,|\, s_t, a_t) = 1$ for $\phi = \phi_t$, whereas for all other features ($\forall \phi' \in \Phi$, $\phi' \neq \phi_t$) the probability is $p(\phi_t = \phi' \,|\, s_t, a_t) = 0$; see Appendix D for continuous features. The resulting updates are: for $\phi = \phi_t$: $\xi^\pi(s_t, a_t, \phi) \leftarrow (1 - \alpha)\, \xi^\pi(s_t, a_t, \phi) + \alpha\left(1 + \gamma\, \xi^\pi(s_{t+1}, \bar{a}_{t+1}, \phi)\right)$; for $\phi' \neq \phi_t$: $\xi^\pi(s_t, a_t, \phi') \leftarrow (1 - \alpha)\, \xi^\pi(s_t, a_t, \phi') + \alpha\gamma\, \xi^\pi(s_{t+1}, \bar{a}_{t+1}, \phi')$. (16) Due to the stochastic update of the $\xi$-function, and provided the learning rate $\alpha \in (0, 1]$ decays over time, the $\xi$-update will learn the true probability $p(\phi_t = \phi \,|\, s_t, a_t)$. A potential problem with the MF procedure is that it might induce a high variance when the true feature probabilities are not binary. 3.3 META $\xi$-LEARNING. After discussing $\xi$-learning on a single task and showing its theoretical convergence, we can now investigate how it can be applied in transfer learning. Similar to the linear SF framework, the $\xi$-function allows re-evaluating a policy learned for task $M_i$, via $\xi^{\pi_i}$, in a new task $M_j$: $Q_j^{\pi_i}(s, a) = \int_\Phi R_j(\phi)\, \xi^{\pi_i}(s, a, \phi)\, d\phi$.
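A minimal tabular sketch of the model-free update (16) for discrete features; the environment sizes, the observed transition, and the reward vector are illustrative assumptions, not from the paper:

```python
import numpy as np

def mf_xi_update(xi, s, a, s_next, phi_idx, R, alpha=0.5, gamma=0.9):
    """Model-free xi-learning update (16) for discrete features.
    xi has shape (S, A, n_features); R maps feature index -> reward R(phi)."""
    # Greedy next action a_bar under the current xi and reward function R.
    q_next = xi[s_next] @ R                # shape (A,)
    a_bar = int(np.argmax(q_next))
    target = gamma * xi[s_next, a_bar]     # gamma * xi(s', a_bar, .) for every phi
    target[phi_idx] += 1.0                 # the observed feature gets probability 1
    xi[s, a] = (1 - alpha) * xi[s, a] + alpha * target
    return xi

xi = np.zeros((2, 2, 2))                   # 2 states, 2 actions, 2 feature values
R = np.array([0.0, 1.0])
xi = mf_xi_update(xi, s=0, a=0, s_next=1, phi_idx=1, R=R)
print(xi[0, 0])  # [0.0, 0.5]: only the observed feature's entry moves towards 1
```

Note how a single vectorized update touches every feature value at once: the observed feature is pulled towards $1 + \gamma\,\xi(s_{t+1}, \bar{a}_{t+1}, \phi)$, while all others are pulled towards $\gamma\,\xi(s_{t+1}, \bar{a}_{t+1}, \phi')$.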
(17) This allows us to apply GPI as in (6) for arbitrary reward functions, in a similar manner to what was proposed for linear reward functions in Barreto et al. (2018). We extend the GPI result to the $\xi$-learning framework as follows: Theorem 2. (Generalized policy improvement in $\xi$-learning) Let $\mathcal{M}$ be the set of tasks, each one associated with a (possibly different) weighting function $R_i \in L^1(\Phi)$. Let $\xi^{\pi_i^*}$ be a representative of the optimal class of $\xi$-functions for task $M_i$, $i \in \{1, \dots, I\}$, and let $\tilde{\xi}^{\pi_i}$ be an approximation to the optimal $\xi$-function, with $\|\xi^{\pi_i^*} - \tilde{\xi}^{\pi_i}\|_{R_i} \le \varepsilon$ for all $i$. Then, for another task $M$ with weighting function $R$, the policy defined as: $\pi(s) = \arg\max_a \max_i \int_\Phi R(\phi)\, \tilde{\xi}^{\pi_i}(s, a, \phi)\, d\phi$ (18) satisfies: $\|\xi^* - \xi^\pi\|_R \le \frac{2}{1 - \gamma}\left(\min_i \|R - R_i\|_{p(\phi|s,a)} + \varepsilon\right)$, (19) where $\|f\|_g = \sup_{s,a} \int_\Phi |f \cdot g|\, d\phi$. The proof is provided in Appendix A. | This paper addresses a limitation of successor features (SF), which were introduced as a mechanism for transfer learning in reinforcement learning when the reward function changes across tasks. The authors note that the original SF framework only provides a good approximation of the true Q-function if the reward function of a task can be represented by a linear decomposition, and they address this requirement by introducing a novel SF mechanism, Xi-learning, based on learning a cumulative discounted probability of successor features. The paper includes theoretical proofs of the convergence of Xi-learning as well as transfer learning guarantees under generalized policy improvement (GPI). The authors compare the performance of their method against standard Q-learning and SFQL in two different domains. | SP:ea0dace09781799c8261d6fe588bf6b858c950ff |
Understanding AdamW through Proximal Methods and Scale-Freeness | 1 INTRODUCTION. Recent years have seen a surge of interest in applying deep neural networks (LeCun et al., 2015) to a myriad of areas. While Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) remains the dominant method for optimizing such models, its performance depends crucially on the step size hyperparameter, which controls how far the algorithm proceeds along the negative stochastic gradient in each step to update the model parameters. To alleviate this problem, a fruitful line of research on adaptive gradient methods has been developed (e.g., Duchi et al., 2010a; McMahan & Streeter, 2010; Tieleman & Hinton, 2012; Zeiler, 2012; Luo et al., 2018; Zhou et al., 2018). These methods provide mechanisms to automatically set step sizes and have been shown to greatly reduce the tuning effort while maintaining good performance. Among those adaptive algorithms, one of the most widely used is Adam (Kingma & Ba, 2015), which achieves good results across a variety of problems even when simply adopting the default hyperparameter setting. In practice, to improve the generalization ability, Adam is typically combined with an ℓ2 regularization which adds the squared ℓ2 norm of the model weights on top of the loss function (which we will call Adam-ℓ2 hereafter). This technique is usually referred to as weight decay because, when using SGD, the ℓ2 regularization works by shrinking the model weights by a constant factor in addition to moving along the negative gradient direction in each step. However, as pointed out in Loshchilov & Hutter (2019), for Adam there is no fixed regularization that achieves this same effect. To address this, they propose a method called AdamW that decouples the gradient of the ℓ2 regularization from the update of Adam and directly decays the weights. The two algorithms are shown in Algorithm 1.
Although AdamW frequently outperforms Adam-ℓ2, the approach is primarily motivated empirically, without a clear understanding of why it works so well. Recently, however, Bjorck et al. (2020) applied AdamW to Natural Language Processing (NLP) and Reinforcement Learning problems and found no performance improvement over a sufficiently tuned Adam-ℓ2. Considering the huge popularity of AdamW (Kuen et al., 2019; Lifchitz et al., 2019; Carion et al., 2020), we investigate when and why AdamW yields a significant improvement over Adam-ℓ2. In this paper, we focus on understanding the training and testing dynamics of the AdamW update in contrast to Adam-ℓ2. We consider this contrast through the lens of optimization theory rather than directly investigating generalization over multiple epochs. First, we unveil a surprising connection between AdamW and proximal updates. In particular, we show that AdamW is an approximation of the latter, and we confirm this similarity with an empirical study. Noticing that AdamW and the proximal update are both scale-free while Adam-ℓ2 is not, we derive a theorem showing that scale-free optimizers enjoy an automatic acceleration with respect to the condition number in certain cases. This gives AdamW a concrete theoretical advantage over Adam-ℓ2 in training. Next, we empirically identify the scenario of training very deep neural networks with batch normalization (BN) switched off as a case in which AdamW substantially outperforms Adam-ℓ2 in both testing and training. Note that the setting of removing BN is not our invention: indeed, there is already active research in this direction (De & Smith, 2020; Zhang et al., 2019). The reason is that BN has many disadvantages (Brock et al., 2021), including added memory overhead (Bulò et al., 2018) and training time (Gitman & Ginsburg, 2017), and a discrepancy between training and inference (Singh & Shrivastava, 2019).
BN has also been found to be unsuitable in many cases, including distributed computing with a small minibatch per GPU (Wu & He, 2018; Goyal et al., 2017), sequential modeling tasks (Ba et al., 2016), and contrastive learning algorithms (Chen et al., 2020). Moreover, there are already SOTA architectures that do not use BN, including the Vision Transformer (Dosovitskiy et al., 2021) and the BERT model (Devlin et al., 2019). In such settings with BN removed, we observe that the magnitudes of the coordinates of the updates during training are much more concentrated about a fixed value for AdamW than for Adam-ℓ2, which is an expected property of scale-free algorithms. Further, as depth increases, we expect a greater diversity of gradient scalings, a scenario that should favor scale-free updates. Our experiments support this hypothesis: deeper networks exhibit more dramatic differences between the distributions of update scales of Adam-ℓ2 and AdamW, and larger accuracy advantages for AdamW. Specifically, the contributions of this paper are: 1. We show that AdamW can be seen as an approximation of a proximal update, which utilizes the closed-form proximal mapping of the regularizer instead of only its gradient. 2. We point out the scale-freeness property enjoyed by AdamW and show the advantage of such a property on concrete problem classes. 3. We identify a scenario in which AdamW is significantly better than Adam-ℓ2 in both training and testing performance, and report an empirical observation of the correlation between this advantage and the scale-freeness property of AdamW. The rest of this paper is organized as follows: in Section 2 we discuss the relevant literature. The connection between AdamW and proximal updates, as well as its scale-freeness, are explained in Section 3. We then report the empirical observations in Section 4.
Finally, we conclude with a discussion of the results and point out some potential future directions. 2 RELATED WORK. By enforcing the magnitude of the model weights to be small, weight decay has long been a standard technique for improving the generalization ability in machine learning (Krogh & Hertz, 1991; Bos & Chug, 1996) and is still widely employed in training modern deep neural networks (Devlin et al., 2019; Tan & Le, 2019). Here, we do not attempt to explain the generalization ability of AdamW. Rather, we assume that the regularization and the topology of the network guarantee good generalization performance, and we study the algorithms from the perspective of convergence rate. The use of proximal updates in the batch optimization literature dates back at least to 1965 (Moreau, 1965; Martinet, 1970; Rockafellar, 1976; Parikh & Boyd, 2014), and they have more recently been used even in the stochastic setting (Toulis & Airoldi, 2017; Asi & Duchi, 2019). We are not aware of any previous paper pointing out the connection between AdamW and proximal updates. The scale-free property was first proposed in the online learning field (Orabona & Pál, 2018), where it allows achieving the optimal rates without knowing a priori the Lipschitz constant bounding the gradient norms. To the best of our knowledge, scale-freeness has not been explored as an explanation for the efficiency of deep learning optimization algorithms. Algorithm 1: Adam with ℓ2 regularization (Adam-ℓ2) and Adam with decoupled weight decay (AdamW) (Loshchilov & Hutter, 2017). (Note that the two are exactly the same when λ = 0, i.e., no weight decay.) 1: Given α, β1, β2, ϵ, λ ∈ ℝ, {η_t}_{t≥0}. All operations on vectors are element-wise. 2: Initialize x_0 ∈ ℝ^d, first moment vector m_0 ← 0, second moment vector v_0 ← 0. 3: for t = 1, 2, …
, T do. 4: Compute the stochastic gradient ∇f_t(x_{t−1}) evaluated on a mini-batch of samples. 5: g_t ← ∇f_t(x_{t−1}) + λx_{t−1} (the λx_{t−1} term is used by Adam-ℓ2 only). 6: m_t ← β1 m_{t−1} + (1 − β1) g_t, v_t ← β2 v_{t−1} + (1 − β2) g_t². 7: m̂_t ← m_t/(1 − β1^t), v̂_t ← v_t/(1 − β2^t). 8: x_t ← x_{t−1} − η_t λ x_{t−1} (AdamW only) − η_t α m̂_t/(√v̂_t + ϵ). 9: end for. 3 THEORETICAL INSIGHTS ON THE MERITS OF ADAMW. AdamW and Proximal Updates: Here, we show that AdamW approximates a proximal update with squared ℓ2 regularization. This provides a first theoretical motivation for AdamW. Consider minimizing the objective function $F(x) = \frac{\lambda}{2}\|x\|_2^2 + f(x)$, (1) where $\lambda > 0$ and $f: \mathbb{R}^d \to \mathbb{R}$ is a function bounded from below. We could use a stochastic optimization algorithm that updates in the following fashion: $x_t = x_{t-1} - M_t p_t$, (2) where $M_t$ is a generic matrix containing the learning rates and $p_t$ denotes the update direction. Specifically, we consider $M_t = \eta_t I_d$, where $\eta_t$ is a learning rate schedule, e.g., a constant one or cosine annealing (Loshchilov & Hutter, 2017). This update covers many cases: 1. $p_t = g_t$ gives vanilla SGD; 2. $p_t = g_t/(\sqrt{\sum_{i=1}^{t} g_i^2} + \epsilon)$ gives the AdaGrad algorithm (Duchi et al., 2011); 3. $p_t = \alpha \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)$ recovers the Adam algorithm (see Algorithm 1). Note that above we use $g_t$ to denote the stochastic gradient of the entire objective function, $\nabla f_t(x_{t-1}) + \lambda x_{t-1}$ (if the regularizer is not present, $\lambda = 0$), with $\hat{m}_t$ and $\hat{v}_t$ both updated using $g_t$. The update rule (2) is given by the following online mirror descent update (Nemirovsky & Yudin, 1983; Warmuth & Jagota, 1997; Beck & Teboulle, 2003): $x_t = \arg\min_{x \in \mathbb{R}^d} \frac{\lambda}{2}\|x_{t-1}\|_2^2 + f(x_{t-1}) + p_t^\top(x - x_{t-1}) + \frac{1}{2}(x - x_{t-1})^\top M_t^{-1}(x - x_{t-1})$. This approximately minimizes a first-order Taylor approximation of $F$ centered at $x_{t-1}$ plus a term that measures the distance between $x_t$ and $x_{t-1}$ according to the matrix $M_t^{-1}$. The approximation becomes exact when $p_t = \nabla f(x_{t-1}) + \lambda x_{t-1}$.
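To make the difference between the two variants of Algorithm 1 explicit, here is a sketch of a single step of each; the hyperparameter values and the toy gradient are arbitrary choices for the demonstration, not from the paper's experiments:

```python
import numpy as np

def adam_step(x, grad, m, v, t, alpha=1e-3, eta=1.0, b1=0.9, b2=0.999,
              eps=1e-8, lam=0.1, decoupled=False):
    """One step of Adam-l2 (decoupled=False: lam*x enters the gradient, line 5)
    or AdamW (decoupled=True: the weights are decayed directly, line 8)."""
    g = grad if decoupled else grad + lam * x
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    x_new = x - eta * alpha * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        x_new = x_new - eta * lam * x   # decay uses the previous iterate x_{t-1}
    return x_new, m, v

x0, g0 = np.array([1.0]), np.array([1.0])
x_l2, _, _ = adam_step(x0, g0, m=0.0, v=0.0, t=1, decoupled=False)
x_w, _, _ = adam_step(x0, g0, m=0.0, v=0.0, t=1, decoupled=True)
print(x_l2, x_w)  # different iterates as soon as lam > 0
```

With λ = 0 the two branches coincide, matching the note under Algorithm 1.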
However, this is not the only way to construct first-order updates for the objective function (1). An alternative route is to linearize only $f$ and to keep the squared ℓ2 norm in its functional form: $x_t = \arg\min_{x \in \mathbb{R}^d} \frac{\lambda}{2}\|x\|_2^2 + f(x_{t-1}) + p_t^\top(x - x_{t-1}) + \frac{1}{2}(x - x_{t-1})^\top M_t^{-1}(x - x_{t-1})$. This update rule uses the proximal operator (Moreau, 1965; Parikh & Boyd, 2014) of $\frac{1}{2}\|\cdot\|_2^2$ with respect to the norm $\|\cdot\|_{M_t^{-1}}$. It is intuitive why this would be a better update: we directly minimize the squared ℓ2 norm instead of approximating it. From the first-order optimality condition, we have $x_t = (I_d + \lambda M_t)^{-1}(x_{t-1} - M_t p_t)$. (3) When $\lambda = 0$, this update and the one in (2) coincide; when $\lambda \neq 0$, they are no longer the same. We now show how the update in (3) generalizes the one in AdamW. The update of AdamW is $x_t = (1 - \lambda\eta_t)x_{t-1} - \eta_t \alpha \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)$. (4) On the other hand, using $M_t = \eta_t I_d$ and $p_t = \alpha \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)$ in (3) gives: $x_t = (1 + \lambda\eta_t)^{-1}\left(x_{t-1} - \eta_t \alpha \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)\right)$, (5) which we will call AdamProx hereafter. Its first-order Taylor approximation around $\eta_t = 0$ is $x_t \approx (1 - \lambda\eta_t)x_{t-1} - \eta_t \alpha \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)$, which is exactly the AdamW update (4) as in the original paper. The careful reader might notice that the approximation of AdamProx by AdamW becomes less accurate when $\eta_t$ becomes too large, and so be concerned about whether this approximation is practical at all. Fortunately, in practice, $\eta_t$ is never large enough for this to be an issue. Most practical learning rate schedules, e.g., cosine, exponential, polynomial, and step decay, all decrease from $\eta_0 = 1$, or some even smaller value. Thus, the remainder term of this approximation is $O(\lambda\eta_t^2) \le O(\lambda\eta_0^2)$, which we should always expect to be small as both $\lambda$ and $\eta_t$ are small.
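To see how tight this first-order approximation is for typical hyperparameters, one can compare the AdamProx update (5) with the AdamW update (4) on a single coordinate; the values below are chosen only for illustration:

```python
lam, eta = 0.01, 0.5    # typical small weight decay and learning rate schedule value
x, step = 1.0, 0.001    # current weight and the Adam direction eta*alpha*m_hat/(sqrt(v_hat)+eps)

x_prox = (x - step) / (1 + lam * eta)   # AdamProx update (5)
x_adamw = (1 - lam * eta) * x - step    # AdamW update (4), its first-order Taylor approx.
gap = abs(x_prox - x_adamw)
print(gap)  # roughly 3e-5, on the order of (lam*eta)^2 = 2.5e-5
```

The gap shrinks quadratically in λη_t, which is why the two algorithms track each other so closely over a whole training run with a decaying schedule.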
Consequently, we can expect AdamW and AdamProx to perform similarly for learning rate schedules $\eta_t$ commonly employed in practice, and we will indeed confirm this empirically in Section 4.3. While perhaps not widely known, the theory of proximal methods is a deep and beautiful branch of optimization (see Parikh & Boyd (2014) for a survey). On the other hand, although AdamW is a very popular practical algorithm (e.g., for training BERT (Devlin et al., 2019) and vision-transformer-based architectures (Dosovitskiy et al., 2021)), it is still unclear how it works so well. So, this connection opens the door to new ways of designing optimization algorithms. For example, the proximal update gives an immediate theoretical advantage: the convergence rate, at least in the convex case, will depend on $\|\nabla f(x_t)\|^2_{M_t}$ rather than on $\|\nabla f(x_t) + \lambda x_t\|^2_{M_t}$ (Duchi et al., 2010b). AdamW is Scale-Free: An optimization algorithm is said to be scale-free if its iterates do not change when one multiplies any coordinate of all the gradients by a positive constant (Orabona & Pál, 2018). It turns out that the update (4) of AdamW and the update (5) of AdamProx are both scale-free when $\epsilon = 0$. This is evident for AdamW, since the scaling factor of any coordinate of the gradient is kept in both $\hat{m}_t$ and $\sqrt{\hat{v}_t}$ and cancels out when dividing them. In contrast, for Adam-ℓ2, the addition of the weight decay vector to the gradient (see line 5 of Algorithm 1) destroys this property. Note that in practical applications $\epsilon$ is very small but not zero, yet we empirically verify in Section 4.2 that it is small enough to still approximately guarantee the scale-free property. We want to emphasize the comparison between Adam-ℓ2 and AdamW: once Adam-ℓ2 adopts a nonzero $\lambda$, it loses the scale-freeness property; in contrast, AdamW enjoys it for arbitrary $\lambda$.
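The scale-freeness claim is easy to verify numerically: multiply every gradient by a positive constant and check whether the iterates change. Below is a hedged sketch with $\epsilon = 0$ and arbitrary toy gradients (not an experiment from the paper):

```python
import numpy as np

def run(grads, lam=0.1, alpha=0.1, eta=1.0, b1=0.9, b2=0.999, decoupled=True):
    """Run AdamW (decoupled=True) or Adam-l2 (decoupled=False) with eps = 0
    on a fixed gradient sequence; return the final iterate."""
    x, m, v = np.array([1.0]), 0.0, 0.0
    for t, g in enumerate(grads, start=1):
        g_eff = g if decoupled else g + lam * x
        m = b1 * m + (1 - b1) * g_eff
        v = b2 * v + (1 - b2) * g_eff ** 2
        m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
        step = eta * alpha * m_hat / np.sqrt(v_hat)
        x = (1 - eta * lam) * x - step if decoupled else x - step
    return x

grads = [np.array([0.3]), np.array([-0.7]), np.array([1.2])]
scaled = [100.0 * g for g in grads]          # multiply every gradient by 100
print(np.allclose(run(grads), run(scaled)))  # True: AdamW's iterates do not change
print(np.allclose(run(grads, decoupled=False),
                  run(scaled, decoupled=False)))  # False: Adam-l2's iterates change
```

The rescaling cancels in the ratio $\hat{m}_t/\sqrt{\hat{v}_t}$ for AdamW, while for Adam-ℓ2 the unscaled term $\lambda x$ inside $g_{\mathrm{eff}}$ breaks the cancellation.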
The same applies to any AdaGrad-type or Adam-type algorithm that employs the weight decay strategy in the same way as Adam-ℓ2, by simply adding the gradient of the ℓ2 regularizer directly to the gradient of $f$ (as implemented in TensorFlow and PyTorch). Such algorithms are scale-free only when they do not use weight decay. In fact, this is one of our main observations: scale-freeness helps optimizers in deep learning, and AdamW preserves this scale-freeness property even with an ℓ2 regularizer. We stress that scale-freeness is an important but largely overlooked property of an optimization algorithm. It has already been used to explain the success of AdaGrad (Orabona & Pál, 2018). Recently, Agarwal et al. (2020) also advocated setting the $\epsilon$ in the denominator of AdaGrad to 0, thus making the update indeed scale-free, and they provide an NLP experiment in which this choice yields the best performance. Now, we show the effect of scale-freeness from a theoretical point of view. Consider a twice continuously differentiable function $f$. It is well known that the convergence rate of many optimization algorithms, like gradient descent, on minimizing $f$ depends on the condition number $\kappa(\nabla^2 f(x))$, i.e., the ratio of the largest to the smallest eigenvalue of the Hessian (see, e.g., Nesterov, 2004). It turns out that scale-freeness can effectively reduce the effect of the condition number, as detailed by the following theorem, whose proof can be found in the appendix. Theorem 1. Let $f$ be a twice continuously differentiable function such that $x^* \in \arg\min f(x)$. Then, let $\tilde{f}_\Lambda$ be the family of functions such that $x^* \in \arg\min \tilde{f}_\Lambda(x)$ and $\nabla^2 \tilde{f}_\Lambda(x) = \Lambda \nabla^2 f(x)$, where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_d) \succeq 0$. Consider a scale-free algorithm whose convergence rate can be bounded in terms of the condition number of the objective function.
Then, the convergence of this algorithm when minimizing $f$ will depend only on the smallest condition number among all functions $\tilde{f}_\Lambda$. To give an example of when this is advantageous, consider the case where $\nabla^2 f(x)$ is a diagonal matrix: $\nabla^2 f(x) = \mathrm{diag}(g_1(x), g_2(x), \dots, g_d(x))$. Assume $\mu \le \mu_i \le g_i(x) \le M_i \le M$ for $i \in \{1, \dots, d\}$. Denote $j = \arg\max_i M_i/\mu_i$. Choose $\lambda_i$ such that $\mu_j \le \lambda_i\mu_i \le \lambda_i g_i(x) \le \lambda_i M_i \le M_j$, and we have that $\Lambda \nabla^2 f(x)$ has condition number $\kappa' = M_j/\mu_j$. This gives scale-free algorithms a big advantage when $\max_i M_i/\mu_i \ll M/\mu$. [Figure: convergence on strongly convex functions with (a) condition number 1 and (b) condition number 100000, illustrating the improvement due to scale-freeness.] Note that the folklore justification for such improvements is that the learning rate of AdaGrad approximates the inverse of the Hessian matrix, but this is incorrect: AdaGrad does not compute Hessians and there is no reason to believe it approximates them in general. More importantly, another scenario demonstrating the advantage of scale-freeness is training deep neural networks. Neural networks are known to suffer from the notorious problem of vanishing/exploding gradients (Bengio et al., 1994; Glorot & Bengio, 2010; Pascanu et al., 2013). This problem leads to the gradient scales being very different across layers, especially between the first and the last layers. The problem is particularly severe when the model is not equipped with normalization mechanisms like Batch Normalization (BN) (Ioffe & Szegedy, 2015). In such cases, when using a non-scale-free optimization algorithm (e.g., SGD), the first and the last layers will proceed at very different speeds, whereas a scale-free algorithm ensures that each layer is updated at a similar pace. | In this paper, the authors first interpret AdamW as an approximation of a proximal mapping, and then propose their own algorithm, AdamProx.
They aim to unravel the connection between AdamW and AdamProx from both theoretical and empirical perspectives. They carefully design probing experiments to verify their hypothesis. | SP:fc2b0639a575e2af8ea539a31870dc2e8724b901 |
Understanding AdamW through Proximal Methods and Scale-Freeness | 1 INTRODUCTION . Recent years have seen a surge of interest in applying deep neural networks ( LeCun et al. , 2015 ) to a myriad of areas . While Stochastic Gradient Descent ( SGD ) ( Robbins & Monro , 1951 ) remains the dominant method for optimizing such models , its performance depends crucially on the step size hyperparameter which controls how far the algorithm proceeds along the negative stochastic gradient in each step to update the model parameters . To alleviate this problem , people have developed a fruitful line of research on adaptive gradient methods ( e.g . Duchi et al. , 2010a ; McMahan & Streeter , 2010 ; Tieleman & Hinton , 2012 ; Zeiler , 2012 ; Luo et al. , 2018 ; Zhou et al. , 2018 ) . These methods provide mechanisms to automatically set stepsizes , and have been shown to greatly reduce the tuning effort while maintaining good performance . Among those adaptive algorithms , one of the most widely used is Adam ( Kingma & Ba , 2015 ) which achieves good results across a variety of problems even by simply adopting the default hyperparameter setting . In practice , to improve the generalization ability , Adam is typically combined with a ℓ2 regularization which adds the squared ℓ2 norm of the model weights on top of the loss function ( which we will call Adam-ℓ2 hereafter ) . This technique is usually referred to as weight decay because when using SGD , the ℓ2 regularization works by first shrinking the model weights by a constant factor in addition to moving along the negative gradient direction in each step . However , as pointed out in Loshchilov & Hutter ( 2019 ) , for Adam , there is no fixed regularization that achieves this same effect . To address this , they provide a method called AdamW that decouples the gradient of the ℓ2 regularization from the update of Adam and directly decays the weights . The two algorithms are shown in Algorithm 1 . 
Although AdamW frequently outperforms Adam-ℓ2 , the approach is primarily motivated empirically without a clear understanding of why it works so well . Recently , however , Bjorck et al . ( 2020 ) applied AdamW in Natural Language Processing ( NLP ) and Reinforcement Learning problems and found no improvement of performance over sufficiently tuned Adam-ℓ2 . Considering the huge popularity of AdamW ( Kuen et al. , 2019 ; Lifchitz et al. , 2019 ; Carion et al. , 2020 ) , we investigate when and why AdamW has a significant improvement over Adam-ℓ2 . In this paper , we focus on understanding the training and testing dynamics of the AdamW update in contrast to Adam-ℓ2 . We consider this contrast from the lens of optimization theory rather than directly investigating generalization over multiple epochs . First , we unveil the surprising connection between AdamW and proximal updates . In particular , we show that AdamW is an approximation of the latter and we also confirm such similarity with an empirical study . Noticing that AdamW and the proximal update are both scale-free while Adam-ℓ2 is not , we derive a theorem showing that scale-free optimizers enjoy an automatic acceleration with respect to the condition number on certain cases . This gives AdamW a concrete theoretical advantage in training over Adam-ℓ2 . Next , we empirically identify the scenario of training very deep neural networks with batch normalization ( BN ) switched off as a case in which AdamW substantially outperforms Adam-ℓ2 in both testing and training . Note that the setting of removing BN is not our invention : indeed , there is already active research in this ( De & Smith , 2020 ; Zhang et al. , 2019 ) . The reason is that BN has many disadvantages ( Brock et al. , 2021 ) including added memory overhead ( Bulò et al. , 2018 ) and training time ( Gitman & Ginsburg , 2017 ) , and a discrepancy between training and inferencing ( Singh & Shrivastava , 2019 ) . 
BN has also been found to be not suitable for many cases including distributed computing with a small minibatch per GPU ( Wu & He , 2018 ; Goyal et al. , 2017 ) , sequential modeling tasks ( Ba et al. , 2016 ) , and contrastive learning algorithms ( Chen et al. , 2020 ) . Moreover , there are already SOTA architectures that do not use BN including the Vision transformer ( Dosovitskiy et al. , 2021 ) and the BERT model ( Devlin et al. , 2019 ) . For such settings of removing BN , we observe that the magnitudes of the coordinates of the updates during training are much more concentrated about a fixed value for AdamW than for Adam-ℓ2 , which is an expected property of scale-free algorithms . Further , as depth increases , we expect a greater diversity of gradient scalings , a scenario that should favor scale-free updates . Our experiments support this hypothesis : deeper networks have more dramatic differences between the distributions of update scales between Adam-ℓ2 and AdamW , and larger accuracy advantages for AdamW . Specifically , the contributions of this paper are : 1 . We show that AdamW can be seen as an approximation of the proximal updates , which utilize the closed-form proximal mapping of the regularizer instead of only its gradient . 2 . We point out the scale-freeness property enjoyed by AdamW and show the advantage of such a property on concrete problem classes . 3 . We find a scenario where AdamW is significantly better than Adam-ℓ2 in both training and testing performance and report an empirical observation of the correlation between such advantage and the scale-freeness property of AdamW . The rest of this paper is organized as follows : In Section 2 we discuss the relevant literature . The connection between AdamW and the proximal updates as well as its scale-freeness are explained in Section 3 . We then report the empirical observations in Section 4 . 
Finally , we conclude with a discussion of the results and point out some potential future directions . 2 RELATED WORK . By enforcing the magnitude of the model weights to be small , weight decay has long been a standard technique to improve the generalization ability in machine learning ( Krogh & Hertz , 1991 ; Bos & Chug , 1996 ) and is still widely employed in training modern deep neural networks ( Devlin et al. , 2019 ; Tan & Le , 2019 ) . Here , we do not attempt to explain the generalization ability of AdamW . Rather , we assume that the regularization and the topology of the network guarantee good generalization performance . Instead , we study algorithms from the perspective of convergence rate . The use of proximal updates in the batch optimization literature dates back at least to 1965 ( Moreau , 1965 ; Martinet , 1970 ; Rockafellar , 1976 ; Parikh & Boyd , 2014 ) , and more recently used even in the stochastic setting ( Toulis & Airoldi , 2017 ; Asi & Duchi , 2019 ) . We are not aware of any previous paper pointing out the connection between AdamW and proximal updates . The scale-free property was first proposed in the online learning field ( Orabona & Pál , 2018 ) . There , they do not need to know a priori the Lipschitz constant bounding the gradient norms while still able to achieve the optimal rates . To the best of our knowledge , scale-freeness has not been explored as an explanation for the efficiency of deep learning optimization algorithms . Algorithm 1 Adam with L2 regularization ( Adam-ℓ2 ) and Adam with decoupled weight decay ( AdamW ) Loshchilov & Hutter ( 2017 ) ( Note that these two are exactly the same when λ = 0 , namely no weight decay . ) 1 : Given α , β1 , β2 , ϵ , λ ∈ R , { ηt } t≥0 . All operations on vectors are element-wise . 2 : Initialize : x0 ∈ Rd , first moment vector m0 ← 0 , second moment vector v0 ← 0 3 : for t = 1 , 2 , . . . 
, T do 4 : Compute the stochastic gradient ∇ft ( xt−1 ) evaluated on a mini-batch of samples 5 : gt ← ∇ft ( xt−1 ) + λxt−1 6 : mt ← β1mt−1 + ( 1 − β1 ) gt , vt ← β2vt−1 + ( 1 − β2 ) gt² 7 : m̂t ← mt/ ( 1 − β1^t ) , v̂t ← vt/ ( 1 − β2^t ) 8 : xt ← xt−1 − ηtλxt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) 9 : end for 3 THEORETICAL INSIGHTS ON THE MERITS OF ADAMW . AdamW and Proximal Updates : Here , we show that AdamW approximates a proximal update with squared ℓ2 regularization . This provides a first theoretical motivation for AdamW . Consider that we want to minimize the objective function F ( x ) = ( λ/2 ) ∥x∥₂² + f ( x ) , ( 1 ) where λ > 0 and f : Rd → R is a function bounded from below . We could use a stochastic optimization algorithm that updates in the following fashion : xt = xt−1 − Mtpt , ( 2 ) where Mt is a generic matrix containing the learning rates and pt denotes the update direction . Specifically , we consider Mt = ηtId , where ηt is a learning rate schedule , e.g. , the constant one or cosine annealing ( Loshchilov & Hutter , 2017 ) . This update covers many cases : 1. pt = gt gives us vanilla SGD ; 2. pt = gt / ( √ ( ∑i=1..t gi² ) + ϵ ) gives the AdaGrad algorithm ( Duchi et al. , 2011 ) ; 3. pt = αm̂t/ ( √ v̂t + ϵ ) recovers the Adam algorithm ( see Algorithm 1 ) . Note that in the above we use gt to denote the stochastic gradient of the entire objective function , ∇ft ( xt−1 ) + λxt−1 ( with λ = 0 if the regularizer is not present ) , with m̂t and v̂t both updated using gt . The update rule ( 2 ) is given by the following online mirror descent update ( Nemirovsky & Yudin , 1983 ; Warmuth & Jagota , 1997 ; Beck & Teboulle , 2003 ) : xt = argminx∈Rd ( λ/2 ) ∥xt−1∥₂² + f ( xt−1 ) + pt⊤ ( x − xt−1 ) + ( 1/2 ) ( x − xt−1 ) ⊤Mt⁻¹ ( x − xt−1 ) . This approximates minimizing a first-order Taylor approximation of F centered at xt−1 , plus a term that measures the distance between xt and xt−1 according to the matrix Mt⁻¹ . The approximation becomes exact when pt = ∇f ( xt−1 ) + λxt−1 .
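The two updates in Algorithm 1 can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the authors' code: the function name `adam_step` and the default hyperparameters are ours, and `decoupled=True` selects the AdamW variant while `decoupled=False` gives Adam-ℓ2.

```python
import numpy as np

def adam_step(x, m, v, grad, t, *, alpha=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, lam=1e-2, eta=1.0, decoupled=False):
    """One step of Adam-l2 (decoupled=False) or AdamW (decoupled=True),
    following Algorithm 1; all operations are element-wise."""
    g = grad if decoupled else grad + lam * x   # Adam-l2 folds the l2 gradient into g_t
    m = beta1 * m + (1 - beta1) * g             # first moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2        # second moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    x_new = x - eta * alpha * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        x_new = x_new - eta * lam * x           # AdamW: decay the weights directly
    return x_new, m, v

# As noted in Algorithm 1, the two variants coincide exactly when lam = 0.
x0, g0 = np.array([1.0, -2.0]), np.array([0.5, 0.1])
x_l2, _, _ = adam_step(x0, np.zeros(2), np.zeros(2), g0, 1, lam=0.0)
x_w,  _, _ = adam_step(x0, np.zeros(2), np.zeros(2), g0, 1, lam=0.0, decoupled=True)
```

With a nonzero λ the two steps diverge: Adam-ℓ2 pushes the decay term through the √v̂t + ϵ normalization along with the gradient, while AdamW applies it at full strength.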
However , this is not the only way to construct first-order updates for the objective function ( 1 ) . An alternative route is to linearize only f and to keep the squared ℓ2 norm in its function form : xt = argmin x∈Rd λ 2 ∥x∥ 2 2 + f ( xt−1 ) + p ⊤ t ( x− xt−1 ) + 12 ( x− xt−1 ) ⊤M−1t ( x− xt−1 ) , This update rule is using the proximal operator ( Moreau , 1965 ; Parikh & Boyd , 2014 ) of 12∥ · ∥ 2 2 with respect to the norm ∥ · ∥M−1t . It is intuitive why this would be a better update : We directly minimize the squared ℓ2 norm instead of approximating it . From the first-order optimality condition , we have xt = ( Id + λMt ) −1 ( xt−1 −Mtpt ) . ( 3 ) When λ = 0 , the update in ( 2 ) and this one coincide . Yet , when λ ̸= 0 , they are no longer the same . We now show how the update in ( 3 ) generalizes the one in AdamW . The update of AdamW is xt = ( 1− ληt ) xt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) . ( 4 ) On the other hand , using Mt = ηtId and pt = αm̂t/ ( √ v̂t + ϵ ) in ( 3 ) gives : xt = ( 1 + ληt ) −1 ( xt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) ) , ( 5 ) which we will call AdamProx afterward . Its first-order Taylor approximation around ηt = 0 is xt ≈ ( 1− ληt ) xt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) , which is exactly the AdamW update ( 4 ) as in the original paper . The careful reader might notice that the approximation of AdamW to AdamProx becomes less accurate when ηt becomes too large , and so be concerned whether this approximation is practical at all . Fortunately , in practice , ηt is never large enough for this to be an issue . Most practical learning rate schedules , e.g. , cosine , exponential , polynomial , and step decay , all decrease from η0 = 1 , or some even smaller value . Thus , the remainder term of this approximation is O ( λη2t ) ≤ O ( λη20 ) which we should always expect to be small as both λ and ηt are small . 
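The closeness of updates (4) and (5) for practical step sizes can be checked numerically. The sketch below uses made-up values of our own choosing for the iterate x and the Adam direction p = αm̂t/(√v̂t + ϵ); the O(λη²) remainder predicts a per-coordinate gap of roughly λη²·|p|.

```python
import numpy as np

# Hypothetical values standing in for one optimizer state: x is the current
# iterate and p the Adam direction alpha * m_hat / (sqrt(v_hat) + eps).
x = np.array([0.3, -1.2, 2.0])
p = np.array([0.05, -0.02, 0.10])
lam, eta = 1e-2, 0.5                          # typical weight decay / learning rate

adamw_x    = (1 - lam * eta) * x - eta * p    # AdamW, update (4)
adamprox_x = (x - eta * p) / (1 + lam * eta)  # AdamProx, update (5)

# The gap is on the order of lam * eta**2 * |p|, tiny for practical schedules.
gap = np.max(np.abs(adamw_x - adamprox_x))
```

Since (1 + λη)⁻¹ = 1 − λη + O(λ²η²), the two updates agree up to this small remainder, matching the Taylor argument above.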
Consequently , we can expect AdamW and AdamProx to perform similarly for learning rate schedules ηt commonly employed in practice , and we will indeed confirm this empirically in Section 4.3 . While perhaps not widely known , the theory of proximal methods is a deep and beautiful branch of optimization ( see Parikh & Boyd ( 2014 ) for a survey ) . On the other hand , although AdamW is a very popular practical algorithm ( e.g. , for training BERT ( Devlin et al. , 2019 ) and vision transformer-based architectures ( Dosovitskiy et al. , 2021 ) ) , it is still unclear how it works so well . So , this connection opens the door to new ways to design optimization algorithms . For example , the proximal update gives an immediate theoretical advantage : the convergence rate , at least in the convex case , will depend on ∥∇f ( xt ) ∥2Mt rather than on ∥∇f ( xt ) + λxt∥ 2 Mt Duchi et al . ( 2010b ) . AdamW is Scale-Free An optimization algorithm is said to be scale-free if its iterates do not change when one multiplies any coordinate of all the gradients by a positive constant ( Orabona & Pál , 2018 ) . It turns out that the update ( 4 ) of AdamW and the update ( 5 ) of AdamProx are both scale-free when ϵ = 0 . This is evident for AdamW , since the scaling factor for any coordinate of the gradient is kept in both m̂t and √ v̂t and will be canceled out when dividing them . In contrast , for Adam-ℓ2 , the addition of the weight decay vector to the gradient ( see Line 5 of Algorithm 1 ) destroys this property . Note that in practical applications ϵ is very small but not zero , yet we empirically verify in Section 4.2 that it is small enough to still approximately guarantee the scale-free property . We want to emphasize the comparison between Adam-ℓ2 and AdamW : once Adam-ℓ2 adopts nonzero λ , it loses the scale-freeness property ; in contrast , AdamW still enjoys this for arbitrary λ . 
The same applies to any AdaGrad-type and Adam-type algorithm that employs a weight decay strategy in the same way as Adam-ℓ2 , by simply adding the gradient of the ℓ2 regularizer directly to the gradient of f ( as implemented in TensorFlow and PyTorch ) . Such algorithms are scale-free only when they do not use weight decay . In fact , this is one of our main observations : scale-freeness helps optimizers in deep learning , and AdamW preserves this scale-freeness property even with an ℓ2 regularizer . We stress that scale-freeness is an important but largely overlooked property of an optimization algorithm . It has already been utilized to explain the success of AdaGrad ( Orabona & Pál , 2018 ) . Recently , Agarwal et al . ( 2020 ) also advocated setting the ϵ in the denominator of AdaGrad to 0 , thus making the update truly scale-free , and they provide an NLP experiment where such a choice yields the best performance . Now , we show the effect of scale-freeness from a theoretical point of view . Consider a twice continuously differentiable function f . It is well known that the convergence rate of many optimization algorithms , like gradient descent , on minimizing f depends on the condition number κ ( ∇2f ( x ) ) , i.e. , the ratio of the largest to the smallest eigenvalue of the Hessian ( see , e.g. , Nesterov , 2004 ) . It turns out that scale-freeness can effectively reduce the effects of the condition number , as detailed by the following theorem , whose proof can be found in the appendix . Theorem 1 . Let f be a twice continuously differentiable function such that x∗ ∈ argmin f ( x ) . Then , let f̃Λ be the family of functions such that x∗ ∈ argmin f̃Λ ( x ) and ∇2f̃Λ ( x ) = Λ∇2f ( x ) , where Λ = diag ( λ1 , . . . , λd ) ⪰ 0 . Consider a scale-free algorithm whose convergence rate can be bounded in terms of the condition number of the objective function .
Then , the convergence of such an algorithm used to minimize f will only depend on the smallest condition number among all functions f̃Λ . To give an example of when this is advantageous , consider when ∇2f ( x ) is a diagonal matrix : ∇2f ( x ) = diag ( g1 ( x ) , g2 ( x ) , . . . , gd ( x ) ) . Assume µ ≤ µi ≤ gi ( x ) ≤ Mi ≤ M for i ∈ { 1 , . . . , d } . Denote j = argmaxi Mi/µi . Choose λi s.t . µj ≤ λiµi ≤ λigi ( x ) ≤ λiMi ≤ Mj , and we have that Λ∇2f ( x ) has a condition number κ′ = Mj/µj . This gives scale-free algorithms a big advantage when maxi Mi/µi ≪ M/µ . [Figure: convergence comparison on strongly convex functions with (a) condition number 1 and (b) condition number 100000, illustrating the advantage of scale-free algorithms.] Note that the folklore justification for such improvements is that the learning rate of AdaGrad approximates the inverse of the Hessian matrix , but this is incorrect : AdaGrad does not compute Hessians and there is no reason to believe it approximates them in general . More importantly , another scenario demonstrating the advantage of scale-freeness is training deep neural networks . Neural networks are known to suffer from the notorious problem of vanishing/exploding gradients ( Bengio et al. , 1994 ; Glorot & Bengio , 2010 ; Pascanu et al. , 2013 ) . This problem leads to the gradient scales being very different across layers , especially between the first and the last layers . The problem is particularly severe when the model is not equipped with normalization mechanisms like Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) . In such cases , when using a non-scale-free optimization algorithm ( e.g. , SGD ) , the first layers and the last layers will proceed at very different speeds , whereas a scale-free algorithm ensures that each layer is updated at a similar pace .
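As a toy instance of the rescaling argument in the theorem, consider a quadratic with a constant diagonal Hessian: each gi(x) is a constant hi, so µi = Mi = hi and maxi Mi/µi = 1. The sketch below (illustrative curvature values of our own choosing, powers of two so the arithmetic is exact) picks λi = 1/hi and drives the condition number from 4096 down to 1.

```python
import numpy as np

# Constant diagonal Hessian with very different curvatures per coordinate
# (illustrative values, not from the paper).
h = np.array([0.25, 1.0, 1024.0])
kappa = h.max() / h.min()          # condition number of f itself: 4096

# Rescaling as in the theorem: with g_i(x) = h_i constant, mu_i = M_i = h_i,
# so lam_i = 1/h_i makes every rescaled curvature equal and kappa' = M_j/mu_j = 1.
lam = 1.0 / h
kappa_scaled = (lam * h).max() / (lam * h).min()
```

A scale-free algorithm behaves as if it were minimizing the best-conditioned member of the family f̃Λ, here the perfectly conditioned one, even though f itself is badly conditioned.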
| This work first re-interprets AdamW as an approximation of a proximal gradient method, which takes advantage of the closed-form proximal mapping of the regularizer instead of only utilizing its gradient information as in Adam-ℓ2. It then further shows that AdamW is a scale-free algorithm, which may enjoy faster convergence. In this way, this work may explain why AdamW generalizes better than Adam-ℓ2. | SP:fc2b0639a575e2af8ea539a31870dc2e8724b901 |
Understanding AdamW through Proximal Methods and Scale-Freeness | 1 INTRODUCTION . Recent years have seen a surge of interest in applying deep neural networks ( LeCun et al. , 2015 ) to a myriad of areas . While Stochastic Gradient Descent ( SGD ) ( Robbins & Monro , 1951 ) remains the dominant method for optimizing such models , its performance depends crucially on the step size hyperparameter which controls how far the algorithm proceeds along the negative stochastic gradient in each step to update the model parameters . To alleviate this problem , people have developed a fruitful line of research on adaptive gradient methods ( e.g . Duchi et al. , 2010a ; McMahan & Streeter , 2010 ; Tieleman & Hinton , 2012 ; Zeiler , 2012 ; Luo et al. , 2018 ; Zhou et al. , 2018 ) . These methods provide mechanisms to automatically set stepsizes , and have been shown to greatly reduce the tuning effort while maintaining good performance . Among those adaptive algorithms , one of the most widely used is Adam ( Kingma & Ba , 2015 ) which achieves good results across a variety of problems even by simply adopting the default hyperparameter setting . In practice , to improve the generalization ability , Adam is typically combined with a ℓ2 regularization which adds the squared ℓ2 norm of the model weights on top of the loss function ( which we will call Adam-ℓ2 hereafter ) . This technique is usually referred to as weight decay because when using SGD , the ℓ2 regularization works by first shrinking the model weights by a constant factor in addition to moving along the negative gradient direction in each step . However , as pointed out in Loshchilov & Hutter ( 2019 ) , for Adam , there is no fixed regularization that achieves this same effect . To address this , they provide a method called AdamW that decouples the gradient of the ℓ2 regularization from the update of Adam and directly decays the weights . The two algorithms are shown in Algorithm 1 . 
Although AdamW frequently outperforms Adam-ℓ2 , the approach is primarily motivated empirically without a clear understanding of why it works so well . Recently , however , Bjorck et al . ( 2020 ) applied AdamW in Natural Language Processing ( NLP ) and Reinforcement Learning problems and found no improvement of performance over sufficiently tuned Adam-ℓ2 . Considering the huge popularity of AdamW ( Kuen et al. , 2019 ; Lifchitz et al. , 2019 ; Carion et al. , 2020 ) , we investigate when and why AdamW has a significant improvement over Adam-ℓ2 . In this paper , we focus on understanding the training and testing dynamics of the AdamW update in contrast to Adam-ℓ2 . We consider this contrast from the lens of optimization theory rather than directly investigating generalization over multiple epochs . First , we unveil the surprising connection between AdamW and proximal updates . In particular , we show that AdamW is an approximation of the latter and we also confirm such similarity with an empirical study . Noticing that AdamW and the proximal update are both scale-free while Adam-ℓ2 is not , we derive a theorem showing that scale-free optimizers enjoy an automatic acceleration with respect to the condition number on certain cases . This gives AdamW a concrete theoretical advantage in training over Adam-ℓ2 . Next , we empirically identify the scenario of training very deep neural networks with batch normalization ( BN ) switched off as a case in which AdamW substantially outperforms Adam-ℓ2 in both testing and training . Note that the setting of removing BN is not our invention : indeed , there is already active research in this ( De & Smith , 2020 ; Zhang et al. , 2019 ) . The reason is that BN has many disadvantages ( Brock et al. , 2021 ) including added memory overhead ( Bulò et al. , 2018 ) and training time ( Gitman & Ginsburg , 2017 ) , and a discrepancy between training and inferencing ( Singh & Shrivastava , 2019 ) . 
BN has also been found to be not suitable for many cases including distributed computing with a small minibatch per GPU ( Wu & He , 2018 ; Goyal et al. , 2017 ) , sequential modeling tasks ( Ba et al. , 2016 ) , and contrastive learning algorithms ( Chen et al. , 2020 ) . Moreover , there are already SOTA architectures that do not use BN including the Vision transformer ( Dosovitskiy et al. , 2021 ) and the BERT model ( Devlin et al. , 2019 ) . For such settings of removing BN , we observe that the magnitudes of the coordinates of the updates during training are much more concentrated about a fixed value for AdamW than for Adam-ℓ2 , which is an expected property of scale-free algorithms . Further , as depth increases , we expect a greater diversity of gradient scalings , a scenario that should favor scale-free updates . Our experiments support this hypothesis : deeper networks have more dramatic differences between the distributions of update scales between Adam-ℓ2 and AdamW , and larger accuracy advantages for AdamW . Specifically , the contributions of this paper are : 1 . We show that AdamW can be seen as an approximation of the proximal updates , which utilize the closed-form proximal mapping of the regularizer instead of only its gradient . 2 . We point out the scale-freeness property enjoyed by AdamW and show the advantage of such a property on concrete problem classes . 3 . We find a scenario where AdamW is significantly better than Adam-ℓ2 in both training and testing performance and report an empirical observation of the correlation between such advantage and the scale-freeness property of AdamW . The rest of this paper is organized as follows : In Section 2 we discuss the relevant literature . The connection between AdamW and the proximal updates as well as its scale-freeness are explained in Section 3 . We then report the empirical observations in Section 4 . 
Finally , we conclude with a discussion of the results and point out some potential future directions . 2 RELATED WORK . By enforcing the magnitude of the model weights to be small , weight decay has long been a standard technique to improve the generalization ability in machine learning ( Krogh & Hertz , 1991 ; Bos & Chug , 1996 ) and is still widely employed in training modern deep neural networks ( Devlin et al. , 2019 ; Tan & Le , 2019 ) . Here , we do not attempt to explain the generalization ability of AdamW . Rather , we assume that the regularization and the topology of the network guarantee good generalization performance . Instead , we study algorithms from the perspective of convergence rate . The use of proximal updates in the batch optimization literature dates back at least to 1965 ( Moreau , 1965 ; Martinet , 1970 ; Rockafellar , 1976 ; Parikh & Boyd , 2014 ) , and more recently used even in the stochastic setting ( Toulis & Airoldi , 2017 ; Asi & Duchi , 2019 ) . We are not aware of any previous paper pointing out the connection between AdamW and proximal updates . The scale-free property was first proposed in the online learning field ( Orabona & Pál , 2018 ) . There , they do not need to know a priori the Lipschitz constant bounding the gradient norms while still able to achieve the optimal rates . To the best of our knowledge , scale-freeness has not been explored as an explanation for the efficiency of deep learning optimization algorithms . Algorithm 1 Adam with L2 regularization ( Adam-ℓ2 ) and Adam with decoupled weight decay ( AdamW ) Loshchilov & Hutter ( 2017 ) ( Note that these two are exactly the same when λ = 0 , namely no weight decay . ) 1 : Given α , β1 , β2 , ϵ , λ ∈ R , { ηt } t≥0 . All operations on vectors are element-wise . 2 : Initialize : x0 ∈ Rd , first moment vector m0 ← 0 , second moment vector v0 ← 0 3 : for t = 1 , 2 , . . . 
, T do 4 : Compute the stochastic gradient ∇ft ( xt−1 ) evaluated on a mini-batch of samples 5 : gt ← ∇ft ( xt−1 ) + λxt−1 6 : mt ← β1mt−1 + ( 1 − β1 ) gt , vt ← β2vt−1 + ( 1 − β2 ) gt² 7 : m̂t ← mt/ ( 1 − β1^t ) , v̂t ← vt/ ( 1 − β2^t ) 8 : xt ← xt−1 − ηtλxt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) 9 : end for 3 THEORETICAL INSIGHTS ON THE MERITS OF ADAMW . AdamW and Proximal Updates : Here , we show that AdamW approximates a proximal update with squared ℓ2 regularization . This provides a first theoretical motivation for AdamW . Consider that we want to minimize the objective function F ( x ) = ( λ/2 ) ∥x∥₂² + f ( x ) , ( 1 ) where λ > 0 and f : Rd → R is a function bounded from below . We could use a stochastic optimization algorithm that updates in the following fashion : xt = xt−1 − Mtpt , ( 2 ) where Mt is a generic matrix containing the learning rates and pt denotes the update direction . Specifically , we consider Mt = ηtId , where ηt is a learning rate schedule , e.g. , the constant one or cosine annealing ( Loshchilov & Hutter , 2017 ) . This update covers many cases : 1. pt = gt gives us vanilla SGD ; 2. pt = gt / ( √ ( ∑i=1..t gi² ) + ϵ ) gives the AdaGrad algorithm ( Duchi et al. , 2011 ) ; 3. pt = αm̂t/ ( √ v̂t + ϵ ) recovers the Adam algorithm ( see Algorithm 1 ) . Note that in the above we use gt to denote the stochastic gradient of the entire objective function , ∇ft ( xt−1 ) + λxt−1 ( with λ = 0 if the regularizer is not present ) , with m̂t and v̂t both updated using gt . The update rule ( 2 ) is given by the following online mirror descent update ( Nemirovsky & Yudin , 1983 ; Warmuth & Jagota , 1997 ; Beck & Teboulle , 2003 ) : xt = argminx∈Rd ( λ/2 ) ∥xt−1∥₂² + f ( xt−1 ) + pt⊤ ( x − xt−1 ) + ( 1/2 ) ( x − xt−1 ) ⊤Mt⁻¹ ( x − xt−1 ) . This approximates minimizing a first-order Taylor approximation of F centered at xt−1 , plus a term that measures the distance between xt and xt−1 according to the matrix Mt⁻¹ . The approximation becomes exact when pt = ∇f ( xt−1 ) + λxt−1 .
However , this is not the only way to construct first-order updates for the objective function ( 1 ) . An alternative route is to linearize only f and to keep the squared ℓ2 norm in its function form : xt = argmin x∈Rd λ 2 ∥x∥ 2 2 + f ( xt−1 ) + p ⊤ t ( x− xt−1 ) + 12 ( x− xt−1 ) ⊤M−1t ( x− xt−1 ) , This update rule is using the proximal operator ( Moreau , 1965 ; Parikh & Boyd , 2014 ) of 12∥ · ∥ 2 2 with respect to the norm ∥ · ∥M−1t . It is intuitive why this would be a better update : We directly minimize the squared ℓ2 norm instead of approximating it . From the first-order optimality condition , we have xt = ( Id + λMt ) −1 ( xt−1 −Mtpt ) . ( 3 ) When λ = 0 , the update in ( 2 ) and this one coincide . Yet , when λ ̸= 0 , they are no longer the same . We now show how the update in ( 3 ) generalizes the one in AdamW . The update of AdamW is xt = ( 1− ληt ) xt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) . ( 4 ) On the other hand , using Mt = ηtId and pt = αm̂t/ ( √ v̂t + ϵ ) in ( 3 ) gives : xt = ( 1 + ληt ) −1 ( xt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) ) , ( 5 ) which we will call AdamProx afterward . Its first-order Taylor approximation around ηt = 0 is xt ≈ ( 1− ληt ) xt−1 − ηtαm̂t/ ( √ v̂t + ϵ ) , which is exactly the AdamW update ( 4 ) as in the original paper . The careful reader might notice that the approximation of AdamW to AdamProx becomes less accurate when ηt becomes too large , and so be concerned whether this approximation is practical at all . Fortunately , in practice , ηt is never large enough for this to be an issue . Most practical learning rate schedules , e.g. , cosine , exponential , polynomial , and step decay , all decrease from η0 = 1 , or some even smaller value . Thus , the remainder term of this approximation is O ( λη2t ) ≤ O ( λη20 ) which we should always expect to be small as both λ and ηt are small . 
Consequently , we can expect AdamW and AdamProx to perform similarly for learning rate schedules ηt commonly employed in practice , and we will indeed confirm this empirically in Section 4.3 . While perhaps not widely known , the theory of proximal methods is a deep and beautiful branch of optimization ( see Parikh & Boyd ( 2014 ) for a survey ) . On the other hand , although AdamW is a very popular practical algorithm ( e.g. , for training BERT ( Devlin et al. , 2019 ) and vision transformer-based architectures ( Dosovitskiy et al. , 2021 ) ) , it is still unclear how it works so well . So , this connection opens the door to new ways to design optimization algorithms . For example , the proximal update gives an immediate theoretical advantage : the convergence rate , at least in the convex case , will depend on ∥∇f ( xt ) ∥2Mt rather than on ∥∇f ( xt ) + λxt∥ 2 Mt Duchi et al . ( 2010b ) . AdamW is Scale-Free An optimization algorithm is said to be scale-free if its iterates do not change when one multiplies any coordinate of all the gradients by a positive constant ( Orabona & Pál , 2018 ) . It turns out that the update ( 4 ) of AdamW and the update ( 5 ) of AdamProx are both scale-free when ϵ = 0 . This is evident for AdamW , since the scaling factor for any coordinate of the gradient is kept in both m̂t and √ v̂t and will be canceled out when dividing them . In contrast , for Adam-ℓ2 , the addition of the weight decay vector to the gradient ( see Line 5 of Algorithm 1 ) destroys this property . Note that in practical applications ϵ is very small but not zero , yet we empirically verify in Section 4.2 that it is small enough to still approximately guarantee the scale-free property . We want to emphasize the comparison between Adam-ℓ2 and AdamW : once Adam-ℓ2 adopts nonzero λ , it loses the scale-freeness property ; in contrast , AdamW still enjoys this for arbitrary λ . 
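The scale-freeness claim is easy to verify numerically: multiplying every coordinate of all the gradients by positive constants leaves the AdamW iterates (with ϵ = 0) unchanged, while the same rescaling changes the Adam-ℓ2 iterates once λ > 0. The sketch below is our own toy setup (a fixed synthetic gradient sequence, ηt = 1, and a helper `run` of our naming), not the paper's experiment.

```python
import numpy as np

def run(grads, *, decoupled, lam=0.1, alpha=0.01, b1=0.9, b2=0.999, eps=0.0):
    """A few optimizer steps (eta_t = 1, eps = 0) on a fixed gradient sequence:
    decoupled=True is AdamW's update (4), decoupled=False is Adam-l2."""
    x = np.ones_like(grads[0]); m = np.zeros_like(x); v = np.zeros_like(x)
    for t, g in enumerate(grads, start=1):
        if not decoupled:
            g = g + lam * x                     # weight decay enters the moments
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        step = alpha * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
        x = (1 - lam) * x - step if decoupled else x - step
    return x

rng = np.random.default_rng(0)
grads = [rng.standard_normal(4) for _ in range(5)]
c = np.array([1e-3, 1.0, 10.0, 1e4])            # per-coordinate positive rescaling
scaled = [c * g for g in grads]

w_gap  = np.max(np.abs(run(grads, decoupled=True)  - run(scaled, decoupled=True)))
l2_gap = np.max(np.abs(run(grads, decoupled=False) - run(scaled, decoupled=False)))
```

Up to floating-point roundoff, `w_gap` vanishes because the rescaling cancels between m̂t and √v̂t, while `l2_gap` is large: the unscaled λx term added to the rescaled gradients distorts the ratio, exactly as Line 5 of Algorithm 1 predicts.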
The same applies to any AdaGrad-type and Adam-type algorithm that employs a weight decay strategy in the same way as Adam-ℓ2 , by simply adding the gradient of the ℓ2 regularizer directly to the gradient of f ( as implemented in TensorFlow and PyTorch ) . Such algorithms are scale-free only when they do not use weight decay . In fact , this is one of our main observations : scale-freeness helps optimizers in deep learning , and AdamW preserves this scale-freeness property even with an ℓ2 regularizer . We stress that scale-freeness is an important but largely overlooked property of an optimization algorithm . It has already been utilized to explain the success of AdaGrad ( Orabona & Pál , 2018 ) . Recently , Agarwal et al . ( 2020 ) also advocated setting the ϵ in the denominator of AdaGrad to 0 , thus making the update truly scale-free , and they provide an NLP experiment where such a choice yields the best performance . Now , we show the effect of scale-freeness from a theoretical point of view . Consider a twice continuously differentiable function f . It is well known that the convergence rate of many optimization algorithms , like gradient descent , on minimizing f depends on the condition number κ ( ∇2f ( x ) ) , i.e. , the ratio of the largest to the smallest eigenvalue of the Hessian ( see , e.g. , Nesterov , 2004 ) . It turns out that scale-freeness can effectively reduce the effects of the condition number , as detailed by the following theorem , whose proof can be found in the appendix . Theorem 1 . Let f be a twice continuously differentiable function such that x∗ ∈ argmin f ( x ) . Then , let f̃Λ be the family of functions such that x∗ ∈ argmin f̃Λ ( x ) and ∇2f̃Λ ( x ) = Λ∇2f ( x ) , where Λ = diag ( λ1 , . . . , λd ) ⪰ 0 . Consider a scale-free algorithm whose convergence rate can be bounded in terms of the condition number of the objective function .
Then , the convergence of such an algorithm used to minimize f will only depend on the smallest condition number among all functions f̃Λ . To give an example of when this is advantageous , consider when ∇2f ( x ) is a diagonal matrix : ∇2f ( x ) = diag ( g1 ( x ) , g2 ( x ) , . . . , gd ( x ) ) . Assume µ ≤ µi ≤ gi ( x ) ≤ Mi ≤ M for i ∈ { 1 , . . . , d } . Denote j = argmaxi Mi/µi . Choose λi s.t . µj ≤ λiµi ≤ λigi ( x ) ≤ λiMi ≤ Mj , and we have that Λ∇2f ( x ) has a condition number κ′ = Mj/µj . This gives scale-free algorithms a big advantage when maxi Mi/µi ≪ M/µ . Specifically : [ Figure : ( a ) Condition number 1 ; ( b ) Condition number 100000 ; improvements on strongly convex functions due to scale-freeness . ] Note that the folklore justification for such improvements is that the learning rate of AdaGrad approximates the inverse of the Hessian matrix , but this is incorrect : AdaGrad does not compute Hessians and there is no reason to believe it approximates them in general . More importantly , another scenario demonstrating the advantage of scale-freeness is training deep neural networks . Neural networks are known to suffer from the notorious problem of vanishing/exploding gradients ( Bengio et al. , 1994 ; Glorot & Bengio , 2010 ; Pascanu et al. , 2013 ) . This problem leads to the gradient scales being very different across layers , especially between the first and the last layers . The problem is particularly severe when the model is not equipped with normalization mechanisms like Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) . In such cases , when using a non-scale-free optimization algorithm ( e.g. , SGD ) , the first layers and the last layers will proceed at very different speeds , whereas a scale-free algorithm ensures that each layer is updated at a similar pace . | The paper gives an analysis of the AdamW optimizer. This paper gives a theoretical view of the AdamW optimization algorithm and connects it to proximal gradient methods.
The authors also explore in depth the advantages of AdamW over Adam with an ℓ2 regularizer from the perspective of the scale-freeness property. By analyzing this property, they find that training is more stable in certain scenarios with large condition numbers. The authors conduct experiments to validate their conclusions. | SP:fc2b0639a575e2af8ea539a31870dc2e8724b901 |
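The diagonal-Hessian example after Theorem 1 can be checked numerically. Below is a small sketch of ours (the constants are made up, chosen as powers of two so the arithmetic is float-exact): for a constant diagonal Hessian, the per-coordinate rescaling Λ with λi = 1/gi drives the condition number of Λ∇²f(x) down to 1, which is the quantity a scale-free algorithm effectively sees instead of the raw M/µ.

```python
import numpy as np

# Diagonal Hessian H = diag(g1, ..., gd) with a raw condition number of 2**20.
H = np.diag([2.0 ** -10, 1.0, 2.0 ** 10])
d = np.diag(H)
raw_kappa = d.max() / d.min()        # M / mu = 2**20 = 1048576

# Per-coordinate scaling Lambda with lambda_i = 1 / g_i (possible here because
# each g_i is constant in x): Lambda @ H becomes the identity matrix.
Lam = np.diag(1.0 / d)
d2 = np.diag(Lam @ H)
scaled_kappa = d2.max() / d2.min()   # condition number drops to 1
print(raw_kappa, scaled_kappa)
```

When the gi vary with x, the same choice only compresses the per-coordinate range down to Mj/µj as in the text, rather than all the way to 1.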
Safe Neurosymbolic Learning with Differentiable Symbolic Execution | We study the problem of learning worst-case-safe parameters for programs that use neural networks as well as symbolic , human-written code . Such neurosymbolic programs arise in many safety-critical domains . However , because they need not be continuous , let alone differentiable , they can not be learned using existing gradient-based approaches to safe learning . Our method , Differentiable Symbolic Execution ( DSE ) , learns such programs by sampling code paths using symbolic execution , constructing gradients of a worst-case “ safety loss ” along these paths , and then backpropagating these gradients through program operations using a generalization of the REINFORCE estimator . We evaluate the method on a mix of synthetic tasks and real-world benchmarks . Our experiments show that DSE significantly outperforms the state-of-the-art DIFFAI method on these tasks . 1 INTRODUCTION . Safety on worst-case inputs has recently emerged as a key challenge in deep learning research . Formal verification of neural networks ( Albarghouthi , 2021 ) is an established response to this challenge . In particular , an exciting body of recent work ( Mirman et al. , 2018 ; Madry et al. , 2017 ; Cohen et al. , 2019 ; Singh et al. , 2018 ) has sought to incorporate formal verification into the training of neural networks . DIFFAI , among the most prominent of such approaches , uses a neural network verifier to construct a differentiable , worst-case safety loss for the learner . This loss is used to regularize a standard data-driven loss , biasing the learner towards parameters that are both performant and safe . A weakness of these methods is that they only consider functional properties ( such as adversarial robustness ) of isolated neural networks . 
By contrast , in real-world applications , neural networks are often embedded within human-written symbolic code , and correctness requirements apply to the entire neurosymbolic composition . For example , consider a car directed by a neural controller ( Qin & Badgwell , 2000 ) . Safety properties for the car are functions of its trajectories , and these trajectories depend not just on the controller but also on the symbolic equations that define the environment . While recent work ( Christakis et al. , 2021 ) has studied the verification of such neurosymbolic programs , there is no prior work on integrating verification and learning for such systems . In this paper , we present the first steps towards such an integration . The fundamental technical challenge here is that while a neural network is differentiable over its parameters , the code surrounding it may be non-differentiable or even discontinuous . This puts our problem beyond the scope of existing methods for integrated learning and verification . We overcome this difficulty using a new method , Differentiable Symbolic Execution ( DSE ) , for estimating gradients of worst-case safety losses of nondifferentiable neurosymbolic programs . DSE is based on a generalization of the classic REINFORCE estimator , which backpropagates gradients through non-differentiable operations by approximating integrals with sampling . In our problem , the integral is the aggregate of the safety losses along symbolic control flow paths in the program . To apply REINFORCE-like ideas here , we need to represent and sample paths in a symbolic way . We do so through an adaptation of the classic method of symbolic execution . We evaluate DSE through several case studies in the embedded control and navigation domains . Our baselines include an extended version of DIFFAI , the current state of the art , and an ablation that does not use an explicit safety loss .
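Since DSE generalizes the REINFORCE estimator, it may help to recall what the classic estimator does. The sketch below is our own illustration, not the paper's path-sampling algorithm: it numerically verifies the score-function identity ∇θ E_{z∼pθ}[f(z)] = E[f(z) ∇θ log pθ(z)] for a Bernoulli choice feeding a non-differentiable branch, the simplest analogue of discrete control flow in a program.

```python
import random

def f(z):
    # "Loss along a control-flow path": a non-differentiable branch on z.
    return 3.0 if z == 1 else 1.0

def reinforce_grad(theta, n=200_000, seed=0):
    # Monte Carlo estimate of E[ f(z) * d/dtheta log p_theta(z) ]
    # for z ~ Bernoulli(theta).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = 1 if rng.random() < theta else 0
        # d/dtheta log p_theta(z) = z/theta - (1 - z)/(1 - theta)
        score = z / theta - (1 - z) / (1 - theta)
        total += f(z) * score
    return total / n

theta = 0.3
# Closed form: E[f] = 3*theta + 1*(1 - theta), so d/dtheta E[f] = 2.
print(reinforce_grad(theta))   # should be close to 2.0
```

The estimator never differentiates through `f` itself; only the log-probability of the sampled branch is differentiated, which is what makes a REINFORCE-style approach viable for discontinuous program semantics.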
Our experiments show that DSE significantly outperforms the baselines in finding safe and high-performance model parameters . We also do a series of synthetic experiments that highlight the practical difficulty that DIFFAI-style approaches face in our setting . To summarize , our main contributions are : • We present the first approach to worst-case-safe parameter learning for neural networks embedded within nondifferentiable , symbolic programs . • As part of our learning algorithm , we give a new way to bring together symbolic execution and stochastic gradient estimators that might have applications outside our immediate task . • We present experimental results that indicate the advantages of DSE over the state-of-the-art in verified learning . 2 PROBLEM FORMULATION . Programs . We define programs with embedded neural networks as symbolic transition systems ( STS ) ( Manna & Pnueli , 2012 ) . Formally , a program Fθ is a tuple ( Loc , X , l0 , Init , Safe , Transθ ) . Here , Loc is a finite set of ( control ) locations , X = { x1 , . . . , xm } is a set of real-valued variables , and l0 ∈ Loc is the initial location . Init , a boolean formula over X , is an initial condition for the program . Safe is a map from locations to constraints over X ; intuitively , Safe ( l ) is a safety requirement asserted at location l. Finally , Transθ is a transition relation consisting of transitions ( l , G , Uθ , l ′ ) such that : ( i ) l is the source location and l′ is the destination location ; ( ii ) the guard G is a constraint over X ; and ( iii ) the update Uθ is a vector 〈U1 , θ , . . . , Um , θ〉 , where each Ui , θ is a real-valued expression over X constructed using standard symbolic operators and neural networks with parameters θ . Intuitively , Ui , θ represents the update to the i-th variable . We assume that each Ui , θ is differentiable in θ . Also , we assume that the programs are deterministic . 
That is , if G and G′ are guards for two distinct transitions from the same source state , then G ∧ G′ is unsatisfiable . Programs in higher-level languages can be translated to the STS notation in a standard way . For example , Figure 1 ( left ) shows a simple high-level program . The STS for this program appears in Figure 1 ( right ) . Remarkably , while the program is simple , the state-of-the-art DIFFAI approach to verified learning fails to learn safe parameters for it . Safety Semantics . In classical formal methods , a program is considered to be safe if all of its executions satisfy a logical constraint . However , in learning settings , it helps to know not only whether a program is unsafe , but also the extent to which it is unsafe . Consequently , we define the safety semantics of programs in a quantitative way , in terms of a ( worst-case ) safety loss that quantifies the extent of the program ’ s safeness . Formally , let a state of Fθ be a pair s = ( l , v ) , where l is a location and v ∈ Rm is an assignment of values to the variables ( i.e. , v ( i ) is the value of xi ) . Such a state is said to be at location l . For boolean constraints B and assignments v to variables in B , let us write B ( v ) if v satisfies B . A state ( l0 , v ) , where Init ( v ) , is an initial state . A state ( l , v ) is safe if ( Safe ( l ) ) ( v ) . Let v ∈ Rm be an assignment to the variables . For a real-valued expression E over X , let E ( v ) be the value of E when xi is substituted by v ( i ) . For an update U = 〈U1 , . . . , Um〉 , we define U ( v ) as the assignment 〈U1 ( v ) , . . . , Um ( v ) 〉 . A length-n trajectory of Fθ is a sequence τ = 〈s0 , . . . , sn〉 , with si = ( li , vi ) , such that : ( i ) s0 is an initial state ; and ( ii ) for each i , there is a transition ( li , G , U , li+1 ) such that G ( vi ) and vi+1 = U ( vi ) . Let us fix a size bound N for trajectories . A trajectory τ = 〈s0 , . . .
, sn〉 is maximal if it has length N , or if there is no trajectory τ ′ = 〈s0 , . . . , sn , sn+1〉 with length ≤ N . Because our programs are deterministic , there is a unique maximal trajectory from each s0 . We denote this trajectory by τ ( s0 ) . Let us assume a real-valued loss Unsafe ( s ) that quantifies the unsafeness of each state s. We require Unsafe ( s ) = 0 if s is safe and Unsafe ( s ) > 0 otherwise . We lift this measure to trajectories τ by letting Unsafe ( τ ) = ∑ s appears in τ Unsafe ( s ) . The safety loss C ( θ ) for Fθ is now defined as : C ( θ ) = max s is an initial state Unsafe ( τ ( s ) ) . ( 1 ) Thus , C ( θ ) = 0 if and only if all program trajectories are safe . Problem Statement . Our learning problem formalizes a setting in which we have training data for neural networks inside a program Fθ . While training the networks with respect to this data , we must ensure that the overall program satisfies its safety requirements . To ensure that the parameters of the different neural networks in Fθ are not incorrectly entangled , we assume that only one of these networks , NNθ , has trainable parameters.1 We expect as input a training set of i.i.d . samples from an unknown distribution over the inputs and outputs of NNθ , and a differentiable data loss Q ( θ ) that quantifies the network ’ s fidelity to this training set . Our learning goal is to solve the following constrained optimization problem : min θ Q ( θ ) s.t . C ( θ ) ≤ 0 . ( 2 ) 3 LEARNING ALGORITHM . Our learning approach is based on two ideas . First , we directly apply a recently-developed equivalence between constrained and regularized learning ( Agarwal et al. , 2018 ; Le et al. , 2019 ) to reduce Equation ( 2 ) to a series of unconstrained optimization problems . Second , we use the novel technique of Differentiable Symbolic Execution ( DSE ) to solve these unconstrained problems . 
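The Section 2 definitions can be rendered concretely. The following is a minimal, hypothetical sketch of ours (the toy program, the controller parameter, and all names are made up, not taken from the paper): a deterministic STS with one variable, its unique maximal trajectory τ(s0), the per-trajectory unsafeness Unsafe(τ) = Σs Unsafe(s), and the worst-case safety loss C(θ) approximated by a max over a grid of initial states.

```python
# Toy deterministic STS with locations {l0, l1}, one variable x, and a
# stand-in parametric "controller" update x <- x + theta * x at l0.
def step(loc, x, theta):
    if loc == "l0":
        return "l1", x + theta * x
    return "l1", x                      # l1 is absorbing

def unsafe(loc, x):
    # Safe(l1) requires |x| <= 1; Unsafe(s) = 0 iff s is safe, > 0 otherwise.
    return max(0.0, abs(x) - 1.0) if loc == "l1" else 0.0

def trajectory(x0, theta, N=5):
    # The unique maximal trajectory from the initial state (l0, x0).
    loc, x, traj = "l0", x0, [("l0", x0)]
    for _ in range(N):
        loc, x = step(loc, x, theta)
        traj.append((loc, x))
    return traj

def safety_loss(theta, inits):
    # C(theta) = max over initial states of the summed per-state unsafeness.
    return max(sum(unsafe(l, x) for l, x in trajectory(x0, theta))
               for x0 in inits)

inits = [i / 10 for i in range(-10, 11)]   # Init: x0 in [-1, 1] (grid)
print(safety_loss(0.5, inits) > 0)         # theta = 0.5 pushes |x| past 1
print(safety_loss(-0.5, inits) == 0.0)     # theta = -0.5 keeps every run safe
```

The grid max is only a sampled lower bound on the true C(θ), which is defined as a max over all initial states; computing it soundly is exactly what requires the symbolic machinery.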
At the highest level , we convexify the class of programs Fθ by allowing stochastic combinations of programs . That is , we now allow programs of the form Fθ̂ = ∑T t=1 αtFθt , where ∑ t αt = 1 . To execute the program Fθ̂ from a given initial state , one samples a specific program Fθt from the distribution ( α1 , . . . , αT ) and then executes it from that state . Equation ( 2 ) can now be rewritten as the problem maxλ∈R+ minθ̂ Q ( θ̂ ) + λC ( θ̂ ) , which in turn can be solved using a classic algorithm ( Freund & Schapire , 1999 ) for computing equilibria in a two-player game . We omit further details about this algorithm because it follows Le et al . ( 2019 ) . ( For more details , see Appendix A.1 . ) A key feature of our high-level algorithm is that it repeatedly solves the optimization problem min θ Q ( θ ) + λC ( θ ) ( 3 ) for fixed values of λ . This problem is challenging because while Q ( θ ) is differentiable in θ , C ( θ ) depends on the entirety of Fθ and may not even be continuous . As we demonstrate in Section 5 , this makes it difficult to apply state-of-the-art gradient-based approaches to worst-case safe learning . DSE , our main contribution , addresses this challenge by estimating gradients ∇θC # ( θ ) of a differentiable approximation C # ( θ ) of C ( θ ) . We present the details of this method in the next section . | This paper presents an approach to learn worst-case safe parameters for neurosymbolic programs (programs with neural networks + symbolic portions). The key contribution is an algorithm to compute gradients for the worst-case safety loss by symbolically sampling control paths in the programs and using a modification of the REINFORCE estimator to approximate the gradients. The approach is compared with DiffAI on several synthetic tasks. | SP:ca5d86851dcf3c7e5ddac4846cd25aa0c87830b5 |
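The saddle-point reformulation maxλ≥0 minθ Q(θ) + λC(θ) can be illustrated on a toy differentiable problem. The sketch below is ours, not the paper's algorithm (which uses the game-theoretic scheme of Freund & Schapire and DSE gradients): it alternates a (sub)gradient step on θ with a dual-ascent step on λ, for a made-up data loss Q(θ) = (θ − 2)² and safety loss C(θ) = max(0, θ − 1), whose constrained optimum is θ = 1.

```python
def Q(theta):
    # Toy data loss: wants theta = 2.
    return (theta - 2.0) ** 2

def C(theta):
    # Toy safety loss: zero iff the "safety constraint" theta <= 1 holds.
    return max(0.0, theta - 1.0)

def solve(steps=5000, lr=0.01, lr_lam=0.1):
    theta, lam = 0.0, 0.0
    for _ in range(steps):
        # Subgradient of Q(theta) + lam * C(theta) in theta.
        g = 2.0 * (theta - 2.0) + (lam if theta > 1.0 else 0.0)
        theta -= lr * g
        # Dual ascent on lam, projected onto lam >= 0.
        lam = max(0.0, lam + lr_lam * C(theta))
    return theta, lam

theta, lam = solve()
print(round(theta, 2))   # close to 1, the constrained optimum
```

The inner minimization here is trivial because C is already differentiable almost everywhere; the whole point of DSE is to supply a usable gradient estimate when C comes from a discontinuous program.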
Safe Neurosymbolic Learning with Differentiable Symbolic Execution | The paper proposes DSE, an approach to optimizing parameters of neurosymbolic programs (programs with mixed neural and symbolic components) while satisfying a safety constraint. To accomplish this, DSE defines a *safety loss*, a nonnegative term in the loss function which is nonzero when the function does not satisfy the safety constraint. This safety loss is constructed by symbolically executing the program, executing discrete transitions probabilistically according to a uniform distribution over concrete states.
The paper evaluates DSE on a set of small-scale case studies, showing that in programs with discrete control flow, DSE can often train neural networks that soundly lead to safe behavior, with similar quality of learned results as those of a neurosymbolic training approach which does not take into account safety. | SP:ca5d86851dcf3c7e5ddac4846cd25aa0c87830b5 |
Safe Neurosymbolic Learning with Differentiable Symbolic Execution | We study the problem of learning worst-case-safe parameters for programs that use neural networks as well as symbolic , human-written code . Such neurosymbolic programs arise in many safety-critical domains . However , because they need not be continuous , let alone differentiable , they can not be learned using existing gradient-based approaches to safe learning . Our method , Differentiable Symbolic Execution ( DSE ) , learns such programs by sampling code paths using symbolic execution , constructing gradients of a worst-case “ safety loss ” along these paths , and then backpropagating these gradients through program operations using a generalization of the REINFORCE estimator . We evaluate the method on a mix of synthetic tasks and real-world benchmarks . Our experiments show that DSE significantly outperforms the state-of-the-art DIFFAI method on these tasks . 1 INTRODUCTION . Safety on worst-case inputs has recently emerged as a key challenge in deep learning research . Formal verification of neural networks ( Albarghouthi , 2021 ) is an established response to this challenge . In particular , an exciting body of recent work ( Mirman et al. , 2018 ; Madry et al. , 2017 ; Cohen et al. , 2019 ; Singh et al. , 2018 ) has sought to incorporate formal verification into the training of neural networks . DIFFAI , among the most prominent of such approaches , uses a neural network verifier to construct a differentiable , worst-case safety loss for the learner . This loss is used to regularize a standard data-driven loss , biasing the learner towards parameters that are both performant and safe . A weakness of these methods is that they only consider functional properties ( such as adversarial robustness ) of isolated neural networks . 
By contrast , in real-world applications , neural networks are often embedded within human-written symbolic code , and correctness requirements apply to the entire neurosymbolic composition . For example , consider a car directed by a neural controller ( Qin & Badgwell , 2000 ) . Safety properties for the car are functions of its trajectories , and these trajectories depend not just on the controller but also the symbolic equations that define the environment . While recent work ( Christakis et al. , 2021 ) has studied the verification of such neurosymbolic programs , there is no prior work on integrating verification and learning for such systems . In this paper , we present the first steps towards such an integration . The fundamental technical challenge here is that while a neural network is differentiable over its parameters , the code surrounding may be non-differentiable or even discontinuous . This puts our problem beyond the scope of existing methods for integrated learning and verification . We overcome this difficulty using a new method , Differentiable Symbolic Execution ( DSE ) , for estimating gradients of worst-case safety losses of nondifferentiable neurosymbolic programs . DSE is based on a generalization of the classic REINFORCE estimator , which backpropagates gradients through non-differentiable operations by approximating integrals with sampling . In our problem , the integral is the aggregate of the safety losses along symbolic control flow paths in the program . To apply REINFORCE-like ideas here , we need to represent and sample paths in a symbolic way . We do so through an adaptation of the classic method of symbolic execution . We evaluate DSE through several case studies in the embedded control and navigation domains . Our baselines include an extended version of DIFFAI , the current state of the art , and an ablation that does not use an explicit safety loss . 
Our experiments show that DSE significantly outperforms the baselines in finding safe and high-performance model parameters . We also do a series of synthetic experiments that highlight the practical difficulty that DIFFAI-style approaches face in our setting . To summarize , our main contributions are : • We present the first approach to worst-case-safe parameter learning for neural networks embedded within nondifferentiable , symbolic programs . • As part of our learning algorithm , we give a new way to bring together symbolic execution and stochastic gradient estimators that might have applications outside our immediate task . • We present experimental results that indicate the advantages of DSE over the state-of-the-art in verified learning . 2 PROBLEM FORMULATION . Programs . We define programs with embedded neural networks as symbolic transition systems ( STS ) ( Manna & Pnueli , 2012 ) . Formally , a program Fθ is a tuple ( Loc , X , l0 , Init , Safe , Transθ ) . Here , Loc is a finite set of ( control ) locations , X = { x1 , . . . , xm } is a set of real-valued variables , and l0 ∈ Loc is the initial location . Init , a boolean formula over X , is an initial condition for the program . Safe is a map from locations to constraints over X ; intuitively , Safe ( l ) is a safety requirement asserted at location l. Finally , Transθ is a transition relation consisting of transitions ( l , G , Uθ , l ′ ) such that : ( i ) l is the source location and l′ is the destination location ; ( ii ) the guard G is a constraint over X ; and ( iii ) the update Uθ is a vector 〈U1 , θ , . . . , Um , θ〉 , where each Ui , θ is a real-valued expression over X constructed using standard symbolic operators and neural networks with parameters θ . Intuitively , Ui , θ represents the update to the i-th variable . We assume that each Ui , θ is differentiable in θ . Also , we assume that the programs are deterministic . 
That is, if G and G′ are guards for two distinct transitions from the same source location, then G ∧ G′ is unsatisfiable. Programs in higher-level languages can be translated to the STS notation in a standard way. For example, Figure 1 (left) shows a simple high-level program. The STS for this program appears in Figure 1 (right). Remarkably, while the program is simple, the state-of-the-art DIFFAI approach to verified learning fails to learn safe parameters for it. Safety Semantics. In classical formal methods, a program is considered to be safe if all of its executions satisfy a logical constraint. However, in learning settings, it helps to know not only whether a program is unsafe, but also the extent to which it is unsafe. Consequently, we define the safety semantics of programs in a quantitative way, in terms of a (worst-case) safety loss that quantifies the extent to which the program is unsafe. Formally, let a state of F_θ be a pair s = (l, v), where l is a location and v ∈ R^m is an assignment of values to the variables (i.e., v(i) is the value of x_i). Such a state is said to be at location l. For boolean constraints B and assignments v to the variables in B, let us write B(v) if v satisfies B. A state (l_0, v), where Init(v), is an initial state. A state (l, v) is safe if (Safe(l))(v). Let v ∈ R^m be an assignment to the variables. For a real-valued expression E over X, let E(v) be the value of E when each x_i is substituted by v(i). For an update U = ⟨U_1, ..., U_m⟩, we define U(v) as the assignment ⟨U_1(v), ..., U_m(v)⟩. A length-n trajectory of F_θ is a sequence τ = ⟨s_0, ..., s_n⟩, with s_i = (l_i, v_i), such that: (i) s_0 is an initial state; and (ii) for each i, there is a transition (l_i, G, U, l_{i+1}) such that G(v_i) and v_{i+1} = U(v_i). Let us fix a size bound N for trajectories. A trajectory τ = ⟨s_0, ..., s_n⟩ is maximal if it has length N, or if there is no trajectory τ′ = ⟨s_0, ..., s_n, s_{n+1}⟩ with length ≤ N. Because our programs are deterministic, there is a unique maximal trajectory from each s_0. We denote this trajectory by τ(s_0). Let us assume a real-valued loss Unsafe(s) that quantifies the unsafeness of each state s. We require Unsafe(s) = 0 if s is safe and Unsafe(s) > 0 otherwise. We lift this measure to trajectories τ by letting Unsafe(τ) = Σ_{s appears in τ} Unsafe(s). The safety loss C(θ) for F_θ is now defined as: C(θ) = max_{s is an initial state} Unsafe(τ(s)). (1) Thus, C(θ) = 0 if and only if all program trajectories are safe. Problem Statement. Our learning problem formalizes a setting in which we have training data for neural networks inside a program F_θ. While training the networks with respect to this data, we must ensure that the overall program satisfies its safety requirements. To ensure that the parameters of the different neural networks in F_θ are not incorrectly entangled, we assume that only one of these networks, NN_θ, has trainable parameters. We expect as input a training set of i.i.d. samples from an unknown distribution over the inputs and outputs of NN_θ, and a differentiable data loss Q(θ) that quantifies the network's fidelity to this training set. Our learning goal is to solve the following constrained optimization problem: min_θ Q(θ) s.t. C(θ) ≤ 0. (2) 3 LEARNING ALGORITHM. Our learning approach is based on two ideas. First, we directly apply a recently-developed equivalence between constrained and regularized learning (Agarwal et al., 2018; Le et al., 2019) to reduce Equation (2) to a series of unconstrained optimization problems. Second, we use the novel technique of Differentiable Symbolic Execution (DSE) to solve these unconstrained problems.
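As a sanity check on Equation (1), the worst-case loss can be approximated by brute force on a toy deterministic program: unroll the unique maximal trajectory from each initial state in a grid over Init and take the maximum accumulated unsafety. The program (x := θ·x with safety requirement x ≤ 2), the grid, and the length bound are all illustrative choices, not the paper's benchmarks.

```python
# Sketch: unroll the unique maximal trajectory from an initial state and
# accumulate the state-wise unsafety, then take a max over sampled initial
# states as a crude stand-in for the worst case in Equation (1).

N = 10  # trajectory length bound

def trajectory(x0, theta):
    """Toy deterministic program: x := theta * x at every step."""
    traj = [("l0", x0)]
    x = x0
    for _ in range(N):
        x = theta * x                 # the single (neural) update U_theta
        traj.append(("l0", x))
    return traj

def unsafe_state(x):
    return max(0.0, x - 2.0)          # 0 if safe (x <= 2), positive otherwise

def safety_loss(theta, init_states):
    return max(sum(unsafe_state(x) for (_, x) in trajectory(x0, theta))
               for x0 in init_states)

inits = [i / 10.0 for i in range(11)]   # grid over Init: x in [0, 1]
print(safety_loss(0.5, inits))          # contracting update stays safe: 0.0
```

A contracting update (θ = 0.5) yields C = 0, while an expanding one (e.g., θ = 1.2) eventually violates x ≤ 2 and makes the loss strictly positive; enumerating a grid like this is exactly what DSE's differentiable surrogate is meant to avoid.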
At the highest level, we convexify the class of programs F_θ by allowing stochastic combinations of programs. That is, we now allow programs of the form F_θ̂ = Σ_{t=1}^T α_t F_{θ_t}, where Σ_t α_t = 1. To execute the program F_θ̂ from a given initial state, one samples a specific program F_{θ_t} from the distribution (α_1, ..., α_T) and then executes it from that state. Equation (2) can now be rewritten as the problem max_{λ∈R+} min_θ̂ Q(θ̂) + λC(θ̂), which in turn can be solved using a classic algorithm (Freund & Schapire, 1999) for computing equilibria in a two-player game. We omit further details about this algorithm because it follows Le et al. (2019). (For more details, see Appendix A.1.) A key feature of our high-level algorithm is that it repeatedly solves the optimization problem min_θ Q(θ) + λC(θ) (3) for fixed values of λ. This problem is challenging because, while Q(θ) is differentiable in θ, C(θ) depends on the entirety of F_θ and may not even be continuous. As we demonstrate in Section 5, this makes it difficult to apply state-of-the-art gradient-based approaches to worst-case safe learning. DSE, our main contribution, addresses this challenge by estimating gradients ∇_θ C#(θ) of a differentiable approximation C#(θ) of C(θ). We present the details of this method in the next section. | This paper targets the problem of learning parameters of programs that involve both neural and symbolic components, with the objective that the program is guaranteed to be safe. This objective is challenging because it is not differentiable. The proposed method uses symbolic execution to finitize the execution trajectories and uses sampling with a REINFORCE-style algorithm to optimize an approximated objective. Results show that although the proposed method is not guaranteed to be sound, in practice it produces programs that are safe on the example benchmarks. | SP:ca5d86851dcf3c7e5ddac4846cd25aa0c87830b5 |
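The outer game can be pictured with scalar stand-ins: a λ-player performs projected ascent on the constraint value while the θ-player descends on the penalized objective Q(θ) + λC(θ). The toy losses and step sizes below are invented for illustration, and the inner solver here is a plain subgradient step rather than the paper's DSE surrogate.

```python
# Scalar caricature of the outer game: the theta-player descends on the
# penalized objective Q + lambda * C while the lambda-player ascends on the
# constraint value. Q, C, and all step sizes are invented for illustration.

def Q(theta):  return (theta - 2.0) ** 2          # data loss: wants theta = 2
def C(theta):  return max(0.0, theta - 1.0)       # safe only when theta <= 1

def dQ(theta): return 2.0 * (theta - 2.0)
def dC(theta): return 1.0 if theta > 1.0 else 0.0

theta, lam = 0.0, 0.0
for _ in range(2000):
    theta -= 0.01 * (dQ(theta) + lam * dC(theta))  # theta-player: descend
    lam = max(0.0, lam + 0.1 * C(theta))           # lambda-player: ascend

# theta settles near the constrained optimum theta = 1, with lam acting as
# the price of the safety constraint.
```

The data loss alone would drive θ to 2, but the growing multiplier pulls it back to the boundary of the safe region, which is the intended effect of the reduction from Equation (2) to Equation (3).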
Few-Shot Backdoor Attacks on Visual Object Tracking | 1 INTRODUCTION. Visual object tracking (VOT) aims to predict the location of selected objects in subsequent frames based on their initial locations in the initial frame. It has supported many impactful and mission-critical applications such as intelligent surveillance and self-driving systems. The security of VOT models against potential adversaries is thus of great importance and worth careful investigation. Currently, most of the advanced VOT trackers (Li et al., 2019; Lu et al., 2020; Wang et al., 2021b) are based on deep neural networks (DNNs), siamese networks in particular. Training these models often requires large-scale datasets and a large amount of computational resources. As such, third-party resources such as datasets, backbones, and pre-trained models are frequently exploited or directly applied to save training costs. While these external resources bring certain convenience, they also introduce opacity into the training process. This raises an important question: will this opacity bring new security risks into VOT? In this paper, we reveal the vulnerability of VOT to backdoor attacks that are caused by outsourced training or by using third-party pre-trained models. Backdoor attacks are a type of training-time threat to deep learning that implants hidden backdoors into a target model by injecting a trigger pattern (e.g., a local patch) into a small subset of training samples (Li et al., 2020). Existing backdoor attacks are mostly designed for classification tasks and are targeted attacks tied to a specific label (known as the target label) (Gu et al., 2019; Cheng et al., 2021; Nguyen & Tran, 2021). These attacks are not fully transferable to VOT tasks due to the fundamental difference between classification and object tracking. Different from attacking a classifier, making an object escape the tracking is a more threatening objective for VOT.
As such, in this paper, we explore specialized backdoor attacks for VOT, which are untargeted by nature: the backdoored model behaves normally on benign samples yet fails to track the target object whenever the trigger appears. ∗The first two authors contributed equally to this work. Correspondence to: Xingjun Ma (danxjma@gmail.com) and Shu-Tao Xia (xiast@sz.tsinghua.edu.cn). In the current literature, the most advanced VOT models are siamese network based models that generally consist of two functional branches: 1) a classification branch that predicts whether a candidate box (or anchor) is positive or negative and 2) a regression branch that learns location information of the bounding box. Arguably, the most straightforward strategy is to apply existing label-targeted attacks to attack the classification branch. Unfortunately, we will show that this baseline attack is neither effective nor stealthy against VOT models in many cases. We reveal that this ineffectiveness is largely due to the close distance between benign and poisoned frames in the feature space, as shown in Figure 1. Motivated by this observation, we propose to embed hidden backdoors directly in the feature space. Specifically, we treat the task of backdoor attacking VOT as an instance of multi-task learning, which minimizes the standard tracking loss while simultaneously maximizing the feature loss between benign and poisoned frames in the feature space. The problem can be effectively solved by alternating optimization of the two loss terms. In particular, the optimization of the feature loss encourages few-shot effectiveness, which allows the attack to remain effective even when the trigger appears in only a few frames. Besides, we randomly select only a few training frames for poisoning. This strategy not only reduces the computational cost but also avoids significant degradation of the model's tracking performance on benign videos.
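The alternating scheme described above can be caricatured with scalars: a one-parameter "feature extractor" w alternates between a gradient step that fits the benign (tracking) objective and a gradient step that enlarges the benign/poisoned feature gap. All quantities are synthetic stand-ins, not the paper's losses; the point is only that alternating optimization reaches a compromise where the gap grows while the benign fit stays reasonable.

```python
# Hypothetical scalar stand-in for the multi-task objective: features are
# f(x) = w * x, the trigger is t, so the benign/poisoned feature gap is
# |f(x + t) - f(x)| = |w * t|. Even steps fit the benign task (which wants
# w = 1); odd steps ascend on the squared feature gap.

t = 1.0
w = 0.0
for step in range(200):
    if step % 2 == 0:
        w -= 0.05 * 2.0 * (w - 1.0)      # descend on benign loss (w - 1)^2
    else:
        w += 0.02 * 2.0 * (w * t) * t    # ascend on feature gap (w * t)^2

benign_loss = (w - 1.0) ** 2
feature_gap = abs(w * t)
```

Shrinking the ascent step size trades feature separation for benign accuracy, which mirrors the paper's choice to poison only a few frames so that benign tracking performance is preserved.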
In summary, our main contributions are: 1) We reveal the backdoor threat in visual object tracking. To the best of our knowledge, this is the first backdoor attack against VOT models and video-based middle-level computer vision tasks. 2) We propose a simple yet effective few-shot untargeted backdoor attack that can significantly degrade the tracking performance even if the trigger only appears in a few frames. 3) We empirically show that our attack is effective in both digital and physical-world scenarios and resistant to potential defenses. 2 RELATED WORK. 2.1 BACKDOOR ATTACK. Backdoor attack is an emerging yet severe threat to DNNs. A backdoored model behaves normally on benign samples whereas it constantly predicts the target label whenever the trigger appears. Currently, most existing backdoor attacks (Gu et al., 2019; Zeng et al., 2021a; Li et al., 2021c) are designed for image classification tasks and targeted towards an adversary-specified label. Specifically, a backdoor attack can be characterized by its trigger pattern t, target label y_t, poisoned image generator G(·), and poisoning rate γ. Taking BadNets (Gu et al., 2019) for example, given a benign training set D = {(x_i, y_i)}_{i=1}^N, the adversary randomly selects γ% of the samples (denoted D_s) from D to generate their poisoned version D_p = {(x′, y_t) | x′ = G(x; t), (x, y) ∈ D_s}, where G(x; t) = (1 − λ) ⊗ x + λ ⊗ t with λ ∈ {0, 1}^{C×W×H}, and ⊗ indicates the element-wise product. It then trains a backdoored model (i.e., f_θ) on the poisoned subset D_p and the remaining benign samples D_b = D \ D_s by solving the optimization problem min_θ Σ_{(x,y)∈D_p∪D_b} L(f_θ(x), y), where θ denotes the model parameters and L(·) is the loss function. Currently, there are also a few backdoor attacks developed outside the context of image classification (Wang et al., 2021a; Zhai et al., 2021; Xiang et al., 2021).
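The BadNets-style generator above, G(x; t) = (1 − λ) ⊗ x + λ ⊗ t, is easy to sketch; plain nested lists stand in for C×H×W tensors here, and the 2×2 bottom-right patch is an illustrative trigger placement rather than any particular attack's choice.

```python
# BadNets-style poisoned-image generator: a binary mask places the trigger
# patch into the image via G(x; t) = (1 - mask) * x + mask * trigger.
# Shapes and the 2x2 corner patch are illustrative.

def poison(x, trigger, mask):
    return [[[(1 - mask[c][i][j]) * x[c][i][j] + mask[c][i][j] * trigger[c][i][j]
              for j in range(len(x[0][0]))]
             for i in range(len(x[0]))]
            for c in range(len(x))]

C, H, W = 3, 4, 4
x       = [[[0.0] * W for _ in range(H)] for _ in range(C)]   # benign "image"
trigger = [[[1.0] * W for _ in range(H)] for _ in range(C)]   # trigger values
mask    = [[[1.0 if i >= H - 2 and j >= W - 2 else 0.0
             for j in range(W)] for i in range(H)] for _ in range(C)]

x_p = poison(x, trigger, mask)  # trigger pasted into the bottom-right corner
```

Relabeling the selected γ% of poisoned samples to y_t and retraining then completes the label-targeted attack described above.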
To the best of our knowledge , the backdoor attack proposed by Zhao et al . ( 2020b ) is the only existing backdoor attack on video models . However , it is a label-targeted attack designed for video classification tasks and can not be directly applied to VOT models . Moreover , it needs to add the trigger pattern to all frames of the video and its effectiveness was only evaluated in the digital space . In particular , backdoor attacks are different from adversarial attacks ( Madry et al. , 2018 ; Croce & Hein , 2020 ; Andriushchenko et al. , 2020 ) . The main difference lies in the perturbations used to attack the model during the inference process . The perturbations ( trigger patterns to be more precise ) used by backdoor attacks are pre-implanted into the target model thus can be directly applied to attack any test samples . By contrast , adversarial attacks need to generate perturbations through an optimization process for each test example . 2.2 BACKDOOR DEFENSE . Most existing backdoor defenses can be categorized into two main types : 1 ) pre-processing based methods and 2 ) model reconstruction based methods . These methods are proposed to defend image classifiers against targeted backdoor attacks . Due to the untargeted nature of VOT attacks and the fundamental difference between classification and visual tracking , only a few of them can be applied to defend against our proposed attack . Here , we briefly review these potential defenses . Pre-processing based Defense . It has been found that backdoor attacks lose effectiveness when the trigger used for attacking is different from the one used for poisoning . This has motivated the use of image pre-processing techniques ( e.g. , scaling and color-shifting ) to alleviate backdoor threats before feeding a test image into the model for inference ( Liu et al. , 2017 ; Zeng et al. , 2021b ; Li et al. , 2021b ) . 
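One such pre-processing defense can be sketched as a random brightness shift applied to a test image before inference, in the hope of breaking the exact trigger pattern the backdoor was trained on. The transform, its magnitude, and the [0, 1] pixel range are illustrative assumptions, not a specific method from the cited works.

```python
import random

# Sketch of a pre-processing defense: perturb pixel values (a small random
# brightness shift here) before feeding the image to the model, so that the
# trigger seen at test time no longer exactly matches the poisoning trigger.

def brightness_shift(img, rng, strength=0.1):
    delta = rng.uniform(-strength, strength)
    return [[min(1.0, max(0.0, p + delta)) for p in row] for row in img]

img = [[0.5] * 4 for _ in range(4)]          # a flat gray "image"
shifted = brightness_shift(img, random.Random(0))
```

Note that this transform changes only pixel values, not geometry, which matters for the VOT setting discussed next.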
Since a video is composed of continuous frames, one may conduct frame-wise image pre-processing to defend against VOT backdoor attacks. Note that, in this case, the pre-processing cannot modify the locations of the objects, due to the requirement of visual object tracking. Model Reconstruction based Defense. Model reconstruction (e.g., tuning and pruning) has been demonstrated to be effective in erasing hidden backdoors. For example, (Liu et al., 2017; Yao et al., 2019; Zeng et al., 2022) showed that using a few benign samples to fine-tune or retrain the backdoored model for a few iterations can effectively remove different types of backdoors from attacked DNNs; (Liu et al., 2018; Wu & Wang, 2021) showed that defenders can remove hidden backdoors via pruning, based on the understanding that hidden backdoors are mainly encoded in neurons that are dormant when predicting benign samples. 2.3 SIAMESE NETWORK BASED VISUAL OBJECT TRACKING. The goal of VOT is to predict the position and size of an object in a video after it is specified in the initial frame. Currently, siamese network based trackers (Bertinetto et al., 2016; Li et al., 2019; Xu et al., 2020) have attracted the most attention, owing to their simplicity and effectiveness (Marvasti-Zadeh et al., 2021). In terms of model structure, siamese network based trackers consist of two identical branches, with one branch learning the feature representation of the template and the other learning that of the search region. Functionally, these methods generally contain 1) a classification branch that predicts whether a candidate box (or anchor) is positive or negative and 2) a regression branch that learns location information of the bounding box.
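The two-branch structure can be illustrated end to end in a few lines: treat raw pixels as the "features" of the template and the search region, then slide the template over the search region to produce a score map over candidate positions. Real trackers compute these features with a shared deep backbone; this toy uses identity features and is purely schematic.

```python
# Schematic siamese head: correlate (slide) the template's feature map over
# the search region's feature map to produce a score map whose peak marks the
# most likely object location. Features here are raw pixels for simplicity.

def score_map(template, search):
    th, tw = len(template), len(template[0])
    sh, sw = len(search), len(search[0])
    return [[sum(template[a][b] * search[i + a][j + b]
                 for a in range(th) for b in range(tw))
             for j in range(sw - tw + 1)]
            for i in range(sh - th + 1)]

search = [[0.0] * 8 for _ in range(8)]
for i in (3, 4):
    for j in (4, 5):
        search[i][j] = 1.0                  # "object" at rows 3-4, cols 4-5
template = [[1.0, 1.0], [1.0, 1.0]]         # template matching the object

scores = score_map(template, search)
best = max((scores[i][j], (i, j)) for i in range(7) for j in range(7))[1]
```

The peak of the score map recovers the object's top-left position, which is the quantity the classification branch scores and the regression branch refines.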
In the tracking phase, the template and search region generated based on the results of the previous frame are fed into the siamese network to produce a score map, which represents the confidence scores of the candidate boxes. Since VOT is fundamentally different from image classification, existing backdoor attacks developed for image classification are infeasible for attacking siamese network based trackers. 3 FEW-SHOT BACKDOOR ATTACK (FSBA). Threat Model. Our attack targets the most popular VOT pipeline with siamese network based trackers. We adopt a threat model commonly used in existing works, where the adversary has full control over the training process, including the training data and the training algorithm. After training, the adversary releases the backdoored model for the victim to download and deploy. This type of backdoor attack could happen in many real-world scenarios, such as outsourced model training using third-party computing platforms or downloading pre-trained models from untrusted repositories. Problem Formulation. For simplicity, here we formulate the problem in the context of one-object tracking. The formulation can be easily extended to the multi-object case. Specifically, let V = {I_i}_{i=1}^n denote a video of n continuous frames and B = {b_i}_{i=1}^n denote the ground-truth bounding boxes of the target object in each frame. Given the initial state of the target object in the initial frame, b_1, the tracker predicts its position B_pred in the subsequent frames. Let G(·; t) be the frame-wise poisoned video generator, where t is the adversary-specified trigger pattern. Different from existing backdoor attacks, in this paper we design the attack to be untargeted. Specifically, the adversary intends to train an attacked version f(·; θ̂) of the benign tracker f(·; θ) by tampering with the training process. The adversary has two main goals, as follows: Definition 1.
A backdoor attack on visual object tracking is called promising (under the measurement of loss L with budgets α and β) if and only if it satisfies two main properties: • α-Effectiveness: the performance of the attacked tracker degrades sharply when the trigger appears, i.e., E_V[L(f(V; θ̂), B)] + α ≤ E_V[L(f(G(V; t); θ̂), B)]. • β-Stealthiness: the attacked tracker behaves normally in the absence of the trigger, i.e., E_V[L(f(V; θ̂), B)] ≤ β. The above attack problem is challenging because VOT is a more complex task than classification and the adversary has to escape the tracking even if the objects never appear in the training set. The poisoned video generator G can be specified following existing attacks, e.g., G(V; t) = {Î_i}_{i=1}^n where Î_i = (1 − λ) ⊗ I_i + λ ⊗ t. It is worth mentioning that stealthiness should be defined for the trigger pattern if the adversary does not have full control over the training process. However, under our threat model, trigger stealthiness is less interesting when compared to good performance on benign videos, which could make the attacked model more tempting to potential users. | This paper investigates a few-shot backdoor attack for single object visual tracking. It is achieved by alternately optimizing a feature loss between benign and poisoned frames and the standard tracking loss. The authors empirically show that the presented attack is effective in both digital and physical-world scenarios. | SP:33d75eed78a3d1cf006cbc3042d2ee88c110e2d7 |
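Definition 1 reduces to two inequalities, so whether an attack is "promising" can be checked directly from estimated expected losses. The function and the numbers below are illustrative stand-ins for Monte-Carlo estimates of E_V[L(f(V; θ̂), B)] on benign and trigger-stamped videos.

```python
# Direct transcription of Definition 1: given loss estimates on benign and
# trigger-stamped videos, check alpha-effectiveness and beta-stealthiness.
# All numeric values here are made up for illustration.

def is_promising(benign_loss, poisoned_loss, alpha, beta):
    effective = benign_loss + alpha <= poisoned_loss   # alpha-effectiveness
    stealthy = benign_loss <= beta                     # beta-stealthiness
    return effective and stealthy

print(is_promising(0.10, 0.90, alpha=0.5, beta=0.2))   # True
```

A large α demands a sharp degradation under the trigger, while a small β bounds the benign-video degradation; an attack must clear both budgets at once.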
| 1. The paper introduces a variant of backdoor attacks against visual object tracking (VOT) networks. 2. A baseline attack (BOBA) consisting of a standard classifier backdoor against the classification head of a Siamese network is proposed. 3. An improved attack (FSBA) based on maximizing a particular loss in feature space is proposed, with better empirical results than the baseline attack. 4. The attack can succeed even when a small fraction of the video's total frames (e.g., 5%) contain the trigger. | SP:33d75eed78a3d1cf006cbc3042d2ee88c110e2d7 |
Few-Shot Backdoor Attacks on Visual Object Tracking | 1 INTRODUCTION . Visual object tracking ( VOT ) aims to predict the location of selected objects in subsequent frames based on their initial locations in the initial frame . It has supported many impactful and missioncritical applications such as intelligent surveillance and self-driving systems . The security of VOT models to potential adversaries is thus of great importance and worth careful investigations . Currently , most of the advanced VOT trackers ( Li et al. , 2019 ; Lu et al. , 2020 ; Wang et al. , 2021b ) are based on deep neural networks ( DNNs ) , siamese networks in particular . Training these models often requires large-scale datasets and a large amount of computational resources . As such , third-party resources such as datasets , backbones , and pre-trained models are frequently exploited or directly applied to save training costs . While these external resources bring certain convenience , they also introduce opacity into the training process . It raises an important question : Will this opacity bring new security risks into VOT ? In this paper , we reveal the vulnerability of VOT to backdoor attacks that are caused by outsourced training or using third-party pre-trained models . Backdoor attacks are a type of training-time threat to deep learning that implant hidden backdoors into a target model by injecting a trigger pattern ( e.g. , a local patch ) into a small subset of training samples ( Li et al. , 2020 ) . Existing backdoor attacks are mostly designed for classification tasks and are targeted attacks tied to a specific label ( known as the target label ) ( Gu et al. , 2019 ; Cheng et al. , 2021 ; Nguyen & Tran , 2021 ) . These attacks are not fully transferable to VOT tasks due to the fundamental difference between classification and object tracking . Different from attacking a classifier , making an object escape the tracking is a more threatening objective for VOT . 
As such, in this paper, we explore specialized backdoor attacks for VOT, which are untargeted by nature: the backdoored model behaves normally on benign samples yet fails to track the target object whenever the trigger appears. ∗The first two authors contributed equally to this work. Correspondence to: Xingjun Ma (danxjma@gmail.com) and Shu-Tao Xia (xiast@sz.tsinghua.edu.cn). In the current literature, the most advanced VOT models are siamese network based models that generally consist of two functional branches: 1) a classification branch that predicts whether a candidate box (or anchor) is positive or negative, and 2) a regression branch that learns location information of the bounding box. Arguably, the most straightforward strategy is to apply existing label-targeted attacks to the classification branch. Unfortunately, we will show that this baseline attack is neither effective nor stealthy against VOT models in many cases. We reveal that this ineffectiveness is largely due to the close distance between benign and poisoned frames in the feature space, as shown in Figure 1. Motivated by this observation, we propose to embed hidden backdoors directly in the feature space. Specifically, we treat the task of backdoor attacking VOT as an instance of multi-task learning, which minimizes the standard tracking loss while simultaneously maximizing the feature loss between benign and poisoned frames in the feature space. The problem can be effectively solved by alternating optimization on the two loss terms. In particular, the optimization of the feature loss encourages few-shot effectiveness, which allows an effective attack even when the trigger appears in only a few frames. Besides, we only randomly select a few training frames for poisoning. This strategy not only reduces the computational cost but also avoids significant degradation of the model's tracking performance on benign videos.
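The multi-task view described here (minimize the tracking loss, maximize the feature loss between benign and poisoned frames, alternating between the two) can be illustrated on a toy linear "tracker". This is only a conceptual sketch under strong simplifying assumptions: a linear feature extractor and numerical gradients instead of autograd; the real FSBA operates on siamese network features.

```python
import numpy as np

def fsba_objectives(theta, x_benign, x_poisoned, y):
    """Toy versions of FSBA's two objectives for a linear model f(x) = theta @ x.

    track_loss: standard loss on benign inputs, to be *minimized*.
    feat_loss:  distance between benign and poisoned "features" (here simply
                the model outputs), to be *maximized* by the attacker.
    """
    track_loss = np.mean((theta @ x_benign - y) ** 2)
    feat_loss = np.mean((theta @ x_benign - theta @ x_poisoned) ** 2)
    return track_loss, feat_loss

def alternating_round(theta, x_b, x_p, y, lr=0.01, eps=1e-5):
    """One alternating round: descend the tracking loss, then ascend the
    feature loss. Gradients are estimated by central differences for brevity."""
    for sign, pick in ((-1, 0), (+1, 1)):  # descend track_loss, ascend feat_loss
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            hi, lo = theta.copy(), theta.copy()
            hi[i] += eps
            lo[i] -= eps
            grad[i] = (fsba_objectives(hi, x_b, x_p, y)[pick]
                       - fsba_objectives(lo, x_b, x_p, y)[pick]) / (2 * eps)
        theta = theta + sign * lr * grad
    return theta

# Poisoned inputs are a shifted copy of the benign ones (a stand-in for
# trigger-stamped frames); the attacker pushes their features apart.
x_b = np.arange(12, dtype=float).reshape(3, 4) / 10
x_p = x_b + 1.0
y = np.ones(4)
theta = alternating_round(np.zeros(3), x_b, x_p, y)
```

Starting from θ = 0 the feature loss is exactly zero; after one alternating round it becomes strictly positive, which is the direction the attack optimizes.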
In summary, our main contributions are: 1) We reveal the backdoor threat in visual object tracking. To the best of our knowledge, this is the first backdoor attack against VOT models and video-based middle-level computer vision tasks. 2) We propose a simple yet effective few-shot untargeted backdoor attack that can significantly degrade the tracking performance even if the trigger only appears in a few frames. 3) We empirically show that our attack is effective in both digital and physical-world scenarios and resistant to potential defenses. 2 RELATED WORK. 2.1 BACKDOOR ATTACK. Backdoor attack is an emerging yet severe threat to DNNs. A backdoored model behaves normally on benign samples while constantly predicting the target label whenever the trigger appears. Currently, most existing backdoor attacks (Gu et al., 2019; Zeng et al., 2021a; Li et al., 2021c) are designed for image classification tasks and targeted towards an adversary-specified label. Specifically, a backdoor attack can be characterized by its trigger pattern t, target label y_t, poisoned image generator G(·), and poisoning rate γ. Taking BadNets (Gu et al., 2019) for example, given a benign training set D = {(x_i, y_i)}_{i=1}^N, the adversary randomly selects γ% of the samples (i.e., D_s) from D to generate their poisoned version D_p = {(x′, y_t) | x′ = G(x; t), (x, y) ∈ D_s}, where G(x; t) = (1 − λ) ⊗ x + λ ⊗ t with λ ∈ {0, 1}^{C×W×H} and ⊗ indicates the element-wise product. It then trains a backdoored model (i.e., f_θ) on the poisoned subset D_p and the remaining benign samples D_b ≜ D \ D_s by solving the optimization problem min_θ Σ_{(x,y)∈D_p∪D_b} L(f_θ(x), y), where θ denotes the model parameters and L(·) is the loss function. Currently, there are also a few backdoor attacks developed outside the context of image classification (Wang et al., 2021a; Zhai et al., 2021; Xiang et al., 2021).
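The BadNets recipe just described (select γ% of the training set, blend in the trigger via G(x; t) = (1 − λ) ⊗ x + λ ⊗ t, and relabel to y_t) can be sketched in a few lines of NumPy. The function below is an illustrative reconstruction; the data layout and names are assumptions.

```python
import numpy as np

def badnets_poison(images, trigger, mask, target_label, rate, rng):
    """Build the poisoned subset D_p from a benign set.

    images: (N, H, W) array; trigger: (H, W) pattern t; mask: binary (H, W)
    array playing the role of lambda. A `rate` fraction of indices is chosen,
    stamped via G(x; t) = (1 - mask) * x + mask * trigger, and relabeled to
    the target label y_t. Everything else stays in the benign subset D_b.
    """
    n = len(images)
    k = max(1, int(rate * n))
    idx = rng.choice(n, size=k, replace=False)
    poisoned_x = (1.0 - mask) * images[idx] + mask * trigger
    poisoned_y = np.full(k, target_label)
    return poisoned_x, poisoned_y, idx

rng = np.random.default_rng(0)
images = np.zeros((10, 8, 8))
trigger = np.ones((8, 8))
mask = np.zeros((8, 8))
mask[-2:, -2:] = 1.0  # a small bottom-right patch, as in BadNets
xp, yp, idx = badnets_poison(images, trigger, mask, target_label=1, rate=0.2, rng=rng)
```

A backdoored classifier is then trained on this poisoned subset together with the remaining benign samples, exactly as in the min_θ Σ L(f_θ(x), y) objective above.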
To the best of our knowledge , the backdoor attack proposed by Zhao et al . ( 2020b ) is the only existing backdoor attack on video models . However , it is a label-targeted attack designed for video classification tasks and can not be directly applied to VOT models . Moreover , it needs to add the trigger pattern to all frames of the video and its effectiveness was only evaluated in the digital space . In particular , backdoor attacks are different from adversarial attacks ( Madry et al. , 2018 ; Croce & Hein , 2020 ; Andriushchenko et al. , 2020 ) . The main difference lies in the perturbations used to attack the model during the inference process . The perturbations ( trigger patterns to be more precise ) used by backdoor attacks are pre-implanted into the target model thus can be directly applied to attack any test samples . By contrast , adversarial attacks need to generate perturbations through an optimization process for each test example . 2.2 BACKDOOR DEFENSE . Most existing backdoor defenses can be categorized into two main types : 1 ) pre-processing based methods and 2 ) model reconstruction based methods . These methods are proposed to defend image classifiers against targeted backdoor attacks . Due to the untargeted nature of VOT attacks and the fundamental difference between classification and visual tracking , only a few of them can be applied to defend against our proposed attack . Here , we briefly review these potential defenses . Pre-processing based Defense . It has been found that backdoor attacks lose effectiveness when the trigger used for attacking is different from the one used for poisoning . This has motivated the use of image pre-processing techniques ( e.g. , scaling and color-shifting ) to alleviate backdoor threats before feeding a test image into the model for inference ( Liu et al. , 2017 ; Zeng et al. , 2021b ; Li et al. , 2021b ) . 
Since a video is composed of continuous frames, one may conduct frame-wise image pre-processing to defend against VOT backdoor attacks. Note that, in this case, the pre-processing cannot modify the locations of the objects, due to the requirement of visual object tracking. Model Reconstruction based Defense. Model reconstruction (e.g., tuning and pruning) has been demonstrated to be effective in erasing hidden backdoors. For example, (Liu et al., 2017; Yao et al., 2019; Zeng et al., 2022) showed that using a few benign samples to fine-tune or retrain the backdoored model for a few iterations can effectively remove different types of backdoors from attacked DNNs; (Liu et al., 2018; Wu & Wang, 2021) showed that defenders can remove hidden backdoors via pruning, based on the understanding that hidden backdoors are mainly encoded in the neurons that are dormant when predicting benign samples. 2.3 SIAMESE NETWORK BASED VISUAL OBJECT TRACKING. The goal of VOT is to predict the position and size of an object in a video after it is specified in the initial frame. Currently, siamese network based trackers (Bertinetto et al., 2016; Li et al., 2019; Xu et al., 2020) have attracted the most attention, owing to their simplicity and effectiveness (Marvasti-Zadeh et al., 2021). From the aspect of model structure, siamese network based trackers consist of two identical branches, with one branch learning the feature representation of the template while the other learns that of the search region. Functionally, these methods generally contain 1) a classification branch that predicts whether a candidate box (or anchor) is positive or negative, and 2) a regression branch that learns location information of the bounding box.
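A frame-wise pre-processing defense of the kind described here can be sketched as follows. This toy version uses a nearest-neighbor down/up-scale per frame, so a pixel-exact trigger no longer matches the pattern implanted at poisoning time while object locations are preserved; it is an illustrative assumption, not a defense evaluated in the paper.

```python
import numpy as np

def framewise_preprocess(frames, scale=0.5):
    """Toy frame-wise pre-processing: nearest-neighbor down- then up-scaling.

    Each frame of shape (H, W, C) is resized independently, which perturbs
    pixel-exact trigger patterns while leaving object locations in the frame
    untouched (a requirement of the tracking task).
    """
    n, h, w = frames.shape[:3]
    hs, ws = max(1, int(h * scale)), max(1, int(w * scale))
    down_r = np.arange(hs) * h // hs
    down_c = np.arange(ws) * w // ws
    up_r = np.arange(h) * hs // h
    up_c = np.arange(w) * ws // w
    out = np.empty_like(frames)
    for i, f in enumerate(frames):
        small = f[np.ix_(down_r, down_c)]   # downscale
        out[i] = small[np.ix_(up_r, up_c)]  # upscale back to (H, W, C)
    return out

frames = np.arange(32, dtype=float).reshape(2, 4, 4, 1)
cleaned = framewise_preprocess(frames, scale=0.5)
```

In practice, a defender could apply such a transform to every frame before it reaches the tracker, accepting a small loss in benign tracking accuracy in exchange for disrupting pixel-exact triggers.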
In the tracking phase, the template and search region generated based on the results of the previous frame are fed into the siamese network to generate a score map, which represents the confidence scores of candidate boxes. Since VOT is fundamentally different from image classification, existing backdoor attacks developed for image classification are infeasible for attacking siamese network based trackers. 3 FEW-SHOT BACKDOOR ATTACK (FSBA). Threat Model. Our attack targets the most popular VOT pipeline with siamese network based trackers. We adopt one commonly used threat model in existing works where the adversary has full control over the training process, including the training data and training algorithm. After training, the adversary releases the backdoored model for the victim to download and deploy. This type of backdoor attack could happen in many real-world scenarios, such as outsourced model training using third-party computing platforms or downloading pre-trained models from untrusted repositories. Problem Formulation. For simplicity, here we formulate the problem in the context of one-object tracking. The formulation can be easily extended to the multi-object case. Specifically, let V = {I_i}_{i=1}^n denote a video of n continuous frames and B = {b_i}_{i=1}^n denote the ground-truth bounding boxes of the target object in each frame. Given the initial state b_1 of the target object in the initial frame, the tracker predicts its position B_pred in the subsequent frames. Let G(·; t) be the frame-wise poisoned video generator, where t is the adversary-specified trigger pattern. Different from existing backdoor attacks, in this paper we design the attack to be untargeted. Specifically, the adversary intends to train an attacked version f(·; θ̂) of the benign tracker f(·; θ) by tampering with the training process. The adversary has two main goals, as follows: Definition 1.
A backdoor attack on visual object tracking is called promising (under the measurement of loss L with budgets α and β) if and only if it satisfies two main properties: • α-Effectiveness: the performance of the attacked tracker degrades sharply when the trigger appears, i.e., E_V{L(f(V; θ̂), B)} + α ≤ E_V{L(f(G(V; t); θ̂), B)}. • β-Stealthiness: the attacked tracker behaves normally in the absence of the trigger, i.e., E_V{L(f(V; θ̂), B)} ≤ β. The above attack problem is challenging because VOT is a more complex task than classification and the adversary has to make objects escape the tracking even if those objects never appear in the training set. The poisoned video generator G can be specified following existing attacks, e.g., G(V; t) = {Î_i}_{i=1}^n where Î_i = (1 − λ) ⊗ I_i + λ ⊗ t. It is worth mentioning that stealthiness should also be defined for the trigger pattern if the adversary does not have full control over the training process. However, under our threat model, trigger stealthiness is less important than good performance on benign videos, which could make the attacked model more tempting to potential users. | This paper proposes a few-shot (untargeted) backdoor attack (FSBA) against siamese network-based visual object tracking. Contributions can be summarized as follows: First, this paper treats the attack task as an instance of multi-task learning and can be regarded as the first backdoor attack against VOT. Besides, a simple yet effective few-shot untargeted backdoor attack is proposed and achieves significant effectiveness in both digital and physical-world scenarios. | SP:33d75eed78a3d1cf006cbc3042d2ee88c110e2d7
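Definition 1 can be read as a concrete audit procedure: estimate the two expected losses empirically and check both inequalities. A minimal sketch follows; the loss samples are placeholders, and in practice they would come from running the attacked tracker on held-out benign and trigger-stamped videos.

```python
def is_promising_attack(benign_losses, poisoned_losses, alpha, beta):
    """Check Definition 1 on empirical loss samples.

    alpha-effectiveness: E[L(f(V))] + alpha <= E[L(f(G(V; t)))]
    beta-stealthiness:   E[L(f(V))] <= beta
    """
    e_benign = sum(benign_losses) / len(benign_losses)
    e_poisoned = sum(poisoned_losses) / len(poisoned_losses)
    alpha_effective = e_benign + alpha <= e_poisoned
    beta_stealthy = e_benign <= beta
    return alpha_effective and beta_stealthy

# This attack is effective (poisoned loss is much larger) and stealthy
# (benign loss stays small), so it qualifies as promising.
ok = is_promising_attack([0.10, 0.12], [0.80, 0.90], alpha=0.5, beta=0.2)
```

Tightening β penalizes any degradation on benign videos, while raising α demands a larger gap between benign and triggered behavior; both budgets are design choices of the evaluator.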
Global Magnitude Pruning With Minimum Threshold Is All We Need | 1 INTRODUCTION. Neural network pruning remains an important area from both a practical perspective (deployment in real-world applications) and an academic perspective (understanding how to create an efficient architecture). It is a long-standing area of exploration (LeCun et al., 1990a; Hassibi & Stork, 1993a), and was reinvigorated by Han et al. (2015a). Since then, much work has been done on trying to find different ways of pruning neural networks, such as magnitude-based, gradient or second-order based, and regularization-based methods, amongst many others. In this work, we shed light on an often overlooked method that has been seen as a mediocre baseline by the community — global magnitude pruning (GP) — and show that it can achieve SOTA pruning performance. We demonstrate that GP by itself is a strong pruning algorithm and outperforms SOTA pruning algorithms on benchmarks like ResNet-50 and MobileNet-V1 on ImageNet. We also investigate the pruning behavior of GP and find that a simple addition to GP can raise its performance even more. In contrast to the idea of sparsifying each layer of a neural network to the maximum possible level, we find that preserving a certain number of weights in each layer actually leads to a better pruning scheme, achieving higher accuracy at the same sparsity level. We call this the Minimum Threshold (MT). When combined with GP, this technique enhances the pruning performance in most cases. We conduct a range of experiments to showcase the above and also study detailed ablations to isolate the effects of GP and MT. We obtain SOTA accuracies on all four sparsity targets on ResNet-50 on ImageNet. We obtain SOTA accuracies on the other architectures and datasets tested as well. Finally, GP with MT (GPMT) is very simple conceptually and very easy to implement.
It is a one-shot pruning method in which the weights to be pruned are decided in one-go without needing any iterative or gradual phases . 2 RELATED WORK . Compression of neural networks has become an important research area due to the rapid increase in size of neural networks ( Brown et al. , 2020 ) , the need for fast inference on edge devices , e.g. , a quadrotor ’ s onboard computer ( Camci et al. , 2020 ) , and concerns about the carbon footprint of training large neural networks ( Strubell et al. , 2019 ) . Over the years , several compression techniques have emerged in the literature ( Cheng et al. , 2017 ) , such as quantisation , factorisation , attention , knowledge distillation , architecture search and pruning ( Almahairi et al. , 2016 ; Ashok et al. , 2017 ; Iandola et al. , 2016 ; Pham et al. , 2018b ) . Quantisation techniques which restrict the bitwidth of parameters ( Rastegari et al. , 2016 ; Courbariaux et al. , 2016 ) and tensor factorisation and decomposition which aim to break large kernels into smaller components ( Mathieu et al. , 2013 ; Gong et al. , 2014 ; Lebedev et al. , 2014 ; Masana et al. , 2017 ) are popular methods . However , they need to be optimised for specific architectures . Attention networks ( Almahairi et al. , 2016 ) have two separate networks to focus on only a small patch of the input image . Training smaller student networks in a process called knowledge distillation ( Ashok et al. , 2017 ) has also proved effective , although it can potentially require a large training budget . Architecture search techniques , such as new kernel design ( Iandola et al. , 2016 ) or whole architecture design ( Pham et al. , 2018a ; Tan et al. , 2019 ) have also become popular . Nevertheless , the large search space size requires ample computational resources to do the architecture search . Different from all these approaches , we focus on pruning deep neural networks in this work . 
As compared to other categories, pruning is more general in nature and has shown strong performance (Gale et al., 2019). Many pruning techniques have been developed over the years, which use first- or second-order derivatives (LeCun et al., 1990b; Hassibi & Stork, 1993b), gradient based methods (Lee et al., 2018; Wang et al., 2020), sensitivity to or feedback from some objective function (Molchanov et al., 2017; Liu et al., 2020; Lin et al., 2020; de Jorge et al., 2021), distance or similarity measures (Srinivas & Babu, 2015), regularization-based techniques (Kusupati et al., 2020; Savarese et al., 2020; Wang et al., 2021), and magnitude-based criteria (Ström, 1997; Zhu & Gupta, 2018; Park et al., 2020; Evci et al., 2020; Lee et al., 2021). Han et al. (2015b) discovered a key trick to iteratively prune and retrain the network, thereby preserving high accuracy. Gale et al. (2019) adopt simple, magnitude-based pruning but employ gradual pruning that requires a high computational budget and preset sparsification schedules. Runtime Neural Pruning (Lin et al., 2017) attempts to use reinforcement learning (RL) for compression by training an RL agent to select smaller sub-networks during inference. He et al. (2018) design the first approach using RL for pruning. However, RL training approaches typically require additional RL training budgets (Gupta et al., 2020) or iterative pruning to achieve good accuracy (He et al., 2018). In this work, we focus on a simple, effective, yet quite overlooked pruning method — global magnitude pruning (GP). Although first proposed in the 1990s (Hoefler et al., 2021), it has largely been ignored in recent years, generally being relegated to the position of a baseline for comparison (Zhu & Gupta, 2018; Blalock et al., 2020; Lee et al., 2021) rather than a strong pruning technique.
A few recent works use it as one in a possible pool of pruning techniques (See et al., 2016; Frankle & Carbin, 2018; Gohil et al., 2020) but never study it in detail or adopt it as the main pruning method. We delve deep into GP and showcase that it can achieve SOTA results with the addition of the MT technique. We present SOTA results on various architectures and datasets including ResNet-50 and MobileNet-V1 on ImageNet, and include comprehensive ablation studies and insights on the workings of GP with MT (GPMT). An advantage of GPMT is that it is a very simple and reliable approach. There are a few levels of simplicity and robustness to GPMT. Firstly, it is conceptually very simple and easy to implement. It does not require any complex pruning frameworks like RL (He et al., 2018) or sparsification schedules (Zhu & Gupta, 2018). Secondly, it is one-shot and does not require any iterative procedure. Thirdly, it is easily generalizable across architectures and datasets, as shown in the experiments. Lastly, it is data-independent and does not access the dataset for determining the pruning mask. 3 METHOD. We conduct unstructured weight pruning using magnitude pruning. Below we describe the key components of our algorithm in more detail. 3.1 GLOBAL MAGNITUDE PRUNING (GP). GP is a magnitude-based pruning approach whereby weights bigger than a given threshold are kept and weights smaller than the threshold are pruned. Formally, for a given threshold t and each individual weight w in any layer, the new weight w_new is defined as follows: w_new = 0 if |w| < t, and w_new = w otherwise. (1) In contrast to layer-wise pruning, the threshold is not set on a per-layer basis; rather, a single threshold is set for the entire network. In this aspect, GP is much more efficient than layer-wise pruning because the threshold does not need to be searched for every layer.
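Equation (1) with a single network-wide threshold can be implemented in a few lines. In the sketch below the threshold t is taken as the magnitude quantile that meets a target sparsity, which is a common way to pick it; this is an illustration consistent with the description, not the authors' code.

```python
import numpy as np

def global_magnitude_prune(layers, sparsity):
    """Equation (1) with one network-wide threshold t.

    layers:   list of weight arrays, one per layer.
    sparsity: fraction of all weights to zero out, e.g. 0.95.
    t is the magnitude quantile hitting the target sparsity, so the achieved
    sparsity is only approximate when magnitudes tie with the threshold.
    """
    all_mags = np.concatenate([np.abs(w).ravel() for w in layers])
    t = np.quantile(all_mags, sparsity)  # single threshold for every layer
    return [np.where(np.abs(w) < t, 0.0, w) for w in layers]

# Two toy layers, 7 weights in total; at a 50% target sparsity the global
# threshold lands at |w| = 0.2, removing the three smallest-magnitude weights.
layers = [np.array([0.5, -0.01, 0.2]), np.array([[0.03, -0.9], [0.0, 0.4]])]
pruned = global_magnitude_prune(layers, sparsity=0.5)
```

Layer-wise and uniform pruning would instead compute a separate threshold (or apply the same percentage) per layer; here one quantile over the concatenated magnitudes serves the whole network, which is what removes the per-layer threshold search.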
On the other hand, uniform pruning refers to setting the same sparsity target for each layer. Thus, every layer is pruned by the same percentage. 3.2 MINIMUM THRESHOLD (MT). The Minimum Threshold (MT) refers to a fixed number of weights that are preserved in every layer of the neural network post pruning. The MT is a scalar value that is fixed before the start of the pruning cycle. The weights in a layer are sorted by their magnitude and the top MT number of weights are preserved. For instance, an MT of 500 implies that the 500 largest weights in every layer need to be preserved post pruning. If a layer is smaller than the MT number, then all the weights of that layer must be preserved. Therefore, the MT is a very simple concept to apply and also computationally inexpensive. This corresponds to: min ‖W_l‖_0 = σ if m ≥ σ, and m otherwise, (2) where W_l ∈ R^m denotes the weight vector for layer l, σ is the MT value in terms of the number of weights, and min ‖W_l‖_0 indicates the minimum number of non-zero elements in W_l. We explain in the section below how the actual pruning using MT is implemented. 3.3 THE PRUNING WORKFLOW. The pruning pipeline for GP and GP with MT (GPMT) is straightforward. It consists of pruning the original model followed by fine-tuning for a few epochs. It is one-shot; therefore, the pruning & fine-tuning cycle does not need to be repeated multiple times. In terms of the pruning procedure itself, GP consists of doing one pass over the network and pruning the weights according to their magnitude to reach the specified sparsity target. GPMT consists of two steps. Firstly, the model is pruned using GP. Secondly, the pruned model is evaluated to check whether the MT condition is met by all layers. If the condition is not met by a layer, then its sparsity ratio is reduced to meet the MT.
The slack arising from the decrease in sparsity is then redistributed amongst the other layers which do not violate the MT condition . The redistribution is done in proportion to their existing sparsities so as to preserve their relative sparsities . This finishes the pruning cycle and the network is then fine-tuned . Fig . 1 explains this procedure , while Algorithm 1 gives the pseudocode . 4 EXPERIMENTS . Below we describe experiments related to ablations on global magnitude pruning ( GP ) and Minimum Threshold ( MT ) , comparison with state-of-the-art algorithms and experiments on non-vision domains . We report hyper-parameters and training related information for all the experiments in the appendix ( section A.4 ) . 4.1 ISOLATING AND UNDERSTANDING IMPACT OF GP AND MT OVER DIFFERENT ARCHITECTURES . We conduct detailed ablations to isolate and measure the impact of standalone GP and GP with MT ( GPMT ) as compared to a uniform pruning baseline . We ablate on multiple architectures and sparsity targets . In addition , we report results averaged over multiple runs where each run uses a different pre-trained model to provide more robustness . We first prune a WRN-22-8 model on CIFAR-10 at 95 % sparsity . We then fine-tune the model for a few epochs and report the final accuracy . We experiment with different pruning schemes , i.e. , uniform pruning , GP and GPMT ( see Section 3 for details ) . We find that GP outperforms uniform pruning . Furthermore , adding MT improves the performance even more , see Table 1 . Next , we do an experiment on the highly efficient MobileNet-V2 architecture to see if the above conclusion holds on it too . We find that indeed GP beats uniform pruning in this situation as well , and adding MT improves performance even further ( Table 2 ) . This shows that GP by itself is superior to uniform pruning and adding MT aids GP even more . 
| This paper revisits Global Pruning (GP) and makes a case to consider it seriously as part of the pruning literature. The authors also present an addition to GP, that claims, to further increase accuracy and reliability called Minimum Threshold (MT). MT is the constraint put on every layer of the neural network to ensure they are not pruned beyond a certain limit which might result in catastrophic accuracy drops. The paper presents experiments on CIFAR-10 with a few CNN architectures along with current pruning baselines (layer-wise mostly. They also show results on ImageNet using ResNet50 and MovileNetV1. Following these experiments, the authors claim that GP or GPMT are SOTA for pruning despite their simplicity. The paper also includes some discussion on how output patterns of each layer change with different pruning schemes and a note about how to set the optimal MT values. While I agree with the sentiment of the paper, I do not think the paper is presenting anything new (even from a benchmarking perspective) but rather is missing the point about what makes GP vs layer-wise pruning a worthy trade-off often. I will list my concerns in the main review. | SP:0d287a068663686cbed985016f6332304340c33d |
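The GPMT adjustment described in Section 3.3 above (restore every violating layer to at least σ kept weights, then redistribute the freed-up slack over the remaining layers in proportion to their existing prune counts so the global sparsity is preserved) can be sketched on per-layer counts. This is a reconstruction from the prose, not the paper's Algorithm 1, and it ignores corner cases such as redistribution pushing another layer below the MT.

```python
import numpy as np

def gpmt_adjust(layer_sizes, prune_counts, mt):
    """Enforce the Minimum Threshold on per-layer prune counts.

    Any layer that would keep fewer than `mt` weights has its prune count
    reduced so it keeps exactly min(mt, layer size) weights; the freed-up
    slack is then redistributed over the non-violating layers in proportion
    to their existing prune counts, preserving the global sparsity.
    """
    prune = np.array(prune_counts, dtype=float)
    sizes = np.array(layer_sizes, dtype=float)
    kept = sizes - prune
    need = np.maximum(0.0, np.minimum(mt, sizes) - kept)  # weights to give back
    slack = need.sum()
    ok = need == 0.0                  # layers that can absorb the slack
    prune[~ok] -= need[~ok]           # restore the MT in violating layers
    if slack > 0 and prune[ok].sum() > 0:
        prune[ok] += slack * prune[ok] / prune[ok].sum()  # proportional
    return prune

# Three layers; the 100-weight layer would keep only 5 weights after GP,
# violating an MT of 20, so 15 weights of slack move to the other layers.
sizes = [100, 1000, 2000]
before = [95, 900, 1800]
after = gpmt_adjust(sizes, before, mt=20)
```

In the example the total number of pruned weights is unchanged (2795 before and after), so the network-level sparsity is preserved while the smallest layer now keeps its 20 largest-magnitude weights.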
Global Magnitude Pruning With Minimum Threshold Is All We Need | 1 INTRODUCTION. Neural network pruning remains an important area from both a practical perspective (deployment in real-world applications) and an academic perspective (understanding how to create an efficient architecture). It is a long-standing area of exploration (LeCun et al., 1990a; Hassibi & Stork, 1993a), and was reinvigorated by Han et al. (2015a). Since then, much work has been done on trying to find different ways of pruning neural networks, such as magnitude-based, gradient or second-order based, and regularization-based methods, amongst many others. In this work, we shed light on an often overlooked method that has been seen as a mediocre baseline by the community — global magnitude pruning (GP) — and show that it can achieve SOTA pruning performance. We demonstrate that GP by itself is a strong pruning algorithm and outperforms SOTA pruning algorithms on benchmarks like ResNet-50 and MobileNet-V1 on ImageNet. We also investigate the pruning behavior of GP and find that a simple addition to GP can raise its performance even more. In contrast to the idea of sparsifying each layer of a neural network to the maximum possible level, we find that preserving a certain number of weights in each layer actually leads to a better pruning scheme, achieving higher accuracy at the same sparsity level. We call this the Minimum Threshold (MT). When combined with GP, this technique enhances the pruning performance in most cases. We conduct a range of experiments to showcase the above and also study detailed ablations to isolate the effects of GP and MT. We obtain SOTA accuracies on all four sparsity targets on ResNet-50 on ImageNet. We obtain SOTA accuracies on the other architectures and datasets tested as well. Finally, GP with MT (GPMT) is very simple conceptually and very easy to implement.
It is a one-shot pruning method in which the weights to be pruned are decided in one-go without needing any iterative or gradual phases . 2 RELATED WORK . Compression of neural networks has become an important research area due to the rapid increase in size of neural networks ( Brown et al. , 2020 ) , the need for fast inference on edge devices , e.g. , a quadrotor ’ s onboard computer ( Camci et al. , 2020 ) , and concerns about the carbon footprint of training large neural networks ( Strubell et al. , 2019 ) . Over the years , several compression techniques have emerged in the literature ( Cheng et al. , 2017 ) , such as quantisation , factorisation , attention , knowledge distillation , architecture search and pruning ( Almahairi et al. , 2016 ; Ashok et al. , 2017 ; Iandola et al. , 2016 ; Pham et al. , 2018b ) . Quantisation techniques which restrict the bitwidth of parameters ( Rastegari et al. , 2016 ; Courbariaux et al. , 2016 ) and tensor factorisation and decomposition which aim to break large kernels into smaller components ( Mathieu et al. , 2013 ; Gong et al. , 2014 ; Lebedev et al. , 2014 ; Masana et al. , 2017 ) are popular methods . However , they need to be optimised for specific architectures . Attention networks ( Almahairi et al. , 2016 ) have two separate networks to focus on only a small patch of the input image . Training smaller student networks in a process called knowledge distillation ( Ashok et al. , 2017 ) has also proved effective , although it can potentially require a large training budget . Architecture search techniques , such as new kernel design ( Iandola et al. , 2016 ) or whole architecture design ( Pham et al. , 2018a ; Tan et al. , 2019 ) have also become popular . Nevertheless , the large search space size requires ample computational resources to do the architecture search . Different from all these approaches , we focus on pruning deep neural networks in this work . 
As compared to other categories, pruning is more general in nature and has shown strong performance (Gale et al., 2019). Many pruning techniques have been developed over the years, which use first- or second-order derivatives (LeCun et al., 1990b; Hassibi & Stork, 1993b), gradient based methods (Lee et al., 2018; Wang et al., 2020), sensitivity to or feedback from some objective function (Molchanov et al., 2017; Liu et al., 2020; Lin et al., 2020; de Jorge et al., 2021), distance or similarity measures (Srinivas & Babu, 2015), regularization-based techniques (Kusupati et al., 2020; Savarese et al., 2020; Wang et al., 2021), and magnitude-based criteria (Ström, 1997; Zhu & Gupta, 2018; Park et al., 2020; Evci et al., 2020; Lee et al., 2021). Han et al. (2015b) discovered a key trick to iteratively prune and retrain the network, thereby preserving high accuracy. Gale et al. (2019) adopt simple, magnitude-based pruning but employ gradual pruning that requires a high computational budget and preset sparsification schedules. Runtime Neural Pruning (Lin et al., 2017) attempts to use reinforcement learning (RL) for compression by training an RL agent to select smaller sub-networks during inference. He et al. (2018) design the first approach using RL for pruning. However, RL training approaches typically require additional RL training budgets (Gupta et al., 2020) or iterative pruning to achieve good accuracy (He et al., 2018). In this work, we focus on a simple, effective, yet quite overlooked pruning method — global magnitude pruning (GP). Although first proposed in the 1990s (Hoefler et al., 2021), it has largely been ignored in recent years, generally being relegated to the position of a baseline for comparison (Zhu & Gupta, 2018; Blalock et al., 2020; Lee et al., 2021) rather than a strong pruning technique.
A few recent works use it as one in a possible pool of pruning techniques (See et al., 2016; Frankle & Carbin, 2018; Gohil et al., 2020) but never study it in detail or adopt it as the main pruning method. We delve deep into GP and showcase that it can achieve SOTA results with the addition of the MT technique. We present SOTA results on various architectures and datasets including ResNet-50 and MobileNet-V1 on ImageNet, and include comprehensive ablation studies and insights on the workings of GP with MT (GPMT). An advantage of GPMT is that it is a very simple and reliable approach. There are a few levels of simplicity and robustness to GPMT. Firstly, it is conceptually very simple and easy to implement. It does not require any complex pruning frameworks like RL (He et al., 2018) or sparsification schedules (Zhu & Gupta, 2018). Secondly, it is one-shot and does not require any iterative procedure. Thirdly, it is easily generalizable across architectures and datasets, as shown in the experiments. Lastly, it is data-independent and does not access the dataset for determining the pruning mask. 3 METHOD. We conduct unstructured weight pruning using magnitude pruning. Below we describe the key components of our algorithm in more detail. 3.1 GLOBAL MAGNITUDE PRUNING (GP). GP is a magnitude-based pruning approach whereby weights bigger than a given threshold are kept and weights smaller than the threshold are pruned. Formally, for a given threshold t and each individual weight w in any layer, the new weight w_new is defined as follows: w_new = 0 if |w| < t, and w_new = w otherwise. (1) In contrast to layer-wise pruning, the threshold is not set on a per-layer basis; rather, a single threshold is set for the entire network. In this aspect, GP is much more efficient than layer-wise pruning because the threshold does not need to be searched for every layer.
On the other hand, uniform pruning refers to setting the same sparsity target for each layer. Thus, every layer is pruned by the same percentage. 3.2 MINIMUM THRESHOLD (MT). The Minimum Threshold (MT) refers to a fixed number of weights that are preserved in every layer of the neural network post pruning. The MT is a scalar value that is fixed before the start of the pruning cycle. The weights in a layer are sorted by their magnitude and the top MT number of weights are preserved. For instance, an MT of 500 implies that the 500 largest weights in every layer need to be preserved post pruning. If a layer is smaller than the MT number, then all the weights of that layer must be preserved. Therefore, the MT is a very simple concept to apply and also computationally inexpensive. This corresponds to: min ‖W_l‖_0 = σ if m ≥ σ, and m otherwise, (2) where W_l ∈ R^m denotes the weight vector for layer l, σ is the MT value in terms of the number of weights, and min ‖W_l‖_0 indicates the minimum number of non-zero elements in W_l. We explain in the section below how the actual pruning using MT is implemented. 3.3 THE PRUNING WORKFLOW. The pruning pipeline for GP and GP with MT (GPMT) is straightforward. It consists of pruning the original model followed by fine-tuning for a few epochs. It is one-shot; therefore, the pruning & fine-tuning cycle does not need to be repeated multiple times. In terms of the pruning procedure itself, GP consists of doing one pass over the network and pruning the weights according to their magnitude to reach the specified sparsity target. GPMT consists of two steps. Firstly, the model is pruned using GP. Secondly, the pruned model is evaluated to check whether the MT condition is met by all layers. If the condition is not met by a layer, then its sparsity ratio is reduced to meet the MT.
The slack arising from the decrease in sparsity is then redistributed amongst the other layers which do not violate the MT condition . The redistribution is done in proportion to their existing sparsities so as to preserve their relative sparsities . This finishes the pruning cycle and the network is then fine-tuned . Fig . 1 explains this procedure , while Algorithm 1 gives the pseudocode . 4 EXPERIMENTS . Below we describe experiments related to ablations on global magnitude pruning ( GP ) and Minimum Threshold ( MT ) , comparison with state-of-the-art algorithms and experiments on non-vision domains . We report hyper-parameters and training related information for all the experiments in the appendix ( section A.4 ) . 4.1 ISOLATING AND UNDERSTANDING IMPACT OF GP AND MT OVER DIFFERENT ARCHITECTURES . We conduct detailed ablations to isolate and measure the impact of standalone GP and GP with MT ( GPMT ) as compared to a uniform pruning baseline . We ablate on multiple architectures and sparsity targets . In addition , we report results averaged over multiple runs where each run uses a different pre-trained model to provide more robustness . We first prune a WRN-22-8 model on CIFAR-10 at 95 % sparsity . We then fine-tune the model for a few epochs and report the final accuracy . We experiment with different pruning schemes , i.e. , uniform pruning , GP and GPMT ( see Section 3 for details ) . We find that GP outperforms uniform pruning . Furthermore , adding MT improves the performance even more , see Table 1 . Next , we do an experiment on the highly efficient MobileNet-V2 architecture to see if the above conclusion holds on it too . We find that indeed GP beats uniform pruning in this situation as well , and adding MT improves performance even further ( Table 2 ) . This shows that GP by itself is superior to uniform pruning and adding MT aids GP even more . 
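The slack-redistribution step of Section 3.3 can be sketched as follows. The exact proportional rule (scaling extra pruning by each compliant layer's current pruned count, with integer rounding) is one plausible reading of "in proportion to their existing sparsities", not the paper's reference implementation:

```python
def redistribute_slack(pruned_counts, sizes, sigma):
    """One-shot MT repair: cap pruning in violating layers, then push
    the freed budget ("slack") onto compliant layers in proportion to
    how much they are already pruned, preserving relative sparsities.
    """
    pruned = list(pruned_counts)
    slack = 0
    compliant = []
    for i, (p, n) in enumerate(zip(pruned, sizes)):
        floor = min(sigma, n)          # MT floor for this layer
        if n - p < floor:              # MT condition violated
            slack += p - (n - floor)   # weights that must be restored
            pruned[i] = n - floor
        else:
            compliant.append(i)
    total = sum(pruned[i] for i in compliant)
    for i in compliant:
        extra = round(slack * pruned[i] / total) if total else 0
        # Never push a compliant layer below its own MT floor.
        pruned[i] = min(pruned[i] + extra, sizes[i] - min(sigma, sizes[i]))
    return pruned
```

Note that integer rounding means the final network sparsity can drift slightly from the target; a real implementation would presumably correct for this.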
| The paper revisits a traditional model compression method, global magnitude pruning (GP), and shows that GP can achieve state-of-the-art results. The paper further improves GP by introducing a minimum threshold (MT). Experiments on ResNet show that GP+MT achieves better accuracy at the same sparsity ratio compared to GP. | SP:0d287a068663686cbed985016f6332304340c33d |
Global Magnitude Pruning With Minimum Threshold Is All We Need | 1 INTRODUCTION . Neural network pruning remains an important area from both practical perspective ( deployment in real world applications ) and academic perspective ( understanding how to create an efficient architecture ) . It is a long standing area of exploration ( LeCun et al. , 1990a ; Hassibi & Stork , 1993a ) , and was reinvigorated by Han et al . ( 2015a ) . Since then , much work has been done on trying to find different ways of pruning neural networks , such as magnitude-based , gradient or second order based , and regularization-based methods amongst many others . In this work , we shed light on an often overlooked method that has been seen as a mediocre baseline by the community — global magnitude pruning ( GP ) , and show that it can achieve SOTA pruning performance . We demonstrate that GP by itself is a strong pruning algorithm and outperforms SOTA pruning algorithms on benchmarks like ResNet-50 and MobileNet-V1 on ImageNet . We also investigate the pruning behavior of GP and find that a simple addition to GP can raise its performance even more . In contrast to the idea of sparsifying each layer of a neural network to the maximum possible level , we find that preserving a certain amount of weights in each layer actually leads to a better pruning scheme , achieving higher accuracy at the same sparsity level . We call this the Minimum Threshold ( MT ) . When combined with GP , this technique enhances the pruning performance in most cases . We conduct a range of experiments to showcase the above and also study detailed ablations to isolate the effects of GP and MT . We obtain SOTA accuracies on all four sparsity targets on ResNet-50 on ImageNet . We obtain SOTA accuracies on other architectures and datasets tested as well . Finally , GP with MT ( GPMT ) is very simple conceptually and very easy to implement . 
It is a one-shot pruning method in which the weights to be pruned are decided in one go without needing any iterative or gradual phases . 2 RELATED WORK . Compression of neural networks has become an important research area due to the rapid increase in size of neural networks ( Brown et al. , 2020 ) , the need for fast inference on edge devices , e.g. , a quadrotor ’ s onboard computer ( Camci et al. , 2020 ) , and concerns about the carbon footprint of training large neural networks ( Strubell et al. , 2019 ) . Over the years , several compression techniques have emerged in the literature ( Cheng et al. , 2017 ) , such as quantisation , factorisation , attention , knowledge distillation , architecture search and pruning ( Almahairi et al. , 2016 ; Ashok et al. , 2017 ; Iandola et al. , 2016 ; Pham et al. , 2018b ) . Quantisation techniques which restrict the bitwidth of parameters ( Rastegari et al. , 2016 ; Courbariaux et al. , 2016 ) and tensor factorisation and decomposition which aim to break large kernels into smaller components ( Mathieu et al. , 2013 ; Gong et al. , 2014 ; Lebedev et al. , 2014 ; Masana et al. , 2017 ) are popular methods . However , they need to be optimised for specific architectures . Attention networks ( Almahairi et al. , 2016 ) have two separate networks to focus on only a small patch of the input image . Training smaller student networks in a process called knowledge distillation ( Ashok et al. , 2017 ) has also proved effective , although it can potentially require a large training budget . Architecture search techniques , such as new kernel design ( Iandola et al. , 2016 ) or whole architecture design ( Pham et al. , 2018a ; Tan et al. , 2019 ) , have also become popular . Nevertheless , the large search space requires ample computational resources to do the architecture search . Different from all these approaches , we focus on pruning deep neural networks in this work .
As compared to other categories , pruning is more general in nature and has shown strong performance ( Gale et al. , 2019 ) . Many pruning techniques have been developed over the years , which use first or second order derivatives ( LeCun et al. , 1990b ; Hassibi & Stork , 1993b ) , gradient-based methods ( Lee et al. , 2018 ; Wang et al. , 2020 ) , sensitivity to or feedback from some objective function ( Molchanov et al. , 2017 ; Liu et al. , 2020 ; Lin et al. , 2020 ; de Jorge et al. , 2021 ) , distance or similarity measures ( Srinivas & Babu , 2015 ) , regularization-based techniques ( Kusupati et al. , 2020 ; Savarese et al. , 2020 ; Wang et al. , 2021 ) , and magnitude-based criteria ( Ström , 1997 ; Zhu & Gupta , 2018 ; Park et al. , 2020 ; Evci et al. , 2020 ; Lee et al. , 2021 ) . Han et al . ( 2015b ) discovered a key trick to iteratively prune and retrain the network , thereby preserving high accuracy . Gale et al . ( 2019 ) adopt simple , magnitude-based pruning but employ gradual pruning that requires a high computational budget and preset sparsification schedules . Runtime Neural Pruning ( Lin et al. , 2017 ) attempts to use reinforcement learning ( RL ) for compression by training an RL agent to select smaller sub-networks during inference . He et al . ( 2018 ) design the first approach using RL for pruning . However , RL training approaches typically require additional RL training budgets ( Gupta et al. , 2020 ) or iterative pruning to achieve good accuracy ( He et al. , 2018 ) . In this work , we focus on a simple , effective , yet quite overlooked pruning method — global magnitude pruning ( GP ) . Although first proposed in the 1990s ( Hoefler et al. , 2021 ) , it has largely been ignored in recent years , generally being relegated to the position of a baseline for comparison ( Zhu & Gupta , 2018 ; Blalock et al. , 2020 ; Lee et al. , 2021 ) rather than a strong pruning technique .
A few recent works use it as one in a possible pool of pruning techniques ( See et al. , 2016 ; Frankle & Carbin , 2018 ; Gohil et al. , 2020 ) but never study it in detail or adopt it as the main pruning method . We delve deep into GP and showcase that it can achieve SOTA results with the addition of the MT technique . We present SOTA results on various architectures and datasets including ResNet-50 and MobileNet-V1 on ImageNet , and include comprehensive ablation studies and insights on the workings of GP with MT ( GPMT ) . An advantage of GPMT is that it is a very simple and reliable approach , and this simplicity and robustness hold on several levels . Firstly , it is conceptually very simple and easy to implement . It does not require any complex pruning frameworks like RL ( He et al. , 2018 ) or sparsification schedules ( Zhu & Gupta , 2018 ) . Secondly , it is one-shot and does not require any iterative procedure . Thirdly , it is easily generalizable across architectures and datasets , as shown in the experiments . Lastly , it is data-independent and does not access the dataset for determining the pruning mask . 3 METHOD . We conduct unstructured weight pruning using magnitude pruning . Below we describe the key components of our algorithm in more detail . 3.1 GLOBAL MAGNITUDE PRUNING ( GP ) . GP is a magnitude-based pruning approach whereby weights larger than a given threshold are kept and weights smaller than the threshold are pruned . Formally , for a given threshold t and each individual weight w in any layer , the new weight w_new is defined as follows : w_new = { 0 if |w| < t ; w otherwise } . ( 1 ) In contrast to layer-wise pruning , the threshold is not set on a per-layer basis ; rather , a single threshold is set for the entire network . In this respect , GP is much more efficient than layer-wise pruning because the threshold does not need to be searched for every layer .
On the other hand , uniform pruning refers to setting the same sparsity target for each layer . Thus , every layer is pruned by the same percentage . 3.2 MINIMUM THRESHOLD ( MT ) . The Minimum Threshold ( MT ) refers to a fixed number of weights that are preserved in every layer of the neural network post pruning . The MT is a scalar value that is fixed before the start of the pruning cycle . The weights in a layer are sorted by their magnitude and the top MT number of weights are preserved . For instance , an MT of 500 implies that the 500 largest weights in every layer need to be preserved post pruning . If a layer is smaller than the MT number , then all the weights of that layer must be preserved . Therefore , the MT is a very simple concept to apply and also computationally inexpensive . This corresponds to : min ‖W_l‖_0 = { σ if m ≥ σ ; m otherwise } , ( 2 ) where W_l ∈ R^m denotes the weight vector for layer l , σ is the MT value in terms of the number of weights , and min ‖W_l‖_0 denotes the minimum number of non-zero elements in W_l . We explain in the section below how the actual pruning using MT is implemented .
The slack arising from the decrease in sparsity is then redistributed amongst the other layers which do not violate the MT condition . The redistribution is done in proportion to their existing sparsities so as to preserve their relative sparsities . This finishes the pruning cycle and the network is then fine-tuned . Fig . 1 explains this procedure , while Algorithm 1 gives the pseudocode . 4 EXPERIMENTS . Below we describe experiments related to ablations on global magnitude pruning ( GP ) and Minimum Threshold ( MT ) , comparison with state-of-the-art algorithms and experiments on non-vision domains . We report hyper-parameters and training related information for all the experiments in the appendix ( section A.4 ) . 4.1 ISOLATING AND UNDERSTANDING IMPACT OF GP AND MT OVER DIFFERENT ARCHITECTURES . We conduct detailed ablations to isolate and measure the impact of standalone GP and GP with MT ( GPMT ) as compared to a uniform pruning baseline . We ablate on multiple architectures and sparsity targets . In addition , we report results averaged over multiple runs where each run uses a different pre-trained model to provide more robustness . We first prune a WRN-22-8 model on CIFAR-10 at 95 % sparsity . We then fine-tune the model for a few epochs and report the final accuracy . We experiment with different pruning schemes , i.e. , uniform pruning , GP and GPMT ( see Section 3 for details ) . We find that GP outperforms uniform pruning . Furthermore , adding MT improves the performance even more , see Table 1 . Next , we do an experiment on the highly efficient MobileNet-V2 architecture to see if the above conclusion holds on it too . We find that indeed GP beats uniform pruning in this situation as well , and adding MT improves performance even further ( Table 2 ) . This shows that GP by itself is superior to uniform pruning and adding MT aids GP even more . | A very simple and effective pruning method is proposed. 
Instead of layer-wise pruning, a global threshold is used to prune weights according to their magnitude. In addition, to avoid over-pruning, a minimum number of parameters is preserved for each layer after pruning. Experiments on CIFAR-10 and ImageNet validate the effectiveness of the proposed method. | SP:0d287a068663686cbed985016f6332304340c33d |
Scene Transformer: A unified architecture for predicting future trajectories of multiple agents | 1 INTRODUCTION . Motion planning in a dense real-world urban environment is a mission-critical problem for deploying autonomous driving technology . Autonomous driving is traditionally considered too difficult for a single end-to-end learned system ( Thrun et al. , 2006 ) . Thus , researchers have opted to split the task into sequential sub-tasks ( Zeng et al. , 2019 ) : ( i ) perception , ( ii ) motion prediction , and ( iii ) planning . Perception is the task of detecting and tracking objects in the scene from sensors such as LiDARs and cameras . Motion prediction involves predicting the future actions of other agents in the scene . Finally , planning involves creating a motion plan that navigates through dynamic environments . Dividing the larger problem into sub-tasks achieves optimal performance when each sub-task is truly independent . However , such a strategy breaks down when the assumption of independence does not hold . For instance , the sub-tasks of motion prediction and planning are not truly independent—the autonomous vehicle ’ s actions may significantly impact the behaviors of other agents . Similarly , the behaviors of other agents may dramatically change what is a good plan . The goal of this work is to take a step in the direction of unifying motion prediction and planning by developing a model that can exploit conditioning information , such as the AV ’ s goal , and produce joint , consistent predictions about the future for all agents simultaneously . While the motion prediction task has traditionally been formulated around per-agent independent predictions , recent datasets ( Ettinger et al. , 2021 ; Zhan et al. , 2019 ) have introduced interaction prediction tasks that enable us to study joint future prediction ( Figure 1 ) .
These interaction prediction tasks require models to predict the joint futures of multiple agents : models are expected to produce future predictions for all agents such that the agents ’ futures are consistent with one another : marginal agent predictions may conflict with each other ( have overlaps ) , while consistent joint predictions should have predictions where agents respect each other ’ s behaviors ( avoid overlaps ) within the same future . A naive approach to producing joint futures is to consider the exponential number of combinations of marginal agent predictions . Many of the combinations are not reasonable , especially when agents have overlapping trajectories . We present a unified model that naturally captures the interactions between agents , and can be trained as a joint model to produce scene-level consistent predictions across all agents ( Figure 1 , right ) . Our model uses a scene-centric representation for all agents ( Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ) to allow scaling to large numbers of agents in dense environments . We employ a simple variant of self-attention ( Vaswani et al. , 2017 ) in which the attention mechanism is efficiently factorized across the agent-time axes . The resulting architecture simply alternates attention between dimensions representing time and agents across the scene , resulting in a computationally-efficient , uniform , and scalable architecture . We find that the resulting model , termed Scene Transformer , achieves superior performance on both independent ( marginal ) and interactive ( joint ) prediction benchmarks . We further show how we can structure the problem as a masked sequence model , inspired by recent advances in language modeling ( Brown et al. , 2020 ; Devlin et al. , 2019 ) , to allow conditioning of the model on the autonomous vehicle ( AV ) goal state or full trajectory .
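The alternating agent/time attention described above can be sketched with identity projections. This toy numpy version (function names our own) only illustrates the axis-factored pattern, not the full learned model with projections and multiple heads:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(x):
    # Scaled dot-product self-attention over the second-to-last axis,
    # with identity Q/K/V projections for brevity.
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def factored_attention(x):
    """x has shape [agents, time, feature]. First attend across time
    (agents act as the batch), then across agents (time acts as the
    batch), instead of one joint attention over agents x time."""
    x = attend(x)                              # time axis
    x = attend(x.swapaxes(0, 1)).swapaxes(0, 1)  # agent axis
    return x
```

Factoring this way keeps the attention cost linear in agents x time per axis, rather than quadratic in the flattened agents-times-time sequence length.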
In this reformulation , a single model can naturally perform tasks such as motion prediction , conditional motion prediction , and goal-conditioned prediction simply by changing which data is visible at inference time . We hope that our unified architecture and flexible problem formulation open up new research directions for further combining motion prediction and planning . In summary , our key contributions in this work are : • A novel , scene-centric approach that allows us to gracefully switch training the model to produce either marginal ( independent ) or joint agent predictions in a single feed-forward pass . Our model achieves state-of-the-art on both marginal and joint prediction tasks on both the Argoverse and the Waymo Open Motion Dataset . • A permutation-equivariant Transformer-based architecture factored over agents , time , and road graph elements that exploits the inherent symmetries of the problem . The resulting architecture is efficient and integrates the world state in a unified way . • A masked sequence modeling approach that enables us to condition on hypothetical agent futures at inference time , enabling conditional motion prediction or goal-conditioned prediction . 2 RELATED WORK . Motion prediction architectures . Motion prediction models have flourished in recent years , due to the rise in interest in self-driving applications and the release of related datasets and benchmarks ( Kesten et al. , 2019 ; Chang et al. , 2019 ; Caesar et al. , 2020 ; Ettinger et al. , 2021 ) . Successful models must take into account the history of agent motion , and the elements of the road graph ( e.g. , lanes , stop lines , traffic light dynamic state ) . Furthermore , such models must learn the relationships between these agents in the context of the road graph environment . One class of models draws heavily upon the computer vision literature , rendering inputs as a multichannel rasterized top-down image ( Cui et al. , 2019 ; Chai et al.
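The masked-sequence view of task switching can be made concrete as a visibility mask over the [agent, time] grid. The helper below and its task names are hypothetical illustrations of which entries might be revealed for each task, not the paper's API:

```python
import numpy as np

def visibility_mask(num_agents, num_steps, now, task, av=0):
    """1 = data shown to the model, 0 = to be predicted.

    All agents reveal their history up to `now`; switching `task`
    changes what else is visible at inference time.
    """
    mask = np.zeros((num_agents, num_steps), dtype=np.int8)
    mask[:, :now + 1] = 1          # motion prediction: history only
    if task == "conditional":
        mask[av] = 1               # reveal the AV's full trajectory
    elif task == "goal":
        mask[av, -1] = 1           # reveal only the AV's goal state
    return mask
```

A single trained model could then serve all three tasks by filling in exactly the entries the mask marks as hidden.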
, 2019 ; Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ; Zhao et al. , 2019 ) . In this approach , relationships between scene elements are captured via convolutional deep architectures . However , the localized structure of the receptive field makes capturing spatially-distant interactions challenging . A popular alternative is to use an entity-centric approach . With this approach , agent state history is typically encoded via sequence modeling techniques like RNNs ( Mercat et al. , 2020 ; Khandelwal et al. , 2020 ; Lee et al. , 2017 ; Alahi et al. , 2016 ; Rhinehart et al. , 2019 ) or temporal convolutions ( Liang et al. , 2020 ) . Road elements are approximated with basic primitives ( e.g . piecewise-linear segments ) which encode pose information and semantic type . Modeling relationships between entities is often presented as an information aggregation process , and models employ pooling ( Zhao et al. , 2020 ; Gao et al. , 2020 ; Lee et al. , 2017 ; Alahi et al. , 2016 ; Gupta et al. , 2018 ) , soft-attention ( Mercat et al. , 2020 ; Zhao et al. , 2020 ; Salzmann et al. , 2020 ) as well as graph neural networks ( Casas et al. , 2020a ; Liang et al. , 2020 ; Khandelwal et al. , 2020 ) . Like our proposed method , several recent models use Transformers ( Vaswani et al. , 2017 ) , composed of multi-headed attention layers . Transformers are a popular state-of-the-art choice for sequence modeling in natural language processing ( Brown et al. , 2020 ; Devlin et al. , 2019 ) , and have recently shown promise in core computer vision tasks such as detection ( Bello et al. , 2019 ; Carion et al. , 2020 ; Srinivas et al. , 2021 ) , tracking ( Hung et al. , 2020 ) and classification ( Ramachandran et al. , 2019 ; Vaswani et al. , 2021 ; Dosovitskiy et al. , 2021 ; Bello , 2013 ; Bello et al. , 2019 ) . 
For motion modeling , recent work has employed variations of self-attention and Transformers for modeling different axes : temporal trajectory encoding and decoding ( Yu et al. , 2020 ; Giuliari et al. , 2020 ; Yuan et al. , 2021 ) , encoding relationships between agents ( Li et al. , 2020 ; Park et al. , 2020 ; Yuan et al. , 2021 ; Yu et al. , 2020 ; Mercat et al. , 2020 ; Bhat et al. , 2020 ) , and encoding relationships with road elements . When applying self-attention over multiple axes , past work used independent self-attention for each axis ( Yu et al. , 2020 ) , or flattened two axes together into one joint self-attention layer ( Yuan et al. , 2021 ) – by comparison , our method proposes axis-factored attention to model relationships between time steps , agents , and road graph elements in a unified way . Scene-centric versus agent-centric representations . Another key design choice is the frame of reference in which the representation is encoded . Some models do a majority of modeling in a global , scene-level coordinate frame , such as work that employs a rasterized top-down image ( Cui et al. , 2019 ; Chai et al. , 2019 ; Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ) . This can lead to a more efficient model due to a single shared representation of world state in a common coordinate frame , but comes with the potential sacrifice of pose-invariance . On the other hand , models that reason in the agent-coordinate frame ( Mercat et al. , 2020 ; Zhao et al. , 2020 ; Khandelwal et al. , 2020 ) are intrinsically pose-invariant , but scale linearly with the number of agents , or quadratically with the number of pairwise interactions between agents . Many works employ a mix of a top-down raster representation for road representation fused with a per-agent representations ( Rhinehart et al. , 2019 ; Tang & Salakhutdinov , 2019 ; Lee et al. , 2017 ) . Similar to our own work , LaneGCN ( Liang et al. 
, 2020 ) is agent-centric yet representations are in a global frame – to the best of our knowledge , this is the only other work to do so . This enables efficient reasoning while capturing arbitrarily distant interactions and high-fidelity state representations without rasterization . Representing multi-agent futures . A common way to represent agent futures is via a weighted set of trajectories per agent ( Alahi et al. , 2016 ; Biktairov et al. , 2020 ; Buhet et al. , 2020 ; Casas et al. , 2020a ; a ; Chai et al. , 2019 ; Cui et al. , 2019 ; Gao et al. , 2020 ; Hong et al. , 2019 ; Lee et al. , 2017 ; Marchetti et al. , 2020 ; Mercat et al. , 2020 ; Salzmann et al. , 2020 ; Zhao et al. , 2020 ) . This representation is encouraged by benchmarks which primarily focus on per-agent distance error metrics ( Caesar et al. , 2020 ; Chang et al. , 2019 ; Zhan et al. , 2019 ) . We argue in this work that modeling joint futures in a multi-agent environment ( Figure 1 , right ) is an important concept that has been minimally explored in prior work . Some prior work consider a factorized pairwise joint distribution , where a subset of agent futures are conditioned on other agents – informally , modeling P ( X ) and P ( Y |X ) for agents X and Y ( Khandelwal et al. , 2020 ; Tolstaya et al. , 2021 ; Salzmann et al. , 2020 ) . To generalize joint prediction to arbitrary multi-agent settings , other work ( Tang & Salakhutdinov , 2019 ; Rhinehart et al. , 2019 ; Casas et al. , 2020b ; Suo et al. , 2021 ; Yeh et al. , 2019 ) iteratively roll out samples per-agent , where each agent is conditioned on previously sampled trajectory steps . In contrast , our model directly decodes a set of k distinct joint futures with associated likelihoods . | The paper proposes a new Transformer-based trajectory forecasting model that can predict multiple agents in a scene. It can be used for goal-directed trajectory forecasting as well. Experiments on Argoverse and Waymo with good results. 
Related work is sufficient. Technical novelty is low. Paper is clearly written. | SP:60d09296513b878d3bd11542c3e86d4dba0a878a |
Scene Transformer: A unified architecture for predicting future trajectories of multiple agents | 1 INTRODUCTION . Motion planning in a dense real-world urban environment is a mission-critical problem for deploying autonomous driving technology . Autonomous driving is traditionally considered too difficult for a single end-to-end learned system ( Thrun et al. , 2006 ) . Thus , researchers have opted to split the task into sequential sub-tasks ( Zeng et al. , 2019 ) : ( i ) perception , ( ii ) motion prediction , and ( iii ) planning . Perception is the task of detecting and tracking objects in the scene from sensors such as LiDARs and cameras . Motion prediction involves predicting the future actions of other agents in the scene . Finally , planning involves creating a motion plan that navigates through dynamic environments . Dividing the larger problem into sub-tasks achieves optimal performance when each sub-task is truly independent . However , such a strategy breaks down when the assumption of independence does not hold . For instance , the sub-tasks of motion prediction and planning are not truly independent—the autonomous vehicle ’ s actions may significantly impact the behaviors of other agents . Similarly , the behaviors of other agents may dramatically change what is a good plan . The goal of this work is to take a step in the direction of unifying motion prediction and planning by developing a model that can exploit conditioning information , such as the AV ’ s goal , and produce joint , consistent predictions about the future for all agents simultaneously . While the motion prediction task has traditionally been formulated around per-agent independent predictions , recent datasets ( Ettinger et al. , 2021 ; Zhan et al. , 2019 ) have introduced interaction prediction tasks that enable us to study joint future prediction ( Figure 1 ) .
These interaction prediction tasks require models to predict the joint futures of multiple agents : models are expected to produce future predictions for all agents such that the agents ’ futures are consistent with one another : marginal agent predictions may conflict with each other ( have overlaps ) , while consistent joint predictions should have predictions where agents respect each other ’ s behaviors ( avoid overlaps ) within the same future . A naive approach to producing joint futures is to consider the exponential number of combinations of marginal agent predictions . Many of the combinations are not reasonable , especially when agents have overlapping trajectories . We present a unified model that naturally captures the interactions between agents , and can be trained as a joint model to produce scene-level consistent predictions across all agents ( Figure 1 , right ) . Our model uses a scene-centric representation for all agents ( Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ) to allow scaling to large numbers of agents in dense environments . We employ a simple variant of self-attention ( Vaswani et al. , 2017 ) in which the attention mechanism is efficiently factorized across the agent-time axes . The resulting architecture simply alternates attention between dimensions representing time and agents across the scene , resulting in a computationally-efficient , uniform , and scalable architecture . We find that the resulting model , termed Scene Transformer , achieves superior performance on both independent ( marginal ) and interactive ( joint ) prediction benchmarks . We further show how we can structure the problem as a masked sequence model , inspired by recent advances in language modeling ( Brown et al. , 2020 ; Devlin et al. , 2019 ) , to allow conditioning of the model on the autonomous vehicle ( AV ) goal state or full trajectory .
In this reformulation , a single model can naturally perform tasks such as motion prediction , conditional motion prediction , and goal-conditioned prediction simply by changing which data is visible at inference time . We hope that our unified architecture and flexible problem formulation open up new research directions for further combining motion prediction and planning . In summary , our key contributions in this work are : • A novel , scene-centric approach that allows us to gracefully switch training the model to produce either marginal ( independent ) or joint agent predictions in a single feed-forward pass . Our model achieves state-of-the-art on both marginal and joint prediction tasks on both the Argoverse and the Waymo Open Motion Dataset . • A permutation-equivariant Transformer-based architecture factored over agents , time , and road graph elements that exploits the inherent symmetries of the problem . The resulting architecture is efficient and integrates the world state in a unified way . • A masked sequence modeling approach that enables us to condition on hypothetical agent futures at inference time , enabling conditional motion prediction or goal-conditioned prediction . 2 RELATED WORK . Motion prediction architectures . Motion prediction models have flourished in recent years , due to the rise in interest in self-driving applications and the release of related datasets and benchmarks ( Kesten et al. , 2019 ; Chang et al. , 2019 ; Caesar et al. , 2020 ; Ettinger et al. , 2021 ) . Successful models must take into account the history of agent motion , and the elements of the road graph ( e.g. , lanes , stop lines , traffic light dynamic state ) . Furthermore , such models must learn the relationships between these agents in the context of the road graph environment . One class of models draws heavily upon the computer vision literature , rendering inputs as a multichannel rasterized top-down image ( Cui et al. , 2019 ; Chai et al.
, 2019 ; Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ; Zhao et al. , 2019 ) . In this approach , relationships between scene elements are captured via convolutional deep architectures . However , the localized structure of the receptive field makes capturing spatially-distant interactions challenging . A popular alternative is to use an entity-centric approach . With this approach , agent state history is typically encoded via sequence modeling techniques like RNNs ( Mercat et al. , 2020 ; Khandelwal et al. , 2020 ; Lee et al. , 2017 ; Alahi et al. , 2016 ; Rhinehart et al. , 2019 ) or temporal convolutions ( Liang et al. , 2020 ) . Road elements are approximated with basic primitives ( e.g . piecewise-linear segments ) which encode pose information and semantic type . Modeling relationships between entities is often presented as an information aggregation process , and models employ pooling ( Zhao et al. , 2020 ; Gao et al. , 2020 ; Lee et al. , 2017 ; Alahi et al. , 2016 ; Gupta et al. , 2018 ) , soft-attention ( Mercat et al. , 2020 ; Zhao et al. , 2020 ; Salzmann et al. , 2020 ) as well as graph neural networks ( Casas et al. , 2020a ; Liang et al. , 2020 ; Khandelwal et al. , 2020 ) . Like our proposed method , several recent models use Transformers ( Vaswani et al. , 2017 ) , composed of multi-headed attention layers . Transformers are a popular state-of-the-art choice for sequence modeling in natural language processing ( Brown et al. , 2020 ; Devlin et al. , 2019 ) , and have recently shown promise in core computer vision tasks such as detection ( Bello et al. , 2019 ; Carion et al. , 2020 ; Srinivas et al. , 2021 ) , tracking ( Hung et al. , 2020 ) and classification ( Ramachandran et al. , 2019 ; Vaswani et al. , 2021 ; Dosovitskiy et al. , 2021 ; Bello , 2013 ; Bello et al. , 2019 ) . 
For motion modeling , recent work has employed variations of self-attention and Transformers for modeling different axes : temporal trajectory encoding and decoding ( Yu et al. , 2020 ; Giuliari et al. , 2020 ; Yuan et al. , 2021 ) , encoding relationships between agents ( Li et al. , 2020 ; Park et al. , 2020 ; Yuan et al. , 2021 ; Yu et al. , 2020 ; Mercat et al. , 2020 ; Bhat et al. , 2020 ) , and encoding relationships with road elements . When applying self-attention over multiple axes , past work used independent self-attention for each axis ( Yu et al. , 2020 ) , or flattened two axes together into one joint self-attention layer ( Yuan et al. , 2021 ) – by comparison , our method proposes axis-factored attention to model relationships between time steps , agents , and road graph elements in a unified way . Scene-centric versus agent-centric representations . Another key design choice is the frame of reference in which the representation is encoded . Some models do a majority of modeling in a global , scene-level coordinate frame , such as work that employs a rasterized top-down image ( Cui et al. , 2019 ; Chai et al. , 2019 ; Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ) . This can lead to a more efficient model due to a single shared representation of world state in a common coordinate frame , but comes with the potential sacrifice of pose-invariance . On the other hand , models that reason in the agent-coordinate frame ( Mercat et al. , 2020 ; Zhao et al. , 2020 ; Khandelwal et al. , 2020 ) are intrinsically pose-invariant , but scale linearly with the number of agents , or quadratically with the number of pairwise interactions between agents . Many works employ a mix of a top-down raster representation for road representation fused with per-agent representations ( Rhinehart et al. , 2019 ; Tang & Salakhutdinov , 2019 ; Lee et al. , 2017 ) . Similar to our own work , LaneGCN ( Liang et al.
, 2020 ) is agent-centric yet representations are in a global frame – to the best of our knowledge , this is the only other work to do so . This enables efficient reasoning while capturing arbitrarily distant interactions and high-fidelity state representations without rasterization . Representing multi-agent futures . A common way to represent agent futures is via a weighted set of trajectories per agent ( Alahi et al. , 2016 ; Biktairov et al. , 2020 ; Buhet et al. , 2020 ; Casas et al. , 2020a ; b ; Chai et al. , 2019 ; Cui et al. , 2019 ; Gao et al. , 2020 ; Hong et al. , 2019 ; Lee et al. , 2017 ; Marchetti et al. , 2020 ; Mercat et al. , 2020 ; Salzmann et al. , 2020 ; Zhao et al. , 2020 ) . This representation is encouraged by benchmarks which primarily focus on per-agent distance error metrics ( Caesar et al. , 2020 ; Chang et al. , 2019 ; Zhan et al. , 2019 ) . We argue in this work that modeling joint futures in a multi-agent environment ( Figure 1 , right ) is an important concept that has been minimally explored in prior work . Some prior works consider a factorized pairwise joint distribution , where a subset of agent futures are conditioned on other agents – informally , modeling P ( X ) and P ( Y |X ) for agents X and Y ( Khandelwal et al. , 2020 ; Tolstaya et al. , 2021 ; Salzmann et al. , 2020 ) . To generalize joint prediction to arbitrary multi-agent settings , other work ( Tang & Salakhutdinov , 2019 ; Rhinehart et al. , 2019 ; Casas et al. , 2020b ; Suo et al. , 2021 ; Yeh et al. , 2019 ) iteratively rolls out samples per-agent , where each agent is conditioned on previously sampled trajectory steps . In contrast , our model directly decodes a set of k distinct joint futures with associated likelihoods . | The paper presents a unified architecture to predict future motion trajectories of all agents in the scene while capturing the interactions between agents. A Transformer-based architecture is designed.
It also uses a masking strategy as a query, enabling one to condition on hypothetical agent futures at inference time, which enables conditional motion prediction or goal-conditioned prediction. The model was tested on two standard datasets and achieves state-of-the-art performance. | SP:60d09296513b878d3bd11542c3e86d4dba0a878a |
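The contrast between combining per-agent marginal modes and decoding joint futures directly can be made concrete with a toy sketch. This is our own illustration, not the paper's code; the function names, agent IDs, and numbers are hypothetical:

```python
# Toy illustration: combining per-agent marginal modes yields
# modes_per_agent ** num_agents candidate joint futures, whereas a joint
# model decodes only k scene-level futures, each with one likelihood.
def num_marginal_combinations(num_agents, modes_per_agent):
    return modes_per_agent ** num_agents

def best_joint_future(joint_futures):
    """joint_futures: list of (likelihood, {agent_id: trajectory}) pairs."""
    return max(joint_futures, key=lambda f: f[0])

print(num_marginal_combinations(num_agents=8, modes_per_agent=6))  # 1679616

futures = [
    (0.7, {"av": [(0, 0), (1, 0)], "ped": [(5, 5), (5, 6)]}),
    (0.3, {"av": [(0, 0), (0, 1)], "ped": [(5, 5), (4, 5)]}),
]
likelihood, trajectories = best_joint_future(futures)
```

Most of the exponentially many combinations would pair conflicting trajectories, which is why a direct joint decoding with per-future likelihoods is attractive.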
Scene Transformer: A unified architecture for predicting future trajectories of multiple agents | 1 INTRODUCTION . Motion planning in a dense real-world urban environment is a mission-critical problem for deploying autonomous driving technology . Autonomous driving is traditionally considered too difficult for a single end-to-end learned system ( Thrun et al. , 2006 ) . Thus , researchers have opted to split the task into sequential sub-tasks ( Zeng et al. , 2019 ) : ( i ) perception , ( ii ) motion prediction , and ( iii ) planning . Perception is the task of detecting and tracking objects in the scene from sensors such as LiDARs and cameras . Motion prediction involves predicting the future actions of other agents in the scene . Finally , planning involves creating a motion plan that navigates through dynamic environments . Dividing the larger problem into sub-tasks achieves optimal performance when each sub-task is truly independent . However , such a strategy breaks down when the assumption of independence does not hold . For instance , the sub-tasks of motion prediction and planning are not truly independent—the autonomous vehicle ’ s actions may significantly impact the behaviors of other agents . Similarly , the behaviors of other agents may dramatically change what is a good plan . The goal of this work is to take a step in the direction of unifying motion prediction and planning by developing a model that can exploit conditioning information , such as the AV ’ s goal , and produce joint consistent predictions about the future for all agents simultaneously . While the motion prediction task has traditionally been formulated around per-agent independent predictions , recent datasets ( Ettinger et al. , 2021 ; Zhan et al. , 2019 ) have introduced interaction prediction tasks that enable us to study joint future prediction ( Figure 1 ) .
These interaction prediction tasks require models to predict the joint futures of multiple agents : models are expected to produce future predictions for all agents such that the agents ’ futures are consistent with one another ( marginal agent predictions may conflict with each other , i.e. , have overlaps , while consistent joint predictions should have predictions where agents respect each other ’ s behaviors and avoid overlaps within the same future ) . A naive approach to producing joint futures is to consider the exponential number of combinations of marginal agent predictions . Many of the combinations are not reasonable , especially when agents have overlapping trajectories . We present a unified model that naturally captures the interactions between agents , and can be trained as a joint model to produce scene-level consistent predictions across all agents ( Figure 1 , right ) . Our model uses a scene-centric representation for all agents ( Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ) to allow scaling to large numbers of agents in dense environments . We employ a simple variant of self-attention ( Vaswani et al. , 2017 ) in which the attention mechanism is efficiently factorized across the agent-time axes . The resulting architecture simply alternates attention between dimensions representing time and agents across the scene , resulting in a computationally-efficient , uniform , and scalable architecture . We find that the resulting model , termed Scene Transformer , achieves superior performance on both independent ( marginal ) and interactive ( joint ) prediction benchmarks . We further show how we can structure the problem as a masked sequence model , inspired by recent advances in language modeling ( Brown et al. , 2020 ; Devlin et al. , 2019 ) , to allow conditioning of the model on the autonomous vehicle ( AV ) goal state or full trajectory .
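The visibility-masking idea described above can be sketched minimally: a boolean mask over agents and time steps decides which states the model may attend to, and switching tasks only changes the mask. This is our own hedged sketch, not the paper's API; the function and task names are hypothetical:

```python
def visibility_mask(num_agents, t_now, t_total, task, av_index=0):
    """Boolean [A][T] mask of which agent states the model may attend to."""
    # Past and present states are always visible for every agent.
    mask = [[t <= t_now for t in range(t_total)] for _ in range(num_agents)]
    if task == "conditional":        # reveal the AV's full future trajectory
        mask[av_index] = [True] * t_total
    elif task == "goal":             # reveal only the AV's goal (final) state
        mask[av_index][t_total - 1] = True
    return mask                      # task == "prediction": history only

m = visibility_mask(num_agents=3, t_now=4, t_total=10, task="goal")
```

With `task="prediction"` the model predicts all futures; revealing more of the AV row turns the same model into a conditional or goal-conditioned predictor, matching the "change which data is visible at inference time" framing.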
In this reformulation , a single model can naturally perform tasks such as motion prediction , conditional motion prediction , and goal-conditioned prediction simply by changing which data is visible at inference time . We hope that our unified architecture and flexible problem formulation open up new research directions for further combining motion prediction and planning . In summary , our key contributions in this work are : • A novel , scene-centric approach that allows us to gracefully switch training the model to produce either marginal ( independent ) or joint agent predictions in a single feed-forward pass . Our model achieves state-of-the-art performance on both marginal and joint prediction tasks on both the Argoverse and the Waymo Open Motion Dataset . • A permutation equivariant Transformer-based architecture factored over agents , time , and road graph elements that exploits the inherent symmetries of the problem . The resulting architecture is efficient and integrates the world state in a unified way . • A masked sequence modeling approach that enables us to condition on hypothetical agent futures at inference time , enabling conditional motion prediction or goal-conditioned prediction . 2 RELATED WORK . Motion prediction architectures . Motion prediction models have flourished in recent years , due to the rise in interest in self-driving applications and the release of related datasets and benchmarks ( Kesten et al. , 2019 ; Chang et al. , 2019 ; Caesar et al. , 2020 ; Ettinger et al. , 2021 ) . Successful models must take into account the history of agent motion , and the elements of the road graph ( e.g. , lanes , stop lines , traffic light dynamic state ) . Furthermore , such models must learn the relationships between these agents in the context of the road graph environment . One class of models draws heavily upon the computer vision literature , rendering inputs as a multichannel rasterized top-down image ( Cui et al. , 2019 ; Chai et al.
, 2019 ; Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ; Zhao et al. , 2019 ) . In this approach , relationships between scene elements are captured via convolutional deep architectures . However , the localized structure of the receptive field makes capturing spatially-distant interactions challenging . A popular alternative is to use an entity-centric approach . With this approach , agent state history is typically encoded via sequence modeling techniques like RNNs ( Mercat et al. , 2020 ; Khandelwal et al. , 2020 ; Lee et al. , 2017 ; Alahi et al. , 2016 ; Rhinehart et al. , 2019 ) or temporal convolutions ( Liang et al. , 2020 ) . Road elements are approximated with basic primitives ( e.g . piecewise-linear segments ) which encode pose information and semantic type . Modeling relationships between entities is often presented as an information aggregation process , and models employ pooling ( Zhao et al. , 2020 ; Gao et al. , 2020 ; Lee et al. , 2017 ; Alahi et al. , 2016 ; Gupta et al. , 2018 ) , soft-attention ( Mercat et al. , 2020 ; Zhao et al. , 2020 ; Salzmann et al. , 2020 ) as well as graph neural networks ( Casas et al. , 2020a ; Liang et al. , 2020 ; Khandelwal et al. , 2020 ) . Like our proposed method , several recent models use Transformers ( Vaswani et al. , 2017 ) , composed of multi-headed attention layers . Transformers are a popular state-of-the-art choice for sequence modeling in natural language processing ( Brown et al. , 2020 ; Devlin et al. , 2019 ) , and have recently shown promise in core computer vision tasks such as detection ( Bello et al. , 2019 ; Carion et al. , 2020 ; Srinivas et al. , 2021 ) , tracking ( Hung et al. , 2020 ) and classification ( Ramachandran et al. , 2019 ; Vaswani et al. , 2021 ; Dosovitskiy et al. , 2021 ; Bello , 2013 ; Bello et al. , 2019 ) . 
For motion modeling , recent work has employed variations of self-attention and Transformers for modeling different axes : temporal trajectory encoding and decoding ( Yu et al. , 2020 ; Giuliari et al. , 2020 ; Yuan et al. , 2021 ) , encoding relationships between agents ( Li et al. , 2020 ; Park et al. , 2020 ; Yuan et al. , 2021 ; Yu et al. , 2020 ; Mercat et al. , 2020 ; Bhat et al. , 2020 ) , and encoding relationships with road elements . When applying self-attention over multiple axes , past work used independent self-attention for each axis ( Yu et al. , 2020 ) , or flattened two axes together into one joint self-attention layer ( Yuan et al. , 2021 ) – by comparison , our method proposes axis-factored attention to model relationships between time steps , agents , and road graph elements in a unified way . Scene-centric versus agent-centric representations . Another key design choice is the frame of reference in which the representation is encoded . Some models do a majority of modeling in a global , scene-level coordinate frame , such as work that employs a rasterized top-down image ( Cui et al. , 2019 ; Chai et al. , 2019 ; Lee et al. , 2017 ; Hong et al. , 2019 ; Casas et al. , 2020a ; Salzmann et al. , 2020 ) . This can lead to a more efficient model due to a single shared representation of world state in a common coordinate frame , but comes with the potential sacrifice of pose-invariance . On the other hand , models that reason in the agent-coordinate frame ( Mercat et al. , 2020 ; Zhao et al. , 2020 ; Khandelwal et al. , 2020 ) are intrinsically pose-invariant , but scale linearly with the number of agents , or quadratically with the number of pairwise interactions between agents . Many works employ a mix of a top-down raster representation for road representation fused with per-agent representations ( Rhinehart et al. , 2019 ; Tang & Salakhutdinov , 2019 ; Lee et al. , 2017 ) . Similar to our own work , LaneGCN ( Liang et al.
, 2020 ) is agent-centric yet representations are in a global frame – to the best of our knowledge , this is the only other work to do so . This enables efficient reasoning while capturing arbitrarily distant interactions and high-fidelity state representations without rasterization . Representing multi-agent futures . A common way to represent agent futures is via a weighted set of trajectories per agent ( Alahi et al. , 2016 ; Biktairov et al. , 2020 ; Buhet et al. , 2020 ; Casas et al. , 2020a ; b ; Chai et al. , 2019 ; Cui et al. , 2019 ; Gao et al. , 2020 ; Hong et al. , 2019 ; Lee et al. , 2017 ; Marchetti et al. , 2020 ; Mercat et al. , 2020 ; Salzmann et al. , 2020 ; Zhao et al. , 2020 ) . This representation is encouraged by benchmarks which primarily focus on per-agent distance error metrics ( Caesar et al. , 2020 ; Chang et al. , 2019 ; Zhan et al. , 2019 ) . We argue in this work that modeling joint futures in a multi-agent environment ( Figure 1 , right ) is an important concept that has been minimally explored in prior work . Some prior works consider a factorized pairwise joint distribution , where a subset of agent futures are conditioned on other agents – informally , modeling P ( X ) and P ( Y |X ) for agents X and Y ( Khandelwal et al. , 2020 ; Tolstaya et al. , 2021 ; Salzmann et al. , 2020 ) . To generalize joint prediction to arbitrary multi-agent settings , other work ( Tang & Salakhutdinov , 2019 ; Rhinehart et al. , 2019 ; Casas et al. , 2020b ; Suo et al. , 2021 ; Yeh et al. , 2019 ) iteratively rolls out samples per-agent , where each agent is conditioned on previously sampled trajectory steps . In contrast , our model directly decodes a set of k distinct joint futures with associated likelihoods . | This paper proposes a transformer-style architecture, Scene Transformer, for joint trajectory prediction of multiple agents (including the autonomous vehicle).
The key representation of Scene Transformer is a 3D tensor [A, T, D], with the dimensions being agent, time step, and feature dimension, and the entries that are to be predicted are masked out. The output is [F, A, T, 7], where F is the number of future predictions. The model outputs 7 values for each waypoint: x/y position and uncertainty in 3D space and heading. The model uses factorized self-attention applied on the A axis and T axis separately to transform the [A, T, D] representation tensor, and uses cross-attention to fuse in the static map features. The model can be trained to produce either marginal predictions or joint predictions depending on the loss function. Also, based on which entries in the input tensor are masked out, Scene Transformer can perform motion prediction, conditional motion prediction, and goal-conditioned motion prediction all in the same model. The Scene Transformer model was evaluated on Argoverse and the Waymo Open Motion Dataset (WOMD), and its marginal prediction achieves state-of-the-art performance on both datasets. The evaluation also shows that the model is able to perform joint prediction for the WOMD Interaction Prediction task and achieves better performance on the scene-level metrics compared to the baselines. The evaluation also shows that the Scene Transformer model is able to do conditional motion prediction and goal-conditioned motion prediction. | SP:60d09296513b878d3bd11542c3e86d4dba0a878a |
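The alternating, axis-factored attention over the [A, T, D] tensor described in this summary can be sketched in a few lines of numpy. This is an assumption-laden skeleton, not the model: it uses a single attention head with no learned projections, residuals, layer norm, or road-graph cross-attention:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head dot-product attention over the second-to-last axis of x.
    d = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(d)
    return softmax(scores) @ x

def axis_factored_layer(x):
    """x: [A, T, D]. Attend over time within each agent, then over agents
    within each time step, keeping the [A, T, D] shape throughout."""
    x = self_attention(x)                                        # time axis
    x = np.swapaxes(self_attention(np.swapaxes(x, 0, 1)), 0, 1)  # agent axis
    return x

out = axis_factored_layer(np.random.randn(4, 10, 16))
assert out.shape == (4, 10, 16)
```

Factoring attention this way keeps each softmax over a single axis (cost A·T² + T·A² rather than (A·T)² for flattened joint attention), which is the efficiency argument the summary alludes to.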
Bandit Learning with Joint Effect of Incentivized Sampling, Delayed Sampling Feedback, and Self-Reinforcing User Preferences | 1 INTRODUCTION . In recent years , the multi-armed bandit ( MAB ) framework has received a significant amount of interest in the learning research community . This is partly due to the fact that , in many online e-commerce recommender systems ( e.g. , Amazon and Walmart ) , the problem of online learning of the optimal products while making profits at the same time can be well formulated as an MAB problem . However , although many MAB algorithms have been proposed in this area , it is worth noting that most of the existing MAB models in the literature have not considered the joint effect of three common phenomena in e-commerce recommender systems : ( i ) In many e-commerce recommender systems , the platform ( the learning agent ) cannot sample an intended product ( an intended arm ) directly and has to incentivize customers ( e.g. , through promotions and coupons ) to sample the product and receive the sampling feedback from the customers indirectly ( e.g. , ratings and reviews ) ; ( ii ) Customer feedbacks are often received much later than their purchasing times ( e.g. , a review may or may not be submitted by a customer even months later after purchasing a product ) ; and ( iii ) Customer preferences among products are influenced and reinforced by historical feedbacks , which may even lead to various viral effects over some products ( the more good reviews one product has received , the more likely that the next arriving customer will prefer this product ) . The lack of a fundamental understanding and joint studies of these three important factors in MAB policy designs motivates us to fill this gap in this paper . Toward this end , we propose a new MAB framework that jointly considers i ) incentivized sampling , ii ) delayed sampling feedback , and iii ) self-reinforcing user preferences in online recommender systems .
However , we note that the MAB policy design for the proposed new MAB framework is highly non-trivial due to the complex couplings between the aforementioned three factors . First , similar to conventional MAB problems , there exists a dilemma between sufficient exploration through sampling to learn an optimal arm ( i.e. , an optimal product ) , which may incur numerous pullings of sub-optimal arms , and the greedy exploitation to play the arm that has performed well thus far to earn profits . Second , there is another dilemma for the learning agent between offering sufficiently attractive incentives to mitigate biases ( due to lack of initial data and self-reinforcing user preferences ) and avoiding unnecessarily high incentives that hurt the learning agent ’ s profits . Last but not least , the delayed sampling feedbacks may render the estimation of arms ’ quality during the MAB process highly inaccurate , introducing yet another layer of uncertainty to the MAB online learning problem , which is already plagued by complications from incentivized sampling and self-reinforcing user preferences . As in most MAB problems , we adopt “ regret ” as our performance metric in this paper , which is defined as the cumulative reward gap between the proposed policy and an optimal policy design in hindsight . Under the regret setting , the complications due to these three key factors naturally prompt the following fundamental questions : ( 1 ) How should the agent design an incentivizing strategy to strike a good balance between exploration and exploitation to achieve sublinear ( hopefully logarithmic ) regrets ? ( 2 ) To avoid offering exceedingly high incentives , how should the agent incentivize in order to attract a user crowd that prefers an optimal arm , so that the users ’ self-reinforcing preference could automatically gravitate toward this optimal arm without further incentives ? ( 3 ) Under various delayed feedback situations in the new MAB framework ( e.g.
, unbounded random delays , heavy-tailed delay distributions , and arm-dependent delays ) , could we still achieve low regrets with low incentive costs ? In this paper , we answer the above fundamental questions affirmatively by proposing a new “ Delayed-UCB-Filtering ” policy for the MAB framework that jointly considers incentivizing sampling , delayed sampling feedback , and self-reinforcing user preferences . We show that our proposed policy achieves O ( log T ) regret with O ( log T ) incentive payments . The success of our policy design hinges upon two key insights : ( i ) the self-reinforcing user preference effect is actually a “ blessing in disguise ” and can be leveraged to establish an important “ dominance ” condition ( more on this later ) that further implies O ( log T ) regret and incentive costs ; and ( ii ) the impacts of delayed feedback on regret and incentive costs can be upper bounded under appropriate statistical settings to preserve the “ dominance ” condition . Our key contributions and main results are summarized as follows : • We propose a new MAB model that jointly considers incentivized arm sampling , delayed sampling feedback , and self-reinforcing user preferences , all of which are important features of online recommender systems . To develop an efficient and low-cost incentivized policy for this new MAB model , we propose a three-phase “ UCB-Filtering-with-Delayed-Feedback ” ( UCB-FDF ) policy , which contains an incentivized exploration phase , an incentivized exploitation phase , and a self-sustaining phase . In our UCB-FDF policy , the first two phases judiciously integrate delayed feedback information , while in the third phase , the system solely relies on self-reinforcing user preferences to converge to the pulling of the optimal arm .
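The delayed-feedback ingredient of a UCB-style policy can be sketched as follows. This toy sketch is our own and covers only that ingredient: class and method names are hypothetical, and it omits the incentive payments and the three-phase structure of UCB-FDF:

```python
import math

class DelayedFeedbackUCB:
    """UCB index computed only from feedback that has already arrived."""
    def __init__(self, num_arms):
        self.sums = [0.0] * num_arms
        self.counts = [0] * num_arms      # feedbacks received, not pulls made
        self.pending = []                 # (arrival_time, arm, reward)

    def select(self, t):
        # Deliver every feedback whose random delay has elapsed by time t.
        due = [p for p in self.pending if p[0] <= t]
        self.pending = [p for p in self.pending if p[0] > t]
        for _, arm, reward in due:
            self.sums[arm] += reward
            self.counts[arm] += 1
        # Play any arm with no received feedback yet, else the best UCB index.
        for arm, n in enumerate(self.counts):
            if n == 0:
                return arm
        return max(range(len(self.counts)),
                   key=lambda a: self.sums[a] / self.counts[a]
                   + math.sqrt(2.0 * math.log(t + 1) / self.counts[a]))

    def record(self, t, arm, reward, delay):
        self.pending.append((t + delay, arm, reward))
```

Note how the estimates lag behind the pulls: an arm sampled many times can still look unexplored until its feedback arrives, which is exactly the extra uncertainty the paper's analysis has to bound.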
• We first show a fundamental fact that , under our UCB-FDF policy , delayed sampling feedback only has an additive penalty on the regret and incentive cost performances , and that this additive penalty grows logarithmically with respect to time . Specifically , we first investigate the delayed feedback impact under the assumption that the feedback delay is an i.i.d . random variable across samplings with a finite expectation . We show that the UCB-FDF policy achieves logarithmic growth rates of regret and incentive costs under this setting . Then , we relax the i.i.d . feedback delay assumption to allow the feedback delay distribution to be arm-dependent . Under this setting , we also show that similar logarithmic growth rates of regret and incentive costs can still be achieved . • We conduct extensive experiments on Amazon Review Data 1 to demonstrate and verify the performance of our UCB-FDF policy as well as the impacts of delayed feedback in real-world scenarios . We also verify our theoretical analysis through various product categories and demonstrate the efficacy of our proposed UCB-FDF MAB policy . The rest of the paper is organized as follows . In Section 2 , we review the literature to put our work in comparative perspectives . Then in Section 3 , we formulate our new MAB model that captures the three common phenomena . In Section 4 , we present our UCB-FDF policy and analyze its performance . Then , we present our experiment settings and results in Section 5 . Due to space limitations , the proofs are relegated to the appendix . 1 https://nijianmo.github.io/amazon/ 2 RELATED WORK . In this section , we provide a quick overview on three lines of research related to our work : i ) bandits with delayed feedback , ii ) bandits with random preferences , and iii ) incentivized bandits .
1 ) Bandits with Delayed Feedback : Motivated by practical issues in clinical trials , Eick ( 1988 ) was the first to introduce a two-armed bandit model with delayed responses , where the patients ’ survival time reports after the treatment are delayed . Recently , Joulani et al . ( 2013 ) provided a systematic study and showed that for delay τ with a finite expectation , the worst-case regret scales with O ( √ ( KT log T ) + KE [ τ ] ) , where K is the number of arms . Meanwhile , Vernade et al . ( 2017 ) showed that stochastic MAB problems with delayed feedback have a regret lower bound O ( K log T ) . However , this work assumed that the distribution of the random delay is arm-independent . In contrast , Joulani et al . ( 2013 ) considered arm-dependent delay distributions that have an upper bound of the maximum random delay . More recently , Manegueu et al . ( 2020 ) considered arm-dependent and heavy-tailed delay distributions , where only an upper bound on the tail of the delay distribution is needed , without requiring the expectation to be finite . Also , Lancewicki et al . ( 2021 ) studied the case where the delay distribution is reward-dependent , which implies that the random delay in each round may also depend on the reward received on the same round . However , most of these works on delayed bandits are based on the standard stochastic MAB framework . In contrast , we consider delayed feedback in incentivized bandit learning with self-reinforcing user preferences , which is a more appropriate model for real-world recommender systems than the standard stochastic MAB . 2 ) Bandits with Random Preferences : The impacts of random user preferences in e-commerce platforms have received increasing interest in several different areas in learning and economics . Existing works in ( Agrawal et al.
, 2017 ; 2019 ) formulated the user preference variation given different product bundles by the multinomial logit model on top of the bandit learning framework and proposed a Thompson Sampling approach that achieves a worst-case regret bound of O ( √ NT log TK ) , where N is the size of the recommended arm bundle . With a different focus on preference modeling , Barabási & Albert ( 1999 ) ; Chakrabarti et al . ( 2006 ) ; Ratkiewicz et al . ( 2010 ) investigated the network evolution with “ preferential attachment ” that formulates the social behavior known as self-reinforcing preferences , among which the works in ( Shah et al. , 2018 ; Zhou et al. , 2021 ) are the closest to our work . To our knowledge , Shah et al . ( 2018 ) was the first to consider self-reinforcing user preferences in bandit learning problems . Later , Zhou et al . ( 2021 ) incorporated self-reinforcing user preferences into the incentivized bandit learning framework . The key difference between these two works is that , in the model in ( Shah et al. , 2018 ) , only one arm is revealed to users in each round , while in the model in ( Zhou et al. , 2021 ) , all arms are revealed to users and users ’ arm selections are influenced by incentives . However , both of these works fall short in modeling online recommender systems in practice as they assume that an arm-sampling feedback is observable in the same timeslot when an arm is pulled . However , for most e-commerce recommender systems in practice , user feedbacks are often not immediately observable . As a result , the decision on which arm to pull next has to be made without some of the feedbacks from arm-pulling actions in the past . 3 ) Incentivized Bandits : To our knowledge , Frazier et al . ( 2014 ) was among the first to adopt incentive schemes into a Bayesian MAB setting . In their model , the agent seeks to maximize time-discounted total reward by incentivizing arm selections . Later , Mansour et al .
( 2015 ) studied the non-discounted reward setting . For the non-Bayesian setting , Wang & Huang ( 2018 ) ; Zhou et al . ( 2021 ) proposed policies that maximize the total non-discounted reward with bounded incentive costs . Bandits with budget ( Guha & Munagala , 2007 ; Goel et al. , 2009 ) also share some similarities with our work , where the agent takes actions under resource constraints that are either fixed or with a given growth rate bound . However , none of the aforementioned works considered the impacts of delayed feedback on the regret and incentive cost performances . Note that , due to the loss of information caused by delayed feedback , larger variances in the mean-reward estimations of the arms are inevitable . This implies that , in order to achieve a more accurate arm quality estimation under delayed feedbacks , a higher incentive cost is necessary . | This paper studies a stochastic multi-armed bandit setting with delayed feedback, where additionally, the bandit policy must incentivize an external agent to pull the desired arm, and the arm preferences of this external agent are self-reinforcing. They design a policy, UCB-FDF, for this setting, and prove expected regret (and incentive payment) upper bounds under various delay distribution assumptions. Finally, they evaluate their policy on bandit environments constructed using Amazon review data. | SP:3cf1f87013cc894e168dbfb5db6eb8b0882a8d1b |
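The self-reinforcing preference dynamic discussed above resembles a Pólya-urn process: each user picks an arm with probability proportional to its accumulated positive feedback. The following toy sketch is our own illustration, not the paper's model; the function name and counts are hypothetical:

```python
import random

def next_user_choice(feedback_counts, rng):
    """Pick an arm with probability proportional to its accumulated
    positive feedback (a Polya-urn-style preferential attachment)."""
    total = sum(feedback_counts)
    r = rng.random() * total
    acc = 0.0
    for arm, count in enumerate(feedback_counts):
        acc += count
        if r < acc:
            return arm
    return len(feedback_counts) - 1

rng = random.Random(0)
counts = [1, 1, 10]  # arm 2 already has many good reviews
picks = [next_user_choice(counts, rng) for _ in range(1000)]
# Arm 2 dominates the choices even with no further incentives paid,
# illustrating why early feedback can gravitate users toward one arm.
```

This is also why the self-sustaining phase of such a policy can work: once the optimal arm dominates the feedback counts, user preferences converge to it without additional incentive spending.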
Bandit Learning with Joint Effect of Incentivized Sampling, Delayed Sampling Feedback, and Self-Reinforcing User Preferences | 1 INTRODUCTION . In recent years , the multi-armed bandit ( MAB ) framework has received a significant amount of interest in the learning research community . This is partly due to the fact that , in many online e-commerce recommender systems ( e.g. , Amazon and Walmart ) , the problem of online learning of the optimal products while making profits at the same time can be well formulated as an MAB problem . However , although many MAB algorithms have been proposed in this area , it is worth noting that most of the existing MAB models in the literature have not considered the joint effect of three common phenomena in e-commerce recommender systems : ( i ) In many e-commerce recommender systems , the platform ( the learning agent ) cannot sample an intended product ( an intended arm ) directly and has to incentivize customers ( e.g. , through promotions and coupons ) to sample the product and receive the sampling feedback from the customers indirectly ( e.g. , ratings and reviews ) ; ( ii ) Customer feedbacks are often received much later than their purchasing times ( e.g. , a review may or may not be submitted by a customer even months later after purchasing a product ) ; and ( iii ) Customer preferences among products are influenced and reinforced by historical feedbacks , which may even lead to various viral effects over some products ( the more good reviews one product has received , the more likely that the next arriving customer will prefer this product ) . The lack of a fundamental understanding and joint studies of these three important factors in MAB policy designs motivates us to fill this gap in this paper . Toward this end , we propose a new MAB framework that jointly considers i ) incentivized sampling , ii ) delayed sampling feedback , and iii ) self-reinforcing user preferences in online recommender systems .
However, we note that MAB policy design for the proposed framework is highly non-trivial due to the complex couplings between the aforementioned three factors. First, as in conventional MAB problems, there is a dilemma between sufficient exploration through sampling to learn an optimal arm (i.e., an optimal product), which may incur numerous pulls of sub-optimal arms, and greedy exploitation of the arm that has performed well thus far to earn profits. Second, the learning agent faces another dilemma between offering sufficiently attractive incentives to mitigate biases (due to the lack of initial data and to self-reinforcing user preferences) and avoiding unnecessarily high incentive spending that hurts the agent's profits. Last but not least, delayed sampling feedback may render the estimation of the arms' quality during the MAB process highly inaccurate, introducing yet another layer of uncertainty to an online learning problem already plagued by complications from incentivized sampling and self-reinforcing user preferences. As in most MAB problems, we adopt "regret" as our performance metric, defined as the cumulative reward gap between the proposed policy and an optimal policy designed in hindsight. Under the regret setting, the complications due to these three key factors naturally prompt the following fundamental questions: (1) How should the agent design an incentivizing strategy that strikes a good balance between exploration and exploitation to achieve sublinear (hopefully logarithmic) regret? (2) To avoid offering exceedingly high incentives, how should the agent incentivize in order to attract a user crowd that prefers an optimal arm, so that the users' self-reinforcing preferences automatically gravitate toward this optimal arm without further incentives? (3) Under the various delayed-feedback situations in the new MAB framework (e.g.
, unbounded random delays, heavy-tailed delay distributions, and arm-dependent delays), can we still achieve low regret with low incentive costs? In this paper, we answer the above fundamental questions affirmatively by proposing a new "Delayed-UCB-Filtering" policy for the MAB framework that jointly considers incentivized sampling, delayed sampling feedback, and self-reinforcing user preferences. We show that our proposed policy achieves O(log T) regret with O(log T) incentive payments. The success of our policy design hinges on two key insights: (i) the self-reinforcing user preference effect is actually a "blessing in disguise" and can be leveraged to establish an important "dominance" condition (more on this later) that in turn implies O(log T) regret and incentive costs; and (ii) the impacts of delayed feedback on regret and incentive costs can be upper bounded under appropriate statistical settings so as to preserve the "dominance" condition. Our key contributions and main results are summarized as follows: • We propose a new MAB model that jointly considers incentivized arm sampling, delayed sampling feedback, and self-reinforcing user preferences, all of which are important features of online recommender systems. To develop an efficient and low-cost incentivized policy for this new MAB model, we propose a three-phase "UCB-Filtering-with-Delayed-Feedback" (UCB-FDF) policy, which consists of an incentivized exploration phase, an incentivized exploitation phase, and a self-sustaining phase. In our UCB-FDF policy, the first two phases judiciously integrate delayed feedback information, while in the third phase the system relies solely on self-reinforcing user preferences to converge to pulling the optimal arm.
• We first show a fundamental fact: under our UCB-FDF policy, delayed sampling feedback imposes only an additive penalty on regret and incentive cost performance, and this additive penalty grows logarithmically with time. Specifically, we first investigate the impact of delayed feedback under the assumption that the feedback delay is an i.i.d. random variable across samplings with a finite expectation. We show that the UCB-FDF policy achieves logarithmic growth rates of regret and incentive costs in this setting. Then, we relax the i.i.d. feedback-delay assumption to allow the feedback delay distribution to be arm-dependent. Under this setting, we show that similar logarithmic growth rates of regret and incentive costs can still be achieved. • We conduct extensive experiments on Amazon Review Data¹ to demonstrate and verify the performance of our UCB-FDF policy, as well as the impact of delayed feedback in real-world scenarios. We also verify our theoretical analysis across various product categories and demonstrate the efficacy of our proposed UCB-FDF MAB policy. The rest of the paper is organized as follows. In Section 2, we review the literature to put our work in comparative perspective. In Section 3, we formulate our new MAB model that captures the three common phenomena. In Section 4, we present our UCB-FDF policy and analyze its performance. We then present our experiment settings and results in Section 5. Due to space limitations, the proofs are relegated to the appendix. ¹ https://nijianmo.github.io/amazon/ 2 RELATED WORK. In this section, we provide a quick overview of three lines of research related to our work: i) bandits with delayed feedback, ii) bandits with random preferences, and iii) incentivized bandits.
1) Bandits with Delayed Feedback: Motivated by practical issues in clinical trials, Eick (1988) was the first to introduce a two-armed bandit model with delayed responses, where patients' survival-time reports after treatment are delayed. Recently, Joulani et al. (2013) provided a systematic study and showed that, for delays τ with a finite expectation, the worst-case regret scales as O(√(KT log T) + K E[τ]), where K is the number of arms. Meanwhile, Vernade et al. (2017) showed that stochastic MAB problems with delayed feedback have a regret lower bound of O(K log T). However, that work assumed that the distribution of the random delay is arm-independent. In contrast, Joulani et al. (2013) considered arm-dependent delay distributions with an upper bound on the maximum random delay. More recently, Manegueu et al. (2020) considered arm-dependent and heavy-tailed delay distributions, where only an upper bound on the tail of the delay distribution is needed, without requiring the expectation to be finite. Also, Lancewicki et al. (2021) studied the case where the delay distribution is reward-dependent, meaning that the random delay in each round may also depend on the reward received in the same round. However, most of these works on delayed bandits are based on the standard stochastic MAB framework. In contrast, we consider delayed feedback in incentivized bandit learning with self-reinforcing user preferences, which is a more appropriate model for real-world recommender systems than the standard stochastic MAB. 2) Bandits with Random Preferences: The impact of random user preferences in e-commerce platforms has received increasing interest in several areas of learning and economics. Existing works in (Agrawal et al.
, 2017; 2019) formulated the user preference variation over different product bundles via the multinomial logit model on top of the bandit learning framework and proposed a Thompson Sampling approach that achieves a worst-case regret bound of O(√NT log TK), where N is the size of the recommended arm bundle. With a different focus on preference modeling, Barabási & Albert (1999); Chakrabarti et al. (2006); Ratkiewicz et al. (2010) investigated network evolution under "preferential attachment," which formalizes the social behavior known as self-reinforcing preferences; among these, the works in (Shah et al., 2018; Zhou et al., 2021) are the closest to ours. To our knowledge, Shah et al. (2018) was the first to consider self-reinforcing user preferences in bandit learning problems. Later, Zhou et al. (2021) incorporated self-reinforcing user preferences into the incentivized bandit learning framework. The key difference between these two works is that, in the model of (Shah et al., 2018), only one arm is revealed to users in each round, while in the model of (Zhou et al., 2021), all arms are revealed to users and users' arm selections are influenced by incentives. However, both of these works fall short of modeling online recommender systems in practice, as they assume that arm-sampling feedback is observable in the same time slot in which the arm is pulled. In most e-commerce recommender systems in practice, user feedback is often not immediately observable. As a result, the decision on which arm to pull next has to be made without some of the feedback from past arm-pulling actions. 3) Incentivized Bandits: To our knowledge, Frazier et al. (2014) was among the first to adopt incentive schemes in a Bayesian MAB setting. In their model, the agent seeks to maximize the time-discounted total reward by incentivizing arm selections. Later, Mansour et al.
(2015) studied the non-discounted reward setting. For the non-Bayesian setting, Wang & Huang (2018); Zhou et al. (2021) proposed policies that maximize the total non-discounted reward with bounded incentive costs. Bandits with budget (Guha & Munagala, 2007; Goel et al., 2009) also share some similarities with our work: the agent takes actions under resource constraints that are either fixed or have a given growth-rate bound. However, none of the aforementioned works considered the impact of delayed feedback on regret and incentive cost performance. Note that, due to the loss of information caused by delayed feedback, larger variances in the mean-reward estimates of the arms are inevitable. This implies that, to achieve a more accurate arm-quality estimate under delayed feedback, a higher incentive cost is necessary. | This paper combines three aspects of MAB: delayed rewards, incentivized exploration, and self-reinforcing user preferences. They motivate this problem from the perspective of online recommender systems. For this model, they propose a new UCB-based algorithm that achieves the optimal upper bounds. They also set up an online experiment based on Amazon review data and show how the regret evolves for both arm-independent delays and arm-dependent delays. | SP:3cf1f87013cc894e168dbfb5db6eb8b0882a8d1b |
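The three-phase structure described in the contributions above (incentivized exploration, incentivized UCB exploitation, self-sustaining) can be sketched schematically. This is a hypothetical reconstruction, not the paper's exact UCB-FDF algorithm: the phase lengths, the UCB bonus, and the `env.step` interface are all assumptions made for illustration.

```python
import math

class NoDelayEnv:
    """Minimal stand-in environment for illustration: the incentivized arm is always
    followed, feedback arrives immediately, and without an incentive users default
    to the best-reviewed arm (index 1 in this toy)."""
    def step(self, arm):
        pulled = arm if arm is not None else 1
        reward = 1.0 if pulled == 1 else 0.0
        return pulled, [(pulled, reward)]

def ucb_fdf_sketch(env, K, explore_rounds, exploit_rounds, horizon):
    """Schematic three-phase policy in the spirit of UCB-FDF (hypothetical):
    incentivized round-robin exploration, incentivized UCB exploitation on the
    feedback observed so far, then a self-sustaining phase with no incentives."""
    counts, sums, incentive_cost = [0] * K, [0.0] * K, 0
    for t in range(1, horizon + 1):
        if t <= explore_rounds:                       # Phase 1: incentivized exploration
            arm, paid = (t - 1) % K, 1
        elif t <= explore_rounds + exploit_rounds:    # Phase 2: incentivized UCB exploitation
            def ucb(a):
                if counts[a] == 0:
                    return float("inf")
                return sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
            arm, paid = max(range(K), key=ucb), 1
        else:                                         # Phase 3: self-sustaining, no incentive
            arm, paid = None, 0
        incentive_cost += paid
        _, observed = env.step(arm)                   # with delays, feedback arrives here later
        for a, r in observed:                         # estimates use observed feedback only
            counts[a] += 1
            sums[a] += r
    return counts, sums, incentive_cost
```

Note how the incentive cost stops accumulating once Phase 3 begins; the paper's analysis shows when self-reinforcing preferences make this hand-off safe.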
Bandit Learning with Joint Effect of Incentivized Sampling, Delayed Sampling Feedback, and Self-Reinforcing User Preferences | 1 INTRODUCTION. In recent years, the multi-armed bandit (MAB) framework has received a significant amount of interest in the learning research community. This is partly because, in many online e-commerce recommender systems (e.g., Amazon and Walmart), the problem of learning the optimal products online while simultaneously making profits can be well formulated as an MAB problem. However, although many MAB algorithms have been proposed in this area, it is worth noting that most existing MAB models in the literature have not considered the joint effect of three common phenomena in e-commerce recommender systems: (i) in many e-commerce recommender systems, the platform (the learning agent) cannot sample an intended product (an intended arm) directly and has to incentivize customers (e.g., through promotions and coupons) to sample the product, receiving the sampling feedback from the customers indirectly (e.g., as ratings and reviews); (ii) customer feedback is often received much later than the purchase time (e.g., a review may or may not be submitted by a customer even months after purchasing a product); and (iii) customer preferences among products are influenced and reinforced by historical feedback, which may even lead to viral effects for some products (the more good reviews a product has received, the more likely the next arriving customer is to prefer it). The lack of a fundamental understanding and joint study of these three important factors in MAB policy design motivates us to fill this gap in this paper. Toward this end, we propose a new MAB framework that jointly considers i) incentivized sampling, ii) delayed sampling feedback, and iii) self-reinforcing user preferences in online recommender systems.
However, we note that MAB policy design for the proposed framework is highly non-trivial due to the complex couplings between the aforementioned three factors. First, as in conventional MAB problems, there is a dilemma between sufficient exploration through sampling to learn an optimal arm (i.e., an optimal product), which may incur numerous pulls of sub-optimal arms, and greedy exploitation of the arm that has performed well thus far to earn profits. Second, the learning agent faces another dilemma between offering sufficiently attractive incentives to mitigate biases (due to the lack of initial data and to self-reinforcing user preferences) and avoiding unnecessarily high incentive spending that hurts the agent's profits. Last but not least, delayed sampling feedback may render the estimation of the arms' quality during the MAB process highly inaccurate, introducing yet another layer of uncertainty to an online learning problem already plagued by complications from incentivized sampling and self-reinforcing user preferences. As in most MAB problems, we adopt "regret" as our performance metric, defined as the cumulative reward gap between the proposed policy and an optimal policy designed in hindsight. Under the regret setting, the complications due to these three key factors naturally prompt the following fundamental questions: (1) How should the agent design an incentivizing strategy that strikes a good balance between exploration and exploitation to achieve sublinear (hopefully logarithmic) regret? (2) To avoid offering exceedingly high incentives, how should the agent incentivize in order to attract a user crowd that prefers an optimal arm, so that the users' self-reinforcing preferences automatically gravitate toward this optimal arm without further incentives? (3) Under the various delayed-feedback situations in the new MAB framework (e.g.
, unbounded random delays, heavy-tailed delay distributions, and arm-dependent delays), can we still achieve low regret with low incentive costs? In this paper, we answer the above fundamental questions affirmatively by proposing a new "Delayed-UCB-Filtering" policy for the MAB framework that jointly considers incentivized sampling, delayed sampling feedback, and self-reinforcing user preferences. We show that our proposed policy achieves O(log T) regret with O(log T) incentive payments. The success of our policy design hinges on two key insights: (i) the self-reinforcing user preference effect is actually a "blessing in disguise" and can be leveraged to establish an important "dominance" condition (more on this later) that in turn implies O(log T) regret and incentive costs; and (ii) the impacts of delayed feedback on regret and incentive costs can be upper bounded under appropriate statistical settings so as to preserve the "dominance" condition. Our key contributions and main results are summarized as follows: • We propose a new MAB model that jointly considers incentivized arm sampling, delayed sampling feedback, and self-reinforcing user preferences, all of which are important features of online recommender systems. To develop an efficient and low-cost incentivized policy for this new MAB model, we propose a three-phase "UCB-Filtering-with-Delayed-Feedback" (UCB-FDF) policy, which consists of an incentivized exploration phase, an incentivized exploitation phase, and a self-sustaining phase. In our UCB-FDF policy, the first two phases judiciously integrate delayed feedback information, while in the third phase the system relies solely on self-reinforcing user preferences to converge to pulling the optimal arm.
• We first show a fundamental fact: under our UCB-FDF policy, delayed sampling feedback imposes only an additive penalty on regret and incentive cost performance, and this additive penalty grows logarithmically with time. Specifically, we first investigate the impact of delayed feedback under the assumption that the feedback delay is an i.i.d. random variable across samplings with a finite expectation. We show that the UCB-FDF policy achieves logarithmic growth rates of regret and incentive costs in this setting. Then, we relax the i.i.d. feedback-delay assumption to allow the feedback delay distribution to be arm-dependent. Under this setting, we show that similar logarithmic growth rates of regret and incentive costs can still be achieved. • We conduct extensive experiments on Amazon Review Data¹ to demonstrate and verify the performance of our UCB-FDF policy, as well as the impact of delayed feedback in real-world scenarios. We also verify our theoretical analysis across various product categories and demonstrate the efficacy of our proposed UCB-FDF MAB policy. The rest of the paper is organized as follows. In Section 2, we review the literature to put our work in comparative perspective. In Section 3, we formulate our new MAB model that captures the three common phenomena. In Section 4, we present our UCB-FDF policy and analyze its performance. We then present our experiment settings and results in Section 5. Due to space limitations, the proofs are relegated to the appendix. ¹ https://nijianmo.github.io/amazon/ 2 RELATED WORK. In this section, we provide a quick overview of three lines of research related to our work: i) bandits with delayed feedback, ii) bandits with random preferences, and iii) incentivized bandits.
1) Bandits with Delayed Feedback: Motivated by practical issues in clinical trials, Eick (1988) was the first to introduce a two-armed bandit model with delayed responses, where patients' survival-time reports after treatment are delayed. Recently, Joulani et al. (2013) provided a systematic study and showed that, for delays τ with a finite expectation, the worst-case regret scales as O(√(KT log T) + K E[τ]), where K is the number of arms. Meanwhile, Vernade et al. (2017) showed that stochastic MAB problems with delayed feedback have a regret lower bound of O(K log T). However, that work assumed that the distribution of the random delay is arm-independent. In contrast, Joulani et al. (2013) considered arm-dependent delay distributions with an upper bound on the maximum random delay. More recently, Manegueu et al. (2020) considered arm-dependent and heavy-tailed delay distributions, where only an upper bound on the tail of the delay distribution is needed, without requiring the expectation to be finite. Also, Lancewicki et al. (2021) studied the case where the delay distribution is reward-dependent, meaning that the random delay in each round may also depend on the reward received in the same round. However, most of these works on delayed bandits are based on the standard stochastic MAB framework. In contrast, we consider delayed feedback in incentivized bandit learning with self-reinforcing user preferences, which is a more appropriate model for real-world recommender systems than the standard stochastic MAB. 2) Bandits with Random Preferences: The impact of random user preferences in e-commerce platforms has received increasing interest in several areas of learning and economics. Existing works in (Agrawal et al.
, 2017; 2019) formulated the user preference variation over different product bundles via the multinomial logit model on top of the bandit learning framework and proposed a Thompson Sampling approach that achieves a worst-case regret bound of O(√NT log TK), where N is the size of the recommended arm bundle. With a different focus on preference modeling, Barabási & Albert (1999); Chakrabarti et al. (2006); Ratkiewicz et al. (2010) investigated network evolution under "preferential attachment," which formalizes the social behavior known as self-reinforcing preferences; among these, the works in (Shah et al., 2018; Zhou et al., 2021) are the closest to ours. To our knowledge, Shah et al. (2018) was the first to consider self-reinforcing user preferences in bandit learning problems. Later, Zhou et al. (2021) incorporated self-reinforcing user preferences into the incentivized bandit learning framework. The key difference between these two works is that, in the model of (Shah et al., 2018), only one arm is revealed to users in each round, while in the model of (Zhou et al., 2021), all arms are revealed to users and users' arm selections are influenced by incentives. However, both of these works fall short of modeling online recommender systems in practice, as they assume that arm-sampling feedback is observable in the same time slot in which the arm is pulled. In most e-commerce recommender systems in practice, user feedback is often not immediately observable. As a result, the decision on which arm to pull next has to be made without some of the feedback from past arm-pulling actions. 3) Incentivized Bandits: To our knowledge, Frazier et al. (2014) was among the first to adopt incentive schemes in a Bayesian MAB setting. In their model, the agent seeks to maximize the time-discounted total reward by incentivizing arm selections. Later, Mansour et al.
(2015) studied the non-discounted reward setting. For the non-Bayesian setting, Wang & Huang (2018); Zhou et al. (2021) proposed policies that maximize the total non-discounted reward with bounded incentive costs. Bandits with budget (Guha & Munagala, 2007; Goel et al., 2009) also share some similarities with our work: the agent takes actions under resource constraints that are either fixed or have a given growth-rate bound. However, none of the aforementioned works considered the impact of delayed feedback on regret and incentive cost performance. Note that, due to the loss of information caused by delayed feedback, larger variances in the mean-reward estimates of the arms are inevitable. This implies that, to achieve a more accurate arm-quality estimate under delayed feedback, a higher incentive cost is necessary. | This paper proposes a multi-armed bandit (MAB) framework with three realistic considerations: incentivized sampling, delayed feedback, and self-reinforcing preferences. The paper proposes a 'UCB-Filtering-with-Delayed-Feedback' (UCB-FDF) policy for the new MAB framework. For general feedback delays with bounded expectations, the authors showed that delayed sampling feedback has an additive penalty on regret and incentive costs, then utilized this key fact to show that the UCB-FDF policy achieves logarithmic regret and incentive cost in the new MAB framework. The theoretical bounds are verified by experiments on instances with 3 arms using Amazon Review Data. | SP:3cf1f87013cc894e168dbfb5db6eb8b0882a8d1b |
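The "preferential attachment" dynamic discussed in the related work above (the more good reviews an arm has, the more likely the next user is to choose it) can be illustrated with a minimal simulation. All parameters here are assumptions for illustration; in this toy, only one arm actually satisfies users, so its small head start snowballs.

```python
import random

def preferential_choice(weights, rng):
    """Choose an arm with probability proportional to its accumulated positive
    feedback -- the 'rich get richer' dynamic behind self-reinforcing preferences."""
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for arm, w in enumerate(weights):
        acc += w
        if r <= acc:
            return arm
    return len(weights) - 1

def simulate(rounds=5000, seed=0):
    rng = random.Random(seed)
    feedback = [1.0, 2.0, 1.0]          # arm 1 starts with a small head start
    for _ in range(rounds):
        arm = preferential_choice(feedback, rng)
        if arm == 1:                    # only arm 1 actually satisfies users in this toy
            feedback[arm] += 1.0        # its review mass, hence its pull probability, grows
    return [w / sum(feedback) for w in feedback]
```

After a few thousand rounds, arm 1 absorbs nearly all of the choice probability without any incentives, which is the "self-sustaining" behavior the paper's third phase relies on.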
WHY FLATNESS DOES AND DOES NOT CORRELATE WITH GENERALIZATION FOR DEEP NEURAL NETWORKS | 1 INTRODUCTION. Among the most important theoretical questions in the field of deep learning are: 1) what characterizes functions that exhibit good generalization? and 2) why do overparameterized deep neural networks (DNNs) converge to this small subset of functions that do not overfit? Perhaps the most popular hypothesis is that good generalization performance is linked to flat minima. In pioneering works (Hinton & van Camp, 1993; Hochreiter & Schmidhuber, 1997), the minimum description length (MDL) principle (Rissanen, 1978) was invoked to argue that, since flatter minima require less information to describe, they should generalize better than sharp minima. Most measures of flatness approximate the local curvature of the loss surface, typically defining flatter minima to be those with smaller Hessian eigenvalues (Keskar et al., 2016; Wu et al., 2017; Zhang et al., 2018; Sagun et al., 2016; Yao et al., 2018). Another commonly held belief is that stochastic gradient descent (SGD) is itself biased towards flatter minima, and that this inductive bias helps explain why DNNs generalize so well (Keskar et al., 2016; Jastrzebski et al., 2018; Wu et al., 2017; Zhang et al., 2018; Yao et al., 2018; Wei & Schwab, 2019; Maddox et al., 2020). For example, Keskar et al. (2016) developed a measure of flatness that they found correlated with improved generalization performance when decreasing the batch size, suggesting that SGD is itself biased towards flatter minima. We note that others (Goyal et al., 2017; Hoffer et al., 2017; Smith et al., 2017; Mingard et al., 2021a) have argued that the effect of batch size can be compensated by changes in learning rate, complicating some conclusions of Keskar et al. (2016).
Nevertheless, the argument that SGD is itself somehow biased towards flat minima remains widespread in the literature. In an important critique of local flatness measures, Dinh et al. (2017) pointed out that DNNs with ReLU activations can be re-parameterized through a simple parameter-rescaling transformation T_α : (w_1, w_2) ↦ (α w_1, α⁻¹ w_2), (1) where w_1 are the weights between an input layer and a single hidden layer, and w_2 are the weights between this hidden layer and the outputs. This transformation can be extended to any architecture having at least one rectified network layer. The function that the DNN represents, and thus how it generalizes, is invariant under parameter-rescaling transformations, but the derivatives w.r.t. the parameters, and therefore many flatness measures used in the literature, can be changed arbitrarily. Ergo, the correlation between flatness and generalization can be changed arbitrarily. Several recent studies have attempted to find "scale-invariant" flatness metrics (Petzka et al., 2019; Rangamani et al., 2019; Tsuzuku et al., 2019). The main idea is to multiply layer-wise Hessian eigenvalues by a factor of ‖w_i‖², which renders the metric immune to layer-wise re-parameterization. While these new metrics look promising experimentally, they are only scale-invariant when the scaling is layer-wise. Other kinds of rescaling (e.g., neuron-wise rescaling) can still change the metrics, so the general problem remains unsolved. 1.1 MAIN CONTRIBUTIONS. 1. For a series of classic image classification tasks (MNIST and CIFAR-10) we show that flatness measures change substantially as a function of the number of training epochs. Parameter rescaling can change flatness arbitrarily, but it quickly recovers to a more typical value under further training. We also demonstrate that some variants of SGD exhibit significantly worse correlation of flatness with generalization than vanilla SGD does.
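The rescaling argument of Dinh et al. (2017), Eq. (1), can be checked numerically on a tiny one-hidden-layer ReLU network with scalar input. The network, weights, and finite-difference curvature proxy below are illustrative choices, not the paper's experimental setup: the represented function is unchanged by T_α, while parameter derivatives (and hence derivative-based flatness measures) change by orders of magnitude.

```python
def relu_net(w1, w2, x):
    """One-hidden-layer ReLU network on a scalar input x:
    y = sum_i w2[i] * relu(w1[i] * x)."""
    hidden = [max(0.0, wi * x) for wi in w1]
    return sum(v * h for v, h in zip(w2, hidden))

def rescale(w1, w2, alpha):
    """Dinh et al.'s transformation T_alpha: (w1, w2) -> (alpha*w1, w2/alpha)."""
    return [alpha * wi for wi in w1], [vi / alpha for vi in w2]

def grad_sq_norm(w1, w2, x, eps=1e-6):
    """Finite-difference squared gradient norm of the output w.r.t. all weights --
    a crude stand-in for flatness measures that depend on parameter derivatives."""
    params = w1 + w2
    base = relu_net(w1, w2, x)
    g2 = 0.0
    for i in range(len(params)):
        bumped = params[:]
        bumped[i] += eps
        y = relu_net(bumped[:len(w1)], bumped[len(w1):], x)
        g2 += ((y - base) / eps) ** 2
    return g2
```

For example, with `w1, w2 = [1.0, -2.0], [0.5, 1.5]` and `alpha = 10`, the output at `x = 2.0` is identical before and after rescaling, while the squared gradient norm grows by roughly two orders of magnitude.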
In other words, popular measures of flatness sometimes do and sometimes do not correlate with generalization. This mixed performance problematizes the widely held intuition that DNNs generalize well fundamentally because SGD or its variants are themselves biased towards flat minima. 2. We next study the correlation of the Bayesian prior P(f) with the generalization performance of a DNN that converges to that function f. This prior is the weighted probability of obtaining function f upon random sampling of the parameters. Motivated by a theoretical argument derived from a non-uniform convergence generalization bound, we show empirically that log P(f) correlates robustly with test error, even when local flatness measures fail miserably, for example upon parameter rescaling. For discrete input/output problems (such as classification), P(f) can also be interpreted as the weighted "volume" of parameters that map to f. Intuitively, we expect local flatness measures to typically be smaller (flatter) for systems with larger volumes. Nevertheless, there may also be regions of parameter space where local derivatives and flatness measures vary substantially, even if on average they correlate with the volume. Thus flatness measures can be viewed as (imperfect) local proxies for a more robust predictor of generalization, the volume/prior P(f). 2 DEFINITIONS AND NOTATION. 2.1 SUPERVISED LEARNING. For a typical supervised learning problem, the inputs live in an input domain X and the outputs belong to an output space Y. For a data distribution D on the set of input-output pairs X × Y, the training set S is a sample of m input-output pairs drawn i.i.d. from D, S = {(x_i, y_i)}_{i=1}^m ∼ D^m, where x_i ∈ X and y_i ∈ Y. The output of a DNN on an input x_i is denoted ŷ_i.
Typically a DNN is parameterized by a vector w and trained by minimizing a loss function L : Y × Y → R, which measures the difference between an output ŷ ∈ Y and the ground-truth output y ∈ Y by assigning a score L(ŷ, y) that is typically zero when they match and positive when they do not. Minimizing the loss typically involves running an optimization algorithm such as SGD on a training set S. The generalization performance of the DNN is theoretically defined over the underlying (typically unknown) data distribution D, but is practically measured on a test set E = {(x′_i, y′_i)}_{i=1}^{|E|} ∼ D^{|E|}. For classification problems, the generalization error is measured in practice as ε(E) = (1/|E|) Σ_{x′_i ∈ E} 1[ŷ_i ≠ y′_i], where 1 is the standard indicator function, which is one when its argument is true and zero otherwise. 2.2 FLATNESS MEASURES. Perhaps the most natural way to measure the flatness of minima is to consider the eigenvalue distribution of the Hessian H_ij = ∂²L(w)/∂w_i∂w_j once the learning process has converged (typically to a zero-training-error solution). Here for simplicity we write L(w) instead of L(ŷ, y), as ŷ is parameterized by w. Sharp minima are characterized by a significant number of large positive eigenvalues λ_i of the Hessian, while flat minima are dominated by small eigenvalues. Some care is needed in this interpretation because it is widely thought that DNNs converge to stationary points that are not true minima, leading to negative eigenvalues and complicating the use of eigenvalues in measures of flatness. Typically, only a subset of the positive eigenvalues is used (Wu et al., 2017; Zhang et al., 2018). Hessians are typically very expensive to calculate. For this reason, Keskar et al. (2016) introduced a computationally more tractable measure called "sharpness": Definition 2.1 (Sharpness).
Given parameters w′ within a box C_ζ in parameter space with sides of length ζ > 0, centered around a minimum of interest at parameters w, the sharpness of the loss L(w) at w is defined as: sharpness := [max_{w′ ∈ C_ζ} (L(w′) − L(w))] / (1 + L(w)) × 100. In the limit of small ζ, the sharpness relates to the spectral norm of the Hessian (Dinh et al., 2017): sharpness ≈ ‖∇²L(w)‖₂ ζ² / (2(1 + L(w))) × 100. The general concept of flatness can be defined as 1/sharpness, and that is how we interpret this measure in the rest of this paper. 2.3 FUNCTIONS AND THE BAYESIAN PRIOR. We first clarify how we represent functions in the rest of the paper, using the notion of the restriction of functions. A more detailed explanation can be found in Appendix C. Here we use binary classification as an example: Definition 2.2 (Restriction of functions to C). (Shalev-Shwartz & Ben-David, 2014) Consider a parameterized supervised model, and let the input space be X and the output space be Y, noting that Y = {0, 1} in the binary classification setting. The space of functions the model can express is a (potentially uncountably infinite) set F ⊆ Y^|X|. Let C = {c_1, …, c_m} ⊂ X. The restriction of F to C is the set of functions from C to Y that can be derived from functions in F: F_C = {(f(c_1), …, f(c_m)) : f ∈ F}, where we represent each function from C to Y as a vector in Y^|C|. For example, for binary classification, if we restrict the functions to S + E, then each function in F_{S+E} is represented as a binary string of length |S| + |E|. In the rest of the paper, we simply say "functions" when we actually mean the restriction of functions to S + E, except for the Boolean system in Section 5.1, where no restriction is needed. See Appendix C for a thorough explanation. For discrete functions, we next define the prior probability P(f): Definition 2.3 (Prior of a function).
Given a prior parameter distribution P_w(w) over the parameters, the prior of function f can be defined as: P(f) := ∫ 1[M(w) = f] P_w(w) dw.   (2) Here 1 is an indicator function: 1[arg] = 1 if its argument is true and 0 otherwise; M is the parameter-function map, whose formal definition is in appendix B. Note that P(f) could also be interpreted as a weighted volume V(f) over parameter space. If P_w(w) is the distribution at initialization, then P(f) is the prior probability of obtaining the function at initialization. We normally use this parameter distribution when interpreting P(f). Remark. Definition 2.3 applies when the spaces X and Y are discrete, in which case P(f) has a prior probability mass interpretation. This is enough for most image classification tasks. Nevertheless, we can easily extend this definition to the continuous setting, where we can also define a prior density over functions upon random initialization, with the help of Gaussian Processes (GP) (Rasmussen, 2003). For the GP prior see appendix D. However, in this work, we focus exclusively on the classification setting, with discrete inputs and outputs. | In the binary image classification setting, the paper shows that when using an attack set (augmenting the training set with additional intentionally mislabeled datapoints) to vary the generalization performance of a fixed neural network (NN) architecture, the (approximation of the) Bayesian prior $P(f)$ of the outputs of the trained NN on the test set correlates _very_ well with generalization. The effect persists when using different optimizers and different numbers of training epochs. In contrast, the popular measure of flatness (an approximation of the spectral norm of the Hessian of the training loss) correlates worse with generalization in these settings when using SGD, and does not correlate with it at all when using variants like Adam or Entropy-SGD.
The authors provide some preliminary explanation of the effect and propose the Bayesian prior as a more reliable generalization proxy. | SP:5f264606b8bcc8491318344d65461c5e6640b4b1 |
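The sharpness measure of Definition 2.1 above maximizes the loss increase over a small box around the minimum. Finding that maximum is itself an optimization problem (Keskar et al. use a constrained optimizer); the rough sketch below instead lower-bounds it by random search over the box, on a toy quadratic loss. All names here are hypothetical, for illustration only.

```python
import numpy as np

def sharpness_estimate(loss_fn, w, zeta=1e-3, n_samples=1000, rng=None):
    """Estimate Keskar-style sharpness at w by random search over the box C_zeta.

    sharpness := max_{w' in C_zeta} (L(w') - L(w)) / (1 + L(w)) * 100
    Sampling random perturbations only lower-bounds the true max over the box.
    """
    rng = np.random.default_rng(rng)
    base = loss_fn(w)
    best = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-zeta, zeta, size=w.shape)
        best = max(best, loss_fn(w + delta) - base)
    return best / (1.0 + base) * 100.0

# Toy quadratic losses L(w) = 0.5 * c * ||w||^2: larger c means a sharper minimum.
sharp_loss = lambda w: 0.5 * 100.0 * np.dot(w, w)
flat_loss = lambda w: 0.5 * 1.0 * np.dot(w, w)
w0 = np.zeros(10)  # both losses have their minimum at the origin
assert sharpness_estimate(sharp_loss, w0, rng=0) > sharpness_estimate(flat_loss, w0, rng=0)
```

With the same random seed the two estimates use identical perturbations, so the curvature difference alone drives the sharpness gap.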
WHY FLATNESS DOES AND DOES NOT CORRELATE WITH GENERALIZATION FOR DEEP NEURAL NETWORKS | 1 INTRODUCTION . Among the most important theoretical questions in the field of deep learning are : 1 ) What characterizes functions that exhibit good generalization ? , and 2 ) Why do overparameterized deep neural networks ( DNNs ) converge to this small subset of functions that do not overfit ? Perhaps the most popular hypothesis is that good generalization performance is linked to flat minima . In pioneering works ( Hinton & van Camp , 1993 ; Hochreiter & Schmidhuber , 1997 ) , the minimum description length ( MDL ) principle ( Rissanen , 1978 ) was invoked to argue that since flatter minima require less information to describe , they should generalize better than sharp minima . Most measures of flatness approximate the local curvature of the loss surface , typically defining flatter minima to be those with smaller values of the Hessian eigenvalues ( Keskar et al. , 2016 ; Wu et al. , 2017 ; Zhang et al. , 2018 ; Sagun et al. , 2016 ; Yao et al. , 2018 ) . Another commonly held belief is that stochastic gradient descent ( SGD ) is itself biased towards flatter minima , and that this inductive bias helps explain why DNNs generalize so well ( Keskar et al. , 2016 ; Jastrzebski et al. , 2018 ; Wu et al. , 2017 ; Zhang et al. , 2018 ; Yao et al. , 2018 ; Wei & Schwab , 2019 ; Maddox et al. , 2020 ) . For example Keskar et al . ( 2016 ) developed a measure of flatness that they found correlated with improved generalization performance when decreasing batch size , suggesting that SGD is itself biased towards flatter minima . We note that others ( Goyal et al. , 2017 ; Hoffer et al. , 2017 ; Smith et al. , 2017 ; Mingard et al. , 2021a ) have argued that the effect of batch size can be compensated by changes in learning rate , complicating some conclusions from Keskar et al . ( 2016 ) . 
Nevertheless, the argument that SGD is somehow itself biased towards flat minima remains widespread in the literature. In an important critique of local flatness measures, Dinh et al. (2017) pointed out that DNNs with ReLU activations can be re-parameterized through a simple parameter-rescaling transformation: T_α : (w_1, w_2) ↦ (α w_1, α^{−1} w_2),   (1) where w_1 are the weights between an input layer and a single hidden layer, and w_2 are the weights between this hidden layer and the outputs. This transformation can be extended to any architecture having at least one rectified network layer. The function that the DNN represents, and thus how it generalizes, is invariant under parameter-rescaling transformations, but the derivatives w.r.t. the parameters, and therefore many flatness measures used in the literature, can be changed arbitrarily. Ergo, the correlation between flatness and generalization can be arbitrarily changed. Several recent studies have attempted to find "scale invariant" flatness metrics (Petzka et al., 2019; Rangamani et al., 2019; Tsuzuku et al., 2019). The main idea is to multiply layer-wise Hessian eigenvalues by a factor of ‖w_i‖², which renders the metric immune to layer-wise re-parameterization. While these new metrics look promising experimentally, they are only scale-invariant when the scaling is layer-wise. Other methods of rescaling (e.g., neuron-wise rescaling) can still change the metrics, so this general problem remains unsolved. 1.1 MAIN CONTRIBUTIONS. 1. For a series of classic image classification tasks (MNIST and CIFAR-10), we show that flatness measures change substantially as a function of epochs. Parameter re-scaling can arbitrarily change flatness, but it quickly recovers to a more typical value under further training. We also demonstrate that some variants of SGD exhibit significantly worse correlation of flatness with generalization than found for vanilla SGD.
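The invariance of the represented function under the parameter-rescaling transformation T_α can be checked directly. The sketch below builds a tiny two-layer ReLU network (all weights and shapes are illustrative, not from the paper) and verifies that rescaling leaves the output unchanged while the parameter-space geometry, on which flatness measures depend, does not stay fixed.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def net(w1, w2, x):
    """Two-layer ReLU network with scalar output: w2 . relu(w1 @ x)."""
    return w2 @ relu(w1 @ x)

rng = np.random.default_rng(0)
w1 = rng.standard_normal((5, 3))  # input -> hidden weights
w2 = rng.standard_normal(5)       # hidden -> output weights
x = rng.standard_normal(3)

alpha = 10.0
out = net(w1, w2, x)
out_rescaled = net(alpha * w1, w2 / alpha, x)  # apply T_alpha

# The represented function is unchanged under T_alpha (ReLU is positively
# homogeneous: relu(alpha * z) = alpha * relu(z) for alpha > 0) ...
assert np.isclose(out, out_rescaled)
# ... but the weight norms, and hence derivative-based flatness measures,
# change with alpha.
assert np.linalg.norm(alpha * w1) > np.linalg.norm(w1)
```

Since α is arbitrary, any Hessian-based flatness value at (w_1, w_2) can be moved while the function, and its generalization, stays fixed, which is exactly Dinh et al.'s critique.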
In other words, popular measures of flatness sometimes do and sometimes do not correlate with generalization. This mixed performance calls into question the widely held intuition that DNNs generalize well fundamentally because SGD or its variants are themselves biased towards flat minima. 2. We next study the correlation of the Bayesian prior P(f) with the generalization performance of a DNN that converges to that function f. This prior is the weighted probability of obtaining function f upon random sampling of parameters. Motivated by a theoretical argument derived from a non-uniform convergence generalization bound, we show empirically that log P(f) correlates robustly with test error, even when local flatness measures fail badly, for example upon parameter re-scaling. For discrete input/output problems (such as classification), P(f) can also be interpreted as the weighted "volume" of parameters that map to f. Intuitively, we expect local flatness measures to typically be smaller (flatter) for systems with larger volumes. Nevertheless, there may also be regions of parameter space where local derivatives and flatness measures vary substantially, even if on average they correlate with the volume. Thus flatness measures can be viewed as (imperfect) local measures of a more robust predictor of generalization, the volume/prior P(f). 2 DEFINITIONS AND NOTATION. 2.1 SUPERVISED LEARNING. For a typical supervised learning problem, the inputs live in an input domain X and the outputs belong to an output space Y. For a data distribution D on the set of input-output pairs X × Y, the training set S is a sample of m input-output pairs drawn i.i.d. from D: S = {(x_i, y_i)}_{i=1}^{m} ∼ D^m, where x_i ∈ X and y_i ∈ Y. The output of a DNN on an input x_i is denoted ŷ_i.
Typically a DNN is parameterized by a vector w and trained by minimizing a loss function L : Y × Y → R, which measures the difference between the output ŷ ∈ Y and the ground-truth output y ∈ Y by assigning a score L(ŷ, y) that is typically zero when they match and positive when they do not. Minimizing the loss typically involves running an optimization algorithm such as SGD on a training set S. The generalization performance of the DNN is theoretically defined over the underlying (typically unknown) data distribution D, but is practically measured on a test set E = {(x′_i, y′_i)}_{i=1}^{|E|} ∼ D^{|E|}. For classification problems, the generalization error is practically measured as ε(E) = (1/|E|) Σ_{x′_i ∈ E} 1[ŷ_i ≠ y′_i], where 1 is the standard indicator function, which is one when its input is true and zero otherwise. 2.2 FLATNESS MEASURES. Perhaps the most natural way to measure the flatness of minima is to consider the eigenvalue distribution of the Hessian H_{ij} = ∂²L(w)/∂w_i ∂w_j once the learning process has converged (typically to a zero training error solution). Here, for simplicity, we write L(w) instead of L(ŷ, y), as ŷ is parameterized by w. Sharp minima are characterized by a significant number of large positive eigenvalues λ_i in the Hessian, while flat minima are dominated by small eigenvalues. Some care must be taken with this interpretation because it is widely thought that DNNs converge to stationary points that are not true minima, leading to negative eigenvalues and complicating their use in measures of flatness. Typically, only a subset of the positive eigenvalues is used (Wu et al., 2017; Zhang et al., 2018). Hessians are typically very expensive to calculate. For this reason, Keskar et al. (2016) introduced a computationally more tractable measure called "sharpness": Definition 2.1 (Sharpness).
Given parameters w′ within a box C_ζ in parameter space with sides of length ζ > 0, centered around a minimum of interest at parameters w, the sharpness of the loss L(w) at w is defined as: sharpness := [max_{w′ ∈ C_ζ} (L(w′) − L(w))] / (1 + L(w)) × 100. In the limit of small ζ, the sharpness relates to the spectral norm of the Hessian (Dinh et al., 2017): sharpness ≈ (‖∇²L(w)‖_2 ζ²) / (2(1 + L(w))) × 100. The general concept of flatness can be defined as 1/sharpness, and that is how we will interpret this measure in the rest of this paper. 2.3 FUNCTIONS AND THE BAYESIAN PRIOR. We first clarify how we represent functions in the rest of the paper using the notion of restriction of functions. A more detailed explanation can be found in appendix C. Here we use binary classification as an example: Definition 2.2 (Restriction of functions to C). (Shalev-Shwartz & Ben-David, 2014) Consider a parameterized supervised model, and let the input space be X and the output space be Y, noting Y = {0, 1} in the binary classification setting. The space of functions the model can express is a (potentially uncountably infinite) set F ⊆ Y^{|X|}. Let C = {c_1, ..., c_m} ⊂ X. The restriction of F to C is the set of functions from C to Y that can be derived from functions in F: F_C = {(f(c_1), ..., f(c_m)) : f ∈ F}, where we represent each function from C to Y as a vector in Y^{|C|}. For example, for binary classification, if we restrict the functions to S + E, then each function in F_{S+E} is represented as a binary string of length |S| + |E|. In the rest of the paper, we simply refer to "functions" when we actually mean the restriction of functions to S + E, except for the Boolean system in section 5.1, where no restriction is needed. See appendix C for a thorough explanation. For discrete functions, we next define the prior probability P(f) as Definition 2.3 (Prior of a function).
Given a prior parameter distribution P_w(w) over the parameters, the prior of function f can be defined as: P(f) := ∫ 1[M(w) = f] P_w(w) dw.   (2) Here 1 is an indicator function: 1[arg] = 1 if its argument is true and 0 otherwise; M is the parameter-function map, whose formal definition is in appendix B. Note that P(f) could also be interpreted as a weighted volume V(f) over parameter space. If P_w(w) is the distribution at initialization, then P(f) is the prior probability of obtaining the function at initialization. We normally use this parameter distribution when interpreting P(f). Remark. Definition 2.3 applies when the spaces X and Y are discrete, in which case P(f) has a prior probability mass interpretation. This is enough for most image classification tasks. Nevertheless, we can easily extend this definition to the continuous setting, where we can also define a prior density over functions upon random initialization, with the help of Gaussian Processes (GP) (Rasmussen, 2003). For the GP prior see appendix D. However, in this work, we focus exclusively on the classification setting, with discrete inputs and outputs. | This paper studies the relation between generalization and the flatness of the loss landscape. The authors show that the correlation between local flatness and generalization is broken when using Adam or Entropy-SGD, or by performing parameter rescaling. They show that the Bayesian prior of DNNs (which are trained until zero training error) could, on the other hand, predict generalization better than the local flatness measures. This is motivated by the work of Mingard et al. (2021), where it is shown that the Bayesian posterior correlates well with the probability of functions obtained by DNNs that reach zero training error using SGD. | SP:5f264606b8bcc8491318344d65461c5e6640b4b1
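For small systems, the prior P(f) of Eq. (2) can be approximated by direct Monte Carlo sampling: draw parameters from P_w, record which function (restricted to a finite input set C) each draw produces, and count frequencies. The toy sketch below does this for a hypothetical one-layer threshold unit with a Gaussian prior; it is for illustration only and is not the architecture or estimator used in the paper.

```python
import numpy as np

def estimate_prior(inputs, n_samples=20000, rng=0):
    """Monte Carlo estimate of P(f) for a toy bias-free linear threshold unit.

    Each sampled w (Gaussian prior P_w) is mapped to the restriction of M(w)
    to `inputs`, i.e. the tuple of 0/1 outputs on those inputs. P(f) is
    estimated as the fraction of parameter samples landing on each f.
    """
    rng = np.random.default_rng(rng)
    counts = {}
    for _ in range(n_samples):
        w = rng.standard_normal(inputs.shape[1])
        f = tuple((inputs @ w > 0).astype(int))  # function as a binary string
        counts[f] = counts.get(f, 0) + 1
    return {f: c / n_samples for f, c in counts.items()}

C = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # restriction set
prior = estimate_prior(C)
# The estimates form a probability distribution over expressible functions;
# a threshold unit can realize only a few of the 2^4 possible labelings.
assert abs(sum(prior.values()) - 1.0) < 1e-9
```

The same sampling view underlies the interpretation of P(f) as a weighted volume: functions occupying more of parameter space (under P_w) are simply hit more often.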
WHY FLATNESS DOES AND DOES NOT CORRELATE WITH GENERALIZATION FOR DEEP NEURAL NETWORKS | 1 INTRODUCTION . Among the most important theoretical questions in the field of deep learning are : 1 ) What characterizes functions that exhibit good generalization ? , and 2 ) Why do overparameterized deep neural networks ( DNNs ) converge to this small subset of functions that do not overfit ? Perhaps the most popular hypothesis is that good generalization performance is linked to flat minima . In pioneering works ( Hinton & van Camp , 1993 ; Hochreiter & Schmidhuber , 1997 ) , the minimum description length ( MDL ) principle ( Rissanen , 1978 ) was invoked to argue that since flatter minima require less information to describe , they should generalize better than sharp minima . Most measures of flatness approximate the local curvature of the loss surface , typically defining flatter minima to be those with smaller values of the Hessian eigenvalues ( Keskar et al. , 2016 ; Wu et al. , 2017 ; Zhang et al. , 2018 ; Sagun et al. , 2016 ; Yao et al. , 2018 ) . Another commonly held belief is that stochastic gradient descent ( SGD ) is itself biased towards flatter minima , and that this inductive bias helps explain why DNNs generalize so well ( Keskar et al. , 2016 ; Jastrzebski et al. , 2018 ; Wu et al. , 2017 ; Zhang et al. , 2018 ; Yao et al. , 2018 ; Wei & Schwab , 2019 ; Maddox et al. , 2020 ) . For example Keskar et al . ( 2016 ) developed a measure of flatness that they found correlated with improved generalization performance when decreasing batch size , suggesting that SGD is itself biased towards flatter minima . We note that others ( Goyal et al. , 2017 ; Hoffer et al. , 2017 ; Smith et al. , 2017 ; Mingard et al. , 2021a ) have argued that the effect of batch size can be compensated by changes in learning rate , complicating some conclusions from Keskar et al . ( 2016 ) . 
Nevertheless, the argument that SGD is somehow itself biased towards flat minima remains widespread in the literature. In an important critique of local flatness measures, Dinh et al. (2017) pointed out that DNNs with ReLU activations can be re-parameterized through a simple parameter-rescaling transformation: T_α : (w_1, w_2) ↦ (α w_1, α^{−1} w_2),   (1) where w_1 are the weights between an input layer and a single hidden layer, and w_2 are the weights between this hidden layer and the outputs. This transformation can be extended to any architecture having at least one rectified network layer. The function that the DNN represents, and thus how it generalizes, is invariant under parameter-rescaling transformations, but the derivatives w.r.t. the parameters, and therefore many flatness measures used in the literature, can be changed arbitrarily. Ergo, the correlation between flatness and generalization can be arbitrarily changed. Several recent studies have attempted to find "scale invariant" flatness metrics (Petzka et al., 2019; Rangamani et al., 2019; Tsuzuku et al., 2019). The main idea is to multiply layer-wise Hessian eigenvalues by a factor of ‖w_i‖², which renders the metric immune to layer-wise re-parameterization. While these new metrics look promising experimentally, they are only scale-invariant when the scaling is layer-wise. Other methods of rescaling (e.g., neuron-wise rescaling) can still change the metrics, so this general problem remains unsolved. 1.1 MAIN CONTRIBUTIONS. 1. For a series of classic image classification tasks (MNIST and CIFAR-10), we show that flatness measures change substantially as a function of epochs. Parameter re-scaling can arbitrarily change flatness, but it quickly recovers to a more typical value under further training. We also demonstrate that some variants of SGD exhibit significantly worse correlation of flatness with generalization than found for vanilla SGD.
In other words, popular measures of flatness sometimes do and sometimes do not correlate with generalization. This mixed performance calls into question the widely held intuition that DNNs generalize well fundamentally because SGD or its variants are themselves biased towards flat minima. 2. We next study the correlation of the Bayesian prior P(f) with the generalization performance of a DNN that converges to that function f. This prior is the weighted probability of obtaining function f upon random sampling of parameters. Motivated by a theoretical argument derived from a non-uniform convergence generalization bound, we show empirically that log P(f) correlates robustly with test error, even when local flatness measures fail badly, for example upon parameter re-scaling. For discrete input/output problems (such as classification), P(f) can also be interpreted as the weighted "volume" of parameters that map to f. Intuitively, we expect local flatness measures to typically be smaller (flatter) for systems with larger volumes. Nevertheless, there may also be regions of parameter space where local derivatives and flatness measures vary substantially, even if on average they correlate with the volume. Thus flatness measures can be viewed as (imperfect) local measures of a more robust predictor of generalization, the volume/prior P(f). 2 DEFINITIONS AND NOTATION. 2.1 SUPERVISED LEARNING. For a typical supervised learning problem, the inputs live in an input domain X and the outputs belong to an output space Y. For a data distribution D on the set of input-output pairs X × Y, the training set S is a sample of m input-output pairs drawn i.i.d. from D: S = {(x_i, y_i)}_{i=1}^{m} ∼ D^m, where x_i ∈ X and y_i ∈ Y. The output of a DNN on an input x_i is denoted ŷ_i.
Typically a DNN is parameterized by a vector w and trained by minimizing a loss function L : Y × Y → R, which measures the difference between the output ŷ ∈ Y and the ground-truth output y ∈ Y by assigning a score L(ŷ, y) that is typically zero when they match and positive when they do not. Minimizing the loss typically involves running an optimization algorithm such as SGD on a training set S. The generalization performance of the DNN is theoretically defined over the underlying (typically unknown) data distribution D, but is practically measured on a test set E = {(x′_i, y′_i)}_{i=1}^{|E|} ∼ D^{|E|}. For classification problems, the generalization error is practically measured as ε(E) = (1/|E|) Σ_{x′_i ∈ E} 1[ŷ_i ≠ y′_i], where 1 is the standard indicator function, which is one when its input is true and zero otherwise. 2.2 FLATNESS MEASURES. Perhaps the most natural way to measure the flatness of minima is to consider the eigenvalue distribution of the Hessian H_{ij} = ∂²L(w)/∂w_i ∂w_j once the learning process has converged (typically to a zero training error solution). Here, for simplicity, we write L(w) instead of L(ŷ, y), as ŷ is parameterized by w. Sharp minima are characterized by a significant number of large positive eigenvalues λ_i in the Hessian, while flat minima are dominated by small eigenvalues. Some care must be taken with this interpretation because it is widely thought that DNNs converge to stationary points that are not true minima, leading to negative eigenvalues and complicating their use in measures of flatness. Typically, only a subset of the positive eigenvalues is used (Wu et al., 2017; Zhang et al., 2018). Hessians are typically very expensive to calculate. For this reason, Keskar et al. (2016) introduced a computationally more tractable measure called "sharpness": Definition 2.1 (Sharpness).
Given parameters w′ within a box C_ζ in parameter space with sides of length ζ > 0, centered around a minimum of interest at parameters w, the sharpness of the loss L(w) at w is defined as: sharpness := [max_{w′ ∈ C_ζ} (L(w′) − L(w))] / (1 + L(w)) × 100. In the limit of small ζ, the sharpness relates to the spectral norm of the Hessian (Dinh et al., 2017): sharpness ≈ (‖∇²L(w)‖_2 ζ²) / (2(1 + L(w))) × 100. The general concept of flatness can be defined as 1/sharpness, and that is how we will interpret this measure in the rest of this paper. 2.3 FUNCTIONS AND THE BAYESIAN PRIOR. We first clarify how we represent functions in the rest of the paper using the notion of restriction of functions. A more detailed explanation can be found in appendix C. Here we use binary classification as an example: Definition 2.2 (Restriction of functions to C). (Shalev-Shwartz & Ben-David, 2014) Consider a parameterized supervised model, and let the input space be X and the output space be Y, noting Y = {0, 1} in the binary classification setting. The space of functions the model can express is a (potentially uncountably infinite) set F ⊆ Y^{|X|}. Let C = {c_1, ..., c_m} ⊂ X. The restriction of F to C is the set of functions from C to Y that can be derived from functions in F: F_C = {(f(c_1), ..., f(c_m)) : f ∈ F}, where we represent each function from C to Y as a vector in Y^{|C|}. For example, for binary classification, if we restrict the functions to S + E, then each function in F_{S+E} is represented as a binary string of length |S| + |E|. In the rest of the paper, we simply refer to "functions" when we actually mean the restriction of functions to S + E, except for the Boolean system in section 5.1, where no restriction is needed. See appendix C for a thorough explanation. For discrete functions, we next define the prior probability P(f) as Definition 2.3 (Prior of a function).
Given a prior parameter distribution P_w(w) over the parameters, the prior of function f can be defined as: P(f) := ∫ 1[M(w) = f] P_w(w) dw.   (2) Here 1 is an indicator function: 1[arg] = 1 if its argument is true and 0 otherwise; M is the parameter-function map, whose formal definition is in appendix B. Note that P(f) could also be interpreted as a weighted volume V(f) over parameter space. If P_w(w) is the distribution at initialization, then P(f) is the prior probability of obtaining the function at initialization. We normally use this parameter distribution when interpreting P(f). Remark. Definition 2.3 applies when the spaces X and Y are discrete, in which case P(f) has a prior probability mass interpretation. This is enough for most image classification tasks. Nevertheless, we can easily extend this definition to the continuous setting, where we can also define a prior density over functions upon random initialization, with the help of Gaussian Processes (GP) (Rasmussen, 2003). For the GP prior see appendix D. However, in this work, we focus exclusively on the classification setting, with discrete inputs and outputs. | This paper shows that for certain variants of SGD, the commonly held assertion that flat minima imply good generalization may not hold. The overall conclusion from this aspect of the paper is that popular measures of flatness sometimes do, and sometimes do not, correlate with generalization. The authors continue on to propose an alternative measure, the Bayesian prior, which they claim correlates well with generalization. In essence, this measure captures the volume of the region of parameter space such that when a network's weights are initialized within it, the network converges to a pre-specified function with low training error. After covering the definitions and related measures, they perform experiments on MNIST and CIFAR-10, and discuss how the choice of optimizer affects the claims.
| SP:5f264606b8bcc8491318344d65461c5e6640b4b1 |
Generative Adversarial Training for Neural Combinatorial Optimization Models | 1 INTRODUCTION . Combinatorial Optimization Problems ( COPs ) are a family of problems with the goal of finding the best one ( s ) from a finite set of solutions . Due to the considerably large solution space , many important COPs are hard to solve , such as the vehicle routing problems ( Toth & Vigo , 2002 ) . Exact algorithms based on branch-and-bound ( Lawler & Wood , 1966 ) or its variants can provide elegant theoretical guarantees but the worst-case computational complexity is exponential , hence impractical for problems of medium and large sizes . In contrast , heuristic methods can usually attain good solutions in reasonable running time , which are often preferred in practical applications . Traditional heuristics are designed based on expert knowledge for specific problems , which usually requires a large amount of time and efforts to develop . These manually designed heuristics could suffer from relatively poor performance . Moreover , for less studied problems , sufficient expert knowledge may even be unavailable . Recent studies suggest that deep learning could greatly facilitate in automating the design of heuristics , and alleviating the heavy reliance on expert knowledge . With the prior that the instances may follow certain distribution ( e.g. , locations in a routing problem may uniformly scatter in an area ) , deep models can be trained to learn heuristics in an end-to-end way ( Dai et al. , 2017 ; Kool et al. , 2019 ; Chen & Tian , 2019 ) . It has been shown that these models perform well with relatively short running time on COP instances following the training distribution . However , after the trained model is deployed , it could encounter many instances following unknown distributions different from the training one , especially for real-life applications . 
As shown in many existing works and in our experiments, when applied to instances drawn from a different distribution, the generalization performance of deep models degrades substantially, which severely hinders the practical use of the learned heuristics. Such a mismatch between the training and testing distributions is an important issue for most learning-based methods. In particular, neural combinatorial optimization (NCO) models are mostly trained on instances sampled from specific distributions, and the solution quality depends intricately on the instance distribution. Generalization to instances with different distributions has been widely acknowledged as important (e.g., in Mazyavkina et al. (2021)) and remains a challenge. To tackle this issue, we propose the Generative Adversarial Neural Combinatorial Optimization (GANCO) framework, which is model-agnostic and generally applicable to various neural combinatorial optimization models for solving different COPs. Instead of training an optimization model only on instances following the predefined distribution, another deep neural network is deployed as a generation model to produce training instances following distributions on which the optimization model performs poorly. The generation model and the optimization model are trained alternately in an adversarial way. Specifically, the generation model is trained by reinforcement learning to maximize the performance gap of the current optimization model on the generated instances with respect to a traditional non-learning baseline algorithm. The optimization model is trained in the original way but using the training dataset augmented with the newly generated instances to improve the generalization performance, i.e., to reduce the performance gap. As we will show in the experiments, the non-learning baseline algorithms do not need to be very strong or fast.
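The alternating scheme just described can be summarized as Python-style pseudocode. Every name below (the model interfaces, `gap`, the REINFORCE-style update) is a hypothetical stand-in for whatever optimization model (e.g., AM) and non-learning baseline are actually used; this is a sketch of the training loop, not an implementation.

```python
# Sketch of the alternating GANCO scheme (pseudocode; all interfaces hypothetical).

def gap(optimizer_model, baseline_solver, instance):
    """Performance gap: cost of the learned solution minus the baseline's cost."""
    return optimizer_model.solve(instance).cost - baseline_solver.solve(instance).cost

def ganco_train(optimizer_model, generator_model, baseline_solver,
                base_distribution, n_rounds=100, batch_size=64):
    for _ in range(n_rounds):
        # 1) Generator step (reinforcement learning): reward the generator for
        #    instances on which the current optimization model does poorly
        #    relative to the non-learning baseline.
        generated = [generator_model.sample_instance() for _ in range(batch_size)]
        rewards = [gap(optimizer_model, baseline_solver, inst) for inst in generated]
        generator_model.reinforce_update(generated, rewards)

        # 2) Optimizer step: train as usual, but on the original training
        #    distribution augmented with the freshly generated hard instances.
        batch = base_distribution.sample(batch_size) + generated
        optimizer_model.train_step(batch)
    return optimizer_model
```

In each round the generator is rewarded for widening the gap, while the optimizer is trained to close it on exactly the instance distributions where it currently does worst, pushing up the lower bound of its performance.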
To demonstrate the effectiveness of the proposed GANCO framework , we apply it to a representative neural combinatorial optimization model , the Attention Model ( AM , Kool et al . ( 2019 ) ) on various COPs including Traveling Salesman Problem ( TSP ) , Capacitated Vehicle Routing Problem ( CVRP ) , Orienteering Problem ( OP ) , Prize Collecting TSP ( PCTSP ) and 0-1 Knapsack Problem ( KP ) . In the extensive experiments , we show that the proposed GANCO framework improves the generalization performance of the original optimization model on various testing distributions with little sacrifice of performance on the original training distribution . Furthermore , we show that the proposed GANCO framework can be readily and effectively applied to other optimization models such as the Policy Optimization with Multiple Optima ( POMO , Kwon et al . ( 2020 ) ) . 2 RELATED WORKS . Most deep learning models are trained to learn the construction heuristics where the model picks the actions sequentially to construct a solution ( e.g. , Bello et al . ( 2017 ) ; Nazari et al . ( 2018 ) ; Kool et al . ( 2019 ) ; Hottung et al . ( 2021 ) ; Kwon et al . ( 2020 ) ) , which perform well with fairly short running time . While some other models learn the improvement heuristics to locally refine existing solutions , which usually search over a large number of solutions and are relatively slow . Moreover , they often need to fit in frameworks specifically designed for different problems , such as 2-opt ( d O Costa et al. , 2020 ) , large neighborhood search ( Hottung & Tierney , 2020 ) and combinations of local operators ( Lu et al. , 2020 ) . Most models are trained with reinforcement learning ( RL ) except for several ones with supervised learning , e.g. , Vinyals et al . ( 2015 ) and Joshi et al . ( 2019 ) . Though we use representative construction heuristic models with RL in the experiments , our proposed GANCO framework can also be applied to models of other types . 
The most well-known models trained with an adversary are Generative Adversarial Networks (Goodfellow et al., 2014; Yang et al., 2018; Zhang et al., 2019), which generate new data samples to imitate the training set: the generative network produces sample candidates, whereas the discriminative network classifies whether a sample is generated or genuine. In contrast, the goal of our proposed GANCO framework is to train the optimization model for better generalization ability, and the generation model is used only to generate more informative training samples. The idea of using adversarial training to improve RL robustness has also been studied in other fields such as Atari games and robotics (Pinto et al., 2017; Khirodkar et al., 2018; Dennis et al., 2020). However, existing works in this direction are not directly applicable to our task due to its unique properties. The key is how to define the adversary, which is specific to the application domain. In the field of COPs, we define the adversary as a hard-instance generator, which is evaluated by the performance gap of the optimization model with respect to a traditional non-learning baseline algorithm. To the best of our knowledge, the most related work on generating data samples for COPs is Liu et al. (2020), where samples are generated by performing crossover and mutation on existing instances. The goal there is to search for better parameter configurations for given traditional solvers, whereas our proposed framework is designed to generate instances under the guidance of deep reinforcement learning and to improve the generalization performance of neural combinatorial optimization models. In our experiments, we also verify the necessity of our data generation method. 3 MODEL. For most learning-based methods, the patterns are learned from the training data and are usually applicable to similar testing samples.
However, if the distributions of training and testing samples are considerably different, these learned patterns may not be general enough to guarantee desirable inference performance. As we will show in the experiments, this issue is particularly severe for COPs due to the intricate relationship between the solution and the instance distribution. Instead of designing a network architecture to achieve better generalization performance for specific problems, we aim to propose a model-agnostic framework which is generally applicable to different optimization models and training methods. The only requirement for the optimization models is that they are able to learn effective patterns from the training data and perform well on similar testing samples, which is fulfilled by most recent learning based models. To this end, we propose the Generative Adversarial Neural Combinatorial Optimization (GANCO) framework, as shown in Figure 1. Rather than training the optimization model solely on instances sampled from the predefined training distribution, we also deploy another deep learning model to generate instances on which the optimization model performs poorly. The generation model is trained by reinforcement learning to maximize the performance gap between the optimization model and a non-learning baseline algorithm on the generated instances, while the optimization model is trained on the generated samples to improve its performance on similar instances. In other words, the generation model and the optimization model are trained to maximize and minimize, respectively, the performance gaps on the generated instances. Through such adversarial training, the lower bound of the optimization model's performance on the instance distributions produced by the generation model is pushed up. We detail the generation model and the GANCO framework in the following subsections. 3.1 GENERATION MODEL . As suggested in existing works (Kool et al.
, 2019; Kwon et al., 2020), many important COPs can be viewed as a graph of n nodes with different attributes, such as the city coordinates for TSP and the item information for KP. Therefore, we formulate the data generation task as determining the attribute values for each node of an instance. The proposed framework could also be easily adapted to other types of COPs, for example by determining the presence of each edge for problems depending on graph connectivity (e.g., the Boolean Satisfiability Problem). The generation model Ω takes a noise sample h for each node as input and outputs the distribution parameters for the node attributes. We format the noise input h ∈ R^{n×2} as 2-dimensional variables for each node j = 1, …, n, drawn independently from the unit uniform distribution, i.e., h ∼ U(0, 1). Pertaining to the network architecture, we use the self-attention block proposed in the Attention Model (AM, Kool et al. (2019)), which consists of a multi-head self-attention layer (Vaswani et al., 2017) and two node-wise feed-forward layers with Batch Normalization (Ioffe & Szegedy, 2015) and skip connections (He et al., 2016). The noise input is node-wise linearly projected to H dimensions and encoded by k self-attention blocks. Another node-wise linear layer with a sigmoid activation projects the hidden vectors to the output distribution parameters Ω(h) ∈ R^{n×d}, where d is the number of attributes per node. We then draw the sample x̃ from the Gaussian distribution N(µ, σ²) with mean µ = Ω(h) and fixed standard deviation σ. The network outputs are kept in the same range [0, 1] so that the same variance can be used for different attributes. The sample x̃ is scaled to the corresponding valid ranges (and discretized for discrete attributes) to attain the node attributes x = A(x̃). The goal of the generation model is to generate instances on which the optimization model currently performs poorly.
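The sampling-and-scaling map A(x̃) described above can be sketched in plain Python. This is our own illustrative sketch: the function name, and the choice to clip out-of-range samples to [0, 1] before scaling, are assumptions not specified in the paper.

```python
import random

def sample_instance(mu, sigma, ranges, discrete):
    """Sample node attributes from N(mu, sigma^2) and map them to valid ranges.

    mu       : list of n rows, each with d values in [0, 1] (the generator output Omega(h))
    sigma    : fixed standard deviation shared by all attributes
    ranges   : list of d (lo, hi) pairs, the valid range of each attribute
    discrete : list of d booleans, True if the attribute is discrete
    """
    instance = []
    for row in mu:
        attrs = []
        for k, m in enumerate(row):
            x = random.gauss(m, sigma)      # draw x~ ~ N(mu, sigma^2)
            x = min(max(x, 0.0), 1.0)       # assumption: clip the sample to [0, 1]
            lo, hi = ranges[k]
            v = lo + x * (hi - lo)          # scale to the valid range (the map A)
            if discrete[k]:
                v = round(v)                # discretize discrete attributes
            attrs.append(v)
        instance.append(attrs)
    return instance

# Example: 3 nodes with 2 coordinate attributes in [0, 1] (TSP-like)
mu = [[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]]
inst = sample_instance(mu, sigma=0.05, ranges=[(0, 1), (0, 1)], discrete=[False, False])
```

For a problem like KP, `ranges` would instead hold the valid weight and value intervals, with `discrete` set accordingly.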
Since different instances have different optimal objective values, the attained objective values alone are not meaningful enough to evaluate the model performance. Although the optimality gap appears to be a good alternative metric, it often requires expensive computation to attain optimal solutions. On the other hand, while only achieving approximate solutions, traditional non-learning algorithms tend to perform relatively stably on instances from different distributions, compared to learning based methods. Therefore, the performance gap of the optimization model with respect to a traditional non-learning baseline algorithm can be regarded as a favorable metric. In principle, the baseline algorithm can be arbitrarily inferior as long as it performs consistently (e.g., with similar optimality gaps) on different instances. As we will show in our experiments, the baseline algorithms do not need to be very strong or fast, as the training of the generation model converges in a relatively small number of iterations. The generation model is trained by reinforcement learning (RL). Specifically, the state of the RL environment is the noise input h ∈ R^{n×2}. An action samples the attribute values for a node. An episode samples the attribute values x = A(x̃) with x̃ ∼ N(Ω(h), σ²) for all the nodes to form an instance. However, as the sampled attributes for one node do not affect the other nodes, the episode can be viewed as containing only one action with a large action space that samples all the node attributes at the same time. The reward for an episode is the performance gap between the optimization model Φ and the baseline algorithm B on the sampled instance x.
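The reward is the relative performance gap of the optimization model against the baseline, which for a minimization problem is a one-liner (the objective values below are hypothetical numbers, not results from the paper):

```python
def performance_gap(obj_model, obj_baseline):
    """Relative performance gap G(x, Phi, B) used as the generator's reward.

    For a minimization problem, a larger positive gap means the optimization
    model Phi does worse than the baseline B on this instance.
    """
    return (obj_model - obj_baseline) / obj_baseline

# Hypothetical tour lengths on one generated TSP instance:
gap = performance_gap(obj_model=10.5, obj_baseline=10.0)  # 5% worse than the baseline
```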
The loss function is formally defined as follows: $\mathcal{L}(\Omega) = -\mathbb{E}_{h \sim U(0,1),\, \tilde{x} \sim \mathcal{N}(\Omega(h), \sigma^2)}\left[G(x, \Phi, B)\right]$, (1) where $G(x, \Phi, B) = (O_\Phi(x) - O_B(x))/O_B(x)$ is the performance (objective) gap; $O_\Phi(x)$ and $O_B(x)$ are the objective values of the solutions found by the optimization model Φ and the baseline algorithm B for the generated instance x, respectively. Different from the generator in GAN (Goodfellow et al., 2014), which deterministically infers the instance given the noise input, we sample the noise input, infer the output distribution, and sample the instance from that distribution. The generation model is trained by the REINFORCE algorithm (Sutton et al., 2000), with the reward baseline taken as the mean of the gaps in a batch. The gradient to train the generation model is expressed as follows: $\nabla_\Omega \mathcal{L}(\Omega) \approx -\frac{1}{N}\sum_{i=1}^{N}\left(G(x_i, \Phi, B) - \frac{1}{N}\sum_{j=1}^{N} G(x_j, \Phi, B)\right)\nabla_\Omega \log \mathcal{N}(\tilde{x}_i; \Omega(h_i), \sigma^2)$, (2) where i and j are instance indices; N is the batch size; $\tilde{x}_i \in \mathbb{R}^{n \times d}$ is sampled from $\mathcal{N}(\Omega(h_i), \sigma^2)$; and $\mathcal{N}(\tilde{x}_i; \Omega(h_i), \sigma^2)$ is the probability of $\tilde{x}_i$ under the Gaussian distribution $\mathcal{N}(\Omega(h_i), \sigma^2)$. | This work presents GANCO, a framework that extends the training of RL heuristics for combinatorial optimisation problems (COPs) to include an adversarial agent that aims to generate hard-to-solve problem instances. The RL heuristic and adversarial instance generation are alternately trained, with the intuition that the adaptive problem distribution encourages the solving agent to learn policies with better generalisation performance on unseen instances, compared to those trained on a single static distribution.
Experiments train the Attention Model of Kool et al. (and a subsequent extension, POMO) within the GANCO framework and show that, across multiple well-studied COPs (including the travelling salesman problem), GANCO-augmented training does result in policies that generalise better to unseen distributions at test time. | SP:e7efded6718a77e5c3dace8663f7ddfa694ac469 |
Generative Adversarial Training for Neural Combinatorial Optimization Models | 1 INTRODUCTION . Combinatorial Optimization Problems (COPs) are a family of problems with the goal of finding the best solution(s) from a finite set. Due to the considerably large solution space, many important COPs are hard to solve, such as the vehicle routing problems (Toth & Vigo, 2002). Exact algorithms based on branch-and-bound (Lawler & Wood, 1966) or its variants provide elegant theoretical guarantees, but their worst-case computational complexity is exponential, hence impractical for problems of medium and large sizes. In contrast, heuristic methods can usually attain good solutions in reasonable running time and are often preferred in practical applications. Traditional heuristics are designed based on expert knowledge for specific problems, which usually requires a large amount of time and effort to develop. These manually designed heuristics may suffer from relatively poor performance; moreover, for less studied problems, sufficient expert knowledge may not even be available. Recent studies suggest that deep learning could greatly facilitate automating the design of heuristics and alleviate the heavy reliance on expert knowledge. With the prior that instances may follow a certain distribution (e.g., locations in a routing problem may be scattered uniformly in an area), deep models can be trained to learn heuristics in an end-to-end way (Dai et al., 2017; Kool et al., 2019; Chen & Tian, 2019). It has been shown that these models perform well with relatively short running time on COP instances following the training distribution. However, after the trained model is deployed, it may encounter many instances following unknown distributions different from the training one, especially in real-life applications.
As shown in many existing works and our experiments, when applied to instances following a different distribution, the generalization performance of deep models degrades considerably, which severely hinders the practical use of the learned heuristics. Such mismatch between the training and testing distributions is an important issue for most learning based methods. In particular, neural combinatorial optimization (NCO) models are mostly trained on instances sampled from specific distributions, and the solution quality intricately depends on the instance distribution. Generalization to instances with different distributions has been widely acknowledged as important (e.g., in Mazyavkina et al. (2021)) and remains a challenge. To tackle this issue, we propose the Generative Adversarial Neural Combinatorial Optimization (GANCO) framework, which is model agnostic and generally applicable to various neural combinatorial optimization models for solving different COPs. Instead of training an optimization model only on instances following the predefined distribution, another deep neural network is deployed as a generation model to produce training instances following distributions on which the optimization model performs poorly. The generation model and the optimization model are trained alternately in an adversarial way. Specifically, the generation model is trained by reinforcement learning to maximize the performance gap of the current optimization model on the generated instances with respect to a traditional non-learning baseline algorithm. The optimization model is trained in the original way, but using the training dataset augmented with the newly generated instances, to improve the generalization performance, i.e., to reduce the performance gap. As we will show in the experiments, the non-learning baseline algorithms do not need to be very strong or fast.
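The alternating scheme just described can be sketched as a short training loop. All function names here are illustrative stubs of our own, not the authors' implementation; the toy run at the bottom only exercises the control flow.

```python
def ganco_train(opt_model, gen_model, sample_uniform, train_opt, train_gen, rounds):
    """Alternate between generating hard instances and training the optimizer.

    opt_model / gen_model : arbitrary model states (opaque here)
    sample_uniform        : () -> batch of instances from the original distribution
    train_gen             : (gen_model, opt_model) -> (updated gen_model, hard instances)
    train_opt             : (opt_model, instances) -> updated opt_model
    """
    for _ in range(rounds):
        # 1) adversary: maximize the optimizer's gap vs. the non-learning baseline
        gen_model, hard = train_gen(gen_model, opt_model)
        # 2) optimizer: train on original data augmented with the hard instances
        batch = sample_uniform() + hard
        opt_model = train_opt(opt_model, batch)
    return opt_model, gen_model

# Toy run with counters standing in for real models:
log = []
opt, gen = ganco_train(
    opt_model=0, gen_model=0,
    sample_uniform=lambda: ["uniform"] * 2,
    train_gen=lambda g, o: (g + 1, ["hard"]),
    train_opt=lambda o, batch: (log.append(batch) or o + 1),
    rounds=3,
)
```

Each round the optimizer sees a mixed batch, which is what preserves performance on the original distribution while pushing up the lower bound on the generated ones.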
To demonstrate the effectiveness of the proposed GANCO framework, we apply it to a representative neural combinatorial optimization model, the Attention Model (AM, Kool et al. (2019)), on various COPs including the Traveling Salesman Problem (TSP), Capacitated Vehicle Routing Problem (CVRP), Orienteering Problem (OP), Prize Collecting TSP (PCTSP) and 0-1 Knapsack Problem (KP). In extensive experiments, we show that the proposed GANCO framework improves the generalization performance of the original optimization model on various testing distributions with little sacrifice of performance on the original training distribution. Furthermore, we show that the proposed GANCO framework can be readily and effectively applied to other optimization models such as Policy Optimization with Multiple Optima (POMO, Kwon et al. (2020)). 2 RELATED WORKS . Most deep learning models are trained to learn construction heuristics, where the model picks actions sequentially to construct a solution (e.g., Bello et al. (2017); Nazari et al. (2018); Kool et al. (2019); Hottung et al. (2021); Kwon et al. (2020)); these models perform well with fairly short running time. Other models learn improvement heuristics to locally refine existing solutions; they usually search over a large number of solutions and are relatively slow. Moreover, they often need to fit in frameworks specifically designed for different problems, such as 2-opt (da Costa et al., 2020), large neighborhood search (Hottung & Tierney, 2020) and combinations of local operators (Lu et al., 2020). Most models are trained with reinforcement learning (RL), except for a few trained with supervised learning, e.g., Vinyals et al. (2015) and Joshi et al. (2019). Though we use representative construction heuristic models with RL in the experiments, our proposed GANCO framework can also be applied to models of other types.
The most well-known models trained with an adversary are Generative Adversarial Networks (Goodfellow et al., 2014; Yang et al., 2018; Zhang et al., 2019), which serve to generate new data samples that imitate the training set. The generative network generates sample candidates, whereas the discriminative network classifies whether a sample is generated or genuine. In contrast, the goal of our proposed GANCO framework is to train the optimization model for better generalization ability, where the generation model is used only to generate more informative training samples. The idea of using adversarial training to improve RL robustness has also been studied in other fields such as Atari games and robotics (Pinto et al., 2017; Khirodkar et al., 2018; Dennis et al., 2020). However, existing works in this direction are not directly applicable to our task due to its unique properties. The key is how to define the adversary, which is specific to the application domain. In the field of COPs, we define the adversary as a hard instance generator, which is evaluated by the performance gaps of the optimization model with respect to a traditional non-learning baseline algorithm. To the best of our knowledge, the most related work considering generating data samples for COPs is Liu et al. (2020), where the samples are generated by performing crossover and mutation on existing instances; its goal is to search for better parameter configurations for given traditional solvers. In contrast, our proposed framework is designed to generate instances with the guidance of deep reinforcement learning and to improve the generalization performance of neural combinatorial optimization models. In our experiments, we also verify the necessity of our data generation method. 3 MODEL . For most learning based methods, patterns are learned from the training data and are usually applicable to similar testing samples.
However, if the distributions of training and testing samples are considerably different, these learned patterns may not be general enough to guarantee desirable inference performance. As we will show in the experiments, this issue is particularly severe for COPs due to the intricate relationship between the solution and the instance distribution. Instead of designing a network architecture to achieve better generalization performance for specific problems, we aim to propose a model-agnostic framework which is generally applicable to different optimization models and training methods. The only requirement for the optimization models is that they are able to learn effective patterns from the training data and perform well on similar testing samples, which is fulfilled by most recent learning based models. To this end, we propose the Generative Adversarial Neural Combinatorial Optimization (GANCO) framework, as shown in Figure 1. Rather than training the optimization model solely on instances sampled from the predefined training distribution, we also deploy another deep learning model to generate instances on which the optimization model performs poorly. The generation model is trained by reinforcement learning to maximize the performance gap between the optimization model and a non-learning baseline algorithm on the generated instances, while the optimization model is trained on the generated samples to improve its performance on similar instances. In other words, the generation model and the optimization model are trained to maximize and minimize, respectively, the performance gaps on the generated instances. Through such adversarial training, the lower bound of the optimization model's performance on the instance distributions produced by the generation model is pushed up. We detail the generation model and the GANCO framework in the following subsections. 3.1 GENERATION MODEL . As suggested in existing works (Kool et al.
, 2019; Kwon et al., 2020), many important COPs can be viewed as a graph of n nodes with different attributes, such as the city coordinates for TSP and the item information for KP. Therefore, we formulate the data generation task as determining the attribute values for each node of an instance. The proposed framework could also be easily adapted to other types of COPs, for example by determining the presence of each edge for problems depending on graph connectivity (e.g., the Boolean Satisfiability Problem). The generation model Ω takes a noise sample h for each node as input and outputs the distribution parameters for the node attributes. We format the noise input h ∈ R^{n×2} as 2-dimensional variables for each node j = 1, …, n, drawn independently from the unit uniform distribution, i.e., h ∼ U(0, 1). Pertaining to the network architecture, we use the self-attention block proposed in the Attention Model (AM, Kool et al. (2019)), which consists of a multi-head self-attention layer (Vaswani et al., 2017) and two node-wise feed-forward layers with Batch Normalization (Ioffe & Szegedy, 2015) and skip connections (He et al., 2016). The noise input is node-wise linearly projected to H dimensions and encoded by k self-attention blocks. Another node-wise linear layer with a sigmoid activation projects the hidden vectors to the output distribution parameters Ω(h) ∈ R^{n×d}, where d is the number of attributes per node. We then draw the sample x̃ from the Gaussian distribution N(µ, σ²) with mean µ = Ω(h) and fixed standard deviation σ. The network outputs are kept in the same range [0, 1] so that the same variance can be used for different attributes. The sample x̃ is scaled to the corresponding valid ranges (and discretized for discrete attributes) to attain the node attributes x = A(x̃). The goal of the generation model is to generate instances on which the optimization model currently performs poorly.
Since different instances have different optimal objective values, the attained objective values alone are not meaningful enough to evaluate the model performance. Although the optimality gap appears to be a good alternative metric, it often requires expensive computation to attain optimal solutions. On the other hand, while only achieving approximate solutions, traditional non-learning algorithms tend to perform relatively stably on instances from different distributions, compared to learning based methods. Therefore, the performance gap of the optimization model with respect to a traditional non-learning baseline algorithm can be regarded as a favorable metric. In principle, the baseline algorithm can be arbitrarily inferior as long as it performs consistently (e.g., with similar optimality gaps) on different instances. As we will show in our experiments, the baseline algorithms do not need to be very strong or fast, as the training of the generation model converges in a relatively small number of iterations. The generation model is trained by reinforcement learning (RL). Specifically, the state of the RL environment is the noise input h ∈ R^{n×2}. An action samples the attribute values for a node. An episode samples the attribute values x = A(x̃) with x̃ ∼ N(Ω(h), σ²) for all the nodes to form an instance. However, as the sampled attributes for one node do not affect the other nodes, the episode can be viewed as containing only one action with a large action space that samples all the node attributes at the same time. The reward for an episode is the performance gap between the optimization model Φ and the baseline algorithm B on the sampled instance x.
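The REINFORCE update with a batch-mean reward baseline can be sketched as follows. This is a simplified scalar version of our own (one attribute per instance, gradient taken with respect to the Gaussian mean µ = Ω(h), which would then be backpropagated into the network); it uses the Gaussian score d/dµ log N(x̃; µ, σ²) = (x̃ − µ)/σ², and the numbers in the example are hypothetical.

```python
def reinforce_grad_mu(samples, mus, gaps, sigma):
    """Gradient of the generator loss w.r.t. the Gaussian means.

    samples : sampled attribute values x~_i (one scalar per instance, for illustration)
    mus     : generator outputs mu_i = Omega(h_i)
    gaps    : rewards G(x_i, Phi, B)
    sigma   : fixed standard deviation

    Uses the batch mean of the gaps as the reward baseline.
    """
    n = len(gaps)
    baseline = sum(gaps) / n
    grads = []
    for x, mu, g in zip(samples, mus, gaps):
        advantage = g - baseline                 # centered reward
        score = (x - mu) / (sigma * sigma)       # d/d mu of log N(x; mu, sigma^2)
        grads.append(-advantage * score / n)     # minus sign: we minimize L = -E[G]
    return grads

g = reinforce_grad_mu(samples=[0.6, 0.4], mus=[0.5, 0.5], gaps=[0.2, 0.0], sigma=0.1)
```

Instances with above-average gaps push the mean toward their samples, so the generator drifts toward regions where the optimizer underperforms the baseline.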
The loss function is formally defined as follows: $\mathcal{L}(\Omega) = -\mathbb{E}_{h \sim U(0,1),\, \tilde{x} \sim \mathcal{N}(\Omega(h), \sigma^2)}\left[G(x, \Phi, B)\right]$, (1) where $G(x, \Phi, B) = (O_\Phi(x) - O_B(x))/O_B(x)$ is the performance (objective) gap; $O_\Phi(x)$ and $O_B(x)$ are the objective values of the solutions found by the optimization model Φ and the baseline algorithm B for the generated instance x, respectively. Different from the generator in GAN (Goodfellow et al., 2014), which deterministically infers the instance given the noise input, we sample the noise input, infer the output distribution, and sample the instance from that distribution. The generation model is trained by the REINFORCE algorithm (Sutton et al., 2000), with the reward baseline taken as the mean of the gaps in a batch. The gradient to train the generation model is expressed as follows: $\nabla_\Omega \mathcal{L}(\Omega) \approx -\frac{1}{N}\sum_{i=1}^{N}\left(G(x_i, \Phi, B) - \frac{1}{N}\sum_{j=1}^{N} G(x_j, \Phi, B)\right)\nabla_\Omega \log \mathcal{N}(\tilde{x}_i; \Omega(h_i), \sigma^2)$, (2) where i and j are instance indices; N is the batch size; $\tilde{x}_i \in \mathbb{R}^{n \times d}$ is sampled from $\mathcal{N}(\Omega(h_i), \sigma^2)$; and $\mathcal{N}(\tilde{x}_i; \Omega(h_i), \sigma^2)$ is the probability of $\tilde{x}_i$ under the Gaussian distribution $\mathcal{N}(\Omega(h_i), \sigma^2)$. | Over the last few years, several machine learning based approaches have been proposed to solve common combinatorial optimization problems, such as the traveling salesman problem. However, it's never been entirely clear how well the previous work generalized to out-of-distribution instances. This paper proposes to improve generalization through dataset augmentation. Specifically, the paper proposes to train a model (aka the generator) that is used to drive the creation of training instances on which the neural network used to solve the combinatorial problem (aka the optimizer) does poorly.
The authors then show that training the optimizer model on a set of examples taken from the original dataset, together with examples created under the guidance of the generator model, results in better performance than training the optimizer model on the original dataset only. | SP:e7efded6718a77e5c3dace8663f7ddfa694ac469 |
Generative Adversarial Training for Neural Combinatorial Optimization Models | 1 INTRODUCTION . Combinatorial Optimization Problems ( COPs ) are a family of problems with the goal of finding the best one ( s ) from a finite set of solutions . Due to the considerably large solution space , many important COPs are hard to solve , such as the vehicle routing problems ( Toth & Vigo , 2002 ) . Exact algorithms based on branch-and-bound ( Lawler & Wood , 1966 ) or its variants can provide elegant theoretical guarantees but the worst-case computational complexity is exponential , hence impractical for problems of medium and large sizes . In contrast , heuristic methods can usually attain good solutions in reasonable running time , which are often preferred in practical applications . Traditional heuristics are designed based on expert knowledge for specific problems , which usually requires a large amount of time and efforts to develop . These manually designed heuristics could suffer from relatively poor performance . Moreover , for less studied problems , sufficient expert knowledge may even be unavailable . Recent studies suggest that deep learning could greatly facilitate in automating the design of heuristics , and alleviating the heavy reliance on expert knowledge . With the prior that the instances may follow certain distribution ( e.g. , locations in a routing problem may uniformly scatter in an area ) , deep models can be trained to learn heuristics in an end-to-end way ( Dai et al. , 2017 ; Kool et al. , 2019 ; Chen & Tian , 2019 ) . It has been shown that these models perform well with relatively short running time on COP instances following the training distribution . However , after the trained model is deployed , it could encounter many instances following unknown distributions different from the training one , especially for real-life applications . 
As shown in many existing works and our experiments , when applied to infer the instances following a different distribution , the generalization performance of deep models gets much inferior , which severely hinders the practical use of the learned heuristics . Such mismatch between the training and testing distributions is always an important issue for most learning based methods . Especially , for neural combinatorial optimization ( NCO ) models , deep learning models are mostly trained on instances sampled from specific distributions and the solution quality intricately depends on the instance distributions . The generalization to instances with different distributions has been widely acknowledged for the importance ( e.g. , in Mazyavkina et al . ( 2021 ) ) and remains a challenge . To tackle this issue , we propose the Generative Adversarial Neural Combinatorial Optimization ( GANCO ) framework which is model agnostic and generally applicable to various neural combinatorial optimization models for solving different COPs . Instead of training an optimization model only on instances following the predefined distribution , another deep neural network is deployed as a generation model to produce training instances following the distributions on which the optimization model performs poorly . The generation model and optimization model are trained alternatively in an adversarial way . Specifically , the generation model is trained by reinforcement learning to maximize the performance gap of the current optimization model on the generated instances with respect to a traditional non-learning baseline algorithm . The optimization model is trained in the original way but using the training dataset augmented with the newly generated instances to improve the generalization performance , i.e. , to reduce the performance gap . As we will show in the experiments , the non-learning baseline algorithms do not need to be very strong or fast . 
To demonstrate the effectiveness of the proposed GANCO framework , we apply it to a representative neural combinatorial optimization model , the Attention Model ( AM , Kool et al . ( 2019 ) ) on various COPs including Traveling Salesman Problem ( TSP ) , Capacitated Vehicle Routing Problem ( CVRP ) , Orienteering Problem ( OP ) , Prize Collecting TSP ( PCTSP ) and 0-1 Knapsack Problem ( KP ) . In the extensive experiments , we show that the proposed GANCO framework improves the generalization performance of the original optimization model on various testing distributions with little sacrifice of performance on the original training distribution . Furthermore , we show that the proposed GANCO framework can be readily and effectively applied to other optimization models such as the Policy Optimization with Multiple Optima ( POMO , Kwon et al . ( 2020 ) ) . 2 RELATED WORKS . Most deep learning models are trained to learn the construction heuristics where the model picks the actions sequentially to construct a solution ( e.g. , Bello et al . ( 2017 ) ; Nazari et al . ( 2018 ) ; Kool et al . ( 2019 ) ; Hottung et al . ( 2021 ) ; Kwon et al . ( 2020 ) ) , which perform well with fairly short running time . While some other models learn the improvement heuristics to locally refine existing solutions , which usually search over a large number of solutions and are relatively slow . Moreover , they often need to fit in frameworks specifically designed for different problems , such as 2-opt ( d O Costa et al. , 2020 ) , large neighborhood search ( Hottung & Tierney , 2020 ) and combinations of local operators ( Lu et al. , 2020 ) . Most models are trained with reinforcement learning ( RL ) except for several ones with supervised learning , e.g. , Vinyals et al . ( 2015 ) and Joshi et al . ( 2019 ) . Though we use representative construction heuristic models with RL in the experiments , our proposed GANCO framework can also be applied to models of other types . 
The most well-known models trained with adversary are the Generative Adversarial Networks ( Goodfellow et al. , 2014 ; Yang et al. , 2018 ; Zhang et al. , 2019 ) , which serve to generate new data samples to imitate the training set . The generative network generates the sample candidates whereas the discriminative network classifies whether the sample is generated or genuine . In contrast , the goal of our proposed GANCO framework is to train the optimization model for better generalization ability where the generation model is used only to generate more informative training samples . The idea of using adversarial training to improve RL robustness has also been studied in other fields such as Atari games and robotics ( Pinto et al. , 2017 ; Khirodkar et al. , 2018 ; Dennis et al. , 2020 ) . However , existing works in this direction are not directly applicable to our task due to its unique properties . The key is how to define the adversary , which is specific to the application domain . In the field of COPs , we define the adversary as a hard instance generator , which is evaluated by performance gaps of the optimization model with respect to a traditional non-learning baseline algorithm . To the best of our knowledge , the most related work considering generating data samples for COPs is Liu et al . ( 2020 ) where the samples are generated by performing crossover and mutation to existing instances . The goal is to search better parameters configuration for given traditional solvers . Whereas our proposed framework is designed to generate instances with the guidance of deep reinforcement learning and improve the generalization performance for neural combinatorial optimization models . In our experiments , we also verify the necessity of our data generation method . 3 MODEL . For most learning based methods , the patterns are learned from the training data and usually applicable to similar testing samples . 
However , if the distributions of training and testing samples are considerably different , these learned patterns may not be general enough to guarantee desirable inference performance . As we will show in the experiments , this issue is particularly severe for COPs due to the intricate relationship between the solution and the instance distribution . Instead of designing a network architecture to achieve better generalization performance for specific problems , we aim to propose a model agnostic framework which is generally applicable to different optimization models and training methods . The only requirement for the optimization models is that they are able to learn effective patterns from the training data and perform well on similar testing samples , which could be fulfilled by most of the recent learning based models . To this end , we propose the Generative Adversarial Neural Combinatorial Optimization ( GANCO ) framework , as shown in Figure 1 . Rather than training the optimization model solely on instances sampled from the predefined training distribution , we also deploy another deep learning model to generate instances on which the optimization model performs poorly . The generation model is trained by reinforcement learning to maximize the performance gap between the optimization model and a non-learning baseline algorithm on the generated instances . And the optimization model is trained with the generated samples to improve the performance on similar instances . In other words , the generation model and the optimization model are trained to maximize and minimize the performance gaps on the generated instances , respectively . In such an adversarial training way , the lower bound of the optimization model performance on the instance distributions generated by the generation model will be pushed up . We detail the generation model and the GANCO framework in the following subsections . 3.1 GENERATION MODEL . As suggested in existing works ( Kool et al. 
, 2019 ; Kwon et al. , 2020 ) , many important COPs can be viewed as a graph of n nodes with different attributes , like the city coordinates for TSP and the item information for KP . Therefore , we formulate the data generation task as determining the attribute values for each node of an instance . The proposed framework could also be easily adapted to other types of COPs , e.g. , by determining the presence of each edge for problems depending on the graph connectivity ( such as the Boolean Satisfiability Problem ) . The generation model Ω takes the noise sample h for each node as the input and outputs the distribution parameters for the node attributes . We format the noise input h ∈ Rn×2 as 2-dimensional variables for each node j = 1 , ... , n , each following the unit uniform distribution independently , i.e. , h ∼ U ( 0 , 1 ) . Pertaining to the network architecture , we use the self-attention block proposed in the Attention Model ( AM , Kool et al . ( 2019 ) ) , which consists of a multi-head self-attention layer ( Vaswani et al. , 2017 ) and two node-wise feed-forward layers with Batch Normalization ( Ioffe & Szegedy , 2015 ) and skip connections ( He et al. , 2016 ) . The noise input is node-wise linearly projected to H dimensions and encoded by k self-attention blocks . Another node-wise linear layer with a sigmoid activation projects the hidden vectors to the output distribution parameters Ω ( h ) ∈ Rn×d , where d is the number of attributes for each node . Then we draw the sample x̃ from the Gaussian distribution N ( µ , σ2 ) with mean µ = Ω ( h ) and fixed standard deviation σ . The network outputs are kept in the same range [ 0 , 1 ] so that the same variance can be used for different attributes . The sample x̃ is scaled to the corresponding valid ranges ( and discretized for discrete attributes ) to attain the node attributes x = A ( x̃ ) . The goal of the generation model is to generate instances on which the optimization model currently performs poorly .
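A rough sketch of the sampling pipeline above (our illustration, not the authors' code): uniform noise h, a network whose sigmoid outputs lie in [0, 1], Gaussian sampling around those means, and scaling to the valid attribute range. A random linear map stands in for the k self-attention blocks of Ω:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_instance(n=20, d=2, sigma=0.05, lo=0.0, hi=1.0):
    """Sample one COP instance (e.g., n city coordinates for a TSP).

    A random linear projection stands in for the k self-attention blocks
    of the real generation model Omega; everything else follows the text:
    h ~ U(0,1)^{n x 2}, mu = sigmoid(net(h)) in [0,1]^{n x d},
    x_tilde ~ N(mu, sigma^2), then scaling/clipping to the valid range.
    """
    h = rng.uniform(0.0, 1.0, size=(n, 2))           # noise input, one row per node
    W = rng.normal(size=(2, d))                       # placeholder for Omega's layers
    mu = 1.0 / (1.0 + np.exp(-h @ W))                 # sigmoid keeps means in [0, 1]
    x_tilde = rng.normal(mu, sigma)                   # sample around predicted means
    x = lo + np.clip(x_tilde, 0.0, 1.0) * (hi - lo)   # A(.): scale to valid range
    return mu, x

mu, x = generate_instance()
```

For discrete attributes (e.g., item weights in KP), the scaled sample would additionally be rounded to the valid grid.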
Since different instances have different optimal objective values , the attained objective values alone are not meaningful enough to evaluate the model performance . Although the optimality gap appears to be a good alternative metric , it often requires expensive computation to attain the optimal solutions . On the other hand , while achieving only approximate solutions , traditional non-learning algorithms tend to perform relatively stably on instances from different distributions , compared to learning-based methods . Therefore , the performance gap of the optimization model with respect to a traditional non-learning baseline algorithm can be regarded as a favorable metric . Theoretically , the baseline algorithm can be arbitrarily inferior as long as it performs consistently ( e.g. , with similar optimality gaps ) on different instances . As we will show in our experiments , the baseline algorithms do not need to be very strong or fast , as the training of the generation model converges in a relatively small number of iterations . The generation model is trained by reinforcement learning ( RL ) . Specifically , the state of the RL environment is the noise input h ∈ Rn×2 . The action is to sample the attribute values for a node . An episode samples the attribute values x = A ( x̃ ) with x̃ ∼ N ( Ω ( h ) , σ2 ) for all the nodes to form an instance . However , as the sampled attributes for one node do not affect the other nodes , the episode can be viewed as containing only one action with a large action space that samples all the node attributes at the same time . The reward for an episode is the performance gap between the optimization model Φ and the baseline algorithm B on the sampled instance x .
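To make the adversarial alternation concrete, here is a deliberately tiny 1-D toy (our illustration with made-up cost functions, not the paper's models): the "optimization model" is a single parameter theta with cost (x - theta)^2, the non-learning baseline has a constant cost, and the generator is a Gaussian over instances x whose mean is kept in [0, 1] and updated by REINFORCE to maximize the gap, while theta is trained on the generated instances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-in for the GANCO alternation (made-up costs, not the paper's
# models): the "optimization model" is a scalar theta with cost (x - theta)^2;
# the non-learning "baseline" has the constant cost 0.25, i.e. it is perfectly
# consistent across instance distributions.
theta = 0.2                      # optimization-model parameter
mu, sigma = 0.8, 0.1             # generator: instance x ~ N(mu, sigma^2)
lr_gen, lr_opt, batch = 0.05, 0.3, 64

for _ in range(200):
    x = rng.normal(mu, sigma, size=batch)               # generator samples instances
    gap = ((x - theta) ** 2 - 0.25) / 0.25              # reward G(x, Phi, B)
    adv = gap - gap.mean()                              # batch-mean reward baseline
    mu += lr_gen * np.mean(adv * (x - mu) / sigma**2)   # REINFORCE: maximize the gap
    mu = float(np.clip(mu, 0.0, 1.0))                   # keep instances in valid range
    theta += lr_opt * np.mean(x - theta)                # opt model trains on generated x
```

The generator keeps steering the instance distribution toward regions where theta performs poorly, and theta keeps chasing it, which is the minimax dynamic described above.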
The loss function is formally defined as follows , $\mathcal{L}(\Omega) = -\mathbb{E}_{h \sim U(0,1),\, \tilde{x} \sim \mathcal{N}(\Omega(h), \sigma^2)}\, G(x, \Phi, B)$ , ( 1 ) where $G(x, \Phi, B) = (O_\Phi(x) - O_B(x)) / O_B(x)$ is the performance ( objective ) gap ; $O_\Phi(x)$ and $O_B(x)$ are the objective values of the solutions found by the optimization model Φ and the baseline algorithm B for the generated instance x , respectively . Different from the generator in GAN ( Goodfellow et al. , 2014 ) , which deterministically infers the instance given the noise input , we sample the noise input to infer the distribution parameters and then sample the instance from the output distribution . The generation model is trained by the REINFORCE algorithm ( Sutton et al. , 2000 ) , with the mean of the gaps in a batch as the reward baseline . The gradient to train the generation model is expressed as follows , $\nabla_\Omega \mathcal{L}(\Omega) \approx -\frac{1}{N} \sum_{i=1}^{N} \big( G(x_i, \Phi, B) - \frac{1}{N} \sum_{j=1}^{N} G(x_j, \Phi, B) \big) \nabla_\Omega \log \mathcal{N}(\tilde{x}_i ; \Omega(h_i), \sigma^2)$ , ( 2 ) where i and j are the instance indices ; N is the batch size ; $\tilde{x}_i \in \mathbb{R}^{n \times d}$ is sampled from $\mathcal{N}(\Omega(h_i), \sigma^2)$ ; and $\mathcal{N}(\tilde{x}_i ; \Omega(h_i), \sigma^2)$ is the probability of $\tilde{x}_i$ under the Gaussian distribution $\mathcal{N}(\Omega(h_i), \sigma^2)$ . | This paper presents a generative adversarial training pipeline for general combinatorial optimization problems, aiming to improve the generalization ability of learned neural network solvers on unseen data distributions. In experiments, the authors select the attention model as the neural network solver, and REINFORCE and POMO as the RL algorithms. The experimental results are comprehensive, covering various routing problems and the knapsack problem. | SP:e7efded6718a77e5c3dace8663f7ddfa694ac469 |
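For a diagonal Gaussian, the score in Eq. (2) with respect to the mean is ∇_µ log N(x̃; µ, σ²) = (x̃ - µ)/σ², so the batch gradient estimate with the mean-gap baseline can be sketched as follows (a minimal numpy illustration; gradients with respect to the network weights of Ω would follow by backpropagation):

```python
import numpy as np

def reinforce_grad_wrt_mu(mu, x_tilde, gaps, sigma):
    """Batch estimate of Eq. (2), taken with respect to the Gaussian means
    mu = Omega(h) (shape (N, n, d)); gaps has shape (N,).

    grad_mu log N(x_tilde; mu, sigma^2) = (x_tilde - mu) / sigma^2, and the
    batch-mean gap serves as the variance-reducing reward baseline.
    """
    adv = gaps - gaps.mean()                   # advantage: gap minus reward baseline
    score = (x_tilde - mu) / sigma**2          # gradient of the Gaussian log-density
    # Negative sign: minimizing L means maximizing the expected gap.
    return -(adv[:, None, None] * score).mean(axis=0)

rng = np.random.default_rng(1)
N, n, d, sigma = 8, 20, 2, 0.05
mu = rng.uniform(size=(N, n, d))
x_tilde = rng.normal(mu, sigma)
g = reinforce_grad_wrt_mu(mu, x_tilde, rng.uniform(size=N), sigma)
```

With identical gaps across the batch the advantages vanish and the gradient is exactly zero, which is the point of subtracting the baseline.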
Long Document Summarization with Top-Down and Bottom-Up Representation Inference | 1 INTRODUCTION . Text summarization involves compressing a document while preserving key content and meaning . It can be done in either an extractive or abstractive manner . While an extractive summarization model extracts salient fragments ( e.g. , words , sentences ) from the source document to form a summary , an abstractive summarization system aims to generate a semantically coherent and linguistically fluent summary by conditioning on the document . The abstractive approach aligns better with how a human does summarization and generally performs better than extractive models in recent works ( Pilault et al. , 2020 ; Zhang et al. , 2020 ) . We thus focus on abstractive summarization . The dominant approach for abstractive summarization is to use a Seq2Seq model ( Sutskever et al. , 2014 ) with an encoder-decoder architecture instantiated with either RNNs ( Hochreiter & Schmidhuber , 1997 ) or , more recently , transformers ( Vaswani et al. , 2017 ) . In such a model , an encoder infers the latent representations of observed tokens ( words or subwords ) in the document , conditioning on which a decoder generates a summary . This paper studies the problem of how to infer good latent representations , which in turn would improve summarization . We propose a framework which ( 1 ) assumes a multi-scale latent structure of a document and ( 2 ) synergizes bottom-up inference with top-down inference . In a multi-scale structure , high-level variables ( like those representing sentences or segments ) model the document at a coarser time-scale , abstract away details , and are suitable for capturing long-range dependency of the document ; in contrast , low-level variables ( like those representing tokens ) preserve details and prevent the summary from losing key content .
In our framework , the summary is generated by conditioning on token representations ( low-level variables ) , similar to recent abstractive summarization models ( Zhang et al. , 2020 ; Zaheer et al. , 2020 ; Beltagy et al. , 2020 ) . There is , however , a critical difference . In our framework , token representations are first bottom-up inferred and then top-down updated with high-level representations , hence rendering low-level representations aware of long-range information . We hypothesize that the proposed inference approach would improve summarization . Multi-level models have been widely studied in modeling for images ( Sønderby et al. , 2016 ) , speech ( Mehri et al. , 2016 ) , and language ( Chung et al. , 2016 ) . Prior summarization works ( Cheng & Lapata , 2016 ; Nallapati et al. , 2016 ; Zhang et al. , 2019 ; Xu et al. , 2020 ) have also explored hierarchical models . But they mostly focus on extractive summarization and follow a bottom-up inference approach . They pool information in words or sub-words to form sentence representations , based on which a classification is done to make an extraction decision . In comparison , our framework combines bottom-up and top-down inference . This draws direct inspiration from a line of work which examines variational inference for hierarchical top-down generative models ( Sønderby et al. , 2016 ; Maaløe et al. , 2019 ; Child , 2020 ) . In these models , in the bottom-up path , the distribution parameters of higher-level stochastic variables are computed as a function of lower-level stochastic variables , while in the top-down path , the distribution parameters of lower-level variables are corrected as a function of higher-level variables . Although we do not assume stochasticity of the document latent representations , our encoder or inference model follows the same idea to infer token representations . The proposed framework is agnostic to model architecture .
Due to the dominance of transformer models in NLP ( Chen et al. , 2018 ; Zhang et al. , 2020 ; Sun et al. , 2019 ; Martin et al. , 2020 ) and to leverage pre-trained language models ( Liu et al. , 2019 ; Lewis et al. , 2020 ) , we instantiate our framework with a transformer-based model . Applying transformers to long documents faces a bottleneck , because their computational and memory costs have a quadratic dependency on the sequence length . This issue is especially critical for summarization , since we are more interested in summarizing long documents : short ones can be quickly read through by humans . To address this issue , a large body of prior work has been devoted to developing efficient transformers with sub-quadratic complexity . They approach this problem with kernel-based methods ( Katharopoulos et al. , 2020 ; Choromanski et al. , 2020 ) , by low-rank approximation to the attention matrix ( Wang et al. , 2020 ) , by synthesizing the attention weights ( Tay et al. , 2021 ) , or by designing content-independent ( Child et al. , 2019 ; Beltagy et al. , 2020 ; Ainslie et al. , 2020 ; Zaheer et al. , 2020 ) or content-dependent sparse attention mechanisms ( Kitaev et al. , 2020 ; Roy et al. , 2021 ; Wang et al. , 2021 ) . Our framework provides a natural way to diminish this quadratic complexity issue . In the bottom-up inference , we use local self-attention where each token only attends to tokens within a local fixed-length window , and thus the complexity does not grow as a function of the input sequence length . The top-down correction for the token representations enables them to capture long-range context , reducing the limitation of local attention . Furthermore , in contrast to most prior efficient transformers that are incompatible with pre-trained language models , our framework is flexible enough to leverage any pre-trained encoder-decoder models such as BART ( Lewis et al. , 2020 ) and T5 ( Raffel et al. , 2020 ) .
We call the transformer-based model following the proposed framework the top-down transformer , to emphasize the importance of the top-down inference . We evaluate the top-down transformer on a set of distinct summarization benchmarks . These benchmarks cover documents from a variety of domains , including news articles and scientific , conversational , and narrative documents , and of various lengths ranging from hundreds of words ( e.g. , a news article ) , several thousand to over ten thousand words ( e.g. , a scientific paper , a book chapter ) , to even over a hundred thousand words ( e.g. , an entire book ) . On short documents , models following our framework achieve on-par or better summarization performance than models with full self-attention , and are more compute- and memory-efficient . Across all long document datasets , our models achieve state-of-the-art performance . In the end , we show that our model is able to summarize a whole book . Compared to a concurrent work ( Wu et al. , 2021 ) using GPT-3 and requiring humans to extensively label data , our model achieves competitive performance with 380 times fewer parameters and a small amount of publicly available data . The diverse and strong empirical results support the effectiveness and wide applicability of the proposed model . 2 METHODS . Figure 1 gives a graphical overview of the top-down transformer , instantiating the proposed framework . We introduce its details in this section . Suppose a document has N tokens , $t = \{t_i\}_{i=1}^{N}$ . In our method , token representations are inferred by combining top-down and bottom-up inference . This leads to effective and efficient inference for token representations . They are then attended by a decoder to generate a summary , as in a regular encoder-decoder transformer . 2.1 BOTTOM-UP INFERENCE . In the bottom-up inference , contextual embeddings of the tokens , $\{e_i \mid e_i \in \mathbb{R}^d\}_{i=1}^{N}$ , are computed with $N_1$ layers of local self-attention .
In particular , each token $t_i$ only attends to nearby tokens within a window of size $w$ . The complexity is hence $O(Nw)$ , in contrast to $O(N^2)$ for a full self-attention model . Please see Supplementary B.1 for more details . 2.2 TOP-DOWN INFERENCE . The efficiency with local self-attention in the bottom-up inference nevertheless comes with a limitation , that is , each $e_i$ only captures the context within a local window instead of that of the whole document . To mitigate this issue , we propose a top-down inference for token representations . Consider a two-level multi-scale latent structure for a document . The low level consists of token representations , $\{e_i\}_{i=1}^{N}$ , computed by the bottom-up inference . The top level consists of units at a coarser level . It is affordable to apply full self-attention at the top level due to its coarser granularity , allowing these top-level units to capture global document context . In our work , the self-attention mechanism for the top-level representations is simply the original multi-head self-attention proposed in Vaswani et al . ( 2017 ) . Readers are referred to Vaswani et al . ( 2017 ) for details . Denote the top-level representations after the self-attention update as $\{s_j \mid s_j \in \mathbb{R}^d\}_{j=1}^{M}$ ( see Section 2.3 for details on top-level representation initialization methods ) . We can then update the bottom-up-inferred token representations with the top-level representations . This is achieved with $N_3$ top-down inference layers , as illustrated by the middle panel in Figure 1 . Each layer contains three transformations on $\{e_i\}$ : ( 1 ) token self-attention , ( 2 ) token-segment cross-attention , ( 3 ) feed-forward . ( 1 ) and ( 3 ) are the same as those in the bottom-up inference layers , i.e. , regular self-attention layers with local attention . ( 2 ) , implementing the cross-attention between the top and bottom levels , is the critical operation .
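The banded attention pattern of the bottom-up layers can be sketched as follows (single head, no learned projections; the dense mask is for clarity, whereas a real implementation would only compute the O(Nw) entries inside the band):

```python
import numpy as np

def local_self_attention(E, w):
    """Single-head local self-attention over token embeddings E (N x d).

    Each token attends only to tokens within +/- w positions, so the useful
    part of the attention matrix is a band of O(N*w) entries rather than the
    full O(N^2). Learned projections f_q, f_k, f_v are omitted for brevity.
    """
    N, d = E.shape
    scores = E @ E.T / np.sqrt(d)
    idx = np.arange(N)
    inside = np.abs(idx[:, None] - idx[None, :]) <= w   # True inside the window
    scores = np.where(inside, scores, -np.inf)          # mask out-of-window pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ E, weights

rng = np.random.default_rng(2)
out, attn = local_self_attention(rng.normal(size=(16, 8)), w=2)
```

Each output row mixes only the 2w + 1 neighboring tokens, which is exactly why a token representation needs the top-down correction to see the rest of the document.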
In particular , each $e_i$ is updated with cross-attention , $\tilde{e}_i = e_i + \mathrm{LayerNorm}\big(\sum_{j=1}^{M} \alpha_{ij} f_v(s_j)\big)$ , $\alpha_{ij} = \frac{\exp(f_q(e_i)^\top f_k(s_j)/\sqrt{d})}{\sum_{l=1}^{M} \exp(f_q(e_i)^\top f_k(s_l)/\sqrt{d})}$ ( 1 ) where $f_q$ , $f_k$ , and $f_v$ indicate query , key , and value linear mappings , respectively . For notational clarity , Equation 1 only illustrates the case with a single attention head . In practice , we use multiple attention heads . The cross-attention operation injects global contextual information into bottom-up-inferred token representations , $e_i$ , and yields global-context-aware token representations , $\tilde{e}_i$ , conditioning on which a summary can be generated by a decoder . To instantiate the top-down inference , we need to make two choices : ( 1 ) the number of top levels above the token level and ( 2 ) the unit representation for each top level . We choose to use one top level since it is sufficiently coarse to afford full self-attention for the wide range of long document benchmarks we experimented on . Natural choices for top-level units are sentences , paragraphs , and chapters , depending on the number of top levels considered . Such a choice , however , might lead to complicated implementations and non-scalability due to the varying length of these units . We hence choose a simpler approach , where the top level consists of fixed-length segments of the documents . While we use a single top level , multiple top levels can be simply achieved with segments of increasingly coarser granularity . In the top-down inference , segment-level self-attention has a complexity of $O(M^2)$ , and token-segment cross-attention has a complexity of $O(NM)$ . Thus , together with bottom-up inference , the complexity is $O(Nw + M^2 + NM)$ . In practice , we use relatively small $w$ ( window size ) and $M$ ( number of segments ) . | The paper proposes a new model architecture for abstractive text summarization.
The framework assumes a hierarchical latent structure of a document where the top level captures long-range dependency at a coarser time scale, and the bottom token level preserves the details. The proposed model shows advantages both in capturing global and local context and in computational efficiency. The model is validated on many datasets and beats almost all previous models. | SP:5ec00063519bda1ac56a3943ed49417125739075 |
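A single-head numpy sketch of the token-segment cross-attention in Equation 1 (with random matrices standing in for the learned maps f_q, f_k, f_v, and the residual-plus-LayerNorm update):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    m = x.mean(axis=-1, keepdims=True)
    v = x.var(axis=-1, keepdims=True)
    return (x - m) / np.sqrt(v + eps)

def token_segment_cross_attention(E, S, seed=3):
    """Single-head version of Equation 1: tokens E (N x d) query segment
    representations S (M x d), and each token receives a residual LayerNorm
    update from the attended segment values.

    Random matrices stand in for the learned maps f_q, f_k, f_v.
    """
    N, d = E.shape
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    scores = (E @ Wq) @ (S @ Wk).T / np.sqrt(d)   # (N x M) token-segment scores
    a = np.exp(scores - scores.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)            # alpha_ij: each row sums to 1
    return E + layer_norm(a @ (S @ Wv))           # e_i + LayerNorm(sum_j a_ij f_v(s_j))

rng = np.random.default_rng(4)
E = rng.normal(size=(32, 8))   # N = 32 bottom-up token representations
S = rng.normal(size=(4, 8))    # M = 4 segment representations
E_tilde = token_segment_cross_attention(E, S)
```

The cost of this step is O(NM), so with a small number of segments M it stays far below the O(N^2) of full token-level self-attention.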
Long Document Summarization with Top-Down and Bottom-Up Representation Inference | 1 INTRODUCTION . Text summarization involves compressing a document while preserving key content and meaning . It can be done in either an extractive or abstractive manner . While an extractive summarization model extracts salient fragments ( e.g. , words , sentences ) from the source document to form a summary , an abstractive summarization system aims to generate a semantically coherent and linguistically fluent summary by conditioning on the document . The abstractive approach aligns better with how a human does summarization and generally performs better than extractive models in recent works ( Pilault et al. , 2020 ; Zhang et al. , 2020 ) . We thus focus on abstractive summarization . The dominant approach for abstractive summarization is to use a Seq2Seq model ( Sutskever et al. , 2014 ) with an encoder-decoder architecture instantiated with either RNNs ( Hochreiter & Schmidhuber , 1997 ) or , more recently , transformers ( Vaswani et al. , 2017 ) . In such a model , an encoder infers the latent representations of observed tokens ( words or subwords ) in the document , conditioning on which a decoder generates a summary . This paper studies the problem of how to infer good latent representations , which in turn would improve summarization . We propose a framework which ( 1 ) assumes a multi-scale latent structure of a document and ( 2 ) synergizes bottom-up inference with top-down inference . In a multi-scale structure , high-level variables ( like those representing sentences or segments ) model the document at a coarser time-scale , abstract away details , and are suitable for capturing long-range dependency of the document ; in contrast , low-level variables ( like those representing tokens ) preserve details and prevent the summary from losing key content .
In our framework , the summary is generated by conditioning on token representations ( low-level variables ) , similar to recent abstractive summarization models ( Zhang et al. , 2020 ; Zaheer et al. , 2020 ; Beltagy et al. , 2020 ) . There is , however , a critical difference . In our framework , token representations are first bottom-up inferred and then top-down updated with high-level representations , hence rendering low-level representations aware of long-range information . We hypothesize that the proposed inference approach would improve summarization . Multi-level models have been widely studied in modeling for images ( Sønderby et al. , 2016 ) , speech ( Mehri et al. , 2016 ) , and language ( Chung et al. , 2016 ) . Prior summarization works ( Cheng & Lapata , 2016 ; Nallapati et al. , 2016 ; Zhang et al. , 2019 ; Xu et al. , 2020 ) have also explored hierarchical models . But they mostly focus on extractive summarization and follow a bottom-up inference approach . They pool information in words or sub-words to form sentence representations , based on which a classification is done to make an extraction decision . In comparison , our framework combines bottom-up and top-down inference . This draws direct inspiration from a line of work which examines variational inference for hierarchical top-down generative models ( Sønderby et al. , 2016 ; Maaløe et al. , 2019 ; Child , 2020 ) . In these models , in the bottom-up path , the distribution parameters of higher-level stochastic variables are computed as a function of lower-level stochastic variables , while in the top-down path , the distribution parameters of lower-level variables are corrected as a function of higher-level variables . Although we do not assume stochasticity of the document latent representations , our encoder or inference model follows the same idea to infer token representations . The proposed framework is agnostic to model architecture .
Due to the dominance of transformer models in NLP ( Chen et al. , 2018 ; Zhang et al. , 2020 ; Sun et al. , 2019 ; Martin et al. , 2020 ) and to leverage pre-trained language models ( Liu et al. , 2019 ; Lewis et al. , 2020 ) , we instantiate our framework with a transformer-based model . Applying transformers to long documents faces a bottleneck , because their computational and memory costs have a quadratic dependency on the sequence length . This issue is especially critical for summarization , since we are more interested in summarizing long documents : short ones can be quickly read through by humans . To address this issue , a large body of prior work has been devoted to developing efficient transformers with sub-quadratic complexity . They approach this problem with kernel-based methods ( Katharopoulos et al. , 2020 ; Choromanski et al. , 2020 ) , by low-rank approximation to the attention matrix ( Wang et al. , 2020 ) , by synthesizing the attention weights ( Tay et al. , 2021 ) , or by designing content-independent ( Child et al. , 2019 ; Beltagy et al. , 2020 ; Ainslie et al. , 2020 ; Zaheer et al. , 2020 ) or content-dependent sparse attention mechanisms ( Kitaev et al. , 2020 ; Roy et al. , 2021 ; Wang et al. , 2021 ) . Our framework provides a natural way to diminish this quadratic complexity issue . In the bottom-up inference , we use local self-attention where each token only attends to tokens within a local fixed-length window , and thus the complexity does not grow as a function of the input sequence length . The top-down correction for the token representations enables them to capture long-range context , reducing the limitation of local attention . Furthermore , in contrast to most prior efficient transformers that are incompatible with pre-trained language models , our framework is flexible enough to leverage any pre-trained encoder-decoder models such as BART ( Lewis et al. , 2020 ) and T5 ( Raffel et al. , 2020 ) .
We call the transformer-based model following the proposed framework the top-down transformer , to emphasize the importance of the top-down inference . We evaluate the top-down transformer on a set of distinct summarization benchmarks . These benchmarks cover documents from a variety of domains , including news articles and scientific , conversational , and narrative documents , and of various lengths ranging from hundreds of words ( e.g. , a news article ) , several thousand to over ten thousand words ( e.g. , a scientific paper , a book chapter ) , to even over a hundred thousand words ( e.g. , an entire book ) . On short documents , models following our framework achieve on-par or better summarization performance than models with full self-attention , and are more compute- and memory-efficient . Across all long document datasets , our models achieve state-of-the-art performance . In the end , we show that our model is able to summarize a whole book . Compared to a concurrent work ( Wu et al. , 2021 ) using GPT-3 and requiring humans to extensively label data , our model achieves competitive performance with 380 times fewer parameters and a small amount of publicly available data . The diverse and strong empirical results support the effectiveness and wide applicability of the proposed model . 2 METHODS . Figure 1 gives a graphical overview of the top-down transformer , instantiating the proposed framework . We introduce its details in this section . Suppose a document has N tokens , $t = \{t_i\}_{i=1}^{N}$ . In our method , token representations are inferred by combining top-down and bottom-up inference . This leads to effective and efficient inference for token representations . They are then attended by a decoder to generate a summary , as in a regular encoder-decoder transformer . 2.1 BOTTOM-UP INFERENCE . In the bottom-up inference , contextual embeddings of the tokens , $\{e_i \mid e_i \in \mathbb{R}^d\}_{i=1}^{N}$ , are computed with $N_1$ layers of local self-attention .
In particular , each token $t_i$ only attends to nearby tokens within a window of size $w$ . The complexity is hence $O(Nw)$ , in contrast to $O(N^2)$ for a full self-attention model . Please see Supplementary B.1 for more details . 2.2 TOP-DOWN INFERENCE . The efficiency with local self-attention in the bottom-up inference nevertheless comes with a limitation , that is , each $e_i$ only captures the context within a local window instead of that of the whole document . To mitigate this issue , we propose a top-down inference for token representations . Consider a two-level multi-scale latent structure for a document . The low level consists of token representations , $\{e_i\}_{i=1}^{N}$ , computed by the bottom-up inference . The top level consists of units at a coarser level . It is affordable to apply full self-attention at the top level due to its coarser granularity , allowing these top-level units to capture global document context . In our work , the self-attention mechanism for the top-level representations is simply the original multi-head self-attention proposed in Vaswani et al . ( 2017 ) . Readers are referred to Vaswani et al . ( 2017 ) for details . Denote the top-level representations after the self-attention update as $\{s_j \mid s_j \in \mathbb{R}^d\}_{j=1}^{M}$ ( see Section 2.3 for details on top-level representation initialization methods ) . We can then update the bottom-up-inferred token representations with the top-level representations . This is achieved with $N_3$ top-down inference layers , as illustrated by the middle panel in Figure 1 . Each layer contains three transformations on $\{e_i\}$ : ( 1 ) token self-attention , ( 2 ) token-segment cross-attention , ( 3 ) feed-forward . ( 1 ) and ( 3 ) are the same as those in the bottom-up inference layers , i.e. , regular self-attention layers with local attention . ( 2 ) , implementing the cross-attention between the top and bottom levels , is the critical operation .
In particular , each $e_i$ is updated with cross-attention , $\tilde{e}_i = e_i + \mathrm{LayerNorm}\big(\sum_{j=1}^{M} \alpha_{ij} f_v(s_j)\big)$ , $\alpha_{ij} = \frac{\exp(f_q(e_i)^\top f_k(s_j)/\sqrt{d})}{\sum_{l=1}^{M} \exp(f_q(e_i)^\top f_k(s_l)/\sqrt{d})}$ ( 1 ) where $f_q$ , $f_k$ , and $f_v$ indicate query , key , and value linear mappings , respectively . For notational clarity , Equation 1 only illustrates the case with a single attention head . In practice , we use multiple attention heads . The cross-attention operation injects global contextual information into bottom-up-inferred token representations , $e_i$ , and yields global-context-aware token representations , $\tilde{e}_i$ , conditioning on which a summary can be generated by a decoder . To instantiate the top-down inference , we need to make two choices : ( 1 ) the number of top levels above the token level and ( 2 ) the unit representation for each top level . We choose to use one top level since it is sufficiently coarse to afford full self-attention for the wide range of long document benchmarks we experimented on . Natural choices for top-level units are sentences , paragraphs , and chapters , depending on the number of top levels considered . Such a choice , however , might lead to complicated implementations and non-scalability due to the varying length of these units . We hence choose a simpler approach , where the top level consists of fixed-length segments of the documents . While we use a single top level , multiple top levels can be simply achieved with segments of increasingly coarser granularity . In the top-down inference , segment-level self-attention has a complexity of $O(M^2)$ , and token-segment cross-attention has a complexity of $O(NM)$ . Thus , together with bottom-up inference , the complexity is $O(Nw + M^2 + NM)$ . In practice , we use relatively small $w$ ( window size ) and $M$ ( number of segments ) . | Modeling long documents is a challenging problem. This paper tackles it with top-down and bottom-up structures.
The hierarchical structure proposed in this paper adopts local attention and top-down correction so that the model learns both local and long-range dependencies. Experiments on long-document summarization benchmarks show the effectiveness of the proposed method. | SP:5ec00063519bda1ac56a3943ed49417125739075 |
Long Document Summarization with Top-Down and Bottom-Up Representation Inference | 1 INTRODUCTION . Text summarization involves compressing a document while preserving key content and meaning . It can be done in either an extractive or abstractive manner . While an extractive summarization model extracts salient fragments ( e.g. , words , sentences ) from the source document to form a summary , an abstractive summarization system aims to generate a semantically coherent and linguistically fluent summary by conditioning on the document . The abstractive approach aligns better with how a human does summarization and generally performs better than extractive models in recent works ( Pilault et al. , 2020 ; Zhang et al. , 2020 ) . We thus focus on abstractive summarization . The dominant approach for abstractive summarization is to use a Seq2Seq model ( Sutskever et al. , 2014 ) with an encoder-decoder architecture instantiated with either RNNs ( Hochreiter & Schmidhuber , 1997 ) or , more recently , transformers ( Vaswani et al. , 2017 ) . In such a model , an encoder infers the latent representations of observed tokens ( words or subwords ) in the document , conditioning on which a decoder generates a summary . This paper studies the problem of how to infer good latent representations , which in turn would improve summarization . We propose a framework which ( 1 ) assumes a multi-scale latent structure of a document and ( 2 ) synergizes bottom-up inference with top-down inference . In a multi-scale structure , high-level variables ( like those representing sentences or segments ) model the document at a coarser time-scale , abstract away details , and are suitable for capturing long-range dependency of the document ; in contrast , low-level variables ( like those representing tokens ) preserve details and prevent the summary from losing key content .
In our framework , the summary is generated by conditioning on token representations ( low-level variables ) , similar to recent abstractive summarization models ( Zhang et al. , 2020 ; Zaheer et al. , 2020 ; Beltagy et al. , 2020 ) . There is , however , a critical difference . In our framework , token representations are first bottom-up inferred and then top-down updated with high-level representations , hence rendering low-level representations aware of long-range information . We hypothesize that the proposed inference approach would improve summarization . Multi-level models have been widely studied in modeling for images ( Sønderby et al. , 2016 ) , speech ( Mehri et al. , 2016 ) , and language ( Chung et al. , 2016 ) . Prior summarization works ( Cheng & Lapata , 2016 ; Nallapati et al. , 2016 ; Zhang et al. , 2019 ; Xu et al. , 2020 ) have also explored hierarchical models . But they mostly focus on extractive summarization and follow a bottom-up inference approach . They pool information in words or sub-words to form sentence representations , based on which a classification is done to make an extraction decision . In comparison , our framework combines bottom-up and top-down inference . This draws direct inspiration from a line of work which examines variational inference for hierarchical top-down generative models ( Sønderby et al. , 2016 ; Maaløe et al. , 2019 ; Child , 2020 ) . In these models , in the bottom-up path , the distribution parameters of higher-level stochastic variables are computed as a function of lower-level stochastic variables , while in the top-down path , the distribution parameters of lower-level variables are corrected as a function of higher-level variables . Although we do not assume stochasticity of the document latent representations , our encoder or inference model follows the same idea to infer token representations . The proposed framework is agnostic to model architecture .
Due to the dominance of transformer models in NLP ( Chen et al. , 2018 ; Zhang et al. , 2020 ; Sun et al. , 2019 ; Martin et al. , 2020 ) and to leverage pre-trained language models ( Liu et al. , 2019 ; Lewis et al. , 2020 ) , we instantiate our framework with a transformer-based model . Applying transformers to long documents faces a bottleneck , because their computational and memory cost has a quadratic dependency on the sequence length . This issue is especially critical for summarization since we are more interested in summarizing long documents , because short ones can be quickly read through by humans . To address this issue , a large amount of prior work has been devoted to developing efficient transformers with sub-quadratic complexity . They approach this problem with kernel-based methods ( Katharopoulos et al. , 2020 ; Choromanski et al. , 2020 ) , by low-rank approximation to the attention matrix ( Wang et al. , 2020 ) , by synthesizing the attention weights ( Tay et al. , 2021 ) , or by designing content-independent ( Child et al. , 2019 ; Beltagy et al. , 2020 ; Ainslie et al. , 2020 ; Zaheer et al. , 2020 ) or content-dependent sparse attention mechanisms ( Kitaev et al. , 2020 ; Roy et al. , 2021 ; Wang et al. , 2021 ) . Our framework provides a natural way to diminish this quadratic complexity issue . In the bottom-up inference , we use local self-attention where each token only attends to tokens within a local fixed-length window , and thus the complexity grows only linearly in the input sequence length . The top-down correction for the token representations enables them to capture long-range context , reducing the limitation of local attention . Furthermore , in contrast to most prior efficient transformers that are incompatible with pre-trained language models , our framework is flexible enough to leverage any pre-trained encoder-decoder models such as BART ( Lewis et al. , 2020 ) and T5 ( Raffel et al. , 2020 ) .
We call the transformer-based model following the proposed framework the top-down transformer , to emphasize the importance of the top-down inference . We evaluate the top-down transformer on a set of distinct summarization benchmarks . These benchmarks cover documents from a variety of domains , including news articles and scientific , conversational , and narrative documents , and of various lengths ranging from hundreds of words ( e.g. , a news article ) , several thousand to over ten thousand words ( e.g. , a scientific paper , a book chapter ) , to even over a hundred thousand words ( e.g. , an entire book ) . On short documents , models following our framework achieve on-par or better summarization performance than models with full self-attention , and are more compute- and memory-efficient . Across all long document datasets , our models achieve state-of-the-art performance . In the end , we show that our model is able to summarize a whole book . Compared to a concurrent work ( Wu et al. , 2021 ) using GPT-3 and requiring humans to extensively label data , our model achieves competitive performance with 380 times fewer parameters and a small amount of publicly available data . The diverse and strong empirical results support the effectiveness and wide applicability of the proposed model . 2 METHODS . Figure 1 gives a graphical overview of the top-down transformer , instantiating the proposed framework . We introduce its details in this section . Suppose a document has $N$ tokens , $t = \{ t_i \}_{i=1}^{N}$ . In our method , token representations are inferred by combining top-down and bottom-up inference . This leads to effective and efficient inference for token representations . They are then attended to by a decoder to generate a summary , as in a regular encoder-decoder transformer . 2.1 BOTTOM-UP INFERENCE . In the bottom-up inference , contextual embeddings of the tokens , $\{ e_i \mid e_i \in \mathbb{R}^d \}_{i=1}^{N}$ , are computed with $N_1$ layers of local self-attention .
In particular , each token $t_i$ only attends to nearby tokens within a window of size $w$ . The complexity is hence $O(Nw)$ , in contrast to $O(N^2)$ for a full self-attention model . Please see Supplementary B.1 for more details . 2.2 TOP-DOWN INFERENCE . The efficiency of local self-attention in the bottom-up inference nevertheless comes with a limitation , that is , each $e_i$ only captures the context within a local window instead of that of the whole document . To mitigate this issue , we propose a top-down inference for token representations . Consider a two-level multi-scale latent structure for a document . The low level consists of token representations , $\{ e_i \}_{i=1}^{N}$ , computed by the bottom-up inference . The top level consists of units at a coarser level . It is affordable to apply full self-attention at the top level due to its coarser granularity , allowing these top-level units to capture global document context . In our work , the self-attention mechanism for the top-level representations is simply the original multi-head self-attention proposed in Vaswani et al . ( 2017 ) . Readers are referred to Vaswani et al . ( 2017 ) for details . Denote the top-level representations after the self-attention update as $\{ s_j \mid s_j \in \mathbb{R}^d \}_{j=1}^{M}$ ( see Section 2.3 for details on top-level representation initialization methods ) . We can then update the bottom-up-inferred token representations with the top-level representations . This is achieved with $N_3$ top-down inference layers , as illustrated by the middle panel in Figure 1 . Each layer contains three transformations on $\{ e_i \}$ : ( 1 ) token self-attention , ( 2 ) token-segment cross-attention , ( 3 ) feed-forward . ( 1 ) and ( 3 ) are the same as those in the bottom-up inference layers , i.e. , regular self-attention layers with local attention . ( 2 ) , implementing the cross-attention between the top and bottom levels , is the critical operation .
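To make the windowed attention concrete, here is a minimal numpy sketch of single-head local self-attention; the function name, the centered window, and the omission of query/key/value projections and multi-head structure are illustrative simplifications, not the paper's exact implementation:

```python
import numpy as np

def local_self_attention(E, w):
    """Single-head local self-attention sketch: each token attends only to
    tokens within a window of half-width w // 2 around it, so the total cost
    is O(N * w) rather than O(N^2). E has shape (N, d)."""
    N, d = E.shape
    out = np.zeros_like(E)
    for i in range(N):
        lo, hi = max(0, i - w // 2), min(N, i + w // 2 + 1)
        scores = E[lo:hi] @ E[i] / np.sqrt(d)   # dot-product scores in the window
        weights = np.exp(scores - scores.max()) # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ E[lo:hi]             # convex combination of window tokens
    return out
```

With full self-attention, `scores` would be computed against all `N` tokens per query; here a token far outside the window cannot influence the output at position `i`, which is exactly what keeps the cost linear in `N`.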
In particular , each $e_i$ is updated with cross-attention , $\tilde{e}_i = e_i + \mathrm{LayerNorm}\big( \sum_{j=1}^{M} \alpha_{ij} f_v(s_j) \big)$ , $\alpha_{ij} = \frac{ \exp\big( f_q(e_i)^{\top} f_k(s_j) / \sqrt{d} \big) }{ \sum_{l=1}^{M} \exp\big( f_q(e_i)^{\top} f_k(s_l) / \sqrt{d} \big) }$ ( 1 ) where $f_q$ , $f_k$ , and $f_v$ indicate query , key , and value linear mappings , respectively . For notational clarity , Equation 1 only illustrates the case with a single attention head . In practice , we use multiple heads . The cross-attention operation injects global contextual information into the bottom-up-inferred token representations , $e_i$ , and yields global-context-aware token representations , $\tilde{e}_i$ , conditioning on which a summary can be generated by a decoder . To instantiate the top-down inference , we need to make two choices : ( 1 ) the number of top levels above the token level and ( 2 ) the unit representation for each top level . We choose to use one top level since it is sufficiently coarse to apply full self-attention for the wide range of long document benchmarks we experimented on . A natural choice for top-level units is sentences , paragraphs , and chapters , depending on the number of top levels considered . Such a choice however might lead to complicated implementations and non-scalability due to the varying length of these units . We hence choose a simpler approach , where the top level consists of fixed-length segments of the document . While we use a single top level , multiple top levels can simply be achieved with segments of increasingly coarser granularity . In the top-down inference , segment-level self-attention has a complexity of $O(M^2)$ , and token-segment cross-attention has a complexity of $O(NM)$ . Thus , together with bottom-up inference , the complexity is $O(Nw + M^2 + NM)$ . In practice , we use relatively small $w$ ( window size ) and $M$ ( number of segments ) .
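The token-segment cross-attention of Eq. (1) can be sketched as follows in numpy. Mean-pooled fixed-length segments stand in for the top-level representations $s_j$, the maps $f_q, f_k, f_v$ are plain matrices, and the simplified LayerNorm omits learnable gain and bias; all of these are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Parameter-free layer normalization over the last axis."""
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def token_segment_cross_attention(E, S, Wq, Wk, Wv):
    """Single-head version of Eq. (1): each token e_i is updated with a
    residual cross-attention read from the M segment vectors s_j.
    E: (N, d) tokens, S: (M, d) segments; cost is O(N * M)."""
    d = E.shape[1]
    scores = (E @ Wq) @ (S @ Wk).T / np.sqrt(d)   # (N, M) attention logits
    scores -= scores.max(axis=1, keepdims=True)   # stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # rows are alpha_ij
    return E + layer_norm(A @ (S @ Wv))           # e_i + LN(sum_j alpha_ij f_v(s_j))

def pool_segments(E, seg_len):
    """Fixed-length segments via mean pooling, a simple top-level choice."""
    N, d = E.shape
    M = N // seg_len
    return E[:M * seg_len].reshape(M, seg_len, d).mean(axis=1)
```

The segment count `M` is much smaller than `N`, so the `(N, M)` score matrix is what makes the global read cheap relative to full `(N, N)` attention.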
Aiming at the task of long-document summarization, this paper proposes a combined top-down and bottom-up inference method that improves on the traditional bottom-up transformer encoder structure and has higher memory and computational efficiency. The model captures long-range dependencies at a coarser time scale at the top level, while the token level at the bottom retains details. It uses local self-attention to improve computational efficiency, and achieves good results on a large number of long-document summarization datasets. However, similar methods have been extensively studied in many fields, the paper offers no detailed analysis of the specific problems of long texts, and the theoretical innovation is insufficient. | SP:5ec00063519bda1ac56a3943ed49417125739075 |
The Three Stages of Learning Dynamics in High-dimensional Kernel Methods | 1 INTRODUCTION . In order to fundamentally understand how and why deep learning works , there has been much effort to understand the dynamics of neural networks trained by gradient-descent-based algorithms . This effort has led to the discovery of many intriguing empirical phenomena ( e.g . Frankle et al . ( 2020 ) ; Fort et al . ( 2020 ) ; Nakkiran et al . ( 2019a ; b ; 2020 ) ) that help shape our conceptual framework for understanding the learning process in neural networks . Nakkiran et al . ( 2019b ) provides evidence that SGD starts by first learning a linear classifier and over time learns increasingly complex functions . Nakkiran et al . ( 2020 ) introduces the “ deep bootstrap ” phenomenon : for some deep learning tasks the empirical world test error remains close to the oracle world error1 for many SGD iterations , even if the empirical training and test errors display a large gap . To better understand such phenomena , it is useful to study training dynamics in related but mathematically tractable settings . One approach for theoretical investigation is to study kernel methods , which were recently shown to have a tight connection with over-parameterized neural networks ( Jacot et al. , 2018 ; Du et al. , 2018 ) . Indeed , consider a sequence of neural networks $(f_N(x ; \theta))_{N \in \mathbb{N}}$ with the widths of the layers going to infinity as $N \to \infty$ . Assuming proper parametrization and initialization , for large $N$ the SGD dynamics on $f_N$ are known to be well approximated by the corresponding dynamics on the first-order Taylor expansion of $f_N$ around its initialization $\theta^0$ : $f_{N , \mathrm{lin}}(x ; \theta) = f_N(x ; \theta^0) + \langle \nabla_\theta f_N(x ; \theta^0) , \theta - \theta^0 \rangle$ . Thus , in the large-width limit it suffices to study the dynamics on the linearization $f_{N , \mathrm{lin}}$ .
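The tangent kernel induced by this linearization can be computed directly from parameter gradients. Below is a hedged numpy sketch for a tiny one-hidden-layer tanh network with analytic gradients; the architecture, scaling, and function names are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def param_gradient(x, W, a):
    """Gradient of f(x) = a^T tanh(W x) with respect to all parameters
    (W, a), flattened into a single vector."""
    h = np.tanh(W @ x)                  # hidden activations
    g_a = h                             # df/da
    g_W = np.outer(a * (1 - h**2), x)   # df/dW via the chain rule
    return np.concatenate([g_W.ravel(), g_a])

def ntk(X, W, a):
    """Empirical neural tangent kernel at initialization:
    K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>."""
    G = np.stack([param_gradient(x, W, a) for x in X])
    return G @ G.T
```

Because `K` is a Gram matrix of gradient vectors, it is symmetric positive semi-definite by construction, which is what makes the linearized dynamics a kernel least-squares problem.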
When using the squared loss , these dynamics correspond to optimizing a kernel least-squares objective with the neural tangent kernel $K_N(x , x') = \langle \nabla_\theta f_N(x ; \theta^0) , \nabla_\theta f_N(x' ; \theta^0) \rangle$ . 1 Their paper uses “ Ideal World ” for “ Oracle World ” and “ Real World ” for “ Empirical World ” . Over the past few years , researchers have used kernel machines as a tractable model to investigate many neural network phenomena including benign overfitting , i.e. , generalization despite the interpolation of noisy data ( Bartlett et al. , 2020 ; Liang & Rakhlin , 2020 ) , and double descent , i.e. , risk curves that are not classically U-shaped ( Belkin et al. , 2020 ; Liu et al. , 2021 ) . Kernels have also been studied to better understand certain aspects of neural network architectures such as invariance and stability ( Bietti & Mairal , 2017 ; Mei et al. , 2021b ) . Although kernel methods cannot be used to explain some phenomena such as feature learning , they can still be conceptually useful for understanding other neural network properties . 1.1 THREE STAGES OF KERNEL DYNAMICS . Despite much classical work in the study of gradient-descent training of kernel machines ( e.g . Yao et al . ( 2007 ) ; Raskutti et al . ( 2014 ) ) , there has been limited work on understanding the high-dimensional setting , which is the setting of interest in this paper . Although solving the linear dynamics of gradient flow is simple , the statistical analysis of the fitted model requires involved random matrix theory arguments . In our analysis we study the dynamics of the Oracle World , where training is done on the ( usually inaccessible ) population risk , and the Empirical World , where training is done on the empirical risk ( as is done in practice ) .
Associated with the oracle world model $f^{\mathrm{or}}_t$ and the empirical world model $\hat{f}_t$ are the following quantities of interest : the empirical training error $\hat{R}_n(\hat{f}_t)$ , the empirical test error $R(\hat{f}_t)$ , and the oracle error $R(f^{\mathrm{or}}_t)$ defined in Eqs . ( 1 ) , ( 2 ) , ( 3 ) , for which we derive expressions that are accurate in high dimensions . Informally , our main results show that under reasonable conditions on the regression function and the kernel the training dynamics undergo the following three stages : • Stage one : the empirical training error , the empirical test error , and the oracle error are all close . • Stage two : the empirical training error decays to zero , but the empirical test error and the oracle error stay close and remain approximately constant . • Stage three : the empirical training error is still zero , the empirical test error stays approximately constant , but the oracle error decays to the approximation error . We conceptually illustrate the error curves of the oracle and empirical world in Fig . 1 and provide intuition for the evolution of the learned models in Fig . 2 . The existence of the first and third stages is not unexpected : at the beginning of training the model has not fit the dataset enough to distinguish the oracle and empirical world , and at the end of training an expressive enough model with infinite samples will outperform one with finitely many . The most interesting stage is the second one , where the empirical model begins to “ overfit ” the training set while still remaining close to the non-interpolating oracle model in the $L^2$ sense ( see Fig . 2 ) . In Section 2 we discuss some related work . In Section 3 we elaborate on our description of the three stages and give a mathematical characterization for two particular settings in Theorems 1 and 2 . Although the three stages arise fairly generally , we remark that certain stages will vanish if the problem parameters are chosen in a special way ( cf. Remark 1 ) .
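The three-stage picture can be illustrated numerically: run a kernel gradient flow twice, once on a small training set (empirical world) and once on a much larger sample as a cheap proxy for the population risk (oracle world), and compare the error curves. A hedged numpy sketch follows; the RBF kernel, 1-D data, sample sizes, and time grid are all illustrative assumptions, and the closed form uses the eigendecomposition of the Gram matrix:

```python
import numpy as np

def rbf(X, Z, g=1.0):
    return np.exp(-g * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

def flow_errors(X_tr, y_tr, X_te, y_te, ts, g=1.0):
    """Closed-form kernel gradient flow started from 0. With Gram matrix
    K = V diag(lam) V^T, the fitted training values at time t are
    (I - exp(-t K / n)) y, so train/test errors are available at any t."""
    n = len(y_tr)
    K = rbf(X_tr, X_tr, g)
    lam, V = np.linalg.eigh(K)
    b = V.T @ y_tr
    K_te = rbf(X_te, X_tr, g)
    safe_lam = np.where(lam > 1e-10, lam, np.inf)  # skip numerically null directions
    tr, te = [], []
    for t in ts:
        coef = (1.0 - np.exp(-t * lam / n)) / safe_lam * b
        c = V @ coef                               # f_t = sum_i c_i H(., x_i)
        tr.append(np.mean((y_tr - K @ c) ** 2))
        te.append(np.mean((y_te - K_te @ c) ** 2))
    return tr, te

rng = np.random.default_rng(0)
target = lambda X: np.sin(2.0 * X[:, 0])
noise = 0.3
X_emp, X_orc = rng.normal(size=(40, 1)), rng.normal(size=(800, 1))
X_te = rng.normal(size=(400, 1))
y_emp = target(X_emp) + noise * rng.normal(size=40)
y_orc = target(X_orc) + noise * rng.normal(size=800)
y_te = target(X_te) + noise * rng.normal(size=400)

ts = [1e-1, 1e1, 1e3, 1e6, 1e9]
tr_emp, te_emp = flow_errors(X_emp, y_emp, X_te, y_te, ts)
tr_orc, te_orc = flow_errors(X_orc, y_orc, X_te, y_te, ts)
```

Qualitatively, one expects the empirical training error to fall far below the empirical test error at large times (the stage-two gap), while the large-sample proxy's test error keeps improving toward the noise level.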
We connect our theoretical results to related empirical deep learning phenomena in Remark 3 and discuss the relation to deep learning in practice in Remark 4 . In Section 4 we provide numerical simulations to illustrate the theory more concretely , and in Section 5 we end with a summary and discussion of the results . 2 RELATED LITERATURE . The generalization error of the kernel ridge regression ( KRR ) solution has been well-studied in both the fixed-dimension regime ( Wainwright , 2019 , Chap . 13 ; Caponnetto & De Vito , 2007 ) and the high-dimensional regime ( El Karoui , 2010 ; Liang & Rakhlin , 2020 ; Liu et al. , 2021 ; Ghorbani et al. , 2020 ; 2021 ; Mei et al. , 2021a ; b ) . Most closely related to our results is the setting of ( Ghorbani et al. , 2021 ; Mei et al. , 2021a ; b ) . Analysis of the entire KRR training trajectory has also been done ( Yao et al. , 2007 ; Raskutti et al. , 2014 ; Cao et al. , 2019 ) , but only for the fixed-dimensional setting . Classical non-parametric rates are often obtained by specifying a strong regularity assumption on the target function ( e.g . the source condition in Fischer & Steinwart ( 2020 ) ) , whereas in our work the assumption on the target function is mild . Another line of work directly studies the dynamics of learning in linear neural networks ( Saxe et al. , 2013 ; Li et al. , 2018 ; Arora et al. , 2019 ; Vaskevicius et al. , 2019 ) . Similar to us , these works show that some notion of complexity ( typically effective rank or sparsity ) increases in the linear network over the course of optimization . The relationship between the speed of iterative optimization and the gap between population and empirical quantities has been studied before in the context of algorithmic stability ( Bousquet & Elisseeff , 2002 ; Hardt et al. , 2016 ; Chen et al. , 2018 ) . These analyses certify good empirical generalization by using stability in the first few iterations to upper bound the gap between train and test error .
In contrast , our analysis directly computes the errors at an arbitrary time $t$ ( cf. Remark 2 ) . The relationship between oracle and empirical training dynamics has been considered before in Bottou & LeCun ( 2004 ) and Pillaud-Vivien et al . ( 2018 ) . 3 RESULTS . In this section we introduce the problem and present a specialization of our results to two concrete settings : dot-product and group-invariant kernels on the sphere ( Theorems 1 and 2 respectively ) . The more general version of our results is described in Appendix A.3 . 3.1 PROBLEM SETUP . We consider the supervised learning problem where we are given i.i.d . data $(x_i , y_i)_{i \le n}$ . The covariate vectors $(x_i)_{i \le n} \sim_{iid} \mathrm{Unif}(\mathbb{S}^{d-1}(\sqrt{d}))$ and the real-valued noisy responses $y_i = f_d(x_i) + \varepsilon_i$ for some unknown target function $f_d \in L^2(\mathbb{S}^{d-1}(\sqrt{d}))$ and $(\varepsilon_i)_{i \le n} \sim_{iid} \mathcal{N}(0 , \sigma_\varepsilon^2)$ . Given a function $f \in L^2(\mathbb{S}^{d-1}(\sqrt{d}))$ , we define its test error $R(f)$ and its training error $\hat{R}_n(f)$ as $R(f) \equiv \mathbb{E}_{(x_{\mathrm{new}} , y_{\mathrm{new}})}\{ (y_{\mathrm{new}} - f(x_{\mathrm{new}}))^2 \}$ , $\hat{R}_n(f) \equiv \frac{1}{n} \sum_{i=1}^{n} (y_i - f(x_i))^2$ ( 1 ) where $(x_{\mathrm{new}} , y_{\mathrm{new}})$ is independent of , and identically distributed as , $(x_i , y_i)_{i \le n}$ . The test error $R(f)$ measures the fit of $f$ on the population distribution and the training error $\hat{R}_n(f)$ measures the fit of $f$ to the training set . For a kernel function $H_d : \mathbb{S}^{d-1}(\sqrt{d}) \times \mathbb{S}^{d-1}(\sqrt{d}) \to \mathbb{R}$ , we analyse the dynamics of the following two fitted models indexed by time $t$ : the oracle model $f^{\mathrm{or}}_t$ and the empirical model $\hat{f}_t$ , which are given by the gradient flow on $R$ and $\hat{R}_n$ over the associated RKHS $\mathcal{H}_d$ respectively : $\frac{d}{dt} f^{\mathrm{or}}_t(x) = -\nabla R(f^{\mathrm{or}}_t)(x) = \mathbb{E}[ H_d(x , z)(f_d(z) - f^{\mathrm{or}}_t(z)) ]$ , ( 2 ) $\frac{d}{dt} \hat{f}_t(x) = -\nabla \hat{R}_n(\hat{f}_t)(x) = \frac{1}{n} \sum_{i=1}^{n} H_d(x , x_i)(y_i - \hat{f}_t(x_i))$ , ( 3 ) with zero initialization $f^{\mathrm{or}}_0 \equiv \hat{f}_0 \equiv 0$ . These dynamics are motivated from the neural tangent kernel perspective of over-parameterized neural networks ( Jacot et al. , 2018 ; Du et al. , 2018 ) .
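The empirical-world flow of Eq. (3) can be discretized directly. Writing $\hat{f}_t = \sum_i c_i(t) H(\cdot, x_i)$ turns the flow into an ODE on the coefficients, $\dot{c} = (y - Kc)/n$ with $K_{ij} = H(x_i, x_j)$. A minimal numpy sketch with an explicit Euler step follows; the RBF kernel and the step size are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def empirical_gradient_flow(X, y, steps, lr, gamma=1.0):
    """Euler discretization of Eq. (3). Writing f_hat_t = sum_i c_i H(., x_i),
    the flow becomes dc/dt = (y - K c) / n with K_ij = H(x_i, x_j).
    Returns the final coefficients and the training-error trajectory."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    c = np.zeros(n)
    errs = []
    for _ in range(steps):
        resid = y - K @ c
        errs.append(np.mean(resid ** 2))  # training error R_hat_n(f_hat_t)
        c += lr * resid / n               # Euler step on the coefficient ODE
    return c, errs
```

For a PSD Gram matrix with unit diagonal, the step `lr / n` keeps the residual map a contraction in every eigendirection, so the training error decreases monotonically toward zero, as in stage two.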
A precise mathematical definition and derivation of these two dynamics are provided in Appendix E.1 . For our results we make some assumptions on the spectral properties of the kernels $H_d$ , similar to those in Mei et al . ( 2021a ) , that are discussed in detail in Section A.2 . At a high level , we require that the diagonal elements of the kernel concentrate , that the kernel eigenvalues obey certain spectral gap conditions , and that the top eigenfunctions obey a hypercontractivity condition which says they are “ delocalized ” . For the specific settings of Theorems 1 and 2 we give more specific conditions on the kernels that are more easily verified and imply the required spectral properties . | Studied the evolution of the generalization error of the kernel gradient flow trajectory with respect to the training (empirical world) and population (ideal world) MSE loss. The analysis builds upon [Mei et al. 2021], which relates kernel ridge regression to projection onto low-degree polynomials. The authors showed that the estimator optimizing the empirical risk achieves vanishing training error, but the test error plateaus at a certain value depending on the training set size, whereas the online (population) estimator can learn increasingly complex components of the target function as training proceeds. | SP:1782ce075c1a1fd05fc7b3add1228735a3eb8d17 |
The Three Stages of Learning Dynamics in High-dimensional Kernel Methods | 1 INTRODUCTION . In order to fundamentally understand how and why deep learning works , there has been much effort to understand the dynamics of neural networks trained by gradient-descent-based algorithms . This effort has led to the discovery of many intriguing empirical phenomena ( e.g . Frankle et al . ( 2020 ) ; Fort et al . ( 2020 ) ; Nakkiran et al . ( 2019a ; b ; 2020 ) ) that help shape our conceptual framework for understanding the learning process in neural networks . Nakkiran et al . ( 2019b ) provides evidence that SGD starts by first learning a linear classifier and over time learns increasingly complex functions . Nakkiran et al . ( 2020 ) introduces the “ deep bootstrap ” phenomenon : for some deep learning tasks the empirical world test error remains close to the oracle world error1 for many SGD iterations , even if the empirical training and test errors display a large gap . To better understand such phenomena , it is useful to study training dynamics in related but mathematically tractable settings . One approach for theoretical investigation is to study kernel methods , which were recently shown to have a tight connection with over-parameterized neural networks ( Jacot et al. , 2018 ; Du et al. , 2018 ) . Indeed , consider a sequence of neural networks $(f_N(x ; \theta))_{N \in \mathbb{N}}$ with the widths of the layers going to infinity as $N \to \infty$ . Assuming proper parametrization and initialization , for large $N$ the SGD dynamics on $f_N$ are known to be well approximated by the corresponding dynamics on the first-order Taylor expansion of $f_N$ around its initialization $\theta^0$ : $f_{N , \mathrm{lin}}(x ; \theta) = f_N(x ; \theta^0) + \langle \nabla_\theta f_N(x ; \theta^0) , \theta - \theta^0 \rangle$ . Thus , in the large-width limit it suffices to study the dynamics on the linearization $f_{N , \mathrm{lin}}$ .
When using the squared loss , these dynamics correspond to optimizing a kernel least-squares objective with the neural tangent kernel $K_N(x , x') = \langle \nabla_\theta f_N(x ; \theta^0) , \nabla_\theta f_N(x' ; \theta^0) \rangle$ . 1 Their paper uses “ Ideal World ” for “ Oracle World ” and “ Real World ” for “ Empirical World ” . Over the past few years , researchers have used kernel machines as a tractable model to investigate many neural network phenomena including benign overfitting , i.e. , generalization despite the interpolation of noisy data ( Bartlett et al. , 2020 ; Liang & Rakhlin , 2020 ) , and double descent , i.e. , risk curves that are not classically U-shaped ( Belkin et al. , 2020 ; Liu et al. , 2021 ) . Kernels have also been studied to better understand certain aspects of neural network architectures such as invariance and stability ( Bietti & Mairal , 2017 ; Mei et al. , 2021b ) . Although kernel methods cannot be used to explain some phenomena such as feature learning , they can still be conceptually useful for understanding other neural network properties . 1.1 THREE STAGES OF KERNEL DYNAMICS . Despite much classical work in the study of gradient-descent training of kernel machines ( e.g . Yao et al . ( 2007 ) ; Raskutti et al . ( 2014 ) ) , there has been limited work on understanding the high-dimensional setting , which is the setting of interest in this paper . Although solving the linear dynamics of gradient flow is simple , the statistical analysis of the fitted model requires involved random matrix theory arguments . In our analysis we study the dynamics of the Oracle World , where training is done on the ( usually inaccessible ) population risk , and the Empirical World , where training is done on the empirical risk ( as is done in practice ) .
Associated with the oracle world model $f^{\mathrm{or}}_t$ and the empirical world model $\hat{f}_t$ are the following quantities of interest : the empirical training error $\hat{R}_n(\hat{f}_t)$ , the empirical test error $R(\hat{f}_t)$ , and the oracle error $R(f^{\mathrm{or}}_t)$ defined in Eqs . ( 1 ) , ( 2 ) , ( 3 ) , for which we derive expressions that are accurate in high dimensions . Informally , our main results show that under reasonable conditions on the regression function and the kernel the training dynamics undergo the following three stages : • Stage one : the empirical training error , the empirical test error , and the oracle error are all close . • Stage two : the empirical training error decays to zero , but the empirical test error and the oracle error stay close and remain approximately constant . • Stage three : the empirical training error is still zero , the empirical test error stays approximately constant , but the oracle error decays to the approximation error . We conceptually illustrate the error curves of the oracle and empirical world in Fig . 1 and provide intuition for the evolution of the learned models in Fig . 2 . The existence of the first and third stages is not unexpected : at the beginning of training the model has not fit the dataset enough to distinguish the oracle and empirical world , and at the end of training an expressive enough model with infinite samples will outperform one with finitely many . The most interesting stage is the second one , where the empirical model begins to “ overfit ” the training set while still remaining close to the non-interpolating oracle model in the $L^2$ sense ( see Fig . 2 ) . In Section 2 we discuss some related work . In Section 3 we elaborate on our description of the three stages and give a mathematical characterization for two particular settings in Theorems 1 and 2 . Although the three stages arise fairly generally , we remark that certain stages will vanish if the problem parameters are chosen in a special way ( cf. Remark 1 ) .
We connect our theoretical results to related empirical deep learning phenomena in Remark 3 and discuss the relation to deep learning in practice in Remark 4 . In Section 4 we provide numerical simulations to illustrate the theory more concretely , and in Section 5 we end with a summary and discussion of the results . 2 RELATED LITERATURE . The generalization error of the kernel ridge regression ( KRR ) solution has been well-studied in both the fixed-dimension regime ( Wainwright , 2019 , Chap . 13 ; Caponnetto & De Vito , 2007 ) and the high-dimensional regime ( El Karoui , 2010 ; Liang & Rakhlin , 2020 ; Liu et al. , 2021 ; Ghorbani et al. , 2020 ; 2021 ; Mei et al. , 2021a ; b ) . Most closely related to our results is the setting of ( Ghorbani et al. , 2021 ; Mei et al. , 2021a ; b ) . Analysis of the entire KRR training trajectory has also been done ( Yao et al. , 2007 ; Raskutti et al. , 2014 ; Cao et al. , 2019 ) , but only for the fixed-dimensional setting . Classical non-parametric rates are often obtained by specifying a strong regularity assumption on the target function ( e.g . the source condition in Fischer & Steinwart ( 2020 ) ) , whereas in our work the assumption on the target function is mild . Another line of work directly studies the dynamics of learning in linear neural networks ( Saxe et al. , 2013 ; Li et al. , 2018 ; Arora et al. , 2019 ; Vaskevicius et al. , 2019 ) . Similar to us , these works show that some notion of complexity ( typically effective rank or sparsity ) increases in the linear network over the course of optimization . The relationship between the speed of iterative optimization and the gap between population and empirical quantities has been studied before in the context of algorithmic stability ( Bousquet & Elisseeff , 2002 ; Hardt et al. , 2016 ; Chen et al. , 2018 ) . These analyses certify good empirical generalization by using stability in the first few iterations to upper bound the gap between train and test error .
In contrast , our analysis directly computes the errors at an arbitrary time $t$ ( cf. Remark 2 ) . The relationship between oracle and empirical training dynamics has been considered before in Bottou & LeCun ( 2004 ) and Pillaud-Vivien et al . ( 2018 ) . 3 RESULTS . In this section we introduce the problem and present a specialization of our results to two concrete settings : dot-product and group-invariant kernels on the sphere ( Theorems 1 and 2 respectively ) . The more general version of our results is described in Appendix A.3 . 3.1 PROBLEM SETUP . We consider the supervised learning problem where we are given i.i.d . data $(x_i , y_i)_{i \le n}$ . The covariate vectors $(x_i)_{i \le n} \sim_{iid} \mathrm{Unif}(\mathbb{S}^{d-1}(\sqrt{d}))$ and the real-valued noisy responses $y_i = f_d(x_i) + \varepsilon_i$ for some unknown target function $f_d \in L^2(\mathbb{S}^{d-1}(\sqrt{d}))$ and $(\varepsilon_i)_{i \le n} \sim_{iid} \mathcal{N}(0 , \sigma_\varepsilon^2)$ . Given a function $f \in L^2(\mathbb{S}^{d-1}(\sqrt{d}))$ , we define its test error $R(f)$ and its training error $\hat{R}_n(f)$ as $R(f) \equiv \mathbb{E}_{(x_{\mathrm{new}} , y_{\mathrm{new}})}\{ (y_{\mathrm{new}} - f(x_{\mathrm{new}}))^2 \}$ , $\hat{R}_n(f) \equiv \frac{1}{n} \sum_{i=1}^{n} (y_i - f(x_i))^2$ ( 1 ) where $(x_{\mathrm{new}} , y_{\mathrm{new}})$ is independent of , and identically distributed as , $(x_i , y_i)_{i \le n}$ . The test error $R(f)$ measures the fit of $f$ on the population distribution and the training error $\hat{R}_n(f)$ measures the fit of $f$ to the training set . For a kernel function $H_d : \mathbb{S}^{d-1}(\sqrt{d}) \times \mathbb{S}^{d-1}(\sqrt{d}) \to \mathbb{R}$ , we analyse the dynamics of the following two fitted models indexed by time $t$ : the oracle model $f^{\mathrm{or}}_t$ and the empirical model $\hat{f}_t$ , which are given by the gradient flow on $R$ and $\hat{R}_n$ over the associated RKHS $\mathcal{H}_d$ respectively : $\frac{d}{dt} f^{\mathrm{or}}_t(x) = -\nabla R(f^{\mathrm{or}}_t)(x) = \mathbb{E}[ H_d(x , z)(f_d(z) - f^{\mathrm{or}}_t(z)) ]$ , ( 2 ) $\frac{d}{dt} \hat{f}_t(x) = -\nabla \hat{R}_n(\hat{f}_t)(x) = \frac{1}{n} \sum_{i=1}^{n} H_d(x , x_i)(y_i - \hat{f}_t(x_i))$ , ( 3 ) with zero initialization $f^{\mathrm{or}}_0 \equiv \hat{f}_0 \equiv 0$ . These dynamics are motivated from the neural tangent kernel perspective of over-parameterized neural networks ( Jacot et al. , 2018 ; Du et al. , 2018 ) .
A precise mathematical definition and derivation of these two dynamics are provided in Appendix E.1 . For our results we make some assumptions on the spectral properties of the kernels $H_d$ , similar to those in Mei et al . ( 2021a ) , that are discussed in detail in Section A.2 . At a high level , we require that the diagonal elements of the kernel concentrate , that the kernel eigenvalues obey certain spectral gap conditions , and that the top eigenfunctions obey a hypercontractivity condition which says they are “ delocalized ” . For the specific settings of Theorems 1 and 2 we give more specific conditions on the kernels that are more easily verified and imply the required spectral properties . | This paper studies the learning dynamics of gradient flow for kernel ridge regression. The authors contrast two different setups: the "empirical world", where the model is trained on a finite data set, and the "oracle world", where the model is instead trained directly on the population loss, or in other words, by minimising the test error. Nakkiran et al. (2020) recently reported that the errors in both setups stay close to each other throughout training for a range of deep neural networks. By studying the relation between the empirical and oracle worlds in the setup of kernel ridge regression, the authors investigate this phenomenon in a setting under precise theoretical control. The authors perform such a study by leveraging the analysis of Ghorbani et al. (2021) and Mei et al. (2021), who gave a detailed and precise analysis of the implicit bias of learning in random feature models, and in particular the learning of approximations of increasing complexity. Contrasting the learning in the two worlds, the authors find that learning generally, but not always, proceeds in three stages .
In the case of polynomial kernels, models in both the empirical and the oracle worlds first learn the leading $\ell$-components of the target function, defined as those projections of the target function onto the polynomials of small degree. After the training error of the empirical model reaches zero, a second phase ensues where the test errors of both models remain close, but the gap between training and test error in the empirical world can be large. Finally, either there is enough training data to learn the target function perfectly, or learning enters a third stage where the oracle model learns the target function perfectly, which the empirical model won't achieve. The authors confirm this picture also for invariant kernels, say translation-invariant ones, using technical tools from Mei et al. (2021). Quantitatively, the presence of an invariance shows up in the training speed, which is slower than for the product kernel (Remark 2). Numerical experiments for kernel least-squares regression confirm the theory, while the authors also report experiments with SGD for random feature regression. | SP:1782ce075c1a1fd05fc7b3add1228735a3eb8d17 |
The Three Stages of Learning Dynamics in High-dimensional Kernel Methods | 1 INTRODUCTION . In order to fundamentally understand how and why deep learning works , there has been much effort to understand the dynamics of neural networks trained by gradient-descent-based algorithms . This effort has led to the discovery of many intriguing empirical phenomena ( e.g . Frankle et al . ( 2020 ) ; Fort et al . ( 2020 ) ; Nakkiran et al . ( 2019a ; b ; 2020 ) ) that help shape our conceptual framework for understanding the learning process in neural networks . Nakkiran et al . ( 2019b ) provides evidence that SGD starts by first learning a linear classifier and over time learns increasingly complex functions . Nakkiran et al . ( 2020 ) introduces the “ deep bootstrap ” phenomenon : for some deep learning tasks the empirical world test error remains close to the oracle world error1 for many SGD iterations , even if the empirical training and test errors display a large gap . To better understand such phenomena , it is useful to study training dynamics in related but mathematically tractable settings . One approach for theoretical investigation is to study kernel methods , which were recently shown to have a tight connection with over-parameterized neural networks ( Jacot et al. , 2018 ; Du et al. , 2018 ) . Indeed , consider a sequence of neural networks $(f_N(x ; \theta))_{N \in \mathbb{N}}$ with the widths of the layers going to infinity as $N \to \infty$ . Assuming proper parametrization and initialization , for large $N$ the SGD dynamics on $f_N$ are known to be well approximated by the corresponding dynamics on the first-order Taylor expansion of $f_N$ around its initialization $\theta^0$ : $f_{N , \mathrm{lin}}(x ; \theta) = f_N(x ; \theta^0) + \langle \nabla_\theta f_N(x ; \theta^0) , \theta - \theta^0 \rangle$ . Thus , in the large-width limit it suffices to study the dynamics on the linearization $f_{N , \mathrm{lin}}$ .
When using the squared loss, these dynamics correspond to optimizing a kernel least-squares objective with the neural tangent kernel $K_N(x, x') = \langle \nabla_\theta f_N(x;\theta^0), \nabla_\theta f_N(x';\theta^0) \rangle$. 1Their paper uses "Ideal World" for "Oracle World" and "Real World" for "Empirical World". Over the past few years, researchers have used kernel machines as a tractable model to investigate many neural network phenomena, including benign overfitting, i.e., generalization despite the interpolation of noisy data (Bartlett et al., 2020; Liang & Rakhlin, 2020), and double descent, i.e., risk curves that are not classically U-shaped (Belkin et al., 2020; Liu et al., 2021). Kernels have also been studied to better understand certain aspects of neural network architectures such as invariance and stability (Bietti & Mairal, 2017; Mei et al., 2021b). Although kernel methods cannot be used to explain some phenomena such as feature learning, they can still be conceptually useful for understanding other neural network properties. 1.1 THREE STAGES OF KERNEL DYNAMICS. Despite much classical work on gradient descent training of kernel machines (e.g. Yao et al. (2007); Raskutti et al. (2014)), there has been limited work on the high-dimensional setting, which is the setting of interest in this paper. Although solving the linear dynamics of gradient flow is simple, the statistical analysis of the fitted model requires involved random matrix theory arguments. In our analysis we study the dynamics of the Oracle World, where training is done on the (usually inaccessible) population risk, and the Empirical World, where training is done on the empirical risk (as is done in practice).
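As a concrete illustration (not from the paper), the tangent kernel of a small network can be computed explicitly by taking parameter gradients at initialization. The two-layer architecture and $1/\sqrt{\text{width}}$ scaling below are assumptions made for the sketch:

```python
import numpy as np

def init_params(d, width, rng):
    # Two-layer network f(x) = v^T tanh(W x) / sqrt(width).
    return rng.normal(size=(width, d)), rng.normal(size=width)

def grad_f(x, W, v):
    # Gradient of the scalar output w.r.t. all parameters (W and v), flattened.
    width = v.shape[0]
    h = W @ x                                            # pre-activations
    a = np.tanh(h)
    dv = a / np.sqrt(width)                              # d f / d v
    dW = np.outer(v * (1 - a ** 2), x) / np.sqrt(width)  # d f / d W
    return np.concatenate([dW.ravel(), dv])

def ntk(x1, x2, W, v):
    # K(x, x') = <grad_theta f(x; theta0), grad_theta f(x'; theta0)>.
    return grad_f(x1, W, v) @ grad_f(x2, W, v)

rng = np.random.default_rng(0)
W, v = init_params(d=5, width=1000, rng=rng)
x = rng.normal(size=5)
K_xx = ntk(x, x, W, v)  # a diagonal kernel entry, strictly positive
```

The kernel is symmetric and positive semi-definite by construction, since each entry is an inner product of gradient vectors.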
Associated with the oracle-world model $f^{\mathrm{or}}_t$ and the empirical-world model $\hat f_t$ are the following quantities of interest: the empirical training error $\hat R_n(\hat f_t)$, the empirical test error $R(\hat f_t)$, and the oracle error $R(f^{\mathrm{or}}_t)$, defined in Eqs. (1), (2), (3), for which we derive expressions that are accurate in high dimensions. Informally, our main results show that under reasonable conditions on the regression function and the kernel, the training dynamics undergo the following three stages:
• Stage one: the empirical training error, the empirical test error, and the oracle error are all close.
• Stage two: the empirical training error decays to zero, but the empirical test error and the oracle error stay close and remain approximately constant.
• Stage three: the empirical training error is still zero, the empirical test error stays approximately constant, but the oracle test error decays to the approximation error.
We conceptually illustrate the error curves of the oracle and empirical worlds in Fig. 1 and provide intuition for the evolution of the learned models in Fig. 2. The existence of the first and third stages is not unexpected: at the beginning of training the model has not fit the dataset enough to distinguish the oracle and empirical worlds, and at the end of training an expressive enough model with infinite samples will outperform one with finitely many. The most interesting stage is the second one, where the empirical model begins to "overfit" the training set while still remaining close to the non-interpolating oracle model in the $L^2$ sense (see Fig. 2). In Section 2 we discuss some related work. In Section 3 we elaborate our description of the three stages and give a mathematical characterization for two particular settings in Theorems 1 and 2. Although the three stages arise fairly generally, we remark that certain stages will vanish if the problem parameters are chosen in a special way (cf. Remark 1).
We connect our theoretical results to related empirical deep learning phenomena in Remark 3 and discuss the relation to deep learning in practice in Remark 4. In Section 4 we provide numerical simulations to illustrate the theory more concretely, and in Section 5 we end with a summary and discussion of the results. 2 RELATED LITERATURE. The generalization error of the kernel ridge regression (KRR) solution has been well studied in both the fixed-dimension regime (Wainwright, 2019, Chap. 13; Caponnetto & De Vito, 2007) and the high-dimensional regime (El Karoui, 2010; Liang & Rakhlin, 2020; Liu et al., 2021; Ghorbani et al., 2020; 2021; Mei et al., 2021a;b). Most closely related to our results is the setting of Ghorbani et al. (2021); Mei et al. (2021a;b). Analysis of the entire KRR training trajectory has also been done (Yao et al., 2007; Raskutti et al., 2014; Cao et al., 2019), but only for the fixed-dimensional setting. Classical non-parametric rates are often obtained by imposing a strong regularity assumption on the target function (e.g. the source condition in Fischer & Steinwart (2020)), whereas in our work the assumption on the target function is mild. Another line of work directly studies the dynamics of learning in linear neural networks (Saxe et al., 2013; Li et al., 2018; Arora et al., 2019; Vaskevicius et al., 2019). Similar to us, these works show that some notion of complexity (typically effective rank or sparsity) increases in the linear network over the course of optimization. The relationship between the speed of iterative optimization and the gap between population and empirical quantities has been studied before in the context of algorithmic stability (Bousquet & Elisseeff, 2002; Hardt et al., 2016; Chen et al., 2018). These analyses certify good empirical generalization by using stability in the first few iterations to upper bound the gap between train and test error.
In contrast, our analysis directly computes the errors at an arbitrary time $t$ (cf. Remark 2). The relationship between oracle and empirical training dynamics has been considered before in Bottou & LeCun (2004) and Pillaud-Vivien et al. (2018). 3 RESULTS. In this section we introduce the problem and present a specialization of our results to two concrete settings: dot-product and group-invariant kernels on the sphere (Theorems 1 and 2 respectively). The more general version of our results is described in Appendix A.3. 3.1 PROBLEM SETUP. We consider the supervised learning problem where we are given i.i.d. data $(x_i, y_i)_{i \le n}$. The covariate vectors $(x_i)_{i \le n} \sim_{iid} \mathrm{Unif}(\mathbb{S}^{d-1}(\sqrt{d}))$ and the real-valued noisy responses $y_i = f_d(x_i) + \varepsilon_i$ for some unknown target function $f_d \in L^2(\mathbb{S}^{d-1}(\sqrt{d}))$ and $(\varepsilon_i)_{i \le n} \sim_{iid} \mathcal{N}(0, \sigma_\varepsilon^2)$. Given a function $f \in L^2(\mathbb{S}^{d-1}(\sqrt{d}))$, we define its test error $R(f)$ and its training error $\hat R_n(f)$ as $R(f) \equiv \mathbb{E}_{(x_{new}, y_{new})}\{(y_{new} - f(x_{new}))^2\}$, $\hat R_n(f) \equiv \frac{1}{n} \sum_{i=1}^{n} (y_i - f(x_i))^2$ (1), where $(x_{new}, y_{new})$ is distributed i.i.d. with $(x_i, y_i)_{i \le n}$. The test error $R(f)$ measures the fit of $f$ on the population distribution and the training error $\hat R_n(f)$ measures the fit of $f$ to the training set. For a kernel function $H_d : \mathbb{S}^{d-1}(\sqrt{d}) \times \mathbb{S}^{d-1}(\sqrt{d}) \to \mathbb{R}$, we analyse the dynamics of the following two fitted models indexed by time $t$: the oracle model $f^{\mathrm{or}}_t$ and the empirical model $\hat f_t$, given by gradient flow on $R$ and $\hat R_n$ over the associated RKHS $\mathcal{H}_d$ respectively: $\frac{d}{dt} f^{\mathrm{or}}_t(x) = -\nabla R(f^{\mathrm{or}}_t(x)) = \mathbb{E}[H_d(x, z)(f_d(z) - f^{\mathrm{or}}_t(z))]$ (2), $\frac{d}{dt} \hat f_t(x) = -\nabla \hat R_n(\hat f_t(x)) = \frac{1}{n} \sum_{i=1}^{n} H_d(x, x_i)(y_i - \hat f_t(x_i))$ (3), with zero initialization $f^{\mathrm{or}}_0 \equiv \hat f_0 \equiv 0$. These dynamics are motivated by the neural tangent kernel perspective on over-parameterized neural networks (Jacot et al., 2018; Du et al., 2018).
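The empirical-world flow of Eq. (3) can be simulated directly: writing $\hat f_t(x) = \sum_i \alpha_i H(x, x_i)$, the coefficients evolve as $\dot\alpha = (y - H\alpha)/n$. Below is a minimal Euler-discretized sketch under an assumed polynomial dot-product kernel; the target function, dimensions, and step size are illustrative choices, not the authors' experimental setup:

```python
import numpy as np

def kernel(X1, X2):
    # An assumed dot-product kernel; the paper's H_d is more general.
    return (1.0 + X1 @ X2.T / X1.shape[1]) ** 2

def empirical_flow(Xtr, ytr, Xte, yte, t_max=500.0, dt=0.05):
    """Euler discretization of Eq. (3) in coefficient space:
    d alpha / dt = (y - H alpha) / n."""
    n = len(ytr)
    H = kernel(Xtr, Xtr)
    Hte = kernel(Xte, Xtr)
    alpha = np.zeros(n)
    train_err, test_err = [], []
    for _ in range(int(t_max / dt)):
        resid = ytr - H @ alpha
        train_err.append(np.mean(resid ** 2))
        test_err.append(np.mean((yte - Hte @ alpha) ** 2))
        alpha += dt * resid / n
    return train_err, test_err

rng = np.random.default_rng(0)
d, n = 10, 80
Xtr, Xte = rng.normal(size=(n, d)), rng.normal(size=(400, d))
target = lambda X: X[:, 0] * X[:, 1]          # low-degree target function
ytr = target(Xtr) + 0.1 * rng.normal(size=n)  # noisy responses
yte = target(Xte)
tr, te = empirical_flow(Xtr, ytr, Xte, yte)
# The training error decays steadily; a train-test gap opens as t grows.
```

Tracking the oracle flow of Eq. (2) would additionally require (approximating) the population expectation, e.g. by running the same recursion on a much larger sample.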
A precise mathematical definition and derivation of these two dynamics are provided in Appendix E.1. For our results we make some assumptions on the spectral properties of the kernels $H_d$, similar to those in Mei et al. (2021a), that are discussed in detail in Section A.2. At a high level we require that the diagonal elements of the kernel concentrate, that the kernel eigenvalues obey certain spectral gap conditions, and that the top eigenfunctions obey a hypercontractivity condition which says they are "delocalized". For the specific settings of Theorems 1 and 2 we give conditions on the kernels that are more easily verified and imply the required spectral properties. | The paper studies the training/learning dynamics of (inner product) kernel gradient descent in the high-dimensional setting. The main contribution is proving a three-stage learning dynamics which were observed in neural networks in existing work. (1) Stage 1: train loss ~ test loss ~ oracle test loss (training set == whole (input, label) distribution) (2) Stage 2: test loss ~ oracle loss; train loss ->0; (3) Stage 3: test loss unchanged; oracle loss decays; I think this is a nice application of the results developed in [1], showing that many seemingly surprising deep learning (DL) phenomena are indeed not unique in DL -- most of them are provable / observable in the kernel setting. [1]Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration. | SP:1782ce075c1a1fd05fc7b3add1228735a3eb8d17 |
Divide and Explore: Multi-Agent Separate Exploration with Shared Intrinsic Motivations | 1 INTRODUCTION. Deep reinforcement learning is a very important learning framework for training intelligent agents to solve complex tasks (Sutton & Barto, 2018). Despite recent success stories in a variety of real tasks (Silver et al., 2016; Vinyals et al., 2019; Lee et al., 2020), the difficulty of exploration remains a major challenge for DRL when the reward is sparse or deceptive (Burda et al., 2018). Recent research in exploration has made significant progress thanks to the idea of intrinsic motivation (Schmidhuber, 2010). Intrinsic motivation can be viewed as an objective for unsupervised reinforcement learning (Levine, 2020), and its goal is usually not to guide the agent to complete a specific task, but to help the agent discover interesting states and trajectories that could help it understand the environment and find possible target solutions for specific tasks. Many previous works on intrinsic motivation (Raileanu & Rocktäschel, 2020; Burda et al., 2018; 2019) can be understood as proposing some kind of metric to measure the novelty of states, and then using the metric as an intrinsic reward to encourage agents to reach novel states, such as count-based exploration (Bellemare et al., 2016) and ICM (Pathak et al., 2017). Despite the significant progress that has been made, in theory, efficient exploration is still very challenging if not impossible, due to the infeasible size of the state space to be explored. Similar difficulties are common in state-space search problems in theoretical computer science. For example, in SMT or SAT problems (De Moura & Bjørner, 2008), one is required to search for a feasible solution satisfying certain constraints, and these problems are usually NP-complete because of the exponential size of the state space.
A very common class of practical algorithms for solving these search problems is divide and conquer. Such an algorithm first divides the state space into pieces and then searches for results in each piece. A significant benefit of this approach is that it can easily be modified to take advantage of modern distributed/concurrent computing hardware, because it naturally divides a hard problem into smaller and simpler ones, allowing different computing nodes to complete their tasks asynchronously. Divide-and-conquer algorithms are among the state-of-the-art algorithms for solving SMT and SAT problems. In this paper, we take inspiration from divide-and-conquer algorithms in theoretical computer science and adapt them to exploration problems in RL. Unlike traditional exploration mechanisms with a single exploring agent, we train a number of concurrent exploring agents. We carefully design our intrinsic reward such that each exploring agent is encouraged to explore only one region of the state space. In other words, the state space is automatically divided into several components, where each agent is responsible for exploring one of these components. In order to achieve this, we first choose a reward function based on intrinsic motivation, e.g. a count-based reward or ICM. This function outputs a novelty score for a given transition. We initialize K agents, each equipped with its own version of the reward function. Then we design a multi-party intrinsic reward for each agent consisting of two parts: (1) a reward that discourages the agent from visiting states that have already been explored by other agents, and (2) a self-exploration reward that encourages the agent to visit novel states it has not visited itself. Agents run in a fully asynchronous manner; no agent has to wait for any other agent's progress to take exploration steps.
All agents communicate with each other through shared memory, under the abstraction of a simple read-write shared object, which can be made lock-free, i.e., the learning system can always progress even if some computing nodes are delayed or fail. These properties make our algorithm highly scalable. Our experiments show that the proposed method is highly efficient and achieves state-of-the-art results on many RL tasks such as MiniGrid and Vizdoom. 2 RELATED WORK. Intrinsic Motivation The study of intrinsic motivation has been of interest for some time (Schmidhuber, 2010). Earlier works on intrinsic motivation tended to analyse it through psychology, combined with reinforcement learning in the tabular setting (Deci et al., 1981; Singh et al., 2005). With the breakthroughs of deep reinforcement learning, there has been a surge of interest in applying intrinsic motivation with deep neural networks to hard-exploration RL problems, in which count-based exploration and curiosity-driven exploration are two main promising research branches. In count-based methods, the agent is encouraged to explore rarely visited states, while in curiosity-driven methods, the agent is encouraged to explore via learning a world model. Count (Bellemare et al., 2016) extends count-based methods to the non-tabular setting by deriving pseudo-counts with a density model. Ostrovski et al. (2017) adapts Bellemare et al. (2016) with PixelCNN. RND (Burda et al., 2018) uses random network distillation to measure the familiarity of states, and RGU (Badia et al., 2020) combines RND with an episodic novelty module. ICM (Pathak et al., 2017) builds a forward dynamics model and an inverse dynamics model to measure curiosity for exploration. RIDE (Raileanu & Rocktäschel, 2020) uses networks similar to ICM's while using a different intrinsic reward combined with episodic state visitation counts. AMIGo (Campero et al.
, 2020) proposes a meta-learning framework for generating adversarially motivated intrinsic goals in a curriculum to guide agent exploration. RAPID (Zha et al., 2021) ranks past trajectories with episodic scores and uses imitation learning for distillation. AGAC (Flet-Berliac et al., 2021) formulates an actor-critic framework with an adversary. Our work differs from the above by building a multi-agent learning mechanism to accelerate exploration. Distributed Reinforcement Learning The scalability of large-scale distributed learning is a main challenge for deep reinforcement learning. Most previous works scale up deep reinforcement learning with distributed asynchronous SGD (Dean et al., 2012) by distributing RL components. Gorila (Nair et al., 2015) scales DQN (Mnih et al., 2015) with distributed Q-networks and replay buffers. A3C (Mnih et al., 2016) uses multiple workers for training and collects all gradients for synchronous parameter updates. Unlike A3C, IMPALA (Espeholt et al., 2018) transfers all trajectories to multiple central learners for distributed learning. Ape-X (Horgan et al., 2018) consists of distributed replay buffers and a synchronous learner for value-based methods. SEED RL (Espeholt et al., 2019) and OpenAI Rapid (OpenAI, 2018) apply centralized GPU-based inference for acceleration. For most of the above works, the main parallelism exploited by the distributed framework is data parallelism, e.g. they use multiple nodes for collecting data, and divide minibatches among computing nodes and aggregate the gradients computed by each node. Our model, on the other hand, can be viewed as taking advantage of a new parallelism specific to RL problems — state-space parallelism. Instead of dividing minibatches among the nodes, we let the agents divide the state space into pieces, which greatly improves exploration efficiency. 3 BACKGROUND. 3.1 PROBLEM FORMULATION.
In this paper, we consider an RL problem formulated as a Markov Decision Process (MDP), defined as a tuple $\{S, A, P, R, \gamma\}$, where $S$ is the state space, $A$ is the action space of the agent, $P$ gives the transition probabilities of the stochastic transition function $S \times A \to S$, $R(s, a)$ is the reward the agent receives after executing an action, and $\gamma$ is the discount factor. The goal is to maximize the expected accumulated reward $R = \mathbb{E}(\sum_{t=0}^{T} \gamma^t r(s_t, a_t))$ the agent receives when executing a series of optimal actions in episodes with policy $\pi$. In our method, besides the reward $r_t$ received from the environment at timestep $t$, our agents also receive an intrinsic reward $R_t$ as guidance for efficient exploration. The intrinsic reward is a parameterized function of state and action, $R_t(s, a) = f_\theta(s, a)$, where the parameters $\theta = \theta(M)$ depend on the agent's history or replay buffer $M$. For example, in a count-based reward, $\theta$ is the number of visits to each state. Instead of a single exploring agent, our proposed method trains $K$ independent exploring agents. The exploring agents do not communicate with or affect each other directly; however, they indirectly affect each other through the intrinsic rewards. Each agent uses its own history or replay memory to train its own version of $f_\theta$ (the intrinsic motivation model). For any agent, the total intrinsic reward used for agent training is formed by combining the trained reward functions of all the agents. 3.2 CONCURRENT SHARED OBJECTS. A commonly used primitive in concurrent or distributed computing is the concurrent shared object. It is basically the user interface of a concurrent data structure. A concurrent shared object is composed of several methods which all processes can call. The behavior of a shared object is defined by its sequential execution traces. For example, a concurrent Set object has three methods: add(v), find(v) and remove(v).
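The count-based example in the text, where $\theta$ is the per-state visit count, can be sketched in a few lines. The $1/\sqrt{N(s)}$ bonus shape is the standard count-based choice (e.g. Bellemare et al., 2016), assumed here for illustration:

```python
from collections import defaultdict
import math

class CountBasedReward:
    """Tabular count-based intrinsic reward: r(s) = 1 / sqrt(N(s)).

    The parameters theta are simply the visit counts N(s), so the bonus
    for a frequently visited state decays toward zero.
    """
    def __init__(self):
        self.counts = defaultdict(int)

    def __call__(self, state):
        self.counts[state] += 1
        return 1.0 / math.sqrt(self.counts[state])

r = CountBasedReward()
first = r("s0")   # first visit: full bonus of 1.0
second = r("s0")  # revisit: bonus shrinks to 1/sqrt(2)
```

Non-tabular settings replace the exact count with a pseudo-count from a density model, but the reward shape is the same.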
The behavior of this concurrent object is defined by the sequential executions of an ordinary (non-concurrent) Set data type. Also, a property called "linearizability" is applied to guarantee consistent usage of the concurrent object. Linearizability basically says that for any concurrent execution trace of the concurrent object, one can find a sequential execution trace which yields exactly the same results and preserves the real-time ordering of the concurrent execution trace. In other words, all the methods of the concurrent object can be viewed as executing atomically. In this paper, we assume all concurrent objects are linearizable. 4 DIVIDE AND EXPLORE. We design a learning mechanism named Divide-and-Explore (D&E) to realize the idea mentioned above. An overview of D&E is illustrated in Fig. 1. The learning framework consists of $n$ independent agents $\{A_1, A_2, \ldots, A_n\}$, and each agent tries to explore a particular region of the state space. We let the replay memory $M^i$ denote the set of all the past experiences of the $i$-th agent $A_i$, in the form $(s^i_t, a^i_t, r^i_t, s^i_{t+1})$. Let $M = (M^1, \cdots, M^n)$ be the collection of replay memories of all agents. Note that in practice we usually do not need to store all past experiences in the replay memory; some sufficient statistics of the experiences are enough. Our algorithm first specifies a learnable intrinsic motivation model $f_\theta$ as a reward function, where $\theta$ denotes the model's parameters. For each agent $A_i$, we keep track of a distinct model $f^i = f_{\theta^i}$, where $\theta^i = \theta(M^i)$ is a function of the replay memory of agent $A_i$.
When agent $A_i$ takes a transition $(s^i_t, a^i_t, r^i_t, s^i_{t+1})$, for every agent $A_j$ our training architecture computes an intrinsic reward $\hat r^{i,j}_t$ based on the transition and agent $A_j$'s model $f^j$, as follows: $\hat r^{i,j}_t = f^j(s^i_t, a^i_t, r^i_t, s^i_{t+1})$ (1). Intuitively, $\hat r^{i,j}_t$ measures the transition's novelty for agent $A_j$. If $\hat r^{i,j}_t$ is large, it means that the state $s^i_{t+1}$ is rarely visited and not explored fully by agent $A_j$. The intrinsic motivation model $f_\theta$ can be any kind of count-based, curiosity-driven or memory-based exploration method (e.g. Pathak et al. (2017); Burda et al. (2018)), according to the specified task. | This paper proposes a distributed algorithm for exploration in reinforcement learning, based off of the principle of "divide and conquer". This is implemented as a multi-agent system where each agent is run on its own node in parallel and given a custom intrinsic reward to motivate exploration and diversity. These intrinsic rewards are built out of standard intrinsic reward methods for exploration, but are modified so each agent not only gets reward from its own intrinsic motivation function but also gets reward by how its actions score with respect to the other agents' intrinsic motivation functions. This motivates each agent to specialize to find the strategies the other agents' intrinsic motivation functions would see as novel. | SP:2224d8cc210172758549f98c313a974a8fdce24e |
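The text describes combining self-novelty with a penalty for states other agents have already explored, but this excerpt gives only Eq. (1), not the combination rule. One plausible instantiation is sketched below; the subtraction form and the trade-off weight `beta` are assumptions, not the paper's formula:

```python
def multi_party_reward(i, transition, reward_models, beta=0.5):
    """Total intrinsic reward for agent i (a sketch).

    reward_models[j] plays the role of f^j: it scores how novel the
    transition is to agent j, so r^{i,j} = reward_models[j](transition).
    Agent i is rewarded for its own novelty (r^{i,i}) and penalized when
    the state is already familiar to other agents (small r^{i,j}, j != i).
    """
    r_self = reward_models[i](transition)
    familiarity = sum(1.0 - reward_models[j](transition)
                      for j in range(len(reward_models)) if j != i)
    return r_self - beta * familiarity

# Hypothetical novelty scores: agent 0 finds the state novel (0.9),
# agent 1 has explored it heavily (0.1), agent 2 partially (0.5).
models = [lambda t: 0.9, lambda t: 0.1, lambda t: 0.5]
total = multi_party_reward(0, ("s", "a", 0.0, "s_next"), models)
```

With these scores, agent 0 still earns a positive total reward for the transition, while agent 1 (for whom the state is familiar to everyone else) would earn a negative one, pushing the agents toward disjoint regions.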
Divide and Explore: Multi-Agent Separate Exploration with Shared Intrinsic Motivations | 1 INTRODUCTION. Deep reinforcement learning is a very important learning framework for training intelligent agents to solve complex tasks (Sutton & Barto, 2018). Despite recent success stories in a variety of real tasks (Silver et al., 2016; Vinyals et al., 2019; Lee et al., 2020), the difficulty of exploration remains a major challenge for DRL when the reward is sparse or deceptive (Burda et al., 2018). Recent research in exploration has made significant progress thanks to the idea of intrinsic motivation (Schmidhuber, 2010). Intrinsic motivation can be viewed as an objective for unsupervised reinforcement learning (Levine, 2020), and its goal is usually not to guide the agent to complete a specific task, but to help the agent discover interesting states and trajectories that could help it understand the environment and find possible target solutions for specific tasks. Many previous works on intrinsic motivation (Raileanu & Rocktäschel, 2020; Burda et al., 2018; 2019) can be understood as proposing some kind of metric to measure the novelty of states, and then using the metric as an intrinsic reward to encourage agents to reach novel states, such as count-based exploration (Bellemare et al., 2016) and ICM (Pathak et al., 2017). Despite the significant progress that has been made, in theory, efficient exploration is still very challenging if not impossible, due to the infeasible size of the state space to be explored. Similar difficulties are common in state-space search problems in theoretical computer science. For example, in SMT or SAT problems (De Moura & Bjørner, 2008), one is required to search for a feasible solution satisfying certain constraints, and these problems are usually NP-complete because of the exponential size of the state space.
A very common class of practical algorithms for solving these search problems is divide and conquer. Such an algorithm first divides the state space into pieces and then searches for results in each piece. A significant benefit of this approach is that it can easily be modified to take advantage of modern distributed/concurrent computing hardware, because it naturally divides a hard problem into smaller and simpler ones, allowing different computing nodes to complete their tasks asynchronously. Divide-and-conquer algorithms are among the state-of-the-art algorithms for solving SMT and SAT problems. In this paper, we take inspiration from divide-and-conquer algorithms in theoretical computer science and adapt them to exploration problems in RL. Unlike traditional exploration mechanisms with a single exploring agent, we train a number of concurrent exploring agents. We carefully design our intrinsic reward such that each exploring agent is encouraged to explore only one region of the state space. In other words, the state space is automatically divided into several components, where each agent is responsible for exploring one of these components. In order to achieve this, we first choose a reward function based on intrinsic motivation, e.g. a count-based reward or ICM. This function outputs a novelty score for a given transition. We initialize K agents, each equipped with its own version of the reward function. Then we design a multi-party intrinsic reward for each agent consisting of two parts: (1) a reward that discourages the agent from visiting states that have already been explored by other agents, and (2) a self-exploration reward that encourages the agent to visit novel states it has not visited itself. Agents run in a fully asynchronous manner; no agent has to wait for any other agent's progress to take exploration steps.
All agents communicate with each other through shared memory, under the abstraction of a simple read-write shared object, which can be made lock-free, i.e., the learning system can always progress even if some computing nodes are delayed or fail. These properties make our algorithm highly scalable. Our experiments show that the proposed method is highly efficient and achieves state-of-the-art results on many RL tasks such as MiniGrid and Vizdoom. 2 RELATED WORK. Intrinsic Motivation The study of intrinsic motivation has been of interest for some time (Schmidhuber, 2010). Earlier works on intrinsic motivation tended to analyse it through psychology, combined with reinforcement learning in the tabular setting (Deci et al., 1981; Singh et al., 2005). With the breakthroughs of deep reinforcement learning, there has been a surge of interest in applying intrinsic motivation with deep neural networks to hard-exploration RL problems, in which count-based exploration and curiosity-driven exploration are two main promising research branches. In count-based methods, the agent is encouraged to explore rarely visited states, while in curiosity-driven methods, the agent is encouraged to explore via learning a world model. Count (Bellemare et al., 2016) extends count-based methods to the non-tabular setting by deriving pseudo-counts with a density model. Ostrovski et al. (2017) adapts Bellemare et al. (2016) with PixelCNN. RND (Burda et al., 2018) uses random network distillation to measure the familiarity of states, and RGU (Badia et al., 2020) combines RND with an episodic novelty module. ICM (Pathak et al., 2017) builds a forward dynamics model and an inverse dynamics model to measure curiosity for exploration. RIDE (Raileanu & Rocktäschel, 2020) uses networks similar to ICM's while using a different intrinsic reward combined with episodic state visitation counts. AMIGo (Campero et al.
, 2020) proposes a meta-learning framework for generating adversarially motivated intrinsic goals in a curriculum to guide agent exploration. RAPID (Zha et al., 2021) ranks past trajectories with episodic scores and uses imitation learning for distillation. AGAC (Flet-Berliac et al., 2021) formulates an actor-critic framework with an adversary. Our work differs from the above by building a multi-agent learning mechanism to accelerate exploration. Distributed Reinforcement Learning The scalability of large-scale distributed learning is a main challenge for deep reinforcement learning. Most previous works scale up deep reinforcement learning with distributed asynchronous SGD (Dean et al., 2012) by distributing RL components. Gorila (Nair et al., 2015) scales DQN (Mnih et al., 2015) with distributed Q-networks and replay buffers. A3C (Mnih et al., 2016) uses multiple workers for training and collects all gradients for synchronous parameter updates. Unlike A3C, IMPALA (Espeholt et al., 2018) transfers all trajectories to multiple central learners for distributed learning. Ape-X (Horgan et al., 2018) consists of distributed replay buffers and a synchronous learner for value-based methods. SEED RL (Espeholt et al., 2019) and OpenAI Rapid (OpenAI, 2018) apply centralized GPU-based inference for acceleration. For most of the above works, the main parallelism exploited by the distributed framework is data parallelism, e.g. they use multiple nodes for collecting data, and divide minibatches among computing nodes and aggregate the gradients computed by each node. Our model, on the other hand, can be viewed as taking advantage of a new parallelism specific to RL problems — state-space parallelism. Instead of dividing minibatches among the nodes, we let the agents divide the state space into pieces, which greatly improves exploration efficiency. 3 BACKGROUND. 3.1 PROBLEM FORMULATION.
In this paper, we consider an RL problem formulated as a Markov Decision Process (MDP), defined as a tuple $\{S, A, P, R, \gamma\}$, where $S$ is the state space, $A$ is the action space of the agent, $P$ gives the transition probabilities of the stochastic transition function $S \times A \to S$, $R(s, a)$ is the reward the agent receives after executing an action, and $\gamma$ is the discount factor. The goal is to maximize the expected accumulated reward $R = \mathbb{E}(\sum_{t=0}^{T} \gamma^t r(s_t, a_t))$ the agent receives when executing a series of optimal actions in episodes with policy $\pi$. In our method, besides the reward $r_t$ received from the environment at timestep $t$, our agents also receive an intrinsic reward $R_t$ as guidance for efficient exploration. The intrinsic reward is a parameterized function of state and action, $R_t(s, a) = f_\theta(s, a)$, where the parameters $\theta = \theta(M)$ depend on the agent's history or replay buffer $M$. For example, in a count-based reward, $\theta$ is the number of visits to each state. Instead of a single exploring agent, our proposed method trains $K$ independent exploring agents. The exploring agents do not communicate with or affect each other directly; however, they indirectly affect each other through the intrinsic rewards. Each agent uses its own history or replay memory to train its own version of $f_\theta$ (the intrinsic motivation model). For any agent, the total intrinsic reward used for agent training is formed by combining the trained reward functions of all the agents. 3.2 CONCURRENT SHARED OBJECTS. A commonly used primitive in concurrent or distributed computing is the concurrent shared object. It is basically the user interface of a concurrent data structure. A concurrent shared object is composed of several methods which all processes can call. The behavior of a shared object is defined by its sequential execution traces. For example, a concurrent Set object has three methods: add(v), find(v) and remove(v).
The behavior of this concurrent object is defined by the sequential executions of an ordinary (non-concurrent) Set data type. In addition, a property called "linearizability" is required to guarantee consistent usage of the concurrent object. Linearizability says that for any concurrent execution trace of the concurrent object, one can find a sequential execution trace which yields exactly the same results and preserves the real-time ordering of the concurrent execution trace. In other words, all the methods of the concurrent object can be viewed as executing atomically. In this paper, we assume all concurrent objects are linearizable. 4 DIVIDE AND EXPLORE. We design a learning mechanism named Divide-and-Explore (D&E) to realize the idea mentioned above. An overview of D&E is illustrated in Fig. 1. The learning framework consists of $n$ independent agents $\{A_1, A_2, \ldots, A_n\}$, and each agent tries to explore a particular region of the state space. We let the replay memory $M^i$ denote the set of all past experiences of the $i$-th agent $A_i$, in the form $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$. Let $M = (M^1, \cdots, M^n)$ be the collection of replay memories of all agents. Note that in practice we usually do not need to store all past experiences in the replay memory; some sufficient statistics of the experiences are enough. Our algorithm first specifies a learnable intrinsic motivation model $f_\theta$ as a reward function, where $\theta$ denotes the model's parameters. For each agent $A_i$, we keep track of a distinct model $f^i = f_{\theta^i}$, where $\theta^i = \theta(M^i)$ is a function of the replay memory of agent $A_i$.
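The linearizable Set object described above can be sketched as follows. This is a coarse-grained, lock-based version (each method's linearization point is while holding the lock); it is linearizable but not lock-free, so it illustrates only the correctness property, not the progress guarantee the paper's shared object also targets. All names here are illustrative.

```python
import threading

class ConcurrentSet:
    """A linearizable Set object with the three methods add(v), find(v),
    remove(v). Each method takes effect atomically at its linearization
    point (here, while the lock is held), so every concurrent trace is
    equivalent to some sequential trace of an ordinary Set that preserves
    real-time ordering."""
    def __init__(self):
        self._items = set()
        self._lock = threading.Lock()

    def add(self, v):
        with self._lock:
            if v in self._items:
                return False
            self._items.add(v)
            return True

    def find(self, v):
        with self._lock:
            return v in self._items

    def remove(self, v):
        with self._lock:
            if v not in self._items:
                return False
            self._items.remove(v)
            return True

# Usage: four threads concurrently add disjoint ranges of values.
cs = ConcurrentSet()
workers = [threading.Thread(target=lambda b=b: [cs.add(b + i) for i in range(25)])
           for b in (0, 25, 50, 75)]
for t in workers: t.start()
for t in workers: t.join()
```

A lock-free variant would replace the single lock with atomic per-node operations, so the system keeps making progress even if one thread stalls; the lock-based sketch above trades that guarantee for simplicity.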
When agent $A_i$ takes a transition $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$, then for every agent $A_j$, our training architecture computes an intrinsic reward $\hat{r}_t^{i,j}$ based on the transition and agent $A_j$'s model $f^j$, as follows: $$\hat{r}_t^{i,j} = f^j(s_t^i, a_t^i, r_t^i, s_{t+1}^i). \qquad (1)$$ Intuitively, $\hat{r}_t^{i,j}$ measures the transition's novelty for agent $A_j$. If $\hat{r}_t^{i,j}$ is large, it means that the state $s_{t+1}^i$ is rarely visited and not fully explored by agent $A_j$. The intrinsic motivation model $f_\theta$ can be instantiated with any kind of count-based, curiosity-driven, or memory-based exploration method (e.g., Pathak et al. (2017); Burda et al. (2018)) according to the specified task. | This paper introduces a method for performing multi-agent exploration by supplementing any intrinsic reward method: each agent receives an additional reward that is the sum of the intrinsic rewards of all other agents were they to experience the agent's transition. This additional weighted reward term allows the agent to balance exploring states that other agents have rarely encountered alongside those that the agent itself has rarely encountered. The experiments show that this divide-and-conquer strategy for multi-agent exploration results in state-of-the-art results on MiniGrid and VizDoom. | SP:2224d8cc210172758549f98c313a974a8fdce24e |
Divide and Explore: Multi-Agent Separate Exploration with Shared Intrinsic Motivations | 1 INTRODUCTION. Deep reinforcement learning (DRL) is an important framework for training intelligent agents to solve complex tasks (Sutton & Barto, 2018). Despite recent success stories in a variety of real tasks (Silver et al., 2016; Vinyals et al., 2019; Lee et al., 2020), the difficulty of exploration remains a major challenge for DRL when the reward is sparse or deceptive (Burda et al., 2018). Recent research in exploration has made significant progress thanks to the idea of intrinsic motivation (Schmidhuber, 2010). Intrinsic motivation can be viewed as an objective for unsupervised reinforcement learning (Levine, 2020); its goal is usually not to guide the agent to complete a specific task, but to help the agent discover interesting states and trajectories that help it understand the environment and find possible solutions for specific tasks. Many previous works on intrinsic motivation (Raileanu & Rocktäschel, 2020; Burda et al., 2018; 2019) can be understood as proposing some metric to measure the novelty of states, and then using that metric as an intrinsic reward to encourage agents to reach novel states, such as count-based exploration (Bellemare et al., 2016) and ICM (Pathak et al., 2017). Despite the significant progress that has been made, in theory, efficient exploration is still very challenging, if not impossible, due to the infeasible size of the state space to be explored. Similar difficulties arise in state-space search problems in theoretical computer science. For example, in SMT or SAT problems (De Moura & Bjørner, 2008), one is required to search for a feasible solution satisfying certain constraints, and these problems are usually NP-complete because of the exponential size of the state space.
A very common class of practical algorithms for solving these search problems is divide and conquer. The algorithm first divides the state space into pieces and then searches for results in each piece. A significant benefit of this approach is that it can easily be modified to take advantage of modern distributed/concurrent computing hardware, because it naturally divides a hard problem into smaller and simpler ones, allowing different computing nodes to complete their tasks asynchronously. Divide-and-conquer algorithms are among the state-of-the-art algorithms for solving SMT and SAT problems. In this paper, we take inspiration from divide-and-conquer algorithms in theoretical computer science and adapt them to exploration problems in RL. Different from traditional exploration mechanisms with a single exploring agent, we train a number of concurrent exploring agents. We carefully design our intrinsic reward such that each exploring agent is encouraged to explore only one region of the state space. In other words, the state space is automatically divided into several components, with each agent responsible for exploring one of them. To achieve this, we first choose a reward function based on intrinsic motivation, e.g., count-based reward or ICM. This function outputs a novelty score for a given transition. We initialize K agents, each equipped with its own version of the reward function. Then we design a multi-party intrinsic reward for each agent consisting of two parts: (1) a reward that discourages the agent from visiting states which have already been explored by other agents, and (2) a self-exploration reward that encourages the agent to visit novel states which it has not yet visited itself. Agents run in a fully asynchronous manner; no agent has to wait for any other agent's progress to take exploration steps.
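The two-part multi-party reward can be sketched as follows. The exact combination rule is not given in this excerpt, so the weighted form below (own novelty plus a term that is large only when the state is still novel to every other agent) is one plausible instantiation, with hypothetical weights `alpha` and `beta`; the per-agent novelty models are stand-in count-based functions.

```python
def make_count_model():
    """One agent's count-based novelty model f^i (a stand-in for any
    intrinsic motivation model): returns 1/sqrt(N(s) + 1)."""
    counts = {}
    def update(s):
        counts[s] = counts.get(s, 0) + 1
    def novelty(s):
        return 1.0 / (counts.get(s, 0) + 1) ** 0.5
    return update, novelty

K = 3
updates, models = zip(*(make_count_model() for _ in range(K)))
updates[1]("s"); updates[1]("s")          # agent 1 has already explored state "s"

def multi_party_reward(i, s, alpha=1.0, beta=0.5):
    """Two-part reward for agent i (assumed combination rule):
      - self term: agent i's own novelty, encouraging unvisited states;
      - cross term: the minimum novelty over the OTHER agents, so states
        already explored by someone else earn little extra reward."""
    own = models[i](s)                                   # self-exploration reward
    cross = min(models[j](s) for j in range(K) if j != i)
    return alpha * own + beta * cross

r_contested = multi_party_reward(0, "s")  # agent 1 already knows "s"
r_fresh = multi_party_reward(0, "t")      # nobody has visited "t"
```

Agent 0 has visited neither state, so its self term is identical in both cases; the cross term is what steers it toward `"t"`, the state outside every other agent's territory.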
All agents communicate with each other through shared memory, under the abstraction of a simple read-write shared object, which can be made lock-free, i.e., the learning system can always make progress even if some computing nodes are delayed or fail. These properties make our algorithm highly scalable. Our experiments show that the proposed method is highly efficient and achieves state-of-the-art results on many RL tasks, such as MiniGrid and VizDoom. 2 RELATED WORK. Intrinsic Motivation The study of intrinsic motivation has been of interest for some time (Schmidhuber, 2010). Earlier works tended to analyse intrinsic motivation from a psychological perspective and combined it with reinforcement learning in the tabular setting (Deci et al., 1981; Singh et al., 2005). With the breakthroughs of deep reinforcement learning, there has been a surge of interest in applying intrinsic motivation with deep neural networks to hard-exploration RL problems, in which count-based exploration and curiosity-driven exploration are two main promising research branches. In count-based methods, the agent is encouraged to explore rarely visited states, while in curiosity-driven methods, the agent is encouraged to explore via learning a world model. Bellemare et al. (2016) extend count-based methods to the non-tabular setting by deriving pseudo-counts from a density model. Ostrovski et al. (2017) adapt Bellemare et al. (2016) with PixelCNN. RND (Burda et al., 2018) uses random network distillation to measure the familiarity of states, and NGU (Badia et al., 2020) combines RND with an episodic novelty module. ICM (Pathak et al., 2017) builds a forward dynamics model and an inverse dynamics model to measure curiosity for exploration. RIDE (Raileanu & Rocktäschel, 2020) uses networks similar to ICM's while using a different intrinsic reward combined with episodic state visitation counts. AMIGo (Campero et al.
, 2020) proposes a meta-learning framework for generating adversarially motivated intrinsic goals in a curriculum to guide agent exploration. RAPID (Zha et al., 2021) ranks past trajectories with episodic scores and performs imitation learning for distillation. AGAC (Flet-Berliac et al., 2021) formulates an actor-critic framework with an adversary. Our work differs from the above by building a multi-agent learning mechanism to accelerate exploration. Distributed Reinforcement Learning The scalability of large-scale distributed learning is a main challenge for DRL. Most previous works scale up deep reinforcement learning with distributed asynchronous SGD (Dean et al., 2012) by distributing RL components. Gorila (Nair et al., 2015) scales DQN (Mnih et al., 2015) with distributed Q-networks and replay buffers. A3C (Mnih et al., 2016) uses multiple workers for training and collects all gradients for synchronous parameter updates. Unlike A3C, IMPALA (Espeholt et al., 2018) transfers all trajectories to multiple central learners for distributed learning. Ape-X (Horgan et al., 2018) consists of distributed replay buffers and a centralized learner for value-based methods. SEED RL (Espeholt et al., 2019) and OpenAI Rapid (OpenAI, 2018) apply centralized GPU-based inference for acceleration. For most of the above works, the main parallelism exploited by the distributed framework is data parallelism, e.g., they use multiple nodes to collect data, divide minibatches among computing nodes, and aggregate the gradients computed by each node. Our model, on the other hand, can be viewed as exploiting a new form of parallelism specific to RL problems: state-space parallelism. Instead of dividing minibatches among the nodes, we let the agents divide the state space into pieces, which greatly improves exploration efficiency. 3 BACKGROUND. 3.1 PROBLEM FORMULATION.
In this paper, we consider an RL problem formulated as a Markov Decision Process (MDP), defined as a tuple $\mathrm{MDP} = \{S, A, P, R, \gamma\}$, where $S$ is the state space, $A$ is the action space of the agent, $P$ represents the stochastic transition function $S \times A \rightarrow S$, $R(s, a)$ is the reward function after the agent executes an action, and $\gamma$ is the discount factor. The goal is to maximize the expected accumulated reward $R = \mathbb{E}\left(\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right)$ the agent receives when executing a series of optimal actions in episodes with policy $\pi$. In our method, in addition to the reward $r_t$ received from the environment at timestep $t$, our agents also receive an intrinsic reward $R_t$ as guidance for efficient exploration. The intrinsic reward is a parameterized function of state and action, $R_t(s, a) = f_\theta(s, a)$, where the parameters $\theta = \theta(M)$ depend on the agent's history or replay buffer $M$. For example, in count-based reward, $\theta$ is the number of visits to each state. Instead of a single exploring agent, our proposed method trains $K$ independent exploring agents. The exploring agents do not communicate with or affect each other directly; however, they indirectly affect each other through the intrinsic rewards. Each agent uses its own history or replay memory to train its own version of $f_\theta$ (the intrinsic motivation model). For any agent, the total intrinsic reward used for agent training is formed by combining the trained reward functions of all the agents.
The behavior of this concurrent object is defined by the sequential executions of an ordinary (non-concurrent) Set data type. In addition, a property called "linearizability" is required to guarantee consistent usage of the concurrent object. Linearizability says that for any concurrent execution trace of the concurrent object, one can find a sequential execution trace which yields exactly the same results and preserves the real-time ordering of the concurrent execution trace. In other words, all the methods of the concurrent object can be viewed as executing atomically. In this paper, we assume all concurrent objects are linearizable. 4 DIVIDE AND EXPLORE. We design a learning mechanism named Divide-and-Explore (D&E) to realize the idea mentioned above. An overview of D&E is illustrated in Fig. 1. The learning framework consists of $n$ independent agents $\{A_1, A_2, \ldots, A_n\}$, and each agent tries to explore a particular region of the state space. We let the replay memory $M^i$ denote the set of all past experiences of the $i$-th agent $A_i$, in the form $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$. Let $M = (M^1, \cdots, M^n)$ be the collection of replay memories of all agents. Note that in practice we usually do not need to store all past experiences in the replay memory; some sufficient statistics of the experiences are enough. Our algorithm first specifies a learnable intrinsic motivation model $f_\theta$ as a reward function, where $\theta$ denotes the model's parameters. For each agent $A_i$, we keep track of a distinct model $f^i = f_{\theta^i}$, where $\theta^i = \theta(M^i)$ is a function of the replay memory of agent $A_i$.
When agent $A_i$ takes a transition $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$, then for every agent $A_j$, our training architecture computes an intrinsic reward $\hat{r}_t^{i,j}$ based on the transition and agent $A_j$'s model $f^j$, as follows: $$\hat{r}_t^{i,j} = f^j(s_t^i, a_t^i, r_t^i, s_{t+1}^i). \qquad (1)$$ Intuitively, $\hat{r}_t^{i,j}$ measures the transition's novelty for agent $A_j$. If $\hat{r}_t^{i,j}$ is large, it means that the state $s_{t+1}^i$ is rarely visited and not fully explored by agent $A_j$. The intrinsic motivation model $f_\theta$ can be instantiated with any kind of count-based, curiosity-driven, or memory-based exploration method (e.g., Pathak et al. (2017); Burda et al. (2018)) according to the specified task. | The paper proposes looking at intrinsic exploration from a multi-agent perspective. The proposed method (D&E) rewards an agent for visiting unseen states and for visiting states that are unseen by other agents. The motivation is that if there are multiple parts of the state space, agents can explore these in parallel, thus speeding up the wall-clock exploration time. Experiments look at D&E performance on several MiniGrid tasks as well as VizDoom. | SP:2224d8cc210172758549f98c313a974a8fdce24e |
A Statistical Manifold Framework for Point Cloud Data | 1 INTRODUCTION. A growing number of machine learning problems involve data sets in which each data point is a point cloud in $\mathbb{R}^D$, e.g., a 3D point cloud obtained by a depth camera. Typical applications include measuring the degree of similarity between two point clouds (the point clouds may, for example, be measurements obtained from a depth camera), for which a distance metric on the space of point clouds is needed. Some widely used distance metrics in this context include the Hausdorff distance (both the original and averaged versions) (Hausdorff, 1914), the chamfer distance, and the earth mover distance (Rubner et al., 2000). However, a distance metric measures just one aspect of point cloud data; other applications may require more advanced concepts and tools. For example, in the case of a moving point cloud, one may wish to measure some of its more dynamic aspects, such as its velocity or other quantities that require a notion of higher-order derivatives. Applications such as Monte Carlo sampling may require the construction of an isotropic Gaussian distribution on the underlying space, in which case the notion of "angle" is needed in addition to distance (more technically, an inner product on the tangent space is required (Girolami & Calderhead, 2011; Gemici et al., 2016; Mallasto & Feragen, 2018)). Related to the above, the idea of interpreting a point cloud as samples from some underlying probability distribution is well known, and has been applied to problems ranging from point set registration (Jian & Vemuri, 2005; Wang et al., 2006; Myronenko & Song, 2010; Hasanbelliu et al., 2014; Zhou et al., 2014; Min et al., 2018; Li et al., 2021) to point cloud de-noising (Zaman et al., 2017; Luo & Hu, 2021).
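For concreteness, one of the distance metrics mentioned above can be sketched in a few lines. The averaged two-sided form of the chamfer distance below is one common convention (several variants exist, e.g. with or without square roots); the function name and test clouds are illustrative.

```python
import numpy as np

def chamfer_distance(A, B):
    """Averaged two-sided chamfer distance between point clouds A (n x D)
    and B (m x D): the mean squared distance from each point to its
    nearest neighbour in the other cloud, summed over both directions."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # n x m squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])
d_AB = chamfer_distance(A, B)   # only B's extra point (0.5, 0.5) contributes
```

Note that, as the surrounding text argues, such a scalar distance carries no notion of direction or velocity: two very different deformations between the same pair of clouds receive the same score.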
In these approaches, divergence measures from probability and information theory have been utilized to compute the similarity between point clouds. While effective for certain applications, these divergence measures still capture just one aspect of point cloud data, and cannot be used to measure other quantities of a more geometric nature. For more advanced applications, a rigorous mathematical characterization of the space of point cloud data is an essential ingredient for a more comprehensive, robust, and correct (in the sense of being coordinate-invariant and geometrically well-defined) analysis of the types described above, particularly one based on Riemannian geometry. As such, the first contribution of this paper is a Riemannian geometric structure for the space of point cloud data. The key idea behind our approach draws upon the information geometry framework of (Amari & Nagaoka, 2000; Amari, 2016), in which the space of probability density functions is given the structure of a Riemannian manifold (the statistical manifold), with the Fisher information acting as a natural Riemannian metric (the information Riemannian metric, or info-Riemannian metric for short). The connection between point cloud data and information geometry is established by constructing a 1-1 mapping from the space of point cloud data to the space of probability density functions, i.e., a point cloud $X = \{x_1, \ldots, x_n \mid x_i \in \mathbb{R}^D\}$ is mapped to a density function $p(x; X)$ on $\mathbb{R}^D$ in a 1-1 fashion, as illustrated in Figure 1. Two case studies involving autoencoders are presented to demonstrate the benefits of our approach. In the first case study, a pre-trained autoencoder is used to encode two 3D point clouds, one representing a cylinder and one a cone, and the minimal geodesic (or path of shortest length) with respect to the info-Riemannian metric is then constructed between these two objects.
The shape evolution obtained with the info-Riemannian metric is seen to be far more natural and intuitive than that obtained with the straight-line interpolant in latent space. In the second case study, we use the info-Riemannian statistical manifold framework to find a set of distortion-minimizing latent space coordinates, in the sense that (Euclidean) straight lines in the latent space closely approximate minimal geodesics on the statistical manifold. Such a set of coordinates offers a more discriminative representation of the data manifold (Chen et al., 2020), resulting in, e.g., higher linear SVM classification accuracy vis-à-vis existing state-of-the-art methods. Experiments are carried out with both synthetic and standard benchmark datasets (ShapeNet (Chang et al., 2015), ModelNet (Wu et al., 2015)). 2 STATISTICAL MANIFOLDS AND THE FISHER INFORMATION METRIC. We begin by extending the original definition of a statistical manifold as follows: Definition 1. Given an $m$-dimensional topological manifold¹ $\Theta$ and a 1-1 map from $\Theta$ to the space of probability density functions, $\theta \mapsto p(x; \theta)$, the image of this mapping, denoted $S := \{p(x; \theta) \mid \theta \in \Theta\}$, is an $m$-dimensional statistical manifold. In the original definition, $\Theta$ is taken to be an open subset of $\mathbb{R}^m$ (Amari & Nagaoka, 2000; Amari, 2016). By endowing $S$ with a Riemannian metric, $S$ can be given the structure of a Riemannian manifold, allowing lengths, angles, and volumes to be defined on $S$ in a coordinate-invariant manner. The Fisher information metric serves as a natural Riemannian metric on $S$: the elements $g_{ij}$ of the Fisher information metric $G(\theta) \in \mathbb{R}^{m \times m}$ can be expressed as $$g_{ij}(\theta) := \int p(x; \theta) \frac{\partial \log p(x; \theta)}{\partial \theta_i} \frac{\partial \log p(x; \theta)}{\partial \theta_j} \, dx, \quad i, j = 1, \ldots, m, \qquad (1)$$ where $\theta = (\theta_1, \ldots, \theta_m)$ are local coordinates on $S$.
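Since $g_{ij}(\theta)$ in Eq. (1) is an expectation under $p(x;\theta)$ of products of score functions, it can be checked numerically by Monte Carlo on a family where the Fisher information is known in closed form. The sketch below (an illustrative check, not from the paper) uses the one-parameter family $p(x;\theta) = \mathcal{N}(\theta, 1)$, for which $\partial_\theta \log p(x;\theta) = x - \theta$ and $g(\theta) = 1$ for all $\theta$.

```python
import numpy as np

# Monte Carlo check of the Fisher information metric (Eq. 1) on the
# simplest possible statistical manifold: p(x; theta) = N(theta, 1).
# Closed form: g(theta) = E[(d/dtheta log p)^2] = E[(x - theta)^2] = 1.
rng = np.random.default_rng(0)
theta = 0.7
x = rng.normal(theta, 1.0, size=200_000)   # samples from p(x; theta)
score = x - theta                          # d/dtheta log p(x; theta)
g_mc = (score ** 2).mean()                 # Monte Carlo estimate of g(theta)
```

The same expectation-of-scores structure is what makes the point-cloud metric of Section 3.2 computable by sampling, since the integral there has exactly the form of Eq. (1) with $X$ playing the role of $\theta$.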
Defining infinitesimal length on $S$ by $ds^2 = d\theta^T G(\theta)\, d\theta$, the length of a curve $\theta(t)$ on $S$ can then be computed as the integral $\int_0^T ds$. Further details on statistical manifolds and the Fisher information metric can be found in, e.g., (Amari & Nagaoka, 2000; Efron & Hinkley, 1978; Rissanen, 1996; Han & Park, 2014). ¹A topological manifold is a locally Euclidean Hausdorff topological space. 3 STATISTICAL MANIFOLD FRAMEWORK FOR POINT CLOUD DATA. With the above statistical manifold preliminaries, we now construct a Riemannian geometric structure for the space of point cloud data. Section 3.1 defines a statistical manifold from the point cloud data, while Section 3.2 uses the Fisher information metric to construct a Riemannian metric for point cloud data. To keep the definitions and results simple, we shall assume throughout that all point cloud data consist of exactly $n$ distinct points in $\mathbb{R}^D$, i.e., each point cloud is of the form $X = \{x_1, \ldots, x_n \mid x_i \in \mathbb{R}^D,\ x_i \neq x_j \text{ if } i \neq j\}$. The set of all point clouds is denoted $\mathcal{X}$. Later we discuss methods for dealing with point clouds that violate these assumptions. The proofs of the propositions in this section are given in Appendix B. 3.1 STATISTICAL MANIFOLD OF POINT CLOUD DATA. The core idea for constructing the statistical manifold is to interpret a point cloud $X$ as a set of $n$ samples drawn from some underlying probability density function. Using a kernel density estimator (Parzen, 1962; Davis et al., 2011), a parametric probability density function $p(x; X)$ can be defined in which $X$ itself is the parameter: Definition 2. Given a positive kernel function $K : \mathbb{R}^D \rightarrow \mathbb{R}$ such that $\int_{\mathbb{R}^D} K(u)\, du = 1$ and a $D \times D$ symmetric positive-definite matrix $\Sigma$ (the bandwidth matrix), the kernel density estimate $$p(x; X) := \frac{1}{n\sqrt{|\Sigma|}} \sum_{i=1}^{n} K\big(\Sigma^{-\frac{1}{2}}(x - x_i)\big) \qquad (2)$$ is said to be a statistical representation of the point cloud $X \in \mathcal{X}$.
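The statistical representation of Eq. (2) specializes neatly when the kernel is the standard normal and $\Sigma = \sigma^2 I$ (so that $\sqrt{|\Sigma|} = \sigma^D$). A minimal sketch, with illustrative names and bandwidth:

```python
import numpy as np

def kde_density(x, X, sigma=0.2):
    """Statistical representation p(x; X) of Eq. (2) with the standard
    normal kernel K(u) = (2*pi)^(-D/2) exp(-u'u/2) and bandwidth matrix
    Sigma = sigma^2 I. X is an n x D point cloud; x is a query point."""
    n, D = X.shape
    u = (x - X) / sigma                                  # rows: Sigma^(-1/2)(x - x_i)
    K = np.exp(-0.5 * (u ** 2).sum(-1)) / (2 * np.pi) ** (D / 2)
    return K.sum() / (n * sigma ** D)                    # 1/(n sqrt|Sigma|) sum K(...)
```

Because each kernel integrates to one, $p(x;X)$ is a genuine density regardless of $n$ or $\sigma$, which is what allows the Fisher information machinery of Section 2 to be applied with $X$ as the parameter.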
The set of statistical representations is denoted $S := \{p(x; X) \mid X \in \mathcal{X}\}$. To ensure that $S$ is a statistical manifold, recall from Definition 1 that the following two conditions need to be satisfied: (i) $\mathcal{X}$ is a topological manifold; (ii) a 1-1 mapping $h : \mathcal{X} \rightarrow S$, $X \mapsto p(x; X)$, must be defined. The first condition is satisfied under the distinct-points assumption: Proposition 1 (Corollary 2.2.11 in (Knudsen, 2018)). The set of point clouds in which each point cloud is a set of $n$ distinct points of dimension $D$ is an $nD$-dimensional topological manifold. To satisfy the second condition, additional assumptions are needed. The following proposition provides a sufficient condition for $h$ to be 1-1: Proposition 2. If the set of functions $\{K(\Sigma^{-\frac{1}{2}}(x - x_i)) \mid x_i \in \mathcal{F}\}$ is linearly independent² for every finite subset $\mathcal{F} \subset \mathbb{R}^D$ with $|\mathcal{F}| \leq 2n$, then the mapping $h : \mathcal{X} \rightarrow S$ is 1-1. By the above proposition, any kernel function satisfying the linear independence condition suffices to ensure the existence of a 1-1 mapping $h$. For our purposes, the standard and widely used normal kernel function satisfies the linear independence condition. Proposition 3. Under the distinct-points assumption, the standard (multivariate) normal kernel function $$K(u) = \frac{1}{\sqrt{(2\pi)^D}} \exp\left(-\frac{u^T u}{2}\right) \qquad (3)$$ with the scaled identity bandwidth matrix $\Sigma = \sigma^2 I$ satisfies the linear independence condition of Proposition 2. From Propositions 2 and 3 we have established that, under the distinct-points assumption and using the standard normal kernel function, the mapping $h : \mathcal{X} \rightarrow S$ is 1-1; $S$ can therefore be given the structure of a statistical manifold. While other choices of kernel function are possible, throughout the remainder of the paper we use the standard normal kernel function. Figure 2 illustrates statistical manifold representations of some example point clouds.
²The linear independence of a set of functions means that only the trivial linear combination of the functions equals the zero function. 3.2 INFORMATION RIEMANNIAN METRIC FOR POINT CLOUD DATA SPACE. We now equip the point cloud statistical manifold $S$ with the Fisher information metric, which we refer to as the info-Riemannian metric and denote by $H$. The first task is to define a local coordinate system on the space of point clouds $\mathcal{X}$. Toward this end, we use the matrix representation $X \in \mathbb{R}^{n \times D}$ of a point cloud. Observe that the matrix representation is not unique: given an $n \times n$ permutation matrix $P \in \mathbb{R}^{n \times n}$, $X$ and $PX$ represent the same point cloud. Fortunately, this does not cause problems, since $p(x; X)$ is defined in a permutation-invariant way, i.e., $p(x; X) = p(x; PX)$ for any $n \times n$ permutation matrix $P$. Throughout, we use italics to denote local coordinate representations; e.g., the point cloud has local coordinates $X \in \mathbb{R}^{n \times D}$, and the tangent vector $V \in T_X\mathcal{X}$ has local coordinates $V \in \mathbb{R}^{n \times D}$. The info-Riemannian metric $H$ can be expressed in the local coordinates $X$ as follows: $$H_{ijkl}(X) := \int p(x; X) \frac{\partial \log p(x; X)}{\partial X_{ij}} \frac{\partial \log p(x; X)}{\partial X_{kl}} \, dx, \qquad (4)$$ for $i, k = 1, \ldots, n$ and $j, l = 1, \ldots, D$. Given two tangent vectors $V, W \in T_X\mathcal{X}$ with respective matrix representations $V, W \in \mathbb{R}^{n \times D}$, their inner product is then computed as $$\langle V, W \rangle_X := \sum_{i,k=1}^{n} \sum_{j,l=1}^{D} H_{ijkl}(X)\, V_{ij} W_{kl}. \qquad (5)$$ We note that the coordinate expression of the info-Riemannian metric $H_{ijkl}(X)$ results in a permutation-invariant inner product, i.e., $\sum H_{ijkl}(X) V_{ij} W_{kl} = \sum H_{ijkl}(PX)(PV)_{ij}(PW)_{kl}$ for any $n \times n$ permutation matrix $P$, showing that the metric is geometrically well-defined. Using the standard normal kernel function, the coordinate expression of the info-Riemannian metric $H_{ijkl}$ has a simple analytic expression, as follows: Proposition 4.
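Because Eq. (4) is an expectation under $p(x;X)$ of products of score-matrix entries, the inner product of Eq. (5) can be estimated by sampling, without forming the full $nD \times nD$ tensor. The sketch below is an illustrative Monte Carlo implementation (not the authors' code): for the Gaussian kernel with $\Sigma = \sigma^2 I$, the score has the closed form $\partial \log p / \partial x_i = w_i(x)(x - x_i)/\sigma^2$, with $w_i(x)$ the mixture responsibility of component $i$. The circle demonstration that follows mimics the tangential-vs-radial velocity comparison discussed around Figure 3; the specific cloud, $\sigma$, and sample count are assumptions.

```python
import numpy as np

def info_inner_product(X, V, W, sigma=0.3, n_samples=50_000, seed=0):
    """Monte Carlo estimate of <V, W>_X under Eqs. (4)-(5):
    H_ijkl(X) = E_{x~p}[G_ij G_kl], where G is the score matrix
    G_ij = d log p(x; X) / d X_ij = [w_i(x) (x - x_i) / sigma^2]_j."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    comp = rng.integers(0, n, n_samples)                      # x ~ p(x; X): pick a
    xs = X[comp] + sigma * rng.standard_normal((n_samples, D))  # component, add noise
    d = xs[:, None, :] - X[None, :, :]                        # (N, n, D): x - x_i
    logK = -0.5 * (d ** 2).sum(-1) / sigma ** 2               # (N, n)
    w = np.exp(logK - logK.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                         # responsibilities w_i(x)
    G = w[:, :, None] * d / sigma ** 2                        # (N, n, D) score matrices
    return ((G * V).sum(axis=(1, 2)) * (G * W).sum(axis=(1, 2))).mean()

# A cloud of 20 points on the unit circle, with a radial velocity (which
# changes the distribution) and a tangential velocity (a rotation that
# barely changes it). Both have the same Euclidean norm ||V||^2 = 20.
ang = np.linspace(0, 2 * np.pi, 20, endpoint=False)
X = np.stack([np.cos(ang), np.sin(ang)], axis=1)
V_radial = X.copy()
V_tangential = np.stack([-np.sin(ang), np.cos(ang)], axis=1)

norm_radial = info_inner_product(X, V_radial, V_radial)
norm_tangential = info_inner_product(X, V_tangential, V_tangential)
```

As in the paper's comparison, the distribution-preserving tangential motion gets a norm orders of magnitude smaller than the radial one, even though the two velocity matrices are indistinguishable under the Euclidean metric.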
With the standard (multivariate) normal kernel function and bandwidth parameter $\sigma$, the information Riemannian metric is given by $$H_{ijkl}(X) = \int p(x; X) \frac{K\left(\frac{x - x_i}{\sigma}\right) K\left(\frac{x - x_k}{\sigma}\right)}{\left(\sum_{m=1}^{n} K\left(\frac{x - x_m}{\sigma}\right)\right)^2} \left[\frac{(x - x_i)(x - x_k)^T}{\sigma^4}\right]_{jl} dx. \qquad (6)$$ Figure 3 shows that, given two moving point clouds whose velocity matrices have equal Euclidean norm (i.e., $\|V\|^2 = \sum_{i=1}^{n}\sum_{j=1}^{D} V_{ij} V_{ij}$), the velocity norms under the info-Riemannian metric can be significantly different: velocity A has a norm of 0.2626, while velocity B has a norm of $2.2 \times 10^{-8}$. In particular, observe that the tangential velocity in case B, which does not change the overall distribution of the point cloud, has a very small velocity norm under the info-Riemannian metric, as it should. As another illustrative example highlighting the difference between the info-Riemannian metric and the Euclidean metric (i.e., $\langle V, W \rangle := \sum_{i=1}^{n}\sum_{j=1}^{D} V_{ij} W_{ij}$), Figure 4 shows random walks of point cloud data under these two metrics. Consider a randomly sampled and normalized velocity matrix under the Euclidean metric: the 3D velocity vectors in the sampled matrix are equally likely to point in any direction. On the other hand, the 3D velocity vectors in a velocity matrix sampled under the info-Riemannian metric are more likely to point in tangential directions (i.e., more likely case B than case A in Figure 3). As a result, unlike the standard Euclidean metric (top), the info-Riemannian metric (bottom) produces random walks that stay close to the initial sphere without significant changes in its overall distribution pattern. | This paper proposes to represent point cloud data as samples drawn from a statistical distribution, and to project the data onto an underlying statistical manifold on which various analyses can be conducted in a manner more favorable than in a regular Euclidean space.
The authors thoroughly describe how points are converted into samples from a probability density function, and explain the metric that determines the coordinate system on the statistical manifold, which describes the changes to the distribution of point clouds. To introduce this metric for representing point clouds, the authors propose to use it to find the minimal geodesic curve between two data points in a latent space. They also propose adding a novel constraint to the error metric of autoencoders, so that the encoded latent space possesses the Fisher information metric. The effect of this proposal is first analyzed in two synthetic experiments. First, the authors show that, using the proposal, they are able to find a geodesic curve along the manifold consisting of data with the same label. Then, they demonstrate that the latent space of the constrained autoencoder successfully captures the Riemannian metric, as samples of many classes are placed along a single line or are cleanly separated. Finally, the authors transfer the learnt representation between datasets to demonstrate that the proposed metric space is suitable for representing various point cloud data. | SP:f7cdcae5efc2bf722381f2a19c73a53a8c6b1bf1 |
A Statistical Manifold Framework for Point Cloud Data | 1 INTRODUCTION. A growing number of machine learning problems involve data sets in which each data point is a point cloud in $\mathbb{R}^D$, e.g., a 3D point cloud obtained by a depth camera. Typical applications include measuring the degree of similarity between two point clouds (the point clouds may, for example, be measurements obtained from a depth camera), for which a distance metric on the space of point clouds is needed. Some widely used distance metrics in this context include the Hausdorff distance (both the original and averaged versions) (Hausdorff, 1914), the chamfer distance, and the earth mover distance (Rubner et al., 2000). However, a distance metric measures just one aspect of point cloud data; other applications may require more advanced concepts and tools. For example, in the case of a moving point cloud, one may wish to measure some of its more dynamic aspects, such as its velocity or other quantities that require a notion of higher-order derivatives. Applications such as Monte Carlo sampling may require the construction of an isotropic Gaussian distribution on the underlying space, in which case the notion of "angle" is needed in addition to distance (more technically, an inner product on the tangent space is required (Girolami & Calderhead, 2011; Gemici et al., 2016; Mallasto & Feragen, 2018)). Related to the above, the idea of interpreting a point cloud as samples from some underlying probability distribution is well known, and has been applied to problems ranging from point set registration (Jian & Vemuri, 2005; Wang et al., 2006; Myronenko & Song, 2010; Hasanbelliu et al., 2014; Zhou et al., 2014; Min et al., 2018; Li et al., 2021) to point cloud de-noising (Zaman et al., 2017; Luo & Hu, 2021).
In these approaches, divergence measures from probability and information theory have been utilized to compute the similarity between point clouds. While effective for certain applications, these divergence measures still capture just one aspect of point cloud data, and cannot be used to measure other quantities of a more geometric nature. For more advanced applications, a rigorous mathematical characterization of the space of point cloud data is an essential ingredient of a more comprehensive, robust, and correct (in the sense of being coordinate-invariant and geometrically well-defined) analysis of the types described above, particularly one based on Riemannian geometry. As such, the first contribution of this paper is a Riemannian geometric structure for the space of point cloud data. The key idea behind our approach draws upon the information geometry framework of (Amari & Nagaoka, 2000; Amari, 2016), in which the space of probability density functions is given the structure of a Riemannian manifold (the statistical manifold), with the Fisher information acting as a natural Riemannian metric (the information Riemannian metric, or info-Riemannian metric for short). The connection between point cloud data and information geometry is established by constructing a 1-1 mapping from the space of point cloud data to the space of probability density functions, i.e., a point cloud $X = \{x_1, \dots, x_n \mid x_i \in \mathbb{R}^D\}$ is mapped to a density function $p(x; X)$ on $\mathbb{R}^D$ in a 1-1 fashion, as illustrated in Figure 1. Two case studies involving autoencoders are presented to demonstrate the benefits of our approach. In the first case study, a pre-trained autoencoder is used to encode two 3D point clouds – one representing a cylinder, one a cone – and the minimal geodesic (or path of shortest length) with respect to the info-Riemannian metric is then constructed between these two objects.
The shape evolution obtained for the info-Riemannian metric is seen to be far more natural and intuitive than that obtained for the straight-line interpolant in latent space. In the second case study, we use the info-Riemannian statistical manifold framework to find a set of distortion-minimizing latent space coordinates, in the sense that (Euclidean) straight lines in the latent space closely approximate minimal geodesics on the statistical manifold. Such a set of coordinates offers a more discriminative representation for the data manifold (Chen et al., 2020) that results in, e.g., higher linear SVM classification accuracy vis-à-vis existing state-of-the-art methods. Experiments are carried out with both synthetic and standard benchmark datasets (ShapeNet (Chang et al., 2015), ModelNet (Wu et al., 2015)). 2 STATISTICAL MANIFOLDS AND THE FISHER INFORMATION METRIC . We begin by extending the original definition of a statistical manifold as follows: Definition 1. Given an m-dimensional topological manifold¹ $\Theta$ and a 1-1 map from $\Theta$ to the space of probability density functions, $\theta \mapsto p(x;\theta)$, the image of this mapping, denoted $S := \{p(x;\theta) \mid \theta \in \Theta\}$, is an m-dimensional statistical manifold. In the original definition, $\Theta$ is taken to be an open subset of $\mathbb{R}^m$ (Amari & Nagaoka, 2000; Amari, 2016). By endowing $S$ with a Riemannian metric, $S$ can be given the structure of a Riemannian manifold, allowing lengths, angles, and volumes to be defined on $S$ in a coordinate-invariant manner. The Fisher information metric serves as a natural Riemannian metric on $S$: the elements $g_{ij}$ of the Fisher information metric $G(\theta) \in \mathbb{R}^{m \times m}$ can be expressed as $$g_{ij}(\theta) := \int p(x;\theta)\,\frac{\partial \log p(x;\theta)}{\partial \theta_i}\,\frac{\partial \log p(x;\theta)}{\partial \theta_j}\,dx, \quad i, j = 1, \dots, m, \qquad (1)$$ where $\theta = (\theta_1, \dots, \theta_m)$ are local coordinates on $S$.
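Equation (1) can be sanity-checked numerically. For the univariate Gaussian family $p(x;\theta) = \mathcal{N}(\theta, 1)$, the score is $\partial \log p / \partial \theta = x - \theta$, so the Fisher information is $\mathbb{E}[(x-\theta)^2] = 1$ for every $\theta$; a Monte Carlo estimate of the integral (our sketch, not from the paper) recovers this:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.7

# For p(x; theta) = N(theta, 1), the score is d/dtheta log p = (x - theta),
# so Eq. (1) reduces to g(theta) = E[(x - theta)^2] = 1 for every theta.
x = rng.normal(theta, 1.0, size=200_000)
g_hat = np.mean((x - theta) ** 2)
print(g_hat)  # ≈ 1.0
```

The same sampling-based estimate extends to the point cloud metric of Section 3, where the integral has no simple closed form.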
Defining infinitesimal length on $S$ by $ds^2 = d\theta^T G(\theta)\, d\theta$, the length of a curve $\theta(t)$, $t \in [0, T]$, on $S$ can then be computed as the integral $\int_0^T ds$. Further details on statistical manifolds and the Fisher information metric can be found in, e.g., (Amari & Nagaoka, 2000; Efron & Hinkley, 1978; Rissanen, 1996; Han & Park, 2014). ¹A topological manifold is a locally Euclidean Hausdorff topological space. 3 STATISTICAL MANIFOLD FRAMEWORK FOR POINT CLOUD DATA . With the above statistical manifold preliminaries, we now construct a Riemannian geometric structure for the space of point cloud data. Section 3.1 defines a statistical manifold from the point cloud data, while Section 3.2 uses the Fisher information metric to construct a Riemannian metric for point cloud data. To keep the definitions and results simple, we shall assume throughout that all point cloud data consist of exactly n distinct points in $\mathbb{R}^D$, i.e., each point cloud is of the form $X = \{x_1, \dots, x_n \mid x_i \in \mathbb{R}^D,\ x_i \neq x_j \text{ if } i \neq j\}$. The set of all point clouds is denoted $\mathcal{X}$. Later we discuss methods for dealing with point clouds that violate our assumptions. The proofs of the propositions in this section are in Appendix B. 3.1 STATISTICAL MANIFOLD OF POINT CLOUD DATA . The core idea for constructing the statistical manifold is to interpret a point cloud $X$ as a set of n samples drawn from some underlying probability density function. Using a kernel density estimator (Parzen, 1962; Davis et al., 2011), a parametric probability density function $p(x; X)$ can be defined in which $X$ itself is the parameter: Definition 2. Given a positive kernel function $K : \mathbb{R}^D \to \mathbb{R}$ such that $\int_{\mathbb{R}^D} K(u)\, du = 1$ and a $D \times D$ symmetric positive-definite matrix $\Sigma$ (the bandwidth matrix), the kernel density estimate $$p(x; X) := \frac{1}{n\sqrt{|\Sigma|}} \sum_{i=1}^{n} K\big(\Sigma^{-1/2}(x - x_i)\big) \qquad (2)$$ is said to be a statistical representation of the point cloud $X \in \mathcal{X}$.
The set of statistical representations is denoted $S := \{p(x; X) \mid X \in \mathcal{X}\}$. To ensure that $S$ is a statistical manifold, recall from Definition 1 that the following two conditions need to be satisfied: (i) $\mathcal{X}$ is a topological manifold; (ii) a 1-1 mapping $h : \mathcal{X} \to S$, $X \mapsto p(x; X)$, must be defined. The first condition can be satisfied with the "distinct points" assumption: Proposition 1 (Corollary 2.2.11 in (Knudsen, 2018)). The set of point clouds in which each point cloud is a set of n distinct points of dimension D is an nD-dimensional topological manifold. To satisfy the second condition, additional assumptions are needed. The following proposition provides a sufficient condition for h to be 1-1: Proposition 2. If the set of functions $\{K(\Sigma^{-1/2}(x - x_i)) \mid x_i \in F\}$ is linearly independent² for any arbitrary finite subset $F \subset \mathbb{R}^D$ with $|F| \leq 2n$, the mapping $h : \mathcal{X} \to S$ is 1-1. By the above proposition, any kernel function that satisfies the linear independence condition is sufficient to ensure the existence of a 1-1 mapping h. For our purposes, the standard and widely used normal kernel function satisfies the linear independence condition. Proposition 3. Under the distinct points assumption, the standard (multivariate) normal kernel function $$K(u) = \frac{1}{\sqrt{(2\pi)^D}} \exp\left(-\frac{u^T u}{2}\right) \qquad (3)$$ with the scaled identity bandwidth matrix, i.e., $\Sigma = \sigma^2 I$, satisfies the linear independence condition in Proposition 2. From Propositions 2 and 3 we have established that, under the distinct points assumption and using the standard normal kernel function, the mapping $h : \mathcal{X} \to S$ is 1-1; $S$ can therefore be given the structure of a statistical manifold. While other choices of kernel function are possible, throughout the remainder of the paper we use the standard normal kernel function. Figure 2 illustrates statistical manifold representations of some example point clouds.
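Definition 2 together with Proposition 3 is a standard Gaussian kernel density estimate, and two of its basic properties — normalization and permutation-invariance — are easy to verify numerically. A small sketch of ours (the bandwidths and cloud sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def p_kde(x, X, sigma):
    # Statistical representation p(x; X) of Eq. (2) with the standard
    # normal kernel and bandwidth matrix Sigma = sigma^2 * I (Prop. 3).
    X = np.asarray(X, dtype=float)            # (n, D) point cloud
    n, D = X.shape
    sq = np.sum(((x - X) / sigma) ** 2, axis=1)
    return np.sum(np.exp(-0.5 * sq)) / (n * (sigma * np.sqrt(2 * np.pi)) ** D)

# (a) the density integrates to 1 (checked on a 1D cloud with a grid sum)
X1 = np.array([[-1.0], [0.0], [1.0]])
grid = np.linspace(-5.0, 5.0, 2001)
mass = sum(p_kde(np.array([t]), X1, sigma=0.2) for t in grid) * (grid[1] - grid[0])
print(mass)  # ≈ 1.0

# (b) the representation is permutation-invariant: p(x; X) = p(x; PX)
X2 = rng.normal(size=(5, 3))                  # n = 5 points in R^3
P = np.eye(5)[rng.permutation(5)]             # random permutation matrix
q = rng.normal(size=3)                        # arbitrary query point
print(p_kde(q, X2, 0.3), p_kde(q, P @ X2, 0.3))  # identical up to rounding
```

Property (b) is what makes the metric of Section 3.2 well-defined despite the non-uniqueness of the matrix representation of a point cloud.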
²The linear independence of a set of functions means that only a trivial linear combination of the functions equals the zero function. 3.2 INFORMATION RIEMANNIAN METRIC FOR POINT CLOUD DATA SPACE . We now equip the point cloud statistical manifold $S$ with the Fisher information metric, which we refer to as the info-Riemannian metric and denote by $H$. The first task is to define a local coordinate system on the space of point clouds $\mathcal{X}$. Toward this end, we use the matrix representation $X \in \mathbb{R}^{n \times D}$ of a point cloud. Observe that the matrix representation is not unique: given an $n \times n$ permutation matrix $P \in \mathbb{R}^{n \times n}$, $X$ and $PX$ represent the same point cloud. Fortunately, this does not cause problems, since $p(x; X)$ is defined in a permutation-invariant way, i.e., $p(x; X) = p(x; PX)$ for any $n \times n$ permutation matrix $P$. Throughout, we use italics to denote local coordinate representations, e.g., the point cloud has local coordinates $X \in \mathbb{R}^{n \times D}$, and the tangent vector $V \in T_X\mathcal{X}$ has local coordinates $V \in \mathbb{R}^{n \times D}$. The info-Riemannian metric $H$ can be expressed in local coordinates $X$ as follows: $$H_{ijkl}(X) := \int p(x; X)\,\frac{\partial \log p(x; X)}{\partial X_{ij}}\,\frac{\partial \log p(x; X)}{\partial X_{kl}}\,dx, \qquad (4)$$ for $i, k = 1, \dots, n$ and $j, l = 1, \dots, D$. Given two tangent vectors $V, W \in T_X\mathcal{X}$ with respective matrix representations $V, W \in \mathbb{R}^{n \times D}$, their inner product is then computed as $$\langle V, W \rangle_X := \sum_{i,k=1}^{n} \sum_{j,l=1}^{D} H_{ijkl}(X)\, V_{ij} W_{kl}. \qquad (5)$$ We note that the coordinate expression of the info-Riemannian metric $H_{ijkl}(X)$ results in a permutation-invariant inner product, i.e., $\sum H_{ijkl}(X) V_{ij} W_{kl} = \sum H_{ijkl}(PX) (PV)_{ij} (PW)_{kl}$ for any $n \times n$ permutation matrix $P$, showing that the metric is geometrically well-defined. Using the standard normal kernel function, the coordinate expression of the info-Riemannian metric $H_{ijkl}$ has a simple analytic form, as follows: Proposition 4.
With the standard (multivariate) normal kernel function and the bandwidth parameter $\sigma$, the information Riemannian metric is given by $$H_{ijkl}(X) = \int p(x; X)\,\frac{K\left(\frac{x - x_i}{\sigma}\right) K\left(\frac{x - x_k}{\sigma}\right)}{\left(\sum_{m=1}^{n} K\left(\frac{x - x_m}{\sigma}\right)\right)^2} \left[\frac{(x - x_i)(x - x_k)^T}{\sigma^4}\right]_{jl} dx. \qquad (6)$$ Figure 3 shows that, given two moving point clouds whose velocity matrices have equal Euclidean norm (i.e., $\|V\|^2 = \sum_{i=1}^{n} \sum_{j=1}^{D} V_{ij} V_{ij}$), the velocity norms under the info-Riemannian metric are significantly different: velocity A has a value of 0.2626, while velocity B has a value of $2.2 \times 10^{-8}$. In particular, observe that the tangential velocity in case B, which does not change the overall distribution of the point cloud, has a very small velocity norm under the info-Riemannian metric, as it should. As another illustrative example that highlights the difference between the info-Riemannian metric and the Euclidean metric (i.e., $\langle V, W \rangle := \sum_{i=1}^{n} \sum_{j=1}^{D} V_{ij} W_{ij}$), Figure 4 shows random walks for point cloud data under these metrics. Consider a randomly sampled and normalized velocity matrix under the Euclidean metric: the 3D velocity vectors in the sampled velocity matrix are equally likely to point in any direction. On the other hand, the 3D velocity vectors in a velocity matrix sampled under the info-Riemannian metric are more likely to point in tangential directions (i.e., more like case B than case A in Figure 3). As a result, unlike the standard Euclidean metric (top), the info-Riemannian metric (bottom) produces random walks that stay close to the initial sphere without significant changes in the overall distribution pattern. | The paper concerns data sets of point clouds in Euclidean space and proposes to analyze those as samples from underlying probability distributions. Each point cloud being represented by a probability distribution can now be seen as a point in a statistical manifold on which the Fisher information metric provides a natural Riemannian structure.
The authors present two experiments in this setup: they train autoencoders to represent point clouds and interpolate between point clouds using geodesics, and they train the latent representation so that straight lines are close to geodesics. | SP:f7cdcae5efc2bf722381f2a19c73a53a8c6b1bf1 |
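The qualitative contrast described around Figure 3 of the paper text above — a tangential, distribution-preserving velocity has a tiny info-Riemannian norm while a distribution-changing one does not — can be reproduced with a small Monte Carlo experiment. In this sketch of ours, the circle geometry, bandwidth $\sigma = 0.4$, sample size, and finite-difference step are all illustrative assumptions; it uses the identity $\langle V, V \rangle_X = \mathbb{E}_{x \sim p(\cdot;X)}\big[(\frac{d}{dt} \log p(x; X + tV)\big|_{t=0})^2\big]$:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x, X, sigma):
    # log of Eq. (2) with the normal kernel; x: (m, D) queries, X: (n, D)
    n, D = X.shape
    sq = np.sum(((x[:, None, :] - X[None, :, :]) / sigma) ** 2, axis=-1)
    return (np.log(np.exp(-0.5 * sq).sum(axis=1))
            - np.log(n) - D * np.log(sigma * np.sqrt(2 * np.pi)))

def info_norm_sq(X, V, sigma=0.4, n_samples=100_000, eps=1e-4):
    # Monte Carlo estimate of <V, V>_X = E_x[(d/dt log p(x; X + tV))^2],
    # with the time derivative taken by central finite differences
    n, D = X.shape
    idx = rng.integers(0, n, size=n_samples)              # mixture component
    x = X[idx] + sigma * rng.normal(size=(n_samples, D))  # x ~ p(.; X)
    u = (log_p(x, X + eps * V, sigma) - log_p(x, X - eps * V, sigma)) / (2 * eps)
    return float(np.mean(u ** 2))

# n points evenly spaced on the unit circle in R^2
n = 16
ang = 2 * np.pi * np.arange(n) / n
X = np.stack([np.cos(ang), np.sin(ang)], axis=1)
V_radial = X.copy()                                        # outward expansion
V_tangent = np.stack([-np.sin(ang), np.cos(ang)], axis=1)  # rigid rotation

norm_radial = info_norm_sq(X, V_radial)
norm_tangent = info_norm_sq(X, V_tangent)
# equal Euclidean norms, but the tangential (distribution-preserving)
# velocity has a far smaller info-Riemannian norm
print(norm_radial, norm_tangent)
```

On this toy example the radial velocity's info-Riemannian norm is many orders of magnitude larger than the tangential one, even though the two velocities have identical Euclidean norms, mirroring the A-versus-B contrast the paper reports.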
A Statistical Manifold Framework for Point Cloud Data | 1 INTRODUCTION . A growing number of machine learning problems involve data sets in which each data point is a point cloud in $\mathbb{R}^D$, e.g., 3D point clouds obtained by depth cameras. Typical applications include measuring the degree of similarity between two point clouds – the point clouds may be measurements obtained from a depth camera, for example – for which a distance metric on the space of point clouds is needed. Some widely used distance metrics in this context include the Hausdorff distance (Hausdorff, 1914) (both the original and averaged versions), the chamfer distance, and the earth mover distance (Rubner et al., 2000). However, a distance metric measures just one aspect of point cloud data; other applications may require more advanced concepts and tools. For example, in the case of a moving point cloud, one may wish to measure some of its more dynamic aspects, such as its velocity or other quantities that require a notion of higher-order derivatives. Applications such as Monte Carlo sampling may require the construction of an isotropic Gaussian distribution on the underlying space, in which case the notion of "angle" is needed in addition to distance (more technically, an inner product on the tangent space is required (Girolami & Calderhead, 2011; Gemici et al., 2016; Mallasto & Feragen, 2018)). Related to the above, the idea of interpreting a point cloud as samples from some underlying probability distribution is well known, and has been applied to problems ranging from point set registration (Jian & Vemuri, 2005; Wang et al., 2006; Myronenko & Song, 2010; Hasanbelliu et al., 2014; Zhou et al., 2014; Min et al., 2018; Li et al., 2021) to point cloud de-noising (Zaman et al., 2017; Luo & Hu, 2021).
In these approaches, divergence measures from probability and information theory have been utilized to compute the similarity between point clouds. While effective for certain applications, these divergence measures still capture just one aspect of point cloud data, and cannot be used to measure other quantities of a more geometric nature. For more advanced applications, a rigorous mathematical characterization of the space of point cloud data is an essential ingredient of a more comprehensive, robust, and correct (in the sense of being coordinate-invariant and geometrically well-defined) analysis of the types described above, particularly one based on Riemannian geometry. As such, the first contribution of this paper is a Riemannian geometric structure for the space of point cloud data. The key idea behind our approach draws upon the information geometry framework of (Amari & Nagaoka, 2000; Amari, 2016), in which the space of probability density functions is given the structure of a Riemannian manifold (the statistical manifold), with the Fisher information acting as a natural Riemannian metric (the information Riemannian metric, or info-Riemannian metric for short). The connection between point cloud data and information geometry is established by constructing a 1-1 mapping from the space of point cloud data to the space of probability density functions, i.e., a point cloud $X = \{x_1, \dots, x_n \mid x_i \in \mathbb{R}^D\}$ is mapped to a density function $p(x; X)$ on $\mathbb{R}^D$ in a 1-1 fashion, as illustrated in Figure 1. Two case studies involving autoencoders are presented to demonstrate the benefits of our approach. In the first case study, a pre-trained autoencoder is used to encode two 3D point clouds – one representing a cylinder, one a cone – and the minimal geodesic (or path of shortest length) with respect to the info-Riemannian metric is then constructed between these two objects.
The shape evolution obtained for the info-Riemannian metric is seen to be far more natural and intuitive than that obtained for the straight-line interpolant in latent space. In the second case study, we use the info-Riemannian statistical manifold framework to find a set of distortion-minimizing latent space coordinates, in the sense that (Euclidean) straight lines in the latent space closely approximate minimal geodesics on the statistical manifold. Such a set of coordinates offers a more discriminative representation for the data manifold (Chen et al., 2020) that results in, e.g., higher linear SVM classification accuracy vis-à-vis existing state-of-the-art methods. Experiments are carried out with both synthetic and standard benchmark datasets (ShapeNet (Chang et al., 2015), ModelNet (Wu et al., 2015)). 2 STATISTICAL MANIFOLDS AND THE FISHER INFORMATION METRIC . We begin by extending the original definition of a statistical manifold as follows: Definition 1. Given an m-dimensional topological manifold¹ $\Theta$ and a 1-1 map from $\Theta$ to the space of probability density functions, $\theta \mapsto p(x;\theta)$, the image of this mapping, denoted $S := \{p(x;\theta) \mid \theta \in \Theta\}$, is an m-dimensional statistical manifold. In the original definition, $\Theta$ is taken to be an open subset of $\mathbb{R}^m$ (Amari & Nagaoka, 2000; Amari, 2016). By endowing $S$ with a Riemannian metric, $S$ can be given the structure of a Riemannian manifold, allowing lengths, angles, and volumes to be defined on $S$ in a coordinate-invariant manner. The Fisher information metric serves as a natural Riemannian metric on $S$: the elements $g_{ij}$ of the Fisher information metric $G(\theta) \in \mathbb{R}^{m \times m}$ can be expressed as $$g_{ij}(\theta) := \int p(x;\theta)\,\frac{\partial \log p(x;\theta)}{\partial \theta_i}\,\frac{\partial \log p(x;\theta)}{\partial \theta_j}\,dx, \quad i, j = 1, \dots, m, \qquad (1)$$ where $\theta = (\theta_1, \dots, \theta_m)$ are local coordinates on $S$.
Defining infinitesimal length on $S$ by $ds^2 = d\theta^T G(\theta)\, d\theta$, the length of a curve $\theta(t)$, $t \in [0, T]$, on $S$ can then be computed as the integral $\int_0^T ds$. Further details on statistical manifolds and the Fisher information metric can be found in, e.g., (Amari & Nagaoka, 2000; Efron & Hinkley, 1978; Rissanen, 1996; Han & Park, 2014). ¹A topological manifold is a locally Euclidean Hausdorff topological space. 3 STATISTICAL MANIFOLD FRAMEWORK FOR POINT CLOUD DATA . With the above statistical manifold preliminaries, we now construct a Riemannian geometric structure for the space of point cloud data. Section 3.1 defines a statistical manifold from the point cloud data, while Section 3.2 uses the Fisher information metric to construct a Riemannian metric for point cloud data. To keep the definitions and results simple, we shall assume throughout that all point cloud data consist of exactly n distinct points in $\mathbb{R}^D$, i.e., each point cloud is of the form $X = \{x_1, \dots, x_n \mid x_i \in \mathbb{R}^D,\ x_i \neq x_j \text{ if } i \neq j\}$. The set of all point clouds is denoted $\mathcal{X}$. Later we discuss methods for dealing with point clouds that violate our assumptions. The proofs of the propositions in this section are in Appendix B. 3.1 STATISTICAL MANIFOLD OF POINT CLOUD DATA . The core idea for constructing the statistical manifold is to interpret a point cloud $X$ as a set of n samples drawn from some underlying probability density function. Using a kernel density estimator (Parzen, 1962; Davis et al., 2011), a parametric probability density function $p(x; X)$ can be defined in which $X$ itself is the parameter: Definition 2. Given a positive kernel function $K : \mathbb{R}^D \to \mathbb{R}$ such that $\int_{\mathbb{R}^D} K(u)\, du = 1$ and a $D \times D$ symmetric positive-definite matrix $\Sigma$ (the bandwidth matrix), the kernel density estimate $$p(x; X) := \frac{1}{n\sqrt{|\Sigma|}} \sum_{i=1}^{n} K\big(\Sigma^{-1/2}(x - x_i)\big) \qquad (2)$$ is said to be a statistical representation of the point cloud $X \in \mathcal{X}$.
The set of statistical representations is denoted $S := \{p(x; X) \mid X \in \mathcal{X}\}$. To ensure that $S$ is a statistical manifold, recall from Definition 1 that the following two conditions need to be satisfied: (i) $\mathcal{X}$ is a topological manifold; (ii) a 1-1 mapping $h : \mathcal{X} \to S$, $X \mapsto p(x; X)$, must be defined. The first condition can be satisfied with the "distinct points" assumption: Proposition 1 (Corollary 2.2.11 in (Knudsen, 2018)). The set of point clouds in which each point cloud is a set of n distinct points of dimension D is an nD-dimensional topological manifold. To satisfy the second condition, additional assumptions are needed. The following proposition provides a sufficient condition for h to be 1-1: Proposition 2. If the set of functions $\{K(\Sigma^{-1/2}(x - x_i)) \mid x_i \in F\}$ is linearly independent² for any arbitrary finite subset $F \subset \mathbb{R}^D$ with $|F| \leq 2n$, the mapping $h : \mathcal{X} \to S$ is 1-1. By the above proposition, any kernel function that satisfies the linear independence condition is sufficient to ensure the existence of a 1-1 mapping h. For our purposes, the standard and widely used normal kernel function satisfies the linear independence condition. Proposition 3. Under the distinct points assumption, the standard (multivariate) normal kernel function $$K(u) = \frac{1}{\sqrt{(2\pi)^D}} \exp\left(-\frac{u^T u}{2}\right) \qquad (3)$$ with the scaled identity bandwidth matrix, i.e., $\Sigma = \sigma^2 I$, satisfies the linear independence condition in Proposition 2. From Propositions 2 and 3 we have established that, under the distinct points assumption and using the standard normal kernel function, the mapping $h : \mathcal{X} \to S$ is 1-1; $S$ can therefore be given the structure of a statistical manifold. While other choices of kernel function are possible, throughout the remainder of the paper we use the standard normal kernel function. Figure 2 illustrates statistical manifold representations of some example point clouds.
²The linear independence of a set of functions means that only a trivial linear combination of the functions equals the zero function. 3.2 INFORMATION RIEMANNIAN METRIC FOR POINT CLOUD DATA SPACE . We now equip the point cloud statistical manifold $S$ with the Fisher information metric, which we refer to as the info-Riemannian metric and denote by $H$. The first task is to define a local coordinate system on the space of point clouds $\mathcal{X}$. Toward this end, we use the matrix representation $X \in \mathbb{R}^{n \times D}$ of a point cloud. Observe that the matrix representation is not unique: given an $n \times n$ permutation matrix $P \in \mathbb{R}^{n \times n}$, $X$ and $PX$ represent the same point cloud. Fortunately, this does not cause problems, since $p(x; X)$ is defined in a permutation-invariant way, i.e., $p(x; X) = p(x; PX)$ for any $n \times n$ permutation matrix $P$. Throughout, we use italics to denote local coordinate representations, e.g., the point cloud has local coordinates $X \in \mathbb{R}^{n \times D}$, and the tangent vector $V \in T_X\mathcal{X}$ has local coordinates $V \in \mathbb{R}^{n \times D}$. The info-Riemannian metric $H$ can be expressed in local coordinates $X$ as follows: $$H_{ijkl}(X) := \int p(x; X)\,\frac{\partial \log p(x; X)}{\partial X_{ij}}\,\frac{\partial \log p(x; X)}{\partial X_{kl}}\,dx, \qquad (4)$$ for $i, k = 1, \dots, n$ and $j, l = 1, \dots, D$. Given two tangent vectors $V, W \in T_X\mathcal{X}$ with respective matrix representations $V, W \in \mathbb{R}^{n \times D}$, their inner product is then computed as $$\langle V, W \rangle_X := \sum_{i,k=1}^{n} \sum_{j,l=1}^{D} H_{ijkl}(X)\, V_{ij} W_{kl}. \qquad (5)$$ We note that the coordinate expression of the info-Riemannian metric $H_{ijkl}(X)$ results in a permutation-invariant inner product, i.e., $\sum H_{ijkl}(X) V_{ij} W_{kl} = \sum H_{ijkl}(PX) (PV)_{ij} (PW)_{kl}$ for any $n \times n$ permutation matrix $P$, showing that the metric is geometrically well-defined. Using the standard normal kernel function, the coordinate expression of the info-Riemannian metric $H_{ijkl}$ has a simple analytic form, as follows: Proposition 4.
With the standard (multivariate) normal kernel function and the bandwidth parameter $\sigma$, the information Riemannian metric is given by $$H_{ijkl}(X) = \int p(x; X)\,\frac{K\left(\frac{x - x_i}{\sigma}\right) K\left(\frac{x - x_k}{\sigma}\right)}{\left(\sum_{m=1}^{n} K\left(\frac{x - x_m}{\sigma}\right)\right)^2} \left[\frac{(x - x_i)(x - x_k)^T}{\sigma^4}\right]_{jl} dx. \qquad (6)$$ Figure 3 shows that, given two moving point clouds whose velocity matrices have equal Euclidean norm (i.e., $\|V\|^2 = \sum_{i=1}^{n} \sum_{j=1}^{D} V_{ij} V_{ij}$), the velocity norms under the info-Riemannian metric are significantly different: velocity A has a value of 0.2626, while velocity B has a value of $2.2 \times 10^{-8}$. In particular, observe that the tangential velocity in case B, which does not change the overall distribution of the point cloud, has a very small velocity norm under the info-Riemannian metric, as it should. As another illustrative example that highlights the difference between the info-Riemannian metric and the Euclidean metric (i.e., $\langle V, W \rangle := \sum_{i=1}^{n} \sum_{j=1}^{D} V_{ij} W_{ij}$), Figure 4 shows random walks for point cloud data under these metrics. Consider a randomly sampled and normalized velocity matrix under the Euclidean metric: the 3D velocity vectors in the sampled velocity matrix are equally likely to point in any direction. On the other hand, the 3D velocity vectors in a velocity matrix sampled under the info-Riemannian metric are more likely to point in tangential directions (i.e., more like case B than case A in Figure 3). As a result, unlike the standard Euclidean metric (top), the info-Riemannian metric (bottom) produces random walks that stay close to the initial sphere without significant changes in the overall distribution pattern. | The paper studies 3D point cloud data. The core message is that one can regard a point cloud as a collection of samples from an underlying distribution and, hence, one can imagine a manifold of point clouds and construct an associated distance metric via the Fisher information.
That distance can then be used for certain manipulation and analysis tasks, e.g., to interpolate ("morph") between point clouds along the manifold, or to regularise representation learning. A central assumption of the paper is that it is easier to choose a meaningful probability distribution than to make other a-priori assumptions "ad hoc" about the point cloud. However, it is not discussed how to actually choose a suitable distribution; the paper limits itself to a Gaussian kernel density (seemingly also chosen "ad hoc" for mathematical convenience). | SP:f7cdcae5efc2bf722381f2a19c73a53a8c6b1bf1 |
On Heterogeneously Distributed Data, Sparsity Matters | 1 INTRODUCTION Data privacy raises increasingly serious concerns, and governments have enacted legislation to regulate intrusions on mobile users' privacy, e.g., the General Data Protection Regulation (Voigt & Von dem Bussche, 2017). Traditional distributed learning approaches, which require massive amounts of users' data to be collected and transmitted to a central server for model training, may soon no longer be realistic under the increasingly stringent regulations on users' private data. On this ground, federated learning (FL), a distributed training paradigm, has emerged as a successful solution to cope with privacy concerns: it allows multiple clients to perform model training on their local devices without the necessity to exchange data with other entities. In this way, the data privacy leakage problem can be potentially relieved. Despite this promising prospect, several notorious issues afflict the practical performance of FL: • The global model produced by weight averaging (i.e., FedAvg and its non-personalized variants) exhibits unsatisfactory performance in Non-IID data distribution settings. To alleviate this problem, the most popular idea is to integrate personalized features into the global model and produce a dedicated model for each local distribution. However, how to make this integration is an open problem that remains unresolved. Prior works on personalized FL (PFL) zero in on this issue, but the existing methods either demonstrate weak generalization towards different model architectures (Arivazhagan et al., 2019) or require extra computation and storage (Li et al., 2021). • The communication and training overhead is prohibitively high for both FL and PFL.
Clients in FL/PFL responsible for model training are mostly edge devices with limited computation capacity and low bandwidth, and may not be powerful enough to fulfill a modern machine learning task with large deep neural networks. Existing studies (Li et al., 2020; Vahidian et al., 2021) integrate model compression into FL/PFL to save communication and computation overhead. However, both methods embrace the technique of dense-to-sparse training, which still requires a large amount of communication at the beginning of training. In addition, how to effectively aggregate the dynamic sparse models is another challenging problem that remains unresolved. In this work, we propose FedSpa (see Figure 2), which has two key features to counter the above two challenges: (i) FedSpa does not deploy a single global model, but allows each client to own a unique sparse model induced by a personalized mask, which successfully alleviates the Non-IID challenge. (ii) FedSpa lets each client train an evolving sparse model with constant sparsity¹ throughout the whole federated training process, which consistently alleviates the computation overhead of clients. Besides, all the local models in FedSpa are sparse, which requires a smaller amount of communication in each communication round. Theoretically, we conclude that as the extent of Non-IID-ness rises, setting a higher sparsity may result in better convergence of the personalized models of FedSpa. Empirically, in the Non-IID setting, we demonstrate that FedSpa accelerates convergence (respectively 76.2% and 38.1% fewer communication rounds to reach the best accuracy of FedAvg (McMahan et al., 2016) and Ditto (Li et al.
, 2021)), increases the final accuracy (up to 21.9% and 4.4% higher accuracy than FedAvg and Ditto, respectively), reduces the communication overhead (50% fewer parameters communicated than the dense solutions), and lowers the computation (15.3% fewer floating-point operations (FLOPs) than algorithms trained with a fully dense model). To conclude, we summarize our contributions as follows: • We present a novel formulation of the sparse personalized FL (SPFL) problem, which can be applied to various network architectures by enforcing personalized sparse masks on a global model. • We propose a solution, dubbed FedSpa, to solve the SPFL problem. By our novel design, FedSpa reduces the communication and computation overhead of the general FL solution. • Two sparse-to-sparse mask-searching techniques are integrated as plugins of our solution. To adapt to our PFL training context, we modify the DST-based mask-searching technique to enable a warm start of the searching process, which achieves superior performance. • We theoretically present the convergence analysis of the personalized models. Experimental results demonstrate the superiority of FedSpa and also coincide with the theoretical conclusion – with the rise of data heterogeneity, setting a higher sparsity in FedSpa may potentially result in better convergence of its personalized models. 2 RELATED WORKS . Federated learning (FL) (McMahan et al., 2016) is seriously afflicted by the issue of heterogeneously distributed (or Non-IID) data. Personalized FL (PFL), initiated by recent literature (Li et al., 2021; Arivazhagan et al., 2019), has been shown to be effective in countering this issue. In this work, we propose an alternative yet effective way to enhance PFL with personalized sparse models. 2.1 PERSONALIZED FEDERATED LEARNING . We categorize PFL into five genres. Firstly, PFL via layer partition, e.g., FedPer (Arivazhagan et al., 2019), LG-FedAvg (Liang et al.
, 2020), FedRep (Collins et al., 2021), divides the global model layers into shared layers and personalized layers. For the shared layers, weight averaging as in FedAvg is adopted, while the personalized layers are trained only locally and are not exchanged with others. ¹Sparsity specifies the ratio of parameters that are set to 0 (or inactive) in a model. Secondly, PFL via regularization, e.g., Ditto (Li et al., 2021), L2GD (Hanzely & Richtárik, 2020), adds a proximal term on the local model to force the local model to stay close to the global model in the local fine-tuning stage. Similarly, Sarcheshmehpour et al. (2021); SarcheshmehPour et al. (2021) propose a total variation (TV) regularization to form the network lasso (nLasso) problem, and primal-dual methods adapted from (Jung, 2020) are proposed to solve the nLasso problems. Thirdly, PFL via model interpolation, e.g., MAPPER (Mansour et al., 2020), APFL (Deng et al., 2020), achieves personalization by linearly interpolating the weights of the cluster (global) model and the local model to obtain the personalized model. Fourthly, PFL via transfer learning, e.g., FedMD (Li & Wang, 2019), FedSteg (Yang et al., 2020), and FedHealth (Chen et al., 2020), uses either model- and domain-specific local fine-tuning or knowledge distillation to adapt the global model into the personalized model. Finally, PFL via model compression, e.g., LotteryFL (Li et al., 2020) and Sub-FedAvg (Vahidian et al., 2021), achieves personalization by employing principled model compression techniques, such as weight pruning and channel pruning, on the shared global model. 2.2 SPARSE DEEP NEURAL NETWORKS . Methods to sparsify neural networks can be classified into two genres: dense-to-sparse methods and sparse-to-sparse methods. Dense-to-sparse methods train from a dense model and compress the model along the training process.
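A sparse-to-sparse method keeps a binary mask over the weights and periodically updates it while holding the number of active weights fixed. The following numpy sketch of ours illustrates one prune-and-regrow mask update, with magnitude-based dropping and gradient-based regrowth in the spirit of RigL (Evci et al., 2020); it is an illustrative sketch, not any paper's exact implementation:

```python
import numpy as np

def update_mask(weights, grads, mask, k):
    """One prune-and-regrow mask update: drop the k smallest-magnitude
    active weights, regrow the k inactive coordinates with the largest
    gradient magnitude. The number of active weights, and hence the
    sparsity, stays constant."""
    new_mask = mask.copy()
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(~mask)
    drop = active[np.argsort(np.abs(weights[active]))[:k]]    # prune by |w|
    grow = inactive[np.argsort(-np.abs(grads[inactive]))[:k]]  # regrow by |g|
    new_mask[drop] = False
    new_mask[grow] = True
    return new_mask

rng = np.random.default_rng(0)
w = rng.normal(size=20)
g = rng.normal(size=20)
mask = np.zeros(20, dtype=bool)
mask[:10] = True                      # 50% sparsity to start
new_mask = update_mask(w, g, mask, k=2)
print(int(mask.sum()), int(new_mask.sum()))  # 10 10: sparsity preserved
```

Because the mask never densifies, a client communicates only the active coordinates every round, which is the property the paper exploits to cut communication from the very first round.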
Iterative pruning, first proposed by Frankle & Carbin (2018), shows promising performance in dynamically searching for a sparse yet accurate network. Recently, sparse-to-sparse methods have been proposed to pursue training efficiency. Among them, dynamic sparse training (DST) (Bellec et al., 2018; Evci et al., 2020; Liu et al., 2021) is the most successful technique that allows sparse networks, trained from scratch, to match the performance of their dense equivalents. Stemming from the first such work, sparse evolutionary training (Mocanu et al., 2018; Liu et al., 2020), DST has evolved into a class of sparse training methods absorbing many advanced techniques, e.g., weight redistribution (Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019), gradient-based regrowth (Dettmers & Zettlemoyer, 2019; Evci et al., 2020), and extra weight exploration (Jayakumar et al., 2020; Liu et al., 2021). Our work also achieves personalization via model compression. We emphasize that three main advances are made toward SOTA compression-based PFL: (i) We rigorously formulate the sparse personalized FL problem, filling the gap left by prior works. (ii) While prior works either vaguely describe their model aggregation as "aggregating the Lottery Ticket Network via FedAvg" (Li et al., 2020) or "taking the average on the intersection of unpruned parameters in the network" (Vahidian et al., 2021), we explicitly formulate the aggregation as averaging the sparse updates from clients. (iii) Both prominent prior works use the idea of iterative pruning to prune the network from dense to sparse. We instead provide two sparse-to-sparse training alternatives to plug into our solution, which largely reduces the communication cost at the beginning of the training process and exhibits remarkable performance. 3 PROBLEM FORMULATION.
We assume a total of K clients within our FL system, and we consistently use k to index a specific client. First, we give a preliminary introduction to the general FL and PFL problems. General FL problem. Let $w \in \mathbb{R}^d$ be the global weight. General FL takes the formulation below: $$(\mathrm{P1})\quad \min_{w}\ \tilde{f}(w) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\tilde{F}_k(w) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w;(x,y))\big]\Big\},$$ where $\mathcal{D} = \mathcal{D}_1 \cup \cdots \cup \mathcal{D}_K$ is the joint distribution of the K local heterogeneous distributions, data $(x,y)$ is uniformly sampled according to distribution $\mathcal{D}_k$ w.r.t. client k, and $\mathcal{L}(\cdot;\cdot)$ is the loss function. Ultimate PFL problem. Let $\{w_k\}$ be the personalized models. The ultimate PFL problem is defined as $$(\mathrm{P2})\quad \min_{\{w_1,\cdots,w_K\}}\ \hat{f}(w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\hat{F}_k(w_k) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w_k;(x,y))\big]\Big\},$$ according to (Zhang et al., 2020; Hanzely et al., 2021). This problem could be solved separately by each client with no communication. However, if the local data is insufficient, this direct solution can perform poorly, since the local models cannot be boosted by other clients. Regularized PFL problem. Regularized PFL can ease the heterogeneity challenges encountered by general FL while escaping the curse of insufficient samples encountered by the ultimate PFL problem. Inspired by (Chen & Chao, 2021), we formulate the regularized PFL problem as follows: $$(\mathrm{P3})\quad \min_{\{w_1,\cdots,w_K\}}\ \bar{f}(w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\bar{F}_k(w_k) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w_k;(x,y))\big]\Big\} + R(\cdot),$$ where $w_k$ denote the personalized models and $R(\cdot)$ is the regularizer that enables information exchange between clients, making the problem tractable even when the local samples are insufficient. However, it remains controversial how to define the regularizer. Also, the gap between the regularized PFL problem (P3) and the ultimate PFL problem (P2) remains unspecified in existing PFL studies.
In this work, our ultimate goal is to solve problem (P2), which requires information exchange between clients to ensure an effective solution. Below, instead of utilizing a regularizer as in (P3), we alternatively propose a novel sparse PFL (SPFL) problem to reach the same goal. Sparse PFL problem. By introducing personalized masks into FL, we derive the SPFL problem as $$(\mathrm{P4})\quad \min_{w}\ f(w) = \frac{1}{K}\sum_{k=1}^{K}\Big\{F_k(m_k^* \odot w) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(m_k^* \odot w;(x,y))\big]\Big\},$$ where $m_k^* \in \{0,1\}^d$ is a personalized sparse binary mask for the k-th client and $\odot$ denotes the Hadamard product of two given vectors. Our ultimate goal is to find a global model $w$ such that the personalized model for the k-th client can be extracted from the global model by the personalized mask $m_k^*$, i.e., $m_k^* \odot w$. An element of $m_k^*$ being 1 means that the corresponding weight in the global model is active for the k-th personalized model; otherwise, it remains dormant. Thus, information exchange between all personalized models is enforced by a shared global model $w$. Compared with existing PFL algorithms, solving our SPFL problem (P4) does not incur additional computation and storage overhead for clients, since we do not maintain both personalized local models and the global model on clients as in (Li et al., 2021; Mansour et al., 2020). On the contrary, the solution to our problem could potentially lower the communication and computation overhead. Moreover, the SPFL problem (P4) can be applied to most model architectures without model-specific hyper-parameter tuning, since we make neither the model-specific separation of public and personalized layers of (Arivazhagan et al., 2019; Liang et al., 2020; Collins et al., 2021) nor the domain-specific fine-tuning of (Chen et al., 2020; Yang et al., 2020). | This paper proposes an interesting approach to FL with heterogeneous client data.
The main idea is to learn a common weight vector for all clients but allow them to individually mask this global weight vector. **** I appreciate the effort that the authors put into addressing my concerns, which have been partially covered. Thus, I have slightly raised the grade of my recommendation. However, I would ask the area chair to verify whether the authors have addressed all my concerns satisfactorily. In particular, I'm still not sure about the relation between (P4) and regularized PFL (P3), which is quite generic and also allows for regularizers that are indicator functions of sets. | SP:45043ed1b45f59ed76e0e1acc8b4530b7d962aa8 |
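The personalization mechanism discussed in this record, extracting a client's sparse model from the shared global weights via a binary mask, can be sketched in a few lines. This is an illustrative toy: the names (`make_mask`, `global_w`) and the random-mask choice are assumptions for demonstration, not the paper's actual mask-search rule.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10                                  # toy model dimension
global_w = rng.normal(size=d)           # shared global weight vector

def make_mask(d, sparsity, rng):
    """Random binary mask keeping a (1 - sparsity) fraction of weights."""
    n_active = int(round(d * (1.0 - sparsity)))
    mask = np.zeros(d)
    mask[rng.choice(d, size=n_active, replace=False)] = 1.0
    return mask

m_k = make_mask(d, sparsity=0.5, rng=rng)
personalized_w = m_k * global_w         # Hadamard product: masked weights stay dormant

print(int(np.count_nonzero(m_k)))       # 5 active coordinates
```

Coordinates where the mask is 0 are exactly zero in the personalized model, which is what makes both the local computation and the communicated update sparse.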
On Heterogeneously Distributed Data, Sparsity Matters | 1 INTRODUCTION Data privacy raises increasingly intense concerns, and governments have enacted legislation to regulate privacy-intruding behavior toward mobile users, e.g., the General Data Protection Regulation (Voigt & Von dem Bussche, 2017). Traditional distributed learning approaches, which require massive amounts of users' data to be collected and transmitted to a central server for model training, may soon no longer be realistic under the increasingly stringent regulations on users' private data. On this ground, federated learning (FL), a distributed training paradigm, emerges as a successful solution to cope with privacy concerns; it allows multiple clients to perform model training on their local devices without the necessity of exchanging data with other entities. In this way, the data privacy leakage problem could potentially be relieved. Despite the promising prospect, several notorious issues afflict the practical performance of FL: • The global model produced by weight averaging (i.e., FedAvg and its non-personalized variants) exhibits unsatisfactory performance in a Non-IID data distribution setting. To alleviate this problem, the most popular idea is to integrate personalized features into the global model and produce a dedicated model for each local distribution. However, how to make this integration is an open problem that remains unresolved. Prior works on personalized FL (PFL) zero in on this issue, but the existing methods either demonstrate weak generalization toward different model architectures (Arivazhagan et al., 2019) or require extra computation and storage (Li et al., 2021). • The communication and training overhead is prohibitively high for both FL and PFL.
Clients in FL/PFL responsible for model training are mostly edge devices with limited computation capacity and low bandwidth, and may not be powerful enough to fulfill a modern machine learning task with large deep neural networks. Existing studies (Li et al., 2020; Vahidian et al., 2021) integrate model compression into FL/PFL to save communication and computation overhead. However, both methods embrace the technique of dense-to-sparse training, which still requires a large amount of communication at the beginning of training. In addition, how to effectively aggregate the dynamic sparse models is another challenging problem that remains unresolved. In this work, we propose FedSpa (see Figure 2), which has two key features to counter the above two challenges: (i) FedSpa does not deploy a single global model, but allows each client to own its unique sparse model masked by a personalized mask, which successfully alleviates the Non-IID challenge. (ii) FedSpa allows each client to train an evolutionary sparse model with constant sparsity1 throughout the whole federated training process, which consistently alleviates the computation overhead of clients. Besides, all the local models in FedSpa are sparse, which incurs a smaller communication cost in each communication round. Theoretically, we conclude that as the Non-IID extent rises, setting a higher sparsity may result in better convergence of the personalized models of FedSpa. Empirically, in the Non-IID setting, we demonstrate that FedSpa accelerates convergence (respectively 76.2% and 38.1% fewer communication rounds to reach the best accuracy of FedAvg (McMahan et al., 2016) and Ditto (Li et al.
, 2021)), increases the final accuracy (up to 21.9% and 4.4% higher accuracy than FedAvg and Ditto, respectively), reduces the communication overhead (50% fewer parameters communicated than the dense solutions), and lowers the computation (15.3% fewer floating-point operations (FLOPs) than algorithms trained with a fully dense model). To this end, we summarize our contributions as: • We present a novel formulation of the sparse personalized FL (SPFL) problem, which can be applied to various network architectures by enforcing personalized sparse masks on a global model. • We propose a solution dubbed FedSpa to solve the SPFL problem. By our novel design, FedSpa reduces the communication and computation overhead of the general FL solution. • Two sparse-to-sparse mask searching techniques are integrated as plugins of our solution. To adapt to our PFL training context, we modify the DST-based mask searching technique to enable a warm start of the searching process, which achieves superior performance. • We theoretically present the convergence analysis of the personalized models. Experimental results demonstrate the superiority of FedSpa and also coincide with the theoretical conclusion: with the rise of data heterogeneity, setting a higher sparsity in FedSpa may potentially result in better convergence of its personalized models. 2 RELATED WORKS. Federated learning (FL) (McMahan et al., 2016) is seriously afflicted by the issue of heterogeneously distributed (or Non-IID) data. Personalized FL (PFL), initiated by recent literature (Li et al., 2021; Arivazhagan et al., 2019), is shown to be effective in countering this issue. In this work, we propose an alternative yet effective way to enhance PFL with personalized sparse models. 2.1 PERSONALIZED FEDERATED LEARNING. We categorize PFL into five genres. Firstly, PFL via layer partition, e.g., FedPer (Arivazhagan et al., 2019), LG-FedAvg (Liang et al.
, 2020), FedRep (Collins et al., 2021), divides the global model layers into shared layers and personalized layers. For the shared layers, weight averaging as in FedAvg is adopted, while the personalized layers are trained only locally and are not exchanged with others. (Sparsity specifies the ratio of parameters that are set to 0, or inactive, in a model.) Secondly, PFL via regularization, e.g., Ditto (Li et al., 2021) and L2GD (Hanzely & Richtárik, 2020), adds a proximal term to force the local model and the global model to stay close during the local fine-tuning stage. Similarly, Sarcheshmehpour et al. (2021); SarcheshmehPour et al. (2021) propose a total variation (TV) regularization to form the network lasso (nLasso) problem, and primal-dual methods adapted from (Jung, 2020) are proposed to solve the nLasso problems. Thirdly, PFL via model interpolation, e.g., MAPPER (Mansour et al., 2020) and APFL (Deng et al., 2020), achieves personalization by linearly interpolating the weights of the cluster (global) model and the local model to form the personalized model. Fourthly, PFL via transfer learning, e.g., FedMD (Li & Wang, 2019), FedSteg (Yang et al., 2020), and Fedhealth (Chen et al., 2020), uses either model- and domain-specific local fine-tuning or knowledge distillation to adapt the global model into the personalized model. Finally, PFL via model compression, e.g., LotteryFL (Li et al., 2020) and Sub-FedAvg (Vahidian et al., 2021), achieves personalization by employing principled model compression techniques, such as weight pruning and channel pruning, over the shared global model. 2.2 SPARSE DEEP NEURAL NETWORKS. Methods to sparsify neural networks can be classified into two genres: dense-to-sparse methods and sparse-to-sparse methods. Dense-to-sparse methods train from a dense model and compress it along the training process.
Iterative pruning, first proposed by Frankle & Carbin (2018), shows promising performance in dynamically searching for a sparse yet accurate network. Recently, sparse-to-sparse methods have been proposed to pursue training efficiency. Among them, dynamic sparse training (DST) (Bellec et al., 2018; Evci et al., 2020; Liu et al., 2021) is the most successful technique that allows sparse networks, trained from scratch, to match the performance of their dense equivalents. Stemming from the first such work, sparse evolutionary training (Mocanu et al., 2018; Liu et al., 2020), DST has evolved into a class of sparse training methods absorbing many advanced techniques, e.g., weight redistribution (Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019), gradient-based regrowth (Dettmers & Zettlemoyer, 2019; Evci et al., 2020), and extra weight exploration (Jayakumar et al., 2020; Liu et al., 2021). Our work also achieves personalization via model compression. We emphasize that three main advances are made toward SOTA compression-based PFL: (i) We rigorously formulate the sparse personalized FL problem, filling the gap left by prior works. (ii) While prior works either vaguely describe their model aggregation as "aggregating the Lottery Ticket Network via FedAvg" (Li et al., 2020) or "taking the average on the intersection of unpruned parameters in the network" (Vahidian et al., 2021), we explicitly formulate the aggregation as averaging the sparse updates from clients. (iii) Both prominent prior works use the idea of iterative pruning to prune the network from dense to sparse. We instead provide two sparse-to-sparse training alternatives to plug into our solution, which largely reduces the communication cost at the beginning of the training process and exhibits remarkable performance. 3 PROBLEM FORMULATION.
We assume a total of K clients within our FL system, and we consistently use k to index a specific client. First, we give a preliminary introduction to the general FL and PFL problems. General FL problem. Let $w \in \mathbb{R}^d$ be the global weight. General FL takes the formulation below: $$(\mathrm{P1})\quad \min_{w}\ \tilde{f}(w) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\tilde{F}_k(w) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w;(x,y))\big]\Big\},$$ where $\mathcal{D} = \mathcal{D}_1 \cup \cdots \cup \mathcal{D}_K$ is the joint distribution of the K local heterogeneous distributions, data $(x,y)$ is uniformly sampled according to distribution $\mathcal{D}_k$ w.r.t. client k, and $\mathcal{L}(\cdot;\cdot)$ is the loss function. Ultimate PFL problem. Let $\{w_k\}$ be the personalized models. The ultimate PFL problem is defined as $$(\mathrm{P2})\quad \min_{\{w_1,\cdots,w_K\}}\ \hat{f}(w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\hat{F}_k(w_k) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w_k;(x,y))\big]\Big\},$$ according to (Zhang et al., 2020; Hanzely et al., 2021). This problem could be solved separately by each client with no communication. However, if the local data is insufficient, this direct solution can perform poorly, since the local models cannot be boosted by other clients. Regularized PFL problem. Regularized PFL can ease the heterogeneity challenges encountered by general FL while escaping the curse of insufficient samples encountered by the ultimate PFL problem. Inspired by (Chen & Chao, 2021), we formulate the regularized PFL problem as follows: $$(\mathrm{P3})\quad \min_{\{w_1,\cdots,w_K\}}\ \bar{f}(w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\bar{F}_k(w_k) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w_k;(x,y))\big]\Big\} + R(\cdot),$$ where $w_k$ denote the personalized models and $R(\cdot)$ is the regularizer that enables information exchange between clients, making the problem tractable even when the local samples are insufficient. However, it remains controversial how to define the regularizer. Also, the gap between the regularized PFL problem (P3) and the ultimate PFL problem (P2) remains unspecified in existing PFL studies.
In this work, our ultimate goal is to solve problem (P2), which requires information exchange between clients to ensure an effective solution. Below, instead of utilizing a regularizer as in (P3), we alternatively propose a novel sparse PFL (SPFL) problem to reach the same goal. Sparse PFL problem. By introducing personalized masks into FL, we derive the SPFL problem as $$(\mathrm{P4})\quad \min_{w}\ f(w) = \frac{1}{K}\sum_{k=1}^{K}\Big\{F_k(m_k^* \odot w) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(m_k^* \odot w;(x,y))\big]\Big\},$$ where $m_k^* \in \{0,1\}^d$ is a personalized sparse binary mask for the k-th client and $\odot$ denotes the Hadamard product of two given vectors. Our ultimate goal is to find a global model $w$ such that the personalized model for the k-th client can be extracted from the global model by the personalized mask $m_k^*$, i.e., $m_k^* \odot w$. An element of $m_k^*$ being 1 means that the corresponding weight in the global model is active for the k-th personalized model; otherwise, it remains dormant. Thus, information exchange between all personalized models is enforced by a shared global model $w$. Compared with existing PFL algorithms, solving our SPFL problem (P4) does not incur additional computation and storage overhead for clients, since we do not maintain both personalized local models and the global model on clients as in (Li et al., 2021; Mansour et al., 2020). On the contrary, the solution to our problem could potentially lower the communication and computation overhead. Moreover, the SPFL problem (P4) can be applied to most model architectures without model-specific hyper-parameter tuning, since we make neither the model-specific separation of public and personalized layers of (Arivazhagan et al., 2019; Liang et al., 2020; Collins et al., 2021) nor the domain-specific fine-tuning of (Chen et al., 2020; Yang et al., 2020). | This paper studies personalized sparse training for federated learning.
The proposed method, FedSpa, is a novel personalized federated learning scheme that employs personalized sparse masks to customize sparse local models on the edge. The authors provide a theoretical result regarding the error bound and empirical comparisons with several personalized federated learning methods. | SP:45043ed1b45f59ed76e0e1acc8b4530b7d962aa8 |
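The aggregation rule this record highlights, averaging sparse updates from clients into the dense global model, can be illustrated with a toy example. The plain coordinate-wise mean below is an illustrative assumption; the exact weighting used by FedSpa may differ.

```python
import numpy as np

d = 6
global_w = np.zeros(d)

# (mask, sparse_update) pairs; coordinates outside a client's mask are zero.
client_updates = [
    (np.array([1., 1., 0., 0., 1., 0.]), np.array([0.25, -0.5, 0., 0., 0.5, 0.])),
    (np.array([1., 0., 1., 0., 0., 1.]), np.array([0.75, 0., -0.25, 0., 0., 0.125])),
]

# Server-side aggregation: average the sparse updates, then apply to global model.
avg_update = np.mean([u for _, u in client_updates], axis=0)
global_w = global_w + avg_update

# Coordinate 0 is covered by both masks, coordinate 3 by neither.
print(global_w[0], global_w[3])   # 0.5 0.0
```

Note that a coordinate touched by no client's mask receives no update, so only the actively trained parts of the global model move each round.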
On Heterogeneously Distributed Data, Sparsity Matters | 1 INTRODUCTION Data privacy raises increasingly intense concerns, and governments have enacted legislation to regulate privacy-intruding behavior toward mobile users, e.g., the General Data Protection Regulation (Voigt & Von dem Bussche, 2017). Traditional distributed learning approaches, which require massive amounts of users' data to be collected and transmitted to a central server for model training, may soon no longer be realistic under the increasingly stringent regulations on users' private data. On this ground, federated learning (FL), a distributed training paradigm, emerges as a successful solution to cope with privacy concerns; it allows multiple clients to perform model training on their local devices without the necessity of exchanging data with other entities. In this way, the data privacy leakage problem could potentially be relieved. Despite the promising prospect, several notorious issues afflict the practical performance of FL: • The global model produced by weight averaging (i.e., FedAvg and its non-personalized variants) exhibits unsatisfactory performance in a Non-IID data distribution setting. To alleviate this problem, the most popular idea is to integrate personalized features into the global model and produce a dedicated model for each local distribution. However, how to make this integration is an open problem that remains unresolved. Prior works on personalized FL (PFL) zero in on this issue, but the existing methods either demonstrate weak generalization toward different model architectures (Arivazhagan et al., 2019) or require extra computation and storage (Li et al., 2021). • The communication and training overhead is prohibitively high for both FL and PFL.
Clients in FL/PFL responsible for model training are mostly edge devices with limited computation capacity and low bandwidth, and may not be powerful enough to fulfill a modern machine learning task with large deep neural networks. Existing studies (Li et al., 2020; Vahidian et al., 2021) integrate model compression into FL/PFL to save communication and computation overhead. However, both methods embrace the technique of dense-to-sparse training, which still requires a large amount of communication at the beginning of training. In addition, how to effectively aggregate the dynamic sparse models is another challenging problem that remains unresolved. In this work, we propose FedSpa (see Figure 2), which has two key features to counter the above two challenges: (i) FedSpa does not deploy a single global model, but allows each client to own its unique sparse model masked by a personalized mask, which successfully alleviates the Non-IID challenge. (ii) FedSpa allows each client to train an evolutionary sparse model with constant sparsity1 throughout the whole federated training process, which consistently alleviates the computation overhead of clients. Besides, all the local models in FedSpa are sparse, which incurs a smaller communication cost in each communication round. Theoretically, we conclude that as the Non-IID extent rises, setting a higher sparsity may result in better convergence of the personalized models of FedSpa. Empirically, in the Non-IID setting, we demonstrate that FedSpa accelerates convergence (respectively 76.2% and 38.1% fewer communication rounds to reach the best accuracy of FedAvg (McMahan et al., 2016) and Ditto (Li et al.
, 2021)), increases the final accuracy (up to 21.9% and 4.4% higher accuracy than FedAvg and Ditto, respectively), reduces the communication overhead (50% fewer parameters communicated than the dense solutions), and lowers the computation (15.3% fewer floating-point operations (FLOPs) than algorithms trained with a fully dense model). To this end, we summarize our contributions as: • We present a novel formulation of the sparse personalized FL (SPFL) problem, which can be applied to various network architectures by enforcing personalized sparse masks on a global model. • We propose a solution dubbed FedSpa to solve the SPFL problem. By our novel design, FedSpa reduces the communication and computation overhead of the general FL solution. • Two sparse-to-sparse mask searching techniques are integrated as plugins of our solution. To adapt to our PFL training context, we modify the DST-based mask searching technique to enable a warm start of the searching process, which achieves superior performance. • We theoretically present the convergence analysis of the personalized models. Experimental results demonstrate the superiority of FedSpa and also coincide with the theoretical conclusion: with the rise of data heterogeneity, setting a higher sparsity in FedSpa may potentially result in better convergence of its personalized models. 2 RELATED WORKS. Federated learning (FL) (McMahan et al., 2016) is seriously afflicted by the issue of heterogeneously distributed (or Non-IID) data. Personalized FL (PFL), initiated by recent literature (Li et al., 2021; Arivazhagan et al., 2019), is shown to be effective in countering this issue. In this work, we propose an alternative yet effective way to enhance PFL with personalized sparse models. 2.1 PERSONALIZED FEDERATED LEARNING. We categorize PFL into five genres. Firstly, PFL via layer partition, e.g., FedPer (Arivazhagan et al., 2019), LG-FedAvg (Liang et al.
, 2020), FedRep (Collins et al., 2021), divides the global model layers into shared layers and personalized layers. For the shared layers, weight averaging as in FedAvg is adopted, while the personalized layers are trained only locally and are not exchanged with others. (Sparsity specifies the ratio of parameters that are set to 0, or inactive, in a model.) Secondly, PFL via regularization, e.g., Ditto (Li et al., 2021) and L2GD (Hanzely & Richtárik, 2020), adds a proximal term to force the local model and the global model to stay close during the local fine-tuning stage. Similarly, Sarcheshmehpour et al. (2021); SarcheshmehPour et al. (2021) propose a total variation (TV) regularization to form the network lasso (nLasso) problem, and primal-dual methods adapted from (Jung, 2020) are proposed to solve the nLasso problems. Thirdly, PFL via model interpolation, e.g., MAPPER (Mansour et al., 2020) and APFL (Deng et al., 2020), achieves personalization by linearly interpolating the weights of the cluster (global) model and the local model to form the personalized model. Fourthly, PFL via transfer learning, e.g., FedMD (Li & Wang, 2019), FedSteg (Yang et al., 2020), and Fedhealth (Chen et al., 2020), uses either model- and domain-specific local fine-tuning or knowledge distillation to adapt the global model into the personalized model. Finally, PFL via model compression, e.g., LotteryFL (Li et al., 2020) and Sub-FedAvg (Vahidian et al., 2021), achieves personalization by employing principled model compression techniques, such as weight pruning and channel pruning, over the shared global model. 2.2 SPARSE DEEP NEURAL NETWORKS. Methods to sparsify neural networks can be classified into two genres: dense-to-sparse methods and sparse-to-sparse methods. Dense-to-sparse methods train from a dense model and compress it along the training process.
Iterative pruning, first proposed by Frankle & Carbin (2018), shows promising performance in dynamically searching for a sparse yet accurate network. Recently, sparse-to-sparse methods have been proposed to pursue training efficiency. Among them, dynamic sparse training (DST) (Bellec et al., 2018; Evci et al., 2020; Liu et al., 2021) is the most successful technique that allows sparse networks, trained from scratch, to match the performance of their dense equivalents. Stemming from the first such work, sparse evolutionary training (Mocanu et al., 2018; Liu et al., 2020), DST has evolved into a class of sparse training methods absorbing many advanced techniques, e.g., weight redistribution (Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019), gradient-based regrowth (Dettmers & Zettlemoyer, 2019; Evci et al., 2020), and extra weight exploration (Jayakumar et al., 2020; Liu et al., 2021). Our work also achieves personalization via model compression. We emphasize that three main advances are made toward SOTA compression-based PFL: (i) We rigorously formulate the sparse personalized FL problem, filling the gap left by prior works. (ii) While prior works either vaguely describe their model aggregation as "aggregating the Lottery Ticket Network via FedAvg" (Li et al., 2020) or "taking the average on the intersection of unpruned parameters in the network" (Vahidian et al., 2021), we explicitly formulate the aggregation as averaging the sparse updates from clients. (iii) Both prominent prior works use the idea of iterative pruning to prune the network from dense to sparse. We instead provide two sparse-to-sparse training alternatives to plug into our solution, which largely reduces the communication cost at the beginning of the training process and exhibits remarkable performance. 3 PROBLEM FORMULATION.
We assume a total of K clients within our FL system, and we consistently use k to index a specific client. First, we give a preliminary introduction to the general FL and PFL problems. General FL problem. Let $w \in \mathbb{R}^d$ be the global weight. General FL takes the formulation below: $$(\mathrm{P1})\quad \min_{w}\ \tilde{f}(w) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\tilde{F}_k(w) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w;(x,y))\big]\Big\},$$ where $\mathcal{D} = \mathcal{D}_1 \cup \cdots \cup \mathcal{D}_K$ is the joint distribution of the K local heterogeneous distributions, data $(x,y)$ is uniformly sampled according to distribution $\mathcal{D}_k$ w.r.t. client k, and $\mathcal{L}(\cdot;\cdot)$ is the loss function. Ultimate PFL problem. Let $\{w_k\}$ be the personalized models. The ultimate PFL problem is defined as $$(\mathrm{P2})\quad \min_{\{w_1,\cdots,w_K\}}\ \hat{f}(w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\hat{F}_k(w_k) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w_k;(x,y))\big]\Big\},$$ according to (Zhang et al., 2020; Hanzely et al., 2021). This problem could be solved separately by each client with no communication. However, if the local data is insufficient, this direct solution can perform poorly, since the local models cannot be boosted by other clients. Regularized PFL problem. Regularized PFL can ease the heterogeneity challenges encountered by general FL while escaping the curse of insufficient samples encountered by the ultimate PFL problem. Inspired by (Chen & Chao, 2021), we formulate the regularized PFL problem as follows: $$(\mathrm{P3})\quad \min_{\{w_1,\cdots,w_K\}}\ \bar{f}(w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^{K}\Big\{\bar{F}_k(w_k) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(w_k;(x,y))\big]\Big\} + R(\cdot),$$ where $w_k$ denote the personalized models and $R(\cdot)$ is the regularizer that enables information exchange between clients, making the problem tractable even when the local samples are insufficient. However, it remains controversial how to define the regularizer. Also, the gap between the regularized PFL problem (P3) and the ultimate PFL problem (P2) remains unspecified in existing PFL studies.
In this work, our ultimate goal is to solve problem (P2), which requires information exchange between clients to ensure an effective solution. Below, instead of utilizing a regularizer as in (P3), we alternatively propose a novel sparse PFL (SPFL) problem to reach the same goal. Sparse PFL problem. By introducing personalized masks into FL, we derive the SPFL problem as $$(\mathrm{P4})\quad \min_{w}\ f(w) = \frac{1}{K}\sum_{k=1}^{K}\Big\{F_k(m_k^* \odot w) := \mathbb{E}_{(x,y)\sim\mathcal{D}_k}\big[\mathcal{L}(m_k^* \odot w;(x,y))\big]\Big\},$$ where $m_k^* \in \{0,1\}^d$ is a personalized sparse binary mask for the k-th client and $\odot$ denotes the Hadamard product of two given vectors. Our ultimate goal is to find a global model $w$ such that the personalized model for the k-th client can be extracted from the global model by the personalized mask $m_k^*$, i.e., $m_k^* \odot w$. An element of $m_k^*$ being 1 means that the corresponding weight in the global model is active for the k-th personalized model; otherwise, it remains dormant. Thus, information exchange between all personalized models is enforced by a shared global model $w$. Compared with existing PFL algorithms, solving our SPFL problem (P4) does not incur additional computation and storage overhead for clients, since we do not maintain both personalized local models and the global model on clients as in (Li et al., 2021; Mansour et al., 2020). On the contrary, the solution to our problem could potentially lower the communication and computation overhead. Moreover, the SPFL problem (P4) can be applied to most model architectures without model-specific hyper-parameter tuning, since we make neither the model-specific separation of public and personalized layers of (Arivazhagan et al., 2019; Liang et al., 2020; Collins et al., 2021) nor the domain-specific fine-tuning of (Chen et al., 2020; Yang et al., 2020). | This work studies personalized federated learning through sparse local masks. For each client, the local model is a subset of the global model.
The paper proposes and analyzes a new method, named FedSpa, which trains sparse local models for all clients, reducing communication size, computation, and memory cost. | SP:45043ed1b45f59ed76e0e1acc8b4530b7d962aa8
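The core SPFL operation in (P4), extracting a personalized model as the Hadamard product of a binary mask and the shared global weights, can be sketched minimally. All names, the dimensions, and the sparsity level below are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

# Minimal sketch of the SPFL idea in (P4): each client k extracts its
# personalized model from a shared global weight vector w via a binary mask.
rng = np.random.default_rng(0)
d, K = 8, 3                      # model dimension, number of clients (assumed)
w = rng.normal(size=d)           # shared global model
sparsity = 0.5                   # fraction of weights kept per client (assumed)

masks = [(rng.random(d) < sparsity).astype(w.dtype) for _ in range(K)]
personalized = [m * w for m in masks]   # Hadamard product m_k ⊙ w

# A weight is "active" for client k where the mask is 1, dormant where 0.
assert all((p[m == 0] == 0).all() for p, m in zip(personalized, masks))
```

Every client shares the same w, so updating any active weight implicitly exchanges information with every other client whose mask also keeps that weight.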
Grounding Aleatoric Uncertainty in Unsupervised Environment Design | 1 INTRODUCTION . Adaptive curricula , which dynamically adjust the distribution of training environments to optimize the performance of the resulting policy , have played a key role in many recent achievements in deep reinforcement learning ( RL ) . Applications have spanned both single-agent RL ( Portelas et al. , 2020 ; Wang et al. , 2019 ; Zhong et al. , 2020 ; Justesen et al. , 2018 ) , where adaptation occurs over environment variations , and multi-agent RL ( MARL ) , where adaptation can additionally occur over co-players ( Silver et al. , 2016 ; Vinyals et al. , 2019 ; Stooke et al. , 2021 ) . By presenting the agent with challenges at the threshold of its abilities , such methods demonstrably improve the sample efficiency and the generality of the final policy ( Matiisen et al. , 2017 ; Dennis et al. , 2020 ; Jiang et al. , 2021b ; a ) . This work introduces a fundamental problem relevant to adaptive curriculum learning methods for RL , which we call curriculum-induced covariate shift ( CICS ) . Analogous to the covariate shift that occurs in supervised learning ( SL ) , CICS refers to a mismatch between the input distribution at training and test time , and in this case , specifically when the distribution shift is caused by the selective sampling performed by an adaptive curriculum . While there may be cases in which CICS impacts model performance in SL , adaptive curricula for SL have generally not been found to be as impactful as in RL ( Wu et al. , 2021 ) . Therefore , here we focus on addressing this problem specifically as it arises in the RL setting , and leave investigation of its potential impact in SL to future work . To establish precise language around adaptive curricula , we cast our discussion under the lens of Unsupervised Environment Design ( UED , Dennis et al. , 2020 ) . 
UED provides a formal problem description for which curriculum learning is the solution, by defining the Underspecified POMDP (UPOMDP; see Section 2), which expands the classic POMDP with a set of free parameters Θ, representing the dimensions along which the environment may vary across episodes. The goal of UED is then to adapt distributions over Θ so as to maximize some objective, which could be tied to an RL agent's performance over this distribution. This allows us to view adaptive curricula as emerging via a multi-agent game between a teacher that proposes environments with parameters θ ∼ Θ and a student that learns to solve them. In addition to the notational clarity it provides, this formalism lends the analysis of adaptive curricula to useful game-theoretic constructs, such as Nash equilibria (NE, Nash et al., 1950). This game-theoretic view has led to the development of curriculum methods with principled robustness guarantees, such as PAIRED (Dennis et al., 2020) and Prioritized Level Replay (PLR, Jiang et al., 2021a), which showed that curricula optimizing for the student's regret lead to minimax-regret (Savage, 1951) policies at Nash equilibria, implying the agent can solve all solvable environments within the training domain. While other methods can also be cast in this framework, they do not hold the same desirable property at equilibrium. For this reason, we will focus on addressing CICS for regret-maximizing UED, but note that our solution can be used with other UED methods. To see how CICS can be problematic, consider the simplified case of training a self-driving car in simulation, so it learns to take the fastest route from home to office. Suppose traffic data shows that on 70% of days, Route 1 is faster than Route 2. Moreover, on any given day the self-driving car cannot infer which route is faster ahead of time, so always picking Route 1 is faster in expectation.
To support training a policy using an adaptive curriculum, one could build a simulator which sets road conditions per episode based on a random day sampled from the traffic data. However, an adaptive curriculum over the traffic settings may oversample days when Route 1 is closed—perhaps because it finds the agent needs more practice on Route 2—shifting the best choice to Route 2 in training. In fact, methods for minimax-regret UED like PLR would keep shifting the distribution of the fastest route, to maximize the agent's regret. Their curriculum dynamics would map to the zero-sum game of matching pennies, in which one player wins for guessing whether the other chose heads or tails; the NE corresponds to each player randomly playing each option half the time. Randomly picking the route in this way is suboptimal, because in reality Route 1 is faster in expectation. This example is depicted in Figure 1, where the two routes are replaced by an apple and a banana. If, on a given day, the faster route could be identified before having to pick one, the agent could choose optimally. Instead, it is an aleatoric parameter, inducing irreducible uncertainty even in the limit of infinite experiential data (Der Kiureghian & Ditlevsen, 2009). When CICS occurs over such parameters with respect to a ground-truth distribution of environments P(θ), the learned policy can be suboptimal with respect to P. It can therefore be useful to ground—that is, to constrain—the aleatoric parameters Θ′ ⊂ Θ to P(θ′) when it is known or can be learned, as in simulation or from real-world data. However, grounding all θ is undesirable, as it prevents the curriculum from sampling enough opportunities to learn from useful scenarios with low support under P(θ). In this work, we formalize the problem of CICS in RL, and provide a solution by proposing a UED method to find robustly Bayes-optimal policies where θ′ is grounded to P(θ′).
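The suboptimality of the matching-pennies equilibrium in this route example can be checked with a two-line expected-return calculation. The 0/1 payoff (reward 1 for picking the faster route, 0 otherwise) is an illustrative assumption, not a quantity from the paper:

```python
# Route example: Route 1 is faster on 70% of days.
p = 0.7  # P(Route 1 is faster), from the traffic data

# Bayes-optimal policy under the ground-truth distribution: always Route 1.
ev_always_route1 = p * 1 + (1 - p) * 0

# Matching-pennies NE induced by a regret-maximizing curriculum:
# pick each route uniformly at random.
ev_random = 0.5 * p + 0.5 * (1 - p)

assert ev_always_route1 > ev_random  # 0.7 > 0.5
```

The gap (0.7 versus 0.5 here) is exactly the cost of letting the curriculum un-ground the aleatoric parameter.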
Our solution, called Sample-Matched PLR (SAMPLR), extends PLR, a state-of-the-art UED algorithm, by constraining the advantage estimates to match those observed if training under P(θ′). This advantage correction adapts Off-Belief Learning (Hu et al., 2021) from cooperative MARL, revealing an intriguing connection between curriculum biases observed in single- and multi-agent RL. Our experiments in challenging environments based on the NetHack Learning Environment (NLE, Küttler et al., 2020) demonstrate that SAMPLR learns near-optimal policies under CICS, in cases where standard PLR fails. 2 BACKGROUND. 2.1 UNSUPERVISED ENVIRONMENT DESIGN. Unsupervised Environment Design (UED, Dennis et al. (2020)) is the problem of automatically generating an adaptive distribution of environments which will lead to policies that successfully transfer within a target domain. The domain of possible environments is represented by an Underspecified POMDP (UPOMDP), which adds a set of free parameters to the standard definition of a POMDP; a setting of these free parameters defines each concrete instantiation, or level, of the UPOMDP. For instance, these free parameters can be the positions of obstacles in a maze, or friction coefficients in a physics-based task. Formally, a UPOMDP is defined as a tuple M = ⟨A, O, Θ, S, T, I, R, γ⟩, where A is a set of actions, O is a set of observations, Θ is a set of free parameters, S is a set of states, T : S × A × Θ → Δ(S) is a transition function, I : S → O is an observation (or inspection) function, R : S → ℝ is a reward function, and γ is a discount factor. UED typically approaches the curriculum design problem as training a teacher agent that co-evolves an adversarial curriculum for a student agent, for example, by maximizing the student's regret. We will focus on a recent UED algorithm called Robust Prioritized Level Replay (PLR⊥, Jiang et al.
, 2021b), which performs environment design via random search. PLR maintains a buffer of the most useful levels for training, according to some learning-potential score—typically based on a regret approximation, such as the positive value loss—and with probability p, actively samples the next training level from this level buffer instead of the ground-truth training distribution. This selective-sampling mechanism has been demonstrated to greatly improve sample efficiency and generalization in several domains, while provably leading to a minimax-regret policy for the student at NE. In maximizing regret, PLR curricula naturally avoid unsolvable levels, which have no regret. 2.2 OFF-BELIEF LEARNING. In cooperative MARL, self-play promotes the formation of cryptic conventions—arbitrary sequences of actions that allow agents to communicate information about the environment state. These conventions are learned jointly among all agents during training, but are arbitrary and hence indecipherable to independently-trained agents or humans at test time. Crucially, this leads to policies that fail to perform zero-shot coordination (ZSC, Hu et al., 2020), where independently-trained agents must cooperate successfully without additional learning steps, or ad-hoc team play. Off-Belief Learning (OBL) resolves this problem by forcing agents to assume their co-players act according to a fixed, known policy π0 until the current time t, and optimally afterwards, conditioned on this assumption. If π0 plays uniformly at random, this removes the possibility of forming arbitrary conventions. Formally, let G be a decentralized, partially-observable MDP (Dec-POMDP, Bernstein et al., 2002), with state s, joint action a, observation function I^i(s) for each player i, and transition function T(s, a). Let the historical trajectory be τ = (s_1, a_1, ...
a_{t−1}, s_t), and the action-observation history (AOH) for agent i be τ^i = (I^i(s_1), a_1, ..., a_{t−1}, I^i(s_t)). Further, let π0 be an arbitrary policy, such as a uniformly random policy, and B_{π0}(τ | τ^i) = P(τ_t | τ^i_t, π0) a belief model predicting the current state, conditioned on the AOH of agent i and the assumption of co-players playing policy π0 until the current time t, and optimally according to π1 from t onward. OBL aims to find the policy π1 with the optimal, counterfactual value function,

V^{π0→π1}(τ^i) = E_{τ ∼ B_{π0}(τ^i)}[V^{π1}(τ)]. (1)

As the agent conditions its policy on the realized AOH τ^i, while transition dynamics are based on states sampled from B, this mechanism is called a fictitious transition. In Section 5, we show how OBL's fictitious transition can be adapted to the single-agent curriculum learning setting to address CICS, by interpreting the curriculum designer in UED as a co-player. 3 RELATED WORK. The mismatch between training and testing distributions of input features is referred to as covariate shift, and has long served as a fundamental problem for the machine learning community. Covariate shifts have been extensively studied in supervised learning (Vapnik & Chervonenkis, 1971; Huang et al., 2006; Bickel et al., 2009; Arjovsky et al., 2019). In RL, prior works have largely focused on covariate shifts due to training on off-policy data (Sutton et al., 2016; Rowland et al., 2020; Espeholt et al., 2018; Hallak & Mannor, 2017; Gelada & Bellemare, 2019; Thomas & Brunskill, 2016), including the important case of learning from demonstrations (Pomerleau, 1988; Ross & Bagnell, 2010). Recent work also aims to learn invariant representations robust to covariate shifts (Zhang et al., 2019; 2021). More generally, CICS can be interpreted as a kind of sample-selection bias (Heckman, 1979).
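Eq. (1) is an expectation over trajectories drawn from the belief model, so it can be estimated by plain Monte Carlo. The sketch below is illustrative only, with `belief_sampler` and `value_fn` as hypothetical stand-ins for B_{π0} and V^{π1}:

```python
def obl_value(belief_sampler, value_fn, aoh, n_samples=1000):
    """Monte Carlo estimate of Eq. (1): average V^{pi1}(tau) over
    trajectories tau sampled from the belief B_{pi0}(. | aoh).
    belief_sampler and value_fn are assumed, hypothetical callables."""
    total = 0.0
    for _ in range(n_samples):
        tau = belief_sampler(aoh)   # full trajectory consistent with the AOH
        total += value_fn(tau)      # counterfactual value of that trajectory
    return total / n_samples
```

The key point the code makes concrete is the fictitious transition: the value is computed on trajectories resampled from the belief, not on the trajectory the agent actually experienced.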
We believe this work to be the first to formalize and provide a solution to the problem of covariate shifts in reinforcement learning due to curriculum learning . Our method fixes a critical flaw that can cause curricula to fail under CICS—an important problem as curricula have been shown to be essential for training RL agents across many of the most challenging domains , including combinatorial gridworlds ( Zhong et al. , 2020 ) , Go ( Silver et al. , 2016 ) , StarCraft 2 ( Vinyals et al. , 2019 ) , and achieving comprehensive task mastery in open-ended environments ( Stooke et al. , 2021 ) . While this work focuses on PLR , other recent methods include minimax adversarial curricula ( Wang et al. , 2019 ; 2020 ) and curricula based on changes in return ( Matiisen et al. , 2017 ; Portelas et al. , 2020 ) . Most similar to our work , OFFER ( Ciosek & Whiteson , 2017 ) adapts a curriculum over transition functions and uses importance sampling to correct for biased gradient estimates . Unlike this work , Ciosek & Whiteson ( 2017 ) requires whitebox access to the transition function and does not directly study the impact of CICS on the learning dynamics . Curriculum methods have also been studied in goal-conditioned RL ( Florensa et al. , 2018 ; Campero et al. , 2021 ; Sukhbaatar et al. , 2018 ; OpenAI et al. , 2021 ) , though CICS does not occur here as goals are observed by the agent . Lastly , domain randomization ( DR , Sadeghi & Levine , 2017 ; Peng et al. , 2017 ) can be seen as a degenerate form of UED , though curriculum-based extensions of DR have also been studied ( Jakobi , 1997 ; Tobin et al. , 2017 ) . Prior work has also investigated methods for learning Bayes optimal policies under uncertainty about the task ( Zintgraf et al. , 2020 ; Osband et al. , 2013 ) , based on the framework of Bayes-adaptive MDPs ( BAMDPs ) ( Bellman , 1956 ; Duff , 2002 ) . 
In this setting, the agent can adapt to an unknown MDP over several episodes by acting to reduce its uncertainty about the identity of the MDP. In contrast, SAMPLR learns a robustly Bayes-optimal policy for the case of zero-shot transfer. Further unlike these works, our setting assumes the distribution of certain aleatoric parameters is biased during training, which would lead to biased a posteriori uncertainty estimates with respect to the ground-truth distribution when optimizing for the BAMDP objective. Instead, SAMPLR proposes a means to correct for this bias assuming knowledge of the true environment parameters for each level, to which we can safely assume access in curriculum learning. | This work proposes a solution to the problem of covariate shift that appears in adaptive curriculum learning, where the distribution of parameters of the environment at test time is different from the one at training time. This is caused in algorithms like PAIRED or PLR because the adaptive curriculum learning algorithm biases towards regret-maximizing environments and as such trains the learner on a biased distribution instead of the ground-truth one. The authors propose a solution to the problem under the assumption that the designer knows the ground-truth distribution of the parameters of the environment. The solution relies on using fictitious transitions to compute the value function, using Off-Belief Learning — an algorithm for cooperative MARL. | SP:fbad9a35ec270021a2d7f1c1a6fbcc2fe6fcea0a
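The selective-sampling step of PLR described in Section 2.1 (replay from a score-prioritized buffer with probability p, otherwise draw a fresh level from the base distribution) can be sketched as follows. Function and variable names are assumed for illustration, not taken from the PLR codebase:

```python
import random

def sample_level(buffer, scores, sample_new_level, p=0.5):
    """Illustrative sketch of PLR's selective sampling: with probability p,
    replay a buffered level weighted by its learning-potential score
    (e.g. a regret estimate); otherwise sample from the base distribution."""
    if buffer and random.random() < p:
        return random.choices(buffer, weights=scores, k=1)[0]
    return sample_new_level()
```

This is precisely the mechanism that induces CICS: the effective training distribution is a p-weighted mixture of the prioritized buffer and the ground-truth distribution, so parameters correlated with high regret get oversampled.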
Grounding Aleatoric Uncertainty in Unsupervised Environment Design | 1 INTRODUCTION . Adaptive curricula , which dynamically adjust the distribution of training environments to optimize the performance of the resulting policy , have played a key role in many recent achievements in deep reinforcement learning ( RL ) . Applications have spanned both single-agent RL ( Portelas et al. , 2020 ; Wang et al. , 2019 ; Zhong et al. , 2020 ; Justesen et al. , 2018 ) , where adaptation occurs over environment variations , and multi-agent RL ( MARL ) , where adaptation can additionally occur over co-players ( Silver et al. , 2016 ; Vinyals et al. , 2019 ; Stooke et al. , 2021 ) . By presenting the agent with challenges at the threshold of its abilities , such methods demonstrably improve the sample efficiency and the generality of the final policy ( Matiisen et al. , 2017 ; Dennis et al. , 2020 ; Jiang et al. , 2021b ; a ) . This work introduces a fundamental problem relevant to adaptive curriculum learning methods for RL , which we call curriculum-induced covariate shift ( CICS ) . Analogous to the covariate shift that occurs in supervised learning ( SL ) , CICS refers to a mismatch between the input distribution at training and test time , and in this case , specifically when the distribution shift is caused by the selective sampling performed by an adaptive curriculum . While there may be cases in which CICS impacts model performance in SL , adaptive curricula for SL have generally not been found to be as impactful as in RL ( Wu et al. , 2021 ) . Therefore , here we focus on addressing this problem specifically as it arises in the RL setting , and leave investigation of its potential impact in SL to future work . To establish precise language around adaptive curricula , we cast our discussion under the lens of Unsupervised Environment Design ( UED , Dennis et al. , 2020 ) . 
UED provides a formal problem description for which curriculum learning is the solution, by defining the Underspecified POMDP (UPOMDP; see Section 2), which expands the classic POMDP with a set of free parameters Θ, representing the dimensions along which the environment may vary across episodes. The goal of UED is then to adapt distributions over Θ so as to maximize some objective, which could be tied to an RL agent's performance over this distribution. This allows us to view adaptive curricula as emerging via a multi-agent game between a teacher that proposes environments with parameters θ ∼ Θ and a student that learns to solve them. In addition to the notational clarity it provides, this formalism lends the analysis of adaptive curricula to useful game-theoretic constructs, such as Nash equilibria (NE, Nash et al., 1950). This game-theoretic view has led to the development of curriculum methods with principled robustness guarantees, such as PAIRED (Dennis et al., 2020) and Prioritized Level Replay (PLR, Jiang et al., 2021a), which showed that curricula optimizing for the student's regret lead to minimax-regret (Savage, 1951) policies at Nash equilibria, implying the agent can solve all solvable environments within the training domain. While other methods can also be cast in this framework, they do not hold the same desirable property at equilibrium. For this reason, we will focus on addressing CICS for regret-maximizing UED, but note that our solution can be used with other UED methods. To see how CICS can be problematic, consider the simplified case of training a self-driving car in simulation, so it learns to take the fastest route from home to office. Suppose traffic data shows that on 70% of days, Route 1 is faster than Route 2. Moreover, on any given day the self-driving car cannot infer which route is faster ahead of time, so always picking Route 1 is faster in expectation.
To support training a policy using an adaptive curriculum, one could build a simulator which sets road conditions per episode based on a random day sampled from the traffic data. However, an adaptive curriculum over the traffic settings may oversample days when Route 1 is closed—perhaps because it finds the agent needs more practice on Route 2—shifting the best choice to Route 2 in training. In fact, methods for minimax-regret UED like PLR would keep shifting the distribution of the fastest route, to maximize the agent's regret. Their curriculum dynamics would map to the zero-sum game of matching pennies, in which one player wins for guessing whether the other chose heads or tails; the NE corresponds to each player randomly playing each option half the time. Randomly picking the route in this way is suboptimal, because in reality Route 1 is faster in expectation. This example is depicted in Figure 1, where the two routes are replaced by an apple and a banana. If, on a given day, the faster route could be identified before having to pick one, the agent could choose optimally. Instead, it is an aleatoric parameter, inducing irreducible uncertainty even in the limit of infinite experiential data (Der Kiureghian & Ditlevsen, 2009). When CICS occurs over such parameters with respect to a ground-truth distribution of environments P(θ), the learned policy can be suboptimal with respect to P. It can therefore be useful to ground—that is, to constrain—the aleatoric parameters Θ′ ⊂ Θ to P(θ′) when it is known or can be learned, as in simulation or from real-world data. However, grounding all θ is undesirable, as it prevents the curriculum from sampling enough opportunities to learn from useful scenarios with low support under P(θ). In this work, we formalize the problem of CICS in RL, and provide a solution by proposing a UED method to find robustly Bayes-optimal policies where θ′ is grounded to P(θ′).
Our solution, called Sample-Matched PLR (SAMPLR), extends PLR, a state-of-the-art UED algorithm, by constraining the advantage estimates to match those observed if training under P(θ′). This advantage correction adapts Off-Belief Learning (Hu et al., 2021) from cooperative MARL, revealing an intriguing connection between curriculum biases observed in single- and multi-agent RL. Our experiments in challenging environments based on the NetHack Learning Environment (NLE, Küttler et al., 2020) demonstrate that SAMPLR learns near-optimal policies under CICS, in cases where standard PLR fails. 2 BACKGROUND. 2.1 UNSUPERVISED ENVIRONMENT DESIGN. Unsupervised Environment Design (UED, Dennis et al. (2020)) is the problem of automatically generating an adaptive distribution of environments which will lead to policies that successfully transfer within a target domain. The domain of possible environments is represented by an Underspecified POMDP (UPOMDP), which adds a set of free parameters to the standard definition of a POMDP; a setting of these free parameters defines each concrete instantiation, or level, of the UPOMDP. For instance, these free parameters can be the positions of obstacles in a maze, or friction coefficients in a physics-based task. Formally, a UPOMDP is defined as a tuple M = ⟨A, O, Θ, S, T, I, R, γ⟩, where A is a set of actions, O is a set of observations, Θ is a set of free parameters, S is a set of states, T : S × A × Θ → Δ(S) is a transition function, I : S → O is an observation (or inspection) function, R : S → ℝ is a reward function, and γ is a discount factor. UED typically approaches the curriculum design problem as training a teacher agent that co-evolves an adversarial curriculum for a student agent, for example, by maximizing the student's regret. We will focus on a recent UED algorithm called Robust Prioritized Level Replay (PLR⊥, Jiang et al.
, 2021b), which performs environment design via random search. PLR maintains a buffer of the most useful levels for training, according to some learning-potential score—typically based on a regret approximation, such as the positive value loss—and with probability p, actively samples the next training level from this level buffer instead of the ground-truth training distribution. This selective-sampling mechanism has been demonstrated to greatly improve sample efficiency and generalization in several domains, while provably leading to a minimax-regret policy for the student at NE. In maximizing regret, PLR curricula naturally avoid unsolvable levels, which have no regret. 2.2 OFF-BELIEF LEARNING. In cooperative MARL, self-play promotes the formation of cryptic conventions—arbitrary sequences of actions that allow agents to communicate information about the environment state. These conventions are learned jointly among all agents during training, but are arbitrary and hence indecipherable to independently-trained agents or humans at test time. Crucially, this leads to policies that fail to perform zero-shot coordination (ZSC, Hu et al., 2020), where independently-trained agents must cooperate successfully without additional learning steps, or ad-hoc team play. Off-Belief Learning (OBL) resolves this problem by forcing agents to assume their co-players act according to a fixed, known policy π0 until the current time t, and optimally afterwards, conditioned on this assumption. If π0 plays uniformly at random, this removes the possibility of forming arbitrary conventions. Formally, let G be a decentralized, partially-observable MDP (Dec-POMDP, Bernstein et al., 2002), with state s, joint action a, observation function I^i(s) for each player i, and transition function T(s, a). Let the historical trajectory be τ = (s_1, a_1, ...
a_{t−1}, s_t), and the action-observation history (AOH) for agent i be τ^i = (I^i(s_1), a_1, ..., a_{t−1}, I^i(s_t)). Further, let π0 be an arbitrary policy, such as a uniformly random policy, and B_{π0}(τ | τ^i) = P(τ_t | τ^i_t, π0) a belief model predicting the current state, conditioned on the AOH of agent i and the assumption of co-players playing policy π0 until the current time t, and optimally according to π1 from t onward. OBL aims to find the policy π1 with the optimal, counterfactual value function,

V^{π0→π1}(τ^i) = E_{τ ∼ B_{π0}(τ^i)}[V^{π1}(τ)]. (1)

As the agent conditions its policy on the realized AOH τ^i, while transition dynamics are based on states sampled from B, this mechanism is called a fictitious transition. In Section 5, we show how OBL's fictitious transition can be adapted to the single-agent curriculum learning setting to address CICS, by interpreting the curriculum designer in UED as a co-player. 3 RELATED WORK. The mismatch between training and testing distributions of input features is referred to as covariate shift, and has long served as a fundamental problem for the machine learning community. Covariate shifts have been extensively studied in supervised learning (Vapnik & Chervonenkis, 1971; Huang et al., 2006; Bickel et al., 2009; Arjovsky et al., 2019). In RL, prior works have largely focused on covariate shifts due to training on off-policy data (Sutton et al., 2016; Rowland et al., 2020; Espeholt et al., 2018; Hallak & Mannor, 2017; Gelada & Bellemare, 2019; Thomas & Brunskill, 2016), including the important case of learning from demonstrations (Pomerleau, 1988; Ross & Bagnell, 2010). Recent work also aims to learn invariant representations robust to covariate shifts (Zhang et al., 2019; 2021). More generally, CICS can be interpreted as a kind of sample-selection bias (Heckman, 1979).
We believe this work to be the first to formalize and provide a solution to the problem of covariate shifts in reinforcement learning due to curriculum learning . Our method fixes a critical flaw that can cause curricula to fail under CICS—an important problem as curricula have been shown to be essential for training RL agents across many of the most challenging domains , including combinatorial gridworlds ( Zhong et al. , 2020 ) , Go ( Silver et al. , 2016 ) , StarCraft 2 ( Vinyals et al. , 2019 ) , and achieving comprehensive task mastery in open-ended environments ( Stooke et al. , 2021 ) . While this work focuses on PLR , other recent methods include minimax adversarial curricula ( Wang et al. , 2019 ; 2020 ) and curricula based on changes in return ( Matiisen et al. , 2017 ; Portelas et al. , 2020 ) . Most similar to our work , OFFER ( Ciosek & Whiteson , 2017 ) adapts a curriculum over transition functions and uses importance sampling to correct for biased gradient estimates . Unlike this work , Ciosek & Whiteson ( 2017 ) requires whitebox access to the transition function and does not directly study the impact of CICS on the learning dynamics . Curriculum methods have also been studied in goal-conditioned RL ( Florensa et al. , 2018 ; Campero et al. , 2021 ; Sukhbaatar et al. , 2018 ; OpenAI et al. , 2021 ) , though CICS does not occur here as goals are observed by the agent . Lastly , domain randomization ( DR , Sadeghi & Levine , 2017 ; Peng et al. , 2017 ) can be seen as a degenerate form of UED , though curriculum-based extensions of DR have also been studied ( Jakobi , 1997 ; Tobin et al. , 2017 ) . Prior work has also investigated methods for learning Bayes optimal policies under uncertainty about the task ( Zintgraf et al. , 2020 ; Osband et al. , 2013 ) , based on the framework of Bayes-adaptive MDPs ( BAMDPs ) ( Bellman , 1956 ; Duff , 2002 ) . 
In this setting, the agent can adapt to an unknown MDP over several episodes by acting to reduce its uncertainty about the identity of the MDP. In contrast, SAMPLR learns a robustly Bayes-optimal policy for the case of zero-shot transfer. Further unlike these works, our setting assumes the distribution of certain aleatoric parameters is biased during training, which would lead to biased a posteriori uncertainty estimates with respect to the ground-truth distribution when optimizing for the BAMDP objective. Instead, SAMPLR proposes a means to correct for this bias assuming knowledge of the true environment parameters for each level, to which we can safely assume access in curriculum learning. | This paper aims at generating curricula for training an RL agent to solve tasks sampled from an unknown distribution. Based on Prioritized Level Replay and Off-Belief Learning, an algorithm is proposed to train the policy using trajectories sampled from a learned belief model. The proposed method is evaluated in two goal-reaching tasks with discrete state and action spaces. | SP:fbad9a35ec270021a2d7f1c1a6fbcc2fe6fcea0a
Grounding Aleatoric Uncertainty in Unsupervised Environment Design | 1 INTRODUCTION . Adaptive curricula , which dynamically adjust the distribution of training environments to optimize the performance of the resulting policy , have played a key role in many recent achievements in deep reinforcement learning ( RL ) . Applications have spanned both single-agent RL ( Portelas et al. , 2020 ; Wang et al. , 2019 ; Zhong et al. , 2020 ; Justesen et al. , 2018 ) , where adaptation occurs over environment variations , and multi-agent RL ( MARL ) , where adaptation can additionally occur over co-players ( Silver et al. , 2016 ; Vinyals et al. , 2019 ; Stooke et al. , 2021 ) . By presenting the agent with challenges at the threshold of its abilities , such methods demonstrably improve the sample efficiency and the generality of the final policy ( Matiisen et al. , 2017 ; Dennis et al. , 2020 ; Jiang et al. , 2021b ; a ) . This work introduces a fundamental problem relevant to adaptive curriculum learning methods for RL , which we call curriculum-induced covariate shift ( CICS ) . Analogous to the covariate shift that occurs in supervised learning ( SL ) , CICS refers to a mismatch between the input distribution at training and test time , and in this case , specifically when the distribution shift is caused by the selective sampling performed by an adaptive curriculum . While there may be cases in which CICS impacts model performance in SL , adaptive curricula for SL have generally not been found to be as impactful as in RL ( Wu et al. , 2021 ) . Therefore , here we focus on addressing this problem specifically as it arises in the RL setting , and leave investigation of its potential impact in SL to future work . To establish precise language around adaptive curricula , we cast our discussion under the lens of Unsupervised Environment Design ( UED , Dennis et al. , 2020 ) . 
UED provides a formal problem description for which curriculum learning is the solution , by defining the Underspecified POMDP ( UPOMDP ; see Section 2 ) , which expands the classic POMDP with a set of free parameters Θ , representing the dimensions along which the environment may vary across episodes . The goal of UED is then to adapt distributions over Θ , so to maximize some objective , which could be tied to an RL agent ’ s performance over this distribution . This allows us to view adaptive curricula as emerging via a multi-agent game between a teacher that proposes environments with parameters θ ∼ Θ and a student that learns to solve them . In addition to the notational clarity it provides , this formalism lends the analysis of adaptive curricula to useful game theoretic constructs , such as Nash equilibria ( NE , Nash et al. , 1950 ) . This game-theoretic view has led to the development of curriculum methods with principled robustness guarantees , such as PAIRED ( Dennis et al. , 2020 ) and Prioritized Level Replay ( PLR , Jiang et al. , 2021a ) , which showed that curricula optimizing for the student ’ s regret lead to minimax regret ( Savage , 1951 ) policies at Nash equilibria , implying the agent can solve all solvable environments within the training domain . While other methods can also be cast in this framework , they do not hold the same desirable property at equilibrium . For this reason , we will focus on addressing CICS for regret-maximizing UED , but note that our solution can be used with other UED methods . To see how the CICS can be problematic , consider the simplified case of training a self-driving car in simulation , so it learns to take the fastest route from home to office . Suppose traffic data shows on 70 % of the days , Route 1 is faster than Route 2 . Moreover , on any given day the self-driving car can not infer which route is faster ahead of time , so always picking Route 1 is faster in expectation . 
To support training a policy using an adaptive curriculum , one could build a simulator which sets road conditions per episode based on a random day sampled from the traffic data . However , an adaptive curriculum over the traffic settings may oversample days when Route 1 is closed—perhaps because it finds the agent needs more practice on Route 2—shifting the best choice to Route 2 in training . In fact , methods for minimax regret UED like PLR would keep shifting the distribution of the fastest route , to maximize the agent's regret . Their curriculum dynamics would map to the zero-sum game of matching pennies , in which one player wins by guessing whether the other chose heads or tails ; the NE corresponds to each player randomly playing each option half the time . Randomly picking the route in this way is suboptimal , because in reality Route 1 is faster in expectation . This example is depicted in Figure 1 , where the two routes are replaced by an apple and a banana . If , on a given day , the faster route could be identified before having to pick one , the agent could choose optimally . Instead , it is an aleatoric parameter , inducing irreducible uncertainty in the limit of infinite experiential data ( Der Kiureghian & Ditlevsen , 2009 ) . When CICS occurs over such parameters , with respect to a ground-truth distribution of environments P ( θ ) , the learned policy can be suboptimal with respect to P . It can therefore be useful to ground—that is , to constrain—the aleatoric parameters Θ′ ⊂ Θ to P ( θ′ ) when it is known or can be learned , as in simulation or from real-world data . However , grounding all θ is undesirable , as it prevents the curriculum from sampling enough opportunities to learn from useful scenarios with low support under P ( θ ) . In this work , we formalize the problem of CICS in RL , and provide a solution by proposing a UED method to find robustly Bayes-optimal policies where θ′ is grounded to P ( θ′ ) .
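The gap between the grounded optimum and the matching-pennies equilibrium the curriculum induces can be checked with two lines of arithmetic (the 70/30 split is from the example above):

```python
# Ground-truth traffic statistics from the running example:
# on 70 % of days, Route 1 is the faster route.
p_route1_faster = 0.7

# Policy A: always pick Route 1 -> on the fastest route with probability 0.7.
p_fastest_always_r1 = p_route1_faster

# Policy B: the matching-pennies Nash equilibrium -> pick each route
# uniformly at random, so the fastest route is chosen only half the time,
# regardless of which day it is.
p_fastest_uniform = 0.5 * p_route1_faster + 0.5 * (1.0 - p_route1_faster)

print(p_fastest_always_r1)  # 0.7
print(p_fastest_uniform)    # 0.5
```

The randomizing policy is worse in expectation under the ground-truth distribution, which is exactly the suboptimality the text attributes to unconstrained regret maximization over an aleatoric parameter.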
Our solution , called Sample-Matched PLR ( SAMPLR ) , extends PLR , a state-of-the-art UED algorithm , by constraining the advantage estimates to match those observed if training under P ( θ′ ) . This advantage correction adapts Off-Belief Learning ( Hu et al. , 2021 ) from cooperative MARL , revealing an intriguing connection between curriculum biases observed in single and multi-agent RL . Our experiments in challenging environments based on the NetHack Learning Environment ( NLE , Küttler et al. , 2020 ) demonstrate that SAMPLR learns near-optimal policies under CICS , in cases where standard PLR fails . 2 BACKGROUND . 2.1 UNSUPERVISED ENVIRONMENT DESIGN . The problem of Unsupervised Environment Design ( UED , Dennis et al. , 2020 ) is that of automatically generating an adaptive distribution of environments which will lead to policies that successfully transfer within a target domain . The domain of possible environments is represented by an Underspecified POMDP ( UPOMDP ) , which adds a set of free parameters to the standard definition of a POMDP ; each setting of these parameters yields a concrete instantiation , or level , of the UPOMDP . For instance , these free parameters can be the position of obstacles in a maze , or friction coefficients in a physics-based task . Formally , a UPOMDP is defined as a tuple M = 〈A , O , Θ , S , T , I , R , γ〉 , where A is a set of actions , O is a set of observations , Θ is a set of free parameters , S is a set of states , T : S × A × Θ → ∆ ( S ) is a transition function , I : S → O is an observation ( or inspection ) function , R : S → R is a reward function , and γ is a discount factor .
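The UPOMDP tuple just defined can be mirrored by a simple container; the field names and types below are our illustration (the paper defines the components abstractly, with T mapping to a distribution over next states):

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence


@dataclass
class UPOMDP:
    """Container mirroring the tuple M = <A, O, Theta, S, T, I, R, gamma>."""
    actions: Sequence[Any]                      # A
    observations: Sequence[Any]                 # O
    free_params: Sequence[Any]                  # Theta: dimensions of variation
    states: Sequence[Any]                       # S
    transition: Callable[[Any, Any, Any], Any]  # T : S x A x Theta -> Delta(S)
    observe: Callable[[Any], Any]               # I : S -> O
    reward: Callable[[Any], float]              # R : S -> R
    gamma: float                                # discount factor


def make_level(m: UPOMDP, theta: Any) -> Callable[[Any, Any], Any]:
    """Fixing a value of the free parameters yields one concrete level:
    the transition function closes over theta; everything else is shared."""
    return lambda s, a: m.transition(s, a, theta)
```

This makes the "level" notion concrete: a level is the POMDP obtained by pinning Θ to a particular value.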
, 2021b ) , which performs environment design via random search . PLR maintains a buffer of the most useful levels for training , according to some learning potential score—typically based on a regret approximation , such as the positive value loss—and with probability p , actively samples the next training level from this level buffer instead of the ground-truth training distribution . This selective-sampling mechanism has been demonstrated to greatly improve sample efficiency and generalization in several domains , while provably leading to a minimax regret policy for the student at NE . In maximizing regret , PLR curricula naturally avoid unsolvable levels , which have no regret . 2.2 OFF-BELIEF LEARNING . In cooperative MARL , self-play promotes the formation of cryptic conventions—arbitrary sequences of actions that allow agents to communicate information about the environment state . These conventions are learned jointly among all agents during training , but are arbitrary and hence indecipherable to independently-trained agents or humans at test time . Crucially , this leads to policies that fail to perform zero-shot coordination ( ZSC , Hu et al. , 2020 ) , where independently-trained agents must cooperate successfully without additional learning steps , or ad-hoc team play . Off-Belief Learning ( OBL ) resolves this problem by forcing agents to assume their co-players act according to a fixed , known policy π0 until the current time t , and optimally afterwards , conditioned on this assumption . If π0 plays uniformly at random , this removes the possibility of forming arbitrary conventions . Formally , let G be a decentralized , partially-observable MDP ( Dec-POMDP , Bernstein et al. , 2002 ) , with state s , joint action a , observation function I^i ( s ) for each player i , and transition function T ( s , a ) . Let the historical trajectory τ = ( s_1 , a_1 , ...
a_{t−1} , s_t ) , and the action-observation history ( AOH ) for agent i be τ^i = ( I^i ( s_1 ) , a_1 , ... , a_{t−1} , I^i ( s_t ) ) . Further , let π0 be an arbitrary policy , such as a uniformly random policy , and let B^{π0} ( τ | τ^i ) = P ( τ_t | τ^i_t , π0 ) be a belief model predicting the current state , conditioned on the AOH of agent i and the assumption of co-players playing policy π0 until the current time t , and optimally according to π1 from t and beyond . OBL aims to find the policy π1 with the optimal , counter-factual value function , V^{π0→π1} ( τ^i ) = E_{τ ∼ B^{π0} ( τ^i )} [ V^{π1} ( τ ) ] . ( 1 ) As the agent conditions its policy on the realized AOH τ^i , while transition dynamics are based on states sampled from B , this mechanism is called a fictitious transition . In Section 5 , we show how OBL's fictitious transition can be adapted to the single-agent curriculum learning setting to address CICS , by interpreting the curriculum designer in UED as a co-player . 3 RELATED WORK . The mismatch between training and testing distributions of input features is referred to as covariate shift , and has long been a fundamental problem for the machine learning community . Covariate shifts have been extensively studied in supervised learning ( Vapnik & Chervonenkis , 1971 ; Huang et al. , 2006 ; Bickel et al. , 2009 ; Arjovsky et al. , 2019 ) . In RL , prior works have largely focused on covariate shifts due to training on off-policy data ( Sutton et al. , 2016 ; Rowland et al. , 2020 ; Espeholt et al. , 2018 ; Hallak & Mannor , 2017 ; Gelada & Bellemare , 2019 ; Thomas & Brunskill , 2016 ) , including the important case of learning from demonstrations ( Pomerleau , 1988 ; Ross & Bagnell , 2010 ) . Recent work also aims to learn invariant representations robust to covariate shifts ( Zhang et al. , 2019 ; 2021 ) . More generally , CICS can be interpreted as a kind of sample-selection bias ( Heckman , 1979 ) .
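The counter-factual value in Eq. ( 1 ) is an expectation of the value over full trajectories drawn from the belief model; a Monte Carlo sketch of that expectation (all names here are illustrative, not from the paper's code):

```python
def fictitious_value(aoh, belief_sampler, value_fn, n_samples=1000):
    """Monte Carlo estimate of the counter-factual value in Eq. (1):
    the mean of V^{pi1} over trajectories drawn from B^{pi0}(. | tau^i).
    `belief_sampler(aoh)` must return one full trajectory consistent with
    the agent's action-observation history, under the assumption that
    co-players followed pi0; `value_fn` plays the role of V^{pi1}."""
    total = 0.0
    for _ in range(n_samples):
        total += value_fn(belief_sampler(aoh))
    return total / n_samples
```

The "fictitious transition" aspect is visible here: the agent's own history `aoh` is held fixed while the unobserved parts of the trajectory are resampled from the belief.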
We believe this work to be the first to formalize and provide a solution to the problem of covariate shifts in reinforcement learning due to curriculum learning . Our method fixes a critical flaw that can cause curricula to fail under CICS—an important problem as curricula have been shown to be essential for training RL agents across many of the most challenging domains , including combinatorial gridworlds ( Zhong et al. , 2020 ) , Go ( Silver et al. , 2016 ) , StarCraft 2 ( Vinyals et al. , 2019 ) , and achieving comprehensive task mastery in open-ended environments ( Stooke et al. , 2021 ) . While this work focuses on PLR , other recent methods include minimax adversarial curricula ( Wang et al. , 2019 ; 2020 ) and curricula based on changes in return ( Matiisen et al. , 2017 ; Portelas et al. , 2020 ) . Most similar to our work , OFFER ( Ciosek & Whiteson , 2017 ) adapts a curriculum over transition functions and uses importance sampling to correct for biased gradient estimates . Unlike this work , Ciosek & Whiteson ( 2017 ) require white-box access to the transition function and do not directly study the impact of CICS on the learning dynamics . Curriculum methods have also been studied in goal-conditioned RL ( Florensa et al. , 2018 ; Campero et al. , 2021 ; Sukhbaatar et al. , 2018 ; OpenAI et al. , 2021 ) , though CICS does not occur here as goals are observed by the agent . Lastly , domain randomization ( DR , Sadeghi & Levine , 2017 ; Peng et al. , 2017 ) can be seen as a degenerate form of UED , though curriculum-based extensions of DR have also been studied ( Jakobi , 1997 ; Tobin et al. , 2017 ) . Prior work has also investigated methods for learning Bayes-optimal policies under uncertainty about the task ( Zintgraf et al. , 2020 ; Osband et al. , 2013 ) , based on the framework of Bayes-adaptive MDPs ( BAMDPs ) ( Bellman , 1956 ; Duff , 2002 ) .
In this setting , the agent can adapt to an unknown MDP over several episodes by acting to reduce its uncertainty about the identity of the MDP . In contrast , SAMPLR learns a robustly Bayes-optimal policy for the case of zero-shot transfer . Further , unlike these works , our setting assumes the distribution of certain aleatoric parameters is biased during training , which would lead to biased a posteriori uncertainty estimates with respect to the ground-truth distribution when optimizing for the BAMDP objective . Instead , SAMPLR proposes a means to correct for this bias assuming knowledge of the true environment parameters for each level , to which we can safely assume access in curriculum learning . | The paper proposes a method to fix the covariate shift issue induced by curriculum learning RL methods when dealing with parameterized POMDPs. Specifically, curriculum learning methods choose a training distribution over the parameters of the POMDPs that might differ from the distribution the agent faces at test time; therefore, the optimal solution under the distribution induced by the curriculum method might not be optimal under the test-time distribution. To fix this problem, the paper assumes access to the simulator such that, given any previous trajectory, the next state can be sampled under the true/test distribution, which is then used to update the policy. | SP:fbad9a35ec270021a2d7f1c1a6fbcc2fe6fcea0a
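Returning to the PLR mechanism from Section 2.1 above: the per-step decision of whether to replay a buffered level or draw a fresh one can be sketched as follows (function and argument names are illustrative; the score stands in for a regret proxy such as positive value loss, and real PLR uses a more elaborate rank-based prioritization):

```python
import random


def plr_next_level(level_buffer, scores, sample_new_level, p_replay=0.5, rng=random):
    """Choose the next training level: with probability p_replay, replay a
    buffered level sampled in proportion to its learning-potential score;
    otherwise draw a fresh level from the base training distribution."""
    if level_buffer and rng.random() < p_replay:
        total = sum(scores)
        weights = [s / total for s in scores] if total > 0 else None
        return rng.choices(level_buffer, weights=weights, k=1)[0]
    return sample_new_level()
```

It is exactly this selective sampling over levels that can shift the effective distribution of aleatoric parameters away from P ( θ′ ), which is the bias SAMPLR corrects for.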
Meta-free few-shot learning via representation learning with weight averaging | Recent studies on few-shot classification using transfer learning pose challenges to the effectiveness and efficiency of episodic meta-learning algorithms . Transfer learning approaches are a natural alternative , but they are restricted to few-shot classification . Moreover , little attention has been paid to the development of probabilistic models with well-calibrated uncertainty from few-shot samples , except for some Bayesian episodic learning algorithms . To tackle the aforementioned issues , we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification . The resulting method does not require episodic meta-learning and is called meta-free representation learning ( MFRL ) . MFRL first finds a low-rank representation that generalizes well on meta-test tasks . Given the learned representation , probabilistic linear models are fine-tuned with few-shot samples to obtain models with well-calibrated uncertainty . The proposed method not only achieves the highest accuracy on a wide range of few-shot learning benchmark datasets but also correctly quantifies the prediction uncertainty . In addition , weight averaging and temperature scaling are effective in improving the accuracy and reliability of few-shot learning in existing meta-learning algorithms with a wide range of learning paradigms and model architectures . 1 INTRODUCTION . Currently , the vast majority of few-shot learning methods are within the general paradigm of meta-learning ( a.k.a . learning to learn ) ( Bengio et al. , 1991 ; Schmidhuber , 1987 ; Thrun & Pratt , 1998 ) , which learns multiple tasks in an episodic manner to distill transferable knowledge ( Vinyals et al. , 2016 ; Finn et al. , 2017 ; Snell et al. , 2017 ) .
Although many episodic meta-learning methods report state-of-the-art ( SOTA ) performance , recent studies show that simple transfer learning methods with fixed embeddings ( Chen et al. , 2019 ; Tian et al. , 2020 ) can achieve similar or better performance in few-shot learning . It is found that the effectiveness of optimization-based meta-learning algorithms is due to reusing high-quality representation , instead of rapid learning of task-specific representation ( Raghu et al. , 2020 ) . The quality of the representation is not quantitatively defined , except for some empirical case studies ( Goldblum et al. , 2020 ) . Recent machine learning theories ( Saunshi et al. , 2021 ) indicate that low-rank representation leads to better sample efficiency in learning a new task . However , those theoretical studies are within the paradigm of meta-learning and do not reveal how to obtain low-rank representation for few-shot learning outside the realm of meta-learning . This motivates us to investigate ways to improve the representation for adapting to new few-shot tasks in a meta-free manner by taking advantage of the simplicity and robustness of transfer learning . In parallel , existing transfer learning methods also have limitations . That is , the existing transfer learning methods may not find representation generalizing well to unseen few-shot tasks ( Chen et al. , 2019 ; Dhillon et al. , 2020 ) , compared with state-of-the-art meta-learning methods ( Ye et al. , 2020 ; Zhang et al. , 2020 ) . Although some transfer learning methods utilize knowledge distillation and self-supervised training to achieve strong performance in few-shot classification , they are restricted to few-shot classification problems ( Mangla et al. , 2020 ; Tian et al. , 2020 ) . To the best of our knowledge , no transfer learning method has been developed that achieves performance similar to meta-learning in few-shot regression .
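The "low-rank representation" property invoked above can be probed directly: compute the singular-value spectrum of a matrix of embeddings and check how concentrated it is. This sketch is our illustration of the idea, not code from the paper:

```python
import numpy as np


def normalized_spectrum(features):
    """Given an (n_samples, dim) matrix of learned representations, return
    the singular values normalized to sum to one. Mass concentrated in a
    few leading values indicates an approximately low-rank representation."""
    s = np.linalg.svd(np.asarray(features, dtype=float), compute_uv=False)
    return s / s.sum()


# A rank-1 feature matrix puts essentially all mass on the first value.
rank1 = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0])
print(round(float(normalized_spectrum(rank1)[0]), 4))  # 1.0
```

Plotting this normalized spectrum for embeddings trained with and without the proposed regularization is one simple way to visualize the claimed bias towards low rank.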
As such , it is desirable to have a transfer learning method that finds high-quality representation generalizing well to unseen classification and regression problems . The last limitation of the existing transfer learning methods in few-shot learning is the lack of uncertainty calibration . Uncertainty quantification is concerned with quantifying how likely certain outcomes are . Despite a plethora of few-shot learning methods ( and machine learning methods in general ) aimed at improving point estimation accuracy , few methods have been developed to obtain probabilistic models with improved uncertainty calibration by integrating Bayesian learning into episodic meta-training ( Grant et al. , 2018 ; Finn et al. , 2018 ; Yoon et al. , 2018 ; Snell & Zemel , 2021 ) . Few-shot learning models can be used in risk-averse applications such as medical diagnosis ( Prabhu et al. , 2019 ) . The diagnosis decision is made based not only on point estimates but also on the probabilities associated with the prediction . The risk of making wrong decisions is significant when using uncalibrated models ( Begoli et al. , 2019 ) . Thus , the development of proper fine-tuning steps to achieve well-calibrated models is key to practical applications of transfer learning in few-shot learning . In this paper , we develop a simple transfer learning method as our own baseline to allow easy regularization towards more generalizable representation and calibration of prediction uncertainty . The regularization in the proposed transfer learning method works for regression and classification problems so that we can handle both problems within a common architecture . The calibration procedure is easily integrated into the developed transfer learning method to obtain few-shot learning models with good uncertainty quantification .
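A standard calibration procedure for classifiers of this kind is temperature scaling: a single scalar rescaling of the logits, tuned on held-out data after training. A minimal sketch (our illustration):

```python
import math


def temperature_softmax(logits, temperature=1.0):
    """Softmax with a temperature scaling factor: softmax(logits / T).
    T = 1 recovers the ordinary softmax; T > 1 softens over-confident
    outputs. Because T rescales all logits equally, the argmax (and hence
    accuracy) is unchanged while confidence estimates improve."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                            # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


# Softening a confident two-class prediction: T = 2 pulls 0.881 down to 0.731.
print(round(temperature_softmax([2.0, 0.0])[0], 3))                   # 0.881
print(round(temperature_softmax([2.0, 0.0], temperature=2.0)[0], 3))  # 0.731
```

This leave-accuracy-unchanged property is what makes temperature scaling attractive as a post-hoc fine-tuning step.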
Therefore , the resulting method , called Meta-Free Representation Learning ( MFRL ) , overcomes the aforementioned limitations in existing transfer learning methods for few-shot learning . Our empirical studies demonstrate that the relatively overlooked transfer learning approach can achieve high accuracy and well-calibrated uncertainty in few-shot learning when it is combined with proper regularization and calibration . Those two tools are also portable to meta-learning methods to improve accuracy and calibration , but the improvement is less significant compared with that of transfer learning . We use stochastic weight averaging ( SWA ) ( Izmailov et al. , 2018 ) , which is agnostic to loss function types , as implicit regularization to improve the generalization capability of the representation . We also show that the effectiveness of SWA is due to its bias towards low-rank representation . To address the issue of uncertainty quantification , we fine-tune appropriate linear layers during the meta-test phase to get models with well-calibrated uncertainty . In MFRL , hierarchical Bayesian linear models are used to properly capture the uncertainty from very limited training samples in few-shot regression , whereas the softmax output is scaled with a temperature parameter to make the few-shot classification model well-calibrated . Our method is the first one to achieve well-calibrated few-shot models by only fine-tuning probabilistic linear models in the meta-test phase , without any learning mechanisms related to the meta-training or representation learning phase . Our contributions in this work are summarized as follows : • We propose a transfer learning method that can handle both few-shot regression and classification problems with performance exceeding SOTA . • For the first time , we empirically find the implicit regularization of SWA towards low-rank representation , which is a useful property in transferring to few-shot tasks .
• The proposed method results in well-calibrated uncertainty in few-shot learning models while preserving SOTA accuracy . • The implicit regularization of SWA and the temperature scaling factor can be applied to existing meta-learning methods to improve their accuracy and reliability in few-shot learning . 2 RELATED WORK . Episodic meta-learning approaches can be categorized into metric-based and optimization-based methods . Metric-based methods project input data to feature vectors through nonlinear embeddings and compare their similarity to make the prediction . Examples of similarity metrics include the weighted L1 metric ( Koch et al. , 2015 ) , cosine similarity ( Qi et al. , 2018 ; Vinyals et al. , 2016 ) , and Euclidean distance to class-mean representation ( Snell et al. , 2017 ) . Instead of relying on predefined metrics , learnable similarity metrics are introduced to improve the few-shot classification performance ( Oreshkin et al. , 2018 ; Sung et al. , 2018 ) . Recent metric-based approaches focus on developing task-adaptive embeddings to improve few-shot classification accuracy . Those task-adaptive embeddings include attention mechanisms for feature transformation ( Fei et al. , 2021 ; Gidaris & Komodakis , 2018 ; Ye et al. , 2020 ; Zhang et al. , 2021 ) , graph neural networks ( Garcia & Estrach , 2018 ) , implicit class representation ( Ravichandran et al. , 2019 ) , and task-dependent conditioning ( Oreshkin et al. , 2018 ; Yoon et al. , 2020 ; 2019 ) . Although metric-based approaches achieve strong performance in few-shot classification , they cannot be directly applied to regression problems . Optimization-based meta-learning approaches try to find transferable knowledge and adapt to new tasks quickly . An elegant and powerful meta-learning approach , termed model-agnostic meta-learning ( MAML ) , solves a bi-level optimization problem to find good initialization of model parameters ( Finn et al. , 2017 ) .
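The bi-level structure of MAML mentioned above can be made concrete on a toy scalar problem (our illustration, with per-task loss L_t ( x ) = ( x − t )^2 and the gradients written out analytically rather than with autograd):

```python
def maml_outer_step(theta, tasks, inner_lr=0.1, outer_lr=0.01):
    """One MAML outer update on a scalar initialization theta.
    Inner loop: each task t takes one gradient step on L_t(x) = (x - t)^2,
    giving phi_t = theta - inner_lr * 2 * (theta - t).
    Outer loop: descend the average post-adaptation loss L_t(phi_t),
    differentiating *through* the inner step; for this quadratic loss the
    inner-step Jacobian is the constant d phi_t / d theta = 1 - 2 * inner_lr."""
    outer_grad = 0.0
    for t in tasks:
        phi = theta - inner_lr * 2.0 * (theta - t)   # adapted parameter
        dphi_dtheta = 1.0 - 2.0 * inner_lr           # inner-step Jacobian
        outer_grad += 2.0 * (phi - t) * dphi_dtheta  # chain rule
    return theta - outer_lr * outer_grad / len(tasks)
```

Real MAML implementations use automatic differentiation to backpropagate through one or more inner gradient steps on full networks; the toy makes the nesting of the two optimization problems explicit.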
However , MAML has a variety of issues , such as sensitivity to neural network architectures , instability during training , arduous hyperparameter tuning , and high computational cost . On this basis , some follow-up methods have been developed to simplify , stabilize and improve the training process of MAML ( Antoniou et al. , 2018 ; Flennerhag et al. , 2020 ; Lee & Choi , 2018 ; Nichol et al. , 2018 ; Park & Oliva , 2019 ) . In practice , it is very challenging to learn high-dimensional model parameters in a low-data regime . Latent embedding optimization ( LEO ) attempts to learn a low-dimensional representation to generate high-dimensional model parameters ( Rusu et al. , 2019 ) . Meanwhile , R2-D2 ( Bertinetto et al. , 2019 ) and MetaOptNet ( Lee et al. , 2019 ) reduce the dimensionality of trainable model parameters by freezing feature extraction layers during inner loop optimization . Note that the proposed method is fundamentally different from R2-D2 and MetaOptNet because our method requires neither episodic meta-learning nor bi-level optimization . Transfer learning approaches first learn a feature extractor on all training data through standard supervised learning , and then fine-tune a linear predictor on top of the learned feature extractor in a new task ( Chen et al. , 2019 ) . However , vanilla transfer learning methods for few-shot learning do not take extra steps to make the learned representation generalize well to unseen meta-test tasks . Some approaches in this paradigm are developed to improve the quality of representation and boost the accuracy of few-shot classification , including cooperative ensembles ( Dvornik et al. , 2019 ) , knowledge distillation ( Tian et al. , 2020 ) , and auxiliary self-supervised learning ( Mangla et al. , 2020 ) . Nevertheless , those transfer learning methods are restricted to few-shot classification .
MFRL aims to find representation generalizing well from the perspective of low-rank representation learning , which is supported by recent theoretical studies ( Saunshi et al. , 2021 ) . Furthermore , MFRL is the first transfer learning method that can handle both few-shot regression and classification problems and make predictions with well-calibrated uncertainty . 3 BACKGROUND . 3.1 EPISODIC META-LEARNING . In episodic meta-learning , the meta-training data contains T episodes or tasks , where the τ-th episode consists of data D_τ = { ( x_{τ,j} , y_{τ,j} ) }_{j=1}^{N_τ} with N_τ samples . Tasks and episodes are used interchangeably in the rest of the paper . Episodic meta-learning algorithms aim to find common model parameters θ which can be quickly adapted to task-specific parameters φ_τ ( τ = 1 , ... , T ) . For example , MAML-type algorithms assume φ_τ is one or a few gradient steps away from θ ( Finn et al. , 2017 ; 2018 ; Grant et al. , 2018 ; Yoon et al. , 2018 ) , while other meta-learning approaches assume that φ_τ and θ share the parameters in the feature extractor and only differ in the top layer ( Bertinetto et al. , 2019 ; Lee et al. , 2019 ; Snell et al. , 2017 ) . | The submission introduces a few-shot classification and regression approach called Meta-Free Representation Learning (MFRL). First, a representation is learned on the meta-training data: for regression, the model is trained on all available training regression tasks concurrently; for classification, the model is trained on the full-ways classification problem using all training classes. Then, stochastic weight averaging (SWA) is applied to the model by continuing training for a certain number of epochs and averaging the parameters obtained across those additional epochs. For test tasks, the regression or classification layer is discarded, and a new output layer is trained while freezing the network weights. For regression, approximate inference is performed on a hierarchical Bayesian linear model.
For classification, a logistic regression model is trained with L2 regularization and a temperature hyperparameter. Experimental results are presented on the sine wave and head pose problems for regression, and on mini-ImageNet, tiered-ImageNet, CIFAR-FS, and FC100 for classification. The extent to which SWA encourages learning lower-rank representations (as hypothesized) is verified through visualizations of the normalized eigenvalues. Finally, calibration curves are shown for classification to demonstrate how MFRL's temperature scaling factor leads to better calibrated models. | SP:36cdc943ee5bb1368491734f1a3ac0afce86decb |
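The SWA step described in this row (continue training for additional epochs and average the parameters collected across them) is a running mean over checkpoints; a minimal sketch, with flat float lists standing in for per-parameter tensors:

```python
def swa_update(avg_weights, new_weights, n_averaged):
    """One running-average step of stochastic weight averaging: fold the
    current checkpoint into the mean of all checkpoints seen so far.
    Real implementations apply this per parameter tensor and usually
    collect checkpoints at the end of each epoch or learning-rate cycle."""
    updated = [
        (avg * n_averaged + new) / (n_averaged + 1)
        for avg, new in zip(avg_weights, new_weights)
    ]
    return updated, n_averaged + 1


# Averaging three checkpoints reproduces their arithmetic mean.
avg, n = [0.0, 0.0], 0
for ckpt in ([1.0, 4.0], [2.0, 5.0], [3.0, 6.0]):
    avg, n = swa_update(avg, ckpt, n)
print(avg)  # [2.0, 5.0]
```

The incremental form avoids storing every checkpoint: only the running mean and the count are kept between epochs.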
Meta-free few-shot learning via representation learning with weight averaging | Recent studies on few-shot classification using transfer learning pose challenges to the effectiveness and efficiency of episodic meta-learning algorithms . Transfer learning approaches are a natural alternative , but they are restricted to few-shot classification . Moreover , little attention has been on the development of probabilistic models with well-calibrated uncertainty from few-shot samples , except for some Bayesian episodic learning algorithms . To tackle the aforementioned issues , we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification . The resulting method does not require episodic meta-learning and is called meta-free representation learning ( MFRL ) . MFRL first finds low-rank representation generalizing well on meta-test tasks . Given the learned representation , probabilistic linear models are fine-tuned with few-shot samples to obtain models with well-calibrated uncertainty . The proposed method not only achieves the highest accuracy on a wide range of few-shot learning benchmark datasets but also correctly quantifies the prediction uncertainty . In addition , weight averaging and temperature scaling are effective in improving the accuracy and reliability of few-shot learning in existing meta-learning algorithms with a wide range of learning paradigms and model architectures . 1 INTRODUCTION . Currently , the vast majority of few-shot learning methods are within the general paradigm of metalearning ( a.k.a . learning to learn ) ( Bengio et al. , 1991 ; Schmidhuber , 1987 ; Thrun & Pratt , 1998 ) , which learns multiple tasks in an episodic manner to distill transferrable knowledge ( Vinyals et al. , 2016 ; Finn et al. , 2017 ; Snell et al. , 2017 ) . 
Although many episodic meta-learning methods report state-of-the-art ( SOTA ) performance , recent studies show that simple transfer learning methods with fixed embeddings ( Chen et al. , 2019 ; Tian et al. , 2020 ) can achieve similar or better performance in few-shot learning . It is found that the effectiveness of optimization-based meta-learning algorithms is due to reusing high-quality representation , instead of rapid learning of task-specific representation ( Raghu et al. , 2020 ) . The quality of the presentation is not quantitatively defined , except for some empirical case studies ( Goldblum et al. , 2020 ) . Recent machine learning theories ( Saunshi et al. , 2021 ) indicate that low-rank representation leads to better sample efficiency in learning a new task . However , those theoretical studies are within the paradigm of meta-learning and do not reveal how to obtain low-rank representation for few-shot learning outside the realm of meta-learning . This motivates us to investigate ways to improve the representation for adapting to new few-shot tasks in a meta-free manner by taking the advantage of simplicity and robustness in transfer learning . In parallel , existing transfer learning methods also have limitations . That is , the existing transfer learning methods may not find representation generalizing well to unseen few-shot tasks ( Chen et al. , 2019 ; Dhillon et al. , 2020 ) , compared with state-of-the-art meta-learning methods ( Ye et al. , 2020 ; Zhang et al. , 2020 ) . Although some transfer learning methods utilize knowledge distillation and self-supervised training to achieve strong performance in few-shot classification , they are restricted to few-shot classification problems ( Mangla et al. , 2020 ; Tian et al. , 2020 ) . To the best of our knowledge , no transfer learning method is developed to achieve similar performance to metalearning in few-shot regression . 
As such , it is desirable to have a transfer learning method that finds high-quality representation generalizing well to unseen classification and regression problems . The last limitation of the existing transfer learning methods in few-shot learning is the lack of uncertainty calibration . Uncertainty quantification is concerned with the quantification of how likely certain outcomes are . Despite a plethora of few-shot learning methods ( in fact , machine learning in general ) to improve the point estimation accuracy , few methods are developed to get probabilistic models with improved uncertainty calibration by integrating Bayesian learning into episodic metatraining ( Grant et al. , 2018 ; Finn et al. , 2018 ; Yoon et al. , 2018 ; Snell & Zemel , 2021 ) . Few-shot learning models can be used in risk-averse applications such as medical diagnosis ( Prabhu et al. , 2019 ) . The diagnosis decision is made on not only point estimation but also probabilities associated with the prediction . The risk of making wrong decisions is significant when using uncalibrated models ( Begoli et al. , 2019 ) . Thus , the development of proper fine-tuning steps to achieve well-calibrated models is the key towards practical applications of transfer learning in few-shot learning . In this paper , we develop a simple transfer learning method as our own baseline to allow easy regularization towards more generalizable representation and calibration of prediction uncertainty . The regularization in the proposed transfer learning method works for regression and classification problems so that we can handle both problems within a common architecture . The calibration procedure is easily integrated into the developed transfer learning method to obtain few-shot learning models with good uncertainty quantification . 
Therefore , the resulting method , called Meta-Free Representation Learning ( MFRL ) , overcomes the aforementioned limitations in existing transfer learning methods for few-shot learning . Our empirical studies demonstrate that the relatively overlooked transfer learning method can achieve high accuracy and well-calibrated uncertainty in few-shot learning when it is combined with the proper regularization and calibration . Those two tools are also portable to meta-learning methods to improve accuracy and calibration , but the improvement is less significant compared with that of transfer learning . We use stochastic weight averaging ( SWA ) ( Izmailov et al. , 2018 ) , which is agnostic to loss function types , as implicit regularization to improve the generalization capability of the representation . We also shed light on that the effectiveness of SWA is due to its bias towards low-rank representation . To address the issue of uncertainty quantification , we fine-tune appropriate linear layers during the meta-test phase to get models with well-calibrated uncertainty . In MFRL , hierarchical Bayesian linear models are used to properly capture the uncertainty from very limited training samples in few-shot regression , whereas the softmax output is scaled with a temperature parameter to make the few-shot classification model well-calibrated . Our method is the first one to achieve well-calibrated few-shot models by only fine-tuning probabilistic linear models in the meta-test phase , without any learning mechanisms related to the meta-training or representation learning phase . Our contributions in this work are summarized as follows : • We propose a transfer learning method that can handle both few-shot regression and classification problems with performance exceeding SOTA . • For the first time , we empirically find the implicit regularization of SWA towards low-rank representation , which is a useful property in transferring to few-shot tasks . 
• The proposed method results in well-calibrated uncertainty in few-shot learning models while preserving SOTA accuracy . • The implicit regularization of SWA and the temperature scaling factor can be applied to existing meta-learning methods to improve their accuracy and reliability in few-shot learning . 2 RELATED WORK . Episodic meta-learning approaches can be categorized into metric-based and optimization-based methods . Metric-based methods project input data to feature vectors through nonlinear embeddings and compare their similarity to make the prediction . Examples of similarity metrics include the weighted L1 metric ( Koch et al. , 2015 ) , cosine similarity ( Qi et al. , 2018 ; Vinyals et al. , 2016 ) , and Euclidean distance to class-mean representation ( Snell et al. , 2017 ) . Instead of relying on predefined metrics , learnable similarity metrics have been introduced to improve the few-shot classification performance ( Oreshkin et al. , 2018 ; Sung et al. , 2018 ) . Recent metric-based approaches focus on developing task-adaptive embeddings to improve few-shot classification accuracy . Those task-adaptive embeddings include attention mechanisms for feature transformation ( Fei et al. , 2021 ; Gidaris & Komodakis , 2018 ; Ye et al. , 2020 ; Zhang et al. , 2021 ) , graph neural networks ( Garcia & Estrach , 2018 ) , implicit class representation ( Ravichandran et al. , 2019 ) , and task-dependent conditioning ( Oreshkin et al. , 2018 ; Yoon et al. , 2020 ; 2019 ) . Although metric-based approaches achieve strong performance in few-shot classification , they cannot be directly applied to regression problems . Optimization-based meta-learning approaches try to find transferable knowledge and adapt to new tasks quickly . An elegant and powerful meta-learning approach , termed model-agnostic meta-learning ( MAML ) , solves a bi-level optimization problem to find a good initialization of model parameters ( Finn et al. , 2017 ) .
However , MAML has a variety of issues , such as sensitivity to neural network architectures , instability during training , arduous hyperparameter tuning , and high computational cost . On this basis , some follow-up methods have been developed to simplify , stabilize , and improve the training process of MAML ( Antoniou et al. , 2018 ; Flennerhag et al. , 2020 ; Lee & Choi , 2018 ; Nichol et al. , 2018 ; Park & Oliva , 2019 ) . In practice , it is very challenging to learn high-dimensional model parameters in a low-data regime . Latent embedding optimization ( LEO ) attempts to learn low-dimensional representation to generate high-dimensional model parameters ( Rusu et al. , 2019 ) . Meanwhile , R2-D2 ( Bertinetto et al. , 2019 ) and MetaOptNet ( Lee et al. , 2019 ) reduce the dimensionality of trainable model parameters by freezing feature extraction layers during inner-loop optimization . Note that the proposed method is fundamentally different from R2-D2 and MetaOptNet because our method requires neither episodic meta-learning nor bi-level optimization . Transfer learning approaches first learn a feature extractor on all training data through standard supervised learning , and then fine-tune a linear predictor on top of the learned feature extractor in a new task ( Chen et al. , 2019 ) . However , vanilla transfer learning methods for few-shot learning do not take extra steps to make the learned representation generalize well to unseen meta-test tasks . Some approaches in this paradigm have been developed to improve the quality of representation and boost the accuracy of few-shot classification , including cooperative ensembles ( Dvornik et al. , 2019 ) , knowledge distillation ( Tian et al. , 2020 ) , and auxiliary self-supervised learning ( Mangla et al. , 2020 ) . Nevertheless , those transfer learning methods are restricted to few-shot classification .
MFRL aims to find representation generalizing well from the perspective of low-rank representation learning , which is supported by recent theoretical studies ( Saunshi et al. , 2021 ) . Furthermore , MFRL is the first transfer learning method that can handle both few-shot regression and classification problems and make predictions with well-calibrated uncertainty . 3 BACKGROUND . 3.1 EPISODIC META-LEARNING . In episodic meta-learning , the meta-training data contains T episodes or tasks , where the τ th episode consists of data Dτ = { ( xτ,j , yτ,j ) }_{j=1}^{Nτ} with Nτ samples . Tasks and episodes are used interchangeably in the rest of the paper . Episodic meta-learning algorithms aim to find common model parameters θ which can be quickly adapted to task-specific parameters φτ ( τ = 1 , ... , T ) . For example , MAML-type algorithms assume φτ is one or a few gradient steps away from θ ( Finn et al. , 2017 ; 2018 ; Grant et al. , 2018 ; Yoon et al. , 2018 ) , while other meta-learning approaches assume that φτ and θ share the parameters in the feature extractor and only differ in the top layer ( Bertinetto et al. , 2019 ; Lee et al. , 2019 ; Snell et al. , 2017 ) . | This paper proposes a transfer learning method for few-shot regression/classification. In representation learning, it adapts stochastic weight averaging (SWA) as a regularizer for learning more generalizable features. In the fine-tuning phase (adaptation to a new task at meta-test time), it treats regression and classification differently. For regression a hierarchical Bayesian model is used, while in classification a conventional method is combined with temperature scaling. This work achieves results comparable to SOTA; however, there are major concerns that need to be addressed. | SP:36cdc943ee5bb1368491734f1a3ac0afce86decb |
Meta-free few-shot learning via representation learning with weight averaging | Recent studies on few-shot classification using transfer learning pose challenges to the effectiveness and efficiency of episodic meta-learning algorithms . Transfer learning approaches are a natural alternative , but they are restricted to few-shot classification . Moreover , little attention has been paid to the development of probabilistic models with well-calibrated uncertainty from few-shot samples , except for some Bayesian episodic learning algorithms . To tackle the aforementioned issues , we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification . The resulting method does not require episodic meta-learning and is called meta-free representation learning ( MFRL ) . MFRL first finds low-rank representation generalizing well on meta-test tasks . Given the learned representation , probabilistic linear models are fine-tuned with few-shot samples to obtain models with well-calibrated uncertainty . The proposed method not only achieves the highest accuracy on a wide range of few-shot learning benchmark datasets but also correctly quantifies the prediction uncertainty . In addition , weight averaging and temperature scaling are effective in improving the accuracy and reliability of few-shot learning in existing meta-learning algorithms with a wide range of learning paradigms and model architectures . 1 INTRODUCTION . Currently , the vast majority of few-shot learning methods are within the general paradigm of meta-learning ( a.k.a . learning to learn ) ( Bengio et al. , 1991 ; Schmidhuber , 1987 ; Thrun & Pratt , 1998 ) , which learns multiple tasks in an episodic manner to distill transferable knowledge ( Vinyals et al. , 2016 ; Finn et al. , 2017 ; Snell et al. , 2017 ) .
Although many episodic meta-learning methods report state-of-the-art ( SOTA ) performance , recent studies show that simple transfer learning methods with fixed embeddings ( Chen et al. , 2019 ; Tian et al. , 2020 ) can achieve similar or better performance in few-shot learning . It has been found that the effectiveness of optimization-based meta-learning algorithms is due to reusing high-quality representation , instead of rapid learning of task-specific representation ( Raghu et al. , 2020 ) . The quality of the representation is not quantitatively defined , except in some empirical case studies ( Goldblum et al. , 2020 ) . Recent machine learning theories ( Saunshi et al. , 2021 ) indicate that low-rank representation leads to better sample efficiency in learning a new task . However , those theoretical studies are within the paradigm of meta-learning and do not reveal how to obtain low-rank representation for few-shot learning outside the realm of meta-learning . This motivates us to investigate ways to improve the representation for adapting to new few-shot tasks in a meta-free manner by taking advantage of the simplicity and robustness of transfer learning . In parallel , existing transfer learning methods also have limitations . That is , the existing transfer learning methods may not find representation that generalizes well to unseen few-shot tasks ( Chen et al. , 2019 ; Dhillon et al. , 2020 ) , compared with state-of-the-art meta-learning methods ( Ye et al. , 2020 ; Zhang et al. , 2020 ) . Although some transfer learning methods utilize knowledge distillation and self-supervised training to achieve strong performance in few-shot classification , they are restricted to few-shot classification problems ( Mangla et al. , 2020 ; Tian et al. , 2020 ) . To the best of our knowledge , no transfer learning method has been developed to achieve performance similar to meta-learning in few-shot regression .
As such , it is desirable to have a transfer learning method that finds high-quality representation generalizing well to unseen classification and regression problems . The last limitation of the existing transfer learning methods in few-shot learning is the lack of uncertainty calibration . Uncertainty quantification is concerned with quantifying how likely certain outcomes are . Despite a plethora of few-shot learning methods ( and machine learning methods in general ) aimed at improving point estimation accuracy , few methods have been developed to obtain probabilistic models with improved uncertainty calibration by integrating Bayesian learning into episodic meta-training ( Grant et al. , 2018 ; Finn et al. , 2018 ; Yoon et al. , 2018 ; Snell & Zemel , 2021 ) . Few-shot learning models can be used in risk-averse applications such as medical diagnosis ( Prabhu et al. , 2019 ) . The diagnosis decision is based not only on a point estimate but also on the probabilities associated with the prediction . The risk of making wrong decisions is significant when using uncalibrated models ( Begoli et al. , 2019 ) . Thus , developing proper fine-tuning steps to achieve well-calibrated models is key to practical applications of transfer learning in few-shot learning . In this paper , we develop a simple transfer learning method as our own baseline that allows easy regularization towards more generalizable representation and calibration of prediction uncertainty . The regularization in the proposed transfer learning method works for both regression and classification problems , so we can handle both within a common architecture . The calibration procedure is easily integrated into the developed transfer learning method to obtain few-shot learning models with good uncertainty quantification .
Therefore , the resulting method , called Meta-Free Representation Learning ( MFRL ) , overcomes the aforementioned limitations of existing transfer learning methods for few-shot learning . Our empirical studies demonstrate that this relatively overlooked transfer learning approach can achieve high accuracy and well-calibrated uncertainty in few-shot learning when it is combined with proper regularization and calibration . Those two tools are also portable to meta-learning methods to improve their accuracy and calibration , although the improvement is less significant than in transfer learning . We use stochastic weight averaging ( SWA ) ( Izmailov et al. , 2018 ) , which is agnostic to the type of loss function , as implicit regularization to improve the generalization capability of the representation . We also shed light on the fact that the effectiveness of SWA is due to its bias towards low-rank representation . To address the issue of uncertainty quantification , we fine-tune appropriate linear layers during the meta-test phase to obtain models with well-calibrated uncertainty . In MFRL , hierarchical Bayesian linear models are used to properly capture the uncertainty from very limited training samples in few-shot regression , whereas the softmax output is scaled with a temperature parameter to make the few-shot classification model well-calibrated . Our method is the first to achieve well-calibrated few-shot models by only fine-tuning probabilistic linear models in the meta-test phase , without any learning mechanisms related to the meta-training or representation learning phase . Our contributions in this work are summarized as follows : • We propose a transfer learning method that can handle both few-shot regression and classification problems with performance exceeding SOTA . • For the first time , we empirically find the implicit regularization of SWA towards low-rank representation , which is a useful property in transferring to few-shot tasks .
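A minimal sketch of the temperature-scaling step used for classifier calibration. The function name and example logits are illustrative assumptions; in practice the scalar T is fitted on validation data by minimizing negative log-likelihood, which is omitted here.

```python
import numpy as np

def scaled_softmax(logits, T=1.0):
    # Temperature scaling: divide logits by a scalar T before softmax.
    # T > 1 softens over-confident predictions; T = 1 recovers the
    # ordinary softmax. Only the confidence changes -- the arg-max
    # prediction, and hence accuracy, is unaffected for any T > 0.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

Because the arg-max is preserved, temperature scaling is a pure calibration step layered on top of an already-trained classifier.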
• The proposed method results in well-calibrated uncertainty in few-shot learning models while preserving SOTA accuracy . • The implicit regularization of SWA and the temperature scaling factor can be applied to existing meta-learning methods to improve their accuracy and reliability in few-shot learning . 2 RELATED WORK . Episodic meta-learning approaches can be categorized into metric-based and optimization-based methods . Metric-based methods project input data to feature vectors through nonlinear embeddings and compare their similarity to make the prediction . Examples of similarity metrics include the weighted L1 metric ( Koch et al. , 2015 ) , cosine similarity ( Qi et al. , 2018 ; Vinyals et al. , 2016 ) , and Euclidean distance to class-mean representation ( Snell et al. , 2017 ) . Instead of relying on predefined metrics , learnable similarity metrics have been introduced to improve the few-shot classification performance ( Oreshkin et al. , 2018 ; Sung et al. , 2018 ) . Recent metric-based approaches focus on developing task-adaptive embeddings to improve few-shot classification accuracy . Those task-adaptive embeddings include attention mechanisms for feature transformation ( Fei et al. , 2021 ; Gidaris & Komodakis , 2018 ; Ye et al. , 2020 ; Zhang et al. , 2021 ) , graph neural networks ( Garcia & Estrach , 2018 ) , implicit class representation ( Ravichandran et al. , 2019 ) , and task-dependent conditioning ( Oreshkin et al. , 2018 ; Yoon et al. , 2020 ; 2019 ) . Although metric-based approaches achieve strong performance in few-shot classification , they cannot be directly applied to regression problems . Optimization-based meta-learning approaches try to find transferable knowledge and adapt to new tasks quickly . An elegant and powerful meta-learning approach , termed model-agnostic meta-learning ( MAML ) , solves a bi-level optimization problem to find a good initialization of model parameters ( Finn et al. , 2017 ) .
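The "Euclidean distance to class-mean representation" metric (Snell et al., 2017) mentioned above can be illustrated with a tiny nearest-prototype classifier. This is a hypothetical sketch operating on precomputed embeddings, not any paper's actual implementation:

```python
import numpy as np

def prototype_predict(support_X, support_y, query_X):
    # Each class prototype is the mean of its support embeddings;
    # each query point receives the label of the Euclidean-nearest
    # prototype.
    protos = {c: support_X[support_y == c].mean(axis=0)
              for c in np.unique(support_y)}
    labels = sorted(protos)
    preds = []
    for q in query_X:
        dists = [np.linalg.norm(q - protos[c]) for c in labels]
        preds.append(labels[int(np.argmin(dists))])
    return preds
```

In the real methods, `support_X` would be the output of a learned nonlinear embedding rather than raw inputs.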
However , MAML has a variety of issues , such as sensitivity to neural network architectures , instability during training , arduous hyperparameter tuning , and high computational cost . On this basis , some follow-up methods have been developed to simplify , stabilize , and improve the training process of MAML ( Antoniou et al. , 2018 ; Flennerhag et al. , 2020 ; Lee & Choi , 2018 ; Nichol et al. , 2018 ; Park & Oliva , 2019 ) . In practice , it is very challenging to learn high-dimensional model parameters in a low-data regime . Latent embedding optimization ( LEO ) attempts to learn low-dimensional representation to generate high-dimensional model parameters ( Rusu et al. , 2019 ) . Meanwhile , R2-D2 ( Bertinetto et al. , 2019 ) and MetaOptNet ( Lee et al. , 2019 ) reduce the dimensionality of trainable model parameters by freezing feature extraction layers during inner-loop optimization . Note that the proposed method is fundamentally different from R2-D2 and MetaOptNet because our method requires neither episodic meta-learning nor bi-level optimization . Transfer learning approaches first learn a feature extractor on all training data through standard supervised learning , and then fine-tune a linear predictor on top of the learned feature extractor in a new task ( Chen et al. , 2019 ) . However , vanilla transfer learning methods for few-shot learning do not take extra steps to make the learned representation generalize well to unseen meta-test tasks . Some approaches in this paradigm have been developed to improve the quality of representation and boost the accuracy of few-shot classification , including cooperative ensembles ( Dvornik et al. , 2019 ) , knowledge distillation ( Tian et al. , 2020 ) , and auxiliary self-supervised learning ( Mangla et al. , 2020 ) . Nevertheless , those transfer learning methods are restricted to few-shot classification .
MFRL aims to find representation generalizing well from the perspective of low-rank representation learning , which is supported by recent theoretical studies ( Saunshi et al. , 2021 ) . Furthermore , MFRL is the first transfer learning method that can handle both few-shot regression and classification problems and make predictions with well-calibrated uncertainty . 3 BACKGROUND . 3.1 EPISODIC META-LEARNING . In episodic meta-learning , the meta-training data contains T episodes or tasks , where the τ th episode consists of data Dτ = { ( xτ,j , yτ,j ) }_{j=1}^{Nτ} with Nτ samples . Tasks and episodes are used interchangeably in the rest of the paper . Episodic meta-learning algorithms aim to find common model parameters θ which can be quickly adapted to task-specific parameters φτ ( τ = 1 , ... , T ) . For example , MAML-type algorithms assume φτ is one or a few gradient steps away from θ ( Finn et al. , 2017 ; 2018 ; Grant et al. , 2018 ; Yoon et al. , 2018 ) , while other meta-learning approaches assume that φτ and θ share the parameters in the feature extractor and only differ in the top layer ( Bertinetto et al. , 2019 ; Lee et al. , 2019 ; Snell et al. , 2017 ) . | This paper presents a few-shot learning method based on representation pre-training with stochastic weight averaging (SWA). The merit of the proposed work is that the method works for both few-shot regression and classification problems, and both achieve better results than other recent works, as reported in the manuscript. As claimed by the authors, this is the first few-shot learning work that handles both classification and regression. | SP:36cdc943ee5bb1368491734f1a3ac0afce86decb |
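The episode structure Dτ described in Section 3.1 of the paper above can be made concrete with a small sampler. All names here are hypothetical, and the support/query split follows the usual N-way, K-shot convention rather than any specific paper's code:

```python
import random

def sample_episode(data_by_class, n_way, k_shot, q_queries, seed=None):
    # One episode D_tau: draw n_way classes, then k_shot support points
    # and q_queries query points per class. Labels are re-indexed
    # 0..n_way-1 within the episode, as is standard.
    rng = random.Random(seed)
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        pts = rng.sample(data_by_class[c], k_shot + q_queries)
        support += [(x, label) for x in pts[:k_shot]]
        query += [(x, label) for x in pts[k_shot:]]
    return support, query
```

A MAML-style algorithm would adapt θ to φτ on the support set and evaluate the adapted model on the query set.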
Robust Feature Selection using Sparse Centroid-Encoder | 1 INTRODUCTION . Technological advancement has made high-dimensional data readily available . For example , in bioinformatics , researchers seek to understand gene expression levels with microarray or next-generation sequencing techniques , where each point consists of over 50,000 measurements ( Pease et al . ( 1994 ) ; Shalon et al . ( 1996 ) ; Metzker ( 2010 ) ; Reuter et al . ( 2015 ) ) . The abundance of features demands the development of feature selection algorithms to improve a machine learning task , e.g. , classification . Another important aspect of feature selection is knowledge discovery from data . Which biomarkers are important to characterize a biological process , e.g. , the immune response to infection by respiratory viruses such as influenza ( O ' Hara et al . ( 2013 ) ) ? Additional benefits of feature selection include improved visualization and understanding of data , reduced storage requirements , and faster algorithm training times . Feature selection can be accomplished in various ways that can be broadly categorized into filter , wrapper , and embedded methods . In a filter method , each variable is ordered based on a score . After that , a threshold is used to select the relevant features ( Lazar et al . ( 2012 ) ) . Variables are usually ranked using correlation ( Guyon & Elisseeff ( 2003 ) ; Yu & Liu ( 2003 ) ) and mutual information ( Vergara & Estévez ( 2014 ) ; Fleuret ( 2004 ) ) . In contrast , a wrapper method uses a model and determines the importance of a feature or a group of features by the generalization performance of the predetermined model ( El Aboudi & Benhlima ( 2016 ) ; Hsu et al . ( 2002 ) ) . Since evaluating every possible combination of features is an NP-hard problem , heuristics are used to find a subset of features .
Wrapper methods are computationally intensive for larger data sets , in which case search techniques like Genetic Algorithms ( GA ) ( Goldberg & Holland ( 1988 ) ) or Particle Swarm Optimization ( PSO ) ( Kennedy & Eberhart ( 1995 ) ) are used . In embedded methods , feature selection criteria are incorporated within the model , i.e. , the variables are picked during the training process ( Lal et al . ( 2006 ) ) . Iterative Feature Removal ( IFR ) uses the absolute weight of a Sparse SVM model as a criterion to extract features from high-dimensional biological data sets ( O ' Hara et al . ( 2013 ) ) . This paper proposes a new embedded variable selection approach called Sparse Centroid-Encoder ( SCE ) to extract features when class labels are available . Our method extends the Centroid-Encoder model ( Ghosh et al . ( 2018 ) ; Ghosh & Kirby ( 2020 ) ) , where we apply an l1 penalty to a sparsity-promoting layer between the input and the first hidden layer . We evaluate this Sparse Centroid-Encoder on diverse data sets and show that the selected features produce better generalization than other state-of-the-art techniques . Our results showed that SCE picked fewer features to obtain high classification accuracy . As a feature selection tool , SCE uses a single model for the multi-class problem without the need to create multiple one-against-one binary models typical of linear methods , e.g. , Lasso ( Tibshirani ( 1996 ) ) or Sparse SVM ( Chepushtanova et al . ( 2014 ) ) . Although SCE can be used in both binary and multi-class problems , we focus on the multi-class feature selection problem in this paper . The work of Li et al . ( 2016 ) also uses a similar sparse layer between the input and the first hidden layer with an Elastic net penalty while minimizing the classification error with a softmax layer . The authors used Theano ' s symbolic differentiation ( Bergstra et al . ( 2010 ) ) to impose sparsity .
In contrast , our approach minimizes the centroid-encoder loss with an explicit differentiation of the l1 function using the sub-gradient . The article is organized as follows : In Section 2 we present the Sparse Centroid-Encoder algorithm . In Section 3 we apply SCE to a range of benchmarking data sets taken from the literature . In Section 4 , we review related work , for both linear and non-linear feature selection . In Section 5 , we present our discussion and conclusion . 2 SPARSE CENTROID-ENCODER . Centroid-encoder ( CE ) neural networks are the starting point of our approach ( Ghosh & Kirby ( 2020 ) ; Ghosh et al . ( 2018 ) ; Aminian et al . ( 2021 ) ) . We present a brief overview of CEs and demonstrate how they can be extended to perform non-linear feature selection . 2.1 CENTROID-ENCODER . The CE neural network is a variation of an autoencoder and can be used for both visualization and classification tasks . Consider a data set with N samples and M classes . The classes are denoted Cj , j = 1 , . . . , M , and the indices of the data associated with class Cj are denoted Ij . We define the centroid of each class as cj = ( 1 / |Cj| ) ∑_{i∈Ij} xi , where |Cj| is the cardinality of class Cj . Unlike an autoencoder , which maps each point xi to itself , the CE maps each point xi to its class centroid cj by minimizing the following cost function over the parameter set θ : Lce ( θ ) = ( 1 / 2N ) ∑_{j=1}^{M} ∑_{i∈Ij} ‖ cj − f ( xi ; θ ) ‖²₂ ( 1 ) The mapping f is composed of a dimension-reducing mapping g ( the encoder ) followed by a dimension-increasing reconstruction mapping h ( the decoder ) . The output of the encoder is used as a supervised visualization tool ( Ghosh & Kirby ( 2020 ) ; Ghosh et al . ( 2018 ) ) , and attaching another layer to map to the one-hot encoded labels performs robust classification ( Aminian et al . ( 2021 ) ) . 2.2 SPARSE CENTROID-ENCODER FOR FEATURE SELECTION .
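Equation 1 can be sketched directly. The helper names are hypothetical, and `f` defaults to the identity purely for illustration; in the paper f is the encoder-decoder composition h(g(·)):

```python
import numpy as np

def class_centroids(X, y):
    # c_j = (1/|C_j|) * sum of the points in class j (Section 2.1).
    return {j: X[y == j].mean(axis=0) for j in np.unique(y)}

def centroid_encoder_loss(X, y, f=lambda x: x):
    # L_ce(theta) = (1/2N) sum_j sum_{i in I_j} ||c_j - f(x_i)||^2  (Eq. 1).
    c = class_centroids(X, y)
    sq = sum(np.sum((c[yi] - f(xi)) ** 2) for xi, yi in zip(X, y))
    return sq / (2 * len(X))
```

Note the loss is zero exactly when f sends every point to its class centroid, which is what the CE network is trained towards.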
The Sparse Centroid-encoder ( SCE ) is a modification of the centroid-encoder architecture , as shown in Figure 1 . Unlike centroid-encoder , we have not used a bottleneck architecture , as visualization is not our aim here . The input layer is connected to the first hidden layer via the sparsity-promoting layer ( SPL ) . Each node of the input layer has a weighted one-to-one connection to each node of the SPL . The number of nodes in these two layers is the same . The nodes in the SPL do not have any bias or non-linearity . The SPL is fully connected to the first hidden layer ; therefore the weighted input from the SPL is passed to the hidden layer in the same way as in a standard feed-forward network . During training , an l1 penalty is applied to the weights connecting the input layer and the SPL . The sparsity-promoting l1 penalty drives most of the weights to near zero , and the corresponding input nodes/features can be discarded . Therefore , the purpose of the SPL is to select important features from the original input . Note that we only apply the l1 penalty to the parameters of the SPL . Denote by θspl the parameters ( weights ) of the SPL and by θ the parameters of the rest of the network . The cost function of sparse centroid-encoder is given by Lsce ( θ ) = ( 1 / 2N ) ∑_{j=1}^{M} ∑_{i∈Ij} ‖ cj − f ( xi ; θ ) ‖²₂ + λ ‖ θspl ‖1 ( 2 ) where λ is the hyper-parameter that controls the sparsity . A larger value of λ promotes higher sparsity , resulting in more near-zero weights in the SPL . In other words , λ is a knob that controls the number of features selected from the input data . Like centroid-encoder , we trained sparse centroid-encoder using error backpropagation , which requires the gradient of the cost function of Equation 2 . As the l1 function is not differentiable at 0 , we implement this term using the sub-gradient . 2.3 ITERATIVE FEATURE SELECTION USING SPARSE CENTROID-ENCODER .
By design , sparse methods identify a small number of features that accomplish a classification task . If one is interested in all the discriminatory features that can be used to separate multiple classes , then one can repeat the process of removing good features . This section describes how sparse centroid-encoder ( SCE ) can be used iteratively to extract all discriminatory features from a data set ; see O ' Hara et al . ( 2013 ) for an application of this approach to sparse support vector machines . SCE is a model based on a neural network architecture ; hence , its training is a non-convex optimization . As a result , multiple runs will produce different solutions , i.e. , different feature sets on the same training set . These features may not be optimal given an unseen test set . To find robust features from a training set , we resort to frequency-based feature pruning . In this strategy , we first divide the entire training set into k folds . On each of these folds , we run SCE and pick the top N ( user-selected ) features . We repeat the process T times to get k × T feature sets . Then we count the number of occurrences of each feature and call this number the frequency of the feature . We order the features by frequency and pick the optimal number using a validation set . We present the feature selection steps in Algorithm 1 . In Figure 2 , we plotted the magnitude of the feature weights for MNIST and GSE73072 in descending order to show the ability of SCE to promote sparsity . In both cases , the model ignored many features by setting their weights to near zero . 3 EXPERIMENTAL RESULTS . Here we present a range of comparative benchmarking experiments on a variety of data sets and feature selection models .
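The frequency-based pruning step of Algorithm 1 reduces to counting how often each feature index is selected across the k × T runs. A hedged sketch, with `feature_sets` standing in for the per-run top-N lists (the function name is an assumption, not the paper's code):

```python
from collections import Counter

def frequency_rank(feature_sets):
    # Pool the top-N feature lists from the k x T SCE runs and rank each
    # feature index by its frequency, i.e., how many runs selected it.
    counts = Counter(f for run in feature_sets for f in run)
    return [f for f, _ in counts.most_common()]
```

The final feature count would then be chosen by sweeping a prefix of this ranking against validation accuracy, as described above.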
We used five disparate data sets including single-cell data ( GM12878 ) , a high-dimensional infectious disease data set ( GSE73072 ) , vision data ( MNIST ) , hyperspectral imagery ( Indian Pines ) , and GIS data ( Forest Cover ) . 3.1 EXPERIMENTAL DETAILS . We performed benchmarking experiments to compare the sparse centroid-encoder with other state-of-the-art feature selection methods . To make the evaluation objective , we compared the classification accuracy on the unseen data using the selected features of different models . All experiments share the following workflow : • SCE is used to select an optimal number of features on the training samples . The l1 penalty parameter λ = 0.01 for all experiments save for MNIST , where λ = 0.001 . • Build K classification models with these features on the training set . We used centroid-encoder as the classification model ( Aminian et al . ( 2021 ) ) . • Compute the accuracy on the sequestered test set using the K trained models and report the mean accuracy with standard deviation . 3.2 QUANTITATIVE AND QUALITATIVE ANALYSIS . Now we present the results from a comprehensive analysis across five data sets . 3.2.1 SINGLE CELL DATA . GM12878 is a single-cell data set that has been previously used to test multiclass feature selection algorithms ( Li et al . ( 2016 ) ) . The samples were collected from the annotated DNA regions of a lymphoblastoid cell line . Each sample is represented by a 93-dimensional feature vector , sampled from three classes : active enhancer ( AE ) , active promoter ( AP ) , and background ( BG ) , where each class contains 2,156 samples . The data set is split equally into separate training , validation , and test sets . We used the validation set to tune hyper-parameters and to pick the optimal number of features .
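The last step of the workflow above, reporting mean accuracy with standard deviation over the K trained models, can be sketched as follows (hypothetical helper name; the accuracy values in the test are illustrative):

```python
import statistics

def summarize_runs(accuracies):
    # Mean test accuracy with sample standard deviation over the K
    # classifiers trained on the selected features.
    return statistics.mean(accuracies), statistics.stdev(accuracies)
```

Reporting the spread alongside the mean makes runs with different random seeds comparable across feature selection methods.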
After the feature selection step , we merged the training and validation sets , trained K = 10 centroid-encoder classifiers with the selected features , and reported classification accuracy on the test set . We use the published results for deep feature selection ( DFS ) , shallow feature selection , LASSO , and Random Forest ( RF ) from the work of Li et al . to evaluate SCE , as shown in Table 1 . To compare with Li et al. , we used the top 16 features to report the mean accuracy on the test samples . We also report the test accuracy using the top 48 features picked from the validation set , which was the best result in our experiment . When restricted to the top 16 , we see that the SCE features still outperform all the other models . Among all the models , LASSO features exhibit the worst performance , with an accuracy of 81.86 % . This relatively low accuracy is not surprising , given that LASSO is a linear model . The classification performance gives a quantitative measure that does not reveal the biological significance of the selected genes . We did a literature survey of the top genes selected by sparse centroid-encoder and provide a detailed description in the appendix . Some of these genes play an essential role in transcriptional activation , e.g. , H4K20ME1 ( Barski et al . ( 2007 ) ) , TAF1 ( Wang et al . ( 2014 ) ) , H3K27ME3 ( Cai et al . ( 2021 ) ) , etc . The gene H3K27AC ( Creyghton et al . ( 2010 ) ) plays a vital role in separating active enhancers from inactive ones . Besides that , many of these genes are related to the proliferation of lymphoblastoid cancer cells , e.g. , POL2 ( Yamada et al . ( 2013 ) ) , NRSF/REST ( Kreisler et al . ( 2010 ) ) , GCN5 ( Yin et al . ( 2015 ) ) , PML ( Salomoni & Pandolfi ( 2002 ) ) , etc . This survey confirms the biological significance of the selected genes . | This paper presents a neural-network-based approach for feature selection.
The presented approach builds on existing work from the literature (centroid-encoder neural networks). As the primary novelty, the authors introduce a sparsity term (the L_1 norm of the parameters, as a regularization term) into the original centroid-encoder loss (compare Eq. 1 and Eq. 2 in the paper). The authors then introduce k-fold training to choose the dominant features and demonstrate their results on four different datasets. Both of those additions (use of the L_1 norm and k-fold training) are known in the literature, and therefore this paper reads as an application paper. | SP:98061edd34791e4b9cc83f279971910c642b5619 |
Robust Feature Selection using Sparse Centroid-Encoder | 1 INTRODUCTION . Technological advancement has made high-dimensional data readily available . For example , in bioinformatics , the researchers seek to understand the gene expression level with microarray or next-generation sequencing techniques where each point consists of over 50,000 measurements ( Pease et al . ( 1994 ) ; Shalon et al . ( 1996 ) ; Metzker ( 2010 ) ; Reuter et al . ( 2015 ) ) . The abundance of features demands the development of feature selection algorithms to improve a Machine Learning task , e.g. , classification . Another important aspect of feature selection is knowledge discovery from data . Which biomarkers are important to characterize a biological process , e.g. , the immune response to infection by respiratory viruses such as influenza ( O ’ Hara et al . ( 2013 ) ) ? Additional benefits of feature selection include improved visualization and understanding of data , reducing storage requirements , and faster algorithm training times . Feature selection can be accomplished in various ways that can be broadly categorized into the filter , wrapper , and embedded methods . In a filter method , each variable is ordered based on a score . After that , a threshold is used to select the relevant features ( Lazar et al . ( 2012 ) ) . Variables are usually ranked using correlation ( Guyon & Elisseeff ( 2003 ) ; Yu & Liu ( 2003 ) ) , and mutual information ( Vergara & Estévez ( 2014 ) ; Fleuret ( 2004 ) ) . In contrast , a wrapper method uses a model and determines the importance of a feature or a group of features by the generalization performance of the predetermined model ( El Aboudi & Benhlima ( 2016 ) ; Hsu et al . ( 2002 ) ) . Since evaluating every possible combination of features becomes an NP-hard problem , heuristics are used to find a subset of features . 
Wrapper methods are computationally intensive for larger data sets, in which case search techniques like Genetic Algorithms (GA) (Goldberg & Holland (1988)) or Particle Swarm Optimization (PSO) (Kennedy & Eberhart (1995)) are used. In embedded methods, feature selection criteria are incorporated within the model, i.e., the variables are picked during the training process (Lal et al. (2006)). Iterative Feature Removal (IFR) uses the absolute weights of a sparse SVM model as a criterion to extract features from high-dimensional biological data sets (O'Hara et al. (2013)). This paper proposes a new embedded variable selection approach called Sparse Centroid-Encoder (SCE) to extract features when class labels are available. Our method extends the Centroid-Encoder model (Ghosh et al. (2018); Ghosh & Kirby (2020)) by applying an l1 penalty to a sparsity-promoting layer between the input and the first hidden layer. We evaluate this Sparse Centroid-Encoder on diverse data sets and show that the selected features produce better generalization than other state-of-the-art techniques. Our results showed that SCE picked fewer features to obtain high classification accuracy. As a feature selection tool, SCE uses a single model for the multi-class problem, without the need to create multiple one-against-one binary models typical of linear methods, e.g., Lasso (Tibshirani (1996)) or Sparse SVM (Chepushtanova et al. (2014)). Although SCE can be used in both binary and multi-class problems, we focused on the multi-class feature selection problem in this paper. The work of Li et al. (2016) also uses a similar sparse layer between the input and the first hidden layer, with an elastic net penalty, while minimizing the classification error with a softmax layer. The authors used Theano's symbolic differentiation (Bergstra et al. (2010)) to impose sparsity.
In contrast, our approach minimizes the centroid-encoder loss with an explicit differentiation of the l1 function using the sub-gradient. The article is organized as follows: in Section 2 we present the Sparse Centroid-Encoder algorithm. In Section 3 we apply SCE to a range of benchmarking data sets taken from the literature. In Section 4, we review related work for both linear and non-linear feature selection. In Section 5, we present our discussion and conclusion. 2 SPARSE CENTROID-ENCODER . Centroid-encoder (CE) neural networks are the starting point of our approach (Ghosh & Kirby (2020); Ghosh et al. (2018); Aminian et al. (2021)). We present a brief overview of CEs and demonstrate how they can be extended to perform non-linear feature selection. 2.1 CENTROID-ENCODER . The CE neural network is a variation of an autoencoder and can be used for both visualization and classification tasks. Consider a data set with N samples and M classes. The classes are denoted $C_j$, $j = 1, \ldots, M$, and the indices of the data associated with class $C_j$ are denoted $I_j$. We define the centroid of each class as $c_j = \frac{1}{|C_j|} \sum_{i \in I_j} x^i$, where $|C_j|$ is the cardinality of class $C_j$. Unlike an autoencoder, which maps each point $x^i$ to itself, the CE maps each point $x^i$ to its class centroid $c_j$ by minimizing the following cost function over the parameter set $\theta$: $$\mathcal{L}_{ce}(\theta) = \frac{1}{2N} \sum_{j=1}^{M} \sum_{i \in I_j} \left\| c_j - f(x^i; \theta) \right\|_2^2 \qquad (1)$$ The mapping $f$ is composed of a dimension-reducing mapping $g$ (encoder) followed by a dimension-increasing reconstruction mapping $h$ (decoder). The output of the encoder is used as a supervised visualization tool (Ghosh & Kirby (2020); Ghosh et al. (2018)), and attaching another layer to map to the one-hot encoded labels performs robust classification (Aminian et al. (2021)). 2.2 SPARSE CENTROID-ENCODER FOR FEATURE SELECTION .
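As a brief aside, the centroid-encoder loss of Eq. (1) in Section 2.1 is straightforward to compute. The following is a minimal numerical sketch, not the authors' implementation: the toy data are made up, and a bare callable `f` stands in for the encoder-decoder network.

```python
import numpy as np

def class_centroids(X, y):
    """Centroid c_j of each class, as defined above Eq. (1)."""
    return {j: X[y == j].mean(axis=0) for j in np.unique(y)}

def centroid_encoder_loss(X, y, f):
    """L_ce = (1/2N) * sum_j sum_{i in I_j} ||c_j - f(x_i)||_2^2."""
    c = class_centroids(X, y)
    N = len(X)
    return sum(np.sum((c[yi] - f(xi)) ** 2) for xi, yi in zip(X, y)) / (2 * N)

# Toy data: two classes with centroids (1,1) and (11,11).
X = np.array([[0.0, 0.0], [2.0, 2.0], [10.0, 10.0], [12.0, 12.0]])
y = np.array([0, 0, 1, 1])
cents = class_centroids(X, y)

# An identity "network" incurs a nonzero loss; a map sending every
# point exactly to its class centroid drives the loss to zero.
print(centroid_encoder_loss(X, y, lambda x: x))  # 1.0
print(centroid_encoder_loss(X, y, lambda x: cents[0] if x[0] < 5 else cents[1]))  # 0.0
```

This makes concrete why minimizing Eq. (1) pulls same-class points together: the loss is exactly zero when the network collapses each class onto its centroid.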
The Sparse Centroid-Encoder (SCE) is a modification of the centroid-encoder architecture, as shown in Figure 1. Unlike the centroid-encoder, we have not used a bottleneck architecture, as visualization is not our aim here. The input layer is connected to the first hidden layer via the sparsity-promoting layer (SPL). Each node of the input layer has a weighted one-to-one connection to the corresponding node of the SPL; the number of nodes in these two layers is the same. The nodes in the SPL don't have any bias or non-linearity. The SPL is fully connected to the first hidden layer; therefore the weighted input from the SPL is passed to the hidden layer in the same way as in a standard feed-forward network. During training, an l1 penalty is applied to the weights connecting the input layer and the SPL. The sparsity-promoting l1 penalty drives most of the weights to near zero, and the corresponding input nodes/features can be discarded. Therefore, the purpose of the SPL is to select important features from the original input. Note that we only apply the l1 penalty to the parameters of the SPL. Denote by $\theta_{spl}$ the parameters (weights) of the SPL and by $\theta$ the parameters of the rest of the network. The cost function of the sparse centroid-encoder is given by $$\mathcal{L}_{sce}(\theta) = \frac{1}{2N} \sum_{j=1}^{M} \sum_{i \in I_j} \left\| c_j - f(x^i; \theta) \right\|_2^2 + \lambda \left\| \theta_{spl} \right\|_1 \qquad (2)$$ where $\lambda$ is the hyper-parameter which controls the sparsity. A larger value of $\lambda$ promotes higher sparsity, resulting in more near-zero weights in the SPL. In other words, $\lambda$ is a knob that controls the number of features selected from the input data. Like the centroid-encoder, we trained the sparse centroid-encoder using error backpropagation, which requires the gradient of the cost function of Equation 2. As the l1 function is not differentiable at 0, we implement this term using the sub-gradient. 2.3 ITERATIVE FEATURE SELECTION USING SPARSE CENTROID-ENCODER .
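Before moving on, the SPL of Section 2.2 and the sub-gradient handling of the l1 term in Eq. (2) can be sketched in a few lines. The weights, learning rate, and λ below are toy values chosen for illustration, not the paper's settings.

```python
import numpy as np

def spl_forward(x, w):
    """The SPL multiplies each input feature by its own weight:
    no bias, no non-linearity, one weight per feature."""
    return x * w

def l1_subgrad(w):
    """Sub-gradient of ||w||_1; sign(0) = 0 is a valid sub-gradient
    at the non-differentiable point w = 0."""
    return np.sign(w)

w = np.array([1.5, -0.8, 0.3, -2.0, 0.05, 1.0])  # toy SPL weights
lr, lam = 0.1, 0.5

# One gradient step on the penalty term alone shrinks every weight
# toward zero by lr * lam; the smallest weight reaches exactly 0.
w_new = w - lr * lam * l1_subgrad(w)
print(w_new)  # [ 1.45 -0.75  0.25 -1.95  0.    0.95]
```

This is the mechanism behind Figure 2 in the paper: under the l1 subgradient, unimportant features are driven toward zero weight, while the data-fitting term of Eq. (2) keeps the useful weights away from zero.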
By design, sparse methods identify a small number of features that accomplish a classification task. If one is interested in all the discriminatory features that can be used to separate multiple classes, then one can repeat the process of removing good features. This section describes how sparse centroid-encoder (SCE) can be used iteratively to extract all discriminatory features from a data set; see O'Hara et al. (2013) for an application of this approach to sparse support vector machines. SCE is a model based on a neural network architecture; hence, it involves non-convex optimization. As a result, multiple runs will produce different solutions, i.e., different feature sets on the same training set. These features may not be optimal given an unseen test set. To identify robust features from a training set, we resort to frequency-based feature pruning. In this strategy, we first divide the entire training set into k folds. On each of these folds, we run SCE and pick the top N (user-selected) features. We repeat the process T times to get k × T feature sets. Then we count the number of occurrences of each feature and call this number the frequency of the feature. We order the features by frequency and pick the optimal number using a validation set. We present the feature selection steps in Algorithm 1. In Figure 2, we plot the magnitude of the feature weights for MNIST and GSE73072 in descending order to show the ability of SCE to promote sparsity. In both cases, the model ignored many features by setting their weights to near zero. 3 EXPERIMENTAL RESULTS . Here we present a range of comparative benchmarking experiments on a variety of data sets and feature selection models.
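The frequency-based pruning of Section 2.3 (summarized in Algorithm 1) can be sketched as follows. Here `sets` stands in for the k × T feature sets produced by the SCE runs and is purely illustrative; `frequency_rank` is a hypothetical helper name.

```python
from collections import Counter

def frequency_rank(feature_sets):
    """Order features by how often they appear across the k*T
    selected sets (the 'frequency' of a feature)."""
    counts = Counter(f for s in feature_sets for f in s)
    return [f for f, _ in counts.most_common()]

# k = 3 folds, T = 2 repeats -> 6 feature sets of top-N = 3 features each.
# Feature 7 is selected in every run, so it ranks first.
sets = [{7, 2, 5}, {7, 5, 1}, {7, 2, 9}, {7, 5, 2}, {7, 1, 3}, {7, 9, 5}]
ranking = frequency_rank(sets)
print(ranking[0])  # 7
```

In the full procedure, one would then sweep a prefix of `ranking` and pick the prefix length that maximizes validation accuracy.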
We used five disparate data sets, including single-cell data (GM12878), a high-dimensional infectious disease data set (GSE73072), handwritten digit images (MNIST), hyperspectral imagery (Indian Pines), and GIS data (Forest Cover). 3.1 EXPERIMENTAL DETAILS . We did benchmarking experiments to compare the sparse centroid-encoder with other state-of-the-art feature selection methods. To make the evaluation objective, we compared the classification accuracy on unseen data using the selected features of the different models. All experiments share the following workflow: • SCE is used to select an optimal number of features on the training samples. The l1 penalty parameter λ = 0.01 for all experiments save for MNIST, where λ = 0.001. • Build K classification models with these features on the training set. We used centroid-encoder as the classification model (Aminian et al. (2021)). • Compute the accuracy on the sequestered test set using the K trained models and report the mean accuracy with standard deviation. 3.2 QUANTITATIVE AND QUALITATIVE ANALYSIS . Now we present the results from a comprehensive analysis across five data sets. 3.2.1 SINGLE CELL DATA . GM12878 is a single-cell data set that has been previously used to test multiclass feature selection algorithms (Li et al. (2016)). The samples were collected from the annotated DNA region of the lymphoblastoid cell line. Each sample is represented by 93-dimensional features sampled from three classes: active enhancer (AE), active promoter (AP), and background (BG), where each class contains 2,156 samples. The data set is split equally into separate training, validation, and test sets. We used the validation set to tune hyper-parameters and to pick the optimal number of features.
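The reporting step of the Section 3.1 workflow is a simple mean ± standard deviation over the K models; a sketch, with placeholder accuracies rather than results from the paper:

```python
import statistics

# Evaluate K trained models on the sequestered test set and report
# mean accuracy with sample standard deviation.
test_accuracies = [0.94, 0.95, 0.93, 0.96, 0.94,
                   0.95, 0.94, 0.93, 0.95, 0.94]  # K = 10 toy values
mean_acc = statistics.mean(test_accuracies)
std_acc = statistics.stdev(test_accuracies)
print(f"{mean_acc:.3f} +/- {std_acc:.3f}")  # 0.943 +/- 0.009
```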
| This paper proposes a sparse centroid-encoder network by inserting a sparsity promoting layer between the input layer and the first hidden layer of the centroid-encoder network (Ghosh et al. (2018), Aminian et al. (2021)). In such an architecture, the authors define the loss function as the centroid-encoder loss (Ghosh et al. (2018), Aminian et al. (2021)) with a l_1 penalty on the sparsity promoting layer. Feature selection experiments are conducted on five datasets. | SP:98061edd34791e4b9cc83f279971910c642b5619 |
| The authors propose a novel feature selection algorithm by introducing a L1-penalty over the Centroid-Encoder loss function.
The experimental results show that the algorithm can outperform DFS algorithms on certain datasets. The main contribution of this paper is the addition of l1-regularization to the Centroid-Encoder loss, changing it into a feature selection algorithm. | SP:98061edd34791e4b9cc83f279971910c642b5619
Batch size-invariance for policy optimization | 1 INTRODUCTION . Policy gradient-based methods for reinforcement learning have enjoyed great success in recent years . The stability and reliability of these methods is typically improved by controlling the size of policy updates , using either a “ trust region ” ( TRPO ) or a surrogate objective ( PPO ) ( Schulman et al. , 2015 ; 2017 ) . The usual justification for this is that we can not trust updates that take us too far from the policy used to collect experience , called the behavior policy . In this work we identify a subtle flaw with this : the behavior policy is irrelevant to the justification . Instead , what matters is that we control how fast the policy is updated , or put another way , that we approximate the natural policy gradient ( Kakade , 2001 ) . Our key insight is that the “ old ” policy in these methods serves two independent purposes . The first purpose is for off-policy corrections , via importance sampling , for which the old policy must be the behavior policy . The second purpose is to control the size of policy updates , for which the old policy can be any recent policy , which we call the proximal policy . It does not matter whether the proximal policy is also the behavior policy ; it only matters how old the proximal policy is . We demonstrate this by running PPO with stale data collected using a policy from multiple iterations ago , which causes performance to quickly degrade unless the proximal policy is decoupled from the behavior policy . Our insight allows us to make PPO batch size-invariant , meaning that when the batch size is changed , we can preserve behavior , as a function of the number of examples processed , by changing other hyperparameters ( as long as the batch size is sufficiently small ) . We achieve this by using an exponentially-weighted moving average ( EWMA ) of the policy network ’ s weights as the network for the proximal policy . 
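The EWMA of the policy network's weights can be sketched in a few lines. The decay value and the toy "training" loop below are illustrative only; in the paper the decay is tied to other hyperparameters such as the batch size.

```python
import numpy as np

def ewma_update(theta_prox, theta, beta=0.9):
    """theta_prox <- beta * theta_prox + (1 - beta) * theta."""
    return beta * theta_prox + (1.0 - beta) * theta

theta_prox = np.zeros(3)
for step in range(5):
    theta = np.full(3, float(step + 1))  # stand-in for the optimizer's output
    theta_prox = ewma_update(theta_prox, theta)

# The averaged weights lag behind the current policy (theta = 5),
# giving a "recent but older" network to use as the proximal policy.
print(theta_prox[0] < 5.0)  # True
```

The key property is that the proximal network changes smoothly, at a rate set by `beta`, regardless of how noisy individual optimizer steps are.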
Batch size-invariance has been studied many times before ( see Section 3.1 ) , sometimes under the name “ perfect scaling ” . It is of practical benefit when we wish to increase the batch size to reduce gradient variance , but computational resources such as GPU memory do not allow this . In such a situation , we can instead adjust other hyperparameters formulaically , thereby spreading out the increased computational load over time . The remainder of the paper is structured as follows . • In Section 2 , we explain the difference between the proximal and behavior policies , and show how to decouple them in PPO ’ s objectives . • In Section 3 , we explain the concept of batch size-invariance , and how it applies to SGD and Adam ( Kingma & Ba , 2014 ) . • In Section 4 , we introduce PPO-EWMA and PPG-EWMA , variants of PPO and PPG ( Cobbe et al. , 2020 ) that make use of our decoupled objectives , and show how to make them batch size-invariant at small batch sizes . • In Section 5 , we provide experimental evidence for our central claims : that decoupling the proximal policy from the behavior policy can be beneficial , and that it allows us to achieve batch size-invariant policy optimization . • Finally , in Section 6 , we discuss the theoretical and practical implications of our results . 2 DECOUPLED POLICY OBJECTIVES . In this section we explain the difference between the proximal and behavior policies , and introduce new versions of PPO ’ s objectives in which they have been decoupled . PPO alternates between sampling data through interaction with the environment , and optimizing a surrogate objective . The policy used for sampling is denoted πθold , and is used by the objective in two different ways . This is easiest to see with the KL penalized objective ( Schulman et al. 
, 2017, equation (8)): $$L^{KLPEN}(\theta) := \hat{\mathbb{E}}_t \left[ \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)} \hat{A}_t - \beta \, \mathrm{KL}\!\left[\pi_{\theta_{old}}(\cdot \mid s_t), \pi_\theta(\cdot \mid s_t)\right] \right],$$ where $\hat{A}_t$ is an estimator of the advantage at timestep $t$, and $\hat{\mathbb{E}}_t[\ldots]$ indicates the empirical average over a finite batch of timesteps $t$. The first use of $\pi_{\theta_{old}}$ in this expression is as part of an importance sampling ratio. In order for the policy gradient estimate to be unbiased, this policy needs to be the one that was used for sampling, so we call this the behavior policy $\pi_{\theta_{behav}}$. The second use of $\pi_{\theta_{old}}$ is as a recent target to pull the current policy towards, so we call this the proximal policy $\pi_{\theta_{prox}}$. Our key insight is that the proximal policy need not equal the behavior policy. As we will show experimentally, it matters how old the proximal policy is, but it does not matter whether or not the proximal policy was used for sampling. We therefore define the decoupled KL penalized objective $$L^{KLPEN}_{decoupled}(\theta) := \hat{\mathbb{E}}_t \left[ \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{behav}}(a_t \mid s_t)} \hat{A}_t - \beta \, \mathrm{KL}\!\left[\pi_{\theta_{prox}}(\cdot \mid s_t), \pi_\theta(\cdot \mid s_t)\right] \right],$$ where $\pi_{\theta_{behav}}$ is the policy used for sampling, and $\pi_{\theta_{prox}}$ is a recent policy yet to be specified. It is less obvious how to decouple the clipped PPO objective, because $\pi_{\theta_{old}}$ only appears once in that expression (Schulman et al., 2017, equation (7)): $$L^{CLIP}(\theta) := \hat{\mathbb{E}}_t \left[ \min\!\left(r_t(\theta) \hat{A}_t, \; \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon) \hat{A}_t\right) \right], \quad \text{where } r_t(\theta) := \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}.$$ However, we can rewrite this objective as $$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t \left[ \frac{1}{\pi_{\theta_{old}}} \min\!\left(\pi_\theta \hat{A}_t, \; \mathrm{clip}\!\left(\pi_\theta, (1-\epsilon)\pi_{\theta_{old}}, (1+\epsilon)\pi_{\theta_{old}}\right) \hat{A}_t\right) \right]$$ (omitting the policy arguments $(a_t \mid s_t)$ for brevity). Now the first use of $\pi_{\theta_{old}}$ is as part of an importance sampling ratio, for which we must use the behavior policy, and the second and third uses are in applying the implicit KL penalty, for which we can use the proximal policy.
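The algebraic rewrite of the clipped objective is easy to check numerically. This sketch (toy probabilities and advantages, scalar per-timestep form) verifies that the ratio form and the rewritten form with explicit clip bounds agree, since both clip bounds and the outer factor are positive.

```python
import numpy as np

def clip_obj_ratio_form(pi, pi_old, adv, eps=0.2):
    """L^CLIP written with the ratio r_t = pi / pi_old."""
    r = pi / pi_old
    return np.mean(np.minimum(r * adv, np.clip(r, 1 - eps, 1 + eps) * adv))

def clip_obj_rewritten(pi, pi_old, adv, eps=0.2):
    """The rewritten form: clip pi itself against (1 +/- eps) * pi_old,
    then divide by pi_old outside the min."""
    inner = np.minimum(pi * adv,
                       np.clip(pi, (1 - eps) * pi_old, (1 + eps) * pi_old) * adv)
    return np.mean(inner / pi_old)

pi = np.array([0.5, 0.3, 0.25])      # current policy pi_theta(a_t | s_t)
pi_old = np.array([0.4, 0.35, 0.24]) # old policy
adv = np.array([1.0, -0.5, 2.0])     # toy advantage estimates

print(np.isclose(clip_obj_ratio_form(pi, pi_old, adv),
                 clip_obj_rewritten(pi, pi_old, adv)))  # True
```

Because the two forms are term-by-term identical, substituting the behavior policy into the first (importance sampling) use of `pi_old` and the proximal policy into the clip bounds yields the decoupled objective described next.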
We therefore define the decoupled clipped objective

$$L^{CLIP}_{\mathrm{decoupled}}(\theta) := \hat{\mathbb{E}}_t\left[\frac{\pi_{\theta_{\mathrm{prox}}}(a_t \mid s_t)}{\pi_{\theta_{\mathrm{behav}}}(a_t \mid s_t)}\min\left(r_t(\theta)\hat{A}_t,\, \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \quad \text{where } r_t(\theta) := \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{prox}}}(a_t \mid s_t)}.$$

As a sanity check, note that if we set the KL penalty coefficient $\beta = 0$ or the clipping parameter $\epsilon = \infty$, then the dependence on the proximal policy disappears, and we recover the vanilla (importance-sampled) policy gradient objective (Schulman et al., 2017, equation (6)). A similar decoupled policy objective is also proposed in Mirror Descent Policy Optimization (Tomar et al., 2020, Section 5.1). 3 BATCH SIZE-INVARIANCE . We say an algorithm is batch size-invariant to mean that when the batch size is changed, the original behavior can be approximately recovered by adjusting other hyperparameters to compensate. Here we consider behavior as a function of the total number of examples processed, so another way to put this is that doubling the batch size halves the number of steps needed. Shallue et al. (2018) and Zhang et al. (2019) refer to this as "perfect scaling". We treat batch size-invariance as a descriptive property that can hold to some degree, rather than as a binary property. In practice, the original behavior can never be recovered perfectly, and the extent to which it can be recovered depends on both how much and the direction in which the batch size is changed. 3.1 BATCH SIZE-INVARIANCE FOR STOCHASTIC GRADIENT DESCENT . Stochastic gradient descent (SGD) is batch size-invariant, up until the batch size approaches some critical batch size. This is the batch size at which the gradient has a signal-to-noise ratio of around 1. At batch sizes smaller than this, changes to the batch size can be compensated for by a directly proportional adjustment to the learning rate. This core observation has been made many times before (Mandt et al., 2017; Goyal et al., 2017; Smith et al.
, 2017; Hardin, 2017; Ma et al., 2018; Shallue et al., 2018; McCandlish et al., 2018). A discussion of this and other previous work can be found in Appendix C. Sketch explanation. For the benefit of the reader's intuition, we sketch the explanation for SGD's batch size-invariance. For a much more thorough explanation, we refer the reader to Mandt et al. (2017). Consider running SGD on a loss function $L(\theta; x)$ of a parameter vector $\theta$ and a data point $x$. Two steps with batch size $n$ and learning rate $\alpha$ correspond to the update rule

$$\theta_{t+2} = \theta_t - \frac{\alpha}{n}\sum_{x \in B_t} \nabla_\theta L(\theta_t; x) - \frac{\alpha}{n}\sum_{x \in B_{t+1}} \nabla_\theta L(\theta_{t+1}; x),$$

where $B_t$ and $B_{t+1}$ are the next two batches of size $n$. On the other hand, a single step with batch size $2n$ and learning rate $2\alpha$ corresponds to the update rule

$$\theta_{t+2} = \theta_t - \frac{2\alpha}{2n}\sum_{x \in B_t \cup B_{t+1}} \nabla_\theta L(\theta_t; x).$$

These update rules are very similar, the only difference being whether the gradient for $B_{t+1}$ is evaluated at $\theta_t$ or $\theta_{t+1}$. If the batch size is small compared to the critical batch size, then the difference between $\theta_t$ and $\theta_{t+1}$ is mostly noise, and moreover this noise is small compared to the total noise accumulated by $\theta_t$ over previous updates. Hence the two update rules behave very similarly. A good mental model of SGD in this small-batch regime is of the parameter vector making small, mostly random steps around the loss landscape. Over many steps, the noise is canceled out and the parameter vector gradually moves in the direction of steepest descent, but a single additional step makes almost no difference to gradient evaluations. In more formal terms, SGD is numerically integrating a stochastic differential equation (SDE). Changing the learning rate in proportion to the batch size leaves the SDE unchanged, and only affects the step size of the numerical integration.
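The two update rules above can be compared numerically. The sketch below uses a toy quadratic loss with made-up constants: two steps of batch size $n$ at learning rate $\alpha$ land almost exactly where one step of batch size $2n$ at learning rate $2\alpha$ does, with a gap of order $\alpha^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, alpha, n = 0.0, 0.01, 4
data = rng.normal(size=2 * n)
Bt, Bt1 = data[:n], data[n:]            # the next two batches of size n

# gradient of L(theta; x) = 0.5 * (theta - x)^2, averaged over a batch
grad = lambda theta, xs: float(np.mean(theta - xs))

# two steps: batch size n, learning rate alpha
theta1 = theta0 - alpha * grad(theta0, Bt)
theta_two = theta1 - alpha * grad(theta1, Bt1)

# one step: batch size 2n, learning rate 2*alpha
theta_one = theta0 - 2 * alpha * grad(theta0, data)

# the two rules differ only in where the gradient for Bt1 is evaluated,
# so the discrepancy is O(alpha^2)
gap = abs(theta_two - theta_one)
```

Shrinking `alpha` further shrinks the gap quadratically, which is the sense in which the two rules coincide in the small-batch regime.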
Once the step size is small enough (the condition that gives rise to the critical batch size), the discretization error is dominated by the noise, and so the step size stops mattering. 3.2 BATCH SIZE-INVARIANCE FOR ADAM . Adam (Kingma & Ba, 2014) is a popular variant of SGD, and is also batch size-invariant until the batch size approaches a critical batch size (which may be different to the critical batch size for SGD) (Zhang et al., 2019). To compensate for the batch size being divided by some constant $c$, one must make the following adjustments (Hardin, 2017): • Divide the step size $\alpha$ by $\sqrt{c}$. • (Raise the exponential decay rates $\beta_1$ and $\beta_2$ to the power of $1/c$.) The first adjustment should be contrasted with the linear learning rate adjustment for vanilla SGD. We discuss the reason for this difference and provide empirical support for the square root rule in Appendix D. The second adjustment is much less important in practice, since Adam is fairly robust to the $\beta_1$ and $\beta_2$ hyperparameters (hence it has been parenthesized). Note that $\beta_1$ also affects the relationship between the current policy and the proximal policy in policy optimization. For simplicity, we omitted this adjustment in most of our experiments, but included it in some additional experiments that are detailed in Appendix E. | The paper proposes decoupling the behavior and proximal policies used in policy optimization algorithms such as PPO and PPG. Typically, the behavior policy itself is used as the proximal policy, i.e. the policy to which updates need to stay close (to maintain the trust region). In such a case, using stale or old data can lead to bad performance, leading on-policy methods to use data from only the most recent sampling iteration. The paper then discusses the batch size-invariance property, which refers to performance remaining unchanged when the batch size is changed.
This is typically achieved by adjusting the optimization parameters such as the learning rate (e.g., if the batch size is doubled, the learning rate should be doubled as well). Finally, the authors perform experiments to test how the decoupling affects the usage of stale data, how batch size-invariance can be achieved with the decoupled objective, and how much better the decoupling performs compared to when the same policy is used as both the behavior and proximal policies. | SP:9c8ea469dd80ce7eebbc9d57faec9cb64d525244 |
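The Adam batch-size adjustments described in Section 3.2 above can be written as a small helper. The function name is hypothetical; the rule itself (divide the step size by the square root of the batch-size reduction factor, raise the decay rates to its reciprocal) is the one stated in the text:

```python
def adam_hparams_for_smaller_batch(alpha, beta1, beta2, c):
    """Adjust Adam hyperparameters when the batch size is divided by c:
    divide the step size by sqrt(c), and raise the exponential decay
    rates beta1 and beta2 to the power 1/c (Hardin, 2017)."""
    return alpha / c ** 0.5, beta1 ** (1.0 / c), beta2 ** (1.0 / c)

# e.g. dividing the batch size by 4 halves the step size
# and moves the decay rates closer to 1
```

This should be contrasted with vanilla SGD, where the step size would instead be divided by $c$ itself.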
Batch size-invariance for policy optimization | 1 INTRODUCTION . Policy gradient-based methods for reinforcement learning have enjoyed great success in recent years . The stability and reliability of these methods is typically improved by controlling the size of policy updates , using either a “ trust region ” ( TRPO ) or a surrogate objective ( PPO ) ( Schulman et al. , 2015 ; 2017 ) . The usual justification for this is that we can not trust updates that take us too far from the policy used to collect experience , called the behavior policy . In this work we identify a subtle flaw with this : the behavior policy is irrelevant to the justification . Instead , what matters is that we control how fast the policy is updated , or put another way , that we approximate the natural policy gradient ( Kakade , 2001 ) . Our key insight is that the “ old ” policy in these methods serves two independent purposes . The first purpose is for off-policy corrections , via importance sampling , for which the old policy must be the behavior policy . The second purpose is to control the size of policy updates , for which the old policy can be any recent policy , which we call the proximal policy . It does not matter whether the proximal policy is also the behavior policy ; it only matters how old the proximal policy is . We demonstrate this by running PPO with stale data collected using a policy from multiple iterations ago , which causes performance to quickly degrade unless the proximal policy is decoupled from the behavior policy . Our insight allows us to make PPO batch size-invariant , meaning that when the batch size is changed , we can preserve behavior , as a function of the number of examples processed , by changing other hyperparameters ( as long as the batch size is sufficiently small ) . We achieve this by using an exponentially-weighted moving average ( EWMA ) of the policy network ’ s weights as the network for the proximal policy . 
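The exponentially-weighted moving average of the policy network's weights mentioned above can be sketched as follows. The class name, the decay constant `beta_prox`, and the plain (undebiased) averaging are assumptions for illustration, not necessarily the exact scheme used in the paper:

```python
import numpy as np

class WeightEWMA:
    """Exponentially-weighted moving average of a network's weight arrays,
    usable as the parameters of the proximal-policy network."""
    def __init__(self, weights, beta_prox=0.99):
        self.avg = [np.array(w, dtype=float) for w in weights]
        self.beta = beta_prox

    def update(self, weights):
        # avg <- beta * avg + (1 - beta) * current weights
        for a, w in zip(self.avg, weights):
            a *= self.beta
            a += (1.0 - self.beta) * np.asarray(w, dtype=float)
        return self.avg
```

A larger `beta_prox` makes the proximal policy effectively "older", which is the knob that can be adjusted when the batch size changes.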
Batch size-invariance has been studied many times before ( see Section 3.1 ) , sometimes under the name “ perfect scaling ” . It is of practical benefit when we wish to increase the batch size to reduce gradient variance , but computational resources such as GPU memory do not allow this . In such a situation , we can instead adjust other hyperparameters formulaically , thereby spreading out the increased computational load over time . The remainder of the paper is structured as follows . • In Section 2 , we explain the difference between the proximal and behavior policies , and show how to decouple them in PPO ’ s objectives . • In Section 3 , we explain the concept of batch size-invariance , and how it applies to SGD and Adam ( Kingma & Ba , 2014 ) . • In Section 4 , we introduce PPO-EWMA and PPG-EWMA , variants of PPO and PPG ( Cobbe et al. , 2020 ) that make use of our decoupled objectives , and show how to make them batch size-invariant at small batch sizes . • In Section 5 , we provide experimental evidence for our central claims : that decoupling the proximal policy from the behavior policy can be beneficial , and that it allows us to achieve batch size-invariant policy optimization . • Finally , in Section 6 , we discuss the theoretical and practical implications of our results . 2 DECOUPLED POLICY OBJECTIVES . In this section we explain the difference between the proximal and behavior policies , and introduce new versions of PPO ’ s objectives in which they have been decoupled . PPO alternates between sampling data through interaction with the environment , and optimizing a surrogate objective . The policy used for sampling is denoted πθold , and is used by the objective in two different ways . This is easiest to see with the KL penalized objective ( Schulman et al. 
, 2017, equation (8)):

$$L^{KLPEN}(\theta) := \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}\hat{A}_t - \beta\,\mathrm{KL}\left[\pi_{\theta_{\mathrm{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right],$$

where $\hat{A}_t$ is an estimator of the advantage at timestep $t$, and $\hat{\mathbb{E}}_t[\ldots]$ indicates the empirical average over a finite batch of timesteps $t$. The first use of $\pi_{\theta_{\mathrm{old}}}$ in this expression is as part of an importance sampling ratio. In order for the policy gradient estimate to be unbiased, this policy needs to be the one that was used for sampling, so we call this the behavior policy $\pi_{\theta_{\mathrm{behav}}}$. The second use of $\pi_{\theta_{\mathrm{old}}}$ is as a recent target to pull the current policy towards, so we call this the proximal policy $\pi_{\theta_{\mathrm{prox}}}$. Our key insight is that the proximal policy need not equal the behavior policy. As we will show experimentally, it matters how old the proximal policy is, but it does not matter whether or not the proximal policy was used for sampling. We therefore define the decoupled KL penalized objective

$$L^{KLPEN}_{\mathrm{decoupled}}(\theta) := \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{behav}}}(a_t \mid s_t)}\hat{A}_t - \beta\,\mathrm{KL}\left[\pi_{\theta_{\mathrm{prox}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right],$$

where $\pi_{\theta_{\mathrm{behav}}}$ is the policy used for sampling, and $\pi_{\theta_{\mathrm{prox}}}$ is a recent policy yet to be specified. It is less obvious how to decouple the clipped PPO objective, because $\pi_{\theta_{\mathrm{old}}}$ only appears once in that expression (Schulman et al., 2017, equation (7)):

$$L^{CLIP}(\theta) := \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\, \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \quad \text{where } r_t(\theta) := \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.$$

However, we can rewrite this objective as

$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\frac{1}{\pi_{\theta_{\mathrm{old}}}}\min\left(\pi_\theta \hat{A}_t,\, \mathrm{clip}\left(\pi_\theta,\, (1-\epsilon)\pi_{\theta_{\mathrm{old}}},\, (1+\epsilon)\pi_{\theta_{\mathrm{old}}}\right)\hat{A}_t\right)\right]$$

(omitting the policy arguments $(a_t \mid s_t)$ for brevity). Now the first use of $\pi_{\theta_{\mathrm{old}}}$ is as part of an importance sampling ratio, for which we must use the behavior policy, and the second and third uses are in applying the implicit KL penalty, for which we can use the proximal policy.
We therefore define the decoupled clipped objective

$$L^{CLIP}_{\mathrm{decoupled}}(\theta) := \hat{\mathbb{E}}_t\left[\frac{\pi_{\theta_{\mathrm{prox}}}(a_t \mid s_t)}{\pi_{\theta_{\mathrm{behav}}}(a_t \mid s_t)}\min\left(r_t(\theta)\hat{A}_t,\, \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \quad \text{where } r_t(\theta) := \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{prox}}}(a_t \mid s_t)}.$$

As a sanity check, note that if we set the KL penalty coefficient $\beta = 0$ or the clipping parameter $\epsilon = \infty$, then the dependence on the proximal policy disappears, and we recover the vanilla (importance-sampled) policy gradient objective (Schulman et al., 2017, equation (6)). A similar decoupled policy objective is also proposed in Mirror Descent Policy Optimization (Tomar et al., 2020, Section 5.1). 3 BATCH SIZE-INVARIANCE . We say an algorithm is batch size-invariant to mean that when the batch size is changed, the original behavior can be approximately recovered by adjusting other hyperparameters to compensate. Here we consider behavior as a function of the total number of examples processed, so another way to put this is that doubling the batch size halves the number of steps needed. Shallue et al. (2018) and Zhang et al. (2019) refer to this as "perfect scaling". We treat batch size-invariance as a descriptive property that can hold to some degree, rather than as a binary property. In practice, the original behavior can never be recovered perfectly, and the extent to which it can be recovered depends on both how much and the direction in which the batch size is changed. 3.1 BATCH SIZE-INVARIANCE FOR STOCHASTIC GRADIENT DESCENT . Stochastic gradient descent (SGD) is batch size-invariant, up until the batch size approaches some critical batch size. This is the batch size at which the gradient has a signal-to-noise ratio of around 1. At batch sizes smaller than this, changes to the batch size can be compensated for by a directly proportional adjustment to the learning rate. This core observation has been made many times before (Mandt et al., 2017; Goyal et al., 2017; Smith et al.
, 2017; Hardin, 2017; Ma et al., 2018; Shallue et al., 2018; McCandlish et al., 2018). A discussion of this and other previous work can be found in Appendix C. Sketch explanation. For the benefit of the reader's intuition, we sketch the explanation for SGD's batch size-invariance. For a much more thorough explanation, we refer the reader to Mandt et al. (2017). Consider running SGD on a loss function $L(\theta; x)$ of a parameter vector $\theta$ and a data point $x$. Two steps with batch size $n$ and learning rate $\alpha$ correspond to the update rule

$$\theta_{t+2} = \theta_t - \frac{\alpha}{n}\sum_{x \in B_t} \nabla_\theta L(\theta_t; x) - \frac{\alpha}{n}\sum_{x \in B_{t+1}} \nabla_\theta L(\theta_{t+1}; x),$$

where $B_t$ and $B_{t+1}$ are the next two batches of size $n$. On the other hand, a single step with batch size $2n$ and learning rate $2\alpha$ corresponds to the update rule

$$\theta_{t+2} = \theta_t - \frac{2\alpha}{2n}\sum_{x \in B_t \cup B_{t+1}} \nabla_\theta L(\theta_t; x).$$

These update rules are very similar, the only difference being whether the gradient for $B_{t+1}$ is evaluated at $\theta_t$ or $\theta_{t+1}$. If the batch size is small compared to the critical batch size, then the difference between $\theta_t$ and $\theta_{t+1}$ is mostly noise, and moreover this noise is small compared to the total noise accumulated by $\theta_t$ over previous updates. Hence the two update rules behave very similarly. A good mental model of SGD in this small-batch regime is of the parameter vector making small, mostly random steps around the loss landscape. Over many steps, the noise is canceled out and the parameter vector gradually moves in the direction of steepest descent, but a single additional step makes almost no difference to gradient evaluations. In more formal terms, SGD is numerically integrating a stochastic differential equation (SDE). Changing the learning rate in proportion to the batch size leaves the SDE unchanged, and only affects the step size of the numerical integration.
Once the step size is small enough (the condition that gives rise to the critical batch size), the discretization error is dominated by the noise, and so the step size stops mattering. 3.2 BATCH SIZE-INVARIANCE FOR ADAM . Adam (Kingma & Ba, 2014) is a popular variant of SGD, and is also batch size-invariant until the batch size approaches a critical batch size (which may be different to the critical batch size for SGD) (Zhang et al., 2019). To compensate for the batch size being divided by some constant $c$, one must make the following adjustments (Hardin, 2017): • Divide the step size $\alpha$ by $\sqrt{c}$. • (Raise the exponential decay rates $\beta_1$ and $\beta_2$ to the power of $1/c$.) The first adjustment should be contrasted with the linear learning rate adjustment for vanilla SGD. We discuss the reason for this difference and provide empirical support for the square root rule in Appendix D. The second adjustment is much less important in practice, since Adam is fairly robust to the $\beta_1$ and $\beta_2$ hyperparameters (hence it has been parenthesized). Note that $\beta_1$ also affects the relationship between the current policy and the proximal policy in policy optimization. For simplicity, we omitted this adjustment in most of our experiments, but included it in some additional experiments that are detailed in Appendix E. | Standard reinforcement learning algorithms such as PPO have several "batch size" hyperparameters, and changing them impacts how one needs to choose the step size. To better understand these algorithms and their hyperparameters, this paper investigates batch size-invariance, a property that allows the algorithm's behavior to remain constant under changes to the batch size through the modification of other hyperparameters. Batch size-invariance has been studied significantly in supervised learning, but these PPO-style algorithms have updates that break the previous batch size-invariance techniques.
This paper develops two new algorithm variants that provide a way to achieve batch size invariance. Empirical results show that the methods are somewhat effective at providing batch size invariance. | SP:9c8ea469dd80ce7eebbc9d57faec9cb64d525244 |
Batch size-invariance for policy optimization | 1 INTRODUCTION . Policy gradient-based methods for reinforcement learning have enjoyed great success in recent years . The stability and reliability of these methods is typically improved by controlling the size of policy updates , using either a “ trust region ” ( TRPO ) or a surrogate objective ( PPO ) ( Schulman et al. , 2015 ; 2017 ) . The usual justification for this is that we can not trust updates that take us too far from the policy used to collect experience , called the behavior policy . In this work we identify a subtle flaw with this : the behavior policy is irrelevant to the justification . Instead , what matters is that we control how fast the policy is updated , or put another way , that we approximate the natural policy gradient ( Kakade , 2001 ) . Our key insight is that the “ old ” policy in these methods serves two independent purposes . The first purpose is for off-policy corrections , via importance sampling , for which the old policy must be the behavior policy . The second purpose is to control the size of policy updates , for which the old policy can be any recent policy , which we call the proximal policy . It does not matter whether the proximal policy is also the behavior policy ; it only matters how old the proximal policy is . We demonstrate this by running PPO with stale data collected using a policy from multiple iterations ago , which causes performance to quickly degrade unless the proximal policy is decoupled from the behavior policy . Our insight allows us to make PPO batch size-invariant , meaning that when the batch size is changed , we can preserve behavior , as a function of the number of examples processed , by changing other hyperparameters ( as long as the batch size is sufficiently small ) . We achieve this by using an exponentially-weighted moving average ( EWMA ) of the policy network ’ s weights as the network for the proximal policy . 
Batch size-invariance has been studied many times before ( see Section 3.1 ) , sometimes under the name “ perfect scaling ” . It is of practical benefit when we wish to increase the batch size to reduce gradient variance , but computational resources such as GPU memory do not allow this . In such a situation , we can instead adjust other hyperparameters formulaically , thereby spreading out the increased computational load over time . The remainder of the paper is structured as follows . • In Section 2 , we explain the difference between the proximal and behavior policies , and show how to decouple them in PPO ’ s objectives . • In Section 3 , we explain the concept of batch size-invariance , and how it applies to SGD and Adam ( Kingma & Ba , 2014 ) . • In Section 4 , we introduce PPO-EWMA and PPG-EWMA , variants of PPO and PPG ( Cobbe et al. , 2020 ) that make use of our decoupled objectives , and show how to make them batch size-invariant at small batch sizes . • In Section 5 , we provide experimental evidence for our central claims : that decoupling the proximal policy from the behavior policy can be beneficial , and that it allows us to achieve batch size-invariant policy optimization . • Finally , in Section 6 , we discuss the theoretical and practical implications of our results . 2 DECOUPLED POLICY OBJECTIVES . In this section we explain the difference between the proximal and behavior policies , and introduce new versions of PPO ’ s objectives in which they have been decoupled . PPO alternates between sampling data through interaction with the environment , and optimizing a surrogate objective . The policy used for sampling is denoted πθold , and is used by the objective in two different ways . This is easiest to see with the KL penalized objective ( Schulman et al. 
, 2017, equation (8)):

$$L^{KLPEN}(\theta) := \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}\hat{A}_t - \beta\,\mathrm{KL}\left[\pi_{\theta_{\mathrm{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right],$$

where $\hat{A}_t$ is an estimator of the advantage at timestep $t$, and $\hat{\mathbb{E}}_t[\ldots]$ indicates the empirical average over a finite batch of timesteps $t$. The first use of $\pi_{\theta_{\mathrm{old}}}$ in this expression is as part of an importance sampling ratio. In order for the policy gradient estimate to be unbiased, this policy needs to be the one that was used for sampling, so we call this the behavior policy $\pi_{\theta_{\mathrm{behav}}}$. The second use of $\pi_{\theta_{\mathrm{old}}}$ is as a recent target to pull the current policy towards, so we call this the proximal policy $\pi_{\theta_{\mathrm{prox}}}$. Our key insight is that the proximal policy need not equal the behavior policy. As we will show experimentally, it matters how old the proximal policy is, but it does not matter whether or not the proximal policy was used for sampling. We therefore define the decoupled KL penalized objective

$$L^{KLPEN}_{\mathrm{decoupled}}(\theta) := \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{behav}}}(a_t \mid s_t)}\hat{A}_t - \beta\,\mathrm{KL}\left[\pi_{\theta_{\mathrm{prox}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right],$$

where $\pi_{\theta_{\mathrm{behav}}}$ is the policy used for sampling, and $\pi_{\theta_{\mathrm{prox}}}$ is a recent policy yet to be specified. It is less obvious how to decouple the clipped PPO objective, because $\pi_{\theta_{\mathrm{old}}}$ only appears once in that expression (Schulman et al., 2017, equation (7)):

$$L^{CLIP}(\theta) := \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\, \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \quad \text{where } r_t(\theta) := \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.$$

However, we can rewrite this objective as

$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\frac{1}{\pi_{\theta_{\mathrm{old}}}}\min\left(\pi_\theta \hat{A}_t,\, \mathrm{clip}\left(\pi_\theta,\, (1-\epsilon)\pi_{\theta_{\mathrm{old}}},\, (1+\epsilon)\pi_{\theta_{\mathrm{old}}}\right)\hat{A}_t\right)\right]$$

(omitting the policy arguments $(a_t \mid s_t)$ for brevity). Now the first use of $\pi_{\theta_{\mathrm{old}}}$ is as part of an importance sampling ratio, for which we must use the behavior policy, and the second and third uses are in applying the implicit KL penalty, for which we can use the proximal policy.
We therefore define the decoupled clipped objective

$$L^{CLIP}_{\mathrm{decoupled}}(\theta) := \hat{\mathbb{E}}_t\left[\frac{\pi_{\theta_{\mathrm{prox}}}(a_t \mid s_t)}{\pi_{\theta_{\mathrm{behav}}}(a_t \mid s_t)}\min\left(r_t(\theta)\hat{A}_t,\, \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \quad \text{where } r_t(\theta) := \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{prox}}}(a_t \mid s_t)}.$$

As a sanity check, note that if we set the KL penalty coefficient $\beta = 0$ or the clipping parameter $\epsilon = \infty$, then the dependence on the proximal policy disappears, and we recover the vanilla (importance-sampled) policy gradient objective (Schulman et al., 2017, equation (6)). A similar decoupled policy objective is also proposed in Mirror Descent Policy Optimization (Tomar et al., 2020, Section 5.1). 3 BATCH SIZE-INVARIANCE . We say an algorithm is batch size-invariant to mean that when the batch size is changed, the original behavior can be approximately recovered by adjusting other hyperparameters to compensate. Here we consider behavior as a function of the total number of examples processed, so another way to put this is that doubling the batch size halves the number of steps needed. Shallue et al. (2018) and Zhang et al. (2019) refer to this as "perfect scaling". We treat batch size-invariance as a descriptive property that can hold to some degree, rather than as a binary property. In practice, the original behavior can never be recovered perfectly, and the extent to which it can be recovered depends on both how much and the direction in which the batch size is changed. 3.1 BATCH SIZE-INVARIANCE FOR STOCHASTIC GRADIENT DESCENT . Stochastic gradient descent (SGD) is batch size-invariant, up until the batch size approaches some critical batch size. This is the batch size at which the gradient has a signal-to-noise ratio of around 1. At batch sizes smaller than this, changes to the batch size can be compensated for by a directly proportional adjustment to the learning rate. This core observation has been made many times before (Mandt et al., 2017; Goyal et al., 2017; Smith et al.
, 2017; Hardin, 2017; Ma et al., 2018; Shallue et al., 2018; McCandlish et al., 2018). A discussion of this and other previous work can be found in Appendix C. Sketch explanation. For the benefit of the reader's intuition, we sketch the explanation for SGD's batch size-invariance. For a much more thorough explanation, we refer the reader to Mandt et al. (2017). Consider running SGD on a loss function $L(\theta; x)$ of a parameter vector $\theta$ and a data point $x$. Two steps with batch size $n$ and learning rate $\alpha$ correspond to the update rule

$$\theta_{t+2} = \theta_t - \frac{\alpha}{n}\sum_{x \in B_t} \nabla_\theta L(\theta_t; x) - \frac{\alpha}{n}\sum_{x \in B_{t+1}} \nabla_\theta L(\theta_{t+1}; x),$$

where $B_t$ and $B_{t+1}$ are the next two batches of size $n$. On the other hand, a single step with batch size $2n$ and learning rate $2\alpha$ corresponds to the update rule

$$\theta_{t+2} = \theta_t - \frac{2\alpha}{2n}\sum_{x \in B_t \cup B_{t+1}} \nabla_\theta L(\theta_t; x).$$

These update rules are very similar, the only difference being whether the gradient for $B_{t+1}$ is evaluated at $\theta_t$ or $\theta_{t+1}$. If the batch size is small compared to the critical batch size, then the difference between $\theta_t$ and $\theta_{t+1}$ is mostly noise, and moreover this noise is small compared to the total noise accumulated by $\theta_t$ over previous updates. Hence the two update rules behave very similarly. A good mental model of SGD in this small-batch regime is of the parameter vector making small, mostly random steps around the loss landscape. Over many steps, the noise is canceled out and the parameter vector gradually moves in the direction of steepest descent, but a single additional step makes almost no difference to gradient evaluations. In more formal terms, SGD is numerically integrating a stochastic differential equation (SDE). Changing the learning rate in proportion to the batch size leaves the SDE unchanged, and only affects the step size of the numerical integration.
Once the step size is small enough (the condition that gives rise to the critical batch size), the discretization error is dominated by the noise, and so the step size stops mattering. 3.2 BATCH SIZE-INVARIANCE FOR ADAM . Adam (Kingma & Ba, 2014) is a popular variant of SGD, and is also batch size-invariant until the batch size approaches a critical batch size (which may be different to the critical batch size for SGD) (Zhang et al., 2019). To compensate for the batch size being divided by some constant $c$, one must make the following adjustments (Hardin, 2017): • Divide the step size $\alpha$ by $\sqrt{c}$. • (Raise the exponential decay rates $\beta_1$ and $\beta_2$ to the power of $1/c$.) The first adjustment should be contrasted with the linear learning rate adjustment for vanilla SGD. We discuss the reason for this difference and provide empirical support for the square root rule in Appendix D. The second adjustment is much less important in practice, since Adam is fairly robust to the $\beta_1$ and $\beta_2$ hyperparameters (hence it has been parenthesized). Note that $\beta_1$ also affects the relationship between the current policy and the proximal policy in policy optimization. For simplicity, we omitted this adjustment in most of our experiments, but included it in some additional experiments that are detailed in Appendix E. | This paper studies how to achieve batch size-invariance for policy gradient algorithms (PPO, PPG). The paper achieves this by decoupling the proximal policy from the behavior policy. The experiments demonstrate the effectiveness of the method. | SP:9c8ea469dd80ce7eebbc9d57faec9cb64d525244 |
On Anytime Learning at Macroscale | 1 INTRODUCTION . Empirical risk minimization (Vapnik, 1998) is the dominant framework to formalize the learning process of a supervised task, and it has been critical to the success of large-scale training of deep learning systems on a wide variety of applications. Within this framework, training data is assumed to be provided to the learner all at once. Alternatively, when the dataset is very large (essentially infinite), data is streamed to the learner one minibatch at a time, assuming that the rate at which samples are received matches the model's processing time to learn from them. Learning over streams of data has been studied in the machine learning domain for a long time (see Section 2 and Figure 1 for more details) with different assumptions: for instance, in online learning it is usually assumed that datapoints arrive one by one and have to be processed as soon as they are received; in continual learning, the streaming of data usually corresponds to a stream of large datasets corresponding to different tasks to solve; etc. In this paper, we define a simple yet important setting where there is a single task to solve, and where training data often comes at a slower rate than a model can process it. Moreover, it comes in relatively large batches once in a while. While poorly studied, this setting corresponds to practical applications encountered in production pipelines. For instance, it is faced by teams deploying language modeling applications (e.g., content moderation) that build models trained on large amounts of data such as filtered versions of Common Crawl, which are dumps of the internet. New snapshots are available every month as new content is generated over time, so datasets keep getting bigger every few months and models need to be retrained accordingly.
Similarly, visual object recognition datasets used in deployed applications are often extended every few months to include new images with their corresponding annotations.
* Authors contributed equally
Practically, there are two main approaches to integrate the information present in a new batch of data into an existing model. If a lot of computational resources are available, a new and bigger model is instantiated and trained from scratch on the union of the old training set with the new batch of data. However, since this is a computationally very intensive process, retraining is typically done only rarely, once several batches of data have been collected. We call this approach "tardy" large-scale learning, since a predictor is available only at a later time. Another option, particularly suitable when computational resources are scarce and a predictor is needed quickly, is to simply finetune the old model on the new data as it arrives. Note that, in that setting, methods from the data stream domain or from the online learning domain that are based on the idea of processing any datapoint just once are not suitable, since they have been developed for different use-cases. This trade-off is emblematic of anytime learning, a learning setting where a learner has to provide good predictions at any point in time, while improving its performance over time as more and more data is observed. From an anytime learning perspective, neither training a large model after all data is received nor finetuning on the newly added batch of data is satisfying. The former approach is a poor anytime learner because one needs to wait for a long time before obtaining a useful predictor. The latter approach is a poor anytime learner because it typically cannot leverage future batches of data very well, since the model has a fixed capacity determined on a small portion of the overall dataset, and because the model is inherently trained on non-i.i.d. data.
In this work , we aim at exploring this accuracy versus time trade-off of anytime learning , not at the level of a single batch of data , but at the macroscale of the entire sequence of batches . This is a setting which more closely mimics practical applications , and which we call anytime learning at macroscale ( ALMA ) . In this learning setting , we assume that the time to train a model is negligible compared to the interval of time between two consecutive batches of data ( and therefore we do not care about how quickly a learner adapts to a new batch ) , yet efficiency matters in the sense that , for the same performance , a predictor that uses less compute and memory is preferable . In summary , we are interested in a learner that i ) yields high accuracy , ii ) can make non-trivial predictions at any point in time , while iii ) limiting its computational and memory resources . Our first contribution is to formalize the ALMA problem and to introduce metrics to evaluate learners ( §3 ) . We consider three different axes : error rate , memory and amount of computation . By measuring these quantities against time , via an area under the curve , we account not only for the final performance but also for the whole training trajectory over the sequence of large batches of data . Our second contribution is an extensive empirical evaluation ( §5 ) of various models ( §4 ) that strike different trade-offs between accuracy and time to obtain a useful predictor . In particular , we explore models that fall in between greedy finetuning and tardy large-scale learning , and investigate models that leverage batches of data at an intermediate rate . We also consider a rich family of modular architectures , from plain ensembling methods to hierarchical mixtures of experts , and several variants thereof , including those that have access to a replay buffer storing all previous batches of data and those that can grow capacity over time . 
Our findings across three different benchmarks , including a large scale language modeling one , can be summarized as follows . a ) An intermediate waiting time offers the best trade-off between accuracy and time to yield a useful predictor . However , b ) there is no single approach striking the best trade-off between performance and efficiency across model sizes . c ) Retraining a big model from scratch does offer the lowest error rate but sacrifices efficiency . d ) Interestingly , large models are the most statistically efficient even when considering small datasets ( like MNIST ) and fully connected networks . e ) While approaches to grow capacity exhibit gains in terms of computational efficiency , they do not even outperform simple ensembles . Overall , our work points at several research opportunities to improve modeling in a streaming setting of broad practical relevance , rather than pointing at any particular solution . We have also released code to reproduce our experiments and the entire platform implementing ALMA . 2 RELATED WORK . ALMA relates to several other learning frameworks : offline learning , continual learning , online learning and transfer learning , as illustrated in Figure 1. i ) It shares the same assumptions as classical empirical risk minimization ( ERM ) ( Vapnik , 1998 ) at the level of each batch of data . However , it overall violates ERM ’ s assumption of i.i.d . observations , because data points come in a stream of data chunks . ii ) Because of this , ALMA relates to continual learning ( CL ) ( Ring , 1994 ; Thrun , 1994 ; Ring , 1997 ; Thrun , 1998 ) , with the key difference that the data distribution across batches ( or tasks ) is assumed stationary in ALMA . Therefore , ALMA can be seen as a special case of CL with a single task to solve . iii ) ALMA relates also to online learning ( Bottou , 1998 ) since it assumes that data are coming in a stream , an assumption also made in the concept drift literature ( Lu et al. 
, 2018 ) . However , in online learning examples are streamed one at a time ( or drawn at random from a large dataset ) , while in ALMA the learner receives large batches of data sequentially . In ALMA , received data can be processed multiple times , as opposed to the online learning setting , which usually assumes that any new datapoint has to be processed as soon as it is available and will not be reused in future updates . iv ) Finally , ALMA relates more broadly to transfer learning ( Pan & Yang , 2010 ) , as the problem of adapting to a new batch of data can be interpreted as leveraging knowledge acquired on previous batches to more efficiently learn from the new batch of data . Of course , ALMA relates to anytime learning ( Grefenstette & Ramsey , 1992 ; Ramsey & Grefenstette , 1994 ) , which has recently been applied to compare various autoML frameworks ( Liu et al. , 2020 ) . However , in this work we are not interested in assessing the anytime learning ability at the level of each chunk of data , but only at a coarser granularity , at the level of the entire stream of chunks . Inspired by Liu et al . ( 2020 ) , we consider the area under the curve of error rate against time to measure performance , but in order to account also for compute and memory budget , we add to our evaluation metrics the areas under the curve for memory and compute . From the more theoretical side , there has been work on sub-bagging ( Bühlmann & Yu , 2002 ) ( bagging using subsets of a larger dataset ) , which is similar to our setting but without its sequential aspect . In this context , Breiman ( 1999 ) proposed a model similar to our growing ensembling ( gEns ) , Bühlmann & Yu ( 2002 ) studied sub-bagging as a way to make the predictions of tree classifiers more robust , while Zou et al . ( 2021 ) studied the consistency of the estimator in this setting . We defer the analysis of ALMA to future studies , while in this work we focus on the empirical evaluation . 
Shifting the discussion to prior work on models that adjust their capacity dynamically , Waterhouse & Robinson ( 1995 ) introduced an approach to grow a hierarchical mixture of experts model ( Jordan & Jacobs , 1994 ) . This is a tree-structured model where experts are at the leaves and gating functions are at non-terminal nodes . The tree determines a hierarchical partition of the input space into regions that are associated to each expert . This approach was made more efficient in later work by Fritsch et al . ( 1996 ) . In this work we consider a baseline ( gMoE ) that extends this prior work to hierarchical mixtures of experts ( Eigen et al. , 2014 ; Denoyer & Gallinari , 2015 ; Lepikhin et al. , 2020 ) . Growing architectures have also been studied in CL . For instance , Fernando et al . ( 2017 ) and Veniat et al . ( 2021 ) proposed a modular architecture that is assembled for every task , possibly reusing previously trained modules . The major difference with our work is that in our case routing is input dependent as opposed to task dependent . Yoon et al . ( 2018 ) instead proposed a method to incrementally and smoothly add hidden units . Similarly , Wen et al . ( 2020 ) proposed a heuristic approach to automatically adjust the network depth . Wang et al . ( 2017 ) considered growing both depth and width when finetuning to a new task . Liu et al . ( 2019a ) and Wu et al . ( 2020 ) proposed approaches to grow architectures in depth and width by leveraging Taylor approximation and greedy selection . In our work , we benchmark against this last variant . None of these approaches have been applied to the ALMA setting to date . Finally , some of our findings build upon and extend recent empirical evaluations studying the scaling properties of language models ( Kaplan et al. , 2020a ; Li et al. , 2020b ) . 
In this study , we confirm the conclusion that bigger models generalize better and are more statistically efficient , not only in language modeling tasks using a transformer architecture , but also in smaller scale computer vision tasks using both fully connected and convolutional architectures . | The authors describe a framework to perform empirical evaluation of an anytime learning setting where data is available in a streaming minibatch fashion. Their primary aim is to measure the performance of a classifier across a variety of practical settings of such streaming data, so as to not only achieve high accuracy but also provide non-trivial predictions at any time using limited computational resources. Using multiple benchmark datasets, the paper concludes that methods with intermediate parameter updates are better on the accuracy to computational efficiency tradeoff, and larger models generalize better. | SP:defad77392c5698ba9888b40c7c5c42111e9e37d |
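The evaluation described above aggregates error rate, memory, and compute against time via an area under the curve. A minimal sketch of such a metric is below; it is an illustrative reconstruction, not the paper's released code, and the trapezoidal rule plus normalization by the time span are assumptions.

```python
import numpy as np

def alma_auc(times, values):
    """Normalized area under a metric trajectory (e.g. error rate,
    memory, or compute) measured against time.

    Trapezoidal integration, divided by the total time span so the
    score reads as an average metric level over the whole training
    trajectory rather than a raw area.
    """
    t = np.asarray(times, dtype=float)
    v = np.asarray(values, dtype=float)
    area = np.sum((v[1:] + v[:-1]) / 2.0 * np.diff(t))
    return area / (t[-1] - t[0])

# Two learners with the same final error but different trajectories:
t = [0.0, 1.0, 2.0, 3.0]
fast = [0.50, 0.20, 0.12, 0.10]    # improves early (good anytime learner)
tardy = [0.50, 0.50, 0.50, 0.10]   # improves only at the end
assert alma_auc(t, fast) < alma_auc(t, tardy)
```

Scoring the whole trajectory this way rewards a learner that improves early over a "tardy" one that reaches the same final error only at the end.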
On Anytime Learning at Macroscale | ( introduction text identical to the row above ) | The paper introduces a novel setup called Anytime Learning at Macroscale. In this setup the learner receives the examples as a sequence of large batches, and is required to output a model after processing each batch. This model is used to give predictions for the next batch. The overall performance is then the sum of the average losses on the individual batches. | SP:defad77392c5698ba9888b40c7c5c42111e9e37d |
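The growing-ensemble baseline (gEns) mentioned in the related work above admits a compact sketch: whenever a new mega-batch arrives, fit a fresh member on it and average the members' predictions. Everything below is hypothetical scaffolding — the `FreqModel` stand-in learner and the two-class demo are not the paper's implementation.

```python
import numpy as np

class GrowingEnsemble:
    """Toy sketch of gEns: one new member per mega-batch, with
    predictions averaged over all members.  `make_model` is a
    placeholder factory for whatever learner is actually used."""

    def __init__(self, make_model):
        self.make_model = make_model
        self.members = []

    def observe(self, X, y):
        # Grow capacity: a fresh member is fit on the new batch only.
        m = self.make_model()
        m.fit(X, y)
        self.members.append(m)

    def predict_proba(self, X):
        return np.mean([m.predict_proba(X) for m in self.members], axis=0)

class FreqModel:
    """Trivial stand-in learner: predicts its batch's class frequencies."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.p = np.bincount(y, minlength=2) / len(y)
    def predict_proba(self, X):
        return np.tile(self.p, (len(X), 1))

ens = GrowingEnsemble(FreqModel)
ens.observe(np.zeros((4, 1)), [0, 0, 0, 1])  # member 1: p = [0.75, 0.25]
ens.observe(np.zeros((4, 1)), [1, 1, 1, 0])  # member 2: p = [0.25, 0.75]
# The two members' opposite biases cancel: each row averages to [0.5, 0.5].
```

Because each member only ever sees its own batch, this variant trades some accuracy for a constant per-batch training cost, which is the kind of compute/accuracy trade-off the benchmarks measure.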
On Anytime Learning at Macroscale | 1 INTRODUCTION . Empirical risk minimization ( Vapnik , 1998 ) is the dominant framework to formalize the learning process of a supervised task , and it has been critical to the success of large scale training of deep learning systems on a wide variety of applications . Within this framework , training data is assumed to be provided to the learner all at once . Alternatively , when the dataset is very large ( essentially infinite ) , data is streamed to the learner one minibatch at the time , assuming that the rate at which samples are received matches the model ’ s processing time to learn from them . Learning over streams of data has been studied in the machine learning domain for a long time ( see Section 2 and Figure 1 for more details ) with different assumptions : for instance in online learning , it is usually assumed that datapoints are coming one by one and have to be processed as soon as they are received . In continual learning , the streaming of data usually corresponds to a stream of large datasets corresponding to different tasks to solve , etc . In this paper , we define a simple yet important setting where there is a single task to solve , and where training data often comes at a slower rate than a model can process it . Moreover , it comes in relatively large batches once in a while . While poorly studied , this setting corresponds to practical applications encountered in production pipelines . For instance , it is faced by teams deploying language modeling applications ( e.g content moderation ) build models that are trained on large amounts of data like filtered versions of Common Crawl , which are dumps of the internet . However , new snapshots are available every month , as new content is generated over time . Therefore datasets keep getting bigger every few months and models need to be retrained accordingly . 
Similarly , visual object recognition datasets used in deployed applications are often extended every few months to include new images with their corresponding annotations . * Authors contributed equally Practically , there are two main approaches to integrate information present in a new batch of data in an existing model . If a lot of computational resources are available , a new and bigger model is instantiated and trained from scratch on the union of the old training set with the new batch of data . However , since this is a computationally very intensive process , retraining is typically done only rarely , once several batches of data have been collected . We call this approach “ tardy ” large-scale learning , since a predictor is available only at a later time . Another option , particularly suitable when computational resources are scarce and a predictor is needed quickly , is to simply finetune the old model on the new data as this arrives . Note that , in that settings , methods from the data stream domain or from the online learning domain that are based on the idea of processing any datapoint just once are not suitable since they have been developed for different use-cases . This trade-off is emblematic of anytime learning , a learning setting where a learner has to provide good predictions at any point in time , while improving its performance over time as more and more data is observed . From an anytime learning perspective , neither training a large model after all data is received nor finetuning on the newly added batch of data are not satisfying . The former approach is a poor anytime learner because one needs to wait for a long time before obtaining a useful predictor . The latter approach is a poor anytime learner because it typically can not leverage very well future batches of data since the model has a fixed capacity , determined on a small portion of the overall dataset and because inherently the model is trained on non i.i.d . data . 
In this work , we aim at exploring this accuracy versus time trade-off of anytime learning , not at the level of a single batch of data , but at the macroscale of the entire sequence of batches . This is a setting which more closely mimics practical applications , that we call anytime learning at mascroscale ( ALMA ) . In this learning setting , we assume that the time to train a model is negligible compared to the interval of time between two consecutive batches of data ( and therefore we do not care about how quickly a learner adapts to a new batch ) , yet efficiency matters in the sense that for the same performance a predictor that uses less compute and memory is preferable . In summary , we are interested in a learner that i ) yields high accuracy , ii ) can make non-trivial predictions at any point in time while iii ) limiting its computational and memory resources . Our first contribution is to formalize the ALMA problem and to introduce metrics to evaluate learners ( §3 ) . We consider three different axes : error rate , memory and amount of computation . By measuring these quantities against time , via an area under the curve , we account not only for the final performance but also for the whole training trajectory over the sequence of large batches of data . Our second contribution is an extensive empirical evaluation ( §5 ) of various models ( §4 ) that strike different trade-offs between accuracy and time to obtain a useful predictor . In particular , we explore models that fall in between greedy finetuning and tardy large-scale learning , and investigate models that leverage batches of data at an intermediate rate . We also consider a rich family of modular architectures , from plain ensembling methods to hierarchical mixture of experts , and several variants thereof , including those that have access to a replay buffer storing all previous batches of data and those that can grow capacity over time . 
Our findings across three different benchmarks , including a large scale language modeling one , can be summarized as follows . a ) An intermediate waiting time offers the best trade-off between accuracy and time to yield such a predictor . However , b ) there is no single approach striking the best trade-off between performance and efficiency for various model sizes . c ) Retraining from scratch a big model does offer the lowest error rate but sacrifices efficiency . d ) Interestingly , large models are the most statistically efficient even when considering small datasets ( like MNIST ) and fully connected networks . e ) While approaches to grow capacity exhibit gains in terms of computational efficiency , these do not even outpeform simple ensembles . Overall , our work points at several research opportunities to improve modeling in a streaming setting of broad practical relevance , rather then pointing at any particular solution . We have also released code to reproduce our experiments and the entire platform implementing ALMA . 2 RELATED WORK . ALMA relates to several other learning frameworks : offline learning , continual learning , online learning and transfer learning as illustrated in Figure 1. i ) It shares the same assumptions of classical empirical risk minimization ( ERM ) ( Vapnik , 1998 ) at the level of each batch of data . However , it overall violates ERM ’ s assumptions of i.i.d . observations , because data points come in a stream of data chunks . ii ) Because of this , ALMA relates to continual learning ( CL ) ( Ring , 1994 ; Thrun , 1994 ; Ring , 1997 ; Thrun , 1998 ) , with the key difference that the data distribution across batches ( or tasks ) is assumed stationary in ALMA . Therefore , ALMA can be seen as a special case of CL with a single task to solve . iii ) ALMA relates also to online learning ( Bottou , 1998 ) since it assumes that data are coming in a stream , an assumption also made in the concept drift literature ( Lu et al. 
, 2018 ) . However , in online learning examples are streamed one at the time ( or at random from a large dataset ) , while in ALMA the learner receives large batches of data sequentially In ALMA , received data can be processed multiple times as opposite to the online learning setting that usually assumes that any new datapoint has to be processed as soon as it is available , and will not be reused in future updates . iv ) Finally , ALMA relates more broadly to transfer learning ( Pan & Yang , 2010 ) , as the problem of adapting to a new batch of data can be interpreted as leveraging knowledge acquired on previous batches to more effciently learn from the new batch of data . Of course , ALMA relates to anytime learning ( Grefenstette & Ramsey , 1992 ; Ramsey & Grefenstette , 1994 ) , which has been recently applied to compare various autoML frameworks ( Liu et al. , 2020 ) . However , in this work we are not interested in assessing the anytime learning ability at the level of each chunk of data , but only at a coarser granularity , at the level of the entire stream of chunks . Inspired by Liu et al . ( 2020 ) , we consider the area under the curve of error rate against time to measure performance , but in order to account also for compute and memory budget , we add to our evaluation metrics also the area under the curve for memory and compute . From the more theoretical side , there has been work about sub-bagging ( Bühlmann & Yu , 2002 ) ( bagging using subsets of a larger dataset ) which is similar to our setting but without the sequential aspect of it . In this context , Breiman ( 1999 ) proposed a model similar to our growing ensembling ( gEns ) , Bühlmann & Yu ( 2002 ) studied sub-bagging as a way to make the prediction of tree classifiers more robust while Zou et al . ( 2021 ) studied the consistency of the estimator in this setting . We defer to future studies the analysis of ALMA , while in this work we focus on the empirical evaluation . 
Shifting the discussion to prior work on models that adjust their capacity dynamically , Waterhouse & Robinson ( 1995 ) introduced an approach to grow a hierarchical mixture of experts model ( Jordan & Jacobs , 1994 ) . This is a tree structured model where experts are at the leaves and gating functions are at non-terminal nodes . The tree determines a hierarchical partition of the input space into regions that are associated to each expert . This approach was made more efficient in later work by ( Fritsch et al. , 1996 ) . In this work we consider a baseline ( gMoE ) that extends this prior work to hierarchical mixture of experts ( Eigen et al. , 2014 ; Denoyer & Gallinari , 2015 ; Lepikhin et al. , 2020 ) . Growing architectures have also been studied in CL . For instance , Fernando et al . ( 2017 ) and Veniat et al . ( 2021 ) proposed a modular architecture that is assembled for every task , possibly reusing previously trained modules . The major difference with our work is that in our case routing is input dependent as opposed to task dependent . Yoon et al . ( 2018 ) instead proposed a method to incrementally and smoothly add hidden units . Similarly , Wen et al . ( 2020 ) proposed a heuristic approach to automatically adjust the network depth . Wang et al . ( 2017 ) considered growing both depth and width when finetuning to a new task . Liu et al . ( 2019a ) and Wu et al . ( 2020 ) proposed approaches to grow architectures in depth and width by leveraging Taylor approximation and greedy selection . In our work , we benchmark against this last variant . None of these approaches have been applied to the ALMA setting to date . Finally , some of our findings are built upon and extend recent empirical evaluations studying the scaling properties of language models ( Kaplan et al. , 2020a ; Li et al. , 2020b ) . 
In this study , we confirm the conclusion that bigger models generalize better and are more statistically efficient , not only in language modeling tasks using a transformer architecture , but also in smaller scale computer vision tasks using both fully connected and convolutional architectures . | Summary: This paper proposes anytime learning at macroscale (ALMA), which is anytime learning under the assumption that data is observed as a sequence of large batches. This paper introduces metrics that can be used to assess the error rate, memory, and compute throughout the entire learning process. They evaluate multiple learning models on different datasets in the ALMA setting. They observe that methods that update parameters at a moderate rate tend to yield a better tradeoff, while bigger models tend to generalize better. | SP:defad77392c5698ba9888b40c7c5c42111e9e37d
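To make the growing-ensemble ( gEns ) and sub-bagging ideas discussed above concrete, here is a minimal sketch. It is an illustrative reconstruction, not the paper's implementation: each incoming mega-batch trains a fresh model, and predictions are averaged over all members. The toy regression task, the one-parameter linear members, and all names are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression stream: y = 2x + noise, arriving as sequential mega-batches.
def make_chunk(n=200):
    x = rng.uniform(-1, 1, size=n)
    return x, 2.0 * x + 0.1 * rng.normal(size=n)

class GrowingEnsemble:
    """When a new chunk arrives, fit a fresh model on it and average
    predictions over all members (a sub-bagging-style gEns sketch)."""
    def __init__(self):
        self.slopes = []  # each member is a 1-parameter linear model

    def fit_chunk(self, x, y):
        # Least-squares slope on this chunk only (no intercept, for brevity).
        self.slopes.append(float(np.dot(x, y) / np.dot(x, x)))

    def predict(self, x):
        return np.mean([w * x for w in self.slopes], axis=0)

ens = GrowingEnsemble()
for _ in range(5):  # five sequential mega-batches
    ens.fit_chunk(*make_chunk())

print(ens.predict(np.array([0.5, -0.25])))  # close to [1.0, -0.5]
```

Unlike a single model retrained from scratch, the ensemble's memory and compute grow linearly with the number of chunks, which is exactly the budget tradeoff the area-under-the-curve metrics above are meant to capture.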
Eliminating Sharp Minima from SGD with Truncated Heavy-tailed Noise | 1 INTRODUCTION . Stochastic gradient descent ( SGD ) and its variants have seen unprecedented empirical successes in training deep neural networks . The training of deep neural networks is typically posed as a nonconvex optimization problem , and even without explicit regularization the solutions obtained by SGD often perform surprisingly well on test data . Such an unexpected generalization performance of SGD in deep neural networks is often attributed to SGD ’ s ability to avoid sharp local minima1 in the loss landscape , which tend to lead to poor generalization ( Hochreiter & Schmidhuber , 1997 ; Keskar et al. , 2016 ; Li et al. , 2018b ; Jiang et al. , 2019 ) ; see Appendix D for more details . Despite significant efforts to explain such phenomena theoretically , understanding how SGD manages to avoid sharp local minima and end up with flat local minima within a realistic training time still remains a central mystery of deep learning . Recently , the heavy-tailed dynamics of SGD have received significant attention , and it has been suggested that the heavy tails in the stochastic gradients may be a key ingredient that facilitates SGD ’ s escape from sharp local minima : for example , Şimşekli et al . ( 2019a ; b ) report empirical evidence of heavy tails in stochastic gradient noise in popular deep learning architectures ( see also Hodgkinson & Mahoney , 2020 ; Srinivasan et al. , 2021 ; Garg et al. , 2021 ) and show that SGD can escape sharp local minima in polynomial time under the presence of heavy-tailed gradient noise . More specifically , they view heavy-tailed SGDs as discrete approximations of Lévy driven Langevin equations and argue that the amount of time the SGD trajectory spends in each local minimum is proportional to the width of the associated minimum according to the metastability theory ( Pavlyukevich , 2007 ; Imkeller et al.
, 2010a ; b ) for such heavy-tailed processes . In this paper , we study the global dynamics and long-run behavior of heavy-tailed SGD and its practical variant in depth . In particular , we consider an adaptive version of SGD , where the stochastic gradient is truncated above a fixed threshold . Such a truncation scheme is often called gradient clipping and is employed by default in various contexts ( Engstrom et al. , 2020 ; Merity et al. , 2018 ; Graves , 2013 ; Pascanu et al. , 2013 ; Zhang et al. , 2020 ; Gorbunov et al. , 2020 ) . 1We use the terminology sharpness in a broad sense ; we refer to Appendix C for a detailed discussion . We uncover a rich mathematical structure in the global dynamics of SGD under this scheme and prove that the long-run behavior of such SGD is fundamentally different from that of the pure form of SGD : in particular , under a suitable structural condition on the geometry of the loss landscape , gradient clipping completely eliminates sharp minima from the trajectory of SGDs . This provides a critical insight into how the heavy-tailed dynamics of SGD can be utilized to find a local minimum that generalizes better . Figure 1 ( Left , Middle ) clearly illustrates these points with the histograms of the sample trajectories of SGDs . Note first that SGDs with light-tailed gradient noise— ( c ) and ( d ) of Figure 1 ( Left , Middle ) —never manage to escape a ( sharp ) minimum regardless of gradient clipping . In contrast , SGDs with heavy-tailed gradient noise— ( a ) and ( b ) of Figure 1 ( Left , Middle ) —easily escape from local minima . Moreover , there is a clear difference between SGDs with gradient clipping and without gradient clipping . In ( a ) of Figure 1 ( Left ) , SGD without gradient clipping spends a significant amount of time at each of the four local minima ( { m1 , m2 , m3 , m4 } ) , although it spends more time around the wide ones ( { m2 , m4 } ) than the sharp ones ( { m1 , m3 } ) .
On the other hand , in ( b ) of Figure 1 ( Left ) , SGD with gradient clipping not only escapes from local minima but also avoids sharp minima ( { m1 , m3 } ) almost completely . This means that if we stop training SGD at an arbitrary time point , it is almost guaranteed that it won ’ t be at a sharp minimum , effectively eliminating sharp minima from its training trajectories . We also propose a novel computational strategy that takes advantage of our newly discovered global dynamics of the heavy-tailed SGD . While evidence of heavy tails was reported in many deep learning tasks ( Şimşekli et al. , 2019b ; a ; Garg et al. , 2021 ; Gurbuzbalaban et al. , 2020 ; Hodgkinson & Mahoney , 2020 ; Nguyen et al. , 2019 ; Mahoney & Martin , 2019 ; Srinivasan et al. , 2021 ; Zhang et al. , 2020 ) , there seem to be plenty of deep learning contexts where the stochastic gradient noise is light-tailed ( Panigrahi et al. , 2019 ) as well . Guided by our new theory , we propose an algorithm that injects heavy tails into SGD by inflating the tail distribution of the gradient noise , facilitating the discovery of a local minimum that generalizes better . Our experiments with image classification tasks , reported in Tables 1 and 2 , illustrate that the tail-inflation strategy we propose here can indeed improve the generalization performance of SGD as predicted by our theory . The rest of the paper is organized as follows . Section 2 formulates the problem setting and characterizes the global dynamics of the SGD driven by heavy-tailed noises . Section 3 presents numerical experiments that confirm our theory . Section 4 proposes a new algorithm that artificially injects heavy-tailed gradient noise in actual deep learning tasks and demonstrates the improved performance . Technical Contributions : 1 ) We rigorously characterize the global behavior of the heavy-tailed SGD with gradient clipping .
We first focus on the case where the loss function is defined on R1 , with some simplifying assumptions on its geometry . Even with such assumptions , our theorem involves substantial technical challenges , since the traditional tools for analyzing SGD fail in our context due to the adaptive nature of its dynamics and the non-Gaussian distributional assumptions . For example , while the unclipped pure SGD can be analyzed by partitioning its trajectory at arrival times of large noises ( as in Pavlyukevich ( 2005 ) and Imkeller et al . ( 2010a ) ) , such an approach falls short in our context . Instead , we develop a set of delicate arguments for dealing with SGD ’ s ( near ) regeneration structure and the return times to the local minima , as well as for controlling the probability of atypical scenarios that would not arise in the unclipped case . Moreover , as evidenced by our Rd results in Appendix G , the approach developed here is critical in extending the analysis to general loss landscapes . 2 ) We propose a novel computational strategy for improving the generalization performance of SGD by carefully injecting heavy-tailed noise . We test the proposed algorithm on deep learning tasks and demonstrate its superiority with an ablation study . This also suggests that the key phenomenon we characterize in our theory—elimination of sharp local minima—manifests in real-world tasks . 2 THEORETICAL RESULTS . This section characterizes the global dynamics of SGD with gradient clipping when applied to a non-convex objective function f . In Sections 2.1 and 2.2 , we make the following assumptions for the sake of simplicity of analysis . However , as our multidimensional result in Section 2.3 and the experiments in Sections 3 and 4 suggest , the gist of the phenomena we analyze—elimination of sharp local minima—persists in general contexts where the domain of f is multi-dimensional and the stationary points are not necessarily strict local optima separated from one another .
Assumption 1 . Let f : R → R be a C2 function . There exist a positive real L > 0 , a positive integer nmin and an ordered sequence of real numbers m1 , s1 , m2 , s2 , · · · , snmin−1 , mnmin such that ( 1 ) −L < m1 < s1 < m2 < s2 < · · · < snmin−1 < mnmin < L ; ( 2 ) f ′ ( x ) = 0 iff x ∈ { m1 , s1 , · · · , snmin−1 , mnmin } ; ( 3 ) For any x ∈ { m1 , m2 , · · · , mnmin } , f ′′ ( x ) > 0 ; ( 4 ) For any x ∈ { s1 , s2 , · · · , snmin−1 } , f ′′ ( x ) < 0 . As illustrated in Figure 2 ( Left ) , the assumption above requires that f has finitely many local minima ( to be specific , the count is nmin ) , all of which are contained in the compact domain [ −L , L ] . Moreover , the points s1 , · · · , snmin−1 naturally partition the entire real line into different regions Ωi = ( si−1 , si ) ( here we adopt the convention that s0 = −∞ , snmin = +∞ ) . We call each region Ωi the attraction field of the local minimum mi , as the gradient flow in Ωi always points to mi . Throughout the optimization procedure , given any location x ∈ R we assume that we have access to the noisy estimator f ′ ( x ) − Zn of the true gradient f ′ ( x ) , while f ′ ( x ) itself is difficult to evaluate . Specifically , in this work we are interested in the case where the iid sequence of noises ( Zn ) n≥1 is heavy-tailed . Typically , heavy-tailed phenomena are captured by the concept of regular variation : for a measurable function φ : R+ → R+ , we say that φ is regularly varying at +∞ with index β ( denoted as φ ∈ RVβ ) if limx→∞ φ ( tx ) /φ ( x ) = tβ for all t > 0 . For details on the definition and properties of regularly varying functions , see , for example , chapter 2 of Resnick ( 2007 ) . In this paper , we work with the following distributional assumption on the gradient noise . Let H+ ( x ) := P ( Z1 > x ) , H− ( x ) := P ( Z1 < −x ) , and H ( x ) := H+ ( x ) + H− ( x ) = P ( |Z1| > x ) . Assumption 2 . EZ1 = 0 .
Furthermore , there exists some α ∈ ( 1 , ∞ ) such that the function H ( x ) is regularly varying ( at +∞ ) with index −α . Besides , regarding the positive and negative tails of the distribution of the noises , we have limx→∞ H+ ( x ) /H ( x ) = p+ and limx→∞ H− ( x ) /H ( x ) = p− = 1 − p+ , where p+ and p− are constants in the interval ( 0 , 1 ) . Roughly speaking , Assumption 2 means that the shape of the tail of the distribution of the noises Zn resembles a power function x−α , which is much heavier than the exponential tail of Gaussian distributions . Therefore , large values of Zn are much more likely to be observed under Assumption 2 compared to the typical Gaussian assumption . The index α of regular variation encodes the heaviness of the tail—the smaller the heavier—and we are assuming that the left and right tails share the same index α . The purpose of this simplifying assumption is clarity of presentation , but our Rd results in Appendix G relax such a condition and allow different regular variation indices in different directions . Our work concerns a popular variant of SGD where the stochastic gradient is truncated . Specifically , when updating the SGD iterates with a learning rate η > 0 , rather than using the original noisy gradient descent step η ( f ′ ( Xn ) − Zn ) , we truncate it at a threshold b > 0 and use ϕb ( η ( f ′ ( Xn ) − Zn ) ) instead . Here the truncation operator ϕ· ( · ) is defined as ϕc ( w ) := w · min { 1 , c/|w| } ∀w ∈ R , c > 0 . ( 1 ) Besides truncating the stochastic gradient , we also project the SGD into [ −L , L ] at each iteration ; recall that L is the constant in Assumption 1 . That is , the main object of our study is the stochastic process { Xηj } j≥0 driven by the following recursion : Xηj := ϕL ( Xηj−1 − ϕb ( η ( f ′ ( Xηj−1 ) − Zj ) ) ) . ( 2 ) The projection ϕL and truncation ϕb here are common practices in many learning tasks for the purpose of ensuring that the SGD does not explode and drift to infinity .
Besides , the projection also allows us to drop the sophisticated assumptions on the tail behavior of f that are commonly seen in previous works ( see , for instance , the dissipativity conditions in Nguyen et al . ( 2019 ) ) . For technical reasons , we make the following assumption about the truncation threshold b > 0 . Note that this assumption is a very mild one , as it is obviously satisfied by ( Lebesgue ) almost every b > 0 . Assumption 3 . For each i = 1 , 2 , · · · , nmin , min { |si −mi| , |si−1 −mi| } /b is not an integer . | The authors analyse the behavior of SGD under gradient clipping. Their analysis in the univariate case shows that, under heavy-tailed gradient noise, gradient clipping (almost) eliminates the algorithm's tendency to stay at sharp minima. The authors support their analysis with synthetic experiments. The authors then conduct experiments on real data where they add heavy-tailed noise to the gradient, and clip it afterwards. | SP:ad55bd92bc963fcf31e760494f616adab195a340
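The clipped and projected recursion ( 2 ) above is easy to simulate. The sketch below is an illustrative reconstruction, not the authors' code: it runs the recursion on a toy one-dimensional loss with a narrow attraction field at m1 = −0.4 and a wide one at m2 = 1 ( saddle at s1 = 0 ), driven by symmetric Pareto-tailed noise. The landscape, the constants L, b, η, α, and the seed are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(w, c):
    """Truncation operator phi_c(w) = w * min{1, c/|w|} from equation (1)."""
    return w * min(1.0, c / abs(w)) if w != 0 else 0.0

# Toy loss with f'(x) = 5*x*(x + 0.4)*(x - 1): local minima at -0.4 (narrow
# attraction field, width 0.4) and 1 (wide field), saddle at 0.
def grad_f(x):
    return 5.0 * x * (x + 0.4) * (x - 1.0)

def heavy_noise(alpha=1.5):
    """Symmetric noise whose tail is regularly varying with index -alpha."""
    return rng.pareto(alpha) * rng.choice((-1.0, 1.0))

# b = 0.7 respects Assumption 3: neither 0.4/0.7 nor 1.0/0.7 is an integer.
L, b, eta, steps = 2.0, 0.7, 0.01, 50_000
x, in_wide = 0.0, 0
for _ in range(steps):
    x = phi(x - phi(eta * (grad_f(x) - heavy_noise()), b), L)  # recursion (2)
    in_wide += x > 0
print(f"fraction of iterates in the wide field: {in_wide / steps:.2f}")
```

In runs of this kind one expects the wide attraction field to dominate the occupancy, mirroring the elimination-of-sharp-minima effect; the exact fraction depends on the choices of α, η, and b.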
Eliminating Sharp Minima from SGD with Truncated Heavy-tailed Noise | 1 INTRODUCTION . Stochastic gradient descent ( SGD ) and its variants have seen unprecedented empirical successes in training deep neural networks . The training of deep neural networks is typically posed as a nonconvex optimization problem , and even without explicit regularization the solutions obtained by SGD often perform surprisingly well on test data . Such an unexpected generalization performance of SGD in deep neural networks is often attributed to SGD ’ s ability to avoid sharp local minima1 in the loss landscape , which tend to lead to poor generalization ( Hochreiter & Schmidhuber , 1997 ; Keskar et al. , 2016 ; Li et al. , 2018b ; Jiang et al. , 2019 ) ; see Appendix D for more details . Despite significant efforts to explain such phenomena theoretically , understanding how SGD manages to avoid sharp local minima and end up with flat local minima within a realistic training time still remains a central mystery of deep learning . Recently , the heavy-tailed dynamics of SGD have received significant attention , and it has been suggested that the heavy tails in the stochastic gradients may be a key ingredient that facilitates SGD ’ s escape from sharp local minima : for example , Şimşekli et al . ( 2019a ; b ) report empirical evidence of heavy tails in stochastic gradient noise in popular deep learning architectures ( see also Hodgkinson & Mahoney , 2020 ; Srinivasan et al. , 2021 ; Garg et al. , 2021 ) and show that SGD can escape sharp local minima in polynomial time under the presence of heavy-tailed gradient noise . More specifically , they view heavy-tailed SGDs as discrete approximations of Lévy driven Langevin equations and argue that the amount of time the SGD trajectory spends in each local minimum is proportional to the width of the associated minimum according to the metastability theory ( Pavlyukevich , 2007 ; Imkeller et al.
, 2010a ; b ) for such heavy-tailed processes . In this paper , we study the global dynamics and long-run behavior of heavy-tailed SGD and its practical variant in depth . In particular , we consider an adaptive version of SGD , where the stochastic gradient is truncated above a fixed threshold . Such a truncation scheme is often called gradient clipping and is employed by default in various contexts ( Engstrom et al. , 2020 ; Merity et al. , 2018 ; Graves , 2013 ; Pascanu et al. , 2013 ; Zhang et al. , 2020 ; Gorbunov et al. , 2020 ) . 1We use the terminology sharpness in a broad sense ; we refer to Appendix C for a detailed discussion . We uncover a rich mathematical structure in the global dynamics of SGD under this scheme and prove that the long-run behavior of such SGD is fundamentally different from that of the pure form of SGD : in particular , under a suitable structural condition on the geometry of the loss landscape , gradient clipping completely eliminates sharp minima from the trajectory of SGDs . This provides a critical insight into how the heavy-tailed dynamics of SGD can be utilized to find a local minimum that generalizes better . Figure 1 ( Left , Middle ) clearly illustrates these points with the histograms of the sample trajectories of SGDs . Note first that SGDs with light-tailed gradient noise— ( c ) and ( d ) of Figure 1 ( Left , Middle ) —never manage to escape a ( sharp ) minimum regardless of gradient clipping . In contrast , SGDs with heavy-tailed gradient noise— ( a ) and ( b ) of Figure 1 ( Left , Middle ) —easily escape from local minima . Moreover , there is a clear difference between SGDs with gradient clipping and without gradient clipping . In ( a ) of Figure 1 ( Left ) , SGD without gradient clipping spends a significant amount of time at each of the four local minima ( { m1 , m2 , m3 , m4 } ) , although it spends more time around the wide ones ( { m2 , m4 } ) than the sharp ones ( { m1 , m3 } ) .
On the other hand , in ( b ) of Figure 1 ( Left ) , SGD with gradient clipping not only escapes from local minima but also avoids sharp minima ( { m1 , m3 } ) almost completely . This means that if we stop training SGD at an arbitrary time point , it is almost guaranteed that it won ’ t be at a sharp minimum , effectively eliminating sharp minima from its training trajectories . We also propose a novel computational strategy that takes advantage of our newly discovered global dynamics of the heavy-tailed SGD . While evidence of heavy tails was reported in many deep learning tasks ( Şimşekli et al. , 2019b ; a ; Garg et al. , 2021 ; Gurbuzbalaban et al. , 2020 ; Hodgkinson & Mahoney , 2020 ; Nguyen et al. , 2019 ; Mahoney & Martin , 2019 ; Srinivasan et al. , 2021 ; Zhang et al. , 2020 ) , there seem to be plenty of deep learning contexts where the stochastic gradient noise is light-tailed ( Panigrahi et al. , 2019 ) as well . Guided by our new theory , we propose an algorithm that injects heavy tails into SGD by inflating the tail distribution of the gradient noise , facilitating the discovery of a local minimum that generalizes better . Our experiments with image classification tasks , reported in Tables 1 and 2 , illustrate that the tail-inflation strategy we propose here can indeed improve the generalization performance of SGD as predicted by our theory . The rest of the paper is organized as follows . Section 2 formulates the problem setting and characterizes the global dynamics of the SGD driven by heavy-tailed noises . Section 3 presents numerical experiments that confirm our theory . Section 4 proposes a new algorithm that artificially injects heavy-tailed gradient noise in actual deep learning tasks and demonstrates the improved performance . Technical Contributions : 1 ) We rigorously characterize the global behavior of the heavy-tailed SGD with gradient clipping .
We first focus on the case where the loss function is defined on R1 , with some simplifying assumptions on its geometry . Even with such assumptions , our theorem involves substantial technical challenges , since the traditional tools for analyzing SGD fail in our context due to the adaptive nature of its dynamics and the non-Gaussian distributional assumptions . For example , while the unclipped pure SGD can be analyzed by partitioning its trajectory at arrival times of large noises ( as in Pavlyukevich ( 2005 ) and Imkeller et al . ( 2010a ) ) , such an approach falls short in our context . Instead , we develop a set of delicate arguments for dealing with SGD ’ s ( near ) regeneration structure and the return times to the local minima , as well as for controlling the probability of atypical scenarios that would not arise in the unclipped case . Moreover , as evidenced by our Rd results in Appendix G , the approach developed here is critical in extending the analysis to general loss landscapes . 2 ) We propose a novel computational strategy for improving the generalization performance of SGD by carefully injecting heavy-tailed noise . We test the proposed algorithm on deep learning tasks and demonstrate its superiority with an ablation study . This also suggests that the key phenomenon we characterize in our theory—elimination of sharp local minima—manifests in real-world tasks . 2 THEORETICAL RESULTS . This section characterizes the global dynamics of SGD with gradient clipping when applied to a non-convex objective function f . In Sections 2.1 and 2.2 , we make the following assumptions for the sake of simplicity of analysis . However , as our multidimensional result in Section 2.3 and the experiments in Sections 3 and 4 suggest , the gist of the phenomena we analyze—elimination of sharp local minima—persists in general contexts where the domain of f is multi-dimensional and the stationary points are not necessarily strict local optima separated from one another .
Assumption 1 . Let f : R → R be a C2 function . There exist a positive real L > 0 , a positive integer nmin and an ordered sequence of real numbers m1 , s1 , m2 , s2 , · · · , snmin−1 , mnmin such that ( 1 ) −L < m1 < s1 < m2 < s2 < · · · < snmin−1 < mnmin < L ; ( 2 ) f ′ ( x ) = 0 iff x ∈ { m1 , s1 , · · · , snmin−1 , mnmin } ; ( 3 ) For any x ∈ { m1 , m2 , · · · , mnmin } , f ′′ ( x ) > 0 ; ( 4 ) For any x ∈ { s1 , s2 , · · · , snmin−1 } , f ′′ ( x ) < 0 . As illustrated in Figure 2 ( Left ) , the assumption above requires that f has finitely many local minima ( to be specific , the count is nmin ) , all of which are contained in the compact domain [ −L , L ] . Moreover , the points s1 , · · · , snmin−1 naturally partition the entire real line into different regions Ωi = ( si−1 , si ) ( here we adopt the convention that s0 = −∞ , snmin = +∞ ) . We call each region Ωi the attraction field of the local minimum mi , as the gradient flow in Ωi always points to mi . Throughout the optimization procedure , given any location x ∈ R we assume that we have access to the noisy estimator f ′ ( x ) − Zn of the true gradient f ′ ( x ) , while f ′ ( x ) itself is difficult to evaluate . Specifically , in this work we are interested in the case where the iid sequence of noises ( Zn ) n≥1 is heavy-tailed . Typically , heavy-tailed phenomena are captured by the concept of regular variation : for a measurable function φ : R+ → R+ , we say that φ is regularly varying at +∞ with index β ( denoted as φ ∈ RVβ ) if limx→∞ φ ( tx ) /φ ( x ) = tβ for all t > 0 . For details on the definition and properties of regularly varying functions , see , for example , chapter 2 of Resnick ( 2007 ) . In this paper , we work with the following distributional assumption on the gradient noise . Let H+ ( x ) := P ( Z1 > x ) , H− ( x ) := P ( Z1 < −x ) , and H ( x ) := H+ ( x ) + H− ( x ) = P ( |Z1| > x ) . Assumption 2 . EZ1 = 0 .
Furthermore , there exists some α ∈ ( 1 , ∞ ) such that the function H ( x ) is regularly varying ( at +∞ ) with index −α . Besides , regarding the positive and negative tails of the distribution of the noises , we have limx→∞ H+ ( x ) /H ( x ) = p+ and limx→∞ H− ( x ) /H ( x ) = p− = 1 − p+ , where p+ and p− are constants in the interval ( 0 , 1 ) . Roughly speaking , Assumption 2 means that the shape of the tail of the distribution of the noises Zn resembles a power function x−α , which is much heavier than the exponential tail of Gaussian distributions . Therefore , large values of Zn are much more likely to be observed under Assumption 2 compared to the typical Gaussian assumption . The index α of regular variation encodes the heaviness of the tail—the smaller the heavier—and we are assuming that the left and right tails share the same index α . The purpose of this simplifying assumption is clarity of presentation , but our Rd results in Appendix G relax such a condition and allow different regular variation indices in different directions . Our work concerns a popular variant of SGD where the stochastic gradient is truncated . Specifically , when updating the SGD iterates with a learning rate η > 0 , rather than using the original noisy gradient descent step η ( f ′ ( Xn ) − Zn ) , we truncate it at a threshold b > 0 and use ϕb ( η ( f ′ ( Xn ) − Zn ) ) instead . Here the truncation operator ϕ· ( · ) is defined as ϕc ( w ) := w · min { 1 , c/|w| } ∀w ∈ R , c > 0 . ( 1 ) Besides truncating the stochastic gradient , we also project the SGD into [ −L , L ] at each iteration ; recall that L is the constant in Assumption 1 . That is , the main object of our study is the stochastic process { Xηj } j≥0 driven by the following recursion : Xηj := ϕL ( Xηj−1 − ϕb ( η ( f ′ ( Xηj−1 ) − Zj ) ) ) . ( 2 ) The projection ϕL and truncation ϕb here are common practices in many learning tasks for the purpose of ensuring that the SGD does not explode and drift to infinity .
Besides , the projection also allows us to drop the sophisticated assumptions on the tail behavior of f that are commonly seen in previous works ( see , for instance , the dissipativity conditions in Nguyen et al . ( 2019 ) ) . For technical reasons , we make the following assumption about the truncation threshold b > 0 . Note that this assumption is a very mild one , as it is obviously satisfied by ( Lebesgue ) almost every b > 0 . Assumption 3 . For each i = 1 , 2 , · · · , nmin , min { |si −mi| , |si−1 −mi| } /b is not an integer . | The paper studies gradient descent with injected power-law tail noise. The work shows that, in the infinitesimal learning rate regime, the heavy-tailed noise can cause GD not to converge to sharp minima. Based on this theory, the paper proposes a technique for injecting noise into GD to help training | SP:ad55bd92bc963fcf31e760494f616adab195a340
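The regular-variation condition in Assumption 2 can also be checked numerically. The snippet below is an illustrative sketch (the tail index, sample size, and evaluation points are my choices, not the paper's): it samples symmetric noise whose magnitude has a Pareto (Lomax) tail, so that H ( x ) = P ( |Z1| > x ) = ( 1 + x )^( −α ), and verifies the defining limit H ( tx ) /H ( x ) → t^( −α ).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.5, 2_000_000  # tail index and sample size (illustrative choices)

# Symmetric noise whose magnitude has a Pareto (Lomax) tail with index alpha,
# so H(x) = P(|Z_1| > x) = (1 + x)**(-alpha) is regularly varying with index -alpha,
# with balanced tails p+ = p- = 1/2.
z = rng.pareto(alpha, size=n) * rng.choice((-1.0, 1.0), size=n)

def H(x):
    """Empirical tail function H(x) = P(|Z_1| > x)."""
    return float(np.mean(np.abs(z) > x))

# Regular variation with index -alpha means H(t*x)/H(x) -> t**(-alpha) as x grows.
x, t = 50.0, 2.0
print(H(t * x) / H(x), t ** (-alpha))  # approximately equal; closer for larger x
```

Since α ∈ ( 1 , 2 ) here, the noise has a finite mean but infinite variance, which is exactly the regime where the metastability arguments in the text apply.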
Eliminating Sharp Minima from SGD with Truncated Heavy-tailed Noise | 1 INTRODUCTION . Stochastic gradient descent ( SGD ) and its variants have seen unprecedented empirical successes in training deep neural networks . The training of deep neural networks is typically posed as a nonconvex optimization problem , and even without explicit regularization the solutions obtained by SGD often perform surprisingly well on test data . Such an unexpected generalization performance of SGD in deep neural networks is often attributed to SGD ’ s ability to avoid sharp local minima1 in the loss landscape , which tend to lead to poor generalization ( Hochreiter & Schmidhuber , 1997 ; Keskar et al. , 2016 ; Li et al. , 2018b ; Jiang et al. , 2019 ) ; see Appendix D for more details . Despite significant efforts to explain such phenomena theoretically , understanding how SGD manages to avoid sharp local minima and end up with flat local minima within a realistic training time still remains a central mystery of deep learning . Recently , the heavy-tailed dynamics of SGD have received significant attention , and it has been suggested that the heavy tails in the stochastic gradients may be a key ingredient that facilitates SGD ’ s escape from sharp local minima : for example , Şimşekli et al . ( 2019a ; b ) report empirical evidence of heavy tails in stochastic gradient noise in popular deep learning architectures ( see also Hodgkinson & Mahoney , 2020 ; Srinivasan et al. , 2021 ; Garg et al. , 2021 ) and show that SGD can escape sharp local minima in polynomial time under the presence of heavy-tailed gradient noise . More specifically , they view heavy-tailed SGDs as discrete approximations of Lévy driven Langevin equations and argue that the amount of time the SGD trajectory spends in each local minimum is proportional to the width of the associated minimum according to the metastability theory ( Pavlyukevich , 2007 ; Imkeller et al.
, 2010a ; b ) for such heavy-tailed processes . In this paper , we study the global dynamics and long-run behavior of heavy-tailed SGD and its practical variant in depth . In particular , we consider an adaptive version of SGD , where the stochastic gradient is truncated above a fixed threshold . Such a truncation scheme is often called gradient clipping and is employed by default in various contexts ( Engstrom et al. , 2020 ; Merity et al. , 2018 ; Graves , 2013 ; Pascanu et al. , 2013 ; Zhang et al. , 2020 ; Gorbunov et al. , 2020 ) . 1We use the terminology sharpness in a broad sense ; we refer to Appendix C for a detailed discussion . We uncover a rich mathematical structure in the global dynamics of SGD under this scheme and prove that the long-run behavior of such SGD is fundamentally different from that of the pure form of SGD : in particular , under a suitable structural condition on the geometry of the loss landscape , gradient clipping completely eliminates sharp minima from the trajectory of SGDs . This provides a critical insight into how the heavy-tailed dynamics of SGD can be utilized to find a local minimum that generalizes better . Figure 1 ( Left , Middle ) clearly illustrates these points with the histograms of the sample trajectories of SGDs . Note first that SGDs with light-tailed gradient noise— ( c ) and ( d ) of Figure 1 ( Left , Middle ) —never manage to escape a ( sharp ) minimum regardless of gradient clipping . In contrast , SGDs with heavy-tailed gradient noise— ( a ) and ( b ) of Figure 1 ( Left , Middle ) —easily escape from local minima . Moreover , there is a clear difference between SGDs with gradient clipping and without gradient clipping . In ( a ) of Figure 1 ( Left ) , SGD without gradient clipping spends a significant amount of time at each of the four local minima ( { m1 , m2 , m3 , m4 } ) , although it spends more time around the wide ones ( { m2 , m4 } ) than the sharp ones ( { m1 , m3 } ) .
On the other hand , in ( b ) of Figure 1 ( Left ) , SGD with gradient clipping not only escapes from local minima but also avoids sharp minima ( { m1 , m3 } ) almost completely . This means that if we stop training SGD at an arbitrary time point , it is almost guaranteed that it won ’ t be at a sharp minimum , effectively eliminating sharp minima from its training trajectories . We also propose a novel computational strategy that takes advantage of our newly discovered global dynamics of the heavy-tailed SGD . While evidence of heavy tails was reported in many deep learning tasks ( Şimşekli et al. , 2019b ; a ; Garg et al. , 2021 ; Gurbuzbalaban et al. , 2020 ; Hodgkinson & Mahoney , 2020 ; Nguyen et al. , 2019 ; Mahoney & Martin , 2019 ; Srinivasan et al. , 2021 ; Zhang et al. , 2020 ) , there seem to be plenty of deep learning contexts where the stochastic gradient noise is light-tailed ( Panigrahi et al. , 2019 ) as well . Guided by our new theory , we propose an algorithm that injects heavy tails into SGD by inflating the tail distribution of the gradient noise , facilitating the discovery of a local minimum that generalizes better . Our experiments with image classification tasks , reported in Tables 1 and 2 , illustrate that the tail-inflation strategy we propose here can indeed improve the generalization performance of SGD as predicted by our theory . The rest of the paper is organized as follows . Section 2 formulates the problem setting and characterizes the global dynamics of the SGD driven by heavy-tailed noises . Section 3 presents numerical experiments that confirm our theory . Section 4 proposes a new algorithm that artificially injects heavy-tailed gradient noise in actual deep learning tasks and demonstrates the improved performance . Technical Contributions : 1 ) We rigorously characterize the global behavior of the heavy-tailed SGD with gradient clipping .
We first focus on the case where the loss function is defined on R^1 with some simplifying assumptions on its geometry . Even with such assumptions , proving our theorem involves substantial technical challenges since the traditional tools for analyzing SGD fail in our context due to the adaptive nature of its dynamics and non-Gaussian distributional assumptions . For example , while the unclipped pure SGD can be analyzed by partitioning its trajectory at arrival times of large noises ( as in Pavlyukevich ( 2005 ) and Imkeller et al . ( 2010a ) ) , such an approach falls short in our context . Instead , we develop a set of delicate arguments for dealing with SGD ’ s ( near ) regeneration structure and the return times to the local minima , as well as for controlling the probability of atypical scenarios that would not arise in the unclipped case . Moreover , as evidenced by our R^d results in Appendix G , the approach developed here is critical in extending the analysis to general loss landscapes . 2 ) We propose a novel computational strategy for improving the generalization performance of SGD by carefully injecting heavy-tailed noise . We test the proposed algorithm with deep learning tasks and demonstrate its superiority with an ablation study . This also suggests that the key phenomenon we characterize in our theory— elimination of sharp local minima—manifests in real-world tasks . 2 THEORETICAL RESULTS . This section characterizes the global dynamics of SGD with gradient clipping when applied to a non-convex objective function f . In Sections 2.1 and 2.2 , we make the following assumptions for the sake of simplicity of analysis . However , as our multidimensional result in Section 2.3 and the experiments in Sections 3 and 4 suggest , the gist of the phenomena we analyze—elimination of sharp local minima—persists in general contexts where the domain of f is multi-dimensional , and the stationary points are not necessarily strict local optima separated from one another .
Assumption 1 . Let f : R → R be a C^2 function . There exist a positive real L > 0 , a positive integer nmin and an ordered sequence of real numbers m1 , s1 , m2 , s2 , · · · , snmin−1 , mnmin such that ( 1 ) −L < m1 < s1 < m2 < s2 < · · · < snmin−1 < mnmin < L ; ( 2 ) f ′ ( x ) = 0 iff x ∈ { m1 , s1 , · · · , snmin−1 , mnmin } ; ( 3 ) for any x ∈ { m1 , m2 , · · · , mnmin } , f ′′ ( x ) > 0 ; ( 4 ) for any x ∈ { s1 , s2 , · · · , snmin−1 } , f ′′ ( x ) < 0 . As illustrated in Figure 2 ( Left ) , the assumption above requires that f has finitely many local minima ( to be specific , the count is nmin ) , all of which are contained in some compact domain [ −L , L ] . Moreover , the points s1 , · · · , snmin−1 naturally partition the entire real line into different regions Ωi = ( si−1 , si ) ( here we adopt the convention that s0 = −∞ , snmin = +∞ ) . We call each region Ωi the attraction field of the local minimum mi , as the gradient flow in Ωi always points to mi . Throughout the optimization procedure , given any location x ∈ R we assume that we have access to the noisy estimator f ′ ( x ) − Zn of the true gradient f ′ ( x ) , and that f ′ ( x ) itself is difficult to evaluate . Specifically , in this work we are interested in the case where the iid sequence of noises ( Zn ) _{n≥1} is heavy-tailed . Typically , heavy-tailed phenomena are captured by the concept of regular variation : for a measurable function φ : R+ → R+ , we say that φ is regularly varying at +∞ with index β ( denoted as φ ∈ RVβ ) if lim_{x→∞} φ ( tx ) /φ ( x ) = t^β for all t > 0 . For details on the definition and properties of regularly varying functions , see , for example , chapter 2 of Resnick ( 2007 ) . In this paper , we work with the following distributional assumption on the gradient noise . Let H+ ( x ) := P ( Z1 > x ) , H− ( x ) := P ( Z1 < −x ) , and H ( x ) := H+ ( x ) + H− ( x ) = P ( |Z1| > x ) . Assumption 2 . EZ1 = 0 .
Furthermore , there exists some α ∈ ( 1 , ∞ ) such that the function H ( x ) is regularly varying ( at +∞ ) with index −α . Besides , regarding the positive and negative tails of the distribution of the noises , we have lim_{x→∞} H+ ( x ) / H ( x ) = p+ and lim_{x→∞} H− ( x ) / H ( x ) = p− = 1 − p+ , where p+ and p− are constants in the interval ( 0 , 1 ) . Roughly speaking , Assumption 2 means that the shape of the tail of the distribution of the noises Zn resembles a polynomial function x^−α , which is much heavier than the exponential tail of Gaussian distributions . Therefore , large values of Zn are much more likely to be observed under Assumption 2 compared to the typical Gaussian assumption . The index α of regular variation encodes the heaviness of the tail—the smaller the heavier—and we are assuming that the left and right tails share the same index α . The purpose of this simplifying assumption is clarity of presentation , but our R^d results in Appendix G relax such a condition and allow different regular variation indices in different directions . Our work concerns a popular variant of SGD where the stochastic gradient is truncated . Specifically , when updating the SGD iterates with a learning rate η > 0 , rather than using the original noisy gradient descent step η ( f ′ ( Xn ) − Zn ) , we will truncate it at a threshold b > 0 and use ϕb ( η ( f ′ ( Xn ) − Zn ) ) instead . Here the truncation operator ϕ· ( · ) is defined as ϕc ( w ) := w · min { 1 , c/|w| } ∀ w ∈ R , c > 0 . ( 1 ) Besides truncating the stochastic gradient , we also project the SGD into [ −L , L ] at each iteration ; recall that L is the constant in Assumption 1 . That is , the main object of our study is the stochastic process { X_j^η } _{j≥0} driven by the following recursion : X_j^η := ϕL ( X_{j−1}^η − ϕb ( η ( f ′ ( X_{j−1}^η ) − Zj ) ) ) . ( 2 ) The projection ϕL and truncation ϕb here are common practices in many learning tasks for the purpose of ensuring that the SGD does not explode and drift to infinity .
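As a hedged illustration of recursion ( 2 ) — not the authors ’ code ; the double-well loss , step size , and all constants below are illustrative choices of ours — the clipped dynamics can be simulated with symmetric Pareto-type noise satisfying Assumption 2 with tail index α :

```python
import random

def clip(w, c):
    # truncation operator phi_c(w) = w * min(1, c / |w|)
    return w * min(1.0, c / abs(w)) if w != 0 else 0.0

def heavy_noise(alpha):
    # symmetric Pareto-type noise: P(|Z| > x) ~ x^(-alpha), mean zero by symmetry
    u = 1.0 - random.random()                      # u in (0, 1]
    mag = u ** (-1.0 / alpha) - 1.0
    return mag if random.random() < 0.5 else -mag

def clipped_sgd(f_prime, x0, eta=0.01, b=0.5, L=2.0, alpha=1.5, steps=10000, seed=0):
    random.seed(seed)
    x, traj = x0, []
    for _ in range(steps):
        g = f_prime(x) - heavy_noise(alpha)        # noisy gradient estimate f'(x) - Z
        x = clip(x - clip(eta * g, b), L)          # recursion (2): truncate the step, project to [-L, L]
        traj.append(x)
    return traj

# illustrative double-well loss f(x) = (x^2 - 1)^2 / 4 with minima at +-1 (our choice)
traj = clipped_sgd(lambda x: x * (x * x - 1.0), x0=1.0)
```

Here the inner clip plays the role of ϕb , bounding every update by b , and the outer clip plays the role of the projection ϕL ; histogramming traj would produce the kind of occupation plot shown in Figure 1 .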
Besides , the projection also allows us to drop the sophisticated assumptions on the tail behaviors of f that are commonly seen in previous works ( see , for instance , the dissipativity conditions in Nguyen et al . ( 2019 ) ) . For technical reasons , we make the following assumption about the truncation threshold b > 0 . Note that this assumption is a very mild one , as it is obviously satisfied by ( Lebesgue ) almost every b > 0 . Assumption 3 . For each i = 1 , 2 , · · · , nmin , min { |si −mi| , |si−1 −mi| } /b is not an integer . | This paper studies the long-time behavior of heavy-tailed SGD with gradient clipping. It is found that gradient clipping is crucial for heavy-tailed SGD to avoid sharp minima. The basic intuition is that the clipping operation reduces the distance moved by each SGD update. Therefore, for minima narrower than the threshold, the clipping does not change the first exit time. However, for wide minima, SGD is slowed down and takes more time to escape. Consequently, it is more likely that SGD ends up in wide minima. | SP:ad55bd92bc963fcf31e760494f616adab195a340
Character Generation through Self-Supervised Vectorization | 1 INTRODUCTION . While , innately , humans sketch or write through strokes , this type of visual depiction is a more difficult task for machines . Image generation problems are typically addressed by raster-based algorithms . The introduction of generative adversarial networks ( GAN ) ( Goodfellow et al. , 2014 ) , variational autoencoders ( VAE ) ( Kingma & Welling , 2013 ) and autoregressive models ( Van Oord et al. , 2016 ) has led to a variety of applications . Style transfer ( Gatys et al. , 2015 ; Isola et al. , 2017 ) , photo realistic image generation ( Brock et al. , 2018 ; Karras et al. , 2019 ) , and super resolution ( Ledig et al. , 2017 ; Bin et al. , 2017 ) are some of the significant instances of the advancing field . Additionally , Hierarchical Bayesian models formulated by deep neural networks are able to use the same generative model for multiple tasks such as classification , conditional and unconditional generation ( Hewitt et al. , 2018 ; Edwards & Storkey , 2016 ) . These raster-based algorithms can produce high quality images , yet they can not benefit from the leverage that higher level abstractions bring about . Vector-level image representation intrinsically prevents models from generating blurry samples and allows for compositional image generation which eventually may contribute to our understanding of how humans create or replicate images ( Lake et al. , 2017 ) . This idea , with the introduction of sketch-based datasets such as Omniglot ( Lake et al. , 2012 ) , Sketchy ( Sangkloy et al. , 2016 ) , and QuickDraw ( Ha & Eck , 2017 ) has triggered a significant body of work in recent years . Stroke based image generation and parsing has been addressed with both vector supervised models and self-supervised generation . Of these , one prominent algorithm is Bayesian Program Learning ( Lake et al. 
, 2015 ) , where a single model can be utilized for 5 tasks in the Omniglot challenge : ( i ) parsing , ( ii ) unconditional generation , ( iii ) generating exemplars of a given concept , ( iv ) generating novel concepts of a type , and ( v ) one-shot classification . This approach is also shown to be scalable when supported by the representative capabilities of neural networks ( Feinman & Lake , 2020b ; a ) ; however , it requires stroke-level or vector supervision , which is costly to obtain or simply nonexistent . VAE/RNN ( Ha & Eck , 2017 ; Cao et al. , 2019 ; Chen et al. , 2017 ; Aksan et al. , 2020 ) and Transformer-based models ( Ribeiro et al. , 2020 ; Lin et al. , 2020 ) are other common methods applied to vector-based image generation . Although impressive results have been presented , stroke-level supervision is required to train these models . Figure 1 : Our drawing agent can accomplish four different tasks . From left to right : it can generate novel characters , parse a given character into its strokes , generate new exemplars for a given character , and generate novel concepts ( i.e . characters ) given a type ( i.e . alphabet ) . Ours is the first stroke-based method to tackle all of the generation and parsing tasks in the Omniglot Challenge , without requiring any stroke-level supervision . Recently , self-supervised ( i.e . in the absence of stroke-level supervision ) stroke-based image generation has been addressed with Reinforcement Learning ( RL ) ( Ganin et al. , 2018 ; Mellor et al. , 2019 ; Huang et al. , 2019 ; Schaldenbrand & Oh , 2020 ) . We call this approach self-supervised vectorization , since the vectorization of images is learned using only raster images as supervision . These methods mostly focus on image reconstruction and their exploration in generation is limited . For example , none of them address the conditional generation problem , or they need the number of actions ( i.e . strokes ) as input .
In this paper , we propose a self-supervised reinforcement learning approach where we train a drawing agent for character generation and parsing . Our drawing agent operates on the stroke-level ( i.e . vector ) representation of images . At each time step , our agent takes the current canvas as input and dynamically decides whether to continue drawing or stop . When a ‘ continue ’ decision is made , the agent outputs a program specifying the stroke to be drawn . A non-differentiable renderer takes this program and draws it on the current canvas . Consequently , a raster image is produced stroke-by-stroke . We first train this agent for two tasks by formulating appropriate loss functions : ( i ) unconditional character generation and ( ii ) parsing . Unconditional character generation is the task of generating a novel concept1 ( i.e . character ) given a dataset of concepts . For this task , our loss function includes the following components : an adversarial loss produced by a discriminator to make generated characters as “ real ” as possible , and two data fidelity losses assessing the conformity of the current canvas with the statistical properties of the overall dataset . We also use an additional entropy loss to prevent mode collapse . In the parsing task , the goal for our agent is to reconstruct a given character ( in raster-image form ) by drawing it through strokes using as few of them as possible . We utilize the same action space and environment as in the unconditional generation model , the only difference being that the input fed to the policy is the complete canvas to be reconstructed . Our reward function in this task has two components : a fidelity reward that indicates how much of a stroke is consistent with the target image and a penalty that increases with every ‘ continue ’ action taken . This model explicitly learns the vectorization of the input raster-image in a self-supervised manner .
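The fidelity-plus-penalty reward just described can be sketched as follows . The overlap-based fidelity measure is in the spirit of the paper ’ s alignment computation ; the function names , the penalty weight lam , and the flattened 1-D binary canvases are our illustrative assumptions , not the authors ’ implementation :

```python
def align(canvas, target):
    # overlap-based fidelity: on-pixels shared with the target, minus on-pixels
    # of `canvas` that miss the target, normalized by the target's on-pixels
    hit = sum(1 for c, t in zip(canvas, target) if c and t)
    miss = sum(1 for c, t in zip(canvas, target) if c and not t)
    on_target = sum(1 for t in target if t)
    return (hit - miss) / on_target if on_target else 0.0

def parsing_reward(stroke, target, t, lam=0.05):
    # fidelity of the newly sampled stroke, minus a per-step penalty
    # that discourages extra 'continue' actions
    return align(stroke, target) - lam * t

# flattened 1-D binary canvases for illustration
target = [1, 1, 1, 0, 0, 1, 0, 0]
stroke = [1, 1, 0, 0, 0, 0, 1, 0]  # two hits, one miss
```

A perfectly matching stroke gets fidelity 1 , while strokes that paint off-target pixels are penalized , so the agent is driven to cover the input with few , accurate strokes .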
Next , we show that our parsing model can be exploited for exemplar generation ( i.e . a novel drawing of a given character ) and novel concept generation from type ( i.e . novel character generation given an alphabet of 10 characters ) without any further training . Given a character , the policy network of our parsing model outputs a distribution over the action space , where the likelihood of actions at each time step eventually allows us to generate variations of the input image . For novel concept generation conditioned on a type ( i.e . alphabet ) , we compose a stroke library by parsing the provided inputs . As we sample strokes from this library , we observe novel samples forming , in coherence with the overall structure of the alphabet . To the best of our knowledge , we are the first to tackle these tasks with a self-supervised approach that operates on stroke space . Through experiments we show that our agent can successfully generate novel characters in all three ways ( unconditionally , conditioned on a given alphabet , conditioned on a given character ) , and parse and reconstruct input characters . For both exemplar generation and type-conditioned novel concept generation , we provide LPIPS ( Zhang et al. , 2018 ) , L2 and SSIM measures between input samples and generated images . Our contributions in this paper are two-fold : ( i ) we present a drawing agent that can successfully handle all of the generation and parsing tasks in the Omniglot challenge in a self-supervised , stroke-based manner – such a model did not exist ; ( ii ) we provide for the first time perceptual-similarity-based quantitative benchmarks for the ‘ exemplar generation ’ and ‘ type-conditioned novel concept generation ’ tasks . ( Footnote 1 : Omniglot challenge terminology . ) 2 RELATED WORK . The main purpose of this work is to present a self-supervised approach in order to solve the generation and parsing tasks in the Omniglot Challenge ( Lake et al.
, 2015 ) , by capturing the stroke-level representation of images . Here we initially examine the supervised and self-supervised approaches to Omniglot challenge . Then , we review the work on image vectorization . And lastly , we touch upon the research on program synthesis in the context of this study . Omniglot Challenge Omniglot dataset of world alphabets was released with a set of challenges : parsing a given letter , one shot classification , generating a new letter given an alphabet , generating a novel sample of a character , and unconditional generation . Omniglot letters have samples that are conditionally independent based on the alphabet-character hierarchy , hence , a distinctive approach to achieve all these tasks is Hierarchical Bayesian modeling ( Lake et al. , 2015 ) , ( Lake et al. , 2013 ) . As the Omniglot letters included human strokes as labels , the compositional and causal nature of letters are leveraged to model the generation process . Later , neurosymbolic models are also shown to be successful for unconditional generation ( Feinman & Lake , 2020a ) and conceptual compression for multiple tasks presented within the Omniglot Challenge ( Feinman & Lake , 2020b ) . However , without the stroke set that generated a concept , these tasks become more difficult . The idea of sequential image generation is examined by recurrent VAE models ( Rezende et al. , 2016 ) , ( Gregor et al. , 2015 ) , ( Gregor et al. , 2016 ) . DRAW ( Gregor et al. , 2015 ) and Convolutional DRAW ( Gregor et al. , 2016 ) were able to generate quality unconditional samples from MNIST and Omniglot datasets respectively . DRAW is proposed as an algorithm to generate images recurrently . The network is able to iteratively generate a given image by attending to certain parts of the input at each time step . 
Convolutional DRAW improved the idea with an RNN/VAE-based algorithm that can capture the global structure and low-level details of an image separately in order to increase the quality of generations . Later , it is shown that Hierarchical Bayesian Modeling can be improved by the representational power of deep learning and attentional mechanisms in order to achieve three of the five Omniglot challenges ( Rezende et al. , 2016 ) . Another novel idea to leverage Bayesian modeling to tackle the Omniglot Challenge was performing modifications on the VAE architecture to represent hierarchical datasets ( Edwards & Storkey , 2016 ; Hewitt et al. , 2018 ) . The significance of these studies is that they were able to obtain latent variables that describe class-level features effectively . Despite the ability to utilize the same model for different problems ( one-shot classification , unconditional and conditional generation ) , raster-based one-step generative models have two disadvantages we want to address . First , they can not leverage the higher-level abstraction and quality that come with working in a vector space . Secondly , one-step generation does not provide an interpretable compositional and causal process describing how a character is generated . In this work , we combine the advantages of the two aforementioned groups of models with an agent operating on the stroke representation of images that uses only raster images during training . Thus , we aim to solve all three generative tasks and the parsing ( reconstruction ) task of the Omniglot challenge . We show that the model trained for reconstruction can also be adopted as a tool that captures the compositional structure of a given character . Without any further training , our agent can solve the exemplar generation and type-conditioned novel concept generation problems . Image Generation by Vectorization — With Stroke Supervision Sketch-RNN ( Ha & Eck , 2017 ) is the first LSTM/VAE-based sketch generation algorithm .
It is later improved to generate multi-class samples ( Cao et al. , 2019 ) and to increase the quality of generations by representing strokes as Bezier curves ( Song , 2020 ) . The idea of obtaining a generalizable latent space by image-stroke mapping is studied by many ( Aksan et al. , 2020 ; Das et al. , 2021 ; Bhunia et al. , 2021 ; Wang et al. , 2020 ) . In CoSE ( Aksan et al. , 2020 ) , the problem is articulated as ‘ completion of a partially drawn sketch ’ . They achieved state-of-the-art reconstruction performance by utilizing variable-length strokes and a novel relational model that is able to capture the global structure of the sketch . The progress in stroke representation is continued with the incorporation of variable-degree Bezier curves ( Das et al. , 2021 ) , and capturing the Gestalt structure of partially occluded sketches ( Lin et al. , 2020 ) . Self-Supervised Vectorization The self-supervised vector-based image generation problem has been approached by RL-based frameworks ( Zhou et al. , 2018 ; Ganin et al. , 2018 ; Mellor et al. , 2019 ; Huang et al. , 2019 ; Schaldenbrand & Oh , 2020 ; Zou et al. , 2020 ) . In SPIRAL ( Ganin et al. , 2018 ) , unconditional generation and reconstruction tasks are tackled with adversarially trained RL agents . Succeeding research enhanced the reconstruction process with a differentiable renderer , making it possible for agents to operate on a continuous space ( Huang et al. , 2019 ; Schaldenbrand & Oh , 2020 ) . In order to avert the computational expense of RL-based algorithms , end-to-end differentiable models are developed through altering the rendering process ( Nakano , 2019 ) or formulating the generation process as a parameter search ( Zou et al. , 2020 ) . More recently , a differentiable renderer and compositor are utilized for generating closed Bezier paths and the final image , respectively ( Reddy et al. , 2021 ) .
This method led to successful interpolation , reconstruction , and sampling processes . Most related to our work is SPIRAL where both reconstruction and unconditional generation is studied through self-supervised deep reinforcement learning . However , our approach has some significant differences . First , in SPIRAL each stroke is also represented as a Bezier curve , yet , the starting point of each curve is set as the final point of the previous curve . In our model , all control points of the Bezier curve are predicted by the agent at each time step . Hence , the agent has to learn the continuity and the compositionality of the given character in order to produce quality samples . Secondly , SPIRAL provides a generative model that works through a graphics renderer without addressing the conditional generation problem . They show impressive results on both natural images and handwritten characters . While we provide a solution for multiple generative tasks , we have not explored our model in the context of natural images . Another approach that presents a similar scheme to the reconstruction problem is “ Learning to Paint ” ( Huang et al. , 2019 ) . In Learning to Paint , the proposed model is utilized specifically for reconstruction . When reconstruction is considered , the main difference of our model is that since we try to model a human-like generation process , our agent outputs a single stroke at each time step with the environment being altered throughout this process while in Learning to Paint , 5 strokes are predicted by the agent at each time step . As a major difference from previous studies , our agent decides whether to stop or keep drawing before generating a stroke . This enables the agent to synthesize an image with as few actions as possible when motivated with our reward formulations . Self Supervised Program Synthesis Our method essentially outputs a visual program that depends only on the rastered data . 
In that sense , studies on Constructive Solid Geometry ( CSG ) are also related . Different RL frameworks for reconstruction of a given CSG image , that is essentially a composition of geometric shapes , are proposed ( Ellis et al. , 2019 ; Zhou et al. , 2020 ) . The former considered parsing as a search problem that is solved by using a read-eval-print-loop within a Markov Decision Process . The latter adopted a Tree-LSTM model to eliminate invalid programs and the reward is considered to be the Chamfer distance between the target image and current canvas . 3 METHOD Our model consists of a policy network and a ( nondifferentiable ) renderer . At time step t , the policy network takes the current canvas , Ct – a raster-image , as input and outputs two distributions , πB and πS . The first distribution , πB , is for stroke ( i.e . Bezier curve ) parameters and the second one , πS , is for the continue/stop decision . From the first distribution , we randomly sample a stroke defined by its 7 parameters ( x-y coordinates of start , end , control points of the quadratic Bezier curve , and a brush-width ) . From the second distribution , we randomly sample a decision . If the decision happens to be ‘ continue ’ , we add the newly sampled stroke to the current canvas , Ct , increment time ( i.e . t ← t + 1 ) and restart . If the decision was to ‘ stop ’ , then Ct is returned as the final output . Our model is able to handle parsing and different generation tasks , and the processing pipeline we just described is common in all these tasks . What changes among tasks is the reward functions and/or training procedures , which we explain below . Unconditional Generation The task of ‘ generating new concepts ’ as dubbed in Omniglot challenge , is essentially unconditional sampling from a distribution obtained from the whole Omniglot training set . Here , the model is asked to generate completely novel samples ( i.e . characters ) without any constraints . 
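The seven-parameter stroke just described can be illustrated with a minimal rasterizer sketch . The sampling density , the square brush , and all names below are our assumptions , since the paper does not specify its ( non-differentiable ) renderer at this level of detail :

```python
def render_stroke(canvas, params, samples=64):
    # params, following the paper's listing: start (sx, sy), end (ex, ey),
    # control point (cx, cy) of a quadratic Bezier curve, and a brush width
    sx, sy, ex, ey, cx, cy, width = params
    h, w = len(canvas), len(canvas[0])
    r = max(0, int(width) // 2)
    for i in range(samples + 1):
        t = i / samples
        # quadratic Bezier: B(t) = (1-t)^2 * start + 2(1-t)t * control + t^2 * end
        bx = (1 - t) ** 2 * sx + 2 * (1 - t) * t * cx + t ** 2 * ex
        by = (1 - t) ** 2 * sy + 2 * (1 - t) * t * cy + t ** 2 * ey
        # stamp a square brush (our simplification) around the sampled point
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                px, py = int(round(bx)) + dx, int(round(by)) + dy
                if 0 <= px < w and 0 <= py < h:
                    canvas[py][px] = 1
    return canvas

canvas = [[0] * 16 for _ in range(16)]
render_stroke(canvas, (2, 2, 13, 3, 8, 14, 1))  # start (2,2), end (13,3), control (8,14)
```

Repeating this call once per ‘ continue ’ decision accumulates strokes on the canvas , which is the stroke-by-stroke raster production described above .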
For this task , at each time step t , we calculate an instantaneous reward , rt , that has three components : rt = D ( Ct ) + λ1 align ( Ct , I ) + λ2 N ( |Ct| ; µ , σ ) . ( 1 ) The first term is a reward based on a discriminator to make generated characters as ‘ real ’ as possible . D ( · ) is a discriminator that outputs the “ realness ” score of its input canvas . We train it in an adversarial manner by using the generated examples as negatives and the elements of the input dataset as positives . The second term is a clustering-based data fidelity reward . The function align ( Ct , I ) measures the alignment between the current canvas Ct and another canvas I , which is a randomly selected cluster center at the beginning of each episode . The cluster centers are obtained by applying k-means on all characters in the input dataset . align basically counts the number of intersecting on-pixels ( between the two canvases ) minus the number of non-intersecting on-pixels in Ct , and divides this quantity by the number of on-pixels in I . The final term assesses the conformity of the current canvas with the dataset in terms of the number of on-pixels . N ( |Ct| ; µ , σ ) evaluates a normal distribution with ( µ , σ ) at |Ct| , which is the number of on-pixels in the current canvas . We obtain ( µ , σ ) by fitting a normal distribution to the on-pixel counts of characters in the training set . We observed that the second and third terms accelerate learning as they guide the exploration within the vicinity of real characters . During training , instead of using the instantaneous reward , rt , we use the difference of successive rewards , i.e . rt − rt−1 . In order to encourage exploration and avoid mode collapse , we use an entropy penalty term as α max ( 0 , KL ( [ πB , πS ] , U ) − τ ) . ( 2 ) Here , KL indicates KL-divergence and U is the uniform distribution .
This term first measures the divergence between the uniform distribution and πB , πS , the distributions output by the policy network . Then , through the hinge function , if the divergence exceeds a threshold ( τ ) , this term activates and increases the penalty . The policy network and the discriminator D are updated alternatingly after 256 images are generated at each iteration . We employ the REINFORCE algorithm ( Williams , 1992 ) to update the weights of the policy network . The discriminator is trained using a hinge loss . In order to stabilize the discriminator and keep the Lipschitz constant of the whole network equal to 1 , Spectral Normalization is applied at each layer ( Miyato et al. , 2018 ) . Throughout the training , we kept the balance ratio between generated and real samples at 3 . Image Reconstruction by Parsing In the “ parsing ” task , the goal is to reconstruct the given input image by re-drawing it through strokes as accurately as possible . To this end , we formulate a new reward function with two terms : a fidelity reward that indicates how much of a stroke is consistent with the input image ( using the “ align ” function introduced above ) and a penalty that grows with the time step t , i.e . with every ‘ continue ’ decision made : rt = align ( St , Ct ) − λ1 t , ( 3 ) where St is the newly sampled stroke and Ct is the current canvas ( input ) . The second term simply acts as a penalty for every ‘ continue ’ action . The first term ensures that the sampled stroke is well-aligned with the input , and the second term forces the model to use as few strokes as possible . There is no need for a discriminator . This model explicitly learns the vectorization of the input raster-image in a self-supervised manner . Apart from the different reward function , another crucial difference between the training of the unconditional generation model and the parsing model is how the input and output are handled .
In unconditional generation , the newly-sampled stroke is added to the current canvas , whereas in parsing , we do the opposite : the sampled stroke is removed ( masked out ) from the current canvas , and the returned final canvas is the combination of all sampled strokes until the ‘ stop ’ decision . λ , α and τ in Equations 1 , 2 , and 3 are hyperparameters adjusted experimentally ( see ‘ Training Details ’ in Appendix B ) . Generating New Exemplars In this task , a model is required to generate a new exemplar ( i.e . a variation ) of an unseen concept ( i.e . character ) . To the best of our knowledge , we are the first to tackle this task in a self-supervised stroke-based setting . Most importantly , we do not require any training to achieve this task . We utilize our parsing network described in the previous section to capture the overall structure of a given letter . In order to produce new exemplars , we randomly sample different parsings ( sets of strokes ) from the distribution generated by the agent . In order to eliminate ‘ unlikely ’ samples , we compute the likelihood of the parsing given the resulting policy , and apply a threshold . Generating Novel Concepts from Type In this task , the goal is to generate a novel concept ( i.e . character ) given a previously unseen type ( i.e . alphabet ) consisting of 10 concepts . The novel concepts should conform to the overall structure , that is , the stroke formulation and composition of the given type ( alphabet ) . We , again , tackle this challenge using our parsing network without any further training . To do so , we first parse all input images into their strokes . For each input image , we sample five stroke sets from the stroke-parameters distribution output by the policy network . During the sampling process , we again use the likelihood-based quality function described in the previous section . We add all the strokes sampled during this process to form a stroke library .
Here the strokes are stored with the time steps at which they are generated . Noting that the number of strokes sampled for a given character is not constant , we approximate a distribution for the stopping actions . This process provides a stroke set representing the structure of the letters and the way they are composed , that is , we can exploit the compositionality and causality of an alphabet . Throughout the character generation process , at each time step a stroke is sampled from the corresponding group of the library . The sampled strokes are summed together to obtain the final canvas . | In this paper, the authors present a method for character generation using self-supervised learning. Different from existing approaches, it can leverage the benefits of higher-level abstraction (due to the strokes used) and get rid of stroke supervision. In this way, high-quality images are generated while the supervised training data requirements are relaxed. Although some comparisons are made on the Omniglot dataset, the advantages of this method are not very clear. | SP:8c62155ad4440b73185fbcff8b19d882c16a8fcc
Character Generation through Self-Supervised Vectorization | 1 INTRODUCTION . While , innately , humans sketch or write through strokes , this type of visual depiction is a more difficult task for machines . Image generation problems are typically addressed by raster-based algorithms . The introduction of generative adversarial networks ( GAN ) ( Goodfellow et al. , 2014 ) , variational autoencoders ( VAE ) ( Kingma & Welling , 2013 ) and autoregressive models ( Van Oord et al. , 2016 ) has led to a variety of applications . Style transfer ( Gatys et al. , 2015 ; Isola et al. , 2017 ) , photo realistic image generation ( Brock et al. , 2018 ; Karras et al. , 2019 ) , and super resolution ( Ledig et al. , 2017 ; Bin et al. , 2017 ) are some of the significant instances of the advancing field . Additionally , Hierarchical Bayesian models formulated by deep neural networks are able to use the same generative model for multiple tasks such as classification , conditional and unconditional generation ( Hewitt et al. , 2018 ; Edwards & Storkey , 2016 ) . These raster-based algorithms can produce high quality images , yet they can not benefit from the leverage that higher level abstractions bring about . Vector-level image representation intrinsically prevents models from generating blurry samples and allows for compositional image generation which eventually may contribute to our understanding of how humans create or replicate images ( Lake et al. , 2017 ) . This idea , with the introduction of sketch-based datasets such as Omniglot ( Lake et al. , 2012 ) , Sketchy ( Sangkloy et al. , 2016 ) , and QuickDraw ( Ha & Eck , 2017 ) has triggered a significant body of work in recent years . Stroke based image generation and parsing has been addressed with both vector supervised models and self-supervised generation . Of these , one prominent algorithm is Bayesian Program Learning ( Lake et al. 
, 2015), where a single model can be utilized for five tasks in the Omniglot challenge: (i) parsing, (ii) unconditional generation, (iii) generating exemplars of a given concept, (iv) generating novel concepts of a type, and (v) one-shot classification. This approach is also shown to be scalable when supported by the representational capabilities of neural networks (Feinman & Lake, 2020b;a); however, it requires stroke-level or vector supervision, which is costly to obtain or simply nonexistent. VAE/RNN (Ha & Eck, 2017; Cao et al., 2019; Chen et al., 2017; Aksan et al., 2020) and Transformer-based models (Ribeiro et al., 2020; Lin et al., 2020) are other common methods applied to vector-based image generation. Although impressive results have been presented, stroke-level supervision is required to train these models. Figure 1: Our drawing agent can accomplish four different tasks. From left to right: it can generate novel characters, parse a given character into its strokes, generate new exemplars for a given character, and generate novel concepts (i.e., characters) given a type (i.e., alphabet). Ours is the first stroke-based method to tackle all of the generation and parsing tasks in the Omniglot Challenge without requiring any stroke-level supervision. Recently, self-supervised (i.e., in the absence of stroke-level supervision) stroke-based image generation has been addressed with Reinforcement Learning (RL) (Ganin et al., 2018; Mellor et al., 2019; Huang et al., 2019; Schaldenbrand & Oh, 2020). We call this approach self-supervised vectorization, since the vectorization of images is learned using only raster images as supervision. These methods mostly focus on image reconstruction, and their exploration of generation is limited. For example, none of them address the conditional generation problem, or they need the number of actions (i.e., strokes) as input.
In this paper, we propose a self-supervised reinforcement learning approach in which we train a drawing agent for character generation and parsing. Our drawing agent operates on the stroke-level (i.e., vector) representation of images. At each time step, our agent takes the current canvas as input and dynamically decides whether to continue drawing or stop. When a 'continue' decision is made, the agent outputs a program specifying the stroke to be drawn. A non-differentiable renderer takes this program and draws it on the current canvas. Consequently, a raster image is produced stroke-by-stroke. We first train this agent for two tasks by formulating appropriate loss functions: (i) unconditional character generation and (ii) parsing. Unconditional character generation is the task of generating a novel concept (i.e., character) given a dataset of concepts. For this task, our loss function includes the following components: an adversarial loss produced by a discriminator to make generated characters as "real" as possible, and two data-fidelity losses assessing the conformity of the current canvas with the statistical properties of the overall dataset. We also use an additional entropy loss to prevent mode collapse. In the parsing task, the goal for our agent is to reconstruct a given character (in raster-image form) by drawing it through strokes, using as few of them as possible. We utilize the same action space and environment as in the unconditional generation model, the only difference being that the input fed to the policy is a complete canvas to be reconstructed. Our reward function in this task has two components: a fidelity reward that indicates how much of a stroke is consistent with the target image, and a penalty that increases with every 'continue' action being taken. This model explicitly learns the vectorization of the input raster image in a self-supervised manner.
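The stroke "program" and the non-differentiable renderer described above can be sketched minimally as follows. The 7-parameter layout (start point, control point, end point, brush width) and the square brush are illustrative assumptions; the paper names these quantities but does not fix an exact encoding.

```python
import numpy as np

def render_stroke(canvas, stroke, n_samples=64):
    """Draw one quadratic Bezier stroke onto a binary canvas, in place.

    `stroke` = (x0, y0, cx, cy, x1, y1, width): start point, control point,
    end point, and brush width. The ordering is an assumed convention.
    """
    x0, y0, cx, cy, x1, y1, width = stroke
    h, w = canvas.shape
    r = max(int(round(width / 2)), 0)
    for t in np.linspace(0.0, 1.0, n_samples):
        # Quadratic Bezier: B(t) = (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2
        bx = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        by = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        xi, yi = int(round(bx)), int(round(by))
        # Stamp a square brush of the given width around the sample point.
        canvas[max(yi - r, 0):min(yi + r + 1, h),
               max(xi - r, 0):min(xi + r + 1, w)] = 1.0
    return canvas
```

Because rasterization of this kind is non-differentiable, the policy that emits the stroke parameters must be trained with reinforcement learning rather than by backpropagating through the canvas.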
Next, we show that our parsing model can be exploited for exemplar generation (i.e., a novel drawing of a given character) and novel concept generation from type (i.e., novel character generation given an alphabet of 10 characters) without any further training. Given a character, the policy network of our parsing model outputs a distribution over the action space, where the likelihood of actions at each time step eventually allows us to generate variations of the input image. For novel concept generation conditioned on a type (i.e., alphabet), we compose a stroke library by parsing the provided inputs. As we sample strokes from this library, we observe novel samples forming, in coherence with the overall structure of the alphabet. To the best of our knowledge, we are the first to tackle these tasks with a self-supervised approach that operates on stroke space. Through experiments we show that our agent can successfully generate novel characters in all three ways (unconditionally, conditioned on a given alphabet, and conditioned on a given character), and parse and reconstruct input characters. For both exemplar generation and type-conditioned novel concept generation, we provide LPIPS (Zhang et al., 2018), L2 and SSIM measures between input samples and generated images. Our contributions in this paper are two-fold: (i) we present a drawing agent that can successfully handle all of the generation and parsing tasks in the Omniglot challenge in a self-supervised, stroke-based manner ('concept' and 'type' follow the Omniglot challenge terminology); such a model did not previously exist; (ii) we provide, for the first time, perceptual-similarity-based quantitative benchmarks for the 'exemplar generation' and 'type-conditioned novel concept generation' tasks. 2 RELATED WORK. The main purpose of this work is to present a self-supervised approach to solve the generation and parsing tasks in the Omniglot Challenge (Lake et al.
, 2015) by capturing the stroke-level representation of images. Here we initially examine the supervised and self-supervised approaches to the Omniglot challenge. Then, we review the work on image vectorization. Lastly, we touch upon the research on program synthesis in the context of this study. Omniglot Challenge The Omniglot dataset of world alphabets was released with a set of challenges: parsing a given letter, one-shot classification, generating a new letter given an alphabet, generating a novel sample of a character, and unconditional generation. Omniglot letters have samples that are conditionally independent given the alphabet-character hierarchy; hence, a distinctive approach to achieve all these tasks is hierarchical Bayesian modeling (Lake et al., 2015; Lake et al., 2013). As the Omniglot letters include human strokes as labels, the compositional and causal nature of letters is leveraged to model the generation process. Later, neurosymbolic models were also shown to be successful for unconditional generation (Feinman & Lake, 2020a) and for conceptual compression across multiple tasks presented within the Omniglot Challenge (Feinman & Lake, 2020b). However, without the stroke set that generated a concept, these tasks become more difficult. The idea of sequential image generation has been examined by recurrent VAE models (Rezende et al., 2016; Gregor et al., 2015; Gregor et al., 2016). DRAW (Gregor et al., 2015) and Convolutional DRAW (Gregor et al., 2016) were able to generate quality unconditional samples from the MNIST and Omniglot datasets, respectively. DRAW is proposed as an algorithm to generate images recurrently. The network is able to iteratively generate a given image by attending to certain parts of the input at each time step.
Convolutional DRAW improved on the idea with an RNN/VAE-based algorithm that can capture the global structure and low-level details of an image separately in order to increase the quality of generations. Later, it was shown that hierarchical Bayesian modeling can be improved by the representational power of deep learning and attentional mechanisms in order to achieve three of the five Omniglot challenges (Rezende et al., 2016). Another novel way to leverage Bayesian modeling to tackle the Omniglot Challenge was performing modifications on the VAE architecture to represent hierarchical datasets (Edwards & Storkey, 2016; Hewitt et al., 2018). The significance of these studies is that they were able to obtain latent variables that describe class-level features effectively. Despite the ability to utilize the same model for different problems (one-shot classification, unconditional and conditional generation), raster-based one-step generative models have two disadvantages we want to address. First, they cannot leverage the higher-level abstraction and quality that come with working in a vector space. Secondly, one-step generation does not provide an interpretable compositional and causal process describing how a character is generated. In this work, we combine the advantages of the two groups of aforementioned models with an agent operating on the stroke representation of images that uses only raster images during training. Thus, we aim to solve all three generative tasks and the parsing (reconstruction) task of the Omniglot challenge. We show that the model trained for reconstruction can also be adopted as a tool that captures the compositional structure of a given character. Without any further training, our agent can solve the exemplar generation and type-conditioned novel concept generation problems. Image Generation by Vectorization — With Stroke Supervision Sketch-RNN (Ha & Eck, 2017) is the first LSTM/VAE-based sketch generation algorithm.
It was later improved to generate multi-class samples (Cao et al., 2019) and to increase the quality of generations by representing strokes as Bezier curves (Song, 2020). The idea of obtaining a generalizable latent space by image-stroke mapping has been studied by many (Aksan et al., 2020; Das et al., 2021; Bhunia et al., 2021; Wang et al., 2020). In CoSE (Aksan et al., 2020), the problem is articulated as 'completion of a partially drawn sketch'. They achieved state-of-the-art reconstruction performance by utilizing variable-length strokes and a novel relational model that is able to capture the global structure of the sketch. The progress in stroke representation continued with the incorporation of variable-degree Bezier curves (Das et al., 2021) and the capture of the Gestalt structure of partially occluded sketches (Lin et al., 2020). Self-Supervised Vectorization The self-supervised vector-based image generation problem has been approached by RL-based frameworks (Zhou et al., 2018; Ganin et al., 2018; Mellor et al., 2019; Huang et al., 2019; Schaldenbrand & Oh, 2020; Zou et al., 2020). In SPIRAL (Ganin et al., 2018), the unconditional generation and reconstruction tasks are tackled with adversarially trained RL agents. Succeeding research enhanced the reconstruction process with a differentiable renderer, making it possible for agents to operate in a continuous space (Huang et al., 2019; Schaldenbrand & Oh, 2020). In order to avert the computational expense of RL-based algorithms, end-to-end differentiable models have been developed by altering the rendering process (Nakano, 2019) or formulating the generation process as a parameter search (Zou et al., 2020). More recently, a differentiable renderer and compositor were utilized for generating closed Bezier paths and the final image, respectively (Reddy et al., 2021).
This method led to successful interpolation, reconstruction, and sampling processes. Most related to our work is SPIRAL, where both reconstruction and unconditional generation are studied through self-supervised deep reinforcement learning. However, our approach has some significant differences. First, in SPIRAL each stroke is also represented as a Bezier curve, yet the starting point of each curve is set as the final point of the previous curve. In our model, all control points of the Bezier curve are predicted by the agent at each time step. Hence, the agent has to learn the continuity and the compositionality of the given character in order to produce quality samples. Secondly, SPIRAL provides a generative model that works through a graphics renderer without addressing the conditional generation problem. They show impressive results on both natural images and handwritten characters; while we provide a solution for multiple generative tasks, we have not explored our model in the context of natural images. Another approach that presents a similar scheme for the reconstruction problem is "Learning to Paint" (Huang et al., 2019), where the proposed model is utilized specifically for reconstruction. When reconstruction is considered, the main difference of our model is that, since we try to model a human-like generation process, our agent outputs a single stroke at each time step, with the environment being altered throughout this process, whereas in Learning to Paint, 5 strokes are predicted by the agent at each time step. As a major difference from previous studies, our agent decides whether to stop or keep drawing before generating a stroke. This enables the agent to synthesize an image with as few actions as possible when motivated by our reward formulations. Self-Supervised Program Synthesis Our method essentially outputs a visual program that depends only on the rastered data.
In that sense, studies on Constructive Solid Geometry (CSG) are also related. Different RL frameworks for the reconstruction of a given CSG image, which is essentially a composition of geometric shapes, have been proposed (Ellis et al., 2019; Zhou et al., 2020). The former considered parsing as a search problem solved by using a read-eval-print loop within a Markov Decision Process. The latter adopted a Tree-LSTM model to eliminate invalid programs, with the reward taken to be the Chamfer distance between the target image and the current canvas. 3 METHOD Our model consists of a policy network and a (non-differentiable) renderer. At time step t, the policy network takes the current canvas, C_t (a raster image), as input and outputs two distributions, π_B and π_S. The first distribution, π_B, is over stroke (i.e., Bezier curve) parameters and the second one, π_S, is over the continue/stop decision. From the first distribution, we randomly sample a stroke defined by its 7 parameters (the x-y coordinates of the start, end, and control point of the quadratic Bezier curve, and a brush width). From the second distribution, we randomly sample a decision. If the decision happens to be 'continue', we add the newly sampled stroke to the current canvas C_t, increment time (i.e., t ← t + 1) and restart. If the decision is to 'stop', then C_t is returned as the final output. Our model is able to handle parsing and different generation tasks, and the processing pipeline we just described is common to all of them. What changes among tasks is the reward functions and/or training procedures, which we explain below. Unconditional Generation The task of 'generating new concepts', as dubbed in the Omniglot challenge, is essentially unconditional sampling from a distribution obtained from the whole Omniglot training set. Here, the model is asked to generate completely novel samples (i.e., characters) without any constraints.
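The generate-until-stop control flow just described can be sketched as follows. The policy here is a random stub standing in for the trained network, and the renderer is passed in as a callable; both are illustrative assumptions about the interface, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stub_policy(canvas, t):
    """Stand-in for the policy network: returns means for the 7 stroke
    parameters (pi_B) and a stop probability (pi_S). The real model is a
    trained network; this stub only illustrates the control flow."""
    stroke_mu = rng.uniform(0, canvas.shape[0], size=7)
    p_stop = 0.0 if t < 2 else 0.5   # force at least two strokes, for illustration
    return stroke_mu, p_stop

def generate(render_fn, size=28, max_steps=20):
    """Produce a raster image stroke-by-stroke until a 'stop' decision."""
    canvas = np.zeros((size, size))
    strokes = []
    for t in range(max_steps):
        stroke_mu, p_stop = stub_policy(canvas, t)
        if rng.random() < p_stop:            # sample the continue/stop decision
            break                            # 'stop': return the current canvas
        stroke = rng.normal(stroke_mu, 1.0)  # sample stroke parameters from pi_B
        canvas = render_fn(canvas, stroke)   # non-differentiable renderer
        strokes.append(stroke)
    return canvas, strokes
```

Note that, as in the paper, the stop decision is sampled before a stroke is generated, so the agent can terminate without emitting a further action.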
For this task, at each time step t, we calculate an instantaneous reward, r_t, that has three components: r_t = D(C_t) + λ_1 · align(C_t, I) + λ_2 · N(|C_t|; μ, σ). (1) The first term is a reward based on a discriminator, used to make generated characters as 'real' as possible. D(·) is a discriminator that outputs the "realness" score of its input canvas. We train it in an adversarial manner, using the generated examples as negatives and the elements of the input dataset as positives. The second term is a clustering-based data-fidelity reward. The function align(C_t, I) measures the alignment between the current canvas C_t and another canvas I, a cluster center randomly selected at the beginning of each episode. The cluster centers are obtained by applying k-means to all characters in the input dataset. align basically counts the number of intersecting on-pixels (between the two canvases) minus the number of non-intersecting on-pixels in C_t, and divides this quantity by the number of on-pixels in I. The final term assesses the conformity of the current canvas with the dataset in terms of the number of on-pixels: N(|C_t|; μ, σ) evaluates a normal distribution with parameters (μ, σ) at |C_t|, the number of on-pixels in the current canvas. We obtain (μ, σ) by fitting a normal distribution to the on-pixel counts of characters in the training set. We observed that the second and third terms accelerate learning, as they guide the exploration within the vicinity of real characters. During training, instead of using the instantaneous reward r_t, we use the difference of successive rewards, i.e., r_t − r_{t−1}. In order to encourage exploration and avoid mode collapse, we use an entropy penalty term, α · max(0, KL([π_B, π_S], U) − τ). (2) Here, KL indicates the KL-divergence and U is the uniform distribution.
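The three reward terms of Eq. (1) can be sketched directly from their definitions. The discriminator is left as an arbitrary callable, and the function names are assumptions for illustration.

```python
import numpy as np

def align(canvas, target):
    """Clustering-based fidelity term of Eq. (1): intersecting on-pixels
    minus non-intersecting on-pixels of `canvas`, divided by the number of
    on-pixels in `target` (a k-means cluster center). Binary arrays assumed."""
    inter = np.logical_and(canvas > 0, target > 0).sum()
    extra = np.logical_and(canvas > 0, target <= 0).sum()
    return (inter - extra) / max(target.sum(), 1)

def onpixel_conformity(canvas, mu, sigma):
    """Third term of Eq. (1): the normal pdf N(|C_t|; mu, sigma) evaluated
    at the canvas on-pixel count; (mu, sigma) are fit on the training set."""
    n = canvas.sum()
    return np.exp(-0.5 * ((n - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def reward(canvas, cluster_center, discriminator, mu, sigma,
           lam1=1.0, lam2=1.0):
    """Instantaneous reward r_t of Eq. (1); `discriminator` is any callable
    scoring the 'realness' of the canvas (the paper uses a trained network)."""
    return (discriminator(canvas)
            + lam1 * align(canvas, cluster_center)
            + lam2 * onpixel_conformity(canvas, mu, sigma))
```

In training, the signal actually used is the difference of successive rewards, r_t − r_{t−1}, which credits each stroke with the improvement it contributes.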
This term first measures the divergence between the uniform distribution and π_B, π_S, the distributions output by the policy network. Then, through the hinge function, if the divergence exceeds a threshold τ, the term activates and increases the penalty. The policy network and the discriminator D are updated alternately after 256 images are generated at each iteration. We employ the REINFORCE algorithm (Williams, 1992) to update the weights of the policy network. The discriminator is trained using a hinge loss. In order to stabilize the discriminator and keep the Lipschitz constant of the whole network equal to 1, spectral normalization is applied at each layer (Miyato et al., 2018). Throughout training, we kept the ratio between generated and real samples at 3. Image Reconstruction by Parsing In the "parsing" task, the goal is to reconstruct the given input image by re-drawing it through strokes as accurately as possible. To this end, we formulate a new reward function with two terms: a fidelity reward that indicates how much of a stroke is consistent with the input image (using the align function introduced above), and a penalty that grows with every time increment t as 'continue' decisions are made: r_t = align(S_t, C_t) − λ_1 · t, (3) where S_t is the newly sampled stroke and C_t is the current canvas (input). The second term simply acts as a penalty for every 'continue' action. The first term ensures that the sampled stroke is well aligned with the input, and the second term forces the model to use as few strokes as possible. There is no need for a discriminator. This model explicitly learns the vectorization of the input raster image in a self-supervised manner. Apart from the different reward function, another crucial difference between the training of the unconditional generation model and the parsing model is how the input and output are handled.
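A minimal REINFORCE (Williams, 1992) update can be written down for a toy categorical policy; this stands in for the paper's neural policy over stroke parameters and stop decisions, and is only a sketch of the estimator, not the paper's training code.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_step(theta, action, reward, lr=0.1):
    """One REINFORCE update for a categorical policy pi = softmax(theta).

    The gradient of log pi[action] w.r.t. theta is onehot(action) - pi,
    so the score-function update is theta += lr * reward * that gradient.
    """
    pi = softmax(theta)
    grad_logp = -pi
    grad_logp[action] += 1.0
    return theta + lr * reward * grad_logp
```

The same score-function estimator carries over to the continuous stroke-parameter distribution π_B; only the form of the log-likelihood gradient changes.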
In unconditional generation, the newly sampled stroke is added to the current canvas, whereas in parsing we do the opposite: the sampled stroke is removed (masked out) from the current canvas, and the returned final canvas is the combination of all strokes sampled until the 'stop' decision. λ, α and τ in Equations 1, 2, and 3 are hyperparameters adjusted experimentally (see 'Training Details' in Appendix B). Generating New Exemplars In this task, a model is required to generate a new exemplar (i.e., a variation) of an unseen concept (i.e., character). To the best of our knowledge, we are the first to tackle this task in a self-supervised, stroke-based setting. Most importantly, we do not require any additional training to achieve it. We utilize our parsing network, described in the previous section, to capture the overall structure of a given letter. In order to produce new exemplars, we randomly sample different parsings (sets of strokes) from the distribution generated by the agent. In order to eliminate 'unlikely' samples, we compute the likelihood of each parsing under the resulting policy and apply a threshold. Generating Novel Concepts from Type In this task, the goal is to generate a novel concept (i.e., character) given a previously unseen type (i.e., alphabet) consisting of 10 concepts. The novel concepts should conform to the overall structure, that is, the stroke formulation and composition of the given type (alphabet). We again tackle this challenge using our parsing network without any further training. To do so, we first parse all input images into their strokes. For each input image, we sample five stroke sets from the stroke-parameter distributions output by the policy network. During the sampling process, we again use the likelihood-based quality function described in the previous section. We add all the strokes sampled during this process to form a stroke library.
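The likelihood-thresholded sampling used for exemplar generation can be sketched with discrete per-step action distributions standing in for the continuous stroke-parameter distributions; the threshold value and the discrete action space are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_parsings(step_dists, n_samples=5, log_lik_thresh=-10.0):
    """Draw candidate parsings from the policy's per-step distributions and
    keep only those whose total log-likelihood clears a threshold.

    `step_dists`: list of probability vectors, one per time step (a discrete
    stand-in for the stroke-parameter distributions output by the policy).
    Returns a list of (action_sequence, log_likelihood) pairs.
    """
    kept = []
    for _ in range(n_samples):
        actions, log_lik = [], 0.0
        for p in step_dists:
            a = rng.choice(len(p), p=p)       # sample one action for this step
            actions.append(int(a))
            log_lik += np.log(p[a])
        if log_lik >= log_lik_thresh:         # discard 'unlikely' parsings
            kept.append((actions, log_lik))
    return kept
```

Each retained action sequence corresponds to a plausible alternative way of drawing the input character, i.e., a new exemplar.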
Here the strokes are stored together with the time steps at which they were generated. Since the number of strokes sampled for a given character is not constant, we approximate a distribution over stopping actions. This process provides a stroke set representing the structure of the letters and the way they are composed; that is, we can exploit the compositionality and causality of an alphabet. Throughout the character generation process, a stroke is sampled at each time step from the corresponding group of the library. The sampled strokes are summed together to obtain the final canvas. | The paper proposes a self-supervised reinforcement learning approach to train a drawing agent for character generation and parsing. The drawing agent operates on the stroke-level (i.e. vector) representation of images. Different from one-step generative models, the proposed method can capture the compositional structure of a given character. It is interesting to tackle these tasks with a self-supervised approach and reinforcement learning operating on stroke space. | SP:8c62155ad4440b73185fbcff8b19d882c16a8fcc
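The library-based composition described above (strokes grouped by time step, with a stopping distribution approximated from observed stroke counts) might look like the following sketch. Representing strokes as pre-rendered binary canvases and taking a pixelwise union in place of the summation are simplifications made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose_from_library(library, stroke_counts):
    """Type-conditioned novel concept sketch.

    `library`: dict mapping time step -> list of strokes harvested at that
    step while parsing the alphabet (here, pre-rendered binary canvases).
    `stroke_counts`: observed per-character stroke counts, used as an
    empirical distribution over stopping times.
    """
    n_steps = int(rng.choice(stroke_counts))     # sample when to stop
    shape = next(iter(library.values()))[0].shape
    canvas = np.zeros(shape)
    last = max(library)
    for t in range(n_steps):
        group = library[min(t, last)]            # per-time-step stroke group
        stroke = group[rng.integers(len(group))]
        canvas = np.maximum(canvas, stroke)      # union of sampled strokes
    return canvas
```

Sampling one stroke from each time-step group preserves the temporal structure of how characters in the alphabet are composed, which is what makes the generated concepts coherent with the given type.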
Character Generation through Self-Supervised Vectorization | 1 INTRODUCTION . While , innately , humans sketch or write through strokes , this type of visual depiction is a more difficult task for machines . Image generation problems are typically addressed by raster-based algorithms . The introduction of generative adversarial networks ( GAN ) ( Goodfellow et al. , 2014 ) , variational autoencoders ( VAE ) ( Kingma & Welling , 2013 ) and autoregressive models ( Van Oord et al. , 2016 ) has led to a variety of applications . Style transfer ( Gatys et al. , 2015 ; Isola et al. , 2017 ) , photo realistic image generation ( Brock et al. , 2018 ; Karras et al. , 2019 ) , and super resolution ( Ledig et al. , 2017 ; Bin et al. , 2017 ) are some of the significant instances of the advancing field . Additionally , Hierarchical Bayesian models formulated by deep neural networks are able to use the same generative model for multiple tasks such as classification , conditional and unconditional generation ( Hewitt et al. , 2018 ; Edwards & Storkey , 2016 ) . These raster-based algorithms can produce high quality images , yet they can not benefit from the leverage that higher level abstractions bring about . Vector-level image representation intrinsically prevents models from generating blurry samples and allows for compositional image generation which eventually may contribute to our understanding of how humans create or replicate images ( Lake et al. , 2017 ) . This idea , with the introduction of sketch-based datasets such as Omniglot ( Lake et al. , 2012 ) , Sketchy ( Sangkloy et al. , 2016 ) , and QuickDraw ( Ha & Eck , 2017 ) has triggered a significant body of work in recent years . Stroke based image generation and parsing has been addressed with both vector supervised models and self-supervised generation . Of these , one prominent algorithm is Bayesian Program Learning ( Lake et al. 
, 2015 ) , where a single model can be utilized for 5 tasks in the Omniglot challenge : ( i ) parsing , ( ii ) unconditional generation or , ( iii ) generating exemplars of a given concept , ( iv ) generating novel concepts of a type , and ( v ) one-shot classification . This approach is also shown to be scalable when supported by the representative capabilities of neural networks ( Feinman & Lake , 2020b ; a ) , however , it requires stroke-level or vector supervision , which is costly to obtain or simply nonexistent . VAE/RNN ( Ha & Eck , 2017 ; Cao et al. , 2019 ; Chen et al. , 2017 ; Aksan et al. , 2020 ) and Transformer based models ( Ribeiro et al. , 2020 ; Lin et al. , 2020 ) are other common methods applied to vector based image generation . Although impressive results have been presented , stroke-level supervision is required to train these models . Figure 1 : Our drawing agent can accomplish four different tasks . From left to right : it can generate novel characters , parse a given character into its strokes , generate new exemplars for a given character , and generate novel concepts ( i.e . characters ) given a type ( i.e . alphabet ) . Ours is the first stroke-based method to tackle all of the generation and parsing tasks in the Omniglot Challenge , without requiring any stroke-level supervision . Recently , self-supervised ( i.e . the absence of strokelevel supervision ) strokebased image generation has been addressed with Reinforcement Learning ( RL ) ( Ganin et al. , 2018 ; Mellor et al. , 2019 ; Huang et al. , 2019 ; Schaldenbrand & Oh , 2020 ) . We call this approach self-supervised vectorization , since the vectorization of images is learned using only rasterimages as supervision . These methods mostly focus on image reconstruction and their exploration in generation is limited . For example , none of them address the conditional generation problem , or , they need the number of actions ( i.e . strokes ) as input . 
In this paper , we propose a self-supervised reinforcement learning approach where we train a drawing agent for character generation and parsing . Our drawing agent operates on the stroke-level ( i.e . vector ) representation of images . At each time step , our agent takes the current canvas as input and dynamically decides whether to continue drawing or stop . When a ‘ continue ’ decision is made , the agent outputs a program specifying the stroke to be drawn . A non-differentiable renderer takes this program and draws it on the current canvas . Consequently , a raster image is produced stroke-by-stroke . We first train this agent for two tasks by formulating appropriate loss functions : ( i ) unconditional character generation and ( ii ) parsing . Unconditional character generation is the task of generating a novel concept1 ( i.e . character ) given a dataset of concepts . For this task , our loss function includes the following components : an adversarial loss produced by a discriminator to make generated characters as “ real ” as possible , and two data fidelity losses assessing the conformity of the current canvas with the statistical properties of the overall dataset . We also use an additional entropy loss to prevent mode collapse . In the parsing task , the goal for our agent is to reconstruct a given character ( in raster-image ) by drawing it through strokes using as few of them as possible . We utilize the same action space and environment as in the unconditional generation model , only difference being the input fed to the policy is a complete canvas to be reconstructed . Our reward function in this task has two components : a fidelity reward that indicates how much of a stroke is consistent with the target image and a penalty that increases with every ‘ continue ’ action being taken . This model explicitly learns the vectorization of the input raster-image in a self-supervised manner . 
Next , we show that our parsing model can be exploited for exemplar generation ( i.e . a novel drawing of a given character ) and novel concept generation from type ( i.e . novel character generation given an alphabet of 10 characters ) without any further training . Given a character , the policy network of our parsing model outputs a distribution over the action space where likelihood of actions at each time step eventually allows us to generate variations of the input image . For novel concept generation conditioned on a type ( i.e . alphabet ) , we compose a stroke library by parsing the provided inputs . As we sample strokes from this library , we observe novel samples forming , in coherence with the overall structure of the alphabet . To the best of our knowledge , we are the first to tackle these tasks with a self-supervised approach that operates on stroke space . Through experiments we show that our agent can successfully generate novel characters in all three ways ( unconditionally , conditioned on a given alphabet , conditioned on a given character ) , and parse and reconstruct input characters . For both exemplar generation and type conditioned novel concept generation , we provide LPIPS ( Zhang et al. , 2018 ) , L2 and SSIM measures between input samples and generated images . Our contributions in this paper are two-fold : ( i ) we present a drawing agent that can successfully handle all of the generation and parsing tasks in the Omniglot challenge in a self-supervised , stroke- 1Omniglot challenge terminology . based manner – such a model did not exist ( ii ) we provide for the first time perceptual similarity based quantitative benchmarks for the ‘ exemplar generation ’ and ‘ type conditioned novel concept generation ’ tasks . 2 RELATED WORK . The main purpose of this work is to present a self-supervised approach in order to solve the generation and parsing tasks in the Omniglot Challenge ( Lake et al. 
, 2015 ) , by capturing the stroke-level representation of images . Here we initially examine the supervised and self-supervised approaches to Omniglot challenge . Then , we review the work on image vectorization . And lastly , we touch upon the research on program synthesis in the context of this study . Omniglot Challenge Omniglot dataset of world alphabets was released with a set of challenges : parsing a given letter , one shot classification , generating a new letter given an alphabet , generating a novel sample of a character , and unconditional generation . Omniglot letters have samples that are conditionally independent based on the alphabet-character hierarchy , hence , a distinctive approach to achieve all these tasks is Hierarchical Bayesian modeling ( Lake et al. , 2015 ) , ( Lake et al. , 2013 ) . As the Omniglot letters included human strokes as labels , the compositional and causal nature of letters are leveraged to model the generation process . Later , neurosymbolic models are also shown to be successful for unconditional generation ( Feinman & Lake , 2020a ) and conceptual compression for multiple tasks presented within the Omniglot Challenge ( Feinman & Lake , 2020b ) . However , without the stroke set that generated a concept , these tasks become more difficult . The idea of sequential image generation is examined by recurrent VAE models ( Rezende et al. , 2016 ) , ( Gregor et al. , 2015 ) , ( Gregor et al. , 2016 ) . DRAW ( Gregor et al. , 2015 ) and Convolutional DRAW ( Gregor et al. , 2016 ) were able to generate quality unconditional samples from MNIST and Omniglot datasets respectively . DRAW is proposed as an algorithm to generate images recurrently . The network is able to iteratively generate a given image by attending to certain parts of the input at each time step . 
Convolutional DRAW improved the idea with an RNN/VAE based algorithm that can capture the global structure and low-level details of an image separately in order to increase the quality of generations . Later , it is shown that Hierarchical Bayesian Modeling can be improved by the representational power of deep learning and attentional mechanisms in order to achieve three of the five Omniglot challenges ( Rezende et al. , 2016 ) . Another novel idea to leverage Bayesian modeling to tackle the Omniglot Challenge was performing modifications on the VAE architecture to represent hierarchical datasets ( Edwards & Storkey , 2016 ) ( Hewitt et al. , 2018 ) . The significance of these studies is that they were able to obtain latent variables that describe class-level features effectively . Despite the ability to utilize the same model for different problems ( one-shot classification , unconditional and conditional generation ) , raster-based one-step generative models have two disadvantages we want to address . First , they cannot leverage the higher-level abstraction and quality that come with working on a vector space . Secondly , one-step generation does not provide an interpretable compositional and causal process describing how a character is generated . In this work , we combine the advantages of the two aforementioned groups of models with an agent operating on a stroke representation of images that uses only raster images during training . Thus , we aim to solve all three generative and the parsing ( reconstruction ) tasks of the Omniglot challenge . We show that the model trained for reconstruction can also be adopted as a tool that captures the compositional structure of a given character . Without any further training , our agent can solve exemplar generation and type conditioned novel concept generation problems . Image Generation by Vectorization — With Stroke Supervision Sketch-RNN ( Ha & Eck , 2017 ) is the first LSTM/VAE based sketch generation algorithm .
It is later improved to generate multiclass samples ( Cao et al. , 2019 ) and increase the quality of generations by representing strokes as Bezier curves ( Song , 2020 ) . The idea of obtaining a generalizable latent space by imagestroke mapping is studied by many ( Aksan et al. , 2020 ; Das et al. , 2021 ; Bhunia et al. , 2021 ; Wang et al. , 2020 ) . In CoSE ( Aksan et al. , 2020 ) , the problem is articulated as ‘ completion of partially drawn sketch ’ . They achieved state of the art reconstruction performance by utilizing variable-length strokes and a novel relational model that is able to capture the global structure of the sketch . The progress in stroke representation is continued with incorporation of variable-degree Bezier curves ( Das et al. , 2021 ) , and capturing Gestalt structure of partially occluded sketches ( Lin et al. , 2020 ) . Self Supervised Vectorization Self-supervised vector-based image generation problem has been approached by RL based frameworks ( Zhou et al. , 2018 ) , ( Ganin et al. , 2018 ) , ( Mellor et al. , 2019 ) , ( Huang et al. , 2019 ) , ( Schaldenbrand & Oh , 2020 ) , and ( Zou et al. , 2020 ) . In SPIRAL ( Ganin et al. , 2018 ) , unconditional generation and reconstruction tasks are tackled with adversarially trained RL agents . Succeeding research enhanced the reconstruction process by a differentiable renderer , making it possible for agents to operate on a continuous space ( Huang et al. , 2019 ; Schaldenbrand & Oh , 2020 ) . In order to avert the computational expense of RL based algorithms , end-to-end differentiable models are developed through altering the rendering process ( Nakano , 2019 ) or formulating the generation process as a parameter search ( Zou et al. , 2020 ) . More recently , a differentiable renderer and compositor is utilized for generating closed Bezier paths and the final image respectively ( Reddy et al. , 2021 ) . 
This method led to successful interpolation , reconstruction , and sampling processes . Most related to our work is SPIRAL where both reconstruction and unconditional generation is studied through self-supervised deep reinforcement learning . However , our approach has some significant differences . First , in SPIRAL each stroke is also represented as a Bezier curve , yet , the starting point of each curve is set as the final point of the previous curve . In our model , all control points of the Bezier curve are predicted by the agent at each time step . Hence , the agent has to learn the continuity and the compositionality of the given character in order to produce quality samples . Secondly , SPIRAL provides a generative model that works through a graphics renderer without addressing the conditional generation problem . They show impressive results on both natural images and handwritten characters . While we provide a solution for multiple generative tasks , we have not explored our model in the context of natural images . Another approach that presents a similar scheme to the reconstruction problem is “ Learning to Paint ” ( Huang et al. , 2019 ) . In Learning to Paint , the proposed model is utilized specifically for reconstruction . When reconstruction is considered , the main difference of our model is that since we try to model a human-like generation process , our agent outputs a single stroke at each time step with the environment being altered throughout this process while in Learning to Paint , 5 strokes are predicted by the agent at each time step . As a major difference from previous studies , our agent decides whether to stop or keep drawing before generating a stroke . This enables the agent to synthesize an image with as few actions as possible when motivated with our reward formulations . Self Supervised Program Synthesis Our method essentially outputs a visual program that depends only on the rastered data . 
In that sense , studies on Constructive Solid Geometry ( CSG ) are also related . Different RL frameworks for reconstruction of a given CSG image , that is essentially a composition of geometric shapes , are proposed ( Ellis et al. , 2019 ; Zhou et al. , 2020 ) . The former considered parsing as a search problem that is solved by using a read-eval-print-loop within a Markov Decision Process . The latter adopted a Tree-LSTM model to eliminate invalid programs and the reward is considered to be the Chamfer distance between the target image and current canvas . 3 METHOD Our model consists of a policy network and a ( nondifferentiable ) renderer . At time step t , the policy network takes the current canvas , Ct – a raster-image , as input and outputs two distributions , πB and πS . The first distribution , πB , is for stroke ( i.e . Bezier curve ) parameters and the second one , πS , is for the continue/stop decision . From the first distribution , we randomly sample a stroke defined by its 7 parameters ( x-y coordinates of start , end , control points of the quadratic Bezier curve , and a brush-width ) . From the second distribution , we randomly sample a decision . If the decision happens to be ‘ continue ’ , we add the newly sampled stroke to the current canvas , Ct , increment time ( i.e . t ← t + 1 ) and restart . If the decision was to ‘ stop ’ , then Ct is returned as the final output . Our model is able to handle parsing and different generation tasks , and the processing pipeline we just described is common in all these tasks . What changes among tasks is the reward functions and/or training procedures , which we explain below . Unconditional Generation The task of ‘ generating new concepts ’ as dubbed in Omniglot challenge , is essentially unconditional sampling from a distribution obtained from the whole Omniglot training set . Here , the model is asked to generate completely novel samples ( i.e . characters ) without any constraints . 
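The generation loop just described ( sample stroke parameters and a continue/stop decision , rasterize , repeat ) can be sketched as follows . The policy network is stubbed out with a random sampler and the renderer is a crude numpy rasterizer , so all names and choices here are illustrative assumptions rather than the paper's actual implementation :

```python
import numpy as np

rng = np.random.default_rng(0)

def render_stroke(canvas, params, steps=64):
    """Crudely rasterize a quadratic Bezier stroke onto a binary canvas.

    params: (x0, y0, cx, cy, x1, y1, width) in [0, 1] canvas coordinates,
    mirroring the 7 stroke parameters described in the text.
    """
    h, w = canvas.shape
    x0, y0, cx, cy, x1, y1, width = params
    radius = max(int(width * 3), 0)            # brush width -> pixel radius
    for t in np.linspace(0.0, 1.0, steps):
        # Quadratic Bezier point: (1-t)^2 P0 + 2t(1-t) Pc + t^2 P1
        x = (1 - t) ** 2 * x0 + 2 * t * (1 - t) * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * t * (1 - t) * cy + t ** 2 * y1
        r, c = int(y * (h - 1)), int(x * (w - 1))
        canvas[max(r - radius, 0):r + radius + 1,
               max(c - radius, 0):c + radius + 1] = 1.0
    return canvas

def dummy_policy(canvas, t):
    """Stand-in for the policy network: random stroke parameters plus a
    hard-coded decision to stop after two strokes."""
    stroke = rng.uniform(0.1, 0.9, size=7)
    stop = t >= 2
    return stroke, stop

canvas = np.zeros((28, 28))
t = 0
while True:
    stroke, stop = dummy_policy(canvas, t)
    if stop:                                   # 'stop': return canvas as-is
        break
    canvas = render_stroke(canvas, stroke)     # 'continue': add stroke
    t += 1
```

In the actual model the two decisions are drawn from the learned distributions πB and πS rather than from this stub , but the control flow is the same .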
For this task , at each time step $t$ , we calculate an instantaneous reward , $r_t$ , that has three components : $r_t = D(C_t) + \lambda_1 \, \mathrm{align}(C_t , I) + \lambda_2 \, \mathcal{N}(|C_t| ; \mu , \sigma)$ . ( 1 ) The first term is a reward based on a discriminator to make generated characters as ‘ real ’ as possible . $D(\cdot)$ is a discriminator that outputs the “ realness ” score of its input canvas . We train it in an adversarial manner by using the generated examples as negatives and the elements of the input dataset as positives . The second term is a clustering-based data fidelity reward . The function $\mathrm{align}(C_t , I)$ measures the alignment between the current canvas $C_t$ and another canvas $I$ , which is a randomly selected cluster center at the beginning of each episode . The cluster centers are obtained by applying k-means on all characters in the input dataset . $\mathrm{align}$ basically counts the number of intersecting on-pixels ( between the two canvases ) minus the number of non-intersecting on-pixels in $C_t$ , and divides this quantity by the number of on-pixels in $I$ . The final term assesses the conformity of the current canvas with the dataset in terms of the number of on-pixels . $\mathcal{N}(|C_t| ; \mu , \sigma)$ evaluates a normal distribution with $(\mu , \sigma)$ at $|C_t|$ , which is the number of on-pixels in the current canvas . We obtain $(\mu , \sigma)$ by fitting a normal distribution to the on-pixel counts of characters in the training set . We observed that the second and third terms accelerate learning as they guide the exploration within the vicinity of real characters . During training , instead of using the instantaneous reward $r_t$ , we use the difference of successive rewards , i.e . $r_t - r_{t-1}$ . In order to encourage exploration and avoid mode collapse , we use an entropy penalty term as $\alpha \max ( 0 , \mathrm{KL}([\pi_B , \pi_S] , U) - \tau )$ . ( 2 ) Here , KL indicates KL-divergence and $U$ is the uniform distribution .
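A minimal numpy version of the align function and the instantaneous reward of Equation 1 . The discriminator and the k-means cluster centre are stand-ins , and the Gaussian term is evaluated directly from its density ; names are illustrative :

```python
import numpy as np

def align(C, I):
    """align(Ct, I): intersecting on-pixels minus on-pixels of Ct that
    fall outside I, normalized by the on-pixel count of I."""
    C_on, I_on = C > 0, I > 0
    inter = np.logical_and(C_on, I_on).sum()
    miss = np.logical_and(C_on, ~I_on).sum()
    return (inter - miss) / max(I_on.sum(), 1)

def gaussian_pdf(x, mu, sigma):
    """Normal density N(x; mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def unconditional_reward(C, I, D, mu, sigma, lam1=1.0, lam2=1.0):
    """r_t = D(C_t) + lam1 * align(C_t, I) + lam2 * N(|C_t|; mu, sigma).
    D is the discriminator's realness score, I a chosen cluster centre."""
    return D(C) + lam1 * align(C, I) + lam2 * gaussian_pdf((C > 0).sum(), mu, sigma)
```

A canvas identical to the cluster centre receives the maximum alignment of 1.0 , while each stray on-pixel outside the centre is penalized .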
This term first measures the divergence between the uniform distribution and $\pi_B , \pi_S$ , the distributions output by the policy network . Then , through the hinge function , if the divergence exceeds a threshold ( $\tau$ ) , this term activates and increases the penalty . The policy network and the discriminator $D$ are updated alternatingly after 256 images are generated at each iteration . We employ the REINFORCE algorithm ( Williams , 1992 ) to update the weights of the policy network . The discriminator is trained using a hinge loss . In order to stabilize the discriminator and keep the Lipschitz constant for the whole network equal to 1 , Spectral Normalization is applied at each layer ( Miyato et al. , 2018 ) . Throughout the training , we kept the balance ratio between generated and real samples at 3 . Image Reconstruction by Parsing In the “ parsing ” task , the goal is to reconstruct the given input image by re-drawing it through strokes as accurately as possible . To this end , we formulate a new reward function with two terms : a fidelity reward that indicates how much of a stroke is consistent with the input image ( using the “ align ” function introduced above ) and a penalty that grows with the time step $t$ , i.e . with every ‘ continue ’ decision : $r_t = \mathrm{align}(S_t , C_t) - \lambda_1 t$ , ( 3 ) where $S_t$ is the newly sampled stroke and $C_t$ is the current canvas ( input ) . The second term simply acts as a penalty for every ‘ continue ’ action . The first term ensures that the sampled stroke is well-aligned with the input and the second term forces the model to use as few strokes as possible . There is no need for a discriminator . This model explicitly learns the vectorization of the input raster-image in a self-supervised manner . Apart from the different reward function , another crucial difference between the training of the unconditional generation model and the parsing model is how the input and output are handled .
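A sketch of the hinged entropy penalty of Equation 2 . The bracket notation [ πB , πS ] is read here as concatenating the two distributions before comparing against the uniform distribution — an assumption , since the text does not spell this out :

```python
import numpy as np

def kl_to_uniform(p):
    """KL(p || U) for a discrete distribution p over K outcomes;
    this equals log K minus the entropy of p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] * len(p))))

def entropy_penalty(pi_b, pi_s, alpha=0.1, tau=0.5):
    """alpha * max(0, KL([pi_B, pi_S], U) - tau): the hinge means the
    penalty only activates once the policy drifts further than tau
    from the uniform distribution."""
    kl = kl_to_uniform(np.concatenate([pi_b, pi_s]))
    return alpha * max(0.0, kl - tau)
```

A uniform policy therefore pays no penalty , while a sharply peaked ( mode-collapsed ) policy does .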
In unconditional generation , the newly-sampled stroke is added to the current canvas , whereas in parsing , we do the opposite : the sampled stroke is removed ( masked out ) from the current canvas , and the returned final canvas is the combination of all sampled strokes until the ‘ stop ’ decision . λ , α and τ in Equations 1 , 2 , and 3 are hyperparameters adjusted experimentally ( see ‘ Training Details ’ in Appendix B ) . Generating New Exemplars In this task , a model is required to generate a new exemplar ( i.e . a variation ) of an unseen concept ( i.e . character ) . To the best of our knowledge , we are the first to tackle this task in a self-supervised stroke-based setting . Most importantly , we do not require any training to achieve this task . We utilize our parsing network described in the previous section to capture the overall structure of a given letter . In order to produce new exemplars , we randomly sample different parsings ( a set of strokes ) from the distribution generated by the agent . In order to eliminate ‘ unlikely ’ samples , we compute the likelihood of the parsing given the resulting policy , and apply a threshold . Generating Novel Concepts from Type In this task , the goal is to generate a novel concept ( i.e . character ) given a previously unseen type ( i.e . alphabet ) consisting of 10 concepts . The novel concepts should conform to the overall structure , that is , the stroke formulation and composition of the given type ( alphabet ) . We , again , tackle this challenge using our parsing network without any further training . To do so , we first parse all input images into their strokes . For each input image , we sample five stroke sets from the stroke-parameters distribution output by the policy network . During the sampling process , we again use the likelihood-based quality function described in the previous section . We add all the strokes sampled during this process to form a stroke library .
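The likelihood-threshold filtering used for exemplar generation can be sketched with per-step categorical action distributions standing in for the frozen parsing policy . The distributions , the threshold , and the discrete action space below are toy assumptions , purely for illustration :

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_parse(step_dists):
    """Sample one parse (a sequence of discrete actions) from per-step
    action distributions, returning it with its log-likelihood."""
    actions, logp = [], 0.0
    for p in step_dists:
        a = int(rng.choice(len(p), p=p))
        actions.append(a)
        logp += np.log(p[a])
    return actions, logp

def sample_exemplars(step_dists, n=50, log_thresh=-5.0):
    """Keep only parses whose likelihood under the policy clears a
    threshold, mirroring the filtering of 'unlikely' samples."""
    keep = []
    for _ in range(n):
        actions, logp = sample_parse(step_dists)
        if logp >= log_thresh:
            keep.append(actions)
    return keep

# Toy distributions standing in for the policy's per-step outputs.
dists = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]
exemplars = sample_exemplars(dists, n=20, log_thresh=np.log(0.05))
```

Each surviving parse is then rendered back into a raster image to yield a variation of the input character .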
Here the strokes are stored with the time steps they are generated . Noting that the number of strokes sampled for a given character is not constant , we approximate a distribution for stopping actions . This process provides a stroke set representing the structure of letters and the way they are composed , that is , we can exploit the compositionality and causality of an alphabet . Throughout the character generation process , a stroke is sampled at each time step belonging to that particular group of the library . The sampled strokes are summed together to obtain the final canvas . | This paper presents an approach using reinforcement learning to parse and generate characters. Experiments are performed both with the omniglot challenge and MNIST datasets. In the context of omniglot, in addition to unconditional generation and parsing, results are also shown on the exemplar generation and type conditioned generation tasks. The proposed approach only uses pixel/raster level information during training and does not use stroke or vector data. | SP:8c62155ad4440b73185fbcff8b19d882c16a8fcc |
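A toy version of the stroke-library procedure described above : strokes are keyed by the time step at which they were generated , an episode length is drawn from the empirical stopping distribution , and one stroke per step is sampled from that step's pool . The stroke representation here is an arbitrary tuple , purely for illustration :

```python
import numpy as np

rng = np.random.default_rng(2)

def build_stroke_library(parses):
    """parses: one stroke sequence per input character. Strokes are
    stored under the time step at which they were generated; episode
    lengths give an empirical stopping distribution."""
    library, lengths = {}, []
    for strokes in parses:
        lengths.append(len(strokes))
        for t, s in enumerate(strokes):
            library.setdefault(t, []).append(s)
    return library, lengths

def sample_novel_character(library, lengths):
    """Draw an episode length from the empirical stopping distribution,
    then one stroke per time step from that step's pool."""
    n = int(rng.choice(lengths))
    return [library[t][rng.integers(len(library[t]))] for t in range(n)]
```

Because every sampled stroke comes from some character of the given alphabet at the same drawing stage , the composed result tends to follow the alphabet's overall structure .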
Generating Antimicrobial Peptides from Latent Secondary Structure Space | 1 INTRODUCTION . Developing neural networks for drug discovery has attracted increasing attention recently . It can facilitate the discovery of potential therapies and reduce the time and cost of drug development ( Stokes et al. , 2020 ) . Plenty of works have been done to employ deep generative models in searching for drug-like molecules with desired properties and achieved great success ( Jin et al. , 2018 ; Shi et al. , 2019 ; Schwalbe-Koda & Gómez-Bombarelli , 2020 ; Xie et al. , 2020 ) . However , these works mainly focus on small molecules , and more complicated biochemicals , such as proteins , are still rarely explored . Antimicrobial peptides ( AMPs ) , defined as short proteins of less than 50 amino acids with potent antimicrobial activity , are an emerging category of therapeutic agents . AMPs exist widely in the natural immune system for all species and kill bacteria in a physical way ( Aronica et al. , 2021 ; Cardoso et al. , 2020 ) . They attach to the bacterial membrane and insert into the membrane to form pores , which leads to the death of bacteria by allowing cytoplasmic leakage . This mechanism makes them more promising for handling extensively drug-resistant bacteria than traditional antibiotics ( Mahlapuu et al. , 2016 ) . However , the theoretical chemical space of peptides is enormous and the sequence number grows exponentially as the length increases . Thus , it is challenging to search for valid peptides with antimicrobial properties from such a huge sequence space . Several factors can affect the antimicrobial activity of peptides ( Boman , 2003 ) . Amino acids with positive charges are more likely to bind with bacterial membrane as most bacterial surfaces are anionic , while those with high hydrophobicities tend to move from the solution environment to the bacterial membrane . 
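The two attributes just mentioned are straightforward to compute from a sequence . The sketch below uses standard conventions — approximate side-chain charges at neutral pH and the Kyte-Doolittle hydropathy scale — which are illustrative assumptions , not definitions taken from the paper :

```python
# Approximate net charge at physiological pH (R, K: +1; D, E: -1; H: ~+0.1)
# and mean Kyte-Doolittle hydropathy. Both scales are common conventions,
# assumed here for illustration.
CHARGE = {"R": 1.0, "K": 1.0, "H": 0.1, "D": -1.0, "E": -1.0}
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def net_charge(seq):
    """Sum of side-chain charges over the sequence."""
    return sum(CHARGE.get(a, 0.0) for a in seq)

def mean_hydropathy(seq):
    """Average Kyte-Doolittle hydropathy (GRAVY-style score)."""
    return sum(KD[a] for a in seq) / len(seq)

# Magainin 2, a well-known cationic AMP, as a sanity check.
seq = "GIGKFLHSAKKFGKAFVGEIMNS"
```

For magainin 2 the net charge comes out positive , as expected for a membrane-binding cationic peptide .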
However , the mechanisms of antibacterial peptides need not only a reasonable sequence but also an appropriate structure . For example , by forming the helix structure , a peptide can gather the hydrophobic amino acids on one side and hydrophilic ones on the other . This ability , named amphipathy , helps it insert into the membrane and maintain a stable hole with other peptide molecules in the membrane , as shown in Figure 1 . The hole will drain the cytoplasm and finally kill the bacteria . This mechanism of killing the bacteria is called ‘ barrel stave ’ . Amphipathy plays an important role in deciding the antibacterial activity of peptides and is closely related to the secondary structure of the peptide ( Aronica et al. , 2021 ) . According to the antimicrobial mechanism , a new AMP should meet the following criteria . C1 : It possesses several ideal physical attributes ( e.g . positive charge and high hydrophobicity ) . C2 : It has appropriate secondary structures ( e.g . alpha-helix ) . C3 : It differs from existing AMPs to some extent . The existing works mainly focus on sequential features of amino acids and ignore the secondary structure . The traditional methods replace subsequences with patterns from the pattern database in a given template ( Porto et al. , 2018 ) . Inspired by success in deep neural networks , many researchers apply neural generative models to AMP discovery . They often use the physical attributes as the extra input to control the generation phase ( Das et al. , 2018 ; Van Oort et al. , 2021 ) , or train classifiers on each attribute to filter the peptides after the generation ( Capecchi et al. , 2021 ; Das et al. , 2021 ) . The former ones usually generate peptides that have low correlation with the input attributes and the filter phase of the latter ones make the sampling inefficient . As described above , the antimicrobial activity is determined by both the amino acid composition and secondary structure of the peptide . 
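Amphipathy of a helical peptide is commonly quantified by the Eisenberg hydrophobic moment , the magnitude of the per-residue hydrophobicity vectors summed around an ideal helical wheel with δ = 100° per residue . The paper does not give this formula , so the following is a standard sketch with a two-letter toy hydrophobicity scale :

```python
import math

def hydrophobic_moment(seq, scale, delta_deg=100.0):
    """Eisenberg hydrophobic moment: |sum_n h_n * exp(i*n*delta)| / N,
    with delta = 100 degrees per residue for an ideal alpha-helix."""
    delta = math.radians(delta_deg)
    sx = sum(scale[a] * math.cos(n * delta) for n, a in enumerate(seq))
    sy = sum(scale[a] * math.sin(n * delta) for n, a in enumerate(seq))
    return math.hypot(sx, sy) / len(seq)

# Toy scale: hydrophobic leucine vs. charged lysine (Kyte-Doolittle values).
scale = {"L": 3.8, "K": -3.9}
amphipathic = "KLLKLLKKLLKL"   # lysines clustered on one helical face
uniform = "LLLLLLLLLLLL"       # no segregation, low moment
```

The idealized amphipathic pattern yields a much larger moment than the uniform sequence , matching the intuition that hydrophobic and hydrophilic residues segregate onto opposite faces of the helix .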
Thus , we propose LSSAMP to generate antimicrobial peptides from the latent semantic and structure space . Taking the peptide sequence as the time series , we assign a latent variable on each position . Since it is computationally intractable to sum continuous latent variables over all positions , we employ the vector quantized-variational autoencoder ( VQ-VAE ) ( van den Oord et al. , 2017 ) to learn the discrete distribution for each position and further design a multi-scale codebooks strategy to capture different local patterns to fit various length ranges for amino acid and structure sequences . During the generation process , LSSAMP will sample a backbone from the secondary structure latent space and generate the amino acid sequence simultaneously . We evaluate LSSAMP and several baselines through physical properties that are closely related to the antibacterial mechanism . Besides , we use some public AMP prediction models to predict generated sequences being AMPs as previous works did ( Das et al. , 2020 ; Van Oort et al. , 2021 ) . To conclude , our contributions are as follows : • We propose LSSAMP , a generative model which samples peptides from the latent secondary structure space to control the peptide properties . • We develop a multi-scale VQ-VAE to learn positional latent spaces from different aspects and model semantic sequences and structural sequences in the same space . • Experimental results show that LSSAMP can generate peptides with multiple ideal features such as positive charge , better hydrophobicity , and better amphipathicity . The results of public AMP classifiers also verify that our model can generate peptides with high AMP probability . 2 RELATED WORK . 
Antimicrobial Peptides Generation Traditional methods for AMP design can be divided into three approaches ( Torres & de la Fuente-Nunez , 2019 ) : ( i ) The pattern recognition algorithms build a sequential pattern database from existing AMPs , and then pick a template peptide and replace local sequence with patterns ( Loose et al. , 2006 ; Porto et al. , 2018 ) . ( ii ) The genetic algorithms analyze the AMP database and design some antibiotic activity functions ( Maccari et al. , 2013 ) . ( iii ) The molecular modeling and molecular dynamics methods build 3D models of peptides and analyze activity ( Matyus et al. , 2007 ; Bolintineanu & Kaznessis , 2011 ) . Deep generative models take a rapid growth in recent years . Dean & Walper ( 2020 ) encodes the peptide into the latent space and interpolates across a predictive vector between a known AMP and its scrambled version to generate novel peptides . The PepCVAE ( Das et al. , 2018 ) and CLaSS ( Das et al. , 2021 ) employ the variational auto-encoder model to generate sequences . The AMPGAN ( Van Oort et al. , 2021 ) uses the generative adversarial network to generate new peptide sequences with conditions . To our knowledge , this is the first study to take secondary structure information into consideration during the generative phase , which is conducive to effectively generate well-structured sequences with desired properties . Sequence Generation via VQ-VAE The variational auto-encoders ( VAEs ) were first proposed by Kingma & Welling ( 2014 ) for image generation , and then widely applied to sequence generation tasks such as language modeling ( Bowman et al. , 2016 ) , paraphrase generation ( Gupta et al. , 2018 ) , machine translation ( Bao et al. , 2019 ) and so on . Instead of mapping the input to a continuous latent space in VAE , the vector quantized-variational autoencoder ( VQ-VAE ) ( van den Oord et al. , 2017 ) learns the codebook to obtain a discrete latent representation . 
It can avoid issues of posterior collapse while having comparable performance with VAEs . Based on it , Razavi et al . ( 2019 ) uses a multi-scale hierarchical organization to capture global and local features for image generation . Bao et al . ( 2021 ) learns implicit categorical information of target words with VQ-VAE and models the categorical sequence with conditional random fields in non-autoregressive machine translation . In this paper , we employ the multi-scale vector quantized technique to obtain the discrete representation for each position of the peptide . 3 METHOD . Given a peptide sequence $x = \{ a_1 , a_2 , \cdots , a_L \}$ , where $a$ belongs to the 20 common amino acids and $L$ is the sequence length , the corresponding secondary structure can be denoted as $y = \{ y_1 , y_2 , \cdots , y_L \}$ . Following the definition in Kabsch & Sander ( 1983 ) , there are 8 secondary structure types , including one unknown label , so $y_i \in \{ H , B , E , G , I , T , S , - \}$ , where H , G , I denote the alpha , 3-10 , and pi helix , E and T are the strand and turn , and the others are coil structures . We first employ VQ-VAE for the sequence reconstruction task to learn the sequential latent space ( Section 3.1 ) . Then , we enforce the latent space to model the structure information by the secondary structure task ( Section 3.2 ) . Besides , we design the multi-scale codebooks to capture different local patterns ( Section 3.3 ) . Finally , we describe the training and inference phase in Section 3.4 . The overview of our model is shown in Figure 2 . 3.1 MODELING PEPTIDE SEQUENCES . For sequential information , we embed the input peptide $x = \{ a_1 , a_2 , \cdots , a_L \}$ to the latent space via the encoder and use the generator to reconstruct $x$ . We assume that each $a_i$ is determined by a latent variable $z_i$ , and the input sequence $x = a_{1:L}$ will be assigned to a latent sequence $z = z_{1:L}$ .
Since it is computationally intractable to sum continuous latent variables over the sequence , we use VQ-VAE ( van den Oord et al. , 2017 ) to look up the discrete embedding vector $z_q = \{ z_q(a_1) , \cdots , z_q(a_L) \}$ for each position by vector quantization . Specifically , the encoder output $z_e(a_i) \in \mathbb{R}^d$ will be replaced by the codebook embedding $z_q(a_i) \in \mathbb{R}^d$ via a nearest neighbors lookup from the codebook $B \in \mathbb{R}^{K \times d}$ : $z_q(a_i) = e_k$ , where $k = \mathrm{argmin}_{j \in \{ 1 , \cdots , K \}} \| z_e(a_i) - e_j \|_2$ . ( 1 ) Here , $K$ is the size of the codebook and $d$ is the dimension of the codebook entry $e$ . Then , the generator will take $z_q(a_i)$ as its input and reconstruct $x$ . The training objective $\mathcal{L}_r$ is defined as : $\mathcal{L}_r = \log p(a_i \mid z_q(a_i)) + \| \mathrm{sg}[z_e(a_i)] - z_q(a_i) \|_2^2 + \beta \| z_e(a_i) - \mathrm{sg}[z_q(a_i)] \|_2^2$ . ( 2 ) Here , $\mathrm{sg}(\cdot)$ is the stop gradient operator , which becomes 0 at the backward pass . $\beta$ is the commit coefficient to control the codebook loss . 3.2 MODELING SECONDARY STRUCTURES . In order to model the categorical information of the secondary structure , we define an 8-category sequence labeling task on the latent space , which takes $x$ as the input and the structure label sequence $y$ as the target . Similar to sequence reconstruction , we use the same encoder to get $z_e(a_i)$ and employ VQ-VAE to obtain a discrete representation . Then , $z'_q(a_i)$ is fed to a separate classifier for the secondary structure prediction : $\mathcal{L}_s = \log p(y_i \mid z'_q(a_i)) + \| \mathrm{sg}[z_e(a_i)] - z'_q(a_i) \|_2^2 + \beta \| z_e(a_i) - \mathrm{sg}[z'_q(a_i)] \|_2^2$ . ( 3 ) Peptide sequences and structures have distinctive local features , which are often utilized in traditional design algorithms . The patterns of amino acids are often used for template-based design and feature-based recognition . Structure motifs , such as an α-helix spanning at least 3.6 consecutive amino acids , determine the position of amino acids in the 3D space and affect the function of peptides .
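The nearest-neighbour lookup of Equation 1 and the distance terms of Equation 2 can be sketched in numpy . The stop-gradient operator has no effect on forward values , so both loss terms reduce to the same squared distance here ; in a real implementation sg[·] only controls which parameters receive gradients ( the straight-through estimator ) . Sizes and data are toy values :

```python
import numpy as np

rng = np.random.default_rng(3)

K, d, L = 8, 4, 5                      # codebook size, embedding dim, length
codebook = rng.normal(size=(K, d))     # B in R^{K x d}
z_e = rng.normal(size=(L, d))          # encoder outputs z_e(a_i)

def quantize(z_e, codebook):
    """Nearest-neighbour lookup (Eq. 1): replace each encoder output
    with its closest codebook entry under L2 distance."""
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    k = d2.argmin(axis=1)
    return codebook[k], k

def vq_losses(z_e, z_q, beta=0.25):
    """Codebook and commitment terms of Eq. 2. Forward values coincide
    because sg[.] is the identity at the forward pass."""
    sq = ((z_e - z_q) ** 2).sum(-1).mean()
    return sq, beta * sq

z_q, ids = quantize(z_e, codebook)
codebook_loss, commit_loss = vq_losses(z_e, z_q)
```

The commitment coefficient β keeps the encoder outputs from drifting away from their assigned codebook entries .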
However , the structure motifs are often much longer than sequence patterns . Therefore , we establish codebooks of multiple scales to learn latent spaces for different local patterns . | Generating antimicrobial peptide (AMP) is a challenging task. This paper assumes a latent representation is both associated with the amino acid sequence and its secondary structure of peptide. This paper adopts a VQ-VAE network to learn the latent representation of a given peptide sequence, as well as its secondary structure. Then a transformer-based language model is used to model the prior distribution of the latent representations. Random peptide sequences can be generated at this point. The paper evaluates the physical properties of the random peptide sequences, and compared the proportions of these random peptide sequences that are classified as AMP. The comparison shows that secondary structure is important for the generation and the quality of the generated peptide sequences is high. | SP:4792a7e63071f10dc5dd549471b613b292089ef1 |
Generating Antimicrobial Peptides from Latent Secondary Structure Space | 1 INTRODUCTION . Developing neural networks for drug discovery has attracted increasing attention recently . It can facilitate the discovery of potential therapies and reduce the time and cost of drug development ( Stokes et al. , 2020 ) . Plenty of works have been done to employ deep generative models in searching for drug-like molecules with desired properties and achieved great success ( Jin et al. , 2018 ; Shi et al. , 2019 ; Schwalbe-Koda & Gómez-Bombarelli , 2020 ; Xie et al. , 2020 ) . However , these works mainly focus on small molecules , and more complicated biochemicals , such as proteins , are still rarely explored . Antimicrobial peptides ( AMPs ) , defined as short proteins of less than 50 amino acids with potent antimicrobial activity , are an emerging category of therapeutic agents . AMPs exist widely in the natural immune system for all species and kill bacteria in a physical way ( Aronica et al. , 2021 ; Cardoso et al. , 2020 ) . They attach to the bacterial membrane and insert into the membrane to form pores , which leads to the death of bacteria by allowing cytoplasmic leakage . This mechanism makes them more promising for handling extensively drug-resistant bacteria than traditional antibiotics ( Mahlapuu et al. , 2016 ) . However , the theoretical chemical space of peptides is enormous and the sequence number grows exponentially as the length increases . Thus , it is challenging to search for valid peptides with antimicrobial properties from such a huge sequence space . Several factors can affect the antimicrobial activity of peptides ( Boman , 2003 ) . Amino acids with positive charges are more likely to bind with bacterial membrane as most bacterial surfaces are anionic , while those with high hydrophobicities tend to move from the solution environment to the bacterial membrane . 
However , the mechanisms of antibacterial peptides need not only a reasonable sequence but also an appropriate structure . For example , by forming the helix structure , a peptide can gather the hydrophobic amino acids on one side and hydrophilic ones on the other . This ability , named amphipathy , helps it insert into the membrane and maintain a stable hole with other peptide molecules in the membrane , as shown in Figure 1 . The hole will drain the cytoplasm and finally kill the bacteria . This mechanism of killing the bacteria is called ‘ barrel stave ’ . Amphipathy plays an important role in deciding the antibacterial activity of peptides and is closely related to the secondary structure of the peptide ( Aronica et al. , 2021 ) . According to the antimicrobial mechanism , a new AMP should meet the following criteria . C1 : It possesses several ideal physical attributes ( e.g . positive charge and high hydrophobicity ) . C2 : It has appropriate secondary structures ( e.g . alpha-helix ) . C3 : It differs from existing AMPs to some extent . The existing works mainly focus on sequential features of amino acids and ignore the secondary structure . The traditional methods replace subsequences with patterns from the pattern database in a given template ( Porto et al. , 2018 ) . Inspired by success in deep neural networks , many researchers apply neural generative models to AMP discovery . They often use the physical attributes as the extra input to control the generation phase ( Das et al. , 2018 ; Van Oort et al. , 2021 ) , or train classifiers on each attribute to filter the peptides after the generation ( Capecchi et al. , 2021 ; Das et al. , 2021 ) . The former ones usually generate peptides that have low correlation with the input attributes and the filter phase of the latter ones make the sampling inefficient . As described above , the antimicrobial activity is determined by both the amino acid composition and secondary structure of the peptide . 
Thus , we propose LSSAMP to generate antimicrobial peptides from the latent semantic and structure space . Taking the peptide sequence as the time series , we assign a latent variable on each position . Since it is computationally intractable to sum continuous latent variables over all positions , we employ the vector quantized-variational autoencoder ( VQ-VAE ) ( van den Oord et al. , 2017 ) to learn the discrete distribution for each position and further design a multi-scale codebooks strategy to capture different local patterns to fit various length ranges for amino acid and structure sequences . During the generation process , LSSAMP will sample a backbone from the secondary structure latent space and generate the amino acid sequence simultaneously . We evaluate LSSAMP and several baselines through physical properties that are closely related to the antibacterial mechanism . Besides , we use some public AMP prediction models to predict generated sequences being AMPs as previous works did ( Das et al. , 2020 ; Van Oort et al. , 2021 ) . To conclude , our contributions are as follows : • We propose LSSAMP , a generative model which samples peptides from the latent secondary structure space to control the peptide properties . • We develop a multi-scale VQ-VAE to learn positional latent spaces from different aspects and model semantic sequences and structural sequences in the same space . • Experimental results show that LSSAMP can generate peptides with multiple ideal features such as positive charge , better hydrophobicity , and better amphipathicity . The results of public AMP classifiers also verify that our model can generate peptides with high AMP probability . 2 RELATED WORK . 
Antimicrobial Peptide Generation. Traditional methods for AMP design can be divided into three approaches (Torres & de la Fuente-Nunez, 2019): (i) pattern recognition algorithms build a sequential pattern database from existing AMPs, then pick a template peptide and replace local sequences with patterns (Loose et al., 2006; Porto et al., 2018); (ii) genetic algorithms analyze the AMP database and design antibiotic activity functions (Maccari et al., 2013); (iii) molecular modeling and molecular dynamics methods build 3D models of peptides and analyze their activity (Matyus et al., 2007; Bolintineanu & Kaznessis, 2011). Deep generative models have grown rapidly in recent years. Dean & Walper (2020) encode the peptide into a latent space and interpolate along a predictive vector between a known AMP and its scrambled version to generate novel peptides. PepCVAE (Das et al., 2018) and CLaSS (Das et al., 2021) employ variational auto-encoders to generate sequences. AMPGAN (Van Oort et al., 2021) uses a generative adversarial network to generate new peptide sequences conditioned on desired attributes. To our knowledge, this is the first study to take secondary structure information into consideration during the generative phase, which helps to effectively generate well-structured sequences with desired properties.

Sequence Generation via VQ-VAE. Variational auto-encoders (VAEs) were first proposed by Kingma & Welling (2014) for image generation, and then widely applied to sequence generation tasks such as language modeling (Bowman et al., 2016), paraphrase generation (Gupta et al., 2018), and machine translation (Bao et al., 2019). Instead of mapping the input to a continuous latent space as in VAEs, the vector quantized-variational autoencoder (VQ-VAE) (van den Oord et al., 2017) learns a codebook to obtain a discrete latent representation.
It avoids posterior collapse while achieving performance comparable to VAEs. Based on it, Razavi et al. (2019) use a multi-scale hierarchical organization to capture global and local features for image generation. Bao et al. (2021) learn implicit categorical information of target words with VQ-VAE and model the categorical sequence with conditional random fields in non-autoregressive machine translation. In this paper, we employ the multi-scale vector quantization technique to obtain a discrete representation for each position of the peptide.

3 METHOD

Given a peptide sequence x = {a1, a2, ..., aL}, where each a belongs to the 20 common amino acids and L is the sequence length, the corresponding secondary structure can be denoted as y = {y1, y2, ..., yL}. Following the definition in Kabsch & Sander (1983), there are 8 secondary structure types, including one unknown label, so yi ∈ {H, B, E, G, I, T, S, −} (H, G, I denote the alpha, 3-10, and pi helix; E and T are the strand and turn; the others are coil structures). We first employ VQ-VAE on a sequence reconstruction task to learn the sequential latent space (Section 3.1). Then, we enforce the latent space to model structure information via a secondary structure task (Section 3.2). Besides, we design multi-scale codebooks to capture different local patterns (Section 3.3). Finally, we describe the training and inference phases in Section 3.4. The overview of our model is shown in Figure 2.

3.1 MODELING PEPTIDE SEQUENCES

For sequential information, we embed the input peptide x = {a1, a2, ..., aL} into the latent space via the encoder and use the generator to reconstruct x. We assume that each ai is determined by a latent variable zi, so the input sequence x = a1:L is assigned a latent sequence z = z1:L.
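The 8-state structure alphabet above can be encoded as integer targets for the sequence-labeling task. The index order below is an arbitrary assumption — only consistency between training and prediction matters.

```python
# Kabsch & Sander (DSSP) 8-state alphabet, including the unknown label '-'.
SS8 = "HBEGITS-"
SS2IDX = {s: i for i, s in enumerate(SS8)}

def encode_structure(y: str) -> list:
    """Map a structure string y = y1..yL to class indices in [0, 8)."""
    return [SS2IDX[s] for s in y]

# A short helix followed by a turn, flanked by unknown/coil labels.
print(encode_structure("HHHH-TT-"))  # → [0, 0, 0, 0, 7, 5, 5, 7]
```

These indices are the targets of the 8-category classifier described in Section 3.2.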
Since it is computationally intractable to sum continuous latent variables over the sequence, we use VQ-VAE (van den Oord et al., 2017) to look up a discrete embedding vector zq = {zq(a1), ..., zq(aL)} for each position by vector quantization. Specifically, the encoder output ze(ai) ∈ R^d is replaced by the codebook embedding zq(ai) ∈ R^d via a nearest-neighbor lookup in the codebook B ∈ R^{K×d}:

zq(ai) = ek, where k = argmin_{j ∈ {1, ..., K}} ‖ze(ai) − ej‖2 . (1)

Here, K is the size of the codebook and d is the dimension of a codebook entry e. The generator then takes zq(ai) as its input and reconstructs x. The training objective Lr is defined as:

Lr = log p(ai | zq(ai)) + ‖sg[ze(ai)] − zq(ai)‖2^2 + β ‖ze(ai) − sg[zq(ai)]‖2^2 . (2)

Here, sg(·) is the stop-gradient operator, whose gradient is zero in the backward pass, and β is the commitment coefficient controlling the codebook loss.

3.2 MODELING SECONDARY STRUCTURES

To model the categorical information of the secondary structure, we define an 8-category sequence labeling task on the latent space, which takes x as input and the structure label sequence y as target. Similar to sequence reconstruction, we use the same encoder to obtain ze(ai) and employ VQ-VAE to obtain a discrete representation. Then, z′q(ai) is fed to a separate classifier for secondary structure prediction:

Ls = log p(yi | z′q(ai)) + ‖sg[ze(ai)] − z′q(ai)‖2^2 + β ‖ze(ai) − sg[z′q(ai)]‖2^2 . (3)

Peptide sequences and structures have distinctive local features, which are often utilized in traditional design algorithms. Patterns of amino acids are often used for template-based design and feature-based recognition. Structure motifs, such as an α-helix spanning at least 3.6 consecutive amino acids, determine the position of amino acids in 3D space and affect the function of peptides.
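The quantization step of Eq. (1) is just a nearest-neighbor lookup. The pure-Python sketch below illustrates it on a tiny codebook (K = 3, d = 2 are illustrative values); it also evaluates the codebook and commitment terms of Eq. (2) with sg[·] treated as a constant, since there is no autograd here.

```python
def sq_dist(u, v):
    """Squared Euclidean distance between two d-dimensional vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def quantize(z_e, codebook):
    """Eq. (1): return the index k minimizing ||z_e - e_j||_2 and e_k."""
    k = min(range(len(codebook)), key=lambda j: sq_dist(z_e, codebook[j]))
    return k, codebook[k]

# Toy codebook B with K = 3 entries of dimension d = 2.
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
z_e = [0.9, 1.2]                      # a single encoder output ze(ai)
k, z_q = quantize(z_e, codebook)
print(k)  # → 1  (nearest entry is [1.0, 1.0])

# The last two terms of Eq. (2) on this position; sg[.] is a no-op here
# because nothing is differentiated in this sketch.
beta = 0.25                            # commitment coefficient (assumed)
codebook_loss = sq_dist(z_e, z_q)      # ||sg[ze] - zq||_2^2
commit_loss = beta * sq_dist(z_e, z_q) # beta * ||ze - sg[zq]||_2^2
```

In a trained model the two terms differ in where the gradient flows (codebook vs. encoder); numerically at a single point they coincide up to the factor β, as the sketch shows.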
However, structure motifs are often much longer than sequence patterns. Therefore, we establish codebooks of multiple scales to learn latent spaces for different local patterns.

Summary: The paper proposes a multi-scale VQ-VAE model to generate antimicrobial peptides (AMPs). The model is trained on both the amino-acid (aa) sequences and the secondary structures. The authors evaluate the model by comparing attributes of the generated sequences with real AMPs. They also perform an ablation study of their model and an extensive benchmark of existing generative models that have been used for peptides. The main contributions of the authors are:
- Developing a generative model that accounts for both the aa sequences and the secondary structure of the AMPs.
- A large set of experiments to validate their approach, comprising the study of attributes of the generated sequences, a benchmark of existing methods, evaluation of the generated sequences with existing predictors, and an ablation study.

Paper ID: SP:4792a7e63071f10dc5dd549471b613b292089ef1