Kalman Filter Is All You Need: Optimization Works When Noise Estimation Fails
1 INTRODUCTION. The Kalman Filter (KF) (Kalman, 1960) is a celebrated method for linear filtering and prediction, with applications in many fields including tracking, guidance, navigation and control (Zarchan and Musoff, 2000; Kirubarajan, 2002). Due to its simplicity and robustness, it remains highly popular – with over 10,000 citations in the last 5 years alone (Google Scholar, 2021) – despite the rise of many non-linear sequential prediction models (e.g., recurrent neural networks). The KF relies on the following model for a dynamic system:

X_{t+1} = F_t X_t + ω_t,  ω_t ∼ N(0, Q)
Z_t = H_t X_t + ν_t,  ν_t ∼ N(0, R)    (1)

where X_t is the state of the system at time t (whose estimation is usually the goal), and its dynamics are modeled by the linear operator F_t up to the random noise ω_t with covariance Q; and Z_t is the observation, which is modeled by the linear operator H_t up to the noise ν_t with covariance R. When the operators F_t, H_t are not assumed to depend on time, the notation may be simplified to F, H. To use the KF, one must determine the noise parameters Q, R. The filtering errors (i.e., estimation errors of the states {X_t}) are minimized when Q and R correspond to the true covariance matrices of the noise (Humpherys et al., 2012). Thus, these parameters are usually determined by noise estimation. In the absence of system-state data {x_t} (the "ground truth"), many methods have been suggested to determine Q and R from observation data {z_t} alone (Abbeel et al., 2005; Odelson et al., 2006; Zanni et al., 2017; Park et al., 2019). When ground-truth data is available, however, noise estimation trivially reduces to the calculation of the sample covariance matrices (Lacey, 1998):

R̂ := Cov({z_t − H_t x_t}_t),  Q̂ := Cov({x_{t+1} − F_t x_t}_t).    (2)

Indeed, as stated by Odelson et al.
(2006), "the more systematic and preferable approach to determine the filter gain is to estimate the covariances from data". Our work focuses on such problems with ground truth available for learning (but not for inference after the learning, of course), and was motivated by a real-world Doppler radar estimation problem. Noise estimation is often not optimal: The equivalence between noise estimation and error minimization can be proved under the standard KF assumptions – that is, known and linear dynamics and observation models (F_t, H_t), with i.i.d. and normally-distributed noises ({ω_t}, {ν_t}) (Humpherys et al., 2012). However, as put by Thomson (1994), "experience with real-world data soon convinces one that stationarity and Gaussianity are fairy tales invented for the amusement of undergraduates" – and linearity and independence can be safely added to this list. Therefore, under realistic assumptions, the covariance of the noise does not necessarily correspond to optimal filtering. We introduce a case study in the context of radar tracking, where we demonstrate that even using the true covariance of the noise ("oracle" noise estimation) is sub-optimal in a variety of scenarios – including very simple scenarios with relatively minor violations of the KF assumptions. In Appendices E and F, we also analyze this phenomenon analytically for two special cases (non-linearity in Doppler radar and non-i.i.d. noise in lidar), where the violation of a single KF assumption is shown to modify the effective noise. By providing this extensive evidence that noise estimation is a sub-optimal way to tune a KF even in the presence of system-state ground-truth data, we re-open a problem that was considered solved for decades (Kalman, 1960). We also show that seemingly small changes in the properties of the scenario may lead to major changes in the desired design of the KF, e.g.
, whether to use a KF or an Extended KF (Sorenson, 1985). In certain cases, the design choices are easy to overlook (e.g., Cartesian vs. spherical coordinates), and are not trivial to make even if noticed. As a result, it is impractical to manually choose or develop a variant of the KF for every problem. Rather, we should assume that our model is sub-optimal, and leverage data to deal with the sub-optimality as robustly as possible. Optimization is optimal: We consider Q and R as model parameters that should be optimized with respect to the filtering errors (i.e., system-state estimation errors) – rather than estimated as the noise covariance. While both noise estimation and error optimization rely on exploitation of data, only the latter explicitly addresses the actual goal of solving the filtering problem. Gradient-based optimization methods are usually effective in the field of machine learning, but applying them naively to the entries of Q and R may violate the symmetry and positive-definiteness (SPD) constraints of the covariance matrices. Indeed, even works that go as far as optimizing Q and R (instead of estimating the noise) usually apply limited optimization methods, e.g., grid-search (Coskun et al., 2017) or diagonal restriction of the covariance matrices (Formentin and Bittanti, 2014; Li et al., 2019). To address this issue, we use a parameterization based on the Cholesky decomposition (Horn and Johnson, 1985), which allows us to apply gradient-based optimization to SPD matrices. This method is computationally efficient compared to other general gradient-based methods for SPD optimization (Tsuda et al., 2005; Tibshirani, 2015).
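A minimal sketch of such a Cholesky-based parameterization (illustrative only; the function name and the exponentiated-diagonal trick are our own choices, not necessarily the paper's exact implementation):

```python
import numpy as np

def spd_from_params(theta, n):
    """Map n*(n+1)/2 unconstrained parameters to an SPD matrix.

    The parameters fill a lower-triangular Cholesky factor L; the
    diagonal of L is exponentiated so it stays strictly positive,
    which makes L @ L.T symmetric positive-definite by construction.
    A gradient-based optimizer can therefore update theta freely,
    and the resulting Q (or R) always satisfies the SPD constraint.
    """
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = np.asarray(theta, dtype=float)
    d = np.diag_indices(n)
    L[d] = np.exp(L[d])
    return L @ L.T
```

In a differentiable framework (e.g., PyTorch autograd), the same map lets ordinary gradient descent on theta minimize the filtering errors while keeping Q and R valid covariance matrices at every step.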
We demonstrate that the optimization reduces the errors of the KF consistently: over different variants of the KF, over different violations of KF assumptions, over different domains (tracking from radar, video or lidar), over small and large training datasets, and even under distributional shifts between train and test datasets. Furthermore, we show that optimization improves the robustness to design decisions by shrinking the performance gaps between different variants of the KF. As explained above, we extensively justify the need to optimize the KF in many practical problems, and suggest a simple solution which is effective, robust, computationally efficient, and relies on standard tools in supervised machine learning. As a result, we believe that in the scope of filtering problems with available ground truth, whenever the KF assumptions are not strictly guaranteed, the suggested optimization method should become the new standard procedure for KF tuning. Unfair comparison: Many learning algorithms have been suggested to address non-linearity in filtering problems, e.g., based on Recurrent Neural Networks (RNN). Such works often use a linear tool such as the KF as a baseline for comparison – with tuning of parameters sometimes being ignored (Gao et al., 2019), sometimes based on noise estimation (Dai et al., 2020), and sometimes optimized in a limited manner using trial-and-error (Jamil et al., 2020) or grid-search (Coskun et al., 2017). Our findings imply that such a methodology yields over-optimistic conclusions, since the baseline is not optimized to the same level as the learning model. This may result in the adoption of over-complicated algorithms with no actual added value. Instead, any learning algorithm should be compared to a baseline that is optimized using a similar method (e.g., gradient descent with respect to the errors).
Indeed, we consider an extension of the KF based on LSTM, which is the key component in many SOTA algorithms for non-linear sequential prediction in recent years (Neu et al., 2021). For radar tracking with non-linear motion, we demonstrate how the LSTM seems to provide a significant improvement over the KF. Then, we show that the whole improvement comes from the optimization of parameters, and not from the expressive non-linear architecture. In particular, this result demonstrates the competitiveness of our suggested method versus SOTA sequential prediction models. Recent works in the area of machine learning have already shown that advanced algorithms often obtain most of their improvement from implementation nuances (Engstrom et al., 2019; Andrychowicz et al., 2020; Henderson et al., 2017). Our work continues this line of thinking and raises awareness of this issue in the domain of filtering problems. Contribution: We show (empirically and analytically) that the gold-standard noise estimation method to tune the KF given ground-truth data is often sub-optimal; demonstrate how this leads to under-evaluation of the KF compared to optimized filtering models such as neural networks; suggest a simple method to optimize the KF, using gradient-based optimization with a Cholesky parameterization; and extensively demonstrate its improved accuracy and its robustness to model misspecification. Limitations: This work relies on the availability of ground-truth system states in the training data to allow supervised learning. Ground-truth data is often available from simulations, controlled experiments or manual labeling. Within this scope, we show that although noise estimation is straightforward (Equation 2), it is often not the right task to address. Another limitation is in the theoretical guarantees of gradient-based optimization (see Appendix L), despite its wide success in many fields.
The paper is organized as follows: Section 2 reviews the KF. Section 3 introduces our method for efficient KF optimization. Section 4 justifies the necessity of KF optimization through a detailed case study. Section 5 presents a neural version of the KF which reduces the errors compared to a standard KF – but not when compared to an optimized KF. Section 6 discusses related works. 2 PRELIMINARIES: THE KALMAN FILTER ALGORITHM. The KF algorithm (Kalman, 1960; Humpherys et al., 2012) relies on Equation 1 for a dynamic system model. It keeps an estimate of the state X_t, represented as the mean x_t and covariance P_t of a normal distribution. As shown in Figure 1, it alternately predicts the next state using the dynamics model (prediction step) and processes new information from incoming observations (update or filtering step). The KF yields optimal state estimations – but only under a restrictive set of assumptions (Kalman, 1960), as specified in Definition 2.1. Note that normality of the noise is excluded since it is not necessary for optimality (Humpherys et al., 2012), although it is also often assumed. Definition 2.1 (KF assumptions). F_t, H_t of Equation 1 are known matrices that do not depend on the state (i.e., correspond to linear models); ω_t, ν_t are i.i.d. random variables with zero mean and constant, known covariance matrices Q, R, respectively; and the initial state distribution is known. Certain violations of the assumptions in Definition 2.1 can be partially handled by variants of the KF, such as the Extended KF (EKF) (Sorenson, 1985), which replaces the linear models F_t, H_t with local linear approximations, and the Unscented KF (UKF) (Wan and Van Der Merwe, 2000), which applies the filtering through sigma points sampled from the estimated distribution. The alternate use of multiple tracking models is also possible using switching mechanisms (Mazor et al., 1998).
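The alternating predict/update cycle described above can be sketched as follows (a standard textbook formulation with our own variable names, not code from the paper):

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One Kalman-filter cycle: predict with the dynamics model,
    then update with the incoming observation z."""
    # Prediction step: propagate mean and covariance through F
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update (filtering) step: fuse the observation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Note that Q and R enter only through P_pred and S, which is exactly why they act as tunable parameters of the filter rather than quantities that must equal the true noise covariances.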
While F_t and H_t are usually determined based on domain knowledge, Q and R are often estimated from data as the covariance of the noise. As mentioned in Section 1, this can be done using Equation 2 if ground-truth data is available, or using more sophisticated methods otherwise (Odelson et al., 2006; Feng et al., 2014; Park et al., 2019). See Appendix A for a detailed introduction to the KF and recurrent neural networks (RNN, LSTM).
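With ground-truth states, Equation 2 amounts to taking sample covariances of the model residuals. A minimal sketch (our own helper, assuming time-invariant F and H for simplicity):

```python
import numpy as np

def estimate_noise_covariances(xs, zs, F, H):
    """Sample-covariance noise estimation in the spirit of Equation 2.

    xs: (T, n) ground-truth states; zs: (T, m) observations;
    F: (n, n) dynamics matrix; H: (m, n) observation matrix.
    """
    w = xs[1:] - xs[:-1] @ F.T   # process residuals x_{t+1} - F x_t
    v = zs - xs @ H.T            # observation residuals z_t - H x_t
    Q_hat = np.atleast_2d(np.cov(w, rowvar=False))
    R_hat = np.atleast_2d(np.cov(v, rowvar=False))
    return Q_hat, R_hat
```

When the KF assumptions hold, these estimates converge to the true Q and R; the paper's point is that when the assumptions are violated, even the exact covariances recovered this way need not minimize the filtering errors.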
The paper studies the problem of using the Kalman filter to estimate the state of a dynamical system when the noise covariance matrices, for both the state dynamics and the observation, are unknown. The paper assumes access to trajectories of both states and observations. In this setting, a natural approach is to use the data to form an estimate of the covariance matrices. However, the paper argues that an optimization procedure that finds noise covariance matrices minimizing the MSE is preferable and should become the "new standard procedure for KF tuning". With several numerical experiments, the paper illustrates that optimization-based KF tuning provides better and more robust results than a standard KF based on estimation, and that the comparisons made in the neural-network-based estimation literature are not fair.
This paper targets the design of a classical filtering method – the Kalman filter (KF). The linearity assumption is a strong limitation of KF models, although a wide range of variants have been developed for non-linear systems. Different from these studies, the authors focus on determining the noise models in KF(-class) models through an optimization method, in order to improve the accuracy and robustness of the KF estimates. The proposed approach was assessed on benchmark datasets, and the results demonstrated the superiority of the proposed method. Theoretical analyses are also provided in the appendices.
Kalman Filter Is All You Need: Optimization Works When Noise Estimation Fails
1 INTRODUCTION . The Kalman Filter ( KF ) ( Kalman , 1960 ) is a celebrated method for linear filtering and prediction , with applications in many fields including tracking , guidance , navigation and control ( Zarchan and Musoff , 2000 ; Kirubarajan , 2002 ) . Due to its simplicity and robustness , it remains highly popular – with over 10,000 citations in the last 5 years alone ( Google Scholar , 2021 ) – despite the rise of many non-linear sequential prediction models ( e.g. , recurrent neural networks ) . The KF relies on the following model for a dynamic system : Xt+1 = FtXt + ωt ( ωt ∼ N ( 0 , Q ) ) Zt = HtXt + νt ( νt ∼ N ( 0 , R ) ) ( 1 ) where Xt is the state of the system at time t ( whose estimation is usually the goal ) , and its dynamics are modeled by the linear operator Ft up to the random noise ωt with covariance Q ; and Zt is the observation , which is modeled by the linear operator Ht up to the noise νt with covariance R. When the operators Ft , Ht are not assumed to depend on time , the notation may be simplified to F , H . To use the KF , one must determine the noise parameters Q , R . The filtering errors ( i.e. , estimation errors of the states { Xt } ) are minimized when Q and R correspond to the true covariance matrices of the noise ( Humpherys et al. , 2012 ) . Thus , these parameters are usually determined by noise estimation . In absence of system state data { xt } ( the ” ground truth ” ) , many methods have been suggested to determine Q and R from observations data { zt } alone ( Abbeel et al. , 2005 ; Odelson et al. , 2006 ; Zanni et al. , 2017 ; Park et al. , 2019 ) . When ground-truth data is available , however , noise estimation trivially reduces to calculation of the sample covariance matrices ( Lacey , 1998 ) : R̂ : = Cov ( { zt −Htxt } t ) , Q̂ : = Cov ( { xt+1 − Ftxt } t ) . ( 2 ) Indeed , as stated by Odelson et al . 
( 2006 ) , “ the more systematic and preferable approach to determine the filter gain is to estimate the covariances from data ” . Our work focuses on such problems with ground-truth available for learning ( but not for inference after the learning , of course ) , which was motivated by a real-world Doppler radar estimation problem . Noise estimation is often not optimal : The equivalence between noise estimation and errors minimization can be proved under the standard KF assumptions – that is , known and linear dynamics and observation models ( Ft , Ht ) , with i.i.d and normally-distributed noises ( { ωt } , { νt } ) ( Humpherys et al. , 2012 ) . However , as put by Thomson ( 1994 ) , “ experience with real-world data soon convinces one that stationarity and Gaussianity are fairy tales invented for the amusement of undergraduates ” – and linearity and independence can be safely added to this list . Therefore , under realistic assumptions , the covariance of the noise does not necessarily correspond to optimal filtering . We introduce a case study in the context of radar tracking , where we demonstrate that even using the true covariance of the noise ( ” oracle ” noise-estimation ) is sub-optimal in a variety of scenarios – including very simple scenarios with relatively minor violation of the KF assumptions . In Appendices E and F , we also analyze this phenomenon analytically for two private cases ( non-linearity in Doppler radar and non-i.i.d noise in lidar ) , where the violation of a single KF assumption is shown to modify the effective noise . By providing this extensive evidence that noise-estimation is a sub-optimal way to tune a KF even in presence of system-states ground-truth data , we re-open a problem that was considered solved for decades ( Kalman , 1960 ) . We also show that seemingly small changes in the properties of the scenario may lead to major changes in the desired design of the KF , e.g. 
, whether to use a KF or an Extended KF ( Sorenson , 1985 ) . In certain cases , the design choices are easy to overlook ( e.g. , Cartesian vs. spherical coordinates ) , and are not trivial to make even if noticed . As a result , it is impractical to manually choose or develop a variant of the KF for every problem . Rather , we should assume that our model is sub-optimal , and leverage data to deal with the sub-optimality as robustly as possible . Optimization is optimal : We consider Q and R as model parameters that should be optimized with respect to the filtering errors ( i.e. , system-state estimation errors ) – rather than estimating the noise . While both noise estimation and errors optimization rely on exploitation of data , only the latter explicitly addresses the actual goal of solving the filtering problem . Gradient-based optimization methods are usually effective in the field of machine learning , but applying them naively to the entries of Q and R may violate the symmetry and positive-definiteness ( SPD ) constraints of the covariance matrices . Indeed , even works that come as far as optimizing Q and R ( instead of estimating the noise ) usually apply limited optimization methods , e.g. , gridsearch ( Coskun et al. , 2017 ) or diagonal restriction of the covariance matrices ( Formentin and Bittanti , 2014 ; Li et al. , 2019 ) . To address this issue , we use a parameterization based on Cholesky decomposition ( Horn and Johnson , 1985 ) , which allows us to apply gradient-based optimization to SPD matrices . This method is computationally efficient compared to other general gradient-based methods for SPD optimization ( Tsuda et al. , 2005 ; Tibshirani , 2015 ) . 
We demonstrate that the optimization reduces the errors of the KF consistently : over different variants of the KF , over different violations of KF assumptions , over different domains ( tracking from radar , video or lidar ) , over small and large training datasets , and even under distributional shifts between train and test datasets . Furthermore , we show that optimization improves the robustness to design decisions , by shrinking the performance gaps between different variants of the KF . As explained above , we extensively justify the need to optimize the KF in many practical problems , and suggest a simple solution which is effective , robust , computationally efficient , and relies on standard tools in supervised machine learning . As a result , we believe that in the scope of filtering problems with available ground-truth , whenever the KF assumptions are not strictly-guaranteed , the suggested optimization method should become the new standard procedure for the KF tuning . Unfair comparison : Many learning algorithms have been suggested to address non-linearity in filtering problems , e.g. , based on Recurrent Neural Networks ( RNN ) . Such works often use a linear tool such as the KF as a baseline for comparison – with tuning parameters being sometimes ignored ( Gao et al. , 2019 ) , sometimes based on noise estimation ( fa Dai et al. , 2020 ) , and sometimes optimized in a limited manner using trial-and-error ( Jamil et al. , 2020 ) or grid-search ( Coskun et al. , 2017 ) . Our findings imply that such a methodology yields over-optimistic conclusions , since the baseline is not optimized to the same level as the learning model . This may result in adoption of over-complicated algorithms with no actual added value . Instead , any learning algorithm should be compared to a baseline that is optimized using a similar method ( e.g. , gradient-descent with respect to the errors ) . 
Indeed, we consider an extension of the KF based on LSTM, which has been a key component in many SOTA algorithms for non-linear sequential prediction in recent years (Neu et al., 2021). For radar tracking with non-linear motion, we demonstrate how the LSTM seems to provide a significant improvement over the KF. Then, we show that the whole improvement comes from optimization of parameters, and not from the expressive non-linear architecture. In particular, this result demonstrates the competitiveness of our suggested method versus SOTA sequential prediction models. Recent works in the area of machine learning have already shown that advanced algorithms often obtain most of their improvement from implementation nuances (Engstrom et al., 2019; Andrychowicz et al., 2020; Henderson et al., 2017). Our work continues this line of thinking and raises awareness of this issue in the domain of filtering problems. Contribution: we show (empirically and analytically) that the gold-standard noise estimation method for tuning the KF given ground-truth data is often sub-optimal; demonstrate how this leads to under-evaluation of the KF compared to optimized filtering models such as neural networks; suggest a simple method to optimize the KF, using gradient-based optimization with a Cholesky parameterization; and extensively demonstrate its improved accuracy and its robustness to model misspecification. Limitations: this work relies on the availability of ground-truth system states in the training data to allow supervised learning. Ground-truth data is often available from simulations, controlled experiments or manual labeling. Within this scope, we show that although noise estimation is straightforward (Equation 2), it is often not the right task to address. Another limitation is in the theoretical guarantees of gradient-based optimization (see Appendix L), despite its wide success in many fields.
The paper is organized as follows: Section 2 reviews the KF. Section 3 introduces our method for efficient KF optimization. Section 4 justifies the necessity of KF optimization through a detailed case study. Section 5 presents a neural version of the KF which reduces the errors compared to a standard KF – but not when compared to an optimized KF. Section 6 discusses related works. 2 PRELIMINARIES: THE KALMAN FILTER ALGORITHM. The KF algorithm (Kalman, 1960; Humpherys et al., 2012) relies on Equation 1 for a dynamic system model. It keeps an estimate of the state Xt, represented as the mean xt and covariance Pt of a normal distribution. As shown in Figure 1, it alternately predicts the next state using the dynamics model (prediction step) and processes new information from incoming observations (update or filtering step). The KF yields optimal state estimations – but only under a restrictive set of assumptions (Kalman, 1960), as specified in Definition 2.1. Note that normality of the noise is excluded, since it is not necessary for optimality (Humpherys et al., 2012), although it is also often assumed. Definition 2.1 (KF assumptions). Ft, Ht of Equation 1 are known matrices that do not depend on the state (i.e., correspond to linear models); ωt, νt are i.i.d. random variables with zero mean and constant, known covariance matrices Q, R, respectively; and the initial state distribution is known. Some violations of the assumptions in Definition 2.1 can be partially handled by variants of the KF, such as the Extended KF (EKF) (Sorenson, 1985), which replaces the linear models Ft, Ht with local linear approximations, and the Unscented KF (UKF) (Wan and Van Der Merwe, 2000), which applies the filtering through sigma-points sampled from the estimated distribution. Alternating between multiple tracking models is also possible using switching mechanisms (Mazor et al., 1998).
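The two alternating steps can be sketched in a few lines (a generic textbook implementation for reference, not the paper's code):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction step: push mean and covariance through the dynamics."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Update step: fold a new observation z into the state estimate."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D constant-velocity model: state = (position, velocity), observe position.
F = np.array([[1., 1.], [0., 1.]])
H = np.array([[1., 0.]])
Q, R = 0.01 * np.eye(2), np.array([[1.]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([1.0]), H, R)
```

Note that the filtering errors depend on Q and R only through the gain K, which is why mis-specified covariances degrade the estimates.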
While Ft and Ht are usually determined based on domain knowledge , Q and R are often estimated from data as the covariance of the noise . As mentioned in Section 1 , this can be done using Equation 2 if ground-truth data is available , or using more sophisticated methods otherwise ( Odelson et al. , 2006 ; Feng et al. , 2014 ; Park et al. , 2019 ) . See Appendix A for a detailed introduction of the KF and recurrent neural networks ( RNN , LSTM ) .
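With ground-truth states available, Equation 2 amounts to two sample covariances; a minimal sketch (the helper name is our own, numpy assumed):

```python
import numpy as np

def estimate_noise(xs, zs, F, H):
    """Sample-covariance noise estimation (Equation 2): xs[t] are the
    ground-truth states and zs[t] the observations, stacked row-wise."""
    w = xs[1:] - xs[:-1] @ F.T      # dynamics residuals  x_{t+1} - F x_t
    v = zs - xs @ H.T               # observation residuals  z_t - H x_t
    return np.cov(w, rowvar=False), np.cov(v, rowvar=False)
```

The paper's point is that these estimates, although consistent for the true noise, are not in general the error-minimizing choice of Q and R once the KF assumptions are violated.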
This paper proposes using gradient-based optimization to tune the process and observation noise covariance parameters of a Kalman Filter via supervised learning (i.e., assuming access to ground-truth states during training). The need for this approach is motivated by the stringent assumptions under which optimality of the Kalman Filter is shown, and by the fact that these assumptions often fail in practice. It is shown through several case studies that the Optimized Kalman Filter (OKF) not only significantly outperforms a baseline KF implemented with estimated noise covariances, but also matches or outperforms Extended Kalman Filters and an LSTM-based "neural KF" introduced by the authors.
SP:5e8aa2fdac0b7b52f69c4baccab14250176382d9
Range-Net: A High Precision Neural SVD
For Big Data applications , computing a rank-r Singular Value Decomposition ( SVD ) is restrictive due to the main memory requirements . Recently introduced streaming Randomized SVD schemes work under the restrictive assumption that the singular value spectrum of the data has an exponential decay . This is seldom true for any practical data . Further , the approximation errors in the singular vectors and values are high due to the randomized projection . We present Range-Net as a low memory alternative to rank-r SVD that satisfies the lower bound on tail-energy given by Eckart-Young-Mirsky ( EYM ) theorem at machine precision . Range-Net is a deterministic two-stage neural optimization approach with random initialization , where the memory requirement depends explicitly on the feature dimension and desired rank , independent of the sample dimension . The data samples are read in a streaming manner with the network minimization problem converging to the desired rank-r approximation . Range-Net is fully interpretable where all the network outputs and weights have a specific meaning . We provide theoretical guarantees that Range-Net extracted SVD factors satisfy EYM tail-energy lower bound with numerical experiments on real datasets at various scales that confirm these bounds . A comparison against the state-of-the-art streaming Randomized SVD shows that Range-Net is six orders of magnitude more accurate in terms of tail energy while correctly extracting the singular values and vectors . 1 INTRODUCTION . Singular Value Decomposition ( SVD ) is pivotal to exploratory data analysis in identifying an invariant structure under a minimalistic representation ( assumptions on the structure ) containing the span of resolvable information in the dataset . Finding a low rank structure is a fundamental task in applications including Image Compression ( de Souza et al. , 2015 ) , Image Recovery ( Brand , 2002 ) , Background Removal ( Wang et al. 
, 2018), Recommendation Systems (Zhang et al., 2005), and as a pre-processing step for Clustering (Drineas et al., 2004) and Classification (Jing et al., 2017). With the advent of digital sensors and modern-day data acquisition technologies, the sheer amount of data now requires that we revisit the solution scheme with reduced memory consumption as the target. In this work, we reformulate SVD with special emphasis on the main-memory requirement that precludes its use for big-data applications, with no loss in accuracy. It is well known that natural data matrices have a decaying spectrum, wherein saving the data in memory in its original form is either redundant or not required from an application point of view. However, any assumption on the decay rate can only be validated if the singular value decomposition is known a priori, which is seldom the case in exploratory data analysis (see Fig. 3). Visually assessing a rank-r approximation for image processing applications might seem correct qualitatively (see Fig. 4), but is still prone to large errors due to limited human visual acuity (see Fig. 6). This is further exacerbated when the application at hand is associated with scientific computations, wherein the anomalies or unaccounted phenomena are still being explored from large-scale datasets. The reader is preemptively referred to Fig. 19, where the high-frequency features related to turbulence cannot be disregarded. Furthermore, for classification and clustering problems where feature-dimension reduction is desirable, it is imperative that a low-rank approximation of a dataset contains most (≥ 90%) of the original information content without altering the subspace information. In this case, an over-sampled rank can exceed the feature dimension of the data itself (see Section 4.2). 1.1 PROBLEM STATEMENT. Let us denote the raw data matrix as X ∈ R^{m×n} of rank f ≤ min(m, n) and its approximation as X_r ∈ R^{m×n}.
The SVD of X is X = UΣV^T, where U = [u_1, ..., u_f] ∈ R^{m×f} and V = [v_1, ..., v_f] ∈ R^{n×f} are its left and right singular vectors respectively, and Σ = diag(σ_1, ..., σ_f) ∈ R^{f×f} holds its non-zero singular values. The rank-r (r ≤ f) truncation of X is then X_r = U_rΣ_rV_r^T, where Σ_r = Σ[1:r] contains the top r singular values, and U_r = U[1:r] and V_r = V[1:r] are the corresponding left and right singular vectors. In other words, X = UΣV^T = U_rΣ_rV_r^T + U_{f\r}Σ_{f\r}V_{f\r}^T = X_r + X_{f\r}, where U_{f\r}, V_{f\r} are the trailing f − r left and right singular vectors. Theorem 1 (Eckart-Young-Mirsky Theorem (Eckart & Young, 1936; Mirsky, 1960)): Let X ∈ R^{m×n}, with m ≥ n, be a real rank-f matrix with singular value decomposition X = UΣV^T, where the orthonormal matrices U, V contain the left and right singular vectors of X and Σ is the diagonal matrix of singular values. Then for an arbitrary rank-r (r ≤ f) matrix B_r ∈ R^{m×n}, ‖X − B_r‖_F ≥ ‖X − X_r‖_F, where X_r = U_rΣ_rV_r^T, with Σ_r the diagonal matrix of the largest r singular values and U_r, V_r the corresponding left and right singular vector matrices. The problem statement is then: given X ∈ R^{m×n}, find X̂ such that

argmin_{X̂ ∈ R^{m×n}, rank(X̂) ≤ r} ‖X − X̂‖_F    (1)

In effect, the minimizer X̂* of the above problem gives us the rank-r approximation of X, such that X_r = X̂*. In this work we utilize the minimizer of the above problem to extract the top rank-r SVD factors of X without loading the entire data matrix into main memory. Note that the minimizer naturally attains the lower bound on the tail energy, in addition to being a rank-r approximation. 1.2 MAIN CONTRIBUTIONS. Data and Representation Driven Neural SVD: the representation-driven network loss terms ensure that the data matrix X is decomposed into the desired SVD factors such that X = UΣV^T. In the absence of the representation-enforcing loss term, the minimizer of Eq.
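The truncation and the EYM bound are easy to check numerically; a small self-contained sketch (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 30))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

r = 5
Xr = U[:, :r] * s[:r] @ Vt[:r]      # rank-r truncation U_r Sigma_r V_r^T

# Tail energy equals the energy of the discarded singular values...
tail = np.linalg.norm(X - Xr, 'fro')
assert np.isclose(tail, np.sqrt(np.sum(s[r:] ** 2)))

# ...and no other rank-r matrix B_r can beat it (the EYM lower bound).
B = rng.normal(size=(50, r)) @ rng.normal(size=(r, 30))
assert np.linalg.norm(X - B, 'fro') >= tail
```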
1 results in an arbitrary decomposition X = ABC, different from the SVD factors. A Deterministic Approach with GPU Bit-precision Results: the network can be initialized with weights drawn from a random distribution, and the iterative minimization is deterministic. The streaming order of the samples is of no consequence: the user is free to choose the order in which the samples are processed batch-wise (indexed or randomized). First Streaming Architecture with Exact Low Memory Cost: Range-Net requires an exact memory budget determined by the desired rank r and the data dimensions X ∈ R^{m×n}, given by r(n + r), independent of the sample dimension m. This is the first streaming algorithm that does not require the user to wait until the streaming step is complete, contrary to randomized streaming algorithms. Layer-wise Fully Interpretable: Range-Net is a low-weight, fully interpretable, dense neural network where all the network weights and outputs have a precise definition. The network weights are placeholders for the right (or left) orthonormal vectors upon convergence of the network minimization problems (see Appendix D). The user can explicitly plug in a ground-truth solution to verify the network design and directly arrive at the tail-energy bound. 2 RELATED WORKS. The core idea behind randomized matrix decomposition is to make one or more passes over the data and compute efficient sketches. These can be broadly categorized into four branches: 1) sampling-based methods (Subset Selection (Boutsidis et al., 2014) and Randomized CUR (Drineas et al., 2006)); 2) random-projection-based QR (Halko et al., 2011); 3) Randomized SVD (Halko et al., 2011); and 4) power-iteration methods (Musco & Musco, 2015). The sketches can represent any combination of the row space, the column space, or the space generated by the intersection of rows and columns (the core space).
However, all of these methods require loading the entire data in memory. Readers are referred to Kishore Kumar & Schneider (2017) and Ye et al. (2019) for an expanded survey. Conventional SVD, although deterministic and accurate, becomes expensive when the data size increases, and requires r passes over the data (see Table 1). The two branches of interest to us are randomized SVD and power-iteration methods for extracting SVD factors. Randomized SVD algorithms (Halko et al., 2011) are generally a two-stage process: 1) Randomized Sketching uses random sampling to obtain a reduced matrix (or matrices) covering any combination of the row, column and core space of the data; and 2) Deterministic Post-processing performs conventional SVD on the reduced system from the Randomized Sketching stage. These approaches make only one pass over the data, assuming that the singular value spectrum decays rapidly. Power-iteration-based approaches (Musco & Musco, 2015) require multiple passes over the data and are used when the singular value spectrum decays slowly. This class of algorithms constructs a Krylov matrix, inspired by Block Lanczos (Golub & Underwood, 1977), to obtain a polynomial series expansion of the sketch. Although these algorithms achieve lower tail-energy errors, they cannot be used in big-data applications when X itself is too large to be retained in main memory. Here, constructing a Krylov matrix with higher-order terms such as (AA^T or A^TA) is not feasible1. Due to main-memory restrictions on remote compute machines, streaming algorithms (Clarkson & Woodruff, 2009; Liberty, 2013) became popular. For low-rank SVD approximations these involve streaming the data and updating low-memory sketches covering the row, column and core spaces. Existing randomized SVD methods capable of streaming include Halko et al. (2011); Upadhyay (2016); Tropp et al.
(2017a; 2019), each with different sketch sizes and upper bounds on approximation errors (Table 1). SketchySVD (Tropp et al., 2019) is the state-of-the-art streaming randomized SVD, with sketch sizes comparable to its predecessors, tighter upper bounds on the tail energy, and lower errors. As a two-stage approach, SketchySVD (Alg. 1) constructs an over-estimated rank-(k, s) sketch of the data based on row, column and core projections. A QR decomposition on the row and column sketches gives an estimate of the rank-k subspace. This is followed by a conventional SVD on the core matrix to extract its singular values and vectors. Finally, the singular vectors are returned after projecting them back to the original row and column space. The time cost of SketchySVD is O(k²(m + n)) and the memory cost is O(k(m + n) + s²), with oversampling parameters k = 4r + 1 and s = 2k + 1.
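For reference, the two-stage randomized SVD scheme of Halko et al. (2011) fits in a few lines (a standard textbook sketch, not SketchySVD itself; the function name is ours):

```python
import numpy as np

def randomized_svd(X, r, oversample=10, rng=None):
    """Two-stage randomized SVD (Halko et al., 2011):
    1) sketch the column space with a Gaussian test matrix,
    2) run an exact SVD on the small projected matrix."""
    if rng is None:
        rng = np.random.default_rng()
    m, n = X.shape
    G = rng.normal(size=(n, r + oversample))    # random test matrix
    Q, _ = np.linalg.qr(X @ G)                  # orthonormal range basis
    B = Q.T @ X                                 # small (r+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r], Vt[:r]
```

When the spectrum decays slowly, the single-pass sketch misses tail energy, which is the failure mode the Range-Net paper targets.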
The paper presents a multi-pass streaming algorithm for rank-r SVD. Given an input matrix X ∈ R^{m×n}, the algorithm identifies two matrices V* ∈ R^{n×r} and H ∈ R^{r×r}. V* has orthonormal columns that span the subspace of the top-r right singular vectors. H rotates V* so that V*H = V_r, where V_r ∈ R^{n×r} is the matrix of the top-r right singular vectors. V* and H are computed by minibatch gradient descent, trained until convergence with custom loss functions: V* is computed in the first stage, and then H in the second stage. The algorithm is accurate and uses very little storage.
SP:48a6bc4788a7452c42442906afbb8a2668951602
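Range-Net's exact network and losses are given in the paper; as a rough stand-in with the same memory profile (state of size n×r, rows read batch-by-batch), the first stage can be approximated with an Oja-style streaming subspace iteration, a plainly different optimizer than the paper's. The function name and learning rate below are our own:

```python
import numpy as np

def streaming_top_subspace(batches, n, r, epochs=50, lr=1e-2, rng=None):
    """Oja-style streaming estimate of the top-r right singular subspace
    (a simplified stand-in for Range-Net's stage-1 optimization).
    Memory is O(n*r), independent of the number of rows m."""
    if rng is None:
        rng = np.random.default_rng()
    V, _ = np.linalg.qr(rng.normal(size=(n, r)))
    for _ in range(epochs):
        for Xb in batches():                # stream the rows batch-by-batch
            V = V + lr * Xb.T @ (Xb @ V)    # ascent on tr(V^T X^T X V)
            V, _ = np.linalg.qr(V)          # re-orthonormalize the basis
    return V

# Stage 2 analogue: an SVD of the small m x r matrix X @ V yields the
# rotation aligning the learnt basis with the individual singular vectors.
```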
The paper presents a streaming method to compute an approximate Singular Value Decomposition without requiring the entire data matrix to be loaded into main memory. The first stage identifies a basis by enforcing orthonormality while minimizing the error between the input and its reconstruction from the learnt basis. The second stage is a rotation step, which aligns the extracted low-rank approximation with the SVD factors.
SP:48a6bc4788a7452c42442906afbb8a2668951602
Range-Net: A High Precision Neural SVD
For Big Data applications , computing a rank-r Singular Value Decomposition ( SVD ) is restrictive due to the main memory requirements . Recently introduced streaming Randomized SVD schemes work under the restrictive assumption that the singular value spectrum of the data has an exponential decay . This is seldom true for any practical data . Further , the approximation errors in the singular vectors and values are high due to the randomized projection . We present Range-Net as a low memory alternative to rank-r SVD that satisfies the lower bound on tail-energy given by Eckart-Young-Mirsky ( EYM ) theorem at machine precision . Range-Net is a deterministic two-stage neural optimization approach with random initialization , where the memory requirement depends explicitly on the feature dimension and desired rank , independent of the sample dimension . The data samples are read in a streaming manner with the network minimization problem converging to the desired rank-r approximation . Range-Net is fully interpretable where all the network outputs and weights have a specific meaning . We provide theoretical guarantees that Range-Net extracted SVD factors satisfy EYM tail-energy lower bound with numerical experiments on real datasets at various scales that confirm these bounds . A comparison against the state-of-the-art streaming Randomized SVD shows that Range-Net is six orders of magnitude more accurate in terms of tail energy while correctly extracting the singular values and vectors . 1 INTRODUCTION . Singular Value Decomposition ( SVD ) is pivotal to exploratory data analysis in identifying an invariant structure under a minimalistic representation ( assumptions on the structure ) containing the span of resolvable information in the dataset . Finding a low rank structure is a fundamental task in applications including Image Compression ( de Souza et al. , 2015 ) , Image Recovery ( Brand , 2002 ) , Background Removal ( Wang et al. 
, 2018 ) , Recommendation Systems ( Zhang et al. , 2005 ) and as a pre-processing step for Clustering ( Drineas et al. , 2004 ) and Classification ( Jing et al. , 2017 ) . With the advent of digital sensors and modern day data acquisition technologies , the sheer amount of data now requires that we revisit the solution scheme with reduced memory consumption as the target . In this work , we reformulate SVD with special emphasis on the main memory requirement , with no loss in accuracy , that precludes it ’ s use for big data applications . It is well known that natural data matrices have a decaying spectrum wherein saving the data in memory in its original form is either redundant or not required from an application point of view . However , any assumption on the decay rate can only be validated if the singular value decomposition is known a priori , which is seldom the case in exploratory data analysis ( see Fig . 3 ) . Visually assessing a rank-r approximation for image processing applications might seem correct qualitatively ( see Fig . 4 ) but are still prone to large errors due to limited human vision acuity ( see Fig . 6 ) . This is further exacerbated when the application at hand is associated with scientific computations wherein the anomalies or unaccounted phenomena are still being explored from large scale datasets . The reader is preemptively referred to Fig . 19 where the high frequency features related to turbulence can not be disregarded . Furthermore , for classification and clustering problems where feature dimension reduction is desirable it is imperative that a low-rank approximation of a dataset contains most ( ≥ 90 % ) of the original information content without altering the subspace information . In this case , an over-sampled rank can exceed the feature dimension of the data itself ( see Section 4.2 ) . 1.1 PROBLEM STATEMENT . Let us denote the raw data matrix as X ∈ Rm×n of rank f ≤ min ( m , n ) and its approximation as Xr ∈ Rm×n . 
The SVD of X is X = UΣV T , where U ∈ Rm×f = [ u1 , · · · , uf ] and V ∈ Rn×f = [ v1 , · · · , vf ] are its left and right singular vectors respectively , and Σ ∈ Rf×f = diag ( σ1 , · · · , σf ) are its largest non-zero singular values . The rank r ( r ≤ f ) truncation of X is then Xr = UrΣrV Tr , where Σr = Σ [ 1 : r ] are the top r singular values , and Ur = U [ 1 : r ] and Vr = V [ 1 : r ] are the left and right singular vectors . In other words , X = UΣV T = UrΣrV Tr + Uf\rΣf\rV T f\r = Xr + Xf\r . Here , Uf\r , Vf\r are the trailing f − r left and right singular vectors . Theorem 1 . Eckart-Young-Mirsky Theorem ( Eckart & Young , 1936 ; Mirsky , 1960 ) : Let X ∈ Rm×n be a real , rank-f , matrix with m ≥ n with the singular value decomposition as X = UΣV T , where the orthonormal matrices U , V contain the left and right singular vectors of X and Σ is a diagonal matrix of singular values . Then for an arbitrary rank-r , r ≤ f matrix Br ∈ Rm×n , ‖X −Br‖F ≥ ‖X −Xr‖F where Xr = UrΣrVr with Σr is the diagonal matrix of the largest r singular values and Ur , Vr are the corresponding left and right singular vector matrices . The problem statement is then : Given X ∈ Rm×n find X̂ such that , arg min X̂∈Rm×n , rank ( X̂ ) ≤r ‖X − X̂‖F ( 1 ) In effect , the minimizer X̂∗ of the above problem gives us the rank-r approximation of X such that Xr = X̂∗ . In this work we utilize the minimizer of the above problem to extract the top rank-r SVD factors of X without loading the entire data matrix into the main memory . Note that the minimizer naturally gives the lower bound on this tail energy in addition to being a rank-r approximation . 1.2 MAIN CONTRIBUTIONS . Data and Representation Driven Neural SVD : The representation driven network loss terms ensures that the data matrix X is decomposed into the desired SVD factors such that X = UΣV T . In the absence of the representation enforcing loss term , the minimizer of Eq . 
1 results in an arbitrary decomposition X = ABC, different from the SVD factors. A Deterministic Approach with GPU Bit-precision Results: The network can be initialized with weights drawn from a random distribution, and the iterative minimization is deterministic. The streaming order of the samples is of no consequence, and the user is free to choose the order in which the samples are processed in a batch-wise manner (indexed or randomized). First Streaming Architecture with Exact Low Memory Cost: Range-Net requires an exact memory specification based upon the desired rank r and data dimensions X ∈ R^{m×n}, given by r(n + r), independent of the sample dimension m. This is the first streaming algorithm that does not require the user to wait until the streaming step is complete, contrary to randomized streaming algorithms. Layer-wise Fully Interpretable: Range-Net is a low-weight, fully interpretable, dense neural network where all the network weights and outputs have a precise definition. The network weights are placeholders for the right (or left) orthonormal vectors upon convergence of the network minimization problems (see Appendix D). The user can explicitly plug in a ground-truth solution to verify the network design and directly arrive at the tail energy bound. 2 RELATED WORKS. The core idea behind randomized matrix decomposition is to make one or more passes over the data and compute efficient sketches. These can be broadly categorized into four branches: 1) Sampling-based methods (Subset Selection (Boutsidis et al., 2014) and Randomized CUR (Drineas et al., 2006)); 2) Random-projection-based QR (Halko et al., 2011); 3) Randomized SVD (Halko et al., 2011); and 4) Power iteration methods (Musco & Musco, 2015). The sketches can represent any combination of the row space, column space, or the space generated by the intersection of rows and columns (core space).
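The rank-r truncation and the Eckart-Young-Mirsky bound of Theorem 1 can be verified numerically in a few lines. The following is an illustrative numpy sketch on a random test matrix, not part of Range-Net itself:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 5
X = rng.standard_normal((m, n))

# Full SVD: X = U diag(s) V^T.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rank-r truncation Xr = Ur Sigma_r Vr^T.
Xr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# The optimal Frobenius tail energy equals the norm of the trailing
# singular values sigma_{r+1}, ..., sigma_f.
tail = np.linalg.norm(X - Xr, "fro")
assert np.isclose(tail, np.linalg.norm(s[r:]))

# Eckart-Young-Mirsky: any other rank-r matrix Br does at least as badly.
B = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # random rank-r
assert np.linalg.norm(X - B, "fro") >= tail
```

This is exactly the tail-energy lower bound that the minimizer of Eq. 1 attains.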
However, all of these methods require loading the entire data into memory. Readers are referred to Kishore Kumar & Schneider (2017); Ye et al. (2019) for an expanded survey. Conventional SVD, although deterministic and accurate, becomes expensive as the data size increases and requires r passes over the data (see Table 1). The two branches of interest to us are randomized SVD and power iteration methods for extracting SVD factors. Randomized SVD algorithms (Halko et al., 2011) are generally a two-stage process: 1) Randomized Sketching uses random sampling to obtain a reduced matrix (or matrices) covering any combination of the row, column, and core space of the data; and 2) Deterministic Post-processing performs conventional SVD on the reduced system from the sketching stage. These approaches make only one pass over the data, assuming that the singular value spectrum decays rapidly. Power iteration approaches (Musco & Musco, 2015) require multiple passes over the data and are used when the singular value spectrum decays slowly. This class of algorithms constructs a Krylov matrix, inspired by Block Lanczos (Golub & Underwood, 1977), to obtain a polynomial series expansion of the sketch. Although these algorithms achieve lower tail-energy errors, they cannot be used in big-data applications when X itself is too large to be retained in main memory: constructing a Krylov matrix with higher-order terms such as AA^T or A^T A is not feasible1. Due to main-memory restrictions on remote compute machines, streaming algorithms (Clarkson & Woodruff, 2009; Liberty, 2013) became popular. For low-rank SVD approximations these involve streaming the data and updating low-memory sketches covering the row, column, and core spaces. Existing randomized SVD algorithms capable of streaming include Halko et al. (2011); Upadhyay (2016); Tropp et al.
(2017a; 2019), each with different sketch sizes and upper bounds on approximation errors (Table 1). SketchySVD (Tropp et al., 2019) is the state-of-the-art streaming randomized SVD, with sketch sizes comparable to its predecessors, tighter upper bounds on the tail energy, and lower errors. As a two-stage approach, SketchySVD (Alg. 1) constructs an overestimated rank-(k, s) sketch of the data based on row, column, and core projections. A QR decomposition on the row and column sketches gives an estimate of the rank-k subspace. This is followed by a conventional SVD on the core matrix to extract its singular values and vectors. Finally, the singular vectors are returned after projecting them back to the original row and column space. The time cost of SketchySVD is O(k^2(m + n)) with memory cost O(k(m + n) + s^2), with oversampling parameters k = 4r + 1 and s = 2k + 1.
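As a concrete sketch of the two-stage randomized SVD described above (randomized sketching followed by deterministic post-processing on the reduced system), the following numpy implementation follows the Halko et al. (2011) recipe with subspace iteration. It is a hedged illustration: the function name, oversampling choice, and iteration count are ours, and this is the basic single-sketch variant, not SketchySVD's three-sketch streaming scheme.

```python
import numpy as np

def randomized_svd(X, r, oversample=10, n_iter=2, seed=0):
    """Two-stage randomized SVD sketch (after Halko et al., 2011).

    Stage 1: a Gaussian test matrix sketches the range of X;
    Stage 2: conventional SVD of the small reduced matrix B = Q^T X.
    """
    rng = np.random.default_rng(seed)
    k = r + oversample
    # Stage 1: randomized sketching of the column space.
    Y = X @ rng.standard_normal((X.shape[1], k))
    # Subspace (power) iterations sharpen the basis when the spectrum
    # decays slowly; re-orthonormalize each pass for numerical stability.
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(Y)
        Y = X @ (X.T @ Q)
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the sketch
    # Stage 2: deterministic post-processing on the (k x n) matrix.
    B = Q.T @ X
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r], Vt[:r, :]

# On an exactly low-rank matrix the approximation is near-exact.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 100))  # rank <= 50
U, s, Vt = randomized_svd(X, r=50)
err = np.linalg.norm(X - U @ np.diag(s) @ Vt) / np.linalg.norm(X)
assert err < 1e-6
```

Note that forming `X @ (X.T @ Q)` needs all of X resident, which is precisely the main-memory limitation the streaming methods above are designed to avoid.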
This paper proposes a neural network architecture for computing a rank-r approximation of a matrix. The first part learns the projection and the second part learns the rotation. An experimental comparison against the state-of-the-art SketchySVD algorithm is provided on three datasets. The authors claim that the proposed method is extremely memory efficient compared to previous approaches.
SP:48a6bc4788a7452c42442906afbb8a2668951602
ModeRNN: Harnessing Spatiotemporal Mode Collapse in Unsupervised Predictive Learning
1 INTRODUCTION. Predictive learning is an unsupervised learning paradigm that has shown the ability to discover the spatiotemporal modes of visual dynamics (Xu et al., 2019; Goyal et al., 2021). However, for large-scale and real-world datasets (see Figure 1), the modes in visual dynamics can be highly entangled and difficult to learn due to the richness of data environments, the diversity of object interactions, and the complexity of motion patterns. For clarity, in the following discussion, spatiotemporal modes are considered to have the following properties: 1. A spatiotemporal mode refers to a representation subspace that corresponds to a family of similar, but not predefined, visual dynamics. 2. Multiple spatiotemporal modes naturally exist in real-world data, even in a single frame. 3. We assume the i.i.d. setup to allow all videos to share the same set of spatiotemporal modes in a dataset. Different data may have different compositional structures over the modes. Under these assumptions, video prediction models are required to (i) decouple the potentially mixed spatiotemporal modes from raw video frames, (ii) understand the compositional structures on top of the learned modes, and (iii) learn the state transitions based on the compositional structures. Otherwise, since the learned dynamics with respect to different modes may interfere and compete during training, it remains challenging for the prior art in video prediction to generate less blurry future frames based on an ambiguous understanding of mixed physical processes. We refer to this empirical phenomenon as spatiotemporal mode collapse (STMC), which is mainly caused by the collapse of learned representations into invalid subspaces when compromising to multiple spatiotemporal modes in the training set.
Unlike the widely studied mode collapse problem in generative adversarial networks, STMC has not drawn much attention because predictive learning is supposed to be well constrained by the image reconstruction loss. However, due to the limitation of model size, STMC occurs when the model cannot effectively decouple mixed spatiotemporal modes and infer their underlying structures. As a result, its responses to different modes tend to lose diversity and may collapse to a meaningless average of multiple representation subspaces of valid modes. In Figure 1 (left), we can observe the existence of STMC on a large-scale video dataset named RoboNet (Dasari et al., 2019), in which potential spatiotemporal modes may come from seven different robot platforms (e.g., Baxter and WidowX), four data collection environments (e.g., Berkeley and Stanford), and a variety of unlabeled robot control tasks (e.g., pushing and grasping). An additional outcome of STMC is that we can achieve a performance gain when training individual models on separate subsets with remarkably different visual dynamics, as shown in Figure 1 (right). However, such a dilemma prevents the model from growing into big ones that allow scalable training on large-scale, natively multimodal spatiotemporal sequences. We explore STMC for the first time in unsupervised predictive learning. The core idea is to provide a strong inductive bias for the predictive model to discover the compositional structures of latent modes. To this end, we propose ModeRNN, a new modular recurrent architecture that learns structured hidden representations through a set of mode slots1, where each of them responds to the representation subspace of a single spatiotemporal mode. ModeRNN also introduces a decoupling-aggregation framework to process the slot features in three stages, which is completely different from existing predictive models with modular architectures (Xu et al., 2019; Goyal et al.
, 2021). The first stage is recurrent state interaction and slot binding, in which we use the multi-head attention mechanism (Vaswani et al., 2017) to enable the memory state to interact with the input state and previous hidden state of RNNs. We name the memory state “slot bus” because, for each sequence, it is initialized from a multivariate Gaussian distribution with learnable parameters and thereafter refined using the slot features at each time step. By using the slot bus as the queries, multi-head attention can naturally decouple modular components from hidden representations and bind them to particular mode slots. Features in each slot are then independently modeled using per-slot convolutional parameters. The second stage in each ModeRNN unit is slot fusion, motivated by the assumption that there can be multiple spatiotemporal modes in a single video and that similar videos can be represented by similar compositional structures over the mode slots. Therefore, we assign slot features with learnable importance weights and aggregate them into a unified hidden representation, which is then used in the third stage to update the slot bus and generate the output state of the ModeRNN unit. We empirically show the existence of STMC on five datasets, and include the results on three real-world datasets in the manuscript: the large-scale RoboNet dataset, which has various data collection environments and multiple robot control tasks; the KTH dataset with six types of human actions, which has been widely used in previous literature; and the radar echo dataset for precipitation forecasting, which contains time-varying modes of seasonal climates. In addition, we include results on a Mixed Moving MNIST dataset and the Human3.6M dataset in the appendix. In a series of quantitative and visualization results, we demonstrate the effectiveness of ModeRNN in mitigating STMC and learning from highly entangled visual dynamics.
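The three-stage recurrent unit described above can be sketched in plain numpy. Everything below is a simplified, hypothetical illustration (single-head rather than multi-head attention, dense vectors instead of convolutional feature maps, random placeholders standing in for learned weights), not the actual ModeRNN implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
S, d, T = 4, 16, 32        # mode slots, feature dim, number of state tokens

slot_bus = rng.standard_normal((S, d))   # queries: one per mode slot
states = rng.standard_normal((T, d))     # input state + previous hidden state

# Stage 1: slot binding -- each slot-bus query attends over the state
# features, decoupling modular components into per-slot features.
attn = softmax(slot_bus @ states.T / np.sqrt(d))   # (S, T) attention weights
slot_feats = attn @ states                          # (S, d) per-slot features

# Stage 2: adaptive slot fusion -- importance weights (random placeholders
# for learned ones) aggregate the slots into one hidden representation.
importance = softmax(rng.standard_normal(S))
hidden = importance @ slot_feats                    # (d,) fused representation

# Stage 3: slot bus transition -- refine the bus with the new slot
# features (a placeholder convex update, not the paper's exact rule).
slot_bus = 0.5 * slot_bus + 0.5 * slot_feats
```

The point of the sketch is the data flow: queries come from the slot bus, the fused representation depends on per-sequence importance weights, and the bus is refined at every time step.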
1The concept of “slot” was initially introduced by Locatello et al. (2020) to denote the object-centric features in static scene understanding. We borrow this term here for the subspaces of spatiotemporal representations. 2 RELATED WORK. RNN-based predictive models. Many deep learning models based on RNNs have been proposed for spatiotemporal prediction (Ranzato et al., 2014; Srivastava et al., 2015; Shi et al., 2015; Oh et al., 2015; De Brabandere et al., 2016; Villegas et al., 2018). Shi et al. (2015) integrated 2D convolutions into the recurrent state transitions of the standard LSTM and proposed the convolutional LSTM (ConvLSTM) network, which can model the spatial correlations and temporal dynamics in a unified recurrent unit. More recent approaches have extended the prediction ability of ConvLSTM in different aspects (Wang et al., 2017; Oliu et al., 2018; Wang et al., 2019a;b; Yao et al., 2020; Guen & Thome, 2020; Yu et al., 2019; Su et al., 2020; Lin et al., 2020; Lee et al., 2021). For example, as an important compared model of our approach, SA-ConvLSTM (Lin et al., 2020) incorporates self-attention in the recurrent state transitions of ConvLSTM to obtain more global context information across time. However, unlike our approach, it does not learn decoupled representations to understand individual components in complex visual dynamics. Besides deterministic models, probabilistic models were proposed to explicitly consider the uncertainty in future prediction (Mathieu et al., 2016; Vondrick et al., 2016; Tulyakov et al., 2018; Xu et al., 2018; Wang et al., 2018; Denton & Fergus, 2018; Castrejon et al., 2019; Kwon & Park, 2019; Bhagat et al., 2020). We use a typical stochastic video generation approach (Denton & Fergus, 2018) based on conditional VAE as a compared model. Unsupervised predictive learning for spatiotemporal disentanglement.
Previous work has focused on learning to disentangle the spatial and temporal features of visual dynamics (Denton et al., 2017; Guen & Thome, 2020; Hsieh et al., 2018; Wu et al., 2021). These methods factorize spatiotemporal data into feature subspaces with strong priors, e.g., assuming that the spatial information is temporally invariant. Another line of work learns predictive models for unsupervised scene decomposition (Xu et al., 2019; Hsieh et al., 2018). Unlike the above models, our approach uses a set of modular architectures in the recurrent unit to represent the mixed spatiotemporal dynamics. The most relevant work to our method is the Recurrent Independent Mechanisms (RIM) model (Goyal et al., 2021), which consists of largely independent recurrent modules that are sparsely activated and interact via soft attention. ModeRNN differs from RIM in three aspects. First, it is specifically designed to tackle STMC in real-world environments. Second, it learns modular features by incorporating multi-head attention in the recurrent unit, and performs state transitions on compositional features with learnable importance weights. Third, the modular structures in ModeRNN are frequently activated in response to the mixed visual dynamics. ModeRNN is compared with the state of the art in Section 4, including SA-ConvLSTM (Lin et al., 2020), PhyDNet (Guen & Thome, 2020), CrevNet (Yu et al., 2019), RIM (Goyal et al., 2021), and LMC (Lee et al., 2021). 3 MODERNN. We propose ModeRNN to reduce spatiotemporal mode collapse (STMC) in unsupervised predictive learning. The key idea is to build a decoupling-aggregation framework to model the recurrent state transitions of mixed spatiotemporal modes. In this section, we first discuss the basic network components in ModeRNN and then describe the details of the decoupling-aggregation recurrent unit. 3.1 MODE SLOTS & SLOT BUS. Mode slots.
The decoupling-aggregation framework is built upon a set of hidden representations named mode slots. The term slot is in part borrowed from previous work on unsupervised scene decomposition (Locatello et al., 2020). We use it here to refer to a family of similar visual dynamics; that is, we aim to bind each mode slot one-to-one to the representation subspace of a spatiotemporal mode. Slot features can be viewed as latent factors that explicitly improve the unsupervised decoupling of mixed dynamics across the dataset. Slot bus. Assuming that multiple spatiotemporal modes naturally co-exist in real-world videos, all slots dynamically respond with different importance weights to form compositional representations, which are then used to update a long-term memory state, termed the slot bus. The hierarchical structure of mode slots and the slot bus leads to a better understanding of complex and highly mixed dynamic patterns without mode annotations. From similar data samples, the model is allowed to learn similar compositional structures over the slots. On the contrary, for distinct visual dynamics, it shows significant differences in the learned importance weights used to update the slot bus features. Therefore, it provides a solution to STMC. Specifically, the slot bus is initialized from a learnable multivariate Gaussian distribution, whose mean and variance encode the global priors for the entire dataset. 3.2 MODECELL. To learn and leverage the mode slots, we introduce a novel recurrent unit named ModeCell, which follows a decoupling-aggregation framework with three modules, i.e., the state interaction and slot binding module, the adaptive slot fusion module, and the slot bus transition module.
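The per-sequence initialization of the slot bus from a learnable Gaussian can be sketched with the usual reparameterization trick. In the sketch below, `mu` and `log_var` are placeholders for parameters that would be learned end-to-end; this is an assumption-laden illustration in numpy, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                      # slot bus feature dimension (illustrative)
mu = np.zeros(d)            # learnable mean (placeholder values)
log_var = np.zeros(d)       # learnable log-variance (placeholder values)

def init_slot_bus(rng, mu, log_var):
    # Reparameterized sample bus = mu + sigma * eps, so that gradients
    # could flow to mu and log_var in an autodiff framework.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# One fresh slot bus per input sequence.
bus = init_slot_bus(rng, mu, log_var)
```

Because mean and variance are shared across the dataset, the initialization encodes global priors, while each sequence still starts from its own random draw.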
This paper proposes a mechanism to reduce spatiotemporal mode collapse in unsupervised predictive learning. To achieve this goal, the proposed method is built upon the idea that different latent modes in the same data domain should share a set of hidden representation subspaces, which can be represented with various compositional structures based on the features in each subspace. Experimental results show improvement over simpler baselines such as PredRNN or ConvLSTM.
SP:ebf813b9144287331c03dbdcb95dcc0ee7b11b8f
The manuscript proposes a novel architecture with a slot-based decoupling-aggregation framework for unsupervised sequence prediction. The model is motivated by preventing spatio-temporal mode collapse, which affects many existing methods. The experiments clearly show that the model addresses this issue and performance comparisons across three commonly used datasets are presented.
SP:ebf813b9144287331c03dbdcb95dcc0ee7b11b8f
ModeRNN: Harnessing Spatiotemporal Mode Collapse in Unsupervised Predictive Learning
1 INTRODUCTION . Predictive learning is an unsupervised learning paradigm that has shown the ability to discover the spatiotemporal modes of visual dynamics ( Xu et al. , 2019 ; Goyal et al. , 2021 ) . However , for largescale and real-world datasets ( see Figure 1 ) , the modes in visual dynamics can be highly entangled and difficult to learn due to the richness of data environments , the diversity of object interactions , and the complexity of motion patterns . For clarity , in the following discussion , spatiotemporal modes are considered to have the following properties : 1 . A spatiotemporal mode refers to a representation subspace that corresponds to a family of similar , but not predefined , visual dynamics . 2 . Multiple spatiotemporal modes naturally exist in real-world data , even in a single frame . 3 . We assume the i.i.d . setup to allow all videos to share the same set of spatiotemporal modes in a dataset . Different data may have different compositional structures over the modes . Under these assumptions , video prediction models are required to ( i ) decouple the potentially mixed spatiotemporal modes from raw video frames , ( ii ) understand the compositional structures on top of the learned modes , and ( iii ) learn the state transitions based on the compositional structures . Otherwise , since the learned dynamics with respect to different modes may interfere and compete during training , it remains challenging for the prior art in video prediction to generate less blurry future frames based on an ambiguous understanding of mixed physical processes . We refer to this empirical phenomenon as spatiotemporal mode collapse ( STMC ) , which is mainly caused by the collapse of learned representations into invalid subspaces when compromising to multiple spatiotemporal modes in the training set . 
Unlike the widely concerned mode collapse problem in generative adversarial networks , STMC has not drawn much attention because predictive learning is supposed to be well constrained by the image reconstruction loss . However , due to the limitation of model size , STMC occurs when the model can not effectively decouple mixed spatiotemporal modes and infer their underlying structures . As a result , its responses to different modes tend to lose diversity and may collapse to a meaningless average of multiple representation subspaces of valid modes . In Figure 1 ( left ) , we can observe the existence of STMC on a large-scale video dataset named RoboNet ( Dasari et al. , 2019 ) , in which potential spatiotemporal modes may come from seven different robot platforms ( e.g. , Baxter and WidowX ) , four data collection environments ( e.g. , Berkeley and Stanford ) , and a variety of unlabeled robot control tasks ( e.g. , pushing and grasping ) . An additional outcome of STMC is that we can achieve a performance gain when training individual models in separate subsets with remarkably different visual dynamics , as shown in Figure 1 ( right ) . However , such a dilemma prevents the model from growing into big ones that allow scalable training on large-scale , natively multimodal spatiotemporal sequences . We explore STMC for the first time in unsupervised predictive learning . The core idea is to provide a strong inductive bias for the predictive model to discover the compositional structures of latent modes . To this end , we propose ModeRNN , a new modular recurrent architecture that learns structured hidden representations through a set of mode slots1 , where each of them responds to the representation subspace of a single spatiotemporal mode . ModeRNN also introduces a decoupling-aggregation framework to process the slot features in three stages , which is completely different from existing predictive models with modular architectures ( Xu et al. , 2019 ; Goyal et al. 
, 2021). The first stage is recurrent state interaction and slot binding, in which we use the multi-head attention mechanism (Vaswani et al., 2017) to enable the memory state to interact with the input state and the previous hidden state of the RNN. We name the memory state the "slot bus", because for each sequence it is initialized from a multivariate Gaussian distribution with learnable parameters, and thereafter refined using the slot features at each time step. By using the slot bus as the queries, multi-head attention can naturally decouple modular components from hidden representations and bind them to particular mode slots. Features in each slot are then independently modeled using per-slot convolutional parameters. The second stage in each ModeRNN unit is slot fusion, motivated by the assumption that there can be multiple spatiotemporal modes in a single video and that similar videos can be represented by similar compositional structures over the mode slots. Therefore, we assign slot features learnable importance weights and aggregate them into a unified hidden representation, which is then used in the third stage to update the slot bus and generate the output state of the ModeRNN unit. We empirically show the existence of STMC on five datasets, and include the results on three real-world datasets in the manuscript: the large-scale RoboNet dataset with various data collection environments and multiple robot control tasks, the KTH dataset with six types of human actions that has been widely used in previous literature, and a radar echo dataset for precipitation forecasting that contains time-varying modes of seasonal climates. In addition, we include results on a Mixed Moving MNIST dataset and the Human3.6M dataset in the appendix. Through a series of quantitative and visualization results, we demonstrate the effectiveness of ModeRNN in mitigating STMC and learning from highly entangled visual dynamics.
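The slot-binding stage above relies on standard multi-head attention (Vaswani et al., 2017) with the slot bus providing the queries. A minimal numpy sketch of that mechanism follows; the shapes, head count, and single-projection setup are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def multi_head_attention(Q, K, V, n_heads):
    """Scaled dot-product attention with n_heads; Q: (nq, d), K and V: (nk, d).

    Used here as in the slot-binding stage: queries come from the slot bus,
    keys/values from the input state and previous hidden state of the RNN.
    """
    nq, d = Q.shape
    dh = d // n_heads  # per-head dimension (assumes d divisible by n_heads)
    out = np.zeros((nq, d))
    for h in range(n_heads):
        q = Q[:, h*dh:(h+1)*dh]; k = K[:, h*dh:(h+1)*dh]; v = V[:, h*dh:(h+1)*dh]
        scores = q @ k.T / np.sqrt(dh)                    # (nq, nk)
        scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
        attn = np.exp(scores); attn /= attn.sum(axis=-1, keepdims=True)
        out[:, h*dh:(h+1)*dh] = attn @ v                  # bind features to each query
    return out

rng = np.random.default_rng(0)
n_slots, d = 4, 8
bus = rng.standard_normal((n_slots, d))        # slot bus: one query per mode slot
states = rng.standard_normal((2, d))           # "tokens": input state and hidden state
slots = multi_head_attention(bus, states, states, n_heads=2)
print(slots.shape)  # (4, 8): decoupled features, one per mode slot
```

Because each output row is a convex combination of the value rows, each mode slot extracts a soft mixture of the two recurrent states, weighted by how strongly that slot's query matches them.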
¹The concept of "slot" was initially introduced by Locatello et al. (2020) to denote object-centric features in static scene understanding. We borrow the term here for the subspaces of spatiotemporal representations. 2 RELATED WORK . RNN-based predictive models . Many deep learning models based on RNNs have been proposed for spatiotemporal prediction (Ranzato et al., 2014; Srivastava et al., 2015; Shi et al., 2015; Oh et al., 2015; De Brabandere et al., 2016; Villegas et al., 2018). Shi et al. (2015) integrated 2D convolutions into the recurrent state transitions of the standard LSTM and proposed the convolutional LSTM (ConvLSTM) network, which can model spatial correlations and temporal dynamics in a unified recurrent unit. More recent approaches have extended the predictive ability of ConvLSTM in different aspects (Wang et al., 2017; Oliu et al., 2018; Wang et al., 2019b;a; Yao et al., 2020; Guen & Thome, 2020; Yu et al., 2019; Su et al., 2020; Lin et al., 2020; Lee et al., 2021). For example, as an important baseline for our approach, SA-ConvLSTM (Lin et al., 2020) incorporates self-attention into the recurrent state transitions of ConvLSTM to obtain more global context information across time. However, unlike our approach, it does not learn decoupled representations to understand individual components in complex visual dynamics. Besides deterministic models, probabilistic models have been proposed to explicitly consider the uncertainty of future prediction (Mathieu et al., 2016; Vondrick et al., 2016; Tulyakov et al., 2018; Xu et al., 2018; Wang et al., 2018; Denton & Fergus, 2018; Castrejon et al., 2019; Kwon & Park, 2019; Bhagat et al., 2020). We use a typical stochastic video generation approach (Denton & Fergus, 2018) based on a conditional VAE as a baseline. Unsupervised predictive learning for spatiotemporal disentanglement .
Previous work has focused on learning to disentangle spatial and temporal features from visual dynamics (Denton et al., 2017; Guen & Thome, 2020; Hsieh et al., 2018; Wu et al., 2021). These methods factorize spatiotemporal data into feature subspaces with strong priors, e.g., assuming that the spatial information is temporally invariant. Another line of work learns predictive models for unsupervised scene decomposition (Xu et al., 2019; Hsieh et al., 2018). Unlike the above models, our approach uses a set of modular architectures in the recurrent unit to represent mixed spatiotemporal dynamics. The most relevant work to our method is Recurrent Independent Mechanisms (RIM) (Goyal et al., 2021), which consist of largely independent recurrent modules that are sparsely activated and interact via soft attention. ModeRNN differs from RIM in three aspects. First, it is specifically designed to tackle STMC in real-world environments. Second, it learns modular features by incorporating multi-head attention into the recurrent unit, and performs state transitions on compositional features with learnable importance weights. Third, the modular structures in ModeRNN are frequently activated in response to the mixed visual dynamics. ModeRNN is compared with the state of the art in Section 4, including SA-ConvLSTM (Lin et al., 2020), PhyDNet (Guen & Thome, 2020), CrevNet (Yu et al., 2019), RIM (Goyal et al., 2021), and LMC (Lee et al., 2021). 3 MODERNN . We propose ModeRNN to reduce spatiotemporal mode collapse (STMC) in unsupervised predictive learning. The key idea is to build a decoupling-aggregation framework to model the recurrent state transitions of mixed spatiotemporal modes. In this section, we first discuss the basic network components of ModeRNN and then describe the details of the decoupling-aggregation recurrent unit. 3.1 MODE SLOTS & SLOT BUS . Mode slots .
The decoupling-aggregation framework is built upon a set of hidden representations named mode slots. The term slot is in part borrowed from previous work on unsupervised scene decomposition (Locatello et al., 2020). We use it here for representations that respond to a family of similar visual dynamics; that is, we aim to bind each mode slot one-to-one to the representation subspace of a spatiotemporal mode. Slot features can be viewed as latent factors that explicitly improve the unsupervised decoupling of mixed dynamics across the dataset. Slot bus . Assuming that multiple spatiotemporal modes naturally co-exist in real-world videos, all slots dynamically respond with different importance weights to form compositional representations, which are then used to update a long-term memory state, termed the slot bus. The hierarchical structure of mode slots and the slot bus leads to a better understanding of complex and highly mixed dynamic patterns without mode annotations. From similar data samples, the model is encouraged to learn similar compositional structures over the slots. In contrast, for distinct visual dynamics, the model learns significantly different importance weights to update the slot bus features. It therefore provides a solution to STMC. Specifically, the slot bus is initialized from a learnable multivariate Gaussian distribution, whose mean and variance encode the global priors for the entire dataset. 3.2 MODECELL . To learn and leverage the mode slots, we introduce a novel recurrent unit named ModeCell, which follows a decoupling-aggregation framework with three modules, i.e., the state interaction and slot binding module, the adaptive slot fusion module, and the slot bus transition module.
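To make the three-module structure concrete, here is a deliberately simplified, hypothetical sketch of one ModeCell step on flat feature vectors. The actual model operates on convolutional states with multi-head attention; all shapes, parameter names, and the simple convex slot-bus update below are our own illustrative assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def modecell_step(x, h, bus, params):
    """One decoupling-aggregation step: bind -> fuse -> update slot bus."""
    # Module 1: state interaction and slot binding.
    # The slot bus provides one query per mode slot; keys/values come from
    # the input state x and previous hidden state h (single head for brevity).
    d = x.shape[0]
    kv = np.stack([x, h])                                    # (2, d)
    attn = softmax((bus @ params['Wq']) @ (kv @ params['Wk']).T / np.sqrt(d))
    slots = attn @ (kv @ params['Wv'])                       # (n_slots, d)
    slots = np.tanh(np.einsum('sij,sj->si', params['Wslot'], slots))  # per-slot params
    # Module 2: adaptive slot fusion with learnable importance weights.
    w = softmax(params['w_fuse'])                            # (n_slots,)
    h_next = w @ slots                                       # unified hidden representation
    # Module 3: slot bus transition (a simple convex update stands in here).
    bus_next = 0.9 * bus + 0.1 * slots
    return h_next, bus_next

rng = np.random.default_rng(0)
n_slots, d = 4, 8
params = {
    'Wq': rng.standard_normal((d, d)) / np.sqrt(d),
    'Wk': rng.standard_normal((d, d)) / np.sqrt(d),
    'Wv': rng.standard_normal((d, d)) / np.sqrt(d),
    'Wslot': rng.standard_normal((n_slots, d, d)) / np.sqrt(d),  # per-slot parameters
    'w_fuse': rng.standard_normal(n_slots),
}
bus = rng.standard_normal((n_slots, d))   # initialized from a learnable Gaussian prior
h = np.zeros(d)
for t in range(3):                        # the bus is refined at each time step
    x = rng.standard_normal(d)
    h, bus = modecell_step(x, h, bus, params)
print(h.shape, bus.shape)  # (8,) (4, 8)
```

The sketch highlights the design choice described above: decoupling happens per slot, but the output state is a single weighted aggregate, so similar sequences can share similar importance weights over the slots.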
This paper defines a phenomenon, spatiotemporal mode collapse (STMC), in the training of unsupervised predictive models. The authors propose an RNN-based approach to learning structured hidden representations in temporal data. The proposed idea is evaluated against the convolutional LSTM baseline and several other temporal modeling methods (e.g., RIM and Conv-TT-LSTM).
Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver
1 INTRODUCTION . In reinforcement learning, an agent repeatedly interacts with an unknown environment in order to maximize the cumulative reward. To achieve this goal, an RL algorithm must be equipped with effective exploration mechanisms to learn the unknown environment and find a near-optimal policy. Efficient exploration is critical to the success of reinforcement learning algorithms and has been widely investigated from both the empirical and the theoretical perspectives (e.g., Stadie et al. (2015); Pathak et al. (2017); Azar et al. (2017); Jin et al. (2018)). Model-based RL is one of the important approaches to solving RL problems. In model-based RL, the agent learns a model of the environment and then performs planning in the estimated model. It has been widely applied in many RL scenarios, including both the online setting (Kaiser et al., 2019; Luo et al., 2019; Azar et al., 2017) and the offline setting (Yu et al., 2020; Kidambi et al., 2020). It is also believed that model-based RL is significantly more sample-efficient than model-free RL, which has been supported by many recent empirical results (e.g., Kaiser et al. (2019); Wang et al. (2019)). Although theoretical model-based learning in small-scale problems has been studied extensively (Azar et al., 2017; Zhou et al., 2020a; Jin et al., 2020a), the picture is still far from complete, especially in the presence of a function approximator. As an important instance of model-based methods, the power of the plug-in approach has been studied in several works (Cui & Yang, 2020; Agarwal et al., 2020). The idea of the plug-in approach is rather simple: construct an empirical Markov decision process (MDP) using maximum likelihood estimation, then return the (approximately) optimal policy computed by an efficient planning algorithm in this empirical model. The significance of plug-in approaches is twofold.
For one thing, the approach preserves an empirical model that retains the values of the policies, which is of independent interest. For another, the empirical model can be reused for any downstream task, which makes the approach much more flexible. It has been shown that the plug-in approach achieves the minimax sample complexity for computing $\epsilon$-optimal policies with a generative model in both the tabular (Agarwal et al., 2020) and linear (Cui & Yang, 2020) settings. In this paper, we aim to understand the power of the plug-in approach for reward-free exploration with linear function approximation. We study linear mixture MDPs, where the transition probability kernel is a linear mixture of a number of basis kernels (Ayoub et al., 2020; Zhou et al., 2020b;a). We first build an empirical model by estimating the transition dynamics in the exploration phase, and then find a near-optimal policy by planning with the empirical model via an arbitrary plug-in solver in the planning phase. Our setting is different from reward-free exploration with linear function approximation without a plug-in model (Wang et al., 2020; Zanette et al., 2020b), in which the agent can directly access all history samples and design a specialized model-free algorithm in the planning phase. Compared to our results, the standard plug-in approach (Agarwal et al., 2020; Cui & Yang, 2020) cannot tackle the reward-free setting and may not predict an accurate value function under an arbitrarily given reward function after the exploration phase. Our results show that the plug-in approach can achieve near-optimal sample complexity in the reward-free setting. In particular, we propose a statistically efficient algorithm for reward-free exploration. Our algorithm samples $\tilde{O}(d^2H^4/\epsilon^2)$ trajectories during the exploration phase, which suffices to obtain $O(\epsilon)$-optimal policies for an arbitrary reward function with an $\epsilon$-optimal plug-in solver in the planning phase.
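As a concrete toy illustration of the plug-in idea, the tabular analogue can be sketched in a few lines (this is not the paper's linear mixture algorithm): estimate the transition kernel by maximum likelihood from exploration data, then hand the empirical MDP, together with any reward function, to an off-the-shelf planner. The function names and the uniform fallback for unvisited state-action pairs are our own assumptions:

```python
import numpy as np

def empirical_mdp(transitions, S, A, H):
    """Maximum likelihood estimate of P_h(s'|s,a) from (h, s, a, s') samples."""
    counts = np.zeros((H, S, A, S))
    for h, s, a, s2 in transitions:
        counts[h, s, a, s2] += 1
    n = counts.sum(axis=-1, keepdims=True)
    # Unvisited (h, s, a) fall back to a uniform distribution (an assumption).
    return np.divide(counts, n, out=np.full_like(counts, 1.0 / S), where=n > 0)

def plugin_solver(P_hat, R, H):
    """Backward induction on the empirical model, for any reward R of shape (H, S, A)."""
    S, A = R.shape[1], R.shape[2]
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R[h] + P_hat[h] @ V     # (S, A): Bellman backup under the empirical kernel
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi, V

# Reward-free flavor: exploration data is collected once, without any reward;
# afterwards the same empirical model is planned against arbitrary rewards.
rng = np.random.default_rng(0)
S, A, H = 3, 2, 4
P_true = rng.dirichlet(np.ones(S), size=(H, S, A))
data = []
for _ in range(2000):                         # exploration phase (random policy)
    s = 0
    for h in range(H):
        a = int(rng.integers(A))
        s2 = int(rng.choice(S, p=P_true[h, s, a]))
        data.append((h, s, a, s2)); s = s2
P_hat = empirical_mdp(data, S, A, H)
for _ in range(2):                            # planning phase: two arbitrary rewards
    R = rng.uniform(0, 1, size=(H, S, A))
    pi, V = plugin_solver(P_hat, R, H)
print(pi.shape, round(float(V[0]), 3))
```

The point of the sketch is the interface, not the statistics: the planner sees only the estimated model, which is exactly what allows swapping in an arbitrary plug-in solver after exploration.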
Here d is the feature dimension and H is the planning horizon. Furthermore, with a more refined trajectory-wise uncertainty estimation, we improve the sample complexity bound to $\tilde{O}(d^2H^3/\epsilon^2)$ in the regime where $d > H$ and $\epsilon \le H/\sqrt{d}$. This matches our lower bound $\Omega(d^2H^3/\epsilon^2)$ for reward-free exploration in linear mixture MDPs, which indicates that our upper bound is near-optimal up to logarithmic factors. To the best of our knowledge, this is the first work that obtains minimax sample complexity bounds for the plug-in approach in reward-free exploration with linear function approximation. 2 RELATED WORK . RL with Linear Function Approximation Reinforcement learning with linear function approximation has been widely studied in recent years (e.g., Jiang et al. (2017); Yang & Wang (2019; 2020); Jin et al. (2020b); Modi et al. (2020); Du et al. (2019); Zanette et al. (2020a); Cai et al. (2020); Ayoub et al. (2020); Weisz et al. (2021); Zhou et al. (2020b;a)). The linear mixture MDP model studied in our work assumes the transition probability function is parameterized as a linear function of a given feature mapping over state-action-next-state triples (Ayoub et al., 2020; Zhou et al., 2020b;a). Based on a Bernstein inequality for vector-valued martingales, Zhou et al. (2020a) proposed an efficient algorithm that obtains minimax regret in the regime where $d > H$. Besides linear mixture MDPs, linear MDPs are another category of RL with linear function approximation, which assumes both the transition probability function and the reward function are parameterized as linear functions of a given feature mapping over state-action pairs. The algorithms with the best regret bounds were proposed by Jin et al. (2020b) and Yang & Wang (2020), which study a model-free algorithm and a model-based algorithm, respectively. The minimax regret bound for linear MDPs remains open.
Reward-Free Reinforcement Learning In contrast to the standard RL setting, reward-free reinforcement learning separates the exploration problem from the planning problem, which allows one to handle them in a theoretically principled way. For the tabular setting, reward-free reinforcement learning has been well explored in many previous works (Jin et al., 2020a; Kaufmann et al., 2021; Ménard et al., 2020; Zhang et al., 2020; 2021b; Wu et al., 2021a; Bai & Jin, 2020; Liu et al., 2021), where the minimax rate is obtained by Ménard et al. (2020). For reward-free exploration with linear function approximation, Wang et al. (2020) proposed the first efficient algorithm, which obtains $O(d^3H^6/\epsilon^2)$ sample complexity for linear MDPs. However, their algorithm is model-free in nature and cannot guarantee good performance with an arbitrary plug-in solver. Further, Qiu et al. (2021) proposed the first provably efficient reward-free algorithm with kernel and neural function approximation. We also note a concurrent work that studies reward-free exploration for linear mixture MDPs (Zhang et al., 2021a). Compared with their results, we focus on reward-free exploration with a plug-in solver, a more general setting that covers the standard reward-free exploration studied in previous works. Furthermore, our sample complexity bounds are tighter than theirs by a factor of $H^2$.¹ These two differences introduce new challenges in both the algorithmic design and the complexity analysis of this work, which makes our algorithms considerably more involved than theirs. Besides, our lower bound is tighter than theirs in the dependence on d. ¹When transformed to the time-homogeneous MDP setting studied in Zhang et al. (2021a), our algorithms can achieve sample complexity bounds $\tilde{O}(d^2H^3/\epsilon^2)$ and $\tilde{O}((d^2H^2 + dH^3)/\epsilon^2)$, respectively.
Plug-in Approach The plug-in approach has been studied in the tabular and linear cases in restrictive settings. For example, Agarwal et al. (2020) and Cui & Yang (2020) studied the standard plug-in approach with a generative model, where the algorithm is allowed to query the outcome of any state-action pair from an oracle. They showed that the plug-in approach achieves the minimax optimal sample complexity for finding an $\epsilon$-optimal policy in both tabular MDPs and linear MDPs. The reward-free algorithms proposed by Jin et al. (2020a) are model-based in nature and can thus be regarded as a solution in the plug-in solver setting. However, their algorithms are restricted to the tabular case and cannot be applied to the setting with linear function approximation. 3 PRELIMINARIES . 3.1 EPISODIC MDPS . We consider the setting of episodic Markov decision processes (MDPs), denoted by a six-tuple $(\mathcal{S}, \mathcal{A}, P, R, H, \nu)$, where $\mathcal{S}$ is the state set, $\mathcal{A}$ is the action set, $P$ is the transition probability kernel such that $P_h(\cdot \mid s, a)$ gives the distribution over next states if action $a$ is taken at state $s$ in step $h$, $R_h(s, a)$ is the deterministic reward function with support $[0, 1]$ for taking action $a$ at state $s$ in step $h$, $H$ is the number of steps in each episode, and $\nu$ is the distribution of the initial state. In episode $k$, the agent starts from an initial state $s_{k,1}$ sampled from $\nu$. At each step $h \in [H]$, the agent observes the current state $s_{k,h} \in \mathcal{S}$, takes action $a_{k,h} \in \mathcal{A}$, receives reward $R_h(s_{k,h}, a_{k,h})$, and transits to state $s_{k,h+1}$ with probability $P_h(s_{k,h+1} \mid s_{k,h}, a_{k,h})$. The episode ends when $s_{H+1}$ is reached. A deterministic policy $\pi$ is a collection of $H$ policy functions $\{\pi_h : \mathcal{S} \to \mathcal{A}\}_{h \in [H]}$. We use $\Pi$ to denote the set of all deterministic policies. For a specific reward function $R$, we use $V^\pi_h : \mathcal{S} \times \mathcal{R} \to \mathbb{R}$ to denote the value function at step $h$ under policy $\pi$ w.r.t.
reward $R$, which gives the expected sum of the remaining rewards received under policy $\pi$ starting from $s_h = s$, i.e., $V^\pi_h(s, R) = \mathbb{E}\big[\sum_{h'=h}^{H} R_{h'}(s_{h'}, \pi_{h'}(s_{h'})) \mid s_h = s, P\big]$. Accordingly, we define $Q^\pi_h(s, a, R)$ as the expected Q-value function at step $h$: $Q^\pi_h(s, a, R) = \mathbb{E}\big[R_h(s_h, a_h) + \sum_{h'=h+1}^{H} R_{h'}(s_{h'}, \pi_{h'}(s_{h'})) \mid s_h = s, a_h = a, P\big]$. We use $\pi^*_R$ to denote the optimal policy w.r.t. reward $R$, and we use $V^*_h(\cdot, R)$ and $Q^*_h(\cdot, \cdot, R)$ to denote the optimal value function and Q-function under the optimal policy $\pi^*_R$ at step $h$. We say a policy $\pi$ is $\epsilon$-optimal w.r.t. $R$ if $\mathbb{E}\big[\sum_{h=1}^{H} R_h(s_h, a_h) \mid \pi\big] \ge \mathbb{E}\big[\sum_{h=1}^{H} R_h(s_h, a_h) \mid \pi^*_R\big] - \epsilon$. For convenience of exposition, we assume the agent always starts from the same state $s_1$ in each episode. It is straightforward to extend to the case with a stochastic initial state, by adding an initial state $s_0$ with no reward and only one action $a_0$, where the transition probability of $(s_0, a_0)$ is the initial distribution $\nu$. We use $P_h V(s, a, R)$ as a shorthand for $\sum_{s'} P_h(s' \mid s, a) V(s', R)$.
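The value-function definition and the $\epsilon$-optimality criterion above can be checked numerically by backward induction on a small synthetic episodic MDP. The kernel and rewards below are random toy data, and the fixed start state $s_1 = 0$ follows the convention in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, H = 3, 2, 4
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # P[h, s, a] = distribution over s'
R = rng.uniform(0, 1, size=(H, S, A))           # deterministic rewards in [0, 1]

def V_pi(pi):
    """V^pi_1(s, R): expected sum of remaining rewards, via backward induction."""
    V = np.zeros(S)
    for h in reversed(range(H)):
        Ra = R[h][np.arange(S), pi[h]]          # reward of the action pi_h(s)
        Pa = P[h][np.arange(S), pi[h]]          # next-state distribution under pi_h
        V = Ra + Pa @ V                         # V_h = R_h + P_h V_{h+1}
    return V

def V_star():
    """Optimal policy and V^*_1 via the same recursion with a max over actions."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R[h] + P[h] @ V                     # Q^*_h(s, a, R)
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi, V

pi_opt, Vs = V_star()
pi_rand = rng.integers(0, A, size=(H, S))
eps = 0.5
# pi is eps-optimal w.r.t. R iff V^pi_1(s_1, R) >= V^*_1(s_1, R) - eps, with s_1 = 0.
print(V_pi(pi_opt)[0] >= Vs[0] - eps)           # the optimal policy is eps-optimal
print(V_pi(pi_rand)[0] <= Vs[0] + 1e-9)         # no policy exceeds the optimal value
```

Note how the shorthand $P_h V(s,a,R) = \sum_{s'} P_h(s' \mid s, a) V(s', R)$ appears directly as the matrix products `Pa @ V` and `P[h] @ V` in the two backups.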
This paper addresses the reward-free exploration problem with function approximation under the linear mixture MDP assumption. It proposes a pair of model-based exploration algorithms, one with a lightweight methodology and sample complexity rate $\tilde{O}(H^4 d^2 \epsilon^{-2})$ and the other a refined and more involved version with rate $\tilde{O}(H^3 d^2 \epsilon^{-2})$, which matches the reported lower bound up to logarithmic factors. Crucially, the proposed approach can work with any planning solver to provide an $(\epsilon + \epsilon_{opt})$-optimal policy for any reward function.
This paper studies reward-free exploration with linear function approximation (i.e., under the setting of linear mixture MDPs). Specifically, this work introduces two algorithms: the first has a simple form with sample complexity $\tilde{\mathcal{O}}(d^2H^4 / \varepsilon^2)$, and the second is much more complicated (i.e., with 5 estimators) but has an improved sample complexity $\tilde{\mathcal{O}}(d^2H^3 / \varepsilon^2)$. To improve the dependence on $H$, the main technical challenge is that the traditional Bernstein-type analysis cannot be directly applied because the variance summation of $\{\tilde{V}\}_{h \in [H]}$ is not induced from the same policy and transition function. This paper addresses this technical challenge and makes a solid contribution to reward-free exploration.
Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver
1 INTRODUCTION . In reinforcement learning , an agent repeatedly interacts with the unknown environment in order to maximize the cumulative reward . To achieve this goal , an RL algorithm must be equipped with effective exploration mechanisms to learn the unknown environment and find a near-optimal policy . Efficient exploration is critical to the success of reinforcement learning algorithms , which has been widely investigated from both the empirical and the theoretical perspectives ( e.g . Stadie et al . ( 2015 ) ; Pathak et al . ( 2017 ) ; Azar et al . ( 2017 ) ; Jin et al . ( 2018 ) ) . Model-based RL is one of the important approaches to solve for the RL environment . In model-based RL , the agent learns the model of the environment and then performs planning in the estimated model . It has been widely applied in many RL scenarios , including both online setting ( Kaiser et al. , 2019 ; Luo et al. , 2019 ; Azar et al. , 2017 ) and offline setting ( Yu et al. , 2020 ; Kidambi et al. , 2020 ) . It is also believed that modelbased RL is significantly more sample-efficient than model-free RL , which has been justified by many recent empirical results ( e.g . Kaiser et al . ( 2019 ) ; Wang et al . ( 2019 ) ) . Though the theoretical model-based learning in small scale problems has been studied extensively ( Azar et al. , 2017 ; Zhou et al. , 2020a ; Jin et al. , 2020a ) , it is still far from complete , especially with the presence of a function approximator . As an important implication of model-based approaches , the power of plug-in approach have been studied in several works ( Cui & Yang , 2020 ; Agarwal et al. , 2020 ) . The idea of plug-in approach is rather simple : We construct an empirical Markov Decision Process ( MDP ) using maximum likelihood estimate , then return the ( approximate ) optimal policy with efficient planning algorithm in this empirical model . The significance of plug-in approaches is two-folded . 
For one thing, the approach preserves an empirical model that retains the values of policies, which is of independent interest. For another, the empirical model can be reused for arbitrary downstream tasks, which makes the approach much more flexible in application. The plug-in approach is known to achieve the minimax sample complexity for computing ε-optimal policies with a generative model in the tabular (Agarwal et al., 2020) and linear (Cui & Yang, 2020) settings. In this paper, we aim to understand the power of the plug-in approach in reward-free exploration with linear function approximation. We study linear mixture MDPs, in which the transition probability kernel is a linear mixture of a number of basis kernels (Ayoub et al., 2020; Zhou et al., 2020b;a). We first build an empirical model by estimating the transition dynamics in the exploration phase, and then find a near-optimal policy by planning on the empirical model with an arbitrary plug-in solver in the planning phase. Our setting differs from reward-free exploration with linear function approximation without a plug-in model (Wang et al., 2020; Zanette et al., 2020b), in which the agent can directly observe all history samples and design a specialized model-free algorithm for the planning phase. In comparison, the standard plug-in approach (Agarwal et al., 2020; Cui & Yang, 2020) cannot tackle the reward-free setting and may not predict an accurate value function for an arbitrarily given reward function after the exploration phase. Our results show that the plug-in approach can achieve near-optimal sample complexity in the reward-free setting. In particular, we propose a statistically efficient algorithm for reward-free exploration. Our algorithm samples Õ(d²H⁴/ε²) trajectories during the exploration phase, which suffices to obtain O(ε)-optimal policies for an arbitrary reward function with an ε-optimal plug-in solver in the planning phase.
Here d is the feature dimension and H is the planning horizon. Furthermore, with a more refined trajectory-wise uncertainty estimation, we improve the sample complexity bound to Õ(d²H³/ε²) in the regime where d > H and ε ≤ H/√d. This matches our lower bound Ω(d²H³/ε²) for reward-free exploration in linear mixture MDPs, which indicates that our upper bound is near-optimal up to logarithmic factors. To the best of our knowledge, this is the first work that obtains minimax sample complexity bounds for the plug-in approach in reward-free exploration with linear function approximation. 2 RELATED WORK. RL with Linear Function Approximation Reinforcement learning with linear function approximation has been widely studied in recent years (e.g., Jiang et al. (2017); Yang & Wang (2019; 2020); Jin et al. (2020b); Modi et al. (2020); Du et al. (2019); Zanette et al. (2020a); Cai et al. (2020); Ayoub et al. (2020); Weisz et al. (2021); Zhou et al. (2020b;a)). The linear mixture MDP model studied in our work assumes the transition probability function is parameterized as a linear function of a given feature mapping over state-action-next-state triples (Ayoub et al., 2020; Zhou et al., 2020b;a). Based on a Bernstein inequality for vector-valued martingales, Zhou et al. (2020a) proposed an efficient algorithm that obtains minimax regret in the regime where d > H. Besides linear mixture MDPs, linear MDPs are another category of RL with linear function approximation, where both the transition probability function and the reward function are parameterized as linear functions of a given feature mapping over state-action pairs. The algorithms with the best regret bounds were proposed by Jin et al. (2020b) and Yang & Wang (2020), which study a model-free and a model-based algorithm, respectively. The minimax regret bound for linear MDPs remains unclear.
Reward-Free Reinforcement Learning In contrast to the standard RL setting, reward-free reinforcement learning separates the exploration problem from the planning problem, which allows one to handle them in a theoretically principled way. In the tabular setting, reward-free reinforcement learning has been well studied in many previous works (Jin et al., 2020a; Kaufmann et al., 2021; Ménard et al., 2020; Zhang et al., 2020; 2021b; Wu et al., 2021a; Bai & Jin, 2020; Liu et al., 2021), where the minimax rate was obtained by Ménard et al. (2020). For reward-free exploration with linear function approximation, Wang et al. (2020) proposed the first efficient algorithm, which obtains O(d³H⁶/ε²) sample complexity for linear MDPs. However, their algorithm is model-free in nature and cannot guarantee good performance with an arbitrary plug-in solver. Further, Qiu et al. (2021) proposed the first provably efficient reward-free algorithm with kernel and neural function approximation. We also note a concurrent work that studies reward-free exploration for linear mixture MDPs (Zhang et al., 2021a). Compared with their results, we focus on reward-free exploration with a plug-in solver, which is a more general setting that covers the standard reward-free exploration studied in previous work. Furthermore, our sample complexity bounds are tighter than theirs by a factor of H².¹ These two differences introduce new challenges in both the algorithmic design and the complexity analysis of this work, which makes our algorithms substantially more involved than theirs. Besides, our lower bound is tighter than theirs in its dependence on d. (¹ When transformed to the time-homogeneous MDP setting studied in Zhang et al. (2021a), our algorithms achieve sample complexity bounds Õ(d²H³/ε²) and Õ((d²H² + dH³)/ε²), respectively.)
Plug-in Approach The plug-in approach has been studied in the tabular and linear cases in restricted settings. For example, Agarwal et al. (2020) and Cui & Yang (2020) studied the standard plug-in approach with a generative model, where the algorithm is allowed to query the outcome of any state-action pair from an oracle. They showed that the plug-in approach achieves the minimax-optimal sample complexity for finding an ε-optimal policy in both tabular MDPs and linear MDPs. The reward-free algorithms proposed by Jin et al. (2020a) are model-based in nature, and can thus be regarded as a solution in the plug-in solver setting. However, their algorithms are restricted to the tabular case and cannot be applied to the setting with linear function approximation. 3 PRELIMINARIES. 3.1 EPISODIC MDPS. We consider the setting of episodic Markov decision processes (MDPs), denoted by a six-tuple (S, A, P, R, H, ν), where S is the state set, A is the action set, P is the transition probability kernel such that P_h(·|s, a) gives the distribution over next states when action a is taken at state s in step h, R_h(s, a) is the deterministic reward function with support [0, 1] for taking action a at state s in step h, H is the number of steps in each episode, and ν is the distribution of the initial state. In episode k, the agent starts from an initial state s_{k,1} sampled from ν. At each step h ∈ [H], the agent observes the current state s_{k,h} ∈ S, takes action a_{k,h} ∈ A, receives reward R_h(s_{k,h}, a_{k,h}), and transits to state s_{k,h+1} with probability P_h(s_{k,h+1}|s_{k,h}, a_{k,h}). The episode ends when s_{H+1} is reached. A deterministic policy π is a collection of H policy functions {π_h : S → A}_{h∈[H]}. We use Π to denote the set of all deterministic policies. For a specific reward function R, we use V^π_h(·, R) to denote the value function at step h under policy π w.r.t.
reward R, which gives the expected sum of the remaining rewards received under policy π starting from s_h = s, i.e., V^π_h(s, R) = E[ Σ_{h'=h}^{H} R_{h'}(s_{h'}, π_{h'}(s_{h'})) | s_h = s, P ]. Accordingly, we define Q^π_h(s, a, R) as the expected Q-value function at step h: Q^π_h(s, a, R) = E[ R_h(s_h, a_h) + Σ_{h'=h+1}^{H} R_{h'}(s_{h'}, π_{h'}(s_{h'})) | s_h = s, a_h = a, P ]. We use π*_R to denote the optimal policy w.r.t. reward R, and V*_h(·, R) and Q*_h(·, ·, R) to denote the optimal value function and Q-function under the optimal policy π*_R at step h. We say a policy π is ε-optimal w.r.t. R if E[ Σ_{h=1}^{H} R_h(s_h, a_h) | π ] ≥ E[ Σ_{h=1}^{H} R_h(s_h, a_h) | π*_R ] − ε. For convenience of exposition, we assume the agent always starts from the same state s_1 in each episode. It is straightforward to extend to the case of stochastic initialization by adding an initial state s_0 with zero reward and a single action a_0, whose transition probability P(·|s_0, a_0) is the initial distribution ν. We use P_h V(s, a, R) as shorthand for Σ_{s'} P_h(s'|s, a) V(s', R).
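The reward-free plug-in pipeline described in the introduction can be made concrete in the tabular case: estimate the transition kernel once by maximum likelihood from exploration data, then, for any reward function supplied only in the planning phase, run backward induction (the value recursion above) on the empirical model. The following is a minimal sketch; all names are illustrative, the model is time-homogeneous for brevity, and the paper's actual estimator targets linear mixture MDPs, not tabular counts:

```python
import numpy as np

def empirical_kernel(samples, n_states, n_actions):
    """Maximum-likelihood estimate of P(s'|s,a) from (s, a, s') samples."""
    counts = np.zeros((n_states, n_actions, n_states))
    for s, a, s_next in samples:
        counts[s, a, s_next] += 1.0
    totals = counts.sum(axis=2, keepdims=True)
    # Unvisited (s, a) pairs fall back to a uniform next-state distribution.
    return np.where(totals > 0, counts / np.maximum(totals, 1.0), 1.0 / n_states)

def plugin_plan(P_hat, R, H):
    """Plug-in solver: backward induction on the empirical model for a
    reward R[h, s, a] that is only revealed in the planning phase."""
    n_states, n_actions, _ = P_hat.shape
    V = np.zeros(n_states)                       # V_{H+1} = 0
    policy = np.zeros((H, n_states), dtype=int)
    for h in reversed(range(H)):
        Q = R[h] + P_hat @ V                     # Q_h(s,a) = R_h(s,a) + sum_s' P(s'|s,a) V_{h+1}(s')
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V                             # V is the optimal V_1 under P_hat
```

Because `P_hat` is built once from reward-free exploration data, the same model can be replanned for every downstream reward function, which is exactly the flexibility the plug-in setting is after.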
The paper proposes and analyzes a model-based algorithm for the reward-free exploration setting. The analysis shows that the proposed algorithm is (nearly) minimax optimal. The proposed algorithm is agnostic to the planning algorithm used to solve the MDP constructed from the estimated model. This property is of independent interest because it preserves the values of policies, and the estimated model can be reused for arbitrary downstream tasks.
Learning to Prompt for Vision-Language Models
1 INTRODUCTION. The traditional approach to visual representation learning is to train vision models to predict a fixed set of object categories using discrete labels (He et al., 2016; Dosovitskiy et al., 2021). However, this approach limits visual recognition systems to the closed set of visual concepts defined during training, making them unable to handle new categories once deployed in target environments, since additional data are required for learning a new classifier. Recently, vision-language pre-training such as CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) has emerged as a promising alternative. The main idea is to align images and raw text using two separate encoders, one for each modality. Through large-scale pre-training, vision-language models learn open-set visual concepts and can readily be transferred to downstream tasks. In particular, for each new classification task, one can synthesize the classification weights by feeding natural language describing the classes of interest to the text encoder, and compare them with image features produced by the image encoder. We observe that for pre-trained vision-language models, the text input, known as the prompt, plays a key role on downstream datasets. However, identifying the right prompt is a non-trivial task, which often takes a significant amount of time for word tuning, since a slight change in wording can make a huge difference in performance. For instance, for Caltech101 (Figure 1(a), 2nd vs. 3rd prompt), adding “a” before the class token brings more than a 5% increase in accuracy. Moreover, prompt engineering also requires expertise about the task and, ideally, about the language model's underlying mechanism. This is exemplified in Figure 1(b-d), where adding task-relevant context leads to significant improvements, i.e., “flower” for Flowers102, “texture” for DTD and “satellite” for EuroSAT.
Tuning the sentence structure could bring further improvements, e.g., putting “a type of flower” after the class token for Flowers102, keeping only “texture” in the context for DTD, and adding “centered” before “satellite photo” for EuroSAT. [Figure 1: per-prompt accuracies on Caltech101, Flowers102, DTD and EuroSAT, comparing hand-crafted prompts such as “a [CLASS].”, “a photo of [CLASS].” and “a photo of a [CLASS].” against the learned prompt “[V]_1 [V]_2 ... [V]_M [CLASS].”] However, even with extensive tuning, the resulting prompts are by no means guaranteed to be optimal for these downstream tasks. Inspired by recent prompt learning research in NLP (Shin et al., 2020; Jiang et al., 2020; Zhong et al., 2021), we propose context optimization (CoOp)¹ to automate prompt engineering and allow more efficient, task-specific transfer of pre-trained vision-language models. Specifically, we model a prompt's context using continuous representations, initialized with random vectors of the same dimension as word embeddings (see Figure 2). The context can be shared among all classes or designed to be class-specific. During training, we simply minimize the prediction error using the cross-entropy loss with respect to the learnable context vectors, while keeping the pre-trained parameters fixed. The gradients can be back-propagated all the way through the text encoder, distilling the rich knowledge encoded in its parameters for learning task-relevant context.
To demonstrate the effectiveness of CoOp, we benchmark on 11 datasets covering a diverse set of visual recognition tasks, including classification of generic objects, scenes, actions and fine-grained categories, as well as specialized tasks like recognizing textures and satellite imagery. The results show that CoOp can effectively turn pre-trained vision-language models into data-efficient visual learners, requiring as few as one or two shots to beat hand-crafted prompts by a decent margin. The performance can be further boosted by using more shots; e.g., at 16 shots the margin over hand-crafted prompts averages around 17% and exceeds 50% at the highest. CoOp also outperforms the linear probe alternative, known as a strong few-shot learning baseline (Tian et al., 2020), and, crucially, demonstrates much stronger robustness to distribution shift. Extensive analysis is also conducted to offer a comprehensive picture of how to apply CoOp in practice. The source code for reproducing the experiments will be released to facilitate future research. 2 METHODOLOGY. 2.1 VISION-LANGUAGE PRE-TRAINING. We briefly introduce vision-language pre-training with a particular focus on CLIP (Radford et al., 2021). Our approach is applicable to the broader family of CLIP-like vision-language models. Models CLIP consists of two encoders, one for images and the other for text. The image encoder maps high-dimensional images into a low-dimensional embedding space; its architecture can take the form of a CNN like ResNet-50 (He et al., 2016) or a ViT (Dosovitskiy et al., 2021). The text encoder is built on top of a Transformer (Vaswani et al., 2017) and generates text representations from natural language. (¹ CoOp is pronounced /kuːp/.) Specifically, given a sequence of words (tokens), such as “a photo of a dog.
”, CLIP first converts each token (including punctuation) into a lower-cased byte pair encoding (BPE) representation (Sennrich et al., 2016), which is essentially a unique numeric ID. The vocabulary size in CLIP is 49,152. To facilitate minibatch processing, each text sequence is wrapped with the [SOS] and [EOS] tokens and capped at a fixed length of 77. After that, the IDs are mapped to 512-D word embedding vectors, which are then passed on to the Transformer. Finally, the features at the [EOS] token position are layer-normalized and further processed by a linear projection layer. Training CLIP is trained to align the two embedding spaces learned for images and text. Specifically, the learning objective is formulated as a contrastive loss: given a batch of image-text pairs, CLIP maximizes the cosine similarity of matched pairs while minimizing the cosine similarity of all unmatched pairs. To learn diverse visual concepts that transfer to downstream tasks, CLIP's team collected a large training dataset consisting of 400 million image-text pairs. Zero-Shot Inference Since CLIP is pre-trained to predict whether an image matches a textual description, it naturally fits zero-shot recognition. This is achieved by comparing image features with the classification weights synthesized by the text encoder, which takes as input textual descriptions specifying the classes of interest. Formally, let f be the image features extracted by the image encoder for an image x, and {w_i}_{i=1}^K a set of weight vectors generated by the text encoder. K denotes the number of classes, and each w_i is derived from a prompt of the form “a photo of a [CLASS].” where the class token is replaced by a specific class name, such as “cat”, “dog” or “car”.
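The sequence pre-processing just described (wrap with [SOS]/[EOS], cap at a fixed length of 77) is easy to sketch. The ID values below are illustrative placeholders, not CLIP's actual token IDs, and the BPE step itself is omitted:

```python
def pad_to_context_length(token_ids, sos_id=1, eos_id=2, pad_id=0, length=77):
    """Wrap a BPE ID sequence with [SOS]/[EOS], then pad (or truncate) to
    a fixed context length, as CLIP does with length 77."""
    seq = [sos_id] + list(token_ids)[: length - 2] + [eos_id]
    return seq + [pad_id] * (length - len(seq))
```

Each resulting 77-entry ID sequence is then looked up in the embedding table (512-D vectors in CLIP) before entering the Transformer.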
The prediction probability is then computed as p(y = i|x) = exp(⟨w_i, f⟩/τ) / Σ_{j=1}^K exp(⟨w_j, f⟩/τ), (1) where τ is a temperature parameter learned by CLIP and ⟨·, ·⟩ denotes cosine similarity. 2.2 CONTEXT OPTIMIZATION. We propose context optimization (CoOp), which avoids manual prompt tuning by modeling context words with continuous vectors that are learned end-to-end from data. An overview is shown in Figure 2. Specifically, the prompt given to the text encoder g(·) takes the following form, t = [V]_1 [V]_2 ... [V]_M [CLASS], (2) where each [V]_m (m ∈ {1, ..., M}) is a vector with the same dimension as word embeddings (i.e., 512 for CLIP), and M is a hyperparameter specifying the number of context tokens. Note that the context here is shared among all classes, which is called unified context, as opposed to the class-specific context introduced later. By forwarding a prompt t to the text encoder g(·), we obtain a classification weight vector representing a visual concept. The prediction probability is computed as p(y = i|x) = exp(⟨g(t_i), f⟩/τ) / Σ_{j=1}^K exp(⟨g(t_j), f⟩/τ), (3) where the class token within each prompt t_i is replaced by the word embedding vector(s) of the i-th class name. Training is performed to minimize the standard cross-entropy classification loss, and the gradients can be back-propagated all the way through the text encoder g(·), making use of the rich knowledge encoded in its parameters to optimize the context. The design of continuous representations also allows full exploration of the word embedding space, which facilitates the learning of task-relevant context. Other Variants Other than placing the class token at the end of a sequence as in Equation (2), we can also put it in the middle, like t = [V]_1 ... [V]_{M/2} [CLASS] [V]_{M/2+1} ...
[V]_M, (4) which increases flexibility for learning: the prompt is allowed either to fill the latter cells with supplementary descriptions or to cut the sentence off earlier with a termination signal such as a full stop. Another option is to design class-specific context (CSC), where the context vectors are independent for each class, i.e., [V]^i_1 [V]^i_2 ... [V]^i_M ≠ [V]^j_1 [V]^j_2 ... [V]^j_M for i ≠ j and i, j ∈ {1, ..., K}. As an alternative to unified context, we find that CSC is particularly useful for some fine-grained classification tasks. 3 EXPERIMENTS. 3.1 FEW-SHOT LEARNING. Datasets We select 11 publicly available image classification datasets used in CLIP: ImageNet (Deng et al., 2009), Caltech101 (Fei-Fei et al., 2004), OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Flowers102 (Nilsback & Zisserman, 2008), Food101 (Bossard et al., 2014), FGVCAircraft (Maji et al., 2013), SUN397 (Xiao et al., 2010), DTD (Cimpoi et al., 2014), EuroSAT (Helber et al., 2019) and UCF101 (Soomro et al., 2012) (see Appendix A for their statistics). These datasets constitute a comprehensive benchmark covering a diverse set of vision tasks, including classification of generic objects, scenes, actions and fine-grained categories, as well as specialized tasks like recognizing textures and satellite imagery. We follow the few-shot evaluation protocol adopted in CLIP (Radford et al., 2021), using 1, 2, 4, 8 and 16 shots for training, respectively, and deploying models on the full test sets. Average results over three runs are reported for comparison. Training Details CoOp has four versions: positioning the class token at the end or in the middle, and unified context vs. CSC. Unless otherwise stated, ResNet-50 (He et al., 2016) is used as the image encoder's backbone and the number of context tokens M is set to 16.
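Equations (1) and (3) are the same temperature-scaled softmax over cosine similarities; CoOp only changes where the per-class weight vectors come from, namely the text encoder applied to M shared context vectors followed by the class-name embedding. The following is a minimal numpy sketch of the forward pass, with the text encoder stubbed out by mean-pooling (all names, shapes and the pooling choice are illustrative, not CLIP's actual architecture):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity <a, b> / (|a||b|)."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def text_encoder(prompt_embeddings):
    """Stand-in for g(.): mean-pool the prompt's token embeddings."""
    return prompt_embeddings.mean(axis=0)

def coop_probs(image_feat, context, class_embeds, tau=0.01):
    """p(y=i|x) = exp(<g(t_i), f>/tau) / sum_j exp(<g(t_j), f>/tau),
    with t_i = [V]_1 ... [V]_M [CLASS_i] (unified context)."""
    logits = np.array([
        cosine(text_encoder(np.vstack([context, cls[None, :]])), image_feat) / tau
        for cls in class_embeds                # one prompt per class
    ])
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()
```

During CoOp training, only `context` would receive gradients (cross-entropy on these probabilities against the ground-truth label), while the encoders and class-name embeddings stay frozen.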
Investigations of other design choices are discussed in Section 3.3. All models are built on top of the open-source CLIP code.² CoOp's context vectors are randomly initialized by drawing from a zero-mean Gaussian distribution with standard deviation 0.02. Training is done with SGD and an initial learning rate of 0.002, decayed by the cosine annealing rule. The maximum epoch is set to 200 for 16/8 shots, 100 for 4/2 shots, and 50 for 1 shot (except for ImageNet, where the maximum epoch is fixed to 50). To mitigate the explosive gradients observed in early training iterations, we use the warmup trick of fixing the learning rate to 1e−5 during the first epoch. (² https://github.com/openai/CLIP) Baseline Methods We compare CoOp with two baseline methods. The first is zero-shot CLIP, which is based on hand-crafted prompts. We follow the prompt engineering guideline introduced by Radford et al. (2021). For generic objects and scenes, “a photo of a [CLASS].” is adopted. For fine-grained categories, task-relevant context is added, like “a type of pet” for OxfordPets and “a type of food” for Food101. For specialized tasks such as recognizing textures in DTD, the prompt is customized as “[CLASS] texture.” where the class names are adjectives like “bubbly” and “dotted”. See Appendix A for the details. The second baseline is linear probe CLIP. As suggested by Radford et al. (2021) and a recent study on few-shot learning (Tian et al., 2020), training a linear classifier on top of features from a high-quality pre-trained model (like CLIP) can easily achieve performance on a par with state-of-the-art few-shot learning methods, which are often much more sophisticated. We follow the same training method used by Radford et al. (2021) to train linear probe CLIP. Comparison with Hand-Crafted Prompts Figure 3 summarizes the results.
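The optimization recipe above (first epoch pinned to 1e−5 as warmup, then cosine annealing from the base rate of 0.002) can be sketched as a schedule function. This is a minimal sketch; the decay floor of 0 and the exact epoch bookkeeping are assumptions not stated in the text:

```python
import math

def coop_lr(epoch, max_epoch, base_lr=0.002, warmup_lr=1e-5, warmup_epochs=1):
    """Constant warmup for the first epoch, then cosine annealing to ~0."""
    if epoch < warmup_epochs:
        return warmup_lr
    # Fraction of the post-warmup schedule completed, in [0, 1].
    t = (epoch - warmup_epochs) / max(1, max_epoch - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))
```

With `max_epoch=200` (the 16/8-shot setting), the rate jumps from 1e−5 to 0.002 after the warmup epoch and decays smoothly thereafter.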
Our default model is CLIP+CoOp with the class token positioned at the end. The two ways of positioning the class token achieve similar performance, as their curves largely overlap. From the average performance displayed in the top-left corner, we observe that CLIP+CoOp is a strong few-shot learner, requiring only two shots on average to obtain a decent margin over zero-shot CLIP. Given 16 shots for training, the average gap brought by CoOp increases further to around 17%. Figure 4 ranks the absolute improvements obtained by CoOp at 16 shots over hand-crafted prompts. Huge improvements are observed on the specialized tasks, namely EuroSAT and DTD, where the increase in performance reaches over 50% and 20%, respectively. The jumps in performance are also significant (more than 10%) on most fine-grained datasets, including Flowers102, StanfordCars and FGVCAircraft, as well as on the scene and action recognition datasets (SUN397 & UCF101). Since ImageNet is a challenging dataset containing 1,000 classes, the 5.05% improvement there is also noteworthy. In contrast, the increases on two fine-grained datasets, OxfordPets and Food101, are less appealing. Digging into CLIP+CoOp's curves on these two datasets in Figure 3, we find a loss of momentum in performance improvements even as more shots are used, which suggests overfitting. A potential solution is to impose stronger regularization, such as increasing the weight decay. Nonetheless, the overall results are strong enough to serve as evidence of CoOp's capability to learn task-relevant prompts in a data-efficient way. Comparison with Linear Probe CLIP In terms of overall performance (Figure 3, top-left), CLIP+CoOp demonstrates clear advantages over linear probe CLIP. The latter requires 4 shots on average to match the zero-shot performance, while CoOp's average gain at 4 shots already exceeds 10%.
It is also clear that the gaps in the extreme low-data regime, such as one or two shots, are much larger, suggesting that CoOp is much more effective than learning a linear classifier from scratch for few-shot learning. We also observe that linear probe CLIP is comparable to CLIP+CoOp on the two specialized tasks (DTD & EuroSAT) as well as on a couple of fine-grained datasets (Flowers102 & FGVCAircraft); this is not too surprising, as the pre-trained CLIP feature space has proven powerful, making the linear probe model a strong competitor. Nevertheless, CoOp's CSC version can beat linear probe CLIP on the aforementioned datasets, and moreover shows much better potential when more shots become available. Unified vs. Class-Specific Context On average, using unified context leads to better performance. Regarding when to apply CSC and when not to, we offer the following suggestions. For generic objects (ImageNet & Caltech101), scenes (SUN397) and actions (UCF101), using unified context is clearly better. Unified context also works better on some fine-grained datasets, including OxfordPets and Food101, but on others, like StanfordCars, Flowers102 and FGVCAircraft, the CSC version is preferred. CSC also yields better performance on the two specialized tasks, DTD and EuroSAT, at 16 shots in particular. However, CSC mostly underperforms unified context in challenging low-data scenarios (fewer than 8 shots), which makes sense because CSC has more parameters than unified context and needs more data for training.
This paper proposes a novel approach, named context optimization (CoOp), for prompt engineering of vision-language pre-trained models. The main idea is to model the context in prompts using continuous representations and perform end-to-end learning from data while keeping the pre-trained parameters fixed. Experiments on 11 datasets show that CoOp effectively turns pre-trained vision-language models into data-efficient visual learners, requiring as few as one or two shots to beat hand-crafted prompts by a decent margin, and gaining significant further improvements when more shots are used.
Learning to Prompt for Vision-Language Models
1 INTRODUCTION . The traditional approach for visual representation learning is to train vision models to predict for a fixed set of object categories using discrete labels ( He et al. , 2016 ; Dosovitskiy et al. , 2021 ) . However , this approach limits visual recognition systems to closed-set visual concepts defined during training , making them unable to handle new categories once deployed in target environments , since additional data are required for learning a new classifier . Recently , vision-language pre-training such as CLIP ( Radford et al. , 2021 ) and ALIGN ( Jia et al. , 2021 ) has emerged as a promising alternative . The main idea is to align images and raw text using two separate encoders—one for each modality . Through large-scale pre-training , vision-language models are allowed to learn open-set visual concepts and can readily be transferred to downstream tasks . In particular , for each new classification task , one can synthesize the classification weights by feeding natural language describing classes of interest to the text encoder , and compare them with image features produced by the image encoder . We observe that for pre-trained vision-language models , the text input , known as prompt , plays a key role in downstream datasets . However , identifying the right prompt is a non-trivial task , which often takes a significant amount of time for words tuning—since a slight change in wording could make a huge difference in performance . For instance , for Caltech101 ( Figure 1 ( a ) , 2nd vs. 3rd prompt ) , adding “ a ” before the class token brings more than 5 % increase in accuracy . Moreover , prompt engineering also requires expertise about the task and ideally the language model ’ s underlying mechanism . This is exemplified in Figure 1 ( b-d ) where adding task-relevant context can lead to significant improvements , i.e. , “ flower ” for Flowers102 , “ texture ” for DTD and “ satellite ” for EuroSAT . 
Tuning the sentence structure could bring further improvements , e.g. , putting “ a type of flower ” after the class token for Flowers102 , keeping only “ texture ” in the context for DTD , and Describable Textures ( DTD ) EuroSAT a [ CLASS ] . a photo of [ CLASS ] . a photo of a [ CLASS ] . [ V ] 1 [ V ] 2 … [ V ] M [ CLASS ] . 80.77 78.99 84.42 92.00 Flowers102 a photo of a [ CLASS ] . a flower photo of a [ CLASS ] . a photo of a [ CLASS ] , a type of flower . [ V ] 1 [ V ] 2 … [ V ] M [ CLASS ] . 56.68 61.23 62.32 93.22 Caltech101 Prompt Prompt Prompt Prompt Accuracy Accuracy Accuracy Accuracy ( a ) ( b ) adding “ centered ” before “ satellite photo ” for EuroSAT . However , even with extensive tuning , the resulting prompts are by no means guaranteed to be optimal for these downstream tasks . Inspired by recent prompt learning research in NLP ( Shin et al. , 2020 ; Jiang et al. , 2020 ; Zhong et al. , 2021 ) , we propose context optimization ( CoOp ) 1 to automate prompt engineering to allow more efficient and task-specific transfer for pre-trained vision-language models . Specifically , we model a prompt ’ s context using continuous representations which are essentially initialized with random vectors with the same dimension as word embeddings ( see Figure 2 ) . The context could be shared among all classes or designed to be class-specific . During training , we simply minimize the prediction error using the cross-entropy loss with respect to the learnable context vectors , while keeping the pre-trained parameters fixed . The gradients can be back-propagated all the way through the text encoder , distilling the rich knowledge encoded in the parameters for learning task-relevant context . 
To demonstrate the effectiveness of CoOp, we benchmark on 11 datasets covering a diverse set of visual recognition tasks, including classification on generic objects, scenes, actions and fine-grained categories, as well as specialized tasks like recognizing textures and satellite imagery. The results show that CoOp can effectively turn pre-trained vision-language models into data-efficient visual learners, requiring as few as one or two shots to beat hand-crafted prompts by a decent margin. The performance can be further boosted by using more shots, e.g., at 16 shots the margin over hand-crafted prompts averages around 17% and exceeds 50% at the highest. CoOp also outperforms the linear probe alternative, a strong few-shot learning baseline (Tian et al., 2020), and, crucially, demonstrates much stronger robustness to distribution shift. Extensive analysis is also conducted to offer a comprehensive picture of how to apply CoOp in practice. The source code for reproducing the experiments will be released to facilitate future research.

2 METHODOLOGY

2.1 VISION-LANGUAGE PRE-TRAINING

We briefly introduce vision-language pre-training with a particular focus on CLIP (Radford et al., 2021). Our approach is applicable to broader CLIP-like vision-language models.

Models. CLIP consists of two encoders, one for images and the other for text. The image encoder maps high-dimensional images into a low-dimensional embedding space; its architecture can take the form of a CNN like ResNet-50 (He et al., 2016) or a ViT (Dosovitskiy et al., 2021). The text encoder is built on top of a Transformer (Vaswani et al., 2017) and generates text representations from natural language. Specifically, given a sequence of words (tokens), such as "a photo of a dog.", CLIP first converts each token (including punctuation) into a lower-cased byte pair encoding (BPE) representation (Sennrich et al., 2016), which is essentially a unique numeric ID. The vocabulary size in CLIP is 49,152. To facilitate minibatch processing, each text sequence is enclosed by the [SOS] and [EOS] tokens and capped at a fixed length of 77. The IDs are then mapped to 512-D word embedding vectors, which are passed on to the Transformer. Finally, the features at the [EOS] token position are layer-normalized and further processed by a linear projection layer.

¹CoOp is pronounced /kuːp/.

Training. CLIP is trained to align the two embedding spaces learned for images and text. Specifically, the learning objective is formulated as a contrastive loss: given a batch of image-text pairs, CLIP maximizes the cosine similarity for matched pairs while minimizing it for all unmatched pairs. To learn diverse visual concepts that transfer to downstream tasks, CLIP's team collected a large training dataset of 400 million image-text pairs.

Zero-Shot Inference. Since CLIP is pre-trained to predict whether an image matches a textual description, it naturally fits zero-shot recognition. This is achieved by comparing image features with classification weights synthesized by the text encoder, which takes as input textual descriptions specifying the classes of interest. Formally, let f be the image features extracted by the image encoder for an image x, and {w_i}, i = 1, ..., K, a set of weight vectors generated by the text encoder, where K denotes the number of classes and each w_i is derived from a prompt of the form "a photo of a [CLASS]." with the class token replaced by a specific class name, such as "cat", "dog" or "car".
The prediction probability is then computed as

$$p(y = i \mid x) = \frac{\exp(\langle w_i, f \rangle / \tau)}{\sum_{j=1}^{K} \exp(\langle w_j, f \rangle / \tau)}, \qquad (1)$$

where $\tau$ is a temperature parameter learned by CLIP and $\langle \cdot, \cdot \rangle$ denotes cosine similarity.

2.2 CONTEXT OPTIMIZATION

We propose context optimization (CoOp), which avoids manual prompt tuning by modeling context words with continuous vectors that are learned end-to-end from data. An overview is shown in Figure 2. Specifically, the prompt given to the text encoder $g(\cdot)$ takes the form

$$t = [V]_1 [V]_2 \ldots [V]_M [\text{CLASS}], \qquad (2)$$

where each $[V]_m$ ($m \in \{1, \ldots, M\}$) is a vector with the same dimension as word embeddings (i.e., 512 for CLIP), and $M$ is a hyperparameter specifying the number of context tokens. Note that the context here is shared among all classes; we call this unified context, as opposed to the class-specific context introduced later. By forwarding a prompt $t$ to the text encoder $g(\cdot)$, we obtain a classification weight vector representing a visual concept. The prediction probability is computed as

$$p(y = i \mid x) = \frac{\exp(\langle g(t_i), f \rangle / \tau)}{\sum_{j=1}^{K} \exp(\langle g(t_j), f \rangle / \tau)}, \qquad (3)$$

where the class token within each prompt $t_i$ is replaced by the corresponding word embedding vector(s) of the $i$-th class name. Training minimizes the standard cross-entropy classification loss, and the gradients can be back-propagated all the way through the text encoder $g(\cdot)$, making use of the rich knowledge encoded in its parameters to optimize the context. The design of continuous representations also allows full exploration of the word embedding space, which facilitates the learning of task-relevant context.

Other Variants. Other than placing the class token at the end of the sequence as in Equation (2), we can also put it in the middle,

$$t = [V]_1 \ldots [V]_{M/2} [\text{CLASS}] [V]_{M/2+1} \ldots [V]_M, \qquad (4)$$

which increases flexibility for learning: the prompt is in principle allowed either to fill the latter cells with supplementary descriptions or to cut the sentence short with a termination signal such as a full stop. Another option is class-specific context (CSC), where the context vectors are independent for each class, i.e., $[V]^i_1 [V]^i_2 \ldots [V]^i_M \neq [V]^j_1 [V]^j_2 \ldots [V]^j_M$ for $i \neq j$ and $i, j \in \{1, \ldots, K\}$. As an alternative to unified context, we find CSC particularly useful for some fine-grained classification tasks.

3 EXPERIMENTS

3.1 FEW-SHOT LEARNING

Datasets. We select the 11 publicly available image classification datasets used in CLIP: ImageNet (Deng et al., 2009), Caltech101 (Fei-Fei et al., 2004), OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Flowers102 (Nilsback & Zisserman, 2008), Food101 (Bossard et al., 2014), FGVCAircraft (Maji et al., 2013), SUN397 (Xiao et al., 2010), DTD (Cimpoi et al., 2014), EuroSAT (Helber et al., 2019) and UCF101 (Soomro et al., 2012) (see Appendix A for their statistics). These datasets constitute a comprehensive benchmark covering a diverse set of vision tasks, including classification on generic objects, scenes, actions and fine-grained categories, as well as specialized tasks like recognizing textures and satellite imagery. We follow the few-shot evaluation protocol adopted in CLIP (Radford et al., 2021), using 1, 2, 4, 8 and 16 shots for training and evaluating the models on the full test sets. The average results over three runs are reported for comparison.

Training Details. CoOp has four versions: the class token positioned at the end or in the middle, crossed with unified context vs. CSC. Unless otherwise stated, ResNet-50 (He et al., 2016) is used as the image encoder's backbone and the number of context tokens M is set to 16.
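Putting Sections 2.1 and 2.2 together, the unified-context prompt of Equation (2) and the prediction rule of Equation (3) can be sketched in a few lines of NumPy. The toy `encode` function below stands in for CLIP's frozen text encoder g(·) (a mean-pool plus a fixed projection, purely for illustration); all dimensions, initializations, and helper names are our own choices, and in the real method only V would receive gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, K = 8, 4, 3   # embedding dim, number of context tokens, number of classes

# Learnable context [V]_1 ... [V]_M, shared across classes (unified context),
# initialized from a zero-mean Gaussian (the paper uses std 0.02).
V = rng.normal(0.0, 0.02, size=(M, d))
class_emb = rng.normal(size=(K, d))   # stand-in word embeddings of class names
W_proj = rng.normal(size=(d, d))      # stand-in for the frozen text encoder weights

def encode(prompt_tokens):
    """Toy text encoder g(.): mean-pool token embeddings, then a fixed projection."""
    return prompt_tokens.mean(axis=0) @ W_proj

def predict_probs(f, tau=0.01):
    """Eq. (3): softmax over temperature-scaled cosine similarities."""
    logits = np.empty(K)
    for i in range(K):
        t_i = np.vstack([V, class_emb[i:i + 1]])   # prompt t_i = [V]_1..[V]_M [CLASS]_i
        g = encode(t_i)
        logits[i] = (g @ f) / (np.linalg.norm(g) * np.linalg.norm(f) * tau)
    e = np.exp(logits - logits.max())               # numerically stable softmax
    return e / e.sum()

f = rng.normal(size=d)   # stand-in image feature from the frozen image encoder
p = predict_probs(f)
print(p, p.sum())
```

Replacing `class_emb[i]` with per-class context matrices V_i would give the class-specific (CSC) variant.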
Investigations of other design choices are discussed in Section 3.3. All models are built on top of the open-sourced CLIP code². CoOp's context vectors are randomly initialized by drawing from a zero-mean Gaussian distribution with standard deviation 0.02. Training is done with SGD and an initial learning rate of 0.002, decayed by the cosine annealing rule. The maximum epoch is set to 200 for 16/8 shots, 100 for 4/2 shots, and 50 for 1 shot (except for ImageNet, where the maximum epoch is fixed to 50). To mitigate the exploding gradients observed in early training iterations, we use a warmup trick, fixing the learning rate to 1e-5 during the first epoch.

²https://github.com/openai/CLIP

Baseline Methods. We compare CoOp with two baselines. The first is zero-shot CLIP, which is based on hand-crafted prompts. We follow the prompt engineering guideline introduced by Radford et al. (2021). For generic objects and scenes, "a photo of a [CLASS]." is adopted. For fine-grained categories, task-relevant context is added, like "a type of pet" for OxfordPets and "a type of food" for Food101. For specialized tasks such as recognizing textures in DTD, the prompt is customized as "[CLASS] texture.", where the class names are adjectives like "bubbly" and "dotted". See Appendix A for details. The second baseline is linear probe CLIP. As suggested by Radford et al. (2021) and a recent study on few-shot learning (Tian et al., 2020), training a linear classifier on top of the features of a high-quality pre-trained model (like CLIP) can easily achieve performance on a par with state-of-the-art few-shot learning methods, which are often much more sophisticated. We follow the same training method used by Radford et al. (2021) to train linear probe CLIP.

Comparison with Hand-Crafted Prompts. Figure 3 summarizes the results.
Our default model is CLIP+CoOp with the class token positioned at the end. The two ways of positioning the class token achieve similar performance, as their curves highly overlap. From the average performance displayed in the top-left corner, we observe that CLIP+CoOp is a strong few-shot learner, requiring only two shots on average to obtain a decent margin over zero-shot CLIP. Given 16 shots for training, the average gap brought by CoOp increases to around 17%. Figure 4 ranks the absolute improvements obtained by CoOp at 16 shots over hand-crafted prompts. Huge improvements are observed on the specialized tasks, namely EuroSAT and DTD, where the gains reach over 50% and 20% respectively. The jumps in performance are also significant (more than 10%) on most fine-grained datasets, including Flowers102, StanfordCars and FGVCAircraft, as well as on the scene and action recognition datasets (SUN397 & UCF101). Since ImageNet is a challenging dataset containing 1,000 classes, the 5.05% improvement there is also noteworthy. In contrast, the increases on the two fine-grained datasets OxfordPets and Food101 are less appealing. Digging into CLIP+CoOp's curves on these two datasets in Figure 3, we find that performance improvements lose momentum even as more shots are used, which suggests overfitting. A potential solution is to impose stronger regularization, such as increasing the weight decay. Nonetheless, the overall results are strong evidence of CoOp's capability to learn task-relevant prompts in a data-efficient way.

Comparison with Linear Probe CLIP. In terms of overall performance (Figure 3, top-left), CLIP+CoOp demonstrates clear advantages over linear probe CLIP. The latter requires 4 shots on average to match zero-shot performance, whereas CoOp's average gains at 4 shots already exceed 10%.
It is also clear that the gaps in the extreme low-data regime, such as one or two shots, are much larger, suggesting that CoOp is much more effective than learning a linear classifier from scratch for few-shot learning. We also observe that linear probe CLIP is comparable to CLIP+CoOp on the two specialized tasks (DTD & EuroSAT) as well as on a couple of fine-grained datasets (Flowers102 & FGVCAircraft). This is not too surprising, as the pre-trained CLIP space has proved powerful, making the linear probe model a strong competitor. Nevertheless, CoOp's CSC version can beat linear probe CLIP on the aforementioned datasets and, moreover, shows much better potential when more shots become available.

Unified vs. Class-Specific Context. On average, using unified context leads to better performance. On when to apply CSC, we offer the following suggestions. For generic objects (ImageNet & Caltech101), scenes (SUN397) and actions (UCF101), unified context is clearly better. Unified context also works better on some fine-grained datasets, including OxfordPets and Food101, but on others like StanfordCars, Flowers102 and FGVCAircraft the CSC version is preferred. CSC also yields better performance on the two specialized tasks, DTD and EuroSAT, at 16 shots in particular. However, CSC mostly underperforms unified context in challenging low-data scenarios (fewer than 8 shots), which makes sense: CSC has more parameters than unified context and needs more data for training.
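The optimization schedule used throughout these experiments (a constant 1e-5 warmup rate for the first epoch, then cosine annealing from the 0.002 base rate) can be written as a small helper. This is our reading of the recipe described in the Training Details, not code from the authors.

```python
import math

def coop_lr(epoch, base_lr=0.002, warmup_lr=1e-5, max_epoch=200):
    """Learning rate at a given epoch: a constant tiny rate during the first
    (warmup) epoch, then cosine annealing of the base rate toward zero."""
    if epoch < 1:
        return warmup_lr
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / max_epoch))

print(coop_lr(0))               # 1e-05 (warmup epoch)
print(round(coop_lr(100), 6))   # 0.001 (halfway through annealing)
print(round(coop_lr(200), 6))   # 0.0
```

The `max_epoch` argument would be set per configuration (200 for 16/8 shots, 100 for 4/2 shots, 50 for 1 shot and ImageNet).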
The paper proposes context optimization (CoOp), which learns task-aware continuous prompts to improve CLIP on few-shot image classification. Fixing the pretrained backbone, CoOp performs end-to-end learning to update the learnable context vectors for target domain datasets. The simple yet effective approach substantially beats hand-crafted prompts by a large margin. Meanwhile, CoOp also exhibits better robustness to distribution shift than CLIP.
SP:4d5696e0b4e9d1d9156d0d0977903b16c9393e6f
Learning to Prompt for Vision-Language Models
1 INTRODUCTION

The traditional approach to visual representation learning is to train vision models to predict a fixed set of object categories using discrete labels (He et al., 2016; Dosovitskiy et al., 2021). However, this approach limits visual recognition systems to the closed set of visual concepts defined during training, making them unable to handle new categories once deployed in target environments, since additional data are required to learn a new classifier. Recently, vision-language pre-training such as CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) has emerged as a promising alternative. The main idea is to align images and raw text using two separate encoders, one for each modality. Through large-scale pre-training, vision-language models can learn open-set visual concepts and be readily transferred to downstream tasks. In particular, for each new classification task, one can synthesize the classification weights by feeding natural language describing the classes of interest to the text encoder, and compare them with image features produced by the image encoder. We observe that for pre-trained vision-language models, the text input, known as a prompt, plays a key role in downstream performance. However, identifying the right prompt is a non-trivial task that often takes a significant amount of time for word tuning, since a slight change in wording can make a huge difference in performance. For instance, for Caltech101 (Figure 1(a), 2nd vs. 3rd prompt), adding "a" before the class token brings more than a 5% increase in accuracy. Moreover, prompt engineering also requires expertise about the task and, ideally, about the language model's underlying mechanism. This is exemplified in Figure 1(b-d), where adding task-relevant context leads to significant improvements, i.e., "flower" for Flowers102, "texture" for DTD and "satellite" for EuroSAT.
The authors demonstrate a more efficient form of few-shot learning using CLIP compared to linear probing for image classification: CoOp. Instead of fine-tuning a small linear classifier on the output of CLIP, they propose fine-tuning a number of additional embeddings at the input layer; this modification, in theory, allows CLIP to leverage more computation when adapting to tasks, while still only optimizing a small number of parameters. Experiments across several corpora demonstrate the efficacy of the approach, which generally yields a few accuracy points of gain versus a linear probe trained on the same amount of data.
Is Importance Weighting Incompatible with Interpolating Classifiers?
1 INTRODUCTION

Machine learning models are often evaluated on test data that differ from the data they were trained on. A classic statistical technique to combat such distribution shift is to importance weight the loss function during training (Shimodaira, 2000). This procedure upweights points in the training data that are more likely to appear in the test data and downweights ones that are less likely. The reweighted training loss is an unbiased estimator of the test loss and can be minimized by standard algorithms, resulting in a simple and general procedure to address distribution shift. Surprisingly, recent papers (Byrd & Lipton, 2019; Xu et al., 2020) have found that importance weighting is ineffective in the current deep learning paradigm, where overparameterized models interpolate the training data or have vanishingly small train loss. In particular, Byrd & Lipton (2019) empirically showed that when no regularization is used, overparameterized linear and nonlinear models trained with the importance weighted cross-entropy loss ignore the importance weights. Xu et al. (2020) followed up and provided a theoretical justification for this observation in overparameterized linear and non-linear models. To build intuition about why importance weighting fails, consider linear classifiers as an example. Given linearly separable data $(x_1, y_1), \ldots, (x_n, y_n) \in \mathbb{R}^d \times \{-1, 1\}$, Soudry et al. (2018) showed that if gradient descent is applied to minimize an exponentially-tailed classification loss $\sum_{i \in [n]} \ell_{\exp}(y_i\, x_i \cdot \theta)$, then the iterates converge in direction to the maximum margin classifier

$$\hat{\theta}_{\mathrm{MM}} := \arg\max_{\|\theta\| = 1} \max\{\gamma : y_i\, x_i \cdot \theta \ge \gamma \ \text{for all } i \in [n]\}.$$

Xu et al. (2020) showed that in this same setting, minimizing the importance weighted loss $\sum_{i \in [n]} w_i\, \ell_{\exp}(y_i\, x_i \cdot \theta)$ with gradient descent also results in convergence to the maximum margin classifier, regardless of the weights.
To see why, consider the special case where the weights $(w_1, \ldots, w_n)$ are positive integers. This reweighting is equivalent to simply repeating each datapoint $w_i$ times, and the maximum margin classifier over this "new dataset" remains unchanged. Thus, invoking the original result by Soudry et al. (2018) proves that the importance weights have no effect in correcting the distribution shift. This phenomenon is demonstrated on a simple toy problem in Figure 1. Such evidence has led some to wonder whether importance weighting is fundamentally incompatible with overparameterized interpolating models. In this paper, we show that this is not the case. We find that the culprit behind the ineffectiveness of importance weighting is the exponential tail of popular losses such as the cross-entropy or the logistic. We propose altering the structure of the loss to have fatter, polynomially decaying tails instead. We theoretically and empirically demonstrate that importance weights do correct for distribution shift under such losses, even for overparameterized classifiers. Our first contribution is to characterize the limiting direction of the iterates of gradient descent (its implicit bias) when minimizing reweighted polynomially-tailed losses with linear classifiers. We show that this limiting direction is a function of both the datapoints and the importance weights, unlike the maximum margin classifier, which depends only on the data (see the right half of Figure 1). Next, we analyze the generalization behavior of this classifier in a label shift setting. We prove that when the weights are an exponentiation of the unbiased importance weights, the test error decays to zero in the large sample limit, regardless of the level of imbalance in the data. In contrast, we prove that the test error of the maximum margin classifier in this same setting must be at least 1/8.
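The repeated-datapoint intuition above can be checked numerically: on a small separable problem, gradient descent on the weighted and unweighted exponential losses drifts toward nearly the same direction. The sketch below uses our own toy data, step size, and `gd_direction` helper (it is not the paper's Figure 1 experiment).

```python
import numpy as np

# Toy linearly separable data in R^2 with labels folded in: row i is y_i * x_i.
yx = np.array([[2.0, 1.0], [1.0, 2.0], [3.0, 3.0]])

def gd_direction(weights, lr=0.1, steps=50000):
    """Run gradient descent on the weighted exponential loss
    sum_i w_i * exp(-y_i x_i . theta) and return the normalized final iterate."""
    w = np.asarray(weights, dtype=float)
    theta = np.zeros(2)
    for _ in range(steps):
        margins = yx @ theta
        grad = -(w * np.exp(-margins)) @ yx
        theta -= lr * grad
    return theta / np.linalg.norm(theta)

u = gd_direction([1.0, 1.0, 1.0])   # unweighted
v = gd_direction([5.0, 1.0, 1.0])   # heavily upweight the first datapoint
print(u, v, float(u @ v))           # the two directions nearly coincide
```

The residual gap between the two directions shrinks only logarithmically in the number of steps, consistent with the slow convergence to the maximum margin direction.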
Finally, we demonstrate the practical benefits of our framework by applying this approach to experiments with neural networks. On both a label shift dataset (Imbalanced Binary CIFAR10) and a subpopulation shift dataset with spurious correlations (CelebA (Sagawa et al., 2019)), we find that using polynomially-tailed losses consistently leads to a gain of 2-3% in test accuracy over using the cross-entropy loss. 2 RELATED WORK. Early work (Shimodaira, 2000; Wen et al., 2014) already warned against the potential ineffectiveness of importance weights on interpolating overparameterized models. Shimodaira (2000) showed that when the model is well-specified, importance weights can fail to have an effect, and that the ordinary maximum likelihood estimate is asymptotically optimal. Wen et al. (2014) showed that when there is a zero-loss minimizer of an unweighted convex loss minimization problem, it is also a minimizer of the (adversarially) reweighted loss. Recent work (Byrd & Lipton, 2019; Xu et al., 2020) has shown that importance weighting fails to have an effect on neural networks trained with gradient descent, though always in the setting of exponentially-tailed losses. Sagawa et al. (2019) demonstrated that reweighting can fail to have the desired effect when unregularized distributionally robust optimization (DRO) methods are used in conjunction with the cross-entropy loss. They empirically showed that regularization is necessary to reap the benefits of reweighting, as also observed by Byrd & Lipton (2019). Our work also connects to the literature that has studied the implicit bias of gradient descent (Soudry et al., 2018; Ji & Telgarsky, 2019; Nacson et al., 2019). Especially relevant is the work by Ji et al. (2020), who relate the implicit bias of gradient descent with exponentially and polynomially-tailed losses for linear classifiers to a solution of a regularized loss minimization problem.
Finally, our generalization analysis draws from the growing literature focused on finite sample bounds on the test error of the maximum margin classifier in the overparameterized regime (Chatterji & Long, 2021; Muthukumar et al., 2020; Wang & Thrampoulidis, 2021; Cao et al., 2021). 3 SETTING. We consider a distribution shift setting where the training samples (x_1, y_1), ..., (x_n, y_n) ∈ R^d × {−1, 1} are drawn i.i.d. from P_train, and the test samples are drawn from a different distribution P_test that is absolutely continuous with respect to P_train. Let f_θ denote a classifier parameterized by θ. Given a feature x, a classifier maps this feature to f_θ(x) ∈ R. In this paper we consider cases where the classifier is either linear (for our theory) or a neural network (for our experiments). Our goal is to find a classifier f_θ that minimizes the 0-1 loss with respect to the test distribution: TestError[f_θ] = P_{(x,y)∼P_test}[sign(f_θ(x)) ≠ y]. To handle the mismatch between P_train and P_test, we study importance weighting algorithms. Given a datapoint (x, y) ∈ R^d × {−1, 1}, the classical unbiased importance weight at this datapoint is given by the ratio of densities between the test and the train distributions, P_test(x, y)/P_train(x, y). Using these unbiased importance weights ensures that the reweighted training loss is an unbiased estimate of the test loss. However, as noted above, past work has shown that interpolating classifiers trained with gradient descent on importance weighted exponentially-tailed losses, such as the logistic loss ℓ_log(z) := log(1 + exp(−z)), the exponential loss ℓ_exp(z) := exp(−z), and the cross-entropy loss, ignore the importance weights. For example, consider the case when the classifier is linear, f_θ(x) = x · θ, the weights are w_1, ..., w_n > 0, and the reweighted loss function is L̂(θ) = ∑_{i=1}^n w_i ℓ_log(y_i x_i · θ).
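The unbiasedness of the reweighted training loss can be made concrete with a small Monte Carlo sketch (our own illustration, using a hypothetical one-dimensional label shift model): under label shift, the density ratio P_test(x, y)/P_train(x, y) reduces to the ratio of label marginals P_test(y)/P_train(y), and the reweighted training loss matches the test loss in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Label shift: the class-conditional P(x | y) is shared; only P(y) changes.
p_train = {1: 0.9, -1: 0.1}   # imbalanced training labels
p_test = {1: 0.5, -1: 0.5}    # balanced test labels

# Under label shift the density ratio reduces to a ratio of label marginals.
weight = {y: p_test[y] / p_train[y] for y in (1, -1)}

def loss(x, y):
    # Any per-example loss works; here the logistic loss of a fixed classifier.
    theta = 1.5
    return np.log1p(np.exp(-y * x * theta))

def sample(p, n):
    ys = rng.choice([1, -1], size=n, p=[p[1], p[-1]])
    xs = ys + rng.normal(size=n)  # x | y ~ N(y, 1), shared across splits
    return xs, ys

xs_tr, ys_tr = sample(p_train, 200_000)
xs_te, ys_te = sample(p_test, 200_000)

w_tr = np.where(ys_tr == 1, weight[1], weight[-1])
reweighted = np.mean(w_tr * loss(xs_tr, ys_tr))   # reweighted train loss
plain_test = np.mean(loss(xs_te, ys_te))          # plain test loss
print(reweighted, plain_test)  # the two Monte Carlo estimates nearly coincide
```

The agreement follows from E_train[w(y) ℓ] = Σ_y P_train(y) (P_test(y)/P_train(y)) E[ℓ | y] = E_test[ℓ]; the point of the paper is that this unbiasedness at the loss level does not survive the implicit bias of gradient descent under exponential tails.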
Xu et al. (2020) showed that if the data is linearly separable, then the iterates of gradient descent converge in direction to the ℓ_2-maximum margin classifier, θ̂_MM := arg max_{θ : ‖θ‖=1} {γ : y_i x_i · θ ≥ γ for all i ∈ [n]}. (1) Observe that the maximum margin classifier does not depend on the set of importance weights (w_1, ..., w_n) and hence may suffer large test error when there is distribution shift. Xu et al. (2020) further showed that when separability assumptions hold, non-linear classifiers (like multilayer neural networks) trained with gradient descent on exponentially-tailed losses are also unaffected by importance weights. We initiate a study of polynomially-tailed losses in the distribution shift setting and show that they have improved behavior with respect to importance weighting even when the model is overparameterized. Given parameters α > 0 and β ∈ R, define the polynomially-tailed loss as follows: ℓ_{α,β}(z) := ℓ_left(z) if z < β, and ℓ_{α,β}(z) := 1/[z − (β − 1)]^α if z ≥ β, where ℓ_left is any loss function such that the overall loss function ℓ_{α,β} is convex, differentiable and strictly decreasing. Several natural choices for ℓ_left include the scaled logistic (c_1 log(1 + exp(−c_2 z))), exponential (c_1 exp(−c_2 z)) or linear (−c_1 z + c_2) losses. Given a training dataset (x_1, y_1), ..., (x_n, y_n) and a set of weights w_1, ..., w_n ≥ 0, we let L̂_{α,β}(f_θ) := ∑_{i=1}^n w_i ℓ_{α,β}(y_i f_θ(x_i)) be the reweighted empirical loss on this dataset. Notation. Given a vector v, let ‖v‖ denote its Euclidean norm. For any j ∈ N, we denote the set {1, ..., j} by [j]. A random variable ξ is 1-sub-Gaussian if for any λ ∈ R, E[e^{λξ}] ≤ e^{λ²/2}. 4 THEORETICAL RESULTS. In this section, we present several theoretical results that justify the use of polynomially-tailed losses in conjunction with importance weights to handle distribution shifts.
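As a concrete sketch of the definition above, the code below builds ℓ_{α,β} with one valid choice of ℓ_left — the linear loss matched in value and slope to the polynomial tail at z = β, which is among the options the text lists — and checks numerically that the resulting function is strictly decreasing and convex. The specific matching constants are our own choice, not prescribed by the paper.

```python
import numpy as np

def poly_tailed_loss(z, alpha=1.0, beta=1.0):
    """Polynomially-tailed loss l_{alpha,beta}(z).
    Tail: 1 / (z - (beta - 1))**alpha for z >= beta; the tail equals 1 and has
    slope -alpha at z = beta. Left piece: the linear loss matched to that value
    and slope, which keeps the overall function convex, differentiable and
    strictly decreasing (one valid choice of l_left)."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    tail = z >= beta
    out[tail] = (z[tail] - (beta - 1.0)) ** (-alpha)
    out[~tail] = 1.0 - alpha * (z[~tail] - beta)
    return out

zs = np.linspace(-3.0, 10.0, 10_001)
vals = poly_tailed_loss(zs)

# Strictly decreasing everywhere, and convex (second differences >= 0).
assert np.all(np.diff(vals) < 0)
assert np.all(np.diff(vals, 2) > -1e-8)
```

Unlike exp(−z), the tail 1/[z − (β − 1)]^α decays polynomially, so well-classified points keep a non-negligible gradient contribution and the importance weights continue to matter.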
To make the analysis tractable, we restrict our theoretical study to linear classifiers, f_θ(x) = x · θ, for some θ ∈ R^d. First, in Section 4.1 we characterize the limiting direction of gradient descent on reweighted polynomially-tailed losses and show that this direction depends on both the weights and the datapoints. Next, in Section 4.2, we upper bound the test error of this limiting solution in a label shift setting. We also show that choosing weights obtained by exponentiating the unbiased importance weights helps reduce the test error. Finally, in this label shift setting, we show that the maximum margin classifier suffers an error of at least 1/8.
The paper shows, both theoretically and experimentally, that importance weighting is not incompatible with the training of overparameterized models, provided that the training loss is modified so that it does not have an exponential tail. Specifically, for binary classification the authors propose a new loss function with polynomial tail decay. Theoretically, the new loss is shown to outperform weighted cross-entropy for linear models under an imbalanced mixture-of-Gaussians model. Empirically, the new loss outperforms weighted cross-entropy on Imbalanced Binary CIFAR10 and the CelebA dataset.
Is Importance Weighting Incompatible with Interpolating Classifiers?
Previous works pointed out an interesting incompatibility between importance weighting and the training of overparameterized neural nets. From the margin perspective, Soudry et al. (2018) showed that gradient descent on an exponentially-tailed loss converges to the max-margin classifier, and Xu et al. (2020) pointed out that this margin maximization is unaffected by importance weighting. This paper then turns to studying a polynomially-tailed loss. * The paper establishes the implicit bias by formulating an optimization problem, which changes according to the importance weights. The convergent classifier is shown to generalize better than the one obtained from the exponentially-tailed loss in a simple linear classification setting, where the dataset contains two sub-Gaussian clusters. * Experiments on Imbalanced Binary CIFAR10 and CelebA show that the polynomially-tailed loss performs better on imbalanced datasets.
In this paper, the authors investigate whether importance weighting is incompatible with the training of overparameterized neural networks. In contrast to the recent observation that importance weighting is ineffective in the current deep learning paradigm (Byrd & Lipton, 2019), the authors show that it can actually be helpful when polynomially-tailed losses are used. Both theoretical justifications and empirical evidence are provided to support the claim.
Born Again Neural Rankers
We introduce Born Again neural Rankers (BAR) in the Learning to Rank (LTR) setting, where student rankers, trained in the Knowledge Distillation (KD) framework, are parameterized identically to their teachers. Unlike existing ranking distillation work, which pursues a good trade-off between performance and efficiency, BAR adapts the idea of Born Again Networks (BAN) to ranking problems and significantly improves the ranking performance of students over their teacher rankers without increasing model capacity. The key differences between BAR and common distillation techniques for classification are: (1) an appropriate teacher score transformation function, and (2) a novel listwise distillation framework. Both techniques are specifically designed for ranking problems and are rarely studied in the knowledge distillation literature. Using a state-of-the-art neural ranking structure, BAR pushes the limits of neural rankers beyond a recent rigorous benchmark study and significantly outperforms traditionally strong gradient boosted decision tree based models on 7 out of 9 key metrics, for the first time in the literature. In addition to the strong empirical results, we give theoretical explanations of why listwise distillation is effective for neural rankers. 1 INTRODUCTION. Learning to rank (LTR) has become an essential component of many real-world applications such as search (Liu, 2009). In ranking problems, the focus is on predicting the relative order of a list of items given a query. It is thus different from classification problems, whose goal is to predict the class label of a single item. While many neural machine learning techniques have been proposed recently, they are mostly for classification problems. Given the difference between ranking and classification problems, it is interesting to study how neural techniques can be used for ranking problems. Knowledge distillation (KD) (Hinton et al., 2015; Gou et al.
, 2020) is one such recently popular technique. The initial goal of KD was to pursue a good trade-off between performance and efficiency: given a high-capacity teacher model with the desired high performance, a more compact student model is trained using teacher labels (Heo et al., 2019; Sun et al., 2019; Sanh et al., 2019). Student models trained with KD usually work better than those trained from the original labels without teacher guidance. How to effectively apply KD to ranking problems is not straightforward and has not been well studied yet, for the following reasons. First, teacher models in classification typically predict a probability distribution over all classes. Such "dark knowledge" is believed to be a key reason why KD works (Hinton et al., 2015). However, a teacher model in ranking does not convey such a distribution over pre-defined classes, as ranking models only care about the relative order of items and simply output a single score for each item in a possibly unbounded candidate list. Ranking scores are typically neither calibrated (in contrast to the probabilistic interpretation of classification scores (Guo et al., 2017)) nor normalized (in contrast to classification scores, which sum to one over all classes for most popular losses (Hinton et al., 2015)). Thus directly using the outputs of a teacher model as labels for ranking tasks with existing KD methods may be suboptimal. Second, listwise information over all input items should be considered to achieve the best ranking performance, since the goal is to infer the relative orders among them. For example, listwise losses have been shown to be more effective than other alternatives for LTR problems (Cao et al., 2007). On the other hand, classification tasks almost universally treat each item independently, based on the i.i.d. assumption.
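The two ingredients discussed above — a transformation of raw teacher scores and a listwise objective — can be sketched generically as follows. This is our own illustration, not the paper's exact formulation: it uses a temperature-scaled softmax as one plausible transformation turning uncalibrated teacher scores into a distribution over the list, and a soft cross-entropy between teacher and student list distributions as the listwise distillation loss; the temperature `tau` is a hypothetical design knob.

```python
import numpy as np

def _log_softmax(s):
    s = s - np.max(s)  # subtract the max for numerical stability
    return s - np.log(np.sum(np.exp(s)))

def listwise_distill_loss(student_scores, teacher_scores, tau=1.0):
    """Soft cross-entropy between the teacher's and the student's softmax
    distributions over the same candidate list (one plausible listwise
    distillation loss; not the paper's exact choice)."""
    p_teacher = np.exp(_log_softmax(np.asarray(teacher_scores, float) / tau))
    log_p_student = _log_softmax(np.asarray(student_scores, float))
    return float(-np.sum(p_teacher * log_p_student))

teacher = np.array([2.0, 0.5, -1.0])  # raw, uncalibrated ranking scores
aligned = listwise_distill_loss(teacher, teacher)
reversed_ = listwise_distill_loss(teacher[::-1], teacher)
print(aligned, reversed_)  # the loss is smaller when the student agrees
```

Minimizing this loss pushes the student's listwise distribution toward the teacher's; other score transformations (e.g. rank-based ones) could be swapped in for the softmax.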
It is interesting to also consider listwise frameworks when studying KD for ranking problems, which, to the best of our knowledge, has not been explored in the literature. Third, though there is a consensus in the classification KD literature that, given no severe over-fitting, larger teacher models usually work better (He et al., 2016), recent works show that for traditional LTR problems it is hard for larger models to achieve better performance (Bruch et al., 2019; Qin et al., 2021), due to the difficulty of applying standard techniques such as data augmentation (compared to, e.g., image rotation in computer vision (Xie et al., 2020)) and the lack of very large human-labeled ranking datasets. This makes it harder to use the standard KD setting to improve the performance of ranking models by distilling from a very large teacher model. Thus, KD techniques need special adjustments for LTR problems. In this paper, inspired by Born Again Networks (BAN) (Furlanello et al., 2018), in which student models are configured with the same capacity as their teachers and are shown to outperform them for classification problems, we study BAN techniques for ranking problems. To this end, we propose Born Again neural Rankers (BAR), which train student models using listwise distillation and properly transformed teacher scores, and which achieve new state-of-the-art ranking performance for neural rankers. While existing ranking distillation works such as (Tang & Wang, 2018; Reddi et al., 2021) require a more powerful teacher model and focus on performance-efficiency trade-offs, the primary goal of our paper is to improve ranking performance over state-of-the-art teacher rankers. In summary, our contributions are as follows: • We propose Born Again neural Rankers (BAR) for learning to rank. This is the first knowledge distillation work that targets better ranking performance without increasing model capacity.
• We show that the key success factors of BAR are (1) an appropriate teacher score transformation function, and (2) a ranking-specific listwise distillation loss. Both design choices are tailored for LTR problems and are rarely studied in the knowledge distillation literature. • We provide new theoretical explanations of why BAR works better than other alternatives. This contributes to both general knowledge distillation and ranking distillation research. • We verify our hypotheses on rigorous public LTR benchmarks and show that BAR is able to significantly improve upon state-of-the-art neural teacher rankers. 2 RELATED WORK. Knowledge distillation has recently been a popular research topic in several areas, such as image recognition (Hinton et al., 2015; Romero et al., 2014; Park et al., 2019), natural language understanding (Sanh et al., 2019; Jiao et al., 2019; Aguilar et al., 2020), and neural machine translation (Kim & Rush, 2016; Chen et al., 2017; Tan et al., 2019), as a way to generate compact models that achieve good performance-efficiency trade-offs. As we mentioned in the Introduction, both the classical setting (e.g., using pointwise losses on i.i.d. data) and the theoretical analysis (e.g., “dark knowledge” among classes) for classification tasks may not be optimal for the ranking setting. The major goal of this work is to push the limits of neural rankers on rigorous benchmarks. Since the introduction of RankNet (Burges et al., 2005) over a decade ago, only recently were neural rankers shown to be competitive with well-tuned Gradient Boosted Decision Trees (GBDT) on traditional LTR datasets (Qin et al., 2021). We build upon (Qin et al., 2021) and show that BAR can further push the state of the art of neural rankers on rigorous benchmarks. We also provide extra experiments to show that listwise distillation helps neural ranking in other settings.
We are motivated by Born Again Networks (BAN), introduced by (Furlanello et al., 2018). BAN is the first work to show that better performance can be achieved by parameterizing the student model the same as the teacher. However, BAN focuses only on classification, and direct application of BAN does not help LTR problems. Building upon the general “born again” ideas (Zhang et al., 2019; Clark et al., 2019), our contribution in this paper is in developing specific new techniques and theory that make these ideas applicable to the important LTR setting. Another closely related work is Ranking Distillation (RD) (Tang & Wang, 2018), since it studies knowledge distillation for ranking. There are several marked differences between our work and RD. First, RD focuses on performance-efficiency trade-offs, where the student usually underperforms the teacher in terms of ranking metrics (Reddi et al., 2021), while we focus on outperforming the teacher. Second, RD only uses a pointwise logistic loss for distillation. The authors state that “We also tried to use a pair-wise distillation loss when learning from teacher's top-K ranking. However, the results were disappointing”, without going into more detail or exploring listwise approaches, which we show are the key success factor. Also, Tang and Wang (Tang & Wang, 2018) use a hyperparameter K: items ranked in the top K are labeled as positive and the others as negative for distillation. Besides only working for binary datasets, this setting is not very practical, since real-world ranking lists may have very different list sizes and numbers of relevant items. Our method does not require such a parameter. Furthermore, they only evaluated their methods on recommender system datasets where only user id, item id, and binary labels are available. This is not a typical LTR setting, so the effectiveness of RD over state-of-the-art neural rankers is unclear.
3 BACKGROUND ON LEARNING TO RANK. For LTR problems, the training data can be represented as a set Ψ = {(x, y) ∈ χ^n × R^n}, where x is a list of n items x_i ∈ χ and y is a list of n relevance labels y_i ∈ R for 1 ≤ i ≤ n. We use χ as the universe of all items. In traditional LTR problems, each x_i corresponds to a query-item pair and is represented as a feature vector in R^k, where k is the number of feature dimensions. With a slight abuse of notation, we also use x_i as the feature vector and say x ∈ R^{n×k}. The objective is to learn a function that produces an ordering of the items in x so that the utility of the ordered list is maximized, that is, the items are ordered by decreasing relevance. Most LTR algorithms formulate the problem as learning a ranking function to score and sort the items in a list. As such, the goal of LTR boils down to finding a parameterized ranking function f(·; θ): χ^n → R^n, where θ denotes the set of trainable parameters, that minimizes the empirical loss

L(f(·; θ)) = (1 / |Ψ|) Σ_{(x,y)∈Ψ} l(y, f(x; θ)),   (1)

where l(·) is the loss function on a single list. There are many ranking metrics, such as NDCG and MAP, used in LTR problems. A common property of these metrics is that they are rank-dependent and place more emphasis on the top-ranked items. For example, the widely used NDCG metric is defined as

NDCG(π_f, y) = DCG(π_f, y) / DCG(π*, y),   (2)

where π_f is the ranked list induced by the ranking function f(·; θ) on x, π* is the ideal list (where x is sorted by decreasing y), and DCG is defined as

DCG(π, y) = Σ_{i=1}^{n} (2^{y_i} − 1) / log_2(1 + π(i)).   (3)

In practice, the truncated version that only considers the top-k ranked items, denoted NDCG@k, is often used.
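Eqs. (2)-(3) can be made concrete with a short script. The sketch below computes NDCG@k with gain 2^y − 1 and discount log2(1 + rank), where π(i) is the 1-based rank position of item i under the learned scores; it is an illustrative implementation, not code from the paper.

```python
import math

def ndcg_at_k(scores, labels, k):
    """NDCG@k per Eqs. (2)-(3): gain (2^y - 1), discount log2(1 + rank)."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])        # induced ranking pi_f
    rank = {item: pos + 1 for pos, item in enumerate(order)}  # pi(i), 1-based
    dcg = sum((2 ** labels[i] - 1) / math.log2(1 + rank[i])
              for i in range(n) if rank[i] <= k)
    ideal = sorted(labels, reverse=True)[:k]                  # ideal list pi*
    idcg = sum((2 ** y - 1) / math.log2(1 + r)
               for r, y in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0

# A ranker that reproduces the ideal order scores NDCG@k = 1.
assert ndcg_at_k([5.0, 2.0, 1.0], [3, 1, 0], k=3) == 1.0
# Reversing the order is penalized, most heavily at the top ranks.
assert ndcg_at_k([1.0, 2.0, 5.0], [3, 1, 0], k=3) < 1.0
```

Because the discount shrinks with rank, swapping two items near the top costs more NDCG than the same swap near the bottom, which is what makes the metric rank-dependent.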
Knowledge distillation (KD) is a popular technique for trading off performance and efficiency. Student models trained with KD usually work better than those trained from the original labels. However, applying KD to ranking problems is not well studied. One earlier work on ranking distillation (Tang & Wang, 2018) proposes a pointwise logistic loss for distillation after unsuccessfully attempting pair-wise approaches on the teacher's top-K ranking. The authors take on the challenge of dealing with the following characteristics of ranking models:

* Ranking scores are neither calibrated nor normalized; they do not directly convey information about an underlying distribution.
* Ranking models care about the relative order of scores and do not make an i.i.d. assumption over documents.
* Large teacher models tend to overfit, because it is difficult to apply techniques like data augmentation and because ranking datasets are small.

The authors extend BAN, in which student models are configured with the same capacity as their teachers, and propose BAR. BAR refers to the specific setting that combines a ranking loss on the original labels, listwise distillation on teacher scores, and a tunable affine teacher score transformation. Unlike typical KD models, the primary goal of the paper is to improve ranking performance over state-of-the-art teacher rankers.
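The listwise-distillation-on-transformed-teacher-scores component can be sketched as a softmax cross-entropy between the student's list scores and a target distribution built from affinely transformed teacher scores. This is a minimal sketch, not the paper's exact loss; the transform parameters `a` and `b` are hypothetical illustrative defaults.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def listwise_distill_loss(student, teacher, a=1.0, b=0.0):
    """Softmax cross-entropy over one list between the student's scores and
    targets derived from affinely transformed teacher scores (a*t + b).
    a and b are tunable; the values here are illustrative, not from the paper."""
    target = softmax([a * t + b for t in teacher])
    log_probs = [math.log(p) for p in softmax(student)]
    return -sum(t * lp for t, lp in zip(target, log_probs))

teacher = [2.0, 1.0, 0.0]
# A student matching the teacher's scores incurs a lower listwise loss than
# one that reverses the teacher's order.
assert listwise_distill_loss(teacher, teacher) < listwise_distill_loss([0.0, 1.0, 2.0], teacher)
```

In the full setting described above, this term would be combined with a ranking loss on the original labels; the mixing weight between the two would be another tunable hyperparameter.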
SP:e2363380dd6d7f351b2ece40f053caad3cfa96fe
Born Again Neural Rankers
The paper proposes a distillation approach for the learning-to-rank setting. Specifically, the "born again neural ranker" approach is investigated, where both teacher and student models are identically parameterized neural networks. The authors propose a listwise loss to incorporate scores from the teacher model during student optimization and show that this leads to improvements on three real-world datasets.
The paper introduces Knowledge Distillation (KD) to Learning to Rank (LTR), adapting the idea of Born Again Networks (BAN). The proposed model, Born Again neural Rankers (BAR), combines an appropriate teacher score transformation with a novel listwise distillation framework. The authors conduct thorough experimental studies to demonstrate the superior performance of BAR, and also provide explanations of why it works.
Discrete Representations Strengthen Vision Transformer Robustness
1 INTRODUCTION. Despite their high performance on in-distribution test sets, deep neural networks fail to generalize under real-world distribution shifts (Barbu et al., 2019). This gap between training and inference poses many challenges for deploying deep learning models in real-world applications where closed-world assumptions are violated. This lack of robustness can be ascribed to learned representations that are overly sensitive to minor variations in local texture and insufficiently adept at representing more robust scene and object characteristics, such as shape and structure. Vision Transformer (ViT) (Dosovitskiy et al., 2020) has started to rival Convolutional Neural Networks (CNNs) in many computer vision tasks. Recent works found that ViTs are more robust than CNNs (Paul & Chen, 2021; Mao et al., 2021b; Bhojanapalli et al., 2021) and generalize favorably on a variety of visual robustness benchmarks (Hendrycks et al., 2021b; Hendrycks & Dietterich, 2019). These works suggest that ViTs' robustness comes from the self-attention architecture, which captures a more globally-contextualized inductive bias than CNNs. However, though self-attention is capable of modeling the global context, recent studies found that ImageNet-trained ViTs may not take advantage of it. For example, Chen et al. (2021b) compared training ViT on ImageNet with and without position embedding, and showed that the spatial structure captured in the position embedding contributes less than ~3% of ViT's performance. Without position embedding, ViT treats the image as an orderless bag-of-patches, yet achieves similar performance to ViT with position embedding. This suggests that ViT unintentionally relies more on local detail (e.g., texture) than on the shape or structure of the object.
We hypothesize that this deficiency in robustness comes from the high-dimensional, individually informative, linear tokenization, which allows ViT to minimize empirical risk without learning much spatial structure. In this paper, we propose a simple yet novel input layer for vision transformers, where image patches are represented by discrete tokens. To be specific, we discretize an image and represent each image patch as a discrete token, or "visual word", in a codebook. Our key insight is that discrete tokens capture important features in a low-dimensional space (Oord et al., 2017), preserving the shape and structure of the object (see Figure 2). Our approach capitalizes on this discrete representation to promote the robustness of ViT. Using discrete tokens drives ViT towards better modeling of spatial interactions between tokens, given that individual tokens no longer carry enough information to depend on. We also concatenate a low-dimensional pixel token to the discrete token to compensate for local details potentially missed by the discrete tokens, especially for small objects. Our approach only changes the image patch tokenizer to improve generalization and robustness, is orthogonal to all existing approaches for robustness, and can be integrated into architectures that extend the vision transformer. We call the ViT model using our Discrete representation the Dr. ViT. Our experiments and visualizations show that incorporating discrete tokens in ViT significantly improves generalization accuracy on all seven out-of-distribution ImageNet benchmarks, e.g., ImageNet-Rendition by up to 10%, Stylized-ImageNet by up to 12%, ImageNet-Sketch by up to 10%, and ImageNet-C by up to 10%. Our method establishes a new state-of-the-art on four benchmarks without any dataset-specific data augmentation. Our work is the first to connect discrete representations to robustness and demonstrate consistent robustness improvements.
We will release the code and models to support comparisons and future research. 2 RELATED WORK. Vision Transformer (ViT) (Dosovitskiy et al., 2020), inspired by the Transformer (Vaswani et al., 2017) in NLP, is the first CNN-free architecture to achieve state-of-the-art image classification accuracy. Since its inception, numerous works have proposed improvements to the ViT architecture (Wang et al., 2021; Chu et al., 2021; Liu et al., 2021; d'Ascoli et al., 2021), objective (Chen et al., 2021a), training strategy (Touvron et al., 2021), etc. Given the difficulty of studying all existing ViT models, this paper focuses on the classical ViT model (Dosovitskiy et al., 2020) and its recently published variants. Specifically, Steiner et al. (2021) proposed ViT-AugReg, which applies stronger data augmentation and regularization to the ViT model. Tolstikhin et al. (2021) introduced MLP-Mixer to replace self-attention in ViT with multi-layer perceptrons (MLPs). We select the above ViT model family (Dosovitskiy et al., 2020; Steiner et al., 2021; Tolstikhin et al., 2021) for our robustness study for three reasons. First, they represent both the very first and among the best vision transformers in the literature. Second, these models demonstrated competitive performance when pre-trained on sufficiently large datasets such as ImageNet-21K and JFT-300M. Finally, unlike other Transformer models, they provide architectures consisting solely of Transformer layers as well as a hybrid of CNN and Transformer layers. These properties improve our understanding of robustness for different types of network layers and datasets. Robustness. Recent works established multiple content robustness datasets to evaluate the out-of-distribution generalization of deep models (Barbu et al., 2019; Hendrycks et al., 2021b;a; Wang et al., 2019; Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Recht et al., 2019).
In this paper, we consider 7 ImageNet robustness benchmarks of real-world test images (or proxies) on which deep models trained on ImageNet are shown to suffer a notable performance drop. Existing works on robustness target closing the gap on a subset of these ImageNet robustness benchmarks and were extensively verified with CNNs. Among them, carefully-designed data augmentations (Hendrycks et al., 2021a; Cubuk et al., 2018; Steiner et al., 2021; Mao et al., 2021b;a), model regularization (Wang et al., 2019; Huang et al., 2020b; Hendrycks et al., 2019), and multitask learning (Zamir et al., 2020) are effective at addressing the issue. More recently, a few studies (Paul & Chen, 2021; Bhojanapalli et al., 2021; Naseer et al., 2021; Shao et al., 2021) suggest that ViTs are more robust than CNNs. Existing works mainly focused on analyzing the cause of the superior generalizability of the ViT model and exploring new data augmentations. As our work focuses on discrete token input, we train the models using the same data augmentation as the ViT-AugReg baseline (Steiner et al., 2021). While tailoring data augmentation (Mao et al., 2021b) may further improve our results, we leave it out of the scope of this paper. Discrete Representation was used as a visual representation prior to the deep learning revolution, such as in the bag-of-visual-words model (Sivic & Zisserman, 2003; Csurka et al., 2004) and the VLAD model (Arandjelovic & Zisserman, 2013). Recently, Oord et al. (2017) and Vahdat et al. (2018) proposed neural discrete representations to encode an image as integer tokens. Recent works used discrete representations mainly for image synthesis (Ramesh et al., 2021; Esser et al., 2021). To the best of our knowledge, our work is the first to demonstrate that discrete representations strengthen robustness. The closest work to ours is BEiT (Bao et al.
, 2021), which pretrains the ViTs to predict masked tokens. However, the tokens are discarded after pretraining, so the ViT model can still overfit the non-robust nuisances in the pixel tokens at the later finetuning stage, undermining its robustness.

Figure 1: Overview of the proposed ViT using discrete representations. In addition to the pixel embeddings (orange), we introduce discrete tokens and embeddings (pink) as the input to the standard Transformer Encoder of the ViT model (Dosovitskiy et al., 2020).

3 METHOD. 3.1 PRELIMINARY ON VISION TRANSFORMER. Vision Transformer (Dosovitskiy et al., 2020) is a pure transformer architecture that operates on a sequence of image patches. The 2D image $x \in \mathbb{R}^{H \times W \times C}$ is flattened into a sequence of image patches, following the raster scan, denoted by $x_p \in \mathbb{R}^{L \times (P^2 \cdot C)}$, where $L = \frac{H \times W}{P^2}$ is the effective sequence length and $P^2 \times C$ is the dimension of an image patch. A learnable classification token $x_{class}$ is prepended to the patch sequence, then the position embedding $E_{pos}$ is added to form the final input embedding $h_0$:

$$h_0 = [x_{class};\, x_p^1 E;\, x_p^2 E;\, \cdots;\, x_p^L E] + E_{pos}, \quad E \in \mathbb{R}^{(P^2 \cdot C) \times D}, \; E_{pos} \in \mathbb{R}^{(L+1) \times D} \qquad (1)$$
$$h'_\ell = \mathrm{MSA}(\mathrm{LN}(h_{\ell-1})) + h_{\ell-1}, \quad \ell = 1, \ldots, L_f \qquad (2)$$
$$h_\ell = \mathrm{MLP}(\mathrm{LN}(h'_\ell)) + h'_\ell, \quad \ell = 1, \ldots, L_f \qquad (3)$$
$$y = \mathrm{LN}(h_{L_f}^0), \qquad (4)$$

The architecture of ViT follows that of the Transformer (Vaswani et al., 2017), which alternates layers of multi-headed self-attention (MSA) and multi-layer perceptron (MLP), with LayerNorm (LN) and residual connections applied to every block.
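The patch tokenization and input embedding of Equation (1) above can be sketched shape-for-shape in NumPy. This is an illustrative reimplementation under the paper's notation (non-overlapping P x P patches in raster order), not the authors' code:

```python
import numpy as np

def vit_input_embedding(image, P, E, E_pos, x_class):
    """Builds h0 from Equation (1): flatten non-overlapping P x P
    patches in raster order, project them with E, prepend the class
    token, and add position embeddings."""
    H, W, C = image.shape
    L = (H // P) * (W // P)                        # effective sequence length
    # reshape to (H/P, P, W/P, P, C), then group the two block axes first
    patches = (image.reshape(H // P, P, W // P, P, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(L, P * P * C))        # x_p in R^{L x (P^2 * C)}
    tokens = patches @ E                           # linear projection to R^{L x D}
    h0 = np.concatenate([x_class[None, :], tokens], axis=0)  # prepend class token
    return h0 + E_pos                              # shape (L+1, D)
```

With an 8x8x3 image and P = 4, this produces a sequence of L = 4 patch tokens plus the class token, matching the shapes stated after Equation (1).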
We denote the number of blocks as $L_f$. This paper considers a ViT model family consisting of 4 ViT backbones: the vanilla ViT discussed above; ViT-AugReg (Steiner et al., 2021), which shares the same ViT architecture but applies stronger data augmentation and regularization; MLP-Mixer (Tolstikhin et al., 2021), which replaces self-attention in ViT with MLPs; and a variant called Hybrid-ViT, which replaces the raw image patches in Equation 1 with CNN features extracted by a ResNet-50 (He et al., 2016). 3.2 ARCHITECTURE. Existing ViTs represent an image patch as a sequence of pixel tokens, which are linear projections of flattened image pixels. We propose a novel architecture modification to the input layer of the vision transformer, where an image patch $x_p$ is represented by a combination of two embeddings. As illustrated in Fig. 1, in addition to the original pixel-wise linear projection, we discretize an image patch into a discrete token in a codebook $V \in \mathbb{R}^{K \times d_c}$, where $K$ is the codebook size and $d_c$ is the dimension of the embedding. The discretization is achieved by a vector quantized (VQ) encoder $p_\theta$ that produces an integer $z$ for an image patch $x$ as:

$$p_\theta(z = k \mid x) = \mathbb{1}\Big(k = \operatorname*{argmin}_{j=1:K} \|z_e(x) - V_j\|_2\Big), \qquad (5)$$

where $z_e(x)$ denotes the output of the encoder network and $\mathbb{1}(\cdot)$ is the indicator function. The encoder is applied to the patch sequence $x_p \in \mathbb{R}^{L \times (P^2 \cdot C)}$ to obtain an integer sequence $z_d \in \{1, 2, \ldots, K\}^L$. Afterward, we use the embeddings of both discrete and pixel tokens to construct the input embedding to the ViT model. Specifically, the input embedding in Equation 1 is replaced by:

$$h_0 = [x_{class};\, f(V_{z_d^1}, x_p^1 E);\, f(V_{z_d^2}, x_p^2 E);\, \cdots;\, f(V_{z_d^L}, x_p^L E)] + E_{pos}, \qquad (6)$$

where $f$ is the function, embodied as a neural network layer, that combines the two embeddings. We empirically compared four network designs for $f$ and found that the simplest, concatenation, works best.
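Equations (5)-(6) amount to a nearest-neighbor lookup in the codebook followed by combining the two embeddings; a minimal NumPy sketch (our naming, not the released implementation) is:

```python
import numpy as np

def quantize(z_e, codebook):
    """Eq. (5): assign each encoder output to the index of its nearest
    codebook entry under the L2 norm.
    z_e: (L, d_c) encoder outputs; codebook V: (K, d_c) -> tokens (L,)."""
    dists = np.sum((z_e[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    return np.argmin(dists, axis=1)

def combine(discrete_emb, pixel_emb):
    """The function f in Eq. (6), using the simplest design the paper
    found to work best: concatenation along the feature axis."""
    return np.concatenate([discrete_emb, pixel_emb], axis=-1)
```

Because quantization replaces each patch embedding with a shared codebook row, individual tokens carry less patch-specific detail, which is precisely what pushes the transformer to model interactions between tokens.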
Note that our model only modifies the input layer of ViT (Equation 1) and leaves intact the remaining layers depicted in Equations 2-4.

Figure 2: Comparison of pixel tokens (top) and the reconstructed images decoded from the discrete tokens (bottom), for "tabby cat", "vulture", and "flamingo" images. Discrete tokens capture important shapes and structures but may lose local texture.

Comparison of pixel and discrete embeddings: Pixel and discrete embeddings represent different aspects of the input image. Discrete embeddings capture important features in a low-dimensional space (Oord et al., 2017) that preserves the global structure of an object but loses local details. Fig. 2 compares the original images (top) and the reconstructed images decoded from the discrete embeddings of our model (bottom). As shown, the decoded images from discrete embeddings reasonably depict the object shape and global context. Due to the quantization, the decoder hallucinates the local textures, e.g., in the cat's eye, or the text in the "vulture" and "flamingo" images. It is worth noting that the VQ encoder/decoder is only trained on ImageNet 2012, but they can generalize to out-of-distribution images. Please see more examples in Appendix A. On the flip side, pixel embeddings capture rich details through the linear projection from raw pixels. However, given the expressive power of transformers, ViTs can spend capacity on local textures or nuisance patterns that are often tangential to robust recognition. Since humans recognize images primarily relying on shape and semantic structure, this discrepancy with human perception undermines ViT's generalization on out-of-distribution data. Our proposed model leverages the power of both embeddings to promote the interaction between modeling global and local features.
The authors observe that ViTs trained on ImageNet heavily depend on local features but fail to use global features (shape or structure). To address this issue and improve the robustness of ViTs, the authors propose to replace the linear embedding layer with a vector-quantized encoder. The authors claim that this change pushes the ViTs to learn global information. Experiments are conducted on ImageNet and other ImageNet-variant datasets. Results show that the proposed method can improve ViTs' robustness on various benchmarks.
The authors present an observation in this paper that discrete image token representations derived from a vector-quantized image encoder are able to preserve shape and structure information of objects. Inspired by this observation, the authors propose a modification to ViT architectures that appends these token representations to the input, and show that the resulting models generalize better on out-of-distribution data for ImageNet classification. The modification to the input is simple and can be integrated into variants of ViT. Experiments show that the proposed method consistently outperforms baseline models on ImageNet, ImageNet-Real, and seven out-of-distribution datasets derived from ImageNet. The margin of improvement is especially large for tasks where textures and image style change significantly while object structure remains discriminative. The main contributions of this paper are two-fold: 1) a novel approach to enrich existing ViT architectures with shape and structure information derived from discrete representations, and 2) improved robustness over the baseline ViT models, achieving SoTA results on ImageNet-Rendition, Stylized-ImageNet, and ImageNet-Sketch.
SP:85033c6b87f59887cc0969744f818b00acf910c6
Discrete Representations Strengthen Vision Transformer Robustness
1 INTRODUCTION . Despite their high performance on in-distribution test sets , deep neural networks fail to generalize under real-world distribution shifts ( Barbu et al. , 2019 ) . This gap between training and inference poses many challenges for deploying deep learning models in real-world applications where closedworld assumptions are violated . This lack of robustness can be ascribed to learned representations that are overly sensitive to minor variations in local texture and insufficiently adept at representing more robust scene and object characteristics , such as shape and texture . Vision Transformer ( ViT ) ( Dosovitskiy et al. , 2020 ) has started to rival Convolutional Neural Networks ( CNNs ) in many computer vision tasks . Recent works found that ViTs are more robust than CNNs ( Paul & Chen , 2021 ; Mao et al. , 2021b ; Bhojanapalli et al. , 2021 ) and generalize favorably on a variety of visual robustness benchmarks ( Hendrycks et al. , 2021b ; Hendrycks & Dietterich , 2019 ) . These work suggested that ViTs ’ robustness comes from the self-attention architecture that captures a globally-contextualized inductive bias than CNNs . However , though self-attention is well capable of modeling the global context , recent studies found that ImageNet-trained ViT may not take this advantage . For example , Chen et al . ( 2021b ) compared training ViT on ImageNet with and without using position embedding , and showed that spatial structure captured in position embedding contributes less than ∼3 % of ViT ’ s performance . Without using position embedding , ViT treats the image as an orderless bag-of-patches , but achieves similar performance as ViT with position embedding . This suggests that ViT unintentionally relies on local detail ( e.g. , texture ) than the shape or structure of the object . 
We hypothesize that this deficiency in robustness comes from the high-dimensional , individually informative , linear tokenization which allows ViT to minimize empirical risk without learning much spatial structure . In this paper , we propose a simple yet novel input layer for vision transformers , where image patches are represented by discrete tokens . To be specific , we discretize an image and represent an image patch as a discrete token or “ visual word ” in a codebook . Our key insight is that discrete tokens capture important features in a low-dimensional space ( Oord et al. , 2017 ) preserving shape and structure of the object ( see Figure 2 ) . Our approach capitalizes on this discrete representation to promote the robustness of ViT . Using discrete tokens drives ViT towards better modeling of spatial interactions between tokens , given that individual tokens no longer carry enough information to depend on . We also concatenate a low dimensional pixel token to the discrete token to compensate for the potentially missed local details encoded by discrete tokens , especially for small objects . Our approach only changes the image patch tokenizer to improve generalization and robustness , which is orthogonal to all existing approaches for robustness , and can be integrated into architectures that extend vision transformer . We call the ViT model using our Discrete representation the Dr. ViT . Our experiments and visualizations show that incorporating discrete tokens in ViT significantly improves generalization accuracy for all seven out-of-distribution ImageNet benchmarks : ImageNetRendition by up to 10 % , Stylized-ImageNet by up to 12 % , ImageNet-Sketch by up to 10 % , and ImageNet-C by up to 10 % . Our method establishes the new state-of-the-art by on four benchmarks without any dataset-specific data augmentation . Our work is the first to connect discrete representations to robustness and demonstrate consistent robustness improvements . 
We will release the code and models to support comparisons and future research.

2 RELATED WORK. Vision Transformer (ViT) (Dosovitskiy et al., 2020), inspired by the Transformer (Vaswani et al., 2017) in NLP, is the first CNN-free architecture to achieve state-of-the-art image classification accuracy. Since its inception, numerous works have proposed improvements to the ViT architecture (Wang et al., 2021; Chu et al., 2021; Liu et al., 2021; d'Ascoli et al., 2021), objective (Chen et al., 2021a), training strategy (Touvron et al., 2021), etc. Given the difficulty of studying all existing ViT models, this paper focuses on the classical ViT model (Dosovitskiy et al., 2020) and its recently published variants. Specifically, Steiner et al. (2021) proposed ViT-AugReg, which applies stronger data augmentation and regularization to the ViT model. Tolstikhin et al. (2021) introduced MLP-Mixer, which replaces self-attention in ViT with multi-layer perceptrons (MLPs). We select the above ViT model family (Dosovitskiy et al., 2020; Steiner et al., 2021; Tolstikhin et al., 2021) for our robustness study for three reasons. First, they include both the very first and some of the best-performing vision transformers in the literature. Second, these models demonstrated competitive performance when pre-trained on sufficiently large datasets such as ImageNet-21K and JFT-300M. Finally, unlike other Transformer models, they provide architectures consisting solely of Transformer layers as well as hybrids of CNN and Transformer layers. These properties improve our understanding of robustness for different types of network layers and datasets. Robustness. Recent works established multiple content robustness datasets to evaluate the out-of-distribution generalization of deep models (Barbu et al., 2019; Hendrycks et al., 2021b; a; Wang et al., 2019; Geirhos et al., 2019; Hendrycks & Dietterich, 2019; Recht et al., 2019).
In this paper, we consider 7 ImageNet robustness benchmarks of real-world test images (or proxies) on which deep models trained on ImageNet are shown to suffer a notable performance drop. Existing works on robustness target closing the gap on a subset of these ImageNet robustness benchmarks and have been extensively verified with CNNs. Among them, carefully designed data augmentations (Hendrycks et al., 2021a; Cubuk et al., 2018; Steiner et al., 2021; Mao et al., 2021b; a), model regularization (Wang et al., 2019; Huang et al., 2020b; Hendrycks et al., 2019), and multitask learning (Zamir et al., 2020) are effective at addressing the issue. More recently, a few studies (Paul & Chen, 2021; Bhojanapalli et al., 2021; Naseer et al., 2021; Shao et al., 2021) suggest that ViTs are more robust than CNNs. Existing works mainly focus on analyzing the cause of the ViT model's superior generalizability and exploring new data augmentations. As our work focuses on discrete token input, we train the models using the same data augmentation as the ViT-AugReg baseline (Steiner et al., 2021). While tailoring data augmentation (Mao et al., 2021b) may further improve our results, we leave it outside the scope of this paper. Discrete Representation was used as a visual representation prior to the deep learning revolution, e.g., in the bag-of-visual-words model (Sivic & Zisserman, 2003; Csurka et al., 2004) and the VLAD model (Arandjelovic & Zisserman, 2013). Recently, Oord et al. (2017) and Vahdat et al. (2018) proposed neural discrete representations to encode an image as integer tokens. Recent works used discrete representations mainly for image synthesis (Ramesh et al., 2021; Esser et al., 2021). To the best of our knowledge, our work is the first to demonstrate that discrete representations strengthen robustness. The closest work to ours is BEiT (Bao et al.
, 2021), which pretrains ViTs to predict masked tokens. However, the tokens are discarded after pretraining, so the ViT model can still overfit to non-robust nuisances in the pixel tokens at the later finetuning stage, undermining its robustness.

Figure 1: Overview of the proposed ViT using discrete representations. In addition to the pixel embeddings (orange), we introduce discrete tokens and embeddings (pink) as the input to the standard Transformer Encoder of the ViT model (Dosovitskiy et al., 2020).

3 METHOD. 3.1 PRELIMINARY ON VISION TRANSFORMER. Vision Transformer (Dosovitskiy et al., 2020) is a pure transformer architecture that operates on a sequence of image patches. The 2D image $x \in \mathbb{R}^{H \times W \times C}$ is flattened into a sequence of image patches, following the raster scan, denoted by $x_p \in \mathbb{R}^{L \times (P^2 \cdot C)}$, where $L = \frac{HW}{P^2}$ is the effective sequence length and $P^2 \cdot C$ is the dimension of each image patch. A learnable classification token $x_{class}$ is prepended to the patch sequence, then the position embedding $E_{pos}$ is added to form the final input embedding $h_0$:

$$h_0 = [x_{class};\, x_p^1 E;\, x_p^2 E;\, \cdots;\, x_p^L E] + E_{pos}, \quad E \in \mathbb{R}^{(P^2 \cdot C) \times D},\; E_{pos} \in \mathbb{R}^{(L+1) \times D} \quad (1)$$
$$h'_\ell = \mathrm{MSA}(\mathrm{LN}(h_{\ell-1})) + h_{\ell-1}, \quad \ell = 1, \dots, L_f \quad (2)$$
$$h_\ell = \mathrm{MLP}(\mathrm{LN}(h'_\ell)) + h'_\ell, \quad \ell = 1, \dots, L_f \quad (3)$$
$$y = \mathrm{LN}(h_{L_f}^0), \quad (4)$$

The architecture of ViT follows that of the Transformer (Vaswani et al., 2017), which alternates layers of multi-headed self-attention (MSA) and multi-layer perceptron (MLP), with LayerNorm (LN) and residual connections applied to every block.
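As a concrete illustration, the patch flattening and input embedding of Eq. (1) can be sketched in a few lines of numpy. The parameter names (E, E_pos, x_class) follow the text, while the toy image and embedding sizes are arbitrary assumptions, not the authors' configuration.

```python
import numpy as np

def vit_input_embedding(x, E, E_pos, x_class, P):
    """Sketch of Eq. (1): flatten an H x W x C image into L = HW/P^2
    raster-scan patches, project them with E, prepend the class token,
    and add the position embedding. Shapes are illustrative only."""
    H, W, C = x.shape
    L = (H * W) // (P * P)
    # (H//P, P, W//P, P, C) -> (H//P, W//P, P, P, C) -> (L, P*P*C)
    patches = (x.reshape(H // P, P, W // P, P, C)
                .transpose(0, 2, 1, 3, 4)
                .reshape(L, P * P * C))
    tokens = patches @ E                       # (L, D) patch embeddings
    h0 = np.vstack([x_class, tokens]) + E_pos  # (L+1, D), Eq. (1)
    return h0

# toy shapes: 8x8 RGB image, 4x4 patches -> L = 4 tokens
rng = np.random.default_rng(0)
P, D = 4, 16
x = rng.normal(size=(8, 8, 3))
E = rng.normal(size=(P * P * 3, D))
E_pos = rng.normal(size=(4 + 1, D))
x_class = rng.normal(size=(1, D))
h0 = vit_input_embedding(x, E, E_pos, x_class, P)
print(h0.shape)  # (5, 16): one class token plus L = 4 patch tokens
```

The reshape/transpose pair is one standard way to realize the raster-scan flattening described in the text.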
We denote the number of blocks as $L_f$. This paper considers the ViT model family consisting of four ViT backbones: the vanilla ViT discussed above; ViT-AugReg (Steiner et al., 2021), which shares the same ViT architecture but applies stronger data augmentation and regularization; MLP-Mixer (Tolstikhin et al., 2021), which replaces self-attention in ViT with MLPs; and a variant called Hybrid-ViT, which replaces the raw image patches in Equation 1 with CNN features extracted by a ResNet-50 (He et al., 2016).

3.2 ARCHITECTURE. Existing ViTs represent an image patch as a sequence of pixel tokens, which are linear projections of flattened image pixels. We propose a novel architectural modification to the input layer of the vision transformer, where an image patch $x_p$ is represented by a combination of two embeddings. As illustrated in Fig. 1, in addition to the original pixel-wise linear projection, we discretize an image patch into a discrete token in a codebook $V \in \mathbb{R}^{K \times d_c}$, where $K$ is the codebook size and $d_c$ is the dimension of the embedding. The discretization is achieved by a vector-quantized (VQ) encoder $p_\theta$ that produces an integer $z$ for an image patch $x$ as:

$$p_\theta(z = k \mid x) = \mathbb{1}\Big(k = \operatorname*{argmin}_{j=1:K} \|z_e(x) - V_j\|_2\Big), \quad (5)$$

where $z_e(x)$ denotes the output of the encoder network and $\mathbb{1}(\cdot)$ is the indicator function. The encoder is applied to the patch sequence $x_p \in \mathbb{R}^{L \times (P^2 \cdot C)}$ to obtain an integer sequence $z_d \in \{1, 2, \dots, K\}^L$. Afterward, we use the embeddings of both discrete and pixel tokens to construct the input embedding to the ViT model. Specifically, the input embedding in Equation 1 is replaced by:

$$h_0 = [x_{class};\, f(V_{z_d^1}, x_p^1 E);\, f(V_{z_d^2}, x_p^2 E);\, \cdots;\, f(V_{z_d^L}, x_p^L E)] + E_{pos}, \quad (6)$$

where $f$ is the function, embodied as a neural network layer, that combines the two embeddings. We empirically compared four network designs for $f$ and found that the simplest, concatenation, works best.
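Under the definitions above, a minimal numpy sketch of the nearest-codebook assignment in Eq. (5) and the concatenation variant of $f$ in Eq. (6) might look as follows. The codebook, encoder outputs, and sizes are random stand-ins, since the actual VQ encoder is a trained network.

```python
import numpy as np

def quantize(z_e, V):
    """Eq. (5): assign each encoder output z_e(x) the index of its
    nearest codebook entry in V (K x d_c) under L2 distance."""
    # d[l, k] = ||z_e[l] - V[k]||^2 for every patch l and codeword k
    d = ((z_e[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)          # integer sequence z_d in {0..K-1}

def combine(V, z_d, pixel_emb):
    """Eq. (6) with f = concatenation (the variant found to work best):
    join each patch's discrete embedding V[z_d] with its pixel embedding."""
    return np.concatenate([V[z_d], pixel_emb], axis=-1)

rng = np.random.default_rng(0)
K, d_c, L, D = 1024, 32, 4, 16
V = rng.normal(size=(K, d_c))        # codebook
z_e = rng.normal(size=(L, d_c))      # stand-in VQ-encoder outputs per patch
pixel_emb = rng.normal(size=(L, D))  # stand-in linear pixel projections
z_d = quantize(z_e, V)
h = combine(V, z_d, pixel_emb)
print(z_d.shape, h.shape)  # (4,) (4, 48)
```

Each combined token has dimension $d_c + D$, which is then fed (with the class token and position embedding) into the unchanged Transformer encoder.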
Note that our model only modifies the input layer of ViT (Equation 1) and leaves intact the remaining layers depicted in Equations 2-4.

Figure 2: Comparison of pixel tokens (top) and the reconstructed image decoded from the discrete tokens (bottom). Discrete tokens capture important shapes and structures but may lose local texture.

Comparison of pixel and discrete embeddings: Pixel and discrete embeddings represent different aspects of the input image. Discrete embeddings capture important features in a low-dimensional space (Oord et al., 2017), preserving the global structure of an object but losing local details. Fig. 2 compares the original images (top) with the reconstructed images decoded from the discrete embeddings of our model (bottom). As shown, the images decoded from discrete embeddings reasonably depict the object shape and global context. Due to the quantization, the decoder hallucinates the local textures, e.g., in the cat's eye, or the text in the "vulture" and "flamingo" images. It is worth noting that the VQ encoder/decoder is trained only on ImageNet 2012, yet it generalizes to out-of-distribution images. Please see more examples in Appendix A. On the flip side, pixel embeddings capture rich details through the linear projection from raw pixels. However, given the expressive power of transformers, ViTs can spend capacity on local textures or nuisance patterns that are often tangential to robust recognition. Since humans recognize images relying primarily on shape and semantic structure, this discrepancy with human perception undermines ViT's generalization on out-of-distribution data. Our proposed model leverages the power of both embeddings to promote the interaction between modeling global and local features.
This paper introduces Dr. ViT, which is evaluated on 7 ImageNet robustness benchmarks. The motivation of this work is to demonstrate that robustness can be enhanced by adding a discrete representation. Extensive experiments demonstrate the effectiveness of the approach.
Adaptive Q-learning for Interaction-Limited Reinforcement Learning
1 INTRODUCTION. Conventional online reinforcement learning (RL) methods (Haarnoja et al., 2018; Fujimoto et al., 2018) usually learn from experiences generated by interactions with the online environment. They are impractical in some real-world applications, e.g., dialog (Jaques et al., 2019) and education (Mandel et al., 2014), where interactions are costly. Recently, offline RL (Levine et al., 2020) has attracted much attention. It targets the above challenge by making the agent learn from an offline dataset collected by other policies in a purely data-driven manner. The difference between online RL and offline RL is shown in Figure 1. Existing offline RL studies try to address the distribution mismatch, or out-of-distribution actions, issue by employing a pessimistic update scheme (Kumar et al., 2019; 2020) or by combining it with imitation learning (Fujimoto et al., 2019). However, when the dataset is fixed, offline RL cannot learn the optimal policy (Kidambi et al., 2020); even worse, when the dataset's quality is poor, offline RL usually attains relatively bad performance (Kumar et al., 2020; Fu et al., 2020; Levine et al., 2020). On the other hand, it is challenging to evaluate the learned policy when learning entirely from the offline dataset. Even though some research topics, e.g., off-policy evaluation (Dann et al., 2014), study how to evaluate the learned policy without interaction with the environment, they are still not ideal for practical purposes. Some recent works try to address the above issues by employing an offline-online setting. Such methods (Lee et al., 2021; Nair et al., 2020) focus on pre-training a policy using the offline dataset and fine-tuning the policy through further online interactions. Even though these methods alleviate the above issues to some extent, their main bottleneck is that they do not consider the different characteristics of offline and online data.
For instance, pre-existing offline data, which are potentially diverse, can prevent agents from converging prematurely, while online data can improve stability and accelerate convergence. Generally, in these methods the different data are mixed and used with a pessimistic strategy to update the policy, which may be problematic since applying a pessimistic strategy to online data may harm policy performance (Nair et al., 2020). Moreover, with sufficiently large and diverse offline data, a high-performing policy can be learned using a pure online RL algorithm alone (Agarwal et al., 2020). Online near-on-policy data also play a key role in improving an RL algorithm's stability (Fujimoto et al., 2019). Hence, we should take full advantage of both the offline and online datasets. To tackle the above problems, in this paper we emphasize that online and offline RL should be coupled organically. First, separate updating strategies should be employed for online and offline data, respectively, considering their different characteristics. To do so, we present a framework called adaptive Q-learning that effectively integrates the advantages of offline and online learning. When learning from the offline dataset, we adopt a pessimistic update strategy. In contrast, we use a greedy, non-pessimistic update strategy when learning from the online dataset. Second, we design a novel replay buffer to distinguish offline from online data in a simple way. By utilizing this framework and buffer design, the agent can achieve an expert policy using a limited number of online interaction steps regardless of the quality of the offline dataset. In our experiments, the proposed framework achieves better performance using only one fifth the number of interactions compared with the previous method (Nair et al., 2020).
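The separate-buffer idea described above can be sketched in a few lines. This is an illustrative reading of the design, not the authors' implementation; the mixing probability `p_online` is a hypothetical knob introduced here only to show how a batch's provenance could select the pessimistic or greedy update rule.

```python
import random
from collections import deque

class AdaptiveReplay:
    """Keep offline and online transitions separable so each sampled
    batch's provenance determines the update rule (sketch only)."""
    def __init__(self, offline_data, capacity=100_000, p_online=0.5):
        self.offline = list(offline_data)   # fixed offline dataset
        self.online = deque(maxlen=capacity)  # grows during interaction
        self.p_online = p_online            # hypothetical mixing knob

    def add(self, transition):
        self.online.append(transition)      # only online data grows

    def sample(self, batch_size):
        use_online = bool(self.online) and random.random() < self.p_online
        pool = list(self.online) if use_online else self.offline
        batch = random.choices(pool, k=batch_size)  # with replacement
        return batch, ("greedy" if use_online else "pessimistic")

buf = AdaptiveReplay(offline_data=[("s", "a", 1.0, "s2")], p_online=0.0)
batch, mode = buf.sample(4)
print(mode)  # pessimistic: p_online = 0 and no online data yet
```

A trainer would then apply, e.g., a conservative value penalty when `mode == "pessimistic"` and a standard greedy backup otherwise.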
Our contributions can be summarized as follows: • We propose a unified framework called Adaptive Q-learning that can effectively benefit from both the offline dataset and a limited amount of online interaction data. • Based on the general framework, we instantiate a practical algorithm, called Greedy-conservative Q-ensemble learning (GCQL), that builds on top of state-of-the-art offline RL and online RL methods. • We empirically verify the effectiveness of our method through comprehensive experiments on the popular continuous control tasks in MuJoCo (Todorov et al., 2012), with offline datasets coming from D4RL (Fu et al., 2020).

2 RELATED WORK. Online RL In general, online RL algorithms can be divided into two categories, i.e., on-policy and off-policy algorithms. On-policy methods (Schulman et al., 2015; 2017) update the policy using data collected by its current behavior policy. Since they ignore the logged data collected by historical behavior policies, they usually have lower sample efficiency than off-policy RL. On the other hand, off-policy methods (Fujimoto et al., 2018; Chen et al., 2021) enable the policy to learn from experience collected by historical behavior policies; however, they cannot learn well from history trajectories collected by other agents' behavior policies (Fujimoto et al., 2019; Kumar et al., 2020). Consequently, the need for extensive online interaction makes online RL impractical for some real-world applications, such as dialog agents (Jaques et al., 2019) or education systems (Mandel et al., 2014). Offline RL Offline RL algorithms assume the online environment is unavailable and learn policies only from a pre-collected dataset. As the value estimation error cannot be corrected using online interactions, these methods tend to utilize a pessimistic updating strategy to relieve the distribution mismatch problem (Fujimoto et al., 2019; Kumar et al., 2019).
To implement such a strategy, model-free offline RL methods generally employ value or policy penalties to constrain the updated policy to stay close to the data-collecting policy (Wu et al., 2019; Kumar et al., 2020; Fujimoto et al., 2019; He & Hou, 2020). Model-based methods use predictive models to estimate uncertainties of states and then update the policy in a pessimistic way based on them (Kidambi et al., 2020; Yu et al., 2020). These offline RL methods cannot guarantee good performance, especially when the data quality is poor (Kumar et al., 2020). Besides, policy evaluation when the online environment is unavailable is also challenging. Even though off-policy evaluation (OPE) methods (Dann et al., 2014) present alternative solutions, they are still far from perfect. The above issues of online and offline RL motivate us to investigate the offline-online setting. Offline-online RL Lee et al. (2021) and Nair et al. (2020) focus on the mixed setting where the agent first learns from the offline dataset and is then trained online. Nair et al. (2020) propose an advantage-weighted actor-critic (AWAC) method that restricts the policy to select actions close to those in the offline data via an implicit constraint. When online interactions are available, such a conservative constraint may have adverse effects on performance. Lee et al. (2021) employ a balanced replay scheme to address the distribution shift issue. It uses the offline data by selecting only near-on-policy samples. Unlike these two works, our method utilizes all online and offline data, and explicitly considers the difference between them by adaptively applying non-conservative or conservative updating schemes, respectively. Matsushima et al. (2021) focus on optimizing deployment efficiency, i.e., the number of distinct data-collection policies used during learning, by employing a behavior-regularized policy updating strategy.
Although in terms of deployment efficiency their work sits between online and offline RL, it ignores any existing offline dataset and does not focus on improving sample efficiency; both are addressed in our paper. Some works (Zhu et al., 2019; Vecerik et al., 2017; Rajeswaran et al., 2018; Kim et al., 2013) can also learn from online interactions and offline data. However, they need expert demonstrations rather than arbitrary datasets, which limits their applicability.

3 PRELIMINARIES. In RL, the interaction between the agent and environment is usually modelled as a Markov decision process (MDP) $(\mathcal{S}, \mathcal{A}, p_M, r, \gamma)$, with state space $\mathcal{S}$ (state $s \in \mathcal{S}$) and action space $\mathcal{A}$ (action $a \in \mathcal{A}$). At each discrete time step, the agent takes an action $a$ based on the current state $s$; the state changes into $s'$ according to the transition dynamics $p_M(s' \mid s, a)$, and the agent receives a reward $r(s, a, s') \in \mathbb{R}$. The agent's objective is to maximize the return, defined as $R_t = \sum_{i=t+1}^{\infty} \gamma^i r(s_i, a_i, s_{i+1})$, where $t$ is the time step and $\gamma \in [0, 1)$ is the discount factor. The mapping from $s$ to $a$ is denoted by the stochastic policy $\pi$: $a \sim \pi(\cdot \mid s)$. A policy can be stochastic or deterministic; we use the stochastic form in this paper for generality. Each policy $\pi$ has a corresponding action-value function $Q^\pi(s, a) = \mathbb{E}_\pi[R_t \mid s, a]$, which is the expected return obtained by taking action $a$ in state $s$ and following the policy thereafter. The policy $\pi$'s action-value function can be updated by the Bellman operator $\mathcal{T}^\pi$:

$$\mathcal{T}^\pi Q(s, a) = \mathbb{E}_{s'}[r + \gamma Q(s', \pi(s'))] \quad (1)$$

Q-learning (Sutton & Barto, 2011) directly learns the optimal action-value function $Q^*(s, a) = \max_\pi Q^\pi(s, a)$, and such a Q-function can be modelled using neural networks (Mnih et al., 2015).
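For concreteness, the sample-based analogue of this backup with a greedy target, i.e., the tabular Q-learning update, can be written in one line. The toy MDP and hyperparameters below are illustrative, not drawn from the paper.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One sample-based Q-learning step: move Q(s, a) toward the greedy
    Bellman target r + gamma * max_a' Q(s', a')."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# toy MDP: 2 states, 2 actions; repeat the update on one transition
Q = np.zeros((2, 2))
for _ in range(200):
    Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
print(round(Q[0, 1], 2))  # 1.0: converges to r + gamma * 0
```

With a fixed transition and zero value at the successor state, $Q(s, a)$ contracts geometrically toward the Bellman target, which is the behavior deep Q-learning approximates with function approximation and a replay buffer.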
In principle, off-policy methods, such as Q-learning, can utilize experiences collected by any policy, and thus they usually maintain a replay buffer $\mathcal{B}$ to store and repeatedly learn from experiences collected by behavior policies (Agarwal et al., 2020). This capability also enables off-policy methods to be used in the offline setting, by storing the offline data into the buffer $\mathcal{B}$ and not updating the buffer during learning, since no further interactions are available (Levine et al., 2020). But this simple adjustment cannot guarantee that the agent achieves reasonable performance, especially when the dataset is not diverse (Kumar et al., 2020; Agarwal et al., 2020; Fujimoto et al., 2019), and this is also the problem tackled in most offline RL works. In this paper, we focus on the offline-online setting, where the agent first learns from the offline dataset and is then trained via online interactions. Unless otherwise noted, online RL methods refer to off-policy algorithms in the rest of this paper. We use only off-policy methods because they can make more use of offline data than on-policy ones, gaining high sample efficiency, and because on-policy methods are not compatible with our proposed framework introduced next.
The authors propose a mixed offline-online RL approach for which they design an algorithm. They propose to maintain 2 separate replay buffers, one for online data and one for offline data, to allow them to sample either an online or offline batch of data when doing an update step, and tailor the loss function based on the batch's provenance. In addition, the authors propose using a CQL variant which uses an ensemble of action-value functions, as done by REDQ, to learn in this setting. They conclude by showing some empirical results on the D4RL MuJoCo benchmark domains.
The paper proposes GCQL, a new RL algorithm that is trained on a mixture of offline data and online interactions. The novel component of the algorithm is a reweighting that balances acting pessimistically on offline data and greedily on online interactions. The authors propose a mixture replay buffer that consists of both offline and online samples. For online samples, the policy is trained as in the REDQ algorithm; meanwhile, for offline samples, the policy is additionally trained on a CQL-like value penalty. Finally, the authors show that GCQL outperforms existing SOTA offline RL algorithms that simply fine-tune on online data.
Adaptive Q-learning for Interaction-Limited Reinforcement Learning
1 INTRODUCTION . Conventional online reinforcement learning (RL) methods (Haarnoja et al., 2018; Fujimoto et al., 2018) usually learn from experiences generated by interactions with the online environment. They are impractical in some real-world applications, e.g., dialog (Jaques et al., 2019) and education (Mandel et al., 2014), where interactions are costly. Recently, offline RL (Levine et al., 2020) has attracted much attention. It addresses the above challenge by making the agent learn, in a purely data-driven manner, from an offline dataset collected by other policies. The difference between online RL and offline RL is shown in Figure 1. Existing offline RL studies try to tackle the distribution-mismatch or out-of-distribution-actions issue by employing a pessimistic update scheme (Kumar et al., 2019; 2020) or by combining RL with imitation learning (Fujimoto et al., 2019). However, when the dataset is fixed, offline RL cannot learn the optimal policy (Kidambi et al., 2020); even worse, when the dataset's quality is poor, offline RL usually performs rather badly (Kumar et al., 2020; Fu et al., 2020; Levine et al., 2020). On the other hand, it is challenging to evaluate a policy learned entirely from an offline dataset. Even though some research topics, e.g., off-policy evaluation (Dann et al., 2014), study how to evaluate the learned policy without interaction with the environment, they are still not ideal for practical purposes. Some recent works try to address the above issues by adopting an offline-online setting. Such methods (Lee et al., 2021; Nair et al., 2020) focus on pre-training a policy using the offline dataset and fine-tuning the policy through further online interactions. Even though these methods alleviate the above issues to some extent, their main bottleneck is that they do not consider the different characteristics of offline and online data.
For instance, a potentially diverse offline dataset can prevent the agent from converging prematurely, while online data can improve stability and accelerate convergence. In prior methods, these different data are generally mixed and used under a single pessimistic strategy to update the policy, which may be problematic since applying a pessimistic strategy to online data may harm policy performance (Nair et al., 2020). Moreover, with sufficiently large and diverse offline data, a high-performing policy can be learned using a purely online RL algorithm (Agarwal et al., 2020), and online near-on-policy data also play a key role in improving an RL algorithm's stability (Fujimoto et al., 2019). Hence, we should take full advantage of both the offline and the online data. To tackle the above problems, in this paper we argue that online and offline RL should be coupled organically. First, separate update strategies should be employed for online and offline data, respectively, considering their different characteristics. To do so, we present a framework called adaptive Q-learning that integrates the advantages of offline and online learning effectively. When learning from the offline dataset, we conduct a pessimistic update strategy; in contrast, we use a greedy, non-pessimistic update strategy when learning from the online data. Second, we design a novel replay buffer that distinguishes offline from online data in a simple way. With this framework and buffer design, the agent can achieve an expert policy using a limited number of online interaction steps regardless of the quality of the offline dataset. In our experiments, the proposed framework achieves better performance using only one fifth of the interactions required by the previous method (Nair et al., 2020).
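The adaptive update scheme just described can be illustrated with a minimal tabular sketch: offline samples receive a pessimistic (penalized) target while online samples use the standard greedy one. The `penalty` knob and the tabular setting are illustrative assumptions of ours, not the paper's actual neural-network implementation.

```python
import numpy as np

def adaptive_q_update(Q, s, a, r, s_next, *, offline, alpha=0.1,
                      gamma=0.99, penalty=1.0):
    """One TD update; offline samples get a conservative penalty.

    Tabular illustration of the adaptive scheme: greedy (standard
    Q-learning) targets for online data, pessimistic targets for
    offline data.  `penalty` is a hypothetical knob for exposition.
    """
    target = r + gamma * np.max(Q[s_next])
    if offline:
        target -= penalty  # pessimistic: deliberately underestimate
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

With `penalty=0` this reduces to ordinary Q-learning, so the scheme interpolates between greedy and conservative updates per sample.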
Our contributions can be summarized as follows:
• We propose a unified framework called adaptive Q-learning that can effectively benefit from both an offline dataset and a limited amount of online interaction data.
• Based on the general framework, we instantiate a practical algorithm, called Greedy-conservative Q-ensemble learning (GCQL), that builds on top of state-of-the-art offline and online RL methods.
• We empirically verify the effectiveness of our method through comprehensive experiments on the popular MuJoCo continuous control tasks (Todorov et al., 2012) with offline datasets from D4RL (Fu et al., 2020).
2 RELATED WORK . Online RL In general, online RL algorithms can be divided into two categories, i.e., on-policy and off-policy algorithms. On-policy methods (Schulman et al., 2015; 2017) update the policy using data collected by the current behavior policy. Since they ignore the logged data collected by past behavior policies, they usually have lower sample efficiency than off-policy RL. Off-policy methods (Fujimoto et al., 2018; Chen et al., 2021), on the other hand, enable the policy to learn from experience collected by past behavior policies; however, they cannot learn well from trajectories collected by other agents' behavior policies (Fujimoto et al., 2019; Kumar et al., 2020). Consequently, the need for extensive online interaction makes online RL impractical for some real-world applications, such as dialog agents (Jaques et al., 2019) or education systems (Mandel et al., 2014). Offline RL Offline RL algorithms assume the online environment is unavailable and learn policies only from a pre-collected dataset. As value estimation errors cannot be corrected through online interactions, these methods tend to use a pessimistic update strategy to relieve the distribution-mismatch problem (Fujimoto et al., 2019; Kumar et al., 2019).
To implement such a strategy, model-free offline RL methods generally employ value or policy penalties to constrain the updated policy to stay close to the data-collecting policy (Wu et al., 2019; Kumar et al., 2020; Fujimoto et al., 2019; He & Hou, 2020), while model-based methods use predictive models to estimate state uncertainties and then update the policy pessimistically based on them (Kidambi et al., 2020; Yu et al., 2020). These offline RL methods cannot guarantee good performance, especially when the data quality is poor (Kumar et al., 2020). Besides, policy evaluation is also challenging when the online environment is unavailable: even though off-policy evaluation (OPE) methods (Dann et al., 2014) offer alternative solutions, they are still far from perfect. The above issues of online and offline RL motivate us to investigate the offline-online setting. Offline-online RL Lee et al. (2021) and Nair et al. (2020) focus on the mixed setting where the agent first learns from the offline dataset and is then trained online. Nair et al. (2020) propose an advantage-weighted actor-critic (AWAC) method that restricts the policy to actions close to those in the offline data via an implicit constraint. When online interactions are available, such a conservative constraint may have adverse effects on performance. Lee et al. (2021) employ a balanced replay scheme to address the distribution-shift issue; it uses the offline data by selecting only near-on-policy samples. Unlike these two works, our method utilizes all online and offline data and explicitly accounts for the difference between them by adaptively applying non-conservative or conservative update schemes, respectively. Matsushima et al. (2021) focus on optimizing deployment efficiency, i.e., the number of distinct data-collection policies used during learning, by employing a behavior-regularized policy update strategy.
Although in terms of deployment efficiency their work sits between online and offline RL, it ignores existing offline datasets and does not focus on improving sample efficiency, while both are addressed in our paper. Some works (Zhu et al., 2019; Vecerik et al., 2017; Rajeswaran et al., 2018; Kim et al., 2013) can also learn from online interactions and offline data. However, they require expert demonstrations rather than arbitrary datasets, which limits their applicability. 3 PRELIMINARIES . In RL, the interaction between the agent and the environment is usually modelled as a Markov decision process (MDP) (S, A, p_M, r, γ), with state space S (state s ∈ S) and action space A (action a ∈ A). At each discrete time step, the agent takes an action a based on the current state s, the state changes into s′ according to the transition dynamics p_M(s′ | s, a), and the agent receives a reward r(s, a, s′) ∈ R. The agent's objective is to maximize the return, defined as R_t = Σ_{i=t+1}^∞ γ^i r(s_i, a_i, s_{i+1}), where t is the time step and γ ∈ [0, 1) is the discount factor. The mapping from s to a is given by the policy π; a policy can be stochastic or deterministic, and we use the stochastic form a ∼ π(·|s) in this paper for generality. Each policy π has a corresponding action-value function Q^π(s, a) = E_π[R_t | s, a], which is the expected return obtained by following the policy after taking action a in state s. The policy π's action-value function can be updated by the Bellman operator T^π: T^π Q(s, a) = E_{s′}[r + γ Q(s′, π(s′))] (1). Q-learning (Sutton & Barto, 2011) directly learns the optimal action-value function Q*(s, a) = max_π Q^π(s, a), and such a Q-function can be modelled using neural networks (Mnih et al., 2015).
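Equation (1) and the Q-learning objective above can be illustrated with a small value-iteration sketch that repeatedly applies the greedy Bellman backup to a tabular Q-function. This is a generic textbook construction for exposition, not code from the paper.

```python
import numpy as np

def q_iteration(P, R, gamma=0.9, iters=500):
    """Compute Q* by repeatedly applying the optimal Bellman operator.

    P[s, a, s'] are transition probabilities, R[s, a] expected rewards.
    Each sweep performs (T Q)(s, a) = E[r + gamma * max_a' Q(s', a')],
    the greedy counterpart of Eq. (1).
    """
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)        # max_a' Q(s', a')
        Q = R + gamma * P @ V    # expectation over next states
    return Q
```

On a two-state chain where action 1 in state 0 yields reward 1 and moves to an absorbing zero-reward state, this converges to Q*(0, 1) = 1 and Q*(0, 0) = γ · 1.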
In principle, off-policy methods such as Q-learning can utilize experiences collected by any policy, and thus they usually maintain a replay buffer B to store, and repeatedly learn from, experiences collected by behavior policies (Agarwal et al., 2020). This capability also enables off-policy methods to be used in the offline setting, by storing the offline data in the buffer B and not updating the buffer during learning, since no further interactions are available (Levine et al., 2020). But this simple adjustment cannot guarantee reasonable performance, especially when the dataset is not diverse (Kumar et al., 2020; Agarwal et al., 2020; Fujimoto et al., 2019), and this is the problem tackled in most offline RL works. In this paper, we focus on the offline-online setting, where the agent first learns from the offline dataset and is then trained via online interactions. Unless otherwise noted, online RL methods refer to off-policy algorithms in the rest of this paper. We use only off-policy methods because they can make more use of offline data than on-policy ones, yielding high sample efficiency, and because on-policy methods are not compatible with the framework introduced next.
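The buffer usage described above can be sketched as follows; the class name and the tagging scheme are our own illustration of keeping a fixed offline portion alongside a growing online portion so the learner can treat the two differently.

```python
import random

class MixtureReplayBuffer:
    """Replay buffer that keeps offline and online transitions apart.

    Hypothetical sketch: offline samples are tagged so a learner can
    apply a pessimistic update to them and a greedy one to online data.
    """
    def __init__(self, offline_data):
        self.offline = [(t, True) for t in offline_data]   # fixed set
        self.online = []                                   # grows online

    def add(self, transition):
        self.online.append((transition, False))

    def sample(self, batch_size):
        pool = self.offline + self.online
        return random.sample(pool, min(batch_size, len(pool)))
```

Sampling from the joint pool keeps the batch composition proportional to the two data sources while preserving the per-sample origin flag.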
This paper tackles a variation of the offline RL setting, where the agent is allowed a limited number of online interaction steps after learning offline. An algorithm, GCQL, is proposed for this setting; it uses the idea that online and offline data should be handled by different updates. Experiments on a common benchmark show that this approach can be more effective than standard batch RL methods, and ablation studies are also included.
NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs
1 INTRODUCTION . Representation learning tasks on knowledge graphs (KGs) often require a parameterization of each unique atom in the graph with a vector or matrix. Traditionally, in multi-relational KGs such atoms constitute the set of all nodes n ∈ N (entities) and relations (edge types) r ∈ R (Nickel et al., 2016). Assuming parameterization with vectors, atoms are mapped to d-dimensional vectors through shallow encoders f_n : n → R^d and f_r : r → R^d which scale linearly with the number of nodes and edge types, i.e., the entity embedding matrix has O(|N|) space complexity. Albeit efficient on small conventional benchmarking datasets based on Freebase (Toutanova & Chen, 2015) (~15K nodes) and WordNet (Dettmers et al., 2018) (~40K nodes), training on larger graphs (e.g., YAGO 3-10 (Mahdisoltani et al., 2015) with 120K nodes) becomes computationally challenging. Scaling further up to larger subsets (Hu et al., 2020; Wang et al., 2021; Safavi & Koutra, 2020) of Wikidata (Vrandecic & Krötzsch, 2014) requires a top-level GPU or a CPU cluster, as done in, e.g., PyTorch-BigGraph (Lerer et al., 2019), which maintains a 78M × 200d embedding matrix in memory (we list sizes of current best performing models in Table 1). Taking the perspective from NLP, shallow node encoding in KGs corresponds to the shallow word embeddings popularized by word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), which learned a vocabulary of the 400K-2M most frequent words, treating rarer ones as out-of-vocabulary (OOV). The OOV issue was resolved by the ability to build infinite combinations from a finite vocabulary enabled by subword units. Subword-powered algorithms such as fastText (Bojanowski et al., 2017), Byte-Pair Encoding (Sennrich et al.
, 2016), and WordPiece (Schuster & Nakajima, 2012) became a standard step in the preprocessing pipelines of large language models and allowed the construction of fixed-size token vocabularies, e.g., BERT (Devlin et al., 2019) contains ~30K tokens and GPT-2 (Radford et al., 2019) employs ~50K tokens. (The code is available on GitHub: https://github.com/migalkin/NodePiece. We concentrate on nodes since their number is usually orders of magnitude larger than that of edge types.) Importantly, relatively small input embedding matrices enabled investing the parameter budget into more efficient encoders (Kaplan et al., 2020). Drawing inspiration from subword embeddings in NLP, we explore how similar strategies for tokenizing entities in large graphs can dramatically reduce parameter complexity, increase generalization, and naturally represent new unseen entities using the same fixed vocabulary. To do so, tokenization has to rely on atoms akin to subword units rather than the total set of nodes. To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size vocabulary V (|V| ≪ |N|) for any connected multi-relational graph. In NodePiece, the set of atoms consists of anchors and all relation types that, together, allow constructing a combinatorial number of sequences from a limited atom vocabulary. In contrast to shallow approaches, each node n is first tokenized into a unique hash(n) of k closest anchors and m immediate relations. A key element for building a node embedding is a proper encoder function enc(n) : hash(n) → R^d, which can be designed leveraging inductive biases of the underlying graph or downstream tasks. Therefore, the overall parameter budget is now defined by a small fixed-size vocabulary of atoms and the complexity of the encoder function.
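To make the parameter-budget argument concrete, a quick back-of-the-envelope calculation contrasts the 78M × 200d shallow matrix mentioned above with a small fixed vocabulary; the 20K-anchor figure here is a hypothetical illustration, not a number from the paper.

```python
def embedding_megabytes(num_rows, dim, bytes_per_float=4):
    """Size of a dense float32 embedding matrix in megabytes."""
    return num_rows * dim * bytes_per_float / 1e6

# Shallow lookup at Wikidata scale (figures from the text):
full = embedding_megabytes(78_000_000, 200)    # 62,400 MB (~62 GB)
# NodePiece-style fixed vocabulary (anchor count is hypothetical):
compact = embedding_megabytes(20_000, 200)     # 16 MB
```

Under these assumptions the fixed vocabulary is roughly four orders of magnitude smaller, which is the budget freed up for a stronger encoder.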
Our experimental findings suggest that a fixed-size NodePiece vocabulary paired with a simple encoder still yields competitive results on a variety of tasks including link prediction, node classification, and relation prediction. Furthermore, anchor-based hashing enables conventional embedding models to work in inductive and out-of-sample scenarios where unseen entities arrive at inference time, which otherwise required tailored learning mechanisms. 2 RELATED WORK . Conventional KG embedding approaches . To the best of our knowledge, all contemporary embedding algorithms (Ji et al., 2020; Ali et al., 2020) for link prediction on KGs employ shallow embedding lookups that map each entity to a unique embedding vector, so the size of the embedding matrix is linear, O(|N|), in the total number of nodes |N|. This holds for different embedding families, e.g., translational (Sun et al., 2019), tensor factorization (Lacroix et al., 2018), convolutional (Dettmers et al., 2018), and hyperbolic (Chami et al., 2020; Balazevic et al., 2019). The same applies to relation-aware graph neural network (GNN) encoders (Schlichtkrull et al., 2018; Vashishth et al., 2020), which still initialize each node with a learned embedding or feature vector before message passing. Furthermore, shallow encoding is also used in higher-order KG structures such as hypergraphs (Fatemi et al., 2020) and hyper-relational graphs (Rosso et al., 2020; Galkin et al., 2020). NodePiece can be used as a drop-in replacement for the embedding lookup in any of those models. Distillation and compression . Several recent techniques for reducing the memory footprint of embedding matrices follow successful applications of distilling large language models in NLP (Sanh et al., 2019), i.e., distillation (Wang et al., 2020; Zhu et al., 2020) into low-dimensional counterparts and compression of trained matrices into discrete codes (Sachan, 2020).
However, all of them require a full embedding matrix as input, which we aim to avoid when designing NodePiece. Vocabulary reduction in recommender systems . Commonly, recommender systems operate on thousands of categorical features combined in sparse high-dimensional vectors. Recent approaches (Medini et al., 2021; Liang et al., 2021) employ anchor-based hashing techniques to factorize sparse feature vectors into dense embeddings. Contrary to those setups, we do not expect feature vectors to be available for arbitrary KGs and rather learn vocabulary embeddings from scratch. Entity descriptions and language models . A recent line of work such as KG-BERT (Yao et al., 2019), MLMLM (Clouâtre et al., 2021), and BLP (Daza et al., 2021) utilizes entity descriptions passed through a language model (LM) encoder as entity embeddings suitable for link prediction. We would like to emphasize that such approaches are rather orthogonal to NodePiece. Textual features are mostly available in Wikipedia-derived KGs like Wikidata but are often missing in domain-specific graphs like social networks and product graphs. We therefore assume textual features are not available and instead learn node representations based on their spatial characteristics. Still, textual features can easily be added by concatenating NodePiece-encoded features with LM-produced features. Out-of-sample representation learning . This task focuses on predictions involving previously unseen, or out-of-sample, entities that attach to a known KG with a few edges. These new edges are then utilized as context to compute the new entity's embedding. Previous work (Wang et al., 2019; Hamaguchi et al., 2017; Albooyeh et al., 2020) proposed different neighborhood aggregation functions for this process or resorted to meta-learning (Chen et al., 2019; Baek et al., 2020; Zhang et al., 2020a). However, all of them follow the shallow embedding paradigm.
Instead, NodePiece uses the new edges as a basis for anchor-based tokenization of new nodes in terms of an existing vocabulary. 3 NODEPIECE VOCABULARY CONSTRUCTION . Given a directed KG G = (N, E, R) consisting of |N| nodes, |E| edges, and |R| relation types, our task is to reduce the original vocabulary of |N| nodes to a smaller, fixed-size vocabulary of node pieces akin to subword units. In this work, we represent node pieces through anchor nodes a ∈ A, A ⊂ N, a pre-selected set of nodes in the graph chosen by a deterministic or stochastic strategy. A full NodePiece vocabulary is then constructed from anchor nodes and relation types, i.e., V = A + R. Note that in order to maintain reachability of each node and to balance in- and out-degrees, we enrich G with inverse edges carrying inverse relation types, such that |R|_inverse = |R|_direct and |R| = |R|_direct + |R|_inverse. Using the elements of the constructed vocabulary, each node n can be tokenized into hash(n), a sequence of k closest anchors, discrete anchor distances, and a relational context of m immediate relations. Then, any encoder function enc(n) : hash(n) → R^d can be applied to embed the hash into a d-dimensional vector. An intuition of the approach is presented in Fig. 1, with each step explained in more detail below. 3.1 ANCHOR SELECTION . Subword tokenization algorithms such as BPE (Sennrich et al., 2016) employ deterministic strategies to create tokens and construct a vocabulary, e.g., based on frequencies of co-occurring n-grams, such that more frequent words are tokenized with fewer subword units. On graphs, such strategies might employ centrality measures like degree centrality or Personalized PageRank (Page et al., 1999). However, in our preliminary experiments, we found random anchor selection to be as effective as centrality-based strategies.
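The two selection strategies compared here can be sketched as follows; degree centrality stands in for the centrality-based variants (PPR would be analogous), and the function name is ours.

```python
import random

def pick_anchors(adj, num_anchors, strategy="random", seed=0):
    """Select anchor nodes from adjacency dict adj[node] -> neighbours.

    Sketch of the two strategies discussed in the text: uniform random
    sampling versus taking the highest-degree nodes.
    """
    nodes = list(adj)
    if strategy == "degree":
        ranked = sorted(nodes, key=lambda n: len(adj[n]), reverse=True)
        return set(ranked[:num_anchors])
    rng = random.Random(seed)
    return set(rng.sample(nodes, num_anchors))
```

Both strategies return a fixed-size anchor set A; the text's finding is that the cheaper random choice performs on par with the centrality-based one.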
A choice of deterministic strategies might be justified when optimizing for certain task-specific topological characteristics; e.g., degree and PPR strategies indeed skew the distribution of shortest anchor distances towards smaller values, thus increasing the chances of finding anchors in the 2- or 3-hop neighborhood of any node (we provide more evidence for that in Appendix C). 3.2 NODE TOKENIZATION . Once the vocabulary V = A + R is constructed, each node n can be hashed (or tokenized) into hash(n) using 1) k nearest anchors and their discrete distances; 2) m immediate outgoing relations from the relational context of n. Since anchor nodes are concrete nodes in G, they get hashed in the same way as other, non-anchor nodes. Anchors per node . Given |A| anchor nodes, it is impractical to use all of them for encoding each node. Instead, we select k anchors per node and describe two possible strategies for that, i.e., random and deterministic. The basic random strategy uniformly samples an unordered set of k anchors from A, yielding C(|A|, k) possible combinations (the binomial coefficient). To avoid collisions when hashing the nodes, |A| and k are to be chosen according to the lower bound on possible combinations defined by the total number of nodes, i.e., C(|A|, k) ≥ |N|. Note that running depth-first search (DFS) to random anchors at inference time is inefficient; therefore, hash(n) under the random strategy has to be pre-computed. On the other hand, the deterministic strategy selects an ordered sequence of the k nearest anchors. Hence, the anchors can be obtained via breadth-first search (BFS) in the l-hop neighborhood of n at inference time (or pre-computed for speed reasons). However, the combinatorial bound does not apply to this strategy, and we need more discriminative signals to avoid hash collisions, since nearby nodes will have similar anchors (we elaborate on the uniqueness issue in Appendix K).
Such signals have to better ground anchors in the underlying graph structure, and we accomplish that using the anchor distances and relational context described below. A node residing in a disconnected component is assigned an auxiliary [DISCONNECTED] token or can be turned into an anchor. However, the majority of existing KGs are graphs with one large connected component and very few disconnected nodes, so this effect is negligible. Anchor Distances . Given a target node n and an anchor a_i, we define the anchor distance z_{a_i} ∈ [0, diameter(G)] as an integer denoting the shortest-path distance between a_i and n in the original graph G. Note that when tokenizing an anchor a_j with the deterministic strategy, the nearest anchor among the top-k is always a_j itself with distance 0. We then map each integer to a learnable d-dimensional vector f_z : z_{a_i} → R^d, akin to a relative distance encoding scheme. Relational Context . We also leverage the multi-relational nature of the underlying KG. Commonly, the number of unique edge types in G is orders of magnitude smaller than the total number of nodes, i.e., |R| ≪ |N|. This fact allows us to include the entire set R in the NodePiece vocabulary V_NP and further featurize each node with a unique relational context. We construct the relational context of a node n by randomly sampling a set of m immediate unique outgoing relations starting from n, i.e., r_conn = {r_j}^m ⊆ N_r(n), where N_r(n) denotes all outgoing relation types. Due to the non-uniform degree distribution, if |N_r(n)| < m, we add auxiliary [PAD] tokens to complete r_conn to size m.
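The deterministic tokenization just described (k nearest anchors with BFS distances, plus an m-relation context with [PAD] tokens) can be sketched as follows; the data-structure choices (adjacency dict, relation-type sets) are our simplifying assumptions, not the paper's implementation.

```python
from collections import deque
import random

def tokenize_node(adj, rel_types, anchors, n, k=3, m=2, seed=0):
    """Hash node n into (k nearest anchors with distances, m relations).

    Sketch of the deterministic strategy: BFS from n over an adjacency
    dict adj[node] -> list of neighbours; rel_types[node] is the set of
    outgoing relation types of that node.  Padding is simplified.
    """
    dist, frontier, found = {n: 0}, deque([n]), []
    while frontier and len(found) < k:
        u = frontier.popleft()          # BFS pops in distance order
        if u in anchors:
            found.append((u, dist[u]))  # anchor and its shortest path
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    rng = random.Random(seed)
    rels = sorted(rel_types.get(n, set()))
    context = rng.sample(rels, min(m, len(rels)))
    context += ["[PAD]"] * max(0, m - len(rels))
    return found, context
```

Because BFS visits nodes in nondecreasing distance order, `found` lists the nearest anchors first, matching the ordered deterministic hash; an anchor tokenizing itself appears with distance 0.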
This paper presents NodePiece, a method to scale up KG representation models by removing their dependence on individual node embeddings, which grow linearly in number and are inefficient for huge graphs. The method uses a set of anchor nodes, picked from the graph itself, for the NodePiece representations. Concretely, a NodePiece embedding is derived from the embeddings of the closest anchor nodes and the hop-based distances to these nodes, which are also represented by vectors (the maximum distance between two nodes in a graph is diameter(G), so the number of distance/position embeddings is diameter(G)). Additionally, NodePiece is augmented by a sample of embeddings of the relation types the node is involved in. Apart from the induced efficiency, NodePiece can be especially useful in inductive learning, where an unseen node's embedding can be created through the anchor nodes.
NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs
1 INTRODUCTION . Representation learning tasks on knowledge graphs ( KGs ) often require a parameterization of each unique atom in the graph with a vector or matrix . Traditionally , in multi-relational KGs such atoms constitute a set of all nodes n ∈ N ( entities ) and relations ( edge types ) r ∈ R ( Nickel et al. , 2016 ) . Assuming parameterization with vectors , atoms are mapped to d-dimensional vectors through shallow encoders fn : n → Rd and fr : r → Rd which scale linearly to the number of nodes and edge types2 , i.e. , having O ( |N | ) space complexity of the entity embedding matrix . Albeit efficient on small conventional benchmarking datasets based on Freebase ( Toutanova & Chen , 2015 ) ( ~15K nodes ) and WordNet ( Dettmers et al. , 2018 ) ( ~40K nodes ) , training on larger graphs ( e.g. , YAGO 3-10 ( Mahdisoltani et al. , 2015 ) of 120K nodes ) becomes computationally challenging . Scaling it further up to larger subsets ( Hu et al. , 2020 ; Wang et al. , 2021 ; Safavi & Koutra , 2020 ) of Wikidata ( Vrandecic & Krötzsch , 2014 ) requires a top-level GPU or a CPU cluster as done in , e.g. , PyTorch-BigGraph ( Lerer et al. , 2019 ) that maintains a 78M × 200d embeddings matrix in memory ( we list sizes of current best performing models in Table 1 ) . Taking the perspective from NLP , shallow node encoding in KGs corresponds to shallow word embedding popularized with word2vec ( Mikolov et al. , 2013 ) and GloVe ( Pennington et al. , 2014 ) that learned a vocabulary of 400K-2M most frequent words , treating rarer ones as out-of-vocabulary ( OOV ) . The OOV issue was resolved with the ability to build infinite combinations with a finite vocabulary enabled by subword units . Subword-powered algorithms such as fastText ( Bojanowski et al. , 2017 ) , Byte-Pair Encoding ( Sennrich et al. 
, 2016 ) , and WordPiece ( Schuster & Nakajima , 2012 ) became a standard step in preprocessing pipelines of large language models and allowed to construct fixed-size token vocabularies , e.g. , BERT ( Devlin et al. , 2019 ) contains ~30K tokens and 1The code is available on GitHub : https : //github.com/migalkin/NodePiece 2We then concentrate on nodes as usually their size is orders of magnitude larger than that of edge types . GPT-2 ( Radford et al. , 2019 ) employs ~50K tokens . Importantly , relatively small input embedding matrices enabled investing the parameters budget into more efficient encoders ( Kaplan et al. , 2020 ) . Drawing inspiration from subword embeddings in NLP , we explore how similar strategies for tokenizing entities in large graphs can dramatically reduce parameter complexity , increase generalization , and naturally represent new unseen entities as using the same fixed vocabulary . To do so , tokenization has to rely on atoms akin to subword units and not the total set of nodes . To this end , we propose NodePiece , an anchor-based approach to learn a fixed-size vocabulary V ( |V | |N | ) of any connected multi-relational graph . In NodePiece , the set of atoms consists of anchors and all relation types that , together , allow to construct a combinatorial number of sequences from a limited atoms vocabulary . In contrast to shallow approaches , each node n is first tokenized into a unique hash ( n ) of k closest anchors and m immediate relations . A key element to build a node embedding is a proper encoder function enc ( n ) : hash ( n ) → Rd which can be designed leveraging inductive biases of an underlying graph or downstream tasks . Therefore , the overall parameter budget is now defined by a small fixed-size vocabulary of atoms and the complexity of the encoder function . 
Our experimental findings suggest that a fixed-size NodePiece vocabulary paired with a simple encoder still yields competitive results on a variety of tasks including link prediction , node classification , and relation prediction . Furthermore , anchor-based hashing enables conventional embedding models to work in the inductive and out-of-sample scenarios when unseen entities arrive at inference time , which otherwise required tailored learning mechanisms . 2 RELATED WORK . Conventional KG embedding approaches . To the best of our knowledge , all contemporary embedding algorithms ( Ji et al. , 2020 ; Ali et al. , 2020 ) for link prediction on KGs employ shallow embedding lookups mapping each entity to a unique embedding vector thus being linear O ( |N | ) to the total number of nodes |N | and size of an embedding matrix . This holds for different embedding families , e.g. , translational ( Sun et al. , 2019 ) , tensor factorization ( Lacroix et al. , 2018 ) , convolutional ( Dettmers et al. , 2018 ) , and hyperbolic ( Chami et al. , 2020 ; Balazevic et al. , 2019 ) . The same applies to relation-aware graph neural network ( GNN ) encoders ( Schlichtkrull et al. , 2018 ; Vashishth et al. , 2020 ) who still initialize each node with a learned embedding or feature vector before message passing . Furthermore , shallow encoding is also used in higher-order KG structures such as hypergraphs ( Fatemi et al. , 2020 ) and hyper-relational graphs ( Rosso et al. , 2020 ; Galkin et al. , 2020 ) . NodePiece can be used as a drop-in replacement of the embedding lookup with any of those models . Distillation and compression . Several recent techniques for reducing memory footprint of embedding matrices follow successful applications of distilling large language models in NLP ( Sanh et al. , 2019 ) , i.e. , distillation ( Wang et al. , 2020 ; Zhu et al. , 2020 ) into low-dimensional counterparts , and compression of trained matrices into discrete codes ( Sachan , 2020 ) . 
However , all of them require a full embedding matrix as input , which we aim to avoid when designing NodePiece . Vocabulary reduction in recommender systems . Commonly , recommender systems operate on thousands of categorical features combined into sparse high-dimensional vectors . Recent approaches ( Medini et al. , 2021 ; Liang et al. , 2021 ) employ anchor-based hashing techniques to factorize sparse feature vectors into dense embeddings . Contrary to those setups , we do not expect feature vectors to be available for arbitrary KGs and rather learn vocabulary embeddings from scratch . Entity descriptions and language models . A recent line of work , including KG-BERT ( Yao et al. , 2019 ) , MLMLM ( Clouâtre et al. , 2021 ) , and BLP ( Daza et al. , 2021 ) , utilizes entity descriptions passed through a language model ( LM ) encoder as entity embeddings suitable for link prediction . We would like to emphasize that such approaches are orthogonal to NodePiece . Textual features are mostly available in Wikipedia-derived KGs like Wikidata but are often missing in domain-specific graphs like social networks and product graphs . We therefore assume textual features are not available and rather learn node representations based on their spatial characteristics . Still , textual features can easily be added by concatenating NodePiece-encoded features with LM-produced features . Out-of-sample representation learning . This task focuses on predictions involving previously unseen , or out-of-sample , entities that attach to a known KG with a few edges . These new edges are then utilized as context to compute their embeddings . Previous work ( Wang et al. , 2019 ; Hamaguchi et al. , 2017 ; Albooyeh et al. , 2020 ) proposed different neighborhood aggregation functions for this process or resorted to meta-learning ( Chen et al. , 2019 ; Baek et al. , 2020 ; Zhang et al. , 2020a ) . However , all of them follow the shallow embedding paradigm .
Instead , NodePiece uses the new edges as a basis for anchor-based tokenization of new nodes in terms of an existing vocabulary . 3 NODEPIECE VOCABULARY CONSTRUCTION . Given a directed KG G = ( N , E , R ) consisting of |N | nodes , |E| edges , and |R| relation types , our task is to reduce the original vocabulary size of |N | nodes to a smaller , fixed-size vocabulary of node pieces akin to subword units . In this work , we represent node pieces through anchor nodes a ∈ A , A ⊂ N , a pre-selected set of nodes in the graph chosen by a deterministic or stochastic strategy . A full NodePiece vocabulary is then constructed from anchor nodes and relation types , i.e. , V = A ∪ R . Note that in order to maintain reachability of each node and to balance in- and out-degrees , we enrich G with inverse edges carrying inverse relation types , such that |R_inverse| = |R_direct| and |R| = |R_direct| + |R_inverse| . Using elements of the constructed vocabulary , each node n can be tokenized into hash ( n ) as a sequence of k closest anchors , discrete anchor distances , and a relational context of m immediate relations . Then , any encoder function enc ( n ) : hash ( n ) → R^d can be applied to embed the hash into a d-dimensional vector . An intuition of the approach is presented in Fig . 1 , with each step explained in more detail below . 3.1 ANCHOR SELECTION . Subword tokenization algorithms such as BPE ( Sennrich et al. , 2016 ) employ deterministic strategies to create tokens and construct a vocabulary , e.g. , based on frequencies of co-occurring n-grams , such that more frequent words are tokenized with fewer subword units . On graphs , such strategies might employ centrality measures like degree centrality or Personalized PageRank ( Page et al. , 1999 ) . However , in our preliminary experiments , we found random anchor selection to be as effective as centrality-based strategies .
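The selection step can be sketched in a few lines; the function name, signature, and the degree-based alternative below are illustrative assumptions, not taken from the released code:

```python
import random

def select_anchors(nodes, num_anchors, strategy="random", degrees=None, seed=0):
    """Pick the anchor set A ⊂ N. Random selection was found to be as
    effective as centrality-based strategies; 'degree' is a deterministic
    alternative that keeps the highest-degree nodes."""
    nodes = sorted(nodes)
    if strategy == "random":
        return set(random.Random(seed).sample(nodes, num_anchors))
    if strategy == "degree":
        return set(sorted(nodes, key=lambda n: -degrees[n])[:num_anchors])
    raise ValueError(f"unknown strategy: {strategy}")
```

Fixing the seed keeps the anchor set reproducible across training and inference, which matters because hashes are defined relative to A.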
Deterministic strategies might still be justified when optimizing for certain task-specific topological characteristics , e.g. , degree and PPR strategies indeed skew the distribution of shortest anchor distances towards smaller values , thus increasing the chances of finding anchors in the 2- or 3-hop neighborhood of any node ( we provide more evidence for that in Appendix C ) . 3.2 NODE TOKENIZATION . Once the vocabulary V = A ∪ R is constructed , each node n can be hashed ( or tokenized ) into hash ( n ) using 1 ) k nearest anchors and their discrete distances ; 2 ) m immediate outgoing relations from the relational context of n . Since anchor nodes are concrete nodes in G , they are hashed in the same way as other , non-anchor nodes . Anchors per node . Given |A| anchor nodes , it is impractical to use all of them for encoding each node . Instead , we select k anchors per node and describe two possible strategies for that , i.e. , random and deterministic . The basic random strategy uniformly samples an unordered set of k anchors from A , yielding C ( |A| , k ) possible combinations . To avoid collisions when hashing the nodes , |A| and k have to be chosen according to the lower bound on possible combinations defined by the total number of nodes , e.g. , C ( |A| , k ) ≥ |N | . Note that running depth-first search ( DFS ) to random anchors at inference time is inefficient , and therefore hash ( n ) of the random strategy has to be pre-computed . On the other hand , the deterministic strategy selects an ordered sequence of k nearest anchors . Hence , the anchors can be obtained via breadth-first search ( BFS ) in the l-hop neighborhood of n at inference time ( or pre-computed for speed ) . However , the combinatorial bound is not applicable in this strategy and we need more discriminative signals to avoid hash collisions , since nearby nodes will have similar anchors ( we elaborate on the uniqueness issue in Appendix K ) .
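The combinatorial lower bound for the random strategy can be checked numerically. The helper below is a hypothetical utility (not from the paper's code) that finds the smallest anchor vocabulary satisfying C(|A|, k) ≥ |N| for a given k:

```python
from math import comb

def min_anchor_vocab(num_nodes: int, k: int) -> int:
    """Smallest |A| such that C(|A|, k) >= |N|, i.e. unordered k-subsets
    of anchors can in principle give every node a distinct hash."""
    a = k
    while comb(a, k) < num_nodes:
        a += 1
    return a

# Even tiny vocabularies suffice combinatorially: for a 40K-node graph
# with k = 10 anchors per node, 18 anchors already clear the bound.
```

Of course this is only a necessary condition; the bound says nothing about whether the sampled subsets actually end up distinct.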
Such signals have to better ground anchors in the underlying graph structure , and we accomplish that using the anchor distances and relational context described below . A node residing in a disconnected component is assigned an auxiliary [ DISCONNECTED ] token or can be turned into an anchor . However , the majority of existing KGs have one large connected component with very few disconnected nodes , such that this effect is negligible . Anchor Distances . Given a target node n and an anchor a_i , we define the anchor distance z_{a_i} ∈ [ 0 , diameter ( G ) ] as an integer denoting the shortest path distance between a_i and n in the original graph G . Note that when tokenizing an anchor a_j with the deterministic strategy , the nearest anchor among the top-k is always a_j itself with distance 0 . We then map each integer to a learnable d-dimensional vector f_z : z_{a_i} → R^d , akin to a relative distance encoding scheme . Relational Context . We also leverage the multi-relational nature of the underlying KG . Commonly , the number of unique edge types in G is orders of magnitude smaller than the total number of nodes , i.e. , |R| ≪ |N | . This fact allows including the entire set R in the NodePiece vocabulary V and further featurizing each node with a unique relational context . We construct the relational context of a node n by randomly sampling a set of m immediate unique outgoing relations starting from n , i.e. , r_conn = { r_j }^m ⊆ N_r ( n ) , where N_r ( n ) denotes all outgoing relation types . Due to a non-uniform degree distribution , if |N_r ( n ) | < m , we add auxiliary [ PAD ] tokens to complete r_conn to size m .
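Putting the pieces together, deterministic tokenization can be sketched as a BFS over an adjacency dict. Names, the dict-based graph representation, and the id-based tie-breaking are illustrative assumptions; the released code may differ:

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from `source` in an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def tokenize(node, adj, out_rels, anchors, k, m, pad="[PAD]"):
    """hash(n): the k nearest anchors with their discrete distances, plus up
    to m outgoing relation types padded with [PAD] (ties broken by node id)."""
    dist = bfs_distances(adj, node)
    nearest = sorted((dist[a], a) for a in anchors if a in dist)[:k]
    rels = sorted(out_rels.get(node, set()))[:m]
    return [(a, d) for d, a in nearest], rels + [pad] * (m - len(rels))
```

When `node` is itself an anchor, the BFS returns distance 0 to it, matching the remark about tokenizing anchors above.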
This paper presents NodePiece, a method inspired by subword embeddings in NLP that is designed for constructing compositional representations of entities in a knowledge graph using a fixed-size entity and relation vocabulary. This allows for learning and storing entity representations with a number of parameters that does not scale with the size of the knowledge graph, as well as being able to construct representations for unseen entities. The method uses a vocabulary that combines relation embeddings for all relations in the knowledge graph with a limited set of anchor entities, typically much fewer than the total number of entities. To construct an entity representation, a subset of that entity's nearest neighbor anchor entities (along with embeddings of the lengths of the shortest paths to those entities) are combined with a sampled set of adjacent relation embeddings and passed through an encoder (either an MLP or a Transformer) to output a fixed-size embedding. These compositional entity representations can then be used in classical knowledge graph tasks such as link prediction, relation prediction, and node classification. Across all of these tasks and a range of datasets, NodePiece representations are shown to achieve a large fraction of the performance of strong baselines and in some cases even outperform them, all while using a fraction of their parameter count. Ablation analyses show that increasing the total number of anchors and number of anchors per entity improves performance up to a saturation point, and that in some cases a vocabulary of relations without anchor entities is sufficient (or even preferable) for constructing well-performing entity representations for certain downstream tasks.
SP:fe000f810454388d43212a0128b4f1ca29de447a
NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs
1 INTRODUCTION . Representation learning tasks on knowledge graphs ( KGs ) often require a parameterization of each unique atom in the graph with a vector or matrix . Traditionally , in multi-relational KGs such atoms constitute the set of all nodes n ∈ N ( entities ) and relations ( edge types ) r ∈ R ( Nickel et al. , 2016 ) . Assuming parameterization with vectors , atoms are mapped to d-dimensional vectors through shallow encoders f_n : n → R^d and f_r : r → R^d , which scale linearly with the number of nodes and edge types , i.e. , the entity embedding matrix has O ( |N | ) space complexity . Albeit efficient on small conventional benchmarking datasets based on Freebase ( Toutanova & Chen , 2015 ) ( ~15K nodes ) and WordNet ( Dettmers et al. , 2018 ) ( ~40K nodes ) , training on larger graphs ( e.g. , YAGO 3-10 ( Mahdisoltani et al. , 2015 ) with 120K nodes ) becomes computationally challenging . Scaling further to larger subsets ( Hu et al. , 2020 ; Wang et al. , 2021 ; Safavi & Koutra , 2020 ) of Wikidata ( Vrandecic & Krötzsch , 2014 ) requires a top-level GPU or a CPU cluster as done in , e.g. , PyTorch-BigGraph ( Lerer et al. , 2019 ) , which maintains a 78M × 200d embedding matrix in memory ( we list sizes of current best performing models in Table 1 ) . Taking the perspective from NLP , shallow node encoding in KGs corresponds to the shallow word embeddings popularized by word2vec ( Mikolov et al. , 2013 ) and GloVe ( Pennington et al. , 2014 ) , which learned a vocabulary of the 400K-2M most frequent words , treating rarer ones as out-of-vocabulary ( OOV ) . The OOV issue was resolved by subword units , which enable building unbounded combinations from a finite vocabulary . Subword-powered algorithms such as fastText ( Bojanowski et al. , 2017 ) , Byte-Pair Encoding ( Sennrich et al. , 2016 ) , and WordPiece ( Schuster & Nakajima , 2012 ) became a standard step in preprocessing pipelines of large language models .
Conventional methods for learning knowledge graph embeddings often learn separate embeddings for each vertex in a knowledge graph. This paper presents empirical results about a new heuristic for encoding the vertices in a knowledge graph which drastically reduces the number of parameters in the model, and allows better handling of novel vertices whose connections were not known at training time. The basic approach is to encode each vertex through a sketch of its neighborhood. Specifically, the paper heuristically selects a subset of training vertices, called "anchors". Each vertex is represented by its closest k anchor vertices, and the types of the outgoing edges starting from the vertex. Individual embedding vectors are learnt only for the anchor vertices, and at inference time, the embedding of a novel vertex is composed from the closest k anchor vertex embeddings using a neural network layer such as an MLP or a transformer.
Practical and Private Heterogeneous Federated Learning
1 INTRODUCTION . Heterogeneous federated learning ( HFL ) ( Li & Wang , 2019 ; Chang et al. , 2019 ) , a promising variant of federated learning ( FL ) , enables clients equipped with different computation and communication capabilities to collaboratively train their own customized models , which may differ in size , numerical precision or structure ( Lin et al. , 2020 ) . In particular , clients share the knowledge of models via their predictions on auxiliary datasets , such as unlabeled problem domain datasets ( Choquette-Choo et al. , 2021 ) and public non-problem domain datasets ( Li & Wang , 2019 ; Lin et al. , 2020 ) . This flexible approach facilitates customized FL-driven services in areas like Healthcare and Finance ( Kairouz et al. , 2019 ) , while addressing the intellectual property concerns of FL models ( Atli et al. , 2020 ) . However , HFL suffers from two major limitations : ( 1 ) The assumption of auxiliary datasets may be unrealistic for many data-critical scenarios ( Zhu et al. , 2021 ) . For example , in Healthcare applications , task-related auxiliary datasets that contain patients ' sensitive information are usually difficult to obtain due to strict regulations like the General Data Protection Regulation . ( 2 ) Sharing predictions may still leak the privacy of local data ( Papernot et al. , 2017 ) . Several works have demonstrated that , given black-box access to a trained model , adversaries can infer membership ( Salem et al. , 2019 ) and attribute information ( Ganju et al. , 2018 ) of the target sample , and can even reconstruct the original training data ( Yang et al. , 2019 ) . Therefore , to promote the deployment of HFL in real-world applications , it is crucial to solve these two problems . To the best of our knowledge , the relaxation of the auxiliary dataset assumption in HFL has not been explored before .
Specifically , it is challenging to achieve collaborative training under heterogeneous models when there is no auxiliary dataset to serve as a medium for model knowledge transfer ( Li & Wang , 2019 ) . On the other hand , to mitigate the above privacy risks , a natural solution is to integrate advanced secure prediction protocols , such as CrypTFlow2 ( Rathee et al. , 2020 ) , CryptGPU ( Tan et al. , 2021 ) , and HE-transformer ( Boemer et al. , 2019b ) . These schemes can protect private information during model knowledge transfer by utilizing homomorphic encryption ( HE ) ( Gentry , 2009 ) , garbled circuits ( GC ) ( Yao , 1986 ) or oblivious transfer ( OT ) ( Naor & Pinkas , 2001 ) techniques ( refer to Appendix D.2 for more details ) . Unfortunately , such methods incur huge computation and communication overhead due to the use of heavy cryptographic primitives . For instance , Choquette-Choo et al . ( 2021 ) recently proposed CaPC , the first private collaborative learning scheme based on HE-transformer ( Boemer et al. , 2019b ) supporting heterogeneous models , which can be directly extended to HFL . As mentioned above , this work still suffers from efficiency issues inherited from prior secure prediction protocols . Moreover , their scheme is implemented via direct interaction between clients ; however , in real-world applications , clients ( e.g. , mobile devices ) generally can not establish direct communication channels with each other ( Bonawitz et al. , 2017 ) . Therefore , the challenge here is how to efficiently implement secure prediction protocols in the realistic HFL setting . In this work , to approach the above challenges , we develop PrivHFL , a general and practical framework for privacy-preserving HFL . First , PrivHFL relaxes the assumption of dependence on auxiliary datasets and designs a simple but effective dataset expansion method using only clients ' private datasets . To this end , we instantiate it by leveraging mixup ( Zhang et al.
, 2018 ) , originally a regularization technique to improve generalization , and also present some exploration of other data augmentation methods like cutout ( DeVries & Taylor , 2017 ) and cutmix ( Yun et al. , 2019 ) . The key idea is that the expanded data could provide good coverage of natural dataset distributions and hence could be used as an effective medium for transferring model knowledge . Second , to securely and efficiently evaluate HFL , we leverage the lightweight additive secret sharing technique ( Demmler et al. , 2015 ) to construct customized secure prediction protocols from scratch in a practical setting where there is no direct communication between clients . Our gains mainly come from improvements in communication and computation through the elimination of costly HE and GC protocols . Moreover , in contrast to prior works that evaluate cryptographic protocols on CPUs , PrivHFL converts complex cryptographic operations into simple computations on large blocks of data , which are GPU-friendly and can be processed by highly-optimized CUDA kernels ( Tan et al. , 2021 ) . As a result , PrivHFL is suitable for batch prediction ( i.e. , performing multiple predictions at the same time ) with lower amortized cost . We evaluate the designed protocol on GPUs and CPUs , and the results show that our GPU-based protocol is up to 10× faster than its CPU analog . Our key contributions can be summarized as follows : • We introduce a practical HFL framework , which is independent of any auxiliary datasets while provably providing comprehensive privacy protection . • We design a simple yet effective dataset expansion method to promote the sharing of model knowledge , and construct customized cryptographic protocols for secure prediction .
• Extensive experiments on SVHN , CIFAR10 , and Tiny ImageNet ( including IID and Non-IID settings ) with various heterogeneous models demonstrate that PrivHFL outperforms prior art by up to two orders of magnitude in efficiency and achieves roughly 10 % accuracy gains . 2 BACKGROUND . Before introducing PrivHFL , we first describe heterogeneous federated learning and the threat model , and then review the cryptographic primitives required to understand our work . 2.1 HETEROGENEOUS FEDERATED LEARNING . In HFL ( Li & Wang , 2019 ; Choquette-Choo et al. , 2021 ) , the clients independently design their own unique models , but due to the model heterogeneity , they can not directly share model parameters with each other . Instead , they learn the knowledge of other models via predictions on a task-related auxiliary dataset , where a server routes messages between the clients since they generally can not establish direct communication channels with each other ( Bonawitz et al. , 2017 ; Bell et al. , 2020 ) . Specifically , clients first train local models with their own private datasets . Then , each client performs prediction on the auxiliary dataset with its local model and sends the prediction results to the server for aggregation . Later , the server broadcasts the aggregated results to the clients , which retrain their local models based on the auxiliary dataset and the received predictions . The whole process iterates until each local model meets the pre-defined accuracy requirement . ( Footnote 1 : As shown in C.5 , by carefully designing protocols , CaPC can be extended to the communication-limited setting , but at the cost of increased communication overhead . ) Details can be found in Figure 9 and Appendix A . 2.2 THREAT MODEL .
We work in the honest-but-curious adversary setting ( Goldreich , 2009 ) , where each entity ( including the clients and the server ) strictly follows the specification of the designed protocol but attempts to infer more knowledge about other clients ' private information , such as model parameters and private datasets . Moreover , to maintain its reputation and provide more services , the server does not collude with any clients . Formally , an attacker either corrupts the server or a subset of clients , but not both . This setting is reasonable and has been widely instantiated in previous works ( Phong et al. , 2018 ; Sun & Lyu , 2021 ; Choquette-Choo et al. , 2021 ) . 2.3 CRYPTOGRAPHIC PRIMITIVES . Additive secret sharing . We adopt lightweight 2-out-of-2 additive secret sharing over the ring Z_L ( Demmler et al. , 2015 ) as the cryptographic building block . We let Share ( x ) denote the sharing algorithm that takes as input x in Z_L and outputs randomly sampled shares [ x ]_0 , [ x ]_1 with the constraint x = [ x ]_0 + [ x ]_1 in Z_L . Arithmetic operations can be implemented in the sharing form as shown in Appendix C.4.1 . The reconstruction algorithm Recon ( [ x ]_0 , [ x ]_1 ) takes as input the two shares and outputs x = [ x ]_0 + [ x ]_1 in Z_L . The security of the additive secret sharing protocol guarantees that given a single share [ x ]_0 or [ x ]_1 , the value x is perfectly hidden . Pseudorandom generator . A pseudorandom generator ( PRG ) takes as input a uniformly random seed and a security parameter κ , and outputs a long pseudorandom string . The security of the PRG ensures that the output is indistinguishable from the uniform distribution . In PrivHFL , PRGs enable two parties to generate the same ( pseudo- ) random numbers without communication . We instantiate the PRG with the technique from ( Matyas , 1985 ) , and the seed can be generated by the Diffie-Hellman key agreement protocol ( Diffie & Hellman , 1976 ) . Details can be found in Appendix C.4.2 .
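The Share/Recon primitives over Z_L, together with share-wise addition, can be sketched in a few lines. This is a didactic single-process sketch under the assumption L = 2^32; a real deployment runs the two share-holders on separate machines:

```python
import secrets

L = 2 ** 32  # the ring Z_L

def share(x):
    """2-out-of-2 additive sharing: x = [x]_0 + [x]_1 (mod L).
    Each share alone is uniformly random, so x is perfectly hidden."""
    s0 = secrets.randbelow(L)
    return s0, (x - s0) % L

def recon(s0, s1):
    """Reconstruct the secret from both shares."""
    return (s0 + s1) % L

def add_shares(a, b):
    """Addition of secret values works share-wise, with no communication."""
    return (a[0] + b[0]) % L, (a[1] + b[1]) % L
```

Multiplication, by contrast, requires interaction (e.g. Beaver triples), which is where the customized protocols mentioned above come in.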
3 THE PRIVHFL PROTOCOL. In this section, we introduce the high-level view of PrivHFL, followed by a detailed description of our dataset expansion method and secure prediction scheme.

3.1 HIGH-LEVEL VIEW OF PRIVHFL.

[Figure 1: High-level view of PrivHFL. Each iteration consists of local training, query-data generation (expanding private data into a synthetic pool and sampling query data), secure querying of the heterogeneous models, and re-training of the local model.]

PrivHFL follows prior HFL works (Li & Wang, 2019) and iteratively optimizes the clients' models. Each client in PrivHFL can play the roles of the querying party and the answering party at the same time; without loss of generality, we denote these roles as PQ and PA, respectively. As shown in Figure 1, in each iteration each PQ performs four phases of operations with the other PAs: local training, query-data generation, secure querying, and re-training. In detail, clients first train their local models on their own private datasets, which serves as the baseline any future improvements will be compared with. After that, by utilizing our dataset expansion method, each PQ generates query data with which it queries the other PAs (a fraction C of all clients) for prediction results. To protect the privacy of the query samples, predictions and model parameters, clients adopt our secure prediction protocol and conduct collaborative querying in ciphertext form. In the end, each client retrains its local model based on the private dataset, as well as the query samples and the corresponding prediction results. Algorithm 1 in Appendix C.1 gives a detailed description of PrivHFL.

3.2 QUERY-DATA GENERATION. To relax the assumption of auxiliary datasets, we design and instantiate an effective dataset expansion method, inspired by the success of mixup (Zhang et al., 2018) in improving model generalization and the efficiency of knowledge distillation (Wang et al., 2020).
Specifically, we repurpose mixup to construct a large synthesized pool from the small private dataset, which can provide good coverage of the manifold of natural samples. Given any two private samples x_i and x_j, we generate multiple synthetic query samples by convex combination with different coefficients λ:

x̃_{i,j}(λ) = λ · x_i + (1 − λ) · x_j. (1)

Empirically, we set λ ∈ [0.1, 0.9] with an interval of 0.1 to generate more diverse synthetic images. We also explore the influence of different λ values in Appendix B.3 and Table 5. This simple method can exponentially expand the size of the initial dataset and hence provide more candidate samples for querying. Following CaPC (Choquette-Choo et al., 2021), we use the random sampling and active learning strategies (Tong & Koller, 2001) in Appendix C.3 to select informative samples from the synthesized pool. Note that an alternative solution is to query directly with the private datasets, as in knowledge distillation (Hinton et al., 2015); we compare against this method in Section 4.

Extensions of the mixup-based method. Dataset expansion based on private samples is a universal and modular method; therefore, it can be readily extended with techniques from the data augmentation literature. For example, one may replace the mixup-based dataset expansion in PrivHFL with recent methods such as cutout (DeVries & Taylor, 2017) and cutmix (Yun et al., 2019). We present some exploration and experiments with these extensions in Appendix B.1 and Figure 11.
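The expansion in Eq. (1) with the λ grid described above can be sketched as follows (the function name, the random-pairing strategy, and `num_pairs` are our illustrative choices; the paper's actual sample selection strategies are given in Appendix C.3):

```python
import numpy as np

def expand_dataset(samples, lambdas=None, num_pairs=100, seed=0):
    """Mixup-style expansion: convex combinations of random private pairs.

    Implements x~_{i,j}(lam) = lam * x_i + (1 - lam) * x_j, with lam on
    a grid over [0.1, 0.9] at 0.1 intervals as described in the text.
    """
    if lambdas is None:
        lambdas = np.arange(0.1, 1.0, 0.1)  # 0.1, 0.2, ..., 0.9
    rng = np.random.default_rng(seed)
    n = len(samples)
    pool = []
    for _ in range(num_pairs):
        i, j = rng.choice(n, size=2, replace=False)  # a random private pair
        for lam in lambdas:
            pool.append(lam * samples[i] + (1.0 - lam) * samples[j])
    return np.stack(pool)
```

Each sampled pair yields nine synthetic samples, so even a small private dataset produces a large candidate pool for querying.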
PrivHFL is a protocol for collaborative learning that does not reveal private data to the server or the answering parties $P_A$, provided they operate as honest-but-curious entities. PrivHFL tackles the bottleneck of private inference by building a new protocol from scratch that can run on GPUs. It is not assumed that parties have new data or that there is a public pool of data to run inference on; instead, the additional query data for knowledge transfer is obtained by augmenting the private data with mixup.
SP:cdde1fb9ae1b2a8c9eb81c36e74f5e6dae6f2006
Practical and Private Heterogeneous Federated Learning
1 INTRODUCTION. Heterogeneous federated learning (HFL) (Li & Wang, 2019; Chang et al., 2019), as a promising variant of federated learning (FL), enables clients equipped with different computation and communication capabilities to collaboratively train their own customized models, which may differ in size, numerical precision or structure (Lin et al., 2020). In particular, clients share the knowledge of their models via predictions on auxiliary datasets, such as unlabeled problem-domain datasets (Choquette-Choo et al., 2021) and public non-problem-domain datasets (Li & Wang, 2019; Lin et al., 2020). This flexible approach facilitates customized FL-driven services in areas like healthcare and finance (Kairouz et al., 2019), while addressing the intellectual property concerns of FL models (Atli et al., 2020). However, HFL suffers from two major limitations: (1) The assumption of auxiliary datasets may be unrealistic for many data-critical scenarios (Zhu et al., 2021). For example, in healthcare applications, task-related auxiliary datasets that contain patients' sensitive information are usually difficult to obtain due to strict current regulations like the General Data Protection Regulation. (2) Sharing predictions may still leak the privacy of local data (Papernot et al., 2017). Several works have demonstrated that, given black-box access to a trained model, adversaries can infer membership (Salem et al., 2019) and attribute information (Ganju et al., 2018) of the target sample, and can even reconstruct the original training data (Yang et al., 2019). Therefore, to promote the deployment of HFL in real-world applications, it is crucial to solve these two problems. To the best of our knowledge, the relaxation of the auxiliary dataset assumption in HFL has not been explored before.
Specifically, it is challenging to achieve collaborative training across heterogeneous models when there is no auxiliary dataset to serve as a medium for model knowledge transfer (Li & Wang, 2019). On the other hand, to mitigate the above privacy risks, a natural solution is to integrate advanced secure prediction protocols, such as CrypTFlow2 (Rathee et al., 2020), CryptGPU (Tan et al., 2021), and HE-transformer (Boemer et al., 2019b). These schemes protect private information during model knowledge transfer by utilizing homomorphic encryption (HE) (Gentry, 2009), garbled circuits (GC) (Yao, 1986) or oblivious transfer (OT) (Naor & Pinkas, 2001) techniques (refer to Appendix D.2 for more details). Unfortunately, such methods add huge computation and communication overhead due to their use of heavy cryptographic primitives. For instance, Choquette-Choo et al. (2021) recently proposed CaPC, the first private collaborative learning scheme based on HE-transformer (Boemer et al., 2019b) supporting heterogeneous models, which can be directly extended to HFL. As mentioned above, this work still suffers from efficiency issues inherited from prior secure prediction protocols. Moreover, their scheme is implemented via interaction between clients1; however, in real-world applications, clients (e.g., mobile devices) generally cannot establish direct communication channels with one another (Bonawitz et al., 2017). Therefore, the challenge here is how to efficiently implement secure prediction protocols in the realistic HFL setting. In this work, to approach the above challenges, we develop PrivHFL, a general and practical framework for privacy-preserving HFL. First, PrivHFL relaxes the dependence on auxiliary datasets and designs a simple but effective dataset expansion method using only clients' private datasets. To this end, we instantiate it by leveraging mixup (Zhang et al.
, 2018), originally a regularization technique for improving generalization, and also present some exploration with other data augmentation methods like cutout (DeVries & Taylor, 2017) and cutmix (Yun et al., 2019). The key idea is that the expanded data can provide good coverage of the natural data distribution and hence serve as an effective medium for transferring model knowledge. Second, to evaluate HFL securely and efficiently, we leverage the lightweight additive secret sharing technique (Demmler et al., 2015) to construct customized secure prediction protocols from scratch in a practical setting where there is no direct communication between clients. Our gains mainly come from improvements in communication and computation through the elimination of costly HE and GC protocols. Moreover, in contrast to prior works that evaluate cryptographic protocols on CPUs, PrivHFL converts complex cryptographic operations into simple computations on large blocks of data, which are friendly to GPUs and can be processed by highly-optimized CUDA kernels (Tan et al., 2021). As a result, PrivHFL is well suited to batch prediction (i.e., performing multiple predictions at the same time) with lower amortized cost. We evaluate the designed protocol on GPUs and CPUs, and the results show that our GPU-based protocol is up to 10× faster than its CPU analog. Our key contributions can be summarized as follows:

• We introduce a practical HFL framework that is independent of any auxiliary dataset while provably providing comprehensive privacy protection.

• We design a simple yet effective dataset expansion method to promote the sharing of model knowledge, and construct customized cryptographic protocols for secure prediction.
The paper presents a new approach to heterogeneous federated learning, which uses an augmented dataset instead of a public dataset for knowledge transfer between heterogeneous models. The authors also propose a lightweight additive secret sharing technique to construct a series of tailored cryptographic protocols, which are friendly to GPUs and CUDA kernels. The method was evaluated in a simulated scenario with three public imaging datasets, where it shows superiority over the baseline methods as well as efficiency in terms of runtime and communication cost.
This paper introduces a heterogeneous federated learning framework that does not require public datasets, by designing a dataset expansion method and constructing cryptographic protocols for secure prediction. The main strength of this paper lies in the experimental results: the authors provide experiments on different datasets, heterogeneous local models, and various degrees of non-IID-ness across clients, and report the runtime of the three steps of their proposed secure querying protocol.
Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching
1 INTRODUCTION. Deep reinforcement learning (RL) algorithms have been shown to effectively solve a wide variety of complex problems (e.g., Mnih et al., 2015; Bellemare et al., 2013). However, they are often designed to solve a single task at a time and need to restart the learning process from scratch for any new problem, even when it is defined on the very same environment (e.g., a robot navigating to different locations in the same apartment). Recently, Unsupervised RL (URL) has been proposed as an approach to address this limitation. In URL, the agent first interacts with the environment without any extrinsic reward signal. Afterward, the agent leverages the experience accumulated during the unsupervised learning phase to efficiently solve a variety of downstream tasks defined on the same environment. This approach is particularly effective in problems such as navigation (see e.g., Bagaria et al., 2021) and robotics (see e.g., Pong et al., 2020), where the agent is often required to readily solve a wide range of tasks while the dynamics of the environment remain fixed. In this paper, we focus on the unsupervised objective of discovering a set of skills that can be used to efficiently solve sparse-reward downstream tasks. In particular, we build on the insight that the mutual information (MI) between the skills' latent variables and the states reached by them effectively formalizes the dual objective of learning policies that both cover and navigate the environment efficiently (e.g., Gregor et al., 2016). Indeed, maximizing MI has been shown to be a powerful approach for encouraging exploration in RL (Houthooft et al., 2016; Mohamed & Rezende, 2015) and for unsupervised skill discovery (e.g., Gregor et al., 2016; Eysenbach et al., 2019; Achiam et al., 2018; Sharma et al., 2020; Campos et al., 2020). Nonetheless, learning policies that maximize MI is a challenging optimization problem.
Several approximations have been proposed to simplify it , at the cost of possibly deviating from the original objective of coverage and directedness ( see Sect . 4 for a review of related work ) . In this paper , we propose UPSIDE ( UnsuPervised Skills that dIrect then DiffusE ) to learn a set of policies that can be effectively used to cover the environment and solve goal-reaching downstream tasks . Our solution builds on the following components ( Fig . 1 ) : • Policy structure ( Sect . 3.1 , see Fig . 1 ( A ) ) . We consider policies composed of two parts : 1 ) a directed part , referred to as the skill , that is trained to reach a specific region of the environment , and 2 ) a diffusing part that induces a local coverage around the region attained by the first part . This structure favors coverage and directedness at the level of a single policy . • New constrained objective ( Sect . 3.2 , see Fig . 1 ( B ) & ( C ) ) . We then introduce a constrained optimization problem designed to maximize the number of policies under the constraint that the states reached by each of the diffusing parts are distinct enough ( i.e. , they satisfy a minimum level of discriminability ) . We prove that this problem can be cast as a lower bound to the original MI objective , thus preserving its coverage-directedness trade-off . UPSIDE solves it by adaptively adding policies to , or removing them from , a given initial set , without requiring any prior knowledge of a sensible number of policies . • Tree structure ( Sect . 3.3 , see Fig . 1 ( D ) & ( E ) ) . Leveraging the directed nature of the skills , UPSIDE effectively composes them to build longer and longer policies organized in a tree structure . This overcomes the need to define a suitable policy length in advance . Thus in UPSIDE we can consider short policies to make the optimization easier , while composing their skills along a growing tree structure to ensure an adaptive and thorough coverage of the environment . 
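The adaptive add/remove behavior of the constrained objective can be caricatured as follows. This is an illustrative sketch under strong simplifying assumptions, not the paper's exact procedure: the `discriminability` callable is a hypothetical stand-in for an estimate of how well the least-discriminable of `n` trained policies satisfies the threshold.

```python
def adapt_policy_count(n_init, discriminability, eta, n_max=64):
    """Illustrative sketch of the constrained objective: maximize the
    number of policies N_Z subject to every policy keeping a minimum
    discriminability eta (a proxy for E[q_phi(z | s_diff)]).

    discriminability(n) is a hypothetical estimate of the worst-case
    discriminability when n policies are trained in the environment.
    """
    n = n_init
    # Grow the set while one more policy would still be discriminable.
    while n < n_max and discriminability(n + 1) >= eta:
        n += 1
    # Shrink the set if the current one is already too crowded.
    while n > 1 and discriminability(n) < eta:
        n -= 1
    return n

# Toy environment where discriminability decays as 1/n: the loop settles
# at the largest n with 1/n >= eta.
assert adapt_policy_count(4, lambda n: 1.0 / n, eta=0.1) == 10
```

The key property this sketch mirrors is that the final number of policies is determined by the environment (via the discriminability estimate) rather than fixed in advance.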
The combination of these components allows UPSIDE to effectively adapt the number and the length of policies to the specific structure of the environment , while learning policies that ensure coverage and directedness . We study the effectiveness of UPSIDE and the impact of its components in hard-to-explore continuous navigation and control environments , where UPSIDE improves over existing baselines both in terms of exploration and learning performance . 2 SETTING . We consider the URL setting where the agent interacts with a Markov decision process ( MDP ) M with state space S , action space A , dynamics p ( s′|s , a ) , and no reward . The agent starts each episode from a designated initial state s0 ∈ S ( more generally , s0 could be drawn from any distribution supported over a compact region ) . Upon termination of the chosen policy , the agent is reset to s0 . This setting is particularly challenging from an exploration point of view since the agent cannot rely on the initial distribution to cover the state space . We recall the MI-based unsupervised skill discovery approach ( see e.g. , Gregor et al. , 2016 ) . Denote by Z some ( latent ) variables on which the policies of length T are conditioned ; we assume that Z is categorical for simplicity and because it is the most common case in practice . ( In URL there is no terminal state , so the agent has to define a criterion to terminate the episode and restart ; in practice , most unsupervised skill discovery methods set a fixed number of steps . ) There are three optimization variables : ( i ) the cardinality of Z , denoted by NZ , i.e. , the number of policies ( we write Z = { 1 , . . . , NZ } = [ NZ ] ) , ( ii ) the parameters π ( z ) of the policy indexed by z ∈ Z , and ( iii ) the policy sampling distribution ρ ( i.e. , ρ ( z ) is the probability of sampling policy z at the beginning of the episode ) . Denote policy z ’ s action distribution in state s by π ( ·|z , s ) and the entropy function by H. 
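As a toy illustration of this sampling setup (the numbers below are made up), a policy index z is drawn from the categorical distribution ρ at the start of each episode, and the entropy H(Z) of that distribution is the first term of the reverse MI formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N_Z = 4 policies with a (possibly non-uniform)
# sampling distribution rho over Z = {0, ..., N_Z - 1}.
N_Z = 4
rho = np.array([0.4, 0.3, 0.2, 0.1])

# At the beginning of each episode, a policy index z ~ rho is drawn;
# the agent then executes pi(z) from the initial state s0 for T steps.
z = rng.choice(N_Z, p=rho)

# Entropy H(Z) of the sampling distribution; it is maximized
# (H(Z) = log N_Z) when rho is uniform.
H_Z = -np.sum(rho * np.log(rho))
```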
Let the variable ST be the random ( final ) state induced by sampling a policy z from ρ and executing π ( z ) from s0 for an episode . Denote by pπ ( z ) ( sT ) the distribution over ( final ) states induced by executing policy z , by p ( z|sT ) the probability of z being the policy to induce state sT , and let p ( sT ) = ∑ z∈Z ρ ( z ) pπ ( z ) ( sT ) . Maximizing the MI between Z and ST can be written as
$$\max_{N_Z,\,\rho,\,\pi} I(S_T; Z) = H(S_T) - H(S_T \mid Z) = -\sum_{s_T} p(s_T)\log p(s_T) + \sum_{z\in Z}\rho(z)\,\mathbb{E}_{s_T\mid z}\big[\log p_{\pi(z)}(s_T)\big]$$
$$\qquad = H(Z) - H(Z \mid S_T) = -\sum_{z\in Z}\rho(z)\log\rho(z) + \sum_{z\in Z}\rho(z)\,\mathbb{E}_{s_T\mid z}\big[\log p(z\mid s_T)\big] , \quad (1)$$
where in the expectations sT |z ∼ pπ ( z ) ( sT ) . In the first formulation , the entropy term over states captures the requirement that policies thoroughly cover the state space , while the second term measures the entropy over the states reached by each policy and thus promotes policies that have a directed behavior . Learning the optimal NZ , ρ , and π is a challenging problem and several approximations have been proposed ( see e.g. , Gregor et al. , 2016 ; Eysenbach et al. , 2019 ; Achiam et al. , 2018 ; Campos et al. , 2020 ) . Many approaches focus on the so-called reverse formulation of the MI ( second line of Equation 1 ) . In this case , the conditional distribution p ( z|sT ) is usually replaced with a parametric model qφ ( z|sT ) called the discriminator that is trained via a negative log-likelihood loss simultaneously with all other variables . Then one can maximize the lower bound ( Barber & Agakov , 2004 ) :
$$I(S_T; Z) \ge \mathbb{E}_{z\sim\rho(z),\,\tau\sim\pi(z)}\big[\log q_\phi(z\mid s_T) - \log\rho(z)\big] ,$$
where we denote by τ ∼ π ( z ) trajectories sampled from the policy indexed by z . As a result , each policy π ( z ) can be trained with RL to maximize the intrinsic reward rz ( sT ) : = log qφ ( z|sT ) − log ρ ( z ) . 3 THE UPSIDE ALGORITHM In this section we detail the three main components of UPSIDE , which is summarized in Sect . 3.4 . 
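The intrinsic reward derived from the Barber & Agakov lower bound can be computed directly from the discriminator's output. The following minimal sketch (with made-up toy probabilities) shows that a state the discriminator confidently attributes to its own policy earns a higher reward than an ambiguous one:

```python
import numpy as np

def intrinsic_reward(q_z_given_s, rho, z):
    """r_z(s_T) = log q_phi(z | s_T) - log rho(z), the per-policy reward
    from the Barber-Agakov lower bound on I(S_T; Z).

    q_z_given_s : shape (N_Z,), discriminator posterior at state s_T.
    rho         : shape (N_Z,), policy sampling distribution.
    z           : index of the policy that produced s_T.
    """
    return np.log(q_z_given_s[z]) - np.log(rho[z])

# Toy example with N_Z = 3 uniformly sampled policies.
rho = np.full(3, 1.0 / 3.0)
confident = np.array([0.90, 0.05, 0.05])  # state clearly owned by z = 0
ambiguous = np.array([0.34, 0.33, 0.33])  # state no policy can claim

r_hi = intrinsic_reward(confident, rho, z=0)
r_lo = intrinsic_reward(ambiguous, rho, z=0)
assert r_hi > r_lo  # discriminable states are rewarded
```

Note that under a uniform ρ the reward is non-negative exactly when the discriminator beats chance, i.e. when q_φ(z|s_T) ≥ 1/N_Z.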
3.1 DECOUPLED POLICY STRUCTURE OF DIRECT-THEN-DIFFUSE . While the trade-off between coverage and directedness is determined by the MI objective , the amount of stochasticity of each policy ( e.g. , injected via a regularization on the entropy over the actions ) also has a major impact on the effectiveness of the overall algorithm ( Eysenbach et al. , 2019 ) . In fact , while randomness can promote broader coverage , a highly stochastic policy tends to induce a distribution pπ ( z ) ( sT ) over final states with high entropy , thus decreasing −H ( ST |Z ) and losing in directedness . In UPSIDE , we define policies with a decoupled structure ( see Fig . 1 ( A ) ) composed of a ) a directed part ( of length T ) that we refer to as the skill , with low stochasticity and trained to reach a specific region of the environment , and b ) a diffusing part ( of length H ) with high stochasticity to promote local coverage of the states around the region reached by the skill . Consistently with this structure , the state variable in the conditional entropy in Equation 1 becomes any state reached during the diffusing part ( denoted by the random variable Sdiff ) and not just the skill ’ s terminal state . Following Sect . 2 , we define an intrinsic reward rz ( s ) = log qφ ( z|s ) − log ρ ( z ) , and the skill of policy z maximizes the cumulative reward over the states traversed by the diffusing part . Formally , we can conveniently define the objective function :
$$\max_{\pi(z)} \mathbb{E}_{\tau\sim\pi(z)}\Big[\sum_{t\in J} \alpha \cdot r_z(s_t) + \beta \cdot \mathcal{H}\big(\pi(\cdot\mid z , s_t)\big)\Big] , \quad (2)$$
where J = { T , . . . , T + H } and α = 1 , β = 0 ( resp . α = 0 , β = 1 ) when optimizing for the skill ( resp . the diffusing part ) . In words , the skill is incentivized to bring the diffusing part to a discriminable region of the state space , while the diffusing part is optimized by a simple random walk policy ( i.e. , a stochastic policy with uniform distribution over actions ) to promote local coverage around sT . 
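The α/β switching in Equation 2 can be sketched as follows; the function and parameter names are illustrative, not from the paper:

```python
def phase_coefficients(optimizing_skill):
    """Equation 2 uses (alpha, beta) = (1, 0) when training the skill
    and (0, 1) when training the diffusing part."""
    return (1.0, 0.0) if optimizing_skill else (0.0, 1.0)

def objective_term(t, T, H_len, r_z_t, entropy_t, optimizing_skill):
    """One summand of Equation 2. Only steps in J = {T, ..., T + H_len},
    i.e. the states traversed by the diffusing part, contribute; r_z_t
    is the intrinsic (discriminability) reward at s_t and entropy_t is
    the action entropy H(pi(. | z, s_t))."""
    if t < T or t > T + H_len:
        return 0.0
    alpha, beta = phase_coefficients(optimizing_skill)
    return alpha * r_z_t + beta * entropy_t

# A step during the skill phase contributes nothing; a step in the
# diffusing window contributes r_z when training the skill, and the
# action entropy when training the diffusing part.
assert objective_term(5, 10, 20, 1.5, 0.7, optimizing_skill=True) == 0.0
assert objective_term(12, 10, 20, 1.5, 0.7, optimizing_skill=True) == 1.5
assert objective_term(12, 10, 20, 1.5, 0.7, optimizing_skill=False) == 0.7
```

The point of the split is that the skill's return is evaluated only on diffusing-part states, which is what pushes the skill to deliver the random walk into a discriminable region.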
Table 1 illustrates how UPSIDE ’ s policies compare to other methods . Unlike VIC and similar to DIAYN , the diffusing parts of the policies tend to “ push ” the skills away so as to reach diverse regions of the environment . The combination of the directedness of the skills and the local coverage of the diffusing parts thus ensures that the whole environment can be properly visited with NZ ≪ |S| policies . Furthermore , the diffusing part can be seen as defining a cluster of states that represents the goal region of the directed skill . This is in contrast with DIAYN policies , whose stochasticity may be spread over the whole trajectory . This allows us to “ ground ” the latent variable representations of the policies Z to specific regions of the environment ( i.e. , the clusters ) . As a result , maximizing the MI I ( Sdiff ; Z ) can be seen as learning a set of “ cluster-conditioned ” policies .
The paper proposes a novel algorithm for learning unsupervised skills based on empowerment. Specifically, direct-and-diffuse policies are proposed that first optimize empowerment and then after a fixed number of episode steps optimize action entropy. This yields policies that go to a certain point in a directed manner and then move around that point at random. Further, the UPSIDE algorithm is based on tracking the policies with high discriminability (i.e. low H(z|x)). The algorithm keeps a set of policies that are currently above a threshold and attempts to add new policies incrementally. Finally, each new policy can be assigned a parent policy, wherein the parent policy is executed to produce the starting state for the new policy. This allows the proposed algorithm to stack policies sequentially and reach goals further away. Empirically, the algorithm achieves good performance on pointmass, cheetah, walker, and ant tasks.
SP:f202f3d6780876a0bdd7d7bd4d7047719a145177
To increase state space coverage with unsupervised skill discovery, the authors propose UPSIDE. They tackle reaching distant states (direct) and covering neighboring states (diffuse) at different stages. They also discover skills by forming a tree, which enables chaining of smaller skills to form far-reaching skills. Moreover, instead of fixing the number of skills to learn, they maximize the number of skills under a constraint on the output of the discriminator $q_\phi(z|s_{diff})$ and show that this is a lower bound of the mutual information objective. Empirical results on Maze and MuJoCo environments and further analyses are presented.
The paper proposes an unsupervised exploration method for reinforcement learning, called UPSIDE, that combines learning of directed skills that enable covering distant states with a diffusing part that explores locally and helps expand the explored region further. The main contributions are the policy topology (division into tree-structured skills and diffusing parts), a theory for training such policies, and a practical implementation that simplifies some of the technicalities induced by the theory. The experiments illustrate well the claimed benefits and include both toy tasks (a point mass in a maze) and more complex tasks such as HalfCheetah and Ant from OpenAI Gym. The experiments also compare to prior methods such as DIAYN and provide ablations of the importance of the different components.
Contrastive Mutual Information Maximization for Binary Neural Networks
1 INTRODUCTION . Although deep neural networks (DNNs) [1] have achieved remarkable success in various computer vision tasks such as image classification [2] and semantic image segmentation [3], their over-parametrization makes them computationally expensive and storage-intensive. To advance the development of deep learning towards resource-constrained edge devices, researchers have proposed several neural network compression paradigms, such as network pruning [4, 5], knowledge distillation [6] and network quantization [7, 8]. Among the network quantization methods, network binarization [7] stands out: it quantizes weights and activations (i.e., intermediate feature maps) to ±1, compresses the full-precision counterpart by 32×, and replaces the time-consuming inner product in full-precision networks with the efficient xnor-bitcount operation in BNNs. However, severe accuracy drops persist between full-precision models and their binary counterparts. To tackle this problem, previous works mainly focus on reducing the quantization error induced by weight binarization [9, 10], and on elaborately approximating the binarization function to relieve the gradient mismatch in backward propagation [11, 8]. Indeed, they achieve state-of-the-art performance. Yet as those two paradigms develop, narrowing the quantization error and improving gradient transmission are reaching their bottlenecks [12, 13], since 1W32A models (only the weights quantized to 1-bit, with activations kept at 32-bit) can already perform as well as full-precision models, implying that activation binarization has become the main obstacle to further performance improvement. To address the accuracy degradation caused by activation binarization, a few studies propose to regulate the distributions of the binary activations, e.g.,
researchers in [14] design a distribution loss to explicitly regularize the activation flow, and researchers in [13] propose to shift the thresholds of the binary activation functions to make the distribution of binary activations unbalanced. They heuristically design low-level patterns to analyze the distributions of binary activations, such as the minimum of the activations and the balance of the distributions. Nevertheless, they neglect the high-level indicators of the distribution and a unique characteristic of BNNs: the binary activations and the latent full-precision activations exist in the same forward pass. Thus, we argue that high-level properties of the distributions, such as correlations and dependencies between binary and full-precision activations, should be captured and utilized. In this work, we explore introducing mutual information into BNNs, where mutual information acts as a fundamental quantity measuring the amount of information shared by the binary and latent real-valued activations. In contrast to the aforementioned works focusing on learning the distribution of binary activations, mutual information naturally captures non-linear statistical dependencies between variables, and thus can be used as a measure of true dependence [15]. Based on this metric, we propose a novel method, termed Contrastive Mutual Information Maximization for Binary Neural Networks (CMIM-BNN). Specifically, we design a highly effective optimization strategy using contrastive estimation for mutual information maximization. As illustrated in Figure 1, we replace the data transformation module in contrastive learning with the structure exclusive to BNNs, where full-precision and binary activations are in the same forward pass. In this way, contrastive learning contributes to inter-class decorrelation of binary activations and avoids collapsed solutions.
In other words, our method is built upon a contrastive learning framework to learn representative binary activations, in which we pull the binary activation closer to the full-precision activation and push the binary activation further away from other binary activations in the contrastive space. Moreover, by utilizing an additional MLP module to extract representations of activations, our method can explicitly capture higher-order dependencies in the contrastive space. To the best of our knowledge, this is the first work aiming at maximizing the mutual information of the activations in BNNs within a contrastive learning framework. Overall, the contributions of this paper are three-fold: • Considering the distributions of activations, we propose a novel CMIM framework to optimize BNNs by maximizing the mutual information between the binary activation and its latent real-valued counterpart; • We develop an effective contrastive learning strategy to achieve the goal of mutual information maximization for BNNs; benefiting from it, the representation ability of BNNs is clearly strengthened, not only for the classification task but also for downstream CV tasks; • Experimental results show that our method can significantly improve over the existing SOTA methods on the classification task on CIFAR-10/100 and ImageNet, e.g., by 6.4% on CIFAR-100 and 3.0% on ImageNet. Besides, we also demonstrate good generalization ability of the proposed CMIM on other challenging CV tasks such as depth estimation and semantic segmentation. 2 MUTUAL INFORMATION MAXIMIZATION FOR TRAINING BNNS . 2.1 PRELIMINARIES . We first define a Multi-Layer Perceptron (MLP) with K layers. For simplicity of derivation, we discard the bias term of the network. Then the MLP f(x) can be denoted as: f(W_1, ..., W_K; x) = (W_K · σ · W_{K−1} · ... · σ · W_1)(x), (1) where x is the input sample and W_k : R^{d_{k−1}} → R^{d_k} (k = 1, ..., K) stands for the weight matrix connecting the (k−1)-th and the k-th layer, with d_{k−1} and d_k representing the sizes of the input and output of the k-th network layer, respectively. The σ(·) function performs element-wise activation on the feature maps. Notably, a convolution layer with an input map of m channels, an output map of n channels, and kernels of size w × h has m × n × w × h parameters. We can re-arrange the parameters into a weight matrix of size n × (m × h × w), so that this convolution layer operates in the same way as a fully-connected layer. Hence, it is sufficient to consider networks with fully-connected layers. Based on these predefined notions, the sectional MLP f_k(x) consisting of the first k layers of f(x) can be represented as: f_k(W_1, ..., W_k; x) = (W_k · σ · ... · σ · W_1)(x). (2) And the MLP f can be seen as a special case in the function sequence {f_k} (k ∈ {1, ..., K}), i.e., f = f_K when k = K. Binary Neural Networks. Here, we revisit the general binarization method in [16, 7], which maintains full-precision latent weights W_F for gradient updates; the k-th weight matrix W_F^k is binarized into ±1, obtaining the binary weight matrix W_B^k via a binarization function (normally sgn(·)), i.e., W_B^k = sgn(W_F^k). The intermediate (full-precision) activation map of the k-th layer is then produced as A_F^k = W_B^k A_B^{k−1}, the same quantization is used to binarize the full-precision activation map as A_B^k = sgn(A_F^k), and a whole forward pass of binarization is performed by iterating this process over the K layers. Mutual Information.
For two discrete variables X and Y, their mutual information is defined as [17]: I(X, Y) = Σ_{x,y} P_{XY}(x, y) log [ P_{XY}(x, y) / (P_X(x) P_Y(y)) ], (3) where P_{XY}(x, y) is the joint distribution, and P_X(x) = Σ_y P_{XY}(x, y) and P_Y(y) = Σ_x P_{XY}(x, y) are the marginals of X and Y, respectively. Mutual information quantifies the amount of information obtained about one random variable by observing the other. It is a dimensionless quantity with (generally) units of bits, and can be thought of as the reduction in uncertainty about one random variable given knowledge of another. High mutual information indicates a large reduction in uncertainty; low mutual information indicates a small reduction; and zero mutual information between two random variables means the variables are independent. In the context of binarization, considering the binary and full-precision activations as random variables, we would like them to share as much information as possible, since the binary activations are produced from their corresponding full-precision activations. Theoretically, the mutual information between those two variables should be maximized. 2.2 CONTRASTIVE MUTUAL INFORMATION MAXIMIZATION . In this section, we formalize the idea of constructing a Noise-Contrastive Estimation (NCE) loss to maximize the mutual information between the binary and the full-precision activations. In particular, we derive a novel CMIM loss for training BNNs, where NCE is introduced to avoid direct mutual information computation by estimating it with its lower bound in Eq. 7. For the binary network f_B and its latent full-precision counterpart f_F in the same training iteration, the series of their activations {A_B^k} and {A_F^k} (k ∈ {1, ..., K}), where A_B^k = (a_B^{k,1}, ..., a_B^{k,N}) and A_F^k = (a_F^{k,1}, ..., a_F^{k,N}), can be considered as a series of random variables.
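The definition in Eq. 3 can be checked on a toy joint distribution. The sketch below (our own illustrative code, not part of the paper's method) shows that two perfectly dependent binary variables share one bit of information while two independent ones share zero:

```python
import math

def mutual_information(p_xy):
    """I(X;Y) = sum_{x,y} P(x,y) * log2( P(x,y) / (P(x) P(y)) )  (Eq. 3), in bits."""
    p_x = [sum(row) for row in p_xy]                 # marginal of X (row sums)
    p_y = [sum(col) for col in zip(*p_xy)]           # marginal of Y (column sums)
    mi = 0.0
    for i, row in enumerate(p_xy):
        for j, p in enumerate(row):
            if p > 0:                                # 0 * log 0 is taken as 0
                mi += p * math.log2(p / (p_x[i] * p_y[j]))
    return mi

# Perfectly dependent binary variables: one bit of shared information.
dependent = [[0.5, 0.0], [0.0, 0.5]]
# Independent binary variables: zero shared information.
independent = [[0.25, 0.25], [0.25, 0.25]]

print(mutual_information(dependent))    # 1.0
print(mutual_information(independent))  # 0.0
```

In the binarization setting, maximizing I(a_B^k, a_F^k) pushes the pair of activation variables towards the "dependent" end of this spectrum.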
The corresponding variables (a_B^k, a_F^k) should share more information, i.e., the mutual information of the same layer's output activations I(a_B^k, a_F^k) (k ∈ {1, ..., K}) should be maximized to enforce their mutual dependence. To this end, we introduce the contrastive learning framework into our targeted binarization task. The basic idea of contrastive learning is to compare different views of the data (usually under different data augmentations) to calculate similarity scores [18, 19, 20, 21, 22]. This framework is suitable for our case, since the binary and full-precision activations can be seen as two different views. For a training batch with N samples, the samples can be denoted as {x_i} (i ∈ {1, ..., N}). We feed a batch of samples to the BNN and obtain K N^2 pairs of activations (a_B^{k,i}, a_F^{k,j}), which augments the data for the auxiliary task. We define a pair containing two activations from the same sample as a positive pair, i.e., (a_B^{k,i}, a_F^{k,j})+ if i = j, and vice versa. With Bayes' theorem, the posterior probability of two activations forming a positive pair can be formalized as: P(i = j | a_B^{k,i}, a_F^{k,j}) = [ P(a_B^{k,i}, a_F^{k,j} | i = j) · (1/N) ] / [ P(a_B^{k,i}, a_F^{k,j} | i = j) · (1/N) + P(a_B^{k,i}, a_F^{k,j} | i ≠ j) · ((N−1)/N) ]. (4) And the probability of activations forming a negative pair is: P(i ≠ j | a_B^{k,i}, a_F^{k,j}) = 1 − P(i = j | a_B^{k,i}, a_F^{k,j}). To simplify the NCE derivation, several works [23, 24, 25] make assumptions about the dependence of the variables; we also use the assumption that the activations from positive pairs are dependent and the ones from negative pairs are independent, i.e., P(a_B^{k,i}, a_F^{k,j} | i = j) = P(a_B^{k,i}, a_F^{k,j}) and P(a_B^{k,i}, a_F^{k,j} | i ≠ j) = P(a_B^{k,i}) P(a_F^{k,j}).
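The chain from Eq. 5 to Eq. 7 can be sanity-checked numerically: writing r for the density ratio P(a_B, a_F)/(P(a_B) P(a_F)), the posterior of Eq. 5 becomes r/(r + N − 1), and the log-posterior plus log(N − 1) never exceeds log r, whose expectation is the mutual information. A small sketch (the function name is ours, chosen for illustration):

```python
import math

def positive_posterior(ratio, n):
    """Eq. 5 with r = P(a_B, a_F) / (P(a_B) P(a_F)):
    P(i = j | a_B, a_F) = r / (r + (N - 1))."""
    return ratio / (ratio + (n - 1))

# Eq. 6: log P(i = j | pair) <= log r - log(N - 1), i.e. the log-posterior
# plus log(N - 1) never exceeds the log density ratio; the lower bound of
# Eq. 7 therefore saturates at log(N - 1) for very confident positives.
n = 8
for ratio in (0.5, 1.0, 4.0, 100.0):
    lhs = math.log(positive_posterior(ratio, n)) + math.log(n - 1)
    rhs = math.log(ratio)
    assert lhs <= rhs + 1e-12
```

This also shows why larger N (more negatives) tightens the bound: the achievable lower bound on I grows like log(N − 1), which motivates the large negative-sample counts mentioned later.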
Hence, the above equation can be simplified as: P(i = j | a_B^{k,i}, a_F^{k,j}) = P(a_B^{k,i}, a_F^{k,j}) / [ P(a_B^{k,i}, a_F^{k,j}) + P(a_B^{k,i}) P(a_F^{k,j}) (N − 1) ]. (5) Taking the logarithm of Eq. 5 and rearranging the terms, we obtain: log P(i = j | a_B^{k,i}, a_F^{k,j}) = − log [ 1 + (N − 1) P(a_B^{k,i}) P(a_F^{k,j}) / P(a_B^{k,i}, a_F^{k,j}) ] ≤ log [ P(a_B^{k,i}, a_F^{k,j}) / (P(a_B^{k,i}) P(a_F^{k,j})) ] − log(N − 1). (6) Taking expectation on both sides with respect to P(a_B^{k,i}, a_F^{k,j}), and combining the definition of mutual information in Eq. 3, we can derive: I(a_B^k, a_F^k) = Σ_i Σ_j P(a_B^{k,i}, a_F^{k,j}) log [ P(a_B^{k,i}, a_F^{k,j}) / (P(a_B^{k,i}) P(a_F^{k,j})) ] ≥ Σ_i Σ_j P(a_B^{k,i}, a_F^{k,j} | i = j) [ log P(i = j | a_B^{k,i}, a_F^{k,j}) + log(N − 1) ] = E_{P(a_B^{k,i}, a_F^{k,j} | i = j)} [ log P(i = j | a_B^{k,i}, a_F^{k,j}) ] + log(N − 1), (7) where I(a_B^k, a_F^k), the mutual information between the binary and full-precision distributions, is our targeted objective, and the right-hand side is the lower bound we optimize. Instead of directly maximizing the mutual information, we choose to maximize its lower bound in Eq. 7. However, the distribution P(i = j | a_B^{k,i}, a_F^{k,j}) is hard to estimate. We take advantage of the idea of contrastive learning, and introduce a critic function h to approximate the targeted distribution [18, 19, 20]. In practice, we use the following: h(a_B^{k,i}, a_F^{k,j}) = exp(τ (a_B^{k,i})⊤ a_F^{k,j}) / Σ_j exp(τ (a_B^{k,i})⊤ a_F^{k,j}). (8) Algorithm 1 Forward and Backward Propagation of CMIM. Require: a minibatch of data samples (X, Y), current binary weights W_B^k, latent full-precision weights W_F^k, and learning rate η. Ensure: updated weights W_F^k′.
1: Forward Propagation: 2: for k = 1 to K − 1 do 3: Binarize latent weights: W_B^k ← sgn(W_F^k); 4: Compute the activations of the next layer with the binary operation: A_F^k ← XnorDotProduct(W_B^k, A_B^{k−1}); 5: Perform Batch Normalization: A_F^k ← BatchNorm(A_F^k); 6: Binarize full-precision activations to obtain binary activations: A_B^k ← sgn(A_F^k); 7: end for 8: For k = 1, ..., K, pair {a_B^{k,i}} and {a_F^{k,j}} into positive (i = j) and negative (i ≠ j) pairs, then use Eq. 9 layer by layer to compute the NCE loss L_NCE^k between A_B^k and A_F^k for contrastive learning; 9: Combine the series of NCE losses {L_NCE^k} with the classification loss into the overall loss L, with Eq. 11; 10: Backward Propagation: compute the gradient of the overall loss function, i.e., ∂L/∂W_B, using the straight-through estimator to handle the sign function; 11: Parameter Update: update the full-precision weights: W_F^k′ ← W_F^k − η ∂L/∂W_B^k. Here τ is a temperature parameter that controls the concentration level of the distribution [6]. τ is important for supervised feature learning, and is also necessary for tuning the concentration of a_B^{k,i} and a_F^{k,j} in our contrastive space. Loss Function. We define the contrastive loss function L_NCE^k between the k-th layer's activations A_B^k and A_F^k as: L_NCE^k = E_{P(a_B^{k,i}, a_F^{k,j} | i = j)} [ log h(a_B^{k,i}, a_F^{k,j}) ] + N E_{P(a_B^{k,i}, a_F^{k,j} | i ≠ j)} [ log(1 − h(a_B^{k,i}, a_F^{k,j})) ]. (9) We would like to comment on the above loss function from the perspective of contrastive learning. The first term, over positive pairs, is optimized to capture more intra-class correlations, and the second term, over negative pairs, encourages inter-class decorrelation. Because the pair construction is instance-wise, the number of negative samples can theoretically be the size of the entire training set, e.g., 1.2 million for ImageNet. With these additional hand-crafted contrastive pairs for the proxy optimization problem in Eq.
9, the representation capacity of BNNs can be further improved, as many contrastive learning methods have demonstrated [22, 18, 19, 20]. Moreover, the optimal ĥ = argmax_h L_NCE^k approximates the targeted distribution, i.e., ĥ(a_B^{k,i}, a_F^{k,j}) = P(i = j | a_B^{k,i}, a_F^{k,j}), (10) where the detailed proof is given in the supplementary material. Thus, from Eq. 7-10 we can derive that minimizing the NCE loss L_NCE^k is equivalent to maximizing the targeted mutual information between the binary and full-precision activations, I(a_B^k, a_F^k). Combining the series of NCE losses from different layers {L_NCE^k} (k = 1, ..., K), the overall loss L is defined as: L = λ Σ_{k=1}^{K} L_NCE^k / β^{K−1−k} + L_cls, (11) where L_cls is the classification loss with respect to the ground truth, λ controls the weight of the NCE loss, and β is a coefficient greater than 1; we denote the CMIM loss as L_CMIM = Σ_{k=1}^{K} L_NCE^k / β^{K−1−k}. Hence, β^{K−1−k} decreases as k increases and consequently L_NCE^k / β^{K−1−k} increases, so the activations of later layers are weighted more heavily, which leads to better performance in practice. The complete training process of CMIM is presented in Algorithm 1. Discussion on the CMIM Loss. Besides the theoretical formulation from the perspective of mutual information maximization, we also provide an intuitive explanation of CMIM. As illustrated in Figure 2, we strengthen the representation ability of binary activations (Figure 3) by designing a proxy task within the contrastive learning framework. By embedding the activations into the contrastive space and pulling-and-pushing the paired embeddings, the BNN can learn better representations from this difficult yet effective auxiliary contrastive learning task. Note that even though we only pick two images to illustrate Figure 2, the actual number of negative samples can be huge in practice (e.g.,
16,384 for training ResNet-18 on ImageNet), benefiting from the MemoryBank [24] technique. With this property, we speculate that the contrastive pairing works as data augmentation, which contributes to our method. This additional pairing provides more information for training the BNNs; thus our CMIM loss can also be viewed as an overfitting-mitigation module. We also conduct experiments in Sections 3.2 and 3.3 to validate this speculation. Difference from other contrastive learning methods. The key idea of contrastive learning is to pull representations in positive pairs close and push representations in negative pairs apart in a contrastive space. Several self-supervised learning methods are rooted in the well-established idea of mutual information maximization, such as Deep InfoMax [19], Contrastive Predictive Coding [18], MemoryBank [24], Augmented Multiscale DIM [20], MoCo [21] and SimSiam [22]. These are based on NCE [23] and InfoNCE [19], which can be seen as a lower bound on mutual information [27]. The formulation of our CMIM-BNN is similar to classic contrastive learning methods, as all are inspired by NCE. However, our approach differs from those methods in several respects. Firstly, the training process of BNNs differs from regular network training, in that binary and latent full-precision activations exist in the same forward pass. We seamlessly integrate this mixed-activation property with NCE, and thus the targeted lower bound formulated for optimization is different. Secondly, the binary and full-precision weights are both optimized by the NCE loss (i.e., the two view-augmentation networks are optimized simultaneously in contrastive learning), whereas most of the aforementioned contrastive learning methods optimize their view augmentation networks separately.
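Putting Eq. 8, 9 and 11 together, the loss pipeline can be sketched in a few lines. This is a toy illustration on plain Python lists, not the paper's implementation: `critic`, `nce_objective` and `cmim_loss` are our own names, the expectations in Eq. 9 are replaced by batch averages, the temperature τ is applied multiplicatively exactly as Eq. 8 is written, and layers are 0-indexed.

```python
import math

def critic(a_b, a_f_batch, j, tau=0.1):
    """h(a_B^i, a_F^j) of Eq. 8: softmax over the candidates a_F^j of the
    temperature-scaled inner product tau * <a_B^i, a_F^j>."""
    scores = [math.exp(tau * sum(x * y for x, y in zip(a_b, a_f)))
              for a_f in a_f_batch]
    return scores[j] / sum(scores)

def nce_objective(a_b_batch, a_f_batch, tau=0.1):
    """Empirical form of Eq. 9 for one layer: log h on positive pairs
    (i == j) and N * log(1 - h) averaged over negative pairs (i != j)."""
    n = len(a_b_batch)
    pos = sum(math.log(critic(a_b_batch[i], a_f_batch, i, tau))
              for i in range(n)) / n
    neg = sum(math.log(1.0 - critic(a_b_batch[i], a_f_batch, j, tau))
              for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    return pos + n * neg

def cmim_loss(per_layer_nce, cls_loss, lam=1.0, beta=2.0):
    """Eq. 11 with 0-indexed layers k = 0..K-1: later layers are divided by a
    smaller beta^(K-1-k), so they weigh more in the total."""
    big_k = len(per_layer_nce)
    weighted = sum(l / beta ** (big_k - 1 - k)
                   for k, l in enumerate(per_layer_nce))
    return lam * weighted + cls_loss

a_b = [[1.0, -1.0], [-1.0, 1.0]]   # binary activations for a batch of 2
a_f = [[0.9, -0.8], [-0.7, 0.6]]   # their full-precision counterparts
total = cmim_loss([nce_objective(a_b, a_f)], cls_loss=0.5)
```

Since the critic is a softmax over candidates, its values over j sum to one per anchor, which matches the normalized form of Eq. 8; the layer weighting in `cmim_loss` reproduces the "later layers count more" behavior the paper describes for Eq. 11.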
The authors propose to make full use of the full-precision latent weights in BNN training by utilizing the popular contrastive loss between samples generated by full-precision activations and binary counterparts. They follow the derivations of "Contrastive Representation Distillation" to bridge the gap between mutual information maximization and the proposed loss function. The experiment results show consistent improvements over strong baseline methods on image recognition tasks.
9 , the representation capacity of BNNs can be further improved , as many contrastive learning methods demonstrated [ 22 , 18 , 19 , 20 ] . Moreover , the optimal ĥ = argmaxh LkNCE can approximate the targeted distribution , i.e . ĥ ( ak , iB , a k , j F ) = P ( i = j | a k , i B , a k , j F ) , ( 10 ) where the detailed proof is shown in the supplementary material . Thus , with Eq . 7-10 we can derive that minimizing the NCE loss LkNCE is equivalent to maximizing the targeted mutual information between the binary and full-precision activations , I ( akB , a k F ) . Combining the series of NCE loss from different layers { LkNCE } , ( k = 1 , · · · , K ) , the overall loss L can be defined as : L = λ K∑ k=1 LkNCE βK−1−k + Lcls , ( 11 ) where Lcls is the classification loss respect to the ground truth , λ is used to control the degree of NCE loss , β is a coefficient greater than 1 , and we denote the CMIM loss as LCMIM = ∑K k=1 LkNCE βK−1−k . Hence , the βK−1−k decreases with k increasing and consequently the L k NCE βK−1−k increases . In this way , the activations of latter layer can be substantially retained , which leads to better performance in practice . The complete training process of CMIM is presented in Algorithm 1 . Discussion on the CMIM Loss . Besides the theoretical formulation from the perspective of mutual information maximization , we also provide an intuitive explanation about CMIM . As illustrated in Figure 2 , we strengthen the representation ability of binary activations ( Figure 3 ) via designing a proxy task with the contrastive learning framework . By embedding the activations to the contrastive space and pull-and-push the paired embeddings , the BNNs can learn better representations from this difficult yet effective auxiliary contrastive learning task . Note that even though we only pick up two images to formulate Figure 2 , the actual number of negative samples can be huge in practice ( e.g . 
16,384 for training ResNet-18 on ImageNet ) , benefit from the MemoryBank [ 24 ] technique . With this property , we speculate that the contrastive pairing works as the data augmentation , which contributes to our method . This additional pairing provides more information for training the BNNs , thus our CMIM loss can be treated as an overfitting-mitigated module . We also conduct experiments in the Section 3.2 and 3.3 to validate our speculation . Difference with other contrastive learning methods . The key idea of contrastive learning is to pull representations close in positive pairs and push representations apart in negative pairs in a contrastive space . Several self-supervised learning methods are rooted in well-established idea of the mutual information maximization , such as Deep InfoMax [ 19 ] , Contrastive Predictive Coding [ 18 ] , MemoryBank [ 24 ] , Augmented Multiscale DIM [ 20 ] , MoCo [ 21 ] and SimSaim [ 22 ] . These are based on NCE [ 23 ] and InfoNCE [ 19 ] which can be seen as a lower bound on mutual information [ 27 ] . The formulation of our CMIM-BNN is similar to the classic contrastive learning methods , where we all are inspired by NCE . However , our approach has several differences from those methods . Firstly , the training process of BNNs is different from regular network training , where binary and latent full-precision activations exist in the same forward pass . We seamlessly integrate this mixedactivation property with NCE , and thus the targeted lower bound formulated for optimization is different . Secondly , the binary and full-precision weights are both optimized by the NCE loss ( i.e . the two view augmentation networks are optimized simultaneously in contrastive learning ) , yet most aforementioned contrastive learning methods optimize their view augmentation networks separately .
This paper proposes to use contrastive distillation to train a binary network by maximizing the mutual information between itself (student network) and the full-precision network (teacher network). Empirical results show that this new objective further improves the binarization performance on top of several recent binary networks on image classification tasks. The authors also empirically show that models trained with the proposed contrastive objective have good transfer performance.
Contrastive Mutual Information Maximization for Binary Neural Networks
1 INTRODUCTION . Although deep neural networks ( DNNs ) [ 1 ] have achieved remarkable success in various computer vision tasks such as image classification [ 2 ] and semantic image segmentation [ 3 ] , their over-parametrization makes them computationally expensive and storage-intensive . To advance deep learning toward resource-constrained edge devices , researchers have proposed several neural network compression paradigms , such as network pruning [ 4 , 5 ] , knowledge distillation [ 6 ] and network quantization [ 7 , 8 ] . Among the network quantization methods , network binarization [ 7 ] stands out by quantizing weights and activations ( i.e. , intermediate feature maps ) to ±1 , compressing the full-precision counterpart by 32× , and replacing the time-consuming inner products of full-precision networks with efficient xnor-bitcount operations in BNNs . However , a severe accuracy drop persists between full-precision models and their binary counterparts . To tackle this problem , previous works mainly focus on reducing the quantization error induced by weight binarization [ 9 , 10 ] , and on elaborately approximating the binarization function to relieve the gradient mismatch in backward propagation [ 11 , 8 ] . Indeed , they achieve state-of-the-art performance . Yet as these two paradigms mature , narrowing the quantization error and enhancing gradient transmission have reached their bottlenecks [ 12 , 13 ] : 1W32A models ( which quantize only the weights to 1-bit while keeping the activations at 32-bit ) can already perform as well as full-precision models , implying that activation binarization has become the main obstacle to further performance improvement . To address the accuracy degradation caused by activation binarization , a few studies propose to regulate the distributions of the binary activations , e.g .
researchers in [ 14 ] design a distribution loss to explicitly regularize the activation flow ; researchers in [ 13 ] propose to shift the thresholds of binary activation functions to make the distribution of binary activations unbalanced . These works heuristically design low-level patterns to analyze the distributions of binary activations , such as the minimum of the activations and the balance of the distributions . Nevertheless , they neglect high-level indicators of the distribution and a unique characteristic of BNNs : the binary activations and the latent full-precision activations exist in the same forward pass . Thus , we argue that high-level properties of the distributions , such as correlations and dependencies between binary and full-precision activations , should be captured and utilized . In this work , we explore introducing mutual information into BNNs , where mutual information acts as a fundamental quantity measuring the amount of information shared by the binary and latent real-valued activations . In contrast to the aforementioned works focusing on learning the distribution of binary activations , mutual information naturally captures non-linear statistical dependencies between variables , and thus can serve as a measure of true dependence [ 15 ] . Based on this metric , we propose a novel method , termed Contrastive Mutual Information Maximization for Binary Neural Networks ( CMIM-BNN ) . Specifically , we design a highly effective optimization strategy using contrastive estimation for mutual information maximization . As illustrated in Figure 1 , we replace the data transformation module of contrastive learning with the structure exclusive to BNNs , where full-precision and binary activations lie in the same forward pass . In this way , contrastive learning contributes to inter-class decorrelation of binary activations and avoids collapsed solutions .
In other words , our method is built upon a contrastive learning framework to learn representative binary activations , in which we pull a binary activation closer to its full-precision counterpart and push it away from the binary activations of other samples in the contrastive space . Moreover , by utilizing an additional MLP module to extract representations of activations , our method can explicitly capture higher-order dependencies in the contrastive space . To the best of our knowledge , this is the first work aiming to maximize the mutual information of the activations in BNNs within a contrastive learning framework . Overall , the contributions of this paper are three-fold : • Considering the distributions of activations , we propose a novel CMIM framework to optimize BNNs by maximizing the mutual information between each binary activation and its latent real-valued counterpart ; • We develop an effective contrastive learning strategy to achieve mutual information maximization for BNNs , which clearly strengthens the representation ability of BNNs not only for classification but also for downstream CV tasks ; • Experimental results show that our method can significantly improve existing SOTA methods on the classification task on CIFAR-10/100 and ImageNet , e.g. , by 6.4 % on CIFAR-100 and 3.0 % on ImageNet . Besides , we demonstrate the good generalization ability of the proposed CMIM on other challenging CV tasks such as depth estimation and semantic segmentation . 2 MUTUAL INFORMATION MAXIMIZATION FOR TRAINING BNNS . 2.1 PRELIMINARIES . We first define a Multi-Layer Perceptron ( MLP ) with $K$ layers . To simplify the derivation , we discard the bias terms of the network . The MLP $f(x)$ can then be written as
$$f(W^1, \cdots, W^K; x) = (W^K \cdot \sigma \cdot W^{K-1} \cdots \sigma \cdot W^1)(x), \qquad (1)$$
where $x$ is the input sample and $W^k : \mathbb{R}^{d_{k-1}} \mapsto \mathbb{R}^{d_k}$ $(k = 1, \ldots, K)$ stands for the weight matrix connecting the $(k-1)$-th and the $k$-th layer , with $d_{k-1}$ and $d_k$ the input and output sizes of the $k$-th layer , respectively . The function $\sigma(\cdot)$ performs element-wise activation on the feature maps . Notably , a convolution layer with an input map of $m$ channels , an output map of $n$ channels , and kernels of size $w \times h$ has $m \times n \times w \times h$ parameters ; these parameters can be re-arranged into a weight matrix of size $n \times (m \times h \times w)$ , so that the convolution layer operates in the same way as a fully-connected layer . It is therefore sufficient to consider networks with fully-connected layers . Based on these notions , the sectional MLP $f_k(x)$ consisting of the first $k$ layers of $f(x)$ can be represented as
$$f_k(W^1, \cdots, W^k; x) = (W^k \cdot \sigma \cdots \sigma \cdot W^1)(x). \qquad (2)$$
The MLP $f$ is the special case $f = f_K$ of the function sequence $\{f_k\}$ $(k \in \{1, \cdots, K\})$ . Binary Neural Networks . Here , we revisit the general binarization method of [ 16 , 7 ] , which maintains full-precision latent variables $W_F$ for gradient updates . The $k$-th weight matrix $W_F^k$ is binarized into $\pm 1$ , obtaining the binary weight matrix $W_B^k$ via a binarization function ( normally $\mathrm{sgn}(\cdot)$ ) , i.e. , $W_B^k = \mathrm{sgn}(W_F^k)$ . The intermediate ( full-precision ) activation map of the $k$-th layer is then produced as $A_F^k = W_B^k A_B^{k-1}$ , the same quantization method binarizes it as $A_B^k = \mathrm{sgn}(A_F^k)$ , and a whole binarized forward pass is performed by iterating this process over all layers . Mutual Information .
For two discrete variables $X$ and $Y$ , their mutual information is defined as [ 17 ] :
$$I(X, Y) = \sum_{x, y} P_{XY}(x, y) \log \frac{P_{XY}(x, y)}{P_X(x) P_Y(y)}, \qquad (3)$$
where $P_{XY}(x, y)$ is the joint distribution , and $P_X(x) = \sum_y P_{XY}(x, y)$ and $P_Y(y) = \sum_x P_{XY}(x, y)$ are the marginals of $X$ and $Y$ , respectively . Mutual information quantifies the amount of information obtained about one random variable by observing the other . It is a dimensionless quantity , generally measured in bits , and can be thought of as the reduction in uncertainty about one random variable given knowledge of another : high mutual information indicates a large reduction in uncertainty , low mutual information a small reduction , and zero mutual information means the two variables are independent . In the context of binarization , viewing the binary and full-precision activations as random variables , we would like them to share as much information as possible , since the binary activations are produced from their corresponding full-precision activations . Theoretically , the mutual information between these two variables should therefore be maximized . 2.2 CONTRASTIVE MUTUAL INFORMATION MAXIMIZATION . In the following section , we formalize the idea of constructing a Noise-Contrastive Estimation ( NCE ) loss to maximize the mutual information between the binary and the full-precision activations . In particular , we derive a novel CMIM loss for training BNNs , in which NCE avoids direct mutual information computation by estimating it with the lower bound in Eq . 7 . For the binary network $f_B$ and its latent full-precision counterpart $f_F$ in the same training iteration , the series of their activations $\{a_B^k\}$ and $\{a_F^k\}$ $(k \in \{1, \cdots, K\})$ , where $A_B^k = (a_B^{k,1}, \cdots, a_B^{k,N})$ and $A_F^k = (a_F^{k,1}, \cdots, a_F^{k,N})$ , can be considered as a series of random variables .
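As a concrete illustration of Eq. 3, the following sketch (ours, not from the paper) computes the mutual information of two discrete variables from a table of joint probabilities:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X, Y) of Eq. 3: sum_{x,y} P_XY(x,y) * log[ P_XY(x,y) / (P_X(x) P_Y(y)) ].
    p_xy is a 2-D array of joint probabilities; the result is in nats."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal P_X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal P_Y
    mask = p_xy > 0                         # skip zero-probability cells
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask])))
```

For an independent pair the result is 0, while for two perfectly dependent binary variables it is log 2, matching the intuition above.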
The corresponding variables $(a_B^k, a_F^k)$ should share more information , i.e. , the mutual information between the same layer's output activations , $I(a_B^k, a_F^k)$ $(k \in \{1, \cdots, K\})$ , should be maximized to enforce their mutual dependence . To this end , we introduce the contrastive learning framework into our targeted binarization task . The basic idea of contrastive learning is to compare different views of the data ( usually under different data augmentations ) to calculate similarity scores [ 18 , 19 , 20 , 21 , 22 ] . This framework suits our case , since the binary and full-precision activations can be seen as two different views . For a training batch with $N$ samples , denoted $\{x_i\}$ $(i \in \{1, \cdots, N\})$ , we feed the batch to the BNN and obtain $KN^2$ pairs of activations $(a_B^{k,i}, a_F^{k,j})$ , which augments the data for the auxiliary task . We define a pair whose two activations come from the same sample as a positive pair , i.e. , if $i = j$ , $(a_B^{k,i}, a_F^{k,j})^+$ , and vice versa . By Bayes' theorem , the posterior probability that two activations form a positive pair can be formalized as :
$$P(i = j \mid a_B^{k,i}, a_F^{k,j}) = \frac{P(a_B^{k,i}, a_F^{k,j} \mid i = j) \frac{1}{N}}{P(a_B^{k,i}, a_F^{k,j} \mid i = j) \frac{1}{N} + P(a_B^{k,i}, a_F^{k,j} \mid i \neq j) \frac{N-1}{N}}. \qquad (4)$$
The probability of a negative pair is $P(i \neq j \mid a_B^{k,i}, a_F^{k,j}) = 1 - P(i = j \mid a_B^{k,i}, a_F^{k,j})$ . To simplify the NCE derivation , several works [ 23 , 24 , 25 ] make assumptions about the dependence of the variables ; we adopt the same assumption that the activations from positive pairs are dependent and those from negative pairs are independent , i.e. , $P(a_B^{k,i}, a_F^{k,j} \mid i = j) = P(a_B^{k,i}, a_F^{k,j})$ and $P(a_B^{k,i}, a_F^{k,j} \mid i \neq j) = P(a_B^{k,i}) P(a_F^{k,j})$ .
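Under this independence assumption, the Bayes posterior of Eq. 4 depends only on the joint density, the two marginals, and the batch size; a toy numeric check (ours, with scalar stand-ins for the densities):

```python
def positive_pair_posterior(p_joint, p_b, p_f, n):
    """Simplified posterior (Eq. 5): P(i = j | a_B, a_F) =
    p_joint / (p_joint + p_b * p_f * (n - 1)), which follows from the Bayes
    posterior of Eq. 4 under the independence assumption for negative pairs.
    p_joint, p_b, p_f are toy scalar densities; n is the batch size N."""
    return p_joint / (p_joint + p_b * p_f * (n - 1))
```

Pairs whose joint density exceeds the product of the marginals (i.e., dependent pairs) receive a higher posterior of being positive, which is precisely what the critic h of Eq. 8 is trained to mimic.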
Hence , the above equation simplifies to :
$$P(i = j \mid a_B^{k,i}, a_F^{k,j}) = \frac{P(a_B^{k,i}, a_F^{k,j})}{P(a_B^{k,i}, a_F^{k,j}) + P(a_B^{k,i}) P(a_F^{k,j}) (N - 1)}. \qquad (5)$$
Taking the logarithm of Eq . 5 and rearranging terms , we obtain
$$\log P(i = j \mid a_B^{k,i}, a_F^{k,j}) = -\log \left[ 1 + (N-1) \frac{P(a_B^{k,i}) P(a_F^{k,j})}{P(a_B^{k,i}, a_F^{k,j})} \right] \leq \log \frac{P(a_B^{k,i}, a_F^{k,j})}{P(a_B^{k,i}) P(a_F^{k,j})} - \log(N - 1). \qquad (6)$$
Taking expectations on both sides with respect to $P(a_B^{k,i}, a_F^{k,j})$ , and combining with the definition of mutual information in Eq . 3 , we can derive the form of the mutual information as :
$$I(a_B^k, a_F^k) = \sum_i \sum_j P(a_B^{k,i}, a_F^{k,j}) \log \frac{P(a_B^{k,i}, a_F^{k,j})}{P(a_B^{k,i}) P(a_F^{k,j})} \geq \sum_i \sum_j P(a_B^{k,i}, a_F^{k,j} \mid i = j) \big[ \log P(i = j \mid a_B^{k,i}, a_F^{k,j}) + \log(N-1) \big] = \mathbb{E}_{P(a_B^{k,i}, a_F^{k,j} \mid i = j)} \big[ \log P(i = j \mid a_B^{k,i}, a_F^{k,j}) \big] + \log(N - 1), \qquad (7)$$
where the left-hand side $I(a_B^k, a_F^k)$ is the targeted mutual information between the binary and full-precision distributions , our objective , and the final expectation is the lower bound we optimize . Instead of directly maximizing the mutual information , we choose to maximize its lower bound in Eq . 7 . However , the distribution $P(i = j \mid a_B^{k,i}, a_F^{k,j})$ is hard to estimate . We take advantage of the idea of contrastive learning and introduce a critic function $h$ to approximate the targeted distribution [ 18 , 19 , 20 ] . In practice , we use :
$$h(a_B^{k,i}, a_F^{k,j}) = \frac{\exp\!\big(\tau (a_B^{k,i})^\top a_F^{k,j}\big)}{\sum_j \exp\!\big(\tau (a_B^{k,i})^\top a_F^{k,j}\big)}. \qquad (8)$$
Algorithm 1 Forward and Backward Propagation of CMIM . Require : a minibatch of data samples $(X, Y)$ , current binary weights $W_B^k$ , latent full-precision weights $W_F^k$ , and learning rate $\eta$ . Ensure : updated weights $W_F^{k\prime}$ .
1 : Forward Propagation :
2 : for $k = 1$ to $K - 1$ do
3 : Binarize latent weights : $W_B^k \leftarrow \mathrm{sgn}(W_F^k)$ ;
4 : Perform the binary operation with the activations of the previous layer : $A_F^k \leftarrow \mathrm{XnorDotProduct}(W_B^k, A_B^{k-1})$ ;
5 : Perform batch normalization : $A_F^k \leftarrow \mathrm{BatchNorm}(A_F^k)$ ;
6 : Binarize the full-precision activations to obtain binary activations : $A_B^k \leftarrow \mathrm{sgn}(A_F^k)$ ;
7 : end for
8 : For $k = 1, \cdots, K$ , pair $\{a_B^{k,i}\}$ and $\{a_F^{k,j}\}$ into negative and positive pairs , then use Eq . 9 layer by layer to compute the NCE loss $\mathcal{L}_{NCE}^k$ between $A_B^k$ and $A_F^k$ for contrastive learning ;
9 : Combine the series of NCE losses $\{\mathcal{L}_{NCE}^k\}$ with the classification loss $\mathcal{L}_{cls}$ into the overall loss $\mathcal{L}$ via Eq . 11 ;
10 : Backward Propagation : compute the gradient of the overall loss function , i.e. , $\partial \mathcal{L} / \partial W_B$ , using the straight-through estimator to tackle the sign function ;
11 : Parameter Update : update the full-precision weights : $W_F^{k\prime} \leftarrow W_F^k - \eta \, \partial \mathcal{L} / \partial W_B^k$ .
Here $\tau$ is a temperature parameter that controls the concentration level of the distribution [ 6 ] . $\tau$ is important for supervised feature learning , and is also necessary for tuning the concentration of $a_B^{k,i}$ and $a_F^{k,j}$ in our contrastive space . Loss Function . We define the contrastive loss $\mathcal{L}_{NCE}^k$ between the $k$-th layer's activations $A_B^k$ and $A_F^k$ as :
$$\mathcal{L}_{NCE}^k = \mathbb{E}_{P(a_B^{k,i}, a_F^{k,j} \mid i = j)} \big[ \log h(a_B^{k,i}, a_F^{k,j}) \big] + N \, \mathbb{E}_{P(a_B^{k,i}, a_F^{k,j} \mid i \neq j)} \big[ \log \big( 1 - h(a_B^{k,i}, a_F^{k,j}) \big) \big]. \qquad (9)$$
We comment on this loss from the perspective of contrastive learning . The first term , over positive pairs , is optimized to capture more intra-class correlations , and the second term , over negative pairs , encourages inter-class decorrelation . Because the pair construction is instance-wise , the number of negative samples can in theory be the size of the entire training set , e.g. , 1.2 million for ImageNet . With these additional hand-crafted contrastive pairs for the proxy optimization problem in Eq .
9 , the representation capacity of BNNs can be further improved , as many contrastive learning methods have demonstrated [ 22 , 18 , 19 , 20 ] . Moreover , the optimal critic $\hat{h} = \arg\max_h \mathcal{L}_{NCE}^k$ approximates the targeted distribution , i.e. ,
$$\hat{h}(a_B^{k,i}, a_F^{k,j}) = P(i = j \mid a_B^{k,i}, a_F^{k,j}), \qquad (10)$$
with the detailed proof given in the supplementary material . Thus , from Eqs . 7-10 we can derive that minimizing the NCE loss $\mathcal{L}_{NCE}^k$ is equivalent to maximizing the targeted mutual information between the binary and full-precision activations , $I(a_B^k, a_F^k)$ . Combining the series of NCE losses from different layers $\{\mathcal{L}_{NCE}^k\}$ $(k = 1, \cdots, K)$ , the overall loss $\mathcal{L}$ is defined as :
$$\mathcal{L} = \lambda \sum_{k=1}^{K} \frac{\mathcal{L}_{NCE}^k}{\beta^{K-1-k}} + \mathcal{L}_{cls}, \qquad (11)$$
where $\mathcal{L}_{cls}$ is the classification loss with respect to the ground truth , $\lambda$ controls the strength of the NCE loss , and $\beta$ is a coefficient greater than 1 ; we denote the CMIM loss as $\mathcal{L}_{CMIM} = \sum_{k=1}^{K} \mathcal{L}_{NCE}^k / \beta^{K-1-k}$ . Hence $\beta^{K-1-k}$ decreases as $k$ increases , and consequently $\mathcal{L}_{NCE}^k / \beta^{K-1-k}$ increases , so the activations of later layers are weighted more heavily and thereby better retained , which leads to better performance in practice . The complete training process of CMIM is presented in Algorithm 1 . Discussion on the CMIM Loss . Besides the theoretical formulation from the perspective of mutual information maximization , we also provide an intuitive explanation of CMIM . As illustrated in Figure 2 , we strengthen the representation ability of binary activations ( Figure 3 ) by designing a proxy task within the contrastive learning framework . By embedding the activations into the contrastive space and pulling and pushing the paired embeddings , BNNs can learn better representations from this difficult yet effective auxiliary contrastive learning task . Note that even though we only pick two images to illustrate Figure 2 , the actual number of negative samples can be huge in practice ( e.g .
16,384 for training ResNet-18 on ImageNet ) , benefiting from the MemoryBank [ 24 ] technique . With this property , we speculate that the contrastive pairing works as a form of data augmentation , which contributes to our method : the additional pairing provides more information for training the BNNs , so our CMIM loss can be treated as an overfitting-mitigating module . We conduct experiments in Sections 3.2 and 3.3 to validate this speculation . Difference from other contrastive learning methods . The key idea of contrastive learning is to pull representations in positive pairs close and push representations in negative pairs apart in a contrastive space . Several self-supervised learning methods are rooted in the well-established idea of mutual information maximization , such as Deep InfoMax [ 19 ] , Contrastive Predictive Coding [ 18 ] , MemoryBank [ 24 ] , Augmented Multiscale DIM [ 20 ] , MoCo [ 21 ] and SimSiam [ 22 ] . These are based on NCE [ 23 ] and InfoNCE [ 19 ] , which can be seen as a lower bound on mutual information [ 27 ] . The formulation of our CMIM-BNN is similar to the classic contrastive learning methods , in that all are inspired by NCE . However , our approach differs from those methods in several respects . Firstly , the training process of BNNs differs from regular network training in that binary and latent full-precision activations exist in the same forward pass . We seamlessly integrate this mixed-activation property with NCE , and thus the targeted lower bound formulated for optimization is different . Secondly , the binary and full-precision weights are both optimized by the NCE loss ( i.e. , the two view-augmentation networks of contrastive learning are optimized simultaneously ) , whereas most of the aforementioned contrastive learning methods optimize their view-augmentation networks separately .
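A minimal numpy sketch (ours; hyperparameter values are illustrative, not from the paper) of the objective described above: the critic of Eq. 8, a Monte-Carlo batch estimate of the per-layer NCE loss of Eq. 9 (negated, so that minimizing it maximizes the mutual-information lower bound, matching the text), and the β-weighted combination of Eq. 11:

```python
import numpy as np

def critic(a_b, a_f, tau=0.1):
    """Critic h of Eq. 8: row-wise softmax over scaled inner products
    tau * <a_B^{k,i}, a_F^{k,j}>; h[i, j] approximates P(i = j | pair).
    a_b, a_f are (N, d) batches of binary / full-precision activations."""
    logits = tau * (a_b @ a_f.T)                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def nce_loss(a_b, a_f, tau=0.1):
    """Per-layer loss of Eq. 9 over a batch: log h on positive pairs (i == j)
    plus N * mean log(1 - h) on negative pairs (i != j), negated for descent."""
    h = critic(a_b, a_f, tau)
    n = h.shape[0]
    pos = np.log(np.diag(h)).mean()
    neg = n * np.log(1.0 - h[~np.eye(n, dtype=bool)]).mean()
    return -(pos + neg)

def total_loss(nce_losses, cls_loss, lam=1.0, beta=2.0):
    """Overall loss of Eq. 11: lam * sum_k L_NCE^k / beta^(K-1-k) + L_cls;
    later layers (larger k) receive larger weights."""
    K = len(nce_losses)
    return lam * sum(l / beta ** (K - 1 - k)
                     for k, l in enumerate(nce_losses, start=1)) + cls_loss
```

In a real training loop the gradient of this loss would flow to the latent full-precision weights through the straight-through estimator, as in Algorithm 1.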
This paper proposes an auxiliary method for training BNN models. It follows the idea of contrastive-mutual-information-maximization, which utilizes the full-precision and the binary activation of BNN to form positive (binary and fp activation of the same sample) and negative (binary activation of different samples) pairs for contrastive training. The auxiliary contrastive loss can provide data augmentation and effectively enhance the model's generalization ability.
Representation Learning for Online and Offline RL in Low-rank MDPs
1 INTRODUCTION . When applying Reinforcement Learning ( RL ) to large-scale problems where data is complex and high-dimensional , learning effective transformations of the data , i.e. , representation learning , can often significantly improve the sample and computational efficiency of the RL procedure . Indeed , several empirical works have shown that leveraging representation learning techniques developed in supervised or unsupervised learning settings can accelerate the search for good decision-making strategies ( Silver et al. , 2018 ; Stooke et al. , 2021 ; Srinivas et al. , 2020 ; Yang & Nachum , 2021 ) . However , representation learning in RL is far more subtle than it is for non-sequential and non-interactive learning tasks ( e.g. , supervised learning ) . Prior works have shown that even if one is given the magic representation that exactly linearizes the optimal policy ( Du et al. , 2019b ) or the optimal value functions ( Wang et al. , 2020 ; Weisz et al. , 2021 ) , RL is still challenging ( i.e. , one may still need exponentially many samples to learn ) . This indicates that an effective representation permitting efficient RL must encode more information about the underlying Markov Decision Process ( MDP ) . Despite the recent empirical success of representation learning in RL , its statistical guarantees and theoretical properties remain under-investigated . In this work , we study the representation learning question under the low-rank MDP assumption . Concretely , a low-rank MDP assumes that the MDP transition matrix admits a low-rank factorization , i.e. , there exist two unknown mappings $\mu(s')$ and $\phi(s, a)$ such that $P(s' \mid s, a) = \mu(s')^\top \phi(s, a)$ for all $s , a , s'$ , where $P(s' \mid s, a)$ is the probability of transiting to the next state $s'$ under the current state-action pair $(s, a)$ . The representation $\phi$ in a low-rank MDP not only linearizes the optimal state-action value function of the MDP ( Jin et al .
, 2020a ) , but also linearizes the transition operator . A low-rankness assumption on large stochastic matrices is common and has enabled successful algorithms for real-world applications such as movie recommendation systems ( Koren et al. , 2009 ) . We note that a low-rank MDP strictly generalizes the linear MDP model ( Yang & Wang , 2020 ; Jin et al. , 2020a ) , which assumes $\phi$ is known a priori . The unknown representation $\phi$ makes learning in low-rank MDPs much more challenging than in linear MDPs , since one can no longer directly use linear function approximation . On the other hand , the fact that linear MDPs can be solved statistically and computationally efficiently when $\phi$ is known a priori implies that if one could learn the representation of the low-rank MDP , one could then efficiently learn the optimal policy . Indeed , prior works have shown that learning in low-rank MDPs is statistically feasible ( Jiang et al. , 2017 ; Sun et al. , 2019 ; Du et al. , 2021 ) by leveraging rich function approximators . However , these algorithms are version-space algorithms and are not computationally efficient . The recent work FLAMBE proposes an oracle-efficient algorithm¹ that learns in low-rank MDPs with polynomial sample complexity , where the computation oracle is Maximum Likelihood Estimation ( MLE ) . In this work , we follow the same setup as FLAMBE ( Agarwal et al. , 2020b ) and propose a new algorithm — Upper Confidence Bound driven Representation Learning , Exploration and Exploitation ( REP-UCB ) — which learns a near-optimal policy for a low-rank MDP with polynomial sample complexity and is oracle-efficient .
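To make the factorization $P(s' \mid s, a) = \mu(s')^\top \phi(s, a)$ concrete, the following sketch (ours; the Dirichlet construction is merely one convenient way to obtain valid factors, not part of the paper) builds a rank-d transition matrix and lets us check that every $P(\cdot \mid s, a)$ is a proper distribution:

```python
import numpy as np

def lowrank_transition(d, S, A, seed=0):
    """Illustrative low-rank MDP transition P(s'|s,a) = mu(s')^T phi(s,a).
    phi(s, a) is a distribution over d latent factors; mu assigns each latent
    factor a distribution over next states, so every row of P is stochastic."""
    rng = np.random.default_rng(seed)
    phi = rng.dirichlet(np.ones(d), size=S * A)  # (S*A, d), each row sums to 1
    mu = rng.dirichlet(np.ones(S), size=d).T     # (S, d), each column sums to 1
    P = phi @ mu.T                               # (S*A, S): row (s,a) is P(.|s,a)
    return P, phi, mu
```

Since each row of P is a convex combination of the d next-state distributions encoded in mu, the matrix is stochastic by construction and has rank at most d.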
Compared to FLAMBE , our algorithm significantly improves the sample complexity , from $\widetilde{O}\big(d^7 A^9 / (\epsilon^{10} (1-\gamma)^{22})\big)$ for FLAMBE to $\widetilde{O}\big(d^4 A^4 / (\epsilon^2 (1-\gamma)^3)\big)$ , where $d$ is the rank of the transition matrix ( i.e. , the dimension of the true representation ) , $A$ is the number of actions , $\epsilon$ is the suboptimality gap , and $\gamma \in [0, 1)$ is the discount factor of the MDP . Our algorithm is also arguably much simpler than FLAMBE : FLAMBE is an explore-then-commit algorithm , has to explore in a layer-by-layer forward fashion , and does not permit data sharing across different time steps . In contrast , REP-UCB carefully trades off exploration against exploitation by combining the reward signal with an exploration bonus ( constructed using the latest learned representation ) , and enables data sharing across all time steps . Our sample complexity nearly matches those of the computationally inefficient algorithms ( Jiang et al. , 2017 ; Sun et al. , 2019 ; Du et al. , 2021 ) . We summarize the comparison with prior works on representation learning in Table 1 . In addition to the online exploration setting , we also show that our new techniques can be directly used to design offline RL algorithms for low-rank MDPs under partial coverage . More specifically , we propose the algorithm REP-LCB — Lower Confidence Bound driven Representation Learning for offline RL — which , given an offline dataset , can learn to compete against any policy ( including history-dependent policies ) as long as that policy is covered by the offline data , where coverage is measured by the relative condition number ( Agarwal et al. , 2021 ) associated with the ground-truth representation . Thus , our offline RL result generalizes prior offline RL works on linear MDPs ( Jin et al. , 2020b ; Zhang et al. , 2021b ) , which assume the representation is known a priori and use linear function approximation . Computation-wise , our approach uses one call to the MLE computation oracle , and hence is oracle-efficient . REP-LCB is the first oracle-efficient offline algorithm for low-rank MDPs enjoying the aforementioned statistical guarantee . See Section 2 for a more detailed comparison with the existing literature on representation learning in offline RL . Our contributions . We develop new representation-learning RL algorithms that enable sample-efficient learning in low-rank MDPs in both online and offline settings : 1 . In the online episodic learning setting , our new algorithm REP-UCB integrates representation learning , exploration , and exploitation , and significantly improves on the sample complexity of the prior state-of-the-art algorithm FLAMBE ; 2 . In the offline learning setting , we propose a natural concentrability coefficient ( the relative condition number under the true representation ) that captures the partial coverage condition in low-rank MDPs , and our algorithm REP-LCB learns to compete against any policy ( including history-dependent ones ) under this partial coverage condition .
¹The oracle generally refers to an empirical risk minimization oracle employing non-bilevel optimization . We seek to design an algorithm that runs in polynomial time , with each oracle call counting as $O(1)$ . This algorithmic framework has led to many successful algorithms in the contextual bandit literature ( Agarwal et al. , 2014 ; Dudík et al. , 2017 ; Foster & Rakhlin , 2020 ) .
2 RELATED WORK . Online Setting . We compare with prior work as follows ; the comparison is summarized in Table 1 . FLAMBE ( Agarwal et al. , 2020b ) was the state-of-the-art oracle-efficient algorithm for low-rank MDPs . Its statistical complexity is much worse than that of REP-UCB in all parameters . Our algorithm and FLAMBE operate under the same computation oracle . FLAMBE does not balance exploration and exploitation , and uses explore-then-commit style techniques ( i.e. , constructions of absorbing MDPs ( Brafman & Tennenholtz , 2002 ) ) , which results in its worse sample complexity .
With a more complex oracle, MOFFLE (Modi et al., 2021) is a model-free algorithm for low-rank MDPs, with two additional assumptions: (1) the transition has low non-negative rank (nnr), and (2) reachability in latent states. The first assumption significantly restricts the scope of low-rank MDPs, as there are matrices whose nnr is exponentially larger than their rank (Agarwal et al., 2020b). The sample complexity of MOFFLE can scale as O(d^6 |A|^{13} / (ε^2 η^5 (1 − γ)^5)), where η is the reachability probability, and 1/η can be as large as nnr^{1/2} (Proposition 4 in Agarwal et al. (2020b)), which essentially means that MOFFLE has a polynomial dependence on the nnr. Thus, MOFFLE needs the nnr of the transition matrix to be small. OLIVE (Jiang et al., 2017), Witness rank (Sun et al., 2019), and Bilinear-UCB (Du et al., 2021), when specialized to low-rank MDPs, have slightly tighter dependence on d (e.g., O(d^2/ε^2)), but these algorithms are computationally inefficient version space algorithms. Dann et al. (2021) show that with a policy class, solving a low-rank MDP can take Ω(2^d) samples. In this work, similar to Witness rank (Sun et al., 2019) and FLAMBE, we use function approximators to model the transition; thus our positive result does not contradict the result of Dann et al. (2021). VALOR (Dann et al., 2018), PCID (Du et al., 2019a), HOMER (Misra et al., 2020), RegRL (Foster et al., 2020), and the approach of Feng et al. (2020) are algorithms for block MDPs, a more restricted setting than low-rank MDPs. These works require additional assumptions such as deterministic transitions (Dann et al., 2018), reachability (Misra et al., 2020; Du et al., 2019a), Bellman completeness (Foster et al., 2020), and strong unsupervised learning oracles (Feng et al., 2020).

Offline Setting. We discuss related works in offline RL.
Uehara & Sun (2021) obtain similar results for offline RL on low-rank MDPs. Though the sample complexity of their algorithm is slightly tighter, our algorithm is oracle-efficient, while the CPPO algorithm of Uehara & Sun (2021) is a version space algorithm. Xie et al. (2021) propose a (general) pessimistic model-free algorithm in the offline setting. One can also apply their algorithm to low-rank MDPs and show a finite-sample guarantee. However, it is unclear whether the final bounds in their results can be characterized by the relative condition number using only the true representation, and whether they can compete with history-dependent policies. Thus, our result remains superior for low-rank MDPs; details are given in Section E. Beyond these two works, the pessimistic approach in offline RL has been extensively investigated. Empirically, it works on simulated control tasks (Kidambi et al., 2020; Yu et al., 2020; Kumar et al., 2020; Liu et al., 2020; Chang et al., 2021). On the theoretical side, pessimism yields PAC guarantees for various models when a comparator policy is covered by the offline data in some form (Jin et al., 2020b; Rashidinejad et al., 2021; Yin et al., 2021; Zanette et al., 2021b; Zhang et al., 2021b; Chang et al., 2021). However, these algorithms and their analyses rely on a known representation. We emphasize the differences in Section 5.
This paper studies low-rank episodic MDPs where the reward is deterministic and the reward function is known. The paper proposes a method that collects data, uses it to estimate the low-rank MDP, and then uses this estimate, together with a confidence bound around it, to derive the policy for the next episode. The algorithm comes with a PAC-style bound on how many episodes are needed to learn an ε-optimal policy. The authors also extend the scope of this work to offline low-rank MDPs, where a logged dataset is given and an RL algorithm must learn from it to produce a new policy.
SP:1e459fa0fc602c70167f3e3d8e75788c32400722
Representation Learning for Online and Offline RL in Low-rank MDPs
1 INTRODUCTION. When applying Reinforcement Learning (RL) to large-scale problems where data is complex and high-dimensional, learning effective transformations of the data, i.e., representation learning, can often significantly improve the sample and computation efficiency of the RL procedure. Indeed, several empirical works have shown that leveraging representation learning techniques developed in supervised or unsupervised learning settings can accelerate the search for good decision-making strategies (Silver et al., 2018; Stooke et al., 2021; Srinivas et al., 2020; Yang & Nachum, 2021). However, representation learning in RL is far more subtle than it is for non-sequential and non-interactive learning tasks (e.g., supervised learning). Prior works have shown that even if one is given the magic representation that exactly linearizes the optimal policy (Du et al., 2019b) or the optimal value functions (Wang et al., 2020; Weisz et al., 2021), RL is still challenging (i.e., one may still need exponentially many samples to learn). This indicates that an effective representation that permits efficient RL needs to encode more information about the underlying Markov Decision Processes (MDPs). Despite the recent empirical success of representation learning in RL, its statistical guarantees and theoretical properties remain under-investigated. In this work, we study the representation learning question under the low-rank MDP assumption. Concretely, a low-rank MDP assumes that the MDP transition matrix admits a low-rank factorization, i.e., there exist two unknown mappings µ(s′), φ(s, a) such that P(s′|s, a) = µ(s′)^⊤ φ(s, a) for all s, a, s′, where P(s′|s, a) is the probability of transitioning to the next state s′ under the current state-action pair (s, a). The representation φ in a low-rank MDP not only linearizes the optimal state-action value function of the MDP (Jin et al., 2020a), but also linearizes the transition operator. Low-rankness of large stochastic matrices is a common assumption and has enabled successful algorithms for real-world applications such as movie recommendation systems (Koren et al., 2009). We note that a low-rank MDP strictly generalizes the linear MDP model (Yang & Wang, 2020; Jin et al., 2020a), which assumes φ is known a priori. The unknown representation φ makes learning in low-rank MDPs much more challenging than in linear MDPs, since one can no longer directly use linear function approximation. On the other hand, the fact that linear MDPs can be solved statistically and computationally efficiently when φ is known a priori implies that if one could learn the representation of the low-rank MDP, one could then efficiently learn the optimal policy. Indeed, prior works have shown that learning in low-rank MDPs is statistically feasible (Jiang et al., 2017; Sun et al., 2019; Du et al., 2021) by leveraging rich function approximators. However, these algorithms are version space algorithms and are not computationally efficient. The recent work FLAMBE (Agarwal et al., 2020b) proposes an oracle-efficient algorithm¹ that learns in low-rank MDPs with polynomial sample complexity, where the computation oracle is Maximum Likelihood Estimation (MLE). In this work, we follow the same setup as FLAMBE and propose a new algorithm, Upper Confidence Bound driven Representation Learning, Exploration and Exploitation (REP-UCB), which learns a near-optimal policy for a low-rank MDP with polynomial sample complexity and is oracle-efficient.
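The factorization P(s′|s, a) = µ(s′)^⊤ φ(s, a) can be illustrated with a small synthetic transition tensor. The construction below (random nonnegative features, then per-(s, a) normalization; all names and sizes are illustrative) produces a valid transition whose S·A-by-S matrix has rank at most d, since rescaling rows by a positive diagonal preserves rank:

```python
import numpy as np

# Minimal sketch of a low-rank MDP transition: P(s'|s,a) = mu(s')^T phi(s,a),
# with rank d much smaller than the number of states S.
rng = np.random.default_rng(0)
S, A, d = 50, 4, 3

phi = rng.random((S, A, d))   # representation phi(s, a)
mu = rng.random((S, d))       # feature map mu(s')

# P[s, a, s'] = mu(s')^T phi(s, a); normalize over s' so each (s, a) row is a
# valid probability distribution (a simplification for illustration).
P = np.einsum('sad,td->sat', phi, mu)
P /= P.sum(axis=2, keepdims=True)

# Each P[s, a, :] is a distribution, and the (S*A)-by-S matrix has rank <= d.
flat = P.reshape(S * A, S)
print(flat.shape, np.linalg.matrix_rank(flat))
```

Normalizing per (s, a) amounts to rescaling φ(s, a), so the factored form, and hence the rank bound, is preserved.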
This paper studies representation learning for low-rank MDPs. The authors provide an online algorithm and its offline counterpart. Theoretical analysis establishes that the proposed algorithms are sample-efficient.
This paper studies representation learning in low-rank MDPs in both the online and offline settings. It proposes the REP-UCB algorithm, which significantly improves the sample complexity of FLAMBE. In addition, REP-UCB is much simpler than FLAMBE. For the offline setting, the REP-LCB algorithm also achieves polynomial sample complexity under partial coverage. In particular, coverage is measured with respect to the ground-truth representation.
Evaluating Disentanglement of Structured Latent Representations
1 INTRODUCTION. A salient challenge in generative modeling is the ability to decompose the representation of images and scenes into distinct objects that are represented separately and then combined together. Indeed, the capacity to reason about objects and their relations is a central aspect of human intelligence (Spelke et al., 1992), and such object representations can be used in conjunction with graph neural networks or symbolic solvers to enable relational reasoning. For instance, Wang et al. (2018) exploit robot body structure to obtain competitive performance and transferability when learning to walk or swim in the Gym environments. Yi et al. (2019) perform state-of-the-art visual reasoning by using an object-based representation in combination with a symbolic model. In recent years, several models have been proposed to learn compositional representations in an unsupervised fashion: TAGGER (Greff et al., 2016), NEM (Greff et al., 2017), R-NEM (Van Steenkiste et al., 2018), MONet (Burgess et al., 2019), IODINE (Greff et al., 2019), GENESIS (Engelcke et al., 2019), Slot Attention (Locatello et al., 2020), MulMON (Li et al., 2020), and SPACE (Lin et al., 2020). They jointly learn to represent individual objects and to segment the image into meaningful components, the latter often being called "perceptual grouping". These models share a number of common principles: (i) splitting the latent representation into several slots, each meant to contain the representation of an individual object; (ii) inside each slot, encoding information about both object position and appearance; (iii) maintaining symmetry between slots in order to respect the permutation invariance of object composition. These mechanisms are intuitively illustrated in Figure 1. To compare and select models, it is indispensable to have robust disentanglement metrics.
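Principle (iii) above, slot symmetry, can be sketched with a toy slot-structured latent. The sum aggregation here is an illustrative stand-in for whatever permutation-invariant readout a model actually uses; all shapes are arbitrary:

```python
import numpy as np

# Toy slot-structured latent: K slots, each a D-dim vector that (conceptually)
# encodes one object's position and appearance. A downstream readout should be
# invariant to slot ordering; a sum over the slot axis achieves this.
rng = np.random.default_rng(0)
K, D = 4, 8                      # number of slots, per-slot dimension
slots = rng.normal(size=(K, D))  # one row per object slot

def aggregate(z):
    # sum over slots: invariant to permuting the slot axis
    return z.sum(axis=0)

perm = rng.permutation(K)
print(np.allclose(aggregate(slots), aggregate(slots[perm])))  # True
```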
At the level of individual factors of variation, a representation is said to be disentangled when information about the different factors is separated across latent dimensions (Bengio et al., 2013; Locatello et al., 2019). At the object level, disentanglement measures the degree of object separation between slots. However, all existing metrics (Higgins et al., 2016; Chen et al., 2018; Ridgeway and Mozer, 2018; Eastwood and Williams, 2018; Kumar et al., 2017) are limited to the individual case, which disregards representation structure. To cite Kim and Mnih (2018) about the FactorVAE metric: "The definition of disentanglement we use [...] is clearly a simplistic one. It does not allow correlations among the factors or hierarchies over them. Thus this definition seems more suited to synthetic data with independent factors of variation than to most realistic datasets." As a result, prior work has been restricted to measuring the degree of object separation via pixel-level segmentation metrics. The most widely used is the Adjusted Rand Index (ARI) (Rand, 1971; Greff et al., 2019), where image segmentation is viewed as a cluster assignment of pixels. Other metrics, such as mean Segmentation Covering (mSC) (Arbelaez et al., 2010), have been introduced to penalize over-segmentation of objects. A fundamental limitation is that these metrics do not directly evaluate the quality of the representation, but instead consider a visual proxy of object separation. This results in a problematic dependence on the quality of the inferred segmentation masks, a problem first identified by Greff et al. (2019) for IODINE and confirmed in our experimental study. To address these limitations, we introduce the first metric for evaluating disentanglement at individual hierarchy levels of a structured latent representation.
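The ARI discussed above treats a segmentation as a cluster assignment of pixels. A minimal self-contained implementation from the pairwise contingency table (the helper name is ours; this matches the standard definition, in the same spirit as scikit-learn's `adjusted_rand_score`):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between two pixel-label assignments."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    a = Counter(labels_true)   # cluster sizes in the first assignment
    b = Counter(labels_pred)   # cluster sizes in the second assignment

    sum_comb = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:   # degenerate case: both assignments trivial
        return 1.0
    return (sum_comb - expected) / (max_index - expected)

# identical segmentations score 1, regardless of label permutation
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

Because ARI is computed on pixel masks rather than on the latent codes, it inherits the proxy problem described above: a model with a good representation but imperfect masks is penalized.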
Applied to object-centric generative models, this offers a systematic, unified approach to evaluating (i) object separation between latent slots, (ii) disentanglement of object properties inside individual slots, and (iii) disentanglement of intrinsic and extrinsic object properties. We theoretically show that our framework gives stronger guarantees of representation quality than previous disentanglement metrics; it can therefore safely substitute for them. We experimentally demonstrate the applicability of our metric to three architectures: MONet, GENESIS, and IODINE. The results confirm the issues with pixel-level segmentation metrics and offer valuable insight into the representations learned by these models. Finally, as a core technical component, we present the first representation probing algorithm that handles slot permutation invariance.

2 BACKGROUND.

2.1 DISENTANGLEMENT CRITERIA. There exists an extensive literature discussing notions of disentanglement; accounting for all the different definitions is outside the scope of this paper. We focus on the three criteria formalized by Eastwood and Williams (2018), which stand out for their clarity and simplicity. Disentanglement is the degree to which a representation separates the underlying factors of variation, with each latent variable capturing at most one generative factor. Completeness is the degree to which each underlying factor is captured by a single latent variable. Informativeness is the amount of information that a representation captures about the underlying factors of variation. As in prior work, the word disentanglement is also used as a generic term that refers to these three criteria simultaneously; whether it is meant in the general or the specific sense should be clear from context.

2.2 THE DCI FRAMEWORK. For brevity, we only describe the DCI metrics of Eastwood and Williams (2018), which are most closely related to our work.
The supplementary material provides a comprehensive overview of alternative metrics . Consider a dataset X composed of n observations x1 , . . . , xn , which we assume are generated by combining F underlying factors of variation . The value of the different factors for observation xl are denoted vl1 , . . . , v l F . Suppose we have learned a representation z = ( z1 , . . . , zL ) from this dataset . The DCI metric is based on the affinity matrix R = ( Ri , j ) , where Ri , j measures the relative importance of latent zi in predicting the value of factor vj . Supposing appropriate normalization for R , disentanglement is measured as the weighted average of the matrix ’ row entropy . Conversely , completeness is measured as the weighted average of column entropy . Informativeness is measured as the normalized error of the predictor used to obtain the matrix R . 3 EVALUATING STRUCTURED DISENTANGLEMENT . 3.1 contains a high level description of the goals steering our framework . Our metric is formally described in 3.2 . In 3.3 , we present theoretical results showing that our framework can safely substitute to DCI since it provides stronger guarantees that the selected model is correctly disentangled . 3.1 PRESENTATION OF THE FRAMEWORK . Prior work has focused on measuring disentanglement at the level of individual factors of variation and latent dimensions ( that we will refer to as global or unstructured disentanglement ) . In the case of compositional representations with several objects slots , additional properties are desirable : 1 . Object-level disentanglement : Objects and latent slots should have a one-to-one matching . That is , changing one object should lead to changes in a single slot , and vice-versa . 2 . Slot disentanglement : Inside a given slot , properties of the object must be disentangled . 3 . Slot symmetry : The latent dimension responsible for a given factor ( e.g. , color ) should be invariant across slots . 
This means that all slots should have the same inner structure . Our structured disentanglement metric allows to evaluate all these criteria within a single unified framework . Similarly to DCI , it is based on the affinity matrix ( Rτ̂ , τ ) , where Rτ̂ , τ measures the relative importance of latent zτ̂ in predicting the value of factor vτ . The key novelty compared to DCI is that we propose to measure disentanglement with respect to arbitrary projections of the matrix R. Intuitively , projections correspond to a marginalization operation that selects a subset of hierarchy levels and discards the others . The coefficients Rτ̂ , τ that are projected together are summed to create group affinities . Figure 2 gives an intuitive illustration of this process on a toy example . With object-centric representations , projecting at the object/slot level allows to study the relation of objects and slots without taking their internal structure into consideration . Ultimately this permits to evaluate our object-level disentanglement criterion . Projecting at property level allows to study the internal slot representation independently of the object , for evaluation of both internal slot disentanglement and slot symmetry , with a single metric . The identity projection conserving all levels measures flat ( DCI ) disentanglement . This generalizes to arbitrary hierarchies in the representation , such as disentanglement of position and appearance , or intrinsic and extrinsic object properties . 3.2 MATHEMATICAL DEFINITION . To formalize our framework most conveniently , we propose to view the affinity matrix R = ( Rτ̂ , τ ) as a joint random variable ( X , Y ) , where X is a random latent dimension , and Y a random factor . Supposing that R is normalized to have sum one , this means P [ X = τ̂ , Y = τ ] = Rτ̂ , τ . 
This point of view , which is only implicit in prior work , allows to formalize the projections of R as a coupled marginalization operation ( ρ ( X ) , ρ ( Y ) ) , where ρ ( α1 , . . . , αh ) = ( αe1 , . . . , αel ) selects a subset of hierarchy levels . This yields a concise and elegant mathematical framework , where we can build on standard information-theoretic identities to derive theoretical properties . In the following , HU ( A|B ) denotes the conditional entropy of A with respect to B ( detailed definitions in the supplementary ) . Our disentanglement criteria are the following : Completeness with respect to projection ρ measures how well the projected factors are captured by a coherent group of latents , by measuring column entropy in the projected space . It is measured as C ( ρ ) = 1−HU ( ρ ( X ) |ρ ( Y ) ) , where U = |ρ ( T̂ ) | is the number of groups of latents in the projection . Disentanglement with respect to projection ρ measures to what extent a group of latent variables influences a coherent subset of factors , by measuring projected row entropy . It is measured as D ( ρ ) = 1−HV ( ρ ( Y ) |ρ ( X ) ) , where V = |ρ ( T ) | is the number of groups of factors in the projection . Informativeness does not depend on the projection . It is defined as the normalized error of a low-capacity model f that learns to predict the factor values v from the latents z , i.e . I = ‖f ( z ) − v‖2 ‖v‖2 . The changing log bases U and V aim at ensuring normalization between 0 and 1 . 3.3 ESTABLISHING TRUST IN OUR FRAMEWORK : A THEORETICAL ANALYSIS . Our disentanglement D ( ρ ) and completeness C ( ρ ) metrics depend on the subset of hierarchy levels contained in the projection ρ . We theoretically analyze the influence of the projection choice . It is especially interesting to study the behavior when distinct subsets of hierarchy levels are combined . 
Our key results are the following : Theorem 1 shows that our framework contains DCI as a special case , when the identity projection is chosen . Theorem 2 shows that the metrics associated to the identity projection can be decomposed in terms of one dimensional projections along a single hierarchy level . Together , these results formally show that one dimensional projections provide a sound substitute for prior unstructured metrics . This is very useful to build trust in our framework . Theorem 1 ( Relation with DCI ) . The DCI metrics of Eastwood and Williams ( 2018 ) are a special case of our framework , with the identity projection conserving all hierarchy levels . Theorem 2 uses the intuitive notion of decomposition of a projection . The projection ρ is said to be decomposed into ρ1 , . . . ρk if the set of hierarchy levels selected by ρ is the disjunct union of the set of levels selected by ρ1 , . . . ρk . In the case of object-centric latent representations , the identity projection ρid that keeps all hierarchy levels { object , property } can be decomposed as ρobject considering only { object } and ρproperty considering only { property } . Theorem 2 ( Decomposition of a projection ) . Consider a decomposition of the projection ρ ( with L latent groups ) into disjunct projections ρ1 , . . . ρk ( with respectively L1 , . . . , Lk latent groups ) . The following lower bound for the joint completeness ( resp . disentanglement ) holds 1− k + k∑ s=1 C ( ρs ) ≤ 1− k∑ s=1 1− C ( ρs ) logLs ( L ) ≤ C ( ρ ) . Suppose that all one dimensional projections verify C ( ρs ) ≥ 1 − , where ≥ 0 . We obtain the lower bound C ( ρid ) ≥ ( 1− k ) + k ( 1− ) = 1− k for the identity projection . For object-centric representations , this implies that when object-level completeness and property completeness are both perfect ( that is , = 0 ) , then DCI completeness is also perfect . The same works for disentanglement . 
The supplementary materials contains detailed proofs , a matching upper bound , as well as an explicit formula in the special case k = 2 .
This paper proposes a new method for evaluating the disentanglement of object-centric representations. It can be seen as a generalization of the DCI metrics of Eastwood and Williams (2018) to object-centric representations, permitting the evaluation of both inter- and intra-object disentanglement. The two main contributions are: (i) projection/marginalization operations on the matrix R that select an abstraction level (e.g., the object or property level, as nicely illustrated in Figure 2); and (ii) an EM-based object-matching or "representation probing" algorithm that matches latent object slots with underlying ground-truth objects (specifically, it optimizes the latent slot permutation to maximize the accuracy of the linear predictor f: z → v).
Evaluating Disentanglement of Structured Latent Representations
1 INTRODUCTION

A salient challenge in generative modeling is to decompose the representation of images and scenes into distinct objects that are represented separately and then combined together. Indeed, the capacity to reason about objects and their relations is a central aspect of human intelligence (Spelke et al., 1992), and it can be used in conjunction with graph neural networks or symbolic solvers to enable relational reasoning. For instance, Wang et al. (2018) exploit robot body structure to obtain competitive performance and transferability when learning to walk or swim in Gym environments, and Yi et al. (2019) perform state-of-the-art visual reasoning by combining an object-based representation with a symbolic model. In recent years, several models have been proposed to learn compositional representations in an unsupervised fashion: TAGGER (Greff et al., 2016), NEM (Greff et al., 2017), R-NEM (Van Steenkiste et al., 2018), MONet (Burgess et al., 2019), IODINE (Greff et al., 2019), GENESIS (Engelcke et al., 2019), Slot Attention (Locatello et al., 2020), MulMON (Li et al., 2020), and SPACE (Lin et al., 2020). They jointly learn to represent individual objects and to segment the image into meaningful components, the latter often being called "perceptual grouping". These models share a number of common principles: (i) splitting the latent representation into several slots, each meant to contain the representation of an individual object; (ii) encoding information about both object position and appearance inside each slot; (iii) maintaining symmetry between slots in order to respect the permutation invariance of object composition. These mechanisms are intuitively illustrated in Figure 1. To compare and select models, it is indispensable to have robust disentanglement metrics.
At the level of individual factors of variation, a representation is said to be disentangled when information about the different factors is separated between different latent dimensions (Bengio et al., 2013; Locatello et al., 2019). At the object level, disentanglement measures the degree of object separation between slots. However, all existing metrics (Higgins et al., 2016; Chen et al., 2018; Ridgeway and Mozer, 2018; Eastwood and Williams, 2018; Kumar et al., 2017) are limited to the individual case, which disregards representation structure. To cite Kim and Mnih (2018) about the FactorVAE metric: "The definition of disentanglement we use [...] is clearly a simplistic one. It does not allow correlations among the factors or hierarchies over them. Thus this definition seems more suited to synthetic data with independent factors of variation than to most realistic datasets." As a result, prior work has been restricted to measuring the degree of object separation via pixel-level segmentation metrics. The most widely used is the Adjusted Rand Index (ARI) (Rand, 1971; Greff et al., 2019), where image segmentation is viewed as a cluster assignment for pixels. Other metrics such as Segmentation Covering (mSC) (Arbelaez et al., 2010) have been introduced to penalize over-segmentation of objects. A fundamental limitation is that they do not evaluate the quality of the representation directly, but instead consider a visual proxy of object separation. This results in a problematic dependence on the quality of the inferred segmentation masks, a problem first identified by Greff et al. (2019) for IODINE, and confirmed in our experimental study. To address these limitations, we introduce the first metric for evaluating disentanglement at individual hierarchy levels of a structured latent representation.
Applied to object-centric generative models, this offers a systematic, unified approach to evaluating (i) object separation between latent slots, (ii) disentanglement of object properties inside individual slots, and (iii) disentanglement of intrinsic and extrinsic object properties. We theoretically show that our framework gives stronger guarantees of representation quality than previous disentanglement metrics; thus, it can safely substitute for them. We experimentally demonstrate the applicability of our metric to three architectures: MONet, GENESIS and IODINE. The results confirm issues with pixel-level segmentation metrics, and offer valuable insight into the representations learned by these models. Finally, as a core technical component, we present the first representation probing algorithm that handles slot permutation invariance.

2 BACKGROUND

2.1 DISENTANGLEMENT CRITERIA

There exists an extensive literature discussing notions of disentanglement; accounting for all the different definitions is outside the scope of this paper. We choose to focus on the three criteria formalized by Eastwood and Williams (2018), which stand out because of their clarity and simplicity. Disentanglement is the degree to which a representation separates the underlying factors of variation, with each latent variable capturing at most one generative factor. Completeness is the degree to which each underlying factor is captured by a single latent variable. Informativeness is the amount of information that a representation captures about the underlying factors of variation. As in prior work, the word "disentanglement" is also used as a generic term referring to these three criteria simultaneously; context should make clear whether the general or the specific sense is meant.

2.2 THE DCI FRAMEWORK

For brevity, we only describe the DCI metrics of Eastwood and Williams (2018), which are most closely related to our work.
The supplementary material provides a comprehensive overview of alternative metrics. Consider a dataset X composed of n observations x_1, ..., x_n, which we assume are generated by combining F underlying factors of variation. The values of the factors for observation x_l are denoted v^l_1, ..., v^l_F. Suppose we have learned a representation z = (z_1, ..., z_L) from this dataset. The DCI metric is based on the affinity matrix R = (R_{i,j}), where R_{i,j} measures the relative importance of latent z_i in predicting the value of factor v_j. Assuming appropriate normalization of R, disentanglement is measured as a weighted average of the matrix's row entropies; conversely, completeness is measured as a weighted average of its column entropies. Informativeness is measured as the normalized error of the predictor used to obtain the matrix R.

3 EVALUATING STRUCTURED DISENTANGLEMENT

Section 3.1 contains a high-level description of the goals steering our framework. Our metric is formally described in Section 3.2. In Section 3.3, we present theoretical results showing that our framework can safely substitute for DCI, since it provides stronger guarantees that the selected model is correctly disentangled.

3.1 PRESENTATION OF THE FRAMEWORK

Prior work has focused on measuring disentanglement at the level of individual factors of variation and latent dimensions (which we will refer to as global or unstructured disentanglement). In the case of compositional representations with several object slots, additional properties are desirable:
1. Object-level disentanglement: objects and latent slots should have a one-to-one matching. That is, changing one object should lead to changes in a single slot, and vice versa.
2. Slot disentanglement: inside a given slot, the properties of the object must be disentangled.
3. Slot symmetry: the latent dimension responsible for a given factor (e.g., color) should be invariant across slots.
This means that all slots should have the same inner structure. Our structured disentanglement metric evaluates all these criteria within a single unified framework. As in DCI, it is based on the affinity matrix (R_{τ̂,τ}), where R_{τ̂,τ} measures the relative importance of latent z_τ̂ in predicting the value of factor v_τ. The key novelty compared to DCI is that we measure disentanglement with respect to arbitrary projections of the matrix R. Intuitively, a projection corresponds to a marginalization operation that selects a subset of hierarchy levels and discards the others: the coefficients R_{τ̂,τ} that are projected together are summed to create group affinities. Figure 2 gives an intuitive illustration of this process on a toy example. With object-centric representations, projecting at the object/slot level allows us to study the relation between objects and slots without taking their internal structure into consideration; ultimately, this permits evaluating our object-level disentanglement criterion. Projecting at the property level allows us to study the internal slot representation independently of the object, evaluating both internal slot disentanglement and slot symmetry with a single metric. The identity projection, which conserves all levels, measures flat (DCI) disentanglement. This generalizes to arbitrary hierarchies in the representation, such as disentanglement of position and appearance, or of intrinsic and extrinsic object properties.

3.2 MATHEMATICAL DEFINITION

To formalize our framework conveniently, we propose to view the affinity matrix R = (R_{τ̂,τ}) as the distribution of a joint random variable (X, Y), where X is a random latent dimension and Y a random factor. Supposing that R is normalized to sum to one, this means P[X = τ̂, Y = τ] = R_{τ̂,τ}.
This point of view, which is only implicit in prior work, allows us to formalize the projections of R as a coupled marginalization operation (ρ(X), ρ(Y)), where ρ(α_1, ..., α_h) = (α_{e_1}, ..., α_{e_l}) selects a subset of hierarchy levels. This yields a concise mathematical framework in which we can build on standard information-theoretic identities to derive theoretical properties. In the following, H_U(A|B) denotes the conditional entropy of A given B, computed with logarithms in base U (detailed definitions in the supplementary). Our disentanglement criteria are the following. Completeness with respect to projection ρ measures how well the projected factors are captured by a coherent group of latents, via column entropy in the projected space: C(ρ) = 1 − H_U(ρ(X)|ρ(Y)), where U = |ρ(T̂)| is the number of groups of latents in the projection. Disentanglement with respect to projection ρ measures to what extent a group of latent variables influences a coherent subset of factors, via projected row entropy: D(ρ) = 1 − H_V(ρ(Y)|ρ(X)), where V = |ρ(T)| is the number of groups of factors in the projection. Informativeness does not depend on the projection; it is defined as the normalized error of a low-capacity model f that learns to predict the factor values v from the latents z, i.e., I = ‖f(z) − v‖_2 / ‖v‖_2. The changing log bases U and V ensure normalization between 0 and 1.

3.3 ESTABLISHING TRUST IN OUR FRAMEWORK: A THEORETICAL ANALYSIS

Our disentanglement D(ρ) and completeness C(ρ) metrics depend on the subset of hierarchy levels contained in the projection ρ. We theoretically analyze the influence of the projection choice. It is especially interesting to study the behavior when distinct subsets of hierarchy levels are combined.
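Before turning to the theory, the definitions of Section 3.2 can be made concrete with a small numeric sketch. The code below computes C(ρ) and D(ρ) from a normalized affinity matrix for a two-level (slot, dimension) × (object, property) hierarchy; it is not the authors' code, and the array layout, function names, and the hard-coded toy matrix are illustrative assumptions.

```python
import numpy as np

def cond_entropy(J, base):
    """H(row | column) in log base `base`, for a joint matrix J summing to one."""
    p_col = J.sum(axis=0, keepdims=True)                 # marginal of the column variable
    cond = np.divide(J, p_col, out=np.zeros_like(J), where=p_col > 0)
    logs = np.zeros_like(cond)
    mask = cond > 0
    logs[mask] = np.log(cond[mask]) / np.log(base)
    return -(J * logs).sum()

def completeness(J):
    """C = 1 - H_U(X | Y), with U the number of latent groups (rows of J)."""
    return 1.0 - cond_entropy(J, base=J.shape[0])

def disentanglement(J):
    """D = 1 - H_V(Y | X), with V the number of factor groups (columns of J)."""
    return 1.0 - cond_entropy(J.T, base=J.shape[1])

def project(R, keep):
    """Marginalize a (slot, dim, object, prop) affinity tensor to one hierarchy level."""
    if keep == "identity":                               # flat (DCI) view
        S, D, O, P = R.shape
        return R.reshape(S * D, O * P)
    if keep == "object":                                 # keep (slot, object)
        return R.sum(axis=(1, 3))
    if keep == "property":                               # keep (dim, prop)
        return R.sum(axis=(0, 2))
    raise ValueError(keep)

# Perfectly aligned toy affinities: latent (s, d) predicts exactly factor (o=s, p=d),
# so every projection is diagonal and all scores are 1.0.
R = (np.eye(4) / 4.0).reshape(2, 2, 2, 2)
scores = {keep: (completeness(project(R, keep)), disentanglement(project(R, keep)))
          for keep in ("identity", "object", "property")}
```

A uniform affinity matrix gives the opposite extreme: every factor group is spread over all latent groups, so both scores drop to 0.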
Our key results are the following. Theorem 1 shows that our framework contains DCI as a special case, obtained with the identity projection. Theorem 2 shows that the metrics associated with the identity projection can be decomposed in terms of one-dimensional projections, each along a single hierarchy level. Together, these results formally show that one-dimensional projections provide a sound substitute for prior unstructured metrics, which is very useful for building trust in our framework.

Theorem 1 (Relation with DCI). The DCI metrics of Eastwood and Williams (2018) are a special case of our framework, with the identity projection conserving all hierarchy levels.

Theorem 2 uses the intuitive notion of decomposition of a projection. The projection ρ is said to be decomposed into ρ_1, ..., ρ_k if the set of hierarchy levels selected by ρ is the disjoint union of the sets of levels selected by ρ_1, ..., ρ_k. In the case of object-centric latent representations, the identity projection ρ_id that keeps all hierarchy levels {object, property} can be decomposed into ρ_object, considering only {object}, and ρ_property, considering only {property}.

Theorem 2 (Decomposition of a projection). Consider a decomposition of the projection ρ (with L latent groups) into disjoint projections ρ_1, ..., ρ_k (with L_1, ..., L_k latent groups, respectively). The following lower bound for the joint completeness (resp. disentanglement) holds:

1 − k + ∑_{s=1}^{k} C(ρ_s) ≤ 1 − ∑_{s=1}^{k} (1 − C(ρ_s)) / log_{L_s}(L) ≤ C(ρ).

Suppose that all one-dimensional projections verify C(ρ_s) ≥ 1 − ε, where ε ≥ 0. We obtain the lower bound C(ρ_id) ≥ (1 − k) + k(1 − ε) = 1 − kε for the identity projection. For object-centric representations, this implies that when object-level completeness and property completeness are both perfect (that is, ε = 0), then DCI completeness is also perfect. The same holds for disentanglement.
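The bound chain in Theorem 2 can also be checked numerically on random affinity matrices. The sketch below (a sanity check, not from the paper) uses natural-log entropies rescaled to the appropriate bases, with k = 2 levels (object, property) as an illustrative assumption; since log_{L_s}(L) = ln(L)/ln(L_s), each term (1 − C(ρ_s))/log_{L_s}(L) becomes (1 − C(ρ_s))·ln(L_s)/ln(L).

```python
import numpy as np

def completeness(J, base):
    """C = 1 - H_base(X | Y) for a strictly positive joint J[x, y] summing to one."""
    cond = J / J.sum(axis=0, keepdims=True)              # P(X | Y)
    return 1.0 + (J * np.log(cond)).sum() / np.log(base)

rng = np.random.default_rng(0)
S, D = 3, 4                                              # slots, dims per slot
L = S * D                                                # latent groups of the identity projection
violations = 0
for _ in range(200):
    R = rng.random((S, D, S, D))                         # axes: (slot, dim, object, prop)
    R /= R.sum()
    C_id = completeness(R.reshape(L, L), base=L)         # identity projection
    C_obj = completeness(R.sum(axis=(1, 3)), base=S)     # object level, L_1 = S
    C_prop = completeness(R.sum(axis=(0, 2)), base=D)    # property level, L_2 = D
    # Chain: 1 - k + sum_s C(rho_s) <= 1 - sum_s (1 - C(rho_s)) / log_{L_s}(L) <= C(rho_id)
    mid = 1 - (1 - C_obj) * np.log(S) / np.log(L) - (1 - C_prop) * np.log(D) / np.log(L)
    if not (1 - 2 + C_obj + C_prop <= mid + 1e-9 and mid <= C_id + 1e-9):
        violations += 1
# violations stays 0: the bound chain held for every sampled matrix
```

The right inequality follows from the subadditivity H(X_1, X_2 | Y_1, Y_2) ≤ H(X_1 | Y_1) + H(X_2 | Y_2), and the left one from log_{L_s}(L) ≥ 1, which the simulation mirrors.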
The supplementary material contains detailed proofs, a matching upper bound, and an explicit formula in the special case k = 2.
This paper proposes a metric for evaluating disentanglement in compositional representation learning, going beyond existing global disentanglement metrics that disregard representational structure. The proposed metric is based on projections of the affinity matrix introduced earlier in the DCI metric. The paper also proposes an EM-like permutation-invariant formulation for obtaining the relative importance of a latent in predicting a generative factor, which allows slots to be permuted without loss of generality. Empirical studies examine the disentanglement of existing compositional representation learners.
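The permutation-invariant probing step that both reviews highlight must match latent slots to ground-truth objects before per-factor importances can be computed. A minimal sketch of that matching step (not the authors' EM algorithm): given a precomputed error matrix cost[s, o] for predicting object o's factors from slot s, pick the slot permutation with minimal total error. The function name and the toy matrix are illustrative assumptions.

```python
import itertools
import numpy as np

def best_slot_assignment(cost):
    """cost[s, o]: error when slot s is matched to ground-truth object o.
    Returns (perm, total_error), where perm[o] is the slot assigned to object o.
    Brute force over permutations -- fine for the small slot counts used by
    object-centric models; an EM or Hungarian step scales better."""
    n = cost.shape[0]
    best, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        total = sum(cost[perm[o], o] for o in range(n))
        if total < best:
            best, best_perm = total, perm
    return best_perm, best

# Toy errors: slot 2 encodes object 0, slot 0 encodes object 1, slot 1 encodes object 2.
cost = np.array([[0.9, 0.1, 0.8],
                 [0.7, 0.9, 0.2],
                 [0.1, 0.8, 0.9]])
perm, err = best_slot_assignment(cost)   # perm == (2, 0, 1)
```

Once the permutation is fixed, the affinity matrix R can be computed as in the flat DCI setting, with slots relabeled by perm.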
1 INTRODUCTION . A salient challenge in generative modeling is the ability to decompose the representation of images and scenes into distinct objects that are represented separately and then combined together . Indeed , the capacity to reason about objects and their relations is a central aspect of human intelligence ( Spelke et al. , 1992 ) , which can be used in conjunction with graph neural networks or symbolic solvers to enable relational reasoning . For instance , ( Wang et al. , 2018 ) exploit robot body structure to obtain competitive performance and transferability when learning to walk or swim in the Gym environments . ( Yi et al. , 2019 ) perform state-of-the-art visual reasoning by using an object based representation in combination with a symbolic model . In the last years , several models have been proposed to learn compositional representations in an unsupervised fashion : TAGGER ( Greff et al. , 2016 ) , NEM ( Greff et al. , 2017 ) , R-NEM ( Van Steenkiste et al. , 2018 ) , MONet ( Burgess et al. , 2019 ) , IODINE ( Greff et al. , 2019 ) , GENESIS ( Engelcke et al. , 2019 ) , Slot Attention ( Locatello et al. , 2020 ) , MulMON ( Li et al. , 2020 ) , SPACE ( Lin et al. , 2020 ) . They jointly learn to represent individual objects and to segment the image into meaningful components , the latter often being called `` perceptual grouping '' . These models share a number of common principles : ( i ) Splitting the latent representation into several slots that are meant to contain the representation of an individual object . ( ii ) Inside each slot , encoding information about both object position and appearance . ( iii ) Maintaining a symmetry between slots in order to respect the permutation invariance of objects composition . These mechanisms are intuitively illustrated in Figure 1 . To compare and select models , it is indispensable to have robust disentanglement metrics . 
At the level of individual factors of variations , a representation is said to be disentangled when information about the different factors is separated between different latent dimensions ( Bengio et al. , 2013 ; Locatello et al. , 2019 ) . At object-level , disentanglement measures the degree of object separation between slots . However , all existing metrics ( Higgins et al. , 2016 ; Chen et al. , 2018 ; Ridgeway and Mozer , 2018 ; Eastwood and Williams , 2018 ; Kumar et al. , 2017 ) are limited to the individual case , which disregards representation structure . To cite Kim and Mnih ( 2018 ) about the FactorVAE metric : The definition of disentanglement we use [ ... ] is clearly a simplistic one . It does not allow correlations among the factors or hierarchies over them . Thus this definition seems more suited to synthetic data with independent factors of variation than to most realistic datasets . As a result , prior work has restricted to measuring the degree of object separation via pixel-level segmentation metrics . Most considered is the Adjusted Rand ( ARI ) Index ( Rand , 1971 ; Greff et al. , 2019 ) , where image segmentation is viewed as a cluster assignment for pixels . Other metrics such as Segmentation Covering ( mSC ) ( Arbelaez et al. , 2010 ) have been introduced to penalize oversegmentation of objects . A fundamental limitation is that they do not evaluate directly the quality of the representation , but instead consider a visual proxy of object separation . This results in problematic dependence on the quality of the inferred segmentation masks , a problem first identified by Greff et al . ( 2019 ) for IODINE , and confirmed in our experimental study . To address these limitations , we introduce the first metric for evaluating disentanglement at individual hierarchy levels of a structured latent representation . 
Applied to object-centric generative models , this offers a systematic , unified approach to evaluating ( i ) object separation between latent slots ( ii ) disentanglement of object properties inside individual slots ( iii ) disentanglement of intrinsic and extrinsic object properties . We theoretically show that our framework gives stronger guarantees of representation quality than previous disentanglement metrics . Thus , it can safely substitute to them . We experimentally demonstrate the applicability of our metric to three architectures : MONet , GENESIS and IODINE . The results confirm issues with pixel-level segmentation metrics , and offer valuable insight about the representation learned by these models . Finally , as a core technical component , we present the first representation probing algorithm handling slot permutation invariance . 2 BACKGROUND . 2.1 DISENTANGLEMENT CRITERIA . There exists an extensive literature discussing notions of disentanglement , accounting for all the different definitions is outside the scope of this paper . We chose to focus on the three criteria formalized by Eastwood and Williams ( 2018 ) , which stand out because of their clarity and simplicity . Disentanglement is the degree to which a representation separates the underlying factors of variation , with each latent variable capturing at most one generative factor . Completeness is the degree to which each underlying factor is captured by a single latent variable . Informativeness is the amount of information that a representation captures about the underlying factors of variation . Similarly to prior work , the word disentanglement is also used as a generic term that simultaneously refers to these three criteria . It should be clear depending on the context whether it is meant as general or specific . 2.2 THE DCI FRAMEWORK . For brevity reasons , we only describe the DCI metrics of Eastwood and Williams ( 2018 ) , which are most closely related to our work . 
The supplementary material provides a comprehensive overview of alternative metrics . Consider a dataset X composed of n observations x1 , . . . , xn , which we assume are generated by combining F underlying factors of variation . The value of the different factors for observation xl are denoted vl1 , . . . , v l F . Suppose we have learned a representation z = ( z1 , . . . , zL ) from this dataset . The DCI metric is based on the affinity matrix R = ( Ri , j ) , where Ri , j measures the relative importance of latent zi in predicting the value of factor vj . Supposing appropriate normalization for R , disentanglement is measured as the weighted average of the matrix ’ row entropy . Conversely , completeness is measured as the weighted average of column entropy . Informativeness is measured as the normalized error of the predictor used to obtain the matrix R . 3 EVALUATING STRUCTURED DISENTANGLEMENT . 3.1 contains a high level description of the goals steering our framework . Our metric is formally described in 3.2 . In 3.3 , we present theoretical results showing that our framework can safely substitute to DCI since it provides stronger guarantees that the selected model is correctly disentangled . 3.1 PRESENTATION OF THE FRAMEWORK . Prior work has focused on measuring disentanglement at the level of individual factors of variation and latent dimensions ( that we will refer to as global or unstructured disentanglement ) . In the case of compositional representations with several objects slots , additional properties are desirable : 1 . Object-level disentanglement : Objects and latent slots should have a one-to-one matching . That is , changing one object should lead to changes in a single slot , and vice-versa . 2 . Slot disentanglement : Inside a given slot , properties of the object must be disentangled . 3 . Slot symmetry : The latent dimension responsible for a given factor ( e.g. , color ) should be invariant across slots . 
This means that all slots should have the same inner structure . Our structured disentanglement metric allows to evaluate all these criteria within a single unified framework . Similarly to DCI , it is based on the affinity matrix ( Rτ̂ , τ ) , where Rτ̂ , τ measures the relative importance of latent zτ̂ in predicting the value of factor vτ . The key novelty compared to DCI is that we propose to measure disentanglement with respect to arbitrary projections of the matrix R. Intuitively , projections correspond to a marginalization operation that selects a subset of hierarchy levels and discards the others . The coefficients Rτ̂ , τ that are projected together are summed to create group affinities . Figure 2 gives an intuitive illustration of this process on a toy example . With object-centric representations , projecting at the object/slot level allows to study the relation of objects and slots without taking their internal structure into consideration . Ultimately this permits to evaluate our object-level disentanglement criterion . Projecting at property level allows to study the internal slot representation independently of the object , for evaluation of both internal slot disentanglement and slot symmetry , with a single metric . The identity projection conserving all levels measures flat ( DCI ) disentanglement . This generalizes to arbitrary hierarchies in the representation , such as disentanglement of position and appearance , or intrinsic and extrinsic object properties . 3.2 MATHEMATICAL DEFINITION . To formalize our framework most conveniently , we propose to view the affinity matrix R = ( Rτ̂ , τ ) as a joint random variable ( X , Y ) , where X is a random latent dimension , and Y a random factor . Supposing that R is normalized to have sum one , this means P [ X = τ̂ , Y = τ ] = Rτ̂ , τ . 
This point of view, which is only implicit in prior work, allows the projections of R to be formalized as a coupled marginalization operation (ρ(X), ρ(Y)), where ρ(α_1, ..., α_h) = (α_{e_1}, ..., α_{e_l}) selects a subset of hierarchy levels. This yields a concise and elegant mathematical framework, in which we can build on standard information-theoretic identities to derive theoretical properties. In the following, H_U(A|B) denotes the conditional entropy of A with respect to B, computed in log base U (detailed definitions in the supplementary). Our disentanglement criteria are the following:

Completeness with respect to projection ρ measures how well the projected factors are captured by a coherent group of latents, via column entropy in the projected space. It is measured as C(ρ) = 1 − H_U(ρ(X)|ρ(Y)), where U = |ρ(T̂)| is the number of groups of latents in the projection.

Disentanglement with respect to projection ρ measures to what extent a group of latent variables influences a coherent subset of factors, via projected row entropy. It is measured as D(ρ) = 1 − H_V(ρ(Y)|ρ(X)), where V = |ρ(T)| is the number of groups of factors in the projection.

Informativeness does not depend on the projection. It is defined as the normalized error of a low-capacity model f that learns to predict the factor values v from the latents z, i.e., I = ‖f(z) − v‖_2 / ‖v‖_2.

The changing log bases U and V ensure normalization between 0 and 1.

3.3 ESTABLISHING TRUST IN OUR FRAMEWORK: A THEORETICAL ANALYSIS

Our disentanglement D(ρ) and completeness C(ρ) metrics depend on the subset of hierarchy levels contained in the projection ρ. We theoretically analyze the influence of the projection choice. It is especially interesting to study the behavior when distinct subsets of hierarchy levels are combined.
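As a concrete illustration of the completeness and disentanglement definitions, here is a minimal sketch that evaluates them on an already-projected, normalized affinity matrix. The example matrices are assumptions chosen to exercise the two extremes (a one-to-one matching and a uniform matrix); the base-changing of the logarithm is what keeps both scores in [0, 1].

```python
import numpy as np

def cond_entropy(P, base):
    """H(X | Y) in the given log base, where P[i, j] = P[X = i, Y = j]."""
    py = P.sum(axis=0)  # marginal distribution of Y
    H = 0.0
    for j in range(P.shape[1]):
        if py[j] == 0:
            continue
        px_given_y = P[:, j] / py[j]
        nz = px_given_y > 0
        H += py[j] * -(px_given_y[nz] * np.log(px_given_y[nz])).sum() / np.log(base)
    return H

def completeness(P):
    U = P.shape[0]  # number of latent groups
    return 1.0 - cond_entropy(P, U)

def disentanglement(P):
    V = P.shape[1]  # number of factor groups
    return 1.0 - cond_entropy(P.T, V)

# A perfectly disentangled projected matrix: one-to-one latent/factor matching.
P_perfect = np.eye(3) / 3.0
assert np.isclose(completeness(P_perfect), 1.0)
assert np.isclose(disentanglement(P_perfect), 1.0)

# The worst case: every latent group spreads uniformly over every factor group.
P_uniform = np.ones((3, 3)) / 9.0
assert np.isclose(completeness(P_uniform), 0.0)
```

With the identity projection this reduces to the flat DCI scores; with coarser projections the same two functions evaluate object-level disentanglement or slot symmetry.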
Our key results are the following: Theorem 1 shows that our framework contains DCI as a special case, obtained when the identity projection is chosen. Theorem 2 shows that the metrics associated with the identity projection can be decomposed in terms of one-dimensional projections along a single hierarchy level. Together, these results formally show that one-dimensional projections provide a sound substitute for prior unstructured metrics. This is very useful for building trust in our framework.

Theorem 1 (Relation with DCI). The DCI metrics of Eastwood and Williams (2018) are a special case of our framework, with the identity projection conserving all hierarchy levels.

Theorem 2 uses the intuitive notion of decomposition of a projection. The projection ρ is said to be decomposed into ρ_1, ..., ρ_k if the set of hierarchy levels selected by ρ is the disjoint union of the sets of levels selected by ρ_1, ..., ρ_k. In the case of object-centric latent representations, the identity projection ρ_id that keeps all hierarchy levels {object, property} can be decomposed into ρ_object, considering only {object}, and ρ_property, considering only {property}.

Theorem 2 (Decomposition of a projection). Consider a decomposition of the projection ρ (with L latent groups) into disjoint projections ρ_1, ..., ρ_k (with respectively L_1, ..., L_k latent groups). The following lower bound for the joint completeness (resp. disentanglement) holds:

1 − k + Σ_{s=1}^{k} C(ρ_s) ≤ 1 − Σ_{s=1}^{k} (1 − C(ρ_s)) / log_{L_s}(L) ≤ C(ρ).

Suppose that all one-dimensional projections verify C(ρ_s) ≥ 1 − ε, where ε ≥ 0. We obtain the lower bound C(ρ_id) ≥ (1 − k) + k(1 − ε) = 1 − kε for the identity projection. For object-centric representations, this implies that when object-level completeness and property completeness are both perfect (that is, ε = 0), then DCI completeness is also perfect. The same holds for disentanglement.
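The ordering of the two lower bounds in Theorem 2 can be checked numerically. This is a hedged sketch with made-up per-projection completeness values; it assumes, for the example, that the joint projection has L = L_1 · ... · L_k latent groups (the product of the group counts of the disjoint projections).

```python
import numpy as np

# Hypothetical per-projection completeness values C(rho_s) and
# latent-group counts L_s (illustrative assumptions, not real measurements).
C = [0.9, 0.8]
Ls = [4, 3]
L = int(np.prod(Ls))  # assumed joint group count: L = L_1 * L_2 = 12
k = len(C)

# Left-hand (looser) bound: 1 - k + sum_s C(rho_s).
loose = 1 - k + sum(C)

# Middle (tighter) bound: 1 - sum_s (1 - C(rho_s)) / log_{L_s}(L),
# using log_{L_s}(L) = ln(L) / ln(L_s), i.e. dividing is multiplying
# by ln(L_s) / ln(L).
tight = 1 - sum((1 - c) * np.log(l) / np.log(L) for c, l in zip(C, Ls))

assert loose <= tight  # the middle bound is never weaker

# The epsilon corollary: if all C(rho_s) >= 1 - eps, then loose >= 1 - k*eps.
eps = max(1 - c for c in C)
assert loose >= 1 - k * eps - 1e-12
```

Since log_{L_s}(L) ≥ 1 whenever L ≥ L_s, each term of the middle sum is no larger than the corresponding term 1 − C(ρ_s) of the left-hand bound, which is why the middle bound dominates.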
The supplementary material contains detailed proofs, a matching upper bound, as well as an explicit formula in the special case k = 2.
This paper introduces a new metric for evaluating disentanglement of latent representations, which -- unlike prior work -- supports object-centric, structured latent spaces with a set of latent variables (as opposed to a single global latent variable). While prior work on object-centric generative models primarily evaluated their ability to decompose visual scenes into spatial regions corresponding to individual objects, this novel metric allows for systematic evaluation of disentanglement of learned object properties within and across "object slots" (i.e. individual latent variables) in these models. The paper further introduces a representation probing algorithm using an EM-style iterative procedure, which is invariant to slot permutation. The metric is experimentally demonstrated on standard unsupervised scene decomposition datasets and models.
SP:6d220adce21785812fe44207f5806688cad89e9c
Fast and Efficient Once-For-All Networks for Diverse Hardware Deployment
1 INTRODUCTION

Convolutional neural networks (CNNs) are overwhelmingly successful in many machine learning applications. These applications may have different inference constraints (e.g., latency) and are deployed on hardware platforms that range from server-grade machines to edge devices such as smartphones. Optimal network architectures need to be designed to meet the requirements of each target deployment scenario. However, naively designing a specialized architecture for each scenario is very expensive, as it requires fully retraining the model each time; this is an excessively costly process in terms of the required machine learning expertise, time, energy, and CO2 emissions. Recently, researchers have proposed efficient methods based on training a super-network only once. Then, for a specific deployment scenario, the sub-network that meets the deployment constraints with the best accuracy is sampled from the super-network. The weights of the sampled network are shared with the original super-network, hence retraining is not required. Once-for-all (OFA) (Cai et al., 2020) is among the first methods proposed to tackle this problem. The OFA method trains a once-for-all network that jointly optimizes the accuracy of a large number of sub-networks (more than 10^19) sampled from it. Each sub-network is selected from the once-for-all network by scaling layer depths, channel widths, kernel sizes, and input resolution independently. Such scaling provides a family of CNNs with different computation and representation power to flexibly support deployment under diverse platforms and configurations. With this massive search space, OFA co-trains all the sub-networks by a complex four-stage progressive training process which is prohibitively expensive and costs around 1200 GPU-hours. Compound OFA (CompOFA) (Sahni et al.
, 2020) builds upon the original OFA by shrinking the design space of possible sub-networks. This is done by considering only networks whose dimensions are coupled, which reduces the number of possible models by 17 orders of magnitude, from 10^19 down to 243. Sahni et al. (2020) demonstrate that this smaller design space is sufficient, as most sub-networks in the original OFA design space are far from the optimal accuracy-latency frontier.

[Figure 1: Comparison of training schedules for OFA, CompOFA, and fOFA. Length on the horizontal axis is proportional to the number of epochs in each phase. For OFA, "elastic kernel," "elastic width," and "elastic depth" are the phases of training specified in Cai et al. (2020) that are not used in CompOFA and fOFA.]

With this smaller space, the training procedure can be simplified as well, since these suboptimal sub-networks no longer influence the training process. CompOFA reduces the four stages of the original OFA process to two, and this optimization speeds up training by a factor of 2× over OFA. However, 2× faster than OFA's 1200 GPU-hours is still 600 GPU-hours. Even with this significant improvement, the training cost remains very expensive, especially when effects on the environment are considered (Strubell et al., 2019). While some of this cost can be mitigated by improvements in hardware efficiency and the continued development of specialized platforms for training CNNs, algorithmic enhancements still have a large role to play. While CompOFA greatly simplifies the progressive shrinking training procedure used in OFA, it still depends on pre-training a super-network to act as a teacher for the sub-network co-training process, which uses knowledge distillation (Hinton et al., 2015).
Due to the optimizations in the co-training process, training the super-network in CompOFA requires more than half (180 out of 330) of the total training epochs. In this work, we propose several optimizations to the once-for-all training process that produce a one-stage training algorithm for fast and efficient neural architecture search. The key features of our method are:

• We co-train all of the sub-networks from scratch without pre-training a teacher network, using the concept of in-place distillation (Yu et al., 2020). The largest network we train with in-place distillation is smaller than the pre-trained teacher network used in Cai et al. (2020) and Sahni et al. (2020).
• During the co-training process, we develop an upper-attentive sampling method which always samples the full-sized sub-network at each iteration to help co-train the remaining sub-networks.
• Before co-training, we use an upper-attentive warmup technique which trains only the full-sized sub-network for a few epochs, further improving performance.
• With these optimizations, we can decrease the number of sub-networks sampled in each training iteration, further improving performance.

The benefits of our proposed fast OFA (fOFA) method are shown in Figure 1. Furthermore, since our method has only a single stage, we can easily increase the training time to improve on the accuracy of previous methods while still requiring less total training time. The rest of this paper is organized as follows. We describe related work in more detail in Section 2 and present our method in depth in Section 3. We report our experimental results in Section 4 and conclude in Section 6.

2 RELATED WORK

Neural architecture search (NAS) aims to automatically find the optimal network architecture given hardware constraints, such as FLOPs or latency. Early NAS works mainly adopted reinforcement learning (Zoph et al.
, 2018; Zoph & Le, 2016), evolutionary search (Real et al., 2019; 2017), or sparse connection learning (Kim et al., 2018) to sample different architectures. However, each sampled architecture needed to be trained from scratch, resulting in a huge and intractable computing cost. More recent NAS works greatly reduce this cost by training an over-parameterized network called a super-network, and then sampling various sub-networks which share weights with the super-network. Such super-network-based methods can be further divided into two main categories, as follows.

2.1 TWO-STAGE TRAINING

The main idea of two-stage training methods (Berman et al., 2020; Bender et al., 2018; Brock et al., 2017; Guo et al., 2019; Liu et al., 2018; Pham et al., 2018) is that after searching for the best architectures in the first stage of training, the best architectures have to be retrained from scratch to obtain a final model. Generally, a single two-stage search experiment can only target a single resource budget, or a narrow range of resource budgets, at a time, which is inefficient.

2.2 ONE-STAGE TRAINING

To alleviate the inefficiency of two-stage training, Once-For-All (OFA) (Cai et al., 2020) was proposed to jointly train various sub-networks of the super-network in a single stage. By doing so, the sub-networks can be directly deployed to different hardware platforms without retraining. However, to support an extremely large number of sub-networks (i.e., 10^19), such one-stage training involves multiple steps that gradually add more sub-networks via the proposed progressive shrinking technique. Moreover, OFA also needs to first train a single full-sized teacher network (the same size as the super-network) from scratch to guide the training of the sub-networks through knowledge distillation.
Thus, due to this complex training procedure, OFA still suffers from a prohibitive training cost, requiring around 1200 GPU-hours. More recently, inspired by studies on neural network design spaces (Tan & Le, 2019; Radosavovic et al., 2020), CompOFA (Sahni et al., 2020) proposed a compound sub-network scaling method, which couples the depth and width configurations of the sampled sub-networks to constrain the search space, reducing it to only 243 sub-networks without losing accuracy. Although CompOFA achieves a 2× training cost reduction compared with OFA, it follows a similar training procedure that also needs to train a teacher model from scratch. A similar approach is investigated in Yang et al. (2020), which couples network width and input resolution in a single mutual learning framework. In addition, bigNAS (Yu et al., 2020) proposed replacing OFA's multi-step training with a single step, namely one-shot NAS, challenging the usual practice of progressive training in OFA. The idea of one-shot NAS is to jointly train sub-networks from scratch directly, using the sandwich rule and in-place distillation techniques proposed for slimmable networks (Yu et al., 2018). Building on bigNAS, Wang et al. (2020) point out the unnecessary updates of sub-optimal models in one-stage training, and use attention mechanisms to push the Pareto front. However, the primary objective of these two works is to obtain better accuracy, so they still suffer from a high training cost; e.g., bigNAS needs over 2300 TPU-hours for O(10^12) models. In addition, Li et al. (2021) improve the trade-off between accuracy and computational complexity for slimmable networks by introducing a dynamic gating mechanism and in-place ensemble bootstrapping to increase training stability. However, this requires an additional gating training step, resulting in a larger training cost.
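The upper-attentive sampling idea combined with in-place distillation can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the config encoding (one coupled choice index per block) and the per-iteration batch size are assumptions made for the example.

```python
import itertools
import random

# Coupled per-block choice: index 2 selects the largest (depth, width) pair.
SPACE = list(itertools.product(range(3), repeat=5))  # 3^5 = 243 configs
FULL = (2, 2, 2, 2, 2)                               # full-sized network

def upper_attentive_sample(n_subnets, rng):
    """Architectures trained in one iteration: the full-sized network is
    always included, so its predictions can serve as the in-place
    distillation targets for the randomly sampled smaller sub-networks
    (the full network itself trains against the hard labels)."""
    subs = rng.sample([c for c in SPACE if c != FULL], n_subnets - 1)
    return [FULL] + subs

batch = upper_attentive_sample(4, random.Random(0))
assert batch[0] == FULL and len(batch) == 4
```

An upper-attentive warmup then amounts to running the first few epochs with `n_subnets = 1`, i.e., training only `FULL`, before switching to the full co-training schedule.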
Our approach also follows the direction of one-shot models. In contrast to bigNAS, the primary objective of this work is to reduce the training cost without loss of Pareto optimality under the design space of OFA and CompOFA.

3 METHODS

3.1 BUILDING THE SEARCH SPACE

A neural network N is a function that takes an input set X and generates a set of outputs δ(N, X). In this work, we focus on a fixed input set (i.e., ImageNet), and thus write the network output as δ(N). In the supervised learning setting, the performance of the neural network is evaluated against a set of labels YD. Following standard practice in neural architecture search, we limit our neural network space to the set of architectures consisting of a sequence of blocks B1, B2, ..., Bm, where m = 5 is a typical value. Each block is based on the inverted residual in the architecture space of MobileNetV3 (Howard et al., 2019). A block is parameterized by three dimensions: the depth (number of layers in the block) D, the width (channel expansion ratio) W, and the convolution kernel size K. This search space is illustrated in Figure 2. To reduce the size of the search space, we use the same coupling heuristic as CompOFA (Sahni et al., 2020); that is, if there are n choices for the depth dimension and n choices for the width dimension, we sample the ith largest width wi whenever we sample the ith largest depth di for each layer in the block. While OFA uses an elastic kernel that allows for different kernel sizes within blocks, we follow CompOFA and use a fixed kernel size within each block. We call the network where the values of K, D, and W each take their largest possible value the full-sized network or super-net, and the network created by any other choice of these values a sub-network.
As in CompOFA, we choose three possible values for D ∈ {2, 3, 4} and three possible values for W ∈ {3, 4, 6}, and fix the kernel size to that of Howard et al. (2019), that is, K = 3 in the first, third, and fourth blocks, and K = 5 in the second and fifth blocks. Thus, with five blocks, we have 3^5 = 243 models in our search space. In neural architecture search, the input resolution can vary as well, up to a maximum size of 224 × 224 for ImageNet (Deng et al., 2009). In this work, we use an elastic resolution, where input images are resized to be square with side length in the set {128, 160, 192, 224}.
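The coupled search space described above is small enough to enumerate explicitly. The sketch below does so under the stated choices (the pairing of the ith largest depth with the ith largest width, the fixed per-block kernels, and the elastic resolutions); the tuple encoding of a configuration is an assumption for illustration.

```python
import itertools

# Coupled (depth, width) choices, largest to smallest, per the CompOFA heuristic.
DW_PAIRS = [(4, 6), (3, 4), (2, 3)]
KERNELS = (3, 5, 3, 3, 5)            # fixed per block, following MobileNetV3
RESOLUTIONS = (128, 160, 192, 224)   # elastic input resolution

# One architecture = one (depth, width) pair for each of the 5 blocks.
space = list(itertools.product(DW_PAIRS, repeat=5))
assert len(space) == 3 ** 5 == 243

# Example deployment targets: the largest and smallest networks.
full_sized = (DW_PAIRS[0],) * 5      # super-net: D=4, W=6 in every block
smallest = (DW_PAIRS[-1],) * 5       # D=2, W=3 in every block
assert full_sized in space and smallest in space
```

Multiplying by the four resolutions gives 972 deployable (architecture, resolution) combinations, all served by a single set of shared weights.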
This paper proposed a method to train a once-for-all network, where one network can run at different resource constraints. The method is based on previous methods, and the author further improved the training speed by around 1.5x - 1.8x without loss of performance. The method is evaluated on ImageNet classification.
SP:d1c920796e6586b12f2f04f108fa32fdeac13f19
This manuscript aims to reduce the training cost of once-for-all networks. The proposed method is built upon CompOFA that reduces the number of sub-networks within the OFA network to 243 by only considering networks whose dimensions are coupled. This paper introduces several techniques to further reduce the training cost, including in-place distillation, upper-attentive sampling, and upper-attentive warmup.
SP:d1c920796e6586b12f2f04f108fa32fdeac13f19
Fast and Efficient Once-For-All Networks for Diverse Hardware Deployment
1 INTRODUCTION . Convolutional neural networks ( CNNs ) are overwhelmingly successful in many machine learning applications . These applications may have different inference constraints ( e.g. , latency ) and are deployed in different hardware platforms that range from server-grade platforms to edge devices such as smartphones . Optimal network architectures need to be designed to meet the requirement for a target deployment scenario . However , naively designing a specialized architecture for each scenario is very expensive as it requires to fully retrain the model each time . This is an excessively expensive process in terms of the required machine learning expertise , time , energy and CO2 emission . Recently , researchers have proposed efficient methods which are based on training a super network only once . Then , for a specific deployment scenario , a sub-network is sampled from the super-network that meet the deployment constraints with the best accuracy . The weight of the sampled network is shared with original super network , hence retraining is not required . Once-for-all ( OFA ) ( Cai et al. , 2020 ) is among the first methods proposed to tackle this problem . The OFA method trains a once-for-all network that jointly optimizes the accuracy of a large number of subnetworks ( more than 1019 ) sampled from the once-for-all network . Each sub-network is selected from the once-for-all network where layer depths , channel widths , kernel sizes and input resolution are scaled independently . Such scaling provides a family of CNNs networks with different computation and representation power to flexibly support deployment under diverse platforms and configurations . With this massive search space , OFA co-trains all the sub-networks by a complex four-stage progressive training process which is prohibitively expensive and costs around 1200 GPU hours . Compound OFA ( CompOFA ) ( Sahni et al. 
, 2020) builds upon the original OFA by shrinking the design space of possible sub-networks. This is done by only considering networks whose dimensions are coupled, which reduces the number of possible models by 17 orders of magnitude, from 10^19 down to 243. Sahni et al. (2020) demonstrate that this smaller design space is sufficient, as most sub-networks in the original OFA design space are far from the optimal accuracy-latency frontier. (Figure 1: Comparison of training schedules for OFA, CompOFA, and fOFA. Length on the horizontal axis is proportional to the number of epochs in each phase. For OFA, "elastic kernel," "elastic width," and "elastic depth" are the phases of training specified in Cai et al. (2020) that are not used in CompOFA and fOFA.) With this smaller space, the training procedure can be simplified as well, since these suboptimal sub-networks no longer influence the training process. CompOFA reduces the four stages of the original OFA process to two, and this optimization speeds up the training time of CompOFA by a factor of 2× over OFA. However, 2× faster than OFA's 1200 GPU hours is still 600 GPU hours. Even with this significant improvement, the training cost remains very expensive, especially when effects on the environment are considered (Strubell et al., 2019). While some of this cost can be mitigated by improvements in hardware efficiency and the continued development of specialized platforms for training CNNs, algorithmic enhancements still have a large role to play. While CompOFA greatly simplifies the progressive shrinking training procedure used in OFA, it still depends on pre-training a super-network to act as a teacher for the sub-network co-training process, which uses knowledge distillation (Hinton et al., 2015).
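The gap between the two design spaces can be made concrete with a quick back-of-the-envelope count. The coupled (CompOFA-style) count below is exact; the independent (OFA-style) count is only an illustrative approximation, since the exact 10^19 figure depends on OFA's precise parameterization:

```python
def compound_space(num_blocks: int, coupled_choices: int) -> int:
    # CompOFA: one coupled (depth, width) choice per block, fixed kernel size.
    return coupled_choices ** num_blocks

def independent_space(num_blocks, depth_choices, num_widths, num_kernels) -> int:
    # Rough OFA-style count: each block picks a depth d, and every one of its
    # d layers then picks a width and a kernel size independently.
    per_block = sum((num_widths * num_kernels) ** d for d in depth_choices)
    return per_block ** num_blocks

print(compound_space(5, 3))                   # 243
print(independent_space(5, [2, 3, 4], 3, 3))  # roughly 2e19
```

With 5 blocks, depths in {2, 3, 4}, and 3 width and 3 kernel choices per layer, the independent count already lands on the order of 10^19, matching the paper's claimed 17-orders-of-magnitude reduction down to 243.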
Due to the optimizations in the co-training process, training the super-network in CompOFA requires more than half (180 out of 330) of the total training epochs. In this work, we propose several optimizations to the once-for-all training process that produce a one-stage training algorithm for fast and efficient neural architecture search. The key features of our method are: • We co-train all of the sub-networks from scratch without pre-training a teacher network, using the concept of in-place distillation (Yu et al., 2020). The largest network we train using in-place distillation is smaller than the pre-trained teacher network used in Cai et al. (2020) and Sahni et al. (2020). • During the co-training process, we develop an upper-attentive sampling method which always samples the full-sized sub-net at each iteration to help co-train the remaining sub-networks. • Before co-training, we use an upper-attentive warmup technique which trains only the full-sized sub-net for a few epochs to further improve performance. • With these optimizations, we can decrease the number of sampled sub-networks in each iteration of training, further improving performance. The benefits of our proposed fast OFA (fOFA) method are shown in Figure 1. Furthermore, since our method has only a single stage, we can easily increase the training budget to improve on the accuracy of previous methods while still requiring less total training time. The rest of this paper is organized as follows. We describe related work in more detail in Section 2 and illustrate our method in depth in Section 3. We report on our experimental results in Section 4 and finish the paper with conclusions in Section 6. 2 RELATED WORK. Neural architecture search (NAS) aims to automatically find the optimal network architecture given hardware constraints, such as FLOPs or latency. Early NAS works mainly adapted reinforcement learning (Zoph et al.
, 2018; Zoph & Le, 2016), evolutionary search (Real et al., 2019; 2017), or sparse connection learning (Kim et al., 2018) to sample different architectures. However, each sampled architecture needed to be trained from scratch, resulting in a huge and intractable computing cost. More recent NAS works greatly reduce this cost by training an over-parameterized network called a super-network, and then sampling various sub-networks which share their weights with the super-network. Such super-network-based methods can be further divided into two main categories as follows: 2.1 TWO-STAGE TRAINING. The main idea of the two-stage training methods (Berman et al., 2020; Bender et al., 2018; Brock et al., 2017; Guo et al., 2019; Liu et al., 2018; Pham et al., 2018) is that after searching for the best architectures in the first stage of training, the best architectures then have to be retrained from scratch to obtain a final model. Generally, a single two-stage search experiment can only target a single resource budget, or a narrow range of resource budgets, at a time, which is inefficient. 2.2 ONE-STAGE TRAINING. To alleviate the inefficiency of two-stage training, Once-For-All (OFA) (Cai et al., 2020) was proposed to jointly train various sub-networks of the super-network in a single stage. By doing so, the sub-networks can be directly deployed to different hardware platforms without retraining. However, to support an extremely large number of sub-networks (i.e., 10^19), such one-stage training involves multiple steps that gradually add more sub-networks using the proposed progressive shrinking technique. Moreover, OFA also needs to first train a single full-sized teacher network (the same size as the super-network) from scratch to guide the training of the sub-networks via knowledge distillation.
Thus, due to the complex training procedure, OFA still suffers from a prohibitive training cost, requiring around 1200 GPU hours. More recently, inspired by studies on neural network design spaces (Tan & Le, 2019; Radosavovic et al., 2020), CompOFA (Sahni et al., 2020) proposes a compound sub-network scaling method which couples the depth and width configuration of the sampled sub-networks to constrain the search space, reducing it to only 243 sub-networks without losing accuracy. Although CompOFA achieves a 2× training cost reduction compared with OFA, it follows a similar training procedure that also needs to train a teacher model from scratch. A similar approach is investigated in Yang et al. (2020), which couples network width and input resolution in a single mutual learning framework. In addition, bigNAS (Yu et al., 2020) proposed replacing OFA's multi-step training with a single step, namely one-shot NAS, challenging the usual practice of progressive training in OFA. The idea of one-shot NAS is to jointly train sub-networks from scratch directly, using the sandwich rule and in-place distillation techniques proposed for slimmable networks (Yu et al., 2018). Building on bigNAS, Wang et al. (2020) point out the unnecessary updates on sub-optimal models in one-stage training, and use attention mechanisms to push the Pareto front. However, the primary objective of these two works is to obtain better accuracy, so they still suffer from high training cost; e.g., bigNAS needs over 2300 TPU hours for O(10^12) models. In addition, Li et al. (2021) improve the trade-off between accuracy and computational complexity for slimmable networks by introducing a dynamic gating mechanism and in-place ensemble bootstrapping to increase training stability. However, this requires an additional gating training step, resulting in a larger training cost.
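To make the sandwich rule and in-place distillation concrete, here is a minimal NumPy sketch of the per-step loss on a toy linear "super-network". The width-sliced sub-nets and all names here are illustrative assumptions, not the actual bigNAS or fOFA code:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ToySuperNet:
    """Stand-in super-network: a linear classifier whose 'sub-networks'
    use only the first `width` input features (illustrative assumption)."""
    def __init__(self, num_features, num_classes, rng):
        self.W = rng.normal(scale=0.1, size=(num_features, num_classes))

    def probs(self, x, width):
        return softmax(x[:, :width] @ self.W[:width])

def sandwich_sample(widths, n_random, rng):
    # Sandwich rule (Yu et al., 2018): largest + smallest + random middles.
    middle = [w for w in widths if min(widths) < w < max(widths)]
    return [max(widths), min(widths)] + list(rng.choice(middle, size=n_random))

def one_shot_loss(net, x, y, rng, widths=(2, 4, 8), n_random=1):
    sampled = sandwich_sample(widths, n_random, rng)
    # The largest sub-net trains on the hard labels ...
    p_big = net.probs(x, sampled[0])
    loss = -np.log(p_big[np.arange(len(y)), y] + 1e-12).mean()
    # ... and its soft predictions supervise every other sampled sub-net
    # (in-place distillation: no separately pre-trained teacher network).
    for w in sampled[1:]:
        p_sub = net.probs(x, w)
        loss += -(p_big * np.log(p_sub + 1e-12)).sum(axis=-1).mean()
    return loss
```

Only the loss structure is sketched here; in a real implementation the soft targets would be detached from the gradient and each sampled sub-net updated through shared weights.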
Our approach also follows the direction of the one-shot model. Unlike bigNAS, the primary objective of this work is to reduce the training cost without loss of Pareto optimality under the design space of OFA and CompOFA. 3 METHODS. 3.1 BUILDING THE SEARCH SPACE. A neural network N is a function that takes an input set X and generates a set of outputs δ(N, X). In this work, we focus on a fixed input set (i.e., ImageNet), and thus write the network output as δ(N). In the supervised learning setting, the performance of the neural network is evaluated against a set of labels YD. Following the standard practice in neural architecture search, we limit our neural network space to the set of architectures that consist of a sequence of blocks B1, B2, ..., Bm, where m = 5 is a typical value. Each block is based on the inverted residual block in the architecture space of MobileNetV3 (Howard et al., 2019). A block is parameterized by three dimensions: the depth (number of layers in the block) D, the width (channel expansion ratio) W, and the convolution kernel size K. This search space is illustrated in Figure 2. To reduce the size of the search space, we use the same coupling heuristic as CompOFA (Sahni et al., 2020); that is, if there are n choices for the depth dimension and n choices for the width dimension, we sample the ith largest width wi whenever we sample the ith largest depth di for each layer in the block. While OFA uses an elastic kernel that allows for different kernel sizes within blocks, we follow CompOFA and use a fixed kernel size within each block. We call the network where the values of K, D, and W are each at their largest possible value the full-sized network or super-net, and the network created by any other choice of these values a sub-network.
As in CompOFA, we choose three possible values for D ∈ {2, 3, 4} and three possible values for W ∈ {3, 4, 6}, and fix the kernel size to that of Howard et al. (2019), that is, K = 3 in the first, third, and fourth blocks, and K = 5 in the second and fifth blocks. Thus, with five blocks, we have 3^5 = 243 models in our search space. In neural architecture search, the input resolution can vary as well, up to a maximum size of 224 × 224 for ImageNet (Deng et al., 2009). In this work, we use an elastic resolution, where input images are resized to be square with side length in the set {128, 160, 192, 224}.
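The coupled search space just described can be sampled in a few lines. This is a sketch of the configuration only; the dictionary layout and helper name are our own assumptions, not the authors' code:

```python
import random

DEPTHS  = [4, 3, 2]          # per-block depth choices, largest first
WIDTHS  = [6, 4, 3]          # per-block width (expansion ratio) choices, largest first
KERNELS = [3, 5, 3, 3, 5]    # fixed kernel size per block, as in MobileNetV3
RESOLUTIONS = [128, 160, 192, 224]

def sample_compound_subnet(rng=random):
    """CompOFA-style coupling: choosing the i-th largest depth for a block
    also fixes the i-th largest width for every layer in that block."""
    blocks = []
    for k in KERNELS:
        i = rng.randrange(len(DEPTHS))   # a single coupled choice per block
        blocks.append({"depth": DEPTHS[i], "width": WIDTHS[i], "kernel": k})
    return {"blocks": blocks, "resolution": rng.choice(RESOLUTIONS)}

# Three coupled choices per block over five blocks: 3**5 == 243 sub-networks.
```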
In this manuscript, the authors propose a new framework for training once-for-all (OFA) networks. This framework uses an in-place distillation scheme to replace the teacher training used in previous OFA training and reduces the overall training time. The paper is overall well-written and shows good performance.
Meta-Forecasting by combining Global Deep Representations with Local Adaptation
1 INTRODUCTION . Time series ( TS ) forecasting is of fundamental importance for various applications like marketing , customer/inventory management , and finance ( Petropoulos et al. , 2020 ) . Classical examples are forecasting the number of daily sales of a product over the next few weeks or the energy production needs in the next few hours . Accurate TS forecasting results in better down-stream decision making with potentially large monetary implications ( e.g. , Seeger et al . ( 2016 ) ; Faloutsos et al . ( 2019 ) ) . From a machine learning perspective , each TS represents a forecasting task . Conventional approaches for TS forecasting have typically been local , i.e . each TS/task is modeled independently by a forecasting model with relatively few parameters ( see Hyndman and Athanasopoulos ( 2018 ) for an introductory overview ) . Despite the modest amount of data used to train local TS forecasting models , they are effective in practice . They have only recently been outperformed by global deep learning strategies , which jointly train a deep neural network on a large set of related TS/tasks . Global models are designed to work well on the set of TS they are trained on , but they perform poorly on out-of-sample TS , i.e . TS which are not present in the training set . For example , Oreshkin et al . ( 2020 ) show that DeepAR ( Salinas et al. , 2020 ) trained on the M4 dataset ( Makridakis et al. , 2020 ) performs poorly on the M3 dataset . Global-local approaches ( Sen et al. , 2019 ; Smyl , 2020 ; Wang et al. , 2019 ) , such as the M4 competition winner ( Smyl , 2020 ) , exhibit a greater level of specialization as they learn parameters that are shared by all TS in the training set , as well as parameters specific to each TS . However , global-local models are still not able to handle out-of-sample TS as both types of parameters are learned jointly on a large training set of related TS in a multi-task fashion . 
In this work, we tackle the problem of out-of-sample TS forecasting by transferring knowledge from a set of TS, called the source dataset, to another set of TS, called the target dataset. We assume that source and target datasets share some underlying structure which makes transfer learning possible, although they may contain TS from different domains. Our work can be seen as an instance of meta-learning (Schmidhuber, 1987; Ravi and Larochelle, 2016; Finn et al., 2017), whose goal is to leverage a pool of related tasks to learn how to adapt to a new one with little data. We will refer to a forecasting method performing well in this scenario as a meta-forecasting method (this setting was called zero-shot transfer learning by Oreshkin et al. (2020)). Such models could in principle be trained on a large set of TS and still produce accurate and fast predictions when applied to out-of-sample TS, potentially combining the inference speed and accuracy of deep learning models with the ease-of-use of classical local models. Our meta-forecasting method produces one-step-ahead forecasts by combining learned representations with a differentiable closed-form adaptation mechanism inspired by the few-shot image classification method proposed by Bertinetto et al. (2018). Specifically, we propose a class of models which we call Meta Global-Local Auto-Regressive models (Meta-GLAR). Meta-GLAR models compute point forecasts for a single TS in three steps. First, the TS is passed through a representation model to obtain a representation for each time step. Second, a local (i.e., TS-specific) linear model is learned in closed form by solving a ridge regression problem mapping the representations to the observed fraction of the TS. Lastly, the local linear model is applied to the global representation to compute the final predictions.
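The second and third steps admit a compact closed-form sketch. In the NumPy fragment below, variable names are ours, and the per-step representations stand in for whatever the RNN backbone produces:

```python
import numpy as np

def local_adapt_and_forecast(H_ctx, z_ctx, h_next, lam=1.0):
    """Fit the TS-specific linear head by ridge regression on the context
    window, then apply it to the next step's representation.
    H_ctx:  (t0, d) representations of the observed steps
    z_ctx:  (t0,)   observed targets
    h_next: (d,)    representation at the step to forecast"""
    d = H_ctx.shape[1]
    # Closed-form ridge solution: w = (H^T H + lam * I)^{-1} H^T z
    w = np.linalg.solve(H_ctx.T @ H_ctx + lam * np.eye(d), H_ctx.T @ z_ctx)
    return float(h_next @ w)
```

During training this same computation runs inside the forward pass, and gradients flow through the linear solve into the representation parameters.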
Crucially, forecasts are computed in the same way during training, where we backpropagate through the closed-form adaptation step to learn the representation parameters globally, i.e., across multiple TS. Hence, we can learn a representation which works well in combination with an efficient closed-form local adaptation. We use the RNN backbone of DeepAR (Salinas et al., 2020) for the representation. However, one can transform any neural forecasting method into a meta-GLAR one by substituting the last global linear layer with the closed-form adaptation during both training and prediction. We also stress that, since our method adapts locally during training as well, it is fundamentally different from fine-tuning the last layer on each TS, which we show to perform significantly worse. This is in contrast to recent results for few-shot image classification, where fine-tuning the last layer is demonstrated to outperform many modern meta-learning methods (Tian et al., 2020). Our main contributions can be summarized as follows. • We propose a novel meta-learning method for TS forecasting that is suitable for out-of-sample TS forecasting. Meta-GLAR significantly improves the accuracy of out-of-sample TS forecasting (i.e., the transfer setting) over global neural forecasting models such as DeepAR, which employs the same RNN backbone. • Our meta-forecasting method is competitive with classical local methods, for example beating the winner of the M3 competition, as well as NBEATS (Oreshkin et al., 2020), the state-of-the-art method for out-of-sample TS forecasting, while having substantially fewer parameters than the latter. • We perform an extensive ablation study which shows that the closed-form adaptation, the RNN backbone, and the use of iterated forecasts during training are all needed to achieve the best performance.
Furthermore, we show that Meta-GLAR has time and memory costs similar to those of a global one-step-ahead RNN with the same backbone architecture, while converging faster and achieving better accuracy. 2 RELATED WORK. Meta-learning has received considerable attention in recent years, and several models have been developed primarily for few-shot image classification. Notable examples are Ravi and Larochelle (2016); Finn et al. (2017); Snell et al. (2017); Nichol et al. (2018); Sung et al. (2018); Bertinetto et al. (2018). These methods work by adapting to the task at hand before making predictions. Differently from fine-tuning, this adaptation is also performed during a (meta) training procedure over a large set of tasks to learn the (meta) parameters of the model. However, realistic datasets with the high number of tasks needed by meta-learning methods are rare; hence, in commonly used benchmarks like mini-imagenet (Vinyals et al., 2016), each classification task is constructed by randomly selecting a small set of classes and related images from a large single-task dataset. This construction is artificial and departs from real-world scenarios. Recently, the work by Tian et al. (2020) showed that training a neural network on the original single-task dataset and fine-tuning only the last layer on new tasks outperforms many modern meta-learning methods for few-shot image classification. By contrast, popular TS forecasting datasets like M4 fit more naturally into the meta-learning framework, since they already contain a large number of TS/tasks (up to 10^5 for M4). Our method relies on a differentiable closed-form solver to perform the local (or TS-specific) adaptation. Meta-learning is achieved by solving a task-specific ridge regression problem that maps a deep representation to the target TS in closed form, while the parameters of the representation are learned by backpropagation through the solver.
Aside from the original application in few-shot image classification (Bertinetto et al., 2018), differentiable closed-form solvers have been used for other few-shot problems like visual tracking (Zheng et al., 2019), video object segmentation (Liu et al., 2020), spoken intent recognition (Mittal et al., 2020) and spatial regression (Iwata and Tanaka, 2020), while we are not aware of any application in forecasting. Meta-learning in the context of TS forecasting was originally synonymous with model selection or combination of experts (see, e.g., Collopy and Armstrong (1992); Lemke and Gabrys (2010); Talagala et al. (2018)). This class of methods builds a meta-model which uses TS features to select the best-performing model, or the best combination of models, to apply to a target TS. One drawback of these methods is that the features are usually manually designed from the data at hand, and the same set of features does not transfer well to other applications. Laptev et al. (2018) train an LSTM neural network (Hochreiter and Schmidhuber, 1997) on the source TS dataset and then fine-tune its last layers separately on each target TS. This approach overcomes the problems related to human-designed features, since the inputs of the network are just the previous TS observations. However, retraining the last layers of the network for each TS can be expensive, especially when dealing with a large number of TS. Additionally, their fine-tuning procedure requires the selection of hyperparameters like the learning rate and the number of optimizer steps. By contrast, our approach adapts only the last linear layer in closed form, requiring a small increase in compute and no additional hyperparameters compared to a standard neural forecasting model, while outperforming the simple fine-tuning approach. More recently, the NBEATS model has shown strong performance both in the standard (Oreshkin et al.
, 2019) and in the meta-learning (Oreshkin et al., 2020) setting. This multi-step-ahead method uses a residual architecture (He et al., 2016) which takes past observations of a single TS as input and outputs point predictions over the whole forecast horizon. Thanks to the residual connections, a forward pass of the network allows it to implicitly adapt to the input out-of-sample TS. However, the final performance is achieved using a large ensemble, and the number of parameters of the model is quite large even when the residual blocks share the same parameters. Our method, although also using ensembles, achieves good accuracy on out-of-sample TS forecasting with significantly fewer parameters. Finally, Iwata and Kumagai (2020) consider the few-shot TS forecasting setting where each task is formed by a small group of closely related TS. Their method, which combines LSTMs and attention, uses the TS in the support set of the task to compute the one-step-ahead forecasts for each TS in the query set. This is different from our approach, where we do not exploit the other TS in the target dataset to compute predictions and we view each TS as a separate task. Our method can be extended to the case considered by Iwata and Kumagai (2020) by performing the closed-form adaptation on all the TS in the task instead of just one. We leave this for future work. 3 PROBLEM FORMULATION. We consider the setting studied by Oreshkin et al. (2020), where a forecasting method can learn global parameters on a source TS dataset DS to produce accurate forecasts for an out-of-sample TS which belongs to another, target TS dataset DT. The model can only adapt locally to each TS in DT, i.e., it cannot use information from the other TS in DT. We view each TS as a task. Hence, our setting fits a meta-learning framework where DS is the meta-training set and DT is the meta-testing set. We will denote a single TS as a tuple (z, x), where z = [z1, ...
, zT] ∈ R^T are the (target) observations and x = [x1, ..., xT] ∈ R^{T×p} is the matrix of covariates. We denote by t0 ∈ N the split point which divides the context window (or past) z_{1:t0} = [z1, ..., z_{t0}], x_{1:t0} = [x1, ..., x_{t0}] from the forecast horizon (or future) z_{t0+1:T} = [z_{t0+1}, ..., zT], x_{t0+1:T} = [x_{t0+1}, ..., xT]. We also denote by H = T − t0 the length of the forecast horizon. We view each TS as a supervised learning task with training set {(xt, zt)}_{t=1}^{t0} and test set {(xt, zt)}_{t=t0+1}^{T}, where an example is the covariates-target pair (xt, zt). We assume that the covariates vector xt can contain some of the previous observations z_{t−1}, z_{t−2}, ... as time-lagged values (or time-lags). Differently from standard supervised learning tasks, we cannot assume that the examples are independent, due to the temporal dependency. The goal of TS forecasting is to compute predictions ẑ_{t0+1:T} for the observations in the forecast horizon z_{t0+1:T} using the covariates x and the observations in the context window z_{1:t0}. In this work we focus on the accuracy of the predictions, which can be measured for example with the sMAPE metric (Hyndman and Koehler, 2006):

\[
\mathrm{sMAPE} = \frac{1}{|D|}\,\frac{200}{H}\sum_{i=1}^{H}\frac{|z_{t_0+i}-\hat{z}_{t_0+i}|}{|z_{t_0+i}|+|\hat{z}_{t_0+i}|}. \tag{1}
\]
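For a single TS, the inner part of Eq. (1) can be computed directly; the 1/|D| factor then averages this quantity over the dataset D:

```python
import numpy as np

def smape(z_true, z_pred):
    """sMAPE over one forecast horizon of length H, per Eq. (1);
    average the result over all TS in D for the dataset-level score."""
    z_true = np.asarray(z_true, dtype=float)
    z_pred = np.asarray(z_pred, dtype=float)
    H = len(z_true)
    return (200.0 / H) * np.sum(
        np.abs(z_true - z_pred) / (np.abs(z_true) + np.abs(z_pred))
    )
```

A perfect forecast gives 0, and the metric is bounded above by 200 since each summand is at most 1.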
The authors propose an autoregressive framework, Meta-GLAR, comprising a global RNN model and a local linear model (learned using a closed-form solution) for the task of meta-forecasting of time series (TS). The linear model is task-specific and varies for each TS in the dataset, whereas the RNN model is the meta-model that is shared across all time series. The authors perform comparisons with local models as well as NBEATS and DeepAR, and show that Meta-GLAR is competitive with some of them on particular metrics. They also perform an ablation study to show that each component in their model is important to the overall zero-shot transfer in TS forecasting.
Meta-Forecasting by combining Global Deep Representations with Local Adaptation
1 INTRODUCTION . Time series ( TS ) forecasting is of fundamental importance for various applications like marketing , customer/inventory management , and finance ( Petropoulos et al. , 2020 ) . Classical examples are forecasting the number of daily sales of a product over the next few weeks or the energy production needs in the next few hours . Accurate TS forecasting results in better down-stream decision making with potentially large monetary implications ( e.g. , Seeger et al . ( 2016 ) ; Faloutsos et al . ( 2019 ) ) . From a machine learning perspective , each TS represents a forecasting task . Conventional approaches for TS forecasting have typically been local , i.e . each TS/task is modeled independently by a forecasting model with relatively few parameters ( see Hyndman and Athanasopoulos ( 2018 ) for an introductory overview ) . Despite the modest amount of data used to train local TS forecasting models , they are effective in practice . They have only recently been outperformed by global deep learning strategies , which jointly train a deep neural network on a large set of related TS/tasks . Global models are designed to work well on the set of TS they are trained on , but they perform poorly on out-of-sample TS , i.e . TS which are not present in the training set . For example , Oreshkin et al . ( 2020 ) show that DeepAR ( Salinas et al. , 2020 ) trained on the M4 dataset ( Makridakis et al. , 2020 ) performs poorly on the M3 dataset . Global-local approaches ( Sen et al. , 2019 ; Smyl , 2020 ; Wang et al. , 2019 ) , such as the M4 competition winner ( Smyl , 2020 ) , exhibit a greater level of specialization as they learn parameters that are shared by all TS in the training set , as well as parameters specific to each TS . However , global-local models are still not able to handle out-of-sample TS as both types of parameters are learned jointly on a large training set of related TS in a multi-task fashion . 
In this work , we tackle the problem of out-of-sample TS forecasting by transferring knowledge from a set of TS , called the source dataset , to another set of TS , called the target dataset . We assume that source and target datasets share some underlying structure which makes transfer learning possible , although they may contain TS from different domains . Our work can be seen as an instance of meta-learning ( Schmidhuber , 1987 ; Ravi and Larochelle , 2016 ; Finn et al. , 2017 ) , whose goal is to leverage a pool of related tasks to learn how to adapt to a new one with little data . We will refer to a forecasting method performing well in this scenario as a meta-forecasting method.1 Such models 1This was called zero-shot transfer learning by Oreshkin et al . ( 2020 ) . could in principle be trained on a large set of TS and still produce accurate and fast predictions when applied to out-of-sample TS , potentially combining the inference speed and accuracy of deep learning models with the ease-of-use of classical local models . Our meta-forecasting method produces one-step ahead forecasts by combining learned representations with a differentiable closed-form adaptation mechanism inspired by the few-shot image classification method proposed by Bertinetto et al . ( 2018 ) . Specifically , we propose a class of models which we call Meta Global-Local Auto-Regressive models ( Meta-GLAR ) . Meta-GLAR models compute point forecasts for a single TS in three steps . First , the TS is passed through a representation model to obtain a representation for each time step . Second , a local ( i.e . TS-specific ) linear model is learned in closed-form by solving a ridge regression problem mapping the representations to the observed fraction of the TS . Lastly , the local linear model is applied to the global representation to compute the final predictions . 
Crucially , forecasts are computed in the same way also during training , where we backpropagate through the closed-form adaptation step to learn the representation parameters globally , i.e . across multiple TS . Hence , we can learn a representation which works well in combination with an efficient closed-form local adaptation . We use the RNN backbone in DeepAR ( Salinas et al. , 2020 ) for the representation . However , one can transform any neural forecasting method in a meta-GLAR one by substituting the last global linear layer with the closed-form adaptation during both training and prediction . We also stress that , since our method adapts locally also during training , it is fundamentally different from fine-tuning the last layer on each TS , which we show to perform significantly worse . This is in contrast to recent results for few-shot image classification where fine-tuning the last layer is demonstrated to outperform many modern meta-learning methods ( Tian et al. , 2020 ) . Our main contributions can be summarized as follows . • We propose a novel meta-learning method for TS forecasting that is suitable for out-ofsample TS forecasting . Meta-GLAR significantly improves the accuracy on out-of-sample TS forecasting ( i.e. , transfer setting ) over a global neural forecasting models such as DeepAR , which employs the same RNN backbone . • Our meta-forecasting method is competitive with classical local methods , for example beating the winner of the M3 competition , as well as NBEATS ( Oreshkin et al. , 2020 ) , the state-of-the-art method for out-of-sample TS forecasting , while having substantially fewer parameters than the latter method . • We perform an extensive ablation study which shows that the closed-form adaptation , the RNN backbone , and the use of iterated forecasts during training are needed to achieve the best performance . 
Furthermore we show that Meta-GLAR enjoys similar time and memory costs compared to a global one-step ahead RNN with the same backbone architecture , while converging faster and achieving better accuracy . 2 RELATED WORK . Meta-learning has received considerable attention in recent years and several models have been developed primarily for few-shot image classification . Notable examples are Ravi and Larochelle ( 2016 ) ; Finn et al . ( 2017 ) ; Snell et al . ( 2017 ) ; Nichol et al . ( 2018 ) ; Sung et al . ( 2018 ) ; Bertinetto et al . ( 2018 ) . These methods work by adapting to the task at hand before making predictions . Differently from fine-tuning , this adaptation is performed also during a ( meta ) training procedure over a large set of tasks to learn the ( meta ) parameters of the model . However , realistic datasets with the high number of tasks needed by meta-learning methods are rare , hence in commonly used benchmarks like mini-imagenet ( Vinyals et al. , 2016 ) , each classification task is constructed by randomly selecting a small set of classes and related images from a large single-task dataset . This construction is artificial and departs from real-world scenarios . Recently , the work by Tian et al . ( 2020 ) showed that training a neural network on the original single-task dataset and fine-tuning only the last layer on new tasks outperforms many modern meta-learning methods for few-shot image classification . By contrast , popular TS forecasting datasets like M4 fit more naturally into the meta-learning framework , since they already contain a large number of TS/tasks ( up to 105 for M4 ) . Our method relies on a differentiable closed-form solver to perform the local ( or TS-specific ) adaptation . Meta-learning is achieved by solving a task-specific ridge regression problem that maps a deep representation to the target TS in closed-form , while the parameters of the representation are learned by backpropagation through the solver . 
Aside from the original application in few-shot image classification (Bertinetto et al., 2018), differentiable closed-form solvers have been used for other few-shot problems like visual tracking (Zheng et al., 2019), video object segmentation (Liu et al., 2020), spoken intent recognition (Mittal et al., 2020) and spatial regression (Iwata and Tanaka, 2020), while we are not aware of any application in forecasting. Meta-learning in the context of TS forecasting was originally synonymous with model selection or combination of experts (see e.g. Collopy and Armstrong (1992); Lemke and Gabrys (2010); Talagala et al. (2018)). This class of methods builds a meta-model which uses TS features to select the best-performing model, or the best combination of models, to apply to a target TS. One drawback of these methods is that the features are usually manually designed from the data at hand, and the same set of features does not transfer well to other applications. Laptev et al. (2018) train an LSTM neural network (Hochreiter and Schmidhuber, 1997) on the source TS dataset and then fine-tune its last layers separately on each target TS. This approach overcomes the problems related to human-designed features, since the inputs of the network are just the previous TS observations. However, retraining the last layers of the network for each TS can be expensive, especially when dealing with a large number of TS. Additionally, their fine-tuning procedure requires the selection of hyperparameters like the learning rate and the number of optimizer steps. By contrast, our approach adapts only the last linear layer in closed form, requiring a small increase in compute and no additional hyperparameters compared to a standard neural forecasting model, while outperforming the simple fine-tuning approach. More recently, the NBEATS model has shown strong performance both in the standard (Oreshkin et al.
, 2019) and in the meta-learning (Oreshkin et al., 2020) setting. This multi-step-ahead method uses a residual architecture (He et al., 2016) which takes past observations of a single TS as input and outputs point predictions over the whole forecast horizon. Thanks to the residual connections, a forward pass of the network allows it to implicitly adapt to the input out-of-sample TS. However, the final performance is achieved using a large ensemble, and the number of parameters of the model is quite large even when the residual blocks share the same parameters. Our method, although also using ensembles, achieves good accuracy on out-of-sample TS forecasting with significantly fewer parameters. Finally, Iwata and Kumagai (2020) consider the few-shot TS forecasting setting where each task is formed by a small group of closely related TS. Their method, which combines LSTMs and attention, uses the TS in the support set of the task to compute the one-step-ahead forecasts for each TS in the query set. This is different from our approach, where we do not exploit the other TS in the target dataset to compute predictions and we view each TS as a separate task. Our method can be extended to the case considered by Iwata and Kumagai (2020) by performing the closed-form adaptation on all the TS in the task instead of just one. We leave this for future work.

3 PROBLEM FORMULATION

We consider the setting studied by Oreshkin et al. (2020), where a forecasting method can learn global parameters on a source TS dataset D_S to produce accurate forecasts for an out-of-sample TS which belongs to another, target TS dataset D_T. The model can only adapt locally to each TS in D_T, i.e., it cannot use information from the other TS in D_T. We view each TS as a task. Hence, our setting fits a meta-learning framework where D_S is the meta-training set and D_T is the meta-testing set. We will denote a single TS as a tuple (z, x), where z = [z_1, . . .
, z_T] ∈ R^T are the (target) observations and x = [x_1, . . . , x_T] ∈ R^{T×p} is the matrix of covariates. We denote by t_0 ∈ N the split point which divides the context window (or past) z_{1:t_0} = [z_1, . . . , z_{t_0}], x_{1:t_0} = [x_1, . . . , x_{t_0}] from the forecast horizon (or future) z_{t_0+1:T} = [z_{t_0+1}, . . . , z_T], x_{t_0+1:T} = [x_{t_0+1}, . . . , x_T]. We also denote by H = T − t_0 the length of the forecast horizon. We view each TS as a supervised learning task with training set {(x_t, z_t)}_{t=1}^{t_0} and test set {(x_t, z_t)}_{t=t_0+1}^{T}, where an example is the covariates-target pair (x_t, z_t). We assume that the covariates vector x_t can contain some of the previous observations z_{t−1}, z_{t−2}, . . . as time-lagged values (or time-lags). Differently from standard supervised learning tasks, we cannot assume that the examples are independent, due to the temporal dependency. The goal of TS forecasting is to compute predictions ẑ_{t_0+1:T} for the observations in the forecast horizon z_{t_0+1:T} using the covariates x and the observations in the context window z_{1:t_0}. In this work we focus on the accuracy of the predictions, which can be measured for example with the sMAPE metric (Hyndman and Koehler, 2006):

sMAPE = (1/|D|) Σ_{(z,x)∈D} (200/H) Σ_{i=1}^{H} |z_{t_0+i} − ẑ_{t_0+i}| / (|z_{t_0+i}| + |ẑ_{t_0+i}|).   (1)
The authors propose a new meta-learning framework to tackle the zero-shot learning problem for time series data, through the combination of:
1. A standard autoregressive architecture encoding historical information (e.g. DeepAR).
2. An adaptive linear output layer whose weights are calibrated in closed form using the encoder history.
This allows the model to be trained end-to-end on a source task, while automatically performing domain adaptation when applied to a new target task.
SP:0a176b5367df631455b88c25007fcf8e6194a0f0
Meta-Forecasting by combining Global Deep Representations with Local Adaptation
1 INTRODUCTION

Time series (TS) forecasting is of fundamental importance for various applications like marketing, customer/inventory management, and finance (Petropoulos et al., 2020). Classical examples are forecasting the number of daily sales of a product over the next few weeks or the energy production needs in the next few hours. Accurate TS forecasting results in better downstream decision making, with potentially large monetary implications (e.g., Seeger et al. (2016); Faloutsos et al. (2019)). From a machine learning perspective, each TS represents a forecasting task. Conventional approaches for TS forecasting have typically been local, i.e., each TS/task is modeled independently by a forecasting model with relatively few parameters (see Hyndman and Athanasopoulos (2018) for an introductory overview). Despite the modest amount of data used to train local TS forecasting models, they are effective in practice. They have only recently been outperformed by global deep learning strategies, which jointly train a deep neural network on a large set of related TS/tasks. Global models are designed to work well on the set of TS they are trained on, but they perform poorly on out-of-sample TS, i.e., TS which are not present in the training set. For example, Oreshkin et al. (2020) show that DeepAR (Salinas et al., 2020) trained on the M4 dataset (Makridakis et al., 2020) performs poorly on the M3 dataset. Global-local approaches (Sen et al., 2019; Smyl, 2020; Wang et al., 2019), such as the M4 competition winner (Smyl, 2020), exhibit a greater level of specialization, as they learn parameters that are shared by all TS in the training set as well as parameters specific to each TS. However, global-local models are still not able to handle out-of-sample TS, as both types of parameters are learned jointly on a large training set of related TS in a multi-task fashion.
In this work, we tackle the problem of out-of-sample TS forecasting by transferring knowledge from a set of TS, called the source dataset, to another set of TS, called the target dataset. We assume that the source and target datasets share some underlying structure which makes transfer learning possible, although they may contain TS from different domains. Our work can be seen as an instance of meta-learning (Schmidhuber, 1987; Ravi and Larochelle, 2016; Finn et al., 2017), whose goal is to leverage a pool of related tasks to learn how to adapt to a new one with little data. We will refer to a forecasting method performing well in this scenario as a meta-forecasting method (this was called zero-shot transfer learning by Oreshkin et al. (2020)). Such models could in principle be trained on a large set of TS and still produce accurate and fast predictions when applied to out-of-sample TS, potentially combining the inference speed and accuracy of deep learning models with the ease of use of classical local models. Our meta-forecasting method produces one-step-ahead forecasts by combining learned representations with a differentiable closed-form adaptation mechanism inspired by the few-shot image classification method proposed by Bertinetto et al. (2018). Specifically, we propose a class of models which we call Meta Global-Local Auto-Regressive models (Meta-GLAR). Meta-GLAR models compute point forecasts for a single TS in three steps. First, the TS is passed through a representation model to obtain a representation for each time step. Second, a local (i.e., TS-specific) linear model is learned in closed form by solving a ridge regression problem mapping the representations to the observed fraction of the TS. Lastly, the local linear model is applied to the global representation to compute the final predictions.
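The three steps above can be sketched as follows, with a fixed random featurization standing in for the RNN representation (in the actual method the representation is an RNN whose parameters are learned globally by backpropagating through the ridge solution; all names, the toy featurization, and the toy data are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "representation model": maps a window of L past observations to a
# d-dimensional feature vector. In Meta-GLAR this would be an RNN hidden state
# with globally learned parameters.
L, d, lam = 3, 8, 1.0
W_repr = rng.normal(size=(d, L))

def represent(window):
    return np.tanh(W_repr @ np.asarray(window))

# A single TS; the context window z[:t0] is all the model sees for adaptation.
z = np.sin(0.3 * np.arange(40)) + 5.0
t0 = 30

# Step 1: representations phi_t for each time step in the context window.
Phi = np.stack([represent(z[t - L:t]) for t in range(L, t0)])  # (t0-L, d)
y = z[L:t0]                                                     # one-step targets

# Step 2: TS-specific linear model learned in closed form (ridge regression).
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

# Step 3: iterate one-step-ahead forecasts over the horizon, feeding each
# prediction back in as an input.
history = list(z[:t0])
forecasts = []
for _ in range(len(z) - t0):
    zhat = float(represent(history[-L:]) @ w)
    forecasts.append(zhat)
    history.append(zhat)

print(forecasts[:3])
```

In an autodiff framework the `np.linalg.solve` step is differentiable, which is what allows the representation parameters to be trained end-to-end through the local adaptation.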
This work proposes a new forecasting method for jointly learning from a large pool of related time-series. The method, called Meta Global-Local Auto-Regression (Meta-GLAR), learns the mapping from representations produced by an RNN to one-step-ahead forecasts, with the parameters learned across multiple time-series through backpropagation. This work studies the zero-shot transfer learning problem and proposes a method for it. The method is somewhat incremental and the evaluation has some issues; see below for details.
SP:0a176b5367df631455b88c25007fcf8e6194a0f0
FED-$\chi^2$: Secure Federated Correlation Test
1 INTRODUCTION

A correlation test, as the name implies, is the process of examining the correlation between two random variables using observational data. It is a fundamental building block in a wide variety of real-world applications, including feature selection (Zheng et al., 2004), cryptanalysis (Nyberg, 2001), causal graph discovery (Spirtes et al., 2000), empirical finance (Ledoit & Wolf, 2008; Kim & Ji, 2015), medical studies (Kassirer, 1983) and genomics (Wilson et al., 1999; Dudoit et al., 2003). Because the observational data used in correlation tests may contain sensitive information such as genomic information, centralizing the data collection is risky. To address this, we resort to a federated setting in which each client maintains its own data and communicates with a centralized server to calculate a function. Note that the communication should contain as little information as feasible; otherwise, the server may be able to infer sensitive information from the communication transcript. In the present work, we study a representative correlation test, namely the χ2-test, under the federated setting. There are two straightforward methods for conducting a χ2-test in such a context. First, clients can upload their raw data to the centralized server and delegate the test to it. While this method is effective in terms of communication, it entirely exposes the clients' private information. Second, clients may run secure multiparty computation (MPC) under the server's coordination. Thus, clients can jointly run the χ2-test without disclosing their data to the server. However, general-purpose MPC imposes significant computation and communication overhead, which is typically intolerable in a federated setting with computationally limited clients, e.g., mobile devices.
To address this dilemma, we present a federated protocol optimized for the χ2-test that is efficient in both computation and communication and discloses limited information to the server. We begin by recasting the χ2-test as a second frequency moment estimation problem. To approximate the second frequency moment in a federated setting, each client encodes its raw data into a low-dimensional vector via stable random projection (Indyk, 2006; Vempala, 2005; Li, 2008). Such encodings can be aggregated with only summation, allowing clients to leverage secure aggregation (Bonawitz et al., 2017; Bell et al., 2020) to aggregate the encodings, and the server to decode them to approximate the second frequency moment. Because secure aggregation conceals each client's individual update within the aggregated global update, the server learns only limited information about the clients' data. Our evaluation on four synthetic datasets and 16 real-world datasets shows that FED-χ2 can replace the centralized χ2-test with good accuracy and low computation overhead. Additionally, we analyze FED-χ2 in three real-world use cases: feature selection, cryptanalysis, and online false discovery rate control. The results show that FED-χ2 can achieve performance comparable with the centralized χ2-test and can withstand up to 20% of clients dropping out with minor influence on the accuracy. In summary, we make the following contributions:
• We propose FED-χ2, the first secure federated χ2-test protocol. FED-χ2 is computation- and communication-efficient and leaks much less information than trivially deploying secure aggregation.
• FED-χ2 decomposes the χ2-test into a frequency moments estimation problem that can easily be encoded/decoded using stable projection and secure aggregation techniques.
• We give a formal security proof and a utility analysis of FED-χ2.
• We evaluate FED-χ2 in real-world use cases, and the findings suggest that FED-χ2 can substitute for the centralized χ2-test with comparable accuracy, and that FED-χ2 can tolerate up to 20% of clients dropping out with a minor accuracy drop.

2 RELATED WORK

Bonawitz et al. (2017) proposed the well-celebrated secure aggregation protocol as a low-cost way to calculate linear functions in a federated setting. It has seen many variants and improvements since then. For instance, Truex et al. (2019) and Xu et al. (2019) employed advanced crypto tools for secure aggregation, such as threshold homomorphic encryption and functional encryption. So et al. (2021) proposed TURBOAGG, which combines secret sharing with erasure codes for better dropout tolerance. To improve communication efficiency, Bell et al. (2020) and Choi et al. (2020) replaced the complete graph in secure aggregation with either a sparse random graph or a low-degree graph. Secure aggregation is deployed in a variety of applications. Agarwal et al. (2018) added binomial noise to local gradients, resulting in both differential privacy and communication efficiency. Wang et al. (2020) replaced the binomial noise with discrete Gaussian noise, which is shown to exhibit better composability. Kairouz et al. (2021) proved that the sum of discrete Gaussians is close to a discrete Gaussian, thus discarding the common-random-seed assumption from Wang et al. (2020). The above three works all incorporate secure aggregation in their protocols to lower the noise scale required for differential privacy. Chen et al. (2020) added an extra public parameter to each client to force them to train in the same way, allowing for the detection of malicious clients during aggregation. Nevertheless, designing secure federated correlation tests, despite its importance in real-world scenarios, has not been explored by existing research in this field. On the other end of the spectrum, Wang et al.
(2021) proved that stable projection is differentially private if the projection matrix is secret. In our protocol, the projection matrix is public information; hence FED-χ2 does not consider the differential privacy guarantee.

3 FEDERATED CORRELATION TEST WITH MINIMAL LEAKAGE

In this section, we elaborate on the design of FED-χ2, a secure federated protocol for the χ2-test. Sec. 3.1 first formalizes the problem, establishes the notation system, and introduces the threat model. In Sec. 3.2, we recast the χ2-test as a second frequency moment estimation problem in the federated setting; consequently, we are able to leverage stable projection to encode each client's local information (Sec. 3.3), and then aggregate the encodings using secure aggregation (Sec. 3.4). Sec. 3.5, 3.6, and 3.7 present the security proof, utility analysis, communication analysis, and computation analysis of FED-χ2.

3.1 PROBLEM SETUP

We now formulate the problem of the federated correlation test and establish the notation system. We use [n] to denote {1, · · · , n}. We denote vectors with bold lower-case letters (e.g., a, b, c) and matrices with bold upper-case letters (e.g., A, B, C). We consider a population of n clients C = {c_i}_{i∈[n]}. Each client has one share of local data composed of the triplets D_i = {(x, y, v^{(i)}_{xy})}, x ∈ X, y ∈ Y, v^{(i)}_{xy} ∈ {−M, · · · , M}, where x and y are categories of the contingency table, v^{(i)}_{xy} is the observed count of the categories x and y in the local contingency table of the i-th client, |X| = m_x and |Y| = m_y are finite domains, and M is the maximum value |v^{(i)}_{xy}| can take. The global dataset is defined as D = {(x, y, v_{xy}) : v_{xy} = Σ_{i∈[n]} v^{(i)}_{xy}}. We focus on the federated χ2-test, and the data in the contingency table is discrete.
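Under this notation, the global dataset is just the entry-wise sum of the clients' local contingency tables; a minimal sketch (toy data and names are our own):

```python
from collections import Counter

# Local data of n = 3 clients: D_i maps a category pair (x, y) to the local
# count v^(i)_xy. Categories: X = {0, 1}, Y = {0, 1}.
local_tables = [
    Counter({(0, 0): 4, (0, 1): 1, (1, 0): 2, (1, 1): 3}),
    Counter({(0, 0): 5, (0, 1): 2, (1, 1): 1}),
    Counter({(0, 1): 3, (1, 0): 4, (1, 1): 2}),
]

# Global dataset D: v_xy = sum_i v^(i)_xy. (In FED-chi2 this sum is what
# secure aggregation computes without revealing the individual D_i.)
global_table = Counter()
for D_i in local_tables:
    global_table.update(D_i)

print(dict(global_table))
# → {(0, 0): 9, (0, 1): 6, (1, 0): 6, (1, 1): 6}
```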
For ease of presentation, we define the marginal statistics v_x = Σ_{y∈[|Y|]} v_{xy}, v_y = Σ_{x∈[|X|]} v_{xy}, and v = Σ_{x∈[|X|], y∈[|Y|]} v_{xy}. Besides, we define v̄_{xy} = (v_x × v_y) / v, denoting the expectation of v_{xy} if x and y are uncorrelated. We define m = m_x m_y and use an indexing function I : [m_x] × [m_y] → [m] to obtain a uniform index given the indices of each variable. A centralized server S calculates the χ2-test statistic

s_{χ2}(D) = Σ_{x∈[|X|], y∈[|Y|]} (v_{xy} − v̄_{xy})^2 / v̄_{xy}

on the global dataset to decide whether X and Y are correlated, without collecting the raw data from clients. Overall, using MPC to conduct secure correlation tests in a federated scenario is highly expensive and impractical (Boyle et al., 2015; Damgård et al., 2012). Hence, in the present work, we trade off accuracy for efficiency, as long as the estimation error is small with high probability. Formally, if FED-χ2 outputs ŝ_{χ2}, whose corresponding standard centralized χ2-test output is s_{χ2}, the following accuracy requirement should be satisfied for small ε and δ:

P[(1 − ε) s_{χ2} ≤ ŝ_{χ2} ≤ (1 + ε) s_{χ2}] ≥ 1 − δ.

Threat Model. We assume that the centralized server S is honest but curious. It honestly follows the protocol due to regulatory or reputational pressure, but is curious to discover extra private information from clients' legitimate updates for profit or surveillance purposes. As a result, client updates should contain as little sensitive information as feasible. We want to emphasize that, while the server may probe the privacy of clients, the server will honestly follow the protocol due to regulatory or reputational pressure. The server won't provide adversarial vectors to the clients. On the other hand, we assume honest clients. Specifically, we do not consider client-side adversarial attacks (e.g., data poisoning attacks (Bagdasaryan et al., 2020; Bhagoji et al., 2019)).
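The centralized statistic and its expected counts can be computed directly from the global contingency table; a small sketch with our own toy data:

```python
def chi2_statistic(table, mx, my):
    """s_chi2(D) = sum_xy (v_xy - vbar_xy)^2 / vbar_xy, where
    vbar_xy = v_x * v_y / v is the expected count under independence."""
    # Marginal statistics v_x, v_y, and grand total v.
    v_x = [sum(table[x][y] for y in range(my)) for x in range(mx)]
    v_y = [sum(table[x][y] for x in range(mx)) for y in range(my)]
    v = sum(v_x)
    s = 0.0
    for x in range(mx):
        for y in range(my):
            vbar = v_x[x] * v_y[y] / v
            s += (table[x][y] - vbar) ** 2 / vbar
    return s

# A 2x2 global contingency table (rows: categories of X, columns: of Y).
table = [[9, 6],
         [6, 6]]
print(chi2_statistic(table, 2, 2))
```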
However, we allow a small portion of clients to drop out during the execution. Also, as we mentioned in Sec. 2, we do not consider the differential privacy guarantee; see further clarification in Appendix A.

3.2 FROM CORRELATION TEST TO FREQUENCY MOMENTS ESTIMATION

We first recast the correlation test as a second frequency moment estimation problem, as defined below. Given a set of key-value pairs S = {k_i, v_i}_{i∈[n]}, we re-organize it into a histogram H = {k_j, v_j = Σ_{k_i=k_j, i∈[n]} v_i}, and estimate the α-th frequency moment as F_α = Σ_j v_j^α. The χ2-test can thus be recast as a 2nd frequency moment estimation problem as follows:

s_{χ2}(D) = Σ_{x,y} (v_{xy} − v̄_{xy})^2 / v̄_{xy} = Σ_{x,y} ((v_{xy} − v̄_{xy}) / √v̄_{xy})^2.

In the federated setting, each client c_i holds a local dataset D_i = {(x, y, v^{(i)}_{xy})} and computes a vector u_i with m elements, where u_i[I(x, y)] = (v^{(i)}_{xy} − v̄_{xy}/n) / √v̄_{xy}. Thus, the challenge in the federated χ2-test becomes calculating the following quantity:

s_{χ2}(D) = Σ_{x,y} ((v_{xy} − v̄_{xy}) / √v̄_{xy})^2 = ||Σ_{i∈[n]} u_i||_2^2.
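The identity above can be exercised end-to-end in a small numerical sketch, using a public Gaussian (2-stable) projection and simplified pairwise masks standing in for the secure-aggregation protocol (all names and the toy data are our own; the real protocol derives masks via cryptographic key agreement and handles dropouts):

```python
import numpy as np

rng = np.random.default_rng(1)

n, mx, my = 3, 2, 2
m, k = mx * my, 2000  # flattened table size and projection dimension

# Local counts v^(i)_xy for each client; vbar_xy is computed from the global
# marginals (assumed known to all parties for this sketch).
V = rng.integers(0, 10, size=(n, mx, my)).astype(float)
Vg = V.sum(axis=0)
vx, vy, v = Vg.sum(axis=1), Vg.sum(axis=0), Vg.sum()
Vbar = np.outer(vx, vy) / v

# Each client builds u_i[I(x,y)] = (v^(i)_xy - vbar_xy/n) / sqrt(vbar_xy).
U = ((V - Vbar / n) / np.sqrt(Vbar)).reshape(n, m)

# Exact identity: s_chi2(D) = || sum_i u_i ||_2^2.
s_exact = float(np.sum((Vg - Vbar) ** 2 / Vbar))
assert np.isclose(np.linalg.norm(U.sum(axis=0)) ** 2, s_exact)

# Public 2-stable (Gaussian) projection matrix shared by all clients.
R = rng.normal(size=(k, m))
E = U @ R.T                      # client encodings e_i = R u_i

# Simplified secure aggregation: pairwise masks r_ij = -r_ji cancel in the
# sum, so the server sees masked encodings but recovers the exact total.
masks = np.zeros_like(E)
for i in range(n):
    for j in range(i + 1, n):
        r = rng.normal(size=k)
        masks[i] += r
        masks[j] -= r
agg = (E + masks).sum(axis=0)    # equals R @ sum_i u_i (up to float error)

# Server decodes: for N(0,1) entries, E[||R u||^2] = k * ||u||^2.
s_est = float(np.dot(agg, agg) / k)
print(s_exact, s_est)
```

With k = 2000 rows, the relative error of the estimate concentrates around sqrt(2/k) ≈ 3%, illustrating the accuracy requirement P[(1 − ε)s ≤ ŝ ≤ (1 + ε)s] ≥ 1 − δ from Sec. 3.1.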
The paper proposes a federated $\chi^{2}$-test protocol, presented as a technique that is computation- and communication-efficient and leaks much less information than the alternatives. The claim is that the proposed technique can tolerate up to 20\% of clients dropping out with a minor accuracy drop. The proposed technique does not include a differential privacy guarantee.
SP:250105ee671ab78d23446b9c306ca81626c243db
FED-$\chi^2$: Secure Federated Correlation Test
1 INTRODUCTION . Correlation test , as the name implies , is the process of examining the correlation between two random variables using observational data . It is a fundamental building block in a wide variety of real-world applications , including feature selection ( Zheng et al. , 2004 ) , cryptanalysis ( Nyberg , 2001 ) , causal graph discovery ( Spirtes et al. , 2000 ) , empirical finance ( Ledoit & Wolf , 2008 ; Kim & Ji , 2015 ) , medical studies ( Kassirer , 1983 ) and genomics ( Wilson et al. , 1999 ; Dudoit et al. , 2003 ) . Because the observational data used in correlation tests may contain sensitive information such as genomic information , centralizing the data collection is risky . To address , we resort to a federated setting in which each client maintains its own data and communicates with a centralized server to calculate a function . Note that the communication should contain as little information as feasible . Otherwise , the server may be able to infer sensitive information from the communication transcript . In the present work , we study a representative correlation test , namely χ2-test , under the federated setting . There are two straightforward methods for conducting χ2-test in such a context . First , clients can upload their raw data to the centralized server and delegate the test to it . While this method is effective in terms of communication , it entirely exposes the clients ’ private information . Second , clients may run secure multiparty computation ( MPC ) under the server ’ s coordination . Thus , clients can jointly run χ2-test without disclosing their data to the server . However , general-purpose MPC imposes significant computation and communication overhead , which is typically intolerable in a federated setting with computationally limited clients , e.g. , mobile devices . 
To address this dilemma, we present a federated protocol optimized for the $\chi^2$-test that is computation- and communication-efficient and discloses only limited information to the server. We begin by recasting the $\chi^2$-test as a second frequency moment estimation problem. To approximate the second frequency moment in a federated setting, each client encodes its raw data into a low-dimensional vector via stable random projection (Indyk, 2006; Vempala, 2005; Li, 2008). Such encodings can be aggregated by summation alone, allowing clients to leverage secure aggregation (Bonawitz et al., 2017; Bell et al., 2020) to aggregate the encodings and the server to decode them into an approximation of the second frequency moment. Because secure aggregation conceals each client's individual update within the aggregated global update, the server learns only limited information about the clients' data. Our evaluation on four synthetic datasets and 16 real-world datasets shows that FED-$\chi^2$ can replace the centralized $\chi^2$-test with good accuracy and low computation overhead. Additionally, we analyze FED-$\chi^2$ in three real-world use cases: feature selection, cryptanalysis, and online false discovery rate control. The results show that FED-$\chi^2$ achieves performance comparable to the centralized $\chi^2$-test and can withstand up to 20% of clients dropping out with only minor influence on accuracy. In summary, we make the following contributions: We propose FED-$\chi^2$, the first secure federated $\chi^2$-test protocol. FED-$\chi^2$ is computation- and communication-efficient and leaks much less information than trivially deploying secure aggregation. FED-$\chi^2$ decomposes the $\chi^2$-test into a frequency moment estimation that can easily be encoded/decoded using stable projection and secure aggregation techniques. We give a formal security proof and utility analysis of FED-$\chi^2$.
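To illustrate the encode-then-aggregate idea, here is a minimal sketch in which a Gaussian (2-stable) random projection encodes each client's vector, a plain sum stands in for secure aggregation, and the server estimates the second frequency moment of the aggregate. All dimensions and data below are illustrative assumptions, not the paper's actual parameters:

```python
import random

random.seed(0)

def project(u, R):
    """Encode vector u into k dimensions with a Gaussian (2-stable) matrix R."""
    return [sum(R[j][t] * u[t] for t in range(len(u))) for j in range(len(R))]

m, k = 50, 400                                   # original and projected dimensions
R = [[random.gauss(0, 1) for _ in range(m)] for _ in range(k)]

u1 = [random.uniform(-1, 1) for _ in range(m)]   # client 1's local vector
u2 = [random.uniform(-1, 1) for _ in range(m)]   # client 2's local vector

# Encodings are linear, so the server only needs the *sum* of encodings
# (obtainable via secure aggregation) to estimate ||u1 + u2||_2^2.
agg = [a + b for a, b in zip(project(u1, R), project(u2, R))]
f2_est = sum(v * v for v in agg) / k             # second frequency moment estimate
f2_true = sum((a + b) ** 2 for a, b in zip(u1, u2))
```

With $k = 400$ projected coordinates, the relative estimation error is on the order of $\sqrt{2/k} \approx 7\%$; raising $k$ tightens the estimate at the cost of more communication.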
We evaluate FED-$\chi^2$ in real-world use cases, and the findings suggest that FED-$\chi^2$ can substitute for the centralized $\chi^2$-test with comparable accuracy and can tolerate up to 20% client dropout with a minor accuracy drop. 2 RELATED WORK. Bonawitz et al. (2017) proposed the celebrated secure aggregation protocol as a low-cost way to calculate linear functions in a federated setting. It has seen many variants and improvements since then. For instance, Truex et al. (2019) and Xu et al. (2019) employed advanced cryptographic tools for secure aggregation, such as threshold homomorphic encryption and functional encryption. So et al. (2021) proposed TURBOAGG, which combines secret sharing with erasure codes for better dropout tolerance. To improve communication efficiency, Bell et al. (2020) and Choi et al. (2020) replaced the complete graph in secure aggregation with either a sparse random graph or a low-degree graph. Secure aggregation is deployed in a variety of applications. Agarwal et al. (2018) added binomial noise to local gradients, achieving both differential privacy and communication efficiency. Wang et al. (2020) replaced the binomial noise with discrete Gaussian noise, which is shown to exhibit better composability. Kairouz et al. (2021) proved that the sum of discrete Gaussians is close to a discrete Gaussian, thus discarding the common random seed assumption from Wang et al. (2020). The above three works all incorporate secure aggregation in their protocols to lower the noise scale required for differential privacy. Chen et al. (2020) added an extra public parameter to each client to force them to train in the same way, allowing malicious clients to be detected during aggregation. Nevertheless, designing secure federated correlation tests, despite their importance in real-world scenarios, has not been explored by existing research in this field. On the other end of the spectrum, Wang et al.
(2021) proved that stable projection is differentially private if the projection matrix is kept secret. In our protocol, the projection matrix is public information; hence FED-$\chi^2$ does not provide a differential privacy guarantee. 3 FEDERATED CORRELATION TEST WITH MINIMAL LEAKAGE. In this section, we elaborate on the design of FED-$\chi^2$, a secure federated protocol for the $\chi^2$-test. Sec. 3.1 first formalizes the problem, establishes the notation, and introduces the threat model. In Sec. 3.2, we recast the $\chi^2$-test as a second frequency moment estimation problem in the federated setting; consequently, we are able to leverage stable projection to encode each client's local information (Sec. 3.3) and then aggregate the encodings using secure aggregation (Sec. 3.4). Secs. 3.5, 3.6, and 3.7 present the security proof, utility analysis, communication analysis, and computation analysis of FED-$\chi^2$. 3.1 PROBLEM SETUP. We now formulate the problem of the federated correlation test and establish the notation. We use $[n]$ to denote $\{1, \cdots, n\}$. We denote vectors with bold lower-case letters (e.g., $\mathbf{a}, \mathbf{b}, \mathbf{c}$) and matrices with bold upper-case letters (e.g., $\mathbf{A}, \mathbf{B}, \mathbf{C}$). We consider a population of $n$ clients $C = \{c_i\}_{i \in [n]}$. Each client holds one share of local data composed of triplets $D_i = \{(x, y, v^{(i)}_{xy})\}$, $x \in \mathcal{X}$, $y \in \mathcal{Y}$, $v^{(i)}_{xy} \in \{-M, \cdots, M\}$, where $x$ and $y$ are categories of the contingency table, $v^{(i)}_{xy}$ is the observed count of categories $x$ and $y$ in the local contingency table of the $i$th client, $|\mathcal{X}| = m_x$ and $|\mathcal{Y}| = m_y$ are finite domains, and $M$ is the maximum value $|v^{(i)}_{xy}|$ can take. The global dataset is defined as $D = \{(x, y, v_{xy}) : v_{xy} = \sum_{i \in [n]} v^{(i)}_{xy}\}$. We focus on the federated $\chi^2$-test, and the data in the contingency table are discrete.
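The indexing function $I$ is only required to be a bijection from cells to $[m]$; a minimal sketch under the (assumed, purely illustrative) row-major convention $I(x, y) = (x-1)\,m_y + y$ is:

```python
def index(x, y, my):
    """Uniform 1-based index I(x, y) for an m_x-by-m_y contingency table,
    assuming a row-major layout (an illustrative choice, not mandated here)."""
    return (x - 1) * my + y

# A 2x3 table: the six cells map bijectively onto 1..6
mx, my = 2, 3
print([index(x, y, my) for x in range(1, mx + 1) for y in range(1, my + 1)])
# → [1, 2, 3, 4, 5, 6]
```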
For ease of presentation, we define the marginal statistics $v_x = \sum_{y \in \mathcal{Y}} v_{xy}$, $v_y = \sum_{x \in \mathcal{X}} v_{xy}$, and $v = \sum_{x \in \mathcal{X}, y \in \mathcal{Y}} v_{xy}$. Besides, we define $\bar{v}_{xy} = \frac{v_x \times v_y}{v}$, denoting the expectation of $v_{xy}$ if $x$ and $y$ are uncorrelated. We define $m = m_x m_y$ and use an indexing function $I : [m_x] \times [m_y] \rightarrow [m]$ to obtain a uniform indexing given the indexing of each variable. A centralized server $S$ calculates the $\chi^2$-test statistic $s_{\chi^2}(D) = \sum_{x \in \mathcal{X}, y \in \mathcal{Y}} \frac{(v_{xy} - \bar{v}_{xy})^2}{\bar{v}_{xy}}$ on the global dataset to decide whether $X$ and $Y$ are correlated, without collecting the raw data from the clients. Overall, using MPC to conduct secure correlation tests in a federated scenario is highly expensive and impractical (Boyle et al., 2015; Damgård et al., 2012). Hence, in the present work, we trade off accuracy for efficiency, as long as the estimation error is small with high probability. Formally, if FED-$\chi^2$ outputs $\hat{s}_{\chi^2}$, whose corresponding standard centralized $\chi^2$-test output is $s_{\chi^2}$, the following accuracy requirement should be satisfied with small $\epsilon$ and $\delta$: $P[(1-\epsilon)\, s_{\chi^2} \leq \hat{s}_{\chi^2} \leq (1+\epsilon)\, s_{\chi^2}] \geq 1-\delta$. Threat Model. We assume that the centralized server $S$ is honest but curious. It honestly follows the protocol due to regulatory or reputational pressure but is curious to discover extra private information from clients' legitimate updates for profit or surveillance purposes. As a result, client updates should contain as little sensitive information as feasible. We emphasize that, while the server may probe the clients' privacy, it will honestly follow the protocol due to regulatory or reputational pressure; the server won't provide adversarial vectors to the clients. On the other hand, we assume honest clients. Specifically, we do not consider client-side adversarial attacks (e.g., data poisoning attacks (Bagdasaryan et al., 2020; Bhagoji et al., 2019)).
However, we allow a small portion of clients to drop out during the execution. Also, as mentioned in Sec. 2, we do not consider a differential privacy guarantee; see further clarification in Appendix A. 3.2 FROM CORRELATION TEST TO FREQUENCY MOMENTS ESTIMATION. We first recast the correlation test as a second frequency moment estimation problem, defined as follows. Given a set of key-value pairs $S = \{(k_i, v_i)\}_{i \in [n]}$, we re-organize it into a histogram $H = \{(k_j, v_j = \sum_{k_i = k_j,\, i \in [n]} v_i)\}$ and estimate the $\alpha$th frequency moment as $F_\alpha = \sum_j v_j^\alpha$. The $\chi^2$-test can thus be recast as a 2nd frequency moment estimation problem as follows: $s_{\chi^2}(D) = \sum_{x,y} \frac{(v_{xy} - \bar{v}_{xy})^2}{\bar{v}_{xy}} = \sum_{x,y} \left( \frac{v_{xy} - \bar{v}_{xy}}{\sqrt{\bar{v}_{xy}}} \right)^2$. In the federated setting, each client $c_i$ holds a local dataset $D_i = \{(x, y, v^{(i)}_{xy})\}$ and computes a vector $\mathbf{u}_i$ with $m$ elements, where $\mathbf{u}_i[I(x,y)] = \frac{v^{(i)}_{xy} - \bar{v}_{xy}/n}{\sqrt{\bar{v}_{xy}}}$. Thus, the challenge in the federated $\chi^2$-test becomes calculating the following equation: $s_{\chi^2}(D) = \sum_{x,y} \left( \frac{v_{xy} - \bar{v}_{xy}}{\sqrt{\bar{v}_{xy}}} \right)^2 = \left\| \sum_{i \in [n]} \mathbf{u}_i \right\|_2^2$.
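As a sanity check, the decomposition of the statistic into the squared norm of the summed client vectors can be verified numerically on toy data. The two local tables below are hypothetical, and we assume (for illustration only) that the global expectations $\bar{v}_{xy}$ are available when forming each client vector; how clients obtain them is not specified in this excerpt.

```python
import math

# Two clients' local 2x2 contingency tables (hypothetical counts)
local = [[[20, 5], [5, 10]],
         [[10, 5], [5, 20]]]
n = len(local)

# Global counts, marginals, and expected counts vbar_xy = vx * vy / v
global_t = [[sum(c[x][y] for c in local) for y in range(2)] for x in range(2)]
v = sum(map(sum, global_t))
vx = [sum(global_t[x]) for x in range(2)]
vy = [sum(global_t[x][y] for x in range(2)) for y in range(2)]
vbar = [[vx[x] * vy[y] / v for y in range(2)] for x in range(2)]

# Each client's vector: u_i[I(x,y)] = (v_xy^(i) - vbar_xy / n) / sqrt(vbar_xy)
us = [[(c[x][y] - vbar[x][y] / n) / math.sqrt(vbar[x][y])
       for x in range(2) for y in range(2)] for c in local]

agg = [sum(col) for col in zip(*us)]           # what secure aggregation reveals
s_federated = sum(a * a for a in agg)          # ||sum_i u_i||_2^2
s_central = sum((global_t[x][y] - vbar[x][y]) ** 2 / vbar[x][y]
                for x in range(2) for y in range(2))
```

For these tables, `s_federated` matches `s_central` to floating-point precision, since summing the $\mathbf{u}_i$ coordinate-wise recovers $(v_{xy} - \bar{v}_{xy})/\sqrt{\bar{v}_{xy}}$ exactly.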
This paper discusses the problem of conducting correlation tests with sensitive data separately collected from $n$ clients, where centralizing data collection is risky but the available secure multiparty computation is computationally costly. The proposed test adapts the classical chi-square test by first rewriting it as a second frequency moment estimation using $n$ vectors, one from each client. The proposed test prevents data recovery during the test by letting each client project its vector to a lower dimension. The higher the dimension, the more accurate the test, but also the higher the computation cost.
This paper introduces a federated analytics technique for computing the Pearson correlation for two random variables. The method is based on repeated uses of secure aggregation, as well as stable random projections. The protocol is argued to be secure in a semi-honest security model.
ClsVC: Learning Speech Representations with two different classification tasks.
1 INTRODUCTION. Voice conversion (VC) is an exciting topic committed to converting one utterance of a source speaker into another utterance that keeps the content of the original while replacing the vocal features with those of a target speaker. Up to now, many methods have been applied to VC successfully. Commonly, these methods can be roughly categorized into two classes, i.e., parallel VC and non-parallel VC (Mohammadi & Kain, 2017). Specifically, parallel VC means that model training requires a parallel corpus, which is unnecessary for non-parallel VC. Recently, more researchers have focused on solutions for non-parallel VC, since it is not easy to collect many paired source-target speech datasets. Early VC systems, like the Gaussian Mixture Model (GMM) (Stylianou et al., 1998; Toda et al., 2007), needed a lot of parallel data for model training, and the generated speech quality was not good enough. With the advance of deep learning, a variety of novel VC methods have been proposed in recent years. Among them, GAN-based models are among the most popular (Goodfellow et al., 2014; Hsu et al., 2017a; Kaneko & Kameoka, 2018; Kaneko et al., 2019a; Kameoka et al., 2018; Kaneko et al., 2019b), as they can learn a global generative distribution of the target speech without explicit approximation. These GAN-based models jointly train a generator and a discriminator. An adversarial loss derived from the discriminator is used to encourage the generator outputs to be indistinguishable from real speech. Thanks to cycle-consistency training, GAN-based VC models can be trained with non-parallel speech datasets. Besides, learning discrete speech representations has also attracted much attention. Vector Quantization (VQ) is an important signal compression method that can quantize continuous data into discrete data.
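As a minimal illustration of VQ (a generic nearest-neighbor sketch, not the implementation of any specific model cited above), each continuous frame is mapped to the index of its closest codebook vector; the codebook and frames below are made-up toy values:

```python
def quantize(frame, codebook):
    """Map a continuous frame to the index of its nearest codebook vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: dist2(frame, codebook[k]))

# Toy 2-D codebook and frames (hypothetical values)
codebook = [[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]]
frames = [[0.1, -0.2], [0.9, 1.1], [-0.8, 0.7]]
codes = [quantize(f, codebook) for f in frames]
print(codes)  # → [0, 1, 2]
```

The discrete `codes` sequence is what VQ-based VC models treat as (approximately) content information, discarding fine-grained variation within each codebook cell.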
Previous studies have confirmed that the discrete data quantized from continuous speech is closely related to phoneme information (Chorowski et al., 2019). Recently, VQVC (Wu & Lee, 2020) was proposed to learn to disentangle content and speaker information with a reconstruction loss only. Then, VQVC+ (Wu et al., 2020) was soon presented to improve the conversion performance of VQVC by adding a U-Net architecture to the auto-encoder-based VC system. To further improve the performance of disentangling content and speaker information, many other existing studies have been combined with VQ, such as VQ-Wav2Vec, VQ-VAE and VQ-CPC (Baevski et al., 2019; Ding & Gutierrez-Osuna, 2019; van Niekerk et al., 2020). There is also another line of research focusing on learning latent representations with autoencoders. In particular, the Variational Auto-Encoder (VAE) is the most famous. The network structure of a VAE contains an encoder and a decoder, and the core idea is very clear: the encoder learns a specific latent space from input speech and the decoder outputs a reconstructed speech from this latent space. In this process, the VAE focuses on how to force the encoder to learn a specific latent space. So far, many VAE-based models have been successfully applied to VC (Hsu et al., 2016; van den Oord et al., 2017; Hsu et al., 2017b; Ding & Gutierrez-Osuna, 2019; Hsu et al., 2017a). In addition, AutoVC (Qian et al., 2019) is another successful application of autoencoders. Through ingenious experimental design, AutoVC uses two different encoders to learn content and speaker information, respectively, so that the model can achieve distribution-matching style transfer with only a self-reconstruction loss. Unfortunately, in the field of VC, all the models mentioned above have their inherent disadvantages.
For example, GAN-based models can usually achieve a good conversion effect and ensure the matching between the generated data and the input data, but it is well known that the training of GANs is very unstable. On the contrary, the training of VQVC is simple and fast, but the quality of the audio produced by this method is very poor. This may be because the discrete speech representations inevitably lose some content information. In addition, although VAE-based models also have a great conversion effect, they cannot guarantee distribution matching. AutoVC is a great study; its training is very simple and achieves state-of-the-art results. However, in order to realize style conversion, it has to introduce a pre-trained speaker encoder. Given these existing methods, we naturally wonder whether there is a new solution that achieves distribution matching like AutoVC and GANs, trains as easily as VQVC and VAE, disentangles content and speaker information with only one encoder as VQ does, and also performs better in voice conversion and in decoupling linguistic and timbre information from speech. In this paper, we propose a novel voice conversion framework to meet all of the above requirements. Specifically, our model is similar to a VAE: an autoencoder is the main framework of our model, and two different types of classification tasks are applied to force our model to separate content and speaker information correctly. Here, the two classification tasks refer to a general classification task and an adversarial classification task, respectively. The goal of the general classification task is to identify the features related to the speaker as accurately as possible, that is, the speaker information. The latter is designed to eliminate speaker information in the latent space to obtain speaker-independent features, that is, content information. Experiments are carried out on the VCTK dataset.
Objective and subjective evaluations demonstrate that the proposed method outperforms VQVC, AutoVC, VQ-VAE and StarGAN-VC in terms of naturalness and speaker similarity. 2 BACKGROUND. In mathematical statistics, if we already know the joint probability density function of $X$ and $Y$, we can easily find the marginal probability density functions of $X$ and $Y$ respectively. Formally, if $(x, y) \sim p(x, y)$ is known, we can get the marginal distributions of $X$ and $Y$ by the following formula: $f_X(x) = \int p(x, y)\,dy$, $f_Y(y) = \int p(x, y)\,dx$ (1). Further, under some conditions, although the closed form of the joint probability density function $p(x, y)$ in Eq. (1) is generally unknown, it is still feasible for a neural network to learn the marginal distributions of $x$ and $y$ from input samples $z$ when each $z$ corresponds to a unique pair $(x, y)$. Mutual information (MI) is a crucial indicator for measuring the dependence between two different variables. Recently, many MI estimators have been successfully applied to constrain neural networks to disentangle different components of the input data. MI can be formulated as $I(X, Y) = \int_X \int_Y P(X, Y) \log \frac{P(X, Y)}{P(X) P(Y)}$ (2), where $P(X)$ and $P(Y)$ are the marginal distributions of $X$ and $Y$ respectively, and $P(X, Y)$ denotes their joint distribution. Since it is hard to obtain the required distribution $P(X, Y)$, many studies focus on proposing a sample-based MI lower bound (or upper bound) to get a computable approximation (Moon & Hero, 2014; Hjelm et al., 2018; Cheng et al., 2020). Recently, Yuan et al. (2020) introduced a new MI estimator to learn content and style information from speech for voice conversion.
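For discrete variables, the integrals in the MI definition above become sums, and MI can be computed exactly from a joint distribution table; the two toy tables below (independent vs. perfectly dependent variables) are illustrative:

```python
import math

def mutual_information(joint):
    """I(X;Y) in nats for a discrete joint distribution given as a 2-D table."""
    px = [sum(row) for row in joint]                 # marginal P(X)
    py = [sum(col) for col in zip(*joint)]           # marginal P(Y)
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:                                # 0 * log(0) terms vanish
                mi += p * math.log(p / (px[i] * py[j]))
    return mi

# Independent variables: I(X;Y) = 0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))   # → 0.0
# Perfectly dependent variables: I(X;Y) = log 2
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
```

Sample-based estimators are needed precisely because real models only see draws from $P(X, Y)$, not the table itself.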
Specifically, they proposed a novel MI-based learning objective to encourage a content encoder to output the content embedding and guide a speaker encoder to output the speaker embedding. Inspired by this, we propose a new, simple and more effective framework for learning latent speech representations. Gradient Reversal Layer (GRL). The Gradient Reversal Layer (GRL) was first proposed to address the issue of domain adaptation (Ganin & Lempitsky, 2015); it aims to force a model to output domain-shared features that are independent of domains. Specifically, the GRL is often located between an encoder and a domain classifier. During forward propagation, the GRL acts as an identity transform. During backpropagation, the GRL takes the gradient from the subsequent layer, multiplies it by $-1$, and passes it to the preceding layer, so that the encoder and the domain classifier have completely opposite optimization objectives. 3 METHOD. Firstly, for every speech $x$, we use a content embedding $C_x$ to represent linguistic information and a speaker embedding $S_x$ to represent timbre and style information. $U$ denotes the set of speakers. The following two theorems are the premise of our framework: Theorem 3.1. The content embedding $C_x$ and the speaker identity $U$ are independent of each other. In addition, the probability of each speaker's speech being selected is the same. Formally, $P(U = u \mid C_x) = P(U = u) = \text{constant}$, regardless of the speaker identity $u$. Theorem 3.2. The speaker embedding $S_x$ and the speaker identity $U$ are in one-to-one correspondence. That is, for a speaker $u$ who produced speech $x$, $P(U = u \mid S_x) = 1$, and for any other speaker $v$ ($v \neq u$), $P(U = v \mid S_x) = 0$. 3.1 PROBLEM FORMULATION. Given a dataset of multiple speakers and their audio recordings $X$, where speaker $u$ has $N_u$ audio recordings. Formally, for each speaker $u$, $X_u = \{x^u_i\}_{i=1}^{N_u}$.
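The GRL behavior described above can be sketched without any deep learning framework: identity in the forward direction, sign-flipped (and optionally $\lambda$-scaled) gradient in the backward direction. This is a conceptual sketch, not the autograd-integrated implementation a real model would use:

```python
# Minimal framework-free sketch of a Gradient Reversal Layer.
class GradReverse:
    def __init__(self, lam=1.0):
        self.lam = lam                   # scaling factor for the reversed gradient

    def forward(self, x):
        return x                         # identity in the forward pass

    def backward(self, grad_from_classifier):
        # Flip the sign so the encoder is optimized to *fool* the classifier,
        # while the classifier itself still trains normally.
        return [-self.lam * g for g in grad_from_classifier]

grl = GradReverse(lam=1.0)
features = [0.3, -1.2, 0.8]
assert grl.forward(features) == features           # forward: unchanged
print(grl.backward([0.5, -0.1, 0.2]))              # → [-0.5, 0.1, -0.2]
```

In a real implementation, this sits between the encoder and the speaker (domain) classifier, giving the two modules exactly opposite gradients with respect to the shared features.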
For every input speech $x \in X$, we use $T$ to represent the number of frames of speech $x$. Note that $T$ is constant: for any speech segment longer than $T$, we randomly select $T$ frames, and for speech segments shorter than $T$, we pad them with a constant. In this case, $x$ can also be expressed as $x(1{:}T)$, a random process sampled from the speech distribution $p_X(\cdot \mid C_X = C_x, S_X = S_x)$. Here, we assume that every speech $x$ can be expressed as a function $f(\cdot)$ of the content embedding $C_x$ and the speaker embedding $S_x$. That is, $x = f(C_x, S_x)$. We also assume that $x$ is uniquely determined by $C_x$ and $S_x$. Formally, $x_1 = x_2$ only if $C_{x_1} = C_{x_2}$ and $S_{x_1} = S_{x_2}$. Based on this assumption, for all $x_i, x_j \in X$, the following must hold: $I(S_{x_i}, f(C_x, S_{x_i})) = I(S_{x_j}, f(C_x, S_{x_j}))$, $I(C_{x_i}, f(C_{x_i}, S_x)) = I(C_{x_j}, f(C_{x_j}, S_x))$ (3). Now, consider two speeches $x_1$ and $x_2$ from speaker $u$ and speaker $v$ respectively, with $x_1 = f(C_{x_1}, S_{x_1})$ and $x_2 = f(C_{x_2}, S_{x_2})$. Our goal is to design a voice conversion framework that generates a new speech $\hat{x}_{1 \rightarrow 2}$ which preserves the content information of $x_1$ while its speaker information matches that of $x_2$. Formally, an ideal converted speech should satisfy: $I(S_{x_1}, x_1) = I(S_{x_2}, \hat{x}_{1 \rightarrow 2})$, $I(C_{x_2}, x_2) = I(C_{x_1}, \hat{x}_{1 \rightarrow 2})$ (4). Based on the above assumption, the formula in Eq. (4) is equivalent to $\hat{x}_{1 \rightarrow 2} = f(C_{x_1}, S_{x_2})$. This conclusion is quite strong, and the formal proof is presented in the appendix.
This paper proposes methods for voice conversion. The proposed approach follows an encoder-decoder framework in which it adds classification tasks over the learned embeddings to enhance voice conversion. The representation space is divided into two parts - content and speaker, to enable disentangling of the content and the speaker. Experiments are shown on “traditional” and one-shot VC tasks. Objective as well as subjective scores are used to evaluate all systems.
SP:2833410f7e227427311a1b72b9ddb3e1bd4576b8
ClsVC: Learning Speech Representations with two different classification tasks.
1 INTRODUCTION . Voice conversion (VC) aims to convert an utterance of a source speaker into an utterance of a target speaker, keeping the content of the original utterance while replacing its vocal characteristics with those of the target speaker. Up to now, many methods have been applied to VC successfully. These methods can be roughly categorized into two classes, parallel VC and non-parallel VC Mohammadi & Kain (2017). Parallel VC requires a parallel corpus for model training, which non-parallel VC does not. Recently, more researchers have focused on non-parallel VC, since it is difficult to collect large paired source-target speech datasets. Early VC systems, such as those based on Gaussian Mixture Models (GMMs) Stylianou et al. (1998); Toda et al. (2007), needed a large amount of parallel data for model training, and the quality of the generated speech was not good enough. With the advance of deep learning, a variety of novel VC methods have been proposed in recent years. Among them, GAN-based models are among the most popular Goodfellow et al. (2014); Hsu et al. (2017a); Kaneko & Kameoka (2018); Kaneko et al. (2019a); Kameoka et al. (2018); Kaneko et al. (2019b), as they can learn a global generative distribution of the target speech without explicit approximation. These models jointly train a generator and a discriminator: an adversarial loss derived from the discriminator encourages the generator outputs to be indistinguishable from real speech. Thanks to cycle-consistency training, GAN-based VC models can be trained with non-parallel speech datasets. Besides, learning discrete speech representations has also attracted much attention. Vector Quantization (VQ) is an important signal compression method that quantizes continuous data into discrete data.
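To make the quantization step concrete, the following minimal sketch maps continuous frame vectors to their nearest codebook entries. The codebook and frames here are illustrative toy values, not any particular VC model's learned codebook.

```python
import numpy as np

def vector_quantize(frames, codebook):
    """Map each continuous frame vector to the index of its nearest
    codebook entry (Euclidean distance), the core operation of VQ."""
    # Pairwise distances: (num_frames, codebook_size).
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    indices = dists.argmin(axis=1)   # discrete codes
    quantized = codebook[indices]    # frames replaced by codebook entries
    return indices, quantized

# Toy data: 4 two-dimensional "frames", a codebook with 2 entries.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
frames = np.array([[0.1, -0.1], [0.9, 1.2], [0.2, 0.1], [1.1, 0.8]])
indices, quantized = vector_quantize(frames, codebook)
print(indices)  # [0 1 0 1]
```

The discrete codes discard fine acoustic detail, which is exactly why VQ-based VC tends to keep phoneme-like content while dropping speaker-specific variation.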
Previous studies have confirmed that the discrete codes quantized from continuous speech data are closely related to phoneme information Chorowski et al. (2019). Recently, VQVC Wu & Lee (2020) was proposed to disentangle content and speaker information with a reconstruction loss only. VQVC+ Wu et al. (2020) then improved the conversion performance of VQVC by adding a U-Net architecture to the auto-encoder-based VC system. To further improve the disentanglement of content and speaker information, many other studies have combined VQ with representation learning, such as VQ-Wav2Vec, VQ-VAE and VQ-CPC Baevski et al. (2019); Ding & Gutierrez-Osuna (2019); van Niekerk et al. (2020). Another line of research focuses on learning latent representations with autoencoders; among these, the Variational Auto-Encoder (VAE) is the most famous. A VAE consists of an encoder and a decoder, and the core idea is very clear: the encoder learns a specific latent space from the input speech and the decoder reconstructs the speech from that latent space. The key question is therefore how to force the encoder to learn a suitable latent space. So far, many VAE-based models have been successfully applied to VC Hsu et al. (2016); van den Oord et al. (2017); Hsu et al. (2017b); Ding & Gutierrez-Osuna (2019); Hsu et al. (2017a). In addition, AutoVC Qian et al. (2019) is another successful application of autoencoders: through ingenious experimental design, it uses two different encoders to learn content and speaker information, respectively, achieving distribution-matching style transfer with only a self-reconstruction loss. Unfortunately, in the field of VC, all the models mentioned above have inherent disadvantages.
For example, GAN-based models usually achieve a good conversion effect and ensure matching between the generated and input data, but the training of GANs is notoriously unstable. Conversely, the training of VQVC is simple and fast, but the quality of the audio it produces is poor, probably because discrete speech representations inevitably lose some content information. VAE-based models also achieve a good conversion effect, but they cannot guarantee distribution matching. AutoVC trains simply and achieves state-of-the-art results; however, in order to realize style conversion, it has to introduce a pre-trained speaker encoder. Given these existing methods, we naturally wonder whether there is a new solution that achieves distribution matching like AutoVC and GANs, trains as easily as VQVC and VAE, disentangles content and speaker information with only one encoder as VQ does, and performs better at voice conversion and at decoupling linguistic and timbre information from speech. In this paper, we propose a novel voice conversion framework that meets all of these requirements. Specifically, our model is similar to a VAE: an autoencoder is the main framework, and two different types of classification tasks force the model to separate content and speaker information correctly. The two classification tasks are a general classification task and an adversarial classification task. The goal of the general classification task is to identify the features related to the speaker as accurately as possible, i.e., the speaker information, while the adversarial task is designed to eliminate speaker information from the latent space to obtain speaker-independent features, i.e., the content information. Experiments are carried out on the VCTK dataset.
Objective and subjective evaluations demonstrate that the proposed method outperforms VQVC, AutoVC, VQ-VAE and StarGAN-VC in terms of naturalness and speaker similarity. 2 BACKGROUND . In mathematical statistics, if the joint probability density function of X and Y is known, the marginal probability density functions of X and Y follow directly. Formally, if (x, y) ∼ p(x, y) is known, the marginal distributions of X and Y are given by f_X(x) = ∫ p(x, y) dy, f_Y(y) = ∫ p(x, y) dx. (1) Further, under certain conditions, although the closed form of the joint density p(x, y) in Eq. (1) is generally unknown, a neural network can still learn the marginal distributions of x and y from input samples z when each z corresponds to a unique pair (x, y). Mutual information (MI) is a crucial indicator of the dependence between two variables. Recently, many MI estimators have been successfully applied to constrain neural networks to disentangle different components of the input data. MI can be formulated as I(X, Y) = ∫_X ∫_Y P(X, Y) log [ P(X, Y) / ( P(X) P(Y) ) ] dX dY, (2) where P(X) and P(Y) are the marginal distributions of X and Y respectively, and P(X, Y) denotes their joint distribution. Since the required distributions are hard to obtain in closed form, many studies propose sample-based MI lower (or upper) bounds to obtain a computable approximation Moon & Hero (2014); Hjelm et al. (2018); Cheng et al. (2020). Recently, Yuan et al. (2020) introduced a new MI estimator to learn content and style information from speech for voice conversion.
Specifically, Yuan et al. (2020) proposed a novel MI-based learning objective to encourage a content encoder to output the content embedding and guide a speaker encoder to output the speaker embedding. Inspired by this, we propose a new, simple and more effective framework for learning latent speech representations. Gradient Reversal Layer (GRL) The Gradient Reversal Layer (GRL) was first proposed to address domain adaptation Ganin & Lempitsky (2015); it forces a model to output domain-shared features that are independent of the domain. A GRL is typically placed between an encoder and a domain classifier. During forward propagation, the GRL acts as an identity transform; during backpropagation, it takes the gradient from the subsequent layer, multiplies it by −1, and passes it to the preceding layer, so that the encoder and the domain classifier have completely opposite optimization objectives. 3 METHOD . For every speech x, we use a content embedding Cx to represent linguistic information and a speaker embedding Sx to represent timbre and style information, and U denotes the set of speakers. The following two theorems are the premises of our framework: Theorem 3.1 The content embedding Cx and the speaker identity U are independent of each other, and the probability of each speaker's speech being selected is the same. Formally, P(U = u | Cx) = P(U = u) = constant, regardless of the speaker identity u. Theorem 3.2 The speaker embedding Sx and the speaker identity U are in one-to-one correspondence. That is, for a speaker u who produced speech x, P(U = u | Sx) = 1, and for any other speaker v (v ≠ u), P(U = v | Sx) = 0. 3.1 PROBLEM FORMULATION . Consider a dataset X of audio recordings from multiple speakers, where speaker u has Nu recordings. Formally, for each speaker u, Xu = { x_i^u }, i = 1, …, Nu.
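Stepping back briefly to the Gradient Reversal Layer described above: its mechanics (identity forward, gradient negated backward) can be sketched without any deep learning framework. The toy class below is an illustrative stand-in for what would normally be implemented as a custom autograd function; the scaling factor `lam` is a common generalization of the plain −1 multiplier.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies the incoming gradient by
    -lam in the backward pass, so the encoder below is optimized *against*
    the classifier above."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # identity transform during forward propagation

    def backward(self, grad_from_classifier):
        return -self.lam * grad_from_classifier  # reversed gradient

grl = GradientReversal(lam=1.0)
x = np.array([0.2, -0.5, 1.0])
assert np.allclose(grl.forward(x), x)      # activations pass through unchanged
g = np.array([0.1, 0.3, -0.2])
print(grl.backward(g))                     # [-0.1 -0.3  0.2]
```

Because the classifier minimizes its loss while the encoder receives the negated gradient, the encoder is pushed to remove exactly the information the classifier relies on.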
For every input speech x ∈ X, let T denote the number of frames of x. We keep T constant: for any speech segment longer than T frames, we randomly select T frames, and segments shorter than T frames are padded with a constant. In this case, x can also be expressed as x(1:T), a random process sampled from the speech distribution p_X(· | C_X = Cx, S_X = Sx). Here, we assume that every speech x can be expressed as a function f(·) of its content embedding Cx and speaker embedding Sx, i.e., x = f(Cx, Sx). We also assume that x is uniquely determined by Cx and Sx; formally, x1 = x2 only if Cx1 = Cx2 and Sx1 = Sx2. Based on this assumption, for all xi, xj ∈ X, the following must hold: I(Sxi, f(Cx, Sxi)) = I(Sxj, f(Cx, Sxj)), I(Cxi, f(Cxi, Sx)) = I(Cxj, f(Cxj, Sx)). (3) Now consider two speeches x1 and x2 from speakers u and v respectively, with x1 = f(Cx1, Sx1) and x2 = f(Cx2, Sx2). Our goal is to design a voice conversion framework that generates a new speech x̂1→2 which preserves the content information of x1 while its speaker information matches that of x2. Formally, an ideal converted speech should satisfy I(Sx1, x1) = I(Sx2, x̂1→2), I(Cx2, x2) = I(Cx1, x̂1→2). (4) Under the assumption above, Eq. (4) is equivalent to x̂1→2 = f(Cx1, Sx2). This conclusion is quite strong, and a formal proof is presented in the appendix.
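The conclusion x̂1→2 = f(Cx1, Sx2) can be made concrete with a toy instantiation of f. Here f is simply concatenation, a hypothetical choice (not the paper's decoder) that makes f trivially injective, so swapping in the target speaker embedding visibly preserves the content while changing the speaker.

```python
import numpy as np

# Hypothetical toy f: concatenating content and speaker embeddings.
def f(content, speaker):
    return np.concatenate([content, speaker])

C_x1, S_x1 = np.array([1.0, 2.0]), np.array([10.0])  # speech x1, speaker u
C_x2, S_x2 = np.array([3.0, 4.0]), np.array([20.0])  # speech x2, speaker v

x_hat = f(C_x1, S_x2)        # converted speech: content of x1, speaker of x2
print(x_hat)                 # [ 1.  2. 20.]
assert np.allclose(x_hat[:2], C_x1)  # content information preserved
assert np.allclose(x_hat[2:], S_x2)  # speaker information matched
```

With a real decoder f, the same recombination is performed in embedding space; the difficulty lies entirely in learning Cx and Sx so that they actually separate content from speaker.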
The paper proposes a VC system called ClsVC that learns latent speech representations from reference speech. To disentangle speaker and content information, the method separates the latent representation into two parts, a speaker embedding and a content embedding. To enforce the disentanglement, a common speaker classifier and an adversarial speaker classifier are proposed. Furthermore, a new loss called the code-reconstruction loss is introduced. Experimental results validate the effectiveness of the proposed method.
The paper proposes a model for voice conversion. To achieve voice conversion, the approach is to separate linguistic content and speaker information into two embedding vectors. Voice conversion is achieved by swapping out the speaker embedding. There are four losses involved, one for reconstructing speech, one for reconstructing the embedding vectors, one for classifying speakers, and the last is a task to make speaker classification adversarially hard.
Benchmarking person re-identification approaches and training datasets for practical real-world implementations
1 INTRODUCTION . As many cameras are deployed in public places (e.g., airports, malls, parks), real-time monitoring of the video streams by security agents becomes impractical. Automated video processing appears as a promising solution to analyze the whole network in real time and select only relevant sequences for verification by human operators. This paper deals with person Re-Identification (Re-ID), a computer vision problem that aims at finding an individual in a network of non-overlapping cameras (Bedagkar-Gala & Shah, 2014). It has diverse potential security applications, such as suspect searching (Liao et al., 2014), identifying owners of abandoned luggage (Altunay et al., 2018), or recovering missing children (Deb et al., 2021), among others. In the literature, the problem of Re-ID is studied under different settings depending on the application context (see Section 2.1). On the one hand, the most studied Re-ID paradigm, which we refer to as standard Re-ID, tries to find images representing the query person within a gallery of pre-cropped images of persons containing at least one correct match (Lavi et al., 2020). On the other hand, Sumari et al. (2020) recently introduced a setting that specifically considers the constraints of running Re-ID during live operations. We call it live Re-ID, and a first contribution of this work is to formalize the definition and constraints associated with this setting. We also extend the live Re-ID evaluation metrics proposed by Sumari et al. (2020) in order to facilitate interpretation. Standard Re-ID is not the best-suited paradigm for practical implementations, as it does not consider the influence of potential domain shift due to pedestrian detection errors or to deployment in a city with characteristics different from those of the training dataset. Indeed, in their experiments, Sumari et al.
(2020) showed that training a successful Re-ID model with respect to standard Re-ID metrics does not guarantee good performance when evaluated on specific live Re-ID metrics. Nevertheless, most publicly available large-scale Re-ID datasets focus on the standard Re-ID setting, and many successful approaches have been developed for this specific purpose. For this reason, we believe it is essential to study whether these datasets and approaches can be used to implement and deploy practical applications in different contexts. More specifically, the objective of this paper is to answer the following questions:
1. Which standard Re-ID approaches can be successfully deployed for practical implementations in the live Re-ID setting?
2. Which standard Re-ID dataset is better suited to train standard Re-ID models for the live Re-ID setting?
3. Do different Re-ID approaches have different optimal datasets for deployment?
4. Can we use a simple cross-dataset evaluation methodology to assess the deployability of a given approach-dataset pair?
To answer these questions, we conducted a study using three standard Re-ID datasets and four recent standard Re-ID approaches. For each approach-dataset pair, the resulting Re-ID model was evaluated against the other two datasets and against a fourth dataset configured for the live Re-ID setting. A conceptual overview of the objectives of our study is represented in Figure 1. Xiao et al. (2017) showed that considering pedestrian detection and Re-ID separately is not as good as end-to-end approaches for person search, i.e., galleries of whole scene images. However, our results show that this two-step approach can perform well in the live Re-ID setting. In addition, we believe that the results of our study can be very useful to pre-train successful initial live Re-ID models and to guide the development of more complex end-to-end architectures for live Re-ID.
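The study design amounts to a nested evaluation loop over approach-dataset pairs. The sketch below is purely illustrative bookkeeping: `train` and `evaluate` are placeholder callables, and the dataset names are stand-ins, not the actual approaches or datasets used in the paper.

```python
def run_benchmark(approaches, standard_datasets, live_dataset, train, evaluate):
    """Train each approach on each standard dataset, then evaluate every
    resulting model on the held-out standard datasets (cross-dataset
    evaluation) and on the live Re-ID dataset."""
    results = {}
    for approach in approaches:
        for train_set in standard_datasets:
            model = train(approach, train_set)
            for test_set in standard_datasets:
                if test_set != train_set:  # never test on the training set
                    results[(approach, train_set, test_set)] = evaluate(model, test_set)
            results[(approach, train_set, "live")] = evaluate(model, live_dataset)
    return results

# Tiny stub run just to show the bookkeeping (constant scores).
res = run_benchmark(
    approaches=["A", "B"],
    standard_datasets=["D1", "D2"],
    live_dataset="Dlive",
    train=lambda a, d: (a, d),
    evaluate=lambda m, d: 1.0,
)
print(len(res))  # 2 approaches * 2 train sets * (1 cross-eval + 1 live) = 8
```

Question 4 above then reduces to checking how well the cross-dataset entries of `results` predict the corresponding "live" entries.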
This paper is organized as follows: Section 2 discusses the relevant related literature; the methodology of the proposed benchmark experiments is detailed in Section 3; the results are presented in Section 4 and discussed in Section 5; finally, Section 6 presents our conclusions and potential future work. 2 RELATED WORK . A complete literature review of Re-ID approaches is not the purpose of this paper. Instead, we present clear definitions of the different existing Re-ID settings in Section 2.1, and previous Re-ID benchmark studies are discussed in Section 2.2. 2.1 PERSON RE-IDENTIFICATION SETTINGS . The field of Re-ID was first formalized by Gheissari et al. (2006); it consists in retrieving instances of a given individual, called the query person, within a complex set of multimedia content called the gallery. The different settings presented here are defined by how they represent the query person, the format of the gallery items, the constraints on the gallery content, the boundaries of the Re-ID system, and the constraints imposed on the evaluation methodology. 2.1.1 POPULAR SETTINGS . Standard Re-ID In the standard Re-ID setting, both the query image (representing the query person) and all items in the gallery are well-cropped images representing the entire body of a person. It is sometimes called closed-set Re-ID, as it assumes that the query person has at least one representative in the gallery. According to the statistics in Papers with Code (PwC), it is by a large margin the most studied Re-ID setting in terms of the number of papers, datasets and benchmarks published. Some standard Re-ID datasets and successful methods are used in our benchmark study and presented in Section 3. For a more complete overview of standard Re-ID approaches, we refer the reader to the surveys by Lavi et al. (2020) and Ye et al. (2021). Person search The person search setting was introduced by Xu et al. (2014).
It consists in replacing the gallery items by whole scene images (Xiao et al., 2017). In other words, a person search model must return not only the index of the gallery image where the query is present, but also its location in terms of Bounding Box (BB) coordinates. A survey of person search approaches was proposed by Islam (2020). Open-set Re-ID The open-set Re-ID setting was first defined by Liao et al. (2014). It differs from standard Re-ID in that there is no guarantee that the query person is represented in the gallery, i.e., an open-set Re-ID model should be able to answer whether the gallery contains the query. The reader can refer to the survey by Leng et al. (2019) for an overview of recent open-set Re-ID approaches. Video-based Re-ID The video-based Re-ID setting was first studied by Wang et al. (2014). In this setting, all images (query and gallery) are replaced by image sequences extracted from consecutive frames of a video. Sequences are composed of well-cropped entire-body images representing the same person. A complete review of video-based Re-ID was proposed by Ye et al. (2021). Others For completeness, we mention the existence of other Re-ID variants in the literature, namely unsupervised Re-ID (Yang et al., 2021), semi-supervised Re-ID (Moskvyak et al., 2021), human-in-the-loop Re-ID (Wang et al., 2016), and federated Re-ID (Zhuang et al., 2020). However, their specificities lie in how Re-ID models are trained, while the settings above focus on constraints at inference time; for this reason, these Re-ID paradigms are not presented further here. 2.1.2 LIVE RE-ID SETTING . In this section, we define and formalize the live Re-ID setting, inspired by the work of Sumari et al. (2020). It takes into account all aspects relevant to deploying Re-ID models in practical real-world applications.
When looking for a query person during live operations, whole scene videos need to be processed in near real time; hence, the galleries for live Re-ID are composed of the consecutive whole scene frames of short video sequences. The live Re-ID context is also highly open-set, as the probability of the query appearing in a short video sequence from a given camera is low. Hence, this setting combines elements from several of the Re-ID settings mentioned above. Another key characteristic of live Re-ID is that the training context differs from the deployment context. Indeed, building new specialized datasets for deployment in every shopping mall or small city is unrealistic from the perspective of future advances in the field. This highlights the importance of studying cross-domain transfer of Re-ID, which was first discussed and highlighted by Luo et al. (2020). Finally, this setting also takes into account that Re-ID model predictions need to be processed by a human security agent, who takes the final decision and triggers appropriate actions. As a result, very high rank-1 accuracy is not mandatory for live Re-ID, since the operator can find the query at later ranks. On the other hand, false alarm rates must be kept low to avoid overloading the human operators, who have limited processing capacity. To evaluate these two objectives, Sumari et al. (2020) introduced two evaluation metrics representing both dimensions of the problem (see Section 3.3.3). The experiments conducted in this paper study the transferability of standard Re-ID approaches and datasets for deployment in the live Re-ID setting. 2.2 PERSON RE-IDENTIFICATION BENCHMARKS . Most recent research on Re-ID presents comparative evaluations of different approaches. While listing all such papers is out of the scope of this work, this section presents several benchmark studies considering different Re-ID settings or specific aspects of the Re-ID pipeline.
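To make the two evaluation dimensions concrete, the sketch below computes a generic rank-k accuracy over sequences that contain the query and an alarm rate over sequences that do not. These are illustrative stand-ins on toy outputs, not the exact metrics defined by Sumari et al. (2020).

```python
def rank_k_accuracy(rankings, k):
    """rankings: for each positive sequence, the 1-based rank at which the
    query was returned, or None if it was missed entirely."""
    hits = sum(1 for r in rankings if r is not None and r <= k)
    return hits / len(rankings)

def false_alarm_rate(alarms_on_negative_sequences):
    """Fraction of query-free sequences on which the system raised an alarm."""
    return sum(alarms_on_negative_sequences) / len(alarms_on_negative_sequences)

ranks = [1, 3, None, 2]              # ranks of the query in 4 positive sequences
print(rank_k_accuracy(ranks, k=1))   # 0.25 (only one sequence found at rank 1)
print(rank_k_accuracy(ranks, k=5))   # 0.75 (later ranks still help the operator)
print(false_alarm_rate([True, False, False, False]))  # 0.25
```

The gap between rank-1 and rank-5 accuracy illustrates the point above: an operator who inspects a few candidates recovers misses at later ranks, so keeping the false alarm rate low matters more than maximizing rank-1 alone.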
A large scale benchmark experiment was conducted by Gou et al . ( 2018 ) to compare various approaches for standard Re-ID and video-based Re-ID . By evaluating more than 30 approaches on 16 public datasets , they produced the largest Re-ID benchmark to date . In addition , they built a new dataset to represent several constraints relevant for real-world implementations , such as pedestrian detection errors and illumination variations , among others . However , they do not consider cross domain performance and all evaluations are conducted in the closed-set setting , which are major limitations regarding future deployments . In addition , a smaller systematic evaluation of video-based Re-ID approaches was proposed by Zheng et al . ( 2016 ) . Another extensive set of experiments was conducted by Zheng et al . ( 2017 ) to evaluate different pedestrian detection models on a two-step person search pipeline . They demonstrated that the best performing models on standard object detection metrics are not necessarily the best suited for Re-ID from whole scene frames . In addition , a first benchmark regarding cross-domain transfer of Re-ID approaches was proposed by He et al . ( 2020 ) . Their experiments consisted in training an approach on one standard Re-ID dataset and evaluating on another . Finally , on another note , Zhuang et al . ( 2020 ) compared different approaches for federated Re-ID , i.e . learning Re-ID across decentralized clients to preserve privacy . The studies presented above have brought valuable insights to the Re-ID community . However , none of them allows to assess the performance of a Re-ID model against all the challenges involved during deployment in a new environment for practical use in security applications . This paper contributes to bridging this gap by conducting experiments within the live Re-ID setting , which was designed to take into account all these challenges . 
In particular , we consider the influence of different standard Re-ID approaches and training datasets on live Re-ID results .
The authors conduct a systematic analysis of some existing person re-identification approaches. Specifically, they evaluate four publicly available models against three publicly available datasets using standard metrics. They also assess how the models generalize to other datasets when not trained on them. Finally, they discuss the results and show which approach and training dataset combination works best in live Re-ID settings.
SP:0bb1dc14e8ced49c047fe8ae3a491dc30a7b1a3d
Benchmarking person re-identification approaches and training datasets for practical real-world implementations
1 INTRODUCTION.

As more and more cameras are deployed in public places (e.g., airports, malls, parks), real-time monitoring of the video streams by security agents becomes impractical. Automated video processing appears as a promising solution to analyze the whole network in real time and select only relevant sequences for verification by human operators. This paper deals with person Re-Identification (Re-ID), a computer vision problem that aims at finding an individual in a network of non-overlapping cameras (Bedagkar-Gala & Shah, 2014). It has diverse potential security applications, such as suspect searching (Liao et al., 2014), identifying owners of abandoned luggage (Altunay et al., 2018), or recovering missing children (Deb et al., 2021), among others. In the literature, the problem of Re-ID is studied under different settings depending on the application context (see Section 2.1). On the one hand, the most studied Re-ID paradigm, which we refer to as standard Re-ID, tries to find images representing the query person within a gallery of pre-cropped images of persons containing at least one correct match (Lavi et al., 2020). On the other hand, Sumari et al. (2020) recently introduced a setting that specifically considers the constraints of implementing Re-ID for use during live operations. We call it live Re-ID, and a first contribution of this work is to formalize the definition and constraints associated with this setting. We also extend the live Re-ID evaluation metrics proposed by Sumari et al. (2020) in order to facilitate interpretation. Standard Re-ID is not the best suited paradigm for practical implementations, as it does not consider the influence of potential domain shift due to pedestrian detection errors or to deployment in a city with different characteristics than the training dataset. Indeed, in their experiments, Sumari et al.
(2020) showed that training a successful Re-ID model with respect to standard Re-ID metrics does not guarantee good performance when evaluated on specific live Re-ID metrics. Nevertheless, most publicly available large-scale datasets for Re-ID focus on the standard Re-ID setting, and many successful approaches have been developed for this specific purpose. For this reason, we believe it is essential to study whether these datasets and approaches can be used to implement and deploy practical applications in different contexts. More specifically, the objective of this paper is to answer the following questions:

1. Which standard Re-ID approaches can be successfully deployed for practical implementations in the live Re-ID setting?
2. Which standard Re-ID dataset is best suited to train standard Re-ID models for the live Re-ID setting?
3. Do different Re-ID approaches have different optimal datasets for deployment?
4. Can we use a simple cross-dataset evaluation methodology to assess the deployability of a given approach-dataset pair?

To answer these questions, we conducted a study using three standard Re-ID datasets and four recent standard Re-ID approaches. For each approach-dataset pair, the Re-ID model obtained was evaluated against the other two datasets and against a fourth dataset configured for the live Re-ID setting. A conceptual overview of the objectives of our study is represented in Figure 1. Xiao et al. (2017) showed that considering pedestrian detection and Re-ID separately is not as good as end-to-end approaches for person search, i.e., galleries of whole scene images. However, our results show that this two-step approach can perform well in the live Re-ID setting. In addition, we believe that the results from our study can be very useful to pre-train successful initial live Re-ID models and to guide the development of more complex end-to-end architectures for live Re-ID.
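To make the two-step approach discussed above concrete, the following toy sketch shows how a pedestrian detector and a standard Re-ID model can be chained: the detector proposes bounding boxes on a whole scene frame, a feature extractor embeds each crop, and the crops are ranked by similarity to the query embedding. All function names, boxes, and embeddings here are illustrative stand-ins, not the actual pipeline of the paper.

```python
# Hypothetical sketch of a two-step live Re-ID pipeline:
# (detector + Re-ID feature extractor) -> ranked boxes per frame.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def search_frame(detections, query_emb):
    """detections: list of (bounding_box, embedding) pairs, i.e. the
    output of a pedestrian detector followed by a Re-ID feature
    extractor. Returns boxes ranked by similarity to the query."""
    ranked = sorted(detections, key=lambda d: cosine(d[1], query_emb),
                    reverse=True)
    return [(box, cosine(emb, query_emb)) for box, emb in ranked]

query = [1.0, 0.0, 0.0]
frame = [((10, 20, 50, 120), [0.9, 0.1, 0.0]),   # close to the query
         ((200, 30, 40, 110), [0.0, 1.0, 0.0])]  # a different person
top_box, top_score = search_frame(frame, query)[0]
print(top_box, round(top_score, 3))
```

In a real deployment the detector and the embedding network would run per frame of the short video sequence, and the per-frame rankings would be aggregated before being shown to the human operator.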
This paper is organized as follows: Section 2 discusses the relevant related literature. The methodology for the proposed benchmark experiments is detailed in Section 3. The results are presented in Section 4 and discussed in Section 5. Finally, Section 6 presents our conclusions and potential future work.

2 RELATED WORK.

A complete literature review of Re-ID approaches is not the purpose of this paper. Instead, we present clear definitions of the different existing Re-ID settings in Section 2.1. Previous benchmark studies about Re-ID are also discussed in Section 2.2.

2.1 PERSON RE-IDENTIFICATION SETTINGS.

The field of Re-ID was first formalized by Gheissari et al. (2006); it consists of retrieving instances of a given individual, called the query person, within a complex set of multimedia content called the gallery. The different settings presented here are defined by how they represent the query person, the format of the gallery items, the constraints on the gallery content, the boundaries of the Re-ID system, and the constraints imposed on the evaluation methodology.

2.1.1 POPULAR SETTINGS.

Standard Re-ID In the standard Re-ID setting, both the query image (representing the query person) and all items in the gallery are well-cropped images representing the entire body of a person. It is sometimes called closed-set Re-ID, as it assumes that the query person has at least one representative in the gallery. According to the statistics on Papers with Code (PwC), it is the most studied Re-ID setting by a large margin, in terms of the number of papers, datasets, and benchmarks published. Some standard Re-ID datasets and successful methods are used for our benchmark study and presented in Section 3. For a more complete overview of standard Re-ID approaches, we refer the reader to the surveys by Lavi et al. (2020) and Ye et al. (2021).

Person search The person search setting was introduced by Xu et al. (2014).
It consists of replacing the gallery items with whole scene images (Xiao et al., 2017). In other words, a person search model must return not only the index of the gallery image where the query is present, but also its location in terms of Bounding Box (BB) coordinates. A survey of person search approaches was proposed by Islam (2020).

Open-set Re-ID The open-set Re-ID setting was first defined by Liao et al. (2014). It differs from standard Re-ID in that there is no guarantee that the query person is represented in the gallery, i.e., an open-set Re-ID model should be able to answer whether the gallery contains the query at all. The reader can refer to the survey by Leng et al. (2019) for an overview of recent open-set Re-ID approaches.

Video-based Re-ID The video-based Re-ID setting was first studied by Wang et al. (2014). In this setting, all images (query and gallery) are replaced by image sequences extracted from consecutive frames of a video. Sequences are composed of well-cropped entire-body images representing the same person. A complete review of video-based Re-ID was proposed by Ye et al. (2021).

Others For completeness, we mention other Re-ID variants in the literature, namely unsupervised Re-ID (Yang et al., 2021), semi-supervised Re-ID (Moskvyak et al., 2021), human-in-the-loop Re-ID (Wang et al., 2016), and federated Re-ID (Zhuang et al., 2020). However, their specificities lie in how Re-ID models are trained, while the settings above focus on constraints at inference time. For this reason, these Re-ID paradigms are not discussed further here.

2.1.2 LIVE RE-ID SETTING.

In this section, we define and formalize the live Re-ID setting, which is inspired by the work of Sumari et al. (2020). It takes into account all aspects relevant to deploying Re-ID models in practical real-world applications.
When looking for a query person during live operations, whole scene videos need to be processed in near real time; hence the galleries for live Re-ID are composed of consecutive whole scene frames from short video sequences. The live Re-ID context is also highly open-set, as the probability that the query appears in a short video sequence from a given camera is low. Hence, this setting combines elements from several of the Re-ID settings mentioned above. Another key characteristic of live Re-ID is that the training context differs from the deployment context. Indeed, building new specialized datasets for deployment in every shopping mall or small city is unrealistic, which highlights the importance of studying cross-domain transfer of Re-ID, first discussed by Luo et al. (2020). Finally, this setting also takes into account that Re-ID model predictions need to be processed by a human security agent, who takes the final decision and triggers appropriate actions. As a result, very high rank-1 accuracy is not mandatory for live Re-ID, since the operator can find the query in later ranks. On the other hand, false alarm rates must be kept low to avoid overloading the human operators, who have limited processing capacity. To evaluate these two objectives, Sumari et al. (2020) introduced two evaluation metrics representing both dimensions of the problem (see Section 3.3.3).

The experiments conducted in this paper aim at studying the transferability of standard Re-ID approaches and datasets for deployment in the live Re-ID setting.

2.2 PERSON RE-IDENTIFICATION BENCHMARKS.

Most recent research on Re-ID presents comparative evaluations of different approaches. While listing all these papers is out of the scope of this work, this section presents several benchmark studies considering different Re-ID settings or specific aspects of the Re-ID pipeline.
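The two competing objectives of live Re-ID described above, finding the query when it is present while raising few false alarms, can be illustrated with a small sketch. The decision rule (flag a short sequence when its best query similarity exceeds a threshold) and the metric names are our own illustrative simplification, not the exact metrics of Sumari et al. (2020).

```python
# Toy sketch of the two live Re-ID evaluation axes: the fraction of
# query-containing sequences correctly flagged, and the fraction of
# query-free sequences that trigger a (false) alarm.

def evaluate_sequences(scores, contains_query, threshold):
    """scores: best query/gallery similarity per short video sequence.
    contains_query: whether the query person truly appears in it."""
    alarms = [s >= threshold for s in scores]
    pos = [a for a, c in zip(alarms, contains_query) if c]
    neg = [a for a, c in zip(alarms, contains_query) if not c]
    finding_rate = sum(pos) / len(pos) if pos else 0.0
    false_alarm_rate = sum(neg) / len(neg) if neg else 0.0
    return finding_rate, false_alarm_rate

scores         = [0.9, 0.4, 0.8, 0.2, 0.7]
contains_query = [True, True, False, False, True]
fr, far = evaluate_sequences(scores, contains_query, threshold=0.6)
print(fr, far)
```

Sweeping the threshold trades the two rates against each other, which is why a single rank-1 accuracy number is insufficient in this setting.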
A large-scale benchmark experiment was conducted by Gou et al. (2018) to compare various approaches for standard Re-ID and video-based Re-ID. By evaluating more than 30 approaches on 16 public datasets, they produced the largest Re-ID benchmark to date. In addition, they built a new dataset to represent several constraints relevant to real-world implementations, such as pedestrian detection errors and illumination variations, among others. However, they do not consider cross-domain performance, and all evaluations are conducted in the closed-set setting, which are major limitations regarding future deployments. In addition, a smaller systematic evaluation of video-based Re-ID approaches was proposed by Zheng et al. (2016). Another extensive set of experiments was conducted by Zheng et al. (2017) to evaluate different pedestrian detection models in a two-step person search pipeline. They demonstrated that the best performing models on standard object detection metrics are not necessarily the best suited for Re-ID from whole scene frames. In addition, a first benchmark regarding cross-domain transfer of Re-ID approaches was proposed by He et al. (2020). Their experiments consisted of training an approach on one standard Re-ID dataset and evaluating it on another. Finally, on another note, Zhuang et al. (2020) compared different approaches for federated Re-ID, i.e., learning Re-ID across decentralized clients to preserve privacy. The studies presented above have brought valuable insights to the Re-ID community. However, none of them makes it possible to assess the performance of a Re-ID model against all the challenges involved in deployment in a new environment for practical use in security applications. This paper contributes to bridging this gap by conducting experiments within the live Re-ID setting, which was designed to take all these challenges into account.
In particular, we consider the influence of different standard Re-ID approaches and training datasets on live Re-ID results.
This paper studies the evaluation methodology to assess whether different standard Re-ID approaches and training datasets can be used to build an efficient live Re-ID pipeline for real-world deployment. In particular, this paper formalizes the live Re-ID setting and defines two evaluation metrics named mAP (according to True Validation Rate vs Finding Rate) and $F_\gamma$. Extensive experiments with different baselines on three benchmarks under three kinds of evaluation methods, i.e., single-dataset evaluations, cross-dataset evaluations, and live Re-ID evaluation, are conducted, which give insights into building a good live Re-ID pipeline.
This paper proposes a live person re-identification (live Re-ID) setting to evaluate person Re-ID approaches closer to real application scenarios. The paper benchmarks a few state-of-the-art Re-ID methods by training on popular Re-ID datasets (CUHK03, Market-1501, DukeMTMC) and testing on the m-PRID dataset. Experimental results suggest that a proper selection of the training dataset is important.
Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity
Brain-wide measurements of neural activity, combined with detailed measurements of the anatomical connectivity of the C. elegans nervous system, in principle allow for the development of detailed mechanistic computational models. However, there are several challenges. We often do not have direct experimental access to important modeling details such as single-neuron dynamics and the signs and strengths of the synaptic connectivity. Further, neural activity can only be measured in a subset of neurons, often indirectly via calcium imaging, and significant trial-to-trial variability has been observed. To overcome these challenges, we introduce a connectome-constrained latent variable model (CC-LVM) of the unobserved voltage dynamics of the entire C. elegans nervous system and the observed calcium signals. We used the framework of variational autoencoders to fit the parameters of the mechanistic simulation constituting the generative model of the LVM to calcium imaging observations. A variational approximate posterior distribution over latent voltage traces for all neurons is efficiently inferred using an inference network, and constrained by a prior distribution given by the biophysical simulation of neural dynamics. When applied to a recent dataset, we find that connectomic constraints enable our LVM to predict the activity of neurons whose activity was withheld. We explored models with different degrees of biophysical detail, and found that models with the most realistic conductance-based synapses provide markedly better predictions than current-based synapses for this system.

1 INTRODUCTION.

The anatomical connectivity of the entire C. elegans nervous system, including both chemical and electrical synapses, has been known for several decades [25; 23; 26]. However, well-calibrated and predictive connectome-constrained mechanistic models of this nervous system have yet to be demonstrated [10; 24; 22; 7; 5].
This is because currently available experimental data are insufficient to fully constrain computational models. First, the single-neuron and synapse dynamics are generally unknown. Second, the connectome does not directly inform the signs and strengths of individual synapses. Third, the response properties of sensory neurons are incompletely known. Further, it is unclear what level of biophysical detail is necessary to reproduce the essential computational function of the C. elegans nervous system. We use recently available whole-brain calcium imaging data to constrain the missing parameters in a connectome-constrained, biophysically detailed model of the C. elegans nervous system. We start with a simplified non-spiking passive point-neuron model of the voltage dynamics of individual neurons in the circuit. We model inputs to the neurons from electrical synapses, and model nonlinear chemical synapses with either current-based or conductance-based biophysics. The challenge of fitting such a model to data is two-fold. First, the voltage dynamics of the neurons are not directly observed, but rather indirectly measured via their slow calcium dynamics. Second, there is significant trial-to-trial variability in neural activity, suggesting strong initial-state dependence in the neural responses even to the same sensory stimulus [6]. These issues can both be addressed by treating the collective voltage signals of all neurons in the nervous system as an unobserved latent variable whose dynamics are determined by the simplified connectome-constrained biophysical model with unknown neuronal and synaptic parameters. Our connectome-constrained latent variable model (CC-LVM) of voltage dynamics of the entire C. elegans nervous system is a large-scale latent variable model with a very high-dimensional latent space consisting of the voltage dynamics of 300 neurons over 5 minutes of time. Many sources say that C.
elegans has 302 neurons, but two of them are not connected to the rest of the nervous system, so we only model the 300 that are [26]. The generative model for these latent variables is described by stochastic differential equations modeling the nonlinear dynamics of the network activity, and a novel differential equation describing how the calcium signals are generated from the voltage dynamics of individual non-spiking neurons. We developed a variational autoencoder based framework for inferring the unobserved voltage dynamics from the observed calcium dynamics of only a subset of the neurons in the nervous system. An inference network enables efficient variational inference of the latent variables, and fitting of the unknown neuronal and synaptic parameters of our connectome- and biophysics-based generative model. The compact nervous system of C. elegans makes it an ideal platform for systems neuroscience. The C. elegans nervous system consists of 300 neurons, divided into 118 distinct classes (often bilaterally symmetric pairs) [21; 11; 20]. These neurons can largely be categorized as sensory neurons, interneurons, and motor neurons. The majority of these neurons, about 200, are concentrated in the head, forming the brain of the worm. The synaptic connectivity of C. elegans has been mapped with electron microscopy, providing researchers with a complete connectome [25; 26]. We apply the CC-LVM to whole-brain calcium imaging data which captures 170 of the 300 neurons in multiple worms as they respond to chemosensory stimuli [27]. In principle, an accurate model of the nervous system constrained by incomplete activity measurements but complete connectivity measurements can enable accurate predictions of activity in neurons which were not recorded. We tested this hypothesis by using the CC-LVM to predict the activity of neurons which were measured, but whose activity was withheld during model training.
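The variational objective sketched below is a toy, scalar-per-timestep illustration of the setup described above: a diagonal Gaussian approximate posterior over a latent "voltage" trace is scored against the likelihood of an observed calcium signal and a KL term toward a prior. The linear `readout` standing in for the calcium model, all variable names, and all numbers are our own hypothetical choices, not the paper's actual objective.

```python
# Illustrative ELBO for a latent voltage trace observed through a
# calcium readout, evaluated at the posterior mean (no sampling) to
# keep the sketch deterministic.
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for scalar Gaussians."""
    return 0.5 * (math.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def gaussian_loglik(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def elbo(calcium, q_mu, q_var, prior_mu, prior_var, obs_var, readout):
    """Likelihood of the calcium observations under the readout of the
    posterior-mean voltage, minus KL toward the (biophysical) prior."""
    ll = sum(gaussian_loglik(c, readout(m), obs_var)
             for c, m in zip(calcium, q_mu))
    kl = sum(gaussian_kl(mq, vq, mp, vp)
             for mq, vq, mp, vp in zip(q_mu, q_var, prior_mu, prior_var))
    return ll - kl

readout = lambda v: 0.5 * v          # stand-in for the calcium model
calcium = [0.1, 0.2, 0.3]
value = elbo(calcium, q_mu=[0.2, 0.4, 0.6], q_var=[0.1] * 3,
             prior_mu=[0.0] * 3, prior_var=[1.0] * 3,
             obs_var=0.05, readout=readout)
print(round(value, 3))
```

In the actual model the prior over voltage traces is given by simulating the connectome-constrained dynamics, the posterior is produced by an inference network, and the expectation is taken over posterior samples rather than at the mean.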
We also used the model to predict the activity of entire worms , by holding out single trials during training . The CC-LVM predicted activity of withheld neurons and withheld worms significantly better than models unconstrained by the connectome , demonstrating the utility of the connectome even when little is known about the signs and strengths of individual connections . Further , we found that models with conductance-based synapses provide superior predictions to models with current-based synapses . CC-LVMs thus provide a new tool for connectome and activity constrained modeling of neural circuits and for discovering the appropriate level of detail of biophysical model for a given system . 1.1 PRIOR WORK . Previous work has been focused on creating network simulations based on anatomical connectome data , with unknown single neuron synaptic biophysics [ 13 ; 5 ; 24 ; 15 ; 10 ] . These network models provided a holistic view of the entire nervous system and were validated by comparing simulated locomotion with movements of live animals [ 8 ] . However , they were not fit against prerecorded calcium fluorescence data , so their simulated neuronal activities are not quantitatively confirmed . Recent advances in machine learning have enabled increasingly sophisticated latent variable models to uncover structure from data generated by advanced neural interfacing technologies [ 17 ] . This framework has been used in other contexts to infer neuronal voltage dynamics from high-dimensional calcium fluorescence recordings [ 16 ; 1 ; 19 ] . Despite their utility , these models do not account for connectomic data which may greatly improve their predictive power . Our work employs both LVMs and connectomic constraints to create a model that is informed by both neural population dynamics and network structure . Recently , Bayesian models have been widely applied in neuronal datasets to infer spiking activity , neural dynamics , and connectivity [ 24 ; 1 ; 19 ] . 
In particular , Warrington et al . [ 24 ] . use sequential Monte Carlo ( SMC ) to impute the intracellular voltage potentials of C. elegans neurons from 49 recorded calcium traces [ 9 ] . Their work combined neuronal , body and calcium observation simulators in order to model locomotion [ 3 ; 14 ] . Their simulator produces a series of exemplar neuronal voltage traces but they do not evaluate the imputed traces of unmeasured neurons against real data so the accuracy of their inferences is unknown . Furthermore , they explore how different methods for approximate Bayesian inference affect parameter estimation on simulated data and show that their method is best . Our model uses a variational auto-encoder ( VAE ) to perform inference instead of SMC which is more computationally efficient because it allows direct sampling from the approximate posterior . Additionally , the CC-LVM is trained on significantly more neurons , and we validate our model by evaluating its ability to predict the activity of neurons held out from the training data . Finally , rather than study how various inference methods can affect parameter estimation in a test autoregressive model , we search a space of generative models to optimize voltage predictions by comparing them to real calcium data . 2 CONNECTOME-CONSTRAINED LATENT VARIABLE MODEL . We constructed a connectome-constrained latent variable model ( CC-LVM ) of the C. elegans nervous system , where each node in our network represents a specific neuron in the animal . The activity of each neuron was modeled with a latent variable analogous to voltage . The dynamics of each neuron in the network is represented with a stochastic non-spiking passive leaky integrator equation with learned time constants and resting membrane potentials [ 4 ] . The neurons were coupled by both chemical and electrical synapses with learned weights [ 4 ] . We allowed these weights to be non-zero only where the connectome indicates the existence of synapses . 
Given a set of learned parameters ( weights , time constants , and resting membrane potentials ) , the dynamics of the network defines a prior distribution over neural activity trajectories . Importantly , the stochastic nature of the dynamics allows for deviations from perfectly deterministic dynamics . This allows the model to accommodate the observed variability in single-neuron dynamics . This variability has several potential sources , including the unmeasured initial states of the neurons and our incomplete knowledge of the sensory inputs driving the nervous system . A latent variable model of this scale with a nonlinear generative model defined by the stochastic dynamics is difficult to fit . To address this challenge , we used the probabilistic inference framework of variational auto-encoders ( VAE ) [ 18 ] to train a black-box voltage inference network to predict a posterior distribution over neural activity trajectories . We then used this inference network to train the parameters of the CC-LVM . The resulting LVM has a biologically realistic generative model of the nonlinear neural dynamics of the C. elegans nervous system , and a black-box temporal convolutional inference network which , given sensory stimulus and calcium imaging measurements , predicts a factorized Gaussian distribution over the voltages of all the neurons in the network . Several variants of the LVM were developed : we tested models in which the synaptic connections were modeled as either current-based or conductance-based [ 4 ] , and evaluated different levels of connectome constraint . The LVM was optimized with the ELBO ( evidence lower bound ) objective . 2.1 NETWORK MODEL WITH PASSIVE POINT-NEURON VOLTAGE DYNAMICS . Neurons in the C. elegans nervous system are largely non-spiking [ 2 ] , so we model the voltage dynamics for these neurons as a passive point neurons with a single electrical compartment . Let v ∈ RN denote the voltages of the N neurons . 
The voltage vi ( t ) for each post-synaptic neuron i at the time t was calculated using a first-order leaky integrator equation given by τiv̇i ( t ) + vi ( t ) = s c i ( t ) + s e i ( t ) + v rest i + hi ( t ) , ( 1 ) where τi is the voltage time constant , hi is the chemosensory input provided to only the sensory neurons , sci is the chemical synaptic input , s e i is the electrical synaptic input , v rest i is the resting neuron voltage . We studied two variations of the model , the current-based model and the conductance-based model , which differ in their formulations of chemical synaptic input sci . Since neurons in the C. elegans nervous system are largely non-spiking , we model the chemical synapses as having graded release of neurotransmitter , rather than the all-or-none quantal release seen in spiking neurons . In both models , we model the amount of neurotransmitter released as proportional to the pre-synaptic voltage vj , following softplus activation g ( · ) which sets a minimum voltage below which there is no synaptic release , leading to W cjig ( vj ( t ) ) . We use g ( · ) , to denote a softplus function for the rest of the paper . In our current-based model , synaptic input sci to a post-synaptic neuron i is directly proportional pre-synaptic neurotransmitter concentration : sci ( t ) = N∑ j W cjig ( vj ( t ) ) , ( 2 ) where W cji represents the chemical synaptic weight between pre-synaptic neuron j and post-synaptic neuron i. W cji can be positive or negative depending on if the synaptic connection is excitatory or inhibitory . If neurons j and i are not connected , W cji is set to zero . In the conductance-based model , we model the synaptic current entering the post-synaptic neuron with more biophysical detail as sci ( t ) = N∑ j ( Eji − vi ( t ) ) W cjig ( vj ( t ) ) . ( 3 ) Here , the pre-synaptic neurotransmitter concentration W cjig ( vj ( t ) ) is more accurately modeled as proportional to the conductance at the post-synaptic terminal . 
The post-synaptic current is then given by the product of the synaptic conductance and the difference between the post-synaptic voltage vi ( t ) and the synaptic reversal potential Eji . In contrast to the current-based synapse whose input is independent of the post-synaptic voltage vi ( t ) , the conductance-based synapse model more accurately also has a dependence on the post-synaptic voltage . In addition , this model decouples the sign of the synapse from the strength . The reversal potential of a synapses Eji dictates whether a synapse is excitatory or inhibitory . A large and positive Eji corresponds to an excitatory synapse causing depolarization of the post-synaptic neuron . And an inhibitory synapse will have a negative Eji causing hyperpolarization . In this model , we can now independently train the sign of a synapse and its non-negative strength W cji , which is not easily possible with current-based synapses . In both the current-based and conductance-based models , the following equation was used to represent electrical synaptic inputs : sei ( t ) = N∑ j W eji ( vj ( t ) − vi ( t ) ) , ( 4 ) where W eji is restricted to be non-negative and vj − vi is the potential difference between presynaptic and postsynaptic neurons . We restrictW eji = W e ij because the potential differences between electrical synapses are symmetric . To directly compare the outputs of the LVM to neural activity measurements , our model must generate calcium signals from the voltage traces . We model the calcium concentration [ Ca ] i of each neuron i as a first-order leaky integrator , driven by voltage-gated calcium channels with the same nonlinear current-voltage ( I-V ) function g ( vi ) : τ [ Ca ] [ Ċa ] i ( t ) + [ Ca ] i ( t ) = g ( vi ( t ) ) , ( 5 ) where g ( · ) represents softplus activation , τ [ Ca ] is a time constant shared across all neurons . 
We then map calcium concentration [ Ca ] into the measured calcium fluorescence signals f via an affine transform with scalar αf and bias βf , fi ( t ) = α f [ Ca ] i ( t ) + β f + σf fi ( t ) , ( 6 ) with measurement noise represented by a noise amplitude σf and a noise term f i ( t ) ∼ N ( 0 , 1 ) .
The authors propose a biologically constrained latent dynamical model of the C. elegans nervous system. They use connectomic information (including chemical vs electrical synapses) to constrain connections between units during inference. They fit the model to whole-brain calcium imaging data from C. elegans using a variational approach. They find that biological constraints improve model performance and validity using both withheld-neuron and across-worm metrics.
SP:e4f38b0766d08cb530e4a0133c62fed4849a14e3
Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity
Brain-wide measurements of neural activity combined with detailed measurements of anatomical connectivity of the C. elegans nervous system in principle allow for the development of detailed mechanistic computational models. However, there are several challenges. We often do not have direct experimental access to important modeling details such as single-neuron dynamics and the signs and strengths of the synaptic connectivity. Further, neural activity can only be measured in a subset of neurons, often indirectly via calcium imaging, and significant trial-to-trial variability has been observed. To overcome these challenges, we introduce a connectome-constrained latent variable model (CC-LVM) of the unobserved voltage dynamics of the entire C. elegans nervous system and the observed calcium signals. We used the framework of variational autoencoders to fit the parameters of the mechanistic simulation constituting the generative model of the LVM to calcium imaging observations. A variational approximate posterior distribution over latent voltage traces for all neurons is efficiently inferred using an inference network, and constrained by a prior distribution given by the biophysical simulation of neural dynamics. When applied to a recent dataset, we find that connectomic constraints enable our LVM to predict the activity of neurons whose activity was withheld. We explored models with different degrees of biophysical detail, and found that models with the most realistic conductance-based synapses provide markedly better predictions than models with current-based synapses for this system. 1 INTRODUCTION. The anatomical connectivity of the entire C. elegans nervous system, including both chemical and electrical synapses, has been known for several decades [25; 23; 26]. However, well-calibrated and predictive connectome-constrained mechanistic models of this nervous system have yet to be demonstrated [10; 24; 22; 7; 5].
This is because currently available experimental data are insufficient to completely constrain computational models. First, the single-neuron and synapse dynamics are generally unknown. Second, the connectome does not directly inform the signs and strengths of individual synapses. Third, the response properties of sensory neurons are incompletely known. Further, it is unclear what level of biophysical detail is necessary to reproduce the essential computational function of the C. elegans nervous system. We use recently available whole-brain calcium imaging data to constrain the missing parameters in a connectome-constrained, biophysically detailed model of the C. elegans nervous system. We start with a simplified non-spiking passive point-neuron model of the voltage dynamics of individual neurons in the circuit. We model inputs to the neurons from electrical synapses, and model nonlinear chemical synapses with either current-based or conductance-based biophysics. The challenge of fitting such a model to data is two-fold. First, the voltage dynamics of the neurons are not directly observed, but rather indirectly measured via their slow calcium dynamics. Second, there is significant trial-to-trial variability in neural activity, suggesting strong initial-state dependence in the neural responses even to the same sensory stimulus [6]. These issues can both be addressed by treating the collective voltage signals of all neurons in the nervous system as an unobserved latent variable whose dynamics are determined by the simplified connectome-constrained biophysical model with unknown neuronal and synaptic parameters. Our connectome-constrained latent variable model (CC-LVM) of the voltage dynamics of the entire C. elegans nervous system is a large-scale latent variable model with a very high-dimensional latent space consisting of the voltage dynamics of 300 neurons over 5 minutes. Many sources say that C.
elegans has 302 neurons, but two of them are not connected to the rest of the nervous system, so we model only the 300 that are [26]. The generative model for these latent variables is described by stochastic differential equations modeling the nonlinear dynamics of the network activity, and a novel differential equation describing how the calcium signals are generated from the voltage dynamics of individual non-spiking neurons. We developed a variational-autoencoder-based framework for inferring the unobserved voltage dynamics from the observed calcium dynamics of only a subset of the neurons in the nervous system. An inference network enables efficient variational inference of the latent variables, and fitting of the unknown neuronal and synaptic parameters of our connectome- and biophysics-based generative model. The compact nervous system of C. elegans makes it an ideal platform for systems neuroscience. The C. elegans nervous system consists of 300 neurons, divided into 118 distinct classes (often bilaterally symmetric pairs) [21; 11; 20]. These neurons can largely be categorized as sensory neurons, interneurons, and motor neurons. The majority of these neurons, about 200, are concentrated in the head, forming the brain of the worm. The synaptic connectivity of C. elegans has been mapped with electron microscopy, providing researchers with a complete connectome [25; 26]. We apply the CC-LVM to whole-brain calcium imaging data which captures 170 of the 300 neurons in multiple worms as they respond to chemosensory stimuli [27]. In principle, an accurate model of the nervous system constrained by incomplete activity measurements but complete connectivity measurements can enable accurate predictions of activity in neurons which were not recorded. We tested this hypothesis by using the CC-LVM to predict the activity of neurons which were measured, but whose activity was withheld during model training.
We also used the model to predict the activity of entire worms, by holding out single trials during training. The CC-LVM predicted the activity of withheld neurons and withheld worms significantly better than models unconstrained by the connectome, demonstrating the utility of the connectome even when little is known about the signs and strengths of individual connections. Further, we found that models with conductance-based synapses provide superior predictions to models with current-based synapses. CC-LVMs thus provide a new tool for connectome- and activity-constrained modeling of neural circuits and for discovering the appropriate level of biophysical detail for a given system. 1.1 PRIOR WORK. Previous work has focused on creating network simulations based on anatomical connectome data, with unknown single-neuron and synaptic biophysics [13; 5; 24; 15; 10]. These network models provided a holistic view of the entire nervous system and were validated by comparing simulated locomotion with movements of live animals [8]. However, they were not fit to prerecorded calcium fluorescence data, so their simulated neuronal activities are not quantitatively confirmed. Recent advances in machine learning have enabled increasingly sophisticated latent variable models to uncover structure from data generated by advanced neural interfacing technologies [17]. This framework has been used in other contexts to infer neuronal voltage dynamics from high-dimensional calcium fluorescence recordings [16; 1; 19]. Despite their utility, these models do not account for connectomic data, which may greatly improve their predictive power. Our work employs both LVMs and connectomic constraints to create a model that is informed by both neural population dynamics and network structure. Recently, Bayesian models have been widely applied to neuronal datasets to infer spiking activity, neural dynamics, and connectivity [24; 1; 19].
In particular, Warrington et al. [24] use sequential Monte Carlo (SMC) to impute the intracellular voltage potentials of C. elegans neurons from 49 recorded calcium traces [9]. Their work combined neuronal, body and calcium observation simulators in order to model locomotion [3; 14]. Their simulator produces a series of exemplar neuronal voltage traces, but they do not evaluate the imputed traces of unmeasured neurons against real data, so the accuracy of their inferences is unknown. Furthermore, they explore how different methods for approximate Bayesian inference affect parameter estimation on simulated data and show that their method performs best. Our model uses a variational auto-encoder (VAE) to perform inference instead of SMC; the VAE is more computationally efficient because it allows direct sampling from the approximate posterior. Additionally, the CC-LVM is trained on significantly more neurons, and we validate our model by evaluating its ability to predict the activity of neurons held out from the training data. Finally, rather than study how various inference methods affect parameter estimation in a test autoregressive model, we search a space of generative models to optimize voltage predictions by comparing them to real calcium data. 2 CONNECTOME-CONSTRAINED LATENT VARIABLE MODEL. We constructed a connectome-constrained latent variable model (CC-LVM) of the C. elegans nervous system, where each node in our network represents a specific neuron in the animal. The activity of each neuron was modeled with a latent variable analogous to voltage. The dynamics of each neuron in the network are represented with a stochastic non-spiking passive leaky-integrator equation with learned time constants and resting membrane potentials [4]. The neurons were coupled by both chemical and electrical synapses with learned weights [4]. We allowed these weights to be non-zero only where the connectome indicates the existence of synapses.
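The connectome constraint on the learned weights can be sketched as a fixed binary mask applied to an unconstrained parameter matrix. This is a minimal illustration of the idea with toy values, not the authors' implementation; the names `connectome` and `theta` are assumptions for this sketch.

```python
import numpy as np

# Free parameters exist for every ordered neuron pair, but a fixed binary
# mask from the connectome zeroes all entries where no synapse exists.
rng = np.random.default_rng(0)
N = 4
connectome = np.array([[0, 1, 1, 0],     # 1 = synapse from neuron j to i
                       [0, 0, 1, 0],
                       [1, 0, 0, 1],
                       [0, 0, 0, 0]], dtype=float)
theta = rng.standard_normal((N, N))      # unconstrained learned parameters
W = connectome * theta                   # effective weights used by the model
assert np.all(W[connectome == 0] == 0)   # no weight where no synapse
```

Because the mask is applied multiplicatively, gradient-based training only ever updates weights on anatomically permitted connections.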
Given a set of learned parameters (weights, time constants, and resting membrane potentials), the dynamics of the network define a prior distribution over neural activity trajectories. Importantly, the stochastic nature of the dynamics allows for deviations from perfectly deterministic dynamics. This allows the model to accommodate the observed variability in single-neuron dynamics. This variability has several potential sources, including the unmeasured initial states of the neurons and our incomplete knowledge of the sensory inputs driving the nervous system. A latent variable model of this scale with a nonlinear generative model defined by the stochastic dynamics is difficult to fit. To address this challenge, we used the probabilistic inference framework of variational auto-encoders (VAEs) [18] to train a black-box voltage inference network to predict a posterior distribution over neural activity trajectories. We then used this inference network to train the parameters of the CC-LVM. The resulting LVM has a biologically realistic generative model of the nonlinear neural dynamics of the C. elegans nervous system, and a black-box temporal convolutional inference network which, given the sensory stimulus and calcium imaging measurements, predicts a factorized Gaussian distribution over the voltages of all the neurons in the network. Several variants of the LVM were developed: we tested models in which the synaptic connections were modeled as either current-based or conductance-based [4], and evaluated different levels of connectome constraint. The LVM was optimized with the ELBO (evidence lower bound) objective. 2.1 NETWORK MODEL WITH PASSIVE POINT-NEURON VOLTAGE DYNAMICS. Neurons in the C. elegans nervous system are largely non-spiking [2], so we model these neurons as passive point neurons with a single electrical compartment. Let $v \in \mathbb{R}^N$ denote the voltages of the $N$ neurons.
The voltage $v_i(t)$ of each post-synaptic neuron $i$ at time $t$ was calculated using a first-order leaky-integrator equation given by
$$\tau_i \dot{v}_i(t) + v_i(t) = s^c_i(t) + s^e_i(t) + v^{\mathrm{rest}}_i + h_i(t), \qquad (1)$$
where $\tau_i$ is the voltage time constant, $h_i$ is the chemosensory input provided only to the sensory neurons, $s^c_i$ is the chemical synaptic input, $s^e_i$ is the electrical synaptic input, and $v^{\mathrm{rest}}_i$ is the resting neuron voltage. We studied two variations of the model, the current-based model and the conductance-based model, which differ in their formulations of the chemical synaptic input $s^c_i$. Since neurons in the C. elegans nervous system are largely non-spiking, we model the chemical synapses as having graded release of neurotransmitter, rather than the all-or-none quantal release seen in spiking neurons. In both models, we model the amount of neurotransmitter released as proportional to the pre-synaptic voltage $v_j$ passed through a softplus activation $g(\cdot)$, which sets a minimum voltage below which there is no synaptic release, leading to $W^c_{ji}\, g(v_j(t))$. We use $g(\cdot)$ to denote a softplus function for the rest of the paper. In our current-based model, the synaptic input $s^c_i$ to a post-synaptic neuron $i$ is directly proportional to the pre-synaptic neurotransmitter concentration:
$$s^c_i(t) = \sum_{j=1}^{N} W^c_{ji}\, g(v_j(t)), \qquad (2)$$
where $W^c_{ji}$ represents the chemical synaptic weight between pre-synaptic neuron $j$ and post-synaptic neuron $i$. $W^c_{ji}$ can be positive or negative depending on whether the synaptic connection is excitatory or inhibitory. If neurons $j$ and $i$ are not connected, $W^c_{ji}$ is set to zero. In the conductance-based model, we model the synaptic current entering the post-synaptic neuron with more biophysical detail as
$$s^c_i(t) = \sum_{j=1}^{N} \left(E_{ji} - v_i(t)\right) W^c_{ji}\, g(v_j(t)). \qquad (3)$$
Here, the pre-synaptic neurotransmitter concentration $W^c_{ji}\, g(v_j(t))$ is more accurately modeled as proportional to the conductance at the post-synaptic terminal.
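The two chemical-synapse formulations, eqs. (2) and (3), can be sketched in vectorized form. This is a toy illustration under assumed values; `W_c`, `E`, and the 3-neuron mask are placeholders, not fitted parameters from the paper.

```python
import numpy as np

def softplus(x):
    # g(.): graded-release nonlinearity, softplus(x) = log(1 + e^x)
    return np.logaddexp(0.0, x)

def s_current(v, W_c):
    # eq. (2): s^c_i = sum_j W^c_ji g(v_j)  -- independent of v_i
    return softplus(v) @ W_c

def s_conductance(v, W_c, E):
    # eq. (3): s^c_i = sum_j (E_ji - v_i) W^c_ji g(v_j)
    g = softplus(v)
    return np.einsum('j,ji,ji->i', g, W_c, E) - (g @ W_c) * v

v = np.array([-0.5, 0.2, 1.0])
mask = np.array([[0, 1, 0],          # connectome: 1 = synapse j -> i
                 [0, 0, 1],
                 [1, 0, 0]], dtype=float)
W_c = mask * 0.5                     # weights only where synapses exist
E = np.where(mask > 0, 1.0, 0.0)     # toy excitatory reversal potentials
print(s_current(v, W_c))
print(s_conductance(v, W_c, E))
```

Note that in the conductance-based form the same release term $W^c_{ji}\, g(v_j)$ is reused, but scaled by the driving force $E_{ji} - v_i$, so the input shrinks (or reverses sign) as the post-synaptic voltage approaches the reversal potential.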
The post-synaptic current is then given by the product of the synaptic conductance and the difference between the post-synaptic voltage $v_i(t)$ and the synaptic reversal potential $E_{ji}$. In contrast to the current-based synapse, whose input is independent of the post-synaptic voltage $v_i(t)$, the conductance-based synapse model more accurately also has a dependence on the post-synaptic voltage. In addition, this model decouples the sign of the synapse from its strength. The reversal potential $E_{ji}$ of a synapse dictates whether it is excitatory or inhibitory: a large and positive $E_{ji}$ corresponds to an excitatory synapse, causing depolarization of the post-synaptic neuron, while an inhibitory synapse has a negative $E_{ji}$, causing hyperpolarization. In this model, we can now independently train the sign of a synapse and its non-negative strength $W^c_{ji}$, which is not easily possible with current-based synapses. In both the current-based and conductance-based models, the following equation was used to represent the electrical synaptic inputs:
$$s^e_i(t) = \sum_{j=1}^{N} W^e_{ji} \left(v_j(t) - v_i(t)\right), \qquad (4)$$
where $W^e_{ji}$ is restricted to be non-negative and $v_j - v_i$ is the potential difference between the pre-synaptic and post-synaptic neurons. We restrict $W^e_{ji} = W^e_{ij}$ because electrical synapses are symmetric. To directly compare the outputs of the LVM to neural activity measurements, our model must generate calcium signals from the voltage traces. We model the calcium concentration $[\mathrm{Ca}]_i$ of each neuron $i$ as a first-order leaky integrator, driven by voltage-gated calcium channels with the same nonlinear current-voltage (I-V) function $g(v_i)$:
$$\tau_{[\mathrm{Ca}]} [\dot{\mathrm{Ca}}]_i(t) + [\mathrm{Ca}]_i(t) = g(v_i(t)), \qquad (5)$$
where $g(\cdot)$ represents the softplus activation and $\tau_{[\mathrm{Ca}]}$ is a time constant shared across all neurons.
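The coupled voltage and calcium dynamics, eqs. (1), (4), and (5), can be forward-simulated with a simple Euler integrator. The sketch below is a toy sanity check, not the paper's simulator: chemical input is omitted, and all sizes, time constants, and the gap-junction matrix `W_e` are illustrative assumptions.

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)   # g(.) from the paper

N, T, dt = 3, 200, 0.01
tau = np.full(N, 0.5)             # per-neuron voltage time constants
tau_ca = 1.0                      # shared calcium time constant
v_rest = np.zeros(N)
W_e = np.array([[0.0, 0.3, 0.0],  # symmetric, non-negative gap junctions
                [0.3, 0.0, 0.1],
                [0.0, 0.1, 0.0]])
assert np.allclose(W_e, W_e.T)    # W^e_ji = W^e_ij

v = np.zeros(N)
ca = softplus(v)                  # start calcium at its steady state, eq. (5)
h = np.array([1.0, 0.0, 0.0])     # stimulus to one sensory neuron only
for _ in range(T):
    # eq. (4): s^e_i = sum_j W^e_ji (v_j - v_i); symmetry lets us use W_e @ v
    s_e = W_e @ v - W_e.sum(axis=1) * v
    # eq. (1), deterministic part, with chemical input s^c omitted for brevity
    v = v + dt * (-v + s_e + v_rest + h) / tau
    # eq. (5): calcium relaxes toward g(v)
    ca = ca + dt * (-ca + softplus(v)) / tau_ca
print(v, ca)
```

With these toy values, the stimulated neuron depolarizes most, and the gap junctions drag its neighbors up in order of coupling strength, which is the qualitative behavior eqs. (1) and (4) encode.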
We then map the calcium concentration $[\mathrm{Ca}]$ into the measured calcium fluorescence signals $f$ via an affine transform with scale $\alpha^f$ and bias $\beta^f$,
$$f_i(t) = \alpha^f [\mathrm{Ca}]_i(t) + \beta^f + \sigma^f \epsilon^f_i(t), \qquad (6)$$
with measurement noise represented by a noise amplitude $\sigma^f$ and a noise term $\epsilon^f_i(t) \sim \mathcal{N}(0, 1)$.
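Putting the generative model and the variational inference together, a single-sample ELBO estimate can be sketched as below: a voltage trajectory is drawn from a factorized Gaussian posterior via the reparameterization trick, then scored under an Euler-discretized dynamics prior and the Gaussian fluorescence likelihood implied by eq. (6). This is a deliberately toy sketch, not the authors' implementation: all sizes and parameter values (`sigma_v`, `alpha_f`, etc.) are assumptions, the coupling uses only a generic weight matrix `W`, and the calcium ODE of eq. (5) is collapsed so that fluorescence reads out $g(v)$ directly.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 4                      # time steps, neurons (toy sizes)
dt, tau, sigma_v = 0.1, 1.0, 0.2   # Euler step, time constant, process noise
alpha_f, beta_f, sigma_f = 1.0, 0.1, 0.05

def softplus(x):
    return np.logaddexp(0.0, x)

def log_normal(x, mu, sig):
    # sum of independent Gaussian log-densities over all entries
    return -0.5 * np.sum(((x - mu) / sig) ** 2 + np.log(2 * np.pi * sig ** 2))

def elbo(f_obs, mu, sig, W):
    """One-sample ELBO: log p(f|v) + log p(v) - log q(v), v ~ q."""
    v = mu + sig * rng.standard_normal(mu.shape)        # reparameterized draw
    # prior: Euler-discretized leaky-integrator dynamics with coupling W
    # (stimulus and resting-potential terms omitted in this sketch)
    drift = (-v[:-1] + softplus(v[:-1]) @ W) * (dt / tau)
    log_prior = log_normal(v[1:], v[:-1] + drift, sigma_v * np.sqrt(dt))
    # likelihood from eq. (6), with calcium collapsed to g(v)
    log_lik = log_normal(f_obs, alpha_f * softplus(v) + beta_f, sigma_f)
    log_q = log_normal(v, mu, sig)
    return log_lik + log_prior - log_q

mu = np.zeros((T, N)); sig = 0.1 * np.ones((T, N))      # q(v) parameters
W = 0.1 * rng.standard_normal((N, N))
f_obs = rng.standard_normal((T, N))                      # fake observations
print(elbo(f_obs, mu, sig, W))
```

In the actual model, `mu` and `sig` would be produced by the temporal convolutional inference network from the calcium measurements and stimulus, and the ELBO would be maximized jointly over the inference network and the biophysical generative parameters.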
The paper develops a latent variable model with biologically meaningful parameters for whole-brain neural traces of the worm *C. elegans*. Given the connectivity between the neurons, the neural traces are modeled as stochastic leaky integrators with either a conductance-based or a current-based synaptic input model. Given the voltage traces, the observed calcium signals are modeled as first-order leaky integrators with additive noise to account for the observation noise. External input is nonlinearly transformed and fed into the neurons to account for the stimulus-dependent modulation of neural activity. Multiple connectivity models (fully trainable, fully trainable with $L_2$ regularization, trainable weight magnitudes, trainable global scale) are used to investigate whether biological connectome constraints help with neural activity predictions or not. Variational inference is used to infer a posterior distribution over neural activity trajectories with a factorized Gaussian distribution as the variational family. Results on neuron-hold-out and worm-hold-out experiments suggest that connectome constraints and conductance-based modeling improve model predictions on unseen neurons and worms.
SP:e4f38b0766d08cb530e4a0133c62fed4849a14e3
Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity
Brain-wide measurements of neural activity combined with detailed measurements of anatomical connectivity of the C. elegans nervous system in principle allow for the development of detailed mechanistic computational models . However , there are several challenges . We often do not have direct experimental access to important modeling details such as single-neuron dynamics and the signs and strengths of the synaptic connectivity . Further , neural activity can only be measured in a subset of neurons , often indirectly via calcium imaging , and significant trial-totrial variability has been observed . To overcome these challenges , we introduce a connectome-constrained latent variable model ( CC-LVM ) of the unobserved voltage dynamics of the entire C. elegans nervous system and the observed calcium signals . We used the framework of variational autoencoders to fit parameters of the mechanistic simulation constituting the generative model of the LVM to calcium imaging observations . A variational approximate posterior distribution over latent voltage traces for all neurons is efficiently inferred using an inference network , and constrained by a prior distribution given by the biophysical simulation of neural dynamics . When applied to a recent dataset , we find that connectomic constraints enable our LVM to predict the activity of neurons whose activity were withheld . We explored models with different degrees of biophysical detail , and found that models with the most realistic conductance-based synapses provide markedly better predictions than current-based synapses for this system . 1 INTRODUCTION . The anatomical connectivity of the entire C. elegans nervous system , including both chemical and electrical synapses , has been known for several decades [ 25 ; 23 ; 26 ] . However , well-calibrated and predictive connectome-constrained mechanistic models of this nervous system have yet to be demonstrated [ 10 ; 24 ; 22 ; 7 ; 5 ] . 
This is because currently available experimental data are insufficient to completely constrain computational models . First , the single-neuron and synapse dynamics are generally unknown . Second , the connectome does not directly inform the signs and strengths of individual synapses . Third , the response properties of sensory neurons are incompletely known . Further , it is unclear what level of biophysical detail is necessary to reproduce the essential computational function of the C. elegans nervous system . We use recently available whole brain calcium imaging data to constrain the missing parameters in a connectome-constrained , biophysically detailed model of the C. elegans nervous system . We start with a simplified non-spiking passive point-neuron model of the voltage dynamics of individual neurons in the circuit . We model inputs to the neurons from electrical synapses , and model nonlinear chemical synapses with either current-based or conductance-based biophysics . The challenge of fitting such a model to data is two-fold . First , the voltage dynamics of the neurons are not directly observed , but rather indirectly measured via their slow calcium dynamics . Second , there is significant trial-to-trial variability in neural activity , suggesting strong initial state dependence in the neural responses even to the same sensory stimulus [ 6 ] . These issues can both be addressed by treating the collective voltage signals of all neurons in the nervous system as an unobserved latent variable whose dynamics are determined by the simplified connectome-constrained biophysical model with unknown neuronal and synaptic parameters . Our connectome-constrained latent variable model ( CC-LVM ) of voltage dynamics of the entire C. elegans nervous system is a large-scale latent variable model with a very high-dimensional latent space consisting of voltage dynamics of 300 neurons over 5 minutes of time . Many sources say that C. 
elegans has 302 neurons , but two of them are not connected to the rest of the nervous system , so we only model the 300 that are [ 26 ] . The generative model for these latent variables are described by stochastic differential equations modeling the nonlinear dynamics of the network activity , and a novel differential equation describing how the calcium signals signals are generated from the voltage dynamics of individual non-spiking neurons . We developed a variational autoencoder based framework for inferring the unobserved voltage dynamics from the observed calcium dynamics of only a subset of the neurons in the nervous system . An inference network enables efficient variational inference of the latent variables , and fitting the unknown neuronal and synaptic parameters of our connectome and biophysics based generative model . The compact nervous system of C. elegans makes it an ideal platform for systems neuroscience . The C. elegans nervous system consists of 300 neurons , divided into 118 distinct classes ( often bilaterally symmetric pairs ) [ 21 ; 11 ; 20 ] . These neurons can largely be categorized as sensory neurons , interneurons , and motor neurons . The majority of these neurons , about 200 , are concentrated in the head , forming the brain of the worm . The synaptic connectivity of C. elegans has been mapped with electron microscopy , providing researchers with a complete connectome [ 25 ; 26 ] . We apply the CC-LVM to whole-brain calcium imaging data which captures 170 of the 300 neurons in multiple worms as they respond to chemosensory stimuli [ 27 ] . In principle , an accurate model of the nervous system constrained by incomplete activity measurements but complete connectivity measurements can enable accurate predictions of activity in neurons which were not recorded . We tested this hypothesis by using the CC-LVM to predict the activity of neurons which were measured , but whose activity was withheld during model training . 
We also used the model to predict the activity of entire worms by holding out single trials during training. The CC-LVM predicted the activity of withheld neurons and withheld worms significantly better than models unconstrained by the connectome, demonstrating the utility of the connectome even when little is known about the signs and strengths of individual connections. Further, we found that models with conductance-based synapses provide superior predictions to models with current-based synapses. CC-LVMs thus provide a new tool for connectome- and activity-constrained modeling of neural circuits and for discovering the appropriate level of biophysical detail for a given system. 1.1 PRIOR WORK . Previous work has focused on creating network simulations based on anatomical connectome data, with unknown single-neuron and synaptic biophysics [13; 5; 24; 15; 10]. These network models provided a holistic view of the entire nervous system and were validated by comparing simulated locomotion with movements of live animals [8]. However, they were not fit to recorded calcium fluorescence data, so their simulated neuronal activities have not been quantitatively validated. Recent advances in machine learning have enabled increasingly sophisticated latent variable models to uncover structure from data generated by advanced neural interfacing technologies [17]. This framework has been used in other contexts to infer neuronal voltage dynamics from high-dimensional calcium fluorescence recordings [16; 1; 19]. Despite their utility, these models do not account for connectomic data, which may greatly improve their predictive power. Our work employs both LVMs and connectomic constraints to create a model that is informed by both neural population dynamics and network structure. Recently, Bayesian models have been widely applied to neuronal datasets to infer spiking activity, neural dynamics, and connectivity [24; 1; 19].
In particular, Warrington et al. [24] use sequential Monte Carlo (SMC) to impute the intracellular voltage potentials of C. elegans neurons from 49 recorded calcium traces [9]. Their work combined neuronal, body, and calcium observation simulators in order to model locomotion [3; 14]. Their simulator produces a series of exemplar neuronal voltage traces, but they do not evaluate the imputed traces of unmeasured neurons against real data, so the accuracy of their inferences is unknown. Furthermore, they explore how different methods for approximate Bayesian inference affect parameter estimation on simulated data and report that their method performs best. Our model performs inference with a variational auto-encoder (VAE) instead of SMC; the VAE is more computationally efficient because it allows direct sampling from the approximate posterior. Additionally, the CC-LVM is trained on significantly more neurons, and we validate our model by evaluating its ability to predict the activity of neurons held out from the training data. Finally, rather than study how various inference methods affect parameter estimation in a test autoregressive model, we search a space of generative models to optimize voltage predictions by comparing them to real calcium data. 2 CONNECTOME-CONSTRAINED LATENT VARIABLE MODEL . We constructed a connectome-constrained latent variable model (CC-LVM) of the C. elegans nervous system, where each node in our network represents a specific neuron in the animal. The activity of each neuron was modeled with a latent variable analogous to voltage. The dynamics of each neuron in the network is represented with a stochastic non-spiking passive leaky integrator equation with learned time constants and resting membrane potentials [4]. The neurons were coupled by both chemical and electrical synapses with learned weights [4]. We allowed these weights to be non-zero only where the connectome indicates the existence of synapses.
Given a set of learned parameters (weights, time constants, and resting membrane potentials), the dynamics of the network defines a prior distribution over neural activity trajectories. Importantly, the stochastic nature of the dynamics allows deviations from a purely deterministic trajectory. This allows the model to accommodate the observed variability in single-neuron dynamics. This variability has several potential sources, including the unmeasured initial states of the neurons and our incomplete knowledge of the sensory inputs driving the nervous system. A latent variable model of this scale, with a nonlinear generative model defined by stochastic dynamics, is difficult to fit. To address this challenge, we used the probabilistic inference framework of variational auto-encoders (VAE) [18] to train a black-box voltage inference network to predict a posterior distribution over neural activity trajectories. We then used this inference network to train the parameters of the CC-LVM. The resulting LVM has a biologically realistic generative model of the nonlinear neural dynamics of the C. elegans nervous system, and a black-box temporal convolutional inference network which, given sensory stimulus and calcium imaging measurements, predicts a factorized Gaussian distribution over the voltages of all the neurons in the network. Several variants of the LVM were developed: we tested models in which the synaptic connections were modeled as either current-based or conductance-based [4], and evaluated different levels of connectome constraint. The LVM was optimized with the ELBO (evidence lower bound) objective. 2.1 NETWORK MODEL WITH PASSIVE POINT-NEURON VOLTAGE DYNAMICS . Neurons in the C. elegans nervous system are largely non-spiking [2], so we model the voltage dynamics of these neurons as passive point neurons with a single electrical compartment. Let $v \in \mathbb{R}^N$ denote the voltages of the $N$ neurons.
The voltage $v_i(t)$ of each post-synaptic neuron $i$ at time $t$ was calculated using a first-order leaky integrator equation given by
$$\tau_i \dot{v}_i(t) + v_i(t) = s^c_i(t) + s^e_i(t) + v^{\mathrm{rest}}_i + h_i(t), \qquad (1)$$
where $\tau_i$ is the voltage time constant, $h_i$ is the chemosensory input provided only to the sensory neurons, $s^c_i$ is the chemical synaptic input, $s^e_i$ is the electrical synaptic input, and $v^{\mathrm{rest}}_i$ is the resting neuron voltage. We studied two variations of the model, the current-based model and the conductance-based model, which differ in their formulations of the chemical synaptic input $s^c_i$. Since neurons in the C. elegans nervous system are largely non-spiking, we model the chemical synapses as having graded release of neurotransmitter, rather than the all-or-none quantal release seen in spiking neurons. In both models, the amount of neurotransmitter released is modeled as proportional to the pre-synaptic voltage $v_j$ passed through a softplus activation $g(\cdot)$, which sets a minimum voltage below which there is no synaptic release, giving the term $W^c_{ji}\, g(v_j(t))$. We use $g(\cdot)$ to denote the softplus function for the rest of the paper. In our current-based model, the synaptic input $s^c_i$ to a post-synaptic neuron $i$ is directly proportional to the pre-synaptic neurotransmitter concentration:
$$s^c_i(t) = \sum_{j}^{N} W^c_{ji}\, g(v_j(t)), \qquad (2)$$
where $W^c_{ji}$ represents the chemical synaptic weight between pre-synaptic neuron $j$ and post-synaptic neuron $i$. $W^c_{ji}$ can be positive or negative depending on whether the synaptic connection is excitatory or inhibitory. If neurons $j$ and $i$ are not connected, $W^c_{ji}$ is set to zero. In the conductance-based model, we model the synaptic current entering the post-synaptic neuron with more biophysical detail as
$$s^c_i(t) = \sum_{j}^{N} \big(E_{ji} - v_i(t)\big)\, W^c_{ji}\, g(v_j(t)). \qquad (3)$$
Here, the pre-synaptic neurotransmitter concentration $W^c_{ji}\, g(v_j(t))$ is more accurately modeled as proportional to the conductance at the post-synaptic terminal.
The post-synaptic current is then given by the product of the synaptic conductance and the difference between the post-synaptic voltage $v_i(t)$ and the synaptic reversal potential $E_{ji}$. In contrast to the current-based synapse, whose input is independent of the post-synaptic voltage $v_i(t)$, the conductance-based synapse also depends on the post-synaptic voltage, which is more biophysically accurate. In addition, this model decouples the sign of a synapse from its strength: the reversal potential $E_{ji}$ dictates whether a synapse is excitatory or inhibitory. A large and positive $E_{ji}$ corresponds to an excitatory synapse causing depolarization of the post-synaptic neuron, while an inhibitory synapse has a negative $E_{ji}$, causing hyperpolarization. In this model, we can now independently train the sign of a synapse and its non-negative strength $W^c_{ji}$, which is not easily possible with current-based synapses. In both the current-based and conductance-based models, the following equation was used to represent electrical synaptic inputs:
$$s^e_i(t) = \sum_{j}^{N} W^e_{ji}\, \big(v_j(t) - v_i(t)\big), \qquad (4)$$
where $W^e_{ji}$ is restricted to be non-negative and $v_j - v_i$ is the potential difference between the pre-synaptic and post-synaptic neurons. We restrict $W^e_{ji} = W^e_{ij}$ because electrical synapses (gap junctions) couple the two neurons symmetrically. To directly compare the outputs of the LVM to neural activity measurements, our model must generate calcium signals from the voltage traces. We model the calcium concentration $[\mathrm{Ca}]_i$ of each neuron $i$ as a first-order leaky integrator, driven by voltage-gated calcium channels with the same nonlinear current-voltage (I-V) function $g(v_i)$:
$$\tau^{[\mathrm{Ca}]}\, [\dot{\mathrm{Ca}}]_i(t) + [\mathrm{Ca}]_i(t) = g(v_i(t)), \qquad (5)$$
where $g(\cdot)$ is the softplus activation and $\tau^{[\mathrm{Ca}]}$ is a time constant shared across all neurons.
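The structural constraints above (chemical weights free in sign but zeroed where the connectome has no synapse; electrical weights non-negative and symmetric) can be enforced by construction when parameterizing the weight matrices. The following NumPy sketch is our illustration of one such parameterization, not the authors' implementation; the function and variable names are assumptions.

```python
import numpy as np


def constrained_weights(raw_c, raw_e, mask_c, mask_e):
    """Build synaptic weight matrices satisfying the model's constraints.

    raw_c, raw_e: unconstrained (N, N) parameter matrices (e.g. learnable).
    mask_c, mask_e: boolean (N, N) connectome masks for chemical and
    electrical synapses (mask_e is assumed symmetric, as gap junctions are).
    """
    # Chemical weights: free sign, zero wherever the connectome has no synapse.
    Wc = raw_c * mask_c
    # Electrical weights: non-negative via softplus, symmetrized, then masked.
    sp = np.logaddexp(0.0, raw_e)          # elementwise softplus
    We = 0.5 * (sp + sp.T) * mask_e        # enforces W_e[j, i] == W_e[i, j]
    return Wc, We
```

Gradient-based training would then optimize `raw_c` and `raw_e` directly, with the constraints holding automatically at every step.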
We then map the calcium concentration $[\mathrm{Ca}]$ into the measured calcium fluorescence signals $f$ via an affine transform with scale $\alpha^f$ and bias $\beta^f$:
$$f_i(t) = \alpha^f\, [\mathrm{Ca}]_i(t) + \beta^f + \sigma^f \epsilon^f_i(t), \qquad (6)$$
with measurement noise represented by a noise amplitude $\sigma^f$ and a noise term $\epsilon^f_i(t) \sim \mathcal{N}(0, 1)$.
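Putting equations (1)-(6) together, the generative path from voltages to calcium to fluorescence can be sketched with forward-Euler integration of the conductance-based model. This is an illustrative reimplementation under simplifying assumptions (deterministic voltage dynamics rather than the paper's stochastic ones, placeholder parameter values), not the authors' code.

```python
import numpy as np


def softplus(x):
    # g(.): graded-release nonlinearity (numerically stable softplus)
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)


def simulate(v0, Wc, We, E, tau, v_rest, h, dt=0.01, T=1000,
             tau_ca=1.0, alpha=1.0, beta=0.0, sigma=0.05, rng=None):
    """Euler-integrate voltages -> calcium -> fluorescence.

    Shapes: v0 (N,), Wc/We/E (N, N), tau/v_rest (N,), h (T, N).
    Conductance-based chemical synapses, symmetric electrical synapses.
    All parameter values here are illustrative, not fitted.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = v0.size
    v, ca = v0.copy(), softplus(v0)
    F = np.empty((T, N))
    for t in range(T):
        g = softplus(v)                                           # release g(v_j)
        s_chem = ((E - v[None, :]) * Wc * g[:, None]).sum(axis=0)  # eq. (3)
        s_elec = (We * (v[:, None] - v[None, :])).sum(axis=0)      # eq. (4)
        v += dt / tau * (-v + s_chem + s_elec + v_rest + h[t])     # eq. (1)
        ca += dt / tau_ca * (-ca + softplus(v))                    # eq. (5)
        F[t] = alpha * ca + beta + sigma * rng.standard_normal(N)  # eq. (6)
    return F
```

In the CC-LVM these equations act as the generative model: the inference network proposes voltage trajectories, and the fluorescence they generate is compared to the recorded calcium traces.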
The current manuscript presents a connectome-constrained latent variable model for whole-brain calcium imaging of the C. elegans nervous system. The whole-brain calcium imaging data were collected while the animals responded to chemosensory stimuli, and the dataset has been published recently. In the current study, the authors aimed to present a model that could predict the single-neuron and single-trial activity of this dataset. Specifically, the activity of each neuron in the whole-brain imaging was modeled by a latent variable analogous to the voltage of one unit in the model network. The connections between model network units were constrained by the connectome. The authors showed that the connectome constraints significantly improved the predictive power of the model, both in predicting the activity of held-out units and in predicting the single-trial activity of held-out worms. Overall, the authors have provided a clear description of the model and carried out a good amount of experiments to evaluate different model variants.
Self-supervised Learning is More Robust to Dataset Imbalance
1 INTRODUCTION . Self-supervised learning ( SSL ) is an important paradigm for machine learning , because it can leverage the availability of large-scale unlabeled datasets to learn representations for a wide range of downstream tasks and datasets ( He et al. , 2020 ; Chen et al. , 2020 ; Grill et al. , 2020 ; Caron et al. , 2020 ; Chen & He , 2021 ) . Current SSL algorithms are mostly trained on curated , balanced datasets , but large-scale unlabeled datasets in the wild are inevitably imbalanced with a long-tailed label distribution ( Reed , 2001 ) . Curating a class-balanced unlabeled dataset requires the knowledge of labels , which defeats the purpose of leveraging unlabeled data by SSL . The robustness of SSL algorithms to dataset imbalance remains largely underexplored in the literature , but extensive studies on supervised learning ( SL ) with imbalanced datasets do not bode well . The performance of vanilla supervised methods degrades significantly on imbalanced datasets ( Cui et al. , 2019 ; Cao et al. , 2019 ; Buda et al. , 2018 ) , posing challenges to practical applications such as instance segmentation ( Tang et al. , 2020 ) and depth estimation ( Yang et al. , 2021 ) . Many recent works address this issue with various regularization and re-weighting/re-sampling techniques ( Ando & Huang , 2017 ; Wang et al. , 2017 ; Jamal et al. , 2020 ; Cui et al. , 2019 ; Cao et al. , 2019 ; 2021 ; Wang et al. , 2020 ; Tian et al. , 2020 ; Hong et al. , 2021 ) . In this work , we systematically investigate the representation quality of SSL algorithms under class imbalance . Perhaps surprisingly , we find out that off-the-shelf SSL representations are already more robust to dataset imbalance than the representations learned by supervised pre-training . We evaluate the representation quality by linear probe on in-domain ( ID ) data and finetuning on out-of-domain ( OOD ) data . 
We compare the robustness of SL and SSL representations by computing the gap between the performance of the representations trained on balanced and imbalanced datasets of the same sizes . We observe that the balance-imbalance gap for SSL is much smaller than SL , under a variety of configurations with varying dataset sizes and imbalance ratios and with both ID and OOD evaluations ( see Figure 1 and Section 2 for more details ) . This robustness holds even with the same number of samples for SL and SSL , although SSL does not require labels and hence can be more easily applied to larger datasets than SL . Why is SSL robust to dataset imbalance ? We hypothesize the following underlying cause to answer this fundamental question : SSL learns richer features from the frequent classes than SL does . These features may help classify the rare classes under ID evaluation and are transferable to the downstream tasks under OOD evaluation . For simplicity , consider the situation where rare classes have so limited data that either SL or SSL models overfit to the rare data . In this case , it is important for the models to learn diverse features from the frequent classes which can help classify the rare classes . Supervised learning is only incentivized to learn those features relevant to predicting frequent classes and may ignore other features . In contrast , SSL may learn the structures within the frequent classes better— because it is not supervised or incentivized by any labels , it can learn not only the label-relevant features but also other interesting features capturing the intrinsic properties of the input distribution , which may generalize/transfer better to rare classes and downstream tasks . We empirically validate this intuition by visualizing the features on a semi-synthetic dataset where the label-relevant features and label-irrelevant-but-transferable features are prominently seen by design ( cf . Section 3.2 ) . 
In addition, we construct a toy example where we can rigorously prove the difference between self-supervised and supervised features in Section 3.1. Finally, given our theoretical insights, we take a step towards further improving SSL algorithms, closing the small gap between balanced and imbalanced datasets. We identify the generalization gap between the empirical and population pre-training losses on rare data as the key to improvements. To this end, we design a simple algorithm that first roughly estimates the density of examples with kernel density estimation and then applies a larger sharpness-based regularization (Foret et al., 2020) to the estimated rare examples. Our algorithm consistently improves the representation quality under several evaluation protocols. We sum up our contributions as follows. (1) We are the first to systematically investigate the robustness of self-supervised representation learning to dataset imbalance. (2) We propose and validate an explanation of this robustness of SSL, empirically and theoretically. (3) We propose a principled method to improve SSL under unknown dataset imbalance. 2 EXPLORING THE EFFECT OF CLASS IMBALANCE ON SSL . Dataset class imbalance can pose a challenge to self-supervised learning in the wild. Without access to labels, we cannot know in advance whether a large-scale unlabeled dataset is imbalanced. Hence, we need to study how SSL behaves under dataset imbalance in order to deploy SSL in the wild safely. In this section, we systematically investigate the effect of class imbalance on self-supervised representations with experiments. 2.1 PROBLEM FORMULATION . Class-imbalanced pre-training datasets . We assume the datapoints / inputs are in $\mathbb{R}^d$ and come from $C$ underlying classes. Let $x$ denote the input and $y$ denote the corresponding label.
Supervised pre-training algorithms have access to the inputs and corresponding labels, whereas self-supervised pre-training only observes the inputs. Given a pre-training distribution $P$ over $\mathbb{R}^d \times [C]$, let $r$ denote the ratio of class imbalance, i.e., the ratio between the probability of the rarest class and that of the most frequent class:
$$r = \frac{\min_{j \in [C]} P(y = j)}{\max_{j \in [C]} P(y = j)} \le 1.$$
We will construct distributions with varying imbalance ratios and use $P^r$ to denote the distribution with ratio $r$. We also write $P^{\mathrm{bal}}$ for the case where $r = 1$, i.e., the dataset is balanced. Large-scale data in the wild often follow heavily long-tailed label distributions, i.e., $r$ is small. Throughout this paper we assume that for any class $j \in [C]$, the class-conditional distribution $P^r(x \mid y = j)$ is the same across balanced and imbalanced datasets for all $r$. The pre-training dataset $\hat{P}^r_n$ consists of $n$ i.i.d. samples from $P^r$. Pre-trained models . A feature extractor is a function $f_\phi : \mathbb{R}^d \to \mathbb{R}^m$, parameterized by neural network parameters $\phi$, which maps inputs to representations. A linear head is a linear function $g_\theta : \mathbb{R}^m \to \mathbb{R}^C$, which can be composed with $f_\phi$ to produce the label. SSL algorithms learn $\phi$ from unlabeled data. Supervised pre-training learns the feature extractor and the linear head from labeled data. We drop the head and only evaluate the quality of the feature extractor $\phi$. Following the standard evaluation protocol in prior works (He et al., 2020; Chen et al., 2020), we measure the quality of learned representations on both in-domain and out-of-domain datasets with either linear probe or fine-tuning, as detailed below. In-domain (ID) evaluation tests the performance of representations on the balanced in-domain distribution $P^{\mathrm{bal}}$ with linear probe. Given a feature extractor $f_\phi$ pre-trained on a pre-training dataset $\hat{P}^r_n$ with $n$ data points and imbalance ratio $r$, we train a $C$-way linear classifier $\theta$ on a balanced dataset sampled i.i.d.
from $P^{\mathrm{bal}}$. We evaluate the representation quality with the top-1 accuracy of the learned linear head on $P^{\mathrm{bal}}$. We denote the ID accuracy of supervised pre-trained representations by $A^{\mathrm{SL}}_{\mathrm{ID}}(n, r)$. Note that $A^{\mathrm{SL}}_{\mathrm{ID}}(n, 1)$ stands for the result with a balanced pre-training dataset. For SSL representations, we denote the accuracy by $A^{\mathrm{SSL}}_{\mathrm{ID}}(n, r)$. Out-of-domain (OOD) evaluation tests the performance of representations by fine-tuning the feature extractor and the head on one (or multiple) downstream target distribution $P_t$. Starting from a feature extractor $f_\phi$ (pre-trained on a dataset of size $n$ and imbalance ratio $r$) and a randomly initialized classifier $\theta$, we fine-tune $\phi$ and $\theta$ on the target dataset $\hat{P}_t$, and evaluate the representation quality by the expected top-1 accuracy on $P_t$. We use $A^{\mathrm{SL}}_{\mathrm{OOD}}(n, r)$ and $A^{\mathrm{SSL}}_{\mathrm{OOD}}(n, r)$ to denote the resulting accuracies of supervised and self-supervised representations, respectively. Summary of varying factors . We aim to study the effect of class imbalance on feature quality in a diverse set of configurations with the following varying factors: (1) the number of examples in pre-training $n$, (2) the imbalance ratio of the pre-training dataset $r$, (3) ID or OOD evaluation, and (4) the self-supervised learning algorithm: MoCo v2 (He et al., 2020) or SimSiam (Chen & He, 2021). 2.2 EXPERIMENTAL SETUP . Datasets . We pre-train the representations on variants of ImageNet (Russakovsky et al., 2015) or CIFAR-10 (Krizhevsky & Hinton, 2009) with a wide range of numbers of examples and ratios of imbalance. Following Liu et al. (2019), we consider exponential and Pareto distributions, which closely simulate natural long-tailed distributions. We consider imbalance ratios in {1, 0.004, 0.0025} for ImageNet and {1, 0.1, 0.01} for CIFAR-10.
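To make the imbalance-ratio definition concrete, here is one common way to construct exponentially decaying per-class sample counts with a target ratio r; this is a sketch of the standard long-tailed construction, not necessarily the paper's exact recipe.

```python
import numpy as np


def exponential_class_counts(n_max, num_classes, r):
    """Per-class sample counts decaying exponentially from n_max down to
    r * n_max, so that r = (rarest count) / (most frequent count)."""
    decay = r ** (np.arange(num_classes) / (num_classes - 1))
    return np.maximum(1, np.round(n_max * decay)).astype(int)


# Example: CIFAR-10-style split with imbalance ratio r = 0.01.
counts = exponential_class_counts(n_max=5000, num_classes=10, r=0.01)
imbalance_ratio = counts.min() / counts.max()  # recovers r up to rounding
```

One would then subsample each class of the balanced dataset down to its entry in `counts` to obtain the imbalanced pre-training set.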
For each imbalance ratio, we further downsample the dataset with a sampling ratio in {0.75, 0.5, 0.25, 0.125} to form datasets with varying sizes. Note that we fix the variant of the dataset when comparing different algorithms. For ID evaluation, we use the original CIFAR-10 or ImageNet training set for the training phase of the linear probe and use the original validation set for the final evaluation. For OOD evaluation of representations learned on CIFAR-10, we use STL-10 (Coates et al., 2011) as the target / downstream dataset. For OOD evaluation of representations learned on ImageNet, we fine-tune the pre-trained feature extractors on CUB-200 (Wah et al., 2011), Stanford Cars (Krause et al., 2013), Oxford Pets (Parkhi et al., 2012), and Aircrafts (Maji et al., 2013), and measure the representation quality with the average accuracy on the downstream tasks. Models . We use ResNet-18 on CIFAR-10 and ResNet-50 on ImageNet as backbones. For supervised pre-training, we follow the standard protocol of He et al. (2016) and Kang et al. (2020). For self-supervised pre-training, we consider MoCo v2 (He et al., 2020) and SimSiam (Chen & He, 2021). We run each evaluation experiment with 3 seeds and report the average and standard deviation in the figures. Further implementation details are deferred to Section A.1. [Footnote 1: It is well-known that the composition of the head and features learned from supervised learning is more sensitive to imbalanced datasets than the quality of the feature extractor φ (Cao et al., 2019; Kang et al., 2020). Please also see Table 4 in Appendix C for a comparison. Footnote 2: We essentially use the largest balanced labeled ID dataset for this evaluation, which oftentimes means the entire curated training dataset, such as CIFAR-10 with 50,000 examples and ImageNet with 1,281,167 examples.]
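The density-based reweighting step of the algorithm sketched in the introduction (estimate example density with kernel density estimation, then up-weight rare examples) can be illustrated as follows. The Gaussian kernel, fixed bandwidth, and mean-one normalization are our assumptions for the sketch, not the paper's exact procedure.

```python
import numpy as np


def kde_density(feats, bandwidth=1.0):
    """Gaussian-kernel density estimate at each feature vector.

    O(n^2) in the number of examples: fine for a sketch, too slow for
    ImageNet-scale feature sets without subsampling or approximation.
    """
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)


def rarity_weights(feats, bandwidth=1.0):
    # Weight each example inversely to its estimated density, normalized
    # so the mean weight is 1; rare examples receive weight > 1.
    w = 1.0 / kde_density(feats, bandwidth)
    return w / w.mean()
```

These weights would then scale the per-example contribution inside the sharpness-based regularizer, so that estimated rare examples receive a stronger flatness penalty.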
The authors explore the robustness of feature learning via self-supervised learning (SSL) and supervised learning (SL) with imbalanced datasets. Generally, SL can learn better features than SSL, and features learned from balanced datasets are better than those from imbalanced datasets. However, with imbalanced data, SSL is more robust than SL in terms of the performance difference between features learned from balanced and imbalanced datasets. That is, for SSL the performance does not drop as much when moving from balanced to imbalanced data. The robustness is observed for both in-domain (ID) and out-of-domain (OOD) tasks (different downstream tasks). They hypothesized that "SSL learns richer features from frequent data that are transferable to rare data." Using a synthetic dataset with 2 frequent classes and 1 rare class, they observe that SL learns one feature to distinguish the two frequent classes and one feature to overfit the rare class. However, SSL learns two features that can classify the 3 classes well. That is, SSL can learn label-irrelevant-but-transferable features from the frequent classes which help classify the rare class. To show that the observation from the synthetic dataset also holds in the real world, they construct a semi-synthetic dataset. From CIFAR-10, they generate 5 frequent classes and 5 rare classes. For the frequent classes, the right half of each image is replaced by a random half image, i.e., one irrelevant to the label. For the rare classes, the left half is replaced by a blank. Using activation maps, they observe that SL learns features only from the left half (relevant to the label) but not from the right half (irrelevant to the label), whereas SSL learns features from both halves. To further improve robustness, they adapt SAM to the imbalanced scenario: rwSAM. SAM improves model generalization by penalizing loss sharpness, so that the loss is uniformly low in the neighboring region of the weights.
To have a flatter region for rare examples, rwSAM reweights rare examples in the inner maximization step of SAM. Since labels are not available, they estimate kernel density on feature vectors and the weight is inversely proportional to the density. On two datasets, empirical results indicate rwSAM improves performance over SAM.
SP:53d2472985aefa5cfb31cbedd27a453a124c6938
Self-supervised Learning is More Robust to Dataset Imbalance
1 INTRODUCTION . Self-supervised learning ( SSL ) is an important paradigm for machine learning , because it can leverage the availability of large-scale unlabeled datasets to learn representations for a wide range of downstream tasks and datasets ( He et al. , 2020 ; Chen et al. , 2020 ; Grill et al. , 2020 ; Caron et al. , 2020 ; Chen & He , 2021 ) . Current SSL algorithms are mostly trained on curated , balanced datasets , but large-scale unlabeled datasets in the wild are inevitably imbalanced with a long-tailed label distribution ( Reed , 2001 ) . Curating a class-balanced unlabeled dataset requires the knowledge of labels , which defeats the purpose of leveraging unlabeled data by SSL . The robustness of SSL algorithms to dataset imbalance remains largely underexplored in the literature , but extensive studies on supervised learning ( SL ) with imbalanced datasets do not bode well . The performance of vanilla supervised methods degrades significantly on imbalanced datasets ( Cui et al. , 2019 ; Cao et al. , 2019 ; Buda et al. , 2018 ) , posing challenges to practical applications such as instance segmentation ( Tang et al. , 2020 ) and depth estimation ( Yang et al. , 2021 ) . Many recent works address this issue with various regularization and re-weighting/re-sampling techniques ( Ando & Huang , 2017 ; Wang et al. , 2017 ; Jamal et al. , 2020 ; Cui et al. , 2019 ; Cao et al. , 2019 ; 2021 ; Wang et al. , 2020 ; Tian et al. , 2020 ; Hong et al. , 2021 ) . In this work , we systematically investigate the representation quality of SSL algorithms under class imbalance . Perhaps surprisingly , we find out that off-the-shelf SSL representations are already more robust to dataset imbalance than the representations learned by supervised pre-training . We evaluate the representation quality by linear probe on in-domain ( ID ) data and finetuning on out-of-domain ( OOD ) data . 
We compare the robustness of SL and SSL representations by computing the gap between the performance of the representations trained on balanced and imbalanced datasets of the same sizes . We observe that the balance-imbalance gap for SSL is much smaller than SL , under a variety of configurations with varying dataset sizes and imbalance ratios and with both ID and OOD evaluations ( see Figure 1 and Section 2 for more details ) . This robustness holds even with the same number of samples for SL and SSL , although SSL does not require labels and hence can be more easily applied to larger datasets than SL . Why is SSL robust to dataset imbalance ? We hypothesize the following underlying cause to answer this fundamental question : SSL learns richer features from the frequent classes than SL does . These features may help classify the rare classes under ID evaluation and are transferable to the downstream tasks under OOD evaluation . For simplicity , consider the situation where rare classes have so limited data that either SL or SSL models overfit to the rare data . In this case , it is important for the models to learn diverse features from the frequent classes which can help classify the rare classes . Supervised learning is only incentivized to learn those features relevant to predicting frequent classes and may ignore other features . In contrast , SSL may learn the structures within the frequent classes better— because it is not supervised or incentivized by any labels , it can learn not only the label-relevant features but also other interesting features capturing the intrinsic properties of the input distribution , which may generalize/transfer better to rare classes and downstream tasks . We empirically validate this intuition by visualizing the features on a semi-synthetic dataset where the label-relevant features and label-irrelevant-but-transferable features are prominently seen by design ( cf . Section 3.2 ) . 
In addition , we construct a toy example where we can rigorously prove the difference between self-supervised and supervised features in Section 3.1 . Finally , given our theoretical insights , we take a step towards further improving SSL algorithms , closing the small gap between balanced and imbalanced datasets . We identify the generalization gap between the empirical and population pre-training losses on rare data as the key to improvements . To this end , we design a simple algorithm that first roughly estimates the density of examples with kernel density estimation and then applies a larger sharpness-based regularization ( Foret et al. , 2020 ) to the estimated rare examples . Our algorithm consistently improves the representation quality under several evaluation protocols . We sum up our contributions as follows . ( 1 ) We are the first to systematically investigate the robustness of self-supervised representation learning to dataset imbalance . ( 2 ) We propose and validate an explanation of this robustness of SSL , empirically and theoretically . ( 3 ) We propose a principled method to improve SSL under unknown dataset imbalance . 2 EXPLORING THE EFFECT OF CLASS IMBALANCE ON SSL . Dataset class imbalance can pose challenge to self-supervised learning in the wild . Without access to labels , we can not know in advance whether a large-scale unlabeled dataset is imbalanced . Hence , we need to study how SSL will behave under dataset imbalance to deploy SSL in the wild safely . In this section , we systematically investigate the effect of class imbalance on the self-supervised representations with experiments . 2.1 PROBLEM FORMULATION . Class-imbalanced pre-training datasets . We assume the datapoints / inputs are in Rd and come from C underlying classes . Let x denote the input and y denote the corresponding label . 
Supervised pre-training algorithms have access to the inputs and corresponding labels , whereas self-supervised pre-training only observes the inputs . Given a pre-training distribution P over over Rd × [ C ] , let r denote the ratio of class imbalance . That is , r is the ratio between the probability of the rarest class and the most frequent class : r = minj∈ [ C ] P ( y=j ) maxj∈ [ C ] P ( y=j ) ≤ 1 . We will construct distributions with varying imbalance ratios and use Pr to denote the dataset with ratio r. We also use Pbal for the case where r = 1 , i.e . the dataset is balanced . Large scale data in the wild often follow heavily long-tailed label distributions , i.e. , r is small . Throughout this paper we assume that for any class j ∈ [ C ] , the class-conditional distribution Pr ( x|y = j ) is the same across balanced and imbalanced datasets for all r. The pre-training dataset P̂rn consists of n i.i.d . samples from Pr . Pre-trained models . A feature extractor is a function fφ : Rd → Rm parameterized by neural network parameters φ , which maps inputs to representations . A linear head is a linear function gθ : Rm → RC , which can be composed with fφ to produce the label . SSL algorithms learn φ from unlabeled data . Supervised pre-training learns the feature extractor and the linear head from labeled data . We drop the head and only evaluate the quality of feature extractor φ.1 Following the standard evaluation protocol in prior works ( He et al. , 2020 ; Chen et al. , 2020 ) , we measure the quality of learned representations on both in-domain and out-of-domain datasets with either linear probe or fine-tuning , as detailed below . In-domain ( ID ) evaluation tests the performance of representations on the balanced in-domain distribution Pbal with linear probe . Given a feature extractor fφ pre-trained on a pre-training dataset P̂rn with n data points and imbalance ratio r , we train a C-way linear classifier θ on a balanced dataset2 sampled i.i.d . 
from P_bal. We evaluate the representation quality with the top-1 accuracy of the learned linear head on P_bal. We denote the ID accuracy of supervised pre-trained representations by A^SL_ID(n, r). Note that A^SL_ID(n, 1) stands for the result with a balanced pre-training dataset. For SSL representations, we denote the accuracy by A^SSL_ID(n, r). Out-of-domain (OOD) evaluation tests the performance of representations by fine-tuning the feature extractor and the head on one (or multiple) downstream target distribution(s) P_t. Starting from a feature extractor f_φ (pre-trained on a dataset of size n and imbalance ratio r) and a randomly initialized classifier θ, we fine-tune φ and θ on the target dataset P̂_t, and evaluate the representation quality by the expected top-1 accuracy on P_t. We use A^SL_OOD(n, r) and A^SSL_OOD(n, r) to denote the resulting accuracies of supervised and self-supervised representations, respectively. Summary of varying factors. We aim to study the effect of class imbalance on feature quality over a diverse set of configurations with the following varying factors: (1) the number of examples in pre-training n, (2) the imbalance ratio of the pre-training dataset r, (3) ID or OOD evaluation, and (4) the self-supervised learning algorithm: MoCo v2 (He et al., 2020) or SimSiam (Chen & He, 2021). 2.2 EXPERIMENTAL SETUP. Datasets. We pre-train the representations on variants of ImageNet (Russakovsky et al., 2015) or CIFAR-10 (Krizhevsky & Hinton, 2009) with a wide range of numbers of examples and ratios of imbalance. Following Liu et al. (2019), we consider exponential and Pareto distributions, which closely simulate natural long-tailed distributions. We consider imbalance ratios in {1, 0.004, 0.0025} for ImageNet and {1, 0.1, 0.01} for CIFAR-10.
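The ID linear-probe protocol can be sketched with a closed-form ridge-regression classifier on frozen features. This is a simplification of the usual SGD-trained linear head; the synthetic "representations" and all names below are illustrative, not the paper's setup.

```python
import numpy as np

def linear_probe_accuracy(feats_tr, y_tr, feats_te, y_te, num_classes, reg=1e-3):
    """Fit a linear head (with bias) on frozen features via ridge regression
    to one-hot labels, then report top-1 accuracy on held-out features."""
    def add_bias(X):
        return np.hstack([X, np.ones((X.shape[0], 1))])
    Xtr, Xte = add_bias(feats_tr), add_bias(feats_te)
    Y = np.eye(num_classes)[y_tr]                       # one-hot targets
    A = Xtr.T @ Xtr + reg * np.eye(Xtr.shape[1])
    W = np.linalg.solve(A, Xtr.T @ Y)                   # (m+1, C) linear head
    preds = (Xte @ W).argmax(axis=1)                    # top-1 prediction
    return (preds == y_te).mean()

rng = np.random.default_rng(0)
# Synthetic "representations": two well-separated Gaussian classes in R^8.
mu = np.zeros((2, 8)); mu[1, 0] = 4.0
y_tr = rng.integers(0, 2, 200); y_te = rng.integers(0, 2, 100)
feats_tr = mu[y_tr] + rng.normal(size=(200, 8))
feats_te = mu[y_te] + rng.normal(size=(100, 8))
acc = linear_probe_accuracy(feats_tr, y_tr, feats_te, y_te, num_classes=2)
print(acc)  # well-separated features give a high probe accuracy
```

The probe accuracy on P_bal plays the role of A_ID(n, r) in the notation above.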
For each imbalance ratio, we further downsample the dataset with a sampling ratio in {0.75, 0.5, 0.25, 0.125} to form datasets with varying sizes. Note that we fix the variant of the dataset when comparing different algorithms. For ID evaluation, we use the original CIFAR-10 or ImageNet training set for the training phase of the linear probe and the original validation set for the final evaluation. For OOD evaluation of representations learned on CIFAR-10, we use STL-10 (Coates et al., 2011) as the target/downstream dataset. For OOD evaluation of representations learned on ImageNet, we fine-tune the pre-trained feature extractors on CUB-200 (Wah et al., 2011), Stanford Cars (Krause et al., 2013), Oxford Pets (Parkhi et al., 2012), and Aircrafts (Maji et al., 2013), and measure the representation quality with the average accuracy on the downstream tasks. Models. We use ResNet-18 on CIFAR-10 and ResNet-50 on ImageNet as backbones. For supervised pre-training, we follow the standard protocol of He et al. (2016) and Kang et al. (2020). For self-supervised pre-training, we consider MoCo v2 (He et al., 2020) and SimSiam (Chen & He, 2021). We run each evaluation experiment with 3 seeds and report the average and standard deviation in the figures. Further implementation details are deferred to Section A.1.
¹It is well-known that the composition of the head and features learned from supervised learning is more sensitive to an imbalanced dataset than the quality of the feature extractor φ (Cao et al., 2019; Kang et al., 2020). Please also see Table 4 in Appendix C for a comparison.
²We essentially use the largest balanced labeled ID dataset for this evaluation, which oftentimes means the entire curated training dataset, such as CIFAR-10 with 50,000 examples and ImageNet with 1,281,167 examples.
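A sketch of how an exponentially imbalanced variant with a target ratio r can be built. The construction follows the common exponential-profile recipe (cf. Liu et al., 2019); the exact sizes here are illustrative, not the paper's.

```python
def exponential_class_sizes(num_classes, max_size, ratio):
    """Class j gets max_size * ratio**(j / (num_classes - 1)) examples,
    so class 0 keeps max_size examples and the last class max_size * ratio."""
    return [int(round(max_size * ratio ** (j / (num_classes - 1))))
            for j in range(num_classes)]

sizes = exponential_class_sizes(num_classes=10, max_size=5000, ratio=0.1)
print(sizes[0], sizes[-1])  # -> 5000 500
print(min(sizes) / max(sizes))  # empirical imbalance ratio, 0.1
```

The further downsampling step above simply multiplies every class size by the sampling ratio.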
The paper studies the differences between representations pretrained with Self-Supervised Learning (SSL) and the standard Supervised Learning (SL) framework in the In-Domain (ID) and Out-Of-Domain (OOD) settings of the image classification task with an imbalanced dataset. The paper presents several results that are relevant to the computer vision and machine learning community: 1) In the ID setting, representations pretrained with SL outperform those pretrained with SSL. However, in the OOD setting, representations pretrained with SSL outperform those pretrained with SL and are even robust to the imbalance factor. 2) Analyses on a toy dataset (Section 3.1) and on semi-synthetic data obtained by cropping and fusing half-images during pretraining (Section 3.2) validate that SSL methods learn representations that do not overfit to labels and show improved accuracy at test time. 3) The paper also proposes a regularization framework (called rwSAM) that promotes a flatter loss landscape for rare examples to improve performance at test time (see Tables 1 and 2).
SP:53d2472985aefa5cfb31cbedd27a453a124c6938
Self-supervised Learning is More Robust to Dataset Imbalance
1 INTRODUCTION. Self-supervised learning (SSL) is an important paradigm for machine learning, because it can leverage the availability of large-scale unlabeled datasets to learn representations for a wide range of downstream tasks and datasets (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Caron et al., 2020; Chen & He, 2021). Current SSL algorithms are mostly trained on curated, balanced datasets, but large-scale unlabeled datasets in the wild are inevitably imbalanced, with a long-tailed label distribution (Reed, 2001). Curating a class-balanced unlabeled dataset requires knowledge of the labels, which defeats the purpose of leveraging unlabeled data by SSL. The robustness of SSL algorithms to dataset imbalance remains largely underexplored in the literature, and the extensive literature on supervised learning (SL) with imbalanced datasets does not bode well. The performance of vanilla supervised methods degrades significantly on imbalanced datasets (Cui et al., 2019; Cao et al., 2019; Buda et al., 2018), posing challenges to practical applications such as instance segmentation (Tang et al., 2020) and depth estimation (Yang et al., 2021). Many recent works address this issue with various regularization and re-weighting/re-sampling techniques (Ando & Huang, 2017; Wang et al., 2017; Jamal et al., 2020; Cui et al., 2019; Cao et al., 2019; 2021; Wang et al., 2020; Tian et al., 2020; Hong et al., 2021). In this work, we systematically investigate the representation quality of SSL algorithms under class imbalance. Perhaps surprisingly, we find that off-the-shelf SSL representations are already more robust to dataset imbalance than the representations learned by supervised pre-training. We evaluate the representation quality with a linear probe on in-domain (ID) data and fine-tuning on out-of-domain (OOD) data.
We compare the robustness of SL and SSL representations by computing the gap between the performance of representations trained on balanced and imbalanced datasets of the same size. We observe that the balance-imbalance gap for SSL is much smaller than that for SL, under a variety of configurations with varying dataset sizes and imbalance ratios and with both ID and OOD evaluations (see Figure 1 and Section 2 for more details). This robustness holds even with the same number of samples for SL and SSL, although SSL does not require labels and hence can be more easily applied to larger datasets than SL. Why is SSL robust to dataset imbalance? We hypothesize the following underlying cause to answer this fundamental question: SSL learns richer features from the frequent classes than SL does. These features may help classify the rare classes under ID evaluation and are transferable to the downstream tasks under OOD evaluation. For simplicity, consider the situation where the rare classes have so little data that either SL or SSL models overfit to the rare data. In this case, it is important for the models to learn diverse features from the frequent classes that can help classify the rare classes. Supervised learning is only incentivized to learn those features relevant to predicting the frequent classes and may ignore other features. In contrast, SSL may learn the structure within the frequent classes better: because it is not supervised or incentivized by any labels, it can learn not only the label-relevant features but also other interesting features capturing the intrinsic properties of the input distribution, which may generalize/transfer better to rare classes and downstream tasks. We empirically validate this intuition by visualizing the features on a semi-synthetic dataset where the label-relevant features and the label-irrelevant-but-transferable features are prominently seen by design (cf. Section 3.2).
This paper studies and compares the performance of self-supervised representation learning methods against supervised representation learning when there is class imbalance. It shows empirically that self-supervised methods are more robust to class imbalance, i.e., the performance gap between models trained on balanced and imbalanced datasets is smaller. The paper also investigates the quality of the features learnt by SSL vs. SL, and shows, both theoretically in a limited setting and empirically on synthetic datasets, that features learnt by SSL are diverse and capture characteristics that might be useful for rare classes. Finally, the authors provide a regularisation method for SSL to encourage the model to learn better features for rare classes without having access to labels.
SP:53d2472985aefa5cfb31cbedd27a453a124c6938
PARS: PSEUDO-LABEL AWARE ROBUST SAMPLE SELECTION FOR LEARNING WITH NOISY LABELS
1 INTRODUCTION. Deep neural networks rely on large-scale training data with human-annotated labels for achieving good performance (Deng et al., 2009; Everingham et al., 2010). Collecting millions or billions of labeled training data instances is very expensive, requires significant human time and effort, and can also compromise user privacy (Zheng et al., 2020; Bonawitz et al., 2017). Hence, there has been a paradigm shift in the interests of the research community from large-scale supervised learning (Krizhevsky et al., 2017; He et al., 2016a; Huang et al., 2017) to Learning with Noisy Labels (LNL) (Natarajan et al., 2013; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Tanno et al., 2019) and/or unlabeled data (Berthelot et al., 2019b; a; Sohn et al., 2020). This is largely due to the abundance of raw unlabeled data with weak user tags (Plummer et al., 2015; Xiao et al., 2015) or caption descriptions (Lin et al., 2014). However, it is not trivial to build models that are robust to these noisy labels, as deep convolutional neural networks (CNNs) trained with the cross-entropy loss can quickly overfit to the noise in the dataset, harming generalization (Zhang et al., 2016). Most of the existing approaches to LNL can be divided into three main categories. First, several noise-robust loss functions (Ghosh et al., 2017; Wang et al., 2019a; Zhang & Sabuncu, 2018) were proposed that are inherently tolerant to label noise. Second, sample selection methods (also referred to as loss correction in some literature) (Han et al., 2018; Yu et al., 2019; Arazo et al., 2019) are a popular technique that analyzes the per-sample loss distribution and separates the clean and noisy samples. The identified noisy samples are then re-weighted so that they contribute less to the loss computation.
A challenge in this direction is to design a reliable criterion for separation and hence prevent overfitting to highly confident noisy samples, a behavior known as self-confirmation bias. Third, label correction methods attempt to correct the noisy labels using class prototypes (Han et al., 2019) or pseudo-labeling techniques (Tanaka et al., 2018; Yi & Wu, 2019). However, in order to correct noisy labels, these methods typically need an extra (usually small) set of correctly labeled validation data. In particular, they can fail when the noise ratio is high and estimating correct labels or high-quality pseudo-labels is non-trivial. More recently, the success of several state-of-the-art LNL methods is attributed to leveraging Semi-Supervised Learning (SSL) based approaches (Li et al., 2020; Kim et al., 2019). Typically, a sample selection technique is applied to separate clean and noisy labels in the training data; the noisy labels are then deemed unreliable and hence treated as unlabeled in an SSL setting. Following the recent SSL literature (Lee et al., 2013; Arazo et al., 2020), estimated pseudo-labels are usually used to replace the filtered noisy labels during training. These approaches have been shown to be highly tolerant to label noise. However, the noisy labels are always discarded in favor of pseudo-labels in all the existing literature, even though they may still contain useful information for training. Pseudo-labeling is in turn only applied to the filtered noisy subset while the rest of the raw labels are typically used as is, which makes it sensitive to the quality of the filtering algorithm. Motivated by the simple principle of making the most of the signal contained in the noisy training data, we design PARS, short for Pseudo-Label Aware Robust Sample Selection. Our contributions are as follows: 1. PARS proposes a novel, principled training framework for LNL.
It trains on both the original labels and pseudo-labels. Unlike previous works, instead of filtering and then discarding the low-confidence noisy labels, PARS uses the entire set of original labels, and applies self-training with pseudo-labeling and data augmentation for the entire dataset (rather than the filtered noisy data only). 2. PARS is able to learn useful information from all the available data samples through label-dependent noise-aware loss functions. Specifically, in order to prevent overfitting to inaccurate original labels (or inaccurate pseudo-labels), PARS performs a simple confidence-based filtering technique by setting a high threshold on their predicted confidence, and applies robust/negative learning (or positive/negative learning) accordingly. 3. We perform extensive experiments on multiple benchmark datasets, i.e., noisy CIFAR-10, noisy CIFAR-100, and Clothing1M. Results demonstrate that PARS outperforms previous state-of-the-art methods by a significant margin, in particular when high levels of noise are present in the training data. We also conduct thorough ablation studies to validate the importance of our contributions. 4. We design a novel low-resource semi-supervised LNL setting where only a small subset of data is weakly labeled (Section 4.3). We show significant gains over state-of-the-art approaches using PARS. This setting is particularly interesting when it is hard to obtain large-scale noisy labeled data. In particular, we find that surprisingly none of the existing LNL methods outperform a baseline SSL model (FixMatch) (Sohn et al., 2020) that is not even designed to handle label noise, and yet PARS can achieve up to an absolute 27% improvement in test accuracy in a controlled high-noise low-resource setting. 2 RELATED WORK.
In the recent literature on LNL, methods typically fall into three design categories to learn a noise-robust model: noise-robust loss functions, sample selection approaches, or label correction methods. Noise-robust loss function based methods propose objective functions that are tolerant to label noise. A commonly used loss function that is identified to be robust to noisy labels is the Mean Absolute Error (MAE) (Ghosh et al., 2017). Wang et al. (2019a) proposed Improved MAE, a re-weighted version of MAE. Zhang & Sabuncu (2018) proposed the Generalized Cross Entropy loss (GCE), which is a generalization of MAE and the categorical cross-entropy loss. More recently, Wang et al. (2019b) designed a Symmetric Cross Entropy (SCE) loss which is similar in spirit to the symmetric KL-divergence and combines the cross-entropy loss with the reverse cross-entropy. Although SCE is robust to noisy labels, Ma et al. (2020) proposed a normalized family of loss functions which are shown to be more robust than SCE for extreme levels of label noise. Kim et al. (2019) and Kim et al. (2021) designed a framework that alternates between positive learning on accurate/clean labels and negative learning on complementary/wrong labels. Loss correction approaches explicitly modify the loss function during training to take into account the noise distribution by modeling a noise transition matrix (Patrini et al., 2017; Tanno et al., 2019; Xia et al., 2019; Goldberger & Ben-Reuven, 2016) or based on label-dependent weights (Natarajan et al., 2013). Another family of methods focuses primarily on sample selection, where the model selects small-loss samples as "clean" samples under the assumption that the model first fits the clean samples before memorizing the noisy samples (also known as the early-learning assumption) (Arpit et al., 2017; Zhang et al., 2016; Liu et al., 2020). Han et al. (2018) and Yu et al.
(2019) proposed Co-teaching, where sample selection is conducted using two networks to separate clean and noisy samples, and the clean samples are then used for further training. MentorNet (Jiang et al., 2018) is a student-teacher framework where a pre-trained teacher network guides the learning of the student network with clean samples (whose labels are deemed "correct"). In the decoupling training strategy proposed by Malach & Shalev-Shwartz (2017), two networks are trained simultaneously and guide each other on when and how to update. One limitation of these approaches is that they ignore all the noisy/unclean samples during training and only leverage the expected-to-be clean samples for improving performance. Li et al. (2020) proposed DivideMix, where sample selection is conducted based on the per-sample loss distribution, and the noisy samples are then treated as unlabeled data in an SSL setting (Berthelot et al., 2019b). Nishi et al. (2021) further investigated the potential of using two augmentation strategies in LNL, one used for loss analysis and another for learning, thus improving the generalization of DivideMix. Compared to the above, label correction aims to improve the quality of the noisy labels by explicitly correcting the wrong labels. Tanaka et al. (2018) and Yi & Wu (2019) predicted the correct labels either as estimates of label probabilities (soft labels) or as one-hot class labels (hard labels). Arazo et al. (2019) combined label correction with iterative sample selection by first separating clean labels from noisy labels with a two-component Beta Mixture model, and then estimating corrected labels for those noisy samples. Tanaka et al. (2018) combined label correction with additional regularization terms shown to be helpful for LNL. Song et al. (2019) also used label replacement to refurbish a subset of labels, thereby gradually increasing the number of available training samples. Liu et al.
(2020) computed soft labels as model predictions and then exploited them to avoid memorization. 3 METHODOLOGY. 3.1 PRELIMINARIES. Consider a K-class classification problem with noisy labels. Let D = {(x_i, y_i)}_{i=1}^n, with x ∈ X ⊂ R^d as the input features and y ∈ Y = {1, ..., K} as the corresponding label; we aim to learn a classifier p(·; θ) : X → Y parametrized by θ. For a clean labeled set, i.e., where y_i is the true label for the i-th training sample x_i, we can learn the model parameters θ by minimizing the Cross Entropy (CE) loss:

min_θ (1/|D|) Σ_{(x,y)∈D} L_CE(p(x; θ), y).  (1)

However, in the presence of noisy labels, where y_i is possibly incorrect for the i-th training sample x_i, the cross-entropy loss can quickly overfit to the noise in the dataset (Zhang et al., 2016). Robust Learning. Variants of the CE loss have been proposed to improve classification performance under label noise. Examples of such noise-robust loss functions include the Mean Absolute Error L_MAE (Ghosh et al., 2017), Symmetric Cross Entropy L_SCE (Wang et al., 2019b), and Normalized Cross Entropy L_NCE or Active Passive Loss L_APL (Ma et al., 2020). We may simply replace L_CE with any of these noise-robust losses, denoted L_RL, in the optimization of Equation (1) to perform Robust Learning:

L_RL ∈ {L_MAE, L_SCE, L_NCE, L_APL}.  (2)

For the sake of the limited space, definitions and further discussion of the robust losses are deferred to Appendix A.1. Positive and Negative Learning. A particular variant of the CE loss that is designed to handle noisy/wrong labels is called Negative Learning (Kim et al., 2019).
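To make the contrast concrete, here is a minimal numpy sketch of the CE loss and the noise-robust MAE loss from Equation (2); the GCE/SCE variants are analogous. This is my own illustration, not the authors' code.

```python
import numpy as np

def cross_entropy(p, y, k):
    """L_CE = -log p_y : unbounded as p_y -> 0, so noisy labels can dominate."""
    onehot = np.eye(k)[y]
    return -(onehot * np.log(p)).sum()

def mae_loss(p, y, k):
    """L_MAE = ||onehot(y) - p||_1 : bounded in [0, 2], hence noise-robust."""
    onehot = np.eye(k)[y]
    return np.abs(onehot - p).sum()

p = np.array([0.01, 0.95, 0.04])  # model is confident in class 1
# Under a wrong label (class 0), CE explodes while MAE stays bounded.
print(cross_entropy(p, 0, 3))  # -> about 4.6
print(mae_loss(p, 0, 3))       # -> 1.98, never exceeds 2
```

The boundedness of L_MAE is exactly what limits the gradient contribution of mislabeled samples.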
Given a sample with a correct label (x, y), Positive Learning (PL) is equivalent to optimizing the standard CE loss in Equation (1), so that the output probability corresponding to the true label is pushed towards 1:

L_PL(p(x; θ), y) := L_CE(p(x; θ), y) = − Σ_{k=1}^K [y]_k log [p(x; θ)]_k,  (3)

where the class label y is one-hot encoded, and [·]_k denotes the k-th dimension of a vector. For a data example with a wrong label (x, ȳ), Negative Learning (NL) instead applies the CE loss to the "complement" of the label, so that the predicted probability corresponding to the given label is pushed away from 1:

L_NL(p(x; θ), ȳ) := − Σ_{k=1}^K [ȳ]_k log [1 − p(x; θ)]_k.  (4)

Sample Selection. In order to apply the Positive and Negative Learning framework to LNL, Kim et al. (2019) proposed to select samples based on a simple threshold on the per-sample loss value, i.e., a data point (x, y) is selected as clean for PL if [p(x; θ)]_y > γ, where γ is a noise-aware threshold. This simple yet effective procedure is based on the commonly used "small-loss" assumption that smaller-loss samples (typically in early learning stages) are usually easier to learn and hence expected to be "clean", and it has been successfully applied in several other studies (Jiang et al., 2018; 2020). 3.2 PARS: PSEUDO-LABEL AWARE ROBUST SAMPLE SELECTION. We propose PARS: Pseudo-Label Aware Robust Sample Selection. The method is illustrated in Figure 1 (and Algorithm 1 in Appendix A.2), and each component is discussed in detail below. In particular, there are three major differences between PARS and previous works: 1) PARS does not treat the high-confidence samples as "clean", but rather applies robust loss functions to address the relatively low yet non-negligible level of noise.
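For a single example with a hard label, Equations (3) and (4) reduce to a one-line difference; the following is a minimal numpy sketch under my own naming.

```python
import numpy as np

def positive_learning_loss(p, label):
    """Eq. (3): standard CE, pushes p[label] towards 1."""
    return -np.log(p[label])

def negative_learning_loss(p, wrong_label):
    """Eq. (4): CE on the complement, pushes p[wrong_label] away from 1."""
    return -np.log(1.0 - p[wrong_label])

p = np.array([0.7, 0.2, 0.1])
print(positive_learning_loss(p, 0))   # small: prediction agrees with the label
print(negative_learning_loss(p, 1))   # small: little mass on the negated label
print(negative_learning_loss(p, 0))   # large: much mass on a label we negate
```

NL only asserts "the label is not k", which is a much weaker (hence safer) statement than PL when the given label may be wrong.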
2) PARS does not discard the noisy (or low-confidence) labels, but rather applies a negative learning framework to learn the "complement" of these labels (Kim et al., 2019). This is a major difference from the selective negative learning procedure of Kim et al. (2019): they apply negative learning only during the warm-up stage and still discard the filtered noisy labels in favour of pseudo-labels in a SSL manner, whereas we unify the two stages and show improved performance. 3) We improve the convergence of PARS by self-training with pseudo-labels and data augmentation on the entire dataset, as a proxy to enhance clean labels and correct noisy labels.

Robust Warm-up Training. In the presence of label noise, a model trained with the CE loss overfits to the noise and produces overconfident, wrong predictions after a few epochs; the problem is more severe under high levels of label noise. To overcome this, we perform "robust warm-up" with a robust loss for a few epochs so that the model produces reliable and confident predictions. Specifically, in the warm-up training phase, we minimize:

min_θ (1/|D|) ∑_{(x, y) ∈ D} L_RL(p(x; θ), y), (5)

where L_RL can be any loss from Equation (2). In our experiments, we test different loss functions for warm-up training (for more details see Appendix A.3).

Robust Sample Selection. As shown in Figure 2a, after the robust warm-up training we observe a weak trend that high-confidence samples tend to have clean labels. Given the model parameters θ̂ at the current iteration and a confidence threshold τ ∈ [0, 1], we divide the entire dataset D (or similarly, a minibatch B ⊂ D) into an ambiguous¹ set D_A and a noisy set D_N by thresholding the maximum of the predicted confidence probabilities over all classes:

D_A = {(x, y) ∈ D | max_k [p(x; θ̂)]_k > τ}, D_N = D \ D_A. (6)

A notable difference is that our confidence-based thresholding is label-free (i.e., thresholding the maximum confidence over all classes), while the widely used loss-based thresholding of Kim et al. (2019) depends on the given ground-truth (noisy) label (i.e., thresholding the confidence of the given class). Our approach has two advantages: 1) our selection criterion depends on the predicted confidence probabilities alone, so it is straightforward to apply when ground-truth labels are unavailable (e.g., in the semi-supervised setting of Section 4.3); 2) our strategy helps to reduce the self-confirmation bias of propagating overfitting mistakes to possibly noisy samples, because the model only selects samples on which it is confident, regardless of the correctness of the raw label. Notably, previous works such as Jiang et al. (2018); Li et al. (2020) usually maintain two divergent networks (one for sample selection and one for the main classification task) in order to avoid the self-confirmation bias. Our method proves highly resilient to overfitting with a single network updated simultaneously for both sample selection and classification, which drastically simplifies implementation and training complexity.

Label-Dependent Noise-Aware Loss Functions. Despite the robust sample selection dividing samples into the ambiguous and noisy sets, we propose to learn useful information from all the available training data instead of discarding the raw labels of the noisy samples, as typically done in previous literature (Jiang et al., 2018; Kim et al., 2019; Li et al., 2020). Specifically, we use two separate loss functions, tailored to the ambiguous and noisy labels respectively, which prevents overfitting to noise and enables effective further filtering of labels (Figure 2b).
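To make the label-free selection of Equation (6) concrete, here is a minimal NumPy sketch; the function name and the toy probabilities are ours, not from the paper:

```python
import numpy as np

def split_by_confidence(probs, tau=0.95):
    """Label-free selection (Eq. 6): a sample joins the ambiguous set D_A when
    its maximum predicted probability over all classes exceeds tau; everything
    else falls into the noisy set D_N. No (noisy) label is consulted."""
    conf = probs.max(axis=1)              # max_k [p(x; theta)]_k per sample
    mask = conf > tau
    return np.where(mask)[0], np.where(~mask)[0]

probs = np.array([[0.98, 0.01, 0.01],    # confident prediction -> D_A
                  [0.50, 0.30, 0.20]])   # uncertain prediction -> D_N
idx_a, idx_n = split_by_confidence(probs)
```

Because the criterion ignores the given label entirely, the same function applies unchanged to unlabeled data, which is what makes the semi-supervised extension in Section 4.3 straightforward.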
For the ambiguous set D_A, we train with the robust learning loss L_RL (the same loss used during warm-up training), since labels in this set may still contain non-negligible noise. For the noisy set D_N, we train with the negative learning loss L_NL to learn the complement of the given raw labels for the identified least-confident, highly-noisy samples. The overall label-dependent noise-aware loss function for the proposed robust/negative learning framework on the given raw labels is thus:

L_raw(θ) := (1/|D_A|) ∑_{(x, y) ∈ D_A} L_RL(p(x; θ), y) + λ_N (1/|D_N|) ∑_{(x, ȳ) ∈ D_N} L_NL(p(x; θ), ȳ), (7)

where λ_N is the weight for the negative learning loss term, and D_A, D_N are defined as in Equation (6). We set a high threshold (e.g., 0.95 throughout our experiments) on the predicted confidence in the robust sample selection (Equation (6)), since we observe that this yields better performance, as shown in Appendix A.4. We will show that, by using the noisy labels in the negative learning setting, we consistently improve model performance (Table 4).

¹ We choose the term "ambiguous" rather than the typically used term "clean" (Jiang et al., 2018; 2020) because a significant amount of noise remains among the expected-to-be clean samples when a simple thresholding method is used to select them (Arazo et al., 2019).

Self-Training with Pseudo-Labels. After the robust warm-up training, the initial convergence of the model leads to reliable predictions, which can be used as labels to further improve convergence via self-training. Therefore, in parallel to using the given raw labels during training, we use pseudo-labels generated by the network itself to guide the learning process.
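A hedged sketch of the raw-label objective in Equation (7): here MAE stands in for the generic L_RL (the paper allows several robust losses), labels are integer class ids rather than one-hot vectors, and all names are illustrative:

```python
import numpy as np

def raw_label_loss(probs, labels, ambiguous, lam_n=1.0):
    """Sketch of Eq. (7): robust loss on the ambiguous set D_A plus a weighted
    negative-learning loss on the noisy set D_N. MAE against a one-hot label
    reduces to 2 * (1 - p_y), and stands in here for L_RL."""
    p_y = probs[np.arange(len(labels)), labels]   # [p(x; theta)]_y per sample
    rl = 2.0 * (1.0 - p_y)                        # robust (MAE) term on D_A
    nl = -np.log(1.0 - p_y)                       # Eq. (4): raw label as the complement target on D_N
    loss = rl[ambiguous].mean() if ambiguous.any() else 0.0
    if (~ambiguous).any():
        loss += lam_n * nl[~ambiguous].mean()
    return loss

probs = np.array([[0.9, 0.1], [0.3, 0.7]])
labels = np.array([0, 0])                          # the second label is likely wrong
loss = raw_label_loss(probs, labels, ambiguous=np.array([True, False]))
```

Note how the two terms pull in opposite directions: the robust term pushes the trusted label's probability up, while the negative term pushes the distrusted label's probability down.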
Given the model parameters θ̂ at the current iteration, for each sample (x, ·) ∈ D we generate the pseudo-label z corresponding to the model's most confident class, and then randomly sample a complementary "wrong" pseudo-label z̄ from the remaining classes, following Kim et al. (2019):

z := arg max_k [p(x; θ̂)]_k, z̄ ∈ {1, ..., K} \ {z}. (8)

Analogously to the label-dependent noise-aware loss functions for the given raw labels, we train with the pseudo-labels of the two sets using different losses as well. Specifically, we apply positive/negative learning to the generated pseudo-labels in the ambiguous/noisy set, respectively:

L_pseudo(θ) := (1/|D_A|) ∑_{(x, ·) ∈ D_A} L_PL(p(x; θ), z) + λ_N (1/|D_N|) ∑_{(x, ·) ∈ D_N} L_NL(p(x; θ), z̄), (9)

where λ_N is the weight for the negative learning loss term (equal to the weight in Equation (7)), D_A, D_N are defined as in Equation (6), and z, z̄ are defined as in Equation (8). We find that the standard CE loss suffices for self-training with pseudo-labels from the ambiguous set, without the need for noise-robust learning. One caveat of self-training is that the model's own predictions may become unreliable when most samples are primarily guided by self-training, encouraging the network to predict the same class for everything in order to minimize the loss. We observe that this is particularly the case under high levels of label noise. To reliably apply self-training, we regularize Equation (9) with a confidence penalty (Tanaka et al., 2018; Arazo et al., 2019):

L_reg(θ) := ∑_{k=1}^{K} [p]_k log([p]_k / [h(D; θ)]_k), (10)

where [p]_k is the prior probability for class k, and [h(D; θ)]_k is shorthand for the model's mean predicted probability for class k across all samples in the dataset D.
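The pseudo-label generation of Equation (8) and the confidence penalty of Equation (10) can be sketched as follows; the function names, the fixed seed, and the toy batch are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_labels(probs):
    """Eq. (8): the positive pseudo-label z is the model's most confident class;
    the complementary pseudo-label z_bar is sampled from the remaining classes."""
    z = probs.argmax(axis=1)
    n_classes = probs.shape[1]
    z_bar = np.array([rng.choice([c for c in range(n_classes) if c != zi])
                      for zi in z])
    return z, z_bar

def confidence_penalty(probs, prior=None):
    """Eq. (10): KL divergence between a prior class distribution (uniform by
    default) and the mean prediction over the batch, discouraging the model
    from collapsing onto a single class."""
    k = probs.shape[1]
    p = np.full(k, 1.0 / k) if prior is None else prior
    h = probs.mean(axis=0)            # [h(B; theta)]_k, the mean prediction
    return float(np.sum(p * np.log(p / h)))

probs = np.array([[0.9, 0.1], [0.1, 0.9]])
z, z_bar = pseudo_labels(probs)       # z = [0, 1]; with K = 2, z_bar = [1, 0]
penalty = confidence_penalty(probs)   # symmetric batch -> mean prediction is uniform, penalty = 0
```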
In our experiments, we assume a uniform prior distribution (i.e., [p]_k = 1/K), and approximate h(D; θ) by h(B; θ) using mini-batches B ⊂ D, as suggested by Tanaka et al. (2018). With the help of this regularization, the generated pseudo-labels become reliably confident, which guides the robust sample selection to resemble the true distribution more closely (Figure 2c).

PARS: Pseudo-Label Aware Robust Sample Selection. After the robust warm-up training stage (Equation (5)), our final model, PARS, is trained using both the original labels and the self-generated pseudo-labels with robust sample selection. The final loss is given by:

min_θ L_total(θ) := L_raw(θ) + λ_S L_pseudo(θ) + λ_R L_reg(θ), (11)

where L_raw, L_pseudo, L_reg are defined as in Equations (7), (9) and (10) respectively, and λ_S, λ_R are the weights for the self-training loss and the confidence penalty term.

Data Augmentation. For image classification tasks in LNL, data augmentation techniques have recently been shown to significantly improve generalization (Nishi et al., 2021). We adapt such strategies to further improve PARS when benchmarking image classification performance in our experiments. Specifically, we apply weak augmentation (e.g., standard random flip, crop, normalization) to all data samples in the robust/negative learning with the original raw labels when computing Equation (7), and strong augmentation (e.g., AutoAugment (Cubuk et al., 2019), RandAugment (Cubuk et al., 2020)) to all data samples in the positive/negative learning with pseudo-labels when computing Equations (9) and (10). We do not use strong augmentation when training with the original labels, as it leads to poor performance in the presence of high label noise, as also observed by Nishi et al. (2021). In this way, PARS can be seen as one example of how to extend the work of Nishi et al. (2021) by effectively combining weak and strong data augmentation with noisy labels and pseudo-labeling, further advancing the state of the art in LNL.
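Putting the pieces together, the final objective of Equation (11) is a weighted sum of the three loss terms. A trivial sketch, with placeholder weights (the paper's tuned values are in its appendix, not assumed here):

```python
def pars_total_loss(l_raw, l_pseudo, l_reg, lam_s=1.0, lam_r=1.0):
    """Eq. (11): the final PARS objective, combining the raw-label loss (Eq. 7),
    the pseudo-label self-training loss (Eq. 9), and the confidence penalty
    (Eq. 10). lam_s and lam_r here are placeholder defaults."""
    return l_raw + lam_s * l_pseudo + lam_r * l_reg

total = pars_total_loss(0.5, 0.25, 0.25)  # -> 1.0
```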
The paper combines three branches of approaches (1. sample selection, 2. noise-robust losses, 3. label correction) to address label noise in classification within a single framework. Specifically, the method includes (1) a warm-up phase, (2) a novel label-free sample selection, (3) a noise-aware loss (as a standard technique), and (4) self-training with pseudo-labels alongside the given labels. The proposed method outperforms prior art on CIFAR-10/100, especially in high-noise regimes (80-90%). More interestingly, in the small-sample training regime, the gains of the proposed method increase significantly in evaluations on CIFAR-10/100.
PARS: PSEUDO-LABEL AWARE ROBUST SAMPLE SELECTION FOR LEARNING WITH NOISY LABELS
1 INTRODUCTION

Deep neural networks rely on large-scale training data with human-annotated labels to achieve good performance (Deng et al., 2009; Everingham et al., 2010). Collecting millions or billions of labeled training instances is very expensive, requires significant human time and effort, and can also compromise user privacy (Zheng et al., 2020; Bonawitz et al., 2017). Hence, there has been a paradigm shift in the research community's interests from large-scale supervised learning (Krizhevsky et al., 2017; He et al., 2016a; Huang et al., 2017) to Learning with Noisy Labels (LNL) (Natarajan et al., 2013; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Tanno et al., 2019) and/or unlabeled data (Berthelot et al., 2019b;a; Sohn et al., 2020). This is largely due to the abundance of raw unlabeled data with weak user tags (Plummer et al., 2015; Xiao et al., 2015) or caption descriptions (Lin et al., 2014). However, it is not trivial to build models that are robust to these noisy labels, as deep convolutional neural networks (CNNs) trained with the cross-entropy loss can quickly overfit to the noise in the dataset, harming generalization (Zhang et al., 2016). Most existing approaches to LNL fall into three main categories. First, several noise-robust loss functions (Ghosh et al., 2017; Wang et al., 2019a; Zhang & Sabuncu, 2018) were proposed that are inherently tolerant to label noise. Second, sample selection methods (also referred to as loss correction in some literature) (Han et al., 2018; Yu et al., 2019; Arazo et al., 2019) analyze the per-sample loss distribution and separate the clean and noisy samples. The identified noisy samples are then re-weighted so that they contribute less to the loss computation.
A challenge in this direction is to design a reliable criterion for separation and hence prevent overfitting to highly confident noisy samples, a behavior known as self-confirmation bias. Third, label correction methods attempt to correct the noisy labels using class prototypes (Han et al., 2019) or pseudo-labeling techniques (Tanaka et al., 2018; Yi & Wu, 2019). However, correcting noisy labels typically requires an extra (usually small) set of correctly labeled validation data. In particular, these methods can fail when the noise ratio is high and estimating correct labels or high-quality pseudo-labels is non-trivial. More recently, the success of several state-of-the-art LNL methods has been attributed to leveraging Semi-Supervised Learning (SSL) approaches (Li et al., 2020; Kim et al., 2019). Typically, a sample selection technique is applied to separate clean and noisy labels in the training data; the noisy labels are then deemed unreliable and hence treated as unlabeled in a SSL setting. Following the recent SSL literature (Lee et al., 2013; Arazo et al., 2020), estimated pseudo-labels usually replace the filtered noisy labels during training. These approaches have been shown to be highly tolerant to label noise. However, the noisy labels are always discarded in favor of pseudo-labels in all the existing literature, even though they may still contain useful information for training. Pseudo-labeling is in turn applied only to the filtered noisy subset while the rest of the raw labels are typically used as-is, which makes these methods sensitive to the quality of the filtering algorithm. Motivated by the simple principle of making the most of the signal contained in the noisy training data, we design PARS, short for Pseudo-Label Aware Robust Sample Selection. Our contributions are as follows: 1. PARS proposes a novel, principled training framework for LNL.
It trains on both the original labels and pseudo-labels. Unlike previous works, instead of filtering and then discarding the low-confidence noisy labels, PARS uses the entire set of original labels, and applies self-training with pseudo-labeling and data augmentation on the entire dataset (rather than the filtered noisy data only). 2. PARS is able to learn useful information from all the available data samples through label-dependent noise-aware loss functions. Specifically, to prevent overfitting to inaccurate original labels (or inaccurate pseudo-labels), PARS performs a simple confidence-based filtering by setting a high threshold on the predicted confidence, and applies robust/negative learning (or positive/negative learning) accordingly. 3. We perform extensive experiments on multiple benchmark datasets, i.e., noisy CIFAR-10, noisy CIFAR-100 and Clothing1M. Results demonstrate that PARS outperforms previous state-of-the-art methods by a significant margin, in particular when high levels of noise are present in the training data. We also conduct extensive ablation studies to validate the importance of our contributions. 4. We design a novel low-resource semi-supervised LNL setting where only a small subset of the data is weakly labeled (Section 4.3). We show significant gains over state-of-the-art approaches using PARS. This setting is particularly interesting when it is hard to obtain large-scale noisy labeled data. In particular, we find that, surprisingly, none of the existing LNL methods outperform a baseline SSL model (FixMatch) (Sohn et al., 2020) that is not even designed to handle label noise, and yet PARS achieves up to an absolute 27% improvement in test accuracy in a controlled high-noise low-resource setting.

2 RELATED WORK
In the recent literature on LNL, methods typically fall into three design categories for learning a noise-robust model: noise-robust loss functions, sample selection approaches, or label correction methods. Noise-robust loss function methods propose objective functions that are tolerant to label noise. A commonly used loss function identified as robust to noisy labels is the Mean Absolute Error (MAE) (Ghosh et al., 2017). Wang et al. (2019a) proposed Improved MAE, a re-weighted version of MAE. Zhang & Sabuncu (2018) proposed the Generalized Cross Entropy loss (GCE), which generalizes both MAE and the categorical cross-entropy loss. More recently, Wang et al. (2019b) designed a Symmetric Cross Entropy (SCE) loss, similar in spirit to the symmetric KL-divergence, which combines the cross-entropy loss with the reverse cross-entropy. Although SCE is robust to noisy labels, Ma et al. (2020) proposed a normalized family of loss functions shown to be more robust than SCE under extreme levels of label noise. Kim et al. (2019) and Kim et al. (2021) designed a framework that alternates between positive learning on accurate/clean labels and negative learning on complementary/wrong labels. Loss correction approaches explicitly modify the loss function during training to account for the noise distribution, either by modeling a noise transition matrix (Patrini et al., 2017; Tanno et al., 2019; Xia et al., 2019; Goldberger & Ben-Reuven, 2016) or via label-dependent weights (Natarajan et al., 2013). Another family of methods focuses primarily on sample selection, where the model selects small-loss samples as "clean" under the assumption that the model first fits the clean samples before memorizing the noisy ones (also known as the early-learning assumption) (Arpit et al., 2017; Zhang et al., 2016; Liu et al., 2020). Han et al. (2018) and Yu et al.
(2019) proposed Co-teaching, where sample selection is conducted using two networks to select clean and noisy samples, and the clean samples are then used for further training. MentorNet (Jiang et al., 2018) is a student-teacher framework in which a pre-trained teacher network guides the learning of the student network with clean samples (whose labels are deemed "correct"). In the decoupling training strategy proposed by Malach & Shalev-Shwartz (2017), two networks are trained simultaneously and guide each other on when and how to update. One limitation of these approaches is that they ignore all the noisy/unclean samples during training and only leverage the expected-to-be clean samples to improve performance. Li et al. (2020) proposed DivideMix, where sample selection is conducted based on the per-sample loss distribution, and the noisy samples are then treated as unlabeled data in a SSL setting (Berthelot et al., 2019b). Nishi et al. (2021) further investigated the potential of augmentation strategies in LNL, one used for loss analysis and another for learning, thus improving the generalization of DivideMix. Compared to the above, label correction aims to improve the quality of the noisy labels by explicitly correcting the wrong labels. Tanaka et al. (2018) and Yi & Wu (2019) predicted the correct labels either as estimates of label probabilities (soft labels) or as one-hot class labels (hard labels). Arazo et al. (2019) combined label correction with iterative sample selection, first separating clean from noisy labels by modeling a two-component Beta mixture, and then estimating corrected labels for the noisy samples. Tanaka et al. (2018) combined label correction with additional regularization terms that proved helpful for LNL. Song et al. (2019) also used label replacement to refurbish a subset of labels, thereby gradually increasing the number of available training samples. Liu et al.
(2020) computed soft labels as model predictions and then exploited them to avoid memorization.

3 METHODOLOGY

3.1 PRELIMINARIES

Consider a K-class classification problem with noisy labels. Let D = {(x_i, y_i)}_{i=1}^{n}, with x ∈ X ⊂ R^d the input features and y ∈ Y = {1, ..., K} the corresponding label; we aim to learn a classifier p(·; θ) : X → Y parametrized by θ. For a clean labeled set, i.e., where y_i is the true label of the i-th training sample x_i, we can learn the model parameters θ by minimizing the Cross Entropy (CE) loss:

min_θ (1/|D|) ∑_{(x, y) ∈ D} L_CE(p(x; θ), y). (1)

However, in the presence of noisy labels, where y_i is possibly incorrect for the i-th training sample x_i, the cross-entropy loss can quickly overfit to the noise in the dataset (Zhang et al., 2016).

Robust Learning. Variants of the CE loss have been proposed to improve classification performance under label noise. Examples of such noise-robust loss functions include the Mean Absolute Error L_MAE (Ghosh et al., 2017), Symmetric Cross Entropy L_SCE (Wang et al., 2019b), and Normalized Cross Entropy L_NCE or Active Passive Loss L_APL (Ma et al., 2020). We may simply replace L_CE with any of these noise-robust losses, denoted L_RL, in the optimization of Equation (1) to perform Robust Learning:

L_RL ∈ {L_MAE, L_SCE, L_NCE, L_APL}. (2)

Due to limited space, definitions and further discussion of the robust losses are deferred to Appendix A.1.

Positive and Negative Learning. A particular variant of the CE loss designed to handle noisy/wrong labels is Negative Learning (Kim et al., 2019).
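To make the loss menu concrete, here is a hedged NumPy sketch of two robust losses from Equation (2) and of the positive/negative learning pair; the function names and the q value are illustrative, and p_y denotes the predicted probability of the given label:

```python
import numpy as np

def mae_loss(p_y):
    """MAE against a one-hot label reduces to 2 * (1 - p_y) (Ghosh et al., 2017)."""
    return 2.0 * (1.0 - p_y)

def gce_loss(p_y, q=0.7):
    """Generalized Cross Entropy (Zhang & Sabuncu, 2018): (1 - p_y**q) / q
    interpolates between CE-like behaviour (q -> 0) and MAE-like behaviour (q = 1)."""
    return (1.0 - p_y ** q) / q

def pl_loss(p_y):
    """Positive Learning / standard CE on the given label."""
    return -np.log(p_y)

def nl_loss(p_ybar):
    """Negative Learning on a complementary (presumed wrong) label."""
    return -np.log(1.0 - p_ybar)

p_y = 0.5
ce, mae = pl_loss(p_y), mae_loss(p_y)   # CE ~ 0.693, MAE = 1.0
```

The key property exploited throughout the paper: CE's gradient blows up as p_y shrinks, which is what drives overfitting to mislabeled samples, whereas MAE and GCE remain bounded.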
This paper proposes a hybrid framework, PARS, for the Learning with Noisy Labels (LNL) task. The framework jointly leverages the original noisy labels and estimated pseudo-labels of all samples for model training. Specifically, for samples whose maximum classification probability exceeds a threshold, the original/pseudo labels are used in robust/positive learning, while for the remaining samples, the original/pseudo labels are used in negative learning. When using pseudo-labels, strong augmentations are also applied to the samples. Experiments conducted on three public datasets, under both the traditional setting and a new low-resource semi-supervised LNL setting, show improvements over existing methods.
SP:1bc9c8ea6302a6fc0286fe4dfade669907053946
PARS: PSEUDO-LABEL AWARE ROBUST SAMPLE SELECTION FOR LEARNING WITH NOISY LABELS
1 INTRODUCTION. Deep neural networks rely on large-scale training data with human-annotated labels to achieve good performance (Deng et al., 2009; Everingham et al., 2010). Collecting millions or billions of labeled training instances is very expensive, requires significant human time and effort, and can also compromise user privacy (Zheng et al., 2020; Bonawitz et al., 2017). Hence, the interests of the research community have shifted from large-scale supervised learning (Krizhevsky et al., 2017; He et al., 2016a; Huang et al., 2017) to Learning with Noisy Labels (LNL) (Natarajan et al., 2013; Goldberger & Ben-Reuven, 2016; Patrini et al., 2017; Tanno et al., 2019) and/or unlabeled data (Berthelot et al., 2019b;a; Sohn et al., 2020). This is largely due to the abundance of raw unlabeled data with weak user tags (Plummer et al., 2015; Xiao et al., 2015) or caption descriptions (Lin et al., 2014). However, it is not trivial to build models that are robust to these noisy labels, as deep convolutional neural networks (CNNs) trained with the cross-entropy loss can quickly overfit to the noise in the dataset, harming generalization (Zhang et al., 2016). Most existing approaches to LNL fall into three main categories. First, several noise-robust loss functions (Ghosh et al., 2017; Wang et al., 2019a; Zhang & Sabuncu, 2018) have been proposed that are inherently tolerant to label noise. Second, sample selection methods (also referred to as loss correction in some literature) (Han et al., 2018; Yu et al., 2019; Arazo et al., 2019) are a popular family of techniques that analyze the per-sample loss distribution and separate the clean from the noisy samples. The identified noisy samples are then re-weighted so that they contribute less to the loss computation.
A challenge in this direction is to design a reliable separation criterion and hence prevent overfitting to highly confident noisy samples, a behavior known as self-confirmation bias. Third, label correction methods attempt to correct the noisy labels using class prototypes (Han et al., 2019) or pseudo-labeling techniques (Tanaka et al., 2018; Yi & Wu, 2019). However, correcting noisy labels typically requires an extra (usually small) correctly labeled validation set. In particular, these methods can fail when the noise ratio is high and estimating correct labels or high-quality pseudo-labels is non-trivial. More recently, the success of several state-of-the-art LNL methods has been attributed to leveraging Semi-Supervised Learning (SSL) based approaches (Li et al., 2020; Kim et al., 2019). Typically, a sample selection technique is applied to separate clean and noisy labels in the training data; the noisy labels are then deemed unreliable and hence treated as unlabeled in an SSL setting. Following the recent SSL literature (Lee et al., 2013; Arazo et al., 2020), estimated pseudo-labels are usually used to replace the filtered noisy labels during training. These approaches have been shown to be highly tolerant to label noise. However, the noisy labels are always discarded in favor of pseudo-labels in the existing literature, even though they may still contain useful information for training. Pseudo-labeling is in turn only applied to the filtered noisy subset, while the rest of the raw labels are typically used as-is, which makes these methods sensitive to the quality of the filtering algorithm. Motivated by the simple principle of making the most of the signal contained in the noisy training data, we design PARS, short for Pseudo-Label Aware Robust Sample Selection. Our contributions are as follows: 1. PARS is a novel, principled training framework for LNL. It trains on both the original labels and pseudo-labels. Unlike previous works, instead of filtering and then discarding the low-confidence noisy labels, PARS uses the entire set of original labels, and applies self-training with pseudo-labeling and data augmentation to the entire dataset (rather than the filtered noisy data only). 2. PARS is able to learn useful information from all available data samples through label-dependent noise-aware loss functions. Specifically, in order to prevent overfitting to inaccurate original labels (or inaccurate pseudo-labels), PARS performs a simple confidence-based filtering technique by setting a high threshold on the predicted confidence, and applies robust/negative learning (or positive/negative learning) accordingly. 3. We perform extensive experiments on multiple benchmark datasets, i.e., noisy CIFAR-10, noisy CIFAR-100, and Clothing1M. Results demonstrate that PARS outperforms previous state-of-the-art methods by a significant margin, in particular when a high level of noise is present in the training data. We also conduct thorough ablation studies to validate the importance of our contributions. 4. We design a novel low-resource semi-supervised LNL setting where only a small subset of the data is weakly labeled (Section 4.3). We show significant gains over state-of-the-art approaches using PARS. This setting is particularly interesting when it is hard to obtain large-scale noisy labeled data. In particular, we find that, surprisingly, none of the existing LNL methods outperform a baseline SSL model (FixMatch) (Sohn et al., 2020) that is not even designed to handle label noise, and yet PARS achieves up to an absolute 27% improvement in test accuracy in a controlled high-noise low-resource setting. 2 RELATED WORK.
In the recent LNL literature, methods typically fall into three design categories for learning a noise-robust model: noise-robust loss functions, sample selection approaches, or label correction methods. Noise-robust loss function based methods propose objective functions that are tolerant to label noise. A commonly used loss function identified as robust to noisy labels is the Mean Absolute Error (MAE) (Ghosh et al., 2017). Wang et al. (2019a) proposed Improved MAE, a re-weighted version of MAE. Zhang & Sabuncu (2018) proposed the Generalized Cross Entropy loss (GCE), a generalization of MAE and categorical cross-entropy. More recently, Wang et al. (2019b) designed a Symmetric Cross Entropy (SCE) loss, similar in spirit to the symmetric KL-divergence, which combines the cross-entropy loss with the reverse cross-entropy. Although SCE is robust to noisy labels, Ma et al. (2020) proposed a normalized family of loss functions shown to be more robust than SCE under extreme levels of label noise. Kim et al. (2019) and Kim et al. (2021) designed a framework that alternates between positive learning on accurate/clean labels and negative learning on complementary/wrong labels. Loss correction approaches explicitly modify the loss function during training to take the noise distribution into account, either by modeling a noise transition matrix (Patrini et al., 2017; Tanno et al., 2019; Xia et al., 2019; Goldberger & Ben-Reuven, 2016) or via label-dependent weights (Natarajan et al., 2013). Another family of methods focuses primarily on sample selection, where the model selects small-loss samples as "clean" under the assumption that the model first fits the clean samples before memorizing the noisy ones (also known as the early-learning assumption) (Arpit et al., 2017; Zhang et al., 2016; Liu et al., 2020). Han et al. (2018) and Yu et al.
(2019) proposed Co-teaching, where sample selection is conducted using two networks to separate clean and noisy samples, and the clean samples are then used for further training. MentorNet (Jiang et al., 2018) is a student-teacher framework where a pre-trained teacher network guides the learning of the student network with clean samples (whose labels are deemed "correct"). In the decoupling training strategy proposed by Malach & Shalev-Shwartz (2017), two networks are trained simultaneously and guide each other on when and how to update. One limitation of these approaches is that they ignore all the noisy/unclean samples during training and only leverage the expected-to-be-clean samples for improving performance. Li et al. (2020) proposed DivideMix, where sample selection is conducted based on the per-sample loss distribution, and the noisy samples are then treated as unlabeled data in an SSL setting (Berthelot et al., 2019b). Nishi et al. (2021) further investigated the potential of augmentation strategies in LNL, using one augmentation for loss analysis and another for learning, thus improving the generalization of DivideMix. Compared to the above, label correction aims to improve the quality of the noisy labels by explicitly correcting the wrong ones. Tanaka et al. (2018) and Yi & Wu (2019) predicted the correct labels either as estimates of label probabilities (soft labels) or as one-hot class labels (hard labels). Arazo et al. (2019) combined label correction with iterative sample selection by first separating clean labels from noisy labels by modeling a two-component Beta mixture, and then estimating corrected labels for the noisy samples. Tanaka et al. (2018) combined label correction with additional regularization terms shown to be helpful for LNL. Song et al. (2019) also used label replacement to refurbish a subset of labels, thereby gradually increasing the number of available training samples. Liu et al.
(2020) computed soft labels as model predictions and then exploited them to avoid memorization. 3 METHODOLOGY. 3.1 PRELIMINARIES. For a K-class classification problem with noisy labels, let D = {(x_i, y_i)}_{i=1}^n with x ∈ X ⊂ R^d the input features and y ∈ Y = {1, ..., K} the corresponding label; we aim to learn a classifier p(·; θ): X → Y parametrized by θ. For a clean labeled set, i.e., where y_i is the true label for the i-th training sample x_i, we can learn the model parameters θ by minimizing the Cross Entropy (CE) loss: min_θ (1/|D|) ∑_{(x,y)∈D} L_CE(p(x; θ), y). (1) However, in the presence of noisy labels, where y_i is possibly incorrect for the i-th training sample x_i, the cross-entropy loss can quickly overfit to the noise in the dataset (Zhang et al., 2016). Robust Learning. Variants of the CE loss have been proposed to improve classification performance under label noise. Examples of such noise-robust loss functions include the Mean Absolute Error L_MAE (Ghosh et al., 2017), Symmetric Cross Entropy L_SCE (Wang et al., 2019b), Normalized Cross Entropy L_NCE, and Active Passive Loss L_APL (Ma et al., 2020). We may simply replace L_CE with any of these noise-robust losses, denoted L_RL, in the optimization of Equation (1) to perform Robust Learning: L_RL ∈ {L_MAE, L_SCE, L_NCE, L_APL}. (2) Due to limited space, definitions and further discussion of the robust losses are deferred to Appendix A.1. Positive and Negative Learning. A particular variant of the CE loss designed to handle noisy/wrong labels is Negative Learning (Kim et al., 2019).
Given a sample with a correct label (x, y), Positive Learning (PL) is equivalent to optimizing the standard CE loss in Equation (1), so that the output probability corresponding to the true label is pushed toward 1: L_PL(p(x; θ), y) := L_CE(p(x; θ), y) = −∑_{k=1}^K [y]_k log [p(x; θ)]_k, (3) where the class label y is one-hot encoded, and [·]_k denotes the k-th dimension of a vector. For a data example with a wrong label (x, ȳ), Negative Learning (NL) applies the CE loss to the "complement" of the label, so that the predicted probability corresponding to the given label is pushed away from 1: L_NL(p(x; θ), ȳ) := −∑_{k=1}^K [ȳ]_k log [1 − p(x; θ)]_k. (4) Sample Selection. To apply the Positive and Negative Learning framework in LNL, Kim et al. (2019) proposed selecting samples based on a simple threshold on the per-sample loss value, i.e., a data point (x, y) is selected as clean for PL if [p(x; θ)]_y > γ, where γ is a noise-aware threshold. This simple yet effective procedure is based on the commonly used "small-loss" assumption that smaller-loss samples (typically in early learning stages) are usually easier to learn and hence expected to be "clean", and it has been successfully applied in several other studies (Jiang et al., 2018; 2020). 3.2 PARS: PSEUDO-LABEL AWARE ROBUST SAMPLE SELECTION. We propose PARS: Pseudo-Label Aware Robust Sample Selection. The method is illustrated in Figure 1 (and Algorithm 1 in Appendix A.2), and each component is discussed in detail below. There are three major differences between PARS and previous works: 1) PARS does not treat the high-confidence samples as "clean", but rather applies robust loss functions to address their relatively low yet non-negligible level of noise.
2) PARS does not discard the noisy (or low-confidence) labels, but rather applies a negative learning framework to learn the "complement" of these labels (Kim et al., 2019). This constitutes one major difference between the selective negative learning procedure of Kim et al. (2019) and PARS: Kim et al. (2019) only apply it during the warm-up stage and still discard the filtered noisy labels in favour of pseudo-labels in an SSL manner, whereas we unify the two stages and show improved performance. 3) We improve the convergence of PARS by self-training with pseudo-labels and data augmentation on the entire dataset, as a proxy for enhancing clean labels and correcting noisy labels. Robust Warm-up Training. In the presence of label noise, a model trained with the CE loss overfits to the noise and produces overconfident, wrong predictions after a few epochs. This problem is more severe at high levels of label noise. To overcome this, we perform "robust warm-up" with a robust loss for a few epochs so that the model produces reliable and confident predictions. Specifically, in the warm-up training phase, we minimize: min_θ (1/|D|) ∑_{(x,y)∈D} L_RL(p(x; θ), y), (5) where L_RL can be any loss from Equation (2). In our experiments, we test different loss functions for warm-up training (for more details see Appendix A.3). Robust Sample Selection. As shown in Figure 2a, after the robust warm-up training we observe a weak trend that high-confidence samples tend to have clean labels. Given the model parameters θ̂ at the current iteration and a confidence threshold τ ∈ [0, 1], we divide the entire dataset D (or similarly, a mini-batch B ⊂ D) into an ambiguous¹ set D_A and a noisy set D_N by thresholding the maximum of the predicted confidence probabilities over all classes: D_A = {(x, y) ∈ D | max_k [p(x; θ̂)]_k > τ}, D_N = D \ D_A. (6) A notable difference is that our confidence-based thresholding is label-free (i.e., it thresholds the maximum confidence over all classes), while the widely used loss-based thresholding of Kim et al. (2019) depends on the ground-truth (noisy) label (i.e., it thresholds the confidence corresponding to the given class). Our approach has two advantages: 1) our selection criterion depends on the predicted confidence probabilities alone, so it is straightforward to apply when ground-truth labels are unavailable (e.g., in the semi-supervised setting of Section 4.3); 2) our strategy helps reduce the self-confirmation bias of propagating the mistake of overfitting to possibly noisy samples, because the model only selects samples it is confident about, regardless of the correctness of the raw label. Notably, previous works such as Jiang et al. (2018); Li et al. (2020) usually maintain two divergent networks (one for sample selection and one for the main classification task) in order to avoid self-confirmation bias. Our method proves highly resilient to overfitting with a single network simultaneously updated to perform both sample selection and classification, which drastically simplifies implementation and training complexity. Label-Dependent Noise-Aware Loss Functions. Despite the robust sample selection dividing samples into ambiguous and noisy sets, we propose to learn useful information from all the available training data instead of discarding the raw labels of the noisy samples, as is typically done in previous literature (Jiang et al., 2018; Kim et al., 2019; Li et al., 2020). Specifically, we use two separate loss functions, tailored to the ambiguous and noisy labels respectively, which prevents overfitting to noise and enables effective further filtering of labels (Figure 2b).
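As a concrete illustration, the learning primitives introduced so far — the positive/negative losses of Equations (3)–(4) and the label-free confidence split of Equation (6) — can be sketched in a few lines of plain Python. The function names, the toy batch, and the threshold value are our own illustrative choices, not the paper's implementation:

```python
import math

def positive_learning_loss(probs, label, eps=1e-12):
    # Eq. (3): standard cross-entropy, pushing p(label) toward 1.
    return -math.log(probs[label] + eps)

def negative_learning_loss(probs, comp_label, eps=1e-12):
    # Eq. (4): cross-entropy on the complement, pushing p(comp_label) away from 1.
    return -math.log(1.0 - probs[comp_label] + eps)

def robust_sample_select(batch_probs, tau=0.95):
    # Eq. (6): label-free split of a batch into ambiguous (high-confidence)
    # and noisy (low-confidence) index sets via the max predicted probability.
    ambiguous = [i for i, p in enumerate(batch_probs) if max(p) > tau]
    noisy = [i for i, p in enumerate(batch_probs) if max(p) <= tau]
    return ambiguous, noisy

batch = [[0.98, 0.01, 0.01],
         [0.40, 0.35, 0.25],
         [0.96, 0.02, 0.02]]
ambiguous, noisy = robust_sample_select(batch)
```

Note that the split never inspects the (possibly noisy) labels, which is exactly what makes it applicable in the semi-supervised setting described above.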
For the ambiguous set D_A, we train with the robust learning loss L_RL (the same loss used during warm-up training), since the given labels in this set may still contain non-negligible noise. For the noisy set D_N, we train with the negative learning loss L_NL, learning the complement of the given raw labels for the identified least-confident, highly noisy samples. The overall label-dependent noise-aware loss for the proposed robust/negative learning framework using the given raw labels is thus: L_raw(θ) := (1/|D_A|) ∑_{(x,y)∈D_A} L_RL(p(x; θ), y) + λ_N (1/|D_N|) ∑_{(x,ȳ)∈D_N} L_NL(p(x; θ), ȳ), (7) where λ_N is the weight of the negative learning loss term, and D_A, D_N are defined as in Equation (6). We set a high threshold (e.g., 0.95 throughout our experiments) on the predicted confidence in the robust sample selection (Equation (6)), since we observe that this yields better performance, as shown in Appendix A.4. (¹We choose the term "ambiguous" rather than the typically used term "clean" (Jiang et al., 2018; 2020) because there is still a significant amount of noise in the expected-to-be-clean samples when a simple thresholding method is used to select them (Arazo et al., 2019).) We will show that, by using the noisy labels in the negative learning setting, we are able to consistently improve model performance (Table 4). Self-Training with Pseudo-Labels. After the robust warm-up training, the initial convergence of the model leads to reliable predictions, which can be used as labels to further improve convergence via self-training. Therefore, in parallel to using the given raw labels during training, we use pseudo-labels generated by the network itself to guide the learning process.
Given the model parameters θ̂ at the current iteration, for each sample (x, ·) ∈ D we generate the pseudo-label z corresponding to the model's most confident class, and then randomly sample a complementary "wrong" pseudo-label z̄ from the remaining classes, following Kim et al. (2019): z := argmax_k [p(x; θ̂)]_k, z̄ ∈ {1, ..., K} \ {z}. (8) Analogously to the label-dependent noise-aware losses applied to the given raw labels on the ambiguous and noisy sets, we train with the pseudo-labels from the two sets using different losses too. Specifically, we apply positive/negative learning for the generated pseudo-labels in the ambiguous/noisy set respectively: L_pseudo(θ) := (1/|D_A|) ∑_{(x,·)∈D_A} L_PL(p(x; θ), z) + λ_N (1/|D_N|) ∑_{(x,·)∈D_N} L_NL(p(x; θ), z̄), (9) where λ_N is the weight of the negative learning loss term (equal to the weight in Equation (7)), D_A, D_N are defined as in Equation (6), and z, z̄ are defined as in Equation (8). We find that the standard CE loss suffices for self-training with pseudo-labels from the ambiguous set, without the need for noise-robust learning. One caveat of self-training is that the model's own predictions may become unreliable when most samples are primarily guided by the self-training, encouraging the network to predict the same class to minimize the loss. We observe that this is particularly the case under high levels of label noise. To reliably apply self-training, we regularize Equation (9) with a confidence penalty (Tanaka et al., 2018; Arazo et al., 2019): L_reg(θ) := ∑_{k=1}^K [p]_k log([p]_k / [h(D; θ)]_k), (10) where [p]_k is the prior probability for class k, and [h(D; θ)]_k is shorthand for the model's mean predicted probability for class k across all samples in the dataset D.
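A minimal sketch of the pseudo-label construction in Equation (8) and the confidence penalty of Equation (10), assuming a uniform prior as used in our experiments; the function names and toy inputs are illustrative, not the paper's implementation:

```python
import math
import random

def make_pseudo_labels(probs, rng=random.Random(0)):
    # Eq. (8): pseudo-label z = most confident class; complementary label
    # z_bar sampled uniformly from the remaining classes.
    z = max(range(len(probs)), key=lambda k: probs[k])
    z_bar = rng.choice([k for k in range(len(probs)) if k != z])
    return z, z_bar

def confidence_penalty(batch_probs, eps=1e-12):
    # Eq. (10) with a uniform prior: KL(uniform || mean predicted distribution),
    # which discourages the network from collapsing onto a single class.
    K = len(batch_probs[0])
    mean = [sum(p[k] for p in batch_probs) / len(batch_probs) for k in range(K)]
    return sum((1.0 / K) * math.log((1.0 / K) / (mean[k] + eps)) for k in range(K))

z, z_bar = make_pseudo_labels([0.1, 0.8, 0.1])
```

A uniform mean prediction gives a penalty near zero, while a batch collapsed onto one class is penalized heavily.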
In our experiments, we assume a uniform distribution for the prior probabilities (i.e., [p]_k = 1/K), while approximating h(D; θ) by h(B; θ) using mini-batches B ⊂ D, as suggested by Tanaka et al. (2018). The generated pseudo-labels become reliably confident with the help of this regularization, which guides the robust sample selection to resemble the true distribution more closely (Figure 2c). PARS: Pseudo-Label Aware Robust Sample Selection. After the robust warm-up training stage (Equation (5)), our final model, PARS, is trained using both the original labels and the self-generated pseudo-labels with robust sample selection. The final loss is given by: min_θ L_total(θ) := L_raw(θ) + λ_S L_pseudo(θ) + λ_R L_reg(θ), (11) where L_raw, L_pseudo, L_reg are defined as in Equations (7), (9) and (10) respectively, and λ_S, λ_R are the weights of the self-training loss and the confidence penalty term. Data Augmentation. For image classification tasks in LNL, data augmentation techniques have recently been shown to significantly improve generalization (Nishi et al., 2021). We adapt such data augmentation strategies to further improve PARS when benchmarking image classification performance in our experiments. Specifically, we apply weak augmentation (e.g., standard random flip, crop, normalization) to all data samples in the robust/negative learning with the original raw labels when computing Equation (7). We apply strong augmentation (e.g., AutoAugment (Cubuk et al., 2019), RandAugment (Cubuk et al., 2020)) to all data samples in the positive/negative learning with pseudo-labels when computing Equations (9) and (10). We do not use strong augmentation when training with the original labels, as it leads to poor performance in the presence of high label noise, as also observed by Nishi et al. (2021). In this way, PARS can be seen as one example of how to extend the work of Nishi et al.
(2021) by effectively combining weak and strong data augmentation with noisy labels and pseudo-labeling, further advancing the state of the art in LNL.
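Putting the pieces together, the full PARS objective of Equation (11) can be sketched end-to-end on a batch of predicted probabilities. This is a toy evaluation, not the paper's implementation: cross-entropy stands in for the robust loss L_RL, the weights λ_N, λ_S, λ_R are placeholders rather than tuned values, and augmentation is omitted:

```python
import math
import random

def pars_total_loss(batch_probs, labels, tau=0.95, lam_n=1.0, lam_s=1.0, lam_r=0.8,
                    rng=random.Random(0), eps=1e-12):
    K = len(batch_probs[0])
    ce = lambda p, y: -math.log(p[y] + eps)        # positive / robust stand-in
    nl = lambda p, y: -math.log(1.0 - p[y] + eps)  # negative learning
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0

    # Eq. (6): label-free confidence split into ambiguous / noisy sets.
    amb = [i for i, p in enumerate(batch_probs) if max(p) > tau]
    noi = [i for i in range(len(batch_probs)) if i not in amb]

    # Eq. (8): pseudo-labels and complementary pseudo-labels.
    z = [max(range(K), key=lambda k: batch_probs[i][k]) for i in range(len(batch_probs))]
    z_bar = [rng.choice([k for k in range(K) if k != z[i]]) for i in range(len(batch_probs))]

    # Eq. (7): raw labels -- (stand-in) robust loss on ambiguous set, NL on noisy set.
    l_raw = mean([ce(batch_probs[i], labels[i]) for i in amb]) \
            + lam_n * mean([nl(batch_probs[i], labels[i]) for i in noi])
    # Eq. (9): pseudo-labels -- PL on ambiguous set, NL on noisy set.
    l_pseudo = mean([ce(batch_probs[i], z[i]) for i in amb]) \
               + lam_n * mean([nl(batch_probs[i], z_bar[i]) for i in noi])
    # Eq. (10): confidence penalty with a uniform prior over the batch mean.
    h = [mean([p[k] for p in batch_probs]) for k in range(K)]
    l_reg = sum((1.0 / K) * math.log((1.0 / K) / (h[k] + eps)) for k in range(K))

    return l_raw + lam_s * l_pseudo + lam_r * l_reg  # Eq. (11)
```

In a real training loop these quantities would be computed on weakly augmented views for the raw-label terms and strongly augmented views for the pseudo-label terms, per the augmentation strategy above.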
This paper proposes a unified approach to handling noisy labels when training neural networks, utilizing all the training data to learn effectively when noise is present. The authors use the assumed-correct and the noisy labels with different loss functions, weighted differently to adjust their impact on training. The authors show how their approach improves on CIFAR-10 and CIFAR-100 with high noise ratios and demonstrate competitive results on the Clothing1M dataset. The authors also show strong results in a semi-supervised setting, significantly outperforming FixMatch.
SP:1bc9c8ea6302a6fc0286fe4dfade669907053946
Unsupervised Learning of Neurosymbolic Encoders
1 INTRODUCTION . Advances in unsupervised learning have enabled the discovery of latent structures in data from a variety of domains , such as image data ( Dupont , 2018 ) , sound recordings ( Calhoun et al. , 2019 ) , and tracking data ( Luxem et al. , 2020 ) . For instance , a common approach is to use encoder-decoder frameworks , such as variational autoencoders ( VAE ) ( Kingma & Welling , 2014 ) , to identify a lowdimensional latent representation from the raw data that could contain disentangled factors of variation ( Dupont , 2018 ) or semantically meaningful clusters ( Luxem et al. , 2020 ) . Such approaches typically employ complex mappings based on neural networks , which can make it difficult to explain how the model assigns inputs to latent representations ( Zhang et al. , 2020 ) . To address this issue , we introduce unsupervised neurosymbolic representation learning , where the goal is to find a programmatically interpretable representation ( as part of a larger neurosymbolic representation ) of the raw data . We consider programs to be differentiable , symbolic models instantiated using a domain-specific language ( DSL ) , and use neurosymbolic to refer to blendings of neural and symbolic . Neurosymbolic encoders can offer a few key benefits . First , since the DSL reflects structured domain knowledge , they can often be human-interpretable ( Verma et al. , 2018 ; Shah et al. , 2020 ) . Second , by leveraging the inductive bias of the DSL , neurosymbolic encoders can potentially offer more factorized or well-separated representations of the raw data ( i.e. , the representations are more semantically meaningful ) , which has been observed in studies that used hand-crafted programmatic encoders ( Zhan et al. , 2020 ) . Our learning algorithm is grounded in the VAE framework ( Kingma & Welling , 2014 ; Mnih & Gregor , 2014 ) , where the goal is to learn a neurosymbolic encoder coupled with a standard neural decoder . 
A key challenge is that the space of programs is combinatorial, which we tackle via a tight integration of standard VAE training with modern program synthesis methods (Shah et al., 2020). We further show how to incorporate ideas from adversarial information factorization (Creswell et al., 2017) and capacity constraints (Burgess et al., 2017; Dupont, 2018) in order to mitigate issues such as posterior and index collapse in the learned representation. We evaluate our neurosymbolic encoding approach on multiple behavior analysis domains, where the data come from challenging real-world settings and cluster interpretability is important for domain experts. Our contributions are: • We propose a novel unsupervised approach to training neurosymbolic encoders, resulting in a programmatically interpretable representation of data (as part of a neurosymbolic representation). • We show that our approach can significantly outperform purely neural encoders in extracting semantically meaningful representations of behavior, as measured by standard unsupervised metrics. • We further explore the flexibility of our approach by showing that performance can be robust across different DSL designs by domain experts. • We showcase the practicality of our approach on downstream tasks by incorporating it into a state-of-the-art self-supervised learning approach for behavior analysis (Sun et al., 2021b). 2 BACKGROUND. 2.1 VARIATIONAL AUTOENCODERS. We build on VAEs (Kingma & Welling, 2014; Mnih & Gregor, 2014), a latent variable modeling framework shown to learn effective latent representations (also called encodings/embeddings) (Higgins et al., 2016; Zhao et al., 2017; Yingzhen & Mandt, 2018) and to capture the generative process (Oord et al., 2017; Vahdat & Kautz, 2020; Zhan et al., 2020). VAEs introduce a latent variable z, an encoder q_φ, a decoder p_θ, and a prior distribution p on z.
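The ELBO objective this setup optimizes can be sketched numerically. We assume here a diagonal-Gaussian posterior, a standard-normal prior (so the KL term has a closed form), and a Bernoulli decoder — illustrative assumptions for a minimal example, not the trajectory-VAE used in the experiments:

```python
import math

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, logvar))

def elbo(x, recon_probs, mu, logvar, eps=1e-12):
    # One-sample estimate of E_q[log p_theta(x|z)] (Bernoulli decoder),
    # minus the KL regularizer: the two terms of the ELBO in Eq. (1) below.
    log_lik = sum(xi * math.log(pi + eps) + (1 - xi) * math.log(1 - pi + eps)
                  for xi, pi in zip(x, recon_probs))
    return log_lik - gaussian_kl(mu, logvar)
```

Since the KL term is non-negative and the Bernoulli log-likelihood is non-positive, this quantity is indeed a lower bound on the (non-positive) data log-likelihood.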
φ and θ are the parameters of q_φ and p_θ respectively, often instantiated with neural networks. The learning objective is to maximize the evidence lower bound (ELBO) of the data log-likelihood: ELBO := E_{q_φ(z|x)}[log p_θ(x|z)] − D_KL(q_φ(z|x) || p(z)) ≤ log p(x). (1) The first term in Eq. 1 is the log-density assigned to the data, while the second term is the KL-divergence between the prior and the approximate posterior of z. Latent representations z are often continuous and modeled with a Gaussian prior, but z can be modeled to contain discrete dimensions as well (Kingma et al., 2014; Hu et al., 2017; Dupont, 2018). Our experiments focus on behavioral tracking data in the form of trajectories, so in practice we use a trajectory variant of VAEs (Co-Reyes et al., 2018; Zhan et al., 2020; Sun et al., 2021b), described in Section 3.4. One challenge with VAEs (and deep encoder-decoder models in general) is that while the model is expressive, it is often difficult to interpret what is encoded in the latent representation z. Common approaches include taking traversals in the latent space and visualizing the resulting generations (Burgess et al., 2017), or post-processing the latent variables using techniques such as clustering (Luxem et al., 2020). Such techniques are post-hoc and thus cannot guide (in an interpretable way) the encoder to be biased towards a family of structures. Some recent works have studied how to impose structure in the form of graphical models or dynamics in the latent space (Johnson et al., 2016; Deng et al., 2017), and our work can be thought of as a first step towards imposing structure in the form of symbolic knowledge encoded in a domain-specific programming language. 2.2 SYNTHESIS OF DIFFERENTIABLE PROGRAMS. Our approach utilizes recent work on the synthesis of differentiable programs (Shah et al., 2020; Valkov et al.
, 2018), where one learns both the discrete structure of the symbolic program (analogous to the architecture of a neural network) and the differentiable parameters within that structure. Our formulation of this problem closely follows that of Shah et al. (2020). We use a domain-specific functional programming language (DSL), generated with a context-free grammar (see Figure 2 for an example). Programs are represented as a pair (α, λ), where α is a discrete program architecture and λ are its real-valued parameters. We denote by P the space of symbolic programs (i.e., programs with complete architectures). The semantics of a program (α, λ) are given by a function [[α]](x, λ), which is guaranteed by the semantics of the DSL to be differentiable in both x and λ. Like Shah et al. (2020), we pose the problem of learning differentiable programs as search through a directed program graph G. The graph G models the top-down construction of program architectures α through the repeated firing of rules of the DSL grammar, starting with an empty architecture (represented by the "start" nonterminal of the grammar). The leaf nodes of G contain programs with complete architectures (no nonterminals); thus, P is the set of programs in the leaf nodes of G. The other nodes of G contain programs with partial architectures (at least one nonterminal). We interpret a program in a non-leaf node as neurosymbolic, by viewing its nonterminals as neural networks with free parameters. The root node of G is the empty architecture α₀, interpreted as a fully neural program. An edge (α, α′) exists in G if α′ can be obtained from α by applying a DSL rule that replaces a nonterminal in α. Program synthesis in this setting amounts to searching through G to find the optimal complete program architecture, and then learning the corresponding parameters, i.e.
, finding the optimal (α, λ) that minimizes a combination of the standard training loss (e.g., classification error) and a structural loss (preferring "simpler" α's). Shah et al. (2020) evaluate multiple strategies for solving this problem and find informed search using admissible neural heuristics to be the most efficient (see appendix). Consequently, we adopt this algorithm for our program synthesis task. 3 NEUROSYMBOLIC ENCODERS. The structure of our neurosymbolic encoder is shown in the right diagram of Figure 1. The latent representation z = [z_φ, z_(α,λ)] is partitioned into a neurally encoded z_φ and a programmatically encoded z_(α,λ). This approach has several advantages: • The symbolic component of the latent representation is programmatically interpretable. • The neural component can encode any residual information not captured by the program, which maintains the model's capacity compared to standard deep encoders. • By incorporating a modular design, we can leverage state-of-the-art learning algorithms for both differentiable encoder-decoder training and program synthesis. We denote by q_φ and q_(α,λ) the neural and symbolic encoders respectively (see Figure 1), where z_φ ∼ q_φ(·|x) and z_(α,λ) ∼ q_(α,λ)(·|x). q_φ is instantiated with a neural network, while q_(α,λ) is a differentiable program with architecture α and parameters λ in some program space P defined by a DSL. Given an unlabeled training set of x's, the VAE learning objective in Eq. 1 becomes: max_{φ, (α,λ), θ} E_{q_φ(z_φ|x) q_(α,λ)(z_(α,λ)|x)} [log p_θ(x | z_φ, z_(α,λ))] − D_KL(q_φ(z_φ|x) || p(z_φ)) − D_KL(q_(α,λ)(z_(α,λ)|x) || p(z_(α,λ))), (2) where the first term is the reconstruction loss and the two KL terms regularize the neural and symbolic latents respectively. Compared to the standard VAE objective in Eq. 1 for a single neural encoder, Eq. 2 has separate KL-divergence terms for the neural and programmatic encoders.
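The program graph G can be made concrete with a toy context-free DSL. The grammar below (a single nonterminal E with three rules) is our own illustrative example, not the behavior-analysis DSL used in the paper; each expansion of the leftmost nonterminal corresponds to traversing one edge of G:

```python
# Toy DSL grammar: E -> if_gt(E, E) | feat0 | feat1   (illustrative only).
# Architectures are stored as symbol lists in prefix order.
RULES = [("if_gt", ["E", "E"]), ("feat0", []), ("feat1", [])]

def expand(arch):
    """Yield child architectures: replace the leftmost nonterminal 'E' with
    each grammar rule (one program-graph edge per child)."""
    idx = next((i for i, s in enumerate(arch) if s == "E"), None)
    if idx is None:
        return  # complete architecture: a leaf node of G
    for head, body in RULES:
        yield arch[:idx] + [head] + body + arch[idx + 1:]

def complete_architectures(max_nodes=5):
    # Breadth-first search from the empty architecture ["E"] (the root of G)
    # down to complete programs, bounding program size for termination.
    frontier, leaves = [["E"]], []
    while frontier:
        arch = frontier.pop(0)
        children = list(expand(arch))
        if not children:
            leaves.append(arch)
        else:
            frontier.extend(c for c in children if len(c) <= max_nodes)
    return leaves
```

With `max_nodes=5` this enumerates the 22 complete programs of size at most 5: two single-feature programs, four `if_gt` programs over two leaves, and sixteen programs nesting one `if_gt` inside another.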
3.1 LEARNING ALGORITHM. The challenge in solving Eq. 2 is that while (φ, θ, ψ) can be optimized via back-propagation with α fixed, optimizing α is a discrete optimization problem. Since it is difficult to jointly optimize over both continuous and discrete spaces, we take an iterative, alternating optimization approach. We start with a fully neural program (one with the empty architecture α₀) trained using standard differentiable optimization (Figure 1, Step 1). We then gradually make it more symbolic (Figure 1, Step 2) by finding a program that is a child of the current program in G (more symbolic by construction of G) whose outputs are as similar as possible to the current latent representations:

$$\min_{\alpha' :\, (\alpha,\alpha') \in G,\;\theta'} \; \mathcal{L}_{\mathrm{supervised}}\big(q_{(\alpha,\theta)}(x),\, q_{(\alpha',\theta')}(x)\big), \quad (3)$$

which can be viewed as a form of distillation (from less symbolic to more symbolic programs) via matching input/output behavior. We solve Eq. 3 by enumerating over all child programs and selecting the best one, which is similar to the iteratively-deepened depth-first search in Shah et al. (2020) (see appendix). We alternate between optimizing Eq. 2 and Eq. 3 until we obtain a complete program. Algorithm 1 outlines this procedure; it is guaranteed to terminate when G is made finite by specifying a maximum program depth. We chose this optimization procedure for two reasons. First, it maximally leverages state-of-the-art tools in both differentiable latent variable modeling (VAE-style training) and supervised program synthesis, leading to tractable algorithm design. Second, the procedure never makes a drastic change to the program architecture, leading to relatively stable learning behavior across iterations.

Algorithm 1: Learning a neurosymbolic encoder
1: Input: program space P, program graph G
2: initialize φ, θ, ψ, α = α₀ (empty architecture)
3: while α is not complete do
4:   (φ, θ, ψ) ← optimize Eq. 2 with α fixed
5:   (α, θ) ← optimize Eq. 3
6: end while
7: (φ, θ, ψ) ← optimize Eq. 2 with complete α
8: Return: encoder {q_φ, q_{(α,θ)}}

Algorithm 2: Learning a neurosymbolic encoder with k programs
1: Input: program space P, program graph G, k
2: for i = 1..k do
3:   fix programs {q_{(α₁,θ₁)}, ..., q_{(α_{i−1},θ_{i−1})}}
4:   execute Algorithm 1 to learn q_{(α_i,θ_i)}
5:   remove q_{(α_i,θ_i)} from P to avoid redundancies
6: end for
7: Return: encoder {q_φ, q_{(α₁,θ₁)}, ..., q_{(α_k,θ_k)}}
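The control flow of Algorithm 1 can be sketched as follows. The string grammar, depth cap, and scoring function below are toy stand-ins for the DSL and for the optimization of Eqs. 2 and 3 (which would involve actual gradient steps and a learned distillation loss), not the paper's implementation:

```python
# Architectures are strings in which "H" marks a nonterminal hole; filling the
# leftmost hole with a rule corresponds to traversing one edge of G.
RULES = ["add(H,H)", "x1", "x2"]   # hypothetical grammar, not the paper's DSL
MAX_DEPTH = 4

def children(arch, depth):
    """All architectures reachable by firing one grammar rule."""
    if "H" not in arch:
        return []
    rules = RULES if depth < MAX_DEPTH else RULES[1:]  # depth cap keeps G finite
    return [arch.replace("H", r, 1) for r in rules]

def distill_score(arch):
    """Toy stand-in for Eq. 3: distance to the program the neural encoder 'computes'."""
    target = "add(x1,x2)"
    return sum(a != b for a, b in zip(arch, target)) + abs(len(arch) - len(target))

def learn_neurosymbolic_encoder():
    arch, depth = "H", 0               # alpha_0: a single start nonterminal (fully neural)
    while "H" in arch:
        # (Here Algorithm 1 would first optimize phi, theta, psi on Eq. 2 with alpha fixed.)
        arch = min(children(arch, depth), key=distill_score)  # Eq. 3 by enumeration
        depth += 1
    return arch                        # complete, fully symbolic architecture

program = learn_neurosymbolic_encoder()
assert program == "add(x1,x2)"         # reached after three rule firings
```

Each loop iteration commits to a single grammar-rule firing, mirroring the paper's observation that the architecture never changes drastically between iterations.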
This paper is a clean extension of the supervised neurosymbolic approach of Shah et al. to the unsupervised setting where a VAE is used to encode input to the system as an interpretable program. These latent representations are shown to be interpretable for a synthetic experiment and useful for downstream tasks.
SP:765e1367ffc8ae4d453a5a784256e20c45e0d35e
Unsupervised Learning of Neurosymbolic Encoders
1 INTRODUCTION. Advances in unsupervised learning have enabled the discovery of latent structures in data from a variety of domains, such as image data (Dupont, 2018), sound recordings (Calhoun et al., 2019), and tracking data (Luxem et al., 2020). For instance, a common approach is to use encoder-decoder frameworks, such as variational autoencoders (VAEs) (Kingma & Welling, 2014), to identify a low-dimensional latent representation from the raw data that could contain disentangled factors of variation (Dupont, 2018) or semantically meaningful clusters (Luxem et al., 2020). Such approaches typically employ complex mappings based on neural networks, which can make it difficult to explain how the model assigns inputs to latent representations (Zhang et al., 2020). To address this issue, we introduce unsupervised neurosymbolic representation learning, where the goal is to find a programmatically interpretable representation (as part of a larger neurosymbolic representation) of the raw data. We consider programs to be differentiable, symbolic models instantiated using a domain-specific language (DSL), and use neurosymbolic to refer to blends of neural and symbolic components. Neurosymbolic encoders can offer a few key benefits. First, since the DSL reflects structured domain knowledge, they can often be human-interpretable (Verma et al., 2018; Shah et al., 2020). Second, by leveraging the inductive bias of the DSL, neurosymbolic encoders can potentially offer more factorized or well-separated representations of the raw data (i.e., representations that are more semantically meaningful), which has been observed in studies that used hand-crafted programmatic encoders (Zhan et al., 2020). Our learning algorithm is grounded in the VAE framework (Kingma & Welling, 2014; Mnih & Gregor, 2014), where the goal is to learn a neurosymbolic encoder coupled with a standard neural decoder.
A key challenge is that the space of programs is combinatorial, which we tackle via a tight integration of standard VAE training with modern program synthesis methods (Shah et al., 2020). We further show how to incorporate ideas from adversarial information factorization (Creswell et al., 2017) and enforcing capacity constraints (Burgess et al., 2017; Dupont, 2018) in order to mitigate issues such as posterior and index collapse in the learned representation. We evaluate our neurosymbolic encoding approach on multiple behavior analysis domains, where the data come from challenging real-world settings and cluster interpretability is important for domain experts. Our contributions are: • We propose a novel unsupervised approach to train neurosymbolic encoders, resulting in a programmatically interpretable representation of data (as part of a neurosymbolic representation). • We show that our approach can significantly outperform purely neural encoders in extracting semantically meaningful representations of behavior, as measured by standard unsupervised metrics. • We further explore the flexibility of our approach by showing that performance can be robust across different DSL designs by domain experts. • We showcase the practicality of our approach on downstream tasks by incorporating it into a state-of-the-art self-supervised learning approach for behavior analysis (Sun et al., 2021b).

2 BACKGROUND. 2.1 VARIATIONAL AUTOENCODERS. We build on VAEs (Kingma & Welling, 2014; Mnih & Gregor, 2014), a latent variable modeling framework shown to learn effective latent representations (also called encodings or embeddings) (Higgins et al., 2016; Zhao et al., 2017; Yingzhen & Mandt, 2018) and to capture the generative process (Oord et al., 2017; Vahdat & Kautz, 2020; Zhan et al., 2020). VAEs introduce a latent variable z, an encoder q_φ, a decoder p_ψ, and a prior distribution p(z) over z.
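As a sanity check on the VAE objective: for any approximate posterior q(z|x), the ELBO equals log p(x) minus KL(q || p(z|x)), so it never exceeds the evidence and is tight at the exact posterior. A toy discrete example with invented numbers makes this concrete:

```python
import numpy as np

# Two latent states; all probabilities below are invented for illustration.
p_z = np.array([0.5, 0.5])              # prior over latent states
p_x_given_z = np.array([0.9, 0.2])      # likelihood of one observed x under each z
p_x = float(np.sum(p_z * p_x_given_z))  # evidence (marginal likelihood)

def elbo(q):
    """E_q[log p(x|z)] - KL(q || p(z)), written as a single sum."""
    return float(np.sum(q * (np.log(p_x_given_z) + np.log(p_z) - np.log(q))))

posterior = p_z * p_x_given_z / p_x     # exact p(z|x)
for q in (np.array([0.7, 0.3]), np.array([0.1, 0.9]), posterior):
    assert elbo(q) <= np.log(p_x) + 1e-12   # ELBO lower-bounds log p(x)
assert np.isclose(elbo(posterior), np.log(p_x))  # bound is tight at the true posterior
```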
φ and ψ are the parameters of q_φ and p_ψ respectively, often instantiated with neural networks. The learning objective is to maximize the evidence lower bound (ELBO) of the data log-likelihood:

$$\mathrm{ELBO} := \mathbb{E}_{q_\phi(z|x)}\big[\log p_\psi(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z|x)\,\|\,p(z)\big) \le \log p(x). \quad (1)$$

The first term in Eq. 1 is the log-density assigned to the data, while the second term is the KL-divergence between the prior and the approximate posterior of z. Latent representations z are often continuous and modeled with a Gaussian prior, but z can also be modeled to contain discrete dimensions (Kingma et al., 2014; Hu et al., 2017; Dupont, 2018). Our experiments focus on behavioral tracking data in the form of trajectories, so in practice we utilize a trajectory variant of VAEs (Co-Reyes et al., 2018; Zhan et al., 2020; Sun et al., 2021b), described in Section 3.4. One challenge with VAEs (and deep encoder-decoder models in general) is that while the model is expressive, it is often difficult to interpret what is encoded in the latent representation z. Common approaches include taking traversals in the latent space and visualizing the resulting generations (Burgess et al., 2017), or post-processing the latent variables using techniques such as clustering (Luxem et al., 2020). Such techniques are post-hoc and thus cannot guide (in an interpretable way) the encoder to be biased towards a family of structures. Some recent works have studied how to impose structure in the form of graphical models or dynamics in the latent space (Johnson et al., 2016; Deng et al., 2017), and our work can be thought of as a first step towards imposing structure in the form of symbolic knowledge encoded in a domain-specific programming language.

2.2 SYNTHESIS OF DIFFERENTIABLE PROGRAMS. Our approach utilizes recent work on the synthesis of differentiable programs (Shah et al., 2020; Valkov et al., 2018), where one learns both the discrete structure of the symbolic program (analogous to the architecture of a neural network) as well as differentiable parameters within that structure.
The paper proposes to learn a novel, interpretable neurosymbolic encoder for sequences in an autoencoding framework. The key idea is to learn both a symbolic and a neural encoder, gradually making the symbolic encoder more structured by progressively increasing its complexity. Given such an encoder, the paper shows that its encodings are useful for clustering sequence data and demonstrates gains over "unstructured", purely neural approaches.
The authors propose an unsupervised approach to train neurosymbolic encoders to obtain a programmatically interpretable representation using the dictionary of a domain-specific language (DSL). The experimental results show that the proposed method can outperform baseline neural encoders in extracting semantically meaningful representations of behavior on the CalMS21 (mice) dataset. The results also show that performance can be robust across different DSL designs by domain experts.
A First-Occupancy Representation for Reinforcement Learning
Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and which enable them to efficiently traverse their environments to reach rewarding states. The successor representation (SR), which measures the expected cumulative, discounted state occupancy under a fixed policy, enables efficient transfer to different reward structures in an otherwise constant Markovian environment and has been hypothesized to underlie aspects of biological behavior and neural activity. However, in the real world, rewards may move, may only be available for consumption once, or agents may simply aim to reach goal states as rapidly as possible without the constraint of artificially imposed task horizons. In such cases, the most behaviorally relevant representation would carry information about when the agent is likely to first reach states of interest, rather than how often it should expect to visit them over a potentially infinite time span. To reflect such demands, we introduce the first-occupancy representation (FR), which measures the expected temporal discount to the first time a state is accessed. We demonstrate that the FR facilitates exploration and the selection of efficient paths to desired states, allows the agent, under certain conditions, to plan provably optimal trajectories defined by a sequence of subgoals, and induces behavior similar to that of animals avoiding threatening stimuli.

1 INTRODUCTION. In order to maximize reward, both animals and machines must quickly make decisions in uncertain environments with rapidly changing reward structure. Often, the strategies these agents employ are categorized as either model-free (MF) or model-based (MB) (Sutton & Barto, 2018). In the former, the optimal action in each state is identified through trial and error, with propagation of learnt value from state to state.
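The contrast drawn in the abstract between the SR and the FR can be seen numerically on a small invented chain MDP. The SR accumulates discounted occupancy over an infinite horizon; the FR records the expected discount γ^T at the first occupancy time T, computed below by fixed-point iteration on that verbal definition (F(s, s′) = 1 if s = s′, else γ E[F(s_{t+1}, s′)]), which is a sketch of the idea rather than the paper's formal development:

```python
import numpy as np

gamma = 0.9
# Invented 3-state chain: 0 -> 1 -> 2, absorbing at state 2.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
n = len(P)

M = np.linalg.inv(np.eye(n) - gamma * P)   # SR: sum_t gamma^t (P^pi)^t

F = np.zeros((n, n))                       # FR: discount to *first* occupancy
for _ in range(500):                       # iterate the fixed-point equation
    F = np.where(np.eye(n, dtype=bool), 1.0, gamma * (P @ F))

# The SR keeps counting visits to the absorbing state forever ...
assert np.isclose(M[0, 2], gamma**2 / (1 - gamma))   # 8.1
# ... while the FR only marks when state 2 is first reached (two steps from 0):
assert np.isclose(F[0, 2], gamma**2)                 # 0.81
```

The divergence at the absorbing state illustrates why once-only rewards are better served by first-occupancy information than by cumulative occupancy.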
By contrast, the latter depends on the acquisition of a map-like representation of the environment's transition structure, from which an optimal course of action may be derived. This dichotomy has motivated a search for intermediate models which cache information about environmental structure, and so enable efficient but flexible planning. One such approach, based on the successor representation (SR) (Dayan, 1993), has been the subject of recent interest in the context of both biological (Stachenfeld et al., 2017; Gershman, 2018; Momennejad et al., 2017; Vértes & Sahani, 2019; Behrens et al., 2018) and machine (Kulkarni et al., 2016; Barreto et al., 2017b;a; 2018; Machado et al., 2020; Ma et al., 2020; Madarasz & Behrens, 2019) learning. The SR associates with each state and policy of action a measure of the expected rate of future occupancy of all states if that policy were to be followed indefinitely. This cached representation can be acquired through experience in much the same way as MF methods and provides some of the flexibility of MB behaviour at reduced computational cost. Importantly, the SR makes it possible to rapidly evaluate the expected return of each available policy in an otherwise unchanging environment, provided that the reward distribution remains consistent. However, these requirements limit the applicability of the SR. In the real world, rewards are frequently non-Markovian. They may be depleted by consumption, frequently only being available on the first entry to each state. Internal goals for control—say, to pick up a particular object—need to be achieved as rapidly as possible, but only one at a time. Furthermore, while a collection of SRs for different policies makes it possible to select the best amongst them, or to improve upon them all by considering the best immediate policy-dependent state values (Barreto et al.
, 2018 ) , this capacity still falls far short of the power of planning within complete models of the environment . Here , we propose a different form of representation in which the information cached is appropriate for achieving ephemeral rewards and for planning complex combinations of policies . Both features arise from considering the expected time at which other states will be first accessed by following the available policies . We refer to this as a first-occupancy representation ( FR ) . The shift from expected rate of future occupancy ( SR ) to delay to first occupancy makes it possible to handle settings where the underlying environment remains stationary , but reward availability is not Markovian . Our primary goal in this paper is to formally introduce the FR and to highlight the breadth of settings in which it offers a compelling alternative to the SR , including , but not limited to : exploration , unsupervised RL , planning , and modeling animal behavior . 2 REINFORCEMENT LEARNING PRELIMINARIES . Policy evaluation and improvement In reinforcement learning ( RL ) , the goal of the agent is to act so as to maximize the discounted cumulative reward received within a task-defined environment . Typically , task T is modelled as a finite Markov decision process ( MDP ; ( Puterman , 2010 ) ) , T = ( S , A , p , r , γ , µ ) , where S is a finite state space , A is a finite action space , p : S × A → ∆ ( S ) is the transition distribution ( where ∆ ( S ) is the probability simplex over S ) , r : S → R is the reward function , γ ∈ [ 0 , 1 ) is a discount factor , and µ ∈ ∆ ( S ) is the distribution over initial states . Note that the reward function is also frequently defined over state-action pairs ( s , a ) or triples ( s , a , s′ ) , but we restrict our analysis to state-based rewards for now . The goal of the agent is to maximize its expected return , or discounted cumulative reward ∑ t γ tr ( st ) . 
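As a concrete illustration (ours, not the paper's), the discounted return for a finite reward sequence can be computed directly, with the discount γ weighting a reward received t steps in the future by γ^t:

```python
def discounted_return(rewards, gamma=0.9):
    """Compute sum_t gamma^t * r(s_t) for a finite trajectory."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# With gamma = 0.5 and rewards [1, 0, 1]: 1 + 0 + 0.25 = 1.25
print(discounted_return([1.0, 0.0, 1.0], gamma=0.5))
```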
To simplify notation, we will frequently write r(s_t) := r_t and let r ∈ R^{|S|} be the vector of rewards for each state. The agent acts according to a stationary policy π : S → ∆(A). For finite MDPs, we can describe the expected transition probabilities under π using an |S| × |S| matrix P^π such that P^π_{s,s′} = p^π(s′|s) := Σ_a p(s′|s, a) π(a|s). Given π and a reward function r, the expected return is

Q^π_r(s, a) = E_π[ Σ_{k=0}^∞ γ^k r_{t+k} | s_t = s, a_t = a ] = E_{s′∼p^π(·|s)}[ r_t + γ Q^π_r(s′, π(s′)) ].  (1)

Q^π_r are called the state-action values or simply the Q-values of π. The expectation E_π[·] is taken with respect to the randomness of both the policy and the transition dynamics. For simplicity of notation, from here onwards we will write expectations of the form E_π[·|s_t = s, a_t = a] as E_π[·|s_t, a_t]. This recursive form is called the Bellman equation, and it makes the process of estimating Q^π_r—termed policy evaluation—tractable via dynamic programming (DP; Bellman, 1957). In particular, successive applications of the Bellman operator T^π Q := r + γ P^π Q are guaranteed to converge to the true value function Q^π for any initial real-valued |S| × |A| matrix Q. When the transition dynamics and reward function are unknown, temporal difference (TD) learning updates value estimates using a bootstrapped estimate of the Bellman target (Sutton & Barto, 2018). Given a transition sequence (s_t, a_t, r_t, s_{t+1}) and a_{t+1} ∼ π(·|s_{t+1}),

Q^π_r(s_t, a_t) ← Q^π_r(s_t, a_t) + α δ_t,  δ_t := r_t + γ Q^π_r(s_{t+1}, a_{t+1}) − Q^π_r(s_t, a_t).  (2)

Once a policy has been evaluated, policy improvement identifies a new policy π′ such that Q^{π′}_r(s, a) ≥ Q^π_r(s, a) ∀(s, a) ∈ S × A. Helpfully, such a policy can be defined as π′(s) ∈ argmax_a Q^π_r(s, a).
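To make the dynamic-programming route concrete, here is a minimal sketch (the function and variable names are our own, not the paper's) of policy evaluation by repeated application of the Bellman operator on a toy two-state chain; since the operator is a γ-contraction, the iterate converges to the closed-form fixed point (I − γP^π)^{-1} r:

```python
import numpy as np

def evaluate_policy(P_pi, r, gamma=0.9, iters=1000):
    """Iterate V <- r + gamma * P_pi @ V (the Bellman operator) to convergence."""
    V = np.zeros_like(r, dtype=float)
    for _ in range(iters):
        V = r + gamma * P_pi @ V
    return V

# Toy two-state chain under a fixed policy: state 0 mostly self-loops,
# state 1 is absorbing with reward 1.
P_pi = np.array([[0.9, 0.1],
                 [0.0, 1.0]])
r = np.array([0.0, 1.0])
V = evaluate_policy(P_pi, r)
V_closed = np.linalg.solve(np.eye(2) - 0.9 * P_pi, r)  # (I - gamma P)^-1 r
```

The same contraction argument underlies Eq. (2): TD learning performs the same update stochastically, one sampled transition at a time.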
The successor representation. The successor representation (SR; Dayan, 1993) is motivated by the idea that a state representation for policy evaluation should depend on the similarity of different paths under the current policy. The SR is defined as a policy's expected discounted state occupancy, and for discrete state spaces it can be stored in an |S| × |S| matrix M^π, where

M^π(s, s′) := E_π[ Σ_k γ^k 1(s_{t+k} = s′) | s_t ] = E_π[ 1(s_t = s′) + γ M^π(s_{t+1}, s′) | s_t ],  (3)

where 1(·) is the indicator function. The SR can also be conditioned on actions, i.e., M^π(s, a, s′) := E_π[ Σ_k γ^k 1(s_{t+k} = s′) | s_t, a_t ], and expressed in a vectorized format we can write M^π(s) := M^π(s, ·) or M^π(s, a) := M^π(s, a, ·). The recursion in Eq. (3) admits a TD error:

δ^M_t := 1(s_t) + γ M^π(s_{t+1}, π(s_{t+1})) − M^π(s_t, a_t),  (4)

where 1(s_t) is a one-hot state representation of length |S|. One useful property of the SR is that, once converged, it facilitates rapid policy evaluation for any reward function in a given environment:

r^T M^π(s, a) = r^T E_π[ Σ_k γ^k 1(s_{t+k}) | s_t, a_t ] = E_π[ Σ_k γ^k r_{t+k} | s_t, a_t ] = Q^π_r(s, a).  (5)

Fast transfer for multiple tasks. In the real world, we often have to perform multiple tasks within a single environment. A simplified framework for this scenario is to consider a set of MDPs M that share every property (i.e., S, A, p, γ, µ) except the reward function, where each task within this family is determined by a reward function r belonging to a set R. Extending the notions of policy evaluation and improvement to this multitask setting, we can define generalized policy evaluation (GPE) as the computation of the value function of a policy π on a set of tasks R.
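For finite state spaces, the sum in Eq. (3) has the closed form M^π = Σ_k (γP^π)^k = (I − γP^π)^{-1}, and Eq. (5) reduces to a matrix-vector product. A small sketch (our own illustration, not the paper's code):

```python
import numpy as np

def successor_representation(P_pi, gamma=0.9):
    """Closed form of Eq. (3): M = I + (gamma*P) + (gamma*P)^2 + ... = (I - gamma*P)^-1."""
    n = P_pi.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P_pi)

P_pi = np.array([[0.5, 0.5],
                 [0.2, 0.8]])
M = successor_representation(P_pi)
r = np.array([1.0, 0.0])
V = M @ r  # Eq. (5): policy evaluation is a single matrix-vector product
```

Swapping in a new reward vector r costs only another matrix-vector product, which is what makes the SR attractive for transfer.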
Similarly, generalized policy improvement (GPI) for a set of “base” policies Π is the definition of a policy π′ such that

Q^{π′}_r(s, a) ≥ sup_{π∈Π} Q^π_r(s, a)  ∀(s, a) ∈ S × A  (6)

for some r ∈ R. As hinted above, the SR offers a way to take advantage of this shared structure by decoupling the agent's evaluation of its expected transition dynamics under a given policy from any single reward function. Rather than needing to directly estimate Q^π ∀π ∈ Π, M^π only needs to be computed once, and given a new reward vector r, the agent can quickly perform GPE via Eq. (5). As shown by Barreto et al. (2017a), GPE and GPI can be combined to define a new policy π′ via

π′(s) ∈ argmax_{a∈A} max_{π∈Π} r^T M^π(s, a).  (7)

For brevity, we will refer to this combined procedure of GPE and GPI simply as “GPI”, unless otherwise noted. The resulting policy π′ is guaranteed to perform at least as well as any individual π ∈ Π (Barreto et al., 2020) and is part of a larger class of policies termed set-improving policies, which perform at least as well as any single policy in a given set (Zahavy et al., 2021).
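Eq. (7) can be sketched in a few lines: given action-conditioned SRs for a set of base policies and a new reward vector, GPE is a batch of dot products and GPI is a max over policies followed by an argmax over actions (the array shapes and names here are illustrative assumptions, not the paper's API):

```python
import numpy as np

def gpi_action(sr_list, r, s):
    """Eq. (7): pi'(s) = argmax_a max_pi r^T M^pi(s, a, .).
    sr_list: one action-conditioned SR per base policy, each of shape [S, A, S]."""
    q = np.stack([M[s] @ r for M in sr_list])  # GPE: shape [n_policies, A]
    return int(np.argmax(q.max(axis=0)))      # GPI: max over policies, then actions

rng = np.random.default_rng(0)
sr_list = [rng.random((3, 2, 3)) for _ in range(2)]  # 2 base policies, 3 states, 2 actions
r = np.array([0.0, 0.0, 1.0])
a = gpi_action(sr_list, r, s=0)  # greedy GPE+GPI action at state 0
```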
This paper proposes a new notion of state representation, the first-occupancy representation (FR), inspired by the Successor Representation. It is motivated by situations where the rewards are non-Markovian. Similarly to the SR, the FR can be learnt by TD learning. The usefulness of the FR is demonstrated on exploration, unsupervised RL, planning and escape behaviour tasks.
SP:eb55626de42119899036299794ed7aa2e3b69d8d
A First-Occupancy Representation for Reinforcement Learning
The paper proposes an alternative to the successor representation (SR) -- the first-occupancy representation (FR) -- which represents the expected time to first visitation of a state. The paper motivates the representation as applicable in environments with non-Markovian rewards and shows that the proposed FR can handle such cases. The authors present a well-executed comparison to the SR and demonstrate improved exploration in MF settings and the ability to plan in a model-based setting. The authors also provide some theoretical analysis of the representation, most notably a convergence result on the Bellman operator for the FR, and planning optimality in a model-based setting.
The paper proposes a novel representation of the dynamics of an environment that is independent of the reward structure encoding a particular sequential decision task in it, and thus can be reused to speed up the computation of policies for multiple tasks in this environment. Unlike successor representations, which compute the likelihood of occupying a particular state at any time in the future, when starting from a given initial state, the proposed first-occupancy representation encodes the likelihood of reaching a particular state for the first time. Because of this, the novel representation effectively represents the expected path length between all pairs of states when following a particular policy. The benefits of the proposed representation are illustrated in several decision problems with non-Markovian reward structure. It is also argued that animals might use similar representations of their environment to effectively find escape routes in a short amount of time.
Q-learning for real time control of heterogeneous microagent collectives
1 INTRODUCTION

The ability to control the behaviour of agents at the microscale or smaller has implications across fields such as nanomedicine (Hauert & Bhatia (2014)) and environmental remediation (Wang et al. (2019)), with possible agent types including micromotors, nanoparticles and bacterial cells. Exerting control at this scale remains challenging, however, due in large part to the simplicity and limited programmability of typical microagents. In this work, an external optical control scheme is used to control the microagents, here Volvox algae. Machine learning allows the control to be fine-tuned to each individual Volvox in real time. Through this, individual models can be learnt that enable optimal motion control, in this case learning how to alternate illumination and relaxation periods to stop the motion of individual Volvox. Light is a powerful tool at the microscale, capable of forming and breaking bonds (Chen et al. (2018)), powering micromotors (Palagi et al. (2019)), and interacting with light-sensitive organisms (Jékely et al. (2008)). Furthermore, the use of spatially structured light offers interaction with agents independently and in parallel (Palagi et al. (2019)), making it particularly well suited to the control of collective systems (Mukherjee et al. (2018); Izquierdo et al. (2018); Schmidt et al. (2019); Deng et al. (2018)). The dynamic nature of light also means that it can be combined with Q-learning to produce rapid, effective, closed-loop control outcomes. This was demonstrated by Muiños-Landin et al., who used tabular Q-learning on self-thermophoretic microswimmers to achieve navigation in a noisy, grid-like environment (Muiños-Landin et al. (2021)).
The work presented here similarly uses tabular Q-learning to influence the dynamics of motile microscale agents through optical interactions; however, in this case each agent performs the learning independently, with significant heterogeneity present among the collective owing to its biological nature. Furthermore, the learning and closed-loop optical control were implemented on a low-cost, open-source platform, demonstrating the power of this learning process even with limited computational resources. Optical control is enacted using the open-source DOME platform, a lightweight device which combines digital light projection with microscopy to image a microsystem in real time and provide closed-loop, localised light patterning. Given the limited computing power of the DOME, which runs on a Raspberry Pi, this work explores the potential of Q-learning algorithms in environments with scarce computational resources. Results show that tabular Q-learning can learn how light should be projected onto Volvox algae in order to maximally reduce their velocity. The state and action spaces of a complex biological system are simplified so that tabular Q-learning experiments can be run, and the learnt values for individual agents are used to achieve herding behaviour in living algae. 2 METHODOLOGY . This section introduces the experimental setup for light-based control of Volvox, the simulation environment, and the Q-learning methodology applied both in simulation and in reality. 2.1 OPTICALLY CONTROLLING Volvox . Volvox are green microscopic algae that exhibit phototactic behaviour. They are multicellular organisms, with somatic cells that have flagella for locomotion and an eyespot for light perception. These cells allow the Volvox to move towards a light source (Ueki et al., 2010).
This phototactic response is adaptive: when a Volvox comes into contact with light, its speed is typically reduced for around 2 s before it adapts to the new light environment and recovers its previous velocity (Drescher et al., 2010). In this work, the light response exhibited by Volvox is used to regulate the velocity of individual agents by providing spatially localised illumination. To overcome the adaptive nature of the response, illumination must be provided intermittently rather than as a continuous stimulus. Q-learning is therefore applied to determine, for each agent, the cycle lengths of illumination and relaxation that yield the largest velocity reduction. The Volvox used here were acquired from Blades Biological UK and are of the species Volvox aureus. 2.2 THE DOME . The experimental part of this work was performed using the DOME (Figure 1), an open-source platform for the study and engineering of microagent collectives through spatiotemporal illumination (Denniss et al., 2020). In this device, a closed-loop control scheme is established by linking a digital light processing unit to real-time imaging and image analysis, enabling the optical microenvironment to be shaped around the evolving system dynamics. The DOME has a maximum projection resolution of 30×30 µm and is thus well suited to illuminating individual Volvox agents, which are around 350–500 µm in diameter. 2.3 Q-LEARNING FOR Volvox CONTROL . Due to the inherent variability of living algae, a reinforcement learning algorithm was required to provide an adaptable method of controlling the Volvox. Because the algae would be controlled using the DOME system, the learning algorithm could not be computationally expensive.
Although this could have been circumvented by running the algorithm on an external computer in communication with the DOME, this work aimed to explore the potential of implementing reinforcement learning in limited-resource environments. Additionally, maintaining a self-contained computational setup allows the system to operate in enclosed conditions, such as within an incubator for live-cell study. For this reason, tabular Q-learning was chosen instead of more flexible alternatives such as deep Q-learning. Tabular Q-learning requires discrete action and state spaces, but biological systems are inherently continuous. The action and state spaces were therefore defined discretely. The action space consisted of two actions: either to illuminate the Volvox, or not. The state space needed to represent the amount of light that a Volvox had received, since the Volvox's speed is affected by the amount of light and darkness received. If the state space could be continuous, it would be defined by the time (in milliseconds) for which the agent had been illuminated and non-illuminated. Instead of measuring milliseconds, time was discretised using the number of camera frames. Since the number of states had to be finite, the number of frames of light or darkness could not grow without bound. Observation showed, however, that after 10 frames of either illumination or darkness the agent's behaviour no longer changed. Accordingly, an agent that has had no change in illumination for over 10 frames is placed in the same state as if it had had the same illumination for exactly 10 frames. A state was therefore defined by the number of frames for which the agent was subjected to light, the number of frames for which it was not subjected to light, and the present light value.
The present light value was necessary to distinguish between a state that had light on, then light off, and a state that had light off, then light on. Using this method, the total number of states was 242: the combination of possible frames on (f_on, between 0 and 10), frames off (f_off, between 0 and 10) and the light value (l, either ON or OFF). Note that testing all combinations would not be possible in real time, as each trial is a real-world experiment involving a Volvox reaction. Table 1 shows some examples of states with their descriptions. The state index S(f_on, f_off, l) was calculated as

S(f_on, f_off, l) = f_on + 11·f_off + 121·l . (1)

The reward for each state was calculated from the agent's velocity (v) and acceleration (a) in that state. Since the goal was to minimise the magnitude of the velocity, rewards were given to agents whose velocity was below a threshold, while accelerating agents were penalised; the directions of acceleration and velocity were not considered. Furthermore, because the number of light transitions (from on to off and vice versa) was to be minimised, states with more frames on and off received higher rewards. The reward function R(v, a, f_on, f_off) was defined as

R(v, a, f_on, f_off) = { f_on + f_off if |v| < 0.05 ; −5 if |v| ≥ 0.05 and a > 0 ; −1 if |v| ≥ 0.05 and a ≤ 0 } . (2)

At each step, the possible actions were to turn the light either on or off for each agent, giving an action space of size 2. Given this, the Q-table was initialised as an empty matrix with NumStates = 242 and NumActions = 2, where each cell encodes the quality of choosing that action in that state. The Q-table was updated at each step of the learning as the agent explored the environment and the different possible states.
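The state indexing of Eq. (1) and the reward of Eq. (2) can be sketched directly in Python, together with a standard tabular Q-learning update. The learning rate `alpha` and discount `gamma` below are assumed placeholder values (the text does not report the hyperparameters used):

```python
NUM_STATES = 11 * 11 * 2   # f_on, f_off in 0..10, light value in {0, 1} -> 242
NUM_ACTIONS = 2            # light OFF (0) or light ON (1)

def state_index(f_on, f_off, light):
    """Flat state index per Eq. (1)."""
    return f_on + 11 * f_off + 121 * light

def reward(v, a, f_on, f_off):
    """Reward per Eq. (2): favour slow agents and long on/off periods."""
    if abs(v) < 0.05:
        return f_on + f_off
    return -5 if a > 0 else -1

# Q-table: one row per state, one column per action.
Q = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]
alpha, gamma = 0.1, 0.9    # assumed hyperparameters

def q_update(s, action, r, s_next):
    """Standard tabular Q-learning update applied one step after acting."""
    Q[s][action] += alpha * (r + gamma * max(Q[s_next]) - Q[s][action])

# Example step: agent had 3 frames on / 4 off with light off, light was then
# turned on, and it was observed moving slowly.
s = state_index(3, 4, 0)
q_update(s, 1, reward(0.01, 0.0, 3, 4), state_index(4, 4, 1))
```

The delayed update matches the text's description of storing the previous state and action so the reward can be applied at the next iteration.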
The Q-learning algorithm stored the previously chosen action and the previous state, so as to apply the reward update at the next iteration. 2.3.1 VOLVOX SIMULATOR . The proposed learning methodology was refined in simulation before use in reality. To this end, an agent-based Volvox simulator was built in Python to allow rapid iteration on the control algorithms. This simulator replicates the way in which Volvox behave in response to light, and was designed such that all code developed was also suitable to run on the DOME platform. Volvox agents were modelled based on three assumptions from observation and the literature: • Agent velocity is reduced for a period of time upon coming into contact with light. • After a period of time in contact with light, agent velocity recovers. • The durations of these two time periods vary from agent to agent. The simulated agents follow a straight line, with some probability of changing direction, to replicate the randomness of Volvox movement. In both the simulator and the real-world experiments, the passage of time was measured in elapsed camera frames, allowing an otherwise continuous quantity to be discretised. Since each Volvox reacts to light in a different way, there exists a pair of values for the number of frames with light on, f_on, and the number of frames with light off, f_off, that, if repeated continuously, will keep the agent at its minimum speed. To model this light-responsive behaviour of the Volvox, a light accumulator model was developed. Each emulated agent had two local variables, storing the amount of light (a_L) and darkness (a_D) received. These variables were bounded between 1 and 20, and increased exponentially with every new frame of light or darkness.
The exponential increase reflects the fact that with each frame of light an agent receives, its capacity to absorb further light grows, because it adapts to its new illumination environment. If either of these variables exceeded the maximum value, the agent no longer reacted to that impulse. In the following equations, t_L denotes the number of consecutive frames of light, and t_D the number of consecutive frames without light:

a_L(t_L) = e^(λ_L · t_L) , a_D(t_D) = e^(λ_D · t_D) .

The parameters λ_L and λ_D indicate the rate at which an agent stops reacting to light or darkness, respectively. They were calculated from the number of on and off frames required for the agent to stop:

λ_L = ln(20) / f_on , λ_D = ln(20) / f_off .

The code used for this simulator is publicly available online at bitbucket.org/hauertlab .
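The accumulator model above can be sketched as follows. Clamping the accumulators to the ceiling of 20 with `min` is an assumed reading of "bounded between the values of 1 and 20"; the example frame counts are illustrative:

```python
import math

CAP = 20.0  # accumulator ceiling: beyond this, the agent stops reacting

def response_rates(f_on, f_off):
    """Rates at which a simulated agent adapts to light and darkness,
    chosen so the accumulator hits CAP after exactly f_on (resp. f_off)
    frames: exp(lambda * f) = 20  =>  lambda = ln(20) / f."""
    return math.log(20) / f_on, math.log(20) / f_off

def accumulators(t_L, t_D, lam_L, lam_D):
    """Light/darkness accumulators a_L, a_D after t_L frames of light and
    t_D frames of darkness, each bounded in [1, CAP]."""
    a_L = min(math.exp(lam_L * t_L), CAP)
    a_D = min(math.exp(lam_D * t_D), CAP)
    return a_L, a_D

# Example: an agent needing 5 frames of light and 8 of darkness to adapt
# saturates its light accumulator exactly at t_L = 5.
lam_L, lam_D = response_rates(5, 8)
a_L, a_D = accumulators(5, 0, lam_L, lam_D)
# a_L is ~20 (saturated, up to floating point); a_D = exp(0) = 1 (floor)
```

With this model, an f_on/f_off cycle that keeps both accumulators below their ceiling continues to suppress the simulated agent's velocity, which is what the Q-learning loop searches for.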
The authors propose learning control strategies in real-time for agent collectives. They demonstrate the result of tabular Q-learning on a closed-loop Dynamic Optical Micro-Environment (DOME) platform to control the motion of light-responsive Volvox agents. Specifically, Q-learning allows learning how light may be projected onto Volvox algae to maximally reduce their velocity.
In this paper, the authors use machine learning to control the velocity/motion of a type of microorganism, the Volvox algae. The algae's velocity reacts to light in a non-trivial, adaptive way, and the authors use Q-learning to regulate it. Experiments are conducted both in simulation and in the real world. The authors demonstrate that after learning, the Q-table method can reduce the motion of these micro-agents until they are almost stationary, significantly better than baselines.
SP:142cae927407616fb57fc69c736a4d91ee1d58cd
Q-learning for real time control of heterogeneous microagent collectives
1 INTRODUCTION . The ability to control the behaviour of agents at the microscale or smaller such has implications across fields such as nanomedicine ( Hauert & Bhatia ( 2014 ) ) and environmental remediation ( Wang et al . ( 2019 ) ) , with possible agent types including micromotors , nanoparticles and bacterial cells . Exerting control at this scale remains challenging however , due in large part to the simplicity and limited programmability of typical microagents . In this work , an external optical control scheme is used control the microagents , here Volvox algae . Machine learning allows for the fine-tuning of the control to each individual Volvox in real-time . Through this , individual models can be learnt that enable optimal motion control , in this case learning how to alternate illumination and relaxation periods to stop the motion of individual Volvox . Light is a powerful tool at the microscale , capable of forming and breaking bonds [ Chen et al . ( 2018 ) ] , powering micromotors ( Palagi et al . ( 2019 ) ) , and interacting with light sensitive organisms ( Jékely et al . ( 2008 ) ) . Furthermore , the use of spatially structured light offers interaction with agents independently and in parallel ( Palagi et al . ( 2019 ) ) , making it particularly well suited to the control of collective systems ( Mukherjee et al . ( 2018 ) ; Izquierdo et al . ( 2018 ) ; Schmidt et al . ( 2019 ) ; Deng et al . ( 2018 ) ) . The dynamic nature of light also means that it can be combined with Q-learning to produce rapid and effective and closed-loop control outcomes . This was demonstrated by MuiñosLandin et al. , with the use of tabular Q-learning on self-thermophoretic microswimmers to achieve navigation in a noisy , grid-like environment ( Muiños-Landin et al . ( 2021 ) ) . 
The work presented here similarly uses tabular Q-learning to influence the dynamics of motile microscale agents using optical interactions , however in this case , each agent performs the learning independently , with significant heterogeneity present among the collective of agents owing to their biological nature . Furthermore , the learning and closed-loop optical control were here implemented on a low-cost , open source platform , demonstrating the power of this learning process even in instances with limited computational resources . Optical control is enacted using the open source DOME platform , a light-weight device which combines digital light projection with microscopy to image a microsystem in real time and provide closed-loop localised light patterning . Given the limited computing power of the DOME , which operates on a Raspberry Pi computer , this work provides an exploration of the potential for the application of Q-learning algorithms in low computational resource environments . Results show that tabular Q-learning allows us to learn how light may be projected onto Volvox algae in order to maximally reduce their velocity . The state and action space of a complex biological system is simplified so as to run tabular Q-learning experiment , and the learnt values for individual agents used to achieve herding behaviour in living algae . 2 METHODOLOGY . This section introduces the experimental setup for light-based control of Volvox , the simulation environment , and Q-learning methodology applied both in simulation and reality . 2.1 OPTICALLY CONTROLLING Volvox Volvox are a type of green microscopic algae that exhibit phototactic behaviour . They are multicellular organisms , with somatic cells that have flagella for locomotion and an eyespot for light perception . These cells allow the Volvox to move towards a light source ( Ueki et al . ( 2010 ) ) . 
This phototactic response is adaptive , meaning that when a Volvox comes into contact with light its speed is typically reduced for around 2s before adapting to the new light environment and recovering previous velocity ( Drescher et al . ( 2010 ) ) . In this work , the light response exhibited by Volvox is used as a means to regulate the velocity of individual agents by providing spatially localised illumination . To overcome the adaptive nature of the response , illumination must be provided intermittently rather than as a continuous stimuli . Qlearning is therefore applied as a means to determine the optimum cycle length of illumination and relaxation for each agent that results in the largest velocity reduction . The Volvox used here were acquired from Blades Biological UK and are of the species Volvox aureus . 2.2 THE DOME . The experimental part of this work was performed using the DOME ( Figure 1 ) , an open source platform for the study and engineering of microagent collectives through spatiotemporal illumination ( Denniss et al . ( 2020 ) ) . In this device , a closed-loop control scheme is established by linking a digital light processing unit to real time imaging and image analysis , enabling the optical microenvironment to be shaped around the evolving system dynamics . The DOME has a maximum projection resolution of 30×30 µm , and is thus well suited to illumination of individual Volvox agents , which are around 350–500 µm in diameter . 2.3 Q-LEARNING FOR Volvox CONTROL Due to inherent variability of living algae , in order to have an adaptable method of control of the Volvox , a reinforcement learning algorithm was required . Because the algae would be controlled using the DOME system , the learning algorithm could not be computationally expensive . 
Although this could have been circumnavigated by running the algorithm an external computer in communication with the DOME , this work aimed to explore the potential for implementing reinforcement learning in limited resource environments . Additionally , maintaining a self-contained computational set up allows for the possibility of operating the system in enclosed conditions , such as within an incubator for live cell study . For this reason , tabular Q-learning was chosen , instead of more flexible alternatives such as deep Q-learning . Tabular Q-learning has the restriction of needing a discrete action space and state space , but biological systems are inherently continuous . Due to this restriction , the action and state space were defined in a discrete way : The action space consisted on two actions , either to illuminate the Volvox , or not . The state space needed to represent the amount of light that a Volvox had received . The Volvox ’ s speed is affected by the amount of light and darkness received . If the state space could be continuous , it would be defined by the amount of time ( in milliseconds ) that the agent had been illuminated and non-illuminated . Instead of measuring milliseconds , the measurement was discretised using the amount of frames . Since the number of states had to be finite , the number of frames of light or darkness could not grow infinitely . However , observation showed that after 10 frames of either illumination or darkness , the agent ’ s behaviour did not change anymore . Because of this , if an agent hasn ’ t had a change in illumination for over 10 frames , it will be in the same state as if it had had the same illumination for 10 frames . A state was therefore defined by the number of frames for which the agent was subjected to light , the number of frames for which it was not subjected to light , and the present light value . 
The present light value was necessary to distinguish between a state that had light on , then light off , and a state that had light off , then light on . Using this method , the total number of states was 242 , which was the combination of possible frames on ( fon , between 0 and 10 ) and frames off ( fo f f , between 0 and 10 ) and the light value ( l , either ON or OFF ) . Note that testing all combinations would not be possible in real time , as each trial is a real-world experiment involving a Volvox reaction . Table 1 shows some examples of states with their descriptions . The state index S ( fon , o f f , l ) was calculated as follows S ( fon , fo f f , l ) = fon +11∗ fo f f +121∗ l. ( 1 ) The reward for each state was calculated based on the agent ’ s velocity ( v ) and acceleration ( a ) at that state . Since the goal was to minimize the magnitude of the velocity , rewards were given for agents with their velocity below a threshold , while accelerating agents were penalized . The direction of acceleration and velocity were not considered . Furthermore , because we wanted to minimize the number of light transitions ( from on to off and vice-versa ) , states that had more frames on and off would have higher rewards . The reward function R ( v , a , fon , fo f f ) was defined as R ( v , a , fon , fo f f ) = { fon + fo f f |v| < 0.05 −5 |v| ≥ 0.05 and a > 0 −1 |v| ≥ 0.05 } . ( 2 ) At each step , the possible actions that could be performed were to turn the light either on or off for each agent , giving an action space of size 2 . Given this , the Q-table was initialized as an empty matrix with NumStates= 242 and NumActions= 2 , where each cell encodes the quality of choosing that action for that state . The Q-table was updated at each step of the learning as the agent explored the environment and different possible states . 
The Q-learning algorithm stored the previously chosen action (action) and the previous state (s), so as to perform the Q-value update at the next iteration. 2.3.1 VOLVOX SIMULATOR . The proposed learning methodology was refined in simulation before use in reality. To this end, an agent-based Volvox simulator was built in Python to allow rapid iteration on the control algorithms. This simulator replicates the way in which Volvox behave in response to light, and was designed such that all code developed for it was also suitable to run on the DOME platform. Volvox agents were modelled on three assumptions drawn from observation and the literature: • Agent velocity is reduced for a period of time after coming into contact with light. • After a period of time in contact with light, agent velocity recovers. • The durations of these two periods vary from agent to agent. The simulated agents follow a straight line, with some probability of changing direction, to replicate the randomness of Volvox movement. In both the simulator and the real-world experiments, the passage of time was measured in elapsed camera frames, allowing an otherwise continuous quantity to be discretised. Since each Volvox reacts to light in a different way, there exists a pair of values for the number of frames with light on, f_on, and the number of frames with light off, f_off, that, if repeated continuously, will keep the agent at its minimum speed. To model this light-responsive behaviour of the Volvox, a light-accumulator model was developed. Each emulated agent had two local variables storing the accumulated amounts of light (a_L) and darkness (a_D). These variables were bounded between the values of 1 and 20, and increased exponentially with every new frame of light or darkness.
The exponential increase reflects the fact that with each frame of light an agent receives, it gains more capacity to absorb light, because it adapts to its new illumination environment. If either variable exceeds the maximum value, the agent no longer reacts to that stimulus. In the following equations, t_L denotes the number of consecutive frames of light and t_D the number of consecutive frames without light: a_L(t_L) = e^(λ_L * t_L), a_D(t_D) = e^(λ_D * t_D). The parameters λ_L and λ_D indicate the rate at which an agent stops reacting to light or darkness, respectively. These values were calculated from the number of on and off frames required for the agent to stop: λ_L = ln(20) / f_on, λ_D = ln(20) / f_off. The code used for this simulator is publicly available online at bitbucket.org/hauertlab .
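The accumulator model above can be sketched as follows. This is an illustrative sketch of the equations only; names such as `decay_rate` are ours, and the paper's actual simulator code may differ.

```python
import math

MAX_ACC = 20.0  # above this value the agent no longer reacts to the stimulus


def decay_rate(n_frames):
    """Saturation rate lambda = ln(20) / n_frames, so the accumulator
    reaches its maximum exactly after n_frames of constant stimulus."""
    return math.log(20) / n_frames


def accumulator(rate, t):
    """Exponential accumulator a(t) = e^(lambda * t), bounded to [1, 20]."""
    return min(max(math.exp(rate * t), 1.0), MAX_ACC)
```

For example, an agent whose velocity recovers after 5 frames of light has λ_L = ln(20)/5, and its light accumulator a_L saturates at 20 exactly at t_L = 5.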
The given work discusses the use of Q-learning to control the motion of a light-responsive Volvox agent. The authors also develop a simulation environment, providing an empirical estimate of the dynamics of the system, and evaluate on both single- and multi-agent control. The proposed tabular Q-learning agent was able to achieve superior performance (i.e., slower speed in this setup) compared to the proposed baselines.
SimVLM: Simple Visual Language Model Pretraining with Weak Supervision
1 INTRODUCTION . Self-supervised textual representation learning (Devlin et al., 2018; Radford et al., 2018; 2019; Liu et al., 2019; Yang et al., 2019; Raffel et al., 2019; Brown et al., 2020) based on Transformers (Vaswani et al., 2017) has pushed the state of the art on a wide range of natural language processing (NLP) tasks (Rajpurkar et al., 2016; Wang et al., 2018; Sarlin et al., 2020). One successful approach is to first pretrain the model (e.g., BERT) on large-scale unlabeled text corpora using the masked language modeling (MLM) objective (Devlin et al., 2018), followed by finetuning on downstream tasks. While this pretraining-finetuning paradigm has been widely adopted, recent work on autoregressive language models (LM) (Radford et al., 2019; Brown et al., 2020) such as GPT-3 has shown strong performance without finetuning by utilizing few-shot prompts (Liu et al., 2021), suggesting that text-guided zero-shot generalization is a promising alternative. Motivated by the success of textual representation pretraining, various efforts have been made to build the multi-modal (visual and textual) counterpart. A line of work (Tan & Bansal, 2019; Lu et al., 2019; Li et al., 2019; Chen et al., 2020b; Li et al., 2020; Su et al., 2020; Zhang et al., 2021) has explored vision-language pretraining (VLP), which learns a joint representation of both modalities to be finetuned on vision-language (VL) benchmarks such as visual question answering (VQA) (Goyal et al., 2017). To capture the alignment between images and text, previous methods have extensively exploited two types of human-labeled datasets from multiple sources, typically in the following steps. First, object detection datasets are used to train a supervised object detector (OD), which allows extracting region-of-interest (ROI) features from images.
Next, datasets with aligned image-text pairs are used for MLM pretraining of a fusion model that usually takes as input the concatenation of the extracted ROI features and the paired text. In addition, due to the limited scale of human-annotated data, various task-specific auxiliary losses have been introduced to improve performance. These design choices complicate the pretraining protocol of VLP, creating a bottleneck for further quality improvement. What is more, such pretraining-finetuning based approaches usually lack zero-shot capability, just like their language counterparts. In comparison, another line of work (Radford et al., 2021; Ramesh et al., 2021; Jia et al., 2021) utilizes weakly labeled/aligned data crawled from the web to perform pretraining, achieving good performance and certain zero-shot learning capability on image classification and image-text retrieval. Nonetheless, these methods mainly focus on the specific tasks under consideration and thus may not serve as generic pretraining-finetuning representations for VL benchmarks. In light of these disadvantages of the existing techniques, we are interested in building a VLP model that: (1) can be seamlessly plugged into the pretraining-finetuning paradigm and achieve competitive performance on standard VL benchmarks; (2) does not require a complicated pretraining protocol as in previous methods; and (3) has the potential for text-guided zero-shot generalization in cross-modal settings. To this end, we propose SimVLM, standing for Simple Visual Language Model, which significantly simplifies VLP by solely exploiting language modeling objectives on weakly aligned image-text pairs (Jia et al., 2021). In a nutshell, SimVLM consists of the following components: • Objective.
It is trained end-to-end from scratch with a single objective of Prefix Language Modeling (PrefixLM), which can not only naturally perform text generation like GPT-3, but also process contextual information bidirectionally as BERT does. • Architecture. The framework employs ViT/CoAtNet (Dosovitskiy et al., 2021; Dai et al., 2021) and directly takes raw images as inputs. These models fit the large-scale data and are readily compatible with the PrefixLM objective. • Data. These setups relieve the requirement for object detection and allow the model to utilize large-scale weakly labeled datasets, which have better potential for zero-shot generalization. Not only is SimVLM simpler, requiring neither object detection pretraining nor auxiliary losses, but it also obtains better performance than previous work. Empirically, SimVLM consistently outperforms existing VLP models and achieves new state-of-the-art results on 6 VL benchmarks without additional data or task-specific customization. Besides, it acquires stronger generalization in visual-language understanding that empowers zero-shot image captioning and open-ended VQA. In particular, SimVLM learns a unified multimodal representation that enables zero-shot cross-modality transfer, where the model is finetuned on text-only data and directly evaluated on image-and-text test examples without further training. Our results suggest that generative VLP can not only match existing MLM-based methods on VL benchmarks but also demonstrate promising zero-shot potential. 2 RELATED WORK . Recent years have seen rapid progress in vision-language pretraining (Uppal et al., 2020; Han et al., 2021; Khan et al., 2021). While a variety of approaches have been proposed, a large portion of them require object detection for image region feature regression or tagging as part of the pretraining objectives (Tan & Bansal, 2019; Su et al., 2020; Li et al.
, 2019; Chen et al., 2020b; Gan et al., 2020; Li et al., 2020; Yu et al., 2021; Li et al., 2021; Zhang et al., 2021; Hu et al., 2021; Cho et al., 2021). These methods rely on a strong object detection model like Fast(er) R-CNN (Ren et al., 2015), which is often trained on human-annotated datasets like Visual Genome (Krishna et al., 2016). Requiring such labeled training data increases the cost of building the training pipeline and makes the approach less scalable. Some recent efforts have also explored VLP without an object detection module (Xu et al., 2021; Kim et al., 2021; Huang et al., 2021), but they only use clean pretraining data at small scale, and thus their zero-shot capability is limited. On the other hand, multiple cross-modality loss functions have been proposed as part of the training objectives, for example image-text matching (Tan & Bansal, 2019; Lu et al., 2019; Xu et al., 2021), masked region classification/feature regression (Tan & Bansal, 2019; Chen et al., 2020b), object attribute prediction (Xu et al., 2021), contrastive losses (Li et al., 2020; 2021), word-region alignment (Chen et al., 2020b), and word-patch alignment (Kim et al., 2021). These are often mixed with other objectives, including image caption generation and masked language modeling, to form compound pretraining losses. This creates the challenge of balancing the different losses and datasets, and thus complicates the optimization procedure. Our work, by contrast, follows a minimalist approach that takes raw image inputs and makes use of only the language modeling loss, without resorting to auxiliary models like Faster R-CNN for image region detection. Motivated by recent works (Radford et al., 2021; Ramesh et al., 2021; Jia et al., 2021) that illustrate zero-shot learning in certain image-text tasks, we train our model using large-scale weakly labeled data only.
While concurrent work (Shen et al., 2021) has explored building on top of models pretrained with such datasets, we focus on pretraining from scratch to explore the limits of generative VLP. 3 SIMVLM . 3.1 BACKGROUND . Bidirectional Masked Language Modeling (MLM) has been one of the most popular self-supervised training objectives for textual representation learning. As demonstrated by BERT (Devlin et al., 2018), it is based on the idea of a denoising autoencoder: the model is trained to recover corrupted tokens in a document. Specifically, given a text sequence $\mathbf{x}$, a subset of tokens $\mathbf{x}_m$ is randomly sampled and a corrupted sequence $\mathbf{x}_{\backslash m}$ is constructed by replacing the tokens in $\mathbf{x}_m$ with a special [MASK] token. The training objective is to reconstruct $\mathbf{x}_m$ from the context $\mathbf{x}_{\backslash m}$ by minimizing the negative log-likelihood: $\mathcal{L}_{\text{MLM}}(\theta) = -\mathbb{E}_{\mathbf{x} \sim D}\left[\log P_\theta(\mathbf{x}_m \mid \mathbf{x}_{\backslash m})\right]$, (1) where $\theta$ denotes the trainable parameters of the model and $D$ is the pretraining data. This approach learns contextualized representations that can be further finetuned for downstream tasks. MLM-style pretraining has been widely adopted in previous VLP models, whereby the input is an image-text pair and the model needs to predict masked tokens by leveraging image ROI features. Alternatively, unidirectional Language Modeling (LM) trains the model to directly maximize the likelihood of the sequence $\mathbf{x}$ under the forward autoregressive factorization: $\mathcal{L}_{\text{LM}}(\theta) = -\mathbb{E}_{\mathbf{x} \sim D}\left[\log P_\theta(\mathbf{x})\right] = -\mathbb{E}_{\mathbf{x} \sim D}\left[\sum_{t=1}^{T} \log P_\theta(x_t \mid \mathbf{x}_{<t})\right]$. (2) Compared with MLM, LM pretraining has also been shown to be highly effective for multiple NLP tasks (Radford et al., 2018). More importantly, it equips the model with a strong generation capability that enables text-induced zero-shot generalization without finetuning (Brown et al., 2020). While MLM has become the de facto approach in the VLP models reviewed above, the generative LM has been understudied.
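The MLM corruption step of Eq. (1) can be illustrated with a small Python sketch. This is a generic illustration, not the authors' code; practical details such as BERT's 80/10/10 mask/replace/keep scheme and sub-word tokenization are omitted.

```python
import random


def corrupt(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Sample a random subset x_m of tokens and replace them with [MASK].

    Returns the corrupted sequence and the (position, token) targets
    that the model must reconstruct, as in Eq. (1).
    """
    rng = random.Random(seed)
    corrupted, targets = list(tokens), []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted[i] = mask_token
            targets.append((i, tok))
    return corrupted, targets
```

The LM objective of Eq. (2) needs no corruption at all: every token is a target, predicted from the tokens before it, which is what makes left-to-right generation (and hence prompting) natural for LM-pretrained models.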
3.2 PROPOSED OBJECTIVE : PREFIX LANGUAGE MODELING . Motivated by the zero-shot capability introduced by pretraining with the LM loss, we propose to pretrain vision-language representations using Prefix Language Modeling (PrefixLM). PrefixLM differs from the standard LM in that it enables bidirectional attention on the prefix sequence (e.g., $\mathbf{x}_{<T_p}$ in Eq. (3)), and conducts autoregressive factorization only on the remaining tokens (e.g., $\mathbf{x}_{\geq T_p}$ in Eq. (3)). During pretraining, a prefix of (randomly selected) length $T_p$ is truncated from the input sequence and the training objective becomes: $\mathcal{L}_{\text{PrefixLM}}(\theta) = -\mathbb{E}_{\mathbf{x} \sim D}\left[\log P_\theta(\mathbf{x}_{\geq T_p} \mid \mathbf{x}_{<T_p})\right] = -\mathbb{E}_{\mathbf{x} \sim D}\left[\sum_{t=T_p}^{T} \log P_\theta(x_t \mid \mathbf{x}_{[T_p, t)}, \mathbf{x}_{<T_p})\right]$. (3) Intuitively, images can be considered prefixes for their textual descriptions, as they often appear before the text in a web document. Therefore, for a given image-text pair, we prepend the image feature sequence of length $T_i$ to the text sequence, and enforce the model to sample a prefix of length $T_p \geq T_i$ so that the LM loss is calculated on text data only (an example is shown in Figure 1). Compared to prior MLM-style VLP methods, our PrefixLM model under the sequence-to-sequence framework not only enjoys the bidirectional contextualized representation of MLM, but can also perform text generation similar to LM.
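The attention pattern implied by PrefixLM can be sketched as a boolean mask: positions in the prefix (image features plus the sampled text prefix) attend bidirectionally among themselves, while the remaining positions attend causally. This is an illustrative toy, not the authors' implementation.

```python
def prefix_lm_mask(seq_len, prefix_len):
    """Boolean attention mask for PrefixLM.

    mask[q][k] is True when position q may attend to position k:
    - prefix keys (k < prefix_len) are visible to every query
      (bidirectional, as in MLM);
    - the remaining tokens attend causally (k <= q), as in a standard LM.
    """
    return [[k < prefix_len or k <= q for k in range(seq_len)]
            for q in range(seq_len)]
```

Setting `prefix_len = 0` recovers the purely causal mask of Eq. (2), while `prefix_len = seq_len` recovers fully bidirectional attention, which is one way to see PrefixLM as interpolating between LM and MLM-style contextualization.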
This paper proposes a Prefix Language Modeling (PrefixLM) objective as a pretraining procedure for multiple vision-language downstream tasks and zero-shot evaluations. The authors argue that it successfully replaces the masked language model (MLM). This work follows the weak-supervision setting of ALIGN (Jia et al., 2021), using noisy image captions and holistic labels. Surprisingly, this simple pretraining approach achieves a new state of the art on "a wide range of discriminative and generative vision-language benchmarks," and shows "strong generalization and transfer ability" in zero-shot settings.
The paper proposes to pre-train a generative language model, conditioned on a visual input, on billion-scale web image-text data. Such a model can then be transferred to various vision-and-language tasks with ease. SimVLM establishes a new SotA on several tasks and shows promising zero-shot capacity in certain tasks.
SP:633b0833507e5b36e8f9fe8319ea5a30538b4cc4
SimVLM: Simple Visual Language Model Pretraining with Weak Supervision
1 INTRODUCTION . Self-supervised textual representation learning ( Devlin et al. , 2018 ; Radford et al. , 2018 ; 2019 ; Liu et al. , 2019 ; Yang et al. , 2019 ; Raffel et al. , 2019 ; Brown et al. , 2020 ) based on Transformers ( Vaswani et al. , 2017 ) has pushed the state of the art on a wide range of natural language processing ( NLP ) tasks ( Rajpurkar et al. , 2016 ; Wang et al. , 2018 ; Sarlin et al. , 2020 ) . One successful approach is to first pretrain the model ( e.g . BERT ) on large-scale unlabled text corpora using masked language modeling ( MLM ) objective ( Devlin et al. , 2018 ) , followed by finetuning on downstream tasks . While this pretraining-finetuning paradigm has been widely adopted , recent work on autoregressive language models ( LM ) ( Radford et al. , 2019 ; Brown et al. , 2020 ) such as GPT-3 has shown strong performance without finetuning by utilizing few-shot prompts ( Liu et al. , 2021 ) , suggesting the text guided zero-shot generalization is a promising alternative . Motivated by the success of textual representation pretraining , various efforts have been made to build the multi-modal ( visual and textual ) counterpart . A line of work ( Tan & Bansal , 2019 ; Lu et al. , 2019 ; Li et al. , 2019 ; Chen et al. , 2020b ; Li et al. , 2020 ; Su et al. , 2020 ; Zhang et al. , 2021 ) has explored vision-language pretraining ( VLP ) that learns a joint representation of both modalities to be finetuned on vision-language ( VL ) benchmarks , such as visual question answering ( VQA ) ( Goyal et al. , 2017 ) . In order to capture the alignment between images and text , previous methods have extensively exploited two types of human-labeled datasets from multiple sources , which typically consist of the following steps . Firstly , object detection datasets are used to train a supervised object detector ( OD ) which allows further extracting region-of-interest ( ROI ) features from images . 
Next , datasets with aligned image-text pairs are used for MLM pretraining of a fusion model that usually takes as input the concatenation of the extracted ROI features and the paired text . In addition , due to the limited scale of human annotated data , various task-specific auxiliary losses have been introduced in order to improve performance . These design choices complicate the pretraining protocol of VLP , creating a bottleneck for further quality improvement . What is more , such pretraining-finetuning based approaches usually lack the zero-shot capability , just like their lan- guage counterparts . In comparison , another line of work ( Radford et al. , 2021 ; Ramesh et al. , 2021 ; Jia et al. , 2021 ) utilizes weakly labeled/aligned data crawled from the web to perform pretraining , achieving good performance and certain zero-shot learning capability on image classification and image-text retrieval . Nonetheless , these methods mainly focus on specific tasks of consideration and thus may not serve as a generic pretraining-finetuning representation for VL benchmarks . In light of these disadvantages of the existing techniques , we are interested in building a VLP model that : ( 1 ) can be seamlessly plugged into the pretraining-finetuning paradigm and achieve competitive performance on standard VL benchmarks ; ( 2 ) does not require a complicated pretraining protocol as in previous methods ; and ( 3 ) has the potential towards text guided zero-shot generalization in cross-modal settings . To this end , we propose SimVLM , standing for Simple Visual Language Model , which significantly simplifies VLP by solely exploiting language modeling objectives on weakly aligned image-text pairs ( Jia et al. , 2021 ) . In a nutshell , SimVLM consists of the following components : • Objective . 
It is trained end-to-end from scratch with a single objective of Prefix Language Modeling ( PrefixLM ) , which can not only naturally perform text generation as GPT-3 , but also process contextual information in a bidirectional manner as BERT does . • Architecture . The framework employs ViT/CoAtNet ( Dosovitskiy et al. , 2021 ; Dai et al. , 2021 ) and directly takes raw images as inputs . These models can also fit the large-scale data and are readily compatible with the PrefixLM objective . • Data . These setups relieve the requirement for object detection and allow the model to utilize the large-scale weakly labeled dataset , which has better potential towards zero-shot generalization . Not only is SimVLM simpler , requiring neither object detection pretraining nor auxiliary losses , but it also obtains better performance than previous work . Empirically , SimVLM consistently outperforms existing VLP models and achieves new state-of-the-art results on 6 VL benchmarks without additional data nor task-specific customization . Besides , it acquires stronger generalization in visual-language understanding that empowers zero-shot image captioning and open-ended VQA . In particular , SimVLM learns unified multimodal representation that enables zero-shot cross-modality transfer , where the model is finetuned on text-only data and directly evaluated on image-and-text test examples without further training . Our results suggest that generative VLP can not only match existing MLM-based methods on VL benchmarks but also demonstrate promising zero-shot potential . 2 RELATED WORK . Recent years have seen a rapid progress made in vision-language pretraining ( Uppal et al. , 2020 ; Han et al. , 2021 ; Khan et al. , 2021 ) . While a variety of approaches have been proposed , a large portion of them require object detection for image region feature regression or tagging as part of the pre-training objectives ( Tan & Bansal , 2019 ; Su et al. , 2020 ; Li et al. 
, 2019; Chen et al., 2020b; Gan et al., 2020; Li et al., 2020; Yu et al., 2021; Li et al., 2021; Zhang et al., 2021; Hu et al., 2021; Cho et al., 2021). These methods rely on a strong object detection model like Fast(er) R-CNN (Ren et al., 2015), which is often trained on human-annotated datasets like Visual Genome (Krishna et al., 2016). Using such labeled training data as a prerequisite increases the cost of building the training pipeline and makes the approach less scalable. Some recent efforts have also explored VLP without an object detection module (Xu et al., 2021; Kim et al., 2021; Huang et al., 2021), but they only use clean pretraining data at small scales and thus their zero-shot capability is limited. On the other hand, multiple cross-modality loss functions have been proposed as part of the training objectives, for example image-text matching (Tan & Bansal, 2019; Lu et al., 2019; Xu et al., 2021), masked region classification/feature regression (Tan & Bansal, 2019; Chen et al., 2020b), object attribute prediction (Xu et al., 2021), contrastive loss (Li et al., 2020; 2021), word-region alignment (Chen et al., 2020b) and word-patch alignment (Kim et al., 2021). They are often mixed with other objectives, including image caption generation and masked language modeling, to form compound pretraining losses. This creates the challenge of balancing different losses and datasets, and thus complicates the optimization procedure. Our work, by contrast, follows a minimalist approach that takes raw image inputs and makes use of only the language modeling loss, without resorting to auxiliary models like Faster R-CNN for image region detection. Motivated by recent works (Radford et al., 2021; Ramesh et al., 2021; Jia et al., 2021) that illustrate zero-shot learning in certain image-text tasks, we train our model using large-scale weakly labeled data only.
While concurrent work (Shen et al., 2021) has explored building on top of models pretrained with such datasets, we focus on pretraining from scratch to explore the limits of generative VLP. 3 SIMVLM. 3.1 BACKGROUND. Bidirectional Masked Language Modeling (MLM) has been one of the most popular self-supervised training objectives for textual representation learning. As demonstrated by BERT (Devlin et al., 2018), it is based on the idea of a denoising autoencoder, such that the model is trained to recover the corrupted tokens in a document. Specifically, given a text sequence $x$, a subset of tokens $x_m$ is randomly sampled and a corrupted sequence $x_{\backslash m}$ is constructed by replacing tokens in $x_m$ with a special [MASK] token. The training objective is to reconstruct $x_m$ from the context $x_{\backslash m}$ by minimizing the negative log-likelihood: $\mathcal{L}_{\mathrm{MLM}}(\theta) = -\mathbb{E}_{x \sim D}\left[\log P_\theta(x_m \mid x_{\backslash m})\right]$, (1) where $\theta$ denotes the trainable parameters of the model and $D$ is the pretraining data. This approach learns contextualized representations that can be further finetuned for downstream tasks. MLM-style pretraining has been widely adopted in previous VLP models, whereby the input is an image-text pair and the model needs to predict masked tokens by leveraging image ROI features. Alternatively, unidirectional Language Modeling (LM) trains the model to directly maximize the likelihood of the sequence $x$ under the forward autoregressive factorization: $\mathcal{L}_{\mathrm{LM}}(\theta) = -\mathbb{E}_{x \sim D}\left[\log P_\theta(x)\right] = -\mathbb{E}_{x \sim D}\left[\sum_{t=1}^{T} \log P_\theta(x_t \mid x_{<t})\right]$. (2) Compared with MLM, LM pretraining has also been shown to be highly effective for multiple NLP tasks (Radford et al., 2018). More importantly, it endows the model with strong generation capability that enables text-induced zero-shot generalization without finetuning (Brown et al., 2020). While MLM has become the de facto approach in the VLP models reviewed above, the generative LM has been understudied.
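As a concrete illustration of the difference between Eq. (1) and Eq. (2), the minimal sketch below computes both losses from per-position token logits over a toy vocabulary. The model producing the logits is left abstract; the function names and shapes are our own illustrative assumptions, not from the paper:

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def lm_loss(logits, tokens):
    """Eq. (2): -sum_t log P_theta(x_t | x_<t).

    logits[t] is assumed to come from a causal model that saw x_<t only."""
    logp = log_softmax(logits)                          # shape (T, V)
    return -logp[np.arange(len(tokens)), tokens].sum()

def mlm_loss(logits, tokens, masked_positions):
    """Eq. (1): sum only over masked positions, conditioning on the rest."""
    logp = log_softmax(logits)
    return -logp[masked_positions, tokens[masked_positions]].sum()
```

The LM loss sums over every position of the sequence, while the MLM loss touches only the sampled masked subset; this is why LM-style pretraining sees a denser training signal per sequence.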
3.2 PROPOSED OBJECTIVE: PREFIX LANGUAGE MODELING. Motivated by the zero-shot capability introduced by pretraining with the LM loss, we propose to pretrain vision-language representations using Prefix Language Modeling (PrefixLM). PrefixLM differs from the standard LM in that it enables bidirectional attention on the prefix sequence (e.g., $x_{<T_p}$ in Eq. (3)), and only conducts autoregressive factorization on the remaining tokens (e.g., $x_{\geq T_p}$ in Eq. (3)). During pretraining, a prefix sequence of tokens of (a randomly selected) length $T_p$ is truncated from the input sequence and the training objective becomes: $\mathcal{L}_{\mathrm{PrefixLM}}(\theta) = -\mathbb{E}_{x \sim D}\left[\log P_\theta(x_{\geq T_p} \mid x_{<T_p})\right] = -\mathbb{E}_{x \sim D}\left[\sum_{t=T_p}^{T} \log P_\theta(x_t \mid x_{[T_p, t)}, x_{<T_p})\right]$. (3) Intuitively, images can be considered as a prefix for their textual descriptions, as they often appear before the text in a web document. Therefore, for a given image-text pair, we prepend an image feature sequence of length $T_i$ to the text sequence, and enforce the model to sample a prefix of length $T_p \geq T_i$ so as to calculate the LM loss on text data only (an example is shown in Figure 1). Compared to prior MLM-style VLP methods, our PrefixLM model under the sequence-to-sequence framework not only enjoys the bidirectional contextualized representation as in MLM, but can also perform text generation similar to LM.
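The attention pattern implied by Eq. (3), fully bidirectional over the prefix and causal over the remainder, can be sketched as a boolean mask. This is a minimal sketch; the function name and the True-means-visible convention are our own assumptions:

```python
import numpy as np

def prefix_lm_mask(T, Tp):
    """Boolean attention mask for PrefixLM: entry (i, j) is True when
    position j is visible to position i.

    Prefix tokens (j < Tp) are visible to every position (bidirectional);
    the remaining tokens are visible only causally (j <= i)."""
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    return (j < Tp) | (j <= i)
```

For an image-text pair, the sequence of length T would cover the $T_i$ image tokens followed by the text tokens, with the sampled prefix length $T_p \geq T_i$, so that the loss in Eq. (3) is computed on text positions only.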
This paper proposes a simple but effective image-text multimodal representation learning method that leverages a transformer-based encoder-decoder, using a simple prefix language model (PrefixLM) as the pretraining task on large-scale noisy image-text aligned data. As fine-tuning tasks, the authors evaluate their method (SimVLM) on VQA, NLVR2, SNLI-VE, COCO captioning, NoCaps, and Multi30k. With extensive experiments, this work presents promising few-shot and zero-shot performance results, outperforming previous models.
Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games
Motivated by recent advances in both the theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, we focus on min-max optimization in team zero-sum games. In this class of games, players are split into two teams, with payoffs equal within the same team and of opposite sign across the opponent team. Unlike the textbook two-player zero-sum games, finding a Nash equilibrium in our class can be shown to be CLS-hard, i.e., it is unlikely that a polynomial-time algorithm for computing Nash equilibria exists. Moreover, in this generalized framework, we establish that even asymptotic last-iterate or time-average convergence to a Nash equilibrium is not possible using Gradient Descent Ascent (GDA), its optimistic variant, or extra gradient. Specifically, we present a family of team games whose induced utility is non-multilinear, with non-attractive per-se mixed Nash equilibria that are strict saddle points of the underlying optimization landscape. Leveraging techniques from control theory, we complement these negative results by designing a modified GDA that converges locally to Nash equilibria. Finally, we discuss connections of our framework with AI architectures that have team competition structures, such as multi-agent generative adversarial networks. 1 INTRODUCTION. Team competition has played a central role in the development of Game Theory (Marschak, 1955; von Stengel & Koller, 1997; Bacharach, 1999; Gold, 2005), Economics (Marschak, 1955; Gottinger, 1974) and Evolutionary Biology (Nagylaki, 1993; Nowak et al., 2004); however, the behavior of the underlying dynamics within the teams is usually sidelined. Either for reasons of mathematical convenience or for bigger-picture understanding, "teams" in the literature are typically modeled as if they were unitary actors, i.e., single individuals, without unveiling the internal decision-making of the team members (see Kim et al.
(2019)). For instance, in the biology setting of the weak selection model (Nagylaki, 1993; Chastain et al., 2014; Mehta et al., 2015), species are modeled to compete as teams, while at the crux of the matter the genes of each species are the actual players and their alleles are the actions in the survival game. Similarly, it was the social-media collaboration of the Reddit retail trading crowd, acting as a team, that touched off last year's GameStop short-squeeze frenzy (Umar et al., 2021; Hasso et al., 2021), transforming the markets into a tug-of-war game against the team of Wall Street hedge funds. Recently, these intrinsic details behind the competition among teams have attracted renewed interest in the Machine Learning community, motivated by the advent of multi-agent systems that are used for generative tasks or for playing complex games like CTF (Jaderberg et al., 2019) or Starcraft (Vinyals et al., 2019). To win these kinds of games, self-training AI systems have to develop both collaborative attributes (coordination within each team) as well as contesting ones (competition across the teams). Moreover, following the complementary thread of multi-agent generative adversarial network research, the creation of a pool of efficient incumbent agents, either in generators (Arora et al., 2017; Hoang et al., 2017; 2018; Zhang et al., 2018; Tang, 2020) or discriminators (Hardy et al., 2019; Albuquerque et al., 2019), has been tested, providing significant statistical and computational benefits. In this direction, researchers strive to harness the efficacy of distributed processing, utilizing shallower networks that can nonetheless learn ever more diverse datasets.1
In order to shed some light on this persistent strain of research, the main premise of the theoretical scaffolding developed in this paper is that the "unitary two-player" min-max approach misses the critical component of collective strategy making within each competing team. Our class of games. In this regard, we turn our attention to Two-Team Zero-Sum games, proposed by Schulman & Vazirani (2019b), a quite general class of min-max optimization problems that includes bilinear games as well as a wide range of non-convex non-concave games. In this class, the players fall into two teams of sizes k1, k2 and submit their own probabilistic strategy vectors independently, akin to a general normal-form multi-player game. Following the econometric common-value assumption of Marschak (1955), what makes a group of players a team is that in any outcome the players of each team receive an identical payoff. Thus, to build some intuition, it is easy to see that if perfect coordination existed within each team, the interaction between the teams would merely be a zero-sum game between two "virtual" players. To streamline our presentation here, we defer the more precise description of our model to Section 2. Challenges behind Two-Team Zero-Sum games. In the archetypical case of two players, i.e., (k1 = k2 = 1), min-max strategies are typically thought of as the axiomatically correct predictions thanks to the seminal minmax theorem of Von Neumann (1928). Unfortunately, min-max optimization for the case k > 1 is a much more tenuous affair: Schulman & Vazirani (2019b) preclude the existence of a unique value by presenting a family of team games where min max ≠ max min, together with bounds on this duality gap, which quantifies exactly the effect of exchanging the order of strategy commitment, either between the teams or between the players thereof.
If defining the correct figure of merit for team games is rife with frustration, what is even more demanding is understanding what kind of algorithms/dynamics are able to solve this problem when a game-theoretically meaningful solution exists: Firstly, computing local Nash equilibria (NE) in general non-convex non-concave games is PPAD-complete (Daskalakis et al., 2009; 2021). Thus, all well-celebrated first-order methods, like gradient descent-ascent (Lin et al., 2020; Daskalakis & Panageas, 2019) and its optimistic (Popov, 1980; Daskalakis & Panageas, 2018; Mertikopoulos et al., 2019) and extra gradient (Korpelevich, 1976) variants, would require an exponential number of steps in the parameters of the problem to find an approximate NE under the Nemirovsky-Yudin (Nemirovskij & Yudin, 1983) oracle optimization model. Secondly, even if a regret notion could be defined, no-regret methodology is guaranteed to attract only to the set of coarse correlated equilibria (CCE) (Fudenberg, 1991; Hannan, 2016; Flokas et al., 2020; Giannou et al., 2021), a weaker notion that may be exclusively supported on strictly dominated strategies, even for simple symmetric two-player games (see also Viossat & Zapechelnyuk (2013)). While the aforementioned intractability failures for the general case of non-convex non-concave min-max problems provide significant insights, they cannot a fortiori answer the fundamental question, restricted to the model of Two-Team Zero-Sum games: Can we compute Nash equilibria in Two-Team Zero-Sum games, and ultimately are there first-order methods that converge to them under tangible guarantees? Our results.
To the best of our knowledge, the following contributions are the first results of their kind for the case of Two-Team Zero-Sum games: • For the computational complexity of approximate (possibly mixed) NE, we establish a sweeping negative result proving that the problem is CLS-hard (Theorem 3.1), i.e., it is at least as hard as finding a pure NE in a congestion game or finding approximate gradient descent fixed points. • From an optimization perspective, we settle these questions with a resounding "no" for all the well-known discrete gradient flow variations. Specifically, we present a simple family of two-team zero-sum games with two players per team where Projected-GDA, Optimistic-GDA, and Extra Gradient fail even to stabilize around a mixed NE when they are initialized nearby (Theorem 3.5). (1: Indeed, from the training perspective, it is computationally preferable to back-propagate through two equally sized neural networks with smaller capacity rather than through a giant single one that would be twice as deep (Tang, 2020).) Additionally, for GDA in non-degenerate team games with a unique mixed NE, one can acquire an even stronger result for any high-dimensional configuration of actions and players (Theorem 3.2). • In order to make some substantial headway under the burden of the above instability results, we shift our attention to adaptive-control generalizations of the celebrated washout filters, traditionally used for stabilizing the Dutch-roll motion of an aircraft during flight (Hassouneh et al., 2004; Grant & Reid, 1997). Inspired by this framework, we propose the modified KPV-GDA,2 which consists of a tandem combination of GDA together with a stabilizing feedback introduced by Bazanella et al. (1997).
$\text{state}^{(k+1)} = \text{state}^{(k)} + \eta\,\text{GDA}(\text{state}^{(k)}) + \eta K\,(\text{state}^{(k)} - \text{stress}^{(k)})$, $\text{stress}^{(k+1)} = \text{stress}^{(k)} + \eta P\,(\text{state}^{(k)} - \text{stress}^{(k)})$ (KPV-GDA) The main linchpin of the KPV-GDA method is the certainty equivalence principle of Simon and Theil (Simon, 1956; Theil, 1957), a widely used methodology in control theory for developing applied dynamic rational-expectations models. According to this principle, the feedback law is split into two optimization steps, whereby the K-step quickly attracts the state to the stress while the P-step converges slowly to the fixed points of GDA. Compared with the plethora of dynamics proposed for min-max problems, the crucial advantage of the technique described above is that it does not introduce any fixed points beyond those of GDA. In Section 2.2, we provide some illustrative examples of the KPV-GDA technique, while in Theorem 3.7 we prove the existence of such a control feedback for our class of games. • Finally, in Section 4 we provide a series of experiments on simple two-team zero-sum games showcasing both the messy behaviors of traditional methods like GDA and OGDA and the power of the KPV-GDA method in these optimization environments. Additionally, we show that multi-agent GAN architectures achieve better performance than single-agent ones, in terms of network capacity, when they are trained on synthetic or real-world datasets like CIFAR10. 2 PRELIMINARIES. 2.1 DEFINITIONS. Our setting. Formally, a two-team game in normal form is defined as a tuple $\Gamma = \Gamma(\mathcal{N}, \mathcal{A}, u)$ consisting of (i) a finite set of players $\mathcal{N}$, split into two teams A, B with $k_A$ and $k_B$ players respectively, such that $\mathcal{N} = \mathcal{N}_A \cup \mathcal{N}_B = \{A_1, \dots, A_{k_A}, B_1, \dots, B_{k_B}\}$; (ii) a finite set of actions (or pure strategies) $\mathcal{A}_i = \{\alpha_1, \dots, \alpha_{n_i}\}$ per player $i \in \mathcal{N}$; (iii) each team's payoff function $u_A, u_B : \mathcal{A} \to \mathbb{R}$, where $\mathcal{A} := \prod_i \mathcal{A}_i$ denotes the ensemble of all possible action profiles $\alpha = (\alpha_{A_1}, \dots$
$, \alpha_{A_{k_A}}, \alpha_{B_1}, \dots, \alpha_{B_{k_B}})$, while the individual utility of a player is identical to that of her teammates, i.e., $u_i = u_A$ and $u_j = u_B$ for all $(i, j) \in \mathcal{N}_A \times \mathcal{N}_B$. In this general context, players can also adopt mixed strategies, i.e., probability distributions $s_k \in \Delta(\mathcal{A}_k)$ over the pure strategies $\alpha_k \in \mathcal{A}_k$. Correspondingly, we define the product distributions $x = s_{A_1} \otimes \cdots \otimes s_{A_{k_A}}$, $y = s_{B_1} \otimes \cdots \otimes s_{B_{k_B}}$ as the teams' strategies. Collectively, we will write $\mathcal{X} := \prod_{i \in \mathcal{N}_A} \mathcal{X}_i = \prod_{i \in \mathcal{N}_A} \Delta(\mathcal{A}_i)$, $\mathcal{Y} := \prod_{i \in \mathcal{N}_B} \mathcal{Y}_i = \prod_{i \in \mathcal{N}_B} \Delta(\mathcal{A}_i)$ for the spaces of mixed strategy profiles of teams A, B. As with bilinear two-player games, the teams' utility functions can be expressed via the payoff tensors $A, B \in \mathbb{R}^\tau$ with $\tau = \prod_{i \in \mathcal{N}} |\mathcal{A}_i|$, and take the form:3 $u_A = A\,y\,x$ and $u_B = B\,y\,x$ (2.1) (2: The name "KPV-GDA" is an initialism for the (K, P)-Vaned Gradient Descent Ascent method; just like the tail section of an aircraft, where the vanes are flight control surfaces that control unstable yaw, the (K, P) control feedback aims to stabilize the unstable arrows of the gradient flow around a mixed NE.) (3: Figuratively, the latter form denotes the tensor contraction $A\,y\,x = \sum_{i,\dots,j,k,\dots,l} x_{i,\dots,j}\, A_{i,\dots,j,k,\dots,l}\, y_{k,\dots,l}$. If x, y have shapes (i, j) and (k, l), this is equivalent to u = einsum('ijkl,ij,kl', A, x, y).) In terms of solutions, we focus on the per-player Nash equilibrium (NE), i.e., a strategy profile $s^* = (x, y) = ((s^*_{A_1}, \dots, s^*_{A_{k_A}}), (s^*_{B_1}, \dots, s^*_{B_{k_B}}))$ such that $u_i(s^*) \geq u_i(s_i; s^*_{-i})$ for all $s_i \in \Delta(\mathcal{A}_i)$ and all $i \in \mathcal{N}$. (NE) The strategy profile $s^*$ is called pure if every player on both teams chooses a single action; otherwise we say that it is mixed. Finally, a two-team game is called two-team zero-sum if $u_A = -u_B$, or equivalently $A + B = O$. Remark 2.1.
As a technical prerequisite for the rest of this work, we will assume that a succinct representation of the utility tensors of the game is available, or equivalently that a payoff oracle efficiently provides both the value of the utility function and its derivatives for a specific input, which is consistent with the vast majority of the applications described in the literature (von Stengel & Koller, 1997). A first approach to computing Nash equilibria in Two-Team Zero-Sum games. Given the existence of the duality gap between min max and max min, unlike in the two-player zero-sum case, an equilibrium in our setting cannot be computed via linear programming. Toward the goal of computing Nash equilibria in two-team zero-sum games, we have experimented with a selection of first-order methods that have been utilized with varying success in the two-person zero-sum setting. Namely, we analyze the following methods: i) Gradient Descent-Ascent, ii) Optimistic Gradient Descent-Ascent, iii) Extra Gradient Method, iv) Optimistic Multiplicative Weights Update Method. We defer their precise definitions to Appendix B. The following remark will play a key role in the sequel. Remark 2.2. Any fixed point of the aforementioned discrete-time dynamics on the utility function corresponds necessarily to a Nash equilibrium of the game. Hence, an important testbed for the long-run behavior of the GDA, OGDA, and EG methods is to examine whether these methods stabilize around their fixed points, which effectively constitute the Nash equilibria of the game. In Section 3.2, we show that in the absence of pure Nash equilibria, all the above methods fail to stabilize at their fixed points, and consequently at the mixed Nash equilibria of the game, even for a simple class of (2, 2)-player games. These results showcase the need for a different approach that lies outside purely optimization-based ideas.
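The tensor-contraction form of the team utilities in Eq. (2.1), together with the einsum identity of footnote 3, can be checked numerically on a small random game. The team sizes, action counts, and the zero-sum construction below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Team A: two players with 2 and 3 actions; team B: two players with 2 and 2.
A = rng.standard_normal((2, 3, 2, 2))   # payoff tensor of team A
B = -A                                   # two-team zero-sum: A + B = O

def team_strategy(*marginals):
    """Product distribution s_1 ⊗ ... ⊗ s_k of the players' mixed strategies."""
    out = np.array(1.0)
    for s in marginals:
        out = np.multiply.outer(out, s)
    return out

x = team_strategy(np.array([0.5, 0.5]), np.array([0.2, 0.3, 0.5]))  # shape (2, 3)
y = team_strategy(np.array([1.0, 0.0]), np.array([0.6, 0.4]))       # shape (2, 2)

# The contraction of Eq. (2.1), written exactly as in footnote 3:
u_A = np.einsum('ijkl,ij,kl', A, x, y)
u_B = np.einsum('ijkl,ij,kl', B, x, y)
```

By construction u_A + u_B = 0 for every pair of team strategies, which is the zero-sum property A + B = O expressed through the contraction.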
Inspired by the applications of washout filters to stabilize highly susceptible systems, and by their adaptive-control generalizations, we design a new incarnation of GDA vaned by a two-matrix control feedback. Surprisingly, in contrast with the aforementioned traditional methods, our proposed technique accomplishes last-iterate stabilization at its fixed points, i.e., the mixed Nash equilibria of the team game. (K, P)-Vaned GDA Method. After concatenating the vectors of the minimizing and the maximizing agents, $z^{(k)} = (x^{(k)}, y^{(k)})$, we can write our method, for appropriate matrices K, P, as: $z^{(k+1)} = \Pi_{\mathcal{Z}}\left\{ z^{(k)} + \eta \begin{pmatrix} -\nabla_x f(z^{(k)}) \\ \nabla_y f(z^{(k)}) \end{pmatrix} + \eta K (z^{(k)} - \theta^{(k)}) \right\}$, $\theta^{(k+1)} = \Pi_{\mathcal{Z}}\left\{ \theta^{(k)} + \eta P (z^{(k)} - \theta^{(k)}) \right\}$ (2.2) Intuitively, the added variable $\theta^{(k)}$ holds an estimate of the fixed point, and through the feedback $\eta K (z^{(k)} - \theta^{(k)})$ the vector z stabilizes around that estimate, which slowly moves towards the real fixed point of the plain GDA dynamic. It is crucial to note that no additional fixed points are introduced to the system.
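A minimal sketch of Eq. (2.2) on the unconstrained bilinear toy game f(x, y) = xy, whose unique equilibrium is the origin. The step size and the gains K, P below are hand-picked for illustration, not the feedback guaranteed by Theorem 3.7, and the projection Π_Z is omitted since this toy game is unconstrained:

```python
import numpy as np

# Toy min-max game f(x, y) = x * y with unique equilibrium at (0, 0).
# Simultaneous GDA field: (-df/dx, +df/dy) evaluated at z = (x, y).
def gda_field(z):
    x, y = z
    return np.array([-y, x])

eta = 0.1
K = -1.0 * np.eye(2)   # illustrative gain choices, not taken from the paper
P = 0.5 * np.eye(2)

z_gda = np.array([0.5, 0.5])   # plain GDA iterate
z = np.array([0.5, 0.5])       # KPV-GDA iterate
theta = np.array([0.5, 0.5])   # fixed-point estimate ("stress" variable)

for _ in range(1000):
    z_gda = z_gda + eta * gda_field(z_gda)                   # spirals outward
    z_next = z + eta * gda_field(z) + eta * K @ (z - theta)  # Eq. (2.2), no projection
    theta = theta + eta * P @ (z - theta)                    # slow fixed-point tracking
    z = z_next
```

With these gains, the plain GDA iterate grows by a factor of about sqrt(1 + eta^2) per step, while the vaned iterate contracts to the mixed equilibrium; since the feedback term vanishes whenever z = theta at a GDA fixed point, no extra fixed points are introduced.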
The main contributions of the paper are as follows. First, the authors show that the computation of a Nash equilibrium in a two-team zero-sum game is CLS-hard. As a result, GDA and its variants (including optimistic GDA and extragradient) cannot, in general, be used to converge to the Nash equilibrium. Then, the authors propose a "stabilized" version of GDA (called KPV-GDA) obtained through certain stabilization techniques from control theory.
Teamwork makes von Neumann work:Min-Max Optimization in Two-Team Zero-Sum Games
Motivated by recent advances in both theoretical and applied aspects of multiplayer games , spanning from e-sports to multi-agent generative adversarial networks , we focus on min-max optimization in team zero-sum games . In this class of games , players are split into two teams with payoffs equal within the same team and of opposite sign across the opponent team . Unlike the textbook twoplayer zero-sum games , finding a Nash equilibrium in our class can be shown to be CLS-hard , i.e. , it is unlikely to have a polynomial-time algorithm for computing Nash equilibria . Moreover , in this generalized framework , we establish that even asymptotic last iterate or time average convergence to a Nash Equilibrium is not possible using Gradient Descent Ascent ( GDA ) , its optimistic variant , and extra gradient . Specifically , we present a family of team games whose induced utility is non-multilinear with non-attractive per-se mixed Nash Equilibria , as strict saddle points of the underlying optimization landscape . Leveraging techniques from control theory , we complement these negative results by designing a modified GDA that converges locally to Nash equilibria . Finally , we discuss connections of our framework with AI architectures with team competition structures like multi-agent generative adversarial networks . 1 INTRODUCTION . Team competition has played a central role in the development of Game Theory ( Marschak , 1955 ; von Stengel & Koller , 1997 ; Bacharach , 1999 ; Gold , 2005 ) , Economics ( Marschak , 1955 ; Gottinger , 1974 ) and Evolutionary Biology ( Nagylaki , 1993 ; Nowak et al. , 2004 ) , however , the behavior of the underlying dynamics within the teams are usually sidelined . Either for reasons of mathematical convenience or bigger picture understanding , “ teams ” in literature are typically modeled as if they were unitary actors , i.e. , single individuals without unveiling the internal decision-making of the team members ( see Kim et al . 
( 2019 ) ) . For instance , in the biology setting of weak selection model ( Nagylaki , 1993 ; Chastain et al. , 2014 ; Mehta et al. , 2015 ) species are modeled to compete as teams , while at the crux of the matter the genes of each species are the actual players and their alleles are the actions in the survival game . Similarly , it was the social-media collaboration of the Reddit retail trading crowd as a team that touches off the last year ’ s GameStop frenzy of short squeeze ( Umar et al. , 2021 ; Hasso et al. , 2021 ) transforming the markets into a tug of war game against the team of Wall Street hedge funds . Recently , these intrinsic details behind the competition among teams have attracted renewed interest in the Machine Learning community , motivated by the advent of multi-agent systems that are used for generative tasks or playing complex games like CTF ( Jaderberg et al. , 2019 ) or Starcraft ( Vinyals et al. , 2019 ) . So as to win this kind of games , self-training AI systems have to develop both collaborative attributes ( coordination within each team ) as well as contesting ones ( competition across the teams ) . Moreover , following the complementary thread of multi-agent generative adversarial network research , the creation of a pool by efficient incumbent agents , either in generators ( Arora et al. , 2017 ; Hoang et al. , 2017 ; 2018 ; Zhang et al. , 2018 ; Tang , 2020 ) , or discriminators ( Hardy et al. , 2019 ; Albuquerque et al. , 2019 ) has been tested providing significant statistical and computa- tional benefits . In this direction , researchers strive to harness the efficacy of distributed processing , utilizing shallower networks that can learn all the while more diverse datasets 1 . 
In order to shed some light on this persistent strain of research , the main premise of the theoretical scaffolding developed in this paper is that The “ unitary two-players ” min-max approach misses the critical component of the collective strategy making within each competing team . Our class of games . In this regard , we turn our attention to Two-Team Zero-Sum games , proposed by Schulman & Vazirani ( 2019b ) , a quite general class of min-max optimization problems that include bilinear games as well as a wide range of non-convex non-concave games . In this class , the players fall in two teams of size k1 , k2 and submit their own probabilistic strategy vector independently , akin to a general normal form multi-player game . Following the econometric common value assumption of Marschak ( 1955 ) , what makes a group of players a team is that in any outcome the players of each team receive an identical payoff . Thus , to build some intuition , it is easy to see that if perfect coordination existed within each team , the interaction between the teams is merely a zero-sum game between two “ virtual ” players . To streamline our presentation here , we defer the more precise description of our model to Section 2 . Challenges behind Two-Team Zero-Sum games . In the archetypical case of two players , i.e. , ( k1 = k2 = 1 ) , min-max strategies are typically thought of as the axiomatically correct predictions thanks to the seminal Von Neumann ’ s minmax theorem ( Von Neumann , 1928 ) . Unfortunately , min-max optimization for case k > 1 is a much more tenuous affair : Schulman & Vazirani ( 2019b ) preclude the existence of unique value by presenting a family of team games where min max 6= max min together with bounds about this duality gap , which quantifies exactly the effect of exchanging the order of strategy commitment either between the teams or the players thereof . 
If defining the correct figure of merit for Team games is rife with frustration , what is even more demanding is understanding what kind of algorithms/dynamics are able to solve this problem when a game-theoretically meaningful solution exists : Firstly , computing local Nash Equilibria ( NE ) in general non-convex non-concave games is PPAD-complete ( Daskalakis et al. , 2009 ; 2021 ) . Thus , all well-celebrated first-order methods , like gradient descent-ascent ( Lin et al. , 2020 ; Daskalakis & Panageas , 2019 ) , its optimistic ( Popov , 1980 ; Daskalakis & Panageas , 2018 ; Mertikopoulos et al. , 2019 ) and extra gradient variant ( Korpelevich , 1976 ) would require an exponential number of steps in the parameters of the problem to find an appoximate NE under Nemirovsky-Yudin ( Nemirovskij & Yudin , 1983 ) oracle optimization model . Secondly , even if a regret notion could be defined , noregret methodology is guarranteed to attract only to the set of coarse correlated equilibria ( CCE ) ( Fudenberg , 1991 ; Hannan , 2016 ; Flokas et al. , 2020 ; Giannou et al. , 2021 ) , a weaker notion that may be exclusively supported on strictly dominated strategies , even for simple symmetric two-player games ( See also Viossat & Zapechelnyuk ( 2013 ) ) . Whilst the aforementioned intractability failures for the general case of non-convex non-concave min-max problems provides significant insights , they can not a fortiori answer the fundamental question , restricted in the model of Two-Team Zero-Sum Games : Can we compute Nash equilibria in Two-Team Zero-Sum Games and ultimately are there first-order methods that converge to them under tangible guarantees ? Our results . 
To the best of our knowledge , the following contributions are the first-of-its-kind type of results for the case of Two-Team Zero-Sum games : • For the case of the computational complexity of approximate ( possibly mixed ) NE we establish a sweeping negative result proving that it is CLS-hard ( Theorem 3.1 ) , i.e. , is computationally harder than finding pure NE in a congestion game or finding approximate gradient descent fixed points . • From an optimization perspective , we settle these questions with a resounding “ no ” for all the well-known discrete gradient flow variations . Specifically , we present a simple family of twoteam with two-players zero-sum games where Projected-GDA , Optimistic-GDA , and Extra Gradient fail even to stabilize around a mixed NE , when they are initialized nearby ( Theorem 3.5 ) . 1Indeed , from the training perspective , it ’ s more computationally preferable to back-propagate through two equally sized neural networks with smaller capacity rather than through a giant single one that would be twice as deep . ( Tang , 2020 ) Additionally , for the category GDA in the non-degenerate team games with unique mixed NE , one could acquire an even stronger result for any high-dimensional configuration of actions and players . ( Theorem 3.2 ) • In order to make some substantial headway under the burden of the above instability results , we shift our attention to adaptive control generalizations of the celebrated Washout filters–traditionally used for stabilizing the Dutch-roll motion of an aircraft during a flight ( Hassouneh et al. , 2004 ; Grant & Reid , 1997 ) . Inspired by this framework , we propose the modified KPV-GDA2 which consists of a tandem combination of GDA together with a stabilizing feedback introduces by Bazanella et al . ( 1997 ) . 
{ state ( k+1 ) = state ( k ) + ηGDA ( state ( k+1 ) ) + ηK ( state ( k ) − stress ( k ) ) stress ( k+1 ) = stress ( k ) + ηP ( state ( k ) − stress ( k ) ) ( KPV-GDA ) The main linchpin of KPV-GDA method is the Simon ’ s and Theil ’ s ( Simon , 1956 ; Theil , 1957 ) certainty equivalence principle , a widely used methodology in Control theory in developing applied dynamic rational expectations models . According to this principle , the feedback law is split into two optimization steps whereby K−step attracts quickly the state to the stress while P−step converges slowly to the fixed points of GDA . Compared with the plethora of the proposed dynamics for min-max problems , the crucial advantage of the afore-described technique is that does not introduce any extra fixed points than GDA ’ s ones . In Section 2.2 , we provide some illustrative examples of KPV-GDA technique , while in Theorem 3.7 we prove the existence of such control feedback for our class of games . • Finally , in Section 4 we provide a series of experiments in simple two-team zero-sum games showcasing both the messy behaviors of traditional methods like GDA , OGDA and the power of KPV-GDA method in these optimization environments . Additionally , we show that multi-agent GAN architectures achieve better performance than the single-agent ones , in terms of network capacity , when they are trained in synthetic or real-world datasets like CIFAR10 . 2 PRELIMINARIES . 2.1 DEFINITIONS . Our setting . Formally , a two-team game in normal form is defined as a tuple Γ = Γ ( N , A , u ) consisting of ( i ) a finite set of players N , split into two teams A , B with kA and kB players correspondingly such that : N = NA ∪ NB = { A1 , · · · , AkA , B1 , · · · , BkB } ; ( ii ) a finite set of actions ( or pure strategies ) Ai = { α1 , . . . , αni } per player i ∈ N ; ( iii ) each team ’ s payoff function uA , uB : A → R , where A : = ∏ iAi denotes the ensemble of all possible action profiles α = ( αA1 , . . . 
, αAkA, αB1, . . . , αBkB), while the individual utility of a player is identical to that of her teammates, i.e., ui = uA and uj = uB for all (i, j) ∈ NA × NB. In this general context, players may also adopt mixed strategies, i.e., probability distributions sk ∈ ∆(Ak) over the pure strategies αk ∈ Ak. Correspondingly, we define the product distributions x = sA1 ⊗ · · · ⊗ sAkA and y = sB1 ⊗ · · · ⊗ sBkB as the teams’ strategies. Collectively, we write X := ∏i∈NA Xi = ∏i∈NA ∆(Ai) and Y := ∏i∈NB Yi = ∏i∈NB ∆(Ai) for the spaces of mixed strategy profiles of teams A and B. As in bilinear two-player games, the teams’ utility functions can be expressed via the payoff tensors A, B ∈ Rτ with τ = ∏i∈N |Ai|, and acquire the form³

uA = A y x  &  uB = B y x      (2.1)

²The name “KPV-GDA” is an initialism of the (K, P)-Vaned Gradient Descent Ascent method; just like the tail section of an aircraft, where the vanes are flight-control surfaces that counteract unstable yaw, the (K, P) control feedback aims to stabilize the unstable arrows of the gradient flow around a mixed NE. ³Figuratively, the latter form denotes what is known as a tensor contraction: A y x = ∑i,...,j,k,...,l xi,...,j Ai,...,j,k,...,l yk,...,l. If x, y have shapes (i, j) and (k, l), this is equivalent to u = einsum(’ijkl,ij,kl’, A, x, y). In terms of solutions, we focus on the per-player Nash Equilibrium (NE), i.e., a strategy profile s∗ = (x, y) = ((s∗A1, . . . , s∗AkA), (s∗B1, . . . , s∗BkB)) such that

ui(s∗) ≥ ui(si; s∗−i)⁴ for all si ∈ ∆(Ai) and all i ∈ N      (NE)

The strategy profile s∗ is called pure if every player of both teams chooses a single action; otherwise we say that it is mixed. Finally, a two-team game is called two-team zero-sum if uA = −uB, or equivalently A + B = O. Remark 2.1.
As a technical prerequisite for the rest of this work, we assume that a succinct representation of the utility tensors of the game is available, or equivalently that a payoff oracle efficiently provides both the value of the utility function and its derivatives at a given input; this is consistent with the vast majority of the applications described in the literature (von Stengel & Koller, 1997). A first approach to computing Nash equilibria in Two-Team Zero-Sum games. Given the duality gap between min max and max min, unlike in the two-player zero-sum case, an equilibrium in our setting cannot be computed via linear programming. Toward computing Nash equilibria in two-team zero-sum games, we have experimented with a selection of first-order methods that have been utilized with varying success in the two-person zero-sum setting. Namely, we analyze the following methods: i) Gradient Descent-Ascent; ii) Optimistic Gradient Descent-Ascent; iii) the Extra Gradient method; iv) the Optimistic Multiplicative Weights Update method. We defer their precise definitions to Appendix B. The following remark will play a key role in the sequel. Remark 2.2. Any fixed point of the aforementioned discrete-time dynamics on the utility function necessarily corresponds to a Nash equilibrium of the game. Hence, an important testbed for the long-run behavior of the GDA, OGDA, and EG methods is to examine whether these methods stabilize around their fixed points, which effectively constitute the Nash equilibria of the game. In Section 3.2, we show that, in the absence of pure Nash equilibria, all of the above methods fail to stabilize on their fixed points even for a simple class of (2, 2)-player games, and consequently fail to converge to the mixed Nash equilibria of the game. These results showcase the need for a different approach that lies outside purely optimization-based ideas.
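The precise definitions of these methods are deferred to the paper's Appendix B; the sketch below uses their usual textbook update rules from the min-max literature (an assumption, not the paper's exact formulation). It runs them on the toy bilinear saddle f(x, y) = x·y, where plain GDA is known to spiral away from the equilibrium at the origin while Extra Gradient and Optimistic GDA contract toward it; the paper's point is that in two-team games even the latter two lose this stability.

```python
import numpy as np

# Gradient field V(z) = (-df/dx, +df/dy) for the toy saddle f(x, y) = x*y,
# whose unique equilibrium is the origin.
def V(z):
    x, y = z
    return np.array([-y, x])

def gda(z, eta):
    # plain gradient descent-ascent
    return z + eta * V(z)

def ogda(z, z_prev, eta):
    # optimistic GDA: extrapolates using the previous gradient
    return z + 2 * eta * V(z) - eta * V(z_prev)

def eg(z, eta):
    # extra gradient: exploratory half step, then a step from z
    z_half = z + eta * V(z)
    return z + eta * V(z_half)

eta, steps = 0.1, 2000
z_g = np.array([1.0, 1.0])
z_e = np.array([1.0, 1.0])
z_o = np.array([1.0, 1.0])
z_o_prev = z_o.copy()
for _ in range(steps):
    z_g = gda(z_g, eta)
    z_e = eg(z_e, eta)
    z_o, z_o_prev = ogda(z_o, z_o_prev, eta), z_o

# z_g diverges (spiral), while z_e and z_o shrink toward the origin.
```

On this bilinear toy the GDA iterate has per-step modulus sqrt(1 + η²) > 1, whereas EG and OGDA contract; the instability results of Section 3.2 show that no such contraction survives in the team-game setting.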
Inspired by the applications of washout filters to stabilize highly susceptible systems, and by their adaptive-control generalizations, we design a new incarnation of GDA vaned by a two-matrix control feedback. Surprisingly, in contrast with the aforementioned traditional methods, our proposed technique accomplishes last-iterate stabilization on its fixed points, i.e., the mixed Nash equilibria of the team game. (K, P)-Vaned GDA Method. After concatenating the vectors of the minimizing and the maximizing agents, z(k) = (x(k), y(k)), we can write our method, for appropriate matrices K, P:

z(k+1) = ΠZ { z(k) + η (−∇x f(z(k)), ∇y f(z(k))) + η K (z(k) − θ(k)) }
θ(k+1) = ΠZ { θ(k) + η P (z(k) − θ(k)) }      (2.2)

Intuitively, the added variable θ(k) holds an estimate of the fixed point; through the feedback η K (z(k) − θ(k)), the vector z stabilizes around that estimate, which in turn slowly moves toward the true fixed point of the plain GDA dynamic. It is crucial to note that no additional fixed points are introduced to the system.
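A minimal unconstrained sketch of update (2.2), run on the toy saddle f(x, y) = x·y: the projection ΠZ is dropped for simplicity, and the scalar gains K = −1, P = 0.5 are illustrative choices, not values from the paper.

```python
import numpy as np

# (K, P)-Vaned GDA (Eq. 2.2, unconstrained) on f(x, y) = x*y.
def grad_field(z):
    x, y = z
    return np.array([-y, x])   # (-df/dx, +df/dy)

def kpv_gda(eta=0.1, K=-1.0, P=0.5, steps=2000):
    z = np.array([1.0, 1.0])       # concatenated iterate (x, y)
    theta = z.copy()               # slow estimate of the fixed point
    for _ in range(steps):
        z_next = z + eta * grad_field(z) + eta * K * (z - theta)
        theta = theta + eta * P * (z - theta)   # P-step tracks z slowly
        z = z_next
    return z

z_plain = kpv_gda(K=0.0, P=0.0)    # K = P = 0 recovers plain GDA: diverges
z_vaned = kpv_gda()                # vaned GDA: settles at the equilibrium
```

Since the feedback term η K (z − θ) vanishes whenever z = θ, any rest point of this system is also a rest point of plain GDA, matching the claim that no extra fixed points are introduced.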
This work considers two-team zero-sum games, where two teams with opposite objectives face each other. While the literature on learning in zero-sum games is quite extensive, very little is known for two-team zero-sum games. This paper first shows that this problem is much harder than classical zero-sum games by proving that the computation of a NE is CLS-hard. Besides showing the failure of classical optimization methods for zero-sum games, the authors propose an optimization algorithm better suited to this setting, which converges locally to a NE for carefully tuned hyperparameters.
SP:343e04f91a7e7e5574b94fac74d8f0c0d41c70ea
Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games
Motivated by recent advances in both theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, we focus on min-max optimization in team zero-sum games. In this class of games, players are split into two teams with payoffs equal within the same team and of opposite sign across the opposing team. Unlike the textbook two-player zero-sum games, finding a Nash equilibrium in our class can be shown to be CLS-hard, i.e., it is unlikely that a polynomial-time algorithm exists for computing Nash equilibria. Moreover, in this generalized framework, we establish that even asymptotic last-iterate or time-average convergence to a Nash equilibrium is not possible using Gradient Descent Ascent (GDA), its optimistic variant, or extra gradient. Specifically, we present a family of team games whose induced utility is non-multilinear and whose mixed Nash equilibria are non-attractive per se, being strict saddle points of the underlying optimization landscape. Leveraging techniques from control theory, we complement these negative results by designing a modified GDA that converges locally to Nash equilibria. Finally, we discuss connections of our framework with AI architectures with team-competition structures like multi-agent generative adversarial networks. 1 INTRODUCTION . Team competition has played a central role in the development of Game Theory (Marschak, 1955; von Stengel & Koller, 1997; Bacharach, 1999; Gold, 2005), Economics (Marschak, 1955; Gottinger, 1974) and Evolutionary Biology (Nagylaki, 1993; Nowak et al., 2004); however, the behavior of the underlying dynamics within the teams is usually sidelined. Either for reasons of mathematical convenience or for bigger-picture understanding, “teams” in the literature are typically modeled as if they were unitary actors, i.e., single individuals, without unveiling the internal decision-making of the team members (see Kim et al.
(2019)). For instance, in the biology setting of the weak-selection model (Nagylaki, 1993; Chastain et al., 2014; Mehta et al., 2015), species are modeled as competing in teams, while at the crux of the matter the genes of each species are the actual players and their alleles are the actions in the survival game. Similarly, it was the social-media collaboration of the Reddit retail-trading crowd, acting as a team, that touched off last year’s GameStop short-squeeze frenzy (Umar et al., 2021; Hasso et al., 2021), transforming the markets into a tug-of-war game against the team of Wall Street hedge funds. Recently, these intrinsic details behind the competition among teams have attracted renewed interest in the Machine Learning community, motivated by the advent of multi-agent systems that are used for generative tasks or for playing complex games like CTF (Jaderberg et al., 2019) or Starcraft (Vinyals et al., 2019). To win these kinds of games, self-training AI systems have to develop both collaborative attributes (coordination within each team) and contesting ones (competition across the teams). Moreover, following the complementary thread of multi-agent generative adversarial network research, the creation of a pool of efficient incumbent agents, either generators (Arora et al., 2017; Hoang et al., 2017; 2018; Zhang et al., 2018; Tang, 2020) or discriminators (Hardy et al., 2019; Albuquerque et al., 2019), has been tested, providing significant statistical and computational benefits. In this direction, researchers strive to harness the efficacy of distributed processing, utilizing shallower networks that can nevertheless learn more diverse datasets¹.
In order to shed some light on this persistent strain of research, the main premise of the theoretical scaffolding developed in this paper is that the “unitary two-player” min-max approach misses the critical component of collective strategy-making within each competing team. Our class of games. In this regard, we turn our attention to Two-Team Zero-Sum games, proposed by Schulman & Vazirani (2019b), a quite general class of min-max optimization problems that includes bilinear games as well as a wide range of non-convex non-concave games. In this class, the players fall into two teams of sizes k1, k2 and submit their own probabilistic strategy vectors independently, akin to a general normal-form multi-player game. Following the econometric common-value assumption of Marschak (1955), what makes a group of players a team is that in any outcome the players of each team receive an identical payoff. Thus, to build some intuition, it is easy to see that if perfect coordination existed within each team, the interaction between the teams would merely be a zero-sum game between two “virtual” players. To streamline our presentation, we defer the more precise description of our model to Section 2. Challenges behind Two-Team Zero-Sum games. In the archetypical case of two players, i.e., (k1 = k2 = 1), min-max strategies are typically thought of as the axiomatically correct predictions thanks to the seminal von Neumann minmax theorem (Von Neumann, 1928). Unfortunately, min-max optimization for the case k > 1 is a much more tenuous affair: Schulman & Vazirani (2019b) preclude the existence of a unique value by presenting a family of team games where min max ≠ max min, together with bounds on this duality gap, which quantifies exactly the effect of exchanging the order of strategy commitment, either between the teams or among the players thereof.
If defining the correct figure of merit for team games is rife with frustration, what is even more demanding is understanding what kind of algorithms/dynamics are able to solve this problem when a game-theoretically meaningful solution exists. Firstly, computing local Nash equilibria (NE) in general non-convex non-concave games is PPAD-complete (Daskalakis et al., 2009; 2021). Thus, all well-celebrated first-order methods, like gradient descent-ascent (Lin et al., 2020; Daskalakis & Panageas, 2019) and its optimistic (Popov, 1980; Daskalakis & Panageas, 2018; Mertikopoulos et al., 2019) and extra gradient (Korpelevich, 1976) variants, would require a number of steps exponential in the parameters of the problem to find an approximate NE under the Nemirovsky-Yudin oracle optimization model (Nemirovskij & Yudin, 1983). Secondly, even if a regret notion could be defined, the no-regret methodology is guaranteed to attract only to the set of coarse correlated equilibria (CCE) (Fudenberg, 1991; Hannan, 2016; Flokas et al., 2020; Giannou et al., 2021), a weaker notion that may be supported exclusively on strictly dominated strategies, even for simple symmetric two-player games (see also Viossat & Zapechelnyuk (2013)). Whilst the aforementioned intractability results for the general case of non-convex non-concave min-max problems provide significant insight, they cannot a fortiori answer the fundamental question, restricted to the model of Two-Team Zero-Sum Games: Can we compute Nash equilibria in Two-Team Zero-Sum Games, and ultimately, are there first-order methods that converge to them under tangible guarantees? Our results.
This paper studies equilibrium computation in two-team zero-sum games. Finding the per-player Nash equilibrium is proved to be CLS-hard, and many popular gradient-based algorithms are proved to be unstable. A vaned gradient descent ascent algorithm is proposed to address the instability and is shown to converge locally.
SP:343e04f91a7e7e5574b94fac74d8f0c0d41c70ea
Automatic Loss Function Search for Predict-Then-Optimize Problems with Strong Ranking Property
1 INTRODUCTION . Many decision-making processes under uncertainty in the real world involve solving combinatorial problems with parameters unknown to the decision maker. A traditional way to address this uncertainty is to place assumptions on the distributions of the parameters (Hentenryck & Bent, 2006). Alternatively, a recent approach is to predict the unknown parameters from correlated features using machine learning methods and solve the combinatorial problems based on the predictions. This paradigm, called predict-then-optimize (PTO), has been widely employed in practice (Elmachtoub & Grigas, 2021). For example, Google Maps estimates road travel times in traffic to compute a shortest path (Lau, 2020); computer clusters predict the processing times and resource demands of computation tasks to schedule jobs well on servers (Mao et al., 2016); hedge funds forecast the return rates of different stocks to optimize their portfolios for the next trading day (Thomson, 2021). However, the commonly used l2 loss function (i.e., the l2 distance between predictions and true values) usually cannot achieve ideal decision results in predict-then-optimize problems (Demirović et al., 2019). This misalignment between the l2 loss in prediction and the quality of the decision stems from the mismatch between the continuous prediction problem and the discontinuous structure of the combinatorial optimization problem. A straightforward remedy is to adjust the loss to reflect the difference in objective values between the optimization solutions generated from the predicted and the observed parameters, called the Smart “Predict, then Optimize” (SPO) loss (Elmachtoub & Grigas, 2021). However, due to the combinatorial nature of the optimization problems, the SPO loss is usually piecewise flat with multiple discontinuity points.
As a result, the derivative of the SPO loss is either zero or nonexistent, which prohibits the training of gradient-based deep learning algorithms. Besides, most solvers for combinatorial optimization problems are not differentiable, and thus cannot be directly incorporated into the widely adopted gradient-based learning approaches of today. Current approaches to minimizing the SPO loss can be categorized into two classes. The first is to differentiate the discrete optimization solver by approximating the objective with a certain family of functions; this series of methods works when the combinatorial optimization problem is linear (Wilder et al., 2019a;b; Pogančić et al., 2019). The other class of research proposes new surrogate loss functions for the SPO loss, e.g., the SPO+ loss for linear programming (Elmachtoub & Grigas, 2021) and a piecewise-linear loss for dynamic programming (Stuckey et al., 2020). However, these surrogate loss functions are problem-specific (i.e., they depend on the type of the combinatorial optimization problem) and require much domain knowledge from experts to design. Instead of manually designing a surrogate loss metric for each optimization problem, an alternative approach is to find a good loss metric from a predefined search space in an automatic manner, inspired by recent progress in automated machine learning (AutoML) (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018). These AutoML approaches usually define an appropriate search space and then run a search algorithm, e.g., reinforcement learning or a genetic algorithm, to find a good metric. Li et al. (2019) and Li et al. (2021) studied the automatic loss-function search problem for computer vision tasks. They defined the search space by replacing the mainstream metrics for semantic segmentation with their differentiable approximations.
However, for predict-then-optimize problems, no such evaluation metric is available except for the SPO loss. Moreover, these metrics, which are designed for segmentation tasks in the computer vision domain, are not suitable for evaluating prediction results for combinatorial optimization problems. In this paper, we propose a framework for automatic loss-function search called APOC (Automatic Prediction and Optimization Connector) to tackle a wide range of predict-then-optimize problems whose optimal solution is determined by the total group preorder of the parameters of the combinatorial problem, a condition called the strong ranking property (see Definition 2 for details). These problems have the same optimal solution for different sets of parameters, as long as those sets of parameters preserve the same total group preorders. The key idea is to build a proper search space in which different loss functions capture the partial comparisons between different groups of the parameters of the optimization problem. Our contributions are summarized as follows: 1) We theoretically prove that the l2 loss is not an ideal choice in a linear-regression setting of PTO problems; 2) We propose the total group preorder loss for PTO problems and relax it to a differentiable approximate version, which is fitted using a family of Derivative-of-Gaussian wavelet functions to capture the group comparison relationships among items with variable sizes and weights; 3) We propose an effective method, called APOC, to automatically search for a differentiable surrogate loss based on approximate total group preorders of items for PTO problems; 4) The proposed APOC method has been validated on three classic combinatorial problems with the strong ranking property. 2 RELATED WORK .
Predict-then-optimize (PTO) problems capture a common pipeline of machine-learning-assisted optimization solvers: predicting the unknown parameters of the optimization problem from contextual information and then optimizing based on the predicted parameters (Elmachtoub & Grigas, 2021). However, many combinatorial optimization problems have piecewise-flat value-function landscapes, which prohibit end-to-end learning with gradient-based prediction models. One line of work differentiates the optimization solvers for specific problems and embeds them as layers in the network architecture through which gradients can be propagated; this originates from differentiable optimization (Amos, 2019). Grover et al. (2018) propose a sorting network for ranking problems by relaxing the permutation matrix output by sorting algorithms to a unimodal row-stochastic matrix. For the node clustering problem, Wilder et al. (2019b) differentiate the K-means algorithm by assigning nodes to clusters according to a soft-min function. Pogančić et al. (2019) interpolate the value function of linear programming using piecewise-linear functions and embed the solvers of linear programs as layers. The other line of work focuses on searching for a surrogate loss function to approximate the value function of the optimization problem, the so-called regret in decision theory (Bell, 1982). Elmachtoub & Grigas (2021) derive a convex upper bound on the regret of linear programming, called the SPO+ loss. Wilder et al. (2019a) propose the QPTL loss by adding a quadratic penalty term to the continuous relaxation of the regret so that results from differentiating over quadratic programs can be used. Mandi & Guns (2020) overcome the non-differentiability of the regret by adding a logarithmic-barrier term, a method called IntOpt. Stuckey et al.
(2020) approximate the value function of dynamic programming problems as piecewise-linear functions with learnable parameters. Yoon et al. (2013) propose the mean objective cost of uncertainty (MOCU), the difference between the value functions of a robust solution and the optimal solution, to evaluate the uncertainty of parameters in decision making, e.g., in experiment design (Boluki et al., 2018) and active learning (Zhao et al., 2021). The loss function guides machine learning algorithms to produce good predictions for different tasks, and is usually designed with domain knowledge from experts (Masnadi-Shirazi & Vasconcelos, 2008; Bruch et al., 2019). Automatic search for suitable loss functions without domain knowledge has recently received much attention from the computer vision community. Li et al. (2019) use reinforcement learning algorithms to learn better loss functions with good generalization and transferability across vision tasks. Wang et al. (2020) adopt both random search and reinforcement learning to search for loss functions for face recognition problems. Li et al. (2021) explore the possibility of searching for loss functions automatically from scratch for generic tasks, e.g., semantic segmentation, object detection, and pose estimation. However, the search spaces in these methods are specially designed for vision tasks and cannot be directly applied to PTO problems. Most work on automatic loss search follows the search algorithms used in AutoML; we refer to He et al. (2021) for a comprehensive survey of search methods in AutoML. Natural questions are: 1) whether we can design a suitable search space for loss functions in PTO problems; and 2) whether there is a strategy to systematically search for adequate loss functions for different optimization problems using techniques from AutoML. 3 ATGP LOSS FOR PREDICT-THEN-OPTIMIZE PROBLEMS .
3.1 MISALIGNMENT IN PREDICT-THEN-OPTIMIZE PROBLEMS . The problems we consider are predict-then-optimize (PTO) problems of the following form. For an optimization problem

maximize_z Uc(z) subject to z ∈ Z      (1)

where z is the decision variable, c ∈ Rd is the parameter of the objective function U, and Z is the feasible set, which does not depend on c, a decision maker needs to solve (1) without knowing the exact value of c. Instead, the decision maker observes a feature vector x ∈ Rk. The goal of the decision maker is to predict the value of c and then optimize (1) based on the prediction ĉ. To distinguish it from the parameters of the prediction model (e.g., the weights of a neural network), we call c the PTO parameter. Rather than the prediction error between c and ĉ (e.g., the l2 distance), the decision maker cares more about obtaining a high-quality decision from the prediction. Elmachtoub & Grigas (2021) characterize this quantity via the Smart “Predict, then Optimize” (SPO) loss, which equals the loss incurred by solving the optimization problem (1) using ĉ instead of the true parameter c:

ℓSPO(ĉ, c) = E[ U∗c − Uc(z∗(ĉ)) ]

where z∗(c) = arg max_{z∈Z} Uc(z) and U∗c = Uc(z∗(c)). Figure 2a shows the misalignment between the l2 distance (green) and the SPO loss (blue) in a ranking PTO problem. In this ranking problem, there are two items to be ranked. The true values of the items are c1 = 4.9 and c2 = 5.1, which are also the values to predict. The prediction denoted by the magenta triangle has a lower SPO loss than the orange one, and therefore yields a better solution. However, a prediction model that minimizes the l2 distance will prefer the orange one (whose l2 distance is lower) and produce a worse solution with regard to the ranking task.
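The two-item ranking example above can be reproduced in a few lines. The two candidate predictions below are illustrative stand-ins for the magenta and orange points of Figure 2a (the figure's exact coordinates are not given in the text); the decision rule picks the item with the larger predicted value, i.e., Uc(z) = c·z over one-hot choices z.

```python
import numpy as np

c = np.array([4.9, 5.1])   # true item values from the example

def z_star(v):
    """Optimal decision for parameters v: one-hot vector of the argmax."""
    z = np.zeros_like(v)
    z[np.argmax(v)] = 1.0
    return z

def spo_loss(c_hat, c):
    # value of the truly optimal decision minus value of the decision
    # induced by the prediction
    return c @ z_star(c) - c @ z_star(c_hat)

def l2(c_hat, c):
    return np.linalg.norm(c_hat - c)

pred_far = np.array([4.0, 6.0])    # large l2 error, but correct order
pred_near = np.array([5.0, 4.8])   # small l2 error, but wrong order

# pred_near wins on l2 yet loses on decision quality:
# its SPO loss is 0.2 while pred_far's is 0.
```

An l2-minimizing model prefers pred_near, yet pred_near picks item 1 and pays the 0.2 gap, which is exactly the misalignment the SPO loss is designed to expose.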
However , the SPO loss of most PTO problems is a piecewise linear function due to the combinatorial structure in the objective function Uc ( z ) and the solution map z∗ ( c ) . As a result , its derivatives in the predicted PTO parameter ĉ is either zero or nonexistent , which prohibits the usage of prediction models whose training is based on gradient descent algorithms . In this paper , we consider a collection of combinatorial optimization problems with strong ranking property . For this collection of optimization problems , we propose the total group preorder ( TGP ) loss and its differentiable variant approximated total group preorder ( ATGP ) loss as the training targets in the prediction stage . The ATGP loss will approximate the SPO loss with smooth shape , which is friendly and effective for the gradient-based training manner , as is shown in Figure 2c .
In this paper, the authors propose a novel loss function, the approximated total group preorder (ATGP) loss, which aims to address the limitations of the l2 loss in predict-then-optimize (PTO) problems. The authors argue that using the l2 loss in the prediction phase is "misaligned" with the final goal when the ultimate objective is to make discrete decisions in a combinatorial optimization setting. While the SPO (Smart "Predict, then Optimize") loss, recently proposed by Elmachtoub and Grigas (2021), provides a remedy, the authors note that it cannot be used with gradient-based learning algorithms. The authors rectify this issue by proposing the total group preorder (TGP) loss for better alignment, together with an algorithm for automatically searching for an approximated TGP (ATGP) loss that is differentiable and can therefore take advantage of efficient gradient-based learning schemes.
SP:e2d56d76c6467658c85d8a59a7afb23561dd03b3
Automatic Loss Function Search for Predict-Then-Optimize Problems with Strong Ranking Property
1 INTRODUCTION

Many real-world decision-making processes under uncertainty amount to solving combinatorial problems whose parameters are unknown to the decision maker. A traditional way to address this uncertainty is to place distributional assumptions on the parameters (Hentenryck & Bent, 2006). Alternatively, a more recent approach is to predict the unknown parameters from correlated features using machine learning methods and to solve the combinatorial problems based on the predictions. This paradigm, called predict-then-optimize (PTO), has been widely employed in practice (Elmachtoub & Grigas, 2021). For example, Google Maps estimates road travel times to compute a shortest path (Lau, 2020); computer clusters predict the processing times and resource demands of computation tasks to schedule jobs on servers (Mao et al., 2016); hedge funds forecast the return rates of different stocks to optimize their portfolios for the next trading day (Thomson, 2021). However, the commonly used l2 loss (i.e., the l2 distance between predictions and true values) usually cannot achieve ideal decision results in predict-then-optimize problems (Demirović et al., 2019). This misalignment between the l2 loss in prediction and the quality of the decision stems from the mismatch between the continuity of the prediction problem and the discontinuity of the combinatorial optimization problem's structure. A straightforward remedy is to adjust the loss to reflect the difference in objective values between the optimization solutions generated from the predicted and the observed parameters, called the Smart "Predict, then Optimize" (SPO) loss (Elmachtoub & Grigas, 2021). However, due to the combinatorial nature of the optimization problems, the SPO loss is usually piecewise flat with multiple discontinuities.
As a result, the derivative of the SPO loss is either zero or nonexistent, which prohibits the training of gradient-based deep learning algorithms. Moreover, most solvers for combinatorial optimization problems are not differentiable and cannot be directly incorporated into the widely adopted gradient-based learning pipelines. Current approaches to minimizing the SPO loss fall into two classes. The first differentiates the discrete optimization solver by approximating the objective with a certain family of functions; this line of methods works when the combinatorial optimization problem is linear (Wilder et al., 2019a;b; Pogančić et al., 2019). The second proposes new surrogate loss functions for the SPO loss, e.g., the SPO+ loss for linear programming (Elmachtoub & Grigas, 2021) and a piecewise linear loss for dynamic programming (Stuckey et al., 2020). However, these surrogate loss functions are problem-specific (i.e., they depend on the type of the combinatorial optimization problem) and require substantial domain knowledge from experts to design. Instead of manually designing a surrogate loss for each optimization problem, an alternative is to find a good loss from a predefined search space automatically, inspired by recent progress in automated machine learning (AutoML) (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018). These AutoML approaches usually define an appropriate search space and then run a search algorithm, e.g., reinforcement learning or a genetic algorithm, to find a good metric. Li et al. (2019) and Li et al. (2021) studied automatic loss function search for computer vision tasks; they defined the search space by replacing the mainstream metrics for semantic segmentation with their differentiable approximations.
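To make the surrogate idea concrete, here is a minimal sketch of the SPO+ loss for a min-cost problem over a small finite feasible set, following our reading of Elmachtoub & Grigas (2021); the brute-force enumeration of vertices and the toy "choose one item" feasible set are purely illustrative.

```python
import numpy as np

def spo_plus(c_hat, c, vertices):
    # SPO+ surrogate for min-cost problems, evaluated by enumerating a
    # small, finite feasible set. It is convex in c_hat, upper-bounds the
    # SPO regret, and vanishes when c_hat equals the true parameter c.
    z_star = min(vertices, key=lambda w: float(c @ w))          # true optimum
    term = max(float((c - 2.0 * c_hat) @ w) for w in vertices)  # worst-case term
    return term + 2.0 * float(c_hat @ z_star) - float(c @ z_star)

# "Choose one item at minimum cost": feasible set = unit basis vectors.
V = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
c = np.array([1.0, 2.0])
print(spo_plus(c, c, V))                     # 0.0 at the true parameter
print(spo_plus(np.array([2.0, 1.0]), c, V))  # 3.0 for a misleading prediction
```

Because the surrogate stays convex in the prediction, it can be minimized with ordinary gradient-based training even though the underlying decision problem is discrete.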
For predict-then-optimize problems, however, no such evaluation metrics are available except the SPO loss. Moreover, metrics designed for segmentation tasks in the computer vision domain are not suitable for evaluating predictions for combinatorial optimization problems. In this paper, we propose an automatic loss function search framework called APOC (Automatic Prediction and Optimization Connector) to tackle a wide range of predict-then-optimize problems whose optimal solution is determined by the total group preorder of the parameters of the combinatorial problem, a condition we call the strong ranking property (see Definition 2 for details). These problems have the same optimal solution for different sets of parameters, as long as those sets of parameters preserve the same total group preorders. The key idea is to build a proper search space in which different loss functions capture the partial comparisons between different groups of the parameters of the optimization problem. Our contributions are summarized as follows: 1) We theoretically prove that the l2 loss is not an ideal choice in a linear regression setting of PTO problems; 2) We propose the total group preorder loss for PTO problems and relax it to a differentiable approximated version, fitted with a family of Derivative-of-Gaussian wavelet functions to capture the group comparison relationships among items with variable sizes and weights; 3) We propose APOC, an effective method to automatically search for a differentiable surrogate loss based on approximated total group preorders of items for PTO problems; 4) We validate APOC on three classic combinatorial problems with the strong ranking property.

2 RELATED WORK
Predict-then-optimize (PTO) problems capture a common pipeline of machine-learning-assisted optimization: predicting the unknown parameters of the optimization problem from contextual information and then optimizing based on the predicted parameters (Elmachtoub & Grigas, 2021). However, many combinatorial optimization problems have piecewise flat value function landscapes, which prohibit end-to-end learning with gradient-based prediction models. One line of work differentiates the optimization solvers for specific problems and embeds them as layers in the network architecture through which gradients can be propagated, originating from differentiable optimization (Amos, 2019). Grover et al. (2018) proposes a sorting network for ranking problems by relaxing the permutation matrix output by sorting algorithms to a unimodal row-stochastic matrix. For the node clustering problem, Wilder et al. (2019b) differentiates the K-means algorithm by assigning nodes to clusters according to a soft-min function. Pogančić et al. (2019) interpolates the value function of linear programming using piecewise linear functions and embeds linear programming solvers as layers. The other line of work searches for a surrogate loss function to approximate the value function of the optimization problem, the so-called regret in decision theory (Bell, 1982). Elmachtoub & Grigas (2021) derives a convex upper bound on the regret of linear programming, called the SPO+ loss. Wilder et al. (2019a) proposes the QPTL loss by adding a quadratic penalty term to the continuous relaxation of the regret, so that results on differentiating through quadratic programs can be used. Mandi & Guns (2020) overcomes the non-differentiability of the regret by adding a logarithmic barrier term, called IntOpt. Stuckey et al.
(2020) approximates the value function of dynamic programming problems as piecewise linear functions with learnable parameters. Yoon et al. (2013) proposes the mean objective cost of uncertainty (MOCU), the difference between the value functions of a robust solution and the optimal solution, to evaluate the uncertainty of parameters in decision making, e.g., experiment design (Boluki et al., 2018) and active learning (Zhao et al., 2021). Loss functions guide machine learning algorithms to produce good predictions for different tasks and are usually designed with domain knowledge from experts (Masnadi-Shirazi & Vasconcelos, 2008; Bruch et al., 2019). Automatic search for suitable loss functions without domain knowledge has recently received much attention from the computer vision community. Li et al. (2019) uses reinforcement learning algorithms to learn better loss functions with good generalization and transferability on different vision tasks. Wang et al. (2020) adopts both random search and reinforcement learning algorithms to search for loss functions on face recognition problems. Li et al. (2021) explores the possibility of searching for loss functions automatically from scratch for generic tasks, e.g., semantic segmentation, object detection, and pose estimation. However, the search spaces in these methods are specially designed for vision tasks and cannot be directly applied to PTO problems. Most work on automatic loss search follows the searching algorithms used in AutoML; we refer to He et al. (2021) for a comprehensive survey of search methods in AutoML. Two natural questions arise: 1) can we design a suitable search space for loss functions in PTO problems; and 2) is there a strategy to systematically search for adequate loss functions for different optimization problems using techniques from AutoML?

3 ATGP LOSS FOR PREDICT-THEN-OPTIMIZE PROBLEMS
3.1 MISALIGNMENT IN PREDICT-THEN-OPTIMIZE PROBLEMS

The problems we consider are predict-then-optimize (PTO) problems of the following form:

maximize_z U_c(z) subject to z ∈ Z    (1)

where z is the decision variable, c ∈ R^d is the parameter of the objective function U, and Z is the feasible set, which does not depend on c. A decision maker needs to solve (1) without knowing the exact value of c. Instead, the decision maker observes a feature vector x ∈ R^k. The goal is to predict the value of c and then optimize (1) based on the prediction ĉ. To distinguish c from the parameters of the prediction models (e.g., the weights of neural networks), we call c the PTO parameter. Instead of focusing on the prediction error between c and ĉ (e.g., the l2 distance), the decision maker cares more about obtaining a high-quality decision from the prediction. Elmachtoub & Grigas (2021) characterizes this quantity by the Smart "Predict, then Optimize" (SPO) loss, which equals the loss incurred by solving the optimization problem (1) using ĉ instead of the true parameter c:

ℓ_SPO(ĉ, c) = E[U*_c − U_c(z*(ĉ))],

where z*(c) = arg max_{z∈Z} U_c(z) and U*_c = U_c(z*(c)). Figure 2a shows the misalignment between the l2 distance (green) and the SPO loss (blue) in a ranking PTO problem with two items. The true values of the items, which are also the values to predict, are c1 = 4.9 and c2 = 5.1. The prediction denoted by the magenta triangle has a lower SPO loss than the orange one and therefore yields a better solution. However, a prediction model that minimizes the l2 distance will prefer the orange one (whose l2 distance is lower) and produce a worse solution with regard to the ranking task.
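The two-item example can be reproduced numerically. The concrete predictions below are hypothetical stand-ins for the magenta and orange points of Figure 2a, and the "pick the single best item" objective is a minimal instance of problem (1):

```python
import numpy as np

def spo_loss(c_hat, c):
    # "Pick the single best item" instance: U_c(z) = c[z], z*(c) = argmax(c),
    # so the SPO loss is the value gap of the item selected under c_hat.
    return float(np.max(c) - c[int(np.argmax(c_hat))])

c = np.array([4.9, 5.1])        # true item values
magenta = np.array([3.0, 6.0])  # correct ordering, large l2 error
orange = np.array([5.1, 4.9])   # wrong ordering, small l2 error

for name, c_hat in [("magenta", magenta), ("orange", orange)]:
    print(name,
          "l2 =", round(float(np.linalg.norm(c_hat - c)), 3),
          "SPO =", round(spo_loss(c_hat, c), 3))
# magenta: l2 = 2.102, SPO = 0.0; orange: l2 = 0.283, SPO = 0.2
```

An l2-trained model prefers the orange prediction, yet only the magenta one recovers the optimal decision, which is exactly the misalignment the figure illustrates.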
However, the SPO loss of most PTO problems is piecewise flat due to the combinatorial structure of the objective function U_c(z) and the solution map z*(c). As a result, its derivative with respect to the predicted PTO parameter ĉ is either zero or nonexistent, which prohibits the use of prediction models trained with gradient descent algorithms. In this paper, we consider a collection of combinatorial optimization problems with the strong ranking property. For this collection of problems, we propose the total group preorder (TGP) loss and its differentiable variant, the approximated total group preorder (ATGP) loss, as the training targets in the prediction stage. The ATGP loss approximates the SPO loss with a smooth shape that is amenable to gradient-based training, as shown in Figure 2c.
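The piecewise-flat behavior is easy to verify by finite differences on the same toy instance; the smooth pairwise surrogate below is only an illustrative stand-in for the ATGP loss, not the paper's construction:

```python
import numpy as np

def spo_loss(c_hat, c):
    # "Pick the best item": U_c(z) = c[z], z*(c) = argmax(c).
    return float(np.max(c) - c[int(np.argmax(c_hat))])

def smooth_pair_loss(c_hat, c, beta=5.0):
    # Differentiable ordering penalty: tanh approximates the sign of the gap.
    return float((np.tanh(beta * (c_hat[0] - c_hat[1]))
                  - np.tanh(beta * (c[0] - c[1]))) ** 2)

c = np.array([4.9, 5.1])
c_hat = np.array([5.05, 4.95])  # prediction with the wrong ordering
eps = 1e-6
e0 = np.array([1.0, 0.0])

# Finite-difference "gradients" with respect to the first coordinate:
g_spo = (spo_loss(c_hat + eps * e0, c) - spo_loss(c_hat, c)) / eps
g_smooth = (smooth_pair_loss(c_hat + eps * e0, c) - smooth_pair_loss(c_hat, c)) / eps
print(g_spo)     # 0.0: the SPO loss is flat here, useless for gradient descent
print(g_smooth)  # nonzero: the smooth surrogate provides a descent direction
```

Gradient descent on the SPO loss would never move this prediction, while the smooth surrogate pushes it toward the correct ordering.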
This paper aims to solve combinatorial optimization problems with unknown parameters that need to be predicted from observations. It proposes a total group preorder loss and its differentiable version, the approximated total group preorder loss, for predict-then-optimize (PTO) problems with the strong ranking property. It studies a very interesting problem.
This paper studies a setting where the goal is to solve an optimization problem, but key parameters of the optimization problem (like a road network's edge weights) are unknown and can only be predicted from historical data. This line of research is called "predict then optimize." For the prediction step, the first thing one might try would be to minimize a standard loss function like the l2 loss, but this loss function may not necessarily lead the downstream optimization problem to return the highest-quality solutions. This paper studies a way to learn a good loss function for the downstream optimization problem at hand. To give an example, if the optimization problem is to rank a set of elements, even if one can predict the elements' values with low l2-error, the resulting ranking might be totally incorrect. Motivated by this example, this paper's results apply to combinatorial optimization problems that satisfy a "strong ranking property," which essentially means that the optimization problem's solution only depends on group-wise comparisons of the problem's parameters (rather than on the specific values of the parameters themselves). Let $c \in R^d$ be the true parameters and $\hat{c}$ be the learned parameters. The authors propose that the loss function one optimizes should ideally have the form $||sign(L\hat{c}) - sign(Lc)||$, where $L$ is a matrix of size ${2^d \choose 2} \times d$ that allows for all $2^d \choose 2$ group-wise comparisons between all groups of parameters. Since this ideal loss function is computationally intractable, the authors propose replacing $sign$ with $\tanh$. Then, they propose fixing the number of rows of $L$ and parameterizing the matrix using a set of discretized Derivative-of-Gaussian filters, which they optimize using the reinforcement learning algorithm REINFORCE [Williams '92] (the details here are a bit unclear to me).
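The tanh relaxation described here can be sketched in a few lines. The hand-built comparison matrix L and the chosen sharpness beta are hypothetical; the paper instead parameterizes the rows of L with discretized Derivative-of-Gaussian filters and searches over them:

```python
import numpy as np

def dog(t, mu, s):
    # First Derivative-of-Gaussian filter: d/dt exp(-(t - mu)^2 / (2 s^2)).
    return -(t - mu) / s**2 * np.exp(-((t - mu) ** 2) / (2.0 * s**2))

def atgp_like_loss(c_hat, c, L, beta=5.0):
    # Smooth surrogate of ||sign(L c_hat) - sign(L c)||^2: tanh(beta * x)
    # approaches sign(x) as beta grows while staying differentiable.
    return float(np.sum((np.tanh(beta * (L @ c_hat)) - np.tanh(beta * (L @ c))) ** 2))

# Hand-built pairwise comparison rows for d = 3 items; a DoG-generated row
# such as dog(np.arange(3), mu=1.0, s=0.5) plays the same role in the paper.
L = np.array([[1.0, -1.0, 0.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])
c = np.array([4.9, 5.1, 2.0])
print(atgp_like_loss(c, c, L))                              # 0.0: orderings agree
print(atgp_like_loss(np.array([5.1, 4.9, 2.0]), c, L) > 0)  # True: one pair flipped
```

The loss is zero exactly when every group-wise comparison agrees in sign, and it grows smoothly as comparisons flip, which is what makes it usable as a gradient-descent training target.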
They evaluate their algorithm for three problems: a scheduling problem, portfolio optimization, and shortest path. They compare against a number of different predict-then-optimize algorithms from prior research and show that their algorithm has much better losses on the problems studied.
Boosting Randomized Smoothing with Variance Reduced Classifiers
1 INTRODUCTION

Modern deep neural networks are successfully applied to an ever-increasing range of applications, but while they often achieve excellent accuracy on the data distribution they were trained on, they have been shown to be very sensitive to slightly perturbed inputs, called adversarial examples (Biggio et al., 2013; Szegedy et al., 2014). This limits their applicability to safety-critical domains. Heuristic defenses against this vulnerability have been shown to be breakable (Tramèr et al., 2020; Carlini & Wagner, 2017), highlighting the need for provable robustness guarantees. A promising method providing such guarantees for large networks is randomized smoothing (RS) (Cohen et al., 2019). The core idea is to provide probabilistic robustness guarantees with arbitrarily high confidence by adding noise to the input of a base classifier and computing the expected classification over the perturbed inputs using Monte Carlo sampling. The key to obtaining high robust accuracies is a base classifier with low variance with respect to these perturbations that remains consistently accurate even under high levels of noise. Existing works use different regularization and loss terms to encourage such behavior (Salman et al., 2019; Zhai et al., 2020; Jeong & Shin, 2020) but are ultimately all limited by the bias-variance trade-off of individual models. We first show theoretically and then empirically how ensembles can be constructed to significantly reduce this variance component and thereby increase certified radii for individual samples and, consequently, certified accuracy across the whole dataset. We illustrate this in Fig. 1. Ensembles are a well-known tool for reducing classifier variance at the price of increased computation (Hansen & Salamon, 1990). However, in the face of modern architectures (He et al., 2016; Huang et al.
, 2017) allowing the stable training of large models, they have been considered computationally inefficient. Yet, recent work (Wasay & Idreos, 2021) shows ensembles to be more efficient than single monolithic networks in many regimes. In light of this, we develop a theoretical framework analyzing this variance-reducing property of ensembles under the perturbations introduced by RS. Further, we show how this reduced variance can significantly increase the majority class's prediction probability, leading to much larger certified radii than evaluating more perturbations with an individual model. Certification with RS is computationally costly, as the base classifier has to be evaluated many thousands of times. To avoid exacerbating these costs by using ensembles as base models, we develop two techniques: (i) an adaptive sampling scheme for RS, which certifies samples for predetermined certification radii in stages, reducing the mean certification time up to 55-fold, and (ii) a special aggregation mechanism for ensembles that only evaluates the full ensemble on challenging samples, for which there is no consensus among a predefined subset of the constituent models.

Main Contributions. Our key contributions are:
• A novel, theoretically motivated, and statistically sound soft-ensemble scheme for randomized smoothing, reducing perturbation variance and increasing certified radii (§4 and §5).
• A data-dependent adaptive sampling scheme for RS that reduces the sample complexity for predetermined certification radii in a statistically sound manner (§6).
• An extensive evaluation, examining the effects and interactions of ensemble size, training method, and perturbation size. We obtain state-of-the-art results on ImageNet and CIFAR10 for a wide range of settings, including denoised smoothing (§7).

2 RELATED WORK

Adversarial Robustness. Following the discovery of adversarial examples (Biggio et al., 2013; Szegedy et al.
, 2014 ) , defenses aiming to robustify networks were proposed ( Madry et al. , 2018 ) . Particularly relevant to this work are approaches that certify or enforce robustness properties . We consider probabilistic and deterministic approaches . Deterministic certification methods compute the reachable set for given input specifications using convex relaxations ( Singh et al. , 2019 ; Xu et al. , 2020 ) , mixed-integer linear programming ( Tjeng et al. , 2019 ) , semidefinite programming ( Dathathri et al. , 2020 ) , or SMT ( Ehlers , 2017 ) , to reason about properties of the output . To obtain networks amenable to such approaches , specialized training methods have been proposed ( Mirman et al. , 2018 ; Balunovic & Vechev , 2020 ; Xu et al. , 2020 ) . Probabilistic certification ( Li et al. , 2019 ; Lécuyer et al. , 2019 ) introduces noise to the classification process to obtain probabilistic robustness guarantees , allowing the certification of larger models than deterministic methods . We review Randomized Smoothing ( RS ) ( Cohen et al. , 2019 ) in §3 and associated training methods ( Jeong & Shin , 2020 ; Zhai et al. , 2020 ; Salman et al. , 2019 ) in App . G.3 . Orthogonally to training , RS has been extended in numerous ways ( Yang et al. , 2020 ; Zhang et al. , 2020 ; Lee et al. , 2019 ; Dvijotham et al. , 2020 ; Fischer et al. , 2020 ) , which we review in App . B. Ensembles have been extensively analyzed with respect to different aggregation methods ( Kittler et al. , 1998 ; Inoue , 2019 ) , diversification ( Dietterich , 2000 ) , and the reduction of generalization errors ( Tumer & Ghosh , 1996b ; a ) . Randomized Smoothing and ensembles were first combined in Liu et al . ( 2020 ) as ensembles of smoothed classifiers . However , the method does not retain strong certificates for individual inputs ; thus , we consider the work to be in a different setting from ours . We discuss this in App . A . While similar at first glance , Qin et al .
( 2021 ) randomly sample models to an ensemble , evaluating them under noise to obtain an empirical defense against adversarial attacks . They , however , do not provide robustness guarantees . 3 RANDOMIZED SMOOTHING . Here we review the relevant background on Randomized Smoothing ( RS ) as introduced in Cohen et al . ( 2019 ) . We let f : Rd → Rm denote a base classifier that takes a d-dimensional input and produces m numerical scores ( pre-softmax logits ) , one for each class . Further , we let F ( x ) := arg max_q f_q ( x ) denote a function Rd → [ 1 , . . . , m ] that directly outputs the class with the highest score . For a random variable ε ∼ N ( 0 , σ2 I ) , we define a smoothed classifier G : Rd → [ 1 , . . . , m ] as G ( x ) := arg max_c P_{ε ∼ N ( 0 , σ2 I )} ( F ( x + ε ) = c ) . ( 1 ) This classifier G is then robust to adversarial perturbations as follows : Theorem 3.1 ( From Cohen et al . ( 2019 ) ) . Let cA ∈ [ 1 , . . . , m ] , pA , pB ∈ [ 0 , 1 ] . If P ( F ( x + ε ) = cA ) ≥ pA ≥ pB ≥ max_{c ≠ cA} P ( F ( x + ε ) = c ) , then G ( x + δ ) = cA for all δ satisfying ‖δ‖2 < R with R := ( σ / 2 ) ( Φ−1 ( pA ) − Φ−1 ( pB ) ) . Algorithm 1 Certify from ( Cohen et al. , 2019 ) 1 : function CERTIFY ( F , σ , x , n0 , n , α ) 2 : cnts0 ← SAMPLEUNDERNOISE ( F , x , n0 , σ ) 3 : ĉA ← top index in cnts0 4 : cnts ← SAMPLEUNDERNOISE ( F , x , n , σ ) 5 : pA ← LOWERCONFBND ( cnts [ ĉA ] , n , 1 − α ) 6 : if pA > 1/2 then 7 : return prediction ĉA and radius σ Φ−1 ( pA ) 8 : return ABSTAIN For radii R as per Theorem 3.1 , with Φ−1 the inverse Gaussian CDF , showing R ≥ δ for a given x certifies G ( x ) = G ( x′ ) for any x′ ∈ { x + η | ‖η‖2 < δ } . Computing the exact probabilities P ( F ( x + ε ) = c ) is generally intractable . Thus , to allow practical application , CERTIFY ( Cohen et al.
, 2019 ) ( see Algorithm 1 ) utilizes sampling : First , n0 samples to determine the majority class , then n samples to compute a lower bound pA on the success probability with confidence 1 − α via the Clopper-Pearson lemma ( Clopper & Pearson , 1934 ) . If pA > 0.5 , we set pB = 1 − pA and obtain radius R = σ Φ−1 ( pA ) via Theorem 3.1 ; else we abstain ( return ) . See App . F for exact definitions of the sampling and lower bounding procedures . To obtain high certified radii , the base model F must be trained to cope with the added Gaussian noise . To achieve this , several training methods , discussed in App . G.3 , have been introduced . We also see this in Fig . 1 , where various models obtain different pA , and thereby different radii R . 4 RANDOMIZED SMOOTHING FOR ENSEMBLE CLASSIFIERS . In this section , we extend the methods discussed in §3 from single models to ensembles . For a set of k classifiers { f^l : Rd → Rm }_{l=1}^k , we construct an ensemble f̄ via weighted aggregation , f̄ ( x ) = ∑_{l=1}^k w^l γ ( f^l ( x ) ) , where w^l are the weights , γ : Rm → Rm is a post-processing function , and f^l ( x ) are the pre-softmax outputs of an individual model . Soft-voting ( where γ is the identity ) and equal weights w^l = 1/k perform well experimentally ( see App . H.3.1 ) while being mathematically simple . Thus , we consider soft-ensembling via the averaging of the logits : f̄ ( x ) = ( 1 / k ) ∑_{l=1}^k f^l ( x ) ( 2 ) The ensemble f̄ and its corresponding hard-classifier F̄ ( x ) := arg max_q f̄_q ( x ) can be used without further modification as base classifiers for RS . We find that classifiers of identical architecture , trained with the same method but different random seeds , are sufficiently diverse to , when ensembled with k ∈ [ 3 , 50 ] , exhibit a notably reduced variance with respect to the perturbations .
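The soft-ensembling of Eq . ( 2 ) is simply a logit average followed by an arg max . A minimal NumPy sketch ( the toy logits and shapes are illustrative assumptions , not the authors ' code ) :

```python
import numpy as np

def soft_ensemble(logits):
    """Average the pre-softmax logits of k models (Eq. 2).

    logits: array of shape (k, m) -- one row of m class scores per model.
    Returns the averaged logits f_bar(x) of shape (m,).
    """
    return np.mean(logits, axis=0)

def hard_classify(logits):
    """Hard classifier F_bar(x): index of the largest averaged logit."""
    return int(np.argmax(soft_ensemble(logits)))

# Toy example with k = 3 models and m = 4 classes: the individual models
# disagree (model 1 prefers class 1), but the averaged logits settle on class 0.
outs = np.array([[2.0, 1.9, 0.1, 0.0],
                 [1.8, 2.1, 0.2, 0.1],
                 [2.2, 1.7, 0.0, 0.3]])
print(hard_classify(outs))  # class 0 wins on the averaged logits
```

This f̄ can then be passed to CERTIFY unchanged , exactly as the text describes .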
As we will show in the next section , this increases both the true majority class probability pA and its lower confidence bound pA , raising the certified radius as per Theorem 3.1 . 5 VARIANCE REDUCTION VIA ENSEMBLES FOR RANDOMIZED SMOOTHING . We now show how ensembling even similar classifiers f^l ( cf . Eq . ( 2 ) ) reduces their variance over the perturbations introduced in RS and thereby increases their majority class probability pA , making these ensembles f̄ particularly well-suited as base classifiers for RS . To this end , we first model network outputs with general distributions before investigating our theory empirically for Gaussian distributions . We defer algebra to App . C and justify our modeling assumptions empirically in App . D. Our modeling approach is mathematically closely related to Tumer & Ghosh ( 1996a ) , which focuses on analyzing ensembles over the whole dataset . However , we condition on a single input to study the interplay between the stochasticity in training and the random perturbations encountered in RS . Individual Classifier We consider individual classifiers f^l : Rd → Rm and perturbed inputs x + ε for a single arbitrary but fixed x and Gaussian perturbations ε ∼ N ( 0 , σ2 I ) . We model the pre-softmax logits f^l ( x + ε ) =: y^l ∈ Rm as the sum of two random variables y^l = y^l_p + y^l_c . Here , y^l_c corresponds to the classifier ’ s behavior on the unperturbed sample and models the stochasticity in weight initialization and training with random noise augmentation . y^l_p describes the effect of the random perturbations applied during RS . Note that this split will become essential when analyzing the ensembles . We drop the superscript l when discussing an individual classifier to avoid clutter . We model the distribution of the clean component y_c over classifiers with mean c = E_l [ f^l ( x ) ] , the expectation for a fixed sample x over the randomness in the training process , and covariance Σc characterizing this randomness .
We assume the distribution of the perturbation effect y_p to be zero-mean ( following from local linearization and zero-mean perturbations ) and to have a covariance Σp . While Σp might depend on the noise level σ , it is distinct from it . We do not restrict the structure of either covariance matrix and denote Σ_ii = σ_i^2 and Σ_ij = σ_i σ_j ρ_ij . As y_c models the global training effects and y_p models the local behavior under small perturbations , we assume them to be independent . We thus obtain logits y with mean c , the expectation over the training process for the clean sample , and covariance matrix Σ = Σc + Σp , where Σc and Σp encode the stochasticity of the training process and over perturbations , respectively . The classifier prediction F^l ( x ) = arg max_q y_q is not determined by the absolute values of the logits but rather by the differences between them . We thus call the difference between the majority class logit and the others the classification margin . During certification with RS , the first step is to determine the majority class . Without loss of generality , let the class with index 1 be the majority class , leading to the classification margin z_i = y_1 − y_i . Note that if z_i > 0 for all i ≠ 1 , then the majority class logit y_1 is larger than the logits of all other classes y_i . Under the above assumptions , the statistics of the classification margin for a single classifier are : E [ z_i ] = c_1 − c_i and Var [ z_i ] = σ_{p,1}^2 + σ_{p,i}^2 + σ_{c,1}^2 + σ_{c,i}^2 − 2 ρ_{p,1i} σ_{p,1} σ_{p,i} − 2 ρ_{c,1i} σ_{c,1} σ_{c,i} . Ensemble Now , we construct an ensemble of k such classifiers . We use soft-voting ( cf . Eq . ( 2 ) ) to compute the ensemble output ȳ = ( 1 / k ) ∑_{l=1}^k y^l and then the corresponding classification margins z̄_i = ȳ_1 − ȳ_i . By the linearity of expectation we have E [ z̄_i ] = E [ z_i ] = c_1 − c_i . We consider similar classifiers , differing only in the random seed used for training .
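The single-classifier margin statistics above can be checked numerically : for jointly Gaussian logits with covariance Σ = Σc + Σp , the sample variance of z_i = y_1 − y_i should match Σ_11 + Σ_ii − 2 Σ_1i . A sketch with assumed toy covariances ( the concrete numbers are illustrative , not fitted to any model ) :

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class covariances for the clean and perturbation components
# (assumed values for illustration only).
Sigma_c = np.array([[0.5, 0.1], [0.1, 0.4]])
Sigma_p = np.array([[0.3, 0.2], [0.2, 0.6]])
c = np.array([2.0, 1.0])          # mean clean logits; class 1 is the majority

Sigma = Sigma_c + Sigma_p         # independence of y_c and y_p
# Closed-form margin statistics for z = y_1 - y_2:
mean_z = c[0] - c[1]
var_z = Sigma[0, 0] + Sigma[1, 1] - 2 * Sigma[0, 1]

# Monte Carlo check under the Gaussian assumption.
y = rng.multivariate_normal(c, Sigma, size=200_000)
z = y[:, 0] - y[:, 1]
print(mean_z, var_z)              # closed-form values (1.0 and 1.2)
print(z.mean(), z.var())          # empirical values, close to the above
```

The correlation terms −2 ρ σ_1 σ_i in the text are exactly the off-diagonal entry −2 Σ_1i used here .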
Hence , we assume that the correlation between the logits of different classifiers has a similar structure but smaller magnitude than the correlation between logits of one classifier . Correspondingly , we parametrize the covariance between y^i_c and y^j_c for classifiers i ≠ j with ζc Σc and similarly that between y^i_p and y^j_p with ζp Σp for ζc , ζp ∈ [ 0 , 1 ] . Depending on these correlation coefficients ζc and ζp , this model captures the range from no correlation ( ζ = 0 ) to perfect correlation ( ζ = 1 ) . We obtain the following variance : Var [ z̄_i ] = ( ( k + 2 C(k,2) ζp ) / k^2 ) ( σ_{p,1}^2 + σ_{p,i}^2 − 2 ρ_{p,1i} σ_{p,1} σ_{p,i} ) + ( ( k + 2 C(k,2) ζc ) / k^2 ) ( σ_{c,1}^2 + σ_{c,i}^2 − 2 ρ_{c,1i} σ_{c,1} σ_{c,i} ) , where C(k,2) denotes the binomial coefficient and we write the first summand as σ_p^2 ( k ) and the second as σ_c^2 ( k ) . Variance Reduction We can split Var [ z̄_i ] into the components associated with the clean prediction , σ_c^2 ( k ) , and the perturbation effect , σ_p^2 ( k ) , both as functions of the ensemble size k . We now compare these variance terms independently to the corresponding terms of an individual classifier , dropping the subscripts p and c from σ_c^2 ( k ) / σ_c^2 ( 1 ) and σ_p^2 ( k ) / σ_p^2 ( 1 ) as they follow the same structure : σ^2 ( k ) / σ^2 ( 1 ) = ( 1 + ζ ( k − 1 ) ) ( σ_1^2 + σ_i^2 − 2 ρ_{1i} σ_1 σ_i ) / ( k ( σ_1^2 + σ_i^2 − 2 ρ_{1i} σ_1 σ_i ) ) = ( 1 + ζ ( k − 1 ) ) / k → ζ as k → ∞ ( 3 ) We observe that both variance components tend towards their corresponding correlation coefficients ζc and ζp as the ensemble size grows , highlighting the importance of non-identical classifiers ; this is illustrated under a Gaussian assumption ( explained later ) in Fig . 2a . Effect on Success Probability We can compute the probability of an ensemble of k classifiers predicting the majority class by integrating the probability distribution of the classification margin over the orthant1 z̄_i > 0 ∀ i ≥ 2 where majority class 1 is predicted : p_1 := P ( F̄ ( x + ε ) = 1 ) = P ( z̄_i > 0 ∀ 2 ≤ i ≤ m ) = ∫_{ z̄ s.t. z̄_i > 0 ∀ 2 ≤ i ≤ m } P ( z̄ ) dz̄ .
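Eq . ( 3 ) says the variance of each margin component shrinks as ( 1 + ζ ( k − 1 ) ) / k , bottoming out at the inter-classifier correlation ζ . A quick numerical check ( the chosen k , ζ and the equicorrelated construction are illustrative assumptions ) :

```python
import numpy as np

def variance_ratio(k, zeta):
    """Closed-form sigma^2(k) / sigma^2(1) from Eq. (3)."""
    return (1 + zeta * (k - 1)) / k

# Monte Carlo check: build k unit-variance margins with pairwise correlation
# zeta from a shared plus an independent Gaussian component, then average.
rng = np.random.default_rng(0)
k, zeta, n = 10, 0.8, 200_000
shared = rng.standard_normal(n)
indep = rng.standard_normal((k, n))
z = np.sqrt(zeta) * shared + np.sqrt(1 - zeta) * indep   # shape (k, n)
z_bar = z.mean(axis=0)                                   # ensemble margin

print(variance_ratio(k, zeta))   # ~0.82: far from 1/k = 0.1 due to correlation
print(z_bar.var())               # empirical variance, close to the above
```

With ζ = 0 the ratio falls as 1 / k ; with ζ = 1 ensembling does not help at all , matching the limit in Eq . ( 3 ) .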
( 4 ) Assuming Gaussian distributions , we observe the increase in success probability shown in Fig . 2b . Without assuming a specific distribution , we can still lower-bound the success probability using Chebyshev ’ s inequality and the union bound . Given that a mean classification yields the majority class and hence c_1 − c_i > 0 , we let t_i = ( c_1 − c_i ) / σ_i ( k ) and have : p_1 ≥ 1 − ∑_{i=2}^m P ( | z̄_i − ( c_1 − c_i ) | ≥ t_i σ_i ( k ) ) ≥ 1 − ∑_{i=2}^m σ_i ( k )^2 / ( c_1 − c_i )^2 ( 5 ) where σ_i ( k )^2 = σ_{c,i} ( k )^2 + σ_{p,i} ( k )^2 is the variance of the classification margin z̄_i . We observe that as σ_i ( k ) decreases with increasing ensemble size k ( see Eq . ( 3 ) ) , the lower bound on the success probability increases quadratically . The further we are away from the decision boundary , i.e. , the larger c_1 − c_i , the smaller the absolute increase in success probability for the same variance reduction . Given a concrete success probability p_1 , we compute the probability distribution over the certifiable radii reported by CERTIFY for a given confidence α , sample number n , and perturbation variance σ2 ( up to choosing an incorrect majority class ĉA ) as : P ( R = σ Φ−1 ( p ( n_1 , n , α ) ) ) = B ( n_1 , n , p_1 ) , for R > 0 ( 6 ) where B ( s , r , p ) is the probability of drawing s successes in r trials from a Binomial distribution with success probability p , and p ( s , r , α ) is the lower confidence bound on the success probability of a Bernoulli experiment given s successes in r trials with confidence α according to the Clopper-Pearson interval ( Clopper & Pearson , 1934 ) . We illustrate the resulting effect assuming Gaussian distributions in Fig . 2c . 1 n-dimensional equivalent of a quadrant . Empirical Analysis via Gaussian Assumption To investigate our theory , we now assume y^l_c and y^l_p , and hence also z̄_i , to be multivariate Gaussians . This choice is empirically well-fitting ( see App .
D ) and follows from the central limit theorem for y^l_c and from Gaussian perturbations and local linearization for y^l_p . To estimate the free parameters c , Σc , Σp , ζc , and ζp , we evaluate ensembles of up to k = 50 GAUSSIAN-trained ResNet20 models ( for details , see §7 ) at σ = 0.25 . We obtain c and Σc as the mean and covariance of the output on a randomly chosen sample x . Subtracting the clean outputs from those for the perturbed samples , we estimate the covariance matrix Σp . We determine ζc ≈ 0 and ζp ≈ 0.82 as the median ratio of the inter- and intra-classifier covariance . ζc ≈ 0 implies that our models can be treated as independent conditioned on a single fixed and unperturbed input . Plugging these estimates into our model under the Gaussian assumption , we observe a significant decrease in the variance of the classification margin to the runner-up class as the ensemble size k increases ( see Fig . 2a ) . This generally leads to more polarized success probabilities : the majority class ’ s probability is increased , while the other classes ’ probabilities are decreased , because the probability mass concentrates around the mean , and consequently on one side of the decision threshold z̄ = 0 , which determines the success probability via Eq . ( 4 ) . An increase , as in our case ( see Fig . 2b ) , leads to much larger expected certified radii ( see Fig . 2c ) for a given number of sampled perturbations ( here n = 10^3 ) via Eq . ( 6 ) . In contrast , sampling more perturbations will , in the limit , only recover the true success probability . In our example , going from one classifier to an ensemble of 50 increases the expected certified radius by 191 % , while drawing 50 times more perturbations for a single model only yields a 28 % increase ( see Fig . 2c ) . As illustrated in Fig .
3 , these effects are strongest close to decision boundaries ( pA ≪ 1 ) , where small increases in pA impact the certified radius much more than the number of samples n , which dominates at pA ≈ 1 .
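Eq . ( 6 ) can be evaluated directly : weight each possible success count n_1 by its Binomial probability , map it through the Clopper-Pearson lower bound and the radius σ Φ−1 ( · ) , and sum , counting abstentions as radius 0 . A SciPy sketch ( the chosen p_1 , n , σ , and α are illustrative assumptions , not the paper ' s settings ) :

```python
import numpy as np
from scipy.stats import beta, binom, norm

def clopper_pearson_lower(n1, n, alpha):
    """One-sided lower confidence bound for n1 successes in n trials."""
    if n1 == 0:
        return 0.0
    return beta.ppf(alpha, n1, n - n1 + 1)

def expected_radius(p1, n=1000, sigma=0.25, alpha=0.001):
    """Expected certified radius of CERTIFY under true success probability p1
    (Eq. 6), treating abstentions (lower bound <= 1/2) as radius 0."""
    r = 0.0
    for n1 in range(n + 1):
        p_low = clopper_pearson_lower(n1, n, alpha)
        if p_low > 0.5:
            r += binom.pmf(n1, n, p1) * sigma * norm.ppf(p_low)
    return r

# A modest increase in the true success probability translates into a
# disproportionately larger expected certified radius, as Fig. 2c suggests.
print(expected_radius(0.7), expected_radius(0.9))
```

This makes the section ' s point concrete : raising p_1 ( e.g. , via ensembling ) moves the whole Binomial mass toward larger certifiable radii , whereas increasing n only tightens the Clopper-Pearson gap .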
Boosting Randomized Smoothing with Variance Reduced Classifiers
This paper integrates model ensembles with randomized smoothing to improve certified accuracy. The methodology is motivated theoretically by showing the effect of model ensembles on reducing the variance of smoothed classifiers. Moreover, it proposes an adaptive sampling algorithm to reduce the computation required for certification with randomized smoothing. Extensive experiments were conducted on the CIFAR10 and ImageNet datasets.
Boosting Randomized Smoothing with Variance Reduced Classifiers
1 INTRODUCTION . Modern deep neural networks are successfully applied to an ever-increasing range of applications , but while they often achieve excellent accuracy on the data distribution they were trained on , they have been shown to be very sensitive to slightly perturbed inputs , called adversarial examples ( Biggio et al. , 2013 ; Szegedy et al. , 2014 ) . This limits their applicability to safety-critical domains . Heuristic defenses against this vulnerability have been shown to be breakable ( Tramèr et al. , 2020 ; Carlini & Wagner , 2017 ) , highlighting the need for provable robustness guarantees . A promising method providing such guarantees for large networks is randomized smoothing ( RS ) ( Cohen et al. , 2019 ) . The core idea is to provide probabilistic robustness guarantees with arbitrarily high confidence by adding noise to the input of a base classifier and computing the expected classification over the perturbed inputs using Monte Carlo sampling . The key to obtaining high robust accuracies is a base classifier with low variance with respect to these perturbations that remains consistently accurate even under high levels of noise . Existing works use different regularization and loss terms to encourage such behavior ( Salman et al. , 2019 ; Zhai et al. , 2020 ; Jeong & Shin , 2020 ) but are ultimately all limited by the bias-variance trade-off of individual models . We show first theoretically and then empirically how ensembles can be constructed to significantly reduce this variance component and thereby increase certified radii for individual samples , and consequently , certified accuracy across the whole dataset . We illustrate this in Fig . 1 . Ensembles are a well-known tool for reducing classifier variance at the cost of increased computational cost ( Hansen & Salamon , 1990 ) . However , in the face of modern architectures ( He et al. , 2016 ; Huang et al. 
, 2017 ) allowing the stable training of large models , they have been considered computationally inefficient . Yet , recent work ( Wasay & Idreos , 2021 ) shows ensembles to be more efficient than single monolithic networks in many regimes . In light of this , we develop a theoretical framework analyzing this variance-reducing property of ensembles under the perturbations introduced by RS . Further , we show how this reduced variance can significantly increase the majority class's prediction probability , leading to much larger certified radii than evaluating more perturbations with an individual model . Certification with RS is computationally costly as the base classifier has to be evaluated many thousands of times . To avoid exacerbating these costs by using ensembles as base models , we develop two techniques : ( i ) an adaptive sampling scheme for RS , which certifies samples for predetermined certification radii in stages , reducing the mean certification time up to 55-fold , and ( ii ) a special aggregation mechanism for ensembles which only evaluates the full ensemble on challenging samples , for which there is no consensus among a predefined subset of the constituent models .

Main Contributions Our key contributions are :
• A novel , theoretically motivated , and statistically sound soft-ensemble scheme for randomized smoothing , reducing perturbation variance and increasing certified radii ( §4 and §5 ) .
• A data-dependent adaptive sampling scheme for RS that reduces the sample complexity for predetermined certification radii in a statistically sound manner ( §6 ) .
• An extensive evaluation , examining the effects and interactions of ensemble size , training method , and perturbation size . We obtain state-of-the-art results on ImageNet and CIFAR10 for a wide range of settings , including denoised smoothing ( §7 ) .

2 RELATED WORK . Adversarial Robustness Following the discovery of adversarial examples ( Biggio et al. , 2013 ; Szegedy et al.
, 2014 ) , defenses aiming to robustify networks were proposed ( Madry et al. , 2018 ) . Particularly relevant to this work are approaches that certify or enforce robustness properties . We consider probabilistic and deterministic approaches . Deterministic certification methods compute the reachable set for given input specifications using convex relaxations ( Singh et al. , 2019 ; Xu et al. , 2020 ) , mixed-integer linear programming ( Tjeng et al. , 2019 ) , semidefinite programming ( Dathathri et al. , 2020 ) , or SMT ( Ehlers , 2017 ) , to reason about properties of the output . To obtain networks amenable to such approaches , specialized training methods have been proposed ( Mirman et al. , 2018 ; Balunovic & Vechev , 2020 ; Xu et al. , 2020 ) . Probabilistic certification ( Li et al. , 2019 ; Lécuyer et al. , 2019 ) introduces noise to the classification process to obtain probabilistic robustness guarantees , allowing the certification of larger models than deterministic methods . We review Randomized Smoothing ( RS ) ( Cohen et al. , 2019 ) in §3 and associated training methods ( Jeong & Shin , 2020 ; Zhai et al. , 2020 ; Salman et al. , 2019 ) in App . G.3 . Orthogonally to training , RS has been extended in numerous ways ( Yang et al. , 2020 ; Zhang et al. , 2020 ; Lee et al. , 2019 ; Dvijotham et al. , 2020 ; Fischer et al. , 2020 ) , which we review in App . B. Ensembles have been extensively analyzed with respect to different aggregation methods ( Kittler et al. , 1998 ; Inoue , 2019 ) , diversification ( Dietterich , 2000 ) , and the reduction of generalization errors ( Tumer & Ghosh , 1996b ; a ) . Randomized Smoothing and ensembles were first combined in Liu et al . ( 2020 ) as ensembles of smoothed classifiers . However , the method does not retain strong certificates for individual inputs ; thus , we consider the work to be in a different setting from ours . We discuss this in App . A . While similar at first glance , Qin et al .
( 2021 ) randomly sample models to form an ensemble, evaluating them under noise to obtain an empirical defense against adversarial attacks. They, however, do not provide robustness guarantees.

3 RANDOMIZED SMOOTHING . Here we review the relevant background on Randomized Smoothing (RS) as introduced in Cohen et al. (2019). We let f : R^d → R^m denote a base classifier that takes a d-dimensional input and produces m numerical scores (pre-softmax logits), one for each class. Further, we let F(x) := argmax_q f_q(x) denote a function R^d → [1, …, m] that directly outputs the class with the highest score. For a random variable ε ∼ N(0, σ²I), we define a smoothed classifier G : R^d → [1, …, m] as

G(x) := argmax_c P_{ε∼N(0,σ²I)}( F(x + ε) = c ) .    (1)

This classifier G is then robust to adversarial perturbations as follows:

Theorem 3.1 (From Cohen et al. (2019)). Let cA ∈ [1, …, m], pA, pB ∈ [0, 1]. If

P( F(x + ε) = cA ) ≥ pA ≥ pB ≥ max_{c ≠ cA} P( F(x + ε) = c ) ,

then G(x + δ) = cA for all δ satisfying ‖δ‖₂ < R with R := (σ/2)( Φ⁻¹(pA) − Φ⁻¹(pB) ).

Algorithm 1 Certify from (Cohen et al., 2019)
1: function CERTIFY(F, σ, x, n0, n, α)
2:   cnts0 ← SAMPLEUNDERNOISE(F, x, n0, σ)
3:   ĉA ← top index in cnts0
4:   cnts ← SAMPLEUNDERNOISE(F, x, n, σ)
5:   pA ← LOWERCONFBND(cnts[ĉA], n, 1 − α)
6:   if pA > 1/2 then
7:     return prediction ĉA and radius σ Φ⁻¹(pA)
8:   return ABSTAIN

For radii R as per Theorem 3.1 with inverse Gaussian CDF Φ⁻¹ and any x, showing R ≥ δ certifies G(x) = G(x′) for any x′ ∈ { x + η | ‖η‖₂ < δ }. Computing the exact probabilities P( F(x + ε) = c ) is generally intractable. Thus, to allow practical application, CERTIFY (Cohen et al.
, 2019) (see Algorithm 1) utilizes sampling: first, n0 samples to determine the majority class, then n samples to compute a lower bound pA to the success probability with confidence 1 − α via the Clopper-Pearson lemma (Clopper & Pearson, 1934). If pA > 0.5, we set pB = 1 − pA and obtain radius R = σ Φ⁻¹(pA) via Theorem 3.1; else we abstain (return). See App. F for exact definitions of the sampling and lower bounding procedures. To obtain high certified radii, the base model F must be trained to cope with the added Gaussian noise. To achieve this, several training methods, discussed in App. G.3, have been introduced. We also see this in Fig. 1, where various models obtain different pA, and thereby different radii R.

4 RANDOMIZED SMOOTHING FOR ENSEMBLE CLASSIFIERS . In this section, we extend the methods discussed in §3 from single models to ensembles. For a set of k classifiers { f^l : R^d → R^m }_{l=1}^k, we construct an ensemble f̄ via weighted aggregation, f̄(x) = Σ_{l=1}^k w^l γ(f^l(x)), where w^l are the weights, γ : R^m → R^m is a post-processing function, and f^l(x) are the pre-softmax outputs of an individual model. Soft-voting (where γ is the identity) and equal weights w^l = 1/k perform well experimentally (see App. H.3.1) while being mathematically simple. Thus, we consider soft-ensembling via averaging of the logits:

f̄(x) = (1/k) Σ_{l=1}^k f^l(x)    (2)

The ensemble f̄ and its corresponding hard classifier F̄(x) := argmax_q f̄_q(x) can be used without further modification as base classifiers for RS. We find that classifiers of identical architecture and trained with the same method but different random seeds are sufficiently diverse to, when ensembled with k ∈ [3, 50], exhibit a notably reduced variance with respect to the perturbations.
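To make the procedure concrete, the sketch below reimplements the two computational primitives just described — the Clopper-Pearson lower bound (here inverted by bisection on the exact binomial tail) and the reported radius σ Φ⁻¹(pA) — together with the soft-voting aggregation of Eq. (2). It uses only the Python standard library; the function names are ours, not from the authors' code.

```python
import math
from statistics import NormalDist

def soft_ensemble(logits_per_model):
    """Soft-voting per Eq. (2): average the pre-softmax logits of k models."""
    k = len(logits_per_model)
    m = len(logits_per_model[0])
    return [sum(model[q] for model in logits_per_model) / k for q in range(m)]

def clopper_pearson_lower(n1, n, alpha, tol=1e-10):
    """One-sided Clopper-Pearson lower bound: the p with P(X >= n1 | n, p) = alpha,
    found by bisection on the exact binomial upper tail."""
    if n1 == 0:
        return 0.0
    def upper_tail(p):
        return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n1, n + 1))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if upper_tail(mid) < alpha:
            lo = mid  # tail still too small: the bound lies above mid
        else:
            hi = mid
    return lo

def certified_radius(n1, n, alpha, sigma):
    """Radius reported by CERTIFY: sigma * Phi^-1(pA_lower), or None (abstain)."""
    pa = clopper_pearson_lower(n1, n, alpha)
    return sigma * NormalDist().inv_cdf(pa) if pa > 0.5 else None
```

Here `clopper_pearson_lower` plays the role of LOWERCONFBND in Algorithm 1; the bisection is a simple stand-in for the usual beta-quantile form of the interval.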
As we will show in the next section, this increases both the true majority class probability pA and its lower confidence bound pA, raising the certified radius as per Theorem 3.1.

5 VARIANCE REDUCTION VIA ENSEMBLES FOR RANDOMIZED SMOOTHING . We now show how ensembling even similar classifiers f^l (cf. Eq. (2)) reduces their variance over the perturbations introduced in RS and thereby increases their majority class probability pA, making these ensembles f̄ particularly well-suited as base classifiers for RS. To this end, we first model network outputs with general distributions before investigating our theory empirically for Gaussian distributions. We defer the algebra to App. C and justify our modeling assumptions empirically in App. D. Our modeling approach is mathematically closely related to Tumer & Ghosh (1996a), which focuses on analyzing ensembles over the whole dataset. However, we condition on a single input to study the interplay between the stochasticity in training and the random perturbations encountered in RS.

Individual Classifier We consider individual classifiers f^l : R^d → R^m and perturbed inputs x + ε for a single arbitrary but fixed x and Gaussian perturbations ε ∼ N(0, σ²I). We model the pre-softmax logits f^l(x + ε) =: y^l ∈ R^m as the sum of two random variables, y^l = y^l_p + y^l_c. Here, y^l_c corresponds to the classifier's behavior on the unperturbed sample and models the stochasticity in weight initialization and training with random noise augmentation. y^l_p describes the effect of the random perturbations applied during RS. Note that this split will become essential when analyzing the ensembles. We drop the superscript l when discussing an individual classifier to avoid clutter. We model the distribution of the clean component y_c over classifiers with mean c = E_l[f^l(x)], the expectation for a fixed sample x over the randomness in the training process, and covariance Σ_c characterizing this randomness.
We assume the distribution of the perturbation effect y_p to be zero-mean (following from local linearization and zero-mean perturbations) and to have a covariance Σ_p. While Σ_p might depend on the noise level σ, it is distinct from it. We do not restrict the structure of either covariance matrix and denote Σ_ii = σ²_i and Σ_ij = σ_i σ_j ρ_ij. As y_c models the global training effects and y_p models the local behavior under small perturbations, we assume them to be independent. We thus obtain logits y with mean c, the expectation over the training process for the clean sample, and covariance matrix Σ = Σ_c + Σ_p, where Σ_c and Σ_p encode the stochasticity of the training process and over perturbations, respectively. The classifier prediction F^l(x) = argmax_q y_q is not determined by the absolute values of the logits but rather by the differences between them. We thus call the difference between the majority class logit and the others the classification margin. During certification with RS, the first step is to determine the majority class. Without loss of generality, let the class with index 1 be the majority class, leading to the classification margin z_i = y_1 − y_i. Note that if z_i > 0 for all i ≠ 1, then the majority class logit y_1 is larger than the logits of all other classes y_i. Under the above assumptions, the statistics of the classification margin for a single classifier are:

E[z_i] = c_1 − c_i
Var[z_i] = σ²_{p,1} + σ²_{p,i} + σ²_{c,1} + σ²_{c,i} − 2 ρ_{p,1i} σ_{p,1} σ_{p,i} − 2 ρ_{c,1i} σ_{c,1} σ_{c,i}

Ensemble Now, we construct an ensemble of k such classifiers. We use soft-voting (cf. Eq. (2)) to compute the ensemble output ȳ = (1/k) Σ_{l=1}^k y^l and then the corresponding classification margins z̄_i = ȳ_1 − ȳ_i. By the linearity of expectation we have E[z̄_i] = E[z_i] = c_1 − c_i. We consider similar classifiers, differing only in the random seed used for training.
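The closed-form single-classifier margin statistics above can be written down directly; a minimal sketch (argument names are ours):

```python
def margin_stats(c1, ci, sp1, spi, rho_p, sc1, sci, rho_c):
    """Mean and variance of the single-classifier margin z_i = y_1 - y_i,
    per the formulas above (argument names are ours, not the paper's)."""
    mean = c1 - ci
    var = (sp1**2 + spi**2 - 2 * rho_p * sp1 * spi
           + sc1**2 + sci**2 - 2 * rho_c * sc1 * sci)
    return mean, var

# perfectly correlated, equal-scale logits: the margin variance vanishes
print(margin_stats(2.0, 1.0, 0.5, 0.5, 1.0, 0.4, 0.4, 1.0))  # -> (1.0, 0.0)
```

With uncorrelated logits (ρ = 0) the two class variances simply add, matching the independence assumption between y_c and y_p.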
Hence, we assume that the correlation between the logits of different classifiers has a similar structure but smaller magnitude than the correlation between logits of one classifier. Correspondingly, we parametrize the covariance between y^i_c and y^j_c for classifiers i ≠ j with ζ_c Σ_c, and similarly between y^i_p and y^j_p with ζ_p Σ_p, for ζ_c, ζ_p ∈ [0, 1]. Depending on these correlation coefficients ζ_c and ζ_p, this model captures the range from no correlation (ζ = 0) to perfect correlation (ζ = 1). We obtain the following variance:

Var[z̄_i] = [(k + 2(k choose 2) ζ_p) / k²] (σ²_{p,1} + σ²_{p,i} − 2 ρ_{p,1i} σ_{p,1} σ_{p,i})
         + [(k + 2(k choose 2) ζ_c) / k²] (σ²_{c,1} + σ²_{c,i} − 2 ρ_{c,1i} σ_{c,1} σ_{c,i})

where the first and second summands are denoted σ²_p(k) and σ²_c(k), respectively.

Variance Reduction We can split Var[z̄_i] into the components associated with the clean prediction σ²_c(k) and the perturbation effect σ²_p(k), both as functions of the ensemble size k. We now compare these variance terms independently to the corresponding terms of an individual classifier, dropping the subscripts p and c from σ²_c(k)/σ²_c(1) and σ²_p(k)/σ²_p(1) as they follow the same structure:

σ²(k)/σ²(1) = (1 + ζ(k − 1))(σ²_1 + σ²_i − 2 ρ_{1i} σ_1 σ_i) / [k (σ²_1 + σ²_i − 2 ρ_{1i} σ_1 σ_i)] = (1 + ζ(k − 1))/k  →  ζ  as k → ∞    (3)

We observe that both variance components go towards their corresponding correlation coefficients ζ_c and ζ_p as the ensemble size grows, highlighting the importance of non-identical classifiers; this is illustrated under a Gaussian assumption (explained later) in Fig. 2a.

Effect on Success Probability We can compute the probability of an ensemble of k classifiers predicting the majority class by integrating the probability distribution of the classification margin over the orthant¹ z̄_i > 0 ∀ i ≥ 2 where majority class 1 is predicted:

p_1 := P(F̄(x + ε) = 1) = P(z̄_i > 0 ∀ 2 ≤ i ≤ m) = ∫_{z̄ : z̄_i > 0 ∀ 2 ≤ i ≤ m} P(z̄) dz̄ .
(4)

Assuming Gaussian distributions, we observe the increase in success probability shown in Fig. 2b. Without assuming a specific distribution, we can still lower-bound the success probability using Chebyshev's inequality and the union bound. Given that a mean classification yields the majority class and hence c_1 − c_i > 0, we let t_i = (c_1 − c_i)/σ_i(k) and have:

p_1 ≥ 1 − Σ_{i=2}^m P(|z̄_i − (c_1 − c_i)| ≥ t_i σ_i(k)) ≥ 1 − Σ_{i=2}^m σ_i(k)² / (c_1 − c_i)²    (5)

where σ_i(k)² = σ_{c,i}(k)² + σ_{p,i}(k)² is the variance of the classification margin z̄_i. We observe that as σ_i(k) decreases with increasing ensemble size k (see Eq. (3)), the lower bound to the success probability increases quadratically. The further we are from the decision boundary, i.e., the larger the margin c_1 − c_i, the smaller the absolute increase in success probability for the same variance reduction. Given a concrete success probability p_1, we compute the probability distribution over the certifiable radii reported by CERTIFY for a given confidence α, sample number n, and perturbation variance σ² (up to choosing an incorrect majority class ĉA) as:

P(R = σ Φ⁻¹(p(n_1, n, α))) = B(n_1, n, p_1), for R > 0    (6)

where B(s, r, p) is the probability of drawing s successes in r trials from a Binomial distribution with success probability p, and p(s, r, α) is the lower confidence bound on the success probability of a Bernoulli experiment given s successes in r trials with confidence α, according to the Clopper-Pearson interval (Clopper & Pearson, 1934). We illustrate the resulting effect assuming Gaussian distributions in Fig. 2c.

¹ The n-dimensional equivalent of a quadrant.

Empirical Analysis via Gaussian Assumption To investigate our theory, we now assume y^l_c and y^l_p, and hence also z̄_i, to be multivariate Gaussians. This choice is empirically well-fitting (see App.
D) and follows from the central limit theorem for y^l_c and from Gaussian perturbations and local linearization for y^l_p. To estimate the free parameters c, Σ_c, Σ_p, ζ_c, and ζ_p, we evaluate ensembles of up to k = 50 GAUSSIAN-trained ResNet20s (for details, see §7) at σ = 0.25. We obtain c and Σ_c as the mean and covariance of the output on a randomly chosen sample x. Subtracting the clean outputs from those for the perturbed samples, we estimate the covariance matrix Σ_p. We determine ζ_c ≈ 0 and ζ_p ≈ 0.82 as the median ratio of the inter- and intra-classifier covariance. ζ_c ≈ 0 implies that our models can be treated as independent conditioned on a single fixed and unperturbed input. Plugging these estimates into our model under the Gaussian assumption, we observe a significant decrease in the variance of the classification margin to the runner-up class as the ensemble size k increases (see Fig. 2a). This generally leads to more polarized success probabilities: the majority class's probability is increased, while the other classes' probabilities are decreased, because the probability mass concentrates around the mean, and consequently on one side of the decision threshold z̄ = 0, which determines the success probability via Eq. (4). An increase, as in our case (see Fig. 2b), leads to much larger expected certified radii (see Fig. 2c) for a given number of sampled perturbations (here n = 10³) via Eq. (6). In contrast, sampling more perturbations will, in the limit, only recover the true success probability. In our example, going from one classifier to an ensemble of 50 increases the expected certified radius by 191%, while drawing 50 times more perturbations for a single model only yields a 28% increase (see Fig. 2c). As illustrated in Fig.
3, these effects are strongest close to decision boundaries (pA ≪ 1), where small increases in pA impact the certified radius much more than the number of samples n, which dominates at pA ≈ 1.
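A short sketch ties Eq. (3) and Eq. (5) together. For simplicity it uses a single common correlation coefficient ζ in place of the separate ζ_c and ζ_p; with the empirically estimated ζ_p ≈ 0.82, the variance ratio floors at 0.82 rather than 0.

```python
def variance_ratio(k, zeta):
    """Eq. (3): sigma^2(k) / sigma^2(1) = (1 + zeta * (k - 1)) / k."""
    return (1 + zeta * (k - 1)) / k

def chebyshev_success_bound(margins, variances, k, zeta):
    """Eq. (5) lower bound on p1: each single-model margin variance is shrunk
    by the ensemble ratio of Eq. (3). Simplification: one common zeta instead
    of separate zeta_c and zeta_p."""
    r = variance_ratio(k, zeta)
    return 1 - sum(r * v / m**2 for m, v in zip(margins, variances))

print(variance_ratio(50, 0.82))                        # (1 + 0.82 * 49) / 50
print(chebyshev_success_bound([2.0], [1.0], 1, 0.0))   # 1 - 1/4 = 0.75
print(chebyshev_success_bound([2.0], [1.0], 50, 0.0))  # 1 - (1/50)/4
```

For uncorrelated models (ζ = 0) the bound improves at the 1/k rate; for correlated models it saturates, which is why more perturbation samples cannot substitute for ensembling.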
In this paper, the authors propose using the aggregation of an ensemble of similar models as the base classifier in randomized smoothing (RS). They show that the use of ensembles helps reduce the variance of the base classifier under noisy inputs, thus improving the performance of RS. Both theoretical arguments and numerical experiments are included to support their idea. Further, the authors provide practical algorithms that significantly reduce the computational costs of their method.
Self-Supervision Enhanced Feature Selection with Correlated Gates
1 INTRODUCTION . High-dimensional datasets are increasingly prevalent in a variety of fields , including critical domains such as medicine and biology . Discovering features responsible for the target variable is an essential but challenging step in the analysis of such data . For example , while next generation sequencing can detect the expression of tens of thousands of genes per sample ( e.g. , RNA-seq ) , many genetic disorders stem from the variation in only a few groups of related genes ( Jackson et al. , 2018 ) . Identification of such disease-related factors is crucial for the design of therapeutic treatments . Furthermore , feature selection has additional benefits , including reduced experimental costs , greater interpretability , and improved model generalization ( Min et al. , 2014 ; Ribeiro et al. , 2016 ) . However , the high-dimensional and often noisy nature of such data prevents relevant features from being readily discovered . Moreover , in many domains there is a scarcity of label information due to cost , privacy reasons , or experimental study design . While deep learning approaches have resulted in improvements in feature selection ( e.g. , Yamada et al . ( 2020 ) ; Lemhadri et al . ( 2021 ) , see Related Work ) , such methods typically assume access to many labeled samples . With low sample size , feature selection methods , in particular those employing deep learning , have shown a propensity to overfit high-dimensional data ( Kuncheva et al. , 2020 ) . This issue is further exacerbated by the inherent structure of such data . Many ( high-dimensional ) datasets exhibit substantial inter-correlations or multicollinearity – i.e. , there exist features that are ( highly ) correlated among themselves – which impacts the performance of feature selection methods ( Chong & Jun , 2005 ; Katrutsa & Strijov , 2017 ; Belsley et al. , 2005 ) . 
In particular , this structure can cause parameter estimates to be unstable ( Dobson & Barnett , 2018 ) and , in the extreme , can prevent variable effects from being separated , confounding the problem of feature selection ( Meloun et al. , 2002 ) . Existing deep learning-based methods for feature selection do not ( explicitly ) consider the correlations between features , which will often result in the selection of redundant features . When labeled samples are scarce , large numbers of unlabeled samples are often available ( e.g. , Perez-Riverol et al . ( 2019 ) ) . However , the importance of unlabeled data has been overlooked in feature selection despite its potential to prevent the model selection process from overfitting by allowing informative feature representations to be learned . Therefore , the development of methods for feature selection that can exploit both labeled and unlabeled data is of great practical importance . Contributions . We propose a novel method for feature selection that addresses two key challenges in real-world scenarios : limited labeled data and correlated features . We use a self-supervised approach to train an encoder using unlabeled data via two pretext tasks : feature vector reconstruction and gate vector estimation . This pre-conditions the encoder to learn informative representations from partial feature sets , aligning the self-supervision with the model selection process of the downstream feature selection task . In addition , we introduce a novel gating procedure that accounts for the correlation structure of the input features . This ensures the pretext tasks remain challenging by preventing the model from memorizing trivial relations between features . Moreover , unlike previous deep learning-based feature selection methods , the correlated gate vectors encourage our method to select the most relevant features by making multiple correlated features compete against each other .
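The two pretext tasks can be sketched as follows. This is an illustration under our own assumptions (gated-off features replaced by their means, mean-squared reconstruction error, cross-entropy on the gate estimates), not the authors' exact architecture or losses; `reconstruct` and `predict_gate` stand in for the two heads on top of the encoder.

```python
import math

def pretext_losses(x, x_bar, m, reconstruct, predict_gate):
    """Self-supervision losses for one unlabeled sample: gate the input, then
    score (i) feature-vector reconstruction and (ii) gate-vector estimation."""
    x_tilde = [mj * xj + (1 - mj) * xbj for xj, mj, xbj in zip(x, m, x_bar)]
    x_hat = reconstruct(x_tilde)    # head 1: recover the full feature vector
    m_hat = predict_gate(x_tilde)   # head 2: which gates were on? (probs)
    recon = sum((a - b) ** 2 for a, b in zip(x_hat, x)) / len(x)
    gate = -sum(mj * math.log(q) + (1 - mj) * math.log(1 - q)
                for mj, q in zip(m, m_hat)) / len(m)
    return recon, gate
```

Training on such partial feature sets is what aligns the encoder with the downstream feature selection step, which also sees only subsets of the features.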
We validate our approach through experiments on synthetic and multiple real-world datasets , including clinical and omics , where only a small number of labeled samples are available . Our model discovers relevant features that provide superior prediction performance compared to state-of-the-art benchmarks , and we corroborate these features with supporting medical and scientific literature . 2 RELATED WORK . Feature Selection Methods . Feature selection is a well-studied problem , with a number of proposed solutions including wrapper ( Kohavi & John , 1997 ) and filter ( Liu & Setiono , 1996 ; Kira & Rendell , 1992 ) methods . Recent advances in deep learning provide an elegant way of training embedded feature selection methods by jointly learning to perform feature selection while training a prediction network ( Huang et al. , 2020 ) . These methods learn to perform the non-differentiable process of selecting feature subsets by approximating it either with Lasso or elastic net penalization ( Li et al. , 2016 ) , using an MCMC sampling approach ( Liang et al. , 2018 ) , or more recently with continuous relaxation using independent Gaussian random variables ( Yamada et al. , 2020 ) . However , supervised feature selection methods can fail to identify relevant features when limited labeled samples are available due to overfitting ( Kuncheva et al. , 2020 ) , impacting their suitability in many real-world scenarios . Moreover , these methods do not consider the underlying correlation structure of the input features which can be problematic when selecting relevant features among ( highly ) correlated ones . Relatively few feature selection methods use unlabeled samples . Abid et al . ( 2019 ) used autoencoders to identify a pre-specified number of features that are sufficient for reconstructing the data . Lindenbaum et al . ( 2020 ) improved the well-known Laplacian score ( He et al. 
, 2005 ) by selecting feature subsets that better capture the “ local structure ” of the data . However , both approaches are fully unsupervised and , without the guidance of label information , can fail to identify features relevant to the target outcome . While a number of semi-supervised feature selection methods have been proposed ( Sheikhpour et al. , 2017 ) , they have typically been extensions of traditional methods , such as a Laplacian score with modified affinity scores using labels ( Zhao et al. , 2008 ) or manifold regularization based on linear SVMs ( Dai et al. , 2013 ) . To the best of our knowledge , this is the first deep learning framework that fully utilizes both labeled and unlabeled samples for feature selection . Self-Supervised Learning . Self-supervised learning methods create ( weak ) supervised signals from unlabeled data , employing contrastive learning or pretext task ( s ) to provide surrogate labels . While self-supervised learning has found success in computer vision ( Chen et al. , 2020 ) and natural language processing ( Devlin et al. , 2018 ) , the tabular domain , which is most relevant in feature selection , has been largely neglected . Methods for tabular data seek to reconstruct data either based on a corrupted sample alone ( Vincent et al. , 2008 ; Arik & Pfister , 2019 ; Yin et al. , 2020 ) or with knowledge of which entries have been corrupted ( Pathak et al. , 2016 ) . Recently , Yoon et al . ( 2020 ) jointly learn to recover the original sample and predict the mask vector used to corrupt the data . While our pretext tasks are also reconstruction-based , we propose a novel method for generating the gate vector used to produce the input feature vector . This is particularly important when there exists substantial correlation between features ( see Section 5 ) . 
All existing methods have used independent, uniform sampling; however, this allows highly correlated features to be readily reconstructed, since it is likely that not all such features will be corrupted. We prevent this by incorporating the correlation structure into the gating procedure. In this work, we employ pretext tasks that mirror the goal of the feature selection process: the input to the encoder is a subset of the features selected at random. Employing partial feature sets for self-supervised learning encourages the encoder to learn better representations of these partial feature sets. This in turn benefits learning both the model selection function and the feature selection step, since they are also trained using partial feature sets.

3 PROBLEM FORMULATION . Let X = (X_1, …, X_p) ∈ 𝒳^p and Y ∈ 𝒴 be random variables for the high-dimensional input features (e.g., gene expressions) and the target outcome (e.g., disease traits), whose realizations are denoted as x = (x_1, …, x_p) and y, respectively. Throughout the paper, we will often use lowercase letters to denote realizations of random variables. Embedded feature selection aims to select a subset S ⊂ [p] of features that are relevant for predicting the target as part of the model selection process. Let ∗ be any point not in 𝒳 and define 𝒳_S = (𝒳 ∪ {∗})^p.¹ Then, given X ∈ 𝒳^p, the selected subset of features can be denoted as X_S ∈ 𝒳_S, where x_{S,k} = x_k if k ∈ S and x_{S,k} = ∗ if k ∉ S. Let f : 𝒳_S → 𝒴 be a function in ℱ that takes as input the subset X_S and outputs Y. Then, selecting relevant features can be achieved by solving the following optimization problem:

minimize_{f ∈ ℱ, S ⊂ [p]}  E_{x,y∼p_{XY}} [ ℓ_Y(y, f(x_S)) ]  subject to |S| ≤ δ    (1)

where δ constrains the number of selected features, and ℓ_Y(y, y′) = −Σ_{c=1}^C y_c log y′_c for C-way classification tasks (i.e., 𝒴 = {1, …, C}) and ℓ_Y(y, y′) = ‖y − y′‖²₂ for regression tasks (i.e.
, 𝒴 = ℝ).² Unfortunately, the combinatorial problem in (1) becomes intractable for high-dimensional data, as the search space increases exponentially with p. Hence, we instead focus on a relaxation by converting the combinatorial search in (1) into a search over the space of possibly correlated binary random variables. It is worth highlighting that unlike existing work (e.g., Yamada et al. (2020); Yoon et al. (2019)), we do not assume independence among these random variables. Let M = (M_1, …, M_p) ∈ {0, 1}^p be binary random variables governed by distribution p_M, whose realization m, referred to as the gate vector, indicates selection of the corresponding features. Then, the selected features given gate vector m can be written as

x̃ := m ⊙ x + (1 − m) ⊙ x̄    (2)

where x̄ = E_{x∼p_X}[x] and ⊙ indicates element-wise multiplication. Here, we replace not-selected features with their means to resolve any issue arising when a feature having zero value (e.g., turned-off genes) has a specific meaning. Finally, we can approximately achieve (1) by jointly learning the model selection f and the gate vector distribution p_M based on the following optimization problem:

minimize_{f, p_M}  E_{x,y∼p_{XY}} E_{m∼p_M} [ ℓ_Y(y, f(x̃)) + β ‖m‖₀ ]    (3)

where f is implemented as a neural network and β is a balancing coefficient that controls the number of features to be selected.

Challenges . In practice (especially in the healthcare domain), there are two main challenges that confound selecting relevant features for predicting the target: (i) inter-correlation or multicollinearity, i.e., the existence of features that are (highly) correlated among themselves, and (ii) the absence of sufficient labeled samples. These challenges make embedded feature selection vulnerable to overfitting. More specifically, model selection (i.e., learning f) and feature selection (i.e., learning p_M) are conducted jointly.
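Eqs. (2) and (3) can be sketched directly; the sum-predictor and the concrete numbers below are hypothetical, chosen only to show the mechanics.

```python
def apply_gates(x, m, x_bar):
    """Eq. (2): x_tilde = m (*) x + (1 - m) (*) x_bar, element-wise."""
    return [mj * xj + (1 - mj) * xbj for xj, mj, xbj in zip(x, m, x_bar)]

def gated_objective(loss, y, f, x, m, x_bar, beta):
    """Inner term of Eq. (3) for one sample and one sampled gate vector m:
    prediction loss on the gated input plus the L0 gate penalty."""
    return loss(y, f(apply_gates(x, m, x_bar))) + beta * sum(m)

# toy usage with a hypothetical sum-predictor and squared error
x_bar = [0.5, 2.0, -1.0]           # feature means
x, m = [1.0, 3.0, 0.0], [1, 0, 1]  # feature 2 gated off -> replaced by its mean
print(apply_gates(x, m, x_bar))    # -> [1.0, 2.0, 0.0]
```

Since m is binary, ‖m‖₀ is simply the number of gates that are on, so larger β drives the learned gate distribution toward sparser selections.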
If either overfits to correlated or noisy irrelevant features , the result is spurious relations and poor prediction . For instance , if the network is biased toward irrelevant features , the selection probability ( i.e. , importance ) of those features will be increased as if those features were “ discriminative ” or “ predictive ” , and vice versa . ¹To enable neural networks to be trained with varying feature subsets , we set ∗ = x̄ . ²For 𝒴 = { 1 , · · · , C } , we will occasionally abuse notation and write y_c to denote the c-th element of the one-hot encoding of y when clear from context .
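As a concrete illustration of Eqs. (2)–(3), the sketch below mean-imputes gated-out features and forms a Monte Carlo estimate of the relaxed objective for a single sample. The toy predictor `f`, the squared loss, and the gate probabilities `pi` are illustrative assumptions; in particular, the gates here are sampled independently for simplicity, whereas the paper's point is precisely to allow correlated gates.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxed_objective(x, y, f, pi, x_bar, beta, n_samples=1000):
    """Monte Carlo estimate of the relaxed objective (3) for one (x, y) pair.
    Gates are sampled as independent Bernoulli(pi) purely for illustration."""
    total = 0.0
    for _ in range(n_samples):
        m = (rng.random(x.shape) < pi).astype(float)      # sample a gate vector m
        x_tilde = m * x + (1 - m) * x_bar                 # Eq. (2): mean-impute gated-out features
        total += (y - f(x_tilde)) ** 2 + beta * m.sum()   # loss + beta * ||m||_0
    return total / n_samples

x, y = np.array([1.0, 2.0, 3.0]), 6.0
f = lambda v: v.sum()          # toy stand-in for the prediction network
obj = relaxed_objective(x, y, f, pi=np.full(3, 0.5), x_bar=np.zeros(3), beta=0.1)
print(obj)
```

With all features gated on, the loss term vanishes and only the sparsity penalty β‖m‖₀ remains, which is what drives the trade-off between prediction and the number of selected features.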
This work aims to use self-supervision to identify the most useful features for downstream tasks , particularly in the context of correlated features . This is an important aspect that is lacking from many feature selection approaches commonly used on the highly correlated datasets of healthcare . This approach , using both feature vector reconstruction and gate vector estimation , appears to provide better results both for true structure estimation on a dataset with known ground truth and for downstream predictive performance on real-world datasets .
SP:f55ff3197220cbadfba540f98997f23f76723c96
Self-Supervision Enhanced Feature Selection with Correlated Gates
1 INTRODUCTION . High-dimensional datasets are increasingly prevalent in a variety of fields , including critical domains such as medicine and biology . Discovering features responsible for the target variable is an essential but challenging step in the analysis of such data . For example , while next generation sequencing can detect the expression of tens of thousands of genes per sample ( e.g. , RNA-seq ) , many genetic disorders stem from the variation in only a few groups of related genes ( Jackson et al. , 2018 ) . Identification of such disease-related factors is crucial for the design of therapeutic treatments . Furthermore , feature selection has additional benefits , including reduced experimental costs , greater interpretability , and improved model generalization ( Min et al. , 2014 ; Ribeiro et al. , 2016 ) . However , the high-dimensional and often noisy nature of such data prevents relevant features from being readily discovered . Moreover , in many domains there is a scarcity of label information due to cost , privacy reasons , or experimental study design . While deep learning approaches have resulted in improvements in feature selection ( e.g. , Yamada et al . ( 2020 ) ; Lemhadri et al . ( 2021 ) , see Related Work ) , such methods typically assume access to many labeled samples . With low sample size , feature selection methods , in particular those employing deep learning , have shown a propensity to overfit high-dimensional data ( Kuncheva et al. , 2020 ) . This issue is further exacerbated by the inherent structure of such data . Many ( high-dimensional ) datasets exhibit substantial inter-correlations or multicollinearity – i.e. , there exist features that are ( highly ) correlated among themselves – which impacts the performance of feature selection methods ( Chong & Jun , 2005 ; Katrutsa & Strijov , 2017 ; Belsley et al. , 2005 ) . 
In particular , this structure can cause parameter estimates to be unstable ( Dobson & Barnett , 2018 ) and , in the extreme , can prevent variable effects from being separated , confounding the problem of feature selection ( Meloun et al. , 2002 ) . Existing deep learning-based methods for feature selection do not ( explicitly ) consider the correlations between features , which often results in the selection of redundant features . ∗Equal contribution . When labeled samples are scarce , large numbers of unlabeled samples are often available ( e.g. , Perez-Riverol et al . ( 2019 ) ) . However , the importance of unlabeled data has been overlooked in feature selection despite its potential to prevent the model selection process from overfitting by allowing informative feature representations to be learned . Therefore , the development of methods for feature selection that can exploit both labeled and unlabeled data is of great practical importance . Contributions . We propose a novel method for feature selection that addresses two key challenges in real-world scenarios : limited labeled data and correlated features . We use a self-supervised approach to train an encoder using unlabeled data via two pretext tasks : feature vector reconstruction and gate vector estimation . This pre-conditions the encoder to learn informative representations from partial feature sets , aligning the self-supervision with the model selection process of the downstream feature selection task . In addition , we introduce a novel gating procedure that accounts for the correlation structure of the input features . This ensures the pretext tasks remain challenging by preventing the model from memorizing trivial relations between features . Moreover , unlike previous deep learning-based feature selection methods , the correlated gate vectors encourage our method to select the most relevant features by making multiple correlated features compete against each other .
We validate our approach through experiments on synthetic and multiple real-world datasets , including clinical and omics , where only a small number of labeled samples are available . Our model discovers relevant features that provide superior prediction performance compared to state-of-the-art benchmarks , and we corroborate these features with supporting medical and scientific literature . 2 RELATED WORK . Feature Selection Methods . Feature selection is a well-studied problem , with a number of proposed solutions including wrapper ( Kohavi & John , 1997 ) and filter ( Liu & Setiono , 1996 ; Kira & Rendell , 1992 ) methods . Recent advances in deep learning provide an elegant way of training embedded feature selection methods by jointly learning to perform feature selection while training a prediction network ( Huang et al. , 2020 ) . These methods learn to perform the non-differentiable process of selecting feature subsets by approximating it either with Lasso or elastic net penalization ( Li et al. , 2016 ) , using an MCMC sampling approach ( Liang et al. , 2018 ) , or more recently with continuous relaxation using independent Gaussian random variables ( Yamada et al. , 2020 ) . However , supervised feature selection methods can fail to identify relevant features when limited labeled samples are available due to overfitting ( Kuncheva et al. , 2020 ) , impacting their suitability in many real-world scenarios . Moreover , these methods do not consider the underlying correlation structure of the input features which can be problematic when selecting relevant features among ( highly ) correlated ones . Relatively few feature selection methods use unlabeled samples . Abid et al . ( 2019 ) used autoencoders to identify a pre-specified number of features that are sufficient for reconstructing the data . Lindenbaum et al . ( 2020 ) improved the well-known Laplacian score ( He et al. 
, 2005 ) by selecting feature subsets that better capture the “ local structure ” of the data . However , both approaches are fully unsupervised and , without the guidance of label information , can fail to identify features relevant to the target outcome . While a number of semi-supervised feature selection methods have been proposed ( Sheikhpour et al. , 2017 ) , they have typically been extensions of traditional methods , such as a Laplacian score with modified affinity scores using labels ( Zhao et al. , 2008 ) or manifold regularization based on linear SVMs ( Dai et al. , 2013 ) . To the best of our knowledge , this is the first deep learning framework that fully utilizes both labeled and unlabeled samples for feature selection . Self-Supervised Learning . Self-supervised learning methods create ( weak ) supervised signals from unlabeled data , employing contrastive learning or pretext task ( s ) to provide surrogate labels . While self-supervised learning has found success in computer vision ( Chen et al. , 2020 ) and natural language processing ( Devlin et al. , 2018 ) , the tabular domain , which is most relevant in feature selection , has been largely neglected . Methods for tabular data seek to reconstruct data either based on a corrupted sample alone ( Vincent et al. , 2008 ; Arik & Pfister , 2019 ; Yin et al. , 2020 ) or with knowledge of which entries have been corrupted ( Pathak et al. , 2016 ) . Recently , Yoon et al . ( 2020 ) jointly learn to recover the original sample and predict the mask vector used to corrupt the data . While our pretext tasks are also reconstruction-based , we propose a novel method for generating the gate vector used to produce the input feature vector . This is particularly important when there exists substantial correlation between features ( see Section 5 ) . 
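The correlated gate vectors discussed above require sampling from a multivariate Bernoulli distribution. One standard construction, shown below as an assumption for illustration rather than the paper's exact parameterization, is a Gaussian copula: draw correlated Gaussians and threshold each coordinate at the quantile matching its desired marginal probability.

```python
import numpy as np
from scipy.stats import norm

def sample_correlated_gates(pi, corr, n, rng):
    """Sample n gate vectors with Bernoulli(pi) marginals whose dependence
    follows a Gaussian copula with latent correlation matrix `corr`."""
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(pi))) @ L.T   # correlated standard normals
    return (z < norm.ppf(pi)).astype(int)         # threshold at the pi-quantiles

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])
corr = np.array([[1.0, 0.9], [0.9, 1.0]])         # two highly correlated features
m = sample_correlated_gates(pi, corr, 10_000, rng)
print(m.mean(axis=0))            # marginals stay near 0.5
print(np.corrcoef(m.T)[0, 1])    # the binary gates inherit strong correlation
```

Correlated features thus tend to be masked (or kept) together, which is exactly what prevents a corrupted feature from being trivially reconstructed from an uncorrupted correlated partner.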
This paper presents a novel feature selection method for tabular data prediction. It tackles two challenges with corresponding novel designs: the labeled-data scarcity issue via an unsupervised self-supervision phase, and the feature-correlation issue via a multivariate Bernoulli gate vector with a learnable correlation matrix. Experiments on one synthetic and two medicine/biology datasets demonstrate its effectiveness.
This work presents some additional tweaks and tricks for using neural networks for feature selection. The main contributions include a parameterized masking function that selects the features and a self-supervised component for pretraining. The overall proposal makes sense, and strong experimental results justify its merits.
On Evaluation Metrics for Graph Generative Models
In image generation , generative models can be evaluated naturally by visually inspecting model outputs . However , this is not always the case for graph generative models ( GGMs ) , making their evaluation challenging . Currently , the standard process for evaluating GGMs suffers from three critical limitations : i ) it does not produce a single score which makes model selection challenging , ii ) in many cases it fails to consider underlying edge and node features , and iii ) it is prohibitively slow to perform . In this work , we mitigate these issues by searching for scalar , domain-agnostic , and scalable metrics for evaluating and ranking GGMs . To this end , we study existing GGM metrics and neural-network-based metrics emerging from generative models of images that use embeddings extracted from a task-specific network . Motivated by the power of certain Graph Neural Networks ( GNNs ) to extract meaningful graph representations without any training , we introduce several metrics based on the features extracted by an untrained random GNN . We design experiments to thoroughly test metrics on their ability to measure the diversity and fidelity of generated graphs , as well as their sample and computational efficiency . Depending on the quantity of samples , we recommend one of two random-GNN-based metrics that we show to be more expressive than pre-existing metrics . While we focus on applying these metrics to GGM evaluation , in practice this enables the ability to easily compute the dissimilarity between any two sets of graphs regardless of domain . 1 INTRODUCTION . Graph generation is a key problem in a wide range of domains such as molecule generation ( Samanta et al. , 2020 ; Popova et al. , 2019 ; Li et al. , 2018 ; Kong et al. , 2021 ; Jin et al. , 2020 ) and structure generation ( Bapst et al. , 2019 ; Thompson et al. , 2020 ) . 
An evaluation metric that is capable of accurately measuring the distance between a set of generated and reference graphs is critical for advancing research on graph generative models ( GGMs ) . This is frequently done by comparing empirical distributions of graph statistics such as orbit counts , degree coefficients , and clustering coefficients through Maximum Mean Discrepancy ( MMD ) ( You et al. , 2018 ; Gretton et al. , 2006 ) . While these metrics are capable of making a meaningful comparison between generated and real graphs ( You et al. , 2018 ) , this evaluation method yields a metric for each individual statistic . In addition , recent works have further increased the number of metrics by performing MMD directly with node and edge feature distributions ( Goyal et al. , 2020 ) , or on alternative graph statistics such as graph spectra ( Liao et al. , 2019 ) . While this is not an issue provided there is a primary statistic of interest , all metrics are frequently displayed together to approximate generation quality and evaluate GGMs ( You et al. , 2018 ; Liao et al. , 2019 ) . This process makes it challenging to measure progress as the ranking of generative models may vary between metrics . In addition , the computation of the metrics from You et al . ( 2018 ) can be prohibitively slow ( Liao et al. , 2019 ; O ’ Bray et al. , 2021 ) , and they are based only on graph structure , meaning they do not incorporate edge and node features . Therefore , they are less applicable in specific domains such as molecule generation where such features are essential . This particular limitation has led to the use of the Neighborhood Subgraph Pairwise Distance kernel ( NSPDK ) ( Costa & Grave , 2010 ) in GGM evaluation ( Goyal et al. , 2020 ; Podda & Bacciu , 2021 ; Kawai et al. , 2019 ) as it naturally incorporates edge and node features . However , this metric is still unable to incorporate continuous features in evaluation ( Costa & Grave , 2010 ) . 
Faced with a wide array of metrics and ambiguity regarding when each should be the focus , the community needs robust and scalable standalone metrics that can consistently rank GGMs . While less popular , metrics from image generation literature have been successfully utilized in GGM evaluation . These metrics rely on the use of a task-specific neural network to extract meaningful representations of samples , enabling a more straightforward comparison between generated and reference distributions ( Preuer et al. , 2018 ; Liu et al. , 2019 ; Thompson et al. , 2020 ) . Although these metrics have been validated empirically in the image domain , they are not universally applicable to GGMs . For example , Fréchet Chemnet Distance ( Preuer et al. , 2018 ) uses a language model trained on SMILES strings , rendering it unusable for evaluation of GGMs in other domains . Furthermore , a pretrained GNN can not be applied to datasets with a different number of edge or node labels . Pretraining a GNN for every dataset can be prohibitive , making the use of such metrics in GGM evaluation less appealing than in the more established and standardized image domain . In image generation evaluation , classifiers trained on ImageNet ( Deng et al. , 2009 ) are frequently used to extract image embeddings ( Bińkowski et al. , 2018 ; Heusel et al. , 2017 ; Kynkäänniemi et al. , 2019 ; Xu et al. , 2018 ; Naeem et al. , 2020 ) . While classifiers such as Inception v3 ( Szegedy et al. , 2016 ) are consistently used , recent works have investigated the use of randomly-initialized CNNs with no further training ( hereafter referred to as a random network ) in generative model evaluation . Xu et al . ( 2018 ) ; Naeem et al . ( 2020 ) found that a random CNN performs similarly to ImageNet classifiers on natural images and is superior outside of the natural image domain . 
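FID-style metrics such as the Fréchet ChemNet Distance mentioned above fit a Gaussian to each embedding set and compare the two fits. A minimal sketch of that Fréchet distance follows, with synthetic random embeddings standing in for features extracted by a network:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(E_r, E_g):
    """Frechet distance between Gaussians fit to two embedding sets
    (rows = samples), as used by FID-style metrics."""
    mu_r, mu_g = E_r.mean(axis=0), E_g.mean(axis=0)
    C_r = np.cov(E_r, rowvar=False)
    C_g = np.cov(E_g, rowvar=False)
    covmean = sqrtm(C_r @ C_g).real               # matrix square root of the product
    return float(((mu_r - mu_g) ** 2).sum() + np.trace(C_r + C_g - 2 * covmean))

rng = np.random.default_rng(0)
E_r = rng.normal(0.0, 1.0, size=(500, 8))         # "reference" embeddings
E_g = rng.normal(0.5, 1.0, size=(500, 8))         # mean-shifted "generated" embeddings
d_same = frechet_distance(E_r, E_r)
d_diff = frechet_distance(E_r, E_g)
print(d_same, d_diff)                             # ~0 vs. a clearly larger value
```

The metric is zero only when the two fitted Gaussians coincide, so both a mean shift (poor fidelity) and a covariance shrinkage (poor diversity) increase the score.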
In the graph domain, random GNNs have been shown to extract meaningful features that solve downstream graph tasks without training (Kipf & Welling, 2017; Morris et al., 2019; Xu et al., 2019). However, the applicability of random GNNs to the evaluation of GGMs remains unexplored. In this work, we aim to identify one or more scalar metrics that accurately measure the dissimilarity between two sets of graphs, simplifying the ranking of GGMs regardless of domain. We tackle this problem by exploring the use of random GNNs in the evaluation of GGMs with metrics that were developed in the image domain. In addition, we perform an objective evaluation of a large number of possible evaluation metrics. We design experiments to thoroughly test each metric on its ability to measure the diversity and fidelity (realism) of generated graphs, as well as its sample and computational efficiency. We study three families of metrics: existing GGM evaluation metrics based on graph statistics and graph kernels, which we call classical metrics; image-domain metrics using a random GNN; and image-domain metrics using a pretrained GNN. We aim to answer the following questions empirically: (Q1) What are the strengths and limitations of each metric? (Q2) Is pretraining a GNN necessary to accurately evaluate GGMs with image-domain metrics? (Q3) Is there a strong scalar and domain-agnostic metric for evaluating and ranking GGMs? Addressing these questions revealed several surprising findings that directly impact the future of GGM evaluation. For example, regarding Q1, we identify a failure mode of the classical metrics: they are poor at measuring the diversity of generated graphs. Consequently, we find several metrics that are more expressive. In terms of Q2, we determine that pretraining is unnecessary for utilizing neural-network-based (NN-based) metrics.
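The idea of extracting graph representations with an untrained network can be sketched concretely. Below is a minimal, hypothetical NumPy sketch of a random GIN-style message-passing network; the function name, layer sizes, and sum-pooling readout are illustrative assumptions, not the exact architecture studied in the paper:

```python
import numpy as np

def random_gnn_embedding(adj, feats, hidden=16, layers=3, seed=0):
    """Graph-level embedding from an untrained GIN-style network (sketch).

    adj:   (n, n) adjacency matrix of one graph
    feats: (n, d) initial node features
    Returns a fixed-size vector obtained by sum-pooling node states
    after each random message-passing layer.
    """
    rng = np.random.default_rng(seed)
    h = feats
    readout = []
    for _ in range(layers):
        # Random (untrained) weights, re-drawn deterministically from the seed.
        W = rng.standard_normal((h.shape[1], hidden)) / np.sqrt(h.shape[1])
        # GIN-style aggregation: node state plus neighbour sum, then a
        # random linear map and ReLU nonlinearity -- no training involved.
        h = np.maximum((h + adj @ h) @ W, 0.0)
        readout.append(h.sum(axis=0))          # sum-pool nodes per layer
    return np.concatenate(readout)             # concatenate layer readouts

# Identical graphs get identical embeddings under a fixed seed.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
x = np.ones((3, 4))
e1 = random_gnn_embedding(tri, x)
e2 = random_gnn_embedding(tri, x)
print(np.allclose(e1, e2))  # True
```

Because the weights are fixed by the seed, the embedding is a deterministic function of the graph, so two sets of graphs can be compared by any distance on the resulting vectors.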
Regarding Q3, we find two scalar metrics that are ideal for evaluating and ranking GGMs in certain scenarios; they are scalable, powerful, and easily incorporate continuous or discrete node and edge features. These findings enable computationally inexpensive and domain-agnostic GGM evaluation.

2 BACKGROUND & RELATED WORK

Evaluating generative models in any domain is a notoriously difficult task (Theis et al., 2016). However, researchers have found success through the use of sample-based evaluation metrics that estimate the distance $\rho$ between real and generated distributions $P_r$ and $P_g$ by drawing random samples (Heusel et al., 2017; You et al., 2018; Bińkowski et al., 2018). That is, they compute $\hat{\rho}(S_g, S_r) \approx \rho(P_g, P_r)$, with $S_r = \{x_1^r, \ldots, x_m^r\} \sim P_r$ and $S_g = \{x_1^g, \ldots, x_n^g\} \sim P_g$, where $x_i$ is a feature vector extracted from the corresponding graph $G_i$. These metrics are model-agnostic and therefore applicable to all generative models.

2.1 CLASSICAL METRICS

Metrics based on graph statistics (You et al., 2018) are standard in evaluating GGMs (Liao et al., 2019; Dai et al., 2020). These metrics require extracting the clustering coefficient, node degree, and 4-node orbit count histograms¹ that are then used to compute the empirical MMD between generated and reference sets $S_g, S_r$ (Gretton et al., 2006):
$$\mathrm{MMD}(S_g, S_r) := \frac{1}{m^2}\sum_{i,j=1}^{m} k(x_i^r, x_j^r) + \frac{1}{n^2}\sum_{i,j=1}^{n} k(x_i^g, x_j^g) - \frac{2}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m} k(x_i^g, x_j^r), \qquad (1)$$
where $k(\cdot,\cdot)$ is a general kernel function. You et al. (2018) proposed a form of the RBF kernel:
$$k(x_i, x_j) = \exp\!\left(\frac{-d(x_i, x_j)}{2\sigma^2}\right), \qquad (2)$$
where $d(\cdot,\cdot)$ computes a pairwise distance, chosen in that work to be the Earth Mover's Distance (EMD). This yields three metrics, one for each graph statistic.
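Equation 1 can be computed directly from the two feature sets. The following small NumPy sketch implements the biased empirical MMD with a Gaussian RBF kernel; for simplicity, $d$ is taken here to be the squared Euclidean distance rather than the EMD used by You et al.:

```python
import numpy as np

def mmd_rbf(S_g, S_r, sigma=1.0):
    """Biased empirical MMD (Equation 1) with a Gaussian RBF kernel.

    S_g: (n, d) generated feature vectors; S_r: (m, d) reference vectors.
    Here d(x, y) is the squared Euclidean distance (an assumption for
    this sketch; the original work uses EMD between histograms).
    """
    def kernel(A, B):
        # Pairwise squared distances, then k(x, y) = exp(-d / (2 sigma^2)).
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    n, m = len(S_g), len(S_r)
    return (kernel(S_r, S_r).sum() / m**2      # reference self-similarity
            + kernel(S_g, S_g).sum() / n**2    # generated self-similarity
            - 2 * kernel(S_g, S_r).sum() / (n * m))  # cross term
```

For identical sets the three terms cancel exactly, so the estimate is zero; well-separated sets yield a strictly positive value.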
The computational cost of these metrics may be decreased by using the total variation distance as $d(\cdot,\cdot)$ in Equation 1 (Liao et al., 2019). However, this change leads to an indefinite kernel and undefined behaviour (O'Bray et al., 2021). Therefore, we only compute these metrics using EMD (You et al., 2018). In addition, several works (Goyal et al., 2020; Podda & Bacciu, 2021; Kawai et al., 2019) evaluate GGMs by replacing $k(\cdot,\cdot)$ with the Neighborhood Subgraph Pairwise Distance graph kernel (NSPDK). This metric has the benefit of incorporating discrete edge and node features along with the underlying graph structure in evaluation. Similar to the metrics proposed by You et al. (2018), Moreno et al. (2018) extract graph structure properties such as node degree, clustering coefficient, and geodesic distance. However, these properties are then combined into a scalar metric through the Kolmogorov–Smirnov (KS) multidimensional distance (Justel et al., 1997). We exclude KS from our experiments as it is unable to incorporate edge and node features, which is one of the key properties we seek. Finally, note that other domain-specific metrics such as "percentage of valid graphs" exist. Our goal is not to incorporate, eliminate, or evaluate such metrics; they are properties of generated graphs and, unlike the metrics described above, do not provide a comparison to a reference distribution. We believe that such metrics can still provide valuable information in GGM evaluation.
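For one-dimensional histograms on shared integer bins, the EMD in Equation 2 reduces to the L1 norm of the difference of cumulative sums, so the EMD-based kernel can be sketched without a specialized solver. Function names below are illustrative:

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two normalized 1-D histograms
    defined on the same integer bins: the L1 norm of the CDF difference."""
    return np.abs(np.cumsum(h1 - h2)).sum()

def emd_rbf_kernel(h1, h2, sigma=1.0):
    """RBF kernel of Equation 2 with d = EMD (sketch for 1-D histograms,
    e.g. per-graph degree histograms)."""
    return np.exp(-emd_1d(h1, h2) / (2 * sigma ** 2))

# Identical histograms have EMD 0, so the kernel value is exactly 1.
h = np.array([0.2, 0.5, 0.3])
print(emd_rbf_kernel(h, h))  # 1.0
```

Plugging this kernel into the MMD sum of Equation 1 recovers the degree/clustering/orbit metrics of You et al. (2018) for the 1-D histogram case.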
This paper evaluates the effectiveness of different metrics for graph generative models from many perspectives. The authors thoroughly study the following factors: fidelity, diversity, and sensitivity to node/edge features. They find that pre-existing GGM metrics fail to capture the diversity of data, and they identify several random-GIN-based metrics that are more expressive and have low computational cost.
SP:4decc5ce6b1e056289f45b1e2f48e831a82e314f
On Evaluation Metrics for Graph Generative Models
In image generation, generative models can be evaluated naturally by visually inspecting model outputs. However, this is not always the case for graph generative models (GGMs), making their evaluation challenging. Currently, the standard process for evaluating GGMs suffers from three critical limitations: i) it does not produce a single score, which makes model selection challenging; ii) in many cases it fails to consider underlying edge and node features; and iii) it is prohibitively slow to perform. In this work, we mitigate these issues by searching for scalar, domain-agnostic, and scalable metrics for evaluating and ranking GGMs. To this end, we study existing GGM metrics and neural-network-based metrics emerging from generative models of images that use embeddings extracted from a task-specific network. Motivated by the power of certain Graph Neural Networks (GNNs) to extract meaningful graph representations without any training, we introduce several metrics based on the features extracted by an untrained random GNN. We design experiments to thoroughly test metrics on their ability to measure the diversity and fidelity of generated graphs, as well as their sample and computational efficiency. Depending on the quantity of samples, we recommend one of two random-GNN-based metrics that we show to be more expressive than pre-existing metrics. While we focus on applying these metrics to GGM evaluation, in practice this enables easily computing the dissimilarity between any two sets of graphs regardless of domain.

1 INTRODUCTION

Graph generation is a key problem in a wide range of domains such as molecule generation (Samanta et al., 2020; Popova et al., 2019; Li et al., 2018; Kong et al., 2021; Jin et al., 2020) and structure generation (Bapst et al., 2019; Thompson et al., 2020).
The paper shows a detailed comparison of different graph generative model evaluation metrics and highlights that current approaches for the evaluation of GGMs are insufficient and perform poorly in terms of assessing diversity and fidelity of generated samples. The paper pinpoints these issues to the reliance of many metrics on a predefined set of features extracted from the generated graphs. It claims that the efficacy of prior approaches is typically dependent on the graph featurization and that this leads to inconsistencies in the ranking of models when using different feature representations. It proposes to instead use features from randomly initialized Graph Neural Networks as a basis for the analysis and shows that this represents a competitive evaluation approach.
The paper proposes a scalar metric for evaluating Graph Generative Models (GGMs). The metric is based on computing the Maximum Mean Discrepancy (with an RBF kernel) between graph representations of the sampled and real graphs, as extracted from an untrained GIN model. In the paper, the authors analyze several metrics (some used in the literature, some similar to the proposed setup, e.g. replacing MMD), and measure their fidelity (i.e. how sensitive is the metric to random graphs), diversity (i.e. how sensitive it is to mode collapse or mode dropping), sensitivity to node/edge features, sample efficiency (minimum number of graphs necessary to discriminate noise from real samples), and computational efficiency.
Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path
1 INTRODUCTION

Modern deep learning includes paradigmatic procedures that are commonly adopted but not entirely understood. Some examples are multi-layered architectures, stochastic gradient descent, batch normalization, cross-entropy (CE) loss, and training past zero error towards zero loss. Analyzing the properties of these practices is an important research task. In this work, we theoretically investigate the behavior of last-layer features in classification deep nets. In particular, we consider the training of deep neural networks on datasets containing images from $C$ different classes with $N$ examples in each class. After passing the $i$-th example in the $c$-th class through all layers except the last layer of the network, the network outputs last-layer features $h_{i,c} \in \mathbb{R}^P$. The last layer of the network, which for each class $c$ possesses a classifier $w_c \in \mathbb{R}^P$ and bias $b_c \in \mathbb{R}$, then predicts a label for the example using the rule $\operatorname{argmax}_{c'} (\langle w_{c'}, h_{i,c}\rangle + b_{c'})$. The network's performance is evaluated by calculating the error defined by
$$\mathrm{Error} = \operatorname*{Ave}_{i,c} \mathbb{1}\left\{c \neq \operatorname{argmax}_{c'} \left(\langle w_{c'}, h_{i,c}\rangle + b_{c'}\right)\right\},$$
while the weights, biases, and other parameters of the network (those that determine the behavior of the layers before the last layer) are updated by minimizing the CE loss defined by
$$\mathrm{CE} = -\operatorname*{Ave}_{i,c} \log \frac{\exp\{\langle w_c, h_{i,c}\rangle + b_c\}}{\sum_{c'=1}^{C} \exp\{\langle w_{c'}, h_{i,c}\rangle + b_{c'}\}},$$
where $\operatorname{Ave}$ is the operator that averages over its subscript indices. Prior works such as Zhang et al. (2016) and Belkin et al. (2019) have shown that overparameterized classifiers (such as deep nets) can "memorize" their training set without harming performance on unseen test data. Moreover, works such as Soudry et al. (2018) have further shown that continuing to train networks past memorization can still lead to performance improvements¹,².

*Equal contribution. Listed alphabetically.
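As a concrete sketch, the two definitions above can be computed from last-layer features in a few lines of NumPy (the function name and matrix shapes are illustrative assumptions):

```python
import numpy as np

def error_and_ce(W, b, H, labels):
    """Classification error and cross-entropy loss of the last layer.

    W: (C, P) classifiers, b: (C,) biases, H: (P, n) last-layer
    features stacked as columns, labels: (n,) true class indices.
    """
    logits = W @ H + b[:, None]                       # (C, n) scores
    error = (logits.argmax(axis=0) != labels).mean()  # argmax decision rule
    # Cross-entropy via a numerically stable log-sum-exp.
    m = logits.max(axis=0)
    logZ = np.log(np.exp(logits - m).sum(axis=0)) + m
    ce = -(logits[labels, np.arange(len(labels))] - logZ).mean()
    return error, ce
```

Note that the error can reach zero while the CE loss stays strictly positive, which is exactly what permits the Terminal Phase of Training discussed next.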
Papyan, Han, and Donoho (2020) recently examined this setting, referring to the phase during which one trains past zero error towards zero CE loss as the Terminal Phase of Training (TPT). During TPT, they exposed a phenomenon called Neural Collapse (NC).

1.1 NEURAL COLLAPSE

NC is defined relative to the feature global mean,
$$\mu_G = \operatorname*{Ave}_{i,c} h_{i,c},$$
the feature class-means,
$$\mu_c = \operatorname*{Ave}_{i} h_{i,c}, \qquad c = 1, \ldots, C,$$
the feature within-class covariance,
$$\Sigma_W = \operatorname*{Ave}_{i,c} (h_{i,c} - \mu_c)(h_{i,c} - \mu_c)^\top, \qquad (1)$$
and the feature between-class covariance,
$$\Sigma_B = \operatorname*{Ave}_{c} (\mu_c - \mu_G)(\mu_c - \mu_G)^\top.$$
It is characterized by the following four limiting behaviors, where limits take place with increasing training epoch $t$:

(NC1) Within-class variability collapse³: $\Sigma_B^\dagger \Sigma_W \to 0$, where $\dagger$ denotes the Moore-Penrose pseudoinverse.

(NC2) Convergence to Simplex ETF:
$$\frac{\langle \mu_c - \mu_G, \mu_{c'} - \mu_G\rangle}{\|\mu_c - \mu_G\|_2 \|\mu_{c'} - \mu_G\|_2} \to \begin{cases} 1, & c = c' \\ \frac{-1}{C-1}, & c \neq c' \end{cases} \qquad \|\mu_c - \mu_G\|_2 - \|\mu_{c'} - \mu_G\|_2 \to 0 \quad \forall\, c \neq c'.$$

(NC3) Convergence to self-duality:
$$\frac{w_c}{\|w_c\|_2} - \frac{\mu_c - \mu_G}{\|\mu_c - \mu_G\|_2} \to 0.$$

(NC4) Simplification to nearest class center:
$$\operatorname{argmax}_{c'} \langle w_{c'}, h\rangle + b_{c'} \to \operatorname{argmin}_{c'} \|h - \mu_{c'}\|_2.$$

¹A similar phenomenon had been previously observed in boosting (Bartlett et al., 1998).
²An alternative line of work on "early stopping" (Prechelt, 1998; Li et al., 2020; Rice et al., 2020) advocates for terminating the training process early and shows that it could be beneficial when training on noisy data or small datasets. Such datasets are out of the scope of this paper.
³Our characterization of (NC1) is more precise than that given by Papyan, Han, and Donoho (2020), which only states $\Sigma_W \to 0$ for rhetorical simplicity. The convergence of the trace norm of $\Sigma_B^\dagger \Sigma_W$ to zero is the actual quantity measured by Figure 6 of Papyan, Han, and Donoho (2020) when demonstrating (NC1).

The (NC2) property captures convergence to a simple geometric structure called an Equiangular Tight Frame (ETF).
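The NC quantities above are straightforward to measure empirically from a matrix of last-layer features. A minimal NumPy sketch (the function name and return conventions are illustrative):

```python
import numpy as np

def nc_statistics(H, labels):
    """Empirical Neural Collapse statistics.

    H: (n, P) last-layer features (one row per example),
    labels: (n,) class indices.
    Returns the NC1 quantity tr(Sigma_B^+ Sigma_W) and the matrix of
    pairwise cosines between centred class means; under (NC2) the
    off-diagonal cosines approach -1/(C-1).
    """
    classes = np.unique(labels)
    mu = np.stack([H[labels == c].mean(axis=0) for c in classes])  # class means
    mu_G = H.mean(axis=0)                                          # global mean
    centred = H - mu[np.searchsorted(classes, labels)]
    Sigma_W = centred.T @ centred / len(H)          # within-class covariance
    M = mu - mu_G                                   # centred class means
    Sigma_B = M.T @ M / len(classes)                # between-class covariance
    nc1 = np.trace(np.linalg.pinv(Sigma_B) @ Sigma_W)
    norms = np.linalg.norm(M, axis=1)
    cosines = (M @ M.T) / np.outer(norms, norms)
    return nc1, cosines
```

When the features sit exactly at the vertices of a Simplex ETF, `nc1` is zero and every off-diagonal cosine equals $-1/(C-1)$, matching the (NC1) and (NC2) limits.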
An ETF is a collection of vectors {v_c}_{c=1}^C having equal lengths and equal, maximally separated pairwise angles. In classification deep nets, last-layer features are of higher dimension than the number of classes, i.e., P > C. In this setting, the maximal angles are given by

⟨v_c, v_{c'}⟩ / (∥v_c∥_2 ∥v_{c'}∥_2) = { 1 for c = c'; −1/(C−1) for c ≠ c' },

and the ETF is called a Simplex ETF [4]. Observe that as C increases, the Simplex ETF approaches a (partial) orthogonal matrix. Thus, when C is large, this translates to the intuitive notion that classifiers and class-means tend with training towards (near) orthogonality.

1.2 DEEP NET CLASSIFICATION WITH MSE LOSS. While classification deep nets are typically trained with CE loss, Demirkaya et al. (2020) and Hui & Belkin (2020) recently reported that deep nets trained with the squared error (MSE) loss,

L(W, b, H) = (1/2) Ave_{i,c} ∥W h_{i,c} + b − y_{i,c}∥_2^2 + (λ/2)(∥W∥_F^2 + ∥b∥_2^2)   (2)
           = (1/(2CN)) ∥WH + b 1_{CN}^⊤ − Y∥_F^2 + (λ/2)(∥W∥_F^2 + ∥b∥_2^2),

achieve test performance comparable to those trained with CE loss. Above, H ∈ R^{P×CN} and Y ∈ R^{C×CN} are the matrices resulting from stacking [5] together the feature vectors h_{i,c} and one-hot vectors y_{i,c} as their respective columns; W ∈ R^{C×P} is the matrix resulting from stacking the classifiers w_c as rows; b ∈ R^C is the vector resulting from concatenating the scalars {b_c}_{c=1}^C; and 1_{CN} is the length-CN vector of ones. Table 1 in Appendix A shows supplementary measurements affirming the findings of Hui & Belkin (2020) and Demirkaya et al. (2020), i.e., the measurements affirm that MSE-trained networks indeed achieve accuracies on test data comparable to those of CE-trained networks (cf. the analogous Table 1 in Papyan, Han, and Donoho (2020)). The analytically tractable MSE loss offers more mathematical opportunities than the hard-to-analyze CE loss and inspires this paper's main theoretical contributions explicitly characterizing MSE-NC.

1.3 CONTRIBUTIONS.
Our main contributions are as follows:
• We propose a new decomposition of the MSE loss: L = L_NC1 + L_NC2/3 + L_⊥LS. The terms L_NC1 and L_NC2/3 possess interpretations attributable to NC phenomena and assume the classifier W is exactly the least-squares optimal classifier W_LS (relative to the given H); L_⊥LS captures the deviation of W from W_LS. (Section 2)
• We provide empirical measurements of our decomposition (Figure 2) on realistic dataset-network combinations showing that L_⊥LS becomes negligible during training, leading us to define the central path, where L_⊥LS = 0. (Section 2)
• We reveal a key invariance property on the central path. The invariance motivates the examination of a representative set of features, X = Σ_W^{−1/2} H, that we call renormalized features. (Sections 3.1-3.2)
• We study the gradient flow of those renormalized features along the central path and derive exact, closed-form dynamics that imply NC. The dynamics are explicit in terms of the singular value decomposition of the renormalized feature class-means at initialization. (Section 3.3)

[4] Traditional research on ETFs (cf. Strohmer & Heath Jr (2003); Fickus & Mixon (2015)) examines the P ≤ C setting, where C ETF-vectors span their ambient R^P space. However, in the P > C setting of classification deep nets, the vectors cannot span R^P. Following the precedent of Papyan, Han, and Donoho (2020), we interpret ETFs as equal-length and maximally-equiangular vectors that are not necessarily spanning.
[5] Assume the stacking is performed in i-then-c order: the first column is (i, c) = (1, 1), the second is (i, c) = (2, 1), ..., the N-th is (i, c) = (N, 1), the (N+1)-st is (i, c) = (1, 2), and so on. This matters for formalizing Equation 11 in Corollary 2.
Additionally, we complement this paper with new, extensive measurements on five benchmark datasets, in particular the MNIST, FashionMNIST, CIFAR10, SVHN, and STL10 datasets [6] (Deng et al., 2009; Krizhevsky & Hinton, 2009; LeCun et al., 2010; Xiao et al., 2017), and three canonical deep nets, in particular the VGG, ResNet, and DenseNet networks (He et al., 2016; Huang et al., 2017; Simonyan & Zisserman, 2014), that verify the empirical reality of MSE-NC, i.e., they show that (NC1)-(NC4) indeed occur for networks trained with MSE loss. These experiments establish that theoretical modeling of MSE-NC is empirically well-motivated. They are lengthy, together spanning four pages with seven figures and a table, so we collectively defer them to Appendix A.

2 DECOMPOSITION OF MSE LOSS. Inspired by the community's interest in the role of the MSE loss in deep net training (Demirkaya et al., 2020; Hui & Belkin, 2020; Mixon et al., 2020; Poggio & Liao, 2020a;b), we derive a new decomposition of the MSE loss that gives insights into the NC phenomenon. First, absorb the bias vector into the weight matrix, by defining the extended weight matrix W̃ = [W, b] ∈ R^{C×(P+1)} and the extended feature vector h̃_{i,c} = [h_{i,c}; 1] ∈ R^{P+1}, so that Equation 2 can be rewritten as

L(W̃, H̃) = (1/2) Ave_{i,c} ∥W̃ h̃_{i,c} − y_{i,c}∥_2^2 + (λ/2) ∥W̃∥_F^2.   (3)

Using {h̃_{i,c}}, define the entities µ̃_c, µ̃_G, H̃, and Σ̃_W analogously to those in Section 1.1. We further define the extended total covariance and extended class-means matrices, respectively, as

Σ̃_T = Ave_{i,c} (h̃_{i,c} − µ̃_G)(h̃_{i,c} − µ̃_G)^⊤ ∈ R^{(P+1)×(P+1)},
M̃ = [µ̃_1, ..., µ̃_C] ∈ R^{(P+1)×C}.

Next, we reformulate, with weight decay incorporated, a classic result of Webb & Lowe (1990):

Proposition 1 (Webb & Lowe (1990) with Weight Decay).
For fixed extended features H̃, the optimal classifier minimizing the MSE loss L(W̃, H̃) is

W̃_LS = (1/C) M̃^⊤ (Σ̃_T + µ̃_G µ̃_G^⊤ + λI)^{−1},

where I is the identity matrix. Note that W̃_LS depends on H̃ only. Since W̃_LS can be interpreted as a function of H̃, we are led to the following decomposition of L, in which one term depends on H̃ only.

Theorem 1 (Decomposition of MSE Loss; Proof in Appendix B). The MSE loss, L(W̃, H̃), can be decomposed into two terms,

L(W̃, H̃) = L_LS(H̃) + L_⊥LS(W̃, H̃),

where

L_LS(H̃) = (1/2) Ave_{i,c} ∥W̃_LS h̃_{i,c} − y_{i,c}∥_2^2 + (λ/2) ∥W̃_LS∥_F^2,

and

L_⊥LS(W̃, H̃) = (1/2) tr{(W̃ − W̃_LS)(Σ̃_T + µ̃_G µ̃_G^⊤ + λI)(W̃ − W̃_LS)^⊤}.

In the above, L_LS(H̃) is independent of W̃. Intuitively, it is the MSE performance of the optimal classifier W̃_LS (rather than the "real classifier" W̃) on input H̃.

[6] CIFAR100 and ImageNet are omitted because Demirkaya et al. (2020) and Hui & Belkin (2020) showed that they require an additional scaling heuristic on top of the traditional MSE loss. Rigorous investigation into this scaling heuristic would have demanded mathematical analysis outside the scope of this paper, as well as experimentation that would have consumed expensive amounts of computational resources.

The component L_⊥LS(W̃, H̃) is non-negative [7] and is zero only when W̃ = W̃_LS. Therefore, L_⊥LS(W̃, H̃) quantifies the distance of W̃ from W̃_LS. In short, the least-squares component, L_LS(H̃), captures the behavior of the network when the classifier possesses optimal, least-squares behavior. Meanwhile, the deviation component, L_⊥LS(W̃, H̃), captures the divergence from that behavior. We can further decompose L_LS(H̃) into two terms, one capturing activation collapse (NC1) and the other capturing convergence to Simplex ETF of both features and classifiers ((NC2) and (NC3)).

Theorem 2.
(Decomposition of Least-Squares Component; Proof in Appendix C) The least-squares component, L_LS(H̃), of the MSE decomposition in Theorem 1 can be further decomposed into

L_LS(H̃) = L_NC1(H̃) + L_NC2/3(H̃),

where

L_NC1(H̃) = (1/2) tr{W̃_LS [Σ̃_W + λI] W̃_LS^⊤},
L_NC2/3(H̃) = (1/(2C)) ∥W̃_LS M̃ − I∥_F^2.

Inspection of these terms is revealing. First, observe that L_NC2/3 is a function of the class-means and MSE-optimal classifiers. Minimizing L_NC2/3 will push the (unextended) class-means and classifiers towards the same Simplex ETF matrix [8], i.e., (NC2)-(NC3). Next, note that the within-class variation is independent of the means. Thus, despite the fact that the classifiers are converging towards some (potentially large) ETF matrix, we can always reduce L_NC1 by pushing Σ_W towards zero, which corresponds to (NC1).

Figure 2 measures the empirical values of the above decomposition terms on five canonical datasets and three prototypical networks. It shows that L_⊥LS(W̃, H̃) becomes negligible compared to L_LS(H̃) when training canonical deep nets on benchmark datasets. In other words, L(W̃, H̃) ≈ L_LS(H̃) starting from an early epoch in training, and this persists into TPT. This motivates us to formulate the central path:

P = { (W̃_LS(H̃), H̃) | H̃ ∈ R^{(P+1)×CN} },   (4)

where the notation W̃_LS(·) makes explicit the fact that W̃_LS is a function of H̃ only. Intuitively, for a classifier-features pair to lie on the central path, i.e., (W̃, H̃) ∈ P, means that the "real classifier" W̃ exactly equals W̃_LS(H̃), the optimal classifier that would result from fixing H̃ and minimizing L w.r.t. just the classifier. Combined with Theorem 1, we see that (W̃, H̃) ∈ P if and only if L(W̃, H̃) = L_LS(H̃). Figure 2 shows that classifier-features pairs lie approximately on the central path during TPT, allowing us to shift focus from L to L_LS.
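Both decompositions are exact algebraic identities in W̃ and H̃, so they can be checked numerically on arbitrary data. The following numpy sketch (variable names and the synthetic data are ours, purely illustrative) builds the extended quantities of Section 2, forms W̃_LS per Proposition 1, and verifies the Theorem 1 and Theorem 2 identities on random features:

```python
import numpy as np

rng = np.random.default_rng(0)
C, N, P, lam = 4, 8, 10, 0.1          # classes, examples/class, feature dim, weight decay
M = C * N

# Synthetic last-layer features, stacked in i-then-c order (footnote 5).
H = rng.standard_normal((P, M))
labels = np.repeat(np.arange(C), N)
Y = np.eye(C)[:, labels]               # one-hot targets as columns
H_ext = np.vstack([H, np.ones(M)])     # extended features: append a 1 to each h

# Extended statistics from Section 2.
mu_G = H_ext.mean(axis=1)
M_mat = np.stack([H_ext[:, labels == c].mean(axis=1) for c in range(C)], axis=1)
dev_G = H_ext - mu_G[:, None]
Sigma_T = dev_G @ dev_G.T / M          # extended total covariance
dev_c = H_ext - M_mat[:, labels]
Sigma_W = dev_c @ dev_c.T / M          # extended within-class covariance
I = np.eye(P + 1)

def loss(W_ext):
    """Equation (3): regularized MSE loss with the bias absorbed into W."""
    r = W_ext @ H_ext - Y
    return 0.5 * np.mean((r ** 2).sum(axis=0)) + lam / 2 * (W_ext ** 2).sum()

# Proposition 1: closed-form least-squares classifier.
A = Sigma_T + np.outer(mu_G, mu_G) + lam * I
W_LS = (M_mat.T / C) @ np.linalg.inv(A)

# Theorem 1: L = L_LS + L_perp for an arbitrary classifier W.
W = rng.standard_normal((C, P + 1))
L_LS = loss(W_LS)
L_perp = 0.5 * np.trace((W - W_LS) @ A @ (W - W_LS).T)
assert np.isclose(loss(W), L_LS + L_perp)

# Theorem 2: L_LS = L_NC1 + L_NC2/3.
L_NC1 = 0.5 * np.trace(W_LS @ (Sigma_W + lam * I) @ W_LS.T)
L_NC23 = np.linalg.norm(W_LS @ M_mat - np.eye(C)) ** 2 / (2 * C)
assert np.isclose(L_LS, L_NC1 + L_NC23)
```

Note that A is the (regularized) second-moment matrix Ave[h̃h̃^⊤] + λI, the Hessian of the quadratic loss, which is why L_⊥LS is a non-negative quadratic form vanishing exactly at W̃ = W̃_LS.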
This paper extends the recent work on Neural Collapse, using mean squared error (MSE) loss instead of CE, as MSE is easier to analyze. With this, the paper shows that the MSE loss can be decomposed into a term corresponding to a so-called 'central path' (the set of tuples (W, b, H) in which the classifier is least-squares optimal for the given H) and a perpendicular term. The paper shows that the perpendicular loss is much smaller than the least-squares loss, and thus Neural Collapse emerges because the optimizer effectively operates on the central path. Empirical studies confirm the findings.
Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path
Recently, Papyan, Han, and Donoho (2020) observed that training neural networks beyond zero training error leads to simplex arrangements of the last-layer features. The submission studies this phenomenon, called Neural Collapse, from a theoretical perspective when training is done by minimizing the squared loss. In particular, it is shown that the squared loss can be split such that one summand corresponds to the quality of the features (quantified by the loss of the least-squares classifier on them) and the other summand corresponds to the quality of the classification layer (quantified by its deviation from the least-squares classifier). Furthermore, the first summand quantifies the closeness of the features to Neural Collapse. In a second step, the authors investigate the gradient flow induced by the first summand and derive closed-form solutions.
1 INTRODUCTION . Modern deep learning includes paradigmatic procedures that are commonly adopted , but not entirely understood . Some examples are multi-layered architectures , stochastic gradient descent , batch normalization , cross-entropy ( CE ) loss , and training past zero error towards zero loss . Analyzing the properties of these practices is an important research task . In this work , we theoretically investigate behavior of last-layer features in classification deep nets . In particular , we consider the training of deep neural networks on datasets containing images from C different classes with N examples in each class . After passing the i-th example in the c-th class through all layers except the last-layer of the network , the network outputs some last-layer features hi , c ∈ RP . The last-layer of the network—which , for each class c , possesses a classifier wc ∈ RP and bias bc ∈ R—then predicts a label for the example using the rule argmaxc′ ( ⟨wc′ , hi , c⟩+ bc′ ) . The network ’ s performance is evaluated by calculating the error defined by Error = Ave i , c 1 { c ̸= argmax c′ ( ⟨wc′ , hi , c⟩+ bc′ ) } , while the weights , biases , and other parameters of the network ( that determine the behavior of the layers before the last layer ) in the network are updated by minimizing the CE loss defined by CE = −Ave i , c log exp { ⟨wc , hi , c⟩+ bc } ∑C c′=1 exp { ⟨wc′ , hi , c⟩+ bc′ } , ∗Equal contribution . Listed alphabetically . where Ave is the operator that averages over its subscript indices . Prior works such as Zhang et al . ( 2016 ) ; Belkin et al . ( 2019 ) have shown that overparameterized classifiers ( such as deep nets ) can “ memorize ” their training set without harming performance on unseen test data . Moreover , works such as Soudry et al . ( 2018 ) have further shown that continuing to train networks past memorization can still lead to performance improvements1,2 . 
Papyan , Han , and Donoho ( 2020 ) recently examined this setting , referring to the phase during which one trains past zero-error towards zero-CE-loss as the Terminal Phase of Training ( TPT ) . During TPT , they exposed a phenomenon called Neural Collapse ( NC ) . 1.1 NEURAL COLLAPSE . NC is defined relative to the feature global mean , µG = Ave i , c hi , c , the feature class-means , µc = Ave i hi , c , c = 1 , . . . , C , the feature within-class covariance , ΣW = Ave i , c ( hi , c − µc ) ( hi , c − µc ) ⊤ , ( 1 ) and the feature between-class covariance , ΣB = Ave c ( µc − µG ) ( µc − µG ) ⊤ . It is characterized by the following four limiting behaviors where limits take place with increasing training epoch t : ( NC1 ) Within-class variability collapse3 : Σ†BΣW → 0 , where † denotes the Moore-Penrose pseudoinverse . ( NC2 ) Convergence to Simplex ETF : ⟨µc − µG , µc′ − µG⟩ ∥µc − µG∥2∥µc′ − µG∥2 → { 1 , c = c′ −1 C−1 , c ̸= c ′ ∥µc − µG∥2 − ∥µc′ − µG∥2 → 0 ∀c ̸= c′ ( NC3 ) Convergence to self-duality : wc ∥wc∥2 − µc − µG ∥µc − µG∥2 → 0 ( NC4 ) : Simplification to nearest class center : argmax c′ ⟨wc′ , h⟩+ bc′ → argmin c′ ∥h− µc′∥2 1A similar phenomenon had been previously observed in boosting ( Bartlett et al. , 1998 ) . 2An alternative line of work on “ early stopping ” ( Prechelt , 1998 ; Li et al. , 2020 ; Rice et al. , 2020 ) advocates for terminating the training process early and shows that it could be beneficial when training on noisy data or small datasets . Such datasets are out of the scope of this paper . 3Our characterization of ( NC1 ) is more precise than that given by Papyan , Han , and Donoho ( 2020 ) , which only states ΣW→0 for rhetorical simplicity . The convergence of the trace-norm of Σ†BΣW to zero is the actual quantity measured by Figure 6 of Papyan , Han , and Donoho ( 2020 ) when demonstrating ( NC1 ) . Σ†BΣW The ( NC2 ) property captures convergence to a simple geometric structure called an Equiangular Tight Frame ( ETF ) . 
An ETF is a collection of vectors { vc } Cc=1 having equal lengths and equal , maximally separated pair-wise angles . In classification deep nets , last-layer features are of higher dimension than the number of classes i.e . P > C . In this setting , the maximal angles are given by ⟨vc , vc′⟩ ∥vc∥2∥vc′∥2 = { 1 , for c = c′ − 1C−1 , for c ̸= c ′ , and the ETF is called a Simplex ETF4 . Observe that as C increases , the ETF approaches a ( partial ) orthogonal matrix . Thus , when C is large , this translates to the intuitive notion that classifiers and class-means tend with training to ( near ) orthogonality . 1.2 DEEP NET CLASSIFICATION WITH MSE LOSS . While classification deep nets are typically trained with CE loss , Demirkaya et al . ( 2020 ) and Hui & Belkin ( 2020 ) recently reported deep nets trained with squared error ( MSE ) loss , L ( W , b , H ) =1 2 Ave i , c ∥Whi , c + b− yi , c∥22 + λ 2 ( ∥W ∥2F + ∥b∥22 ) ( 2 ) = 1 2CN ∥WH + b1⊤CN − Y ∥2F + λ 2 ( ∥W ∥2F + ∥b∥22 ) , achieve comparable test performance as those trained with CE loss . Above , H ∈ RP×CN and Y ∈ RC×CN are matrices resulting from stacking5 together the feature vectors hi , c and one-hot vectors yi , c as their respective columns ; W ∈ RC×P is the matrix resulting from the stacking of classifiers wc as rows ; b ∈ RC is the vector resulting from concatenating the scalars { bc } Cc=1 ; and 1CN is the length-CN vector of ones . Table 1 in Appendix A shows supplementary measurements affirming the findings of Hui & Belkin ( 2020 ) ; Demirkaya et al . ( 2020 ) , i.e . the measurements affirm that MSE-trained networks indeed achieve accuracies on testing data comparable to those of CE-trained networks ( cf . the analogous Table 1 in Papyan , Han , and Donoho ( 2020 ) ) . The analytically-tractable MSE loss offers more mathematical opportunities than the hard-to-analyze CE loss and inspires this paper ’ s main theoretical contributions explicitly characterizing MSE-NC . 1.3 CONTRIBUTIONS . 
Our main contributions are as follows : • We propose a new decomposition of the MSE loss : L = LNC1 + LNC2/3 + L⊥LS . The terms LNC1 and LNC2/3 possess interpretations attributable to NC phenomena and assume the classifier W is exactly the least-squares optimal classifier WLS ( relative to the given H ) ; and L⊥LS captures the deviation of W from WLS . ( Section 2 ) • We provide empirical measurements of our decomposition ( Figure 2 ) on realistic datasetnetwork combinations showing that L⊥LS becomes negligible during training—leading us to define the central path where L⊥LS=zero . ( Section 2 ) • We reveal a key invariance property on the central path . The invariance motivates the examination of a representative set of features , X=Σ− 1 2 W H , that we call renormalized features . ( Sections 3.1-3.2 ) • We study the gradient flow of those renormalized features along the central path and derive exact , closed-form dynamics that imply NC . The dynamics are explicit in terms of the captures the subtlety that the size of the within-class “ noise ” ( captured by ΣW ) should be viewed relative to the size of the class-means ( captured by ΣB ) . 4Traditional research on ETFs ( cf . Strohmer & Heath Jr ( 2003 ) ; Fickus & Mixon ( 2015 ) ) examines the P≤C setting where C ETF-vectors span their ambient RP space . However , in the P > C setting of classification deep nets , the vectors can not span RP . Following the precedent of Papyan , Han , and Donoho ( 2020 ) , we interpret ETFs as equal-length and maximally-equiangular vectors that are not necessarily spanning . 5Assume the stacking is performed in i-then-c order : the first column is ( i , c ) = ( 1 , 1 ) , the second is ( i , c ) = ( 2 , 1 ) , ... , the N -th is ( i , c ) = ( N , 1 ) , the ( N + 1 ) -st is ( i , c ) = ( 1 , 2 ) , and so on . This matters for formalizing Equation 11 in Corollary 2. singular value decomposition of the renormalized feature class-means at initialization . 
(Section 3.3) Additionally, we complement this paper with new, extensive measurements on five benchmark datasets6, namely MNIST, FashionMNIST, CIFAR10, SVHN, and STL10 (Deng et al., 2009; Krizhevsky & Hinton, 2009; LeCun et al., 2010; Xiao et al., 2017), and three canonical deep nets, namely VGG, ResNet, and DenseNet (He et al., 2016; Huang et al., 2017; Simonyan & Zisserman, 2014), that verify the empirical reality of MSE-NC, i.e., they show that (NC1)-(NC4) indeed occur for networks trained with MSE loss. These experiments establish that theoretical modeling of MSE-NC is empirically well-motivated. They are lengthy, together spanning four pages with seven figures and a table, so we collectively defer them to Appendix A.

2 DECOMPOSITION OF MSE LOSS.

Inspired by the community's interest in the role of the MSE loss in deep net training (Demirkaya et al., 2020; Hui & Belkin, 2020; Mixon et al., 2020; Poggio & Liao, 2020a;b), we derive a new decomposition of the MSE loss that gives insights into the NC phenomenon. First, absorb the bias vector into the weight matrix, by defining the extended weight matrix $\tilde W = [W, b] \in \mathbb{R}^{C \times (P+1)}$ and the extended feature vector $\tilde h_{i,c} = [h_{i,c}; 1] \in \mathbb{R}^{P+1}$, so that Equation 2 can be rewritten as
$$L(\tilde W, \tilde H) = \frac{1}{2} \operatorname{Ave}_{i,c} \|\tilde W \tilde h_{i,c} - y_{i,c}\|_2^2 + \frac{\lambda}{2} \|\tilde W\|_F^2. \qquad (3)$$
Using $\{\tilde h_{i,c}\}$, define the entities $\tilde\mu_c$, $\tilde\mu_G$, $\tilde H$, and $\tilde\Sigma_W$ analogously to those in Section 1.1. We further define the extended total covariance and extended class-means matrices, respectively, as
$$\tilde\Sigma_T = \operatorname{Ave}_{i,c} (\tilde h_{i,c} - \tilde\mu_G)(\tilde h_{i,c} - \tilde\mu_G)^\top \in \mathbb{R}^{(P+1)\times(P+1)}, \qquad \tilde M = [\tilde\mu_1, \ldots, \tilde\mu_C] \in \mathbb{R}^{(P+1)\times C}.$$
Next, we reformulate, with weight decay incorporated, a classic result of Webb & Lowe (1990):

Proposition 1 (Webb & Lowe (1990) with Weight Decay).
For fixed extended features $\tilde H$, the optimal classifier minimizing the MSE loss $L(\tilde W, \tilde H)$ is
$$\tilde W_{LS} = \frac{1}{C}\tilde M^\top\left(\tilde\Sigma_T + \tilde\mu_G\tilde\mu_G^\top + \lambda I\right)^{-1},$$
where $I$ is the identity matrix. Note that $\tilde W_{LS}$ depends on $\tilde H$ only; interpreting $\tilde W_{LS}$ as a function of $\tilde H$ leads us to the following decomposition of $L$, in which one term depends on $\tilde H$ only.

Theorem 1 (Decomposition of MSE Loss; Proof in Appendix B). The MSE loss $L(\tilde W, \tilde H)$ can be decomposed into two terms,
$$L(\tilde W, \tilde H) = L_{LS}(\tilde H) + L_{\perp LS}(\tilde W, \tilde H),$$
where
$$L_{LS}(\tilde H) = \frac{1}{2}\operatorname{Ave}_{i,c}\|\tilde W_{LS}\tilde h_{i,c} - y_{i,c}\|_2^2 + \frac{\lambda}{2}\|\tilde W_{LS}\|_F^2,$$
and
$$L_{\perp LS}(\tilde W, \tilde H) = \frac{1}{2}\operatorname{tr}\left\{(\tilde W - \tilde W_{LS})\left(\tilde\Sigma_T + \tilde\mu_G\tilde\mu_G^\top + \lambda I\right)(\tilde W - \tilde W_{LS})^\top\right\}.$$
In the above, $L_{LS}(\tilde H)$ is independent of $\tilde W$. Intuitively, it is the MSE performance of the optimal classifier $\tilde W_{LS}$ (rather than the "real classifier" $\tilde W$) on input $\tilde H$.

6 CIFAR100 and ImageNet are omitted because Demirkaya et al. (2020) and Hui & Belkin (2020) showed that they require an additional scaling heuristic on top of the traditional MSE loss. Rigorous investigation into this scaling heuristic would have demanded mathematical analysis outside the scope of this paper, as well as experimentation that would have consumed expensive amounts of computational resources.

The component $L_{\perp LS}(\tilde W, \tilde H)$ is non-negative7 and is zero only when $\tilde W = \tilde W_{LS}$. Therefore, $L_{\perp LS}(\tilde W, \tilde H)$ quantifies the distance of $\tilde W$ from $\tilde W_{LS}$. In short, the least-squares component $L_{LS}(\tilde H)$ captures the behavior of the network when the classifier possesses optimal, least-squares behavior. Meanwhile, the deviation component $L_{\perp LS}(\tilde W, \tilde H)$ captures the divergence from that behavior. We can further decompose $L_{LS}(\tilde H)$ into two terms, one capturing activation collapse (NC1) and the other capturing convergence to Simplex ETF of both features and classifiers ((NC2) and (NC3)). Theorem 2.
(Decomposition of Least-Squares Component; Proof in Appendix C) The least-squares component $L_{LS}(\tilde H)$ of the MSE decomposition in Theorem 1 can be further decomposed into
$$L_{LS}(\tilde H) = L_{NC1}(\tilde H) + L_{NC2/3}(\tilde H),$$
where
$$L_{NC1}(\tilde H) = \frac{1}{2}\operatorname{tr}\left\{\tilde W_{LS}\left[\tilde\Sigma_W + \lambda I\right]\tilde W_{LS}^\top\right\}, \qquad L_{NC2/3}(\tilde H) = \frac{1}{2C}\|\tilde W_{LS}\tilde M - I\|_F^2.$$
Inspection of these terms is revealing. First, observe that $L_{NC2/3}$ is a function of the class-means and MSE-optimal classifiers. Minimizing $L_{NC2/3}$ will push the (unextended) class-means and classifiers towards the same Simplex ETF matrix8, i.e., (NC2)-(NC3). Next, note that the within-class variation is independent of the means. Thus, even as the classifiers converge towards some (potentially large) ETF matrix, we can always reduce $L_{NC1}$ by pushing $\Sigma_W$ towards zero, which corresponds to (NC1).

Figure 2 measures the empirical values of the above decomposition terms on five canonical datasets and three prototypical networks. It shows that $L_{\perp LS}(\tilde W, \tilde H)$ becomes negligible compared to $L_{LS}(\tilde H)$ when training canonical deep nets on benchmark datasets. In other words, the approximation $L(\tilde W, \tilde H) \approx L_{LS}(\tilde H)$ holds starting from an early epoch in training and persists into TPT. This motivates us to formulate the central path:
$$\mathcal{P} = \left\{(\tilde W_{LS}(\tilde H), \tilde H) \mid \tilde H \in \mathbb{R}^{(P+1)\times CN}\right\}, \qquad (4)$$
where the notation $\tilde W_{LS}(\cdot)$ makes explicit the fact that $\tilde W_{LS}$ is a function of $\tilde H$ only. Intuitively, for a classifier-features pair to lie on the central path, i.e., $(\tilde W, \tilde H) \in \mathcal{P}$, means that the "real classifier" $\tilde W$ exactly equals $\tilde W_{LS}(\tilde H)$, the optimal classifier that would result from fixing $\tilde H$ and minimizing $L$ w.r.t. just the classifier. Combined with Theorem 1, we see that $(\tilde W, \tilde H) \in \mathcal{P}$ if and only if $L(\tilde W, \tilde H) = L_{LS}(\tilde H)$. Figure 2 shows that classifier-features pairs lie approximately on the central path during TPT, allowing us to shift focus from $L$ to $L_{LS}$.
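As a concrete check, the following NumPy sketch (ours, with synthetic Gaussian features and balanced classes; all variable names are assumptions) verifies numerically that the Proposition 1 classifier coincides with the direct ridge-regression solution, and that the full decomposition $L = L_{NC1} + L_{NC2/3} + L_{\perp LS}$ holds exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
C, P, N, lam = 4, 12, 25, 0.05          # classes, feature dim, examples per class, weight decay

# Extended features [h; 1], stacked class by class (N columns per class),
# and one-hot targets Y of shape C x CN.
H_t = np.vstack([rng.standard_normal((P, C * N)), np.ones((1, C * N))])
Y = np.kron(np.eye(C), np.ones((1, N)))
W_t = rng.standard_normal((C, P + 1))   # an arbitrary "real" classifier W~

# Extended statistics: global mean, class means, total/within-class covariances.
mu_G = H_t.mean(axis=1, keepdims=True)
M_t = H_t.reshape(P + 1, C, N).mean(axis=2)            # (P+1) x C class means
Sigma_T = (H_t - mu_G) @ (H_t - mu_G).T / (C * N)
centered = H_t - np.repeat(M_t, N, axis=1)             # subtract each column's class mean
Sigma_W = centered @ centered.T / (C * N)

# Proposition 1: W_LS = (1/C) M~^T (Sigma_T + mu_G mu_G^T + lam I)^{-1}.
A = Sigma_T + mu_G @ mu_G.T + lam * np.eye(P + 1)
W_LS = M_t.T @ np.linalg.inv(A) / C

# Sanity check: this equals the direct ridge solution
# (Ave y h^T)(Ave h h^T + lam I)^{-1}.
W_ridge = (Y @ H_t.T / (C * N)) @ np.linalg.inv(H_t @ H_t.T / (C * N) + lam * np.eye(P + 1))

# Theorems 1 and 2: L = L_NC1 + L_NC2/3 + L_perpLS.
L = 0.5 * np.mean(np.sum((W_t @ H_t - Y) ** 2, axis=0)) + 0.5 * lam * np.sum(W_t ** 2)
L_NC1 = 0.5 * np.trace(W_LS @ (Sigma_W + lam * np.eye(P + 1)) @ W_LS.T)
L_NC23 = np.sum((W_LS @ M_t - np.eye(C)) ** 2) / (2 * C)
D = W_t - W_LS
L_perp = 0.5 * np.trace(D @ A @ D.T)
```

On the central path, $\tilde W = \tilde W_{LS}$, so `D` vanishes and `L` reduces to `L_NC1 + L_NC23`.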
This paper studies the phenomenon of Neural Collapse (NC) and empirically shows that it occurs during the training of deep networks with the MSE loss. It then theoretically analyzes NC under the MSE loss by decomposing the loss and introducing the notion of the central path, and shows that closed-form dynamics predict NC in this setting.
SP:92024e2a9e9b4e30822b9305c177fb0aa23c693e
Exploiting Class Activation Value for Partial-Label Learning
1 INTRODUCTION.

To liberate humans from exhaustive label-annotation work, numerous researchers have dedicated themselves to investigating various weakly supervised learning (WSL) (Zhou, 2017) problems, including but not limited to noisy-label learning (Liu & Tao, 2015; Xia et al., 2020; Han et al., 2020), semi-supervised learning (Zhu & Goldberg, 2009; Miyato et al., 2018; Luo et al., 2018), and multiple-instance learning (Zhou et al., 2012). This paper focuses on another popular WSL problem called partial-label learning (PLL) (Jin & Ghahramani, 2002; Cour et al., 2011), which aims to learn a model from training examples equipped with a set of candidate labels that includes the true label. Due to the cost and difficulty of annotating every example in huge datasets exactly with the true label, PLL has been widely applied to various tasks such as multimedia content analysis (Zeng et al., 2013) and web mining (Luo & Orabona, 2010).

†Correspondence to Lei Feng (lfeng@cqu.edu.cn) and Bo Han (bhanml@comp.hkbu.edu.hk).

To solve the PLL problem, there are two mainstream strategies for discriminating the unknown true label from the candidate labels: the average-based strategy (ABS) and the identification-based strategy (IBS). ABS treats each candidate label equally and averages the model outputs over all candidate labels for prediction (Hüllermeier & Beringer, 2006; Cour et al., 2011; Feng et al., 2020; Yao et al., 2020; Wen et al., 2021). IBS concentrates on iteratively selecting one label from the candidate label set as the true label to exclude the uncertainty, turning PLL into an ordinary classification problem (Jin & Ghahramani, 2002; Liu & Dietterich, 2014; Zhang & Yu, 2015; Zhang et al., 2016). Among these methods, only a few (Yao et al., 2020; Wen et al., 2021) are compatible with deep neural networks; these achieve state-of-the-art performance.
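To make the two strategies concrete, here is a minimal sketch of how a single training example with candidate set `candidates` might be scored under each strategy. The particular loss forms are illustrative assumptions of ours, not taken from any of the cited methods:

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def abs_loss(logits, candidates):
    """Average-based strategy (ABS) sketch: treat every candidate label
    equally and average the model's probabilities over the candidate set."""
    p = softmax(logits)
    return -np.log(p[candidates].mean())

def ibs_loss(logits, candidates):
    """Identification-based strategy (IBS) sketch: commit to the single
    candidate the current model finds most likely and train on it alone."""
    p = softmax(logits)
    chosen = candidates[int(np.argmax(p[candidates]))]
    return -np.log(p[chosen])
```

Since the maximum candidate probability is at least the average, the IBS loss in this sketch is never larger than the ABS loss for the same logits.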
Most existing PLL methods aim at designing proper training objectives under various assumptions on the collected data. For example, the consistent methods proposed by Feng et al. (2020) are based on a specific data-generation assumption. The derivation of the PRODEN method (Lv et al., 2020) stems from the assumption that the true label achieves the minimal loss among the candidate labels. Yan & Guo (2021) proposed the MGPLL method based on the assumption of non-random label noise. Although these methods achieve generally strong empirical performance, their performance can degrade when the collected data do not meet the adopted assumptions. In this paper, we aim to investigate a novel PLL method that exploits the learned intrinsic representation of the model to identify the ground truth during the training process, without relying on any assumptions on the collected data. We focus on the class activation map (CAM) (Zhou et al., 2016), a prevailing tool for visualizing the representation information of convolutional neural network (CNN)-based models that can be easily obtained as a weighted linear sum of the feature maps. As CAM is able to discriminate the learning patterns of each class for the model, we conjecture that CAM could be used to guide the model to spot the true label. To verify this conjecture, we conducted a pilot experiment on CIFAR-10 (Krizhevsky et al., 2009), where the candidate label sets were generated in two different ways. As shown in Figure 1 (please refer to Section 2.2), it is surprising to find that CAM can potentially help the model recognize the true labels, and this capacity of CAM changes synchronously with the classifier performance throughout the training phase. Based on these experimental results, it can be conjectured that such intrinsic representations can be useful for distinguishing the true label from the candidate labels in PLL.
Motivated by the above observations, we naturally consider leveraging CAM to solve the PLL problem. However, CAM was proposed only for image datasets with deep models built on CNNs, which reveals two limitations on its application. Firstly, CAM cannot be adopted for classifiers based on shallow models such as the linear model and the multilayer perceptron (MLP). Secondly, CAM is unable to deal with inputs other than images. To overcome these shortcomings, we for the first time propose a simple but effective tool, the class activation value (CAV), to capture the learned representation information in a more general way; CAV is essentially the weighted output of the classifier. Experimental results in Figure 2 and Appendix B.7 also show that CAV behaves similarly to CAM during the training phase. Building upon CAV, we propose a simple yet effective PLL method called CAV Learning (CAVL) that guides the model to differentiate the true label from the candidate set during the training process. Specifically, we first train the model by treating all candidate labels equally and obtain the CAVs of the given training examples, and then regard the class with the maximum CAV as the true label of each instance during the training process. In this way, CAVL transforms PLL into supervised learning, thereby enabling the model to reliably recognize the true label using the representation information learned by CAV. Extensive experiments on benchmark-simulated and real-world datasets show that our proposed CAVL method achieves state-of-the-art performance.

2 DISCOVERING CLASS ACTIVATION MAP FOR PARTIAL-LABEL LEARNING.

In this section, we provide a detailed discussion of our motivation and empirically show that the class activation map can be helpful for addressing PLL.

2.1 CLASS ACTIVATION MAP.

The class activation map (CAM) (Zhou et al.
, 2016) is a popular and elementary mechanism in computer vision for representing the discriminative part of an input image that the classifier uses to identify each class. In other words, CAM manifests the learning patterns of the classifier, denoted $f$, for a specific class in the image. Let us denote an input image as $x \in \mathbb{R}^{c \times h \times w}$ (with $c$ channels, height $h$, and width $w$) and the CAM of $x$ as $m \in \mathbb{R}^{k \times h \times w}$, where $k$ is the number of classes. Note that $m$ can be generated by training a specific classification model $f_c$, which normally comprises an encoder $e: \mathcal{X} \to \mathbb{R}^{c_f \times h \times w}$ for feature-map extraction, a global average-pooling layer $\mathrm{gap}: \mathbb{R}^{c_f \times h \times w} \to \mathbb{R}^{c_f \times 1}$, and a linear layer with weight $\theta \in \mathbb{R}^{k \times c_f}$, where $c_f$ is the number of channels in the feature maps. Therefore, the CAM of $x$ for the $j$-th class is obtained as
$$m_j = \sum_{i=1}^{c_f} (\theta_j)_i\, e^i(x), \quad j \in \{1, \ldots, k\}, \qquad (1)$$
where $\theta_j \in \mathbb{R}^{1 \times c_f}$ is the linear weight corresponding to the $j$-th class. CAM is thus a weighted linear sum of the feature maps with the linear weights as coefficients. A weakness of CAM is its limited applicability, since not all classification network architectures follow the model $f_c$ described above. To generalize such an internal representation to any CNN-based model $f$, Gradient-weighted CAM (Grad-CAM) (Selvaraju et al., 2017) was proposed, leveraging class-specific gradient information. Concretely, denoting by $g_j \in \mathbb{R}^{h \times w}$ the Grad-CAM of $x$ for the $j$-th class, $g_j$ can be expressed as
$$g_j = \frac{1}{h \times w} \sum_{z=1}^{c_f} \sum_{m=1}^{h} \sum_{n=1}^{w} \frac{\partial f^j(x)}{\partial e^z(x)_{m,n}}\, e^z(x). \qquad (2)$$
Intuitively, the derivative of the logit $f^j(x)$ with respect to the feature map $e^z(x)$ is used as the weight for calculating $g$. Note that both Grad-CAM and CAM are weighted sums of the feature maps, and Grad-CAM is equivalent to CAM when $f$ follows the same architecture as $f_c$. Based on Grad-CAM, Grad-CAM++ (Chattopadhay et al.
, 2018) aims to provide better visual explanations of CNNs, including occurrences of multiple foreground objects. Thanks to such a simple technique, numerous problems in computer vision, such as interpretation (Xu et al., 2019a) and weakly supervised semantic segmentation (Wei et al., 2016), have achieved remarkable progress. In this paper, we aim to extract and improve the useful knowledge from CAM to address the PLL problem, leading the classifier to identify the ground truth.

2.2 PILOT EXPERIMENT ON CAM.

Generally, Grad-CAM or CAM is treated as an internal representation of $f$. We believe that such a mechanism could guide $f$ to differentiate the true label from the candidate set because it is constructed from internal elements of $f$. To validate this conjecture, we conducted a pilot experiment with two different label selection methods. The first is the intuitive method (IM), which simply regards all labels in the candidate set as true labels and trains with the cross-entropy loss. The second is the "CAM label" selection strategy, which discriminates the true label from the candidate label set by using CAM. During each training epoch, we calculated the number of foreground seeds of the CAM for each label in the candidate set. Specifically, to count the foreground seeds of $m_j$, we counted the elements with positive values in $m_j$. Note that each $x$ possesses $k$ CAMs and the true label is always in the candidate set. We thus selected as the true label the candidate whose CAM has the most foreground seeds. The candidate label sets were generated by two different approaches: (I) Uniformly Sampling Strategy (USS): uniformly sample the candidate label set for each training instance from all possible candidate label sets (Feng et al., 2020). (II) Flipping Probability Strategy (FPS).
By setting a flipping probability q for each false label, any false label is selected as a candidate label with probability q (Feng & An, 2019a; Yan & Guo, 2020; Lv et al., 2020; Wen et al., 2021). The detailed training settings of these experiments are similar to those of the main experiments (please refer to Section 4.1.1 for details). ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) were chosen as the backbones to train a classifier on CIFAR-10 (Krizhevsky et al., 2009).

Figure 1: Comparison of accuracy performance of two different label selection strategies. (a) ResNet + USS; (b) DenseNet + FPS. Methods in (a) are implemented with ResNet, using USS to generate the candidate sets; methods in (b) use DenseNet, with candidate label sets produced by FPS. Of the three accuracy curves, the blue dashed line and the red line (the "CAM" lines) are obtained with the "CAM label" selection strategy, i.e., training the model by regarding the class with the largest number of activated seeds in its CAM as the true label. Specifically, the red line depicts the training accuracy measured by selecting, from the candidate set, the class whose CAM has the maximum number of activated seeds as the true label, and the blue dashed line depicts the test accuracy of the classifier. The yellow dashed line (the "IM" line) depicts the test accuracy of the classifier trained by IM. The paired circles indicate similar accuracy fluctuations between the classifier and CAM, which show the dynamic attribute. The double arrow marks the performance gap between the "CAM label" selection strategy and IM, which represents the power attribute.

Figure 1 shows the average results of five trials, and we find that CAM demonstrates two helpful attributes during the training phase. Firstly, the classifier trained with the "CAM label" selection strategy clearly outperforms the one trained with IM (the gap between the blue dashed line and the yellow dashed line), which shows that CAM could potentially guide the classifier toward the true label within the candidate set. We name this attribute of CAM power. Secondly, the accuracy of identifying true labels by CAM changes synchronously with that of the classifier. Several fluctuations (marked by the paired circles) of the classifier performance (test accuracy) and the CAM accuracy are similar to a large extent, resulting in continuous improvement of the model itself. We name this attribute of CAM dynamic. The power and dynamic attributes disclose that CAM may reveal more than expected, which is an inspiring finding for addressing PLL. The power attribute motivates us to consider that the classifier may learn more accurate true labels if it is forced to approximate the label distribution recognized by CAM. The dynamic attribute may guarantee that such guidance remains effective during the whole training phase, since CAM synchronously improves as the classifier becomes stronger. Hence, it is reasonable and meaningful to explore such an internal signal as guidance for PLL.
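The pilot experiment's pipeline, building each class's CAM as a weighted sum of feature maps (Equation 1) and then picking the candidate label whose CAM has the most positive-valued foreground seeds, can be sketched as follows. This is a NumPy illustration with random stand-in feature maps; the shapes and names are ours:

```python
import numpy as np

def class_activation_maps(feat, theta):
    """Equation 1: m_j = sum_i (theta_j)_i e^i(x).

    feat:  encoder feature maps e(x), shape (c_f, h, w).
    theta: linear-layer weights, shape (k, c_f).
    Returns one h x w map per class, shape (k, h, w).
    """
    return np.tensordot(theta, feat, axes=([1], [0]))

def cam_label(cams, candidates):
    """Pick the candidate whose CAM has the most foreground seeds,
    i.e., the most positive-valued entries (the pilot-experiment rule)."""
    seeds = [(cams[j] > 0).sum() for j in candidates]
    return candidates[int(np.argmax(seeds))]

# Toy example with random stand-in features.
rng = np.random.default_rng(0)
k, c_f, h, w = 10, 64, 7, 7
feat = rng.standard_normal((c_f, h, w))
theta = rng.standard_normal((k, c_f))
cams = class_activation_maps(feat, theta)

# For a GAP + linear head (the model f_c of Section 2.1, bias omitted),
# each logit is exactly the spatial average of its CAM, which is why the
# maps directly reflect what the classifier has learned.
logits = theta @ feat.mean(axis=(1, 2))
```

Note that `logits` here equals `cams.mean(axis=(1, 2))`, tying the maps directly to the classifier's outputs.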
This paper intends to exploit the learned representation of the model to tackle partial-label learning tasks. The paper begins with a pilot experiment showing that the class activation map is better at selecting the true label from the candidate set than the model output itself. To overcome the limitation that CAM cannot be applied to linear models or non-image data, the authors propose the class activation value (CAV) to replace CAM and show that it exhibits similar properties.
SP:280c8069cdfecd5d7c4a3a4644db8a7d04d93197