| paper_name | text | summary | paper_id |
|---|---|---|---|
On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach | 1 INTRODUCTION. Interpretable and explainable machine learning (ML) has seen a recent surge of interest because it is viewed as one of the key pillars in making models trustworthy, with implications for fairness, reliability, and safety (Varshney, 2019). It is generally accepted that the ultimate measure of ML explainability is whether a human finds the explanations useful (Doshi-Velez & Kim, 2017; Dhurandhar et al., 2017). However, less attention has been paid to the deeper reasons behind the human desire for explainability. In this paper, we posit that an important reason is achieving safety and preventing unexpected harms (Varshney & Alemzadeh, 2017). This reason is implicit in the dichotomy between directly interpretable models and post hoc explanations of black-box models. Some argue that only directly interpretable models should be used in high-risk applications (Rudin, 2019). The crux of this argument is that post hoc explanations leave a gap between the explanation and the model that is producing the predictions. Thus, unusual data points may appear harmless based on the explanation but truly cause havoc. This argument, however, does not explicitly address the question: What does safety mean for such models, and how is it intertwined with interpretability? Towards answering this question, we propose a mathematical definition for assessing the safety of supervised learning (i.e., predictive) models. Viewing these models as functions mapping an input space to an output space, a key way in which they can cause harm is through grossly unexpected outputs, corresponding to inputs that are poorly represented in the training data. Accordingly, we approach safety assessment for a model by determining its maximum deviation, over a certification set, from the output of a reference model.
The idea of a certification set is that it is a large subset of the input space intended to cover all conceivable inputs to the model. The reference model could be a simple, well-understood model or an existing model that has been "tried and tested." These concepts are discussed further in Section 2. In Section 4, we discuss the computation of the maximum deviation for different model classes and show how it is facilitated by interpretability. For model classes regarded as interpretable, including trees, rule lists, and generalized linear and additive models, the maximum deviation can be computed exactly and efficiently by exploiting the model structure. For tree ensembles, which are not regarded as interpretable, discrete optimization techniques can exploit their composition in terms of trees to provide anytime bounds on the maximum deviation. The case of trees is also generalized in a different direction by considering a broader class of piecewise Lipschitz functions, which we argue covers many popular interpretable functions. Here we show that the benefit of interpretability is significantly tighter regret bounds on the maximum deviation compared with black-box functions, obtained by appropriately repurposing results from the multi-armed bandit literature. On the other hand, it is less clear that post hoc explanations, which approximate a model locally (Ribeiro et al., 2016; Lundberg & Lee, 2017; Dhurandhar et al., 2018) or globally (Buciluǎ et al., 2006; Hinton et al., 2015), can help with evaluating the maximum deviation and hence safety. We conduct experiments to illustrate the deviation maximization methods of Section 4 for decision trees, linear and additive models, and tree ensembles. The results in Section 5 quantify how the maximum deviation increases as the size of the certification set increases and as the smoothness of the models decreases.
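As a first illustration of how interpretable structure helps, consider a single decision tree assessed against a constant reference: the tree is piecewise constant, so the exact maximum deviation is found by scanning its leaves. A minimal sketch with a hypothetical tree (the leaf regions, scores, and reference value below are made up for illustration):

```python
# Maximum deviation of a decision tree f from a constant reference f0.
# A tree is piecewise constant, so it suffices to scan its leaf predictions.
# Hypothetical encoding: each leaf is a (region_description, predicted_score) pair.

def tree_max_deviation(leaves, f0_const):
    """Exact max over x of |f(x) - f0| for a piecewise-constant tree."""
    best_leaf, best_dev = None, -1.0
    for region, score in leaves:
        dev = abs(score - f0_const)
        if dev > best_dev:
            best_leaf, best_dev = region, dev
    return best_leaf, best_dev

leaves = [("age<40 & bp<120", 0.1),
          ("age<40 & bp>=120", 0.4),
          ("age>=40", 0.9)]
region, dev = tree_max_deviation(leaves, f0_const=0.2)
# Worst case is the "age>=40" leaf, with deviation |0.9 - 0.2| = 0.7.
```

The returned region (not just the value) is what matters operationally: it names the part of the input space that warrants further investigation.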
For tree ensembles, we find that the obtained upper bounds on the maximum deviation are informative, showing that the maximum deviation does not increase with the number of trees in the ensemble. We also study the feature combinations that maximize deviation, which can shed light on the sources of extreme model outputs and guide further investigation. Overall, our discussion suggests that a reason for preferring more interpretable models is that it is easier to assess them for unexpected and potentially unsafe outputs. 2 ASSESSING SAFETY THROUGH MAXIMUM DEVIATION. We are given a supervised learning model f, a function mapping an input feature space X to an output space Y. We wish to assess the safety of this model by finding its worst-case deviation from a given reference model f0 : X → Y. To do this, we additionally require 1) a measure of deviation D : Y × Y → R+, where R+ is the set of non-negative reals, and 2) a certification set C ⊆ X over which the deviation is maximized. Then the problem to be solved is

max_{x ∈ C} D(f(x), f0(x)). (1)

The deviation is worst-case because the maximization is over all x ∈ C; further implications of this are discussed in Appendix C. We view problem (1) as a means toward the goal of evaluating safety. In particular, a large deviation value is not necessarily indicative of a safety risk, as two models may differ significantly for valid reasons. For example, one model may capture a useful pattern that the other does not. What large deviation values do indicate, however, is a (possibly) sufficient reason for further investigation. Hence, the maximizing solutions of (1) (i.e., the arg max) are of operational interest. Below we further discuss some elements of this problem formulation. Output space Y. In the case of regression, Y is the set of reals R or an interval thereof.
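Problem (1) has a direct brute-force reading when C is replaced by a finite list of candidate inputs: enumerate and keep the worst point. A toy sketch (the models f, f0 and the squared-difference deviation D below are stand-ins, not the paper's models):

```python
# Brute-force evaluation of problem (1) over a finite certification set.
# f, f0, and D are illustrative stand-ins.

def max_deviation(f, f0, D, certification_set):
    """Return (argmax x, max_x D(f(x), f0(x))) over a finite set."""
    return max(((x, D(f(x), f0(x))) for x in certification_set),
               key=lambda pair: pair[1])

f  = lambda x: x ** 2          # model under assessment
f0 = lambda x: x               # simple reference model
D  = lambda y, y0: (y - y0) ** 2

C = [i / 10 for i in range(-20, 21)]   # grid over [-2, 2]
x_star, dev = max_deviation(f, f0, D, C)
# Largest gap between x^2 and x on this grid is at x = -2: D = (4 - (-2))^2 = 36.
```

Returning the argmax alongside the value mirrors the point made above: the maximizer itself is what triggers and guides further investigation.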
In the case of binary classification, while Y could be {0, 1} or {−1, +1}, these limit the possible deviations to binary values as well ("same" or "different"). Thus, to provide more informative results, we take Y to be the space of real-valued scores that are thresholded to produce a binary label. For example, y could be a predicted probability in [0, 1] or a log-odds ratio in R. Similarly, for multiclass classification with M classes, Y ⊂ R^M could be an M-dimensional space of real-valued scores. Models that abstain can also be accommodated, as noted in Appendix A. Reference model f0. The premise of the reference model is that it should be "safe" above all. The simplest case mathematically is for f0 to be a constant function representing a baseline value, for example zero. More generally, f0 may be a simple model that can be readily grasped by a human, may be validated against domain knowledge, or may be based on a small number of expert-selected features. Such models are common in medical risk assessment, consumer finance, and predicting semiconductor yield. By simple, we mean for example a linear model with 10 non-zero coefficients or a decision tree with 10 leaves. The reference model could also be an existing model that is not necessarily simple but has been extensively tested and deployed. In this case, f could be a new version of the model, trained on more recent data or improved in some fashion, and we wish to evaluate its safety before deploying it in place of f0. In this and more complex settings, f0 may not be globally interpretable, but may be so in local regions. The machinery developed in this work could be applied in these settings as well to assess the safety of f (more discussion in Appendix C). Certification set C. The premise of the certification set is that it contains all inputs that the model might conceivably be exposed to.
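For the kind of simple reference model just described, e.g., a sparse linear model, the maximization can be exact: if both f and f0 are linear and the certification set is a box, their difference is linear and is extremized at a corner, found coordinate-wise. A sketch under these assumptions (all weights and bounds below are made up):

```python
# Exact max of |f(x) - f0(x)| for linear f(x) = w.x + b and f0(x) = w0.x + b0
# over a box C = {x : lo[i] <= x[i] <= hi[i]}. The difference is linear in x,
# so each coordinate is pushed independently to whichever bound helps.

def linear_max_deviation(w, b, w0, b0, lo, hi):
    dw = [wi - w0i for wi, w0i in zip(w, w0)]
    db = b - b0
    upper = db + sum(d * (h if d > 0 else l) for d, l, h in zip(dw, lo, hi))
    lower = db + sum(d * (l if d > 0 else h) for d, l, h in zip(dw, lo, hi))
    return max(abs(upper), abs(lower))

# f uses two features; the "simple" reference f0 uses only the first:
dev = linear_max_deviation(w=[2.0, -1.0], b=0.5,
                           w0=[1.0, 0.0], b0=0.0,
                           lo=[0.0, 0.0], hi=[1.0, 1.0])
# dw = [1, -1], db = 0.5; max = 0.5 + 1 - 0 = 1.5, min = 0.5 + 0 - 1 = -0.5,
# so the maximum deviation is 1.5, attained at x = (1, 0).
```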
This may include inputs that are highly improbable but not physically or logically impossible (for example, a severely hypothermic human body temperature of 27°C). Thus, while C might be based on the support set of a probability distribution or data sample, it does not depend on the likelihood of points within the support. The set C may also be a strict superset of the training data domain. For example, a model may have been trained on data for males, and we would now like to determine its worst-case behavior on an unseen population of females. For tabular or lower-dimensional data, C might be the entire input space X. For non-tabular or higher-dimensional data, the choice C = X may be too unrepresentative because the manifold of realistic inputs is lower in dimension. In this case, if we have a dataset {x_i}_{i=1}^n, one possibility is to use a union of ℓ_p balls centered at the x_i:

C = ⋃_{i=1}^n B_r^p[x_i], where B_r^p[x_i] = {x ∈ X : ‖x − x_i‖_p ≤ r}. (2)

The set C thus comprises points somewhat close to the n observed examples x_i, but the radius r does not have to be "small". In addition to determining the maximum deviation over the entire set C, maximum deviations over subsets of C (e.g., different age groups) may also be of interest. For example, Appendix D.1 shows deviation values separately for leaves of a decision tree, which partition the input space. 3 RELATED WORK. Our work relates to a number of different technical directions. Varshney & Alemzadeh (2017) and Mohseni et al. (2021) give qualitative accounts suggesting that directly interpretable models are an inherently safe design because humans can inspect them to find spurious elements; in this paper, we attempt to make those qualitative suggestions more quantitative. Furthermore, several other authors have highlighted safety as a goal for interpretability, but without much further development as done here (Otte, 2013; Doshi-Velez & Kim, 2017; Tomsett et al.
, 2018; Gilpin et al., 2018; Rudin, 2019). Moreover, there is no consensus on how to measure interpretability, which motivates the relationship explored in this paper between interpretability and the ease of evaluating safety. In the area of ML verification, robustness certification methods aim to guarantee that the classification remains constant within a radius of an input point, while output reachability is concerned with characterizing the set of outputs corresponding to a region of inputs (Wong & Kolter, 2018; Raghunathan et al., 2018; Huang et al., 2020). Our problem of deviation maximization (1) is more closely related to output reachability. The differences in our work are: 1) we consider two models, a model f to be assessed and a reference f0, and are interested in their difference as measured by the deviation function D; 2) our focus is global, over a comprehensive set C, rather than local to small neighborhoods around input points; 3) we study the role of interpretability in safety verification. Moreover, works in robust optimization applied to machine learning minimize the worst-case probability of error, but this worst case is over parameters of f rather than over individual values of x (Lanckriet et al., 2002). Thomas et al. (2019) present a framework in which a set of safety tests is specified during model training in order to accept or reject a candidate solution. The specification of these tests is left to the model designer, but the goal is to provide a reusable paradigm to support safety in ML solutions. We build on related literature from the model robustness and explainability areas that deals specifically with tree ensembles. Kantchelian et al. (2016) seek to find the smallest perturbation of an input instance that 'evades' a classifier, using mixed-integer programming (MIP).
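The tree-ensemble verification problem can be previewed with the crudest possible relaxation: each tree's output lies between its smallest and largest leaf value, so interval arithmetic bounds the ensemble's maximum deviation from a constant reference (taking D to be absolute difference). The MIP and graph-based methods discussed here tighten exactly this kind of bound by excluding leaf combinations that no single input can realize. A sketch with made-up leaf values:

```python
# Interval upper bound on max_x |F(x) - c| for an ensemble F(x) = sum_t f_t(x)
# and constant reference c. Each tree's output lies between its smallest and
# largest leaf value, so F(x) lies in the sum of those per-tree intervals.
# This ignores which leaves can co-occur, hence it is only an upper bound.

def ensemble_deviation_bound(trees_leaf_values, c):
    lo = sum(min(vals) for vals in trees_leaf_values)
    hi = sum(max(vals) for vals in trees_leaf_values)
    return max(abs(hi - c), abs(lo - c))

# Three hypothetical trees, each listed by its leaf predictions:
trees = [[-0.2, 0.1, 0.4], [-0.3, 0.2], [0.0, 0.5]]
bound = ensemble_deviation_bound(trees, c=0.0)
# lo = -0.5 and hi = 1.1, so the bound is 1.1.
```

Exact methods may show the true maximum is much smaller, since the leaf pattern achieving hi may be infeasible; anytime procedures report progressively tighter versions of such bounds.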
Optimization formulations are also explored by Parmentier & Vidal (2021) for the purpose of counterfactual explanations. MIP approaches are computationally intensive, however. To address this, Chen et al. (2019) introduce graph-based approaches for verification on trees. Their central idea, which we use, is to discretize verification computations onto a graph constructed from the way leaves intersect; the verification problem is transformed into finding all maximum cliques. Devos et al. (2021) expand on this idea by providing anytime bounds obtained by probing unexplored nodes. Safety has become a critical issue in reinforcement learning (RL), with multiple works focusing on making RL policies safe (Amodei et al., 2016; Zhu et al., 2019; Inala et al., 2020; Rupprecht et al., 2020). There are two broad themes (García et al., 2015): (i) a safe and verifiable policy is learned at the outset by enforcing certain constraints, and (ii) post hoc methods are used to identify bad regimes or failure points of an existing policy. Our current proposal is complementary to these works, as we focus on the supervised learning setup viewed through the lens of interpretability. Nonetheless, ramifications of our work in the RL context are briefly discussed in Appendix C. | **Summary:** * The paper puts forth methodology for (efficiently?) computing the worst-case deviation between two fitted models (one a "candidate" model, the other a "reference" model) over some feasible region $\mathcal C$ (which need not be convex). * The takeaway message is that these kinds of computations are useful for evaluating the safety of the candidate model, b/c if its predictions deviate too far from the reference model, then that is a bad sign and someone should "investigate something". * Some calculations are worked out showing that, for a variety of model classes, these computations can be done efficiently-ish.
* Some experimental results are presented, mainly showing how the deviation varies as the tuning parameters used to fit the candidate model are varied ... | SP:4280f6d9d84c4fc28f13b79ccd80018b7e4a57ca |
On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach | (paper text identical to the row above) | In this paper, the authors propose inspecting the deviation between a reference model (e.g., a black-box model) and its approximation (e.g., a white-box model). The main motivation is to inspect the safety of the reference black-box model through the approximating white-box model. Safety properties, such as the possible maximum output within a prescribed input domain, are easier to inspect for white-box models such as decision trees and linear models. Thus, if we can evaluate the maximum deviation between the black-box model and the white-box model, we can evaluate the safety of the black-box model as well.
Based on this idea, the paper considers several possible approaches for inspecting the maximum deviation between the models. In the experiments, the authors demonstrate that the maximum deviation can be considerably large in practice. | SP:4280f6d9d84c4fc28f13b79ccd80018b7e4a57ca |
On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach | 1 INTRODUCTION . Interpretable and explainable machine learning ( ML ) has seen a recent surge of interest because it is viewed as one of the key pillars in making models trustworthy , with implications on fairness , reliability , and safety ( Varshney , 2019 ) . It is generally accepted that the ultimate measure of ML explainability is whether a human finds the explanations useful ( Doshi-Velez & Kim , 2017 ; Dhurandhar et al. , 2017 ) . However , less attention has been paid to deeper reasons behind the human desire for explainability . In this paper , we posit an important reason is toward achieving safety and preventing unexpected harms ( Varshney & Alemzadeh , 2017 ) . This reason is implicit in the dichotomy between directly interpretable models vs. post hoc explanations of black-box models . Some argue that only directly interpretable models should be used in high-risk applications ( Rudin , 2019 ) . The crux of this argument is that post hoc explanations leave a gap between the explanation and the model that is producing the predictions . Thus , unusual data points may appear to be harmless based on the explanation , but truly cause havoc . This argument however does not explicitly address the question : What does safety mean for such models , and how is it intertwined with interpretability ? Towards answering this question , we propose a mathematical definition for assessing the safety of supervised learning ( i.e . predictive ) models . Viewing these models as functions mapping an input space to an output space , a key way in which these models can cause harm is through grossly unexpected outputs , corresponding to inputs that are poorly represented in training data . Accordingly , we approach safety assessment for a model by determining its maximum deviation over a certification set from the output of a reference model . 
The idea of a certification set is that it is a large subset of the input space and is intended to cover all conceivable inputs to the model . The reference model could be a simple , well-understood model or an existing model that has been “ tried and tested. ” These concepts are discussed further in Section 2 . In Section 4 , we discuss the computation of the maximum deviation for different model classes and show how this is facilitated by interpretability . For model classes regarded as interpretable , including trees , rule lists , generalized linear and additive models , the maximum deviation can be computed exactly and efficiently by exploiting the model structure . For tree ensembles , which are not regarded as interpretable , discrete optimization techniques can exploit their composition in terms of trees to provide anytime bounds on the maximum deviation . The case of trees is also generalized in a different direction by considering a broader class of functions that are piecewise Lipschitz , which we argue cover many popular interpretable functions . Here we show that the benefit of interpretability is significantly tighter regret bounds on the maximum deviation compared with black box functions , by appropriately repurposing results from the multi-armed bandit literature in this context . On the other hand , it is less clear that post hoc explanations , which approximate a model locally ( Ribeiro et al. , 2016 ; Lundberg & Lee , 2017 ; Dhurandhar et al. , 2018 ) or globally ( Buciluǎ et al. , 2006 ; Hinton et al. , 2015 ) , can help with evaluating the maximum deviation and hence safety . We conduct experiments to illustrate the deviation maximization methods in Section 4 for decision trees , linear and additive models , and tree ensembles . The results in Section 5 quantify how the maximum deviation increases as the size of the certification set increases and as the smoothness of the models decreases . 
For tree ensembles , we find that the obtained upper bounds on the maximum deviation are informative , showing that the maximum deviation does not increase with the number of trees in the ensemble . We also study the feature combinations that maximize deviation , which can shed light on the sources of extreme model outputs and guide further investigation . Overall , our discussion suggests that a reason for preferring more interpretable models is that it is easier to assess them for unexpected and potentially unsafe outputs . 2 ASSESSING SAFETY THROUGH MAXIMUM DEVIATION . We are given a supervised learning model f , which is a function mapping an input feature space X to an output space Y . We wish to assess the safety of this model by finding its worst-case deviation from a given reference model f0 : X 7→ Y . To do this , we additionally require 1 ) a measure of deviation D : Y × Y 7→ R+ , where R+ is the set of non-negative reals , and 2 ) a certification set C ⊆ X over which the deviation is maximized . Then the problem to be solved is max x∈C D ( f ( x ) , f0 ( x ) ) . ( 1 ) The deviation is worst-case because the maximization is over all x ∈ C ; further implications of this are discussed in Appendix C. We view problem ( 1 ) as a means toward the goal of evaluating safety . In particular , a large deviation value is not necessarily indicative of a safety risk , as two models may differ significantly for valid reasons . For example , one model may capture a useful pattern that the other does not . What large deviation values do indicate , however , is a ( possibly ) sufficient reason for further investigation . Hence , the maximizing solutions in ( 1 ) ( i.e. , the arg max ) are of operational interest . Below we further discuss some elements in this problem formulation . Output space Y . In the case of regression , Y is the set of reals R or an interval thereof . 
In the case of binary classification, while Y could be {0, 1} or {−1, +1}, these limit the possible deviations to binary values as well ("same" or "different"). Thus, to provide more informative results, we take Y to be the space of real-valued scores that are thresholded to produce a binary label. For example, y could be a predicted probability in [0, 1] or a log-odds ratio in R. Similarly, for multiclass classification with M classes, Y ⊂ R^M could be an M-dimensional space of real-valued scores. Models that abstain can also be accommodated as noted in Appendix A. Reference model f0. The premise of the reference model is that it should be "safe" above all. The simplest case mathematically is for f0 to be a constant function representing a baseline value, for example zero. More generally, f0 may be a simple model that can be readily grasped by a human, may be validated against domain knowledge, or may be based on a small number of expert-selected features. Such models are common in medical risk assessment, consumer finance, and predicting semiconductor yield. By simple, we mean for example a linear model with 10 non-zero coefficients or a decision tree with 10 leaves. The reference model could also be an existing model that is not necessarily simple but has been extensively tested and deployed. In this case, f could be a new version of the model, trained on more recent data or improved in some fashion, and we wish to evaluate its safety before deploying it in place of f0. In this and more complex settings, f0 may not be globally interpretable, but may be so in local regions. The machinery developed in this work could be applied in these settings as well to assess the safety of f (more discussion in Appendix C). Certification set C. The premise of the certification set is that it contains all inputs that the model might conceivably be exposed to.
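One concrete construction for C, discussed below, is a union of ℓp balls around observed examples. Membership in such a set is easy to test; the helper below is a hypothetical sketch of that check, with made-up names and data, and is not part of the paper.

```python
import numpy as np

def in_certification_set(x, centers, r, p=2):
    # True if x lies in the union of l_p balls of radius r centered at the
    # given examples. Hypothetical helper for illustration only.
    diffs = np.asarray(centers, dtype=float) - np.asarray(x, dtype=float)
    dists = np.linalg.norm(diffs, ord=p, axis=1)
    return bool((dists <= r).any())

centers = [[0.0, 0.0], [5.0, 5.0]]
inside  = in_certification_set([0.5, 0.0], centers, r=1.0)  # within the first ball
outside = in_certification_set([2.5, 2.5], centers, r=1.0)  # far from both balls
```

Note that membership depends only on distance to the observed examples, not on their likelihood, consistent with the text's point that C ignores probability within the support.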
This may include inputs that are highly improbable but not physically or logically impossible (for example, a severely hypothermic human body temperature of 27°C). Thus, while C might be based on the support set of a probability distribution or data sample, it does not depend on the likelihood of points within the support. The set C may also be a strict superset of the training data domain. For example, a model may have been trained on data for males, and we would now like to determine its worst-case behavior on an unseen population of females. For tabular or lower-dimensional data, C might be the entire input space X. For non-tabular or higher-dimensional data, the choice C = X may be too unrepresentative because the manifold of realistic inputs is lower in dimension. In this case, if we have a dataset {x_i}_{i=1}^n, one possibility is to use a union of ℓ_p balls centered at the x_i,

C = ⋃_{i=1}^n B_r^p[x_i],  where  B_r^p[x_i] = { x ∈ X : ‖x − x_i‖_p ≤ r }.    (2)

The set C is thus comprised of points somewhat close to the n observed examples x_i, but the radius r does not have to be "small". In addition to determining the maximum deviation over the entire set C, maximum deviations over subsets of C (e.g., different age groups) may also be of interest. For example, Appendix D.1 shows deviation values separately for leaves of a decision tree, which partition the input space. 3 RELATED WORK. Our work relates to a number of different technical directions. Varshney & Alemzadeh (2017) and Mohseni et al. (2021) give qualitative accounts suggesting that directly interpretable models are an inherently safe design because humans can inspect them to find spurious elements; in this paper, we attempt to make those qualitative suggestions more quantitative. Furthermore, several other authors have highlighted safety as a goal for interpretability, but without much further development as done here (Otte, 2013; Doshi-Velez & Kim, 2017; Tomsett et al.
, 2018; Gilpin et al., 2018; Rudin, 2019). Moreover, there is no consensus on how to measure interpretability, which motivates the relationship explored in this paper between interpretability and the ease of evaluating safety. In the area of ML verification, robustness certification methods aim to provide guarantees that the classification remains constant within a radius of an input point, while output reachability is concerned with characterizing the set of outputs corresponding to a region of inputs (Wong & Kolter, 2018; Raghunathan et al., 2018; Huang et al., 2020). Our problem of deviation maximization (1) is more closely related to output reachability. The differences in our work are: 1) we consider two models, a model f to be assessed and a reference f0, and are interested in their difference as measured by the deviation function D; 2) our focus is global, over a comprehensive set C, rather than local to small neighborhoods around input points; 3) we study the role of interpretability in safety verification. Moreover, works in robust optimization applied to machine learning minimize the worst-case probability of error, but this worst case is over parameters of f rather than over individual values of x (Lanckriet et al., 2002). Thomas et al. (2019) present a framework in which, during model training, a set of safety tests is specified in order to accept or reject a candidate solution. The specification of these tests is left to the model designer, but the goal of the proposed solution is to provide a reusable paradigm to support safety in ML solutions. We build on related literature from the model robustness and explainability areas that deals specifically with tree ensembles. Kantchelian et al. (2016) seek to find the smallest perturbation of an input instance to 'evade' a classifier using mixed-integer programming (MIP).
Optimization formulations are also explored by Parmentier & Vidal (2021) for the purposes of counterfactual explanations. MIP approaches are computationally intensive, however. To address this, Chen et al. (2019) introduce graph-based approaches for verification on trees. Their central idea, which we use, is to discretize verification computations onto a graph constructed from the way leaves intersect. The verification problem is transformed to one of finding all maximum cliques. Devos et al. (2021) expand on this idea by providing anytime bounds via probing of unexplored nodes. Safety has become a critical issue in reinforcement learning (RL), with multiple works focusing on making RL policies safe (Amodei et al., 2016; Zhu et al., 2019; Inala et al., 2020; Rupprecht et al., 2020). There are two broad themes (García et al., 2015): (i) a safe and verifiable policy is learned at the outset by enforcing certain constraints, and (ii) post hoc methods are used to identify bad regimes or failure points of an existing policy. Our current proposal is complementary to these works, as we focus on the supervised learning setup viewed from the lens of interpretability. Nonetheless, ramifications of our work in the RL context are briefly discussed in Appendix C. | The authors posit that the demand for explainable/interpretable models in machine learning is linked to safety. In other words, domain experts trust explainable or interpretable models more. Based on this premise, they suggest assessing the safety of a model by measuring its maximum deviation from a reference model, which is supposed to be safe. This deviation is computed on a certification set of data points. Additionally, the authors show that their proposed maximum deviation can be computed exactly for some interpretable models, such as decision trees, rule lists, and generalized linear and additive models.
Further, they discuss implications for tree ensembles and empirically evaluate their approach. | SP:4280f6d9d84c4fc28f13b79ccd80018b7e4a57ca |
Prototype Based Classification from Hierarchy to Fairness | 1 INTRODUCTION . Neural networks are able to learn rich representations of data that support highly accurate classification ; however , understanding or controlling what neural nets learn remains challenging . Some techniques offer insight into pre-trained models by uncovering directions within latent spaces that correspond to particular concepts , image manipulations , or more ( Goetschalckx et al. , 2019 ; Kim et al. , 2018 ) , while approaches focused on interpretability provide techniques that are more comprehensible to humans ( Li et al. , 2018 ; Chen et al. , 2019 ) . While these methods provide insight , they fail to offer control : humans observe learned patterns but are unable to guide models such that learned relationships are useful for a particular setting or task . Another line of work has advanced the design of models for particular types of classification tasks ( such as fair or hierarchical classification ) but these techniques are often developed with only one problem in mind ( Zemel et al. , 2016 ; Xie et al. , 2017 ; Hase et al. , 2019 ) . For example , models built for fair classification ( predicting an outcome regardless of information about a protected field ) are only used to enforce independence of concepts rather than hierarchy . Thus , humans may exert control over learned representations by selecting an appropriate technique rather than tuning training parameters within the same technique . We have designed a new neural network architecture , the concept subspace network ( CSN ) , which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships . CSNs use prototype-based representations , a technique employed in interpretable neural networks in prior art ( Li et al. , 2018 ; Chen et al. , 2019 ; Garnot & Landrieu , 2020 ) . 
A single CSN uses sets of prototypes in order to simultaneously learn multiple concepts; classification within a single concept (e.g., "type of animal") is performed by projecting encodings into a concept subspace defined by the prototypes for that concept (e.g., "bird," "dog," etc.). Lastly, CSNs use a measure of concept subspace alignment to guide concept relationships such as independence or hierarchy. In our experiments, CSNs performed comparably to the state of the art in fair classification, despite prior methods being designed only for this type of problem. In applying CSNs to hierarchical classification tasks, networks automatically deduced interpretable representations of the hierarchical problem structure, allowing them to outperform the state of the art, for a given neural network backbone, in terms of both accuracy and average cost of errors on the CIFAR100 dataset. Lastly, in a human-motion prediction task, we demonstrated how a single CSN could enforce both fairness (to preserve participant privacy) and hierarchy (to exploit a known taxonomy of tasks). Our findings suggest that CSNs may be applied to a wide range of problems that had previously only been addressed individually, or not at all. 2 RELATED WORK. 2.1 INTERPRETABILITY AND PROTOTYPE NETWORKS. Numerous post-hoc explanation techniques fit models to pre-trained neural nets; if humans understand these auxiliary models, they can hypothesize about how the neural nets behave (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, techniques in which explanations are decoupled from underlying logic may be susceptible to adversarial attacks or produce misleading explanations (Heo et al., 2019; Slack et al., 2020). Unlike such decoupled explanations, interpretability research seeks to expose a model's reasoning. In this work we focus on prototype-based latent representations in neural nets.
There is a long history of learning discrete representations in continuous spaces , originating under “ vector quantization ” literature ( Kohonen , 1990 ; Schneider et al. , 2009 ) . More recently , the prototype case network ( PCN ) comprised an autoencoder model that clustered encodings around understandable , trainable prototypes , with classifications made via a linear weighting of the distances from encodings to prototypes ( Li et al. , 2018 ) . Further research in image classification extended PCNs to use convolutional filters as prototypes and for hierarchical classification in the hierarchical prototype network ( HPN ) ( Chen et al. , 2019 ; Hase et al. , 2019 ) . Lastly , Garnot & Landrieu ( 2020 ) use prototypes in Metric-Guided Prototype Learning ( MGP ) in conjunction with a loss function to cluster prototypes to minimize user-defined costs . Our model similarly uses trainable prototypes for classification , but differs from prior art in two respects . First , we modify the standard PCN architecture to support other changes , without degrading classification performance . Second , like HPNs ( but not PCNs or MGP ) , CSNs leverage multiple sets of prototypes to enable hierarchical classification but also allow for non-hierarchical concept relationships . 2.2 FAIR AND HIERARCHICAL CLASSIFICATION . AI fairness research considers how to mitigate undesirable patterns or biases in machine learning models . Consider the problem of predicting a person ’ s credit risk : non-causal correlations between age and risk may lead AI models to inappropriately penalize people according to their age ( Zemel et al. , 2016 ) . The problem of fair classification is often framed as follows : given inputs , x , which are informative of a protected field , s , and outcome , y , predict y from x without being influenced by s ( Zemel et al. , 2013 ) . Merely removing s from x ( e.g. 
, not including age as an input to a credit predictor ) rarely removes all information about s , so researchers have developed a variety of techniques to create representations that “ purge ” information about s ( Zemel et al. , 2016 ; Xie et al. , 2017 ; Jiang et al. , 2020 ) . Hierarchical classification solves a different problem : given a hierarchical taxonomy of classes ( e.g. , birds vs. dogs at a high level and sparrows vs. toucans at a low level ) , output the correct label at each classification level . Neural nets using convolution and recurrent layers in specialized designs have achieved remarkable success in hierarchical image classification ( Zhu & Bain , 2017 ; Guo et al. , 2018 ) . The hierarchical prototype network ( HPN ) uses prototypes and a training routine based upon conditional subsets of training data to create hierarchically-organized prototypes ( Hase et al. , 2019 ) . Garnot & Landrieu ( 2020 ) also use prototypes for hierarchical classification in Metric-Guided Prototype Learning ( MGP ) by adjusting the training loss to guide prototype arrangement . Neither HPN nor MGP explicitly models relationships between multiple subsets of prototypes . Lastly , recent works propose hyperbolic latent spaces as a natural way to model hierarchical data ( Dai et al. , 2021 ; Mathieu et al. , 2019 ; Nickel & Kiela , 2017 ; Liu et al. , 2020 ) . Our method , conversely , relies upon concepts from Euclidean geometry . Extending the principle of subspace alignment that we develop to non-Euclidean geometric spaces is a promising direction but is beyond the scope of this work . 3 TECHNICAL APPROACH . In this section , we outlined the design of the CSN , which was inspired by desires for both interpretable representations and explicit concept relationships . First , we wished for interpretable representations , so we built upon the PCN design , with modifications . 
Second, we explicitly encoded relationships between concepts by introducing multiple sets of prototypes, instead of just one as in PCNs. Third, we enabled guidance of the concept relationships by modifying the CSN training loss. Together, these changes supported not only interpretable classification, but also provided a flexible framework for a single model architecture to learn different concept relationships. 3.1 CONCEPT SUBSPACE CLASSIFICATION. A CSN performing a single classification task (e.g., identifying a digit in an image) is defined by three sets of trainable weights. First, an encoder parametrized by weights θ, e_θ, maps from inputs of dimension X to encodings of dimension Z: e_θ : R^X → R^Z. Second, a decoder parametrized by weights φ, d_φ, performs the decoding function of mapping from encodings to reconstructed inputs: d_φ : R^Z → R^X. Third, there exists a set of k trainable prototype weights, p, that are each Z-dimensional vectors: p_1, p_2, ..., p_k ∈ R^Z. This architecture resembles that of the PCN, but without the additional linear classification layer (Li et al., 2018). Here, we focus briefly on the set of prototypes, p. Given a set of k prototypes in R^Z, we define a "concept subspace," C, as follows:

v_i = p_i − p_1  ∀i ∈ [2, k]    (1)

C = { x ∈ R^Z : x = p_1 + ∑_{i ∈ [2, k]} λ_i v_i  for λ_i ∈ R ∀i }    (2)

C is the linear subspace in R^Z defined by starting at the first prototype and adding linear scalings of the vector differences to all other prototypes. We call this subspace a concept subspace because it represents a space of encodings between prototypes defining a single concept (e.g., prototypes for digits 0, 1, 2, etc. define a concept subspace for digit classification).
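Since equations (1) and (2) define an affine subspace, projecting an encoding onto it reduces to a least-squares problem. The following is a small linear-algebra sketch of that projection; the function name and the toy prototypes are assumptions for illustration, not the authors' code.

```python
import numpy as np

def project_to_concept_subspace(z, prototypes):
    # Orthogonal projection of encoding z onto the concept subspace of
    # equations (1)-(2): the affine span of the prototypes, i.e. p_1 plus
    # the span of the difference vectors v_i = p_i - p_1. Sketch only.
    p = np.asarray(prototypes, dtype=float)
    V = (p[1:] - p[0]).T  # columns are the v_i
    coeffs, *_ = np.linalg.lstsq(V, np.asarray(z, dtype=float) - p[0], rcond=None)
    return p[0] + V @ coeffs

# Three prototypes in R^3 whose concept subspace is the xy-plane.
prototypes = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
z_proj = project_to_concept_subspace([0.3, 0.7, 2.0], prototypes)
```

Here the component of the encoding orthogonal to the plane is discarded, which is exactly the part the text argues is irrelevant to classification within the concept.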
A CSN's architecture, consisting of an encoder, a decoder, and a set of prototypes with the associated concept subspace, enables two types of functionality: the encoder and decoder may be composed to reconstruct inputs via their latent representations, and CSNs may perform classification tasks by mapping an input, x, to one of Y discrete categories. Classification is performed by first encoding an input into a latent representation, z = e_θ(x). The l2 distance from z to each prototype is then calculated, yielding k distance values: d_i(z, p) = ‖z − p_i‖_2^2, i ∈ [1, k]. These distances are mapped to a probability distribution, P_K(i), i ∈ [1, k], by taking the softmax of their negatives. Lastly, if there are more prototypes than classes (e.g., two prototypes for dogs, two for cats, etc.), the distribution over k prototypes is converted to a distribution over Y categories by summing the probabilities for prototypes belonging to the same class. For single-concept classification, CSNs differ from PCNs primarily by removing the linear layer that PCNs used to transform distances to prototypes into classifications. We found this unnecessary for high classification accuracy (Appendix A) and instead directly used negative distances. Without the linear layer, CSN classification is equivalent to projecting encodings, z, onto a concept subspace before calculating distances. The distances between the projected encoding, dubbed z_proj, and the prototypes induce the same softmax distribution as when the orthogonal component remains. Indeed, we find projection more intuitive, since only the component of z that corresponds to the subspace is used for classification, and list projection as a standard step in the remainder of this paper. A simple example of projecting an encoding and calculating distances to prototypes is shown in Figure 1a.
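The distance-softmax classification just described can be sketched in a few lines. The function name, the prototype-to-class mapping, and the toy latent points below are illustrative assumptions; this is not the released CSN implementation.

```python
import numpy as np

def csn_predict(z, prototypes, proto_to_class, n_classes):
    # Squared l2 distances d_i = ||z - p_i||^2 to each prototype, softmax of
    # their negatives, then probabilities summed over prototypes that share a
    # class (the "two prototypes for dogs, two for cats" case). Sketch only.
    p = np.asarray(prototypes, dtype=float)
    d = ((p - np.asarray(z, dtype=float)) ** 2).sum(axis=1)
    logits = -d
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    class_probs = np.zeros(n_classes)
    for i, c in enumerate(proto_to_class):
        class_probs[c] += probs[i]
    return class_probs

# Two prototypes per class in a 2-D latent space.
protos = [[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]]
class_probs = csn_predict([0.1, 0.1], protos, proto_to_class=[0, 0, 1, 1], n_classes=2)
```

An encoding near the first pair of prototypes receives nearly all of the probability mass for class 0, mirroring how negative distances replace PCN's linear classification layer.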
For some tasks , we used an encoder design from variational-autoencoders ( VAEs ) in order to regularize the distribution of encodings to conform to unit Gaussians ( Kingma & Welling , 2014 ) . By default , this regularization loss was set to 0 , but it sometimes proved useful in some domains to prevent overfitting ( as detailed in experiments later ) . We emphasize that CSNs are discriminative , rather than generative , models , so we did not seek to learn a latent space from which to sample . | The paper builds on prior work on prototypical classification networks (more specifically, the work of Li et al. 2018) and additionally tries to include criteria such as orthogonality to enable applications such as fair classification. An application to hierarchical networks is also described though the details are very hard to understand. Experiments show that the resulting models are able to achieve reasonable fairness accuracy tradeoffs. | SP:3711c6d28737e8773b24bb41074add91a3cc1383 |
Prototype Based Classification from Hierarchy to Fairness | 1 INTRODUCTION . Neural networks are able to learn rich representations of data that support highly accurate classification ; however , understanding or controlling what neural nets learn remains challenging . Some techniques offer insight into pre-trained models by uncovering directions within latent spaces that correspond to particular concepts , image manipulations , or more ( Goetschalckx et al. , 2019 ; Kim et al. , 2018 ) , while approaches focused on interpretability provide techniques that are more comprehensible to humans ( Li et al. , 2018 ; Chen et al. , 2019 ) . While these methods provide insight , they fail to offer control : humans observe learned patterns but are unable to guide models such that learned relationships are useful for a particular setting or task . Another line of work has advanced the design of models for particular types of classification tasks ( such as fair or hierarchical classification ) but these techniques are often developed with only one problem in mind ( Zemel et al. , 2016 ; Xie et al. , 2017 ; Hase et al. , 2019 ) . For example , models built for fair classification ( predicting an outcome regardless of information about a protected field ) are only used to enforce independence of concepts rather than hierarchy . Thus , humans may exert control over learned representations by selecting an appropriate technique rather than tuning training parameters within the same technique . We have designed a new neural network architecture , the concept subspace network ( CSN ) , which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships . CSNs use prototype-based representations , a technique employed in interpretable neural networks in prior art ( Li et al. , 2018 ; Chen et al. , 2019 ; Garnot & Landrieu , 2020 ) . 
A single CSN uses sets of prototypes in order to simultaneously learn multiple concepts; classification within a single concept (e.g., "type of animal") is performed by projecting encodings into a concept subspace defined by the prototypes for that concept (e.g., "bird," "dog," etc.). Lastly, CSNs use a measure of concept subspace alignment to guide concept relationships such as independence or hierarchy. In our experiments, CSNs performed comparably to the state of the art in fair classification, despite prior methods being designed only for this type of problem. In applying CSNs to hierarchical classification tasks, networks automatically deduced interpretable representations of the hierarchical problem structure, allowing them to outperform the state of the art, for a given neural network backbone, in terms of both accuracy and average cost of errors on the CIFAR100 dataset. Lastly, in a human-motion prediction task, we demonstrated how a single CSN could enforce both fairness (to preserve participant privacy) and hierarchy (to exploit a known taxonomy of tasks). Our findings suggest that CSNs may be applied to a wide range of problems that had previously only been addressed individually, or not at all. 2 RELATED WORK. 2.1 INTERPRETABILITY AND PROTOTYPE NETWORKS. Numerous post-hoc explanation techniques fit models to pre-trained neural nets; if humans understand these auxiliary models, they can hypothesize about how the neural nets behave (Ribeiro et al., 2016; Lundberg & Lee, 2017). However, techniques in which explanations are decoupled from underlying logic may be susceptible to adversarial attacks or produce misleading explanations (Heo et al., 2019; Slack et al., 2020). Unlike such decoupled explanations, interpretability research seeks to expose a model's reasoning. In this work we focus on prototype-based latent representations in neural nets.
There is a long history of learning discrete representations in continuous spaces , originating under “ vector quantization ” literature ( Kohonen , 1990 ; Schneider et al. , 2009 ) . More recently , the prototype case network ( PCN ) comprised an autoencoder model that clustered encodings around understandable , trainable prototypes , with classifications made via a linear weighting of the distances from encodings to prototypes ( Li et al. , 2018 ) . Further research in image classification extended PCNs to use convolutional filters as prototypes and for hierarchical classification in the hierarchical prototype network ( HPN ) ( Chen et al. , 2019 ; Hase et al. , 2019 ) . Lastly , Garnot & Landrieu ( 2020 ) use prototypes in Metric-Guided Prototype Learning ( MGP ) in conjunction with a loss function to cluster prototypes to minimize user-defined costs . Our model similarly uses trainable prototypes for classification , but differs from prior art in two respects . First , we modify the standard PCN architecture to support other changes , without degrading classification performance . Second , like HPNs ( but not PCNs or MGP ) , CSNs leverage multiple sets of prototypes to enable hierarchical classification but also allow for non-hierarchical concept relationships . 2.2 FAIR AND HIERARCHICAL CLASSIFICATION . AI fairness research considers how to mitigate undesirable patterns or biases in machine learning models . Consider the problem of predicting a person ’ s credit risk : non-causal correlations between age and risk may lead AI models to inappropriately penalize people according to their age ( Zemel et al. , 2016 ) . The problem of fair classification is often framed as follows : given inputs , x , which are informative of a protected field , s , and outcome , y , predict y from x without being influenced by s ( Zemel et al. , 2013 ) . Merely removing s from x ( e.g. 
, not including age as an input to a credit predictor ) rarely removes all information about s , so researchers have developed a variety of techniques to create representations that “ purge ” information about s ( Zemel et al. , 2016 ; Xie et al. , 2017 ; Jiang et al. , 2020 ) . Hierarchical classification solves a different problem : given a hierarchical taxonomy of classes ( e.g. , birds vs. dogs at a high level and sparrows vs. toucans at a low level ) , output the correct label at each classification level . Neural nets using convolution and recurrent layers in specialized designs have achieved remarkable success in hierarchical image classification ( Zhu & Bain , 2017 ; Guo et al. , 2018 ) . The hierarchical prototype network ( HPN ) uses prototypes and a training routine based upon conditional subsets of training data to create hierarchically-organized prototypes ( Hase et al. , 2019 ) . Garnot & Landrieu ( 2020 ) also use prototypes for hierarchical classification in Metric-Guided Prototype Learning ( MGP ) by adjusting the training loss to guide prototype arrangement . Neither HPN nor MGP explicitly models relationships between multiple subsets of prototypes . Lastly , recent works propose hyperbolic latent spaces as a natural way to model hierarchical data ( Dai et al. , 2021 ; Mathieu et al. , 2019 ; Nickel & Kiela , 2017 ; Liu et al. , 2020 ) . Our method , conversely , relies upon concepts from Euclidean geometry . Extending the principle of subspace alignment that we develop to non-Euclidean geometric spaces is a promising direction but is beyond the scope of this work . 3 TECHNICAL APPROACH . In this section , we outlined the design of the CSN , which was inspired by desires for both interpretable representations and explicit concept relationships . First , we wished for interpretable representations , so we built upon the PCN design , with modifications . 
Second, we explicitly encoded relationships between concepts by introducing multiple sets of prototypes, instead of just one as in PCNs. Third, we enabled guidance of the concept relationships by modifying the CSN training loss. Together, these changes supported not only interpretable classification, but also provided a flexible framework for a single model architecture to learn different concept relationships. 3.1 CONCEPT SUBSPACE CLASSIFICATION. A CSN performing a single classification task (e.g., identifying a digit in an image) is defined by three sets of trainable weights. First, an encoder parametrized by weights θ, e_θ, maps from inputs of dimension X to encodings of dimension Z: e_θ : R^X → R^Z. Second, a decoder parametrized by weights φ, d_φ, performs the decoding function of mapping from encodings to reconstructed inputs: d_φ : R^Z → R^X. Third, there exists a set of k trainable prototype weights, p, that are each Z-dimensional vectors: p_1, p_2, ..., p_k ∈ R^Z. This architecture resembles that of the PCN, but without the additional linear classification layer (Li et al., 2018). Here, we focus briefly on the set of prototypes, p. Given a set of k prototypes in R^Z, we define a "concept subspace," C, as follows:

v_i = p_i − p_1  ∀i ∈ [2, k]    (1)

C = { x ∈ R^Z : x = p_1 + ∑_{i ∈ [2, k]} λ_i v_i  for λ_i ∈ R ∀i }    (2)

C is the linear subspace in R^Z defined by starting at the first prototype and adding linear scalings of the vector differences to all other prototypes. We call this subspace a concept subspace because it represents a space of encodings between prototypes defining a single concept (e.g., prototypes for digits 0, 1, 2, etc. define a concept subspace for digit classification).
A CSN's architecture, consisting of an encoder, a decoder, and a set of prototypes with the associated concept subspace, enables two types of functionality: the encoder and decoder may be composed to reconstruct inputs via their latent representations, and CSNs may perform classification tasks by mapping an input, x, to one of Y discrete categories. Classification is performed by first encoding an input into a latent representation, z = e_θ(x). The l2 distance from z to each prototype is then calculated, yielding k distance values: d_i(z, p) = ‖z − p_i‖_2^2, i ∈ [1, k]. These distances are mapped to a probability distribution, P_K(i), i ∈ [1, k], by taking the softmax of their negatives. Lastly, if there are more prototypes than classes (e.g., two prototypes for dogs, two for cats, etc.), the distribution over k prototypes is converted to a distribution over Y categories by summing the probabilities for prototypes belonging to the same class. For single-concept classification, CSNs differ from PCNs primarily by removing the linear layer that PCNs used to transform distances to prototypes into classifications. We found this unnecessary for high classification accuracy (Appendix A) and instead directly used negative distances. Without the linear layer, CSN classification is equivalent to projecting encodings, z, onto a concept subspace before calculating distances. The distances between the projected encoding, dubbed z_proj, and the prototypes induce the same softmax distribution as when the orthogonal component remains. Indeed, we find projection more intuitive, since only the component of z that corresponds to the subspace is used for classification, and list projection as a standard step in the remainder of this paper. A simple example of projecting an encoding and calculating distances to prototypes is shown in Figure 1a.
For some tasks , we used an encoder design from variational-autoencoders ( VAEs ) in order to regularize the distribution of encodings to conform to unit Gaussians ( Kingma & Welling , 2014 ) . By default , this regularization loss was set to 0 , but it sometimes proved useful in some domains to prevent overfitting ( as detailed in experiments later ) . We emphasize that CSNs are discriminative , rather than generative , models , so we did not seek to learn a latent space from which to sample . | The authors propose a novel model — called Concept Subspace Network (CSN) — for both hierarchical and fair classification. The idea behind the network is to use sets of prototypes to define concept subspaces in the latent space defined by the neural network itself. The relationships between the subspaces can be manipulated at training time to enforce concept relationships (i.e., two concept subspaces are orthogonal if the concepts they represent are independent, while they are parallel if the concepts they represent are hierarchically organised). | SP:3711c6d28737e8773b24bb41074add91a3cc1383 |
Prototype Based Classification from Hierarchy to Fairness | 1 INTRODUCTION . Neural networks are able to learn rich representations of data that support highly accurate classification ; however , understanding or controlling what neural nets learn remains challenging . Some techniques offer insight into pre-trained models by uncovering directions within latent spaces that correspond to particular concepts , image manipulations , or more ( Goetschalckx et al. , 2019 ; Kim et al. , 2018 ) , while approaches focused on interpretability provide techniques that are more comprehensible to humans ( Li et al. , 2018 ; Chen et al. , 2019 ) . While these methods provide insight , they fail to offer control : humans observe learned patterns but are unable to guide models such that learned relationships are useful for a particular setting or task . Another line of work has advanced the design of models for particular types of classification tasks ( such as fair or hierarchical classification ) but these techniques are often developed with only one problem in mind ( Zemel et al. , 2016 ; Xie et al. , 2017 ; Hase et al. , 2019 ) . For example , models built for fair classification ( predicting an outcome regardless of information about a protected field ) are only used to enforce independence of concepts rather than hierarchy . Thus , humans may exert control over learned representations by selecting an appropriate technique rather than tuning training parameters within the same technique . We have designed a new neural network architecture , the concept subspace network ( CSN ) , which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships . CSNs use prototype-based representations , a technique employed in interpretable neural networks in prior art ( Li et al. , 2018 ; Chen et al. , 2019 ; Garnot & Landrieu , 2020 ) . 
A single CSN uses sets of prototypes in order to simultaneously learn multiple concepts ; classification within a single concept ( e.g. , “ type of animal ” ) is performed by projecting encodings into a concept subspace defined by the prototypes for that concept ( e.g. , “ bird , ” “ dog , ” etc. ) . Lastly , CSNs use a measure of concept subspace alignment to guide concept relationships such as independence or hierarchy . In our experiments , CSNs performed comparably to the state of the art in fair classification , despite prior methods being designed only for this type of problem . In applying CSNs to hierarchical classification tasks , networks automatically deduced interpretable representations of the hierarchical problem structure , allowing them to outperform the state of the art , for a given neural network backbone , in terms of both accuracy and average cost of errors on the CIFAR100 dataset . Lastly , in a human-motion prediction task , we demonstrated how a single CSN could enforce both fairness ( to preserve participant privacy ) and hierarchy ( to exploit a known taxonomy of tasks ) . Our findings suggest that CSNs may be applied to a wide range of problems that had previously only been addressed individually , or not at all . 2 RELATED WORK . 2.1 INTERPRETABILITY AND PROTOTYPE NETWORKS . Numerous post-hoc explanation techniques fit models to pre-trained neural nets ; if humans understand these auxiliary models , they can hypothesize about how the neural nets behave ( Ribeiro et al. , 2016 ; Lundberg & Lee , 2017 ) . However , techniques in which explanations are decoupled from underlying logic may be susceptible to adversarial attacks or produce misleading explanations ( Heo et al. , 2019 ; Slack et al. , 2020 ) . Unlike such decoupled explanations , interpretability research seeks to expose a model ’ s reasoning . In this work we focus on prototype-based latent representations in neural nets . 
There is a long history of learning discrete representations in continuous spaces , originating in the “ vector quantization ” literature ( Kohonen , 1990 ; Schneider et al. , 2009 ) . More recently , the prototype case network ( PCN ) comprised an autoencoder model that clustered encodings around understandable , trainable prototypes , with classifications made via a linear weighting of the distances from encodings to prototypes ( Li et al. , 2018 ) . Further research in image classification extended PCNs to use convolutional filters as prototypes and , in the hierarchical prototype network ( HPN ) , for hierarchical classification ( Chen et al. , 2019 ; Hase et al. , 2019 ) . Lastly , Garnot & Landrieu ( 2020 ) use prototypes in Metric-Guided Prototype Learning ( MGP ) in conjunction with a loss function that clusters prototypes to minimize user-defined costs . Our model similarly uses trainable prototypes for classification , but differs from prior art in two respects . First , we modify the standard PCN architecture to support other changes , without degrading classification performance . Second , like HPNs ( but not PCNs or MGP ) , CSNs leverage multiple sets of prototypes to enable hierarchical classification but also allow for non-hierarchical concept relationships . 2.2 FAIR AND HIERARCHICAL CLASSIFICATION . AI fairness research considers how to mitigate undesirable patterns or biases in machine learning models . Consider the problem of predicting a person ’ s credit risk : non-causal correlations between age and risk may lead AI models to inappropriately penalize people according to their age ( Zemel et al. , 2016 ) . The problem of fair classification is often framed as follows : given inputs , x , which are informative of a protected field , s , and outcome , y , predict y from x without being influenced by s ( Zemel et al. , 2013 ) . Merely removing s from x ( e.g. 
, not including age as an input to a credit predictor ) rarely removes all information about s , so researchers have developed a variety of techniques to create representations that “ purge ” information about s ( Zemel et al. , 2016 ; Xie et al. , 2017 ; Jiang et al. , 2020 ) . Hierarchical classification solves a different problem : given a hierarchical taxonomy of classes ( e.g. , birds vs. dogs at a high level and sparrows vs. toucans at a low level ) , output the correct label at each classification level . Neural nets using convolutional and recurrent layers in specialized designs have achieved remarkable success in hierarchical image classification ( Zhu & Bain , 2017 ; Guo et al. , 2018 ) . The hierarchical prototype network ( HPN ) uses prototypes and a training routine based upon conditional subsets of training data to create hierarchically-organized prototypes ( Hase et al. , 2019 ) . Garnot & Landrieu ( 2020 ) also use prototypes for hierarchical classification in Metric-Guided Prototype Learning ( MGP ) by adjusting the training loss to guide prototype arrangement . Neither HPN nor MGP explicitly models relationships between multiple subsets of prototypes . Lastly , recent works propose hyperbolic latent spaces as a natural way to model hierarchical data ( Dai et al. , 2021 ; Mathieu et al. , 2019 ; Nickel & Kiela , 2017 ; Liu et al. , 2020 ) . Our method , conversely , relies upon concepts from Euclidean geometry . Extending the principle of subspace alignment that we develop to non-Euclidean geometric spaces is a promising direction but is beyond the scope of this work . 3 TECHNICAL APPROACH . In this section , we outline the design of the CSN , which was inspired by desires for both interpretable representations and explicit concept relationships . First , we wished for interpretable representations , so we built upon the PCN design , with modifications . 
Second , we explicitly encoded relationships between concepts by introducing multiple sets of prototypes , instead of the single set in PCNs . Third , we enabled guidance of the concept relationships by modifying the CSN training loss . Together , these changes supported not only interpretable classification , but also provided a flexible framework for a single model architecture to learn different concept relationships . 3.1 CONCEPT SUBSPACE CLASSIFICATION . A CSN performing a single classification task ( e.g. , identifying a digit in an image ) is defined by three sets of trainable weights . First , an encoder parametrized by weights θ , e_θ , maps from inputs of dimension X to encodings of dimension Z : e_θ : R^X → R^Z . Second , a decoder parametrized by weights φ , d_φ , maps from encodings to reconstructed inputs : d_φ : R^Z → R^X . Third , there exists a set of k trainable prototype weights , p , that are each Z-dimensional vectors : p_1 , p_2 , ... , p_k ∈ R^Z . This architecture resembles that of the PCN , but without the additional linear classification layer ( Li et al. , 2018 ) . Here , we focus briefly on the set of prototypes , p. Given a set of k prototypes in R^Z , we define a “ concept subspace , ” C , as follows : v_i = p_i − p_1 , ∀ i ∈ [ 2 , k ] ( 1 ) C = { x ∈ R^Z | x = p_1 + ∑_{ i ∈ [ 2 , k ] } λ_i v_i for λ_i ∈ R ∀ i } ( 2 ) C is the linear subspace in R^Z defined by starting at the first prototype and adding linear scalings of the vector differences to all other prototypes . We call this subspace a concept subspace because it represents a space of encodings between prototypes defining a single concept ( e.g. , prototypes for digits 0 , 1 , 2 , etc . define a concept subspace for digit classification ) . 
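Equations 1 and 2 define C as the affine span of the prototypes, so projecting an encoding onto C is an ordinary least-squares problem. A small numpy sketch of that projection (our own illustration; names are hypothetical):

```python
import numpy as np

def project_to_concept_subspace(z, prototypes):
    """Orthogonally project z onto C = { p_1 + sum_i lambda_i * v_i },
    where v_i = p_i - p_1 (Eqs. 1 and 2)."""
    p1 = prototypes[0]
    V = (prototypes[1:] - p1).T                       # columns v_2, ..., v_k
    lam, *_ = np.linalg.lstsq(V, z - p1, rcond=None)  # least squares = orthogonal projection
    return p1 + V @ lam

# three prototypes spanning the plane z_3 = 0
protos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
z_proj = project_to_concept_subspace(np.array([0.3, 0.4, 2.0]), protos)
# the component orthogonal to the plane is removed
```

Least squares handles the general case where the v_i are not orthonormal; in practice one could also cache an orthonormal basis of the v_i.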
A CSN ’ s architecture — consisting of an encoder , a decoder , and a set of prototypes with the associated concept subspace — enables two types of functionality : the encoder and decoder may be composed to reconstruct inputs via their latent representations , and CSNs may perform classification tasks by mapping an input , x , to one of Y discrete categories . Classification is performed by first encoding an input into a latent representation , z = e_θ ( x ) . The l2 distance from z to each prototype is then calculated , yielding k distance values : d_i ( z , p ) = ||z − p_i||_2^2 ; i ∈ [ 1 , k ] . These distances are mapped to a probability distribution , P_k ( i ) ; i ∈ [ 1 , k ] , by taking the softmax of their negatives . Lastly , if there are more prototypes than classes ( e.g. , two prototypes for dogs , two for cats , etc . ) , the distribution over the k prototypes is converted to a distribution over Y categories by summing the probabilities of prototypes belonging to the same class . For single-concept classification , CSNs differ from PCNs primarily by removing the linear layer that PCNs used to transform distances to prototypes into classifications . We found this layer unnecessary for high classification accuracy ( Appendix A ) and instead directly used negative distances . Without the linear layer , CSN classification is equivalent to projecting encodings , z , onto a concept subspace before calculating distances . The distances between the projected encoding , dubbed z_proj , and the prototypes induce the same softmax distribution as when the orthogonal component remains . Indeed , we find projection more intuitive ( only the component of z that corresponds to the subspace is used for classification ) and treat projection as a standard step in the remainder of this paper . A simple example of projecting an encoding and calculating distances to prototypes is shown in Figure 1 a . 
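The claim that projection leaves the softmax unchanged follows from the Pythagorean identity: the component of z orthogonal to the subspace adds the same constant to every squared distance, and a constant shift cancels in a softmax. A quick numerical check (our own sketch, not from the paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sq_dists(z, protos):
    return np.sum((protos - z) ** 2, axis=1)

# prototypes lie in the plane z_3 = 0, so projection just drops coordinate 3
protos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
z = np.array([0.3, 0.4, 2.0])
z_proj = np.array([0.3, 0.4, 0.0])

p_full = softmax(-sq_dists(z, protos))
p_proj = softmax(-sq_dists(z_proj, protos))
# ||z - p_i||^2 = ||z_proj - p_i||^2 + ||z - z_proj||^2 for every i,
# so the shared constant ||z - z_proj||^2 cancels and both distributions match
```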
For some tasks , we used an encoder design from variational autoencoders ( VAEs ) in order to regularize the distribution of encodings to conform to unit Gaussians ( Kingma & Welling , 2014 ) . By default , the weight of this regularization loss was set to 0 , but it proved useful in some domains to prevent overfitting ( as detailed in the experiments later ) . We emphasize that CSNs are discriminative , rather than generative , models , so we did not seek to learn a latent space from which to sample . | The present paper proposes a novel architecture for prototype-based classification to support class hierarchies and fairness. In particular, hierarchies are supported by training the model for multiple classification problems jointly, each in its own subspace of the feature space, spanned by the respective prototypes. For fairness, the paper proposes to make the subspace for the classification between subgroups orthogonal to all other subspaces, such that any change in subgroup membership does not influence any other classification. In a series of experiments, the paper evaluates hierarchical classification and fairness separately as well as jointly and demonstrates equal or superior results to a state-of-the-art approach from the literature. | SP:3711c6d28737e8773b24bb41074add91a3cc1383 
8-bit Optimizers via Block-wise Quantization | Increasing model size is an effective way to achieve better performance for given resources ( Kaplan et al. , 2020 ; Henighan et al. , 2020 ; Raffel et al. , 2019 ; Lewis et al. , 2021 ) . However , training such large models requires storing the model , gradient , and state of the optimizer ( e.g. , exponentially smoothed sum and squared sum of previous gradients for Adam ) , all in a fixed amount of available memory . Although significant research has focused on enabling larger model training by reducing or efficiently distributing the memory required for the model parameters ( Shoeybi et al. , 2019 ; Lepikhin et al. , 2020 ; Fedus et al. , 2021 ; Brown et al. , 2020 ; Rajbhandari et al. , 2020 ) , reducing the memory footprint of optimizer gradient statistics is much less studied . This is a significant missed opportunity since these optimizer states use 33-75 % of the total memory footprint during training . For example , the Adam optimizer states for the largest GPT-2 ( Radford et al. , 2019 ) and T5 ( Raffel et al. , 2019 ) models are 11 GB and 41 GB in size . In this paper , we develop a fast , high-precision non-linear quantization method – block-wise dynamic quantization – that enables stable 8-bit optimizers ( e.g. , Adam , AdamW , and Momentum ) which maintain 32-bit performance at a fraction of the memory footprint and without any changes to the original hyperparameters.1 While most current work uses 32-bit optimizer states , recent high-profile efforts to use 16-bit optimizers report difficulty for large models with more than 1B parameters ( Ramesh et al. , 2021 ) . Going from 16-bit optimizers to 8-bit optimizers reduces the range of possible values from 2^16 = 65536 values to just 2^8 = 256 . To our knowledge , this has not been attempted before . Effectively using this very limited range is challenging for three reasons : quantization accuracy , computational efficiency , and large-scale stability . 
To maintain accuracy , it is critical to introduce some form of non-linear quantization to reduce errors for both common small magnitude values and rare large ones . However , to be practical , 8-bit optimizers need to be fast enough to not slow down training , which is especially difficult for non-linear methods that require more complex data structures to maintain the quantization buckets . ( Footnote 1 : We study 8-bit optimization with current best-practice model and gradient representations , typically 16-bit mixed precision , to isolate optimization challenges . Future work could explore further compressing all three . ) Finally , to maintain stability with huge models beyond 1B parameters , a quantization method needs to have not only a good mean error but also excellent worst-case performance , since a single large quantization error can cause the entire training run to diverge . We introduce a new block-wise quantization approach that addresses all three of these challenges . Block-wise quantization splits input tensors into blocks and performs quantization on each block independently . This block-wise division reduces the effect of outliers on the quantization process since they are isolated to particular blocks , thereby improving stability and performance , especially for large-scale models . Block-wise processing also allows for high optimizer throughput since each normalization can be computed independently in each core . This contrasts with tensor-wide normalization , which requires slow cross-core synchronization that is highly dependent on task-core scheduling . We combine block-wise quantization with two novel methods for stable , high-performance 8-bit optimizers : dynamic quantization and a stable embedding layer . Dynamic quantization is an extension of dynamic tree quantization for unsigned input data . 
The stable embedding layer is a variation of a standard word embedding layer that supports more aggressive quantization by normalizing the highly non-uniform distribution of inputs to avoid extreme gradient variation . Our 8-bit optimizers maintain 32-bit performance at a fraction of the original memory footprint . We show this for a broad range of tasks : 1.5B and 355M parameter language modeling , GLUE finetuning , ImageNet classification , WMT ’ 14+WMT ’ 16 machine translation , MoCo v2 contrastive image pretraining+finetuning , and RoBERTa pretraining . We also report additional ablations and sensitivity analysis showing that all components – block-wise quantization , dynamic quantization , and stable embedding layer – are crucial for these results and that 8-bit Adam can be used as a simple drop-in replacement for 32-bit Adam , with no hyperparameter changes . We open-source our custom CUDA kernels and provide a PyTorch implementation that enables 8-bit optimization by changing two lines of code . 1 BACKGROUND . 1.1 STATEFUL OPTIMIZERS . An optimizer updates the parameters w of a neural network by using the gradient of the loss with respect to the weights , g_t = ∂L/∂w , at update iteration t. Stateful optimizers compute statistics of the gradient with respect to each parameter over time for accelerated optimization . Two of the most commonly used stateful optimizers are Adam ( Kingma and Ba , 2014 ) and SGD with momentum ( Qian , 1999 ) – or Momentum for short . Without damping and scaling constants , the update rules of these optimizers are given by : Momentum ( g_t , w_{t−1} , m_{t−1} ) : m_0 = g_0 ( initialization ) ; m_t = β_1 m_{t−1} + g_t ( state 1 update ) ; w_t = w_{t−1} − α · m_t ( weight update ) ( 1 ) Adam ( g_t , w_{t−1} , m_{t−1} , r_{t−1} ) : r_0 = m_0 = 0 ( initialization ) ; m_t = β_1 m_{t−1} + ( 1 − β_1 ) g_t ( state 1 update ) ; r_t = β_2 r_{t−1} + ( 1 − β_2 ) g_t^2 ( state 2 update ) ; w_t = w_{t−1} − α · m_t / ( √r_t + ε ) ( weight update ) ( 2 ) where β_1 and β_2 are smoothing constants , ε is a small constant , and α is the learning rate . 
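The update rules in Eqs. 1 and 2 translate directly into code. Below is a minimal numpy rendering (our own sketch, which, like the equations above, omits damping and bias-correction terms):

```python
import numpy as np

def momentum_step(g, w, m, alpha=0.01, beta1=0.9):
    """One Momentum update (Eq. 1)."""
    m = beta1 * m + g                 # state 1 update
    w = w - alpha * m                 # weight update
    return w, m

def adam_step(g, w, m, r, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Eq. 2), without bias correction."""
    m = beta1 * m + (1 - beta1) * g           # state 1 update
    r = beta2 * r + (1 - beta2) * g ** 2      # state 2 update
    w = w - alpha * m / (np.sqrt(r) + eps)    # weight update
    return w, m, r

# sanity check: minimize f(w) = ||w||^2 for a few steps
w = np.array([1.0, -2.0]); m = np.zeros(2); r = np.zeros(2)
for _ in range(100):
    g = 2 * w                                 # gradient of ||w||^2
    w, m, r = adam_step(g, w, m, r, alpha=0.05)
```

The states m and r are exactly the per-parameter tensors whose 32-bit storage the paper replaces with 8-bit quantized versions.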
For 32-bit states , Momentum and Adam consume 4 and 8 bytes per parameter . That is 4 GB and 8 GB for a 1B parameter model . Our 8-bit non-linear quantization reduces these costs to 1 GB and 2 GB . 1.2 NON-LINEAR QUANTIZATION . Quantization compresses numeric representations to save space at the cost of precision . Quantization is the mapping of a k-bit integer to a real element in D , that is , Qmap : [ 0 , 2^k − 1 ] ↦ D. For example , the IEEE 32-bit floating point data type maps the indices 0 ... 2^32 − 1 to the domain [ -3.4e38 , +3.4e38 ] . We use the following notation : Qmap ( i ) = Qmap_i = q_i ; for example , Qmap ( 2^31 + 131072 ) = 2.03125 for the IEEE 32-bit floating point data type . To perform general quantization from one data type into another we require three steps . ( 1 ) Compute a normalization constant N that transforms the input tensor T into the range of the domain D of the target quantization data type Qmap , ( 2 ) for each element of T/N find the closest corresponding value q_i in the domain D , ( 3 ) store the index i corresponding to q_i in the quantized output tensor T^Q . To recover the dequantized tensor T^D we look up the index and denormalize : T^D_i = Qmap ( T^Q_i ) · N . To perform this procedure for dynamic quantization we first normalize into the range [ -1 , 1 ] through division by the absolute maximum value : N = max ( |T| ) . Then we find the closest values via a binary search : T^Q_i = argmin_{ j ∈ [ 0 , 2^n ] } | Qmap_j − T_i / N | ( 3 ) 1.3 DYNAMIC TREE QUANTIZATION . Figure 2 : Dynamic tree quantization . Dynamic tree quantization ( Dettmers , 2016 ) is a method that yields low quantization error for both small and large magnitude values . Unlike data types with fixed exponent and fraction , dynamic tree quantization uses a data type with a dynamic exponent and fraction that can change with each number . It is made up of four parts , as seen in Figure 2 : ( 1 ) The first bit of the data type is reserved for a sign . 
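The three quantization steps and the dequantization lookup can be sketched as follows. We use a simple linear codebook as a stand-in (the paper's actual codebook is the dynamic tree data type), and a brute-force argmin in place of the binary search of Eq. 3; the point here is only the normalize / nearest-value / store-index mechanics:

```python
import numpy as np

Qmap = np.linspace(-1.0, 1.0, 256)   # stand-in 8-bit codebook, not dynamic tree

def quantize(T):
    N = np.max(np.abs(T))                            # (1) normalization constant
    idx = np.argmin(np.abs(Qmap[None, :] - (T / N)[:, None]), axis=1)  # (2) nearest q_i
    return idx.astype(np.uint8), N                   # (3) store only indices + N

def dequantize(TQ, N):
    return Qmap[TQ] * N                              # look up index and denormalize

rng = np.random.default_rng(0)
T = rng.standard_normal(1000)
TQ, N = quantize(T)
TD = dequantize(TQ, N)
# round-trip error is bounded by N times half the codebook spacing
max_err = np.abs(T - TD).max()
```

Storage drops from 4 bytes to 1 byte per element, plus one normalization constant per tensor.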
( 2 ) The number of subsequent zero bits indicates the magnitude of the exponent . ( 3 ) The first bit that is set to one indicates that all following values are reserved for ( 4 ) linear quantization . By moving the indicator bit , numbers can represent values as small as 10^−7 or with precision as high as 1/63 . Compared to linear quantization , dynamic tree quantization has better absolute and relative quantization errors for non-uniform distributions . Dynamic tree quantization is strictly defined to quantize numbers in the range [ -1.0 , 1.0 ] , which is ensured by performing tensor-level absolute max normalization . 2 8-BIT OPTIMIZERS . Our 8-bit optimizers have three components : ( 1 ) block-wise quantization that isolates outliers and distributes the error more equally over all bits ; ( 2 ) dynamic quantization , which quantizes both small and large values with high precision ; and ( 3 ) a stable embedding layer to improve stability during optimization for models with word embeddings . With these components , performing an optimizer update with 8-bit states is straightforward . We dequantize the 8-bit optimizer states to 32-bit , perform the update , and then quantize the states back to 8-bit for storage . We do this 8-bit to 32-bit conversion element-by-element in registers , which means no slow copies to GPU memory or additional temporary memory are needed to perform quantization and dequantization . For GPUs , this makes 8-bit optimizers faster than regular 32-bit optimizers , as we show in Section 3 . 2.1 BLOCK-WISE QUANTIZATION . Our block-wise quantization reduces the cost of computing normalization and improves quantization precision by isolating outliers . In order to dynamically quantize a tensor , as defined in Section 1.2 , we need to normalize the tensor into the range [ -1 , 1 ] . Such normalization requires a reduction over the entire tensor , which entails multiple synchronizations across GPU cores . 
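The four-part layout can be illustrated with a toy decoder. This is our own reading of the description above (sign bit, a run of zeros setting a base-10 exponent, an indicator bit, then linear bits); the exact scaling and edge cases in Dettmers (2016) may differ, so treat this as a sketch of the data structure, not a faithful reimplementation:

```python
def decode_dynamic_tree(byte):
    """Toy decoder for an 8-bit dynamic-tree-style value (illustrative only)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0   # (1) sign bit
    rest = byte & 0x7F
    if rest == 0:
        return 0.0                            # all zeros: we treat this as zero
    zeros, bit = 0, 6
    while not (rest >> bit) & 1:              # (2) zeros set exponent magnitude
        zeros += 1
        bit -= 1
    frac_bits = bit                           # (3) indicator bit found; (4) rest is linear
    if frac_bits == 0:
        linear = 1.0
    else:
        linear = (rest & ((1 << frac_bits) - 1)) / ((1 << frac_bits) - 1)
    return sign * linear * 10.0 ** (-zeros)
```

With no leading zeros, 6 linear bits remain, giving the 1/63 precision mentioned above; with all bits spent on zeros, only tiny magnitudes are representable.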
Block-wise dynamic quantization reduces this cost by chunking an input tensor into small blocks of size B = 2048 and performing normalization independently in each core across this block . More formally , using the notation introduced in Section 1.2 , in block-wise quantization we treat T as a one-dimensional sequence of elements that we chunk in blocks of size B . This means for an input tensor T with n elements we have n/B blocks . We proceed to compute a normalization constant for each block : N_b = max ( |T_b| ) , where b is the index of the block , 0 ≤ b < n/B . With this block-wise normalization constant , each block can be quantized independently : T^Q_{bi} = argmin_{ j ∈ [ 0 , 2^n ] } | Qmap_j − T_{bi} / N_b | for 0 < i < B ( 4 ) This approach has several advantages , both for stability and efficiency . First , each block normalization can be computed independently . Thus no synchronization between cores is required , and throughput is enhanced . Second , it is much more robust to outliers in the input tensor . For example , to contrast block-wise and regular quantization , if we create an input tensor with one million elements sampled from the standard normal distribution , we expect less than 1 % of elements of the tensor to be in the range [ 3 , +∞ ) . However , since we normalize the input tensor into the range [ -1 , 1 ] , the maximum values of the distribution determine the range of the quantization buckets . This means that if the input tensor contains an outlier with magnitude 5 , the quantization buckets reserved for numbers between 3 and 5 will mostly go unused , since less than 1 % of numbers are in this range . With block-wise quantization , the effect of outliers is limited to a single block . As such , most bits are used effectively in other blocks . Furthermore , because outliers represent the absolute maximum value in the input tensor , block-wise quantization approximates outlier values without any error . 
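Equation 4 amounts to running the absmax-normalize-then-quantize procedure once per block of size B. A numpy sketch (brute-force nearest lookup, linear stand-in codebook, names ours):

```python
import numpy as np

Qmap = np.linspace(-1.0, 1.0, 256)   # stand-in codebook

def blockwise_quantize(T, B=2048):
    """Chunk T into blocks of size B; each block gets its own constant
    N_b = max(|T_b|) and is quantized independently (Eq. 4)."""
    flat = T.ravel()
    idx = np.empty(flat.size, dtype=np.uint8)
    norms = []
    for start in range(0, flat.size, B):
        block = flat[start:start + B]
        Nb = np.max(np.abs(block))
        norms.append(Nb)
        scaled = block / Nb if Nb > 0 else block
        idx[start:start + B] = np.argmin(
            np.abs(Qmap[None, :] - scaled[:, None]), axis=1)
    return idx, np.array(norms)

def blockwise_dequantize(idx, norms, B=2048):
    out = Qmap[idx].astype(np.float64)
    for b, Nb in enumerate(norms):
        out[b * B:(b + 1) * B] *= Nb
    return out

rng = np.random.default_rng(1)
T = rng.standard_normal(4096)
idx, norms = blockwise_quantize(T)     # two blocks, two normalization constants
TD = blockwise_dequantize(idx, norms)
```

The per-block loop is what the GPU kernel parallelizes: each core handles its own block and never synchronizes with the others.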
This guarantees that the largest optimizer states , arguably the most important , will always be quantized with full precision . This property makes block-wise dynamic quantization both robust and precise and is essential for good training performance in practice . | This paper addresses the very important problem of reducing the memory footprint of neural network training. For that matter, the authors propose replacing standard optimizers with their 8-bit quantized counterparts. The proposed scheme of optimizer state quantization has three components: (i) block-wise quantization, which isolates the impact of outliers on the error, (ii) dynamic quantization, which quantizes both small and large values with high precision, and (iii) a stable embedding layer, which improves the stability of the optimizer during training. | SP:c87e5e1f360981cceea63a1edead85ac96cbfb33 
8-bit Optimizers via Block-wise Quantization | Increasing model size is an effective way to achieve better performance for given resources ( Kaplan et al. , 2020 ; Henighan et al. , 2020 ; Raffel et al. , 2019 ; Lewis et al. , 2021 ) . However , training such large models requires storing the model , gradient , and state of the optimizer ( e.g. , exponentially smoothed sum and squared sum of previous gradients for Adam ) , all in a fixed amount of available memory . Although significant research has focused on enabling larger model training by reducing or efficiently distributing the memory required for the model parameters ( Shoeybi et al. , 2019 ; Lepikhin et al. , 2020 ; Fedus et al. , 2021 ; Brown et al. , 2020 ; Rajbhandari et al. , 2020 ) , reducing the memory footprint of optimizer gradient statistics is much less studied . This is a significant missed opportunity since these optimizer states use 33-75 % of the total memory footprint during training . For example , the Adam optimizer states for the largest GPT-2 ( Radford et al. , 2019 ) and T5 ( Raffel et al. , 2019 ) models are 11 GB and 41 GB in size . In this paper , we develop a fast , high-precision non-linear quantization method – block-wise dynamic quantization – that enables stable 8-bit optimizers ( e.g. , Adam , AdamW , and Momentum ) which maintain 32-bit performance at a fraction of the memory footprint and without any changes to the original hyperparameters.1 While most current work uses 32-bit optimizer states , recent high-profile efforts to use 16-bit optimizers report difficulty for large models with more than 1B parameters ( Ramesh et al. , 2021 ) . Going from 16-bit optimizers to 8-bit optimizers reduces the range of possible values from 2^16 = 65536 values to just 2^8 = 256 . To our knowledge , this has not been attempted before . Effectively using this very limited range is challenging for three reasons : quantization accuracy , computational efficiency , and large-scale stability . 
To maintain accuracy , it is critical to introduce some form of non-linear quantization to reduce errors for both common small magnitude values and rare large ones . However , to be practical , 8-bit optimizers need to be fast enough to not slow down training , which is especially difficult for non-linear methods that require more complex data structures to maintain the quantization buckets . ( Footnote 1 : We study 8-bit optimization with current best-practice model and gradient representations , typically 16-bit mixed precision , to isolate optimization challenges . Future work could explore further compressing all three . ) Finally , to maintain stability with huge models beyond 1B parameters , a quantization method needs to have not only a good mean error but also excellent worst-case performance , since a single large quantization error can cause the entire training run to diverge . We introduce a new block-wise quantization approach that addresses all three of these challenges . Block-wise quantization splits input tensors into blocks and performs quantization on each block independently . This block-wise division reduces the effect of outliers on the quantization process since they are isolated to particular blocks , thereby improving stability and performance , especially for large-scale models . Block-wise processing also allows for high optimizer throughput since each normalization can be computed independently in each core . This contrasts with tensor-wide normalization , which requires slow cross-core synchronization that is highly dependent on task-core scheduling . We combine block-wise quantization with two novel methods for stable , high-performance 8-bit optimizers : dynamic quantization and a stable embedding layer . Dynamic quantization is an extension of dynamic tree quantization for unsigned input data . 
The stable embedding layer is a variation of a standard word embedding layer that supports more aggressive quantization by normalizing the highly non-uniform distribution of inputs to avoid extreme gradient variation . Our 8-bit optimizers maintain 32-bit performance at a fraction of the original memory footprint . We show this for a broad range of tasks : 1.5B and 355M parameter language modeling , GLUE finetuning , ImageNet classification , WMT ’ 14+WMT ’ 16 machine translation , MoCo v2 contrastive image pretraining+finetuning , and RoBERTa pretraining . We also report additional ablations and sensitivity analysis showing that all components – block-wise quantization , dynamic quantization , and stable embedding layer – are crucial for these results and that 8-bit Adam can be used as a simple drop-in replacement for 32-bit Adam , with no hyperparameter changes . We open-source our custom CUDA kernels and provide a PyTorch implementation that enables 8-bit optimization by changing two lines of code . 1 BACKGROUND . 1.1 STATEFUL OPTIMIZERS . An optimizer updates the parameters w of a neural network by using the gradient of the loss with respect to the weights , g_t = ∂L/∂w , at update iteration t. Stateful optimizers compute statistics of the gradient with respect to each parameter over time for accelerated optimization . Two of the most commonly used stateful optimizers are Adam ( Kingma and Ba , 2014 ) and SGD with momentum ( Qian , 1999 ) – or Momentum for short . Without damping and scaling constants , the update rules of these optimizers are given by : Momentum ( g_t , w_{t−1} , m_{t−1} ) : m_0 = g_0 ( initialization ) ; m_t = β_1 m_{t−1} + g_t ( state 1 update ) ; w_t = w_{t−1} − α · m_t ( weight update ) ( 1 ) Adam ( g_t , w_{t−1} , m_{t−1} , r_{t−1} ) : r_0 = m_0 = 0 ( initialization ) ; m_t = β_1 m_{t−1} + ( 1 − β_1 ) g_t ( state 1 update ) ; r_t = β_2 r_{t−1} + ( 1 − β_2 ) g_t^2 ( state 2 update ) ; w_t = w_{t−1} − α · m_t / ( √r_t + ε ) ( weight update ) ( 2 ) where β_1 and β_2 are smoothing constants , ε is a small constant , and α is the learning rate . 
For 32-bit states , Momentum and Adam consume 4 and 8 bytes per parameter . That is 4 GB and 8 GB for a 1B parameter model . Our 8-bit non-linear quantization reduces these costs to 1 GB and 2 GB . 1.2 NON-LINEAR QUANTIZATION . Quantization compresses numeric representations to save space at the cost of precision . Quantization is the mapping of a k-bit integer to a real element in D , that is , Qmap : [ 0 , 2^k − 1 ] ↦ D. For example , the IEEE 32-bit floating point data type maps the indices 0 ... 2^32 − 1 to the domain [ -3.4e38 , +3.4e38 ] . We use the following notation : Qmap ( i ) = Qmap_i = q_i ; for example , Qmap ( 2^31 + 131072 ) = 2.03125 for the IEEE 32-bit floating point data type . To perform general quantization from one data type into another we require three steps . ( 1 ) Compute a normalization constant N that transforms the input tensor T into the range of the domain D of the target quantization data type Qmap , ( 2 ) for each element of T/N find the closest corresponding value q_i in the domain D , ( 3 ) store the index i corresponding to q_i in the quantized output tensor T^Q . To recover the dequantized tensor T^D we look up the index and denormalize : T^D_i = Qmap ( T^Q_i ) · N . To perform this procedure for dynamic quantization we first normalize into the range [ -1 , 1 ] through division by the absolute maximum value : N = max ( |T| ) . Then we find the closest values via a binary search : T^Q_i = argmin_{ j ∈ [ 0 , 2^n ] } | Qmap_j − T_i / N | ( 3 ) 1.3 DYNAMIC TREE QUANTIZATION . Figure 2 : Dynamic tree quantization . Dynamic tree quantization ( Dettmers , 2016 ) is a method that yields low quantization error for both small and large magnitude values . Unlike data types with fixed exponent and fraction , dynamic tree quantization uses a data type with a dynamic exponent and fraction that can change with each number . It is made up of four parts , as seen in Figure 2 : ( 1 ) The first bit of the data type is reserved for a sign . 
(2) The number of subsequent zero bits indicates the magnitude of the exponent. (3) The first bit that is set to one indicates that all following values are reserved for (4) linear quantization. By moving the indicator bit, numbers can have a large exponent (down to 10^−7) or precision as high as 1/63. Compared to linear quantization, dynamic tree quantization has better absolute and relative quantization errors for non-uniform distributions. Dynamic tree quantization is strictly defined to quantize numbers in the range [−1.0, 1.0], which is ensured by performing tensor-level absolute max normalization. 2 8-BIT OPTIMIZERS. Our 8-bit optimizers have three components: (1) block-wise quantization, which isolates outliers and distributes the error more equally over all bits; (2) dynamic quantization, which quantizes both small and large values with high precision; and (3) a stable embedding layer to improve stability during optimization for models with word embeddings. With these components, performing an optimizer update with 8-bit states is straightforward. We dequantize the 8-bit optimizer states to 32-bit, perform the update, and then quantize the states back to 8-bit for storage. We do this 8-bit to 32-bit conversion element-by-element in registers, which means no slow copies to GPU memory or additional temporary memory are needed to perform quantization and dequantization. For GPUs, this makes 8-bit optimizers faster than regular 32-bit optimizers, as we show in Section 3. 2.1 BLOCK-WISE QUANTIZATION. Our block-wise quantization reduces the cost of computing normalization and improves quantization precision by isolating outliers. In order to dynamically quantize a tensor, as defined in Section 1.2, we need to normalize the tensor into the range [−1, 1]. Such normalization requires a reduction over the entire tensor, which entails multiple synchronizations across GPU cores.
Block-wise dynamic quantization reduces this cost by chunking an input tensor into small blocks of size B = 2048 and performing normalization independently in each core across this block. More formally, using the notation introduced in Section 1.2, in block-wise quantization we treat T as a one-dimensional sequence of elements that we chunk in blocks of size B. This means for an input tensor T with n elements we have n/B blocks. We proceed to compute a normalization constant for each block: N_b = max(|T_b|), where b is the index of the block, 0 ≤ b < n/B. With this block-wise normalization constant, each block can be quantized independently:

T^Q_{bi} = argmin_{j=0}^{2^n} |Q^map_j − T_{bi} / N_b|,    0 < i < B    (4)

This approach has several advantages, both for stability and efficiency. First, each block normalization can be computed independently. Thus no synchronization between cores is required, and throughput is enhanced. Second, it is also much more robust to outliers in the input tensor. For example, to contrast block-wise and regular quantization: if we create an input tensor with one million elements sampled from the standard normal distribution, we expect less than 1% of the elements of the tensor to be in the range [3, +∞). However, since we normalize the input tensor into the range [−1, 1], the maximum values of the distribution determine the range of the quantization buckets. This means that if the input tensor contains an outlier with magnitude 5, the quantization buckets reserved for numbers between 3 and 5 will mostly go unused, since less than 1% of numbers are in this range. With block-wise quantization, the effect of outliers is limited to a single block. As such, most bits are used effectively in other blocks. Furthermore, because outliers represent the absolute maximum value in the input tensor, block-wise quantization approximates outlier values without any error.
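The outlier argument above can be checked numerically. The following sketch is our own: it uses a linear 8-bit codebook as a stand-in for dynamic tree quantization and compares the mean dequantization error of tensor-wide versus block-wise absmax normalization on a Gaussian tensor containing a single outlier of magnitude 5.

```python
import numpy as np

Q_MAP = np.linspace(-1.0, 1.0, 256)  # stand-in linear codebook (not dynamic tree)

def absmax_quantize_dequantize(T):
    """Absmax-normalize, snap to nearest codebook value, denormalize."""
    N = np.max(np.abs(T))
    idx = np.abs(T[:, None] / N - Q_MAP[None, :]).argmin(axis=1)
    return Q_MAP[idx] * N

rng = np.random.default_rng(0)
T = rng.standard_normal(4096)
T[123] = 5.0  # a single outlier stretches the normalization range

# Tensor-wide: one normalization constant for all 4096 elements.
err_tensor = np.abs(absmax_quantize_dequantize(T) - T).mean()

# Block-wise (B = 2048): the outlier only degrades its own block.
B = 2048
blocks = [absmax_quantize_dequantize(T[i:i + B]) for i in range(0, len(T), B)]
err_block = np.abs(np.concatenate(blocks) - T).mean()

assert err_block < err_tensor  # outlier isolation reduces mean error
```

With tensor-wide normalization, every bucket spans a range proportional to the outlier's magnitude; block-wise, only one of the two blocks pays that price.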
This guarantees that the largest optimizer states, arguably the most important, will always be quantized with full precision. This property makes block-wise dynamic quantization both robust and precise and is essential for good training performance in practice. | The paper shows a working implementation of 8-bit states for momentum and the second moment. This is achieved by using block-wise dynamic quantization for efficient compression and a fast implementation. They show drastic improvement over f32 diagonal optimizers (mainly Adam) and improvement over a subset of previously proposed sub-linear memory optimizers (AdaFactor f32 variant). | SP:c87e5e1f360981cceea63a1edead85ac96cbfb33 |
8-bit Optimizers via Block-wise Quantization | Increasing model size is an effective way to achieve better performance for given resources (Kaplan et al., 2020; Henighan et al., 2020; Raffel et al., 2019; Lewis et al., 2021). However, training such large models requires storing the model, gradient, and state of the optimizer (e.g., exponentially smoothed sum and squared sum of previous gradients for Adam), all in a fixed amount of available memory. Although significant research has focused on enabling larger model training by reducing or efficiently distributing the memory required for the model parameters (Shoeybi et al., 2019; Lepikhin et al., 2020; Fedus et al., 2021; Brown et al., 2020; Rajbhandari et al., 2020), reducing the memory footprint of optimizer gradient statistics is much less studied. This is a significant missed opportunity, since these optimizer states use 33–75% of the total memory footprint during training. For example, the Adam optimizer states for the largest GPT-2 (Radford et al., 2019) and T5 (Raffel et al., 2019) models are 11 GB and 41 GB in size. In this paper, we develop a fast, high-precision non-linear quantization method – block-wise dynamic quantization – that enables stable 8-bit optimizers (e.g., Adam, AdamW, and Momentum) which maintain 32-bit performance at a fraction of the memory footprint and without any changes to the original hyperparameters.1 While most current work uses 32-bit optimizer states, recent high-profile efforts to use 16-bit optimizers report difficulty for large models with more than 1B parameters (Ramesh et al., 2021). Going from 16-bit optimizers to 8-bit optimizers reduces the range of possible values from 2^16 = 65536 values to just 2^8 = 256. To our knowledge, this has not been attempted before. Effectively using this very limited range is challenging for three reasons: quantization accuracy, computational efficiency, and large-scale stability.
To maintain accuracy, it is critical to introduce some form of non-linear quantization to reduce errors for both common small-magnitude values and rare large ones. However, to be practical, 8-bit optimizers need to be fast enough to not slow down training, which is especially difficult for non-linear methods that require more complex data structures to maintain the quantization buckets. Finally, to maintain stability with huge models beyond 1B parameters, a quantization method needs to have not only a good mean error but excellent worst-case performance, since a single large quantization error can cause the entire training run to diverge. (Footnote 1: We study 8-bit optimization with current best-practice model and gradient representations (typically 16-bit mixed precision) to isolate optimization challenges. Future work could explore further compressing all three.) We introduce a new block-wise quantization approach that addresses all three of these challenges. Block-wise quantization splits input tensors into blocks and performs quantization on each block independently. This block-wise division reduces the effect of outliers on the quantization process, since they are isolated to particular blocks, thereby improving stability and performance, especially for large-scale models. Block-wise processing also allows for high optimizer throughput, since each normalization can be computed independently in each core. This contrasts with tensor-wide normalization, which requires slow cross-core synchronization that is highly dependent on task-core scheduling. We combine block-wise quantization with two novel methods for stable, high-performance 8-bit optimizers: dynamic quantization and a stable embedding layer. Dynamic quantization is an extension of dynamic tree quantization for unsigned input data.
The stable embedding layer is a variation of a standard word embedding layer that supports more aggressive quantization by normalizing the highly non-uniform distribution of inputs to avoid extreme gradient variation. Our 8-bit optimizers maintain 32-bit performance at a fraction of the original memory footprint. We show this for a broad range of tasks: 1.5B and 355M parameter language modeling, GLUE finetuning, ImageNet classification, WMT'14+WMT'16 machine translation, MoCo v2 contrastive image pretraining+finetuning, and RoBERTa pretraining. We also report additional ablations and sensitivity analysis showing that all components – block-wise quantization, dynamic quantization, and the stable embedding layer – are crucial for these results, and that 8-bit Adam can be used as a simple drop-in replacement for 32-bit Adam, with no hyperparameter changes. We open-source our custom CUDA kernels and provide a PyTorch implementation that enables 8-bit optimization by changing two lines of code. 1 BACKGROUND. 1.1 STATEFUL OPTIMIZERS. An optimizer updates the parameters w of a neural network by using the gradient of the loss with respect to the weights, g_t = ∂L/∂w, at update iteration t. Stateful optimizers compute statistics of the gradient with respect to each parameter over time for accelerated optimization. Two of the most commonly used stateful optimizers are Adam (Kingma and Ba, 2014) and SGD with momentum (Qian, 1999) – or Momentum for short. Without damping and scaling constants, the update rules of these optimizers are given by:

Momentum(g_t, w_{t−1}, m_{t−1}):
  m_0 = g_0                              (initialization)
  m_t = β_1 m_{t−1} + g_t                (state 1 update)
  w_t = w_{t−1} − α · m_t                (weight update)    (1)

Adam(g_t, w_{t−1}, m_{t−1}, r_{t−1}):
  r_0 = m_0 = 0                          (initialization)
  m_t = β_1 m_{t−1} + (1 − β_1) g_t      (state 1 update)
  r_t = β_2 r_{t−1} + (1 − β_2) g_t²     (state 2 update)
  w_t = w_{t−1} − α · m_t / (√r_t + ε)   (weight update)    (2)

where β_1 and β_2 are smoothing constants, ε is a small constant, and α is the learning rate.
For 32-bit states, Momentum and Adam consume 4 and 8 bytes per parameter, respectively. That is 4 GB and 8 GB for a 1B parameter model. Our 8-bit non-linear quantization reduces these costs to 1 GB and 2 GB. 1.2 NON-LINEAR QUANTIZATION. Quantization compresses numeric representations to save space at the cost of precision. Quantization is the mapping of a k-bit integer to a real element in D, that is, Q^map : [0, 2^k − 1] ↦ D. For example, the IEEE 32-bit floating point data type maps the indices 0…2^32 − 1 to the domain [−3.4e38, +3.4e38]. We use the following notation: Q^map(i) = Q^map_i = q_i; for example, Q^map(2^31 + 131072) = 2.03125 for the IEEE 32-bit floating point data type. To perform general quantization from one data type into another we require three steps. (1) Compute a normalization constant N that transforms the input tensor T into the range of the domain D of the target quantization data type Q^map; (2) for each element of T/N, find the closest corresponding value q_i in the domain D; (3) store the index i corresponding to q_i in the quantized output tensor T^Q. To recover the dequantized tensor T^D we look up the index and denormalize: T^D_i = Q^map(T^Q_i) · N. To perform this procedure for dynamic quantization we first normalize into the range [−1, 1] through division by the absolute maximum value: N = max(|T|). Then we find the closest values via a binary search:

T^Q_i = argmin_{j=0}^{2^n} |Q^map_j − T_i / N|    (3)

1.3 DYNAMIC TREE QUANTIZATION. Figure 2: Dynamic tree quantization. Dynamic tree quantization (Dettmers, 2016) is a method that yields low quantization error for both small and large magnitude values. Unlike data types with fixed exponent and fraction, dynamic tree quantization uses a data type with a dynamic exponent and fraction that can change with each number. It is made up of four parts, as seen in Figure 2: (1) The first bit of the data type is reserved for a sign.
(2) The number of subsequent zero bits indicates the magnitude of the exponent. (3) The first bit that is set to one indicates that all following values are reserved for (4) linear quantization. By moving the indicator bit, numbers can have a large exponent (down to 10^−7) or precision as high as 1/63. Compared to linear quantization, dynamic tree quantization has better absolute and relative quantization errors for non-uniform distributions. Dynamic tree quantization is strictly defined to quantize numbers in the range [−1.0, 1.0], which is ensured by performing tensor-level absolute max normalization. 2 8-BIT OPTIMIZERS. Our 8-bit optimizers have three components: (1) block-wise quantization, which isolates outliers and distributes the error more equally over all bits; (2) dynamic quantization, which quantizes both small and large values with high precision; and (3) a stable embedding layer to improve stability during optimization for models with word embeddings. With these components, performing an optimizer update with 8-bit states is straightforward. We dequantize the 8-bit optimizer states to 32-bit, perform the update, and then quantize the states back to 8-bit for storage. We do this 8-bit to 32-bit conversion element-by-element in registers, which means no slow copies to GPU memory or additional temporary memory are needed to perform quantization and dequantization. For GPUs, this makes 8-bit optimizers faster than regular 32-bit optimizers, as we show in Section 3. 2.1 BLOCK-WISE QUANTIZATION. Our block-wise quantization reduces the cost of computing normalization and improves quantization precision by isolating outliers. In order to dynamically quantize a tensor, as defined in Section 1.2, we need to normalize the tensor into the range [−1, 1]. Such normalization requires a reduction over the entire tensor, which entails multiple synchronizations across GPU cores.
Block-wise dynamic quantization reduces this cost by chunking an input tensor into small blocks of size B = 2048 and performing normalization independently in each core across this block. More formally, using the notation introduced in Section 1.2, in block-wise quantization we treat T as a one-dimensional sequence of elements that we chunk in blocks of size B. This means for an input tensor T with n elements we have n/B blocks. We proceed to compute a normalization constant for each block: N_b = max(|T_b|), where b is the index of the block, 0 ≤ b < n/B. With this block-wise normalization constant, each block can be quantized independently:

T^Q_{bi} = argmin_{j=0}^{2^n} |Q^map_j − T_{bi} / N_b|,    0 < i < B    (4)

This approach has several advantages, both for stability and efficiency. First, each block normalization can be computed independently. Thus no synchronization between cores is required, and throughput is enhanced. Second, it is also much more robust to outliers in the input tensor. For example, to contrast block-wise and regular quantization: if we create an input tensor with one million elements sampled from the standard normal distribution, we expect less than 1% of the elements of the tensor to be in the range [3, +∞). However, since we normalize the input tensor into the range [−1, 1], the maximum values of the distribution determine the range of the quantization buckets. This means that if the input tensor contains an outlier with magnitude 5, the quantization buckets reserved for numbers between 3 and 5 will mostly go unused, since less than 1% of numbers are in this range. With block-wise quantization, the effect of outliers is limited to a single block. As such, most bits are used effectively in other blocks. Furthermore, because outliers represent the absolute maximum value in the input tensor, block-wise quantization approximates outlier values without any error.
This guarantees that the largest optimizer states, arguably the most important, will always be quantized with full precision. This property makes block-wise dynamic quantization both robust and precise and is essential for good training performance in practice. | This paper proposes a non-linear block-wise quantization method to reduce the memory overhead of stateful optimizers, without sacrificing performance going from 32 bits to 8 bits. The authors combine block-wise quantization with two methods to stabilize training: dynamic tree quantization and a stable embedding layer. Results on WMT, GLUE and MoCo show the effectiveness of the method. | SP:c87e5e1f360981cceea63a1edead85ac96cbfb33 |
Contrastive Label Disambiguation for Partial Label Learning | 1 INTRODUCTION. The training of modern deep neural networks typically requires massive labeled data, which imposes formidable obstacles on data collection. Of particular challenge, real-world data annotation can naturally be subject to inherent label ambiguity and noise. For example, as shown in Figure 1, distinguishing an Alaskan Malamute from a Siberian Husky can be difficult for a human annotator. The issue of labeling ambiguity is prevalent yet often overlooked in many applications, such as web mining (Luo & Orabona, 2010) and automatic image annotation (Chen et al., 2018). This gives rise to the importance of partial label learning (PLL) (Hüllermeier & Beringer, 2006; Cour et al., 2011), where each training example is equipped with a set of candidate labels instead of the exact ground-truth label. This stands in contrast to its supervised counterpart, where one label must be chosen as the "gold". Arguably, the PLL problem is more common and practical in various situations due to its relatively lower annotation cost. Despite the promise, a core challenge in PLL is label disambiguation, i.e., identifying the ground-truth label from the candidate label set. Existing methods typically require a good feature representation (Liu & Dietterich, 2012; Zhang et al., 2016; Lyu et al., 2021) and operate under the assumption that data points closer in the feature space are more likely to share the same ground-truth label. However, the reliance on representations leads to a non-trivial dilemma: the inherent label uncertainty can undesirably manifest in the representation learning process, the quality of which may in turn prevent effective label disambiguation. To date, few efforts have been made to resolve this.
This paper bridges the gap by reconciling the intrinsic tension between the two highly dependent problems – representation learning and label disambiguation – in one coherent and synergistic framework. Our framework, Partial label learning with COntrastive label disambiguation (dubbed PiCO), produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Specifically, PiCO encapsulates two key components. First, we bring contrastive learning (CL) (Khosla et al., 2020) to partial label learning, which is unexplored in the previous PLL literature. To mitigate the key challenge of constructing positive pairs, we employ the classifier's output and generate pseudo positive pairs for contrastive comparison (Section 3.1). Second, based on the learned embeddings, we propose a novel prototype-based label disambiguation strategy (Section 3.2). Key to our method, we gradually update the pseudo target for classification based on the closest class prototype. By alternating the two steps above, PiCO converges to a solution with a highly distinguishable representation for accurate classification. Empirically, PiCO establishes state-of-the-art performance on three benchmark datasets, outperforming the baselines by a significant margin (Section 4), and obtains results that are competitive with fully supervised learning. Theoretically, we demonstrate that our contrastive representation learning and prototype-based label disambiguation are mutually beneficial, and can be rigorously interpreted from an Expectation-Maximization (EM) algorithm perspective (Section 5). First, the refined pseudo labeling improves contrastive learning by selecting pseudo positive examples accurately. This is analogous to the E-step, where we utilize the classifier's output to assign each data example to one label-specific cluster.
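A minimal sketch of the prototype-based disambiguation idea described above: maintain one prototype per class on the unit hypersphere, update it as a moving average of embeddings, and move each example's pseudo target toward a one-hot vector on the candidate label whose prototype is closest. The function names and moving-average constants here are our own illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def update_prototypes(protos, z, y_pred, gamma=0.99):
    """Moving-average update of the prototype for the predicted class."""
    protos[y_pred] = gamma * protos[y_pred] + (1 - gamma) * z
    protos[y_pred] /= np.linalg.norm(protos[y_pred])  # keep on the unit hypersphere
    return protos

def update_pseudo_target(s, z, protos, candidates, phi=0.9):
    """Move pseudo target s toward a one-hot vector on the candidate label
    whose prototype has the highest cosine similarity with embedding z."""
    sims = protos @ z
    best = max(candidates, key=lambda j: sims[j])   # nearest candidate prototype
    onehot = np.zeros_like(s)
    onehot[best] = 1.0
    return phi * s + (1 - phi) * onehot

C, d = 4, 8
rng = np.random.default_rng(1)
protos = rng.standard_normal((C, d))
protos /= np.linalg.norm(protos, axis=1, keepdims=True)
z = protos[2] + 0.01 * rng.standard_normal(d)       # embedding very near prototype 2
z /= np.linalg.norm(z)
protos = update_prototypes(protos, z, y_pred=2)
s = np.array([0.0, 0.5, 0.5, 0.0])                  # uniform over candidates {1, 2}
s = update_pseudo_target(s, z, protos, candidates=[1, 2])  # mass shifts toward label 2
```

Repeating this over training gradually sharpens each pseudo target toward the candidate label that is consistent with the embedding geometry.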
Second, better contrastive performance in turn improves the quality of representations and thus the effectiveness of label disambiguation. This can be reasoned from an M-step perspective, where the contrastive loss partially maximizes the likelihood by clustering similar data examples. Finally, the training data will be mapped to a mixture of von Mises-Fisher distributions on the unit hypersphere, which facilitates label disambiguation by using the component-specific label. Our main contributions are summarized as follows: 1 (Methodology). To the best of our knowledge, our paper pioneers the exploration of contrastive learning for partial label learning and proposes a novel framework termed PiCO. As an integral part of our algorithm, we also introduce a new prototype-based label disambiguation mechanism that leverages the contrastively learned embeddings. 2 (Experiments). Empirically, our proposed PiCO framework establishes state-of-the-art performance on three PLL tasks. Moreover, we make the first attempt to conduct experiments on fine-grained classification datasets, where we show classification performance improvement by up to 9.61% compared with the best baseline on the CUB-200 dataset. 3 (Theory). We theoretically interpret our framework from the expectation-maximization perspective. Our derivation is also generalizable to other CL methods and shows that the alignment property in CL (Wang & Isola, 2020) mathematically equals the M-step in center-based clustering algorithms. 2 BACKGROUND. The problem of partial label learning (PLL) is defined using the following setup. Let X be the input space, and Y = {1, 2, …, C} be the output label space. We consider a training dataset D = {(x_i, Y_i)}_{i=1}^n, where each tuple comprises an image x_i ∈ X and a candidate label set Y_i ⊂ Y.
Identical to the supervised learning setup, the goal of PLL is to obtain a functional mapping that predicts the one true label associated with the input. Yet, differently, the PLL setup bears significantly more uncertainty in the label space. A basic assumption of PLL is that the ground-truth label y_i is concealed in its candidate set, i.e., y_i ∈ Y_i, and is invisible to the learner. For this reason, the learning process can suffer from inherent ambiguity, compared with the supervised learning task with explicit ground truth. The key challenge of PLL is to identify the ground-truth label from the candidate label set. During training, we assign each image x_i a normalized vector s_i ∈ [0, 1]^C as the pseudo target, whose entries denote the probability of labels being the ground truth. The total probability mass of 1 is allocated among the candidate labels in Y_i. Note that s_i will be updated during the training procedure. Ideally, s_i should put more probability mass on the (unknown) ground-truth label y_i over the course of training. We train a classifier f : X → [0, 1]^C using cross-entropy loss, with s_i being the target prediction. The per-sample loss is given by:

L_cls(f; x_i, Y_i) = − ∑_{j=1}^{C} s_{i,j} log f^j(x_i),    s.t. ∑_{j∈Y_i} s_{i,j} = 1 and s_{i,j} = 0, ∀ j ∉ Y_i,    (1)

where j denotes the indices of labels and s_{i,j} denotes the j-th pseudo target of x_i. Here f is the softmax output of the network, and we denote f^j as its j-th entry. In the remainder of this paper, we omit the sample index i when the context is clear. We proceed by describing our proposed framework. 3 METHOD. In this section, we describe our novel Partial label learning with COntrastive label disambiguation (PiCO) framework in detail. In a nutshell, PiCO comprises two key components tackling the representation quality (Section 3.1) and the label ambiguity (Section 3.2), respectively. The two components systematically work as a whole and reciprocate each other.
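Eq. (1) can be written out directly. The following is our own NumPy sketch (softmax included for completeness); it also shows that as the pseudo target sharpens onto a candidate the classifier favors, the loss decreases.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def pll_loss(logits, s):
    """Cross-entropy against the pseudo target s (Eq. (1)).
    s is non-zero only on candidate labels and sums to 1."""
    f = softmax(logits)
    return -np.sum(s * np.log(f + 1e-12))

logits = np.array([2.0, 0.5, 1.0, -1.0])     # classifier outputs for C = 4 classes
s = np.array([0.0, 0.5, 0.5, 0.0])           # uniform pseudo target over candidates {1, 2}
loss_uniform = pll_loss(logits, s)

s_sharp = np.array([0.0, 0.0, 1.0, 0.0])     # disambiguated toward candidate label 2
loss_sharp = pll_loss(logits, s_sharp)       # lower, since the classifier prefers 2 over 1
```

Note the constraint in Eq. (1) is enforced here by construction: s places zero mass outside the candidate set.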
We further provide a rigorous theoretical interpretation of PiCO from an EM perspective in Section 5. 3.1 CONTRASTIVE REPRESENTATION LEARNING FOR PLL. The uncertainty in the label space poses a unique obstacle to learning effective representations. In PiCO, we couple the classification loss in Eq. (1) with a contrastive term that facilitates a clustering effect in the embedding space. While contrastive learning has been extensively studied in the recent literature, it remains untapped in the domain of PLL. The main challenge lies in the construction of the positive sample set. In conventional supervised CL frameworks, positive sample pairs can be easily drawn according to the ground-truth labels (Khosla et al., 2020). However, this is not straightforward in the setting of PLL. Training Objective. To begin with, we describe the standard contrastive loss term. We adopt the most popular setup by closely following MoCo (He et al., 2020) and SupCon (Khosla et al., 2020). Given each sample (x, Y), we generate two views – a query view and a key view – by way of randomized data augmentation Aug(x). The two images are then fed into a query network g(·) and a key network g′(·), yielding a pair of L2-normalized embeddings q = g(Aug_q(x)) and k = g′(Aug_k(x)). In implementation, the query network shares the same convolutional blocks as the classifier, followed by a prediction head (see Figure 2). Following MoCo, the key network uses a momentum update with the query network. We additionally maintain a queue storing the most recent key embeddings k, and we update the queue chronologically. To this end, we have the following contrastive embedding pool:

A = B_q ∪ B_k ∪ queue,    (2)

where B_q and B_k are the vectorial embeddings corresponding to the query and key views of the current mini-batch.
Given an example x, the per-sample contrastive loss is defined by contrasting its query embedding with the remainder of the pool A:

L_cont(g; x, τ, A) = − (1/|P(x)|) ∑_{k+∈P(x)} log [ exp(q⊤k+ / τ) / ∑_{k′∈A(x)} exp(q⊤k′ / τ) ],    (3)

where P(x) is the positive set, A(x) = A \ {q}, and τ ≥ 0 is the temperature. Positive Set Selection. As mentioned earlier, the crucial challenge is how to construct the positive set P(x). We propose utilizing the predicted label ỹ = argmax_{j∈Y} f^j(Aug_q(x)) from the classifier. Note that we restrict the predicted label to be in the candidate set Y. The positive examples are then selected as follows:

P(x) = { k′ | k′ ∈ A(x), ỹ′ = ỹ },    (4)

where ỹ′ is the predicted label for the corresponding training example of k′. For computational efficiency, we also maintain a label queue to store past predictions. In other words, we define the positive set of x to be those examples carrying the same approximated label prediction ỹ. Despite its simplicity, we show that our selection strategy can be theoretically justified (Section 5) and also leads to superior empirical results (Section 4). Note that more sophisticated selection strategies can be explored, which we discuss in Appendix B.4. Putting it all together, we jointly train the classifier as well as the contrastive network. The overall loss function is:

L = L_cls + λ L_cont.    (5)

Still, our goal of learning a high-quality representation by CL relies on accurate classifier predictions for positive set selection, which remains unsolved in the presence of label ambiguity. To this end, we further propose a novel label disambiguation mechanism based on the contrastive embeddings and show that these two components are mutually beneficial. | The authors present a new technique for partial label learning (PLL).
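The positive-set construction of Eq. (4) and the loss of Eq. (3) can be sketched together as follows. This is our own self-contained NumPy illustration: the pool A is a small set of unit-norm key embeddings tagged with classifier-predicted labels, standing in for the mini-batch views plus the queue.

```python
import numpy as np

def pico_contrastive_loss(q, pool, pool_labels, y_tilde, tau=0.07):
    """Eq. (3): contrast query q against pool A(x); positives are the pool
    entries whose predicted label matches y_tilde (Eq. (4))."""
    logits = pool @ q / tau                      # q^T k' / tau for every k' in A(x)
    log_denom = np.log(np.exp(logits).sum())
    pos = np.where(pool_labels == y_tilde)[0]    # the positive set P(x)
    if len(pos) == 0:
        return 0.0                               # empty P(x): skip term (our convention)
    return -np.mean(logits[pos] - log_denom)

rng = np.random.default_rng(0)
pool = rng.standard_normal((6, 16))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)   # L2-normalized key embeddings
pool_labels = np.array([0, 1, 1, 2, 1, 0])            # predicted labels for pool entries
q = pool[1] + 0.05 * rng.standard_normal(16)
q /= np.linalg.norm(q)
loss = pico_contrastive_loss(q, pool, pool_labels, y_tilde=1)
```

Minimizing this loss pulls q toward keys sharing its predicted label and pushes it away from the rest of the pool, which is the clustering effect the method relies on.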
PLL is the task where the labels for each instance include both the ground truth label and a randomly sampled set of distractor labels, and during training the model learns a latent decision for which among this set is the ground truth. The technique presented by the authors uses a combination of momentum (in the representation) and contrastive learning (to augment the label set) that leads to improved PLL results, reaching nearly fully supervised performance. | SP:33e3c74b2ec27a45d0ad5aaa5e50c8da0ed28b9f |
Contrastive Label Disambiguation for Partial Label Learning | 1 INTRODUCTION. The training of modern deep neural networks typically requires massive labeled data, which imposes formidable obstacles on data collection. Of particular challenge, real-world data annotation can naturally be subject to inherent label ambiguity and noise. For example, as shown in Figure 1, distinguishing an Alaskan Malamute from a Siberian Husky can be difficult for a human annotator. The issue of labeling ambiguity is prevalent yet often overlooked in many applications, such as web mining (Luo & Orabona, 2010) and automatic image annotation (Chen et al., 2018). This gives rise to the importance of partial label learning (PLL) (Hüllermeier & Beringer, 2006; Cour et al., 2011), where each training example is equipped with a set of candidate labels instead of the exact ground-truth label. This stands in contrast to its supervised counterpart, where one label must be chosen as the "gold". Arguably, the PLL problem is more common and practical in various situations due to its relatively lower annotation cost. Despite the promise, a core challenge in PLL is label disambiguation, i.e., identifying the ground-truth label from the candidate label set. Existing methods typically require a good feature representation (Liu & Dietterich, 2012; Zhang et al., 2016; Lyu et al., 2021) and operate under the assumption that data points closer in the feature space are more likely to share the same ground-truth label. However, the reliance on representations leads to a non-trivial dilemma: the inherent label uncertainty can undesirably manifest in the representation learning process, the quality of which may in turn prevent effective label disambiguation. To date, few efforts have been made to resolve this.
This paper bridges the gap by reconciling the intrinsic tension between the two highly dependent problems – representation learning and label disambiguation – in one coherent and synergistic framework. Our framework, Partial label learning with COntrastive label disambiguation (dubbed PiCO), produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Specifically, PiCO encapsulates two key components. First, we bring contrastive learning (CL) (Khosla et al., 2020) to partial label learning, which is unexplored in the previous PLL literature. To mitigate the key challenge of constructing positive pairs, we employ the classifier's output and generate pseudo positive pairs for contrastive comparison (Section 3.1). Second, based on the learned embeddings, we propose a novel prototype-based label disambiguation strategy (Section 3.2). Key to our method, we gradually update the pseudo target for classification based on the closest class prototype. By alternating the two steps above, PiCO converges to a solution with a highly distinguishable representation for accurate classification. Empirically, PiCO establishes state-of-the-art performance on three benchmark datasets, outperforming the baselines by a significant margin (Section 4), and obtains results that are competitive with fully supervised learning. Theoretically, we demonstrate that our contrastive representation learning and prototype-based label disambiguation are mutually beneficial, and can be rigorously interpreted from an Expectation-Maximization (EM) algorithm perspective (Section 5). First, the refined pseudo labeling improves contrastive learning by selecting pseudo positive examples accurately. This is analogous to the E-step, where we utilize the classifier's output to assign each data example to one label-specific cluster.
Second, better contrastive performance in turn improves the quality of the representations and thus the effectiveness of label disambiguation. This can be reasoned from an M-step perspective, where the contrastive loss partially maximizes the likelihood by clustering similar data examples. Finally, the training data are mapped to a mixture of von Mises-Fisher distributions on the unit hypersphere, which facilitates label disambiguation by using the component-specific label. Our main contributions are summarized as follows: 1 (Methodology). To the best of our knowledge, our paper pioneers the exploration of contrastive learning for partial label learning and proposes a novel framework termed PiCO. As an integral part of our algorithm, we also introduce a new prototype-based label disambiguation mechanism, which leverages the contrastively learned embeddings. 2 (Experiments). Empirically, our proposed PiCO framework establishes state-of-the-art performance on three PLL tasks. Moreover, we make the first attempt to conduct experiments on fine-grained classification datasets, where we show a classification performance improvement of up to 9.61% over the best baseline on the CUB-200 dataset. 3 (Theory). We theoretically interpret our framework from the expectation-maximization perspective. Our derivation also generalizes to other CL methods and shows that the alignment property in CL (Wang & Isola, 2020) is mathematically equivalent to the M-step in center-based clustering algorithms. 2 BACKGROUND. The problem of partial label learning (PLL) is defined using the following setup. Let $\mathcal{X}$ be the input space and $\mathcal{Y} = \{1, 2, \ldots, C\}$ be the output label space. We consider a training dataset $\mathcal{D} = \{(x_i, Y_i)\}_{i=1}^{n}$, where each tuple comprises an image $x_i \in \mathcal{X}$ and a candidate label set $Y_i \subset \mathcal{Y}$.
Identical to the supervised learning setup, the goal of PLL is to obtain a functional mapping that predicts the one true label associated with the input. Yet, differently, the PLL setup bears significantly more uncertainty in the label space. A basic assumption of PLL is that the ground-truth label $y_i$ is concealed in its candidate set, i.e., $y_i \in Y_i$, and is invisible to the learner. For this reason, the learning process can suffer from inherent ambiguity, compared with the supervised learning task with explicit ground truth. The key challenge of PLL is to identify the ground-truth label from the candidate label set. During training, we assign each image $x_i$ a normalized vector $s_i \in [0, 1]^C$ as the pseudo target, whose entries denote the probability of each label being the ground truth. The total probability mass of 1 is allocated among the candidate labels in $Y_i$. Note that $s_i$ will be updated during the training procedure. Ideally, $s_i$ should put more probability mass on the (unknown) ground-truth label $y_i$ over the course of training. We train a classifier $f : \mathcal{X} \to [0, 1]^C$ using the cross-entropy loss, with $s_i$ being the target prediction. The per-sample loss is given by: $\mathcal{L}_{cls}(f; x_i, Y_i) = \sum_{j=1}^{C} -s_{i,j} \log(f^j(x_i))$ s.t. $\sum_{j \in Y_i} s_{i,j} = 1$ and $s_{i,j} = 0, \forall j \notin Y_i$, (1) where $j$ denotes the index of a label and $s_{i,j}$ denotes the $j$-th entry of the pseudo target of $x_i$. Here $f$ is the softmax output of the network and we denote by $f^j$ its $j$-th entry. In the remainder of this paper, we omit the sample index $i$ when the context is clear. We proceed by describing our proposed framework. 3 METHOD. In this section, we describe our novel Partial label learning with COntrastive label disambiguation (PiCO) framework in detail. In a nutshell, PiCO comprises two key components tackling the representation quality (Section 3.1) and the label ambiguity (Section 3.2), respectively. The two components systematically work as a whole and reinforce each other.
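As an illustration, the per-sample classification loss in Eq. (1) can be sketched in a few lines of NumPy. This is a minimal, hedged version with names of our own choosing, not the authors' implementation:

```python
import numpy as np

def pll_cls_loss(f_x, s, Y):
    """Cross-entropy against a pseudo target (Eq. 1).

    f_x : softmax output of the classifier, shape (C,)
    s   : pseudo target, shape (C,)
    Y   : candidate label set (iterable of indices in 0..C-1)
    """
    # Enforce the constraints of Eq. (1): zero mass outside Y,
    # total mass 1 over the candidate labels.
    mask = np.zeros_like(s)
    mask[list(Y)] = 1.0
    s = s * mask
    s = s / s.sum()
    return -np.sum(s * np.log(f_x + 1e-12))
```

At initialization, the pseudo target is typically uniform over the candidate set; during training it is updated so that mass concentrates on the (unknown) ground-truth label.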
We further provide a rigorous theoretical interpretation of PiCO from an EM perspective in Section 5. 3.1 CONTRASTIVE REPRESENTATION LEARNING FOR PLL. The uncertainty in the label space poses a unique obstacle to learning effective representations. In PiCO, we couple the classification loss in Eq. (1) with a contrastive term that facilitates a clustering effect in the embedding space. While contrastive learning has been extensively studied in the recent literature, it remains untapped in the domain of PLL. The main challenge lies in the construction of the positive sample set. In conventional supervised CL frameworks, positive sample pairs can easily be drawn according to the ground-truth labels (Khosla et al., 2020). However, this is not straightforward in the setting of PLL. Training Objective. To begin with, we describe the standard contrastive loss term. We adopt the most popular setup, closely following MoCo (He et al., 2020) and SupCon (Khosla et al., 2020). Given each sample $(x, Y)$, we generate two views, a query view and a key view, by way of randomized data augmentation $\mathrm{Aug}(x)$. The two images are then fed into a query network $g(\cdot)$ and a key network $g'(\cdot)$, yielding a pair of L2-normalized embeddings $q = g(\mathrm{Aug}_q(x))$ and $k = g'(\mathrm{Aug}_k(x))$. In implementation, the query network shares the same convolutional blocks as the classifier, followed by a prediction head (see Figure 2). Following MoCo, the key network is updated as a momentum moving average of the query network. We additionally maintain a queue storing the most recent key embeddings $k$, updated chronologically. This yields the following contrastive embedding pool: $A = B_q \cup B_k \cup \mathrm{queue}$, (2) where $B_q$ and $B_k$ are the vectorial embeddings corresponding to the query and key views of the current mini-batch.
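The MoCo-style bookkeeping described above (momentum update of the key network and a chronological queue of key embeddings, Eq. 2) might look as follows. This is a hedged sketch with illustrative names, not the authors' code:

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """MoCo-style update: the key network slowly trails the query network."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

class KeyQueue:
    """FIFO buffer holding the most recent key embeddings."""
    def __init__(self, size, dim):
        self.buf = np.zeros((size, dim))
        self.ptr = 0

    def enqueue(self, keys):  # keys: (B, dim), L2-normalized rows
        for k in keys:
            self.buf[self.ptr] = k
            self.ptr = (self.ptr + 1) % len(self.buf)

def contrastive_pool(B_q, B_k, queue):
    """Eq. (2): A = B_q ∪ B_k ∪ queue."""
    return np.concatenate([B_q, B_k, queue.buf], axis=0)
```

In practice the queue size is much larger than the mini-batch, so the pool supplies many negatives at little extra compute, which is the usual motivation for the MoCo design.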
Given an example $x$, the per-sample contrastive loss is defined by contrasting its query embedding with the remainder of the pool $A$: $\mathcal{L}_{cont}(g; x, \tau, A) = -\frac{1}{|P(x)|} \sum_{k_+ \in P(x)} \log \frac{\exp(q^\top k_+ / \tau)}{\sum_{k' \in A(x)} \exp(q^\top k' / \tau)}$, (3) where $P(x)$ is the positive set, $A(x) = A \setminus \{q\}$, and $\tau \ge 0$ is the temperature. Positive Set Selection. As mentioned earlier, the crucial challenge is how to construct the positive set $P(x)$. We propose utilizing the predicted label $\tilde{y} = \arg\max_{j \in Y} f^j(\mathrm{Aug}_q(x))$ from the classifier. Note that we restrict the predicted label to be in the candidate set $Y$. The positive examples are then selected as follows: $P(x) = \{k' \mid k' \in A(x), \tilde{y}' = \tilde{y}\}$, (4) where $\tilde{y}'$ is the predicted label for the training example corresponding to $k'$. For computational efficiency, we also maintain a label queue to store past predictions. In other words, we define the positive set of $x$ to be those examples carrying the same approximate label prediction $\tilde{y}$. Despite its simplicity, we show that our selection strategy can be theoretically justified (Section 5) and also leads to superior empirical results (Section 4). Note that more sophisticated selection strategies can be explored, which we discuss in Appendix B.4. Putting it all together, we jointly train the classifier as well as the contrastive network. The overall loss function is: $\mathcal{L} = \mathcal{L}_{cls} + \lambda \mathcal{L}_{cont}$. (5) Still, our goal of learning a high-quality representation by CL relies on accurate classifier predictions for positive set selection, which remains unsolved in the presence of label ambiguity. To this end, we further propose a novel label disambiguation mechanism based on contrastive embeddings and show that these two components are mutually beneficial. | The paper proposes an innovative approach to partial label learning, where an instance is assigned some false positive labels besides its true label (due to the difficulty of labeling).
The approach blends contrastive learning with prototype learning: (i) the former helps to form good clusters for the latter to learn prototype representations; (ii) in return, the latter helps to select positive samples for the former. Theoretically, the authors prove that the two work collaboratively in an EM fashion. The paper demonstrates the effectiveness of the proposed approach using a typical setting for the task. Specifically, CIFAR-10 and CIFAR-100 were used, and each instance was randomly assigned false labels with some probability. The proposed approach achieved impressive results, substantially outperforming five recent models from the literature. Moreover, its performance closely approaches that of supervised learning. The paper also considers a much stricter setting in which false positive labels are semantically correlated with the true labels. The proposed approach also achieved impressive results on the CUB-200 and CIFAR-100-H datasets. Moreover, the paper presents several in-depth analyses. | SP:33e3c74b2ec27a45d0ad5aaa5e50c8da0ed28b9f |
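The contrastive objective of PiCO (Eqs. 3–5: positive-set selection from classifier predictions, the InfoNCE-style loss, and the combined loss) admits a compact NumPy sketch. Names are ours, and the pool is assumed to already exclude the query embedding itself, i.e. it plays the role of $A(x)$:

```python
import numpy as np

def positive_set(pool_preds, y_tilde):
    """Eq. (4): indices of pool entries whose predicted label matches ỹ."""
    return np.flatnonzero(pool_preds == y_tilde)

def pico_cont_loss(q, pool, pos_idx, tau=0.07):
    """Eq. (3): contrast the query q against the pool A(x)."""
    logits = pool @ q / tau                  # rows and q are L2-normalized
    log_denom = np.log(np.sum(np.exp(logits)))
    return -np.mean(logits[pos_idx] - log_denom)

def pico_total_loss(l_cls, l_cont, lam=0.5):
    """Eq. (5): L = L_cls + λ · L_cont (λ is a hyperparameter)."""
    return l_cls + lam * l_cont
```

Note how the only PLL-specific ingredient is `positive_set`: instead of ground-truth labels (as in SupCon), it matches the classifier's candidate-restricted predictions stored in the label queue.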
| This work approaches the partial label learning problem, where each training example is annotated with multiple candidate labels, in contrast to the conventional supervised learning setup in which the ground-truth label is provided. The proposed framework comprises two key components: (1) a contrastive learning module that uses the classifier output to select positive pairs, and (2) a label disambiguation method that uses the contrastive prototypes to update the pseudo targets in a moving-average style. The experimental results look quite strong, with the performance of PiCO nearly approaching the fully supervised results. | SP:33e3c74b2ec27a45d0ad5aaa5e50c8da0ed28b9f |
Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective | 1 INTRODUCTION. Emerging studies on the inner mechanisms of deep neural networks (DNNs) have revealed that many models have shortcut biases (Cadene et al., 2019; Weinzaepfel & Rogez, 2021; Bahng et al., 2020; Geirhos et al., 2020). DNNs often pick up simple, non-essential cues that are nonetheless effective within a particular dataset. For example, a DNN trained for the task of animal recognition may recognize ducks by attending to water backgrounds, given the strong correlation between such background cues and the target label (Choe et al., 2020). These shortcut biases often result in a striking qualitative difference between human and machine recognition systems; for example, convolutional neural networks (CNNs) trained on ImageNet rely extensively on texture features, while humans preferentially look at the global shape of objects (Geirhos et al., 2019). In other cases, the shortcut bias arises in models that suppress certain input streams: visual question answering (VQA) models often neglect the image cues entirely, for one does not require an image to answer questions like “what color is the banana in the image?” (Cadene et al., 2019). Since DNNs have successfully outperformed humans on many tasks (Silver et al., 2016; Rajpurkar et al., 2017), such a phenomenon may look benign at face value. However, shortcut biases become problematic when it comes to generalization to more challenging test-time conditions, where the shortcuts are no longer valid (Cadene et al., 2019; Weinzaepfel & Rogez, 2021; Bahng et al., 2020; Geirhos et al., 2020). These biases also raise ethical concerns when the shortcut features adopted by a model are sensitive attributes like gender or skin color (Wang et al., 2019; Xu et al., 2020).
Instead of proposing a method or solution, this work focuses on deepening our understanding of the shortcut bias phenomenon. In particular, we design a dataset where multiple cues are equally valid for solving a particular task and observe which cues tend to be preferentially adopted by DNNs. (∗First two authors contributed equally.) The experimental setup is inspired by the Wisconsin Card Sorting Test (WCST; Banno et al., 2012) in cognitive neuroscience. See Figure 1 for an illustration of the setup. It consists of a training set with multiple highly correlated cues (e.g., color, shape, and scale) that offer equally plausible pathways to successful target prediction ($Y \in \{1, 2, 3\}$). We call this a diagonal training set, highlighting the spatial arrangement of such samples in the product space of all cue combinations. A model $f$ trained on such a dataset will adopt or neglect certain cues. We analyse the cues adopted by a model by observing its predictions on off-diagonal samples. With regard to Figure 1, for example, consider $f$'s prediction for a small, blue triangle as an off-diagonal sample. The prediction value $f(\cdot) \in \{1, 2, 3\}$ tells us which cue the model is biased towards: if $f(\cdot) = 1$, then $f$ is biased towards scale; if $f(\cdot) = 2$, towards shape; and if $f(\cdot) = 3$, towards color. All three scenarios are plausible, and only testing on off-diagonal samples will reveal the model's bias. We make important observations on the nature of shortcut bias under WCST-ML. We discover that, despite the equal amounts of correlation with the target label, there tends to be a preferential ordering of cues. The preference is largely shared across different DNN architectures, such as feedforward networks, ResNets (He et al., 2015), and Vision Transformers (Dosovitskiy et al., 2021), and across multiple initial parameters.
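The off-diagonal probe described above amounts to a simple lookup: given the label an off-diagonal sample would receive under each cue, the model's prediction reveals which cue it relied on. A toy version (function and dictionary names are ours):

```python
def diagnose_bias(pred, labels_per_cue):
    """Return the cue(s) consistent with the model's prediction
    on an off-diagonal sample.

    labels_per_cue: {cue_name: label the sample gets if that cue decides}
    """
    return [cue for cue, y in labels_per_cue.items() if y == pred]
```

For the small blue triangle in Figure 1, `labels_per_cue = {"scale": 1, "shape": 2, "color": 3}`; a prediction of 3 then implicates the color cue.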
From the parameter-space perspective, we further observe that the set of solutions $\Theta_p$ biased to the preferred cues occupies a far greater volume than the set $\Theta_a$ corresponding to the averted cues. The loss landscape also tends to be flatter around $\Theta_p$ than around $\Theta_a$. Why are certain cues preferred over others by DNNs in general? We provide an explanation based on the Kolmogorov complexity of cues, which measures the minimal description length for representing a cue (Kolmogorov, 1963). Prior studies have shown that in the parameter space of generic DNNs, there are exponentially more Kolmogorov-simple functions than Kolmogorov-complex ones (Valle-Perez et al., 2019; De Palma et al., 2019). Based on these theoretical results, we argue that DNNs are naturally drawn to Kolmogorov-simple cues. We empirically verify that the preferences for cues correlate well with their Kolmogorov complexity estimates. What are the consequences of this inborn preference for simple cues? Firstly, it may hinder the generalization of DNNs to challenging test scenarios where the simple shortcut cues are no longer valid (Geirhos et al., 2020). Secondly, we expose the possibility that certain protected attributes correspond to simple shortcut cues for the task at hand, endangering the fairness of DNNs (Barocas et al., 2017). In such a case, human intervention in the learning procedure may be necessary to enforce fairness, for the dataset and the DNNs can be naturally drawn to exploit protected attributes. The primary goal of this manuscript is to shed light on the nature of shortcut biases and the underlying mechanisms behind the scenes.
Our contributions are summarized as follows: an experimental setup for studying the shortcut bias in depth (WCST-ML) (§2); novel observations on the nature of shortcut biases, such as the existence of a preferential ordering of cues and its connections to the geometry of the loss landscape in the parameter space (§3); an explanation based on the descriptional complexity of cues (§4); and a discussion of the implications for generalization and fairness (§5), such as the preferential use of ethnicity features for face recognition on the UTKFace dataset. (Footnote 1: The original WCST gauges subjects' cognitive ability to flexibly shift their underlying rules, i.e., adopted cues, for categorizing samples. Inability to do so may indicate dysfunctional frontal lobe activity.) 2 SETUP. We introduce the setup that will provide the basis for the analysis in this paper. We describe the procedure for building a dataset with multiple equally valid cues for recognition (§2.1). The procedure is applied to the DSprites and UTKFace datasets in §2.2. In §2.3, we introduce terminology for the analysis of the parameter space and make theoretical connections with our data framework. 2.1 DATA FRAMEWORK: WCST-ML. Many factors affect the preference of models for certain cues. The existence of dominant classes is one example; it encourages models to favor cues conducive to good performance on the dominant classes (Barocas et al., 2017; Hashimoto et al., 2018). In other cases, some cues have higher degrees of correlation with the target label (Geirhos et al., 2020). In this work, we test whether bias is still present under fair conditions, i.e., when a training dataset contains a set of valid cues, each of which correlates equally with the targets, will DNNs still have a preference for certain cues? If so, why?
To study this, we introduce a data construction framework called the Wisconsin Card Sorting Test for Machine Learners (WCST-ML), named after a clinical test in cognitive neuroscience (Banno et al., 2012). See Figure 1 for an overview. As a running example, we assume a dataset where each image can be described by varying two latent variables, object shape and object color. Let $X$ and $Y$ denote the image and label, respectively. We write $X_{ij}$ for the image with color $i$ and shape $j$, where $i, j \in \{1, \cdots, L\}$. When we want to consider $K > 2$ varying factors, we may write $X_{i_1, \cdots, i_K}$ for the image random variable with the $k$th factor chosen to be $i_k \in \{1, \cdots, L\}$. Importantly, we fix the number of categories for each factor to $L$ to enforce similar conditions for all cues. Similar learning setups have appeared in prior papers: “Cross-bias generalisation” (Bahng et al., 2020), “What if multiple features are predictive?” (Hermann & Lampinen, 2020), and “Zero generalization opportunities” (Eulig et al., 2021). While we fully acknowledge the conceptual similarities, we stress that our work presents the first dedicated study of the cue selection problem and its underlying mechanisms. The same set of images $\{X_{ij} \mid 1 \le i, j \le L\}$ admits two possible tasks: color and shape classification. The task is determined by the labels $Y$. Denoting by $Y_{ij}$ the label for image $X_{ij}$, setting $Y_{ij} = i$ leads to the color classification task, and setting $Y_{ij} = j$ leads to the shape classification task. We may then build the data distribution for the task at hand via $\mathcal{D}_{color} := \bigcup_{1 \le i, j \le L} (X_{ij}, Y_{ij} = i)$ and $\mathcal{D}_{shape} := \bigcup_{1 \le i, j \le L} (X_{ij}, Y_{ij} = j)$ (1) for the color and shape recognition tasks, respectively. More generally, we may write $\mathcal{D}_k := \bigcup_{1 \le i_1, \cdots, i_K \le L} (X_{i_1, \cdots, i_K}, Y_{i_1, \cdots, i_K} = i_k)$ (2) for the data distribution where the task is to recognize the $k$th cue.
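The data distributions of Eqs. (1)–(2) can be materialized directly: every cell of the factor grid is labeled by its $k$th factor. A small sketch (our names; `images` is a hypothetical mapping from factor tuples to images):

```python
import itertools

def build_D_k(images, K, L, k):
    """Eqs. (1)-(2): label every cell of the factor grid by its kth factor.

    images: {(i_1, ..., i_K): image} covering all L**K factor combinations
    Returns a list of (image, label) pairs for the cue-k recognition task.
    """
    return [(images[c], c[k]) for c in itertools.product(range(L), repeat=K)]
```

With $K = 2$, taking $k = 0$ yields the color task and $k = 1$ the shape task of the running example.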
We define the union of random variables as the balanced mixture: $\bigcup_{i=1}^{L} Z_i := Z_I$ where $I \sim \mathrm{Unif}\{1, \cdots, L\}$. We now introduce the notion of a diagonal dataset, where every cue (e.g., color and shape) contains all the information needed to predict the true label $Y$. That is, a perfect prediction of either the color or the shape attribute leads to 100% accuracy on the task at hand. This can be achieved by letting the factors always vary together, $i = j$, in the dataset (hence the name). We write $\mathcal{D}_{diag} := \bigcup_{1 \le i \le L} (X_{ii}, Y_{ii} = i)$. (3) Such a dataset completely leaves it to the model to choose the cue for recognition. Given a model $f$ trained on $\mathcal{D}_{diag}$, we analyse the recognition cue adopted by $f$ by measuring its unbiased accuracy on all the cells (see Figure 1). There are $K$ different unbiased accuracies for each task, depending on how the off-diagonal cells are labelled, e.g., $\mathcal{D}_{color}$ and $\mathcal{D}_{shape}$ in Equation 1. For a general setting with $K$ cues, the unbiased accuracy for the $k$th cue is defined as $\mathrm{acc}_k(f) := \frac{1}{L^K} \sum_{i_1, \cdots, i_K} \Pr[f(X_{i_1, \cdots, i_K}) = i_k]$. (4) Proposition 1. For $k \in \{1, \cdots, K\}$, $\mathrm{acc}_k(f) = 1$ if and only if $f(X_{i_1, \cdots, i_K}) = i_k$ almost surely for all $1 \le i_1, \cdots, i_K \le L$. Moreover, if the condition above holds (i.e., $f$ is perfectly biased to cue $k$), then $\mathrm{acc}_m(f) = \frac{1}{L}$ for all $m \ne k$. The proposition implies that the unbiased accuracy is capable of detecting the bias in a model $f$: $\mathrm{acc}_k(f) = 1$ implies that $f$'s prediction is based solely on cue $k$. It also emphasizes that it is impossible for a model to be perfectly biased to multiple cues. Finally, we remark that the WCST-ML analysis does not require the cues to be orthogonal or interpretable to humans. The only requirement is the availability of labelled samples $(X_{i_1, \cdots, i_K}, Y_{i_1, \cdots, i_K})$ for the cue of interest. | The authors propose a framework for studying the tendency of deep neural networks to preferentially adopt "cues".
Specifically, they focus on settings where multiple cues are equally likely, though not all of them are equally exploited. To set up such a scenario, they introduce the WCST-ML task, in which the prevalence of cues can be parametrically controlled. They also conduct empirical studies on the more naturalistic UTKFace dataset. The authors introduce a set of metrics, such as path connectivity and attractor-basin properties, to analyze cue preferences from a loss-landscape perspective. The authors also explain these observations based on the "complexity" of cues. | SP:1dfda46dbe3bbe38868402568e33df37e4fcf91d |
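A runnable sketch of the diagonal training set (Eq. 3) and the unbiased accuracy (Eq. 4): cues are indexed 0..K-1, and a deterministic model is represented as a function from a cell of the factor grid to a label. Names are ours, not the authors':

```python
import itertools
import numpy as np

def diagonal_dataset(K, L):
    """Eq. (3): samples whose K factors all vary together (i_1 = ... = i_K)."""
    return [(i,) * K for i in range(L)]

def unbiased_accuracy(f, K, L, k):
    """Eq. (4): accuracy of f when cue k defines the label,
    averaged over all L**K cells of the factor product space."""
    cells = itertools.product(range(L), repeat=K)
    return float(np.mean([f(c) == c[k] for c in cells]))
```

A model perfectly biased to cue 0, i.e. `f = lambda c: c[0]`, scores 1 on cue 0 and 1/L on every other cue, matching Proposition 1.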
Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective | 1 INTRODUCTION. Emerging studies on the inner mechanisms of deep neural networks (DNNs) have revealed that many models have shortcut biases (Cadene et al., 2019; Weinzaepfel & Rogez, 2021; Bahng et al., 2020; Geirhos et al., 2020). DNNs often pick up simple, non-essential cues, which are nonetheless effective within a particular dataset. For example, a DNN trained for the task of animal recognition may recognize ducks by attending to water backgrounds, given the strong correlations between such background cues and the target label (Choe et al., 2020). These shortcut biases often result in a striking qualitative difference between human and machine recognition systems; for example, convolutional neural networks (CNNs) trained on ImageNet rely extensively on texture features, while humans preferentially look at the global shape of objects (Geirhos et al., 2019). In other cases, the shortcut bias arises in models that suppress certain streams of inputs: visual question answering (VQA) models often neglect the image cues entirely, for one does not require images to answer questions like "what color is the banana in the image?" (Cadene et al., 2019) Since DNNs have successfully outperformed humans on many tasks (Silver et al., 2016; Rajpurkar et al., 2017), such a phenomenon may look benign at face value. However, shortcut biases become problematic when it comes to generalization to more challenging test-time conditions, where the shortcuts are no longer valid (Cadene et al., 2019; Weinzaepfel & Rogez, 2021; Bahng et al., 2020; Geirhos et al., 2020). These biases also raise ethical concerns when the shortcut features adopted by a model are sensitive, like gender or skin color (Wang et al., 2019; Xu et al., 2020).
Instead of proposing a method or solution, this work focuses on deepening our understanding of the shortcut bias phenomenon. (∗First two authors contributed equally.) In particular, we design a dataset where multiple cues are equally valid for solving a particular task and observe which cues tend to be preferentially adopted by DNNs. The experimental setup is inspired by the Wisconsin Card Sorting Test (WCST, Banno et al. (2012)) in cognitive neuroscience¹. See Figure 1 for an illustration of the setup. It consists of a training set with multiple highly correlated cues (e.g. color, shape, and scale) that offer equally plausible pathways to successful target prediction (Y ∈ {1, 2, 3}). We call this a diagonal training set, highlighting the spatial arrangement of such samples in the product space of all combinations. A model f trained on such a dataset will adopt or neglect certain cues. We analyse the cues adopted by a model by observing its predictions on off-diagonal samples. With regard to Figure 1, for example, consider f's prediction for a small, blue triangle as an off-diagonal sample. The prediction values f( ) ∈ {1, 2, 3} tell us which cue the model is biased towards, e.g. if f( ) = 1, then f is biased towards scale; if f( ) = 2, towards shape; and if f( ) = 3, towards color. All three scenarios are plausible, and only testing on off-diagonal samples will reveal model bias. We make important observations on the nature of shortcut bias under WCST-ML. We discover that, despite the equal amounts of correlation with the target label, there tends to be a preferential ordering of cues. The preference is largely shared across different DNN architectures, such as feedforward networks, ResNets (He et al., 2015), and Vision Transformers (Dosovitskiy et al., 2021), and across multiple initial parameters.
From the parameter-space perspective, we further observe that the set of solutions Θp biased to the preferred cues occupies a far greater volume than the set Θa corresponding to the averted cues. The loss landscape also tends to be flatter around Θp than around Θa. Why are certain cues preferred over others by DNNs in general? We provide an explanation based on the Kolmogorov complexity of cues, which measures the minimal description length for representing a cue (Kolmogorov, 1963). Prior studies have shown that in the parameter space of generic DNNs, there are exponentially more Kolmogorov-simple functions than Kolmogorov-complex ones (Valle-Perez et al., 2019; De Palma et al., 2019). Based on these theoretical results, we argue that DNNs are naturally drawn to Kolmogorov-simple cues. We empirically verify that the preferences for cues correlate well with their Kolmogorov complexity estimates. What are the consequences of this inborn preference for simple cues? Firstly, it may hinder the generalization of DNNs to challenging test scenarios where the simple shortcut cues are no longer valid (Geirhos et al., 2020). Secondly, we expose the possibility that certain protected attributes correspond to the simple shortcut cue for the task at hand, endangering the fairness of DNNs (Barocas et al., 2017). In such a case, human intervention in the learning procedure may be necessary to enforce fairness, for the dataset and DNNs can be naturally drawn to exploit protected attributes. The primary goal of this manuscript is to shed light on the nature of shortcut biases and the underlying mechanisms behind the scenes.
Our contributions are summarized as follows: an experimental setup for studying shortcut bias in depth (WCST-ML) (§2); novel observations on the nature of shortcut biases, such as the existence of a preferential ordering of cues and its connections to the geometry of the loss landscape in the parameter space (§3); an explanation based on the descriptional complexity of cues (§4); and a discussion of the implications for generalization and fairness (§5), such as the preferential use of ethnic features for face recognition on the UTKFace dataset. (¹The original WCST gauges the subjects' cognitive ability to flexibly shift their underlying rules (adopted cues) for categorizing samples. Inability to do so may indicate dysfunctional frontal lobe activity.) 2 SETUP. We introduce the setup that provides the basis for the analysis in this paper. We describe the procedure for building a dataset with multiple equally valid cues for recognition (§2.1). The procedure is applied to the DSprites and UTKFace datasets in §2.2. In §2.3, we introduce terminology for the analysis of the parameter space and make theoretical connections with our data framework. 2.1 DATA FRAMEWORK: WCST-ML Many factors affect the preference of models for certain cues. The existence of dominant classes is an example; it encourages models to favor cues conducive to good performance on the dominant classes (Barocas et al., 2017; Hashimoto et al., 2018). In other cases, some cues have higher degrees of correlation with the target label (Geirhos et al., 2020). In this work, we test whether bias is still present under fair conditions, i.e.: when a training dataset contains a set of valid cues, each of which correlates equally with the targets, will DNNs still have a preference for certain cues? If so, why?
To study this, we introduce a data construction framework called the Wisconsin Card Sorting Test for Machine Learners (WCST-ML), named after a clinical test in cognitive neuroscience (Banno et al., 2012). See Figure 1 for an overview. As a running example, we assume a dataset where each image can be described by varying two latent variables, object shape and object color. Let X and Y denote image and label, respectively. We write X_{ij} for the image with color i and shape j, where i, j ∈ {1, ···, L}. When we want to consider K > 2 varying factors, we may write X_{i_1,···,i_K} for the image random variable with the kth factor chosen to be i_k ∈ {1, ···, L}. Importantly, we fix the number of categories for each factor to L to enforce similar conditions for all cues. Similar learning setups have appeared in prior papers: "Cross-bias generalisation" (Bahng et al., 2020), "What if multiple features are predictive?" (Hermann & Lampinen, 2020), and "Zero generalization opportunities" (Eulig et al., 2021). While we fully acknowledge the conceptual similarities, we stress that our work presents the first dedicated study of the cue selection problem and its underlying mechanisms. The same set of images {X_{ij} | 1 ≤ i, j ≤ L} admits two possible tasks: color and shape classification. The task is determined by the labels Y. Denoting Y_{ij} as the label for image X_{ij}, setting Y_{ij} = i leads to the color classification task, and setting Y_{ij} = j leads to the shape classification task. We may then build the data distribution for the task at hand via D_color := ⋃_{1≤i,j≤L} (X_{ij}, Y_{ij} = i) and D_shape := ⋃_{1≤i,j≤L} (X_{ij}, Y_{ij} = j) (1) for the color and shape recognition tasks, respectively. More generally, we may write D_k := ⋃_{1≤i_1,···,i_K≤L} (X_{i_1,···,i_K}, Y_{i_1,···,i_K} = i_k) (2) for the data distribution where the task is to recognize the kth cue.
We define the union of random variables as the balanced mixture: ⋃_{i=1}^{L} Z_i := Z_I where I ∼ Unif{1, ···, L}. We now introduce the notion of a diagonal dataset, where every cue (e.g. color and shape) contains all the information needed to predict the true label Y. That is, a perfect prediction of either the color or the shape attribute leads to 100% accuracy on the task at hand. This is achieved by letting the factors always vary together, i = j, in the dataset (hence the name). We write D_diag := ⋃_{1≤i≤L} (X_{ii}, Y_{ii} = i). (3) Such a dataset leaves it entirely to the model to choose the cue for recognition. Given a model f trained on D_diag, we analyse the recognition cue adopted by f by measuring its unbiased accuracy on all the cells (see Figure 1). There are K different unbiased accuracies for each task, depending on how the off-diagonal cells are labelled: e.g. D_color and D_shape in equation 1. For a general setting with K cues, the unbiased accuracy for the kth cue is defined as acc_k(f) := (1/L^K) Σ_{i_1,···,i_K} Pr[f(X_{i_1,···,i_K}) = i_k]. (4) Proposition 1. For k ∈ {1, ···, K}, acc_k(f) = 1 if and only if f(X_{i_1,···,i_K}) = i_k almost surely for all 1 ≤ i_1, ···, i_K ≤ L. Moreover, if the condition above holds (i.e. f is perfectly biased to cue k), then acc_m(f) = 1/L for all m ≠ k. The proposition implies that the unbiased accuracy is capable of detecting the bias in a model f: acc_k(f) = 1 implies that f's prediction is based solely on cue k. It also shows that it is impossible for a model to be perfectly biased to multiple cues. Finally, we remark that the WCST-ML analysis does not require the cues to be orthogonal or interpretable to humans. The only requirement is the availability of the labelled samples (X_{i_1,···,i_K}, Y_{i_1,···,i_K}) for the cue of interest.
| The paper discusses biases in inductive learning in deep neural networks that stem from pathologically sampled data. The authors pose a problem setting where a learner only sees samples in which two or more latent values can only be observed in a fully correlated fashion (e.g. scale, color, shape). They then design various criteria and protocols to gain insight into the behavior of the learned model when the correlation no longer holds (i.e. on samples that have not been seen in the training data). They conclude that, in this case of generalization to unseen data, the trained model has an implicit bias towards (1) mostly choosing single cues to determine the predicted label (e.g. color only), (2) more preferred cues being simpler than less preferred cues, and (3) the underlying solution space of possible parameters containing more solutions that prefer the simple cues. They demonstrate empirical results on variations of existing datasets (DSprites, UTKFace). | SP:1dfda46dbe3bbe38868402568e33df37e4fcf91d |
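The paper's complexity-based explanation can be illustrated with a crude stand-in: using compressed byte length as a proxy for the description length of a cue channel. This proxy is an assumption for illustration only, not the estimator used by the authors:

```python
import zlib

# Crude proxy (an assumption, not the paper's estimator): use compressed
# byte length as a stand-in for the Kolmogorov complexity of a cue channel.
def complexity(raw: bytes) -> int:
    return len(zlib.compress(raw, level=9))

# A "color"-like cue: one value repeated over a 32x32 patch (very regular).
color_cue = bytes([7]) * 1024
# A "shape"-like cue: a structured but far less regular byte pattern.
shape_cue = bytes((x * x + 3 * x) % 251 for x in range(1024))

c_color = complexity(color_cue)
c_shape = complexity(shape_cue)
# The simpler cue compresses to far fewer bytes, matching the intuition
# that DNNs gravitate towards low-description-length cues.
```

True Kolmogorov complexity is uncomputable, so any such estimate is an upper bound; the point of the sketch is only that a constant "color" channel admits a much shorter description than a structured "shape" channel.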
Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective | 1 INTRODUCTION. Emerging studies on the inner mechanisms of deep neural networks (DNNs) have revealed that many models have shortcut biases (Cadene et al., 2019; Weinzaepfel & Rogez, 2021; Bahng et al., 2020; Geirhos et al., 2020). DNNs often pick up simple, non-essential cues, which are nonetheless effective within a particular dataset. For example, a DNN trained for the task of animal recognition may recognize ducks by attending to water backgrounds, given the strong correlations between such background cues and the target label (Choe et al., 2020). These shortcut biases often result in a striking qualitative difference between human and machine recognition systems; for example, convolutional neural networks (CNNs) trained on ImageNet rely extensively on texture features, while humans preferentially look at the global shape of objects (Geirhos et al., 2019). In other cases, the shortcut bias arises in models that suppress certain streams of inputs: visual question answering (VQA) models often neglect the image cues entirely, for one does not require images to answer questions like "what color is the banana in the image?" (Cadene et al., 2019) Since DNNs have successfully outperformed humans on many tasks (Silver et al., 2016; Rajpurkar et al., 2017), such a phenomenon may look benign at face value. However, shortcut biases become problematic when it comes to generalization to more challenging test-time conditions, where the shortcuts are no longer valid (Cadene et al., 2019; Weinzaepfel & Rogez, 2021; Bahng et al., 2020; Geirhos et al., 2020). These biases also raise ethical concerns when the shortcut features adopted by a model are sensitive, like gender or skin color (Wang et al., 2019; Xu et al., 2020).
Instead of proposing a method or solution, this work focuses on deepening our understanding of the shortcut bias phenomenon. (∗First two authors contributed equally.) In particular, we design a dataset where multiple cues are equally valid for solving a particular task and observe which cues tend to be preferentially adopted by DNNs. The experimental setup is inspired by the Wisconsin Card Sorting Test (WCST, Banno et al. (2012)) in cognitive neuroscience¹. See Figure 1 for an illustration of the setup. It consists of a training set with multiple highly correlated cues (e.g. color, shape, and scale) that offer equally plausible pathways to successful target prediction (Y ∈ {1, 2, 3}). We call this a diagonal training set, highlighting the spatial arrangement of such samples in the product space of all combinations. A model f trained on such a dataset will adopt or neglect certain cues. We analyse the cues adopted by a model by observing its predictions on off-diagonal samples. With regard to Figure 1, for example, consider f's prediction for a small, blue triangle as an off-diagonal sample. The prediction values f( ) ∈ {1, 2, 3} tell us which cue the model is biased towards, e.g. if f( ) = 1, then f is biased towards scale; if f( ) = 2, towards shape; and if f( ) = 3, towards color. All three scenarios are plausible, and only testing on off-diagonal samples will reveal model bias. We make important observations on the nature of shortcut bias under WCST-ML. We discover that, despite the equal amounts of correlation with the target label, there tends to be a preferential ordering of cues. The preference is largely shared across different DNN architectures, such as feedforward networks, ResNets (He et al., 2015), and Vision Transformers (Dosovitskiy et al., 2021), and across multiple initial parameters.
From the parameter-space perspective, we further observe that the set of solutions Θp biased to the preferred cues occupies a far greater volume than the set Θa corresponding to the averted cues. The loss landscape also tends to be flatter around Θp than around Θa. Why are certain cues preferred over others by DNNs in general? We provide an explanation based on the Kolmogorov complexity of cues, which measures the minimal description length for representing a cue (Kolmogorov, 1963). Prior studies have shown that in the parameter space of generic DNNs, there are exponentially more Kolmogorov-simple functions than Kolmogorov-complex ones (Valle-Perez et al., 2019; De Palma et al., 2019). Based on these theoretical results, we argue that DNNs are naturally drawn to Kolmogorov-simple cues. We empirically verify that the preferences for cues correlate well with their Kolmogorov complexity estimates. What are the consequences of this inborn preference for simple cues? Firstly, it may hinder the generalization of DNNs to challenging test scenarios where the simple shortcut cues are no longer valid (Geirhos et al., 2020). Secondly, we expose the possibility that certain protected attributes correspond to the simple shortcut cue for the task at hand, endangering the fairness of DNNs (Barocas et al., 2017). In such a case, human intervention in the learning procedure may be necessary to enforce fairness, for the dataset and DNNs can be naturally drawn to exploit protected attributes. The primary goal of this manuscript is to shed light on the nature of shortcut biases and the underlying mechanisms behind the scenes.
Our contributions are summarized as follows: an experimental setup for studying shortcut bias in depth (WCST-ML) (§2); novel observations on the nature of shortcut biases, such as the existence of a preferential ordering of cues and its connections to the geometry of the loss landscape in the parameter space (§3); an explanation based on the descriptional complexity of cues (§4); and a discussion of the implications for generalization and fairness (§5), such as the preferential use of ethnic features for face recognition on the UTKFace dataset. (¹The original WCST gauges the subjects' cognitive ability to flexibly shift their underlying rules (adopted cues) for categorizing samples. Inability to do so may indicate dysfunctional frontal lobe activity.) 2 SETUP. We introduce the setup that provides the basis for the analysis in this paper. We describe the procedure for building a dataset with multiple equally valid cues for recognition (§2.1). The procedure is applied to the DSprites and UTKFace datasets in §2.2. In §2.3, we introduce terminology for the analysis of the parameter space and make theoretical connections with our data framework. 2.1 DATA FRAMEWORK: WCST-ML Many factors affect the preference of models for certain cues. The existence of dominant classes is an example; it encourages models to favor cues conducive to good performance on the dominant classes (Barocas et al., 2017; Hashimoto et al., 2018). In other cases, some cues have higher degrees of correlation with the target label (Geirhos et al., 2020). In this work, we test whether bias is still present under fair conditions, i.e.: when a training dataset contains a set of valid cues, each of which correlates equally with the targets, will DNNs still have a preference for certain cues? If so, why?
To study this, we introduce a data construction framework called the Wisconsin Card Sorting Test for Machine Learners (WCST-ML), named after a clinical test in cognitive neuroscience (Banno et al., 2012). See Figure 1 for an overview. As a running example, we assume a dataset where each image can be described by varying two latent variables, object shape and object color. Let X and Y denote image and label, respectively. We write X_{ij} for the image with color i and shape j, where i, j ∈ {1, ···, L}. When we want to consider K > 2 varying factors, we may write X_{i_1,···,i_K} for the image random variable with the kth factor chosen to be i_k ∈ {1, ···, L}. Importantly, we fix the number of categories for each factor to L to enforce similar conditions for all cues. Similar learning setups have appeared in prior papers: "Cross-bias generalisation" (Bahng et al., 2020), "What if multiple features are predictive?" (Hermann & Lampinen, 2020), and "Zero generalization opportunities" (Eulig et al., 2021). While we fully acknowledge the conceptual similarities, we stress that our work presents the first dedicated study of the cue selection problem and its underlying mechanisms. The same set of images {X_{ij} | 1 ≤ i, j ≤ L} admits two possible tasks: color and shape classification. The task is determined by the labels Y. Denoting Y_{ij} as the label for image X_{ij}, setting Y_{ij} = i leads to the color classification task, and setting Y_{ij} = j leads to the shape classification task. We may then build the data distribution for the task at hand via D_color := ⋃_{1≤i,j≤L} (X_{ij}, Y_{ij} = i) and D_shape := ⋃_{1≤i,j≤L} (X_{ij}, Y_{ij} = j) (1) for the color and shape recognition tasks, respectively. More generally, we may write D_k := ⋃_{1≤i_1,···,i_K≤L} (X_{i_1,···,i_K}, Y_{i_1,···,i_K} = i_k) (2) for the data distribution where the task is to recognize the kth cue.
We define the union of random variables as the balanced mixture: ⋃_{i=1}^{L} Z_i := Z_I where I ∼ Unif{1, ···, L}. We now introduce the notion of a diagonal dataset, where every cue (e.g. color and shape) contains all the information needed to predict the true label Y. That is, a perfect prediction of either the color or the shape attribute leads to 100% accuracy on the task at hand. This is achieved by letting the factors always vary together, i = j, in the dataset (hence the name). We write D_diag := ⋃_{1≤i≤L} (X_{ii}, Y_{ii} = i). (3) Such a dataset leaves it entirely to the model to choose the cue for recognition. Given a model f trained on D_diag, we analyse the recognition cue adopted by f by measuring its unbiased accuracy on all the cells (see Figure 1). There are K different unbiased accuracies for each task, depending on how the off-diagonal cells are labelled: e.g. D_color and D_shape in equation 1. For a general setting with K cues, the unbiased accuracy for the kth cue is defined as acc_k(f) := (1/L^K) Σ_{i_1,···,i_K} Pr[f(X_{i_1,···,i_K}) = i_k]. (4) Proposition 1. For k ∈ {1, ···, K}, acc_k(f) = 1 if and only if f(X_{i_1,···,i_K}) = i_k almost surely for all 1 ≤ i_1, ···, i_K ≤ L. Moreover, if the condition above holds (i.e. f is perfectly biased to cue k), then acc_m(f) = 1/L for all m ≠ k. The proposition implies that the unbiased accuracy is capable of detecting the bias in a model f: acc_k(f) = 1 implies that f's prediction is based solely on cue k. It also shows that it is impossible for a model to be perfectly biased to multiple cues. Finally, we remark that the WCST-ML analysis does not require the cues to be orthogonal or interpretable to humans. The only requirement is the availability of the labelled samples (X_{i_1,···,i_K}, Y_{i_1,···,i_K}) for the cue of interest. | The paper conducts a study of which visual cues are preferred by current vision models.
The paper designs a training setup with several cues where each cue is equally correlated with the image label. The paper shows that visual cues like color are much easier for a vision model to learn than other cues such as orientation and shape. The paper also provides evidence that easy-to-learn cues tend to converge to relatively flat minima and that models preferring these cues are more abundant in parameter space. | SP:1dfda46dbe3bbe38868402568e33df37e4fcf91d |
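Equation 4 and Proposition 1 can be checked numerically on a toy deterministic model. This is an illustrative sketch; the values of L and K and the helper `unbiased_acc` are chosen arbitrarily, not taken from the paper:

```python
import itertools

L, K = 4, 3  # 4 categories per cue, 3 cues

def unbiased_acc(f, k):
    """Eq. 4 for a deterministic model: the fraction of all L^K cells
    on which f outputs the k-th factor value."""
    cells = list(itertools.product(range(L), repeat=K))
    return sum(f(x) == x[k] for x in cells) / len(cells)

# A model perfectly biased towards cue 0.
f = lambda x: x[0]

acc0 = unbiased_acc(f, 0)  # 1.0: fully biased to cue 0
acc1 = unbiased_acc(f, 1)  # 1/L: chance level on every other cue
acc2 = unbiased_acc(f, 2)
```

The second claim of Proposition 1 falls out directly: if f always outputs the 0th factor, it matches the mth factor only on the cells where the two factors coincide, which is exactly a 1/L fraction of the grid.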
Disentangling Generalization in Reinforcement Learning | Generalization in Reinforcement Learning ( RL ) is usually measured according to concepts from supervised learning . Unlike a supervised learning model however , an RL agent must generalize across states , actions and observations from limited reward-based feedback . We propose to measure an RL agent ’ s capacity to generalize by evaluating it in a contextual decision process that combines a tabular environment with observations from a supervised learning dataset . The resulting environment , while simple , necessitates function approximation for state abstraction and provides ground-truth labels for optimal policies and value functions . The ground truth labels provided by our environment enable us to characterize generalization in RL across different axes : state-space , observation-space and action-space . Putting this method to work , we combine the MNIST dataset with various gridworld environments to rigorously evaluate generalization of DQN and QR-DQN in state , observation and action spaces for both online and offline learning . Contrary to previous reports about common regularization methods , we find that dropout does not improve observation generalization . We find , however , that dropout improves action generalization . Our results also corroborate recent findings that QR-DQN is able to generalize to new observations better than DQN in the offline setting . This success does not extend to state generalization , where DQN is able to generalize better than QR-DQN . These findings demonstrate the need for careful consideration of generalization in RL , and we hope that this line of research will continue to shed light on generalization claims in the literature . 1 Generalization in Reinforcement Learning . A Reinforcement Learning ( RL ) agent perpetually finds itself in novel states of an environment . To act intelligently , the agent must generalize its previous experience to new situations . 
Function approximation helps distill this previous experience into the agent ’ s learnable parameters , which allows previous experience to be leveraged with new state inputs ( Boyan and Moore , 1994 ; Sutton , 1995 ) . While there is a growing literature of new methods for improving generalization of deep RL algorithms , principled and quantitative methods for evaluating generalization remain lacking . This is due in part to the complexity of the MDP problem formulation and the difficulty of disentangling generalization from the performance of RL algorithms in terms of achieving higher expected cumulative reward , i.e . return . One common notion of generalization in RL evaluates an agent ’ s capabilities by checking if it can achieve similar performance in an environment that is similar to , but not exactly the same as , the environment in which it was trained ( Whiteson et al. , 2011 ) . This can be accomplished through randomization , i.e . by randomizing the parameters underlying the environment , such as wind velocity in helicopter hovering or the mass of objects in a simulator , thereby changing the transition dynamics ( Peng et al. , 2017 ) . Generalization in this sense draws parallels to supervised learning , where classifiers are often trained on a fixed dataset , and evaluated on a separate testing dataset under the I.I.D . assumption . The agent is said to generalize well if the difference between training error and testing error is small . While these supervised learning concepts are relevant to RL , there are two problems with taking a cross-environment approach to evaluating RL generalization . First , this paradigm does not disentangle the various aspects of generalization that are required for an RL agent to succeed in its task , whether it be the value estimates or the policy , and whether they are robust to variations in states , observations or actions . 
This is the question of "what" performance criterion should be measured when we discuss generalization. In RL, we have function approximators for many quantities: state-transition, reward, state-value, action-value and policy. While generalization of the state-value is similar to regression, generalization of quantities related to action, such as the policy or action-value, has no supervised learning analogue. This is because policies and action-values have as many outputs as actions, and only the action taken by the agent is updated, which necessitates separate consideration. Second, the paradigm follows practice in supervised learning by imposing a strict separation of test and train environments. This practice effectively focuses on transfer performance across environments but fails to evaluate the agent's ability to generalize within a single environment. That is, it fails to answer the question of "how" to measure generalization under a performance criterion. Unlike supervised learning, the agent's state distribution changes during learning because the policy changes. Even where we have randomly generated training and testing sets of environments, the environments' complex dynamics do not admit ground-truth labels to determine optimal actions or values for comparison and evaluation. These environments only allow an agent's generalization capabilities to be measured in terms of Monte-Carlo rollouts of the learned policy, leaving us restricted to the single performance criterion of return and preventing us from specifically measuring important nuanced differences in the agent's ability to generalize.
In addition, because the upper bound on return is unknown to the researcher, informed judgments about the quality of the policy are very difficult to make. Within the RL-oriented literature on generalization, there are two distinct categories of research. The first proposes environments and methodologies for measuring RL generalization. Examples of this approach are ProcGen/CoinRun (Cobbe et al., 2020; 2019), randomized-reward CartPole (Zhang et al., 2018a), the GridWorld maze (Zhang et al., 2018b), observation projection (Song et al., 2020) and the hierarchy of state generalization (Witty et al., 2018). These efforts are aimed at the second issue raised above, i.e. "how" generalization should be measured in RL. Underlying these approaches is specifying how to split the environment into testing and training scenarios. Early work by Zhang et al. (2018a) proposes using separate seeds. This, however, does not ensure that the states encountered by the agent are truly separate. Other works, such as those by Cobbe et al. (2020; 2019) and Elsayed et al. (2020), procedurally generate separate environments for testing and training. When generalization is measured on truly separate testing and training environments, we are able to determine whether an agent's policy is generalizing from one environment to another. Again, this formulation does not allow us to study generalization within a single environment, nor does it allow us to measure generalization of the value functions. The second category proposes or investigates RL methods that improve generalization. These include regularization experiments in Atari (Farebrother et al., 2018) and continuous control (Liu et al., 2021), contrastive similarity embeddings (Agarwal et al., 2021) and bisimulation metrics (Zhang et al., 2021). There is also the hypothesis that better, and hence more generalizable, representations arise from auxiliary tasks (Jaderberg et al., 2017), which is also suspected to be the reason for the success of distributional RL (Dabney et al., 2018; Bellemare et al., 2017) and has recently been investigated in the offline setting (Agarwal et al., 2020). To understand generalization in RL, we use the Contextual Decision Process (CDP) framework, which is a problem class for theoretical analysis of RL algorithms that use function approximation (Du et al., 2019; Jiang et al., 2017; Dann et al., 2018). The CDP problem formulation renders the states unobservable, but allows the agent to view observations that contain enough information to recover the state. This formulation enables the study of function approximation as applied to RL and has significant implications for RL in general. For example, it helps Du et al. (2019) show that there exist algorithms with exponentially more efficient exploration than Q-learning, a result suggesting that RL algorithms should be designed to take advantage of function approximation and its generalization abilities, rather than naively extending tabular algorithms with function approximation.
However , no existing work has leveraged CDPs empirically , despite the fact that the framework connects supervised learning with RL . The 2D MNIST maze environment used by Lee et al . ( 2019 ) may be considered a simple CDP , where the observations are deterministic and equivalent to state , but is not recognized as such . The work by Song et al . ( 2020 ) proposed projecting the state of a simple control environment and varying this projection between training and testing sets . These two works have similar goals to ours . However , the first work does not investigate generalization and neither makes use of the ground-truth values to probe generalization rigorously , such as in the label-corruption experiment by Zhang et al . ( 2017 ) that we will extend to RL . Our work is uniquely analogous to generalization in supervised learning , simultaneously answering “ how ” generalization should be measured in RL and “ what ” should be measured . In answering these two questions , we disentangle generalization across three axes : states , observations and actions .
Finally , this disentangled perspective shows how different generalization mechanisms benefit the different axes of generalization . | This paper discusses generalization in deep RL . The key contribution of the paper , from my understanding , is that the authors argue that , unlike generalization in SL , in RL the state , observation and action should be considered separately . A measurement ( Eq . 4 ) is proposed to evaluate the generalization capacity of deep RL within the contextual decision process ( CDP ) scheme . Experiments were performed on gridworld environments with MNIST images as observations and several conclusions were drawn . | SP:c00f6a4198816665d335df1c8210dc612fa6443f
Disentangling Generalization in Reinforcement Learning | Generalization in Reinforcement Learning ( RL ) is usually measured according to concepts from supervised learning . Unlike a supervised learning model however , an RL agent must generalize across states , actions and observations from limited reward-based feedback . We propose to measure an RL agent ’ s capacity to generalize by evaluating it in a contextual decision process that combines a tabular environment with observations from a supervised learning dataset . The resulting environment , while simple , necessitates function approximation for state abstraction and provides ground-truth labels for optimal policies and value functions . The ground truth labels provided by our environment enable us to characterize generalization in RL across different axes : state-space , observation-space and action-space . Putting this method to work , we combine the MNIST dataset with various gridworld environments to rigorously evaluate generalization of DQN and QR-DQN in state , observation and action spaces for both online and offline learning . Contrary to previous reports about common regularization methods , we find that dropout does not improve observation generalization . We find , however , that dropout improves action generalization . Our results also corroborate recent findings that QR-DQN is able to generalize to new observations better than DQN in the offline setting . This success does not extend to state generalization , where DQN is able to generalize better than QR-DQN . These findings demonstrate the need for careful consideration of generalization in RL , and we hope that this line of research will continue to shed light on generalization claims in the literature . 1 Generalization in Reinforcement Learning . A Reinforcement Learning ( RL ) agent perpetually finds itself in novel states of an environment . To act intelligently , the agent must generalize its previous experience to new situations . 
Function approximation helps distill this previous experience into the agent ’ s learnable parameters , which allows previous experience to be leveraged with new state inputs ( Boyan and Moore , 1994 ; Sutton , 1995 ) . While there is a growing literature of new methods for improving generalization of deep RL algorithms , principled and quantitative methods for evaluating generalization remain lacking . This is due in part to the complexity of the MDP problem formulation and the difficulty of disentangling generalization from the performance of RL algorithms in terms of achieving higher expected cumulative reward , i.e . return . One common notion of generalization in RL evaluates an agent ’ s capabilities by checking if it can achieve similar performance in an environment that is similar to , but not exactly the same as , the environment in which it was trained ( Whiteson et al. , 2011 ) . This can be accomplished through randomization , i.e . by randomizing the parameters underlying the environment , such as wind velocity in helicopter hovering or the mass of objects in a simulator , thereby changing the transition dynamics ( Peng et al. , 2017 ) . Generalization in this sense draws parallels to supervised learning , where classifiers are often trained on a fixed dataset , and evaluated on a separate testing dataset under the I.I.D . assumption . The agent is said to generalize well if the difference between training error and testing error is small . While these supervised learning concepts are relevant to RL , there are two problems with taking a cross-environment approach to evaluating RL generalization . First , this paradigm does not disentangle the various aspects of generalization that are required for an RL agent to succeed in its task , whether it be the value estimates or the policy , and whether they are robust to variations in states , observations or actions . 
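The supervised-learning notion of generalization invoked here , a small gap between training error and testing error under the I.I.D . assumption , can be made concrete with a toy experiment : fit a simple classifier on one sample and evaluate it on a fresh sample from the same distribution . The nearest-centroid model and the synthetic Gaussian data below are illustrative assumptions , chosen only to keep the sketch self-contained .

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # One centroid per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = np.array(sorted(centroids))
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return labels[np.argmin(dists, axis=0)]

rng = np.random.default_rng(0)
# Two well-separated Gaussian classes (hypothetical data).
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_test = y_train.copy()

c = nearest_centroid_fit(X_train, y_train)
train_acc = (nearest_centroid_predict(c, X_train) == y_train).mean()
test_acc = (nearest_centroid_predict(c, X_test) == y_test).mean()
gap = train_acc - test_acc  # small gap -> good generalization
```

In RL the same gap is harder to define , since the "test set" an agent faces is induced by its own policy rather than drawn I.I.D . from a fixed distribution .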
This is the question of “ what ” performance criterion should be measured when we discuss generalization . In RL , we have function approximators for many quantities : state-transition , reward , state-value , action-value and policy . While generalization of state-value is similar to regression , generalization with quantities related to action , such as policy or action-value , does not have supervised learning analogues . This is because policies and action-values have as many outputs as actions , and only the action taken by the agent is updated , which necessitates separate consideration . Second , the paradigm follows practice in supervised learning to impose a strict separation of test and train environments . This practice effectively focuses on transfer performance across environments but fails to evaluate the agent ’ s ability to generalize within a single environment . That is , it fails to answer the question of “ how ” to measure generalization under a performance criterion . Unlike supervised learning , the agent ’ s state distribution changes during learning because the policy changes . Even where we have randomly generated training and testing sets of environments , the environments ’ complex dynamics do not admit ground-truth labels to determine optimal actions or values for comparison and evaluation . These environments only allow an agent ’ s generalization capabilities to be measured in terms of Monte-Carlo rollouts of the learned policy , leaving us restricted to the single performance criterion of return and preventing us from specifically measuring important nuanced differences in the agent ’ s ability to generalize .
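The point that only the taken action receives a learning signal can be made concrete with a single tabular Q-learning step : the value table has one entry per action at each state , but a transition updates only the entry for the action actually executed . The state, action, and reward values below are illustrative numbers , not taken from the paper .

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step: only Q[s, a], the action actually taken,
    is updated; the other action-values at state s are untouched."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((3, 2))  # 3 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
# Q[0, 1] moves toward the target; Q[0, 0] stays at its initial value.
```

This asymmetry is why action-related quantities need separate treatment : unlike a regression target , most of the function approximator's outputs at a visited state receive no supervision at all .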
In addition , because the upper bound on return is unknown to the researcher , informed judgments about the quality of the policy are very difficult to make . Within the RL-oriented literature on generalization , there are two distinct categories of research . The first proposes environments and methodologies for measuring RL generalization . Examples of this approach are ProcGen/CoinRun ( Cobbe et al. , 2020 ; 2019 ) , randomized-reward CartPole ( Zhang et al. , 2018a ) , the GridWorld maze ( Zhang et al. , 2018b ) , observation projection ( Song et al. , 2020 ) and the hierarchy of state generalization ( Witty et al. , 2018 ) . These efforts are aimed at the second issue raised above , i.e . “ how ” generalization should be measured in RL . Underlying these approaches is specifying how to split the environment into testing and training scenarios . Early work by Zhang et al . ( 2018a ) proposes using separate seeds . This , however , does not ensure that the states encountered by the agent are truly separate . Other works , such as those by Cobbe et al . ( 2020 ; 2019 ) and Elsayed et al . ( 2020 ) , procedurally generate separate environments for testing and training . When generalization is measured on truly separate testing and training environments , we are able to determine whether an agent ’ s policy is generalizing from one environment to another . Again , this formulation does not allow us to study generalization within a single environment , nor does it allow us to measure generalization of the value functions . The second category proposes or investigates RL methods that improve generalization . These include regularization experiments in Atari ( Farebrother et al. , 2018 ) and continuous control ( Liu et al. , 2021 ) , contrastive similarity embeddings ( Agarwal et al. , 2021 ) and bisimulation metrics ( Zhang et al. , 2021 ) . There is also the hypothesis that better , and hence more generalizable , representations arise from auxiliary tasks ( Jaderberg et al. , 2017 ) , which is also suspected to be the reason for the success of distributional RL ( Dabney et al. , 2018 ; Bellemare et al. , 2017 ) and has recently been investigated in the offline setting ( Agarwal et al. , 2020 ) . To understand generalization in RL , we use the Contextual Decision Process ( CDP ) framework , which is a problem class for theoretical analysis of RL algorithms that use function approximation ( Du et al. , 2019 ; Jiang et al. , 2017 ; Dann et al. , 2018 ) . The CDP problem formulation renders the states unobservable , but allows the agent to view observations that contain enough information to recover the state . This formulation enables the study of function approximation as applied to RL and has significant implications for RL in general . For example , Du et al . ( 2019 ) show that there exist algorithms with exponentially more efficient exploration than Q-learning , a result suggesting that RL algorithms should be designed to take advantage of function approximation and its generalization abilities , rather than naively extending tabular algorithms with function approximation .
However , no existing work has leveraged CDPs empirically , despite the fact that the framework connects supervised learning with RL . The 2D MNIST maze environment used by Lee et al . ( 2019 ) may be considered a simple CDP , where the observations are deterministic and equivalent to state , but is not recognized as such . The work by Song et al . ( 2020 ) proposed projecting the state of a simple control environment and varying this projection between training and testing sets . These two works have similar goals to ours . However , the first work does not investigate generalization and neither makes use of the ground-truth values to probe generalization rigorously , such as in the label-corruption experiment by Zhang et al . ( 2017 ) that we will extend to RL . Our work is uniquely analogous to generalization in supervised learning , simultaneously answering “ how ” generalization should be measured in RL and “ what ” should be measured . In answering these two questions , we disentangle generalization across three axes : states , observations and actions .
Finally , this disentangled perspective shows how different generalization mechanisms benefit the different axes of generalization . | This paper proposes an empirical evaluation method to measure the generalization capacity of an RL agent . It relies on CDPs combining a tabular environment with a supervised learning dataset . Generalization is measured across three axes : state space , observation space and action space . The empirical evaluation is conducted on DQN and QR-DQN on the four-room domain and corridor domain combined with the MNIST dataset in the online and offline settings . The authors find that dropout improves action generalization but not observation generalization , while regularisation improves observation generalization . They also find that QR-DQN generalises better than DQN in the offline setting on the observation axis and action axis but not on the state space axis . | SP:c00f6a4198816665d335df1c8210dc612fa6443f
Disentangling Generalization in Reinforcement Learning | Generalization in Reinforcement Learning ( RL ) is usually measured according to concepts from supervised learning . Unlike a supervised learning model however , an RL agent must generalize across states , actions and observations from limited reward-based feedback . We propose to measure an RL agent ’ s capacity to generalize by evaluating it in a contextual decision process that combines a tabular environment with observations from a supervised learning dataset . The resulting environment , while simple , necessitates function approximation for state abstraction and provides ground-truth labels for optimal policies and value functions . The ground truth labels provided by our environment enable us to characterize generalization in RL across different axes : state-space , observation-space and action-space . Putting this method to work , we combine the MNIST dataset with various gridworld environments to rigorously evaluate generalization of DQN and QR-DQN in state , observation and action spaces for both online and offline learning . Contrary to previous reports about common regularization methods , we find that dropout does not improve observation generalization . We find , however , that dropout improves action generalization . Our results also corroborate recent findings that QR-DQN is able to generalize to new observations better than DQN in the offline setting . This success does not extend to state generalization , where DQN is able to generalize better than QR-DQN . These findings demonstrate the need for careful consideration of generalization in RL , and we hope that this line of research will continue to shed light on generalization claims in the literature . 1 Generalization in Reinforcement Learning . A Reinforcement Learning ( RL ) agent perpetually finds itself in novel states of an environment . To act intelligently , the agent must generalize its previous experience to new situations . 
Function approximation helps distill this previous experience into the agent ’ s learnable parameters , which allows previous experience to be leveraged with new state inputs ( Boyan and Moore , 1994 ; Sutton , 1995 ) . While there is a growing literature of new methods for improving generalization of deep RL algorithms , principled and quantitative methods for evaluating generalization remain lacking . This is due in part to the complexity of the MDP problem formulation and the difficulty of disentangling generalization from the performance of RL algorithms in terms of achieving higher expected cumulative reward , i.e . return . One common notion of generalization in RL evaluates an agent ’ s capabilities by checking if it can achieve similar performance in an environment that is similar to , but not exactly the same as , the environment in which it was trained ( Whiteson et al. , 2011 ) . This can be accomplished through randomization , i.e . by randomizing the parameters underlying the environment , such as wind velocity in helicopter hovering or the mass of objects in a simulator , thereby changing the transition dynamics ( Peng et al. , 2017 ) . Generalization in this sense draws parallels to supervised learning , where classifiers are often trained on a fixed dataset , and evaluated on a separate testing dataset under the I.I.D . assumption . The agent is said to generalize well if the difference between training error and testing error is small . While these supervised learning concepts are relevant to RL , there are two problems with taking a cross-environment approach to evaluating RL generalization . First , this paradigm does not disentangle the various aspects of generalization that are required for an RL agent to succeed in its task , whether it be the value estimates or the policy , and whether they are robust to variations in states , observations or actions . 
This is the question of “ what ” performance criterion should be measured when we discuss generalization . In RL , we have function approximators for many quantities : state-transition , reward , state-value , action-value and policy . While generalization of state-value is similar to regression , generalization with quantities related to action , such as policy or action-value , does not have supervised learning analogues . This is because policies and action-values have as many outputs as actions , and only the action taken by the agent is updated , which necessitates separate consideration . Second , the paradigm follows practice in supervised learning to impose a strict separation of test and train environments . This practice effectively focuses on transfer performance across environments but fails to evaluate the agent ’ s ability to generalize within a single environment . That is , it fails to answer the question of “ how ” to measure generalization under a performance criterion . Unlike supervised learning , the agent ’ s state distribution changes during learning because the policy changes . Even where we have randomly generated training and testing sets of environments , the environments ’ complex dynamics do not admit ground-truth labels to determine optimal actions or values for comparison and evaluation . These environments only allow an agent ’ s generalization capabilities to be measured in terms of Monte-Carlo rollouts of the learned policy , leaving us restricted to the single performance criterion of return and preventing us from specifically measuring important nuanced differences in the agent ’ s ability to generalize .
In addition , because the upper bound on return is unknown to the researcher , informed judgments about the quality of the policy are very difficult to make . Within the RL-oriented literature on generalization , there are two distinct categories of research . The first proposes environments and methodologies for measuring RL generalization . Examples of this approach are ProcGen/CoinRun ( Cobbe et al. , 2020 ; 2019 ) , randomized-reward CartPole ( Zhang et al. , 2018a ) , the GridWorld maze ( Zhang et al. , 2018b ) , observation projection ( Song et al. , 2020 ) and the hierarchy of state generalization ( Witty et al. , 2018 ) . These efforts are aimed at the second issue raised above , i.e . “ how ” generalization should be measured in RL . Underlying these approaches is specifying how to split the environment into testing and training scenarios . Early work by Zhang et al . ( 2018a ) proposes using separate seeds . This , however , does not ensure that the states encountered by the agent are truly separate . Other works , such as those by Cobbe et al . ( 2020 ; 2019 ) and Elsayed et al . ( 2020 ) , procedurally generate separate environments for testing and training . When generalization is measured on truly separate testing and training environments , we are able to determine whether an agent ’ s policy is generalizing from one environment to another . Again , this formulation does not allow us to study generalization within a single environment , nor does it allow us to measure generalization of the value functions . The second category proposes or investigates RL methods that improve generalization . These include regularization experiments in Atari ( Farebrother et al. , 2018 ) and continuous control ( Liu et al. , 2021 ) , contrastive similarity embeddings ( Agarwal et al. , 2021 ) and bisimulation metrics ( Zhang et al. , 2021 ) . There is also the hypothesis that better , and hence more generalizable , representations arise from auxiliary tasks ( Jaderberg et al. , 2017 ) , which is also suspected to be the reason for the success of distributional RL ( Dabney et al. , 2018 ; Bellemare et al. , 2017 ) and has recently been investigated in the offline setting ( Agarwal et al. , 2020 ) . To understand generalization in RL , we use the Contextual Decision Process ( CDP ) framework , which is a problem class for theoretical analysis of RL algorithms that use function approximation ( Du et al. , 2019 ; Jiang et al. , 2017 ; Dann et al. , 2018 ) . The CDP problem formulation renders the states unobservable , but allows the agent to view observations that contain enough information to recover the state . This formulation enables the study of function approximation as applied to RL and has significant implications for RL in general . For example , Du et al . ( 2019 ) show that there exist algorithms with exponentially more efficient exploration than Q-learning , a result suggesting that RL algorithms should be designed to take advantage of function approximation and its generalization abilities , rather than naively extending tabular algorithms with function approximation .
However , no existing work has leveraged CDPs empirically , despite the fact that the framework connects supervised learning with RL . The 2D MNIST maze environment used by Lee et al . ( 2019 ) may be considered a simple CDP , where the observations are deterministic and equivalent to state , but is not recognized as such . The work by Song et al . ( 2020 ) proposed projecting the state of a simple control environment and varying this projection between training and testing sets . These two works have similar goals to ours . However , the first work does not investigate generalization and neither makes use of the ground-truth values to probe generalization rigorously , such as in the label-corruption experiment by Zhang et al . ( 2017 ) that we will extend to RL . Our work is uniquely analogous to generalization in supervised learning , simultaneously answering “ how ” generalization should be measured in RL and “ what ” should be measured . In answering these two questions , we disentangle generalization across three axes : states , observations and actions .
Finally , this disentangled perspective shows how different generalization mechanisms benefit the different axes of generalization . | This paper proposes an approach to quantify generalization properties ( state generalization , observation generalization , action generalization ) in single-task RL in the context of offline RL . The paper discusses the limitations of several existing approaches for measuring cross-environment generalization , then presents their generic measure of generalization ( Equation 4 ) , and evaluates generalization when learning from offline data using DQN and QR-DQN in a contextual decision process ( CDP ) problem created out of MNIST classification . The results suggest that dropout is effective for action generalization but not state generalization , that an L2 penalty is effective , and that QR-DQN can generalize better than DQN in the offline setting , but worse in terms of state generalization . | SP:c00f6a4198816665d335df1c8210dc612fa6443f
Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators | Vision transformers have delivered tremendous success in representation learning . This is primarily due to effective token mixing through self-attention . However , this scales quadratically with the number of pixels , which becomes infeasible for high-resolution inputs . To cope with this challenge , we propose the Adaptive Fourier Neural Operator ( AFNO ) as an efficient token mixer that learns to mix in the Fourier domain . AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution . This principle was previously used to design FNO , which solves global convolution efficiently in the Fourier domain and has shown promise in learning challenging PDEs . To handle challenges in visual representation learning such as discontinuities in images and high resolution inputs , we propose principled architectural modifications to FNO which result in memory and computational efficiency . This includes imposing a block-diagonal structure on the channel mixing weights , adaptively sharing weights across tokens , and sparsifying the frequency modes via soft-thresholding and shrinkage . The resulting model is highly parallel with a quasi-linear complexity and has linear memory in the sequence size . AFNO outperforms self-attention mechanisms for few-shot segmentation in terms of both efficiency and accuracy . For Cityscapes segmentation with the Segformer-B3 backbone , AFNO can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms . 1 INTRODUCTION . Vision transformers have recently shown promise in producing rich contextual representations for recognition and generation tasks . However , a major challenge is posed by long sequences from high resolution images and videos .
Here , long-range and multiway dependencies are crucial to understand the compositionality and relationships among the objects in a scene . A key component for the effectiveness of transformers is attributed to proper mixing of tokens . Finding a good mixer is however challenging as it needs to scale with the sequence size , and systematically generalize to downstream tasks . Recently , there has been extensive research to find good token mixers ; see e.g. , Tay et al . ( 2020b ) and references therein . The original self-attention imposes graph structures , and uses the similarity among the tokens to capture the long-range dependencies Vaswani et al . ( 2017 ) ; Dosovitskiy et al . ( 2020 ) . It is parameter efficient and adaptive , but suffers from a quadratic complexity in the sequence size . To achieve efficient mixing with linear complexity , several approximations have been introduced for self-attention ; see Section 2 . These approximations typically compromise accuracy for the sake of efficiency . For instance , long-short ( LS ) transformer aggregates a long-range attention with dynamic projection to model distant correlations and a short-term attention to capture local correlations Zhu et al . ( 2021 ) . Long range dependencies are modeled in low dimensions , which can limit expressiveness . More recently , alternatives have been introduced for self-attention that relax the graph assumption for efficient mixing . Instead , they leverage the geometric structures using Fourier transform Rao et al . ( 2021 ) ; Lee-Thorp et al . ( 2021 ) . For instance , the Global Filter Networks ( GFN ) proposes depthwise global convolution for token mixing that enjoys an efficient implementation in the Fourier domain Rao et al . ( 2021 ) . GFN mainly involves three steps : ( i ) spatial token mixing via fast Fourier transform ( FFT ) ; ( ii ) frequency gating ; and ( iii ) inverse FFT for token demixing . 
GFN, however, lacks adaptivity and expressiveness at high resolutions, since its parameter count grows with the sequence size and no channel mixing is involved in step (ii). Our Approach. To address these shortcomings, we frame token mixing as operator learning, which learns mappings between continuous functions in infinite-dimensional spaces. We treat tokens as continuous elements in the function space and model token mixing as a continuous global convolution, which captures global relationships in the geometric space. One way to solve the global convolution efficiently is through the FFT. More generally, we compose such global convolution operations with nonlinearities such as ReLU to learn general non-linear operators. This forms the basis for designing Fourier Neural Operators (FNOs), which have shown promise in solving PDEs Li et al. (2020a). We thus adopt FNO as a starting point for designing efficient token mixing. Designing AFNO. Adapting FNO from PDEs to vision requires several design modifications. Images have high-resolution content with discontinuities due to edges and other structures. The channel mixing in standard FNO incurs a quadratic complexity in the channel size. To control this complexity, we impose a block-diagonal structure on the channel-mixing weights. Also, to enhance generalization, inspired by sparse regression, we sparsify the frequencies via soft-thresholding Tibshirani (1996). Also, for parameter efficiency, our MLP layer shares weights across tokens (see Table 1). We term the resulting model the adaptive FNO (AFNO). We perform extensive experiments pretraining vision transformers for upstream classification and inpainting, which are then finetuned for downstream segmentation.
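The three design ingredients described above can be sketched end-to-end. The following is a minimal NumPy sketch under our reading of the text, not the authors' implementation: the function names, the split-ReLU nonlinearity, and the per-block weight shapes are our assumptions, but it exercises FFT token mixing, block-diagonal channel mixing shared across all tokens, and soft-thresholding of the frequency modes.

```python
import numpy as np

def soft_threshold(z, lam):
    """Complex soft-thresholding (shrinkage): shrink magnitudes toward zero."""
    mag = np.abs(z)
    scale = np.maximum(mag - lam, 0.0) / (mag + 1e-12)
    return z * scale

def afno_mixer(x, w1, w2, lam=0.01):
    """Hypothetical AFNO-style token mixer.

    x      : (h, w, d) real token tensor
    w1, w2 : (k, d//k, d//k) block-diagonal channel-mixing weights
             (k blocks, shared across all frequency modes / tokens)
    """
    h, w, d = x.shape
    k, b, _ = w1.shape                          # k blocks of size b = d // k
    z = np.fft.fft2(x, axes=(0, 1))             # (i) spatial token mixing via 2D FFT
    z = z.reshape(h, w, k, b)
    z = np.einsum('hwkb,kbc->hwkc', z, w1)      # (ii) block-diagonal channel MLP
    z = np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)  # split-ReLU (assumed)
    z = np.einsum('hwkb,kbc->hwkc', z, w2)
    z = soft_threshold(z.reshape(h, w, d), lam)  # sparsify frequency modes
    return np.fft.ifft2(z, axes=(0, 1)).real     # (iii) inverse FFT token demixing
```

Note that the weights are indexed only by the block, not by the frequency mode, so the parameter count is independent of the sequence size, unlike GFN's per-mode filters.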
Compared with the state-of-the-art, our AFNO using the ViT-B backbone outperforms existing GFN, LS, and self-attention for few-shot segmentation in terms of both efficiency and accuracy; e.g., compared with self-attention, AFNO achieves slightly better accuracy while being 30% more efficient. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO achieves state-of-the-art results and beats previous methods; e.g., AFNO achieves more than 2% better mIoU compared with efficient self-attention Xie et al. (2021), and is also competitive with GFN and LS. Key Contributions. Our main contributions are summarized as follows: • We establish a link between operator learning and high-resolution token mixing, and adapt FNO from PDEs as an efficient mixer with quasi-linear complexity in the sequence length. • We design AFNO in a principled way to improve its expressiveness and generalization by imposing block-diagonal structure, adaptive weight-sharing, and sparsity. • We conduct experiments for pretraining and finetuning. AFNO outperforms existing mixers for few-shot segmentation. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO (sequence: 65k) achieves state-of-the-art results, e.g., with a 2% gain over efficient self-attention. 2 RELATED WORKS. Our work is at the intersection of operator learning and efficient transformers. Since the inception of transformers, there have been several works to improve the efficiency of self-attention. We divide them into three lines of work based on their structural constraints. Graph-Based Mixers primarily focus on finding efficient surrogates to approximate self-attention. These include: (i) sparse attentions that promote predefined sparse patterns; see e.g. the sparse transformer Child et al. (2019), image transformer Parmar et al. (2018), axial transformer Ho et al. (2019), and Longformer Beltagy et al.
(2020); (ii) low-rank attentions that use linear sketching, such as Linformer Wang et al. (2020), the long-short transformer Zhu et al. (2021), and Nyströmformer Xiong et al. (2021); (iii) kernel methods that approximate attention with an ensemble of kernels, such as Performer Choromanski et al. (2020), the linear transformer Katharopoulos et al. (2020), and random feature attention Peng et al. (2021); and (iv) clustering-based methods such as Reformer Kitaev et al. (2020), the routing transformer Roy et al. (2021), and the Sinkhorn transformer Tay et al. (2020a). These surrogates, however, compromise accuracy for efficiency. MLP-Based Mixers relax the graph similarity constraints of self-attention and spatially mix tokens using MLP projections. The original MLP-Mixer Tolstikhin et al. (2021) achieves similar accuracy to self-attention. It is further accelerated by ResMLP Touvron et al. (2021), which replaces the layer norm with affine transforms. gMLP Liu et al. (2021a) also uses an additional gating to weight tokens before mixing. This class of methods, however, lacks scalability due to the quadratic complexity of the MLP projection and its parameter inefficiency for high-resolution images. Fourier-Based Mixers apply the Fourier transform to spatially mix tokens. FNet Lee-Thorp et al. (2021) resembles the MLP-Mixer, with the token mixer simply being a pre-fixed DFT; no filtering is done to adapt to the data distribution. Global filter networks (GFNs) Rao et al. (2021), however, learn Fourier filters to perform depthwise global convolution, where no channel mixing is involved. Also, GFN filters lack adaptivity, which could negatively impact generalization. In contrast, our proposed AFNO performs global convolution with dynamic filtering and channel mixing, which leads to better expressivity and generalization. Operator Learning deals with mappings from functions to functions and is commonly used for PDEs.
Operator learning can be deployed in computer vision since images are RGB-valued functions on a 2D plane. This continuous generalization allows us to inherit benefits from operators. Recent advances in operator learning include DeepONet Lu et al. (2019), which learns the coefficients and basis of the operators, and neural operators Kovachki et al. (2021), which are parameterized by integral operators. In this work, we adopt Fourier neural operators Li et al. (2020a), which implement global convolution via the FFT and have been very successful for solving nonlinear and chaotic PDEs. 3 PRELIMINARIES AND PROBLEM STATEMENT. Consider a 2D image that is divided into an $h \times w$ grid of small, non-overlapping patches. Each patch is represented as a $d$-dimensional token, and the image can be represented as a token tensor $X \in \mathbb{R}^{h \times w \times d}$. Treating the image as a token sequence, transformers then aim to learn a contextual embedding that transfers well to downstream tasks. To end up with a rich representation, the tokens need to be effectively mixed over the layers. Self-attention is an effective mixing that learns the graph similarity among tokens. It, however, scales quadratically with the sequence size, which impedes training on high-resolution images. Our goal is then to find an alternative mixing strategy that achieves favorable scaling trade-offs in terms of computational complexity, memory, and downstream transfer accuracy. 3.1 KERNEL INTEGRATION. The self-attention mechanism can be written as a kernel integration (Tsai et al., 2019; Cao, 2021; Kovachki et al., 2021). For the input tensor $X$ we denote the $(n, m)$-th token as $x_{n,m} \in \mathbb{R}^d$. For notational convenience, we index the token sequence as $X[s] := X[n_s, m_s]$ for $s \in [hw]$. Define also $N := hw$ as the sequence length. Self-attention mixing is then defined as follows: Definition 1 (Self-Attention).
$\mathrm{Att}: \mathbb{R}^{N \times d} \to \mathbb{R}^{N \times d}$, $\mathrm{Att}(X) := \mathrm{softmax}\left( XW_q (XW_k)^\top / \sqrt{d} \right) XW_v$ (1), where $W_q, W_k, W_v \in \mathbb{R}^{d \times d}$ are the query, key, and value matrices, respectively. Define $K := \mathrm{softmax}(\langle XW_q, XW_k \rangle / \sqrt{d})$ as the $N \times N$ score array, with $\langle \cdot, \cdot \rangle$ being the inner product in $\mathbb{R}^d$. We then treat self-attention as an asymmetric matrix-valued kernel $\kappa: [N] \times [N] \to \mathbb{R}^{d \times d}$ parameterized as $\kappa[s, t] = K[s, t] W_v$, which can be viewed as a kernel summation: $\mathrm{Att}(X)[s] := \sum_{t=1}^{N} \kappa[s, t] X[t]$ for all $s \in [N]$ (2). This kernel summation can be extended to continuous kernel integrals. The input tensor $X$ is no longer a finite-dimensional vector in the Euclidean space $\mathbb{R}^{N \times d}$, but rather a spatial function in the function space $(D, \mathbb{R}^d)$ defined on the domain $D \subset \mathbb{R}^2$, which is the physical space of the images. In this continuum formulation, the neural network becomes an operator that acts on the input functions. This brings us efficient characterizations originating from operator learning. Definition 2 (Kernel Integral). We define the kernel integral operator $\mathcal{K}: (D, \mathbb{R}^d) \to (D, \mathbb{R}^d)$ as $\mathcal{K}(X)(s) = \int_D \kappa(s, t) X(t) \, dt$ for all $s \in D$ (3), with a continuous kernel function $\kappa: D \times D \to \mathbb{R}^{d \times d}$ Li et al. (2020b). For the special case of the Green's kernel $\kappa(s, t) = \kappa(s - t)$, the integral leads to the global convolution defined below. Definition 3 (Global Convolution). Assuming $\kappa(s, t) = \kappa(s - t)$, the kernel operator admits $\mathcal{K}(X)(s) = \int_D \kappa(s - t) X(t) \, dt$ for all $s \in D$ (4). Convolution is a smaller complexity class of operation than integration. The Green's kernel has a beneficial regularization effect, yet it is also expressive enough to capture global interactions. Furthermore, the global convolution can be efficiently implemented by the FFT. | Vision transformers have become an increasingly popular topic in computer vision, inspired by the success of this family of deep neural networks in other fields.
However, high computational cost is a common disadvantage of most transformer models compared with other deep neural networks. From the viewpoint of adaptive network operations in transformers, the authors propose to improve expressiveness and generalization by imposing block-diagonal structure, adaptive weight-sharing, and sparsity. | SP:47f3678073df28aeb8d2a85c56da2846df66bf97 |
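To make the kernel-summation view of self-attention (Eqs. (1)–(2) in the text above) concrete, here is a hedged NumPy sketch; the function names are ours, and the loop form of Eq. (2) is written for clarity rather than speed. The two forms should agree because the matrix-valued kernel is just the attention score times the value matrix.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    """Eq. (1): Att(X) = softmax(X Wq (X Wk)^T / sqrt(d)) X Wv."""
    d = X.shape[1]
    K = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))  # N x N score array
    return K @ X @ Wv

def attention_as_kernel_sum(X, Wq, Wk, Wv):
    """Eq. (2): Att(X)[s] = sum_t kappa[s,t] X[t], with kappa[s,t] = K[s,t] * Wv."""
    N, d = X.shape
    K = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))
    out = np.zeros((N, d))
    for s in range(N):
        for t in range(N):
            out[s] += X[t] @ (K[s, t] * Wv)  # matrix-valued kernel applied to token t
    return out
```

The double loop makes the quadratic cost in the sequence length visible: every output token sums over all N input tokens.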
Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators | Vision transformers have delivered tremendous success in representation learning. This is primarily due to effective token mixing through self-attention. However, this scales quadratically with the number of pixels, which becomes infeasible for high-resolution inputs. To cope with this challenge, we propose the Adaptive Fourier Neural Operator (AFNO) as an efficient token mixer that learns to mix in the Fourier domain. AFNO is based on a principled foundation of operator learning, which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution. This principle was previously used to design FNO, which solves global convolution efficiently in the Fourier domain and has shown promise in learning challenging PDEs. To handle challenges in visual representation learning, such as discontinuities in images and high-resolution inputs, we propose principled architectural modifications to FNO which result in memory and computational efficiency. These include imposing a block-diagonal structure on the channel-mixing weights, adaptively sharing weights across tokens, and sparsifying the frequency modes via soft-thresholding and shrinkage. The resulting model is highly parallel with quasi-linear complexity and linear memory in the sequence size. AFNO outperforms self-attention mechanisms for few-shot segmentation in terms of both efficiency and accuracy. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms. 1 INTRODUCTION. Vision transformers have recently shown promise in producing rich contextual representations for recognition and generation tasks. However, a major challenge is posed by long sequences from high-resolution images and videos.
Here, long-range and multiway dependencies are crucial to understanding the compositionality and relationships among the objects in a scene. A key component of the effectiveness of transformers is proper mixing of tokens. Finding a good mixer is, however, challenging, as it needs to scale with the sequence size and systematically generalize to downstream tasks. Recently, there has been extensive research on finding good token mixers; see e.g. Tay et al. (2020b) and references therein. The original self-attention imposes graph structures and uses the similarity among the tokens to capture long-range dependencies Vaswani et al. (2017); Dosovitskiy et al. (2020). It is parameter-efficient and adaptive, but suffers from quadratic complexity in the sequence size. To achieve efficient mixing with linear complexity, several approximations have been introduced for self-attention; see Section 2. These approximations typically compromise accuracy for the sake of efficiency. For instance, the long-short (LS) transformer aggregates a long-range attention with dynamic projection to model distant correlations and a short-term attention to capture local correlations Zhu et al. (2021). Long-range dependencies are modeled in low dimensions, which can limit expressiveness. More recently, alternatives to self-attention have been introduced that relax the graph assumption for efficient mixing. Instead, they leverage geometric structures using the Fourier transform Rao et al. (2021); Lee-Thorp et al. (2021). For instance, the Global Filter Network (GFN) proposes depthwise global convolution for token mixing that enjoys an efficient implementation in the Fourier domain Rao et al. (2021). GFN mainly involves three steps: (i) spatial token mixing via the fast Fourier transform (FFT); (ii) frequency gating; and (iii) inverse FFT for token demixing.
GFN, however, lacks adaptivity and expressiveness at high resolutions, since its parameter count grows with the sequence size and no channel mixing is involved in step (ii). Our Approach. To address these shortcomings, we frame token mixing as operator learning, which learns mappings between continuous functions in infinite-dimensional spaces. We treat tokens as continuous elements in the function space and model token mixing as a continuous global convolution, which captures global relationships in the geometric space. One way to solve the global convolution efficiently is through the FFT. More generally, we compose such global convolution operations with nonlinearities such as ReLU to learn general non-linear operators. This forms the basis for designing Fourier Neural Operators (FNOs), which have shown promise in solving PDEs Li et al. (2020a). We thus adopt FNO as a starting point for designing efficient token mixing. Designing AFNO. Adapting FNO from PDEs to vision requires several design modifications. Images have high-resolution content with discontinuities due to edges and other structures. The channel mixing in standard FNO incurs a quadratic complexity in the channel size. To control this complexity, we impose a block-diagonal structure on the channel-mixing weights. Also, to enhance generalization, inspired by sparse regression, we sparsify the frequencies via soft-thresholding Tibshirani (1996). Also, for parameter efficiency, our MLP layer shares weights across tokens (see Table 1). We term the resulting model the adaptive FNO (AFNO). We perform extensive experiments pretraining vision transformers for upstream classification and inpainting, which are then finetuned for downstream segmentation.
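The soft-thresholding operator borrowed from sparse regression, mentioned above, has a one-line closed form. A small sketch (naming is ours), showing how it zeroes small coefficients and shrinks large ones toward zero, which is how frequency modes get sparsified:

```python
import numpy as np

def soft_threshold(z, lam):
    # S_lam(z) = sign(z) * max(|z| - lam, 0): entries with |z| <= lam become
    # exactly zero; larger entries are shrunk toward zero by lam.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5) -> [-1.5, 0.0, 0.0, 1.0]
```

This is the proximal operator of the L1 penalty used in the lasso, which is why it induces exact sparsity rather than merely small values.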
Compared with the state-of-the-art, our AFNO using the ViT-B backbone outperforms existing GFN, LS, and self-attention for few-shot segmentation in terms of both efficiency and accuracy; e.g., compared with self-attention, AFNO achieves slightly better accuracy while being 30% more efficient. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO achieves state-of-the-art results and beats previous methods; e.g., AFNO achieves more than 2% better mIoU compared with efficient self-attention Xie et al. (2021), and is also competitive with GFN and LS. Key Contributions. Our main contributions are summarized as follows: • We establish a link between operator learning and high-resolution token mixing, and adapt FNO from PDEs as an efficient mixer with quasi-linear complexity in the sequence length. • We design AFNO in a principled way to improve its expressiveness and generalization by imposing block-diagonal structure, adaptive weight-sharing, and sparsity. • We conduct experiments for pretraining and finetuning. AFNO outperforms existing mixers for few-shot segmentation. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO (sequence: 65k) achieves state-of-the-art results, e.g., with a 2% gain over efficient self-attention. 2 RELATED WORKS. Our work is at the intersection of operator learning and efficient transformers. Since the inception of transformers, there have been several works to improve the efficiency of self-attention. We divide them into three lines of work based on their structural constraints. Graph-Based Mixers primarily focus on finding efficient surrogates to approximate self-attention. These include: (i) sparse attentions that promote predefined sparse patterns; see e.g. the sparse transformer Child et al. (2019), image transformer Parmar et al. (2018), axial transformer Ho et al. (2019), and Longformer Beltagy et al.
(2020); (ii) low-rank attentions that use linear sketching, such as Linformer Wang et al. (2020), the long-short transformer Zhu et al. (2021), and Nyströmformer Xiong et al. (2021); (iii) kernel methods that approximate attention with an ensemble of kernels, such as Performer Choromanski et al. (2020), the linear transformer Katharopoulos et al. (2020), and random feature attention Peng et al. (2021); and (iv) clustering-based methods such as Reformer Kitaev et al. (2020), the routing transformer Roy et al. (2021), and the Sinkhorn transformer Tay et al. (2020a). These surrogates, however, compromise accuracy for efficiency. MLP-Based Mixers relax the graph similarity constraints of self-attention and spatially mix tokens using MLP projections. The original MLP-Mixer Tolstikhin et al. (2021) achieves similar accuracy to self-attention. It is further accelerated by ResMLP Touvron et al. (2021), which replaces the layer norm with affine transforms. gMLP Liu et al. (2021a) also uses an additional gating to weight tokens before mixing. This class of methods, however, lacks scalability due to the quadratic complexity of the MLP projection and its parameter inefficiency for high-resolution images. Fourier-Based Mixers apply the Fourier transform to spatially mix tokens. FNet Lee-Thorp et al. (2021) resembles the MLP-Mixer, with the token mixer simply being a pre-fixed DFT; no filtering is done to adapt to the data distribution. Global filter networks (GFNs) Rao et al. (2021), however, learn Fourier filters to perform depthwise global convolution, where no channel mixing is involved. Also, GFN filters lack adaptivity, which could negatively impact generalization. In contrast, our proposed AFNO performs global convolution with dynamic filtering and channel mixing, which leads to better expressivity and generalization. Operator Learning deals with mappings from functions to functions and is commonly used for PDEs.
Operator learning can be deployed in computer vision since images are RGB-valued functions on a 2D plane. This continuous generalization allows us to inherit benefits from operators. Recent advances in operator learning include DeepONet Lu et al. (2019), which learns the coefficients and basis of the operators, and neural operators Kovachki et al. (2021), which are parameterized by integral operators. In this work, we adopt Fourier neural operators Li et al. (2020a), which implement global convolution via the FFT and have been very successful for solving nonlinear and chaotic PDEs. 3 PRELIMINARIES AND PROBLEM STATEMENT. Consider a 2D image that is divided into an $h \times w$ grid of small, non-overlapping patches. Each patch is represented as a $d$-dimensional token, and the image can be represented as a token tensor $X \in \mathbb{R}^{h \times w \times d}$. Treating the image as a token sequence, transformers then aim to learn a contextual embedding that transfers well to downstream tasks. To end up with a rich representation, the tokens need to be effectively mixed over the layers. Self-attention is an effective mixing that learns the graph similarity among tokens. It, however, scales quadratically with the sequence size, which impedes training on high-resolution images. Our goal is then to find an alternative mixing strategy that achieves favorable scaling trade-offs in terms of computational complexity, memory, and downstream transfer accuracy. 3.1 KERNEL INTEGRATION. The self-attention mechanism can be written as a kernel integration (Tsai et al., 2019; Cao, 2021; Kovachki et al., 2021). For the input tensor $X$ we denote the $(n, m)$-th token as $x_{n,m} \in \mathbb{R}^d$. For notational convenience, we index the token sequence as $X[s] := X[n_s, m_s]$ for $s \in [hw]$. Define also $N := hw$ as the sequence length. Self-attention mixing is then defined as follows: Definition 1 (Self-Attention).
$\mathrm{Att}: \mathbb{R}^{N \times d} \to \mathbb{R}^{N \times d}$, $\mathrm{Att}(X) := \mathrm{softmax}\left( XW_q (XW_k)^\top / \sqrt{d} \right) XW_v$ (1), where $W_q, W_k, W_v \in \mathbb{R}^{d \times d}$ are the query, key, and value matrices, respectively. Define $K := \mathrm{softmax}(\langle XW_q, XW_k \rangle / \sqrt{d})$ as the $N \times N$ score array, with $\langle \cdot, \cdot \rangle$ being the inner product in $\mathbb{R}^d$. We then treat self-attention as an asymmetric matrix-valued kernel $\kappa: [N] \times [N] \to \mathbb{R}^{d \times d}$ parameterized as $\kappa[s, t] = K[s, t] W_v$, which can be viewed as a kernel summation: $\mathrm{Att}(X)[s] := \sum_{t=1}^{N} \kappa[s, t] X[t]$ for all $s \in [N]$ (2). This kernel summation can be extended to continuous kernel integrals. The input tensor $X$ is no longer a finite-dimensional vector in the Euclidean space $\mathbb{R}^{N \times d}$, but rather a spatial function in the function space $(D, \mathbb{R}^d)$ defined on the domain $D \subset \mathbb{R}^2$, which is the physical space of the images. In this continuum formulation, the neural network becomes an operator that acts on the input functions. This brings us efficient characterizations originating from operator learning. Definition 2 (Kernel Integral). We define the kernel integral operator $\mathcal{K}: (D, \mathbb{R}^d) \to (D, \mathbb{R}^d)$ as $\mathcal{K}(X)(s) = \int_D \kappa(s, t) X(t) \, dt$ for all $s \in D$ (3), with a continuous kernel function $\kappa: D \times D \to \mathbb{R}^{d \times d}$ Li et al. (2020b). For the special case of the Green's kernel $\kappa(s, t) = \kappa(s - t)$, the integral leads to the global convolution defined below. Definition 3 (Global Convolution). Assuming $\kappa(s, t) = \kappa(s - t)$, the kernel operator admits $\mathcal{K}(X)(s) = \int_D \kappa(s - t) X(t) \, dt$ for all $s \in D$ (4). Convolution is a smaller complexity class of operation than integration. The Green's kernel has a beneficial regularization effect, yet it is also expressive enough to capture global interactions. Furthermore, the global convolution can be efficiently implemented by the FFT. | The paper proposes a new Adaptive Fourier Neural Operator (AFNO) for mixing tokens in visual transformers.
The idea is based on Fourier neural operators (FNO), which transform the feature flow in Fourier space. The difference w.r.t. existing FNO lies in two modifications: first, the weight matrix is block-diagonal (an analog of multi-head attention), and second, an MLP is used instead of just linear weighting. The experiments show that the proposed method is competitive and often achieves results as good as the original self-attention (with fewer FLOPs). | SP:47f3678073df28aeb8d2a85c56da2846df66bf97 |
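The reason the Fourier route is cheap is the convolution theorem: the global (circular) convolution of Definition 3 / Eq. (4) reduces to a pointwise product of spectra, computable in O(N log N) instead of O(N²). A hedged 1D NumPy sketch (function names are ours) checking the two forms agree:

```python
import numpy as np

def global_conv_fft(x, kernel):
    """Global circular convolution via the convolution theorem:
    K(X)(s) = sum_t kernel(s - t) x(t), computed as ifft(fft(x) * fft(kernel))."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel)).real

def global_conv_direct(x, kernel):
    """The same convolution evaluated directly from the definition, O(N^2)."""
    n = len(x)
    return np.array([
        sum(kernel[(s - t) % n] * x[t] for t in range(n))  # periodic index s - t
        for s in range(n)
    ])
```

The direct double sum is the discrete analogue of the integral in Eq. (4); the FFT version is what makes the mixer quasi-linear in the sequence length.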
Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators | Vision transformers have delivered tremendous success in representation learning. This is primarily due to effective token mixing through self-attention. However, this scales quadratically with the number of pixels, which becomes infeasible for high-resolution inputs. To cope with this challenge, we propose the Adaptive Fourier Neural Operator (AFNO) as an efficient token mixer that learns to mix in the Fourier domain. AFNO is based on a principled foundation of operator learning, which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution. This principle was previously used to design FNO, which solves global convolution efficiently in the Fourier domain and has shown promise in learning challenging PDEs. To handle challenges in visual representation learning, such as discontinuities in images and high-resolution inputs, we propose principled architectural modifications to FNO which result in memory and computational efficiency. These include imposing a block-diagonal structure on the channel-mixing weights, adaptively sharing weights across tokens, and sparsifying the frequency modes via soft-thresholding and shrinkage. The resulting model is highly parallel with quasi-linear complexity and linear memory in the sequence size. AFNO outperforms self-attention mechanisms for few-shot segmentation in terms of both efficiency and accuracy. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms. 1 INTRODUCTION. Vision transformers have recently shown promise in producing rich contextual representations for recognition and generation tasks. However, a major challenge is posed by long sequences from high-resolution images and videos.
Here, long-range and multiway dependencies are crucial to understanding the compositionality and relationships among the objects in a scene. A key component of the effectiveness of transformers is proper mixing of tokens. Finding a good mixer is, however, challenging, as it needs to scale with the sequence size and systematically generalize to downstream tasks. Recently, there has been extensive research on finding good token mixers; see e.g. Tay et al. (2020b) and references therein. The original self-attention imposes graph structures and uses the similarity among the tokens to capture long-range dependencies Vaswani et al. (2017); Dosovitskiy et al. (2020). It is parameter-efficient and adaptive, but suffers from quadratic complexity in the sequence size. To achieve efficient mixing with linear complexity, several approximations have been introduced for self-attention; see Section 2. These approximations typically compromise accuracy for the sake of efficiency. For instance, the long-short (LS) transformer aggregates a long-range attention with dynamic projection to model distant correlations and a short-term attention to capture local correlations Zhu et al. (2021). Long-range dependencies are modeled in low dimensions, which can limit expressiveness. More recently, alternatives to self-attention have been introduced that relax the graph assumption for efficient mixing. Instead, they leverage geometric structures using the Fourier transform Rao et al. (2021); Lee-Thorp et al. (2021). For instance, the Global Filter Network (GFN) proposes depthwise global convolution for token mixing that enjoys an efficient implementation in the Fourier domain Rao et al. (2021). GFN mainly involves three steps: (i) spatial token mixing via the fast Fourier transform (FFT); (ii) frequency gating; and (iii) inverse FFT for token demixing.
GFN, however, lacks adaptivity and expressiveness at high resolutions, since its parameter count grows with the sequence size and no channel mixing is involved in step (ii). Our Approach. To address these shortcomings, we frame token mixing as operator learning, which learns mappings between continuous functions in infinite-dimensional spaces. We treat tokens as continuous elements in the function space and model token mixing as a continuous global convolution, which captures global relationships in the geometric space. One way to solve the global convolution efficiently is through the FFT. More generally, we compose such global convolution operations with nonlinearities such as ReLU to learn general non-linear operators. This forms the basis for designing Fourier Neural Operators (FNOs), which have shown promise in solving PDEs Li et al. (2020a). We thus adopt FNO as a starting point for designing efficient token mixing. Designing AFNO. Adapting FNO from PDEs to vision requires several design modifications. Images have high-resolution content with discontinuities due to edges and other structures. The channel mixing in standard FNO incurs a quadratic complexity in the channel size. To control this complexity, we impose a block-diagonal structure on the channel-mixing weights. Also, to enhance generalization, inspired by sparse regression, we sparsify the frequencies via soft-thresholding Tibshirani (1996). Also, for parameter efficiency, our MLP layer shares weights across tokens (see Table 1). We term the resulting model the adaptive FNO (AFNO). We perform extensive experiments pretraining vision transformers for upstream classification and inpainting, which are then finetuned for downstream segmentation.
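The block-diagonal constraint on the channel-mixing weights described above trades one dense d × d matrix (d² parameters) for k independent blocks of size (d/k) × (d/k), a k-fold parameter reduction. A small sketch with names of our choosing, not the authors' code:

```python
import numpy as np

def block_diag_mix(x, blocks):
    """Channel mixing with k independent blocks instead of one dense d x d matrix.

    x      : (..., d) array of tokens
    blocks : (k, b, b) weights with b = d // k
    """
    k, b, _ = blocks.shape
    xs = x.reshape(x.shape[:-1] + (k, b))              # split channels into k chunks
    return np.einsum('...kb,kbc->...kc', xs, blocks).reshape(x.shape)

d, k = 8, 4
dense_params = d * d               # 64 weights for a full channel-mixing matrix
block_params = k * (d // k) ** 2   # 16 weights: a k-fold reduction
```

The same structure also cuts the per-token mixing FLOPs by the factor k, which is what tames the quadratic cost in the channel size.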
Compared with the state-of-the-art, our AFNO using the ViT-B backbone outperforms existing GFN, LS, and self-attention for few-shot segmentation in terms of both efficiency and accuracy; e.g., compared with self-attention, AFNO achieves slightly better accuracy while being 30% more efficient. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO achieves state-of-the-art results and beats previous methods; e.g., AFNO achieves more than 2% better mIoU compared with efficient self-attention Xie et al. (2021), and is also competitive with GFN and LS. Key Contributions. Our main contributions are summarized as follows: • We establish a link between operator learning and high-resolution token mixing, and adapt FNO from PDEs as an efficient mixer with quasi-linear complexity in the sequence length. • We design AFNO in a principled way to improve its expressiveness and generalization by imposing block-diagonal structure, adaptive weight-sharing, and sparsity. • We conduct experiments for pretraining and finetuning. AFNO outperforms existing mixers for few-shot segmentation. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO (sequence: 65k) achieves state-of-the-art results, e.g., with a 2% gain over efficient self-attention. 2 RELATED WORKS. Our work is at the intersection of operator learning and efficient transformers. Since the inception of transformers, there have been several works to improve the efficiency of self-attention. We divide them into three lines of work based on their structural constraints. Graph-Based Mixers primarily focus on finding efficient surrogates to approximate self-attention. These include: (i) sparse attentions that promote predefined sparse patterns; see e.g. the sparse transformer Child et al. (2019), image transformer Parmar et al. (2018), axial transformer Ho et al. (2019), and Longformer Beltagy et al.
(2020); (ii) low-rank attentions that use linear sketching, such as Linformer Wang et al. (2020), the long-short transformer Zhu et al. (2021), and Nyströmformer Xiong et al. (2021); (iii) kernel methods that approximate attention with an ensemble of kernels, such as Performer Choromanski et al. (2020), the linear transformer Katharopoulos et al. (2020), and random feature attention Peng et al. (2021); and (iv) clustering-based methods such as Reformer Kitaev et al. (2020), the routing transformer Roy et al. (2021), and the Sinkhorn transformer Tay et al. (2020a). These surrogates, however, compromise accuracy for efficiency. MLP-Based Mixers relax the graph similarity constraints of self-attention and spatially mix tokens using MLP projections. The original MLP-Mixer Tolstikhin et al. (2021) achieves similar accuracy to self-attention. It is further accelerated by ResMLP Touvron et al. (2021), which replaces the layer norm with affine transforms. gMLP Liu et al. (2021a) also uses an additional gating to weight tokens before mixing. This class of methods, however, lacks scalability due to the quadratic complexity of the MLP projection and its parameter inefficiency for high-resolution images. Fourier-Based Mixers apply the Fourier transform to spatially mix tokens. FNet Lee-Thorp et al. (2021) resembles the MLP-Mixer, with the token mixer simply being a pre-fixed DFT; no filtering is done to adapt to the data distribution. Global filter networks (GFNs) Rao et al. (2021), however, learn Fourier filters to perform depthwise global convolution, where no channel mixing is involved. Also, GFN filters lack adaptivity, which could negatively impact generalization. In contrast, our proposed AFNO performs global convolution with dynamic filtering and channel mixing, which leads to better expressivity and generalization. Operator Learning deals with mappings from functions to functions and is commonly used for PDEs.
Operator learning can be deployed in computer vision as images are RGB-valued functions on a 2D plane . This continuous generalization allows us to carry over benefits from operators . Recent advances in operator learning include DeepONet Lu et al . ( 2019 ) , which learns the coefficients and basis of the operators , and neural operators Kovachki et al . ( 2021 ) , which are parameterized by integral operators . In this work , we adopt Fourier neural operators Li et al . ( 2020a ) , which implement global convolution via the FFT and have been very successful for solving nonlinear and chaotic PDEs . 3 PRELIMINARIES AND PROBLEM STATEMENT . Consider a 2D image that is divided into an h × w grid of small and non-overlapping patches . Each patch is represented as a d-dimensional token , and the image can be represented as a token tensor $X \in \mathbb{R}^{h \times w \times d}$ . Treating the image as a token sequence , transformers then aim to learn a contextual embedding that transfers well to downstream tasks . To end up with a rich representation , the tokens need to be effectively mixed over the layers . Self-attention is an effective mixing mechanism that learns the graph similarity among tokens . It however scales quadratically with the sequence size , which impedes training on high-resolution images . Our goal is then to find an alternative mixing strategy that achieves favorable scaling trade-offs in terms of computational complexity , memory , and downstream transfer accuracy . 3.1 KERNEL INTEGRATION . The self-attention mechanism can be written as a kernel integration ( Tsai et al. , 2019 ; Cao , 2021 ; Kovachki et al. , 2021 ) . For the input tensor X we denote the ( n , m ) -th token as $x_{n,m} \in \mathbb{R}^d$ . For notational convenience , we index the token sequence as $X[s] := X[n_s , m_s]$ for $s , t \in [hw]$ . Define also $N := hw$ as the sequence length . The self-attention mixing is then defined as follows : Definition 1 ( Self Attention ) .
$\mathrm{Att} : \mathbb{R}^{N \times d} \to \mathbb{R}^{N \times d}$ , $\mathrm{Att}(X) := \mathrm{softmax}\!\left( \frac{X W_q ( X W_k )^\top}{\sqrt{d}} \right) X W_v$ ( 1 ) where $W_q , W_k , W_v \in \mathbb{R}^{d \times d}$ are the query , key , and value matrices , respectively . Define $K := \mathrm{softmax}( \langle X W_q , X W_k \rangle / \sqrt{d} )$ as the $N \times N$ score array with $\langle \cdot , \cdot \rangle$ being the inner product in $\mathbb{R}^d$ . We then treat self-attention as an asymmetric matrix-valued kernel $\kappa : [N] \times [N] \to \mathbb{R}^{d \times d}$ parameterized as $\kappa[s , t] = K[s , t] \, W_v$ , which can be viewed as a kernel summation : $\mathrm{Att}(X)[s] := \sum_{t=1}^{N} \kappa[s , t] \, X[t] \quad \forall s \in [N]$ . ( 2 ) This kernel summation can be extended to continuous kernel integrals . The input tensor X is no longer a finite-dimensional vector in the Euclidean space $\mathbb{R}^{N \times d}$ , but rather a spatial function in the function space $\mathcal{X}( D , \mathbb{R}^d )$ defined on the domain $D \subset \mathbb{R}^2$ , which is the physical space of the images . In this continuum formulation , the neural network becomes an operator that acts on the input functions . This brings us efficient characterizations originating from operator learning . Definition 2 ( Kernel Integral ) . We define the kernel integral operator $\mathcal{K} : \mathcal{X}( D , \mathbb{R}^d ) \to \mathcal{X}( D , \mathbb{R}^d )$ as $\mathcal{K}(X)(s) = \int_D \kappa( s , t ) \, X(t) \, dt \quad \forall s \in D$ , ( 3 ) with a continuous kernel function $\kappa : D \times D \to \mathbb{R}^{d \times d}$ Li et al . ( 2020b ) . For the special case of the Green ’ s kernel $\kappa( s , t ) = \kappa( s - t )$ , the integral leads to the global convolution defined below . Definition 3 ( Global Convolution ) . Assuming $\kappa( s , t ) = \kappa( s - t )$ , the kernel operator admits $\mathcal{K}(X)(s) = \int_D \kappa( s - t ) \, X(t) \, dt \quad \forall s \in D$ . ( 4 ) Convolution is a lower-complexity class of operation than general kernel integration . The Green ’ s kernel has a beneficial regularization effect , yet it is also expressive enough to capture global interactions . Furthermore , the global convolution can be efficiently implemented by the FFT . | Vision transformers scale quadratically with the number of pixels.
To cope with this challenge, this paper proposes Adaptive Fourier Neural Operator (AFNO) as an efficient token mixer that learns to mix in the Fourier domain. This is achieved by modifying FNO, including imposing a block-diagonal structure on the channel mixing weights, adaptively sharing weights across tokens, and sparsifying the frequency modes via soft-thresholding and shrinkage. The resulting model has a quasi-linear complexity and linear memory in the sequence size. | SP:47f3678073df28aeb8d2a85c56da2846df66bf97 |
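The adaptive Fourier mixing summarized above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' implementation: the token grid is mixed globally by a 2-D FFT, a block-diagonal two-layer MLP shared across all frequency modes mixes channels, soft-thresholding sparsifies the modes, and an inverse FFT returns to the spatial domain. All shapes, the ReLU placement, and the threshold `lam` are assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, lam):
    # complex soft-thresholding: shrink the magnitude, keep the phase
    mag = np.abs(z)
    return z / (mag + 1e-12) * np.maximum(mag - lam, 0.0)

def afno_mix(x, w1, b1, w2, b2, lam=0.01):
    """Sketch of adaptive Fourier token mixing.
    x: (h, w, d) real token tensor; w1, w2: (k, d//k, d//k) block-diagonal
    channel-mixing weights shared across all frequency modes (k blocks)."""
    h, w, d = x.shape
    k, bd, _ = w1.shape
    z = np.fft.fft2(x, axes=(0, 1))             # global token mixing via FFT
    z = z.reshape(h, w, k, bd)                  # split channels into k blocks
    # two-layer MLP applied per frequency mode and per block (weight sharing)
    hid = np.einsum('hwkb,kbc->hwkc', z, w1) + b1
    hid = np.where(hid.real > 0, hid, 0.0)      # ReLU on the real part (illustrative)
    z = np.einsum('hwkb,kbc->hwkc', hid, w2) + b2
    z = soft_threshold(z, lam)                  # sparsify the frequency modes
    z = z.reshape(h, w, d)
    return np.fft.ifft2(z, axes=(0, 1)).real    # back to the spatial domain

rng = np.random.default_rng(0)
h, w, d, k = 8, 8, 16, 4
x = rng.standard_normal((h, w, d))
w1 = rng.standard_normal((k, d // k, d // k)) * 0.1
w2 = rng.standard_normal((k, d // k, d // k)) * 0.1
b1 = b2 = 0.0
y = afno_mix(x, w1, b1, w2, b2)
print(y.shape)  # (8, 8, 16)
```

Because the FFT dominates the cost, the mixing is quasi-linear in the sequence length N = hw, while the block-diagonal weights keep the channel-mixing parameter count at d²/k per layer.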
Pairwise Adversarial Training for Unsupervised Class-imbalanced Domain Adaptation | 1 INTRODUCTION . Unsupervised domain adaptation ( UDA ) aims to achieve knowledge transfer from a labeled source domain to an unlabelled target domain . Recent years have witnessed the significant progress of UDA based on deep neural networks ( Pei et al. , 2018 ; Cui et al. , 2020 ; Hu et al. , 2020 ; Liang et al. , 2020 ; Na et al. , 2021 ) . Most existing UDA methods assume that only covariate shift occurs between the source domain and target domain , while the label distributions in the two domains are identical . However , this assumption may not hold in real-world applications . For instance , in wild-life pictures , commonly seen animals such as rabbit and deer appear more frequently than rare animals such as panda and crocodile . Public datasets such as DomainNet ( Peng et al. , 2019 ) and MSCOCO ( Lin et al. , 2014 ) exhibit imbalanced class distributions . Figure 1 illustrates the imbalanced label distributions in the Real domain and Sketch domain from the DomainNet dataset . To address the issue of imbalanced label distributions in domain adaptation , some recent studies ( Wu et al. , 2019 ; Tan et al. , 2020 ; Jiang et al. , 2020 ) try to jointly model the conditional feature distribution shift and label distribution shift ( LDS ) . This problem is referred to as Class-imbalanced Domain Adaptation ( CDA ) . Let x and y denote the samples and labels , respectively . p and q separately represent the probability distributions of the source domain and target domain . The common assumptions in UDA involve covariate shift ( i.e. , p ( x ) ≠ q ( x ) ) and identical label distributions ( i.e. , p ( y ) = q ( y ) ) . In CDA , however , apart from the covariate shift , both the conditional feature shift and label shift exist , i.e. , p ( x|y ) ≠ q ( x|y ) , p ( y ) ≠ q ( y ) . CDA is a more challenging task than UDA . Recent studies ( Tan et al.
, 2020 ) have demonstrated that the mainstream UDA methods will suffer a significant performance drop , as the classifier will favor the majority classes . Only a few CDA approaches have been proposed so far . In Tan et al . ( 2020 ) ’ s work , the negative effect of label shift is reduced by exploiting the pseudo labelled target samples via self-training . Jiang et al . ( 2020 ) use an implicit sampling method based on pseudo labels to align the joint distribution between features and labels . However , one critical problem of these methods is that the pseudo labels are likely to suffer from ill-calibrated probabilities ( Guo et al. , 2017 ) , and thus the unreliable pseudo labels will cause error accumulation during the training process , which will largely degrade the model performance . Augmenting training data has been proven an effective strategy to tackle the issue of biased label distributions in class-imbalance learning ( Chawla et al. , 2002 ; Chou et al. , 2020 ) . In addition to the traditional data augmentation techniques , adversarial training is also capable of generating semantically meaningful synthetic samples that help enhance the robustness of models . However , these approaches only consider a single domain , and they can not be directly applied to solve the CDA problem . In this paper , we propose a pairwise adversarial training ( PAT ) approach that augments training data for class-imbalanced domain adaptation . Unlike conventional adversarial training , in which the adversarial samples are obtained from the ℓp ball around the original data , we obtain semantic adversarial samples from the interpolated line between aligned pair-wise samples from the source domain and target domain . Moreover , a class-imbalanced semantic centroid alignment strategy is designed to explicitly align the source and target domains in the feature space . The main contributions of this paper are three-fold .
( 1 ) We propose a novel pairwise adversarial training approach that generates adversarial samples from pairs of samples across the source and target domains , and further exploits these samples to augment training data . ( 2 ) We propose a new optimization algorithm to solve pairwise adversarial training problem . ( 3 ) We conduct extensive evaluations on benchmark datasets , and results show that our approach obtains competing performance compared with state-of-art CDA methods . 2 RELATED WORK . In this section , we briefly introduce three relevant research topics , including unsupervised domain adaptation , class-imbalanced domain adaptation and adversarial training . Unsupervised Domain Adaptation . In recent years , unsupervised domain adaption ( UDA ) has attracted increasing attention . Existing UDA methods could be roughly categorized into two groups , including the discrepancy-based methods and adversarial-based methods . The discrepancy-based methods usually align source and target feature distributions in the embedding space using various statistical distance metrics , such as Maximum Mean Discrepancy ( MMD ) ( Long et al. , 2016 ; 2017 ; Kang et al. , 2019 ) , Correlation Alignment ( CORAL ) ( Sun & Saenko , 2016 ) , and Wasserstein distance ( Lee & Raginsky , 2018 ; Shen et al. , 2018 ; Balaji et al. , 2019 ) . On the other hand , the adversarial-based methods focus on learning domain invariant features via domain adversarial training ( Ganin et al. , 2016 ; Shu et al. , 2018 ; Pei et al. , 2018 ; Saito et al. , 2018 ; Deng et al. , 2019 ; Yu et al. , 2019 ) . Recently , Zhang et al . ( 2019 ) proposed the margin disparity discrepancy ( MDD ) to measure the discrepancy of two domains with generalization bounds . This theory is tailored into an adversarial learning algorithm for domain adaptation . 
Unlike other adversarial learning based UDA methods that align two domains by confusing a domain discriminator , MDD aligns two domains by minimizing the maximum margin disparity discrepancy of an optimal classifier f and an auxiliary classifier f ′ . The optimization problem of MDD is formulated as : $\min_{f , \psi} \ \varepsilon ( D_s ) + \eta D_\gamma ( D_s , D_t )$ , ( 1 ) $\max_{f'} \ D_\gamma ( D_s , D_t )$ , ( 2 ) where $\varepsilon$ is the classification loss on the source domain and $D_\gamma$ measures the discrepancy of the source domain and target domain . Specifically , $\varepsilon ( D_s ) = \mathbb{E}_{( x^s , y^s ) \sim D_s} L ( f ( \psi ( x^s ) ) , y^s )$ , ( 3 ) $L_{adv} = D_\gamma ( D_s , D_t ) = \mathbb{E}_{x^t \sim D_t} L' ( f' ( \psi ( x^t ) ) , f ( \psi ( x^t ) ) ) - \gamma \, \mathbb{E}_{x^s \sim D_s} L ( f' ( \psi ( x^s ) ) , f ( \psi ( x^s ) ) )$ , ( 4 ) where $L$ is the cross-entropy function and $L' ( f' ( \psi ( x^t ) ) , f ( \psi ( x^t ) ) ) = \log ( 1 - \sigma_{y'} ( f' ( \psi ( x^t ) ) ) )$ . $y'$ is the pseudo label generated from the optimal classifier . MDD is the backbone of our method . Class-imbalanced Domain Adaptation . As a branch of domain adaptation , class-imbalanced domain adaptation ( CDA ) aims to deal with data with biased class distributions . Tan et al . ( 2020 ) might be the first to investigate the CDA problem , and they exploited the pseudo labelled target data to reduce the negative effect of label shift . Wu et al . ( 2019 ) proposed asymmetrically-relaxed distances as a replacement for the standard ones under biased label distributions . Jiang et al . ( 2020 ) adopted an implicit sampling strategy to ensure class alignment at the minibatch level . Prabhu et al . ( 2021 ) avoided the use of highly unreliable pseudo labels by assessing the reliability of target data with predictive consistency under random image transformations . Our method refrains from exploiting pseudo labeled target data directly in the training process , while reducing the effect of biased label shift by incorporating semantic adversarial samples into the training process . Adversarial Training .
Adversarial training ( AT ) ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) is an effective regularization method for enhancing the robustness and generalization ability of deep learning models . In particular , adversarial samples are incorporated in the model training process , which are intentionally designed to deceive the deep learning model by adding small perturbations to the original data . Furthermore , virtual adversarial training ( VAT ) has been proposed ( Miyato et al. , 2018 ) , which seeks the adversarial direction for regularization without using label information . Both AT and VAT have been employed to tackle the standard UDA problem ( Shu et al. , 2018 ) . However , to the best of our knowledge , our work is the first attempt to address the class-imbalanced domain adaptation problem using adversarial training . 3 PROPOSED APPROACH . In this section , we first give the problem definition of CDA , and then present the details of the proposed pairwise adversarial training approach . Finally , we introduce how to integrate pairwise adversarial training with MDD to address the CDA problem . 3.1 PROBLEM DEFINITION . In class-imbalanced domain adaptation , both the source and target domains suffer from label distribution shift . We are given a source domain $D_s = \{ ( x_i^s , y_i^s ) \}_{i=1}^{N_s}$ with $N_s$ labelled samples and a target domain $D_t = \{ x_i^t \}_{i=1}^{N_t}$ with $N_t$ unlabelled samples . Each domain contains K classes , and the class label is denoted as $y^s \in \{ 0 , 1 , 2 , \dots , K - 1 \}$ . Let p and q denote the probability distributions of the source domain and target domain , respectively . We assume that both covariate shift ( i.e. , p ( x ) ≠ q ( x ) ) and label distribution shift ( i.e. , p ( y ) ≠ q ( y ) and p ( x|y ) ≠ q ( x|y ) ) exist between the two domains . Our goal is to train a model that can learn domain invariant features , reduce the gap between the source and target domains , and mitigate the label distribution shift .
The model typically consists of a feature extractor ψ : X → Z and a classifier f : Z → Y that aims to minimize the target risk . 3.2 PAIRWISE ADVERSARIAL TRAINING ( PAT ) . We investigate how to mitigate the challenging issue of label distribution shift in CDA , as illustrated in Figure 1 . Previous studies ( Tan et al. , 2020 ) found that when the source domain is imbalanced , the model performance on the target domain will drop significantly , especially when the target domain is also imbalanced . An intuitive solution is to augment the training data in the two domains , such that the model training would not be dominated by the majority classes in either domain . However , this task is not trivial , considering the mixed effects of the domain gap and imbalanced class distributions . Inspired by adversarial training , we aim to create adversarial samples to augment training data . In adversarial training , the adversarial samples are exploited to enhance the robustness and generalization ability of the model . The loss function of adversarial training is : $L_{ce} ( x + \delta^* , y ; \theta )$ where $\delta^* := \arg\max_{\| \delta \|_p \le \epsilon} L_{ce} ( x + \delta , y ; \theta )$ , ( 5 ) where x is the original sample , y is the ground-truth label of x , θ refers to the model parameters , and δ is the perturbation added to x . The existing adversarial training methods could not be directly used to tackle the CDA problem for two reasons . First , existing methods simply generate adversarial samples within the neighborhood of the original samples , so they could not mitigate the gap between the source and target domains . Second , existing methods treat majority classes and minority classes equally , so they are unable to address the class imbalance issue . In this paper , we propose pairwise adversarial training , which generates adversarial samples from the linear interpolation of source and target samples and meanwhile reduces the domain discrepancy .
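To make the contrast with Eq. (5) concrete: the pairwise adversarial sample lives on the source-target interpolation line rather than in an ℓp ball. The sketch below searches for the most adversarial interpolation coefficient by a coarse grid search against a toy linear classifier. This is an illustration only: the paper's actual optimization algorithm is not reproduced here, and the model, names, and grid are our assumptions.

```python
import numpy as np

def xent(logits, y):
    # cross-entropy loss for a single example (numerically stabilized)
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def pairwise_adv_sample(x_s, x_t, y_s, model, lams=np.linspace(0.0, 1.0, 11)):
    """Pick the point on the source->target line that maximizes the loss
    w.r.t. the source label. A coarse grid stands in for the paper's
    dedicated optimization algorithm, which this sketch does not reproduce."""
    losses = [xent(model((1 - lam) * x_s + lam * x_t), y_s) for lam in lams]
    lam_star = lams[int(np.argmax(losses))]
    return (1 - lam_star) * x_s + lam_star * x_t, lam_star

# toy linear model: logits = W x (hypothetical stand-in for f(psi(x)))
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 5))
model = lambda x: W @ x
x_s, x_t = rng.standard_normal(5), rng.standard_normal(5)
x_adv, lam = pairwise_adv_sample(x_s, x_t, y_s=0, model=model)
print(x_adv.shape, 0.0 <= lam <= 1.0)
```

By construction the returned sample is at least as hard for the classifier as the original source sample, while staying on the path between the two domains rather than in an arbitrary perturbation ball.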
In the following , we will introduce two key components of PAT , including the generation of interpolated adversarial samples and semantic centroid alignment . | This paper proposes a pairwise adversarial training approach for class-imbalanced domain adaptation. Specifically, the adversarial samples are generated from the interpolated line of the aligned pairwise source domain samples and target domain samples. The generated adversarial data can augment the training data and help enhance the robustness of models. | SP:a9301566377e1f0c1871146621e0cb358385098c |
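Since MDD is stated above as the backbone of the method, a batch estimate of its discrepancy term can be written directly from Eqs. (1)-(4). The snippet below is a hedged reconstruction from those formulas only: the `eps` log-stabilizer and the `gamma` default are our assumptions, and the classifiers are represented abstractly by their logits.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mdd_discrepancy(logits_f_s, logits_fp_s, logits_f_t, logits_fp_t, gamma=4.0):
    """Batch estimate of D_gamma(D_s, D_t), Eq. (4):
    E_t[L'(f', f)] - gamma * E_s[L(f', f)], where the 'labels' for the
    auxiliary classifier f' are the argmax predictions of f."""
    eps = 1e-8  # numerical stabilizer (our addition)
    ys = logits_f_s.argmax(axis=1)          # pseudo labels y' on source
    yt = logits_f_t.argmax(axis=1)          # pseudo labels y' on target
    ps = softmax(logits_fp_s)               # sigma(f'(psi(x_s)))
    pt = softmax(logits_fp_t)               # sigma(f'(psi(x_t)))
    # source term: cross-entropy L(f'(psi(x_s)), f(psi(x_s)))
    src = -np.log(ps[np.arange(len(ys)), ys] + eps).mean()
    # target term: L' = log(1 - sigma_{y'}(f'(psi(x_t))))
    tgt = np.log(1.0 - pt[np.arange(len(yt)), yt] + eps).mean()
    return tgt - gamma * src

rng = np.random.default_rng(2)
d = mdd_discrepancy(rng.standard_normal((6, 4)), rng.standard_normal((6, 4)),
                    rng.standard_normal((6, 4)), rng.standard_normal((6, 4)))
print(float(d))
```

In the full objective, the feature extractor and main classifier minimize this quantity (plus the source classification loss, Eq. 1) while the auxiliary classifier maximizes it (Eq. 2).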
Pairwise Adversarial Training for Unsupervised Class-imbalanced Domain Adaptation | 1 INTRODUCTION . Unsupervised domain adaptation ( UDA ) aims to achieve knowledge transfer from a labeled source domain to an unlabelled target domain . Recent years have witnessed the significant progress of UDA based on deep neural networks ( Pei et al. , 2018 ; Cui et al. , 2020 ; Hu et al. , 2020 ; Liang et al. , 2020 ; Na et al. , 2021 ) . Most existing UDA methods assume that only covariate shift occurs between the source domain and target domain , while the label distributions in the two domains are identical . However , this assumption may not hold in real-world applications . For instance , in wild-life pictures , commonly seen animals such as rabbit and deer appear more frequently than rare animals such as panda and crocodile . Public datasets such as DomainNet ( Peng et al. , 2019 ) and MSCOCO ( Lin et al. , 2014 ) exhibit imbalanced class distributions . Figure 1 illustrates the imbalanced label distributions in the Real domain and Sketch domain from the DomainNet dataset . To address the issue of imbalanced label distributions in domain adaptation , some recent studies ( Wu et al. , 2019 ; Tan et al. , 2020 ; Jiang et al. , 2020 ) try to jointly model the conditional feature distribution shift and label distribution shift ( LDS ) . This problem is referred to as Class-imbalanced Domain Adaptation ( CDA ) . Let x and y denote the samples and labels , respectively . p and q separately represent the probability distributions of the source domain and target domain . The common assumptions in UDA involve covariate shift ( i.e. , p ( x ) ≠ q ( x ) ) and identical label distributions ( i.e. , p ( y ) = q ( y ) ) . In CDA , however , apart from the covariate shift , both the conditional feature shift and label shift exist , i.e. , p ( x|y ) ≠ q ( x|y ) , p ( y ) ≠ q ( y ) . CDA is a more challenging task than UDA . Recent studies ( Tan et al.
, 2020 ) have demonstrated that the mainstream UDA methods will suffer a significant performance drop , as the classifier will favor the majority classes . Only a few CDA approaches have been proposed so far . In Tan et al . ( 2020 ) ’ s work , the negative effect of label shift is reduced by exploiting the pseudo labelled target samples via self-training . Jiang et al . ( 2020 ) use an implicit sampling method based on pseudo labels to align the joint distribution between features and labels . However , one critical problem of these methods is that the pseudo labels are likely to suffer from ill-calibrated probabilities ( Guo et al. , 2017 ) , and thus the unreliable pseudo labels will cause error accumulation during the training process , which will largely degrade the model performance . Augmenting training data has been proven an effective strategy to tackle the issue of biased label distributions in class-imbalance learning ( Chawla et al. , 2002 ; Chou et al. , 2020 ) . In addition to the traditional data augmentation techniques , adversarial training is also capable of generating semantically meaningful synthetic samples that help enhance the robustness of models . However , these approaches only consider a single domain , and they can not be directly applied to solve the CDA problem . In this paper , we propose a pairwise adversarial training ( PAT ) approach that augments training data for class-imbalanced domain adaptation . Unlike conventional adversarial training , in which the adversarial samples are obtained from the ℓp ball around the original data , we obtain semantic adversarial samples from the interpolated line between aligned pair-wise samples from the source domain and target domain . Moreover , a class-imbalanced semantic centroid alignment strategy is designed to explicitly align the source and target domains in the feature space . The main contributions of this paper are three-fold .
( 1 ) We propose a novel pairwise adversarial training approach that generates adversarial samples from pairs of samples across the source and target domains , and further exploits these samples to augment training data . ( 2 ) We propose a new optimization algorithm to solve pairwise adversarial training problem . ( 3 ) We conduct extensive evaluations on benchmark datasets , and results show that our approach obtains competing performance compared with state-of-art CDA methods . 2 RELATED WORK . In this section , we briefly introduce three relevant research topics , including unsupervised domain adaptation , class-imbalanced domain adaptation and adversarial training . Unsupervised Domain Adaptation . In recent years , unsupervised domain adaption ( UDA ) has attracted increasing attention . Existing UDA methods could be roughly categorized into two groups , including the discrepancy-based methods and adversarial-based methods . The discrepancy-based methods usually align source and target feature distributions in the embedding space using various statistical distance metrics , such as Maximum Mean Discrepancy ( MMD ) ( Long et al. , 2016 ; 2017 ; Kang et al. , 2019 ) , Correlation Alignment ( CORAL ) ( Sun & Saenko , 2016 ) , and Wasserstein distance ( Lee & Raginsky , 2018 ; Shen et al. , 2018 ; Balaji et al. , 2019 ) . On the other hand , the adversarial-based methods focus on learning domain invariant features via domain adversarial training ( Ganin et al. , 2016 ; Shu et al. , 2018 ; Pei et al. , 2018 ; Saito et al. , 2018 ; Deng et al. , 2019 ; Yu et al. , 2019 ) . Recently , Zhang et al . ( 2019 ) proposed the margin disparity discrepancy ( MDD ) to measure the discrepancy of two domains with generalization bounds . This theory is tailored into an adversarial learning algorithm for domain adaptation . 
Unlike other adversarial learning based UDA methods that align two domains by confusing a domain discriminator , MDD aligns two domains by minimizing the maximum margin disparity discrepancy of an optimal classifier f and an auxiliary classifier f ′ . The optimization problem of MDD is formulated as : $\min_{f , \psi} \ \varepsilon ( D_s ) + \eta D_\gamma ( D_s , D_t )$ , ( 1 ) $\max_{f'} \ D_\gamma ( D_s , D_t )$ , ( 2 ) where $\varepsilon$ is the classification loss on the source domain and $D_\gamma$ measures the discrepancy of the source domain and target domain . Specifically , $\varepsilon ( D_s ) = \mathbb{E}_{( x^s , y^s ) \sim D_s} L ( f ( \psi ( x^s ) ) , y^s )$ , ( 3 ) $L_{adv} = D_\gamma ( D_s , D_t ) = \mathbb{E}_{x^t \sim D_t} L' ( f' ( \psi ( x^t ) ) , f ( \psi ( x^t ) ) ) - \gamma \, \mathbb{E}_{x^s \sim D_s} L ( f' ( \psi ( x^s ) ) , f ( \psi ( x^s ) ) )$ , ( 4 ) where $L$ is the cross-entropy function and $L' ( f' ( \psi ( x^t ) ) , f ( \psi ( x^t ) ) ) = \log ( 1 - \sigma_{y'} ( f' ( \psi ( x^t ) ) ) )$ . $y'$ is the pseudo label generated from the optimal classifier . MDD is the backbone of our method . Class-imbalanced Domain Adaptation . As a branch of domain adaptation , class-imbalanced domain adaptation ( CDA ) aims to deal with data with biased class distributions . Tan et al . ( 2020 ) might be the first to investigate the CDA problem , and they exploited the pseudo labelled target data to reduce the negative effect of label shift . Wu et al . ( 2019 ) proposed asymmetrically-relaxed distances as a replacement for the standard ones under biased label distributions . Jiang et al . ( 2020 ) adopted an implicit sampling strategy to ensure class alignment at the minibatch level . Prabhu et al . ( 2021 ) avoided the use of highly unreliable pseudo labels by assessing the reliability of target data with predictive consistency under random image transformations . Our method refrains from exploiting pseudo labeled target data directly in the training process , while reducing the effect of biased label shift by incorporating semantic adversarial samples into the training process . Adversarial Training .
Adversarial training ( AT ) ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) is an effective regularization method for enhancing the robustness and generalization ability of deep learning models . In particular , adversarial samples are incorporated in the model training process , which are intentionally designed to deceive the deep learning model by adding small perturbations to the original data . Furthermore , virtual adversarial training ( VAT ) has been proposed ( Miyato et al. , 2018 ) , which seeks the adversarial direction for regularization without using label information . Both AT and VAT have been employed to tackle the standard UDA problem ( Shu et al. , 2018 ) . However , to the best of our knowledge , our work is the first attempt to address the class-imbalanced domain adaptation problem using adversarial training . 3 PROPOSED APPROACH . In this section , we first give the problem definition of CDA , and then present the details of the proposed pairwise adversarial training approach . Finally , we introduce how to integrate pairwise adversarial training with MDD to address the CDA problem . 3.1 PROBLEM DEFINITION . In class-imbalanced domain adaptation , both the source and target domains suffer from label distribution shift . We are given a source domain $D_s = \{ ( x_i^s , y_i^s ) \}_{i=1}^{N_s}$ with $N_s$ labelled samples and a target domain $D_t = \{ x_i^t \}_{i=1}^{N_t}$ with $N_t$ unlabelled samples . Each domain contains K classes , and the class label is denoted as $y^s \in \{ 0 , 1 , 2 , \dots , K - 1 \}$ . Let p and q denote the probability distributions of the source domain and target domain , respectively . We assume that both covariate shift ( i.e. , p ( x ) ≠ q ( x ) ) and label distribution shift ( i.e. , p ( y ) ≠ q ( y ) and p ( x|y ) ≠ q ( x|y ) ) exist between the two domains . Our goal is to train a model that can learn domain invariant features , reduce the gap between the source and target domains , and mitigate the label distribution shift .
The model typically consists of a feature extractor ψ : X → Z and a classifier f : Z → Y that aims to minimize the target risk . 3.2 PAIRWISE ADVERSARIAL TRAINING ( PAT ) . We investigate how to mitigate the challenging issue of label distribution shift in CDA , as illustrated in Figure 1 . Previous studies ( Tan et al. , 2020 ) found that when the source domain is imbalanced , the model performance on the target domain will drop significantly , especially when the target domain is also imbalanced . An intuitive solution is to augment the training data in the two domains , such that the model training would not be dominated by the majority classes in either domain . However , this task is not trivial , considering the mixed effects of the domain gap and imbalanced class distributions . Inspired by adversarial training , we aim to create adversarial samples to augment training data . In adversarial training , the adversarial samples are exploited to enhance the robustness and generalization ability of the model . The loss function of adversarial training is : $L_{ce} ( x + \delta^* , y ; \theta )$ where $\delta^* := \arg\max_{\| \delta \|_p \le \epsilon} L_{ce} ( x + \delta , y ; \theta )$ , ( 5 ) where x is the original sample , y is the ground-truth label of x , θ refers to the model parameters , and δ is the perturbation added to x . The existing adversarial training methods could not be directly used to tackle the CDA problem for two reasons . First , existing methods simply generate adversarial samples within the neighborhood of the original samples , so they could not mitigate the gap between the source and target domains . Second , existing methods treat majority classes and minority classes equally , so they are unable to address the class imbalance issue . In this paper , we propose pairwise adversarial training , which generates adversarial samples from the linear interpolation of source and target samples and meanwhile reduces the domain discrepancy .
In the following , we will introduce two key components of PAT , including the generation of interpolated adversarial samples and semantic centroid alignment . | This work proposes a method for solving the UDA problem with imbalanced class, which is a sub-problem of UDA. The challenge lies in how to handle the difficulties introduced by imbalanced classes. To this end, this work proposes a new data augmentation strategy, that is taking the interpolation of two samples from the same class but from different domains as the augmented samples. The traditional MMD loss and a class centroid distance based loss are also imposed for the model training. Experiments on multiple benchmark datasets are conducted. | SP:a9301566377e1f0c1871146621e0cb358385098c |
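The second component named above, semantic centroid alignment, can be illustrated with a minimal sketch: per-class mean features are computed in each domain and pulled together. Note this is a generic centroid-alignment loss, not the paper's class-imbalance-aware variant; in practice the target labels would be pseudo labels, and every name here is illustrative.

```python
import numpy as np

def class_centroids(feats, labels, num_classes):
    """Per-class mean feature vectors (zeros for classes absent from the batch)."""
    cents = np.zeros((num_classes, feats.shape[1]))
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            cents[k] = feats[mask].mean(axis=0)
    return cents

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, num_classes):
    """Illustrative semantic centroid alignment: mean squared distance between
    per-class centroids of the two domains. The paper's class-imbalance-aware
    weighting is NOT reproduced here."""
    cs = class_centroids(src_feats, src_labels, num_classes)
    ct = class_centroids(tgt_feats, tgt_labels, num_classes)
    return float(((cs - ct) ** 2).sum(axis=1).mean())

feats = np.array([[0., 0.], [1., 1.], [2., 2.], [3., 3.]])
labels = np.array([0, 0, 1, 1])
same = centroid_alignment_loss(feats, labels, feats, labels, 2)
shifted = centroid_alignment_loss(feats, labels, feats + 1.0, labels, 2)
print(same, shifted)  # prints: 0.0 2.0
```

The loss is zero when the per-class centroids of the two domains coincide and grows with their separation, which is the explicit feature-space alignment signal the text describes.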
Pairwise Adversarial Training for Unsupervised Class-imbalanced Domain Adaptation | 1 INTRODUCTION . Unsupervised domain adaptation ( UDA ) aims to achieve knowledge transfer from a labeled source domain to an unlabelled target domain . Recent years have witnessed the significant progress of UDA based on deep neural networks ( Pei et al. , 2018 ; Cui et al. , 2020 ; Hu et al. , 2020 ; Liang et al. , 2020 ; Na et al. , 2021 ) . Most existing UDA methods assume that only covariate shift occurs between the source domain and target domain , while the label distributions in the two domains are identical . However , this assumption may not hold in real-world applications . For instance , in wild-life pictures , commonly seen animals such as rabbit and deer appear more frequently than rare animals such as panda and crocodile . Public datasets such as DomainNet ( Peng et al. , 2019 ) and MSCOCO ( Lin et al. , 2014 ) exhibit imbalanced class distributions . Figure 1 illustrates the imbalanced label distributions in the Real domain and Sketch domain from the DomainNet dataset . To address the issue of imbalanced label distributions in domain adaptation , some recent studies ( Wu et al. , 2019 ; Tan et al. , 2020 ; Jiang et al. , 2020 ) try to jointly model the conditional feature distribution shift and label distribution shift ( LDS ) . This problem is referred to as Class-imbalanced Domain Adaptation ( CDA ) . Let x and y denote the samples and labels , respectively . p and q separately represent the probability distributions of the source domain and target domain . The common assumptions in UDA involve covariate shift ( i.e. , p ( x ) ≠ q ( x ) ) and identical label distributions ( i.e. , p ( y ) = q ( y ) ) . In CDA , however , apart from the covariate shift , both the conditional feature shift and label shift exist , i.e. , p ( x|y ) ≠ q ( x|y ) , p ( y ) ≠ q ( y ) . CDA is a more challenging task than UDA . Recent studies ( Tan et al.
, 2020 ) have demonstrated that the mainstream UDA methods will suffer a significant performance drop , as the classifier will favor the majority classes . Only a few CDA approaches have been proposed so far . In Tan et al . ( 2020 ) ’ s work , the negative effect of label shift is reduced by exploiting the pseudo labelled target samples via self-training . Jiang et al . ( 2020 ) use an implicit sampling method based on pseudo labels to align the joint distribution between features and labels . However , one critical problem of these methods is that the pseudo labels are likely to suffer from ill-calibrated probabilities ( Guo et al. , 2017 ) , and thus the unreliable pseudo labels will cause error accumulation during the training process , which will largely degrade the model performance . Augmenting training data has been proven an effective strategy to tackle the issue of biased label distributions in class-imbalance learning ( Chawla et al. , 2002 ; Chou et al. , 2020 ) . In addition to the traditional data augmentation techniques , adversarial training is also capable of generating semantically meaningful synthetic samples that help enhance the robustness of models . However , these approaches only consider a single domain , and they can not be directly applied to solve the CDA problem . In this paper , we propose a pairwise adversarial training ( PAT ) approach that augments training data for class-imbalanced domain adaptation . Unlike conventional adversarial training , in which the adversarial samples are obtained from the ℓp ball around the original data , we obtain semantic adversarial samples from the interpolated line between aligned pair-wise samples from the source domain and target domain . Moreover , a class-imbalanced semantic centroid alignment strategy is designed to explicitly align the source and target domains in the feature space . The main contributions of this paper are three-fold .
( 1 ) We propose a novel pairwise adversarial training approach that generates adversarial samples from pairs of samples across the source and target domains , and further exploits these samples to augment training data . ( 2 ) We propose a new optimization algorithm to solve the pairwise adversarial training problem . ( 3 ) We conduct extensive evaluations on benchmark datasets , and the results show that our approach obtains competitive performance compared with state-of-the-art CDA methods . 2 RELATED WORK . In this section , we briefly introduce three relevant research topics : unsupervised domain adaptation , class-imbalanced domain adaptation , and adversarial training . Unsupervised Domain Adaptation . In recent years , unsupervised domain adaptation ( UDA ) has attracted increasing attention . Existing UDA methods can be roughly categorized into two groups : discrepancy-based methods and adversarial-based methods . The discrepancy-based methods usually align source and target feature distributions in the embedding space using various statistical distance metrics , such as Maximum Mean Discrepancy ( MMD ) ( Long et al. , 2016 ; 2017 ; Kang et al. , 2019 ) , Correlation Alignment ( CORAL ) ( Sun & Saenko , 2016 ) , and the Wasserstein distance ( Lee & Raginsky , 2018 ; Shen et al. , 2018 ; Balaji et al. , 2019 ) . On the other hand , the adversarial-based methods focus on learning domain-invariant features via domain adversarial training ( Ganin et al. , 2016 ; Shu et al. , 2018 ; Pei et al. , 2018 ; Saito et al. , 2018 ; Deng et al. , 2019 ; Yu et al. , 2019 ) . Recently , Zhang et al . ( 2019 ) proposed the margin disparity discrepancy ( MDD ) to measure the discrepancy of two domains with generalization bounds . This theory is tailored into an adversarial learning algorithm for domain adaptation .
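To make the discrepancy-based family concrete, the following is a minimal NumPy sketch of a biased empirical estimate of squared MMD with an RBF kernel; the toy Gaussian samples and the bandwidth `gamma=1.0` are illustrative assumptions, not values taken from any of the cited methods.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix: k(x, z) = exp(-gamma * ||x - z||^2).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(xs, xt, gamma=1.0):
    # Biased empirical estimate of squared Maximum Mean Discrepancy:
    # mean k(s, s') + mean k(t, t') - 2 * mean k(s, t).
    return (rbf_kernel(xs, xs, gamma).mean()
            + rbf_kernel(xt, xt, gamma).mean()
            - 2.0 * rbf_kernel(xs, xt, gamma).mean())

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. a mean-shifted one.
same = mmd2(rng.normal(0.0, 1.0, (200, 2)), rng.normal(0.0, 1.0, (200, 2)))
shifted = mmd2(rng.normal(0.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (200, 2)))
```

A discrepancy-based UDA method would use such an estimate (on learned features) as a training penalty, driving the two feature distributions together.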
Unlike other adversarial learning based UDA methods that align two domains by confusing a domain discriminator , MDD aligns two domains by minimizing the maximum margin disparity discrepancy between an optimal classifier f and an auxiliary classifier f′ . The optimization problem of MDD is formulated as :

min_{f , ψ} ε ( D_s ) + η D_γ ( D_s , D_t ) , ( 1 )

max_{f′} D_γ ( D_s , D_t ) , ( 2 )

where ε is the classification loss on the source domain and D_γ measures the discrepancy between the source domain and target domain . Specifically ,

ε ( D_s ) = E_{( x^s , y^s ) ∼ D_s} L ( f ( ψ ( x^s ) ) , y^s ) , ( 3 )

L_adv = D_γ ( D_s , D_t ) = E_{x^t ∼ D_t} L′ ( f′ ( ψ ( x^t ) ) , f ( ψ ( x^t ) ) ) − γ E_{x^s ∼ D_s} L ( f′ ( ψ ( x^s ) ) , f ( ψ ( x^s ) ) ) , ( 4 )

where L is the cross-entropy loss , L′ ( f′ ( ψ ( x^t ) ) , f ( ψ ( x^t ) ) ) = log ( 1 − σ_{y′} ( f′ ( ψ ( x^t ) ) ) ) , and y′ is the pseudo label generated by the optimal classifier f . MDD is the backbone of our method . Class-imbalanced Domain Adaptation . As a branch of domain adaptation , class-imbalanced domain adaptation ( CDA ) aims to deal with data with biased class distributions . Tan et al . ( 2020 ) were perhaps the first to investigate the CDA problem ; they exploited pseudo-labelled target data to reduce the negative effect of label shift . Wu et al . ( 2019 ) proposed asymmetrically-relaxed distances as replacements for the standard ones under biased label distributions . Jiang et al . ( 2020 ) adopted an implicit sampling strategy to ensure class alignment at the minibatch level . Prabhu et al . ( 2021 ) avoided the use of highly unreliable pseudo labels by assessing the reliability of target data with predictive consistency under random image transformations . Our method refrains from directly exploiting pseudo-labelled target data in the training process , while reducing the effect of biased label shift by incorporating semantic adversarial samples into the training process . Adversarial Training .
Adversarial training ( AT ) ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) is an effective regularization method for enhancing the robustness and generalization ability of deep learning models . In particular , adversarial samples are incorporated into the model training process ; these samples are intentionally designed to deceive the deep learning model by adding small perturbations to the original data . Furthermore , virtual adversarial training ( VAT ) has been proposed ( Miyato et al. , 2018 ) , which seeks the adversarial direction for regularization without using label information . Both AT and VAT have been employed to tackle standard UDA problems ( Shu et al. , 2018 ) . However , to the best of our knowledge , our work is the first attempt to address the class-imbalanced domain adaptation problem using adversarial training . 3 PROPOSED APPROACH . In this section , we first give the problem definition of CDA , and then present the details of the proposed pairwise adversarial training approach . Finally , we introduce how to integrate pairwise adversarial training with MDD to address the CDA problem . 3.1 PROBLEM DEFINITION . In class-imbalanced domain adaptation , both the source and target domains suffer from label distribution shift . We are given a source domain D_s = { ( x_i^s , y_i^s ) }_{i=1}^{N_s} with N_s labelled samples and a target domain D_t = { x_i^t }_{i=1}^{N_t} with N_t unlabelled samples . Each domain contains K classes , and the class label is denoted as y^s ∈ { 0 , 1 , 2 , ... , K − 1 } . Let p and q denote the probability distributions of the source domain and target domain , respectively . We assume that both covariate shift ( i.e. , p ( x ) ≠ q ( x ) ) and label distribution shift ( i.e. , p ( y ) ≠ q ( y ) and p ( x|y ) ≠ q ( x|y ) ) exist between the two domains . Our goal is to train a model that can learn domain-invariant features , reduce the gap between the source and target domains , and mitigate the label distribution shift .
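As a quick illustration of the label-shift condition p(y) ≠ q(y), the sketch below compares the empirical label distributions of two hypothetical domains; the class counts are made up purely for illustration and do not come from any dataset in the paper.

```python
import numpy as np

# Toy label sets for a K = 3 class problem (hypothetical counts).
ys = np.array([0] * 70 + [1] * 20 + [2] * 10)   # source: skewed toward class 0
yt = np.array([0] * 10 + [1] * 30 + [2] * 60)   # target: skewed toward class 2
K = 3

p_y = np.bincount(ys, minlength=K) / len(ys)    # empirical p(y)
q_y = np.bincount(yt, minlength=K) / len(yt)    # empirical q(y)

# Total-variation distance between the two label distributions;
# a value well above 0 signals label distribution shift p(y) != q(y).
tv = 0.5 * np.abs(p_y - q_y).sum()
```

Here tv = 0.6, so the two domains have strongly mismatched label marginals, the setting CDA targets.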
The model typically consists of a feature extractor ψ : X → Z and a classifier f : Z → Y , and aims to minimize the target risk . 3.2 PAIRWISE ADVERSARIAL TRAINING ( PAT ) . We investigate how to mitigate the challenging issue of label distribution shift in CDA , as illustrated in Figure 1 . Previous studies ( Tan et al. , 2020 ) found that when the source domain is imbalanced , the model performance on the target domain drops significantly , especially when the target domain is also imbalanced . An intuitive solution is to augment the training data in the two domains , such that model training is not dominated by the majority classes in either domain . However , this task is not trivial , considering the mixed effects of the domain gap and imbalanced class distributions . Inspired by adversarial training , we aim to create adversarial samples to augment the training data . In adversarial training , adversarial samples are exploited to enhance the robustness and generalization ability of the model . The loss function of adversarial training is :

L_ce ( x + δ* , y ; θ ) where δ* := argmax_{||δ||_p ≤ ε} L_ce ( x + δ , y ; θ ) , ( 5 )

where x is the original sample , y is the ground-truth label of x , θ refers to the model parameters , δ is the perturbation added to x , and ε bounds the perturbation size . The existing adversarial training methods cannot be directly used to tackle the CDA problem for two reasons . First , existing methods simply generate adversarial samples within the neighborhood of the original samples , so they cannot mitigate the gap between the source and target domains . Second , existing methods treat majority classes and minority classes equally , so they are unable to address the class imbalance issue . In this paper , we propose pairwise adversarial training , which generates adversarial samples from the linear interpolation of source and target samples and meanwhile reduces the domain discrepancy .
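To illustrate how this differs from Eq. ( 5 ), where the search space is an ℓp ball, the sketch below searches along the interpolation line between one source sample and one target sample for the point where a toy model's loss is largest. The sin-based model, the grid search, and all sample values are illustrative stand-ins; PAT's actual optimization is described later in this section.

```python
import numpy as np

def toy_loss(x, y, w):
    # Stand-in model loss: squared error of a simple nonlinear model.
    return (np.sin(x @ w) - y) ** 2

xs = np.array([0.0, 1.0])   # hypothetical source sample
xt = np.array([2.0, 0.0])   # hypothetical aligned target sample
w = np.array([1.0, 1.0])    # toy model parameters
y = 0.0                     # toy label assigned to the mixed sample

# Inner maximization over the interpolation line lam*xs + (1-lam)*xt,
# done here by grid search instead of a gradient-based attack.
lams = np.linspace(0.0, 1.0, 101)
losses = np.array([toy_loss(lam * xs + (1 - lam) * xt, y, w) for lam in lams])
lam_adv = lams[int(np.argmax(losses))]
x_adv = lam_adv * xs + (1 - lam_adv) * xt
```

With these toy values the worst-case point lies strictly between the two endpoints, i.e., the adversarial sample is a genuine cross-domain interpolation rather than a perturbation of either original sample.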
In the following , we will introduce two key components of PAT : the generation of interpolated adversarial samples and semantic centroid alignment . | This paper proposes a new method called Pairwise Adversarial Training (PAT) that augments training data for class-imbalanced domain adaptation (CDA). Different from vanilla unsupervised domain adaptation, the label distributions of the two domains are quite different in CDA. The proposed PAT approach mainly consists of two parts, centroid alignment (CA) and interpolated adversarial samples (IAS). Experiments on several benchmarks verify the effectiveness of PAT for the CDA problem. | SP:a9301566377e1f0c1871146621e0cb358385098c
MixRL: Data Mixing Augmentation for Regression using Reinforcement Learning | 1 INTRODUCTION . As machine learning ( ML ) becomes widely used in critical applications including manufacturing and finance , data augmentation for regression becomes essential as it provides an opportunity to improve model performance without additional data collection . In comparison to classification tasks like object detection in images , the goal of regression is to predict one or more real numbers . To emphasize the importance of data augmentation in regression , we provide a case study of semiconductor manufacturing . Here a common quality check is to measure the layer thicknesses of a 3-dimensional semiconductor and see if they are even . However , directly measuring each thickness results in destroying the semiconductor itself , so a recently-common approach is to take an indirect measurement by applying light waves on the semiconductor , measuring the spectrum of wavelengths that bounce back from all the layers , and use ML to predict the layer thicknesses from the spectrum data ( see Fig . 4 in Sec . 4 for an illustration ) . With enough spectrum data and thickness information , ML models can be trained to accurately predict thicknesses from a spectrum . The main challenge is that there is not enough training data , and the only cost-effective solution is to augment small amounts of data that exist . Even a small improvement in model performance from the data augmentation has significant impact in this industry . In general , any regression task that predicts real values like emissions , stock prices , or even someone ’ s salary can also benefit from data augmentation . Most data augmentation techniques are designed for image classification . In particular , Mixup ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ; Yun et al. 
, 2019 ) is a popular data augmentation technique that is widely used for classification tasks , but is seldom used for regression because it assumes a discrete label space . The idea of Mixup is to mix pairs of examples under the key assumption that a linear interpolation between two examples can be used to estimate the label of any example in between . Mixup is known to effectively regularize the model . More recently , Manifold Mixup ( Verma et al. , 2019 ) has been proposed to improve the hidden representations and decision boundaries , where two examples are mixed in multiple hidden layers of neural networks . However , the Mixup techniques are not readily applicable to a regression setting because the key linearity assumption does not necessarily hold . Since the label space is continuous , taking a linear interpolation of examples that are very different either data-wise or label-wise may result in arbitrarily-incorrect labels , as shown in Fig . 1a . As a result , the linearity assumption only holds to a certain extent , and the degree may vary for each example . Moreover , other data augmentation techniques for classification , including image processing ( e.g. , flipping or rotating ) and generative models ( e.g. , GAN ( Goodfellow et al. , 2014 ) and VAE ( Kingma & Welling , 2014 ) ) , are even less applicable to a regression setting ( see Sec . 5 ) . We propose MixRL , a data mixing augmentation framework that is the first to tailor Mixup for regression tasks using reinforcement learning . MixRL uses a stricter linearity assumption that only holds within a certain data or label distance . These distance limits may vary by example , and we formulate the problem of learning , for each example , how many nearest neighbors it should be mixed with . MixRL employs a meta learning framework that estimates how valuable mixing an example is for reducing the model loss on a small validation set , using Monte Carlo policy gradient reinforcement learning .
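A fixed-k sketch of the "mix only with nearest neighbors" idea is below; MixRL goes further and learns a per-example number of neighbors, whereas here k is a single assumed constant, neighbors are found by brute force, and the toy dataset is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_mixup(X, y, k, alpha=2.0):
    # Mix each example only with one of its k nearest neighbors in data
    # space; MixRL additionally learns k per example, here it is fixed.
    X_aug, y_aug = [], []
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(axis=1)    # L1 distances (X is 2-D)
        nn = np.argsort(d)[1 : k + 1]       # skip self at position 0
        j = rng.choice(nn)
        lam = rng.beta(alpha, alpha)
        X_aug.append(lam * X[i] + (1 - lam) * X[j])
        y_aug.append(lam * y[i] + (1 - lam) * y[j])
    return np.array(X_aug), np.array(y_aug)

X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)   # toy 1-feature dataset
y = X[:, 0] ** 2                               # nonlinear ground truth
X_aug, y_aug = knn_mixup(X, y, k=2)
```

Because each mixed pair is close in data space, the interpolated labels stay close to the true function, which is exactly the restricted-linearity regime Sec. 2 argues for.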
MixRL ’ s framework is inspired by the recent Data Valuation using Reinforcement Learning ( DVRL ) framework ( Yoon et al. , 2020 ) , which solves the different problem of measuring how individual examples contribute to model performance , without any mixing involved . Fig . 1b shows how limiting the nearest neighbors to mix with is better than mixing with all neighbors as in classification . To see if the augmentation is useful , we train simple models on the original four examples ( i.e. , no augmentation ) , the augmented data in Fig . 1a , and the augmented data in Fig . 1b . Evaluating the models on 20 random test examples results in Root Mean Square Error ( RMSE ; see Sec . 4 ) values of 0.2967 , 0.5615 , and 0.1834 , respectively , where a lower RMSE is better . We thus conclude that carefully mixing examples is important for improving regression performance . Experiments conducted on real and synthetic datasets show that MixRL yields better model performance relative to baselines , especially when the linearity is limited and the mixing must be done selectively . In addition , MixRL only requires small validation sets and scales to large training sets . 2 LIMITED LINEARITY IN DATA AND LABEL SPACE FOR REGRESSION . We explain why the key linearity assumption used for Mixup in classification has limitations in a regression setting . In classification , the labels are discrete , and many examples may have the same label . The original version of Mixup ( Zhang et al. , 2018 ) takes a linear interpolation between any pair of examples x_i and x_j with labels y_i and y_j to produce the new example λx_i + ( 1 − λ ) x_j with the label λy_i + ( 1 − λ ) y_j , where λ ∼ Beta ( α , α ) . The linearity assumption turns out to be reasonable because the label difference between examples is only 0 or 1 and thus not that sensitive to the data difference . In contrast , the labels in regression lie in a continuous space .
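The interpolation just described can be written in a few lines; the value α = 0.2 and the two endpoint examples below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    # Vanilla Mixup: draw lam ~ Beta(alpha, alpha) and linearly
    # interpolate both the inputs and the labels.
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_mix, y_mix = mixup(np.array([0.0, 0.0]), 0.0, np.array([1.0, 1.0]), 1.0)
```

With a small α, Beta(α, α) concentrates λ near 0 or 1, so most mixed examples stay close to one of the two originals.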
Although there is still a many-to-one relationship where multiple examples may have the same label , the degree is much smaller than in classification . As a result , when two examples are mixed , the interpolated label can be arbitrarily different from the actual label ; e.g. , mixing the points a and d in Fig . 1a results in a label nowhere near the actual label . In Sec . 4.1 , we also show empirical results where the label error increases for larger data or label distances . Furthermore , mixing examples with larger data or label distances tends to have increasingly negative effects on the model trained on the augmented training set . Figs . 2a and 2b show the model accuracies using RMSE for the Product dataset ( described in Sec . 4 ) when mixing examples with different ranges of distances . Regardless of adjusting the label or data distance , there are diminishing benefits for larger distances . How do we limit the data and label distances to improve Mixup for regression ? One approach is to only limit the data distance , which limits the label distance as well . Suppose that the regression function f is continuous , i.e. , lim_{x→c} f ( x ) = f ( c ) for any x and c within the domain of f . We show that a short-enough data distance sufficiently reduces the label distance as well . Given f ’ s domain D and lim_{x→c} f ( x ) = L , the following is known to hold : ∀ε > 0 , ∃δ > 0 s.t . ∀x ∈ D , if |x − c| < δ , then |f ( x ) − L| < ε . We can use this result to prove that ∀ε > 0 , ∃δ > 0 s.t . ∀x_i , x_j ∈ D , if αx_i + ( 1 − α ) x_j = c , 0 ≤ α ≤ 1 , and |x_i − x_j| < δ , then the absolute difference between the mixed example ’ s label and L is small : |αf ( x_i ) + ( 1 − α ) f ( x_j ) − L| = |α ( f ( x_i ) − L ) + ( 1 − α ) ( f ( x_j ) − L ) | ≤ α|f ( x_i ) − L| + ( 1 − α ) |f ( x_j ) − L| < αε + ( 1 − α ) ε = ε .
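This bound degrades as the pair gets farther apart, which is easy to check numerically; in the sketch below, sin is an arbitrary stand-in for a nonlinear regression function and the midpoint corresponds to λ = 0.5.

```python
import numpy as np

f = np.sin  # arbitrary stand-in for a nonlinear regression function

def midpoint_label_error(x1, x2):
    # Error between the Mixup label (average of the endpoint labels)
    # and the true label of the midpoint example.
    return abs(0.5 * (f(x1) + f(x2)) - f(0.5 * (x1 + x2)))

near = midpoint_label_error(1.0, 1.1)   # close pair: tiny label error
far = midpoint_label_error(0.0, 3.0)    # distant pair: large label error
```

The close pair's interpolated label is nearly exact, while the distant pair's is off by almost the full range of f, mirroring the trend reported in Sec. 4.1.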
We cannot do the converse and limit the data distance by limiting the label distance , because there is a many-to-one mapping from data to labels , which means that two different examples may have identical labels ( e.g. , Fig . 1a ’ s a and d ) . However , limiting the data distance in order to also limit the label distance may be too restrictive , because there is not much correlation between the data and label distances in real data . Fig . 2c shows how the data distance relates to the label distance for the Product dataset . Even for small data distances , the label distance has a large range , which means that the data distance would have to be extremely limited . Hence , our solution is to limit the data and/or label distance as needed instead of just the data distance . This approach turns out to be more practical , as we demonstrate in Sec . 4.4 . 3 MIXRL . The goal of MixRL is to identify which examples to mix with which nearest neighbors . Instead of finding the actual distance limits themselves , we solve the equivalent problem of finding the number of nearest neighbors to mix per example , for convenience . We use reinforcement learning because there is no training data on how mixing each example affects the model performance , and the reward function is non-differentiable , as we explain below . MixRL ’ s framework is inspired by DVRL ( Yoon et al. , 2020 ) , but we solve the different problem of mixing examples and address new issues . 3.1 POLICY OPTIMIZATION REINFORCEMENT LEARNING . Finding the optimal policy involves taking the gradient of the objective function J ( θ ) = E_{π_θ} [ R ] , where π is the policy , θ is its parameters , and R is the reward function . For MixRL , we would like to minimize the regression model loss on a small validation set when mixing a batch of examples . However , the validation loss is computed using the regression model , which does not involve θ . Hence we cannot analytically compute the gradient of the reward function with respect to θ .
Instead , we use the REINFORCE ( Williams , 1992 ) policy gradient algorithm , a Monte Carlo method that is also widely used for data valuation ( Yoon et al. , 2020 ) and neural architecture search ( Zoph & Le , 2017 ) . It is known that ∇_θ J ( θ ) can be approximated ( see Sec . A.1 ) as :

∇_θ J ( θ ) ≈ (1/m) Σ_{i=1}^{m} R ( τ^i ) Σ_{t=0}^{T−1} ∇_θ log π_θ ( a_t^i | s_t^i ) , ( 1 )

where τ^i is the i-th state-action trajectory under policy π_θ , m is the number of sampled trajectories , T is the number of actions taken in a trajectory , s_t^i is a state at time t , and a_t^i is an action at time t . In our setting of minimizing loss , REINFORCE performs gradient descent , decreasing θ by α∇_θ J ( θ ) where α is a learning rate . Although the estimated gradient is unbiased , it is known to have high variance , which we reduce using baseline techniques ( Sutton & Barto , 2018 ) . 3.2 FRAMEWORK . We define the notations used in MixRL ’ s framework shown in Fig . 3 . Let D = { ( x_i , y_i ) }_{i=1}^{S} ∼ P be the training set , where x_i ∈ X is a d-dimensional input example and y_i ∈ Y is an e-dimensional label . Let D_v = { ( x_i^v , y_i^v ) }_{i=1}^{V} ∼ P^t be the validation set , where P^t is the distribution of the test set , which is not necessarily the same as P . Let f_φ be a regression model , and L the loss function that returns a performance score comparing f_φ ( x_i ) with the true label y_i using Mean Square Error ( MSE ) . We assume a list N of possible data and label nearest neighbor ( NN ) options that an example can be mixed with . For instance , N could contain the options “ 1 data NN ” , “ 2 data NNs ” , and “ 2 label NNs ” . The more fine-grained the NN options are , the more precisely MixRL can determine the optimal number of NNs to mix per example . We do not add a “ 0 NN ” option because selecting and not selecting it have identical effects , which makes the policy network training unstable because nothing can be learned .
Instead , we support this option separately by excluding examples that are not worth mixing , as we explain later in this section . The possible NN options can be represented as a one-hot encoding vector of |N| values . We now define the states and actions , which is an important design choice of MixRL . A state s_t is a batch of examples D_b = { ( ( x_i , y_i ) , k_i ) }_{i=1}^{B} where { ( x_i , y_i ) }_{i=1}^{B} ⊆ D and each k_i is an index of an NN option in N that specifies how many data or label NNs x_i should be mixed with . An action a_t is then choosing D_m ⊆ D_b , where each ( x_i , y_i ) in D_m is mixed with its N [ k_i ] NNs in D . A policy π_θ ( D_m | D_b ) returns the probability of selecting D_m at state D_b . For each episode , MixRL selects a batch ( state ) once and chooses a subset of the batch to obtain a reward ( action ) once . Since there is only one time step , the transition function does not play a role . A naïve implementation of the policy network would have an input dimension of B × ( d + e + |N| ) and an output dimension of 2^B , which may be too large to train for large batch sizes . For example , in a typical setting of B = 1000 , d = 100 , e = 1 , and |N| = 10 , the input and output dimensions become 111,000 and 2^1000 , respectively .

Algorithm 1 : Pseudo code for Mixup value network training .
Input : Training set D , validation set D_v , nearest neighbor options N , learning rate α , reward scaling constant C , moving average window W
Output : Mixup value network h_θ
Initialize θ , Base = 0 ;
while not converged do
    Sample D_b = { ( ( x_i , y_i ) , k_i ) }_{i=1}^{B} ∼ P × Uniform ( N ) ;
    D_m = ∅ ;
    for ( ( x_i , y_i ) , k_i ) ∈ D_b do
        Add ( ( x_i , y_i ) , k_i ) to D_m with probability h_θ ( x_i , y_i , k_i ) ;
    Train regression model f_φ with initialized φ on D_b ∪ Mix ( D_m , D , N ) ;
    Loss = (1/V) Σ_{i=1}^{V} L ( f_φ ( x_i^v ) , y_i^v ) ;
    θ = θ − α · C · ( Loss − Base ) · ∇_θ log π_θ ( D_m | D_b ) ;
    Base = ((W−1)/W) · Base + (1/W) · Loss ;
return h_θ ;
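The dimension counting above works out as follows (pure arithmetic, using the example setting from the text):

```python
# Sanity check of the dimension counting for B = 1000, d = 100, e = 1,
# |N| = 10: a policy over whole batches is intractable, while a
# per-example value network stays small.
B, d, e, n_opts = 1000, 100, 1, 10

naive_in = B * (d + e + n_opts)   # batch-level policy input: 111,000
naive_out = 2 ** B                # one probability per subset of the batch
hnet_in = d + e + n_opts          # per-example value network input: 111
hnet_out = 1                      # single selection probability
```

The 2^1000 output space is what forces the per-example decomposition described next.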
Instead , we can significantly reduce the network size by assuming independence among examples and decomposing the policy network ’ s prediction into two phases . First , a Mixup value network h_θ ( x , y , k ) is used to estimate the probability of an example being mixed with its k NNs . Next , we randomly select which examples to mix according to these probabilities . The value of π_θ ( D_m | D_b ) is thus

π_θ ( D_m | D_b ) = ∏_{( x , y , k ) ∈ D_m} h_θ ( x , y , k ) · ∏_{( x , y , k ) ∈ D_b \ D_m} [ 1 − h_θ ( x , y , k ) ] .

Also , h_θ has input and output dimensions of only d + e + |N| and 1 , respectively , and is thus practical . The objective function J ( θ ) is the validation set loss of f_φ trained on D_b ∪ Mix ( D_m , D , N ) ( either from scratch or from a pre-trained state ) , where Mix ( D_m , D , N ) mixes each ( x_i , y_i ) in D_m with its N [ k_i ] NNs in D :

J ( θ ) = l ( φ ) = E_{( x^v , y^v ) ∼ P^t} L ( f_φ ( x^v ) , y^v ) . ( 2 )

Using Eq . 1 , the gradient of Eq . 2 can be approximated using the validation set D_v as :

∇_θ J ( θ ) ≈ (1/V) Σ_{i=1}^{V} L ( f_φ ( x_i^v ) , y_i^v ) · ∇_θ log π_θ ( D_m | D_b ) , ( 3 )

where ∇_θ log π_θ ( D_m | D_b ) = ∇_θ [ Σ_{( x , y , k ) ∈ D_m} log h_θ ( x , y , k ) + Σ_{( x , y , k ) ∈ D_b \ D_m} log ( 1 − h_θ ( x , y , k ) ) ] . We also use reward scaling ( Henderson et al. , 2018 ) to further improve the training . Algorithm 1 shows the pseudo code for training h_θ . The computational complexity is not directly related to the training set size , but depends on the number of iterations needed to train h_θ and how long each iteration takes . Another factor is the number of possible NN options |N| , where a higher number may result in slower training . We show in Sec . 4.2 that MixRL scales to large training sets . Once we train h_θ , we choose the ( x , y , k ) ’ s with the highest h_θ ( x , y , k ) values of at least a threshold T . We optimize T ’ s value using the validation set ( see Sec . B.1 ) .
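Under the independence assumption, log π_θ(D_m | D_b) decomposes into a sum, which is what makes the gradient in Eq. 3 cheap to compute; in the sketch below the probabilities are hypothetical value-network outputs, not real model predictions.

```python
import numpy as np

def log_policy(h, selected):
    # log pi_theta(D_m | D_b): sum of log h over the selected examples
    # plus sum of log(1 - h) over the rest, by independence.
    h = np.asarray(h)
    selected = np.asarray(selected, dtype=bool)
    return float(np.log(h[selected]).sum() + np.log(1.0 - h[~selected]).sum())

h = [0.9, 0.2, 0.7, 0.5]            # hypothetical h_theta outputs for a batch of 4
sel = [True, False, True, False]    # one sampled selection D_m
lp = log_policy(h, sel)
```

Here lp equals log(0.9) + log(0.8) + log(0.7) + log(0.5), so the gradient of lp with respect to θ is just a sum of per-example score terms.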
We ignore any ( x , y , k ) whose ( x , y ) pair has already been chosen with a different k value for the same data or label distance . We apply T in order to exclude ( x , y , k ) ’ s that are not worth mixing ( i.e. , we make them mix with 0 NNs ) . Although actor-critic methods ( Mnih et al. , 2016 ; Schulman et al. , 2017 ) improve on REINFORCE by training both value and policy networks , using the two networks may not be practical in our setting due to the high input dimensions . For the policy network , we can reduce its dimension using the fact that the policy ’ s probability can be calculated by multiplying per-example selection probabilities . Unfortunately , we cannot use the same trick for a value network because its output values are not probabilities . | This paper improves on the idea of mixup by selecting samples for mixup via a model that selects suitable pairs found using knn in a batch. In order to provide a learning signal to this discrete process, the authors apply REINFORCE, using the loss of the downstream regressor (not applied to classification tasks). They show promising results on a number of regression tasks. | SP:a77ff59b58169bb0c0e2409a272ca73b310898e4
MixRL: Data Mixing Augmentation for Regression using Reinforcement Learning | 1 INTRODUCTION . As machine learning ( ML ) becomes widely used in critical applications including manufacturing and finance , data augmentation for regression becomes essential as it provides an opportunity to improve model performance without additional data collection . In comparison to classification tasks like object detection in images , the goal of regression is to predict one or more real numbers . To emphasize the importance of data augmentation in regression , we provide a case study of semiconductor manufacturing . Here a common quality check is to measure the layer thicknesses of a 3-dimensional semiconductor and see if they are even . However , directly measuring each thickness results in destroying the semiconductor itself , so a recently-common approach is to take an indirect measurement by applying light waves on the semiconductor , measuring the spectrum of wavelengths that bounce back from all the layers , and use ML to predict the layer thicknesses from the spectrum data ( see Fig . 4 in Sec . 4 for an illustration ) . With enough spectrum data and thickness information , ML models can be trained to accurately predict thicknesses from a spectrum . The main challenge is that there is not enough training data , and the only cost-effective solution is to augment small amounts of data that exist . Even a small improvement in model performance from the data augmentation has significant impact in this industry . In general , any regression task that predicts real values like emissions , stock prices , or even someone ’ s salary can also benefit from data augmentation . Most data augmentation techniques are designed for image classification . In particular , Mixup ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ; Yun et al. 
, 2019 ) is a popular data augmentation technique that is widely used for classification tasks , but is seldom used for regression because it assumes a distinct label space . The idea of Mixup is to mix pairs of examples using the key assumption that taking a linear interpolation between examples can be used to estimate the label of any examples in between . Mixup is known to effectively regularize the model . More recently , Manifold Mixup ( Verma et al. , 2019 ) has been proposed to improve the hidden representation and decision boundaries where two examples are mixed in multiple hidden layers of neural networks . However , the Mixup techniques are not readily applicable to a regression setting because the key linearity assumption does not necessarily hold . Since the label space is continuous , taking a linear interpolation of examples that are very different either data-wise or label-wise may result in arbitrarily-incorrect labels as shown in Fig . 1a . As a result , the linearity assumption only holds to a certain extent , and the degree may vary for each example . Moreover , other data augmentation techniques for classification including image processing ( e.g. , flipping or rotating ) and generative models ( e.g. , GAN ( Goodfellow et al. , 2014 ) and VAE ( Kingma & Welling , 2014 ) ) are even less applicable to a regression setting ( see Sec . 5 ) . We propose MixRL , a data mixing augmentation framework that is the first to tailor Mixup for regression tasks using reinforcement learning . MixRL uses a stricter linearity assumption where it only holds within a certain data or label distance . These distance limits may vary by example , and we formulate the problem of learning for each example how many nearest neighbors it should be mixed with . MixRL employs a meta learning framework that estimates how valuable mixing an example is for reducing the model loss on a small validation set using Monte Carlo policy gradient reinforcement learning . 
MixRL ’ s framework is inspired by the recent Data Valuation using Reinforcement Learning ( DVRL ) framework ( Yoon et al. , 2020 ) , which solves the different problem of measuring how individual examples contribute to model performance without any mixing involved . Fig . 1b shows how limiting the nearest neighbors to mix is better than mixing with all neighbors as in classification . To see if the augmentation is useful , we train simple models on the original four examples ( i.e. , no augmentation ) , the augmented data in Fig . 1a , and the augmented data in Fig . 1b . Evaluating the models on 20 random test examples results in Root Mean Square Error ( RMSE ; see Sec . 4 ) values of 0.2967 , 0.5615 , and 0.1834 , respectively , where a lower RMSE is better . We thus conclude that carefully mixing examples is important for improving regression performance . Experiments conducted on real and synthetic datasets show that MixRL shows better model performances relative to baselines , especially when the linearity is limited , and the mixing must be done selectively . In addition , MixRL only requires small validation sets and scales to large training sets . 2 LIMITED LINEARITY IN DATA AND LABEL SPACE FOR REGRESSION . We explain why the key linearity assumption used for Mixup in classification has limitations in a regression setting . In classification , the labels are discrete where many examples may have the same label . The original version of Mixup ( Zhang et al. , 2018 ) is to take a linear interpolation between any pair of examples xi and xj with the labels yi and yj to produce the new example λxi+ ( 1−λ ) xj with the label λyi + ( 1 − λ ) yj where λ ∼ Beta ( α , α ) . The linearity assumption turns out to be reasonable because the label difference between examples is only 0 or 1 and thus not that sensitive to the data difference . In contrast , the labels in regression are in a continuous space . 
Although there is still a many-to-one relationship where multiple examples may have the same label , the degree is much smaller than in classification . As a result , when two examples are mixed , the interpolated label can be arbitrarily different than the actual label , e.g. , mixing the points a and d in Fig . 1a results in a label nowhere near the actual label . In Sec . 4.1 , we also show empirical results where the label error increases for larger data or label distances . Furthermore , mixing examples with larger data or label distances tend to have increasingly-negative effects on the model trained on the augmented training set . Figs . 2a and 2b show the model accuracies using RMSE for the Product dataset ( described in Sec . 4 ) when mixing examples with different ranges of distances . Regardless of adjusting the label or data distance , there are diminishing benefits for larger distances . How do we limit the data and label distances to improve Mixup for regression ? One approach is to only limit the data distance , which limits the label distance as well . Suppose that the regression function f is continuous where limx→c f ( x ) = f ( c ) for any x and c within the domain of f . We show that a short-enough data distance sufficiently reduces the label distance as well . Given f ’ s domain D and limx→c f ( x ) = L , the following is known to hold : ∀ , ∃δ s.t . ∀x ∈ D , if |x − c| < δ , then |f ( x ) −L| < . We can use this result to prove that ∀ , ∃δ s.t . ∀xi , xj ∈ D , if αxi + ( 1− α ) xj = c , 0 ≤ α ≤ 1 , and |xi − xj | < δ then the absolute difference between the mixed example ’ s xj value and L is small where |αf ( xi ) + ( 1 − α ) f ( xj ) − L| = |α ( f ( xi ) − L ) + ( 1 − α ) ( f ( xj ) − L ) | ≤ α|f ( xi ) − L|+ ( 1− α ) |f ( xj ) − L| < α + ( 1− α ) = . 
We cannot do the converse and limit the data distance by limiting the label distance, because there is a many-to-one mapping from data to labels, which means that two different examples may have identical labels (e.g., Fig. 1a's a and d). However, limiting the data distance to also limit the label distance may be too restrictive, because there is not much correlation between the data and label distances in real data. Fig. 2c shows how the data distance relates to the label distance for the Product dataset. Even for small data distances, the label distance has a large range, which means that the data distance would have to be extremely limited. Hence, our solution is to limit the data and/or label distance as needed instead of just the data distance. This approach turns out to be more practical, as we demonstrate in Sec. 4.4.

3 MIXRL

The goal of MixRL is to identify which examples to mix with which nearest neighbors. Instead of finding the actual distance limits themselves, we solve the equivalent problem of finding the number of nearest neighbors to mix per example, for convenience. We use reinforcement learning because there is no training data on how mixing each example affects the model performance, and the reward function is non-differentiable, as we explain below. MixRL's framework is inspired by DVRL (Yoon et al., 2020), but we solve the different problem of mixing examples and address new issues.

3.1 POLICY OPTIMIZATION REINFORCEMENT LEARNING

Finding the optimal policy involves taking the gradient of the objective function J(θ) = E_{π_θ}[R], where π is the policy, θ is its parameters, and R is the reward function. For MixRL, we would like to minimize the regression model loss on a small validation set when mixing a batch of examples. However, the validation loss is computed using the regression model, which does not involve θ. Hence we cannot analytically compute the gradient of the reward function with respect to θ.
Instead, we use the REINFORCE (Williams, 1992) policy gradient algorithm, a Monte Carlo method that is also widely used for data valuation (Yoon et al., 2020) and neural architecture search (Zoph & Le, 2017). It is known that ∇_θ J(θ) can be approximated (see Sec. A.1) as:

∇_θ J(θ) ≈ (1/m) Σ_{i=1}^{m} R(τ^i) Σ_{t=0}^{T−1} ∇_θ log π_θ(a^i_t | s^i_t)   (1)

where τ^i is the i-th state-action trajectory under policy π_θ, m is the number of sample trajectories, T is the number of actions taken in a path, s^i_t is a state at time t, and a^i_t is an action at time t. In our setting of minimizing loss, REINFORCE performs gradient descent for each example, decreasing θ by α∇_θ J(θ) where α is a learning rate. Although the estimated gradient is unbiased, it is known to have high variance, which we reduce using baseline techniques (Sutton & Barto, 2018).

3.2 FRAMEWORK

We define the notations used in MixRL's framework shown in Fig. 3. Let D = {(x_i, y_i)}_{i=1}^{S} ∼ P be the training set, where x_i ∈ X is a d-dimensional input example and y_i ∈ Y is an e-dimensional label. Let D^v = {(x^v_i, y^v_i)}_{i=1}^{V} ∼ P^t be the validation set, where P^t is the distribution of the test set, which is not necessarily the same as P. Let f_φ be a regression model, and L the loss function that returns a performance score comparing f_φ(x_i) with the true label y_i using Mean Square Error (MSE). We assume a list N of possible data and label nearest-neighbor (NN) options that can be mixed with an example. For instance, N could contain the options "1 data NN", "2 data NNs", and "2 label NNs". The more fine-grained the NN options are, the more precisely MixRL can determine the optimal number of NNs to mix per example. We do not add a "0 NN" option because selecting and not selecting it have identical effects, which makes the policy network training unstable because nothing can be learned.
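As a concrete (and deliberately tiny) instance of the estimator in Eq. 1, the sketch below estimates the policy gradient for a one-step episode (T = 1) with a single Bernoulli select/skip action, mirroring MixRL's per-example decision; the sigmoid parameterisation and all names are our illustrative assumptions:

```python
import math, random

def reinforce_grad(theta, reward_fn, m=2000, seed=0):
    """Monte Carlo estimate of grad_theta E[R] for a one-step Bernoulli policy
    pi_theta(a=1) = sigmoid(theta), following Eq. (1) with T = 1."""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-theta))
    total = 0.0
    for _ in range(m):
        a = 1 if rng.random() < p else 0
        # grad of log pi_theta(a) is (a - p) under the sigmoid parameterisation
        total += reward_fn(a) * (a - p)
    return total / m

# A reward that favours a = 1 yields a positive gradient of E[R] w.r.t. theta.
g = reinforce_grad(0.0, lambda a: float(a))
assert g > 0
```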
Instead, we support this option separately by excluding examples that are not worth mixing, as we explain later in this section. The possible NN options can be represented as a one-hot encoding vector of |N| values. We now define the states and actions, which is an important design choice of MixRL. A state s_t is a batch of examples D^b = {((x_i, y_i), k_i)}_{i=1}^{B} where {(x_i, y_i)}_{i=1}^{B} ⊆ D and each k_i is an index of an NN option in N that specifies how many data or label NNs x_i should be mixed with. An action a_t is then choosing D^m ⊆ D^b where each (x_i, y_i) in D^m is mixed with its N[k_i] NNs in D. A policy π_θ(D^m | D^b) returns the probability of selecting D^m at state D^b. For each episode, MixRL selects a batch (state) once and chooses a subset of the batch to obtain a reward (action) once. Since there is only one time step, the transition function does not play a role.

A naïve implementation of the policy network would have an input dimension of B × (d + e + |N|) and an output dimension of 2^B, which may be too large to train for large batch sizes. For example, in a typical setting of B = 1000, d = 100, e = 1, and |N| = 10, the input and output dimensions become 111,000 and 2^1000, respectively.

Algorithm 1: Pseudo code for Mixup value network training.
Input: Training set D, validation set D^v, nearest neighbor options N, learning rate α, reward scaling constant C, moving average window W
Output: Mixup value network h_θ
Initialize θ, Base = 0;
while until convergence do
    Sample D^b = {((x_i, y_i), k_i)}_{i=1}^{B} ∼ P × Uniform(N);
    D^m = ∅;
    for ((x_i, y_i), k_i) ∈ D^b do
        Add ((x_i, y_i), k_i) to D^m with probability h_θ(x_i, y_i, k_i);
    Train regression model f_φ with initialized φ on D^b ∪ Mix(D^m, D, N);
    Loss = (1/V) Σ_{i=1}^{V} L(f_φ(x^v_i), y^v_i);
    θ = θ − α · C · (Loss − Base) · ∇_θ log π_θ(D^m | D^b);
    Base = ((W−1)/W) · Base + (1/W) · Loss;
return h_θ;
Instead, we can significantly reduce the network size by assuming independence among examples and decomposing the policy network's prediction into two phases. First, a Mixup value network h_θ(x, y, k) is used to estimate the probability of an example being mixed with k NNs. Next, we randomly select which examples to mix according to the probabilities. The value of π_θ(D^m | D^b) is thus

π_θ(D^m | D^b) = ∏_{(x,y,k) ∈ D^m} h_θ(x, y, k) · ∏_{(x,y,k) ∈ D^b \ D^m} [1 − h_θ(x, y, k)].

Also, h_θ has input and output dimensions of only d + e + |N| and 1, respectively, and is practical. The objective function J(θ) is the validation set loss l(φ) of f_φ trained on D^b ∪ Mix(D^m, D, N) (either from scratch or from a pre-trained state), where Mix(D^m, D, N) mixes each (x_i, y_i) in D^m with its N[k_i] NNs in D:

J(θ) = l(φ) = E_{(x^v_i, y^v_i) ∼ P^t} [L(f_φ(x^v_i), y^v_i)]   (2)

Using Eq. 1, the gradient of Eq. 2 can be approximated using the validation set D^v as:

∇_θ J(θ) ≈ (1/V) Σ_{i=1}^{V} L(f_φ(x^v_i), y^v_i) · ∇_θ log π_θ(D^m | D^b)   (3)

where ∇_θ log π_θ(D^m | D^b) = ∇_θ [ Σ_{(x,y,k) ∈ D^m} log h_θ(x, y, k) + Σ_{(x,y,k) ∈ D^b \ D^m} log(1 − h_θ(x, y, k)) ].

We also use reward scaling (Henderson et al., 2018) to further improve the training. Algorithm 1 shows the pseudo code for training h_θ. The computational complexity is not directly related to the training set size, but depends on the number of iterations needed to train h_θ and how long each iteration takes. Another factor is the number of possible NN options |N|, where a higher number may result in slower training. We show in Sec. 4.2 that MixRL scales to large training sets. Once we train h_θ, we choose the (x, y, k)'s with the highest h_θ(x, y, k) values of at least a threshold T. We optimize T's value using the validation set (see Sec. B.1).
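Because π_θ(D^m | D^b) factorises over examples, log π_θ is a sum of per-example Bernoulli log-probabilities, which keeps the gradient in Eq. 3 cheap to evaluate. A minimal sketch (names are ours):

```python
import math

def log_pi(h_values, selected):
    """log pi_theta(Dm | Db) for the factorised selection policy: each
    example i in the batch is picked independently with prob h_values[i]."""
    return sum(math.log(h) if s else math.log(1.0 - h)
               for h, s in zip(h_values, selected))

h = [0.9, 0.2, 0.5]
lp = log_pi(h, [True, False, True])  # pick examples 0 and 2
# pi = 0.9 * (1 - 0.2) * 0.5 = 0.36
assert abs(math.exp(lp) - 0.36) < 1e-12
```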
We ignore any ( x , y , k ) whose ( x , y ) pair has already been chosen with a different k value for the same data or label distance . We apply T in order to exclude ( x , y , k ) ’ s that are not worth mixing ( i.e. , make them mix with 0 NNs ) . Although actor-critic methods ( Mnih et al. , 2016 ; Schulman et al. , 2017 ) improve on REINFORCE by training both value and policy networks , using the two networks may not be practical in our setting due to the high input dimensions . For a policy network , we can reduce its dimension using the fact that the log probabilities can be calculated by multiplying selection probabilities . Unfortunately , we can not use the same trick for a value network because its output values are not probabilities . | The authors propose MixRL to improve upon mixup in regression settings. MixRL is used to impose a proximity constraint on the input/output pairs that are mixed during mixup-based data augmentation, by predicting how many nearest neighbors to utilize, from a small set of pre-specified options, based on feedback from evaluating the validation set. Consistent but small gains over mixup and manifold mixup are realized on several datasets. | SP:a77ff59b58169bb0c0e2409a272ca73b310898e4 |
MixRL: Data Mixing Augmentation for Regression using Reinforcement Learning | 1 INTRODUCTION . As machine learning ( ML ) becomes widely used in critical applications including manufacturing and finance , data augmentation for regression becomes essential as it provides an opportunity to improve model performance without additional data collection . In comparison to classification tasks like object detection in images , the goal of regression is to predict one or more real numbers . To emphasize the importance of data augmentation in regression , we provide a case study of semiconductor manufacturing . Here a common quality check is to measure the layer thicknesses of a 3-dimensional semiconductor and see if they are even . However , directly measuring each thickness results in destroying the semiconductor itself , so a recently-common approach is to take an indirect measurement by applying light waves on the semiconductor , measuring the spectrum of wavelengths that bounce back from all the layers , and use ML to predict the layer thicknesses from the spectrum data ( see Fig . 4 in Sec . 4 for an illustration ) . With enough spectrum data and thickness information , ML models can be trained to accurately predict thicknesses from a spectrum . The main challenge is that there is not enough training data , and the only cost-effective solution is to augment small amounts of data that exist . Even a small improvement in model performance from the data augmentation has significant impact in this industry . In general , any regression task that predicts real values like emissions , stock prices , or even someone ’ s salary can also benefit from data augmentation . Most data augmentation techniques are designed for image classification . In particular , Mixup ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ; Yun et al. 
, 2019 ) is a popular data augmentation technique that is widely used for classification tasks , but is seldom used for regression because it assumes a distinct label space . The idea of Mixup is to mix pairs of examples using the key assumption that taking a linear interpolation between examples can be used to estimate the label of any examples in between . Mixup is known to effectively regularize the model . More recently , Manifold Mixup ( Verma et al. , 2019 ) has been proposed to improve the hidden representation and decision boundaries where two examples are mixed in multiple hidden layers of neural networks . However , the Mixup techniques are not readily applicable to a regression setting because the key linearity assumption does not necessarily hold . Since the label space is continuous , taking a linear interpolation of examples that are very different either data-wise or label-wise may result in arbitrarily-incorrect labels as shown in Fig . 1a . As a result , the linearity assumption only holds to a certain extent , and the degree may vary for each example . Moreover , other data augmentation techniques for classification including image processing ( e.g. , flipping or rotating ) and generative models ( e.g. , GAN ( Goodfellow et al. , 2014 ) and VAE ( Kingma & Welling , 2014 ) ) are even less applicable to a regression setting ( see Sec . 5 ) . We propose MixRL , a data mixing augmentation framework that is the first to tailor Mixup for regression tasks using reinforcement learning . MixRL uses a stricter linearity assumption where it only holds within a certain data or label distance . These distance limits may vary by example , and we formulate the problem of learning for each example how many nearest neighbors it should be mixed with . MixRL employs a meta learning framework that estimates how valuable mixing an example is for reducing the model loss on a small validation set using Monte Carlo policy gradient reinforcement learning . 
MixRL's framework is inspired by the recent Data Valuation using Reinforcement Learning (DVRL) framework (Yoon et al., 2020), which solves the different problem of measuring how individual examples contribute to model performance, without any mixing involved. Fig. 1b shows how limiting the nearest neighbors to mix is better than mixing with all neighbors as in classification. To see if the augmentation is useful, we train simple models on the original four examples (i.e., no augmentation), the augmented data in Fig. 1a, and the augmented data in Fig. 1b. Evaluating the models on 20 random test examples results in Root Mean Square Error (RMSE; see Sec. 4) values of 0.2967, 0.5615, and 0.1834, respectively, where a lower RMSE is better. We thus conclude that carefully mixing examples is important for improving regression performance. Experiments conducted on real and synthetic datasets show that MixRL achieves better model performance relative to baselines, especially when the linearity is limited and the mixing must be done selectively. In addition, MixRL only requires small validation sets and scales to large training sets.

2 LIMITED LINEARITY IN DATA AND LABEL SPACE FOR REGRESSION

We explain why the key linearity assumption used for Mixup in classification has limitations in a regression setting. In classification, the labels are discrete, and many examples may have the same label. The original version of Mixup (Zhang et al., 2018) takes a linear interpolation between any pair of examples x_i and x_j with labels y_i and y_j to produce the new example λx_i + (1−λ)x_j with the label λy_i + (1−λ)y_j, where λ ∼ Beta(α, α). The linearity assumption turns out to be reasonable because the label difference between examples is only 0 or 1 and thus not that sensitive to the data difference. In contrast, the labels in regression are in a continuous space.
Although there is still a many-to-one relationship where multiple examples may have the same label, the degree is much smaller than in classification. As a result, when two examples are mixed, the interpolated label can be arbitrarily different from the actual label; e.g., mixing the points a and d in Fig. 1a results in a label nowhere near the actual label. In Sec. 4.1, we also show empirical results where the label error increases for larger data or label distances. Furthermore, mixing examples with larger data or label distances tends to have increasingly negative effects on the model trained on the augmented training set. Figs. 2a and 2b show the model accuracies using RMSE for the Product dataset (described in Sec. 4) when mixing examples with different ranges of distances. Regardless of adjusting the label or data distance, there are diminishing benefits for larger distances.

How do we limit the data and label distances to improve Mixup for regression? One approach is to only limit the data distance, which limits the label distance as well. Suppose that the regression function f is continuous, i.e., lim_{x→c} f(x) = f(c) for any x and c within the domain of f. We show that a short-enough data distance sufficiently reduces the label distance as well. Given f's domain D and lim_{x→c} f(x) = L, the following is known to hold: ∀ε > 0, ∃δ > 0 s.t. ∀x ∈ D, if |x − c| < δ, then |f(x) − L| < ε. We can use this result to prove that ∀ε > 0, ∃δ > 0 s.t. ∀x_i, x_j ∈ D, if αx_i + (1−α)x_j = c, 0 ≤ α ≤ 1, and |x_i − x_j| < δ, then the absolute difference between the mixed example's label value and L is small: |αf(x_i) + (1−α)f(x_j) − L| = |α(f(x_i) − L) + (1−α)(f(x_j) − L)| ≤ α|f(x_i) − L| + (1−α)|f(x_j) − L| < αε + (1−α)ε = ε.
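To make the "data NN vs. label NN" distinction concrete, the hypothetical sketch below shows that an example's nearest neighbor in data space can differ from its nearest neighbor in label space (the four toy points loosely mimic Fig. 1a; all names are ours):

```python
def nearest_neighbors(points, idx, k, dist):
    """Indices of the k nearest neighbors of points[idx] under `dist`."""
    ranked = [(dist(points[idx], p), j)
              for j, p in enumerate(points) if j != idx]
    return [j for _, j in sorted(ranked)[:k]]

# Toy (x, y) pairs: the ends of the range have distant data but can share labels.
data = [((0.0,), 0.0), ((0.1,), 0.9), ((0.9,), 0.1), ((1.0,), 1.0)]
data_dist = lambda a, b: abs(a[0][0] - b[0][0])
label_dist = lambda a, b: abs(a[1] - b[1])

# For the first point, the data NN and the label NN are different examples.
assert nearest_neighbors(data, 0, 1, data_dist) == [1]
assert nearest_neighbors(data, 0, 1, label_dist) == [2]
```

This is why MixRL treats "k data NNs" and "k label NNs" as separate options rather than collapsing them into one distance limit.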
We cannot do the converse and limit the data distance by limiting the label distance, because there is a many-to-one mapping from data to labels, which means that two different examples may have identical labels (e.g., Fig. 1a's a and d). However, limiting the data distance to also limit the label distance may be too restrictive, because there is not much correlation between the data and label distances in real data. Fig. 2c shows how the data distance relates to the label distance for the Product dataset. Even for small data distances, the label distance has a large range, which means that the data distance would have to be extremely limited. Hence, our solution is to limit the data and/or label distance as needed instead of just the data distance. This approach turns out to be more practical, as we demonstrate in Sec. 4.4.

3 MIXRL

The goal of MixRL is to identify which examples to mix with which nearest neighbors. Instead of finding the actual distance limits themselves, we solve the equivalent problem of finding the number of nearest neighbors to mix per example, for convenience. We use reinforcement learning because there is no training data on how mixing each example affects the model performance, and the reward function is non-differentiable, as we explain below. MixRL's framework is inspired by DVRL (Yoon et al., 2020), but we solve the different problem of mixing examples and address new issues.

3.1 POLICY OPTIMIZATION REINFORCEMENT LEARNING

Finding the optimal policy involves taking the gradient of the objective function J(θ) = E_{π_θ}[R], where π is the policy, θ is its parameters, and R is the reward function. For MixRL, we would like to minimize the regression model loss on a small validation set when mixing a batch of examples. However, the validation loss is computed using the regression model, which does not involve θ. Hence we cannot analytically compute the gradient of the reward function with respect to θ.
Instead, we use the REINFORCE (Williams, 1992) policy gradient algorithm, a Monte Carlo method that is also widely used for data valuation (Yoon et al., 2020) and neural architecture search (Zoph & Le, 2017). It is known that ∇_θ J(θ) can be approximated (see Sec. A.1) as:

∇_θ J(θ) ≈ (1/m) Σ_{i=1}^{m} R(τ^i) Σ_{t=0}^{T−1} ∇_θ log π_θ(a^i_t | s^i_t)   (1)

where τ^i is the i-th state-action trajectory under policy π_θ, m is the number of sample trajectories, T is the number of actions taken in a path, s^i_t is a state at time t, and a^i_t is an action at time t. In our setting of minimizing loss, REINFORCE performs gradient descent for each example, decreasing θ by α∇_θ J(θ) where α is a learning rate. Although the estimated gradient is unbiased, it is known to have high variance, which we reduce using baseline techniques (Sutton & Barto, 2018).

3.2 FRAMEWORK

We define the notations used in MixRL's framework shown in Fig. 3. Let D = {(x_i, y_i)}_{i=1}^{S} ∼ P be the training set, where x_i ∈ X is a d-dimensional input example and y_i ∈ Y is an e-dimensional label. Let D^v = {(x^v_i, y^v_i)}_{i=1}^{V} ∼ P^t be the validation set, where P^t is the distribution of the test set, which is not necessarily the same as P. Let f_φ be a regression model, and L the loss function that returns a performance score comparing f_φ(x_i) with the true label y_i using Mean Square Error (MSE). We assume a list N of possible data and label nearest-neighbor (NN) options that can be mixed with an example. For instance, N could contain the options "1 data NN", "2 data NNs", and "2 label NNs". The more fine-grained the NN options are, the more precisely MixRL can determine the optimal number of NNs to mix per example. We do not add a "0 NN" option because selecting and not selecting it have identical effects, which makes the policy network training unstable because nothing can be learned.
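The baseline technique mentioned above can be illustrated on a one-step Bernoulli policy: subtracting a constant baseline from the reward leaves the REINFORCE estimate unbiased while reducing its variance. A hedged sketch, with the sigmoid policy and all constants being our illustrative choices:

```python
import math, random

def grad_samples(theta, baseline, n=4000, seed=1):
    """Per-sample REINFORCE gradients (R - baseline) * d/dtheta log pi_theta(a)
    for the one-step Bernoulli policy pi_theta(a=1) = sigmoid(theta)."""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-theta))
    return [(float(a) - baseline) * (a - p)
            for a in (1 if rng.random() < p else 0 for _ in range(n))]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

raw = grad_samples(0.0, baseline=0.0)
based = grad_samples(0.0, baseline=0.5)  # baseline near E[R]
# Same mean (up to sampling noise), much smaller variance with a good baseline.
assert variance(based) < variance(raw)
```

Algorithm 1 plays the same trick with its moving-average term `Base`.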
Instead, we support this option separately by excluding examples that are not worth mixing, as we explain later in this section. The possible NN options can be represented as a one-hot encoding vector of |N| values. We now define the states and actions, which is an important design choice of MixRL. A state s_t is a batch of examples D^b = {((x_i, y_i), k_i)}_{i=1}^{B} where {(x_i, y_i)}_{i=1}^{B} ⊆ D and each k_i is an index of an NN option in N that specifies how many data or label NNs x_i should be mixed with. An action a_t is then choosing D^m ⊆ D^b where each (x_i, y_i) in D^m is mixed with its N[k_i] NNs in D. A policy π_θ(D^m | D^b) returns the probability of selecting D^m at state D^b. For each episode, MixRL selects a batch (state) once and chooses a subset of the batch to obtain a reward (action) once. Since there is only one time step, the transition function does not play a role.

A naïve implementation of the policy network would have an input dimension of B × (d + e + |N|) and an output dimension of 2^B, which may be too large to train for large batch sizes. For example, in a typical setting of B = 1000, d = 100, e = 1, and |N| = 10, the input and output dimensions become 111,000 and 2^1000, respectively.

Algorithm 1: Pseudo code for Mixup value network training.
Input: Training set D, validation set D^v, nearest neighbor options N, learning rate α, reward scaling constant C, moving average window W
Output: Mixup value network h_θ
Initialize θ, Base = 0;
while until convergence do
    Sample D^b = {((x_i, y_i), k_i)}_{i=1}^{B} ∼ P × Uniform(N);
    D^m = ∅;
    for ((x_i, y_i), k_i) ∈ D^b do
        Add ((x_i, y_i), k_i) to D^m with probability h_θ(x_i, y_i, k_i);
    Train regression model f_φ with initialized φ on D^b ∪ Mix(D^m, D, N);
    Loss = (1/V) Σ_{i=1}^{V} L(f_φ(x^v_i), y^v_i);
    θ = θ − α · C · (Loss − Base) · ∇_θ log π_θ(D^m | D^b);
    Base = ((W−1)/W) · Base + (1/W) · Loss;
return h_θ;
Instead, we can significantly reduce the network size by assuming independence among examples and decomposing the policy network's prediction into two phases. First, a Mixup value network h_θ(x, y, k) is used to estimate the probability of an example being mixed with k NNs. Next, we randomly select which examples to mix according to the probabilities. The value of π_θ(D^m | D^b) is thus

π_θ(D^m | D^b) = ∏_{(x,y,k) ∈ D^m} h_θ(x, y, k) · ∏_{(x,y,k) ∈ D^b \ D^m} [1 − h_θ(x, y, k)].

Also, h_θ has input and output dimensions of only d + e + |N| and 1, respectively, and is practical. The objective function J(θ) is the validation set loss l(φ) of f_φ trained on D^b ∪ Mix(D^m, D, N) (either from scratch or from a pre-trained state), where Mix(D^m, D, N) mixes each (x_i, y_i) in D^m with its N[k_i] NNs in D:

J(θ) = l(φ) = E_{(x^v_i, y^v_i) ∼ P^t} [L(f_φ(x^v_i), y^v_i)]   (2)

Using Eq. 1, the gradient of Eq. 2 can be approximated using the validation set D^v as:

∇_θ J(θ) ≈ (1/V) Σ_{i=1}^{V} L(f_φ(x^v_i), y^v_i) · ∇_θ log π_θ(D^m | D^b)   (3)

where ∇_θ log π_θ(D^m | D^b) = ∇_θ [ Σ_{(x,y,k) ∈ D^m} log h_θ(x, y, k) + Σ_{(x,y,k) ∈ D^b \ D^m} log(1 − h_θ(x, y, k)) ].

We also use reward scaling (Henderson et al., 2018) to further improve the training. Algorithm 1 shows the pseudo code for training h_θ. The computational complexity is not directly related to the training set size, but depends on the number of iterations needed to train h_θ and how long each iteration takes. Another factor is the number of possible NN options |N|, where a higher number may result in slower training. We show in Sec. 4.2 that MixRL scales to large training sets. Once we train h_θ, we choose the (x, y, k)'s with the highest h_θ(x, y, k) values of at least a threshold T. We optimize T's value using the validation set (see Sec. B.1).
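The final selection step can be sketched as follows; note this is our simplified reading of the rule (keep, per (x, y) pair, only the highest-scoring NN option whose value-network score is at least T), not the paper's exact procedure:

```python
def select_to_mix(scores, T):
    """Keep the (x, y, k) triples whose value-network score is at least T,
    taking only the highest-scoring k option per (x, y) pair."""
    best = {}
    for (x, y, k), h in scores.items():
        if h >= T and ((x, y) not in best or h > best[(x, y)][1]):
            best[(x, y)] = (k, h)
    return {(x, y, k) for (x, y), (k, h) in best.items()}

# Hypothetical scores from a trained h_theta.
scores = {(0.0, 0.0, '1 data NN'): 0.9,
          (0.0, 0.0, '2 data NNs'): 0.4,
          (1.0, 1.0, '1 label NN'): 0.2}
# T = 0.3 keeps only the best option for (0.0, 0.0); (1.0, 1.0) is excluded,
# i.e., it is effectively mixed with 0 NNs.
assert select_to_mix(scores, 0.3) == {(0.0, 0.0, '1 data NN')}
```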
We ignore any ( x , y , k ) whose ( x , y ) pair has already been chosen with a different k value for the same data or label distance . We apply T in order to exclude ( x , y , k ) ’ s that are not worth mixing ( i.e. , make them mix with 0 NNs ) . Although actor-critic methods ( Mnih et al. , 2016 ; Schulman et al. , 2017 ) improve on REINFORCE by training both value and policy networks , using the two networks may not be practical in our setting due to the high input dimensions . For a policy network , we can reduce its dimension using the fact that the log probabilities can be calculated by multiplying selection probabilities . Unfortunately , we can not use the same trick for a value network because its output values are not probabilities . | To apply Mixup for regression tasks, the paper first utilizes the stricter assumption that linearity only holds within specific data or label distances for regression. Then this paper proposes a data mixing augmentation method called MixRL. The goal of MixRL is to identify which examples to mix with which nearest neighbors. MixRL employs a meta-learning framework that estimates how important mixing a sample is to minimize the model loss on a validation set using policy gradient reinforcement learning. | SP:a77ff59b58169bb0c0e2409a272ca73b310898e4 |
Near-Optimal Algorithms for Autonomous Exploration and Multi-Goal Stochastic Shortest Path | 1 INTRODUCTION . Reinforcement learning ( RL ) with a known state space has been studied in a wide range of settings ( e.g. , Schmidhuber , 1991 ; Oudeyer et al. , 2007 ; Oudeyer and Kaplan , 2009 ; Baranes and Oudeyer , 2009 ) . When the state space is large , it is difficult for a learning agent to discover the whole environment . Instead , the agent can only explore a small portion of the environment . At a high level , we hope that the agent can discover states near the initial state , expand the range of known states by exploration , and learn near-optimal goal-conditioned policies for the known states . Because the agent discovers its known states of the environment incrementally , this learning problem was named Autonomous Exploration ( AX ) ( Lim and Auer , 2012 ; Tarbouriech et al. , 2020 ) . The autonomous exploration problem generalizes the Stochastic Shortest Path ( SSP ) problem ( Bertsekas et al. , 2000 ) where the agent aims to reach a predefined goal state while minimizing its total expected cost . However , in the autonomous exploration setting , the agent aims to discover a set of reachable states in a large environment and find the optimal policies to reach them . The autonomous exploration formulation is applicable to an increasing number of real-world RL problems , ranging from navigation in mazes ( Devo et al. , 2020 ) to game playing ( Mnih et al. , 2013 ) . For example , in the maze navigation problem , a robot aims to follow a predefined path in an unknown environment , and the robot has to discover and expand the size of regions known to itself autonomously without prior knowledge of the environment . This procedure also resembles some biological learning processes . See Lim and Auer ( 2012 ) for more discussions . Related Work . 
The setting of autonomous exploration was introduced by Lim and Auer (2012), who gave the first algorithm, UcbExplore, with a sample complexity Õ(L^3 S_{(1+ε)L} A / ε^3). Here L denotes the distance within which we hope the learning agent to discover states, S_{(1+ε)L} denotes the total number of states within distance (1+ε)L from the starting state, A denotes the size of the action space, and ε denotes the error that we can tolerate. Recent work by Tarbouriech et al. (2020) designed the DisCo algorithm with a sample complexity bound Õ(L^3 S^2_{(1+ε)L} A / ε^2),¹ which improves the 1/ε dependency at the cost of a worse dependency on S_{(1+ε)L}. In this paper, we present new algorithms to further improve the sample complexity.

1.1 CONTRIBUTIONS

In this paper, we take important steps toward resolving the autonomous exploration problem. We compare our results with prior ones in Table 1,² and we summarize our contributions below:

1. We propose a new state discovery algorithm, Value-Optimistic Incremental State Discovery (VOISD), which uses a value-optimistic method (Neu and Pike-Burke, 2020) to estimate the expected cost of the optimal policy to reach these states. We prove this algorithm enjoys an Õ(L^3 S_{(1+ε)L} A / ε^2) sample complexity, which improves prior results in Lim and Auer (2012) and Tarbouriech et al. (2020).
2. We connect the autonomous exploration problem to a new problem, multi-goal SSP, and propose a new algorithm Re-MG-SSP that 1) satisfies a stronger criterion³ and 2) serves as a burn-in step for the next algorithm.
3. We further propose Value-Aware Autonomous Exploration (VALAE), which uses VOISD and Re-MG-SSP as initial steps and then uses the estimated value functions to guide our exploration.

¹We translate their absolute error ε to the relative error εL. Their bound has a refined form Õ(L^3 S_{(1+ε)L} Γ_{(1+ε)L} A / ε^2), where Γ_{(1+ε)L} is the branching factor, which in the worst case is S_{(1+ε)L}.
By doing so, for each state-action pair (s, a), we derive an (s, a)-dependent sample complexity bound, which can exploit the variance information and yield a sharper sample complexity bound than the bounds for VOISD and Re-MG-SSP. In particular, VALAE improves the dependency on L from cubic to linear.
4. We give the first lower bound for the autonomous exploration problem. This lower bound shows VALAE is nearly minimax-optimal when S_L grows polynomially with respect to L.

1.2 MAIN DIFFICULTIES AND TECHNIQUE OVERVIEW

While our work borrows ideas from prior work on autonomous exploration (Lim and Auer, 2012; Tarbouriech et al., 2020) and recent advances in SSP (Tarbouriech et al., 2021), we develop new techniques to overcome additional difficulties that are unique to autonomous exploration.

Dependence between the Estimated Transition Matrix and Discovered States. Our algorithm incrementally adds new states to the set of discovered states K. To obtain a tight dependency on S_{(1+ε)L}, similar to the standard RL setting, one needs to use concentration on (P̂_{s,a} − P_{s,a}) V*_{K,g} instead of ‖P̂_{s,a} − P_{s,a}‖_1 (used by Tarbouriech et al. (2020)), where P̂_{s,a} is the estimated transition, P_{s,a} is the true transition, and V*_{K,g} is the value function of the optimal policy going to the state g restricted on the discovered states K. The main challenge is that the set of discovered states K depends on the samples collected, and thus V*_{K,g} depends on P̂_{s,a}. We could use a union bound over all possible K, but the number of possible K is exponential in the number of states S.

²In Lim and Auer (2012) and Tarbouriech et al. (2020), the cost is 1 uniformly for all state-action pairs. In this paper, we allow non-uniform costs; for fair comparison, Table 1 considers uniform costs.
³See Sect. 2 for different criteria.
Our main technique is to construct a series of sets of states {s_0} = K_0 ⊆ K_1 ⊆ · · · ⊆ K_Z = S→L, where K_{z+1} is constructed by adding all the states that are reachable from s_0, by some policy restricted on K_z, with expected cost no more than L. This series is only polynomially large, so after applying the union bound we only have a logarithmic overhead. In order to use concentrations only on this sequence of sets, we also need to develop a modified definition of optimism. See Appendix B and Appendix C.2 for details.

Connection between Autonomous Exploration and Multi-Goal SSP. In the standard RL setting, it is known that in order to obtain a tight dependency on L, one needs to exploit the variance information in the value function (Azar et al., 2017). However, in autonomous exploration, it is unclear how to exploit the variance information, because even which state is in S→L is unknown. To this end, we first consider a simpler problem, multi-goal SSP, and extend the technique for single-goal SSP (Tarbouriech et al., 2021) to this new problem (cf. Alg. 3). We also present a reduction from autonomous exploration to multi-goal SSP (cf. Alg. 2). These two techniques together yield the first tight dependency on L for autonomous exploration.

2 PRELIMINARIES

In this section, we introduce basic definitions and our problem setup.

Notations. For any two vectors X, Y ∈ R^S, we write their inner product as XY := Σ_{s∈S} X(s)Y(s). We denote ‖X‖_∞ := max_{s∈S} |X(s)|, and if X is a probability distribution on S, we define V(X, Y) := Σ_{s∈S} X(s)Y(s)^2 − (Σ_{s∈S} X(s)Y(s))^2.

Markov Decision Process. We consider an MDP M := ⟨S, A, P, c, s_0⟩, where S is the state space with size S, A is the action space with size A, and s_0 ∈ S is the initial state. In state s, taking action a incurs a cost drawn i.i.d.
from a distribution on [c_min, 1] (where c_min > 0) with expectation c(s, a), and the agent transitions to the next state s′ with probability P(s′|s, a). For convenience, we use P_{s,a} and P_{s,a,s′} to denote P(·|s, a) and P(s′|s, a), respectively. A deterministic and stationary policy π : S → A is a mapping, and an agent following the policy π takes action π(s) at state s. For a fixed state g ∈ S, we define the random variable t^π_g(s) as the number of steps it takes to reach state g starting from state s when executing policy π, i.e., t^π_g(s) := inf{t ≥ 0 : s_{t+1} = g | s_1 = s, π}. A policy π is a proper policy if for any state s ∈ S, t^π_g(s) < +∞ with probability 1. We then define the value function of a proper policy π with respect to the goal state g, and its corresponding Q-function, as follows: V^π_g(s) = E[ Σ_{t=1}^{t^π_g(s)} c_t(s_t, π(s_t)) | s_1 = s ], Q^π_g(s, a) = E[ Σ_{t=1}^{t^π_g(s)} c_t(s_t, π(s_t)) | s_1 = s, π(s_1) = a ], where c_t ∈ [c_min, 1] is the instantaneous cost at step t incurred by the state-action pair (s_t, π(s_t)), and the expectation is taken over the random sequence of states generated by executing π starting from state s ∈ S. Note that V^π_g(g) = 0. We use π_Q to denote the greedy policy over a vector Q ∈ R^{S×A}, i.e., π_Q(s) := argmin_{a∈A} Q(s, a). For a fixed state g ∈ S, we denote by V*_g the value function of the optimal policy with respect to goal state g, and we list some important properties of V*_g: there exists a stationary, deterministic, and proper policy π*, such that its value function V*_g := V^{π*}_g and its corresponding Q-function Q*_g := Q^{π*}_g satisfy the following Bellman optimality equations (cf. Lem. 1): Q*_g(s, a) = c(s, a) + P_{s,a} V*_g, V*_g(s) = min_{a∈A} Q*_g(s, a), ∀(s, a) ∈ S × A. Autonomous Exploration. Now we introduce the Autonomous Exploration problem.
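The Bellman optimality equations just stated can be solved by simple value iteration when P and c are known. A minimal sketch on a hypothetical 4-state chain (all specifics here are illustrative, not from the paper):

```python
import numpy as np

# Toy value iteration for the goal-conditioned Bellman optimality equations,
# on a hypothetical 4-state chain. Goal g = 3; action 1 advances w.p. 0.9.
S, A, g = 4, 2, 3
P = np.zeros((S, A, S))
for s in range(S):
    P[s, 0, s] = 1.0                      # action 0: stay put
    P[s, 1, min(s + 1, S - 1)] += 0.9     # action 1: advance w.p. 0.9 ...
    P[s, 1, s] += 0.1                     # ... otherwise stay
c = np.ones((S, A))                       # unit costs, within [c_min, 1]

V = np.zeros(S)
for _ in range(1000):
    Q = c + P @ V                         # Q*_g(s,a) = c(s,a) + P_{s,a} V*_g
    V_new = Q.min(axis=1)                 # V*_g(s)   = min_a Q*_g(s,a)
    V_new[g] = 0.0                        # boundary condition V*_g(g) = 0
    if np.abs(V_new - V).max() < 1e-12:
        V = V_new
        break
    V = V_new

pi = Q.argmin(axis=1)                     # greedy policy pi_Q
print(V)                                  # expected cost-to-goal; V[g] = 0
```

The greedy policy π_Q picks the advancing action at every non-goal state, and V decreases monotonically along the chain toward the goal.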
To formally discuss the setting , we need the following assumption on our MDP M . Assumption 1 . The action space contains a RESET action s.t . P ( s0|s , RESET ) = 1 for any s ∈ S . Moreover , taking RESET in any state s will incur a cost cRESET with probability 1 , where cRESET is a constant in [ cmin , 1 ] . Given any fixed length L ≥ 1 , the agent needs to learn the set of incrementally controllable states S→L . To introduce the concept of S→L , we first give the definition of policies restricted on a subset : Definition 1 ( Policy restricted on a subset ) . For any S ′ ⊆ S , a policy π is restricted on the set S ′ if π ( s ) = RESET for all s /∈ S ′ . Now we discuss the optimal policy restricted on a set of states K ⊆ S with respect to goal state g. We denote V ∗K , g ∈ RS as the value function of the optimal policy restricted on K with goal g , and Q∗K , g as the Q-function corresponding to V ∗ K , g . We consider the case that there exists at least one proper policy restricted on K with the goal state g. Then , V ∗K , g and Q∗K , g are finite , and they satisfy the following Bellman equations : Q∗K , g ( s , a ) = c ( s , a ) + Ps , aV ∗ K , g , ∀ ( s , a ) ∈ S ×A , V ∗K , g ( s ) = min a∈A Q∗K , g ( s , a ) , ∀s ∈ K , s 6= g , V ∗K , g ( s ) = Q ∗ K , g ( s , RESET ) = cRESET + V ∗ K , g ( s0 ) , ∀s /∈ K ∪ { g } , V ∗K , g ( g ) = 0 . We note that when K1 ⊆ K2 , for any g ∈ S , if V ∗K1 , g is finite , then V ∗ K2 , g is also finite , and we have V ∗K2 , g ≤ V ∗ K1 , g component-wise . And we note that for any s 6= g , we have mina∈AQ ∗ K , g ( s , a ) ≤ V ∗K , g ( s ) . Now we introduce the definition of incrementally controllable states S→L ( see Tarbouriech et al . ( 2020 ) for more intuitions on this definition . ) : Definition 2 ( Incrementally L-controllable states S→L ) . Let ≺ be any partial order on S . We denote S≺L as the set of states reachable from s0 with expected cost no more than L w.r.t . 
≺, which is defined as follows: • s0 ∈ S_L^≺; • if there is a policy π restricted on {s′ ∈ S_L^≺ : s′ ≺ s} such that V^π_s(s0) ≤ L, then s ∈ S_L^≺. The set of incrementally L-controllable states S→L is given by S→L = ∪_≺ S_L^≺. Learning Objective. In our setting, the learning agent knows the constants S, A, and c_min, but has no prior knowledge of the transition probability P or the cost function c(·,·) of the MDP M. We fix the length L ≥ 1 and an error parameter ε ∈ (0, 1]. A learning algorithm for the autonomous exploration problem should output a set of states K ⊆ S that satisfies: • S→L ⊆ K, i.e., the algorithm discovers all the states that we want to explore. The algorithm also outputs a set of policies {π_s}_{s∈K} that satisfy one of the following criteria: 1. (AX_L on S→L) ∀s ∈ S→L, V^{π_s}_s(s0) ≤ (1+ε)L; 2. (AX* on S→L) ∀s ∈ S→L, V^{π_s}_s(s0) ≤ V*_{S→L,s}(s0) + εL; 3. (AX_L on K) ∀s ∈ K, V^{π_s}_s(s0) ≤ (1+ε)L; 4. (AX* on K) ∀s ∈ K, V^{π_s}_s(s0) ≤ V*_{K,s}(s0) + εL. We note that AX* on K is stronger than both AX_L on S→L and AX* on S→L, but it is not necessarily stronger than AX_L on K, because we do not necessarily have V*_{K,s}(s0) ≤ L when s ∉ S→L. In the literature, AX_L on S→L was studied in the original paper that proposed the autonomous exploration problem (Lim and Auer, 2012), and the AX* on K condition was studied in Tarbouriech et al. (2020). We denote by T the total number of steps the agent uses, and by (s_t, a_t) the state-action pair at the t-th step. We denote by c_t(s_t, a_t) the instantaneous cost incurred at the t-th step. The performance of an algorithm is measured by the cumulative cost: C_T := Σ_{t=1}^T c_t(s_t, a_t). Multi-goal SSP. We also study a new problem, multi-goal SSP, a natural generalization of the classical SSP problem. In multi-goal SSP, we consider an MDP M and a fixed length L ≥ 1. The MDP M satisfies Asmp.
1, and all of its states are incrementally L-controllable, i.e., S→L = S.
Algorithm 1: Value-Optimistic Incremental State Discovery (VOISD)
1. Input: MDP M = ⟨S, A, P, c, s0⟩, confidence δ ∈ (0, 1), error parameter ε ∈ (0, 1], and L ≥ 1.
2. Initialize U ← {}, K ← {}, and s_new ← s0. Specify constants c1 = 6, c2 = 72, c3 = 2√2, c4 = 2√2.
3. Set ε ← ε/3, δ ← δ/2, B ← 10L, and π_{s0} ∈ Π({s0}).
4. Set ψ ← 12000 (L/(c_min ε))² ln(SA/δ), and φ ← 2^⌈log₂ ψ⌉.
5. For (s, a, s′) ∈ S × A × S, set N(s, a) ← 0; n(s, a) ← 0; N(s, a, s′) ← 0; P̂_{s,a,s′} ← 0; θ(s, a) ← 0; ĉ(s, a) ← 0.
6. for round r = 1, 2, · · · do
7.   \\ (a) Discover Possible States in S→L
8.   Add s_new to K. Set s ← s_new.
9.   for each a ∈ A do
10.    while N(s, a) < φ do
11.      Execute policy π_s on MDP M until reaching state s.
12.      Take action a, incur cost c, and observe next state s′ ∼ P(·|s, a).
13.      Set N(s, a) ← N(s, a) + 1, θ(s, a) ← θ(s, a) + c, N(s, a, s′) ← N(s, a, s′) + 1.
14.      If s′ ∉ K, add s′ to U.
15.    Set ĉ(s, a) ← θ(s, a)/N(s, a) and θ(s, a) ← 0.
16.    For all s′ ∈ S, set P̂_{s,a,s′} ← N(s, a, s′)/N(s, a), n(s, a) ← N(s, a).
17.  Stop the algorithm if U is empty.
18.  \\ (b) Compute Optimistic Policy
19.  For each g ∈ U, compute (Q_g, V_g) := VISGO(S, A, K, s0, g, c_min ε/18).
20.  Choose a state s ∈ argmin_{g∈U} V_g(s0). Stop the algorithm if V_s(s0) > L.
21.  Set the policy π̃ as the greedy policy over Q_s. Remove s from U, set s_new ← s, and set π_s ← π̃.
22. Output: the discovered states K and their corresponding policies {π_s}_{s∈K}.
In multi-goal SSP, a learning algorithm should output a set of policies {π_s}_{s∈S} such that V^{π_s}_s(s0) ≤ V*_s(s0) + εL for all s ∈ S. We observe that an algorithm that solves the autonomous exploration problem under the AX* on S→L criterion can also solve the multi-goal SSP problem.
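A heavily simplified, runnable distillation of Algorithm 1's discovery loop can be sketched as follows, assuming a KNOWN deterministic unit-cost MDP so that exact shortest-path costs stand in for the optimistic VISGO estimates. All names here are illustrative, not the paper's implementation.

```python
import heapq

# Simplified VOISD-style discovery: maintain discovered set K and frontier U;
# repeatedly add the cheapest frontier state reachable through K, stop when
# the cheapest candidate costs more than L. Deterministic unit-cost graph.
def discover(adj, s0, L):
    """adj: state -> set of successor states (unit cost per step)."""
    K = {s0}
    while True:
        # (a) frontier U: states observed one step outside K
        U = {s2 for s in K for s2 in adj.get(s, ()) if s2 not in K}
        if not U:
            return K
        # (b) cheapest cost from s0 to each candidate, traversing only K
        dist = {s0: 0}
        heap = [(0, s0)]
        while heap:
            d, s = heapq.heappop(heap)
            if d > dist.get(s, float("inf")):
                continue
            if s not in K:
                continue                  # only states already in K are traversed
            for s2 in adj.get(s, ()):
                if d + 1 < dist.get(s2, float("inf")):
                    dist[s2] = d + 1
                    heapq.heappush(heap, (d + 1, s2))
        best = min(U, key=lambda g: dist.get(g, float("inf")))
        if dist.get(best, float("inf")) > L:
            return K                      # cheapest new state is too far: stop
        K.add(best)                       # discover the closest new state

adj = {0: {1}, 1: {2}, 2: {3}, 3: {4}, 4: set()}
print(discover(adj, 0, L=3))              # -> {0, 1, 2, 3}
```

The real algorithm faces unknown P and c, so it replaces the exact distances with value-optimistic estimates and revisits each new state until every action has been sampled φ times.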
| This paper studies the autonomous exploration problem and multi-goal SSP problem, and proposes three algorithms with improved cumulative costs on 4 learning objectives. The authors also construct a hard instance and show the lower bound of the cumulative cost. Their bounds are optimal in terms of $L, A, \epsilon$. The main technical contributions are a new series construction of $\mathcal{K}$, the connection between autonomous exploration and multi-goal SSP, and the construction of the hard instance. | SP:349be717a2c5a27f50b34a8f61247f0efa09a4f8 |
Near-Optimal Algorithms for Autonomous Exploration and Multi-Goal Stochastic Shortest Path | 1 INTRODUCTION . Reinforcement learning ( RL ) with a known state space has been studied in a wide range of settings ( e.g. , Schmidhuber , 1991 ; Oudeyer et al. , 2007 ; Oudeyer and Kaplan , 2009 ; Baranes and Oudeyer , 2009 ) . When the state space is large , it is difficult for a learning agent to discover the whole environment . Instead , the agent can only explore a small portion of the environment . At a high level , we hope that the agent can discover states near the initial state , expand the range of known states by exploration , and learn near-optimal goal-conditioned policies for the known states . Because the agent discovers its known states of the environment incrementally , this learning problem was named Autonomous Exploration ( AX ) ( Lim and Auer , 2012 ; Tarbouriech et al. , 2020 ) . The autonomous exploration problem generalizes the Stochastic Shortest Path ( SSP ) problem ( Bertsekas et al. , 2000 ) where the agent aims to reach a predefined goal state while minimizing its total expected cost . However , in the autonomous exploration setting , the agent aims to discover a set of reachable states in a large environment and find the optimal policies to reach them . The autonomous exploration formulation is applicable to an increasing number of real-world RL problems , ranging from navigation in mazes ( Devo et al. , 2020 ) to game playing ( Mnih et al. , 2013 ) . For example , in the maze navigation problem , a robot aims to follow a predefined path in an unknown environment , and the robot has to discover and expand the size of regions known to itself autonomously without prior knowledge of the environment . This procedure also resembles some biological learning processes . See Lim and Auer ( 2012 ) for more discussions . Related Work . 
The setting of autonomous exploration was introduced by Lim and Auer (2012), who gave the first algorithm, UcbExplore, with a sample complexity of Õ(L³ S_{(1+ε)L} A/ε³). Here L denotes the distance within which we hope the learning agent to discover states, S_{(1+ε)L} denotes the total number of states within distance (1+ε)L from the starting state, A denotes the size of the action space, and ε denotes the error that we can tolerate. Recent work by Tarbouriech et al. (2020) designed the DisCo algorithm with a sample complexity bound of Õ(L³ S²_{(1+ε)L} A/ε²), which improves the 1/ε dependency at the cost of a worse dependency on S_{(1+ε)L}. (Footnote 1: We translate their absolute error ε to the relative error εL. Their bound has the refined form Õ(L³ S_{(1+ε)L} Γ_{(1+ε)L} A/ε²), where Γ_{(1+ε)L} is the branching factor, which in the worst case equals S_{(1+ε)L}.) In this paper, we present new algorithms that further improve the sample complexity. 1.1 CONTRIBUTIONS. In this paper, we take important steps toward resolving the autonomous exploration problem. We compare our results with prior ones in Table 1 (see footnote 2), and we summarize our contributions below: 1. We propose a new state discovery algorithm, Value-Optimistic Incremental State Discovery (VOISD), which uses a value-optimistic method (Neu and Pike-Burke, 2020) to estimate the expected cost of the optimal policy to reach these states. We prove this algorithm enjoys an Õ(L³ S_{(1+ε)L} A/ε²) sample complexity, which improves the prior results of Lim and Auer (2012) and Tarbouriech et al. (2020). 2. We connect the autonomous exploration problem to a new problem, multi-goal SSP, and propose a new algorithm, Re-MG-SSP, that 1) satisfies a stronger criterion (see footnote 3) and 2) serves as a burn-in step for the next algorithm. 3. We further propose Value-Aware Autonomous Exploration (VALAE), which uses VOISD and Re-MG-SSP as initial steps and then uses the estimated value functions to guide our exploration.
By doing so, for each state-action pair (s, a), we derive an (s, a)-dependent sample complexity bound, which exploits the variance information and yields a sharper sample complexity bound than the bounds for VOISD and Re-MG-SSP. In particular, VALAE improves the dependency on L from cubic to linear. 4. We give the first lower bound for the autonomous exploration problem. This lower bound shows that VALAE is nearly minimax-optimal when S_L grows polynomially with respect to L. 1.2 MAIN DIFFICULTIES AND TECHNIQUE OVERVIEW. While our work borrows ideas from prior work on autonomous exploration (Lim and Auer, 2012; Tarbouriech et al., 2020) and recent advances in SSP (Tarbouriech et al., 2021), we develop new techniques to overcome additional difficulties that are unique to autonomous exploration. Dependence between the Estimated Transition Matrix and the Discovered States. Our algorithm incrementally adds new states to the set of discovered states K. To obtain a tight dependency on S_{(1+ε)L}, similar to the standard RL setting, one needs to use concentration on (P̂_{s,a} − P_{s,a}) V*_{K,g} instead of ‖P̂_{s,a} − P_{s,a}‖₁ (used by Tarbouriech et al. (2020)), where P̂_{s,a} is the estimated transition, P_{s,a} is the true transition, and V*_{K,g} is the value function of the optimal policy for reaching the state g restricted to the discovered states K. (Footnote 2: In Lim and Auer (2012) and Tarbouriech et al. (2020), the cost is uniformly 1 for all state-action pairs; in this paper we allow non-uniform costs, but Table 1 assumes uniform costs for a fair comparison. Footnote 3: See Sect. 2 for the different criteria.) The main challenge is that the set of discovered states K depends on the samples collected, and thus V*_{K,g} depends on P̂_{s,a}. One could take a union bound over all possible K, but the number of possible K is exponential in the number of states S.
Our main technique is to construct a series of sets of states {s0} = K_0 ⊆ K_1 ⊆ · · · ⊆ K_Z = S→L, where K_{z+1} is constructed by adding all the states that are reachable from s0 by some policy on K_z with expected cost no more than L. This series is only polynomially large, so after applying the union bound we only incur a logarithmic overhead. In order to use concentration only on this sequence of sets, we also need to develop a modified definition of optimism. See Appendix B and Appendix C.2 for details. Connection between Autonomous Exploration and Multi-Goal SSP. In the standard RL setting, it is known that in order to obtain a tight dependency on L, one needs to exploit the variance information in the value function (Azar et al., 2017). However, in autonomous exploration, it is unclear how to exploit the variance information, because even which states are in S→L is unknown. To this end, we first consider a simpler problem, multi-goal SSP, and extend the technique for single-goal SSP (Tarbouriech et al., 2021) to this new problem (cf. Alg. 3). We also present a reduction from autonomous exploration to multi-goal SSP (cf. Alg. 2). These two techniques together yield the first tight dependency on L for autonomous exploration. 2 PRELIMINARIES. In this section, we introduce basic definitions and our problem setup. Notations. For any two vectors X, Y ∈ R^S, we write their inner product as XY := Σ_{s∈S} X(s)Y(s). We denote ‖X‖∞ := max_{s∈S} |X(s)|, and if X is a probability distribution on S, we define V(X, Y) := Σ_{s∈S} X(s)Y(s)² − (Σ_{s∈S} X(s)Y(s))². Markov Decision Process. We consider an MDP M := ⟨S, A, P, c, s0⟩, where S is the state space with size S, A is the action space with size A, and s0 ∈ S is the initial state. In state s, taking action a incurs a cost drawn i.i.d.
from a distribution on [c_min, 1] (where c_min > 0) with expectation c(s, a), and the agent transitions to the next state s′ with probability P(s′|s, a). For convenience, we use P_{s,a} and P_{s,a,s′} to denote P(·|s, a) and P(s′|s, a), respectively. A deterministic and stationary policy π : S → A is a mapping, and an agent following the policy π takes action π(s) at state s. For a fixed state g ∈ S, we define the random variable t^π_g(s) as the number of steps it takes to reach state g starting from state s when executing policy π, i.e., t^π_g(s) := inf{t ≥ 0 : s_{t+1} = g | s_1 = s, π}. A policy π is a proper policy if for any state s ∈ S, t^π_g(s) < +∞ with probability 1. We then define the value function of a proper policy π with respect to the goal state g, and its corresponding Q-function, as follows: V^π_g(s) = E[ Σ_{t=1}^{t^π_g(s)} c_t(s_t, π(s_t)) | s_1 = s ], Q^π_g(s, a) = E[ Σ_{t=1}^{t^π_g(s)} c_t(s_t, π(s_t)) | s_1 = s, π(s_1) = a ], where c_t ∈ [c_min, 1] is the instantaneous cost at step t incurred by the state-action pair (s_t, π(s_t)), and the expectation is taken over the random sequence of states generated by executing π starting from state s ∈ S. Note that V^π_g(g) = 0. We use π_Q to denote the greedy policy over a vector Q ∈ R^{S×A}, i.e., π_Q(s) := argmin_{a∈A} Q(s, a). For a fixed state g ∈ S, we denote by V*_g the value function of the optimal policy with respect to goal state g, and we list some important properties of V*_g: there exists a stationary, deterministic, and proper policy π*, such that its value function V*_g := V^{π*}_g and its corresponding Q-function Q*_g := Q^{π*}_g satisfy the following Bellman optimality equations (cf. Lem. 1): Q*_g(s, a) = c(s, a) + P_{s,a} V*_g, V*_g(s) = min_{a∈A} Q*_g(s, a), ∀(s, a) ∈ S × A. Autonomous Exploration. Now we introduce the Autonomous Exploration problem.
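The variance operator V(X, Y) introduced in the Notations paragraph is simply Var_{s∼X}[Y(s)]; it is the quantity whose Bernstein-style control underlies the tight dependency on L. A quick numerical check (all values illustrative):

```python
import numpy as np

# The variance operator from the Notations paragraph:
# V(X, Y) = sum_s X(s) Y(s)^2 - (sum_s X(s) Y(s))^2, i.e. Var_{s~X}[Y(s)].
def var_op(X, Y):
    return float(X @ (Y ** 2) - (X @ Y) ** 2)

rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(6))     # a probability distribution over 6 states
Y = rng.uniform(0, 5, size=6)     # e.g. a value-function vector

v = var_op(X, Y)
# Cross-check against a Monte-Carlo estimate of Var[Y(s)] with s ~ X
samples = Y[rng.choice(6, size=200_000, p=X)]
print(v, samples.var())
assert v >= 0.0 and abs(v - samples.var()) < 0.05
```

In the analysis this operator is evaluated at X = P_{s,a} and Y = a value function, giving the (s, a)-dependent variance terms in the sample complexity bound.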
To formally discuss the setting , we need the following assumption on our MDP M . Assumption 1 . The action space contains a RESET action s.t . P ( s0|s , RESET ) = 1 for any s ∈ S . Moreover , taking RESET in any state s will incur a cost cRESET with probability 1 , where cRESET is a constant in [ cmin , 1 ] . Given any fixed length L ≥ 1 , the agent needs to learn the set of incrementally controllable states S→L . To introduce the concept of S→L , we first give the definition of policies restricted on a subset : Definition 1 ( Policy restricted on a subset ) . For any S ′ ⊆ S , a policy π is restricted on the set S ′ if π ( s ) = RESET for all s /∈ S ′ . Now we discuss the optimal policy restricted on a set of states K ⊆ S with respect to goal state g. We denote V ∗K , g ∈ RS as the value function of the optimal policy restricted on K with goal g , and Q∗K , g as the Q-function corresponding to V ∗ K , g . We consider the case that there exists at least one proper policy restricted on K with the goal state g. Then , V ∗K , g and Q∗K , g are finite , and they satisfy the following Bellman equations : Q∗K , g ( s , a ) = c ( s , a ) + Ps , aV ∗ K , g , ∀ ( s , a ) ∈ S ×A , V ∗K , g ( s ) = min a∈A Q∗K , g ( s , a ) , ∀s ∈ K , s 6= g , V ∗K , g ( s ) = Q ∗ K , g ( s , RESET ) = cRESET + V ∗ K , g ( s0 ) , ∀s /∈ K ∪ { g } , V ∗K , g ( g ) = 0 . We note that when K1 ⊆ K2 , for any g ∈ S , if V ∗K1 , g is finite , then V ∗ K2 , g is also finite , and we have V ∗K2 , g ≤ V ∗ K1 , g component-wise . And we note that for any s 6= g , we have mina∈AQ ∗ K , g ( s , a ) ≤ V ∗K , g ( s ) . Now we introduce the definition of incrementally controllable states S→L ( see Tarbouriech et al . ( 2020 ) for more intuitions on this definition . ) : Definition 2 ( Incrementally L-controllable states S→L ) . Let ≺ be any partial order on S . We denote S≺L as the set of states reachable from s0 with expected cost no more than L w.r.t . 
≺, which is defined as follows: • s0 ∈ S_L^≺; • if there is a policy π restricted on {s′ ∈ S_L^≺ : s′ ≺ s} such that V^π_s(s0) ≤ L, then s ∈ S_L^≺. The set of incrementally L-controllable states S→L is given by S→L = ∪_≺ S_L^≺. Learning Objective. In our setting, the learning agent knows the constants S, A, and c_min, but has no prior knowledge of the transition probability P or the cost function c(·,·) of the MDP M. We fix the length L ≥ 1 and an error parameter ε ∈ (0, 1]. A learning algorithm for the autonomous exploration problem should output a set of states K ⊆ S that satisfies: • S→L ⊆ K, i.e., the algorithm discovers all the states that we want to explore. The algorithm also outputs a set of policies {π_s}_{s∈K} that satisfy one of the following criteria: 1. (AX_L on S→L) ∀s ∈ S→L, V^{π_s}_s(s0) ≤ (1+ε)L; 2. (AX* on S→L) ∀s ∈ S→L, V^{π_s}_s(s0) ≤ V*_{S→L,s}(s0) + εL; 3. (AX_L on K) ∀s ∈ K, V^{π_s}_s(s0) ≤ (1+ε)L; 4. (AX* on K) ∀s ∈ K, V^{π_s}_s(s0) ≤ V*_{K,s}(s0) + εL. We note that AX* on K is stronger than both AX_L on S→L and AX* on S→L, but it is not necessarily stronger than AX_L on K, because we do not necessarily have V*_{K,s}(s0) ≤ L when s ∉ S→L. In the literature, AX_L on S→L was studied in the original paper that proposed the autonomous exploration problem (Lim and Auer, 2012), and the AX* on K condition was studied in Tarbouriech et al. (2020). We denote by T the total number of steps the agent uses, and by (s_t, a_t) the state-action pair at the t-th step. We denote by c_t(s_t, a_t) the instantaneous cost incurred at the t-th step. The performance of an algorithm is measured by the cumulative cost: C_T := Σ_{t=1}^T c_t(s_t, a_t). Multi-goal SSP. We also study a new problem, multi-goal SSP, a natural generalization of the classical SSP problem. In multi-goal SSP, we consider an MDP M and a fixed length L ≥ 1. The MDP M satisfies Asmp.
1, and all of its states are incrementally L-controllable, i.e., S→L = S.
Algorithm 1: Value-Optimistic Incremental State Discovery (VOISD)
1. Input: MDP M = ⟨S, A, P, c, s0⟩, confidence δ ∈ (0, 1), error parameter ε ∈ (0, 1], and L ≥ 1.
2. Initialize U ← {}, K ← {}, and s_new ← s0. Specify constants c1 = 6, c2 = 72, c3 = 2√2, c4 = 2√2.
3. Set ε ← ε/3, δ ← δ/2, B ← 10L, and π_{s0} ∈ Π({s0}).
4. Set ψ ← 12000 (L/(c_min ε))² ln(SA/δ), and φ ← 2^⌈log₂ ψ⌉.
5. For (s, a, s′) ∈ S × A × S, set N(s, a) ← 0; n(s, a) ← 0; N(s, a, s′) ← 0; P̂_{s,a,s′} ← 0; θ(s, a) ← 0; ĉ(s, a) ← 0.
6. for round r = 1, 2, · · · do
7.   \\ (a) Discover Possible States in S→L
8.   Add s_new to K. Set s ← s_new.
9.   for each a ∈ A do
10.    while N(s, a) < φ do
11.      Execute policy π_s on MDP M until reaching state s.
12.      Take action a, incur cost c, and observe next state s′ ∼ P(·|s, a).
13.      Set N(s, a) ← N(s, a) + 1, θ(s, a) ← θ(s, a) + c, N(s, a, s′) ← N(s, a, s′) + 1.
14.      If s′ ∉ K, add s′ to U.
15.    Set ĉ(s, a) ← θ(s, a)/N(s, a) and θ(s, a) ← 0.
16.    For all s′ ∈ S, set P̂_{s,a,s′} ← N(s, a, s′)/N(s, a), n(s, a) ← N(s, a).
17.  Stop the algorithm if U is empty.
18.  \\ (b) Compute Optimistic Policy
19.  For each g ∈ U, compute (Q_g, V_g) := VISGO(S, A, K, s0, g, c_min ε/18).
20.  Choose a state s ∈ argmin_{g∈U} V_g(s0). Stop the algorithm if V_s(s0) > L.
21.  Set the policy π̃ as the greedy policy over Q_s. Remove s from U, set s_new ← s, and set π_s ← π̃.
22. Output: the discovered states K and their corresponding policies {π_s}_{s∈K}.
In multi-goal SSP, a learning algorithm should output a set of policies {π_s}_{s∈S} such that V^{π_s}_s(s0) ≤ V*_s(s0) + εL for all s ∈ S. We observe that an algorithm that solves the autonomous exploration problem under the AX* on S→L criterion can also solve the multi-goal SSP problem.
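The polynomially long series {s0} = K_0 ⊆ K_1 ⊆ · · · ⊆ K_Z = S→L described in Section 1.2 can be computed explicitly when the MDP is known. A toy sketch on a deterministic unit-cost graph (illustrative only, not the paper's procedure):

```python
from collections import deque

# Toy construction of the nested series from Sec. 1.2: K_{z+1} adds every
# state reachable from s0 within cost L using only intermediate states
# already in K_z. Deterministic unit-cost graph; BFS gives exact costs.
def build_series(adj, s0, L):
    series = [{s0}]
    while True:
        K = series[-1]
        dist = {s0: 0}
        q = deque([s0])
        while q:
            s = q.popleft()
            if s not in K:
                continue                  # endpoints outside K are not expanded
            for s2 in adj.get(s, ()):
                if s2 not in dist:
                    dist[s2] = dist[s] + 1
                    q.append(s2)
        K_next = K | {s for s, d in dist.items() if d <= L}
        if K_next == K:
            return series                 # fixed point reached: K_Z = S->L
        series.append(K_next)

adj = {0: {1, 2}, 1: {3}, 2: {3}, 3: {4}, 4: {5}, 5: set()}
series = build_series(adj, 0, L=2)
print(series)                             # state 4 is at distance 3 > L, excluded
```

Because the sets only grow and each step adds at least one state, the series has at most S elements, which is why a union bound over it costs only a logarithmic factor.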
| The authors consider the incremental autonomous exploration problem. That is, the agent faces an MDP and wants to learn near-optimal goal-conditioned policies to reach the states that are L-controllable, i.e. incrementally reachable from an initial state s_0 within L steps in expectation. The learning procedure is the following: the agent collects transitions by interacting with the MDP, decides to stop, and outputs a set of states (that should be the L-controllable states) and a policy for each state (that should be the associated near-optimal goal-conditioned policy). The authors propose the Value-Aware Autonomous Exploration (VALAE) algorithm. It works in 3 phases: in the first one, a sub-algorithm, Value-Optimistic Incremental State Discovery (VOISD), aims at identifying the set of L-controllable states; then the sub-algorithm Reduce Autonomous Exploration to Multi-Goal SSP (Re-MG-SSP) serves as a burn-in sample collector; finally, a procedure close to that of UcbExplore by Lim and Auer (2012) is run. They prove a bound on the sample complexity (the number of steps before the algorithm stops) of order O(L S_{2L} A / \epsilon^2), where S_L is the number of L-controllable states, A the number of actions, and \epsilon the error tolerated for the goal-conditioned policy. This bound improves prior results by Lim and Auer (2012) and Tarbouriech et al. (2020) with respect to the dependence on \epsilon or L. The authors also prove a lower bound of order \Omega(L S_{L} A / \epsilon^2), implying that VALAE is nearly minimax optimal if S_L grows polynomially with L. | SP:349be717a2c5a27f50b34a8f61247f0efa09a4f8
Near-Optimal Algorithms for Autonomous Exploration and Multi-Goal Stochastic Shortest Path | 1 INTRODUCTION . Reinforcement learning ( RL ) with a known state space has been studied in a wide range of settings ( e.g. , Schmidhuber , 1991 ; Oudeyer et al. , 2007 ; Oudeyer and Kaplan , 2009 ; Baranes and Oudeyer , 2009 ) . When the state space is large , it is difficult for a learning agent to discover the whole environment . Instead , the agent can only explore a small portion of the environment . At a high level , we hope that the agent can discover states near the initial state , expand the range of known states by exploration , and learn near-optimal goal-conditioned policies for the known states . Because the agent discovers its known states of the environment incrementally , this learning problem was named Autonomous Exploration ( AX ) ( Lim and Auer , 2012 ; Tarbouriech et al. , 2020 ) . The autonomous exploration problem generalizes the Stochastic Shortest Path ( SSP ) problem ( Bertsekas et al. , 2000 ) where the agent aims to reach a predefined goal state while minimizing its total expected cost . However , in the autonomous exploration setting , the agent aims to discover a set of reachable states in a large environment and find the optimal policies to reach them . The autonomous exploration formulation is applicable to an increasing number of real-world RL problems , ranging from navigation in mazes ( Devo et al. , 2020 ) to game playing ( Mnih et al. , 2013 ) . For example , in the maze navigation problem , a robot aims to follow a predefined path in an unknown environment , and the robot has to discover and expand the size of regions known to itself autonomously without prior knowledge of the environment . This procedure also resembles some biological learning processes . See Lim and Auer ( 2012 ) for more discussions . Related Work . 
The setting of autonomous exploration was introduced by Lim and Auer (2012), who gave the first algorithm, UcbExplore, with a sample complexity of Õ(L³ S_{(1+ε)L} A/ε³). Here L denotes the distance within which we hope the learning agent to discover states, S_{(1+ε)L} denotes the total number of states within distance (1+ε)L from the starting state, A denotes the size of the action space, and ε denotes the error that we can tolerate. Recent work by Tarbouriech et al. (2020) designed the DisCo algorithm with a sample complexity bound of Õ(L³ S²_{(1+ε)L} A/ε²), which improves the 1/ε dependency at the cost of a worse dependency on S_{(1+ε)L}. (Footnote 1: We translate their absolute error ε to the relative error εL. Their bound has the refined form Õ(L³ S_{(1+ε)L} Γ_{(1+ε)L} A/ε²), where Γ_{(1+ε)L} is the branching factor, which in the worst case equals S_{(1+ε)L}.) In this paper, we present new algorithms that further improve the sample complexity. 1.1 CONTRIBUTIONS. In this paper, we take important steps toward resolving the autonomous exploration problem. We compare our results with prior ones in Table 1 (see footnote 2), and we summarize our contributions below: 1. We propose a new state discovery algorithm, Value-Optimistic Incremental State Discovery (VOISD), which uses a value-optimistic method (Neu and Pike-Burke, 2020) to estimate the expected cost of the optimal policy to reach these states. We prove this algorithm enjoys an Õ(L³ S_{(1+ε)L} A/ε²) sample complexity, which improves the prior results of Lim and Auer (2012) and Tarbouriech et al. (2020). 2. We connect the autonomous exploration problem to a new problem, multi-goal SSP, and propose a new algorithm, Re-MG-SSP, that 1) satisfies a stronger criterion (see footnote 3) and 2) serves as a burn-in step for the next algorithm. 3. We further propose Value-Aware Autonomous Exploration (VALAE), which uses VOISD and Re-MG-SSP as initial steps and then uses the estimated value functions to guide our exploration.
By doing so, for each state-action pair (s, a), we derive an (s, a)-dependent sample complexity bound, which exploits the variance information and yields a sharper sample complexity bound than the bounds for VOISD and Re-MG-SSP. In particular, VALAE improves the dependency on L from cubic to linear. 4. We give the first lower bound for the autonomous exploration problem. This lower bound shows that VALAE is nearly minimax-optimal when S_L grows polynomially with respect to L. 1.2 MAIN DIFFICULTIES AND TECHNIQUE OVERVIEW. While our work borrows ideas from prior work on autonomous exploration (Lim and Auer, 2012; Tarbouriech et al., 2020) and recent advances in SSP (Tarbouriech et al., 2021), we develop new techniques to overcome additional difficulties that are unique to autonomous exploration. Dependence between the Estimated Transition Matrix and the Discovered States. Our algorithm incrementally adds new states to the set of discovered states K. To obtain a tight dependency on S_{(1+ε)L}, similar to the standard RL setting, one needs to use concentration on (P̂_{s,a} − P_{s,a}) V*_{K,g} instead of ‖P̂_{s,a} − P_{s,a}‖₁ (used by Tarbouriech et al. (2020)), where P̂_{s,a} is the estimated transition, P_{s,a} is the true transition, and V*_{K,g} is the value function of the optimal policy for reaching the state g restricted to the discovered states K. (Footnote 2: In Lim and Auer (2012) and Tarbouriech et al. (2020), the cost is uniformly 1 for all state-action pairs; in this paper we allow non-uniform costs, but Table 1 assumes uniform costs for a fair comparison. Footnote 3: See Sect. 2 for the different criteria.) The main challenge is that the set of discovered states K depends on the samples collected, and thus V*_{K,g} depends on P̂_{s,a}. One could take a union bound over all possible K, but the number of possible K is exponential in the number of states S.
Our main technique is to construct a series of sets of states {s0} = K_0 ⊆ K_1 ⊆ · · · ⊆ K_Z = S→L, where K_{z+1} is constructed by adding all the states that are reachable from s0 by some policy on K_z with expected cost no more than L. This series is only polynomially large, so after applying the union bound we only incur a logarithmic overhead. In order to use concentration only on this sequence of sets, we also need to develop a modified definition of optimism. See Appendix B and Appendix C.2 for details. Connection between Autonomous Exploration and Multi-Goal SSP. In the standard RL setting, it is known that in order to obtain a tight dependency on L, one needs to exploit the variance information in the value function (Azar et al., 2017). However, in autonomous exploration, it is unclear how to exploit the variance information, because even which states are in S→L is unknown. To this end, we first consider a simpler problem, multi-goal SSP, and extend the technique for single-goal SSP (Tarbouriech et al., 2021) to this new problem (cf. Alg. 3). We also present a reduction from autonomous exploration to multi-goal SSP (cf. Alg. 2). These two techniques together yield the first tight dependency on L for autonomous exploration. 2 PRELIMINARIES. In this section, we introduce basic definitions and our problem setup. Notations. For any two vectors X, Y ∈ R^S, we write their inner product as XY := Σ_{s∈S} X(s)Y(s). We denote ‖X‖∞ := max_{s∈S} |X(s)|, and if X is a probability distribution on S, we define V(X, Y) := Σ_{s∈S} X(s)Y(s)² − (Σ_{s∈S} X(s)Y(s))². Markov Decision Process. We consider an MDP M := ⟨S, A, P, c, s0⟩, where S is the state space with size S, A is the action space with size A, and s0 ∈ S is the initial state. In state s, taking action a incurs a cost drawn i.i.d.
from a distribution on [c_min, 1] (where c_min > 0) with expectation c(s, a), and the system transits to the next state s′ with probability P(s′|s, a). For convenience, we use P_{s,a} and P_{s,a,s′} to denote P(·|s, a) and P(s′|s, a), respectively. A deterministic and stationary policy π : S → A is a mapping, and an agent following the policy π takes action π(s) at state s. For a fixed state g ∈ S, we define the random variable t^π_g(s) as the number of steps it takes to reach state g starting from state s when executing policy π, i.e., t^π_g(s) := inf{t ≥ 0 : s_{t+1} = g | s_1 = s, π}. A policy π is a proper policy if for any state s ∈ S, t^π_g(s) < +∞ with probability 1. We then define the value function of a proper policy π with respect to the goal state g and its corresponding Q-function as follows: V^π_g(s) = E[ ∑_{t=1}^{t^π_g(s)} c_t(s_t, π(s_t)) | s_1 = s ], Q^π_g(s, a) = E[ ∑_{t=1}^{t^π_g(s)} c_t(s_t, π(s_t)) | s_1 = s, π(s_1) = a ], where c_t ∈ [c_min, 1] is the instantaneous cost at step t incurred by the state-action pair (s_t, π(s_t)), and the expectation is taken over the random sequence of states generated by executing π starting from state s ∈ S. Here we have V^π_g(g) = 0. We use π_Q to denote the greedy policy over a vector Q ∈ R^{S×A}, i.e., π_Q(s) := arg min_{a∈A} Q(s, a). For a fixed state g ∈ S, we denote by V*_g the value function of the optimal policy with respect to goal state g, and we list some important properties of V*_g: there exists a stationary, deterministic and proper policy π*, such that its value function V*_g := V^{π*}_g and its corresponding Q-function Q*_g := Q^{π*}_g satisfy the following Bellman optimality equations (cf. Lem. 1): Q*_g(s, a) = c(s, a) + P_{s,a} V*_g, V*_g(s) = min_{a∈A} Q*_g(s, a), ∀(s, a) ∈ S × A. Autonomous Exploration. Now we introduce the Autonomous Exploration problem.
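As a concrete illustration of the Bellman optimality equations above (and of the variance functional V(X, Y) defined in the Notations paragraph), here is a minimal value-iteration sketch on a made-up 3-state SSP instance. The MDP is hypothetical and chosen only so that the fixed point is easy to check by hand:

```python
import numpy as np

# Toy SSP: 3 states, 2 actions, goal g = 2. All numbers are invented for
# illustration; costs c(s, a) lie in [c_min, 1] (costs at the goal are irrelevant).
S, A, g = 3, 2, 2
c = np.array([[0.5, 1.0],
              [0.3, 0.8],
              [0.0, 0.0]])
P = np.zeros((S, A, S))
P[0, 0] = [0.2, 0.8, 0.0]          # action 0 in state 0 mostly moves to state 1
P[0, 1] = [0.0, 0.0, 1.0]          # action 1 jumps straight to the goal
P[1, 0] = [0.0, 0.1, 0.9]
P[1, 1] = [0.5, 0.5, 0.0]
P[2, :, 2] = 1.0                   # the goal is absorbing

V = np.zeros(S)
for _ in range(10_000):            # value iteration for the SSP
    Q = c + P @ V                  # Q(s, a) = c(s, a) + P_{s,a} V
    V_new = Q.min(axis=1)
    V_new[g] = 0.0                 # V*_g(g) = 0 by definition
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new

Q = c + P @ V
# Bellman optimality: V*_g(s) = min_a Q*_g(s, a) for all s != g.
assert np.allclose(np.delete(V, g), np.delete(Q.min(axis=1), g))

# Variance functional V(X, Y) = sum_s X(s) Y(s)^2 - (sum_s X(s) Y(s))^2
def var(X, Y):
    return X @ (Y ** 2) - (X @ Y) ** 2

print(V, var(P[0, 0], V))
```

For this instance the fixed point works out to V* = (23/24, 1/3, 0), and the variance of V* under P_{0,0} is exactly 1/16 — the quantity that variance-aware (Bernstein-style) bounds exploit.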
To formally discuss the setting, we need the following assumption on our MDP M. Assumption 1. The action space contains a RESET action s.t. P(s0|s, RESET) = 1 for any s ∈ S. Moreover, taking RESET in any state s incurs a cost c_RESET with probability 1, where c_RESET is a constant in [c_min, 1]. Given any fixed length L ≥ 1, the agent needs to learn the set of incrementally controllable states S→L. To introduce the concept of S→L, we first give the definition of policies restricted on a subset: Definition 1 (Policy restricted on a subset). For any S′ ⊆ S, a policy π is restricted on the set S′ if π(s) = RESET for all s ∉ S′. Now we discuss the optimal policy restricted on a set of states K ⊆ S with respect to a goal state g. We denote by V*_{K,g} ∈ R^S the value function of the optimal policy restricted on K with goal g, and by Q*_{K,g} the Q-function corresponding to V*_{K,g}. We consider the case in which there exists at least one proper policy restricted on K with the goal state g. Then V*_{K,g} and Q*_{K,g} are finite, and they satisfy the following Bellman equations: Q*_{K,g}(s, a) = c(s, a) + P_{s,a} V*_{K,g}, ∀(s, a) ∈ S × A; V*_{K,g}(s) = min_{a∈A} Q*_{K,g}(s, a), ∀s ∈ K, s ≠ g; V*_{K,g}(s) = Q*_{K,g}(s, RESET) = c_RESET + V*_{K,g}(s0), ∀s ∉ K ∪ {g}; V*_{K,g}(g) = 0. We note that when K_1 ⊆ K_2, for any g ∈ S, if V*_{K_1,g} is finite, then V*_{K_2,g} is also finite, and we have V*_{K_2,g} ≤ V*_{K_1,g} component-wise. We also note that for any s ≠ g, we have min_{a∈A} Q*_{K,g}(s, a) ≤ V*_{K,g}(s). Now we introduce the definition of the incrementally controllable states S→L (see Tarbouriech et al. (2020) for more intuition on this definition): Definition 2 (Incrementally L-controllable states S→L). Let ≺ be any partial order on S. We denote by S≺L the set of states reachable from s0 with expected cost no more than L w.r.t.
≺, which is defined as follows: • s0 ∈ S≺L; • if there is a policy π restricted on {s′ ∈ S≺L : s′ ≺ s} such that V^π_s(s0) ≤ L, then s ∈ S≺L. The set of incrementally L-controllable states S→L is given by S→L = ⋃_≺ S≺L. Learning Objective. In our setting, the learning agent knows the constants S, A and c_min, but has no prior knowledge of the transition probability P or the cost function c(·,·) of the MDP M. We fix the length L ≥ 1 and an error parameter ε ∈ (0, 1]. A learning algorithm for the autonomous exploration problem should output a set of states K ⊆ S satisfying S→L ⊆ K, i.e., the algorithm discovers all the states that we want to explore. The algorithm also outputs a set of policies {π_s}_{s∈K} that satisfy one of the following criteria: 1. (AX_L on S→L) ∀s ∈ S→L, V^{π_s}_s(s0) ≤ (1 + ε)L; 2. (AX* on S→L) ∀s ∈ S→L, V^{π_s}_s(s0) ≤ V*_{S→L,s}(s0) + εL; 3. (AX_L on K) ∀s ∈ K, V^{π_s}_s(s0) ≤ (1 + ε)L; 4. (AX* on K) ∀s ∈ K, V^{π_s}_s(s0) ≤ V*_{K,s}(s0) + εL. We note that AX* on K is stronger than both AX_L on S→L and AX* on S→L, but it is not necessarily stronger than AX_L on K, because we do not necessarily have V*_{K,s}(s0) ≤ L when s ∉ S→L. In the literature, AX_L on S→L was studied in the original paper that proposed the autonomous exploration problem (Lim and Auer, 2012), and the AX* on K criterion was studied in (Tarbouriech et al., 2020). We denote by T the total number of steps the agent uses, and by (s_t, a_t) the state-action pair at the t-th step. We denote by c_t(s_t, a_t) the instantaneous cost incurred at the t-th step. The performance of an algorithm is measured by the cumulative cost: C_T := ∑_{t=1}^T c_t(s_t, a_t). Multi-goal SSP. We also study a new problem, multi-goal SSP, a natural generalization of the classical SSP problem. In multi-goal SSP, we consider an MDP and a fixed length L ≥ 1. The MDP M satisfies Asmp.
1, and all of its states are incrementally L-controllable, i.e., S→L = S.

Algorithm 1: Value-Optimistic Incremental State Discovery (VOISD)
1  Input: MDP M = 〈S, A, P, c, s0〉, confidence δ ∈ (0, 1), error parameter ε ∈ (0, 1], and L ≥ 1.
2  Initialize U ← {}, K ← {}, and snew ← s0. Specify constants c1 = 6, c2 = 72, c3 = 2√2, c4 = 2√2.
3  Set ε ← ε/3, δ ← δ/2, B ← 10L and πs0 ∈ Π({s0}).
4  Set ψ ← 12000 (L/(c_min ε))² ln(SA/δ), and φ ← 2^⌈log₂ ψ⌉.
5  For (s, a, s′) ∈ S × A × S, set N(s, a) ← 0; n(s, a) ← 0; N(s, a, s′) ← 0; P̂_{s,a,s′} ← 0; θ(s, a) ← 0; ĉ(s, a) ← 0.
6  for round r = 1, 2, · · · do
7      \\ (a) Discover Possible States in S→L
8      Add snew to K. Set s ← snew.
9      for each a ∈ A do
10         while N(s, a) < φ do
11             Execute policy πs on MDP M until reaching state s.
12             Take action a, incur cost c and observe next state s′ ∼ P(·|s, a).
13             Set N(s, a) ← N(s, a) + 1, θ(s, a) ← θ(s, a) + c, N(s, a, s′) ← N(s, a, s′) + 1.
14             If s′ ∉ K, add s′ to U.
15         Set ĉ(s, a) ← θ(s, a)/N(s, a) and θ(s, a) ← 0.
16         For all s′ ∈ S, set P̂_{s,a,s′} ← N(s, a, s′)/N(s, a), n(s, a) ← N(s, a).
17     Stop the algorithm if U is empty.
18     \\ (b) Compute Optimistic Policy
19     For each g ∈ U, compute (Qg, Vg) := VISGO(S, A, K, s0, g, c_min ε/18).
20     Choose a state s ∈ arg min_{g∈U} Vg(s0). Stop the algorithm if Vs(s0) > L.
21     Set the policy π̃ as the greedy policy over Qs. Remove s from U, set snew ← s and set πs ← π̃.
22 Output: The discovered states K and their corresponding policies {πs}_{s∈K}.

In multi-goal SSP, a learning algorithm should output a set of policies {π_s}_{s∈S} such that V^{π_s}_s(s0) ≤ V*_s(s0) + εL for all s ∈ S. We observe that an algorithm that solves the autonomous exploration problem under the AX* on S→L criterion can also solve the multi-goal SSP problem.
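To make Definition 2 and the discovery loop of Algorithm 1 concrete, the following sketch computes S→L on a tiny, fully known chain MDP by repeatedly solving the restricted-policy Bellman equations and adding the cheapest reachable goal. This is an idealized, known-dynamics analogue of VOISD; the actual algorithm works from optimistic estimates, and the MDP, cost values, and L below are invented purely for illustration:

```python
import numpy as np

# Toy chain MDP: states 0-1-2-3, action 0 = "move right", action 1 = RESET.
S, A, s0, RESET = 4, 2, 0, 1
c = np.full((S, A), 1.0)               # moving costs 1
c[:, RESET] = 0.1                      # c_RESET = 0.1 (a constant in [c_min, 1])
P = np.zeros((S, A, S))
for s in range(S):
    P[s, 0, min(s + 1, S - 1)] = 1.0   # deterministic step to the right
    P[s, RESET, s0] = 1.0              # RESET returns to s0 with probability 1

def restricted_value(K, g, iters=10_000, tol=1e-10):
    """Value iteration for V*_{K,g}: pi(s) = RESET is forced outside K."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = c + P @ V
        V_new = Q[:, RESET].copy()     # default: forced RESET off K (= c_RESET + V(s0))
        for s in K:
            V_new[s] = Q[s].min()      # free action choice inside K
        V_new[g] = 0.0                 # the goal has zero value
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V                           # no convergence: g is effectively unreachable

L_limit = 2.5
K = {s0}
while True:
    cand = {g: restricted_value(K, g)[s0] for g in range(S) if g not in K}
    if not cand:
        break
    g, v = min(cand.items(), key=lambda kv: kv[1])
    if v > L_limit:                    # cheapest undiscovered state is too costly
        break
    K.add(g)

print(sorted(K))
```

With L = 2.5, states 1 and 2 (expected costs 1 and 2 from s0) are added, while state 3 (cost 3) is rejected, so the loop recovers S→L = {0, 1, 2} exactly as Definition 2 prescribes.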
| This paper studies the sample complexity of learning algorithms for autonomous exploration (AX). Theoretical results show that the proposed algorithms (VOISD, VOISD+Re-MG-SSP, VALAE) achieve better sample complexities than the two existing studies. The authors compare the methods under a uniform-cost setting. Further, the proposed algorithms appear to generalize better. | SP:349be717a2c5a27f50b34a8f61247f0efa09a4f8 |
SiT: Simulation Transformer for Particle-based Physics Simulation | 1 INTRODUCTION. Particle-based physics simulation is a classic and important topic in computer science. It not only facilitates the exploration of underlying principles in physics, chemistry and biology, but also enables the creation of vivid visual effects such as explosions and fluid dynamics in films and games. Different from traditional simulators, such as grid-based (Guo et al., 2016) and mesh-based (Bronstein et al., 2017) methods, particle-based simulators view a system, which is an example in one domain, as a composition of particles and imitate system changes over time by predicting the changes of particle-wise states according to current particle states and particle interactions, where the latter represent the influence of action-reaction forces such as collisions. Consequently, they follow the same forward process without separately considering different constraints to simulate different domains with varying materials, requiring no domain-specific physical priors. Moreover, since in particle-based simulators the dynamics of a system is modeled by the states of particles and their interactions, they also have the potential to possess strong generalization ability, in that they can estimate the dynamics of a system with a varying number and configuration of particles in a more robust manner. After learning the dynamics of fluid water in a sandbox, the same particle-based simulator can be used to simulate a waterfall and a river. Recent particle-based simulators (Battaglia et al., 2016; Schenck & Fox, 2018; Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020; Ummenhofer et al., 2020) often view a system as a graph and adopt the graph neural network (GNN) (Kipf & Welling, 2016) as the basic network structure.
In these attempts , each particle is treated as a node in the graph , with edges linking it to all its neighboring particles , assuming interactions mainly occur between particles that are close to each other . Subsequently , state updates of particles are achieved by combining node features with the summation of edge features . While such a GNN-based formulation obtains satisfying simulation results , it faces two issues that affect efficiency and generality . First , it forces each particle to interact with all its nearby particles without providing a selective mechanism , which leads to computational redundancy and prevents the discovery of inherent patterns of particle interaction . Second , the GNN-based formulation uses particle-wise attributes to capture both material characteristics , such as viscosity or plastic deformations , and domain-specific semantics , such as the shape of a rigid material . Therefore , it may regard the latter as part of the intrinsic material semantics and fail to generalize to domains with the same materials but different particle amounts and configurations . In this paper , we propose a novel Transformer-based framework , referred to as Simulation Transformer ( SiT ) , for particle-based physics simulation . The model inherits the powerful multi-head self-attention mechanism in Transformer ( Vaswani et al. , 2017 ) to capture particle interactions . To further encourage efficient modeling of complex interactions , instead of treating particle interactions as attention weights obtained by dot-product , we introduce the notion of interaction tokens , which are high-dimensional representations for interactions , to model the rich semantics of particle interactions , such as how the particle is restored after deformations . 
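The GNN-based update described earlier — edge features computed from pairs of neighbouring nodes, then a node update that combines the node feature with the sum of incoming edge features — can be sketched in a few lines. Everything below (the dimensions, the radius, and the one-layer "MLPs" with random weights) is hypothetical and only illustrates the data flow, not any particular published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, R = 8, 16, 0.5                      # particles, feature dim, neighbour radius
pos = rng.uniform(0, 1, size=(N, 3))      # particle positions
node = rng.normal(size=(N, d))            # node (particle) features

# Hypothetical one-layer MLPs (random weights, illustration only).
W_edge = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)
W_node = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)
relu = lambda x: np.maximum(x, 0.0)

# Adjacency: an edge (i, j) exists iff the particles are within radius R.
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
adj = (dist < R) & ~np.eye(N, dtype=bool)

# One message-passing step: edge features from endpoint pairs, then a node
# update from the node feature and the *sum* of incoming edge features.
msg = np.zeros((N, d))
for i in range(N):
    for j in np.flatnonzero(adj[i]):
        msg[i] += relu(np.concatenate([node[i], node[j]]) @ W_edge)
node_new = relu(np.concatenate([node, msg], axis=1) @ W_node)
print(node_new.shape)   # → (8, 16)
```

Note that every particle aggregates messages from *all* its neighbours within R — exactly the non-selective behaviour that the first issue above points to.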
In addition, to disentangle local material-specific characteristics from global domain-specific semantics, SiT further learns a high-dimensional abstract token for each type of material to capture material semantics, forcing particles of the same material to interact with their corresponding abstract token. The proposed SiT is more appealing than previous methods in several aspects. First, by capturing particle interactions explicitly with interaction tokens and allowing dynamic inter-token attention, SiT dynamically focuses on essential particle interactions and reduces the computation spent on redundant and noisy ones. This is crucial for particle-based simulation, especially for domains containing hundreds or thousands of densely placed particles, where modeling all particle interactions is redundant, expensive and noisy in practice. Second, thanks to the trainable abstract tokens that disentangle intrinsic material characteristics from domain-specific semantics, we can reuse them to apply SiT to unseen domains of the same materials without retraining. As shown in our experiments, after training on one domain consisting of fluid water and a rigid cube, SiT still yields fairly faithful simulations when the cube is replaced with a ball or a bunny, compared with previous work. To show the effectiveness of SiT, we perform extensive evaluations on four standard environments commonly used in the literature (Li et al., 2019; Sanchez-Gonzalez et al., 2020), covering domains of different complexity and materials. The proposed method achieves superior performance across all these environments with fewer parameters than existing methods. We further demonstrate the generalization ability of SiT by adjusting the environments to create new domains and applying SiT to these domains without retraining. In all cases, SiT obtains more realistic simulation results than previous methods, which tend to overfit to the training domains.
2 RELATED WORK. Physics simulation by neural networks. There are many different kinds of representations for physics simulation. Grid-based methods (Lee & You, 2019; Thuerey et al., 2020; Wang et al., 2020) adopt convolutional architectures for learning high-dimensional physical systems, while mesh-based simulations (Bronstein et al., 2017; Luo et al., 2020; Hanocka et al., 2019; Nash et al., 2020; Qiao et al., 2020; Weng et al., 2021; Pfaff et al., 2021) typically simulate objects with continuous surfaces, such as clothes, rigid objects and water surfaces. Many works (Battaglia et al., 2016; Schenck & Fox, 2018; Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020; Ummenhofer et al., 2020) simulate physics on particle-based systems, where all objects are represented by groups of particles. Specifically, Interaction Network (IN) (Battaglia et al., 2016) simulated interactions at the object level. Smooth Particle Networks (SPNets) (Schenck & Fox, 2018) implemented fluid dynamics using position-based fluids (Macklin & Müller, 2013). Hierarchical Relation Network (HRN) (Mrowca et al., 2018) predicted physical dynamics based on hierarchical graph convolution. Dynamic Particle Interaction Networks (DPI-Net) (Li et al., 2019) combined dynamic graphs, multi-step spatial propagation, and hierarchical structure to simulate particles. CConv (Ummenhofer et al., 2020) used spatial convolutions to simulate fluid particles. Graph Network-based Simulators (GNS) (Sanchez-Gonzalez et al., 2020) computed dynamics via learned message passing. Similar to particle-based systems, COPINGNet (Shao et al., 2021) applies graph networks to simulate rod dynamics, where the discretized rod is the basic unit, similar to a particle. Previous work mostly adopted graph networks for simulation and required each particle to interact with all its nearby particles without a selective mechanism.
In contrast, our SiT employs both particle and interaction tokens and selectively focuses on necessary particle interactions through the attention mechanism. Experiments show that SiT surpasses existing GNN-based methods and generalizes more robustly. Transformer. Transformer (Vaswani et al., 2017) was designed for machine translation and achieved state-of-the-art performance in many natural language processing tasks (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020). Recently, Transformer has started to show great expandability and applicability in many other fields, such as computer vision (Wang et al., 2018; Carion et al., 2020; Dosovitskiy et al., 2021; Wang et al., 2021; Liu et al., 2021) and graph representations (Zhou et al., 2020; Zhang et al., 2020; Dwivedi & Bresson, 2020). To our knowledge, no attempt has been made to apply Transformer to physics simulation. Our SiT models the interactions between particles by a trainable sub-network given corresponding particle tokens. The same notion of extracting potential semantics between nodes is also applied in Graph Transformer (Dwivedi & Bresson, 2020), which we refer to as GraphTrans for short. However, there are differences in our formulations. Specifically, GraphTrans adopts an element-wise product between node representations followed by a multi-layer perceptron (MLP) to update the interaction embeddings, which store the attention scores in each dimension and are reduced to a scalar for the attention mechanism that updates node embeddings. In contrast, our model learns semantic tokens for interactions through a sub-network instead of an element-wise product. When updating the particle tokens, which are referred to as node representations in GraphTrans, both particle and interaction tokens attend to each other. The interaction tokens are no longer merely attention weights.
We adopt GraphTrans (Dwivedi & Bresson, 2020) for particle-based simulation and compare it with SiT in experiments. The quantitative results show that SiT achieves better results than GraphTrans. 3 METHODOLOGY. 3.1 PROBLEM FORMULATION. For a particle-based system composed of N particles, we use X^t = {x^t_i}_{i=1}^N to denote the system state at time step t, where x^t_i denotes the state of the i-th particle. Specifically, x^t_i = [p^t_i, q^t_i, a_i], where p^t_i, q^t_i ∈ R³ are the position and velocity, and a_i ∈ R^{d_a} represents fixed particle attributes such as the material type. The goal of a simulator is to learn a model φ(·) from previous rollouts of a system to causally predict a rollout trajectory over a specific time period conditioned on the initial system state X^0. The prediction runs in a recursive manner, where the simulator predicts the state X̂^{t+1} = φ(X^t) at time step t+1 based on the state X^t = {x^t_i} at time step t. In practice, we predict the velocities of particles Q̂^{t+1} = {q̂^{t+1}_i} and obtain their positions via p̂^{t+1}_i = p^t_i + ∆t · q̂^{t+1}_i, where ∆t is a domain-specific constant. 3.2 SIMULATION VIA VANILLA TRANSFORMER. To accurately simulate the changes of a system over time, it is crucial to effectively model the interactions among particles, as they indicate the energy transitions of a system constrained by material characteristics and physical laws. However, it is infeasible to know a priori how particles should interact with each other. Thus, a selective mechanism is needed to help the simulator focus only on necessary interactions. Since Transformer (Vaswani et al., 2017) is capable of modeling dynamic attention scores between tokens via the self-attention module, we can regard the attention weights as the intensity of connectivity and the importance of interactions, where the larger the attention weight, the more important the interaction.
Thus, Transformer is naturally a good backbone for building an efficient particle-based simulator. To apply the vanilla Transformer to particle-based simulation, we first encode the states of particles into corresponding particle tokens V = {v^t_i} by v^t_i = f^enc_V(x^t_i), (1) where v^t_i ∈ R^{d_h} is a d_h-dimensional vector and f^enc_V(·) is an encoding layer implemented as an MLP. Subsequently, particle interactions are realized by L blocks of self-attention modules, where in the l-th block particle tokens attend to each other selectively as: v^{l+1,t}_i = ∑_j ŵ^v_{ij} v^{l,t}_j, (2) ŵ^v_{ij} = exp(w^v_{ij}) / (√d_h · ∑_j exp(w^v_{ij})), (3) w^v_{ij} = (v^{l,t}_i)^⊤ v^{l,t}_j. (4) Since a system usually contains hundreds of particles and interactions among particles occur within neighborhoods in our settings, considering all possible interactions, whose number increases quadratically with the number of particles, is computationally redundant and inefficient. Therefore, we follow the previous literature (Li et al., 2019; Sanchez-Gonzalez et al., 2020) and assume that interactions of distant particles can be omitted, which is realized by a window function: g(p^t_i, p^t_j) = I(‖p^t_i − p^t_j‖₂ < R), (5) where I(condition) is an indicator function that returns 1 if the condition is satisfied, and R defines the extent of the window. This window function generates a mask to retain only interactions between neighboring particles as potential candidates in the self-attention modules. To predict particle states at the next time step, a decoding layer is applied to the updated token of the i-th particle to obtain its velocity: q̂^{t+1}_i = f^dec_V(v^{L,t}_i), (6) where f^dec_V(·) is implemented by another MLP. | The paper studies the use of transformers for particle-based physics simulation. The main contributions are as follows. C1.
the paper proposes a specific transformer-inspired form of message passing which, unlike the vanilla Transformer, still performs explicit message computation. C2. the paper proposes a very specific way to encode material-type information as abstract tokens. C3. the paper compares to several non-transformer-based baselines, as well as a pure transformer. C4. comparisons are performed on a subset of 4 environments, including some generalization settings. As presented in the paper, the proposed model seems to perform better than the baselines in most settings. | SP:e21d406b8ee778d1c425fef19fcf5ef3c1bb7f5b |
SiT: Simulation Transformer for Particle-based Physics Simulation | 1 INTRODUCTION. Particle-based physics simulation is a classic and important topic in computer science. It not only facilitates the exploration of underlying principles in physics, chemistry and biology, but also enables the creation of vivid visual effects such as explosions and fluid dynamics in films and games. Different from traditional simulators, such as grid-based (Guo et al., 2016) and mesh-based (Bronstein et al., 2017) methods, particle-based simulators view a system, which is an example in one domain, as a composition of particles and imitate system changes over time by predicting the changes of particle-wise states according to current particle states and particle interactions, where the latter represent the influence of action-reaction forces such as collisions. Consequently, they follow the same forward process without separately considering different constraints to simulate different domains with varying materials, requiring no domain-specific physical priors. Moreover, since in particle-based simulators the dynamics of a system is modeled by the states of particles and their interactions, they also have the potential to possess strong generalization ability, in that they can estimate the dynamics of a system with a varying number and configuration of particles in a more robust manner. After learning the dynamics of fluid water in a sandbox, the same particle-based simulator can be used to simulate a waterfall and a river. Recent particle-based simulators (Battaglia et al., 2016; Schenck & Fox, 2018; Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020; Ummenhofer et al., 2020) often view a system as a graph and adopt the graph neural network (GNN) (Kipf & Welling, 2016) as the basic network structure.
In these attempts , each particle is treated as a node in the graph , with edges linking it to all its neighboring particles , assuming interactions mainly occur between particles that are close to each other . Subsequently , state updates of particles are achieved by combining node features with the summation of edge features . While such a GNN-based formulation obtains satisfying simulation results , it faces two issues that affect efficiency and generality . First , it forces each particle to interact with all its nearby particles without providing a selective mechanism , which leads to computational redundancy and prevents the discovery of inherent patterns of particle interaction . Second , the GNN-based formulation uses particle-wise attributes to capture both material characteristics , such as viscosity or plastic deformations , and domain-specific semantics , such as the shape of a rigid material . Therefore , it may regard the latter as part of the intrinsic material semantics and fail to generalize to domains with the same materials but different particle amounts and configurations . In this paper , we propose a novel Transformer-based framework , referred to as Simulation Transformer ( SiT ) , for particle-based physics simulation . The model inherits the powerful multi-head self-attention mechanism in Transformer ( Vaswani et al. , 2017 ) to capture particle interactions . To further encourage efficient modeling of complex interactions , instead of treating particle interactions as attention weights obtained by dot-product , we introduce the notion of interaction tokens , which are high-dimensional representations for interactions , to model the rich semantics of particle interactions , such as how the particle is restored after deformations . 
In addition, to disentangle local material-specific characteristics from global domain-specific semantics, SiT further learns a high-dimensional abstract token for each type of material to capture material semantics, forcing particles of the same material to interact with their corresponding abstract token. The proposed SiT is more appealing than previous methods in several aspects. First, by capturing particle interactions explicitly with interaction tokens and allowing dynamic inter-token attention, SiT dynamically focuses on essential particle interactions and reduces the computation spent on redundant and noisy ones. This is crucial for particle-based simulation, especially for domains containing hundreds or thousands of densely placed particles, where modeling all particle interactions is redundant, expensive and noisy in practice. Second, thanks to the trainable abstract tokens that disentangle intrinsic material characteristics from domain-specific semantics, we can reuse them to apply SiT to unseen domains of the same materials without retraining. As shown in our experiments, after training on one domain consisting of fluid water and a rigid cube, SiT still yields fairly faithful simulations when the cube is replaced with a ball or a bunny, compared with previous work. To show the effectiveness of SiT, we perform extensive evaluations on four standard environments commonly used in the literature (Li et al., 2019; Sanchez-Gonzalez et al., 2020), covering domains of different complexity and materials. The proposed method achieves superior performance across all these environments with fewer parameters than existing methods. We further demonstrate the generalization ability of SiT by adjusting the environments to create new domains and applying SiT to these domains without retraining. In all cases, SiT obtains more realistic simulation results than previous methods, which tend to overfit to the training domains.
2 RELATED WORK. Physics simulation by neural networks. There are many different kinds of representations for physics simulation. Grid-based methods (Lee & You, 2019; Thuerey et al., 2020; Wang et al., 2020) adopt convolutional architectures for learning high-dimensional physical systems, while mesh-based simulations (Bronstein et al., 2017; Luo et al., 2020; Hanocka et al., 2019; Nash et al., 2020; Qiao et al., 2020; Weng et al., 2021; Pfaff et al., 2021) typically simulate objects with continuous surfaces, such as clothes, rigid objects and water surfaces. Many works (Battaglia et al., 2016; Schenck & Fox, 2018; Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020; Ummenhofer et al., 2020) simulate physics on particle-based systems, where all objects are represented by groups of particles. Specifically, Interaction Network (IN) (Battaglia et al., 2016) simulated interactions at the object level. Smooth Particle Networks (SPNets) (Schenck & Fox, 2018) implemented fluid dynamics using position-based fluids (Macklin & Müller, 2013). Hierarchical Relation Network (HRN) (Mrowca et al., 2018) predicted physical dynamics based on hierarchical graph convolution. Dynamic Particle Interaction Networks (DPI-Net) (Li et al., 2019) combined dynamic graphs, multi-step spatial propagation, and hierarchical structure to simulate particles. CConv (Ummenhofer et al., 2020) used spatial convolutions to simulate fluid particles. Graph Network-based Simulators (GNS) (Sanchez-Gonzalez et al., 2020) computed dynamics via learned message passing. Similar to particle-based systems, COPINGNet (Shao et al., 2021) applies graph networks to simulate rod dynamics, where the discretized rod is the basic unit, similar to a particle. Previous work mostly adopted graph networks for simulation and required each particle to interact with all its nearby particles without a selective mechanism.
In contrast, our SiT employs both particle and interaction tokens and selectively focuses on necessary particle interactions through the attention mechanism. Experiments show that SiT surpasses existing GNN-based methods and generalizes more robustly. Transformer. Transformer (Vaswani et al., 2017) was designed for machine translation and achieved state-of-the-art performance in many natural language processing tasks (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020). Recently, Transformer has started to show great expandability and applicability in many other fields, such as computer vision (Wang et al., 2018; Carion et al., 2020; Dosovitskiy et al., 2021; Wang et al., 2021; Liu et al., 2021) and graph representations (Zhou et al., 2020; Zhang et al., 2020; Dwivedi & Bresson, 2020). To our knowledge, no attempt has been made to apply Transformer to physics simulation. Our SiT models the interactions between particles by a trainable sub-network given corresponding particle tokens. The same notion of extracting potential semantics between nodes is also applied in Graph Transformer (Dwivedi & Bresson, 2020), which we refer to as GraphTrans for short. However, there are differences in our formulations. Specifically, GraphTrans adopts an element-wise product between node representations followed by a multi-layer perceptron (MLP) to update the interaction embeddings, which store the attention scores in each dimension and are reduced to a scalar for the attention mechanism that updates node embeddings. In contrast, our model learns semantic tokens for interactions through a sub-network instead of an element-wise product. When updating the particle tokens, which are referred to as node representations in GraphTrans, both particle and interaction tokens attend to each other. The interaction tokens are no longer merely attention weights.
We adopt GraphTrans (Dwivedi & Bresson, 2020) for particle-based simulation and compare it with SiT in experiments. The quantitative results show that SiT achieves better results than GraphTrans. 3 METHODOLOGY. 3.1 PROBLEM FORMULATION. For a particle-based system composed of N particles, we use X^t = {x^t_i}_{i=1}^N to denote the system state at time step t, where x^t_i denotes the state of the i-th particle. Specifically, x^t_i = [p^t_i, q^t_i, a_i], where p^t_i, q^t_i ∈ R³ are the position and velocity, and a_i ∈ R^{d_a} represents fixed particle attributes such as the material type. The goal of a simulator is to learn a model φ(·) from previous rollouts of a system to causally predict a rollout trajectory over a specific time period conditioned on the initial system state X^0. The prediction runs in a recursive manner, where the simulator predicts the state X̂^{t+1} = φ(X^t) at time step t+1 based on the state X^t = {x^t_i} at time step t. In practice, we predict the velocities of particles Q̂^{t+1} = {q̂^{t+1}_i} and obtain their positions via p̂^{t+1}_i = p^t_i + ∆t · q̂^{t+1}_i, where ∆t is a domain-specific constant. 3.2 SIMULATION VIA VANILLA TRANSFORMER. To accurately simulate the changes of a system over time, it is crucial to effectively model the interactions among particles, as they indicate the energy transitions of a system constrained by material characteristics and physical laws. However, it is infeasible to know a priori how particles should interact with each other. Thus, a selective mechanism is needed to help the simulator focus only on necessary interactions. Since Transformer (Vaswani et al., 2017) is capable of modeling dynamic attention scores between tokens via the self-attention module, we can regard the attention weights as the intensity of connectivity and the importance of interactions, where the larger the attention weight, the more important the interaction.
Thus, it is naturally a good backbone for building an efficient particle-based simulator. To apply the vanilla Transformer to particle-based simulation, we first encode the particle states into corresponding particle tokens $V = \{v_i^t\}$ by

$$v_i^t = f_V^{enc}(x_i^t), \qquad (1)$$

where $v_i^t \in \mathbb{R}^{d_h}$ is a $d_h$-dimensional vector and $f_V^{enc}(\cdot)$ is an encoding layer implemented as an MLP. Subsequently, particle interactions are modeled by $L$ blocks of self-attention modules, where in the $l$-th block, particle tokens attend to each other selectively as:

$$v_i^{l+1,t} = \sum_j \hat{w}_{ij}^{v} v_j^{l,t}, \qquad (2)$$

$$\hat{w}_{ij}^{v} = \frac{\exp(w_{ij}^{v})}{\sqrt{d_h} \cdot \sum_j \exp(w_{ij}^{v})}, \qquad (3)$$

$$w_{ij}^{v} = (v_i^{l,t})^{\top} v_j^{l,t}. \qquad (4)$$

Since a system usually contains hundreds of particles and, in our settings, interactions occur only among neighboring particles, considering all possible interactions, whose number grows quadratically with the number of particles, is computationally redundant and inefficient. Therefore, we follow previous literature (Li et al., 2019; Sanchez-Gonzalez et al., 2020) and assume that interactions between distant particles can be omitted, which is realized by a window function:

$$g(p_i^t, p_j^t) = \mathbb{I}(\|p_i^t - p_j^t\|_2 < R), \qquad (5)$$

where $\mathbb{I}(\cdot)$ is an indicator function that returns 1 if the condition is satisfied, and $R$ defines the extent of the window. This window function generates a mask that retains only interactions between neighboring particles as potential candidates in the self-attention modules. To predict particle states at the next time step, a decoding layer is applied to the updated token of the $i$-th particle to obtain its velocity:

$$\hat{q}_i^{t+1} = f_V^{dec}(v_i^{L,t}), \qquad (6)$$

where $f_V^{dec}(\cdot)$ is implemented by another MLP.
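A small numpy sketch of the neighbor-restricted self-attention step described above: a distance-based window mask (as in Eq. 5) restricts dot-product scores (as in Eq. 4) before normalization and aggregation (as in Eq. 2). Note this uses the conventional scaled-softmax normalization rather than the paper's exact Eq. (3), and is an illustration, not the paper's implementation.

```python
import numpy as np

def neighbor_mask(positions, radius):
    # Window function: keep pair (i, j) iff ||p_i - p_j||_2 < R
    diff = positions[:, None, :] - positions[None, :, :]
    return np.linalg.norm(diff, axis=-1) < radius

def masked_self_attention(tokens, mask):
    # Dot-product scores w_ij = v_i . v_j, restricted to neighbors,
    # normalized with a standard scaled softmax, then aggregated.
    dh = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(dh)
    scores = np.where(mask, scores, -np.inf)        # drop distant pairs
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ tokens
```

Because each particle has zero distance to itself, the mask always keeps the diagonal, so every softmax row is well defined even for an isolated particle.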
| In this paper, the authors propose the Simulation Transformer (SiT), a transformer-based approach for particle-based fluid simulation (in contrast to existing approaches, which are overwhelmingly based on Graph Convolutional Networks). Specifically, the authors augment the vanilla transformer by incorporating sub-networks for richer modeling of particle interactions (i.e., as opposed to particle interactions being modeled as dot products and reduced to a single number, sub-networks allow richer modeling of interactions), as well as material-specific properties. The authors evaluate the proposed SiT model on diverse environments, compare against several state-of-the-art fluid simulation models, and demonstrate generalization across different materials. | SP:e21d406b8ee778d1c425fef19fcf5ef3c1bb7f5b |
SiT: Simulation Transformer for Particle-based Physics Simulation | 1 INTRODUCTION. Particle-based physics simulation is a classic and important topic in computer science. It not only facilitates the exploration of underlying principles in physics, chemistry, and biology, but also enables the creation of vivid visual effects, such as explosions and fluid dynamics, in films and games. Unlike traditional simulators, such as grid-based (Guo et al., 2016) and mesh-based (Bronstein et al., 2017) methods, particle-based simulators view a system (i.e., an instance of a domain) as a composition of particles and imitate system changes over time by predicting changes in particle-wise states according to the current particle states and particle interactions, the latter of which represent the influence of action-reaction forces such as collisions. Consequently, they follow the same forward process to simulate different domains with varying materials, without separately handling different constraints and requiring no domain-specific physical priors. Moreover, since particle-based simulators model the dynamics of a system through the states of particles and their interactions, they also have the potential to generalize strongly: they can estimate the dynamics of a system with a varying number and configuration of particles in a more robust manner. For example, after learning the dynamics of water in a sandbox, the same particle-based simulator can be used to simulate a waterfall or a river. Recent particle-based simulators (Battaglia et al., 2016; Schenck & Fox, 2018; Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020; Ummenhofer et al., 2020) often view a system as a graph and adopt graph neural networks (GNN) (Kipf & Welling, 2016) as the basic network structure.
In these attempts, each particle is treated as a node in the graph, with edges linking it to all its neighboring particles, under the assumption that interactions mainly occur between particles that are close to each other. State updates of particles are then achieved by combining node features with the summation of edge features. While such a GNN-based formulation obtains satisfactory simulation results, it faces two issues that affect efficiency and generality. First, it forces each particle to interact with all its nearby particles without providing a selective mechanism, which leads to computational redundancy and prevents the discovery of inherent patterns of particle interaction. Second, the GNN-based formulation uses particle-wise attributes to capture both material characteristics, such as viscosity or plastic deformation, and domain-specific semantics, such as the shape of a rigid object. Therefore, it may regard the latter as part of the intrinsic material semantics and fail to generalize to domains with the same materials but different particle counts and configurations. In this paper, we propose a novel Transformer-based framework, referred to as the Simulation Transformer (SiT), for particle-based physics simulation. The model inherits the powerful multi-head self-attention mechanism of the Transformer (Vaswani et al., 2017) to capture particle interactions. To further encourage efficient modeling of complex interactions, instead of treating particle interactions as attention weights obtained by dot products, we introduce the notion of interaction tokens, high-dimensional representations of interactions, to model their rich semantics, such as how a particle recovers after deformation.
In addition, to disentangle local material-specific characteristics from global domain-specific semantics, SiT further learns a high-dimensional abstract token for each type of material to capture material semantics, forcing particles of the same material to interact with their corresponding abstract token. The proposed SiT is more appealing than previous methods in several respects. First, by capturing particle interactions explicitly with interaction tokens and allowing dynamic inter-token attention, SiT dynamically focuses on essential particle interactions and reduces the computation spent on redundant and noisy ones. This is crucial for particle-based simulation, especially for domains containing hundreds or thousands of densely placed particles, where modeling all particle interactions is redundant, expensive, and noisy in practice. Second, thanks to the trainable abstract tokens that disentangle intrinsic material characteristics from domain-specific semantics, we can reuse them to apply SiT to unseen domains with the same materials without retraining. As shown in our experiments, after training on one domain consisting of water and a rigid cube, SiT still yields fairly faithful simulations when the cube is replaced with a ball or a bunny, compared with previous work. To show the effectiveness of SiT, we perform extensive evaluations on four standard environments commonly used in the literature (Li et al., 2019; Sanchez-Gonzalez et al., 2020), covering domains of different complexity and materials. The proposed method achieves superior performance across all these environments with fewer parameters than existing methods. We further demonstrate the generalization ability of SiT by adjusting the environments to create new domains and applying SiT to these domains without retraining. In all cases, SiT obtains more realistic simulation results than previous methods, which tend to overfit to the training domains.
2 RELATED WORK. Physics simulation by neural networks. There are many different kinds of representations for physics simulation. Grid-based methods (Lee & You, 2019; Thuerey et al., 2020; Wang et al., 2020) adopt convolutional architectures for learning high-dimensional physical systems, while mesh-based simulators (Bronstein et al., 2017; Luo et al., 2020; Hanocka et al., 2019; Nash et al., 2020; Qiao et al., 2020; Weng et al., 2021; Pfaff et al., 2021) typically simulate objects with continuous surfaces, such as cloth, rigid objects, and water surfaces. Many works (Battaglia et al., 2016; Schenck & Fox, 2018; Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020; Ummenhofer et al., 2020) simulate physics on particle-based systems, where all objects are represented by groups of particles. Specifically, the Interaction Network (IN) (Battaglia et al., 2016) simulated interactions at the object level. Smooth Particle Networks (SPNets) (Schenck & Fox, 2018) implemented fluid dynamics using position-based fluids (Macklin & Müller, 2013). The Hierarchical Relation Network (HRN) (Mrowca et al., 2018) predicted physical dynamics based on hierarchical graph convolution. Dynamic Particle Interaction Networks (DPI-Net) (Li et al., 2019) combined dynamic graphs, multi-step spatial propagation, and a hierarchical structure to simulate particles. CConv (Ummenhofer et al., 2020) used spatial convolutions to simulate fluid particles. Graph Network-based Simulators (GNS) (Sanchez-Gonzalez et al., 2020) computed dynamics via learned message passing. Similarly to particle-based systems, COPINGNet (Shao et al., 2021) applies graph networks to simulate rod dynamics, where a discretized rod segment is the basic unit, analogous to a particle. Previous work mostly adopted graph networks for simulation and required each particle to interact with all its nearby particles without a selective mechanism.
In contrast, our SiT employs both particle and interaction tokens and selectively focuses on necessary particle interactions through an attention mechanism. Experiments show that SiT surpasses existing GNN-based methods and generalizes more robustly. Transformer. The Transformer (Vaswani et al., 2017) was designed for machine translation and achieved state-of-the-art performance in many natural language processing tasks (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020). Recently, Transformers have shown great extensibility and applicability in many other fields, such as computer vision (Wang et al., 2018; Carion et al., 2020; Dosovitskiy et al., 2021; Wang et al., 2021; Liu et al., 2021) and graph representations (Zhou et al., 2020; Zhang et al., 2020; Dwivedi & Bresson, 2020). To our knowledge, no attempt has been made to apply Transformers to physics simulation. Our SiT models the interactions between particles by a trainable sub-network given the corresponding particle tokens. The same notion of extracting latent semantics between nodes also appears in Graph Transformer (Dwivedi & Bresson, 2020), which we refer to as GraphTrans for short. However, our formulations differ. Specifically, GraphTrans takes the element-wise product of node representations, followed by a multi-layer perceptron (MLP), to update the interaction embeddings, which store per-dimension attention scores and are reduced to a scalar before the attention mechanism updates the node embeddings. In contrast, our model learns semantic tokens for interactions through a sub-network instead of an element-wise product. When updating the particle tokens, which correspond to node representations in GraphTrans, particle and interaction tokens attend to each other; the interaction tokens are no longer merely weighted scores.
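As a rough illustration of this difference, the sketch below contrasts the two interaction updates: a GraphTrans-style scalar score versus a SiT-style high-dimensional interaction token. All dimensions and the tiny two-layer perceptron are hypothetical stand-ins; neither function is the actual GraphTrans or SiT implementation.

```python
import numpy as np

def mlp(x, w1, w2):
    # minimal two-layer perceptron shared by both variants below
    return np.maximum(x @ w1, 0.0) @ w2

def graphtrans_score(vi, vj, w1, w2):
    # GraphTrans-style: element-wise product of node representations,
    # passed through an MLP and reduced to a single attention score.
    return mlp(vi * vj, w1, w2)  # w2 is 1-D, so the result is a scalar

def sit_interaction_token(vi, vj, w1, w2):
    # SiT-style (as described above): a sub-network maps the pair of
    # particle tokens to a high-dimensional interaction token instead.
    return mlp(np.concatenate([vi, vj]), w1, w2)
```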
We adapt GraphTrans (Dwivedi & Bresson, 2020) to particle-based simulation and compare it with SiT in experiments. The quantitative results show that SiT achieves better results than GraphTrans. 3 METHODOLOGY. 3.1 PROBLEM FORMULATION. For a particle-based system composed of $N$ particles, we use $\mathcal{X}^t = \{x_i^t\}_{i=1}^N$ to denote the system state at time step $t$, where $x_i^t$ denotes the state of the $i$-th particle. Specifically, $x_i^t = [p_i^t, q_i^t, a_i]$, where $p_i^t, q_i^t \in \mathbb{R}^3$ refer to position and velocity, and $a_i \in \mathbb{R}^{d_a}$ represents fixed particle attributes such as the material type. The goal of a simulator is to learn a model $\phi(\cdot)$ from previous rollouts of a system to causally predict a rollout trajectory over a given time period, conditioned on the initial system state $\mathcal{X}^0$. The prediction runs recursively: the simulator predicts the state $\hat{\mathcal{X}}^{t+1} = \phi(\mathcal{X}^t)$ at time step $t+1$ based on the state $\mathcal{X}^t = \{x_i^t\}$ at time step $t$. In practice, we predict the particle velocities $\hat{Q}^{t+1} = \{\hat{q}_i^{t+1}\}$ and obtain their positions via $\hat{p}_i^{t+1} = p_i^t + \Delta t \cdot \hat{q}_i^{t+1}$, where $\Delta t$ is a domain-specific constant. 3.2 SIMULATION VIA VANILLA TRANSFORMER. To accurately simulate the changes of a system over time, it is crucial to effectively model the interactions among particles, as they capture the energy transfer of a system constrained by material characteristics and physical laws. However, it is infeasible to know a priori how particles should interact with each other. Thus, a selective mechanism is needed to help the simulator focus only on necessary interactions. Since the Transformer (Vaswani et al., 2017) models dynamic attention scores between tokens via its self-attention module, we can regard the attention weights as the strength of connectivity and the importance of interactions: the larger the attention weight, the more important the interaction.
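The attention-weights-as-interaction-importance reading above can be demonstrated on a toy token set: the scaled-softmax rows sum to one, and a particle whose token is more similar to particle $i$'s receives a larger share of $i$'s attention. This is a generic illustration, not the paper's code.

```python
import numpy as np

def attention_weights(tokens):
    # Row i: how strongly particle i attends to each particle j;
    # a larger weight marks a more important interaction.
    dh = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(dh)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)
```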
Thus, it is naturally a good backbone for building an efficient particle-based simulator. To apply the vanilla Transformer to particle-based simulation, we first encode the particle states into corresponding particle tokens $V = \{v_i^t\}$ by

$$v_i^t = f_V^{enc}(x_i^t), \qquad (1)$$

where $v_i^t \in \mathbb{R}^{d_h}$ is a $d_h$-dimensional vector and $f_V^{enc}(\cdot)$ is an encoding layer implemented as an MLP. Subsequently, particle interactions are modeled by $L$ blocks of self-attention modules, where in the $l$-th block, particle tokens attend to each other selectively as:

$$v_i^{l+1,t} = \sum_j \hat{w}_{ij}^{v} v_j^{l,t}, \qquad (2)$$

$$\hat{w}_{ij}^{v} = \frac{\exp(w_{ij}^{v})}{\sqrt{d_h} \cdot \sum_j \exp(w_{ij}^{v})}, \qquad (3)$$

$$w_{ij}^{v} = (v_i^{l,t})^{\top} v_j^{l,t}. \qquad (4)$$

Since a system usually contains hundreds of particles and, in our settings, interactions occur only among neighboring particles, considering all possible interactions, whose number grows quadratically with the number of particles, is computationally redundant and inefficient. Therefore, we follow previous literature (Li et al., 2019; Sanchez-Gonzalez et al., 2020) and assume that interactions between distant particles can be omitted, which is realized by a window function:

$$g(p_i^t, p_j^t) = \mathbb{I}(\|p_i^t - p_j^t\|_2 < R), \qquad (5)$$

where $\mathbb{I}(\cdot)$ is an indicator function that returns 1 if the condition is satisfied, and $R$ defines the extent of the window. This window function generates a mask that retains only interactions between neighboring particles as potential candidates in the self-attention modules. To predict particle states at the next time step, a decoding layer is applied to the updated token of the $i$-th particle to obtain its velocity:

$$\hat{q}_i^{t+1} = f_V^{dec}(v_i^{L,t}), \qquad (6)$$

where $f_V^{dec}(\cdot)$ is implemented by another MLP. | This paper proposes to learn particle dynamics of physical systems by means of a transformer.
In particular, the paper investigates using a vanilla transformer as well as a more customized variant called the 'Simulation Transformer'. Both networks are validated through a variety of simulation experiments involving fluids and rigid and deformable objects, as well as through comparisons to existing approaches. | SP:e21d406b8ee778d1c425fef19fcf5ef3c1bb7f5b |
Combining Diverse Feature Priors | 1 INTRODUCTION. The driving force behind deep learning's success is its ability to automatically discover predictive features in complex high-dimensional datasets. In fact, these features can generalize beyond the specific task at hand, thus enabling models to transfer to other (yet similar) tasks (Donahue et al., 2014). At the same time, the set of features that the model learns has a large impact on how well it will perform on unseen inputs, especially in the presence of distribution shift (Ponce et al., 2006; Torralba & Efros, 2011; Sagawa et al., 2020) or spurious correlations (Heinze-Deml & Meinshausen, 2017; Beery et al., 2018; Meinshausen, 2018). Motivated by this, recent work focuses on encouraging specific modes of behavior by preventing models from relying on certain features. Examples include suppressing texture features (Geirhos et al., 2019; Wang et al., 2019), avoiding $\ell_p$-non-robust features (Tsipras et al., 2019; Engstrom et al., 2019), or utilizing different parts of the frequency spectrum (Yin et al., 2019). At a high level, these methods can be thought of as ways of imposing a feature prior on the learning process, so as to bias the model towards acquiring features that generalize better. This makes the choice of which feature prior to impose a key design decision. The goal of this work is thus to explore the underlying design space of feature priors and, specifically, to understand: How can we effectively harness the diversity of feature priors? OUR CONTRIBUTIONS. In this paper, we cast diverse feature priors as different perspectives on the data and study how they can complement each other. In particular, we aim to understand whether training with distinct priors results in models with non-overlapping failure modes, and how such models can be combined to improve generalization.
This is particularly relevant in settings where the data is unreliable, e.g., when the training data contains a spurious correlation. From this perspective, we focus our study on two priors that arise naturally in the context of image classification, shape and texture, and investigate the following: Feature diversity. We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution, even if they perform similarly in terms of overall accuracy. Further, one can harness this diversity to build model ensembles that are more accurate than those combining models that share the same feature prior. Combining feature priors on unlabeled data. When learning from unlabeled data, the choice of feature prior can be especially important. For strategies such as self-training, sub-optimal prediction rules learned from sparse labeled data can be reinforced when pseudo-labeling the unlabeled data. We show that, in such settings, we can leverage the diversity of feature priors to address these issues. By jointly training models with different feature priors on the unlabeled data through the framework of co-training (Blum & Mitchell, 1998), we find that the models can correct each other's mistakes and learn prediction rules that generalize better. Learning in the presence of spurious correlations. Finally, we want to understand whether combining diverse priors during training, as described above, can prevent models from relying on correlations that are spurious, i.e., correlations that do not hold on the actual distribution of interest. To model such scenarios, we consider a setting where a spurious correlation is present in the training data but we also have access to (unlabeled) data where this correlation does not hold.
In this setting, we find that co-training models with diverse feature priors can actually steer them away from such correlations and thus enable them to generalize to the underlying distribution. Overall, our findings highlight the potential of incorporating distinct feature priors into the training process. We believe that further work along this direction will lead to models that generalize more reliably. 2 BACKGROUND: FEATURE PRIORS IN COMPUTER VISION. When learning from structurally complex data, such as images, relying on raw input features alone (e.g., pixels) is not particularly useful. There has thus been a long line of work on extracting input patterns that can be more effective for prediction. While early approaches, such as SIFT (Lowe, 1999) and HOG (Dalal & Triggs, 2005), leveraged hand-crafted features, these have by now been largely replaced by features that are automatically learned in an end-to-end fashion (Krizhevsky, 2009; Ciregan et al., 2012; Krizhevsky et al., 2012). Nevertheless, even when features are learned, model designers still tune their models to better suit a particular task via changes in the architecture or training methodology. Such modifications can be thought of as imposing feature priors, i.e., priors that bias a model towards a particular set of features. One prominent example is convolutional neural networks, which are biased towards learning a hierarchy of localized features (Fukushima, 1980; LeCun et al., 1989). Indeed, such a convolutional prior can be quite powerful: it is sufficient to enable many image synthesis tasks without any training (Ulyanov et al., 2017). More recently, there has been work exploring the impact of explicitly restricting the set of features utilized by the model. For instance, Geirhos et al.
(2019) demonstrate that training models on stylized inputs (hence suppressing texture information) can improve model robustness to common corruptions. In a similar vein, Wang et al. (2019) penalize the predictive power of local features to learn shape-biased models that generalize better across image styles. A parallel line of work focuses on training models to be robust to small, worst-case input perturbations using, for example, adversarial training (Goodfellow et al., 2015; Madry et al., 2018) or randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019). Such training biases these models away from non-robust features (Tsipras et al., 2019; Ilyas et al., 2019; Engstrom et al., 2019), which tends to result in them being more aligned with human perception (Tsipras et al., 2019; Kaur et al., 2019), more resilient to certain input corruptions (Ford et al., 2019; Kireev et al., 2021), and better suited for transfer to downstream tasks (Utrera et al., 2020; Salman et al., 2020). 3 FEATURE PRIORS AS DIFFERENT PERSPECTIVES. As we discussed, the choice of feature prior can have a large effect on which features a model relies on and, by extension, on how well it generalizes to unseen inputs. In fact, one can view such priors as distinct perspectives on the data, capturing different information about the input. In this section, we provide evidence to support this view; specifically, we examine a case study on a pair of feature priors that arise naturally in the context of image classification: shape and texture. 3.1 TRAINING SHAPE- AND TEXTURE-BIASED MODELS. In order to train shape- and texture-biased models, we either pre-process the model input or modify the model architecture as follows: Shape-biased models. To suppress texture information in the images, we pre-process our inputs by applying an edge detection algorithm.
We consider two such canonical algorithms: the Canny algorithm (Ding & Goshtasby, 2001), which produces a binary edge mask, and the Sobel algorithm (Sobel & Feldman, 1968), which provides softer edge detection and hence retains some texture information (see Figures 1b and 1c). Texture-biased models. To prevent the model from relying on the global structure of the image, we utilize a variant of the BagNet architecture (Brendel & Bethge, 2019). This architecture deliberately limits the receptive field of the model, thus forcing it to rely on local features (see Figure 1d). We visualize all of these priors in Figure 1 and provide implementation details in Appendix A. 3.2 DIVERSITY OF FEATURE-BIASED MODELS. After training models with shape and texture biases as outlined above, we evaluate whether these models indeed capture complementary information about the input. Specifically, we train models on a small subset (100 examples per class) of the CIFAR-10 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011) datasets, and measure the correlation between which test examples they correctly classify. We find that pairs consisting of a shape-biased model and a texture-biased model (i.e., Canny and BagNet, or Sobel and BagNet) indeed have the least correlated predictions (cf. Table 2). In other words, the mistakes that these models make are more diverse than those made by identical models trained from different random initializations. At the same time, different shape-biased models (Sobel and Canny) are relatively well correlated with each other, which corroborates the fact that models trained on similar input features are likely to make similar mistakes. Model ensembles. Having shown that training models with these feature priors results in diverse prediction rules, we examine whether we can now combine them to improve generalization. The canonical approach for doing so is to incorporate these models into an ensemble.
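One concrete way to quantify the overlap of mistakes described above is to convert each model's predictions into a per-example correctness vector and compute the Pearson correlation between those vectors. This is a plausible sketch of such a measurement; the paper's exact similarity metric may differ.

```python
import numpy as np

def correctness_correlation(preds_a, preds_b, labels):
    """Pearson correlation between two models' per-example correctness.

    Low correlation means the models err on different examples,
    i.e., their feature priors capture complementary information.
    """
    ca = (np.asarray(preds_a) == np.asarray(labels)).astype(float)
    cb = (np.asarray(preds_b) == np.asarray(labels)).astype(float)
    return float(np.corrcoef(ca, cb)[0, 1])
```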
We find that the diversity of models trained with different feature priors indeed translates directly into improved performance when combining them into an ensemble (cf. Table 3). In fact, we find that the performance of the ensemble is tightly connected to the prediction similarity of its constituents (as measured in Table 2), i.e., more diverse ensembles tend to perform better. For instance, the best ensemble for the STL-10 dataset is the one combining a shape-biased model (Canny) and a texture-biased model (BagNet), which were the models with the least aligned predictions. 4 COMBINING DIVERSE PRIORS ON UNLABELED DATA. In the previous section, we saw that training models with different feature priors (e.g., shape- and texture-biased models) can lead to prediction rules with less overlapping failure modes, which, in turn, can lead to more effective model ensembles. However, ensembles only combine model predictions post hoc and thus cannot take advantage of diversity during the training process. In this section, we instead focus on utilizing diversity during training. Specifically, we leverage the diversity introduced through feature priors in the context of self-training (Lee et al., 2013), a framework commonly used when the labeled data is insufficient to learn a well-generalizing model. This framework utilizes unlabeled data, which is pseudo-labeled using an existing model and then used for further training. While such methods can often improve overall model performance, they suffer from a significant drawback: models tend to reinforce suboptimal prediction rules even when these rules do not generalize to the underlying distribution (Arazo et al., 2020). Our goal here is thus to leverage diverse feature priors to address this exact shortcoming. Specifically, we jointly train models with different priors on the unlabeled data through the framework of co-training (Blum & Mitchell, 1998).
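A minimal version of the post hoc ensembling discussed above simply averages the per-model class probabilities and takes the argmax (other combination rules, such as majority vote, work too). This is a generic sketch, not the paper's exact ensembling procedure.

```python
import numpy as np

def ensemble_predict(prob_list):
    # prob_list: one (n_examples, n_classes) probability array per model.
    # Average the models' probabilities, then pick the highest-scoring class.
    return np.mean(prob_list, axis=0).argmax(axis=-1)
```

When the constituent models err on different examples, the confident model can outvote the mistaken one on each example, which is exactly where prior diversity pays off.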
Since these models capture complementary information about the input (cf. Table 2), we expect them to correct each other's mistakes and improve their prediction rules. As we will see in this section, this approach can indeed have a significant impact on the performance of the resulting model, outperforming ensembles that combine such models only at evaluation time (see the summary in Figure 4). Setup. We base our analysis on the CIFAR-10 and STL-10 datasets. Specifically, we treat a small fraction of the training set as labeled examples (100 examples per class), another fraction as our validation set for tuning hyperparameters (10% of the total training examples), and the rest as unlabeled data. We report our results on the standard test set of each dataset. (See Appendix A for experimental details and Appendix B.6 for experiments with varying amounts of labeled data.) | The paper presents multiple techniques for training models with different feature priors (i.e., inclinations to focus on different aspects of the training data) and combining them, either post hoc via ensembles or by allowing the models to provide augmented pseudo-labeled training data to each other via co-training. When using simple ensembling techniques, ensembles with a diversity of feature priors are shown to perform better than ensembles where the individual models have similar feature priors. Co-training is shown to boost performance substantially when models with diverse feature priors supply pseudo-labels to each other. The problem domain is image classification. The feature priors concern shape and texture. Different preprocessing and/or architectural constraints are used for different models so as to predispose them to focus on shape but not texture, or vice versa. | SP:3270ebec400aec532a64592936c483e5f445fda5 |
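A single round of the co-training exchange described above can be sketched as follows. The nearest-centroid learner is a hypothetical stand-in for the shape- and texture-biased networks, and the sketch omits the retraining schedule and confidence thresholds a real implementation would need.

```python
import numpy as np

class Centroid:
    """Stand-in learner (nearest class centroid); any classifier works here."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.stack([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.mu[None], axis=-1)
        return self.classes[d.argmin(axis=1)]
    def confidence(self, X):
        d = np.sort(np.linalg.norm(X[:, None, :] - self.mu[None], axis=-1), axis=1)
        return d[:, 1] - d[:, 0]  # margin between the two closest centroids

def co_train_round(model_a, model_b, X_lab, y_lab, X_unlab, k=1):
    # One co-training round: each model, trained on the labeled set,
    # pseudo-labels its k most confident unlabeled points for the *other*
    # model, so the two feature priors can correct each other's mistakes.
    model_a.fit(X_lab, y_lab)
    model_b.fit(X_lab, y_lab)
    exchanged = []
    for teacher in (model_a, model_b):
        idx = np.argsort(-teacher.confidence(X_unlab))[:k]
        exchanged.append((X_unlab[idx], teacher.predict(X_unlab[idx])))
    return exchanged
```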
Combining Diverse Feature Priors | 1 INTRODUCTION. The driving force behind deep learning's success is its ability to automatically discover predictive features in complex high-dimensional datasets. In fact, these features can generalize beyond the specific task at hand, thus enabling models to transfer to other (yet similar) tasks (Donahue et al., 2014). At the same time, the set of features that the model learns has a large impact on how well it will perform on unseen inputs, especially in the presence of distribution shift (Ponce et al., 2006; Torralba & Efros, 2011; Sagawa et al., 2020) or spurious correlations (Heinze-Deml & Meinshausen, 2017; Beery et al., 2018; Meinshausen, 2018). Motivated by this, recent work focuses on encouraging specific modes of behavior by preventing models from relying on certain features. Examples include suppressing texture features (Geirhos et al., 2019; Wang et al., 2019), avoiding $\ell_p$-non-robust features (Tsipras et al., 2019; Engstrom et al., 2019), or utilizing different parts of the frequency spectrum (Yin et al., 2019). At a high level, these methods can be thought of as ways of imposing a feature prior on the learning process, so as to bias the model towards acquiring features that generalize better. This makes the choice of which feature prior to impose a key design decision. The goal of this work is thus to explore the underlying design space of feature priors and, specifically, to understand: How can we effectively harness the diversity of feature priors? OUR CONTRIBUTIONS. In this paper, we cast diverse feature priors as different perspectives on the data and study how they can complement each other. In particular, we aim to understand whether training with distinct priors results in models with non-overlapping failure modes, and how such models can be combined to improve generalization.
This is particularly relevant in settings where the data is unreliable, e.g., when the training data contains a spurious correlation. From this perspective, we focus our study on two priors that arise naturally in the context of image classification, shape and texture, and investigate the following: Feature diversity. We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution, even if they perform similarly in terms of overall accuracy. Further, one can harness this diversity to build model ensembles that are more accurate than those combining models that share the same feature prior. Combining feature priors on unlabeled data. When learning from unlabeled data, the choice of feature prior can be especially important. For strategies such as self-training, sub-optimal prediction rules learned from sparse labeled data can be reinforced when pseudo-labeling the unlabeled data. We show that, in such settings, we can leverage the diversity of feature priors to address these issues. By jointly training models with different feature priors on the unlabeled data through the framework of co-training (Blum & Mitchell, 1998), we find that the models can correct each other's mistakes and learn prediction rules that generalize better. Learning in the presence of spurious correlations. Finally, we want to understand whether combining diverse priors during training, as described above, can prevent models from relying on correlations that are spurious, i.e., correlations that do not hold on the actual distribution of interest. To model such scenarios, we consider a setting where a spurious correlation is present in the training data but we also have access to (unlabeled) data where this correlation does not hold.
In this setting , we find that co-training models with diverse feature priors can actually steer them away from such correlations and thus enable them to generalize to the underlying distribution . Overall , our findings highlight the potential of incorporating distinct feature priors into the training process . We believe that further work along this direction will lead us to models that generalize more reliably . 2 BACKGROUND : FEATURE PRIORS IN COMPUTER VISION . When learning from structurally complex data , such as images , relying on raw input features alone ( e.g. , pixels ) is not particularly useful . There has thus been a long line of work on extracting input patterns that can be more effective for prediction . While early approaches , such as SIFT ( Lowe , 1999 ) and HOG ( Dalal & Triggs , 2005 ) , leveraged hand-crafted features , these have been by now largely replaced by features that are automatically learned in an end-to-end fashion ( Krizhevsky , 2009 ; Ciregan et al. , 2012 ; Krizhevsky et al. , 2012 ) . Nevertheless , even when features are learned , model designers still tune their models to better suit a particular task via changes in the architecture or training methodology . Such modifications can be thought of as imposing feature priors , i.e. , priors that bias a model towards a particular set of features . One prominent example here are convolutional neural networks , which are biased towards learning a hierarchy of localized features Fukushima ( 1980 ) ; LeCun et al . ( 1989 ) . Indeed , such a convolutional prior can be quite powerful : it is sufficient to enable many image synthesis tasks without any training Ulyanov et al . ( 2017 ) . More recently , there has been work exploring the impact of explicitly restricting the set of features utilized by the model . For instance , Geirhos et al . 
( 2019 ) demonstrate that training models on stylized inputs ( and hence suppressing texture information ) can improve model robustness to common corruptions . In a similar vein , Wang et al . ( 2019 ) penalize the predictive power of local features to learn shape-biased models that generalize better between image styles . A parallel line of work focuses on training models to be robust to small , worst-case input perturbations using , for example , adversarial training Goodfellow et al . ( 2015 ) ; Madry et al . ( 2018 ) or randomized smoothing ( Lecuyer et al. , 2019 ; Cohen et al. , 2019 ) . Such training biases these models away from non-robust features ( Tsipras et al. , 2019 ; Ilyas et al. , 2019 ; Engstrom et al. , 2019 ) , which tends to result in them being more aligned with human perception ( Tsipras et al. , 2019 ; Kaur et al. , 2019 ) , more resilient to certain input corruptions ( Ford et al. , 2019 ; Kireev et al. , 2021 ) , and better suited for transfer to downstream tasks Utrera et al . ( 2020 ) ; Salman et al . ( 2020 ) . 3 FEATURE PRIORS AS DIFFERENT PERSPECTIVES . As we discussed , the choice of feature prior can have a large effect on what features a model relies on and , by extension , on how well it generalizes to unseen inputs . In fact , one can view such priors as distinct perspectives on the data , capturing different information about the input . In this section , we provide evidence to support this view ; specifically , we examine a case study on a pair of feature priors that arise naturally in the context of image classification : shape and texture . 3.1 TRAINING SHAPE- AND TEXTURE-BIASED MODELS . In order to train shape- and texture-biased models , we either pre-process the model input or modify the model architecture as follows : Shape-biased models . To suppress texture information in the images , we pre-process our inputs by applying an edge detection algorithm . 
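To make this edge-detection preprocessing concrete, here is a minimal sketch using a Sobel filter; the specific filter, grayscale input, and normalization are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image):
    """Suppress texture by keeping only edge (gradient-magnitude) information.

    `image` is a 2-D grayscale array; the paper's preprocessing may differ
    (e.g. per-channel filtering or a different normalization).
    """
    gx = ndimage.sobel(image, axis=0)  # gradient along rows
    gy = ndimage.sobel(image, axis=1)  # gradient along columns
    mag = np.hypot(gx, gy)             # gradient magnitude
    return mag / mag.max() if mag.max() > 0 else mag  # rescale to [0, 1]

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # a vertical step edge
edges = sobel_edges(img)               # high responses only near the step
```

The result keeps the object outline while discarding flat (texture-carrying) regions, which is the intended bias of the shape-oriented models.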
We consider two such canonical algorithms : the Canny algorithm Ding & Goshtasby ( 2001 ) which produces a binary edge mask , and the Sobel algorithm Sobel & Feldman ( 1968 ) which provides a softer edge detection , hence retaining some texture information ( see Figures 1b and 1c ) . Texture-biased models . To prevent the model from relying on the global structure of the image , we utilize a variant of the BagNet architecture Brendel & Bethge ( 2019 ) . This architecture deliberately limits the receptive field of the model , thus forcing it to rely on local features ( see Figure 1d ) . We visualize all of these priors in Figure 1 and provide implementation details in Appendix A . 3.2 DIVERSITY OF FEATURE-BIASED MODELS . After training models with shape and texture biases as outlined above , we evaluate whether these models indeed capture complementary information about the input . Specifically , we train models on a small subset ( 100 examples per class ) of the CIFAR-10 ( Krizhevsky , 2009 ) and STL-10 ( Coates et al. , 2011 ) datasets , and measure the correlation between which test examples they correctly classify . We find that pairs consisting of a shape-biased model and a texture-biased model ( i.e. , Canny and BagNet , or Sobel and BagNet ) indeed have the least correlated predictions—cf . Table 2 . In other words , the mistakes that these models make are more diverse than those made by identical models trained from different random initializations . At the same time , different shape-biased models ( Sobel and Canny ) are relatively well-correlated with each other , which corroborates the fact that models trained on similar features of the input are likely to make similar mistakes . Model ensembles . Having shown that training models with these feature priors results in diverse prediction rules , we examine if we can now combine them to improve our generalization . The canonical approach for doing so is to incorporate these models into an ensemble .
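Both the correlation measurement and the ensembling step can be sketched on toy predictions. Pearson correlation of 0/1 correctness indicators and probability averaging are assumed stand-ins for the paper's exact statistic and ensembling rule:

```python
import numpy as np

def correctness_correlation(preds_a, preds_b, labels):
    """Pearson correlation between two models' per-example correctness (0/1)."""
    a = (np.asarray(preds_a) == np.asarray(labels)).astype(float)
    b = (np.asarray(preds_b) == np.asarray(labels)).astype(float)
    return float(np.corrcoef(a, b)[0, 1])

def ensemble_predict(prob_list):
    """Average per-model class probabilities, then take the argmax per example."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

labels  = np.array([0, 1, 2, 0, 1, 2])
model_a = np.array([0, 1, 2, 1, 1, 0])   # wrong on examples 3 and 5
model_b = np.array([0, 1, 2, 1, 1, 0])   # same prior: identical mistakes
model_c = np.array([1, 1, 2, 0, 2, 2])   # different prior: different mistakes
r_same = correctness_correlation(model_a, model_b, labels)
r_diff = correctness_correlation(model_a, model_c, labels)

# Ensembling two models that disagree on example 1: the more confident one wins.
p_shape   = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7]])
p_texture = np.array([[0.5, 0.4, 0.1], [0.6, 0.2, 0.2], [0.2, 0.1, 0.7]])
pred = ensemble_predict([p_shape, p_texture])
```

Here `r_same` is 1 (identical mistakes) while `r_diff` is negative, mirroring the observation that models with different priors fail on different examples.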
We find that the diversity of models trained with different feature priors indeed directly translates into an improved performance when combining them into an ensemble—cf . Table 3 . In fact , we find that the performance of the ensemble is tightly connected to prediction similarity of its constituents ( as measured in Table 2 ) , i.e. , more diverse ensembles tend to perform better . For instance , the best ensemble for the STL-10 dataset is the one combining a shape-biased ( Canny ) and a texture-biased model ( BagNet ) which were the models with the least aligned predictions . 4 COMBINING DIVERSE PRIORS ON UNLABELED DATA . In the previous section , we saw that training models with different feature priors ( e.g. , shape- and texture-biased models ) can lead to prediction rules with less overlapping failure modes—which , in turn , can lead to more effective model ensembles . However , ensembles only combine model predictions post hoc and thus can not take advantage of diversity during the training process . In this section , we instead focus on utilizing diversity during training . Specifically , we will leverage the diversity introduced through feature priors in the context of self-training Lee et al . ( 2013 ) —a framework commonly used when the labeled data is insufficient to learn a well-generalizing model . This framework utilizes unlabeled data , which are then pseudo-labeled using an existing model and used for further training . While such methods can often improve the overall model performance , they suffer from a significant drawback : models tend to reinforce suboptimal prediction rules even when these rules do not generalize to the underlying distribution Arazo et al . ( 2020 ) . Our goal here is thus to leverage diverse feature priors to address this exact shortcoming . Specifically , we will jointly train models with different priors on the unlabeled data through the framework of co-training Blum & Mitchell ( 1998 ) . 
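As a toy illustration of this co-training loop (not the paper's setup): two "views" of the data play the role of the two feature priors, each model sees only its own view, and each round the models exchange confident pseudo-labels on the unlabeled pool. The nearest-centroid learner and the synthetic data are assumptions for the sketch:

```python
import numpy as np

class CentroidModel:
    """Tiny one-feature nearest-centroid classifier standing in for a real model."""
    def __init__(self, view):
        self.view = view                          # which feature column this model sees
    def fit(self, X, y):
        x = X[:, self.view]
        self.c0, self.c1 = x[y == 0].mean(), x[y == 1].mean()
    def predict(self, X):
        x = X[:, self.view]
        return (np.abs(x - self.c1) < np.abs(x - self.c0)).astype(int)
    def confidence(self, X):
        x = X[:, self.view]
        return np.abs(np.abs(x - self.c0) - np.abs(x - self.c1))

rng = np.random.default_rng(0)
n = 200
y = np.arange(n) % 2                              # balanced binary labels
X = np.stack([y + 0.2 * rng.normal(size=n),       # view 0 (e.g. "shape")
              y + 0.2 * rng.normal(size=n)], 1)   # view 1 (e.g. "texture")
labeled, unlabeled = np.arange(10), np.arange(10, n)

a, b = CentroidModel(0), CentroidModel(1)
Xa, ya = X[labeled], y[labeled]                   # model a's growing train set
Xb, yb = X[labeled], y[labeled]                   # model b's growing train set
for _ in range(3):                                # co-training rounds
    a.fit(Xa, ya); b.fit(Xb, yb)
    # Each model pseudo-labels its 20 most confident unlabeled points for its
    # partner (points may repeat across rounds; fine for a sketch).
    top_a = unlabeled[np.argsort(-a.confidence(X[unlabeled]))[:20]]
    top_b = unlabeled[np.argsort(-b.confidence(X[unlabeled]))[:20]]
    Xb = np.vstack([Xb, X[top_a]]); yb = np.concatenate([yb, a.predict(X[top_a])])
    Xa = np.vstack([Xa, X[top_b]]); ya = np.concatenate([ya, b.predict(X[top_b])])

acc = ((a.predict(X) == y).mean() + (b.predict(X) == y).mean()) / 2
```

The key structural point is the exchange: each model trains on labels produced by a model with a *different* view, so confident knowledge in one view can correct mistakes in the other.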
Since these models capture complementary information about the input ( cf . Table 2 ) , we expect them to correct each other ’ s mistakes and improve their prediction rules . As we will see in this section , this approach can indeed have a significant impact on the performance of the resulting model , outperforming ensembles that combine such models only at evaluation time—see summary in Figure 4 . Setup . We base our analysis on the CIFAR-10 and STL-10 datasets . Specifically , we treat a small fraction of the training set as labeled examples ( 100 examples per class ) , another fraction as our validation set for tuning hyperparameters ( 10 % of the total training examples ) , and the rest as unlabeled data . We report our results on the standard test set of each dataset . ( See Appendix A for experimental details , and Appendix B.6 for experiments with varying levels of labeled data . ) | The paper proposes a formalized framework for imposing priors on the feature extraction in deep visual processing models. There has been earlier work on encouraging certain feature representations (e.g. suppressing the focus on texture in feature extraction) and also making feature representations robust to domain shift. The core contribution of this paper is the systematic formulation and investigation of how different, distinct feature priors leads to complementary feature representations that can be combined to provide more robust data representations - in other words, creating synthesized multi-view data representations. The paper ties back to early (1998) work on co-training (which essentially is multi-modal bootstrapping) and ties this to the more recent body of work on self-supervision and self-training. Experiments are performed with classical shape- and texture-biased models, and show that the hypothesis - that diverse feature priors are able to robustly create a set of complementary data views - holds. | SP:3270ebec400aec532a64592936c483e5f445fda5 |
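The labeled/validation/unlabeled split described in the Setup paragraph can be sketched as follows; the greedy assignment order, seed, and CIFAR-10-sized toy labels are assumptions, not the paper's exact procedure:

```python
import numpy as np

def split_indices(labels, n_per_class=100, val_frac=0.1, seed=0):
    """Greedily assign a shuffled index order to labeled / validation / unlabeled."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    labeled, val, unlabeled = [], [], []
    counts = {}
    n_val = int(val_frac * len(labels))
    for i in order:
        c = int(labels[i])
        if counts.get(c, 0) < n_per_class:    # fill the per-class labeled quota
            labeled.append(i)
            counts[c] = counts.get(c, 0) + 1
        elif len(val) < n_val:                # then the validation fraction
            val.append(i)
        else:                                 # everything else is unlabeled
            unlabeled.append(i)
    return labeled, val, unlabeled

y = np.repeat(np.arange(10), 5000)            # a CIFAR-10-sized toy label vector
lab, val, unlab = split_indices(y)            # 1000 / 5000 / 44000 indices
```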
Combining Diverse Feature Priors | 1 INTRODUCTION . The driving force behind deep learning ’ s success is its ability to automatically discover predictive features in complex high-dimensional datasets . In fact , these features can generalize beyond the specific task at hand , thus enabling models to transfer to other ( yet similar ) tasks ( Donahue et al. , 2014 ) . At the same time , the set of features that the model learns has a large impact on how well it will perform on unseen inputs , especially in the presence of distribution shift ( Ponce et al. , 2006 ; Torralba & Efros , 2011 ; Sagawa et al. , 2020 ) or spurious correlations ( Heinze-Deml & Meinshausen , 2017 ; Beery et al. , 2018 ; Meinshausen , 2018 ) . Motivated by this , recent work focuses on encouraging specific modes of behavior by preventing the models from relying on certain features . Examples include suppressing texture features ( Geirhos et al. , 2019 ; Wang et al. , 2019 ) , avoiding ℓp-non-robust features ( Tsipras et al. , 2019 ; Engstrom et al. , 2019 ) , or utilizing different parts of the frequency spectrum ( Yin et al. , 2019 ) . At a high level , these methods can be thought of as ways of imposing a feature prior on the learning process , so as to bias the model towards acquiring features that generalize better . This makes the choice of the feature prior to impose a key design decision . The goal of this work is thus to explore the underlying design space of feature priors and , specifically , to understand : How can we effectively harness the diversity of feature priors ? OUR CONTRIBUTIONS . In this paper , we cast diverse feature priors as different perspectives on the data and study how they can complement each other . In particular , we aim to understand whether training with distinct priors results in models with non-overlapping failure modes and how such models can be combined to improve generalization .
This is particularly relevant in settings where the data is unreliable— e.g , when the training data contains a spurious correlation . From this perspective , we focus our study on two priors that arise naturally in the context of image classification , shape and texture , and investigate the following : Feature diversity . We demonstrate that training models with diverse feature priors results in them making mistakes on different parts of the data distribution , even if they perform similarly in terms of overall accuracy . Further , one can harness this diversity to build model ensembles that are more accurate than those based on combining models which have the same feature prior . Combining feature priors on unlabeled data . When learning from unlabeled data , the choice of feature prior can be especially important . For strategies such as self-training , sub-optimal prediction rules learned from sparse labeled data can be reinforced when pseudo-labeling the unlabeled data . We show that , in such settings , we can leverage the diversity of feature priors to address these issues . By jointly training models with different feature priors on the unlabeled data through the framework of co-training Blum & Mitchell ( 1998 ) , we find that the models can correct each other ’ s mistakes to learn prediction rules that generalize better . Learning in the presence of spurious correlations . Finally , we want to understand whether combining diverse priors during training , as described above , can prevent models from relying on correlations that are spurious , i.e. , correlations that do not hold on the actual distribution of interest . To model such scenarios , we consider a setting where a spurious correlation is present in the training data but we also have access to ( unlabeled ) data where this correlation does not hold . 
In this setting , we find that co-training models with diverse feature priors can actually steer them away from such correlations and thus enable them to generalize to the underlying distribution . Overall , our findings highlight the potential of incorporating distinct feature priors into the training process . We believe that further work along this direction will lead us to models that generalize more reliably . 2 BACKGROUND : FEATURE PRIORS IN COMPUTER VISION . When learning from structurally complex data , such as images , relying on raw input features alone ( e.g. , pixels ) is not particularly useful . There has thus been a long line of work on extracting input patterns that can be more effective for prediction . While early approaches , such as SIFT ( Lowe , 1999 ) and HOG ( Dalal & Triggs , 2005 ) , leveraged hand-crafted features , these have been by now largely replaced by features that are automatically learned in an end-to-end fashion ( Krizhevsky , 2009 ; Ciregan et al. , 2012 ; Krizhevsky et al. , 2012 ) . Nevertheless , even when features are learned , model designers still tune their models to better suit a particular task via changes in the architecture or training methodology . Such modifications can be thought of as imposing feature priors , i.e. , priors that bias a model towards a particular set of features . One prominent example here are convolutional neural networks , which are biased towards learning a hierarchy of localized features Fukushima ( 1980 ) ; LeCun et al . ( 1989 ) . Indeed , such a convolutional prior can be quite powerful : it is sufficient to enable many image synthesis tasks without any training Ulyanov et al . ( 2017 ) . More recently , there has been work exploring the impact of explicitly restricting the set of features utilized by the model . For instance , Geirhos et al . 
( 2019 ) demonstrate that training models on stylized inputs ( and hence suppressing texture information ) can improve model robustness to common corruptions . In a similar vein , Wang et al . ( 2019 ) penalize the predictive power of local features to learn shape-biased models that generalize better between image styles . A parallel line of work focuses on training models to be robust to small , worst-case input perturbations using , for example , adversarial training Goodfellow et al . ( 2015 ) ; Madry et al . ( 2018 ) or randomized smoothing ( Lecuyer et al. , 2019 ; Cohen et al. , 2019 ) . Such training biases these models away from non-robust features ( Tsipras et al. , 2019 ; Ilyas et al. , 2019 ; Engstrom et al. , 2019 ) , which tends to result in them being more aligned with human perception ( Tsipras et al. , 2019 ; Kaur et al. , 2019 ) , more resilient to certain input corruptions ( Ford et al. , 2019 ; Kireev et al. , 2021 ) , and better suited for transfer to downstream tasks Utrera et al . ( 2020 ) ; Salman et al . ( 2020 ) . 3 FEATURE PRIORS AS DIFFERENT PERSPECTIVES . As we discussed , the choice of feature prior can have a large effect on what features a model relies on and , by extension , on how well it generalizes to unseen inputs . In fact , one can view such priors as distinct perspectives on the data , capturing different information about the input . In this section , we provide evidence to support this view ; specifically , we examine a case study on a pair of feature priors that arise naturally in the context of image classification : shape and texture . 3.1 TRAINING SHAPE- AND TEXTURE-BIASED MODELS . In order to train shape- and texture-biased models , we either pre-process the model input or modify the model architecture as follows : Shape-biased models . To suppress texture information in the images , we pre-process our inputs by applying an edge detection algorithm . 
We consider two such canonical algorithms : the Canny algorithm Ding & Goshtasby ( 2001 ) which produces a binary edge mask , and the Sobel algorithm Sobel & Feldman ( 1968 ) which provides a softer edge detection , hence retaining some texture information ( see Figures 1b and 1c ) . Texture-biased models . To prevent the model from relying on the global structure of the image , we utilize a variant of the BagNet architecture Brendel & Bethge ( 2019 ) . This architecture deliberately limits the receptive field of the model , thus forcing it to rely on local features ( see Figure 1d ) . We visualize all of these priors in Figure 1 and provide implementation details in Appendix A . 3.2 DIVERSITY OF FEATURE-BIASED MODELS . After training models with shape and texture biases as outlined above , we evaluate whether these models indeed capture complementary information about the input . Specifically , we train models on a small subset ( 100 examples per class ) of the CIFAR-10 ( Krizhevsky , 2009 ) and STL-10 ( Coates et al. , 2011 ) datasets , and measure the correlation between which test examples they correctly classify . We find that pairs consisting of a shape-biased model and a texture-biased model ( i.e. , Canny and BagNet , or Sobel and BagNet ) indeed have the least correlated predictions—cf . Table 2 . In other words , the mistakes that these models make are more diverse than those made by identical models trained from different random initializations . At the same time , different shape-biased models ( Sobel and Canny ) are relatively well-correlated with each other , which corroborates the fact that models trained on similar features of the input are likely to make similar mistakes . Model ensembles . Having shown that training models with these feature priors results in diverse prediction rules , we examine if we can now combine them to improve our generalization . The canonical approach for doing so is to incorporate these models into an ensemble .
We find that the diversity of models trained with different feature priors indeed directly translates into an improved performance when combining them into an ensemble—cf . Table 3 . In fact , we find that the performance of the ensemble is tightly connected to prediction similarity of its constituents ( as measured in Table 2 ) , i.e. , more diverse ensembles tend to perform better . For instance , the best ensemble for the STL-10 dataset is the one combining a shape-biased ( Canny ) and a texture-biased model ( BagNet ) which were the models with the least aligned predictions . 4 COMBINING DIVERSE PRIORS ON UNLABELED DATA . In the previous section , we saw that training models with different feature priors ( e.g. , shape- and texture-biased models ) can lead to prediction rules with less overlapping failure modes—which , in turn , can lead to more effective model ensembles . However , ensembles only combine model predictions post hoc and thus can not take advantage of diversity during the training process . In this section , we instead focus on utilizing diversity during training . Specifically , we will leverage the diversity introduced through feature priors in the context of self-training Lee et al . ( 2013 ) —a framework commonly used when the labeled data is insufficient to learn a well-generalizing model . This framework utilizes unlabeled data , which are then pseudo-labeled using an existing model and used for further training . While such methods can often improve the overall model performance , they suffer from a significant drawback : models tend to reinforce suboptimal prediction rules even when these rules do not generalize to the underlying distribution Arazo et al . ( 2020 ) . Our goal here is thus to leverage diverse feature priors to address this exact shortcoming . Specifically , we will jointly train models with different priors on the unlabeled data through the framework of co-training Blum & Mitchell ( 1998 ) . 
Since these models capture complementary information about the input ( cf . Table 2 ) , we expect them to correct each other ’ s mistakes and improve their prediction rules . As we will see in this section , this approach can indeed have a significant impact on the performance of the resulting model , outperforming ensembles that combine such models only at evaluation time—see summary in Figure 4 . Setup . We base our analysis on the CIFAR-10 and STL-10 datasets . Specifically , we treat a small fraction of the training set as labeled examples ( 100 examples per class ) , another fraction as our validation set for tuning hyperparameters ( 10 % of the total training examples ) , and the rest as unlabeled data . We report our results on the standard test set of each dataset . ( See Appendix A for experimental details , and Appendix B.6 for experiments with varying levels of labeled data . ) | The goal of the paper is to improve model generalisation. The authors consider feature priors as distinct perspectives on the data. The results show that models trained with diverse sets of various feature priors have less overlapping modes and are more efficiently combined. | SP:3270ebec400aec532a64592936c483e5f445fda5 |
From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness | 1 INTRODUCTION . Graphs are permutation invariant , combinatorial structures used to represent relational data , with wide applications ranging from drug discovery , social network analysis , image analysis to bioinformatics ( Duvenaud et al. , 2015 ; Fan et al. , 2019 ; Shi et al. , 2019 ; Wu et al. , 2020 ) . In recent years , Graph Neural Networks ( GNNs ) have rapidly surpassed traditional methods like heuristically defined features and graph kernels to become the dominant approach for graph ML tasks . Message Passing Neural Networks ( MPNNs ) ( Gilmer et al. , 2017 ) are the most common type of GNNs owing to their intuitiveness , effectiveness and efficiency . They follow a recursive aggregation mechanism where each node aggregates information from its immediate neighbors repeatedly . However , unlike simple multi-layer feedforward networks ( MLPs ) which are universal approximators of continuous functions ( Hornik et al. , 1989 ) , MPNNs can not approximate all permutation-invariant graph functions ( Maron et al. , 2019b ) . In fact , their expressiveness is upper bounded by the first order Weisfeiler-Leman ( 1-WL ) isomorphism test ( Xu et al. , 2018 ) . Importantly , researchers have shown that such 1-WL equivalent GNNs are not expressive , or powerful , enough to capture basic structural concepts , i.e. , counting motifs such as cycles or triangles ( Zhengdao et al. , 2020 ; Arvind et al. , 2020 ) that are shown to be informative for bio- and chemo-informatics ( Elton et al. , 2019 ) . The weakness of MPNNs urges researchers to design more expressive GNNs , which are able to discriminate graphs from an isomorphism test perspective ; Chen et al . ( 2019 ) prove the equivalence between such tests and universal permutation invariant function approximation , which theoretically justifies it . As k-WL is strictly more expressive than 1-WL , many works ( Morris et al. 
, 2019 ; 2020b ) try to incorporate k-WL in the design of more powerful GNNs , while others approach k-WL expressiveness indirectly from matrix invariant operations ( Maron et al. , 2019a ; b ; Keriven & Peyré , 2019 ) and matrix language perspectives ( Balcilar et al. , 2021 ) . However , they require O ( k ) -order tensors to achieve k-WL expressiveness , and thus are not scalable or feasible for application on large , practical graphs . Besides , the bias-variance tradeoff between complexity and generalization ( Neal et al. , 2018 ) and the fact that almost all graphs ( i.e . O ( 2^ ( n choose 2 ) ) graphs on n vertices , Babai et al . ( 1980 ) ) can be distinguished by 1-WL challenge the necessity of developing such extremely expressive models . In a complementary line of work , Loukas ( 2020a ) sheds light on developing more powerful GNNs while maintaining linear scalability , finding that MPNNs can be universal approximators provided that nodes are sufficiently distinguishable . Relatedly , several works propose to add features to make nodes more distinguishable , such as identifiers ( Loukas , 2020a ) , subgraph counts ( Bouritsas et al. , 2020 ) , distance encoding ( Li et al. , 2020 ) , and random features ( Sato et al. , 2021 ; Abboud et al. , 2021 ) . However , these methods either focus on handcrafted features which lose the premise of automatic learning , or create permutation sensitive features that hurt generalization . Present Work . Our work stands between the two regimes of extremely expressive but unscalable k-order GNNs , and the limited expressiveness yet high scalability of MPNNs . Specifically , we propose a general framework that serves as a “ wrapper ” to uplift any GNN . We observe that MPNNs ’ local neighbor aggregation follows a star pattern , where the representation of a node is characterized by applying an injective aggregator function as an encoder to the star subgraph ( comprised of the central node and edges to neighbors ) .
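The star-pattern aggregation just described can be written in a few lines; the sum aggregator and additive update below are stand-ins for the learned functions of a real MPNN:

```python
def mpnn_layer(adj, h):
    """One round of star-pattern message passing: each node aggregates its
    immediate neighbors' features (here by sum) and updates its own feature."""
    return {v: h[v] + sum(h[u] for u in nbrs) for v, nbrs in adj.items()}

# On regular graphs with identical initial features, every star looks the same,
# so this scheme (like 1-WL) cannot separate e.g. one 6-cycle from two triangles.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
h_c6 = {v: 1 for v in range(6)}
h_c3 = {v: 1 for v in range(6)}
for _ in range(3):                     # extra rounds do not help on regular graphs
    h_c6 = mpnn_layer(c6, h_c6)
    h_c3 = mpnn_layer(two_c3, h_c3)
# Both graphs end with the same multiset of node features.
```

This failure on regular graphs is exactly the limitation that motivates replacing the star with a richer local subgraph.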
We propose a design which naturally generalizes from encoding the star to encoding a more flexibly defined subgraph , and we replace the standard injective aggregator with a GNN : in short , we characterize the new representation of a node by using a GNN to encode a locally induced encompassing subgraph , as shown in Fig.1 . This uplifts GNN as a base model in effect by applying it on each subgraph instead of the whole input graph . This generalization is close to Convolutional Neural Networks ( CNN ) in computer vision : like the CNN that convolves image patches with a kernel to compute new pixel embeddings , our designed wrapper convolves subgraphs with a GNN to generate new node embeddings . Hence , we name our approach GNN-AK ( GNN As Kernel ) . We show theoretically that GNN-AK is strictly more powerful than 1 & 2-WL with any MPNN as base model , and is not less powerful than 3-WL with PPGN ( Maron et al. , 2019a ) used . We also give sufficient conditions under which GNN-AK can successfully distinguish two non-isomorphic graphs . Given this increase in expressive power , we discuss careful implementation strategies for GNN-AK , which allow us to carefully leverage multiple modalities of information from subgraph encoding , and resulting in an empirically more expressive version GNN-AK+ . As a result , GNN-AK and GNN-AK+ induce a constant factor overhead in memory . To amplify our method ’ s practicality , we further develop a subgraph sampling strategy inspired by Dropout ( Srivastava et al. , 2014 ) to drastically reduce this overhead ( 1-3× in practice ) without hurting performance . We conduct extensive experiments on 4 simulation datasets and 5 well-known real-world graph classification & regression benchmarks ( Dwivedi et al. , 2020 ; Hu et al. , 2020 ) , to show significant and consistent practical benefits of our approach across different MPNNs and datasets . 
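A minimal sketch of the subgraph-encoding design: extract each node's k-hop egonet and encode the induced subgraph. The permutation-invariant tuple encoder below is a stand-in for the base GNN (an assumption for the sketch; the paper plugs in an actual MPNN):

```python
from collections import deque

def k_hop_nodes(adj, root, k):
    """Nodes within k hops of `root` (the egonet's vertex set), via BFS."""
    seen, frontier = {root}, deque([(root, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == k:
            continue
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                frontier.append((u, d + 1))
    return seen

def induced_edges(adj, nodes):
    return {(u, v) for u in nodes for v in adj[u] if v in nodes and u < v}

def encode_subgraph(nodes, edges):
    # Stand-in, permutation-invariant "base GNN": (|V|, |E|, sum of squared degrees).
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    return (len(nodes), len(edges), sum(d * d for d in deg.values()))

def gnn_ak_layer(adj, k=1):
    """New node embeddings: one subgraph encoding per rooted k-hop egonet."""
    return {v: encode_subgraph(ns := k_hop_nodes(adj, v, k), induced_edges(adj, ns))
            for v in adj}

# Two 2-regular graphs that star aggregation cannot separate: egonets differ
# once connections *among* neighbors are taken into account.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
emb_c6 = gnn_ak_layer(c6)      # each egonet is a 3-node path
emb_c3 = gnn_ak_layer(two_c3)  # each egonet is a triangle
```

Even with this crude subgraph encoder, the 6-cycle and the pair of triangles receive different node embeddings, illustrating why encoding egonets instead of stars lifts expressiveness beyond 1-WL.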
Specifically , GNN-AK+ sets new state-of-the-art performance on ZINC , CIFAR10 , and PATTERN – for example , on ZINC we see a relative error reduction of 60.3 % , 50.5 % , and 39.4 % for base model being GCN ( Kipf & Welling , 2017 ) , GIN ( Xu et al. , 2018 ) , and ( a variant of ) PNA ( Corso et al. , 2020 ) respectively . To summarize , our contributions are listed as follows : • A General GNN-AK Framework . We propose GNN-AK ( and enhanced GNN-AK+ ) , a general framework which uplifts any GNN by encoding local subgraph structure with a GNN . • Theoretical Findings . We show that GNN-AK ’ s expressiveness is strictly better than 1 & 2-WL , and is not less powerful than 3-WL . We analyze sufficient conditions for successful discrimination . • Effective and Efficient Realization . We present effective implementations for GNN-AK and GNN-AK+ to fully exploit all node embeddings within a subgraph . We design efficient online subgraph sampling to mitigate memory and runtime overhead while maintaining performance . • Experimental Results . We show strong empirical results , demonstrating both expressivity improvements as well as practical performance gains where we achieve new state-of-the-art performance on several graph-level benchmarks . Our implementation is easy-to-use , and directly accepts any GNN from PyG ( Fey & Lenssen , 2019 ) for plug-and-play use . See code at https://github.com/GNNAsKernel/GNNAsKernel . 2 RELATED WORK . Improving Expressiveness of GNNs : Several works other than those mentioned in Sec.1 tackle expressive GNNs . Murphy et al . ( 2019 ) achieve universality by summing permutation-sensitive functions across a combinatorial number of permutations , limiting feasibility . Dasoulas et al . ( 2020 ) adds node indicators to make them distinguishable , but at the cost of an invariant model , while Vignac et al . ( 2020 ) further addresses the invariance problem , but at the cost of quadratic time complexity . Corso et al .
( 2020 ) generalizes MPNN ’ s default sum aggregator , but is still limited by 1-WL . Beani et al . ( 2021 ) generalizes spatial and spectral aggregation with > 1-WL expressiveness , but using expensive eigendecomposition . Recently , Bodnar et al . ( 2021b ) introduce MPNNs over simplicial complexes that share similar expressiveness with GNN-AK . Ying et al . ( 2021 ) studies transformers with above 1-WL expressiveness . Azizian & Lelarge ( 2021 ) surveys GNN expressiveness work . Leveraging Substructures in Learning : Exploiting subgraph information in GNNs is not new ; in fact , k-WL considers all k node subgraphs . Monti et al . ( 2018 ) ; Lee et al . ( 2019 ) exploit motif information within aggregation , and others ( Bouritsas et al. , 2020 ; Barceló et al. , 2021 ) augment MPNN features with handcrafted subgraph based features . MixHop ( Abu-El-Haija et al. , 2019 ) directly aggregates k-hop information by using adjacency matrix powers , ignoring neighbor connections . Towards a meta-learning goal , G-meta ( Huang & Zitnik , 2020 ) applies GNNs on rooted subgraphs around each node to help transferring ability . Tahmasebi & Jegelka ( 2020 ) only theoretically justifies subgraph convolution with GNN by showing its ability in counting substructures . Zhengdao et al . ( 2020 ) also represent a node by encoding its local subgraph , however using non-scalable relational pooling . k-hop GNN ( Nikolentzos et al. , 2020 ) uses k-egonet in a specially designed way : it encodes a rooted subgraph via sequentially passing messages from k-th hops in the subgraph to k−1 hops , until it reaches the root node , and uses the root node as the encoding of the subgraph . Ego-GNNs ( Sandfelder et al. , 2021 ) computes a context encoding with SGC ( Wu et al. , 2019 ) as the subgraph encoder , and is only studied on node-level tasks . Both k-hop GNN and Ego-GNNs can be viewed as a special case of GNN-AK . You et al .
(2021) design ID-GNNs, which inject node identity during message passing with the help of the k-egonet, i.e., the receptive field of each node in a k-layer GNN (Hamilton et al., 2017) – this differs from GNN-AK, which encodes each node by encoding its rooted subgraph. Unlike GNN-AK, which uses rooted subgraphs, Fey et al. (2020); Thiede et al. (2021); Bodnar et al. (2021a) design GNNs that use certain subgraph patterns (like cycles and paths) in message passing; however, their preprocessing requires solving the subgraph isomorphism problem. To summarize, our work differs by (i) proposing a general subgraph-encoding framework for uplifting GNNs, and (ii) addressing the scalability issues involved with using subgraphs, which pose significant challenges for subgraph-based methods in practice. 3 GENERAL FRAMEWORK AND THEORY. We first introduce our setting and formalisms. Let $G = (V, E)$ be a graph with node features $x_i \in \mathbb{R}^d, \forall i \in V$. We consider graph-level problems where the goal is to classify/regress a target $y_G$ by learning a graph-level representation $h_G$. Let $N_k(v)$ be the set of nodes in the k-hop egonet rooted at node $v$; $N(v) = N_1(v) \setminus \{v\}$ denotes the immediate neighbors of node $v$. For $S \subseteq V$, let $G[S]$ be the induced subgraph: $G[S] = (S, \{(i, j) \in E \mid i \in S, j \in S\})$. Then $G[N_k(v)]$ denotes the k-hop egonet rooted at node $v$. We also define $\mathrm{Star}(v) = (N_1(v), \{(v, j) \in E \mid j \in N(v)\})$ to be the induced star-like subgraph around $v$. We use $\{\cdot\}$ to denote a multiset, i.e., a set that allows repetition. Before presenting GNN-AK, we highlight the insights behind its design that drive the expressiveness boost. Insight 1: Generalizing the star to a subgraph. In MPNNs, every node aggregates information from its immediate neighbors following a star pattern. Consequently, MPNNs fail to distinguish any non-isomorphic regular graphs, where all stars are the same since all nodes have the same degree.
Even simply generalizing the star to the induced 1-hop egonet takes connections among neighbors into account, enabling the model to distinguish regular graphs. Insight 2: Divide and conquer. When two graphs are non-isomorphic, there exists a subgraph in which this difference is captured (see Figure 2). Although a fixed-expressiveness GNN may not distinguish the two original graphs, it may distinguish the two smaller subgraphs, given that the expressiveness required for successful discrimination is proportional to graph size (Loukas, 2020b). As such, GNN-AK divides the harder problem of encoding the whole graph into smaller and easier problems of encoding its subgraphs, and "conquers" the encoding with the base GNN. | This manuscript presents a general-purpose technique to improve GNN expressivity, dubbed GNN-AK, that replaces the conventional neighbourhood/star-like aggregation (multi-set of neighbour embeddings) with an ego-net level aggregation. In particular, at each layer of GNN-AK, the authors first extract all $|\mathcal{V}|$ ego-nets of the original graph, then they apply a GNN on each one of them, and finally, they collect their outputs into node-wise representations. The expressive power of different variants of their model is theoretically analysed (more expressive than 1-WL when the ego-net aggregation is as expressive as 1-WL, and no less powerful than 3-WL, when the ego-net aggregation is as expressive as 3-WL) and a series of design choices for a practical instantiation is provided. The authors extensively evaluate their method on both synthetic and real-world datasets and ablate the influence of some of the moving parts in the overall performance. | SP:c49881605343384d061952b8a25982eb3dfc5cb8 |
From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness | 1 INTRODUCTION. Graphs are permutation-invariant, combinatorial structures used to represent relational data, with wide applications ranging from drug discovery and social network analysis to image analysis and bioinformatics (Duvenaud et al., 2015; Fan et al., 2019; Shi et al., 2019; Wu et al., 2020). In recent years, Graph Neural Networks (GNNs) have rapidly surpassed traditional methods like heuristically defined features and graph kernels to become the dominant approach for graph ML tasks. Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017) are the most common type of GNN owing to their intuitiveness, effectiveness, and efficiency. They follow a recursive aggregation mechanism in which each node repeatedly aggregates information from its immediate neighbors. However, unlike simple multi-layer feedforward networks (MLPs), which are universal approximators of continuous functions (Hornik et al., 1989), MPNNs cannot approximate all permutation-invariant graph functions (Maron et al., 2019b). In fact, their expressiveness is upper bounded by the first-order Weisfeiler-Leman (1-WL) isomorphism test (Xu et al., 2018). Importantly, researchers have shown that such 1-WL-equivalent GNNs are not expressive, or powerful, enough to capture basic structural concepts, e.g., counting motifs such as cycles or triangles (Zhengdao et al., 2020; Arvind et al., 2020), which are known to be informative for bio- and chemo-informatics (Elton et al., 2019). The weakness of MPNNs has urged researchers to design more expressive GNNs that are able to discriminate graphs from an isomorphism-test perspective; Chen et al. (2019) prove the equivalence between such tests and universal permutation-invariant function approximation, which theoretically justifies this goal. As k-WL is strictly more expressive than 1-WL, many works (Morris et al.
, 2019; 2020b) try to incorporate k-WL into the design of more powerful GNNs, while others approach k-WL expressiveness indirectly from matrix-invariant operations (Maron et al., 2019a;b; Keriven & Peyré, 2019) and matrix-language perspectives (Balcilar et al., 2021). However, they require $O(k)$-order tensors to achieve k-WL expressiveness, and thus are not scalable or feasible for application to large, practical graphs. Besides, the bias-variance tradeoff between complexity and generalization (Neal et al., 2018) and the fact that almost all of the $2^{\binom{n}{2}}$ graphs on $n$ vertices can be distinguished by 1-WL (Babai et al., 1980) challenge the necessity of developing such extremely expressive models. In a complementary line of work, Loukas (2020a) sheds light on developing more powerful GNNs while maintaining linear scalability, finding that MPNNs can be universal approximators provided that nodes are sufficiently distinguishable. Relatedly, several works propose adding features to make nodes more distinguishable, such as identifiers (Loukas, 2020a), subgraph counts (Bouritsas et al., 2020), distance encodings (Li et al., 2020), and random features (Sato et al., 2021; Abboud et al., 2021). However, these methods either rely on handcrafted features, which forgo the premise of automatic learning, or create permutation-sensitive features that hurt generalization. Present Work. Our work stands between the two regimes of extremely expressive but unscalable k-order GNNs and the limited expressiveness yet high scalability of MPNNs. Specifically, we propose a general framework that serves as a "wrapper" to uplift any GNN. We observe that MPNNs' local neighbor aggregation follows a star pattern, where the representation of a node is characterized by applying an injective aggregator function as an encoder to the star subgraph (comprised of the central node and its edges to neighbors).
We propose a design that naturally generalizes from encoding the star to encoding a more flexibly defined subgraph, and we replace the standard injective aggregator with a GNN: in short, we characterize the new representation of a node by using a GNN to encode a locally induced subgraph that encompasses it, as shown in Fig. 1. This in effect uplifts the base GNN by applying it to each subgraph instead of the whole input graph. This generalization is close to Convolutional Neural Networks (CNNs) in computer vision: like a CNN, which convolves image patches with a kernel to compute new pixel embeddings, our wrapper convolves subgraphs with a GNN to generate new node embeddings. Hence, we name our approach GNN-AK (GNN As Kernel). We show theoretically that GNN-AK is strictly more powerful than 1- & 2-WL with any MPNN as the base model, and is no less powerful than 3-WL when PPGN (Maron et al., 2019a) is used. We also give sufficient conditions under which GNN-AK can successfully distinguish two non-isomorphic graphs. Given this increase in expressive power, we discuss careful implementation strategies for GNN-AK that allow us to leverage multiple modalities of information from subgraph encoding, resulting in an empirically more expressive version, GNN-AK+. GNN-AK and GNN-AK+ induce only a constant-factor overhead in memory. To amplify our method's practicality, we further develop a subgraph sampling strategy inspired by Dropout (Srivastava et al., 2014) to drastically reduce this overhead (by 1-3× in practice) without hurting performance. We conduct extensive experiments on 4 simulation datasets and 5 well-known real-world graph classification & regression benchmarks (Dwivedi et al., 2020; Hu et al., 2020) to show significant and consistent practical benefits of our approach across different MPNNs and datasets.
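The wrapper idea described above can be sketched in a few lines of plain Python. This is only an illustrative sketch: the helper names (`khop_egonet`, `base_gnn`, `gnn_ak_layer`), the toy sum-aggregation "base GNN", and the sum pooling are our own choices, not the paper's implementation.

```python
from collections import deque

def khop_egonet(adj, v, k):
    """BFS to collect the nodes within k hops of v (the k-hop egonet's node set)."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def base_gnn(adj, nodes, feats, layers=2):
    """Toy base GNN run on the induced subgraph G[nodes]:
    sum aggregation over neighbors restricted to `nodes`."""
    h = {u: feats[u] for u in nodes}
    for _ in range(layers):
        h = {u: h[u] + sum(h[w] for w in adj[u] if w in nodes) for u in nodes}
    return h

def gnn_ak_layer(adj, feats, k=1):
    """One GNN-as-Kernel style layer: each node's new embedding is the pooled
    output of the base GNN applied to that node's k-hop rooted subgraph."""
    new_feats = {}
    for v in adj:
        sub = khop_egonet(adj, v, k)
        h = base_gnn(adj, sub, feats)
        new_feats[v] = sum(h.values())  # subgraph pooling (one of several possible choices)
    return new_feats

# 4-cycle 0-1-2-3-0 with all-ones node features
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
feats = {v: 1.0 for v in adj}
print(gnn_ak_layer(adj, feats, k=1))
```

A real implementation would replace `base_gnn` with a learned GNN (e.g., any PyG model) and a richer pooling scheme; the control flow (extract rooted subgraph, encode with a GNN, pool to a node embedding) is the part this sketch is meant to convey.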
Specifically, GNN-AK+ sets new state-of-the-art performance on ZINC, CIFAR10, and PATTERN; for example, on ZINC we see relative error reductions of 60.3%, 50.5%, and 39.4% when the base model is GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and (a variant of) PNA (Corso et al., 2020), respectively. To summarize, our contributions are as follows: • A General GNN-AK Framework. We propose GNN-AK (and the enhanced GNN-AK+), a general framework that uplifts any GNN by encoding local subgraph structure with a GNN. • Theoretical Findings. We show that GNN-AK's expressiveness is strictly better than 1- & 2-WL, and is no less powerful than 3-WL. We analyze sufficient conditions for successful discrimination. • Effective and Efficient Realization. We present effective implementations of GNN-AK and GNN-AK+ that fully exploit all node embeddings within a subgraph. We design efficient online subgraph sampling to mitigate memory and runtime overhead while maintaining performance. • Experimental Results. We show strong empirical results demonstrating both expressiveness improvements and practical performance gains, achieving new state-of-the-art performance on several graph-level benchmarks. Our implementation is easy to use and directly accepts any GNN from PyG (Fey & Lenssen, 2019) for plug-and-play use. See code at https://github.com/GNNAsKernel/GNNAsKernel. 2 RELATED WORK. Improving Expressiveness of GNNs: Several works beyond those mentioned in Sec. 1 tackle expressive GNNs. Murphy et al. (2019) achieve universality by summing permutation-sensitive functions across a combinatorial number of permutations, limiting feasibility. Dasoulas et al. (2020) add node indicators to make nodes distinguishable, but at the cost of permutation invariance, while Vignac et al. (2020) further address the invariance problem, but at the cost of quadratic time complexity. Corso et al.
(2020) generalize the MPNN's default sum aggregator, but are still limited by 1-WL. Beaini et al. (2021) generalize spatial and spectral aggregation with above-1-WL expressiveness, but use expensive eigendecomposition. Recently, Bodnar et al. (2021b) introduce MPNNs over simplicial complexes that share expressiveness similar to GNN-AK. Ying et al. (2021) study a transformer with above-1-WL expressiveness. Azizian & Lelarge (2021) survey work on GNN expressiveness. Leveraging Substructures in Learning: Exploiting subgraph information in GNNs is not new; in fact, k-WL considers all k-node subgraphs. Monti et al. (2018); Lee et al. (2019) exploit motif information within aggregation, and others (Bouritsas et al., 2020; Barceló et al., 2021) augment MPNN features with handcrafted subgraph-based features. MixHop (Abu-El-Haija et al., 2019) directly aggregates k-hop information using adjacency matrix powers, ignoring connections among neighbors. Towards a meta-learning goal, G-meta (Huang & Zitnik, 2020) applies GNNs on rooted subgraphs around each node to aid transferability. Tahmasebi & Jegelka (2020) theoretically justify subgraph convolution with a GNN by showing its ability to count substructures. Zhengdao et al. (2020) also represent a node by encoding its local subgraph, however using non-scalable relational pooling. k-hop GNN (Nikolentzos et al., 2020) uses the k-egonet in a specially designed way: it encodes a rooted subgraph by sequentially passing messages from the k-th hop in the subgraph to the (k−1)-th hop until reaching the root node, and uses the root node as the encoding of the subgraph. Ego-GNNs (Sandfelder et al., 2021) compute a context encoding with SGC (Wu et al., 2019) as the subgraph encoder, and have only been studied on node-level tasks. Both k-hop GNN and Ego-GNNs can be viewed as special cases of GNN-AK. You et al.
(2021) design ID-GNNs, which inject node identity during message passing with the help of the k-egonet, i.e., the receptive field of each node in a k-layer GNN (Hamilton et al., 2017) – this differs from GNN-AK, which encodes each node by encoding its rooted subgraph. Unlike GNN-AK, which uses rooted subgraphs, Fey et al. (2020); Thiede et al. (2021); Bodnar et al. (2021a) design GNNs that use certain subgraph patterns (like cycles and paths) in message passing; however, their preprocessing requires solving the subgraph isomorphism problem. To summarize, our work differs by (i) proposing a general subgraph-encoding framework for uplifting GNNs, and (ii) addressing the scalability issues involved with using subgraphs, which pose significant challenges for subgraph-based methods in practice. 3 GENERAL FRAMEWORK AND THEORY. We first introduce our setting and formalisms. Let $G = (V, E)$ be a graph with node features $x_i \in \mathbb{R}^d, \forall i \in V$. We consider graph-level problems where the goal is to classify/regress a target $y_G$ by learning a graph-level representation $h_G$. Let $N_k(v)$ be the set of nodes in the k-hop egonet rooted at node $v$; $N(v) = N_1(v) \setminus \{v\}$ denotes the immediate neighbors of node $v$. For $S \subseteq V$, let $G[S]$ be the induced subgraph: $G[S] = (S, \{(i, j) \in E \mid i \in S, j \in S\})$. Then $G[N_k(v)]$ denotes the k-hop egonet rooted at node $v$. We also define $\mathrm{Star}(v) = (N_1(v), \{(v, j) \in E \mid j \in N(v)\})$ to be the induced star-like subgraph around $v$. We use $\{\cdot\}$ to denote a multiset, i.e., a set that allows repetition. Before presenting GNN-AK, we highlight the insights behind its design that drive the expressiveness boost. Insight 1: Generalizing the star to a subgraph. In MPNNs, every node aggregates information from its immediate neighbors following a star pattern. Consequently, MPNNs fail to distinguish any non-isomorphic regular graphs, where all stars are the same since all nodes have the same degree.
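The notation just introduced ($N_k(v)$, $G[S]$, $\mathrm{Star}(v)$) can be exercised with a tiny self-contained example; the graph and the helper names below are ours, chosen purely for illustration.

```python
def neighbors(E, v):
    """Undirected neighbors of v, with edges stored as unordered tuples."""
    return {j for (i, j) in E if i == v} | {i for (i, j) in E if j == v}

def N_k(E, v, k):
    """Node set of the k-hop egonet rooted at v (v included)."""
    nodes = {v}
    for _ in range(k):
        nodes |= {u for n in list(nodes) for u in neighbors(E, n)}
    return nodes

def induced(E, S):
    """Edge set of G[S]: keep only edges with both endpoints in S."""
    return {(i, j) for (i, j) in E if i in S and j in S}

def star(E, v):
    """Star(v): v's closed neighborhood, but only v's incident edges."""
    return (N_k(E, v, 1), {(i, j) for (i, j) in E if v in (i, j)})

# triangle 0-1-2 plus a pendant edge 2-3
E = {(0, 1), (1, 2), (0, 2), (2, 3)}
S = N_k(E, 0, 1)                      # {0, 1, 2}
print(S, induced(E, S), star(E, 0))
```

Note that $G[N_1(0)]$ keeps the edge (1, 2) between the neighbors of node 0, while $\mathrm{Star}(0)$ discards it; this is exactly the extra information Insight 1 is about.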
Even simply generalizing the star to the induced 1-hop egonet takes connections among neighbors into account, enabling the model to distinguish regular graphs. Insight 2: Divide and conquer. When two graphs are non-isomorphic, there exists a subgraph in which this difference is captured (see Figure 2). Although a fixed-expressiveness GNN may not distinguish the two original graphs, it may distinguish the two smaller subgraphs, given that the expressiveness required for successful discrimination is proportional to graph size (Loukas, 2020b). As such, GNN-AK divides the harder problem of encoding the whole graph into smaller and easier problems of encoding its subgraphs, and "conquers" the encoding with the base GNN. | The paper deals with supervised learning with graphs, specifically graph-level tasks. The paper proposes an algorithm to overcome the expressive limits of the standard GNNs, which are upper-bounded by the 1-WL. The main idea of the paper is to, for each node v, extract the subgraph induced by nodes of at most distance k to node v, and then deploy a GNN on top of each of these subgraphs. Further, the authors study the gains of the expressive power of this architecture compared to the 1-, 2-, 3-WL, and k-WL. Moreover, the authors propose a subgraph sampling strategy to speed up the computation. The proposed method is evaluated on large benchmark datasets, mostly stemming from the molecular domain, reporting good performance boosts compared to standard GNN architectures. | SP:c49881605343384d061952b8a25982eb3dfc5cb8 |
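Insight 1 can be checked concretely with standard 1-WL color refinement. Two 3-regular graphs on six nodes — the triangular prism and $K_{3,3}$, our own choice of example pair — are indistinguishable to plain 1-WL, yet become distinguishable once each node is represented by its 1-hop egonet. A minimal sketch (all function names are illustrative):

```python
def wl_histogram(adj, rounds=3):
    """Plain 1-WL color refinement; returns the sorted multiset of final colors."""
    colors = {v: 0 for v in adj}  # uniform initial colors (no node features)
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: relabel[sigs[v]] for v in adj}
    return sorted(colors.values())

def egonet_adj(adj, v):
    """Adjacency of the induced 1-hop egonet G[N_1(v)]."""
    nodes = {v} | set(adj[v])
    return {u: [w for w in adj[u] if w in nodes] for u in nodes}

def egonet_wl_profile(adj):
    """Multiset of 1-WL histograms over all 1-hop egonets of the graph."""
    return sorted(tuple(wl_histogram(egonet_adj(adj, v))) for v in adj)

# triangular prism: two triangles {0,1,2}, {3,4,5} joined by a matching
prism = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
         3: [0, 4, 5], 4: [1, 3, 5], 5: [2, 3, 4]}
# complete bipartite K_{3,3}
k33 = {0: [3, 4, 5], 1: [3, 4, 5], 2: [3, 4, 5],
       3: [0, 1, 2], 4: [0, 1, 2], 5: [0, 1, 2]}

print(wl_histogram(prism) == wl_histogram(k33))            # plain 1-WL: identical
print(egonet_wl_profile(prism) == egonet_wl_profile(k33))  # egonet view: different
```

The prism's egonets contain triangles while those of $K_{3,3}$ are stars, so even this cheap egonet-level refinement separates the two graphs, mirroring the claim that generalizing the star to the induced egonet suffices for regular graphs.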
From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness | 1 INTRODUCTION. Graphs are permutation-invariant, combinatorial structures used to represent relational data, with wide applications ranging from drug discovery and social network analysis to image analysis and bioinformatics (Duvenaud et al., 2015; Fan et al., 2019; Shi et al., 2019; Wu et al., 2020). In recent years, Graph Neural Networks (GNNs) have rapidly surpassed traditional methods like heuristically defined features and graph kernels to become the dominant approach for graph ML tasks. Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017) are the most common type of GNN owing to their intuitiveness, effectiveness, and efficiency. They follow a recursive aggregation mechanism in which each node repeatedly aggregates information from its immediate neighbors. However, unlike simple multi-layer feedforward networks (MLPs), which are universal approximators of continuous functions (Hornik et al., 1989), MPNNs cannot approximate all permutation-invariant graph functions (Maron et al., 2019b). In fact, their expressiveness is upper bounded by the first-order Weisfeiler-Leman (1-WL) isomorphism test (Xu et al., 2018). Importantly, researchers have shown that such 1-WL-equivalent GNNs are not expressive, or powerful, enough to capture basic structural concepts, e.g., counting motifs such as cycles or triangles (Zhengdao et al., 2020; Arvind et al., 2020), which are known to be informative for bio- and chemo-informatics (Elton et al., 2019). The weakness of MPNNs has urged researchers to design more expressive GNNs that are able to discriminate graphs from an isomorphism-test perspective; Chen et al. (2019) prove the equivalence between such tests and universal permutation-invariant function approximation, which theoretically justifies this goal. As k-WL is strictly more expressive than 1-WL, many works (Morris et al.
, 2019; 2020b) try to incorporate k-WL into the design of more powerful GNNs, while others approach k-WL expressiveness indirectly from matrix-invariant operations (Maron et al., 2019a;b; Keriven & Peyré, 2019) and matrix-language perspectives (Balcilar et al., 2021). However, they require $O(k)$-order tensors to achieve k-WL expressiveness, and thus are not scalable or feasible for application to large, practical graphs. Besides, the bias-variance tradeoff between complexity and generalization (Neal et al., 2018) and the fact that almost all of the $2^{\binom{n}{2}}$ graphs on $n$ vertices can be distinguished by 1-WL (Babai et al., 1980) challenge the necessity of developing such extremely expressive models. In a complementary line of work, Loukas (2020a) sheds light on developing more powerful GNNs while maintaining linear scalability, finding that MPNNs can be universal approximators provided that nodes are sufficiently distinguishable. Relatedly, several works propose adding features to make nodes more distinguishable, such as identifiers (Loukas, 2020a), subgraph counts (Bouritsas et al., 2020), distance encodings (Li et al., 2020), and random features (Sato et al., 2021; Abboud et al., 2021). However, these methods either rely on handcrafted features, which forgo the premise of automatic learning, or create permutation-sensitive features that hurt generalization. Present Work. Our work stands between the two regimes of extremely expressive but unscalable k-order GNNs and the limited expressiveness yet high scalability of MPNNs. Specifically, we propose a general framework that serves as a "wrapper" to uplift any GNN. We observe that MPNNs' local neighbor aggregation follows a star pattern, where the representation of a node is characterized by applying an injective aggregator function as an encoder to the star subgraph (comprised of the central node and its edges to neighbors).
We propose a design that naturally generalizes from encoding the star to encoding a more flexibly defined subgraph, and we replace the standard injective aggregator with a GNN: in short, we characterize the new representation of a node by using a GNN to encode a locally induced subgraph that encompasses it, as shown in Fig. 1. This in effect uplifts the base GNN by applying it to each subgraph instead of the whole input graph. This generalization is close to Convolutional Neural Networks (CNNs) in computer vision: like a CNN, which convolves image patches with a kernel to compute new pixel embeddings, our wrapper convolves subgraphs with a GNN to generate new node embeddings. Hence, we name our approach GNN-AK (GNN As Kernel). We show theoretically that GNN-AK is strictly more powerful than 1- & 2-WL with any MPNN as the base model, and is no less powerful than 3-WL when PPGN (Maron et al., 2019a) is used. We also give sufficient conditions under which GNN-AK can successfully distinguish two non-isomorphic graphs. Given this increase in expressive power, we discuss careful implementation strategies for GNN-AK that allow us to leverage multiple modalities of information from subgraph encoding, resulting in an empirically more expressive version, GNN-AK+. GNN-AK and GNN-AK+ induce only a constant-factor overhead in memory. To amplify our method's practicality, we further develop a subgraph sampling strategy inspired by Dropout (Srivastava et al., 2014) to drastically reduce this overhead (by 1-3× in practice) without hurting performance. We conduct extensive experiments on 4 simulation datasets and 5 well-known real-world graph classification & regression benchmarks (Dwivedi et al., 2020; Hu et al., 2020) to show significant and consistent practical benefits of our approach across different MPNNs and datasets.
Specifically, GNN-AK+ sets new state-of-the-art performance on ZINC, CIFAR10, and PATTERN; for example, on ZINC we see relative error reductions of 60.3%, 50.5%, and 39.4% when the base model is GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and (a variant of) PNA (Corso et al., 2020), respectively. To summarize, our contributions are as follows: • A General GNN-AK Framework. We propose GNN-AK (and the enhanced GNN-AK+), a general framework that uplifts any GNN by encoding local subgraph structure with a GNN. • Theoretical Findings. We show that GNN-AK's expressiveness is strictly better than 1- & 2-WL, and is no less powerful than 3-WL. We analyze sufficient conditions for successful discrimination. • Effective and Efficient Realization. We present effective implementations of GNN-AK and GNN-AK+ that fully exploit all node embeddings within a subgraph. We design efficient online subgraph sampling to mitigate memory and runtime overhead while maintaining performance. • Experimental Results. We show strong empirical results demonstrating both expressiveness improvements and practical performance gains, achieving new state-of-the-art performance on several graph-level benchmarks. Our implementation is easy to use and directly accepts any GNN from PyG (Fey & Lenssen, 2019) for plug-and-play use. See code at https://github.com/GNNAsKernel/GNNAsKernel. 2 RELATED WORK. Improving Expressiveness of GNNs: Several works beyond those mentioned in Sec. 1 tackle expressive GNNs. Murphy et al. (2019) achieve universality by summing permutation-sensitive functions across a combinatorial number of permutations, limiting feasibility. Dasoulas et al. (2020) add node indicators to make nodes distinguishable, but at the cost of permutation invariance, while Vignac et al. (2020) further address the invariance problem, but at the cost of quadratic time complexity. Corso et al.
(2020) generalize the MPNN's default sum aggregator, but are still limited by 1-WL. Beaini et al. (2021) generalize spatial and spectral aggregation with above-1-WL expressiveness, but use expensive eigendecomposition. Recently, Bodnar et al. (2021b) introduce MPNNs over simplicial complexes that share expressiveness similar to GNN-AK. Ying et al. (2021) study a transformer with above-1-WL expressiveness. Azizian & Lelarge (2021) survey work on GNN expressiveness. Leveraging Substructures in Learning: Exploiting subgraph information in GNNs is not new; in fact, k-WL considers all k-node subgraphs. Monti et al. (2018); Lee et al. (2019) exploit motif information within aggregation, and others (Bouritsas et al., 2020; Barceló et al., 2021) augment MPNN features with handcrafted subgraph-based features. MixHop (Abu-El-Haija et al., 2019) directly aggregates k-hop information using adjacency matrix powers, ignoring connections among neighbors. Towards a meta-learning goal, G-meta (Huang & Zitnik, 2020) applies GNNs on rooted subgraphs around each node to aid transferability. Tahmasebi & Jegelka (2020) theoretically justify subgraph convolution with a GNN by showing its ability to count substructures. Zhengdao et al. (2020) also represent a node by encoding its local subgraph, however using non-scalable relational pooling. k-hop GNN (Nikolentzos et al., 2020) uses the k-egonet in a specially designed way: it encodes a rooted subgraph by sequentially passing messages from the k-th hop in the subgraph to the (k−1)-th hop until reaching the root node, and uses the root node as the encoding of the subgraph. Ego-GNNs (Sandfelder et al., 2021) compute a context encoding with SGC (Wu et al., 2019) as the subgraph encoder, and have only been studied on node-level tasks. Both k-hop GNN and Ego-GNNs can be viewed as special cases of GNN-AK. You et al.
(2021) design ID-GNNs, which inject node identity during message passing with the help of the k-egonet, i.e., the receptive field of each node in a k-layer GNN (Hamilton et al., 2017) – this differs from GNN-AK, which encodes each node by encoding its rooted subgraph. Unlike GNN-AK, which uses rooted subgraphs, Fey et al. (2020); Thiede et al. (2021); Bodnar et al. (2021a) design GNNs that use certain subgraph patterns (like cycles and paths) in message passing; however, their preprocessing requires solving the subgraph isomorphism problem. To summarize, our work differs by (i) proposing a general subgraph-encoding framework for uplifting GNNs, and (ii) addressing the scalability issues involved with using subgraphs, which pose significant challenges for subgraph-based methods in practice. 3 GENERAL FRAMEWORK AND THEORY. We first introduce our setting and formalisms. Let $G = (V, E)$ be a graph with node features $x_i \in \mathbb{R}^d, \forall i \in V$. We consider graph-level problems where the goal is to classify/regress a target $y_G$ by learning a graph-level representation $h_G$. Let $N_k(v)$ be the set of nodes in the k-hop egonet rooted at node $v$; $N(v) = N_1(v) \setminus \{v\}$ denotes the immediate neighbors of node $v$. For $S \subseteq V$, let $G[S]$ be the induced subgraph: $G[S] = (S, \{(i, j) \in E \mid i \in S, j \in S\})$. Then $G[N_k(v)]$ denotes the k-hop egonet rooted at node $v$. We also define $\mathrm{Star}(v) = (N_1(v), \{(v, j) \in E \mid j \in N(v)\})$ to be the induced star-like subgraph around $v$. We use $\{\cdot\}$ to denote a multiset, i.e., a set that allows repetition. Before presenting GNN-AK, we highlight the insights behind its design that drive the expressiveness boost. Insight 1: Generalizing the star to a subgraph. In MPNNs, every node aggregates information from its immediate neighbors following a star pattern. Consequently, MPNNs fail to distinguish any non-isomorphic regular graphs, where all stars are the same since all nodes have the same degree.
Even simply generalizing the star to the induced 1-hop egonet takes connections among neighbors into account, enabling the model to distinguish regular graphs. Insight 2: Divide and conquer. When two graphs are non-isomorphic, there exists a subgraph in which this difference is captured (see Figure 2). Although a fixed-expressiveness GNN may not distinguish the two original graphs, it may distinguish the two smaller subgraphs, given that the expressiveness required for successful discrimination is proportional to graph size (Loukas, 2020b). As such, GNN-AK divides the harder problem of encoding the whole graph into smaller and easier problems of encoding its subgraphs, and "conquers" the encoding with the base GNN. | The paper proposes GNN-AK, a framework that can use GNN as a kernel to encode local features. It generalizes the local neighbour aggregation in Message Passing Neural Networks (MPNN) from a star-like pattern to a more flexibly defined subgraph. The paper also provides theoretical support that shows the superiority of the proposed method to 1&2-WL in terms of expressiveness. The experimental results demonstrate that the proposed method outperforms other SOTA baselines on 7 different datasets. | SP:c49881605343384d061952b8a25982eb3dfc5cb8 |
Deep Inverse Reinforcement Learning via Adversarial One-Class Classification | 1 INTRODUCTION. Inverse reinforcement learning (IRL) (Russell, 1998) refers to the problem of estimating rewards for reinforcement learning (RL) agents to acquire policies that can reproduce expert behavior. An RL algorithm learns a policy that maximizes the cumulative discounted reward under a given reward function. An IRL algorithm does the opposite; it estimates the reward from the given policies or trajectories under the assumption that the expert is maximizing the reward. IRL has been applied in two main areas (Ramachandran & Amir, 2007). The first is apprenticeship learning, which enables the learning of complex policies for which it is difficult to design a reward function. Compared to behavioral cloning, IRL is robust to the covariate shift problem (Ross et al., 2011) and achieves superior performance even when the amount of data is small. The second is reward learning, where IRL is used to estimate rewards from the trajectory data of human and animal action sequences and to analyze the intention of the subject. In previous studies, IRL methods have been used to analyze human walking paths (Kitani et al., 2012) and the behavior of nematodes (Yamaguchi et al., 2018). In traditional IRL methods, the IRL loop has an inner loop that computes the optimal policy for the reward being estimated until convergence. This inner loop makes it difficult to apply IRL to tasks with a large state-action space because it is computation-intensive. As a solution, classification-based IRL methods transform the IRL problem into one of classifying the expert's trajectory against a trajectory to be compared. Notable methods include AIRL (Fu et al., 2017), LogReg-IRL (Uchibe, 2018), and T-REX (Brown et al., 2019).
These methods differ in how they are formulated, but they result in similar learning procedures. Online methods, such as AIRL, collect the trajectories to be compared from the environment. In contrast, offline methods, such as LogReg-IRL and T-REX, collect the trajectories to be compared in advance, which enables them to further speed up and stabilize learning by not requiring access to the environment during training. However, the learning performance of current offline methods depends heavily on the properties of the trajectories to be compared or on a ranking of the trajectories, which are difficult to collect. In this study, we exploit the fact that the learning process of LogReg-IRL via binary classification is equivalent to that of a discriminator in adversarial learning, as in generative adversarial networks (GANs) (Goodfellow et al., 2014). Specifically, we develop an innovative deep IRL method, called the state-only learned one-class classifier for IRL (SOLO-IRL), in which binary classification is replaced with adversarial one-class classification. Figure 1 compares the traditional and proposed IRL methods. The proposed method does not require an inner loop and is an offline method; thus, it can be trained extremely fast. In addition, it does not require trajectories to be compared. With these advantages, the proposed method greatly advances the application of IRL methods to real-world problems. 2 PRELIMINARIES. 2.1 MARKOV DECISION PROCESS (MDP). RL is a learning problem based on the Markov decision process (MDP). The MDP consists of a tuple $M = \langle S, A, P, R, \gamma \rangle$, where $S$ is the state space, $A$ is the action space, $P$ is the state-transition probability, $R$ is the reward function, and $\gamma$ is the discount factor indicating the degree of importance of future rewards.
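The optimal value function of such an MDP tuple is typically computed by value iteration, i.e., by repeatedly applying the Bellman update until convergence. A minimal, self-contained sketch follows; the 2-state, 2-action transition probabilities and rewards are made-up numbers for illustration only.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-10):
    """Solve V(s) = max_a { R[s][a] + gamma * sum_s' P[s][a][s'] * V(s') }.
    P[s][a] is a distribution over next states; R[s][a] is the immediate reward."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(n))
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new

# toy 2-state, 2-action MDP (numbers are arbitrary)
P = [[[0.8, 0.2], [0.1, 0.9]],   # state 0: action 0 mostly stays, action 1 mostly moves
     [[0.5, 0.5], [0.0, 1.0]]]   # state 1
R = [[1.0, 0.0],                 # staying in state 0 pays a little
     [0.0, 2.0]]                 # staying in state 1 pays more
V = value_iteration(P, R)
print(V)
```

Because the Bellman update is a $\gamma$-contraction, the iteration converges to the unique fixed point; it is exactly this inner optimization that traditional IRL must re-run for every candidate reward, which the classification-based methods above avoid.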
In the MDP, the state-value function for state $s_t$ at time $t$ is represented by the Bellman equation, as follows:
$$V(s_t) = \max_a \Big\{ R(s_t, a) + \sum_{s'} p(s' \mid s_t, a)\, \gamma V(s') \Big\} \quad (1)$$
where $R(s_t, a_t)$ is the reward for taking action $a_t$ in state $s_t$ and $p(s_{t+1} \mid s_t, a_t)$ is the probability of transitioning to the next state $s_{t+1}$ when taking action $a_t$ in state $s_t$. 2.2 LINEARLY SOLVABLE MDP (LMDP). The linearly solvable MDP (LMDP) is an extension of the MDP in which the agent directly determines the transition probability $u(s_{t+1} \mid s_t)$ from the current state $s_t$ to the next state $s_{t+1}$ as the control, instead of the action $a_t$ in the MDP. The Bellman equation is then linearized under two assumptions. First, the state-transition probability $p(s_{t+1} \mid s_t, u)$ is assumed to be expressed as the product of the uncontrolled transition probability $\bar{p}(s_{t+1} \mid s_t)$ and $\exp\{u\}$ as follows:
$$p(s_{t+1} \mid s_t, u(s_{t+1} \mid s_t)) = \bar{p}(s_{t+1} \mid s_t)\exp\{u(s_{t+1} \mid s_t)\} \quad (2)$$
The uncontrolled transition probability $\bar{p}(s_{t+1} \mid s_t)$ indicates the transitional relationship between states in the environment. When a transition is impossible, i.e., $\bar{p} = 0$, then $p = 0$. The second assumption is that the reward $R(s_t, u)$ is composed of a state-dependent reward $r(s_t)$ and a penalty term $D_{\mathrm{KL}}(p \,\|\, \bar{p})$ penalizing the divergence of the state-transition probability $p$ from the uncontrolled transition probability $\bar{p}$. This assumption can be formulated as follows:
$$R(s_t, u(s_{t+1} \mid s_t)) = r(s_t) - D_{\mathrm{KL}}\big(p(s_{t+1} \mid s_t, u(s_{t+1} \mid s_t)) \,\|\, \bar{p}(s_{t+1} \mid s_t)\big) \quad (3)$$
where $D_{\mathrm{KL}}(P_x \,\|\, P_y)$ denotes the Kullback–Leibler (KL) divergence between $P_x$ and $P_y$. By rearranging Eq. (3) according to the definition of the KL divergence, the following equation is obtained:
$$R(s_t, u(s_{t+1} \mid s_t)) = r(s_t) - \sum_{s'} p(s' \mid s_t, u(s' \mid s_t))\, u(s' \mid s_t) \quad (4)$$
Substituting Eq. (4) into the Bellman equation in Eq.
(1) gives the following:
$$V(s_t) = r(s_t) + \max_u \Big\{ \sum_{s'} p(s' \mid s_t, u(s' \mid s_t)) \big[ -u(s' \mid s_t) + \gamma V(s') \big] \Big\} \quad (5)$$
Eq. (2) is then substituted into Eq. (5) and the method of Lagrange multipliers is applied with $\sum_{s'} p(s' \mid s_t, u) = 1$ as a constraint. Finally, the max operator is removed, resulting in the linear Bellman equation:
$$\exp\{V(s_t)\} = \exp\{r(s_t)\} \sum_{s'} \bar{p}(s' \mid s_t) \exp\{\gamma V(s')\} \quad (6)$$
The optimal control $u^*$ in the LMDP is given by
$$u^*(s_{t+1} \mid s_t) = \frac{\bar{p}(s_{t+1} \mid s_t) \exp\{\gamma V(s_{t+1})\}}{\sum_{s'} \bar{p}(s' \mid s_t) \exp\{\gamma V(s')\}} \quad (7)$$
2.3 LOGISTIC REGRESSION-BASED IRL (LOGREG-IRL). LogReg-IRL (Uchibe, 2018) is a deep IRL method in the LMDP setting. The following is an overview of the IRL framework in LogReg-IRL. By rearranging the linear Bellman equation in Eq. (6), the following is obtained:
$$\exp\{V(s_t) - r(s_t)\} = \sum_{s'} \bar{p}(s' \mid s_t) \exp\{\gamma V(s')\} \quad (8)$$
Then, substituting Eq. (8) into Eq. (7) and rearranging the result gives
$$u^*(s_{t+1} \mid s_t) = \frac{\bar{p}(s_{t+1} \mid s_t) \exp\{\gamma V(s_{t+1})\}}{\exp\{V(s_t) - r(s_t)\}}, \qquad \frac{u^*(s_{t+1} \mid s_t)}{\bar{p}(s_{t+1} \mid s_t)} = \exp\{r(s_t) + \gamma V(s_{t+1}) - V(s_t)\}, \qquad \log \frac{u^*(s_{t+1} \mid s_t)}{\bar{p}(s_{t+1} \mid s_t)} = r(s_t) + \gamma V(s_{t+1}) - V(s_t) \quad (9)$$
Applying Bayes' theorem to Eq. (9), we obtain
$$\log \frac{u^*(s_t, s_{t+1})}{\bar{p}(s_t, s_{t+1})} = \log \frac{u^*(s_t)}{\bar{p}(s_t)} + r(s_t) + \gamma V(s_{t+1}) - V(s_t) \quad (10)$$
The left-hand side and the first term on the right-hand side of Eq. (10) are density ratios. The density ratio $p_a / p_b$ can be estimated by assigning the label $\eta = 1$ to samples from the probability distribution $p_a$, assigning $\eta = -1$ to samples from $p_b$, and training a classifier using logistic regression (Qin, 1998; Cheng et al., 2004; Bickel et al., 2007).
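The density-ratio trick just described can be checked numerically. The sketch below is illustrative (plain NumPy gradient descent on a linear logit, rather than the neural networks used in the actual method): it fits a logistic-regression classifier to equal-sized samples from two Gaussians and recovers the log density ratio as the learned logit. For $p_a = \mathcal{N}(1,1)$ and $p_b = \mathcal{N}(0,1)$, the true log ratio is $x - 0.5$, so the fitted slope and intercept should approach 1 and $-0.5$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
xa = rng.normal(1.0, 1.0, n)   # samples from p_a = N(1, 1), label eta = 1
xb = rng.normal(0.0, 1.0, n)   # samples from p_b = N(0, 1), label eta = -1
x = np.concatenate([xa, xb])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 for p_a, 0 for p_b

# Full-batch gradient descent on the cross-entropy loss, logit f(x) = w*x + b.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # classifier output sigma(f(x))
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

# With equal sample counts, the learned logit f(x) = w*x + b estimates
# log p_a(x)/p_b(x); here that ratio is analytically x - 0.5.
print(w, b)
```

This is exactly the mechanism LogReg-IRL relies on: the classifier's logit, not its class prediction, carries the quantity of interest.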
First, by Bayes' theorem, the following is obtained:
$$\frac{p_a(x)}{p_b(x)} = \frac{p(\eta = 1 \mid x)}{p(\eta = -1 \mid x)} \cdot \frac{p(\eta = -1)}{p(\eta = 1)}, \qquad \log \frac{p_a(x)}{p_b(x)} = \log \frac{p(\eta = 1 \mid x)}{p(\eta = -1 \mid x)} + \log \frac{p(\eta = -1)}{p(\eta = 1)} \quad (11)$$
Next, the first discriminator $D_1(x)$ is defined by the sigmoid function $\sigma(x) = 1/\{1 + \exp(-x)\}$ and a neural network $f(x)$:
$$D_1(x) = p(\eta = 1 \mid x) = \sigma(f(x)) \quad (12)$$
The second term on the right-hand side of Eq. (11) can be approximated by computing the sample-count ratio $N_{p_a}/N_{p_b}$ and taking its logarithm. For the first term, the following can be obtained from the definition of the discriminator in Eq. (12):
$$\log \frac{p(\eta = 1 \mid x)}{p(\eta = -1 \mid x)} = \log \frac{D_1(x)}{1 - D_1(x)} = \log \frac{1 + \exp\{f(x)\}}{1 + \exp\{-f(x)\}} = \log \exp\{f(x)\} = f(x) \quad (13)$$
From Eq. (13), when $N_{p_a} = N_{p_b}$, the following holds:
$$\log \frac{p_a(x)}{p_b(x)} = f(x) \quad (14)$$
Therefore, the density ratio in the first term of Eq. (10) can be estimated by sampling states $s^*_t \sim \tau^*$ and $\bar{s}_t \sim \bar{\tau}$ from the expert trajectory $\tau^*$ (generated under the optimal control $u^*$) and the baseline trajectory $\bar{\tau}$ (generated under the uncontrolled transition probability $\bar{p}$), followed by training with the following cross-entropy loss:
$$\mathcal{L}_1(D_1) = -\mathbb{E}_{\bar{s}_t \sim \bar{\tau}}\big[\log(1 - D_1(\bar{s}_t))\big] - \mathbb{E}_{s^*_t \sim \tau^*}\big[\log D_1(s^*_t)\big] \quad (15)$$
The density ratio on the left-hand side of Eq.
(10) is defined as follows using the trained $f(x)$, a reward-estimating neural network $\tilde{r}(x)$, and a state-value-estimating neural network $\tilde{V}(x)$:
$$\log \frac{u^*(s_t, s_{t+1})}{\bar{p}(s_t, s_{t+1})} = f(s_t) + \tilde{r}(s_t) + \gamma \tilde{V}(s_{t+1}) - \tilde{V}(s_t) \quad (16)$$
The second discriminator $D_2$ for the state-transition pair is defined as
$$D_2(x, y) = \sigma\big(f(x) + \tilde{r}(x) + \gamma \tilde{V}(y) - \tilde{V}(x)\big) \quad (17)$$
As with $D_1$, the discriminator $D_2$ is trained with the cross-entropy loss $\mathcal{L}_2$, given as
$$\mathcal{L}_2(D_2) = -\mathbb{E}_{(\bar{s}_t, \bar{s}_{t+1}) \sim \bar{\tau}}\big[\log(1 - D_2(\bar{s}_t, \bar{s}_{t+1}))\big] - \mathbb{E}_{(s^*_t, s^*_{t+1}) \sim \tau^*}\big[\log D_2(s^*_t, s^*_{t+1})\big] \quad (18)$$
In the original LogReg-IRL, an L2 regularization term is added to the loss function. Following the process described above, LogReg-IRL estimates the reward and state value by classifying the expert and baseline trajectories. Unlike traditional IRL methods, LogReg-IRL does not require RL in the reward-estimation process and, thus, can be trained very quickly. | The paper proposes a state-only offline IRL algorithm (learning a reward function), SOLO-IRL, by reducing IRL to adversarial one-class classification. Compared to most existing IRL algorithms, the proposed algorithm is more efficient and requires fewer assumptions. It does not require solving RL problems in the inner loop and does not require ranked expert trajectories or assumptions on trajectories generated by uncontrolled transition probabilities. The authors show the algorithm learns reasonable rewards on two simulated control tasks and significantly outperforms the LogReg-IRL algorithm that it extends. | SP:4c333a407f2d109581cbacd99da6df40ccbec091 |
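To make the structure of the second discriminator and its loss concrete, the sketch below evaluates $D_2$ and $\mathcal{L}_2$ with simple stand-in functions for $f$, $\tilde{r}$, and $\tilde{V}$ (fixed linear maps here, whereas the actual method uses neural networks); the expert and baseline state-transition samples are synthetic and invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.9

# Stand-ins for the trained networks: f, r~, V~ as fixed linear functions.
f = lambda s: 0.5 * s
r_tilde = lambda s: 1.0 - 0.1 * s
V_tilde = lambda s: 2.0 * s

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def D2(s, s_next):
    # D2(x, y) = sigma(f(x) + r~(x) + gamma * V~(y) - V~(x))
    return sigmoid(f(s) + r_tilde(s) + gamma * V_tilde(s_next) - V_tilde(s))

# Synthetic expert (tau*) and baseline (tau-bar) state-transition pairs.
s_exp, s_exp_next = rng.normal(1.0, 0.1, 100), rng.normal(1.1, 0.1, 100)
s_bas, s_bas_next = rng.normal(0.0, 0.1, 100), rng.normal(0.0, 0.1, 100)

# Cross-entropy loss over both sample sets (sample mean in place of expectation).
L2 = (-np.mean(np.log(1 - D2(s_bas, s_bas_next)))
      - np.mean(np.log(D2(s_exp, s_exp_next))))
print(L2)
```

Training would then adjust the parameters of $f$, $\tilde{r}$, and $\tilde{V}$ to minimize this loss; the estimated reward is read off from $\tilde{r}$ afterwards.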
The paper proposes an adversarial inverse reinforcement learning algorithm that learns purely from expert demonstrations, and does not require any online interaction with the environment or a dataset of unlabeled interactions with the environment. The key idea is to synthesize negative examples (i.e., examples of non-expert behavior) using a denoising autoencoder trained on the positive examples (i.e., expert demonstrations). Experiments on the BipedalWalker simulated locomotion task show that the proposed method learns a reward function such that an RL agent trained to maximize the learned rewards achieves higher true rewards than a prior method. | SP:4c333a407f2d109581cbacd99da6df40ccbec091 |
This work introduces a new IRL framework, SOLO-IRL, that learns a reward function using only expert trajectories. This has the benefit of being trained in an offline manner, which speeds up the training process. SOLO-IRL builds on the work of Uchibe (2018) and exploits the fact that a discriminator can replace the binary classification used in LogReg-IRL. They improve on LogReg-IRL by overcoming the difficulty of selecting appropriate baseline trajectories by proposing to use adversarial one-class classification (Sabokrou et al., 2018). The authors empirically demonstrate their results on the CartPole and BipedalWalker tasks, and show superior performance over LogReg-IRL. | SP:4c333a407f2d109581cbacd99da6df40ccbec091 |
Interpretable Unsupervised Diversity Denoising and Artefact Removal | 1 INTRODUCTION. Deep learning (DL) based methods are currently the state of the art (SOTA) for image denoising and artefact removal, with supervised methods typically showing the best performance (Zhang et al., 2017; Weigert et al., 2017; Chen et al., 2018; Delbracio et al., 2020). However, in order to be trained, supervised methods need either paired high- and low-quality images (Zhang et al., 2017; Weigert et al., 2017; Delbracio et al., 2020) or pairs of low-quality images (Lehtinen et al., 2018; Buchholz et al., 2019). Both requirements are often not, or at least not efficiently, satisfiable, hindering the application of these methods in many practical settings, mainly in microscopy and biomedical imaging. Hence, unsupervised methods that can be trained on noisy images alone (Krull et al., 2019; Batson & Royer, 2019; Xie et al., 2020; Quan et al., 2020) present an attractive alternative. Methods additionally using models of imaging noise have also been proposed (Krull et al., 2020; Laine et al., 2019; Prakash et al., 2019b) and can further boost the denoising performance of unsupervised models. Still, unsupervised methods suffer from three drawbacks: (i) they fail to account for the inherent uncertainty present in a corrupted image and produce only a single restored solution (i.e., a point estimate)¹, (ii) they typically show weaker overall performance than their supervised counterparts, and (iii) they are, by design, limited to pixel-wise noise removal and cannot handle structured noise or other image artefacts (spatially correlated noise). Recently, the first of these drawbacks was addressed by DIVNOISING (DN) (Prakash et al.
, 2021), which proposed a convolutional variational autoencoder (VAE) architecture for unsupervised denoising that generates diverse denoised solutions, giving users access to samples from a distribution of sensible denoising results. However, DN exhibits poor performance on harder (visually more complex and varied) datasets, e.g., diverse sets of natural images. Additionally, it does not improve on the performance achieved with other unsupervised denoisers on existing microscopy benchmark datasets (Prakash et al., 2021), where supervised methods, whenever applicable, lead to the best results. (¹This is also true for supervised methods.) Last but not least, DN does not address the problem of artefact removal (structured-noise removal). We hypothesize that the performance of methods like DN is limited by the VAE architecture used, which can capture neither longer-range dependencies nor the full structural complexity of many practically relevant datasets. Although more expressive hierarchical VAE (HVAE) architectures have been known for some time in the context of image generation (Sønderby et al., 2016; Maaløe et al., 2019; Vahdat & Kautz, 2020; Child, 2021), so far they have never been applied to image restoration or denoising. Besides, well-performing HVAE architectures are computationally very expensive, with training times easily spanning many days or even weeks on multiple modern GPUs (Vahdat & Kautz, 2020; Child, 2021). Hence, it remains unexplored whether more expressive HVAE architectures can indeed improve unsupervised image restoration and whether they can do so efficiently enough to justify their application in practical use cases. Contributions. In this paper, we first introduce a new architecture for unsupervised diversity denoising called HIERARCHICAL DIVNOISING (HDN), inspired by ladder VAEs (Sønderby et al., 2016), but introducing a number of important task-specific modifications.
We show that HDN is considerably more expressive than DN (see Fig. 2), leading to much improved denoising results (see Fig. 1), while still not being excessively computationally expensive (on a common 6 GB Tesla P100 GPU, training still takes only about 1 day). Most importantly, HDN leads to SOTA results on natural images and biomedical image data, outperforming 8 popular unsupervised denoising baselines and considerably closing the gap to supervised results (see Table 1). Additionally, we investigate the application of diversity denoising to the removal of spatially correlated structured noise in microscopy images, an application of immense practical impact. More specifically, we show that HDN, owing to its hierarchical architecture, enforces an interpretable decomposition of learned representations, and we devise an effective approach to remove structured noise in microscopy data by taking full advantage of this interpretable hierarchical representation. We showcase this on multiple real-world datasets from diverse microscopy modalities, demonstrating that our method is an important step towards interpretable image restoration and is immediately applicable to many real-world datasets as they are acquired in the life sciences on a daily basis. Related Work. Apart from the DL-based works already discussed, classical methods such as Non-Local Means (Buades et al., 2005) and BM3D (Dabov et al., 2007) have also found widespread use in denoising applications. A detailed review of classical denoising methods can be found in Milanfar (2012). An interesting unsupervised DL-based image restoration method is Deep Image Prior (DIP) (Ulyanov et al., 2018), which showed that convolutional neural networks (CNNs) can be used to restore corrupted images without supervision when training is stopped at an a priori unknown time before convergence.
Methods such as DIP (or other methods that require training for each input image separately, e.g. SELF2SELF (Quan et al., 2020)) are, however, computationally demanding when applied to entire datasets. Diversity pixel denoising based on VAEs was introduced by DN (Prakash et al., 2021), and a similar approach using GANs by Ohayon et al. (2021). For structured noise removal, Broaddus et al. (2020) extended Krull et al. (2019) to specifically remove structured line artefacts that can arise in microscopy data. Their method needs exact a priori knowledge about the size of the artefacts, which cannot span more than a few connected pixels. Dorta et al. (2018) model structured noise as the uncertainty in the VAE decoder, represented by a structured Gaussian distribution with a non-diagonal covariance matrix which is learned separately by another network. Jancsary et al. (2012) learn a structured noise model in a regression tree field framework. Unlike these approaches, we take an interpretability-first perspective for artefact removal and remove artefacts even without having/learning any structured noise model. 2 THE IMAGE RESTORATION TASK . The task of image restoration involves the estimation of a clean and unobservable signal $s = (s_1, s_2, \ldots, s_N)$ from a corrupted reconstruction $x = (x_1, x_2, \ldots, x_N)$, where $s_i$ and $x_i$ refer to the respective pixel intensities in the image domain. In general, the reconstructed image comes from solving an inverse imaging problem given by the forward model $$y = A(s) + e, \quad (1)$$ where $A$ is the forward operator (tomography, blurring, sub-sampling, etc.), $e$ is the noise in the measurements, typically assumed i.i.d., and $y$ are the noisy measurements. An image reconstruction algorithm is needed to recover an estimate $x$ of $s$ from the noisy measurements $y$.
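The forward model of Eq. (1) can be sketched numerically for a linear operator. This is a minimal illustration, not part of the paper: the sizes, the random matrix standing in for $A$, and the noise level are all hypothetical demo choices.

```python
import numpy as np

# Illustrative sketch of the forward model y = A(s) + e of Eq. (1).
rng = np.random.default_rng(0)

N, M = 16, 8                       # pixels in s, number of measurements in y (demo values)
s = rng.standard_normal(N)         # clean, unobservable signal s
A = rng.standard_normal((M, N))    # generic linear forward operator (blur, sub-sampling, ...)
e = 0.1 * rng.standard_normal(M)   # i.i.d. measurement noise
y = A @ s + e                      # noisy measurements

print(y.shape)                     # → (8,)
```

A reconstruction algorithm then has to invert this map, recovering an estimate of $s$ from $y$ alone.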
Typically, the image reconstruction is obtained through an optimization formulation, where we seek a solution $x$ that fits the observed values and is compatible with some image prior $R$, $$x = \arg\min_{s'} \|A(s') - y\|^2 + \lambda R(s'), \quad (2)$$ where $s'$ is the auxiliary variable for the signal being optimized for and $\lambda \geq 0$ is related to the level of confidence in the prior $R$. There exists an extensive amount of work defining image priors (Rudin et al., 1992; Golub et al., 1999; Bruckstein et al., 2009; Zoran & Weiss, 2011; Romano et al., 2017; Ulyanov et al., 2018). Without loss of generality, we can decompose the reconstructed image as $x = s + n$, where $n$ is the residual (noise) between the ideal image and the reconstructed one. Generally, the noise $n$ on the reconstruction $x$ is composed of pixel-noise (such as Poisson or Gaussian noise) and multi-pixel artefacts or structured noise that affect groups of pixels in correlated ways. Such artefacts arise through dependencies that are introduced by the adopted reconstruction technique and the domain-specific inverse problem (e.g. tomography, microscopy, or the ISP in an optical camera). For example, consider the case where the reconstruction is done using Tikhonov regularization $R(s) = \|s\|^2$, in the particular case of a linear operator. In this case, the solution to Eq. 2 is $x = (A^T A + \lambda I)^{-1} A^T y$. Thus, $$x = (A^T A + \lambda I)^{-1} A^T A s + (A^T A + \lambda I)^{-1} A^T e = s + n, \quad (3)$$ where $$n = ((A^T A + \lambda I)^{-1} A^T A - I) s + (A^T A + \lambda I)^{-1} A^T e. \quad (4)$$ The reconstructed image is affected by colored noise (the term in Eq. 4 depending on $e$), and also by a structured perturbation introduced by the reconstruction method (the term in Eq. 4 depending on $s$). This structured perturbation appears even in the case when the measurements are noiseless.
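The Tikhonov decomposition above can be verified numerically: computing the closed-form solution of Eq. (2) and the residual of Eq. (4) reproduces the split $x = s + n$ of Eq. (3). This is a sketch with arbitrary demo shapes and $\lambda$, not the paper's code.

```python
import numpy as np

# Numerical check of Eqs. (2)-(4): for R(s) = ||s||^2 and a linear operator A,
# the Tikhonov solution x = (A^T A + lam*I)^{-1} A^T y decomposes as x = s + n.
rng = np.random.default_rng(1)
N, M, lam = 12, 20, 0.5                        # demo sizes and regularization weight

s = rng.standard_normal(N)
A = rng.standard_normal((M, N))
e = 0.05 * rng.standard_normal(M)
y = A @ s + e                                  # forward model, Eq. (1)

G = np.linalg.inv(A.T @ A + lam * np.eye(N))   # (A^T A + lam*I)^{-1}
x = G @ A.T @ y                                # closed-form solution of Eq. (2)

# Residual of Eq. (4): structured, s-dependent term plus colored-noise term.
n = (G @ A.T @ A - np.eye(N)) @ s + G @ A.T @ e

assert np.allclose(x, s + n)                   # the decomposition of Eq. (3) holds
```

Note that the $s$-dependent term of $n$ is nonzero even with $e = 0$, which is exactly the structured perturbation discussed above.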
Accurately modeling the noise/artefacts on the reconstructed image is therefore challenging even in the simplest case where the reconstruction is linear, showing that structured noise removal is non-trivial. In Section 5, we deal with image denoising, where the noise contribution $n_i$ to each pixel $x_i$ is assumed to be conditionally independent given the signal $s_i$, i.e. $p(n|s) = \prod_i p(n_i|s_i)$ (Krull et al., 2019). In many practical applications, including the ones presented in Section 6, this assumption does not hold true, and the noise $n$ is referred to as structured noise/artefacts. 3 NON-HIERARCHICAL VAE BASED DIVERSITY DENOISERS . Recently, non-hierarchical VAE based architectures (Kingma & Welling, 2014; Rezende et al., 2014; Prakash et al., 2021) were explored in the context of image denoising, offering the key advantage of providing samples $s_k$ drawn from a posterior distribution of denoised images. More formally, the denoising operation of these models $f_\psi$ can be expressed by $f_\psi(x) = s_k \sim p(s|x)$, where $\psi$ are the model parameters. In this section, we provide a brief overview of these models. Vanilla VAEs . VAEs are generative encoder-decoder models, capable of learning a latent representation of the data and capturing a distribution over inputs $x$ (Kingma & Welling, 2019; Rezende et al., 2014). The encoder maps input $x$ to a conditional distribution $q_\phi(z|x)$ in latent space. The decoder, $g_\theta(z)$, takes a sample from $q_\phi(z|x)$ and maps it to a distribution $p_\theta(x|z)$ in image space. Encoder and decoder are neural networks, jointly trained to minimize the loss $$\mathcal{L}_{\phi,\theta}(x) = \mathbb{E}_{q_\phi(z|x)}[-\log p_\theta(x|z)] + \mathrm{KL}(q_\phi(z|x) \,\|\, p(z)) = \mathcal{L}^R_{\phi,\theta}(x) + \mathcal{L}^{KL}_\phi(x), \quad (5)$$ with the second term being the KL-divergence between the encoder distribution $q_\phi(z|x)$ and the prior distribution $p(z)$ (usually a unit Normal distribution). The network parameters of the encoder and decoder are given by $\phi$ and $\theta$, respectively.
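The VAE loss of Eq. (5) can be sketched for the all-Normal case, where the KL term has a closed form. This is a minimal illustration: the "encoder/decoder outputs" below are hypothetical stand-ins for real network outputs, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def kl_to_unit_normal(mu, log_var):
    """Closed-form KL( N(mu, exp(log_var)) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def gaussian_nll(x, mean, log_var):
    """-log p_theta(x|z) for a Normal decoder that factorizes over pixels."""
    return 0.5 * np.sum(log_var + np.log(2.0 * np.pi) + (x - mean) ** 2 / np.exp(log_var))

x = rng.standard_normal(4)                           # a toy 4-pixel image
mu, log_var = rng.standard_normal(2), np.zeros(2)    # q_phi(z|x), 2-dim latent
x_mean, x_log_var = x + 0.1, np.zeros(4)             # decoder prediction for p_theta(x|z)

# Eq. (5): reconstruction term L^R plus KL term L^KL.
loss = gaussian_nll(x, x_mean, x_log_var) + kl_to_unit_normal(mu, log_var)
print(float(loss))
```

In a real VAE, `mu`, `log_var`, and the decoder outputs come from networks and the expectation over $q_\phi(z|x)$ is estimated by sampling $z$ with the reparameterization trick.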
The decoder usually factorizes over pixels as $$p_\theta(x|z) = \prod_{i=1}^{N} p_\theta(x_i|z), \quad (6)$$ with $p_\theta(x_i|z)$ being a Normal distribution predicted by the decoder, $x_i$ representing the intensity of pixel $i$, and $N$ being the total number of pixels in image $x$. The encoder distribution is modeled in a similar way, factorizing over the dimensions of the latent space $z$. The performance of vanilla VAEs on pixel-noise removal tasks is analyzed and compared to DN in Prakash et al. (2021). DIVNOISING ( DN ) . DN (Prakash et al., 2021) is a VAE based method for unsupervised pixel-noise removal that incorporates an explicit pixel-wise noise model $p_{NM}(x|s)$ in the decoder. More formally, the generic Normal distribution over pixel intensities of Eq. 6 is replaced with a known noise model $p_{NM}(x|s)$ which factorizes as a product over pixels, i.e. $p_{NM}(x|s) = \prod_{i}^{N} p_{NM}(x_i|s_i)$, and the decoder learns a mapping from $z$ directly to the space of restored images, i.e. $g_\theta(z) = s$. Therefore, $$p_\theta(x|z) = p_{NM}(x|g_\theta(z)). \quad (7)$$ The loss of DN hence becomes $$\mathcal{L}_{\phi,\theta}(x) = \mathbb{E}_{q_\phi(z|x)}\left[ \sum_{i=1}^{N} -\log p(x_i|g_\theta(z)) \right] + \mathrm{KL}(q_\phi(z|x) \,\|\, p(z)) = \mathcal{L}^R_{\phi,\theta}(x) + \mathcal{L}^{KL}_\phi(x). \quad (8)$$ Intuitively, the reconstruction loss in Eq. 8 measures how likely the generated image is given the noise model, while the KL loss incentivizes the encoded distribution to be unit Normal. DN is the current SOTA for many unsupervised denoising benchmarks (particularly working well for microscopy images), but performs poorly for complex domains such as natural image denoising (Prakash et al., 2021). | The paper proposes an image restoration method that can not only reduce pixel-wise noise but also remove artifacts in the resulting images. Introducing the idea of a hierarchical representation of latent variables analogous to VAEs, the proposed method improves the denoising performance of DivNoising (DN).
In addition, the authors propose a method for artifact removal based on the analysis of the image components represented by the latent variables at each layer. The method requires no pairs of noisy images and corresponding noise-free signals for training, but it does require the probability distribution of pixel values conditioned on the corresponding signal strength. The experimental results demonstrate that the proposed method outperforms DN. | SP:47b13a929cf63405d682352339ecd8f3748c531f |
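The DivNoising loss of Eq. (8) differs from the vanilla VAE loss in that the decoder predicts a *signal* estimate $g_\theta(z)$ and the reconstruction term scores the noisy input $x$ under a known pixel-wise noise model $p_{NM}(x_i|s_i)$. The sketch below is illustrative only: a Gaussian noise model with an assumed fixed standard deviation `SIGMA` stands in for a measured noise model, and `s_pred` stands in for a real decoder output.

```python
import numpy as np

rng = np.random.default_rng(3)
SIGMA = 0.2  # assumed per-pixel noise std for the demo noise model p_NM = N(s_i, SIGMA^2)

def dn_reconstruction_loss(x, s_pred):
    """- sum_i log p_NM(x_i | g_theta(z)_i) of Eq. (8), Gaussian noise model."""
    return np.sum(0.5 * np.log(2.0 * np.pi * SIGMA**2)
                  + (x - s_pred) ** 2 / (2.0 * SIGMA**2))

def kl_to_unit_normal(mu, log_var):
    """Closed-form KL( N(mu, exp(log_var)) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

x = rng.standard_normal(4)           # noisy input image (4 pixels)
s_pred = x + 0.05                    # stand-in for the decoder's signal estimate g_theta(z)
mu, log_var = rng.standard_normal(2), np.zeros(2)

loss = dn_reconstruction_loss(x, s_pred) + kl_to_unit_normal(mu, log_var)   # Eq. (8)
print(float(loss))
```

Because the likelihood is a fixed, known noise model rather than a learned per-pixel Normal, the decoder output can be read directly as the denoised image $s$.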
Interpretable Unsupervised Diversity Denoising and Artefact Removal | This work extends the work of DivNoising (Prakash et al., 2021) to build an interpretable approach for unsupervised image restoration. The proposed method, namely Hierarchical DivNoising (HDN), is based on the well-studied concept of hierarchical Variational Autoencoder. This work is the first to use hierarchical Variational Autoencoder for the task of unsupervised image denoising.
In doing so, HDN achieves state-of-the-art unsupervised image denoising results on twelve benchmark image datasets. The method is also shown to remove structured artifacts on three real microscopy datasets. | SP:47b13a929cf63405d682352339ecd8f3748c531f |
Interpretable Unsupervised Diversity Denoising and Artefact Removal | 1 INTRODUCTION . Deep Learning ( DL ) based methods are currently the state-of-the-art ( SOTA ) for image denoising and artefact removal with supervised methods typically showing best performance ( Zhang et al. , 2017 ; Weigert et al. , 2017 ; Chen et al. , 2018 ; Delbracio et al. , 2020 ) . However , in order to be trained , supervised methods need either paired high and low quality images ( Zhang et al. , 2017 ; Weigert et al. , 2017 ; Delbracio et al. , 2020 ) or pairs of low quality images ( Lehtinen et al. , 2018 ; Buchholz et al. , 2019 ) . Both requirements are often not , or at least not efficiently satisfiable , hindering application of these methods to many practical applications , mainly in microscopy and biomedical imaging . Hence , unsupervised methods that can be trained on noisy images alone Krull et al . ( 2019 ) ; Batson & Royer ( 2019 ) ; Xie et al . ( 2020 ) ; Quan et al . ( 2020 ) present an attractive alternative . Methods additionally using models of imaging noise have also been proposed ( Krull et al. , 2020 ; Laine et al. , 2019 ; Prakash et al. , 2019b ) and can further boost the denoising performance of unsupervised models . Still , unsupervised methods suffer from three drawbacks : piq they fail to account for the inherent uncertainty present in a corrupted image and produce only a single restored solution ( i.e . point estimation ) 1 , piiq they typically show weaker overall performance than their supervised counterparts , and piiiq they are , by design , limited to pixel-wise noise removal and can not handle structured noises or other image artefacts ( spatially correlated noise ) . Recently , the first of these drawbacks was addressed by DIVNOISING ( DN ) ( Prakash et al. 
, 2021 ) which proposed a convolutional Variational Autoencoder ( VAE ) architecture for unsupervised denoising and generates diverse denoised solutions , giving users access to samples from a distribution of sensible denoising results . But DN exhibits poor performance on harder ( visually more complex and varied ) datasets , e.g . diverse sets of natural images . Additionally , it does not improve on 1This is also true for supervised methods . the performance achieved with other unsupervised denoisers on existing microscopy benchmark datasets ( Prakash et al. , 2021 ) , where supervised methods , whenever applicable , lead to best results . Last but not least , DN does not address the problem of artefact removal ( structured noise removal ) . We hypothesize that the performance of methods like DN is limited by the used VAE architecture , which can neither capture longer range dependencies , nor the full structural complexity of many practically relevant datasets . Although more expressive hierarchical VAE ( HVAE ) architectures have been known for some time in the context of image generation ( Sønderby et al. , 2016 ; Maaløe et al. , 2019 ; Vahdat & Kautz , 2020 ; Child , 2021 ) , so far they have never been applied to image restoration or denoising . Besides , well performing HVAE architectures are computationally very expensive , with training times easily spanning many days or even weeks on multiple modern GPUs ( Vahdat & Kautz , 2020 ; Child , 2021 ) . Hence , it remains unexplored if more expressive HVAE architectures can indeed improve unsupervised image restoration tasks and if they can do so efficiently enough to justify their application in practical use-cases . Contributions . In this paper , we first introduce a new architecture for unsupervised diversity denoising called HIERARCHICAL DIVNOISING ( HDN ) , inspired by ladder VAEs ( Sønderby et al. , 2016 ) , but introducing a number of important task-specific modifications . 
We show that HDN is considerably more expressive than DN ( see Fig . 2 ) , leading to much improved denoising results ( see Fig . 1 ) , while still not being excessively computationally expensive ( on a common 6 GB Tesla P100 GPU training still only takes about 1 day ) . Most importantly though , HDN leads to SOTA results on natural images and biomedical image data , outperforming 8 popular unsupervised denoising baselines and considerably closing the gap to supervised results ( see Table 1 ) . Additionally , we investigate the application of diversity denoising to the removal of spatially correlated structured noise in microscopy images , an application of immense practical impact . More specifically , we show that HDN , owing to its hierarchical architecture , enforces an interpretable decomposition of learned representations and we devise an effective approach to remove structured noise in microscopy data by taking full advantage of this interpretable hierarchical representation . We showcase this on multiple real-world datasets from diverse microscopy modalities , demonstrating that our method is an important step towards interpretable image restoration and is immediately applicable to many real-world datasets as they are acquired in the life-sciences on a daily basis . Related Work . Apart from the DL based works already discussed earlier , classical methods such as Non-Local Means ( Buades et al. , 2005 ) and BM3D ( Dabov et al. , 2007 ) have also found widespread use for denoising applications . A detailed review of classical denoising methods can be found in Milanfar ( 2012 ) . An interesting unsupervised DL based image restoration method is Deep Image Prior ( DIP ) ( Ulyanov et al. , 2018 ) which showed that convolutional neural networks ( CNNs ) can be used to restore corrupted images without supervision when training is stopped at an apriori unknown time before convergence . 
Methods such as DIP ( or other methods that require training for each input image separately , e.g . SELF2SELF ( Quan et al. , 2020 ) ) are , however , computationally demanding when applied to entire datasets . Diversity pixel denoising based on VAEs was introduced by DN ( Prakash et al. , 2021 ) , and a similar approach using GANs by Ohayon et al . ( 2021 ) . For structured noise removal , Broaddus et al . ( 2020 ) extended Krull et al . ( 2019 ) to specifically remove structured line artefacts that can arise in microscopy data . Their method needs exact apriori knowledge about the size of artefacts , which can not span more than some connected pixels . Dorta et al . ( 2018 ) model structured noises as the uncertainty in the VAE decoder represented by a structured Gaussian distribution with non-diagonal covariance matrix which is learned separately by another network . Jancsary et al . ( 2012 ) learn a structured noise model in a regression tree field framework . Unlike these approaches , we take an interpretability-first perspective for artefact removal and remove artefacts even without having/learning any structured noise model . 2 THE IMAGE RESTORATION TASK . The task of image restoration involves the estimation of a clean and unobservable signal s “ ps1 , s2 , . . . , sN q from a corrupted reconstruction x “ px1 , x2 , . . . , xN q , where si and xi , refer to the respective pixel intensities in the image domain . In general , the reconstructed image comes from solving an inverse imaging problem giving by the forward model , y “ Apsq ` e , ( 1 ) where A is the forward operator ( tomography , blurring , sub-sampling , etc . ) , e is noise in the measurements typically assumed iid , and y is the noisy measurements . An image reconstruction algorithm is needed to recover an estimation x of s from the noisy measurements y . 
Typically , the image reconstruction is obtained through an optimization formulation , where we seek a solution x that fits the observed values and is compatible with some image prior R , x “ argmin s1 } Aps1q ´ y } 2 ` λRps1q , ( 2 ) where s1 is the auxiliary variable for the signal being optimized for and λ ě 0 is related to the level of confidence of the prior R. There exists an extensive amount of work defining image priors ( Rudin et al. , 1992 ; Golub et al. , 1999 ; Bruckstein et al. , 2009 ; Zoran & Weiss , 2011 ; Romano et al. , 2017 ; Ulyanov et al. , 2018 ) . Without loss of generality , we can decompose the reconstructed image as x “ s ` n , where n is the residual ( noise ) between the ideal image and the reconstructed one . Generally , the noise n on the reconstruction x is composed of pixel-noise ( such as Poisson or Gaussian noise ) and multi-pixel artefacts or structured noise that affect groups of pixels in correlated ways . Such artefacts arise through dependencies that are introduced by the adopted reconstruction technique and the domainspecific inverse problem ( e.g . tomography , microscopy , or ISP in an optical camera ) . For example , consider the case where the reconstruction is done using Tikhonov regularization Rpsq “ } s } 2 , in the particular case of linear operator . In this case , the solution to Eq . 2 is x “ pATA ` λIq´1AT y . Thus , x “ pATA ` λIq´1ATAs ` pATA ` λIq´1AT e “ s ` n , ( 3 ) where n “ ppATA ` λIq´1ATA´ Iqs ` pATA ` λIq´1ATe . ( 4 ) The reconstructed image is affected by colored noise ( term in Eq . 4 depending on e ) , and also by a structured perturbation introduced by the reconstruction method ( term in Eq . 4 depending on s ) . This structured perturbation appears even in the case when the measurements are noiseless . 
Accurately modeling the noise/artefacts on the reconstructed image is therefore challenging even in the simplest case where the reconstruction is linear , showing that structured noise removal is non-trivial . In Section 5 , we deal with image denoising where noise contribution ni to each pixel xi is assumed to be conditionally independent given the signal si , i.e . ppn|sq “ ś i ppni|siq ( Krull et al. , 2019 ) . In many practical applications , including the ones presented in Section 6 , this assumption does not hold true , and the noise n is referred to as structured noise/artefacts . 3 NON-HIERARCHICAL VAE BASED DIVERSITY DENOISERS . Recently , non-hierarchical VAE based architectures ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ; Prakash et al. , 2021 ) were explored in the context of image denoising allowing a key advantage of providing samples sk drawn from a posterior distribution of denoised images . More formally , the denoising operation of these models fψ can be expressed by fψpxq “ sk „ pps|xq , where ψ are the model parameters . In this section , we provide a brief overview of these models . Vanilla VAEs . VAEs are generative encoder-decoder models , capable of learning a latent representation of the data and capturing a distribution over inputs x ( Kingma & Welling , 2019 ; Rezende et al. , 2014 ) . The encoder maps input x to a conditional distribution qφpz|xq in latent space . The decoder , gθpzq , takes a sample from qφpz|xq and maps it to a distribution pθpx|zq in image space . Encoder and decoder are neural networks , jointly trained to minimize the loss Lφ , θpxq “ Eqφpz|xqr´ log pθpx|zqs ` KL pqφpz|xq||ppzqq “ L R φ , θpxq ` LKLφ pxq , ( 5 ) with the second term being the KL-divergence between the encoder distribution qφpz|xq and prior distribution ppzq ( usually a unit Normal distribution ) . The network parameters of the encoder and decoder are given by φ and θ , respectively . 
The decoder usually factorizes over pixels as p_θ(x|z) = ∏_{i=1}^N p_θ(x_i|z), (6) with p_θ(x_i|z) being a Normal distribution predicted by the decoder, x_i representing the intensity of pixel i, and N being the total number of pixels in image x. The encoder distribution is modeled in a similar way, factorizing over the dimensions of the latent space z. The performance of vanilla VAEs on pixel-noise removal tasks is analyzed and compared to DN in Prakash et al. (2021). DIVNOISING (DN) . DN (Prakash et al., 2021) is a VAE based method for unsupervised pixel-noise removal that incorporates an explicit pixel-wise noise model p_NM(x|s) in the decoder. More formally, the generic Normal distribution over pixel intensities of Eq. 6 is replaced with a known noise model p_NM(x|s) which factorizes over pixels, i.e. p_NM(x|s) = ∏_{i=1}^N p_NM(x_i|s_i), and the decoder learns a mapping from z directly to the space of restored images, i.e. g_θ(z) = s. Therefore, p_θ(x|z) = p_NM(x|g_θ(z)). (7) The loss of DN hence becomes L_{φ,θ}(x) = E_{q_φ(z|x)}[ ∑_{i=1}^N −log p(x_i|g_θ(z)) ] + KL(q_φ(z|x) || p(z)) = L^R_{φ,θ}(x) + L^{KL}_φ(x). (8) Intuitively, the reconstruction loss in Eq. 8 measures how likely the generated image is given the noise model, while the KL loss incentivizes the encoded distribution to be unit Normal. DN is the current SOTA for many unsupervised denoising benchmarks (working particularly well for microscopy images), but performs poorly in complex domains such as natural image denoising (Prakash et al., 2021). | The reviewed work presents a new and very promising approach to unsupervised denoising titled Hierarchical DivNoising (HDN), which builds on the earlier published DivNoising architecture. The authors compare the performance of their HDN approach to the closest state-of-the-art unsupervised as well as supervised denoising methods on an impressive set of twelve datasets.
On all datasets, the presented method outperformed the other unsupervised alternatives, sometimes even improving on the performance of its supervised counterparts. The authors also show that the layers of the proposed architecture encode interpretable levels of abstraction, which can be exploited, e.g., to remove the structured noise often present in microscopy images. | SP:47b13a929cf63405d682352339ecd8f3748c531f |
Natural Attribute-based Shift Detection | 1 INTRODUCTION . Deep learning has significantly improved performance in various domains such as computer vision, natural language processing, and healthcare (Bojarski et al., 2016; Mikolov et al., 2013; Esteva et al., 2016). However, it has been reported that deep classifiers make unreliable predictions on samples drawn from a different distribution than the training distribution (Amodei et al., 2016; Hendrycks & Gimpel, 2017; Nguyen et al., 2015). Detecting unreliable predictions is important for building a robust model, but it is particularly hard when the test distribution is gradually shifting due to a natural attribute, since it is difficult to determine whether the classifier fails when the shift is not significant. Such shifts occur in the real world as a result of a change in a specific attribute. For example, a clinical text-based diagnosis classifier trained in 2021 will gradually encounter increasingly shifted samples as time flows, since writing styles change and new terms are introduced over time. Detecting such samples is a vital task especially in safety-critical systems, such as autonomous vehicle control or medical diagnosis, where wrong predictions can lead to dire consequences. To this end, we take a step forward by proposing a new task of detecting samples shifted by a natural attribute (e.g., age, time) that can easily be observed in the real-world setting. We refer to such shifts as Natural Attribute-based Shifts (NAtS), and the task of detecting them as NAtS detection. Detecting NAtS is both different from, and more challenging than, out-of-distribution (OOD) detection (Hendrycks & Gimpel, 2017; Liang et al., 2018; Lee et al., 2018; Chandramouli & Sageev, 2020), which typically evaluates detection methods with clearly distinguished in-distribution (ID) samples and OOD samples (e.g.
, CIFAR10 as ID and SVHN as OOD, which have disjoint labels). In contrast, we aim to detect samples from a natural attribute-based shift within the same label space. Since NAtS samples share more features with the ID than typical OOD samples do, identifying the former is expected to be more challenging than the latter. Although OOD detection has some relevance to NAtS detection, a comprehensive evaluation of existing OOD detection methods on natural attribute-based shifts is unexplored territory. Therefore, in this paper, we perform an extensive evaluation of representative OOD methods on NAtS samples. Depending on the task environment, NAtS detection can be pursued in parallel to domain generalization (Seo et al., 2020; Gulrajani & Lopez-Paz, 2020; Carlucci et al., 2019), which aims to overcome domain shifts (e.g., an image classifier adapting to sketches, photos, art paintings, etc.). For example, an X-ray-based diagnosis model should detect images of unusual brightness so that the X-ray machine can be properly configured and the diagnosis model can perform in its optimal setting. In other cases, domain generalization can be preferred, such as when we expect the classifier to be deployed in a less controlled environment (e.g., an online image classifier) for non-safety-critical tasks. In this paper, we formalize NAtS detection to enhance the reliability of real-world decision systems. Since there exists no standard dataset for this task, we create new benchmark datasets in the vision, text, and medical domains by adjusting natural attributes (e.g., age, time, and brightness) of the ID dataset. Then we conduct an extensive evaluation of representative confidence- and distance-based OOD methods on our datasets and observe that none of the methods perform consistently across all NAtS datasets.
After a careful analysis of where NAtS samples reside in the feature space and its impact on distance- and confidence-based OOD detection performance, we identify the root cause of the inconsistent performance. Following this observation, we define three general NAtS categories based on two criteria: the distance between NAtS samples and the decision boundary, and the distance between NAtS samples and the ID data. Finally, we conduct an additional experiment to demonstrate that a simple modification to the negative log-likelihood training objective can dramatically help the Mahalanobis detector (Lee et al., 2018), a distance-based OOD detection method, generalize to all NAtS categories. We also compare our results with various baselines and show that our proposed modification outperforms the baselines and is effective across the three NAtS datasets. In summary, the contributions of this paper are as follows: • We define a new task, Natural Attribute-based Shift detection (NAtS detection), which aims to detect samples from a distribution shifted by some natural attribute. We create new benchmark datasets and provide them to encourage further research on NAtS detection. • To the best of our knowledge, this is the first work to conduct a comprehensive evaluation of OOD detection methods on shifts based on natural attributes, and to discover that none of the OOD methods perform consistently across all NAtS scenarios. • We provide a novel analysis relating the location of shifted samples in the feature space to the performance of existing OOD detection methods. Based on this analysis, we split NAtS samples into three categories. • We demonstrate that a simple yet effective modification to the training objective for deep classifiers enables consistent OOD detection performance across all NAtS categories. 2 NATURAL ATTRIBUTE-BASED SHIFT DETECTION .
We now formalize a new task, NAtS detection, which aims to enhance the reliability of real-world decision systems by detecting samples from NAtS. We address this task in the classification setting. Let D_I = {X, Y} denote the in-distribution data, which is composed of N training samples with inputs X = {x_1, ..., x_N} and labels Y = {y_1, ..., y_N}. Specifically, x_i ∈ R^d represents a d-dimensional input vector, and y_i ∈ K represents its corresponding label, where K = {1, ..., K} is the set of class labels. The discriminative model f_θ : X → Y learns from the ID dataset D_I to assign label y_i to each x_i. In the NAtS detection setting, we assume that an in-distribution sample consists of attributes, and some of the attributes can be shifted at test time due to natural causes such as time, age, or brightness. When a particular attribute A (e.g., age), which has a value of a (e.g., 16), is shifted by a degree δ, the shifted distribution can be denoted as D_S^{A=a+δ} = {X′, Y′}. X′ = {x′_1, ..., x′_M} and Y′ = {y′_1, ..., y′_M} represent the M shifted samples and labels, respectively. Importantly, in the NAtS setting, although the test distribution is changed from the ID, the label space is preserved as K, the set of class labels in D_I. At test time, the model f_θ might encounter a sample x′ from the shifted data D_S^{A=a+δ}, and it should be able to identify that the attribute-shifted sample is not from the ID. 3 NATS DATASET DESCRIPTION . In this section, we describe three benchmark datasets which have a controllable attribute for simulating realistic distribution shifts. Since there exists no standard dataset for NAtS detection, we create new benchmark datasets from existing datasets by adjusting natural attributes in order to reflect real-world scenarios. We carefully select datasets from the vision, language, and medical domains containing natural attributes (e.g.
, year, age, and brightness), which allow us to naturally split the samples. By grouping samples based on these attributes, we can induce natural attribute-based distribution shifts as described below. Image . We use the UTKFace dataset (Zhang et al., 2017), which consists of over 20,000 face images with annotations of age, gender, and ethnicity. As shown in Figure 1, we can visually observe that the facial images vary with age. Therefore, we set the 1,282 facial images of 26-year-olds as D_I. For creating the NAtS dataset, we vary the age attribute of the UTKFace dataset. To obtain an equal number of samples per group, age groups that have fewer than 200 images are merged until each merged group has at least 200 samples. Finally, 15 groups D_S^{age} are produced for the NAtS datasets, varying the age from 25 down to 1 (i.e., D_S^{age=25}, D_S^{age=24}, ..., D_S^{age=1}). Text . We use the Amazon Review dataset (He & McAuley, 2016; McAuley et al., 2015), which contains product reviews from Amazon. We consider the product category "book" and group its reviews based on the year to reflect the distributional shift across time, obtaining one group per year from 2005 to 2014. The group with 24,000 reviews posted in 2005 is set as D_I, and the 9 groups of reviews after 2005 as D_S^{year} (i.e., D_S^{year=2006}, D_S^{year=2007}, ..., D_S^{year=2014}). Each D_S^{year} group contains 1,500 positive reviews and 1,500 negative reviews. We observed that as we move ahead in time, the average length of a review gets shorter and reviews use more adjectives than in previous years. Due to the space constraint, we provide a detailed analysis of the dataset in Section B of the Appendix. Medical . We use the RSNA Bone Age dataset (Halabi et al., 2019), a real-world dataset that contains left-hand X-ray images of patients, along with their gender and age (0 to 20 years).
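The grouping scheme above (set one attribute value as D_I, bin the remaining values into shifted groups D_S) can be sketched as follows; the toy records and the helper `split_by_attribute` are illustrative stand-ins, not part of any released benchmark code:

```python
from collections import defaultdict

# Toy records with a natural attribute (age); values are made up,
# not actual UTKFace annotations.
samples = [
    {"image": "img0", "age": 26}, {"image": "img1", "age": 26},
    {"image": "img2", "age": 25}, {"image": "img3", "age": 24},
    {"image": "img4", "age": 24},
]

def split_by_attribute(samples, attribute, id_value):
    """Return (D_I, {value: D_S for that value}) as in Section 3:
    samples with attribute == id_value form the in-distribution set,
    every other attribute value forms one shifted group."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[attribute]].append(s)
    d_id = groups.pop(id_value, [])
    return d_id, dict(groups)

d_id, shifted = split_by_attribute(samples, "age", id_value=26)
print(len(d_id), sorted(shifted))  # 2 [24, 25]
```

Merging under-populated groups (as done for UTKFace ages with fewer than 200 images) would be a simple post-processing step over the `shifted` dictionary.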
We consider patients in the age group of 10 to 12 years for our dataset. To reflect diverse X-ray imaging set-ups in hospitals, we vary the brightness factor between 0 and 4.5 to form 16 different datasets D_S^{brightness} (i.e., D_S^{brightness=0.0}, D_S^{brightness=0.2}, ..., D_S^{brightness=4.5}), where each group contains X-ray images of 200 males and 200 females. Figure 1 presents X-ray images with different levels of brightness, showing realistic and continuous distribution shifts. The in-distribution data D_I is composed of 3,000 images with brightness factor 1.0 (unmodified images). 4 CAN OOD DETECTION METHODS ALSO DETECT NATS ? . In this section, we briefly discuss OOD detection methods and conduct an extensive evaluation of them on our proposed benchmark datasets. 4.1 OOD DETECTION METHODS . In this work, we use three widely-used post-hoc and modality-agnostic OOD detection methods. We use the maximum softmax probability (MSP) (Hendrycks & Gimpel, 2017) and ODIN (Liang et al., 2018) as confidence-based OOD detection baselines, and the Mahalanobis detector (Lee et al., 2018) as a distance-based OOD detection baseline. Note that ODIN and the Mahalanobis detector assume the availability of an OOD validation dataset to tune their hyperparameters. However, for all our experiments, we use variants of the above methods that do not access the OOD validation dataset, following Hsu et al. (2020). The exact equations and details of how each OOD detection method assigns an OOD score to a given sample are provided in Section A of the Appendix. | The paper identifies the problem of a Natural Attribute-based Shift (NAS) in the data distribution affecting Deep Learning (DL) systems designed for the task of interest. An example may be a DL system for X-ray-based diagnosis being able to handle a different brightness than it was trained for.
The paper argues that NAS is similar to out-of-distribution (OOD) data, but instead of the difference being in the label space (e.g., CIFAR vs. SVHN), the label space stays the same while the test data distribution is shifted due to a systematic shift in an important data attribute (the brightness of the image in the X-ray example above). The paper claims the following contributions: - A new task of NAS detection and datasets to study it. - A comprehensive evaluation of OOD methods on the NAS datasets and a demonstration that none of them perform consistently across all NAS scenarios. - A novel analysis based on the location of shifted samples in the feature space and the performance of existing OOD methods. - A demonstration that a simple modification to the training objective for deep classifiers enables consistent OOD performance on all NAS scenarios. | SP:2939e74d23584a619ab468cbd3831f36af0d7a6e |
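Of the baselines listed in Section 4.1 above, MSP is the simplest to sketch: the confidence score of a sample is the maximum softmax probability of the classifier's logits, and low scores flag shifted/OOD inputs. The logits below are made up for illustration:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax (T=1 for MSP; ODIN additionally
    uses T > 1 together with an input perturbation)."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """MSP (Hendrycks & Gimpel, 2017): max softmax probability.
    Lower scores suggest the sample may be shifted/OOD."""
    return softmax(logits).max(axis=-1)

# Illustrative logits: a confident ID-like prediction vs. a flat,
# uncertain one as might arise for a strongly shifted sample.
id_logits = np.array([6.0, 0.5, -1.0])
shifted_logits = np.array([1.1, 1.0, 0.9])

print(msp_score(id_logits) > msp_score(shifted_logits))  # True
```

Thresholding `msp_score` then yields a binary ID-vs-NAtS decision; the threshold is typically chosen on ID validation data (e.g., at a fixed true-positive rate).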
Natural Attribute-based Shift Detection | 1 INTRODUCTION . Deep learning has significantly improved performance in various domains such as computer vision, natural language processing, and healthcare (Bojarski et al., 2016; Mikolov et al., 2013; Esteva et al., 2016). However, it has been reported that deep classifiers make unreliable predictions on samples drawn from a different distribution than the training distribution (Amodei et al., 2016; Hendrycks & Gimpel, 2017; Nguyen et al., 2015). Detecting unreliable predictions is important for building a robust model, but it is particularly hard when the test distribution is gradually shifting due to a natural attribute, since it is difficult to determine whether the classifier fails when the shift is not significant. Such shifts occur in the real world as a result of a change in a specific attribute. For example, a clinical text-based diagnosis classifier trained in 2021 will gradually encounter increasingly shifted samples as time flows, since writing styles change and new terms are introduced over time. Detecting such samples is a vital task especially in safety-critical systems, such as autonomous vehicle control or medical diagnosis, where wrong predictions can lead to dire consequences. To this end, we take a step forward by proposing a new task of detecting samples shifted by a natural attribute (e.g., age, time) that can easily be observed in the real-world setting. We refer to such shifts as Natural Attribute-based Shifts (NAtS), and the task of detecting them as NAtS detection. Detecting NAtS is both different from, and more challenging than, out-of-distribution (OOD) detection (Hendrycks & Gimpel, 2017; Liang et al., 2018; Lee et al., 2018; Chandramouli & Sageev, 2020), which typically evaluates detection methods with clearly distinguished in-distribution (ID) samples and OOD samples (e.g.
, CIFAR10 as ID and SVHN as OOD, which have disjoint labels). In contrast, we aim to detect samples from a natural attribute-based shift within the same label space. Since NAtS samples share more features with the ID than typical OOD samples do, identifying the former is expected to be more challenging than the latter. Although OOD detection has some relevance to NAtS detection, a comprehensive evaluation of existing OOD detection methods on natural attribute-based shifts is unexplored territory. Therefore, in this paper, we perform an extensive evaluation of representative OOD methods on NAtS samples. Depending on the task environment, NAtS detection can be pursued in parallel to domain generalization (Seo et al., 2020; Gulrajani & Lopez-Paz, 2020; Carlucci et al., 2019), which aims to overcome domain shifts (e.g., an image classifier adapting to sketches, photos, art paintings, etc.). For example, an X-ray-based diagnosis model should detect images of unusual brightness so that the X-ray machine can be properly configured and the diagnosis model can perform in its optimal setting. In other cases, domain generalization can be preferred, such as when we expect the classifier to be deployed in a less controlled environment (e.g., an online image classifier) for non-safety-critical tasks. In this paper, we formalize NAtS detection to enhance the reliability of real-world decision systems. Since there exists no standard dataset for this task, we create new benchmark datasets in the vision, text, and medical domains by adjusting natural attributes (e.g., age, time, and brightness) of the ID dataset. Then we conduct an extensive evaluation of representative confidence- and distance-based OOD methods on our datasets and observe that none of the methods perform consistently across all NAtS datasets.
After a careful analysis of where NAtS samples reside in the feature space and its impact on distance- and confidence-based OOD detection performance, we identify the root cause of the inconsistent performance. Following this observation, we define three general NAtS categories based on two criteria: the distance between NAtS samples and the decision boundary, and the distance between NAtS samples and the ID data. Finally, we conduct an additional experiment to demonstrate that a simple modification to the negative log-likelihood training objective can dramatically help the Mahalanobis detector (Lee et al., 2018), a distance-based OOD detection method, generalize to all NAtS categories. We also compare our results with various baselines and show that our proposed modification outperforms the baselines and is effective across the three NAtS datasets. In summary, the contributions of this paper are as follows: • We define a new task, Natural Attribute-based Shift detection (NAtS detection), which aims to detect samples from a distribution shifted by some natural attribute. We create new benchmark datasets and provide them to encourage further research on NAtS detection. • To the best of our knowledge, this is the first work to conduct a comprehensive evaluation of OOD detection methods on shifts based on natural attributes, and to discover that none of the OOD methods perform consistently across all NAtS scenarios. • We provide a novel analysis relating the location of shifted samples in the feature space to the performance of existing OOD detection methods. Based on this analysis, we split NAtS samples into three categories. • We demonstrate that a simple yet effective modification to the training objective for deep classifiers enables consistent OOD detection performance across all NAtS categories. 2 NATURAL ATTRIBUTE-BASED SHIFT DETECTION .
We now formalize a new task, NAtS detection, which aims to enhance the reliability of real-world decision systems by detecting samples from NAtS. We address this task in the classification setting. Let D_I = {X, Y} denote the in-distribution data, which is composed of N training samples with inputs X = {x_1, ..., x_N} and labels Y = {y_1, ..., y_N}. Specifically, x_i ∈ R^d represents a d-dimensional input vector, and y_i ∈ K represents its corresponding label, where K = {1, ..., K} is the set of class labels. The discriminative model f_θ : X → Y learns from the ID dataset D_I to assign label y_i to each x_i. In the NAtS detection setting, we assume that an in-distribution sample consists of attributes, and some of the attributes can be shifted at test time due to natural causes such as time, age, or brightness. When a particular attribute A (e.g., age), which has a value of a (e.g., 16), is shifted by a degree δ, the shifted distribution can be denoted as D_S^{A=a+δ} = {X′, Y′}. X′ = {x′_1, ..., x′_M} and Y′ = {y′_1, ..., y′_M} represent the M shifted samples and labels, respectively. Importantly, in the NAtS setting, although the test distribution is changed from the ID, the label space is preserved as K, the set of class labels in D_I. At test time, the model f_θ might encounter a sample x′ from the shifted data D_S^{A=a+δ}, and it should be able to identify that the attribute-shifted sample is not from the ID. 3 NATS DATASET DESCRIPTION . In this section, we describe three benchmark datasets which have a controllable attribute for simulating realistic distribution shifts. Since there exists no standard dataset for NAtS detection, we create new benchmark datasets from existing datasets by adjusting natural attributes in order to reflect real-world scenarios. We carefully select datasets from the vision, language, and medical domains containing natural attributes (e.g.
, year, age, and brightness), which allow us to naturally split the samples. By grouping samples based on these attributes, we can induce natural attribute-based distribution shifts as described below. Image . We use the UTKFace dataset (Zhang et al., 2017), which consists of over 20,000 face images with annotations of age, gender, and ethnicity. As shown in Figure 1, we can visually observe that the facial images vary with age. Therefore, we set the 1,282 facial images of 26-year-olds as D_I. For creating the NAtS dataset, we vary the age attribute of the UTKFace dataset. To obtain an equal number of samples per group, age groups that have fewer than 200 images are merged until each merged group has at least 200 samples. Finally, 15 groups D_S^{age} are produced for the NAtS datasets, varying the age from 25 down to 1 (i.e., D_S^{age=25}, D_S^{age=24}, ..., D_S^{age=1}). Text . We use the Amazon Review dataset (He & McAuley, 2016; McAuley et al., 2015), which contains product reviews from Amazon. We consider the product category "book" and group its reviews based on the year to reflect the distributional shift across time, obtaining one group per year from 2005 to 2014. The group with 24,000 reviews posted in 2005 is set as D_I, and the 9 groups of reviews after 2005 as D_S^{year} (i.e., D_S^{year=2006}, D_S^{year=2007}, ..., D_S^{year=2014}). Each D_S^{year} group contains 1,500 positive reviews and 1,500 negative reviews. We observed that as we move ahead in time, the average length of a review gets shorter and reviews use more adjectives than in previous years. Due to the space constraint, we provide a detailed analysis of the dataset in Section B of the Appendix. Medical . We use the RSNA Bone Age dataset (Halabi et al., 2019), a real-world dataset that contains left-hand X-ray images of patients, along with their gender and age (0 to 20 years).
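Simulating the brightness attribute shift applied to the RSNA X-rays might look like the sketch below; a simple multiplicative intensity model with clipping is assumed here, which may differ from the exact transform the authors used:

```python
import numpy as np

def adjust_brightness(image, factor):
    """Scale 8-bit pixel intensities by `factor`, clipping to [0, 255].
    (An assumed multiplicative brightness model, for illustration only.)"""
    return np.clip(image.astype(np.float64) * factor, 0, 255).astype(np.uint8)

img = np.array([[0, 100, 200]], dtype=np.uint8)  # a tiny toy "X-ray"
print(adjust_brightness(img, 1.0).tolist())  # [[0, 100, 200]]  (factor 1.0 = unmodified, D_I)
print(adjust_brightness(img, 2.0).tolist())  # [[0, 200, 255]]  (bright pixels saturate)
print(adjust_brightness(img, 0.0).tolist())  # [[0, 0, 0]]      (factor 0 = all black)
```

Sweeping `factor` over the grid 0.0, 0.2, ..., 4.5 would then produce one shifted group D_S per brightness level, with factor 1.0 reserved for the in-distribution set.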
We consider patients in the age group of 10 to 12 years for our dataset. To reflect diverse X-ray imaging set-ups in hospitals, we vary the brightness factor between 0 and 4.5 to form 16 different datasets D_S^{brightness} (i.e., D_S^{brightness=0.0}, D_S^{brightness=0.2}, ..., D_S^{brightness=4.5}), where each group contains X-ray images of 200 males and 200 females. Figure 1 presents X-ray images with different levels of brightness, showing realistic and continuous distribution shifts. The in-distribution data D_I is composed of 3,000 images with brightness factor 1.0 (unmodified images). 4 CAN OOD DETECTION METHODS ALSO DETECT NATS ? . In this section, we briefly discuss OOD detection methods and conduct an extensive evaluation of them on our proposed benchmark datasets. 4.1 OOD DETECTION METHODS . In this work, we use three widely-used post-hoc and modality-agnostic OOD detection methods. We use the maximum softmax probability (MSP) (Hendrycks & Gimpel, 2017) and ODIN (Liang et al., 2018) as confidence-based OOD detection baselines, and the Mahalanobis detector (Lee et al., 2018) as a distance-based OOD detection baseline. Note that ODIN and the Mahalanobis detector assume the availability of an OOD validation dataset to tune their hyperparameters. However, for all our experiments, we use variants of the above methods that do not access the OOD validation dataset, following Hsu et al. (2020). The exact equations and details of how each OOD detection method assigns an OOD score to a given sample are provided in Section A of the Appendix. | The authors propose a new task named NAS (natural attribute-based shift) detection, which is a sub-task of OOD (out-of-distribution) detection. Three datasets are designed from existing datasets for the NAS experiments. By visualizing the distribution of per-category features produced by the model on the different datasets using PCA, the authors identify three categories of NAS distributions.
In total, three OOD methods are tested on the NAS datasets and show inconsistent performance. Finally, the authors propose using a distance loss and an entropy loss to retrain the models to improve the distance-based OOD methods. | SP:2939e74d23584a619ab468cbd3831f36af0d7a6e |
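The distance-based Mahalanobis detector (Lee et al., 2018) discussed in Section 4.1 scores a sample by its minimum class-conditional Mahalanobis distance under per-class means and a tied covariance fitted on ID features. Below is a simplified single-layer sketch on synthetic 2-D features (real implementations aggregate scores over several network layers):

```python
import numpy as np

def fit_gaussians(features, labels):
    """Per-class means and the inverse of a shared ("tied") covariance,
    as in Lee et al. (2018); a small ridge keeps the inverse stable."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    return means, np.linalg.inv(cov + 1e-6 * np.eye(features.shape[1]))

def mahalanobis_score(x, means, cov_inv):
    """Negative minimum Mahalanobis distance over classes:
    lower scores (larger distances) suggest a shifted sample."""
    return -min((x - m) @ cov_inv @ (x - m) for m in means.values())

# Synthetic ID features for two well-separated classes.
rng = np.random.default_rng(2)
feats = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
means, cov_inv = fit_gaussians(feats, labels)

id_point = np.array([0.1, -0.2])    # near the class-0 mean
far_point = np.array([20.0, 20.0])  # far from both classes
print(mahalanobis_score(id_point, means, cov_inv) >
      mahalanobis_score(far_point, means, cov_inv))  # True
```

This geometry also explains the inconsistency noted in the reviews above: NAtS samples that shift toward the decision boundary but stay close to the ID feature clusters keep small Mahalanobis distances and evade a purely distance-based detector.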
Natural Attribute-based Shift Detection | 1 INTRODUCTION . Deep learning has significantly improved performance in various domains such as computer vision, natural language processing, and healthcare (Bojarski et al., 2016; Mikolov et al., 2013; Esteva et al., 2016). However, it has been reported that deep classifiers make unreliable predictions on samples drawn from a different distribution than the training distribution (Amodei et al., 2016; Hendrycks & Gimpel, 2017; Nguyen et al., 2015). Detecting unreliable predictions is important for building a robust model, but it is particularly hard when the test distribution is gradually shifting due to a natural attribute, since it is difficult to determine whether the classifier fails when the shift is not significant. Such shifts occur in the real world as a result of a change in a specific attribute. For example, a clinical text-based diagnosis classifier trained in 2021 will gradually encounter increasingly shifted samples as time flows, since writing styles change and new terms are introduced over time. Detecting such samples is a vital task especially in safety-critical systems, such as autonomous vehicle control or medical diagnosis, where wrong predictions can lead to dire consequences. To this end, we take a step forward by proposing a new task of detecting samples shifted by a natural attribute (e.g., age, time) that can easily be observed in the real-world setting. We refer to such shifts as Natural Attribute-based Shifts (NAtS), and the task of detecting them as NAtS detection. Detecting NAtS is both different from, and more challenging than, out-of-distribution (OOD) detection (Hendrycks & Gimpel, 2017; Liang et al., 2018; Lee et al., 2018; Chandramouli & Sageev, 2020), which typically evaluates detection methods with clearly distinguished in-distribution (ID) samples and OOD samples (e.g.
, CIFAR10 as ID and SVHN as OOD, which have disjoint labels). In contrast, we aim to detect samples from a natural attribute-based shift within the same label space. Since NAtS samples share more features with the ID than typical OOD samples do, identifying the former is expected to be more challenging than the latter. Although OOD detection has some relevance to NAtS detection, a comprehensive evaluation of existing OOD detection methods on natural attribute-based shifts is unexplored territory. Therefore, in this paper, we perform an extensive evaluation of representative OOD methods on NAtS samples. Depending on the task environment, NAtS detection can be pursued in parallel to domain generalization (Seo et al., 2020; Gulrajani & Lopez-Paz, 2020; Carlucci et al., 2019), which aims to overcome domain shifts (e.g., an image classifier adapting to sketches, photos, art paintings, etc.). For example, an X-ray-based diagnosis model should detect images of unusual brightness so that the X-ray machine can be properly configured and the diagnosis model can perform in its optimal setting. In other cases, domain generalization can be preferred, such as when we expect the classifier to be deployed in a less controlled environment (e.g., an online image classifier) for non-safety-critical tasks. In this paper, we formalize NAtS detection to enhance the reliability of real-world decision systems. Since there exists no standard dataset for this task, we create new benchmark datasets in the vision, text, and medical domains by adjusting natural attributes (e.g., age, time, and brightness) of the ID dataset. Then we conduct an extensive evaluation of representative confidence- and distance-based OOD methods on our datasets and observe that none of the methods perform consistently across all NAtS datasets.
After a careful analysis of where NAtS samples reside in the feature space and its impact on distance- and confidence-based OOD detection performance, we identify the root cause of the inconsistent performance. Following this observation, we define three general NAtS categories based on two criteria: the distance between NAtS samples and the decision boundary, and the distance between NAtS samples and the ID data. Finally, we conduct an additional experiment to demonstrate that a simple modification to the negative log-likelihood training objective can dramatically help the Mahalanobis detector (Lee et al., 2018), a distance-based OOD detection method, generalize to all NAtS categories. We also compare our results with various baselines and show that our proposed modification outperforms the baselines and is effective across the three NAtS datasets. In summary, the contributions of this paper are as follows: • We define a new task, Natural Attribute-based Shift detection (NAtS detection), which aims to detect samples from a distribution shifted by some natural attribute. We create new benchmark datasets and provide them to encourage further research on NAtS detection. • To the best of our knowledge, this is the first work to conduct a comprehensive evaluation of OOD detection methods on shifts based on natural attributes, and to discover that none of the OOD methods perform consistently across all NAtS scenarios. • We provide a novel analysis relating the location of shifted samples in the feature space to the performance of existing OOD detection methods. Based on this analysis, we split NAtS samples into three categories. • We demonstrate that a simple yet effective modification to the training objective for deep classifiers enables consistent OOD detection performance across all NAtS categories. 2 NATURAL ATTRIBUTE-BASED SHIFT DETECTION .
We now formalize a new task, NAtS detection, which aims to enhance the reliability of real-world decision systems by detecting samples from NAtS. We address this task in the classification setting. Let $D_I = \{X, Y\}$ denote the in-distribution data, composed of $N$ training samples with inputs $X = \{x_1, \ldots, x_N\}$ and labels $Y = \{y_1, \ldots, y_N\}$. Specifically, $x_i \in \mathbb{R}^d$ represents a $d$-dimensional input vector, and $y_i \in \mathcal{K}$ represents its corresponding label, where $\mathcal{K} = \{1, \ldots, K\}$ is the set of class labels. The discriminative model $f_\theta : X \to Y$ is trained on the ID dataset $D_I$ to assign the label $y_i$ to each $x_i$. In the NAtS detection setting, we assume that an in-distribution sample consists of attributes, and some of the attributes can be shifted at test time due to natural causes such as time, age, or brightness. When a particular attribute $A$ (e.g., age), which has a value of $a$ (e.g., 16), is shifted by a degree $\delta$, the shifted distribution can be denoted as $D_S^{A=a+\delta} = \{X', Y'\}$, where $X' = \{x'_1, \ldots, x'_M\}$ and $Y' = \{y'_1, \ldots, y'_M\}$ represent the $M$ shifted samples and labels, respectively. Importantly, in the NAtS setting, although the test distribution is changed from the ID, the label space is preserved as $\mathcal{K}$, the set of class labels in $D_I$. At test time, the model $f_\theta$ might encounter a sample $x'$ from the shifted data $D_S^{A=a+\delta}$, and it should be able to identify that the attribute-shifted sample is not from the ID. 3 NATS DATASET DESCRIPTION. In this section, we describe three benchmark datasets which have a controllable attribute for simulating realistic distribution shifts. Since there exists no standard dataset for NAtS detection, we create new benchmark datasets using existing datasets by adjusting natural attributes in order to reflect real-world scenarios. We carefully select datasets from the vision, language, and medical domains containing natural attributes (e.g.
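The task formalization above reduces, in practice, to thresholding a scalar score: a detector scores each test sample and flags it as attribute-shifted when the score falls below a threshold calibrated on ID data. A minimal sketch of that decision rule, with Gaussian toy scores standing in for any real detector (all names and values here are illustrative, not from the paper):

```python
import numpy as np

def detect_shift(scores_id, scores_test, fpr=0.05):
    """Flag test samples as shifted (NAtS) when their confidence score
    falls below a threshold calibrated on in-distribution (ID) scores.

    scores_* are detector outputs (higher = more ID-like); the threshold
    is the `fpr` quantile of ID scores, so ~fpr of ID data is flagged."""
    tau = np.quantile(scores_id, fpr)
    return scores_test < tau  # True -> predicted attribute-shifted sample

# toy scores: ID samples score around 1.0, shifted samples around 0.6
id_scores = np.random.default_rng(0).normal(1.0, 0.1, 1000)
shifted_scores = np.random.default_rng(1).normal(0.6, 0.1, 1000)
flags = detect_shift(id_scores, shifted_scores)
```

With well-separated score distributions, most shifted samples are flagged while only about 5% of ID samples are; the paper's point is that for NAtS the distributions overlap far more than in classic OOD settings.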
, year, age, and brightness), which allow us to naturally split the samples. By grouping samples based on these attributes, we can induce natural attribute-based distribution shifts as described below. Image. We use the UTKFace dataset (Zhang et al., 2017), which consists of over 20,000 face images with annotations of age, gender, and ethnicity. As shown in Figure 1, we can visually observe that the facial images vary with age. Therefore, we set the 1,282 facial images of 26-year-olds as $D_I$. To create the NAtS dataset, we vary the age of the UTKFace dataset. To obtain an equal number of samples in the NAtS dataset, age groups that have fewer than 200 images are merged into one group until it has 200 samples. Finally, 15 groups $D_S^{age}$ are produced for the NAtS datasets, varying the age from 25 to 1 (i.e., $D_S^{age=25}, D_S^{age=24}, \ldots, D_S^{age=1}$). Text. We use the Amazon Review dataset (He & McAuley, 2016; McAuley et al., 2015), which contains product reviews from Amazon. We consider the product category "book" and group its reviews by year to reflect the distributional shift across time, obtaining one group per year between 2005 and 2014. The group with 24,000 reviews posted in 2005 is set as $D_I$, and the 9 groups with reviews after 2005 as $D_S^{year}$ (i.e., $D_S^{year=2006}, D_S^{year=2007}, \ldots, D_S^{year=2014}$). Each $D_S^{year}$ group contains 1,500 positive reviews and 1,500 negative reviews. We observed that as we move ahead in time, the average length of a review gets shorter and it uses more adjectives than in previous years. Due to the space constraint, we provide a detailed analysis of the dataset in Section B of the Appendix. Medical. We use the RSNA Bone Age dataset (Halabi et al., 2019), a real-world dataset that contains left-hand X-ray images of patients, along with their gender and age (0 to 20 years).
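The grouping step used to build these benchmarks — partition samples by an attribute value, designate one value as ID, and treat the rest as shifted splits — can be sketched as follows (field names and data are illustrative, not the authors' preprocessing code):

```python
from collections import defaultdict

def split_by_attribute(samples, attribute, id_value):
    """Group samples by a natural attribute (e.g. age or year) and
    separate the in-distribution group from the shifted groups."""
    groups = defaultdict(list)
    for s in samples:
        groups[s[attribute]].append(s)
    d_in = groups.pop(id_value, [])
    # each remaining group is one candidate NAtS split D_S^{attr=value}
    return d_in, dict(groups)

# toy Amazon-style reviews grouped by year, with 2005 as the ID year
reviews = [{"year": y, "text": f"review-{i}"}
           for i, y in enumerate([2005, 2005, 2006, 2007, 2007, 2014])]
d_in, shifted = split_by_attribute(reviews, "year", 2005)
```

For a continuous attribute such as brightness, the same pattern applies after discretizing the attribute into the factor levels described above.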
We consider patients in the age group of 10 to 12 years for our dataset. To reflect the diverse X-ray imaging set-ups in hospitals, we vary the brightness factor between 0 and 4.5 and form 16 different datasets $D_S^{brightness}$ (i.e., $D_S^{brightness=0.0}, D_S^{brightness=0.2}, \ldots, D_S^{brightness=4.5}$), where each group contains X-ray images of 200 males and 200 females. Figure 1 presents X-ray images with different levels of brightness, showing realistic and continuous distribution shifts. The in-distribution data $D_I$ is composed of 3,000 images with brightness factor 1.0 (unmodified images). 4 CAN OOD DETECTION METHODS ALSO DETECT NATS? In this section, we briefly discuss OOD detection methods and conduct an extensive evaluation of OOD detection methods on our proposed benchmark datasets. 4.1 OOD DETECTION METHODS. In this work, we use three widely-used post-hoc and modality-agnostic OOD detection methods. We use the maximum softmax probability (MSP) (Hendrycks & Gimpel, 2017) and ODIN (Liang et al., 2018) as confidence-based OOD detection baselines, and the Mahalanobis detector (Lee et al., 2018) as a distance-based OOD detection baseline. Note that ODIN and the Mahalanobis detector assume the availability of an OOD validation dataset to tune their hyperparameters. However, for all our experiments, we use variants of the above methods that do not access the OOD validation dataset, following Hsu et al. (2020). The exact equations and details of how each OOD detection method assigns an OOD score to a given sample are provided in Section A of the Appendix. | The paper defines a task called Natural Attribute-based Shift detection (NAS detection) to identify samples shifted on some natural attributes, e.g. age, time and lighting. The authors create three benchmark datasets from existing datasets in three domains, i.e. vision, text and medical.
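Of the three baselines, MSP is the simplest: the score of a sample is the largest softmax probability the classifier assigns to any class, and low values suggest the sample is far from the training distribution. A minimal sketch (the paper's exact equations are in its Appendix A; this is the standard Hendrycks & Gimpel formulation):

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: the OOD score of each row of
    logits is its largest class probability."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

confident = np.array([[8.0, 0.0, 0.0]])  # peaked logits -> MSP near 1
uncertain = np.array([[1.0, 1.0, 1.0]])  # flat logits   -> MSP = 1/3
```

ODIN augments this with temperature scaling and input perturbation, and the Mahalanobis detector instead scores the feature-space distance to class-conditional Gaussians.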
Their contributions are in three aspects: evaluation and analysis of out-of-distribution (OOD) detection methods on those NAS benchmark datasets, classification of NAS samples into three different categories, and a modification to the training objective of deep classifiers to improve an OOD detection method. | SP:2939e74d23584a619ab468cbd3831f36af0d7a6e
Adaptive Speech Duration Modification using a Deep-Generative Framework | 1 INTRODUCTION. Human speech is a rich and varied mode of communication that encompasses both language/semantic information and the mood/intent of the speaker. The latter is primarily conveyed by prosodic features, such as pitch, energy, speaking rate, and voice quality. There are many applications where understanding and manipulating these prosodic features is required. Consider voice conversion systems as an example. Pitch and energy modifications are used to passively inject emotional cues into neutral speech or to change the overall speaking style (A. Russell et al., 2003; Schacter et al., 2011; Shankar et al., 2019a;b; Valle et al., 2019). Prosodic features are also used to evaluate the quality/engagement in human-machine dialog systems (Swerts & Krahmer, 2000), and they play a significant role in speaker identification and recognition systems (Park et al., 2016). While there are many approaches for automated pitch and energy modification (Toda et al., 2007; Aihara et al., 2012; Kaneko & Kameoka, 2017; Shankar et al., 2020b;a), comparatively little progress has been made in changing the duration/speaking rate of an utterance. In fact, the speaking rate plays a crucial role in conveying emotions (Schmidt et al., 2016) and in diagnosing human speech pathologies (Bayerl et al., 2020). The speaking rate is difficult to manipulate because, unlike pitch or energy, there is no explicit coding for either the signal duration or the speaking rhythm. Rather, these features are implicitly defined by the cardinality of the set of frames over a particular interval of interest. This cardinality is a global parameter that masks subtle variations in the speaking rate over an utterance. As a result, duration modification algorithms are not adaptive.
Instead, they either require considerable user supervision or they are geared towards aligning to known speech signals. Perhaps the earliest duration modification method is the time-domain pitch synchronous overlap and add (TD-PSOLA) algorithm (Charpentier & Stella, 1986). TD-PSOLA modifies the pitch and duration of a speech signal by replicating and interpolating between individual frames centered at the peaks of the auto-correlation signal. However, the user must manually specify both the portion of speech to modify and the exact manner in which it should be altered. Hence, the method is neither automated nor adaptive. An alternative approach is dynamic time warping (DTW), which finds the optimal time alignment between two parallel speech utterances (dtw, 2008). DTW constructs a pairwise similarity matrix between all frames of the two utterances and estimates a warping path between the starting (0, 0) and ending (Ts, Tt) points of the utterances based on a Viterbi-like decoding of the similarity matrix. While simple, DTW requires both the source and target utterances to be known a priori. Hence, it cannot be used for on-the-fly modification of new signals. Finally, recent advancements in deep learning have led to a new generation of neural vocoders that disentangle the semantic content from the speaking style (Oord et al., 2016; Shen et al., 2017; Wang et al., 2017). These vocoders can alter the speaking rate via the learned style embeddings. While these models represent seminal contributions to speech synthesis, the latent representations are learned in an unsupervised manner, which makes it difficult to control the output speaking style in a predictable manner. Another drawback of these methods is the large amount of data and computational resources required to train the models and generate new speech (Yasuda et al., 2020).
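The DTW baseline described above — accumulate a pairwise frame-distance matrix, then backtrack from the end point to (0, 0) — can be sketched in a few lines. This is the classical algorithm the paper contrasts itself with, not the proposed model; Euclidean frame distance is an illustrative choice:

```python
import numpy as np

def dtw_path(X, Y):
    """Classical DTW between feature sequences X (D x Ts) and Y (D x Tt):
    accumulate the pairwise cost matrix, then backtrack the optimal
    warping path from (Ts-1, Tt-1) to (0, 0)."""
    Ts, Tt = X.shape[1], Y.shape[1]
    cost = np.linalg.norm(X[:, :, None] - Y[:, None, :], axis=0)  # Ts x Tt
    D = np.full((Ts + 1, Tt + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ts + 1):
        for j in range(1, Tt + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack: at each cell, step to the cheapest predecessor
    path, (i, j) = [], (Ts, Tt)
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        moves = {(i - 1, j): D[i - 1, j], (i, j - 1): D[i, j - 1],
                 (i - 1, j - 1): D[i - 1, j - 1]}
        i, j = min(moves, key=moves.get)
    path.append((0, 0))
    return path[::-1]

X = np.array([[0.0, 1.0, 2.0]])        # source: 3 frames
Y = np.array([[0.0, 1.0, 1.0, 2.0]])   # target: 4 frames (one repeated)
path = dtw_path(X, Y)
```

Note the limitation the paper highlights: both `X` and `Y` must be available, so DTW cannot produce an alignment for an unseen target.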
In this paper, we introduce the first fully-automated adaptive speech duration modification scheme. Our approach combines the representation capabilities of deep neural networks with the structured simplicity of dynamic decoding. Namely, we model the alignment between a source and target utterance via a latent attention map; these maps are used as a replacement for the similarity matrix during backtracking. We train a masked convolutional encoder-decoder network to estimate these attention maps using a stochastic mean absolute error (MAE) formulation. Unlike the conventional DTW (dtw, 2008) algorithm, once trained, our framework operates entirely on the source utterance without needing to reference the target. We demonstrate our framework on a voice conversion task using the CMU-Arctic dataset (Kominek & W Black, 2004) and on three multi-speaker emotion conversion tasks using the VESUS dataset (Sager et al., 2019). Our experiments confirm that the proposed model can perform open-loop duration modification and produces high-quality speech. 2 METHOD. Fig. 1 illustrates our underlying generative process. Given an utterance X, we first estimate the length T of the (unknown) target utterance Y and subsequently use it to estimate a mask M for the attention map. The mask restricts the domain of the attention vectors At at each frame t during the inference stage to mitigate distortion of the output speech. We use paired data (Xtr, Ytr) to train a convolutional encoder-decoder network to generate the attention vectors. During testing, we first generate the attention map from the input X and use it to produce the target speech Y. 2.1 LOSS FUNCTION. Formally, let $X \in \mathbb{R}^{D \times T_s}$ denote the input speech. In this work, X corresponds to the Mel filter-bank energies extracted from a short-time moving-window analysis, where D is the number of filter-banks and $T_s$ is the number of temporal frames in the utterance.
Similarly, we denote the target speech as $Y \in \mathbb{R}^{D \times T}$. Notice that the target utterance length $T$ is usually different from $T_s$. Our generative process for a single frame of the target speech is represented as follows:
$$T \sim \mathrm{Laplace}(T^0, b_T) \quad \text{and} \quad Y_t \sim \mathrm{Laplace}(Y_t^0, b_y), \qquad (1)$$
where $T$ is the estimated length of the target utterance, and $Y_t$ is the target Mel filter-bank energy feature at time $t$. The parameters $\{T^0, b_T, Y_t^0, b_y\}$ of the distributions are unknown and implicitly estimated via a deep neural network. The neural network is parameterized by $\gamma$ and $\theta$ (Fig. 1). By treating the unknown parameters as functions of the input $X$, we obtain the following estimating equations for the target sequence length and frame-wise Mel filter-bank energies:
$$\hat{T} = f_\gamma(X) \quad \text{and} \quad \hat{Y}_t = X \cdot A_t + f_\theta(X, \hat{Y}_{0:t-1}). \qquad (2)$$
The functions $f_\gamma(\cdot)$ and $f_\theta(\cdot,\cdot)$ correspond to the length-prediction and energy-estimation components of the same deep neural network. The variable $A_t \in \mathbb{R}^{T_s}$ is an attention vector that combines frame-wise features of the source utterance $X$ to generate the target frame $\hat{Y}_t$. Our model differs from a standard sequence-to-sequence model by treating the neural network predictions as residuals added to the input sequence itself. Notice that the residuals depend on the input and the history of predictions $\hat{Y}_{0:t-1}$ at previous time steps. This autoregressive property allows the neural network to learn segmental and supra-segmental variations that can potentially distinguish between speakers or emotions. During training, we use paired data $(X, Y)$ and maximize the likelihood of the target speech signal with respect to the neural network weights $\{\theta, \gamma\}$. This likelihood can be written as:
$$P(\hat{Y}, \hat{T} \mid X) = P(\hat{T} \mid X) \prod_{t=1}^{\hat{T}} P(\hat{Y}_t \mid X, \hat{T}, \hat{Y}_{0:t-1}), \qquad (3)$$
where the second term in Eq.
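The estimating equations in Eq. (2) amount to an autoregressive decode loop: each target frame copies an attended source frame and adds a learned residual. A toy sketch, where the one-hot attention and the zero residual network are stand-ins (the real model predicts both with a masked convolutional encoder-decoder):

```python
import numpy as np

rng = np.random.default_rng(0)
D, Ts, T_hat = 4, 6, 5                  # feature dim, source length, predicted target length
X = rng.normal(size=(D, Ts))            # toy Mel filter-bank energies

def f_theta(X, Y_hist):
    """Stand-in for the residual network f_theta (returns zeros here;
    the real network conditions on X and the decoded history)."""
    return np.zeros(X.shape[0])

Y_hat = []
for t in range(T_hat):
    A_t = np.zeros(Ts)
    A_t[min(t, Ts - 1)] = 1.0           # toy one-hot attention over source frames
    # Eq. (2): attended source frame plus learned residual
    y_t = X @ A_t + f_theta(X, Y_hat)
    Y_hat.append(y_t)
Y_hat = np.stack(Y_hat, axis=1)         # D x T_hat
```

With a zero residual, each decoded frame is exactly the attended source frame, which makes the "residuals added to the input sequence itself" structure explicit: the network only needs to model deviations from the warped source.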
(3) can be obtained via marginalization over $A_t$ as follows:
$$P(\hat{Y}_t \mid X, \hat{T}, \hat{Y}_{0:t-1}) = \sum_{A_t} P(\hat{Y}_t, A_t \mid X, \hat{T}, \hat{Y}_{0:t-1}, M) = \sum_{A_t} P(\hat{Y}_t \mid X, \hat{T}, A_t, \hat{Y}_{0:t-1})\, P(A_t \mid X, \hat{Y}_{0:t-1}, M). \qquad (4)$$
The variable $M$ here denotes the attention mask. We introduce $M$ for convenience, as it is a deterministic function of the source length $T_s$ and the estimated target length $\hat{T}$. We encode the attention $A_t$ as a one-hot vector across the $T_s$ frames of the source speech. Thus, it follows a multinomial distribution. For simplicity, we model $A_t$ as conditionally independent of the target length $\hat{T}$ given the mask $M$ and the input $X$ (see Fig. 1). Taking the $\log(\cdot)$ of Eq. (3) and combining with Eq. (4) yields:
$$\begin{aligned}
\mathcal{L}(\theta, \gamma) &= -\log\Big(\sum_{A_t} P(\hat{Y}_t, A_t \mid X, \hat{T}, \hat{Y}_{0:t-1}, M)\Big) - \log P(\hat{T} \mid X) \\
&= -\log\Big(\sum_{A_t} \frac{q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)}{q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)}\, P(\hat{Y}_t, A_t \mid X, \hat{T}, \hat{Y}_{0:t-1}, M)\Big) - \log P(\hat{T} \mid X) \\
&\le -\sum_{A_t} q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)\, \log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) + \mathrm{KL}\big(q_\theta(A_t)\,\|\,P(A_t)\big) \\
&= -\sum_{A_t} q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)\, \log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) - H(q_\theta) + \text{const.} \\
&\le -\sum_{A_t} q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)\, \log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) + \text{const.} \qquad (5)
\end{aligned}$$
The distribution $q_\theta(\cdot)$ above is an approximating distribution for the attention vectors, implemented by a convolutional network. The first inequality uses the convexity of the $-\log$ function, and the second inequality comes from the fact that the entropy $H(q_\theta) \ge 0$. Notice that we have implicitly assumed that $P(A_t \mid X, \hat{Y}_{0:t-1}, M)$ is uniform over the masked region. This is a reasonable assumption given that the masking process reduces the attention domain to a small region (see Section 2.3). However, $q_\theta$ is not penalized for deviating from this uniform distribution during training. This flexibility allows the network to learn realistic attention vectors during autoregressive decoding. Eq.
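The mask $M$ is described as a deterministic function of the source length $T_s$ and the predicted target length $\hat{T}$. One natural construction consistent with that description is a band around the time-scaled diagonal, so that target frame $t$ may only attend to source frames near $t \cdot T_s / \hat{T}$. This is a sketch under that assumption; the band width and exact mask shape are not specified in this excerpt:

```python
import numpy as np

def attention_mask(Ts, T_hat, width=2):
    """Band mask M (Ts x T_hat): target frame t may attend only to source
    frames within `width` of the scaled-diagonal position t * Ts / T_hat.
    The band width is an illustrative assumption."""
    M = np.zeros((Ts, T_hat), dtype=bool)
    for t in range(T_hat):
        center = round(t * Ts / T_hat)
        lo, hi = max(0, center - width), min(Ts, center + width + 1)
        M[lo:hi, t] = True
    return M

M = attention_mask(Ts=10, T_hat=8, width=2)
```

Restricting each column of the attention map to a few source frames is what makes the uniform-prior assumption over the masked region plausible: the prior only needs to be uniform over a handful of candidate frames.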
(5) can be readily translated into a neural network loss function, which we minimize with respect to $\{\theta, \gamma\}$:
$$\mathcal{L}(\theta, \gamma) = -\lambda_1\, \mathbb{E}_{A_t \sim q_\theta}\big[\log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1})\big] - \lambda_2 \log P(\hat{T} \mid X) = \lambda_1\, \mathbb{E}_{A_t}\big[\|\hat{Y}_t - Y_t^0\|_1\big] + \lambda_2\, \|\hat{T} - T^0\|_1, \qquad (6)$$
where $\lambda_1$ and $\lambda_2$ are model hyperparameters that adjust the trade-off between the two objectives and implicitly contain the scale parameters of the Laplace distributions in Eq. (1). Notice that the loss in Eq. (6) computes an expectation over the attention maps. We use a Monte-Carlo estimate by sampling from the attention map at each time step. The training procedure is therefore stochastic in nature due to this random sampling. We mix this stochastic version with the maximum a posteriori (MAP) estimate of the attention vector with a probability of 0.2 at the start of the training procedure.

Algorithm 1: Strategy for model training
1  function trainModelParameters(X, Y);
   Input: filter-bank energies ($X \in \mathbb{R}^{D \times T_s}$, $Y \in \mathbb{R}^{D \times T_t}$)
   Output: model parameters ($\theta$, $\gamma$)
2  while epoch < MaxEpochs do
3    Set t = 0, predict target length $\hat{T} = f_\gamma(X)$ and create the mask $M \in \mathbb{R}^{T_s \times T_t}$;
4    Estimate $A \in \mathbb{R}^{T_s \times T_t}$ using masked convolution and sample $u \sim U(0, 1)$;
5    if u < 0.2 then
6      Sample $a \in \mathbb{R}^{T_s \times T_t}$ as one-hot vectors from $A$;
7      Reconstruct using $\hat{Y}_t = X \cdot a + f_\theta(X, Y_{0:t-1})$;
8    else
9      Reconstruct using $\hat{Y}_t = X \cdot A + f_\theta(X, Y_{0:t-1})$;
10   end
11   Compute prediction errors and update parameters $\theta$, $\gamma$;
12   epoch ← epoch + 1;
13 end
14 return $\theta$ and $\gamma$; | This paper proposes a model for adaptive duration modification of an input signal. The model is a graphical model with neural components. By making some assumptions, the authors derive an upper bound for the likelihood (the conditional probability of the output sequence and estimated length given the input). All the model parameters are updated by minimizing this upper bound. | SP:ba72cf4416e5b64d1ed24f87bec35b638a5efecb
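The branch in lines 5–9 of Algorithm 1 — with probability 0.2, replace each soft attention column with a one-hot sample from it, otherwise keep the soft attention — can be sketched in isolation (the function name and the standalone framing are illustrative; in the paper this happens inside the training loop):

```python
import numpy as np

def mix_attention(A, rng, p_sample=0.2):
    """With probability p_sample, replace each soft attention column of
    A (Ts x Tt) with a one-hot sample drawn from it (the stochastic,
    Monte-Carlo branch); otherwise return the soft attention unchanged."""
    if rng.uniform() < p_sample:
        Ts, Tt = A.shape
        sampled = np.zeros_like(A)
        for t in range(Tt):
            idx = rng.choice(Ts, p=A[:, t] / A[:, t].sum())
            sampled[idx, t] = 1.0
        return sampled
    return A

rng = np.random.default_rng(0)
A = np.full((4, 3), 0.25)     # uniform soft attention over 4 source frames
A_used = mix_attention(A, rng)
```

Either branch yields columns that sum to one, so the reconstruction $\hat{Y}_t = X \cdot A + f_\theta(\cdot)$ remains a convex combination of source frames plus a residual.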
Adaptive Speech Duration Modification using a Deep-Generative Framework | 1 INTRODUCTION. Human speech is a rich and varied mode of communication that encompasses both language/semantic information and the mood/intent of the speaker. The latter is primarily conveyed by prosodic features, such as pitch, energy, speaking rate, and voice quality. There are many applications where understanding and manipulating these prosodic features is required. Consider voice conversion systems as an example. Pitch and energy modifications are used to passively inject emotional cues into neutral speech or to change the overall speaking style (A. Russell et al., 2003; Schacter et al., 2011; Shankar et al., 2019a;b; Valle et al., 2019). Prosodic features are also used to evaluate the quality/engagement in human-machine dialog systems (Swerts & Krahmer, 2000), and they play a significant role in speaker identification and recognition systems (Park et al., 2016). While there are many approaches for automated pitch and energy modification (Toda et al., 2007; Aihara et al., 2012; Kaneko & Kameoka, 2017; Shankar et al., 2020b;a), comparatively little progress has been made in changing the duration/speaking rate of an utterance. In fact, the speaking rate plays a crucial role in conveying emotions (Schmidt et al., 2016) and in diagnosing human speech pathologies (Bayerl et al., 2020). The speaking rate is difficult to manipulate because, unlike pitch or energy, there is no explicit coding for either the signal duration or the speaking rhythm. Rather, these features are implicitly defined by the cardinality of the set of frames over a particular interval of interest. This cardinality is a global parameter that masks subtle variations in the speaking rate over an utterance. As a result, duration modification algorithms are not adaptive.
Instead, they either require considerable user supervision or they are geared towards aligning to known speech signals. Perhaps the earliest duration modification method is the time-domain pitch synchronous overlap and add (TD-PSOLA) algorithm (Charpentier & Stella, 1986). TD-PSOLA modifies the pitch and duration of a speech signal by replicating and interpolating between individual frames centered at the peaks of the auto-correlation signal. However, the user must manually specify both the portion of speech to modify and the exact manner in which it should be altered. Hence, the method is neither automated nor adaptive. An alternative approach is dynamic time warping (DTW), which finds the optimal time alignment between two parallel speech utterances (dtw, 2008). DTW constructs a pairwise similarity matrix between all frames of the two utterances and estimates a warping path between the starting (0, 0) and ending (Ts, Tt) points of the utterances based on a Viterbi-like decoding of the similarity matrix. While simple, DTW requires both the source and target utterances to be known a priori. Hence, it cannot be used for on-the-fly modification of new signals. Finally, recent advancements in deep learning have led to a new generation of neural vocoders that disentangle the semantic content from the speaking style (Oord et al., 2016; Shen et al., 2017; Wang et al., 2017). These vocoders can alter the speaking rate via the learned style embeddings. While these models represent seminal contributions to speech synthesis, the latent representations are learned in an unsupervised manner, which makes it difficult to control the output speaking style in a predictable manner. Another drawback of these methods is the large amount of data and computational resources required to train the models and generate new speech (Yasuda et al., 2020).
In this paper, we introduce the first fully-automated adaptive speech duration modification scheme. Our approach combines the representation capabilities of deep neural networks with the structured simplicity of dynamic decoding. Namely, we model the alignment between a source and target utterance via a latent attention map; these maps are used as a replacement for the similarity matrix during backtracking. We train a masked convolutional encoder-decoder network to estimate these attention maps using a stochastic mean absolute error (MAE) formulation. Unlike the conventional DTW (dtw, 2008) algorithm, once trained, our framework operates entirely on the source utterance without needing to reference the target. We demonstrate our framework on a voice conversion task using the CMU-Arctic dataset (Kominek & W Black, 2004) and on three multi-speaker emotion conversion tasks using the VESUS dataset (Sager et al., 2019). Our experiments confirm that the proposed model can perform open-loop duration modification and produces high-quality speech. 2 METHOD. Fig. 1 illustrates our underlying generative process. Given an utterance X, we first estimate the length T of the (unknown) target utterance Y and subsequently use it to estimate a mask M for the attention map. The mask restricts the domain of the attention vectors At at each frame t during the inference stage to mitigate distortion of the output speech. We use paired data (Xtr, Ytr) to train a convolutional encoder-decoder network to generate the attention vectors. During testing, we first generate the attention map from the input X and use it to produce the target speech Y. 2.1 LOSS FUNCTION. Formally, let $X \in \mathbb{R}^{D \times T_s}$ denote the input speech. In this work, X corresponds to the Mel filter-bank energies extracted from a short-time moving-window analysis, where D is the number of filter-banks and $T_s$ is the number of temporal frames in the utterance.
Similarly, we denote the target speech as $Y \in \mathbb{R}^{D \times T}$. Notice that the target utterance length $T$ is usually different from $T_s$. Our generative process for a single frame of the target speech is represented as follows:
$$T \sim \mathrm{Laplace}(T^0, b_T) \quad \text{and} \quad Y_t \sim \mathrm{Laplace}(Y_t^0, b_y), \qquad (1)$$
where $T$ is the estimated length of the target utterance, and $Y_t$ is the target Mel filter-bank energy feature at time $t$. The parameters $\{T^0, b_T, Y_t^0, b_y\}$ of the distributions are unknown and implicitly estimated via a deep neural network. The neural network is parameterized by $\gamma$ and $\theta$ (Fig. 1). By treating the unknown parameters as functions of the input $X$, we obtain the following estimating equations for the target sequence length and frame-wise Mel filter-bank energies:
$$\hat{T} = f_\gamma(X) \quad \text{and} \quad \hat{Y}_t = X \cdot A_t + f_\theta(X, \hat{Y}_{0:t-1}). \qquad (2)$$
The functions $f_\gamma(\cdot)$ and $f_\theta(\cdot,\cdot)$ correspond to the length-prediction and energy-estimation components of the same deep neural network. The variable $A_t \in \mathbb{R}^{T_s}$ is an attention vector that combines frame-wise features of the source utterance $X$ to generate the target frame $\hat{Y}_t$. Our model differs from a standard sequence-to-sequence model by treating the neural network predictions as residuals added to the input sequence itself. Notice that the residuals depend on the input and the history of predictions $\hat{Y}_{0:t-1}$ at previous time steps. This autoregressive property allows the neural network to learn segmental and supra-segmental variations that can potentially distinguish between speakers or emotions. During training, we use paired data $(X, Y)$ and maximize the likelihood of the target speech signal with respect to the neural network weights $\{\theta, \gamma\}$. This likelihood can be written as:
$$P(\hat{Y}, \hat{T} \mid X) = P(\hat{T} \mid X) \prod_{t=1}^{\hat{T}} P(\hat{Y}_t \mid X, \hat{T}, \hat{Y}_{0:t-1}), \qquad (3)$$
where the second term in Eq.
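The Laplace observation model in Eq. (1) is what makes the final loss in Eq. (6) an L1 (MAE) objective: the Laplace negative log-likelihood is $|x - \mu|/b + \log(2b)$, i.e. an L1 error up to a scale $1/b$ and a constant. A small numeric check of that equivalence (values are illustrative):

```python
import numpy as np

def laplace_nll(x, mu, b):
    """Negative log-likelihood of x under Laplace(mu, b):
    |x - mu| / b + log(2b). With b = 1 this is the L1 error
    plus the constant log(2)."""
    return np.abs(x - mu) / b + np.log(2 * b)

x, mu, b = 3.0, 1.0, 1.0
nll = laplace_nll(x, mu, b)
l1 = np.abs(x - mu)
```

This is why the scale parameters $b_T$ and $b_y$ never appear explicitly in the loss: they are absorbed into the trade-off hyperparameters multiplying the two L1 terms.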
(3) can be obtained via marginalization over $A_t$ as follows:
$$P(\hat{Y}_t \mid X, \hat{T}, \hat{Y}_{0:t-1}) = \sum_{A_t} P(\hat{Y}_t, A_t \mid X, \hat{T}, \hat{Y}_{0:t-1}, M) = \sum_{A_t} P(\hat{Y}_t \mid X, \hat{T}, A_t, \hat{Y}_{0:t-1})\, P(A_t \mid X, \hat{Y}_{0:t-1}, M). \qquad (4)$$
The variable $M$ here denotes the attention mask. We introduce $M$ for convenience, as it is a deterministic function of the source length $T_s$ and the estimated target length $\hat{T}$. We encode the attention $A_t$ as a one-hot vector across the $T_s$ frames of the source speech. Thus, it follows a multinomial distribution. For simplicity, we model $A_t$ as conditionally independent of the target length $\hat{T}$ given the mask $M$ and the input $X$ (see Fig. 1). Taking the $\log(\cdot)$ of Eq. (3) and combining with Eq. (4) yields:
$$\begin{aligned}
\mathcal{L}(\theta, \gamma) &= -\log\Big(\sum_{A_t} P(\hat{Y}_t, A_t \mid X, \hat{T}, \hat{Y}_{0:t-1}, M)\Big) - \log P(\hat{T} \mid X) \\
&= -\log\Big(\sum_{A_t} \frac{q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)}{q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)}\, P(\hat{Y}_t, A_t \mid X, \hat{T}, \hat{Y}_{0:t-1}, M)\Big) - \log P(\hat{T} \mid X) \\
&\le -\sum_{A_t} q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)\, \log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) + \mathrm{KL}\big(q_\theta(A_t)\,\|\,P(A_t)\big) \\
&= -\sum_{A_t} q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)\, \log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) - H(q_\theta) + \text{const.} \\
&\le -\sum_{A_t} q_\theta(A_t \mid X, \hat{Y}_{0:t-1}, M)\, \log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) + \text{const.} \qquad (5)
\end{aligned}$$
The distribution $q_\theta(\cdot)$ above is an approximating distribution for the attention vectors, implemented by a convolutional network. The first inequality uses the convexity of the $-\log$ function, and the second inequality comes from the fact that the entropy $H(q_\theta) \ge 0$. Notice that we have implicitly assumed that $P(A_t \mid X, \hat{Y}_{0:t-1}, M)$ is uniform over the masked region. This is a reasonable assumption given that the masking process reduces the attention domain to a small region (see Section 2.3). However, $q_\theta$ is not penalized for deviating from this uniform distribution during training. This flexibility allows the network to learn realistic attention vectors during autoregressive decoding. Eq.
(5) can be readily translated into a neural network loss function, which we minimize with respect to $\{\theta, \gamma\}$:
$$\mathcal{L}(\theta, \gamma) = -\lambda_1\, \mathbb{E}_{A_t \sim q_\theta}\big[\log P(\hat{Y}_t \mid X, A_t, \hat{Y}_{0:t-1})\big] - \lambda_2 \log P(\hat{T} \mid X) = \lambda_1\, \mathbb{E}_{A_t}\big[\|\hat{Y}_t - Y_t^0\|_1\big] + \lambda_2\, \|\hat{T} - T^0\|_1, \qquad (6)$$
where $\lambda_1$ and $\lambda_2$ are model hyperparameters that adjust the trade-off between the two objectives and implicitly contain the scale parameters of the Laplace distributions in Eq. (1). Notice that the loss in Eq. (6) computes an expectation over the attention maps. We use a Monte-Carlo estimate by sampling from the attention map at each time step. The training procedure is therefore stochastic in nature due to this random sampling. We mix this stochastic version with the maximum a posteriori (MAP) estimate of the attention vector with a probability of 0.2 at the start of the training procedure.

Algorithm 1: Strategy for model training
1  function trainModelParameters(X, Y);
   Input: filter-bank energies ($X \in \mathbb{R}^{D \times T_s}$, $Y \in \mathbb{R}^{D \times T_t}$)
   Output: model parameters ($\theta$, $\gamma$)
2  while epoch < MaxEpochs do
3    Set t = 0, predict target length $\hat{T} = f_\gamma(X)$ and create the mask $M \in \mathbb{R}^{T_s \times T_t}$;
4    Estimate $A \in \mathbb{R}^{T_s \times T_t}$ using masked convolution and sample $u \sim U(0, 1)$;
5    if u < 0.2 then
6      Sample $a \in \mathbb{R}^{T_s \times T_t}$ as one-hot vectors from $A$;
7      Reconstruct using $\hat{Y}_t = X \cdot a + f_\theta(X, Y_{0:t-1})$;
8    else
9      Reconstruct using $\hat{Y}_t = X \cdot A + f_\theta(X, Y_{0:t-1})$;
10   end
11   Compute prediction errors and update parameters $\theta$, $\gamma$;
12   epoch ← epoch + 1;
13 end
14 return $\theta$ and $\gamma$; | This paper proposes a generative sequence-to-sequence encoder-decoder architecture with attention for modifying the length of an input sequence. The authors derive the training loss for learning the network parameters from a principled Bayesian formulation and a variational inference bound, and use this to train the model from paired {input, output} sequences. During inference, a task-specific network is capable of estimating a target length, and modifying the signal accordingly.
It would seem that the architecture is capable of handling other transformations, and in fact the authors apply the model to speaker- and emotion-morphing tasks, but the bulk of the evaluation is on how accurately the target length is estimated, and (briefly) the perceptual quality of the resulting samples. | SP:ba72cf4416e5b64d1ed24f87bec35b638a5efecb
Adaptive Speech Duration Modification using a Deep-Generative Framework | 1 INTRODUCTION. Human speech is a rich and varied mode of communication that encompasses both language/semantic information and the mood/intent of the speaker. The latter is primarily conveyed by prosodic features, such as pitch, energy, speaking rate, and voice quality. There are many applications where understanding and manipulating these prosodic features is required. Consider voice conversion systems as an example. Pitch and energy modifications are used to passively inject emotional cues into neutral speech or to change the overall speaking style (A. Russell et al., 2003; Schacter et al., 2011; Shankar et al., 2019a;b; Valle et al., 2019). Prosodic features are also used to evaluate the quality/engagement in human-machine dialog systems (Swerts & Krahmer, 2000), and they play a significant role in speaker identification and recognition systems (Park et al., 2016). While there are many approaches for automated pitch and energy modification (Toda et al., 2007; Aihara et al., 2012; Kaneko & Kameoka, 2017; Shankar et al., 2020b;a), comparatively little progress has been made in changing the duration/speaking rate of an utterance. In fact, the speaking rate plays a crucial role in conveying emotions (Schmidt et al., 2016) and in diagnosing human speech pathologies (Bayerl et al., 2020). The speaking rate is difficult to manipulate because, unlike pitch or energy, there is no explicit coding for either the signal duration or the speaking rhythm. Rather, these features are implicitly defined by the cardinality of the set of frames over a particular interval of interest. This cardinality is a global parameter that masks subtle variations in the speaking rate over an utterance. As a result, duration modification algorithms are not adaptive.
Instead, they either require considerable user supervision or they are geared towards aligning to known speech signals. Perhaps the earliest duration modification method is the time-domain pitch synchronous overlap and add (TD-PSOLA) algorithm (Charpentier & Stella, 1986). TD-PSOLA modifies the pitch and duration of a speech signal by replicating and interpolating between individual frames centered at the peaks of the auto-correlation signal. However, the user must manually specify both the portion of speech to modify and the exact manner in which it should be altered. Hence, the method is neither automated nor adaptive. An alternative approach is dynamic time warping (DTW), which finds the optimal time alignment between two parallel speech utterances (dtw, 2008). DTW constructs a pairwise similarity matrix between all frames of the two utterances and estimates a warping path between the starting (0, 0) and ending (Ts, Tt) points of the utterances based on a Viterbi-like decoding of the similarity matrix. While simple, DTW requires both the source and target utterances to be known a priori. Hence, it cannot be used for on-the-fly modification of new signals. Finally, recent advancements in deep learning have led to a new generation of neural vocoders that disentangle the semantic content from the speaking style (Oord et al., 2016; Shen et al., 2017; Wang et al., 2017). These vocoders can alter the speaking rate via the learned style embeddings. While these models represent seminal contributions to speech synthesis, the latent representations are learned in an unsupervised manner, which makes it difficult to control the output speaking style in a predictable manner. Another drawback of these methods is the large amount of data and computational resources required to train the models and generate new speech (Yasuda et al., 2020).
In this paper , we introduce the first fully-automated adaptive speech duration modification scheme . Our approach combines the representation capabilities of deep neural networks with the structured simplicity of dynamic decoding . Namely , we model the alignment between a source and target utterance via a latent attention map ; these maps are used as a replacement for the similarity matrix during backtracking . We train a masked convolutional encoder-decoder network to estimate these attention maps using a stochastic mean absolute error ( MAE ) formulation . Unlike the conventional DTW ( dtw , 2008 ) algorithm , once trained , our framework operates entirely on the source utterance without needing to reference the target . We demonstrate our framework on a voice conversion task using the CMU-Arctic dataset ( Kominek & W Black , 2004 ) and on three multi-speaker emotion conversion tasks using the VESUS dataset ( Sager et al. , 2019 ) . Our experiments confirm that the proposed model can perform open-loop duration modification and produces high-quality speech . 2 METHOD . Fig . 1 illustrates our underlying generative process . Given an utterance X , we first estimate the length T of the ( unknown ) target utterance Y and subsequently use it to estimate a mask M for the attention map . The mask restricts the domain of the attention vectors At at each frame t during the inference stage to mitigate distortion of the output speech . We use paired data ( Xtr , Ytr ) to train a convolutional encoder-decoder network to generate the attention vectors . During testing , we first generate the attention map from the input X and use it to produce the target speech Y . 2.1 LOSS FUNCTION . Formally , let X ∈ RD×Ts denote the input speech . In this work , X corresponds to the Mel filter-bank energies extracted from a short-time moving window analysis , where D is the number of filter-banks , and Ts is the number of temporal frames in the utterance .
Similarly , we denote the target speech as Y ∈ RD×T . Notice that the target utterance length T is usually different from Ts . Our generative process for a single frame of the target speech is represented as follows : $$T \sim \mathrm{Laplace}(T^0 , b_T) \quad \text{and} \quad Y_t \sim \mathrm{Laplace}(Y^0_t , b_y) , \qquad (1)$$ where T is the estimated length of the target utterance , and $Y_t$ is the target Mel filter-bank energy features at time t. The parameters $\{ T^0 , b_T , Y^0_t , b_y \}$ of the distributions are unknown and implicitly estimated via a deep neural network . The neural network is parameterized by γ and θ ( Fig . 1 ) . By treating the unknown parameters as functions of the input X , we obtain the following estimating equations for the target sequence length and the frame-wise Mel filter-bank energies : $$\hat{T} = f_\gamma(X) \quad \text{and} \quad \hat{Y}_t = X \cdot A_t + f_\theta(X , \hat{Y}_{0:t-1}) . \qquad (2)$$ The functions $f_\gamma(\cdot)$ and $f_\theta(\cdot , \cdot)$ correspond to the length prediction and energy estimation components of the same deep neural network . The variable $A_t \in \mathbb{R}^{T_s}$ is an attention vector that combines frame-wise features of the source utterance X to generate the target frame $\hat{Y}_t$ . Our model differs from a standard sequence-to-sequence model by treating the neural network predictions as residuals added to the input sequence itself . Notice that the residuals depend on the input and on the history of predictions $\hat{Y}_{0:t-1}$ at previous time steps . This autoregressive property allows the neural network to learn segmental and supra-segmental variations that can potentially distinguish between speakers or emotions . During training , we use paired data ( X , Y ) and maximize the likelihood of the target speech signal with respect to the neural network weights { θ , γ } . This likelihood can be written as : $$P(\hat{Y} , \hat{T} \mid X) = P(\hat{T} \mid X) \prod_{t=1}^{\hat{T}} P(\hat{Y}_t \mid X , \hat{T} , \hat{Y}_{0:t-1}) , \qquad (3)$$ where the second term in Eq .
( 3 ) can be obtained via marginalization over $A_t$ as follows : $$P(\hat{Y}_t \mid X , \hat{T} , \hat{Y}_{0:t-1}) = \sum_{A_t} P(\hat{Y}_t , A_t \mid X , \hat{T} , \hat{Y}_{0:t-1} , M) = \sum_{A_t} P(\hat{Y}_t \mid X , \hat{T} , A_t , \hat{Y}_{0:t-1}) \, P(A_t \mid X , \hat{Y}_{0:t-1} , M) \qquad (4)$$ The variable M here denotes the attention mask . We introduce M for convenience , as it is a deterministic function of the source length Ts and the estimated target length $\hat{T}$ . We encode the attention $A_t$ as a one-hot vector across the Ts frames of the source speech . Thus , it follows a multinomial distribution . For simplicity , we model $A_t$ as conditionally independent of the target length $\hat{T}$ given the mask M and the input X ( see Fig . 1 ) . Taking log ( · ) of Eq . ( 3 ) and combining with Eq . ( 4 ) yields : $$\begin{aligned} L(\theta , \gamma) &= -\log\Big( \sum_{A_t} P(\hat{Y}_t , A_t \mid X , \hat{T} , \hat{Y}_{0:t-1} , M) \Big) - \log P(\hat{T} \mid X) \\ &= -\log\Big( \sum_{A_t} \frac{q_\theta(A_t \mid X , \hat{Y}_{0:t-1} , M)}{q_\theta(A_t \mid X , \hat{Y}_{0:t-1} , M)} \, P(\hat{Y}_t , A_t \mid X , \hat{T} , \hat{Y}_{0:t-1} , M) \Big) - \log P(\hat{T} \mid X) \\ &\le -\sum_{A_t} q_\theta(A_t \mid X , \hat{Y}_{0:t-1} , M) \log P(\hat{Y}_t \mid X , A_t , \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) + \mathrm{KL}\big( q_\theta(A_t) \,\|\, P(A_t) \big) \\ &= -\sum_{A_t} q_\theta(A_t \mid X , \hat{Y}_{0:t-1} , M) \log P(\hat{Y}_t \mid X , A_t , \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) - H(q_\theta) + \mathrm{const} \\ &\le -\sum_{A_t} q_\theta(A_t \mid X , \hat{Y}_{0:t-1} , M) \log P(\hat{Y}_t \mid X , A_t , \hat{Y}_{0:t-1}) - \log P(\hat{T} \mid X) + \mathrm{const} \end{aligned} \qquad (5)$$ The distribution $q_\theta(\cdot)$ above is an approximating distribution for the attention vectors implemented by a convolutional network . The first inequality uses the convexity of the $-\log$ function , and the second inequality comes from the fact that the entropy $H(q_\theta) \ge 0$ . Notice that we have implicitly assumed $P(A_t \mid X , \hat{Y}_{0:t-1} , M)$ has a uniform distribution over the masked region . This is a reasonable assumption given that the masking process reduces the attention domain to a small region ( see Section 2.3 ) . However , $q_\theta$ is not penalized for deviating from this uniform distribution during training . This flexibility allows the network to learn realistic attention vectors during autoregressive decoding . Eq .
( 5 ) can be easily translated into a neural network loss function which we minimize for { θ , γ } : $$L(\theta , \gamma) = -\lambda_1 \, \mathbb{E}_{A_t \sim q_\theta}\big[ \log P(\hat{Y}_t \mid X , A_t , \hat{Y}_{0:t-1}) \big] - \lambda_2 \, \log P(\hat{T} \mid X) = \lambda_1 \, \mathbb{E}_{A_t}\big[ \| \hat{Y}_t - Y^0_t \|_1 \big] + \lambda_2 \, \| \hat{T} - T^0 \|_1 , \qquad (6)$$ where λ1 and λ2 are the model hyperparameters that adjust the trade-off between the two objectives and implicitly contain the variances of the Laplace distributions in Eq . ( 1 ) . Notice that the loss in Eq . ( 6 ) computes an expectation over the attention maps . We use a Monte-Carlo estimate by sampling from the attention map at each time step . The training procedure is therefore stochastic in nature due to this random sampling . We mix this stochastic version with the maximum a posteriori ( MAP ) estimate of the attention vector with a probability of 0.2 at the start of the training procedure .

Algorithm 1 : Strategy for model training
  function trainModelParameters ( X , Y ) :
    Input : filter-bank energies ( X ∈ RD×Ts , Y ∈ RD×Tt )
    Output : model parameters ( θ , γ )
    while epoch < MaxEpochs :
      Set t = 0 , predict the target length T̂ = fγ ( X ) and create the mask M ∈ RTs×Tt
      Estimate A ∈ RTs×Tt using masked convolution and sample u ∼ U ( 0 , 1 )
      if u < 0.2 :
        Sample a ∈ RTs×Tt as 1-hot vectors from A
        Reconstruct using Ŷt = X · a + fθ ( X , Y0 : t−1 )
      else :
        Reconstruct using Ŷt = X · A + fθ ( X , Y0 : t−1 )
      Compute the prediction errors and update the parameters θ , γ
      epoch ← epoch + 1
    return θ and γ | Speech rate, duration, and pitch modification is of interest in several practical audio applications. The paper proposes the use of an encoder-decoder framework with attention masking to estimate a candidate target utterance length to overcome the need for a priori knowledge of a target utterance on a speaking rate modification task.
Evaluation on 2 standard databases shows that the proposed approach performs better than 2 sequence-to-sequence models that were originally developed for different domains. | SP:ba72cf4416e5b64d1ed24f87bec35b638a5efecb |
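Algorithm 1 above can be sketched as a single NumPy training step. The length predictor fγ, the residual network fθ, and the band-shaped mask below are hypothetical stand-ins (the paper implements them as a convolutional encoder-decoder); only the masked attention, the 0.2 sampling probability, and the residual reconstruction of Eq. ( 2 ) follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)
D, Ts, Tt = 8, 12, 10          # toy dimensions (illustrative)
lam1, lam2 = 1.0, 0.1          # trade-off hyperparameters of Eq. (6)

X = rng.standard_normal((D, Ts))   # source Mel filter-bank energies
Y = rng.standard_normal((D, Tt))   # paired target energies

def f_gamma(X):
    """Hypothetical length predictor: a fixed scaling stand-in."""
    return X.shape[1] * 0.9

def f_theta(X):
    """Hypothetical residual network: a zero-residual stand-in."""
    return np.zeros((D, Tt))

def make_mask(Ts, Tt, width=3):
    """Band mask around the linearly scaled diagonal."""
    M = np.zeros((Ts, Tt), dtype=bool)
    for t in range(Tt):
        c = int(round(t * (Ts - 1) / max(Tt - 1, 1)))
        M[max(c - width, 0):min(c + width + 1, Ts), t] = True
    return M

T_hat = f_gamma(X)                    # predicted target length
M = make_mask(Ts, Tt)
logits = rng.standard_normal((Ts, Tt))
logits[~M] = -np.inf                  # restrict attention to the mask
A = np.exp(logits - logits.max(0))    # masked softmax over source frames
A = A / A.sum(0)

if rng.uniform() < 0.2:               # stochastic branch of Algorithm 1
    idx = [rng.choice(Ts, p=A[:, t]) for t in range(Tt)]
    A_used = np.zeros_like(A)
    A_used[idx, np.arange(Tt)] = 1.0  # one-hot attention samples
else:
    A_used = A                        # MAP / soft-attention branch

Y_hat = X @ A_used + f_theta(X)       # residual reconstruction, Eq. (2)
loss = lam1 * np.abs(Y_hat - Y).mean() + lam2 * abs(T_hat - Tt)
```

In the real model the prediction errors from `loss` would be backpropagated to update θ and γ; here the step only illustrates the data flow of one iteration.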
Rethinking Again the Value of Network Pruning -- A Dynamical Isometry Perspective | 1 INTRODUCTION . Pruning is a time-honored methodology for reducing the parameters of a neural network without seriously compromising its performance ( Reed , 1993 ; Sze et al. , 2017 ) . The prevailing pipeline of pruning comprises three steps : 1 ) pretraining : train a dense model ; 2 ) pruning : prune the dense model based on certain rules ; 3 ) finetuning : retrain the pruned model to regain performance . Most existing research focuses on the second step , seeking the best criterion to select unimportant weights so as to incur as little performance degradation as possible . This 3-step pipeline has been practiced for more than 30 years ( Mozer & Smolensky , 1989 ; LeCun et al. , 1990 ) and is still extensively adopted in today ’ s pruning methods ( Sze et al. , 2017 ) . That said , several recent works ( Crowley et al. , 2018 ; Liu et al. , 2019 ) questioned the necessity of inheriting weights from a pretrained model because they empirically found that the small model trained from scratch can match ( or sometimes outperform ) the counterpart pruned from the pre-trained large model . This acutely challenges the past wisdom as well as our common belief about pruning . As far as we know , there is no formal response to this critical conflict . A theoretical-level understanding of this problem is even more elusive . Meanwhile , the pruning community has been observing even more open questions . Specifically , ( Renda et al. , 2020 ; Le & Hua , 2021 ) found that the learning rate ( LR ) in finetuning holds a critical role in the final performance . A proper learning rate schedule ( e.g. , a larger initial LR 10−2 vs. 10−3 with a step-decay schedule ) can improve the top-1 accuracy of a pruned ResNet-34 model ( He et al. , 2016 ) by more than 1 % on ImageNet ( Deng et al. , 2009 ) . This discovery calls for more attention to be paid to the finetuning step when comparing different pruning methods .
Unfortunately , they did not present more theoretical insights to explain its occurrence . This also remains an open question in the community to date . In this paper , we will show that these two open questions actually point to the same one . Specifically , we rerun the experiments of ( Crowley et al. , 2018 ; Liu et al. , 2019 ) and find that simply using a larger finetuning LR ( 10−2 vs. 10−3 , with decay ) can significantly improve the final performance . Against the improved pruning performance , training from scratch no longer matches or surpasses pruning ( see Tab . 1 and Tab . 2 on ImageNet ) . This observation immediately invites many questions : ( 1 ) Theoretical understanding : Why does this happen ? What is the theoretical reason behind it ? ( 2 ) Practical solution : If this is a problem , how do we fix it ? Can an understanding of this problem lead us to better pruning algorithms ? This paper will present answers to all these questions . The key tool we employ to unveil the mysteries is dynamical isometry ( Saxe et al. , 2014 ) , which describes a desirable property of neural networks that makes them easy to optimize . We carefully design an explanatory experiment using a linear MLP ( multi-layer perceptron ) network to demonstrate how the finetuning LR affects the final performance by affecting dynamical isometry . In brief , we observe that the finetuning process can recover dynamical isometry ; a larger LR can help recover it faster ( or better ) , hence the better final performance . The proposed explanation is validated by our empirical results and resonates with many empirical observations . Furthermore , by this explanation , we learn that dynamical isometry recovery is rather imperative . To achieve this , we present a very simple regularization-based method for pruning and show its effectiveness in recovering dynamical isometry on modern residual convolutional neural networks ( CNNs ) . Contributions .
( 1 ) We empirically demonstrate that the questioning of the value of inheriting weights in structured pruning in previous works is inaccurate and point out that the direct cause is the improper use of a small finetuning LR . Our finding justifies the value of inheriting weights in structured pruning . ( 2 ) On top of the empirical finding , more importantly , we present a theoretical explanation by examining the dynamical isometry of networks in pruning . This explanation is empirically validated by our carefully designed control experiments . ( 3 ) In addition to the theoretical understanding , we also propose a regularization-based method for dynamical isometry recovery . Despite its brutal simplicity , it is shown to be effective in recovering the broken dynamical isometry of modern residual convolutional neural networks . 2 RELATED WORK . Conventional pruning . Pruning aims to remove as many parameters as possible from a neural network while maintaining its performance . There are many ways to categorize pruning methods . The two most popular are grouping by pruning structure and by methodology . ( 1 ) In terms of pruning structure , pruning can be divided into unstructured pruning ( Han et al. , 2015 ; 2016 ) and structured pruning ( Wen et al. , 2016 ; Li et al. , 2017 ; He et al. , 2017 ) . For the former , a single weight is the basic pruning element . Unstructured pruning can deliver a high compression ratio ; however , without regularization , the pruned locations usually spread randomly across the network , which is hard to exploit for acceleration . By contrast , structured pruning introduces certain patterns in the pruned locations , which benefit subsequent acceleration but cannot achieve as high a compression ratio . The choice between unstructured and structured pruning depends on specific application needs . For structured pruning , there are still many sub-groups ( Mao et al. , 2017 ) .
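The structural contrast above can be made concrete with a minimal NumPy sketch: unstructured pruning zeroes individual weights by magnitude ( Han et al. , 2015 ), while structured ( filter ) pruning ranks whole filters by L1 norm ( Li et al. , 2017 ) and removes entire rows, yielding a genuinely smaller layer. The toy shapes and the 50 % ratio are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))   # toy layer: 16 filters x 64 weights each
ratio = 0.5                          # fraction of weights / filters to prune

# Unstructured: zero out the individually smallest-magnitude weights.
# The shape is unchanged; the zeros are scattered and hard to accelerate.
thresh = np.quantile(np.abs(W), ratio)
unstructured = W * (np.abs(W) >= thresh)

# Structured (filter pruning): rank filters by L1 norm and drop whole rows.
# The surviving layer is genuinely smaller and saves compute directly.
l1 = np.abs(W).sum(axis=1)
keep = np.argsort(l1)[int(ratio * W.shape[0]):]   # indices of surviving filters
structured = W[np.sort(keep)]
```

Here `unstructured` keeps its ( 16 , 64 ) shape with roughly half of its entries zeroed, while `structured` is an ( 8 , 64 ) matrix: the removed output channels disappear from the layer entirely, which is why structured pruning accelerates inference without sparse kernels.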
In the literature , without specific mention , structured pruning means filter pruning or channel pruning . This paper focuses on structured ( filter ) pruning because the “ no value of inheriting weights ” argument is mainly discussed in this context ( Liu et al. , 2019 ) . ( 2 ) In terms of pruning methodology ( i.e. , how to select unimportant weights to prune ) , pruning falls into two paradigms in general : importance-based and penalty-based . The former prunes weights based on some established importance criteria , such as magnitude ( for unstructured pruning ) ( Han et al. , 2015 ; 2016 ) or L1-norm ( for filter pruning ) ( Li et al. , 2017 ) , saliency based on 2nd-order gradients ( e.g. , Hessian or Fisher ) ( LeCun et al. , 1990 ; Hassibi & Stork , 1993 ; Theis et al. , 2018 ; Wang et al. , 2019a ; Singh & Alistarh , 2020 ) . The latter adds a penalty term to the objective function , drives unimportant weights towards zero , then removes those with the smallest magnitude . Note , the two groups are not starkly separated . Many methods take wisdom from both sides . For example , ( Ding et al. , 2018 ; Wang et al. , 2019b ; 2021b ) select unimportant weights by magnitude ( akin to the first group ) while also employing the regularization to penalize weights ( akin to the second group ) . There is no conclusion about which paradigm is better , yet empirically , the state-of-the-art pruning methods are closer to the second paradigm , i.e. , deciding weights via training instead of some derived formulas . Although no theories have formally discussed the reason , we can take a rough guess with the knowledge from this paper : Training can recover dynamical isometry , which is beneficial to subsequent finetuning . For more comprehensive literature , we refer interested readers to several surveys : an outdated one ( Reed , 1993 ) , some recent surveys of pruning alone ( Gale et al. , 2019 ; Blalock et al. 
, 2020 ) or pruning as a sub-topic under the general umbrella of model compression and acceleration ( Sze et al. , 2017 ; Cheng et al. , 2018a ; b ; Deng et al. , 2020 ) . Pruning at initialization ( PaI ) . Recent years have seen several new pruning paradigms . The most prominent one is pruning at initialization . Different from the conventional pruning , which prunes a pretrained model , PaI methods prune a randomly initialized model . Existing PaI approaches mainly include ( Lee et al. , 2019 ; 2020 ; Wang et al. , 2020 ; Frankle et al. , 2021 ; Ramanujan et al. , 2020 ) and the series of lottery ticket hypothesis ( Frankle & Carbin , 2019 ; Frankle et al. , 2020 ) . Interested readers may refer to ( Wang et al. , 2021a ) for a comprehensive summary about PaI . This topic is relevant to this work mainly because one PaI paper ( Lee et al. , 2020 ) also examines pruning using the tool of dynamical isometry . The similarity between our paper and theirs is that we both employ dynamical isometry as a tool to examine the property of network pruning . However , our paper is significantly different from theirs in many axes : ( 1 ) Basic setting . The most obvious difference is that we focus on pruning a pretrained model while ( Lee et al. , 2020 ) focuses on pruning at initialization ( PaI ) . They are two different tracks in pruning ( as such , PaI methods typically do not compare with the methods of pruning pretrained models ) and the latter was shown to consistently underperform the former ( Frankle et al. , 2021 ; Wang et al. , 2021a ) . ( 2 ) Motivation . Despite the same tool ( mean JSV ) , ( Lee et al. , 2020 ) uses it to select unimportant weights to prune ( i.e. , for a new pruning criterion ) , while we use it to analyze why finetuning LR has a significant impact on final performance . The role of finetuning LR in pruning is not mentioned at all in their paper . ( 3 ) Proposed technical method . ( Lee et al. 
, 2020 ) focuses on unstructured pruning , while we focus on structured pruning . This further leads to a fundamental difference when designing the dynamical isometry recovery ( DIR ) methods – in ( Lee et al. , 2020 ) , the proposed method uses iterative optimization for approximated isometry ( due to the irregular sparsity ) ; in our case , since the pruned filters can be completely removed from the network , one of our DIR methods ( OrthP ) has a closed-form solution and can achieve exact isometry . ( 4 ) Finally , in terms of empirical results , ( Lee et al. , 2020 ) only conducts experiments on MNIST ( LeCun et al. , 1998 ) and CIFAR ( Krizhevsky , 2009 ) , while we have extensive results on the large-scale ImageNet dataset ( Deng et al. , 2009 ) . 2.1 EMPIRICAL STUDY : LARGER FINETUNING LR IS CRITICAL . As far as we know , mainly two papers question the value of inheriting weights from a pretrained model : ( Crowley et al. , 2018 ; Liu et al. , 2019 ) . Both papers draw two similar conclusions . ( 1 ) Inheriting weights from a pretrained model in pruning has no value , i.e. , training the small model from scratch can match ( or sometimes outperform ) the counterpart pruned from a big pretrained model . ( 2 ) Given the fact of ( 1 ) , what really matters in pruning may lie in the pruned architecture instead of the inherited weight values . As such , both papers propose to view pruning as a form of neural architecture search ( Zoph & Le , 2017 ; Elsken et al. , 2019 ) . In this section , we first reexamine the empirical studies in ( Crowley et al. , 2018 ; Liu et al. , 2019 ) to show that the “ no value of inheriting weights ” argument is actually inaccurate owing to improper finetuning LR schedules . Reexamination of ( Liu et al. , 2019 ) . Before presenting results , here are some important comparison setting changes worth particular attention : ( 1 ) In ( Liu et al.
, 2019 ) , they compare training from scratch with six pruning methods ( five structured pruning methods ( Li et al. , 2017 ; Luo et al. , 2017 ; Liu et al. , 2017 ; He et al. , 2017 ; Huang & Wang , 2018 ) and one unstructured pruning method ( Han et al. , 2015 ) ) . Here , we only focus on the L1-norm pruning ( Li et al. , 2017 ) on ImageNet . The main reason is that L1-norm pruning is well known to be a very basic filter pruning method . If we can show it already outperforms training from scratch , it will be no surprise to see other more advanced pruning methods also outperform training from scratch . In this sense , L1-norm pruning is the most representative method here for our investigation . ( 2 ) In ( Liu et al. , 2019 ) , they have two variants for the number of epochs in scratch training , “ Scratch-E ” and “ Scratch-B ” . For the former , different small models are trained for a fixed number of epochs ; for the latter , smaller models are trained for more epochs to maintain the same computation budget ( Scratch-B was shown to be better than Scratch-E in ( Liu et al. , 2019 ) ) . Also , they decay the LR only to 10−3 , following the official PyTorch ImageNet example ( https://github.com/pytorch/examples/tree/master/imagenet ) . Here , we simply train all the networks for the same number of epochs but ensure the epochs are abundant ( 120 epochs ) and decay the LR to a very small value ( 10−5 ) . These two changes are to make sure the networks are trained to full convergence . As we will show , one primary cause possibly leading ( Liu et al. , 2019 ) to an inaccurate conclusion is exactly that the pruned networks are not fully converged ( see Tab . 1 ) . With the LR schedule changes , we rerun the experiments using the released code of ( Liu et al. , 2019 ) . Results are presented in Tab . 1 . In the implementations of ( Liu et al. , 2019 ) , the finetuned model is outperformed by the scratch-trained one , hence their “ no value of inheriting weights ” argument .
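The finetuning recipes under comparison differ mainly in the LR schedule. A minimal sketch of a PyTorch-style step-decay schedule, with the understanding that the milestone epochs below are illustrative assumptions rather than the exact recipe used in the experiments:

```python
def step_decay_lr(epoch, lr0=1e-2, gamma=0.1, milestones=(30, 60, 80)):
    """Multiply the LR by gamma at each milestone epoch.

    Mimics a MultiStepLR-style schedule: starting from lr0, the LR is
    decayed by a factor of gamma every time a milestone is passed, so
    a 1e-2 start can reach 1e-5 by the end of finetuning.
    """
    lr = lr0
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

With these defaults, epochs 0–29 run at 1e-2, epochs 30–59 at 1e-3, and the final epochs at 1e-4 and then 1e-5; the "fixed 10−3" recipe criticized in the text corresponds to skipping both the larger start and the decay.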
We also reproduce their settings ( the two rows of “ 20 epochs , initial 10−3 , fixed ” in “ Our rerun ” ) to confirm their argument . However , the finetuning LR schedule “ 20 epochs , initial 10−3 , fixed ” is actually sub-optimal ; the network is not fully converged . Using the proper ones ( “ 90 epochs , initial 10−3 , decay ” or “ 90 epochs , initial 10−2 , decay ” ) , pruning outperforms training from scratch for both ResNet-34-A and ResNet-34-B . ( We note the pruned models even outperform the original models . This is probably because pruning reduces the network redundancy , thus curbing overfitting . This phenomenon is also widely observed in past pruning works ( Han et al. , 2016 ; Wen et al. , 2016 ; He et al. , 2017 ) , especially under small pruning ratios as in Tab . 1 . ) Tab . 1 only presents two ResNet models , and their speedups are actually quite small . To see if the finetuning LR effect still holds across the full spectrum of pruning ratios and on other types of networks , we vary the pruning ratios from 0.1 to 0.95 and include experiments on VGG11 BN ( Simonyan & Zisserman , 2015 ) . Results are presented in Tab . 2 . With a more proper finetuning LR scheme ( column “ Pruned-Finetuned-2 ” vs. “ Pruned-Finetuned-1 ” ) , the performance can be improved significantly . A clear pattern is that the larger the pruning ratio , the larger the improvement . Now , comparing the results of “ Pruned-Finetuned-2 ” to those of “ Scratch ” , we can see pruning outperforms scratch training in most cases . Exceptions appear on ResNet-34/18 under extreme pruning ratios ( 90 % and 95 % ) . Despite these exceptions , we believe it is fair to say inheriting weights has value given the fact that 17/20 experiments in Tabs . 1 and 2 show pruning is better than training from scratch , especially under the pruning ratios of practical interest ( i.e. , non-extreme pruning ratios ) . Retrospectively , ( Liu et al.
, 2019 ) concluded the opposite because they faithfully re-implemented the L1-norm pruning method exactly according to the description in the original paper ( Li et al. , 2017 ) : fixed LR 10−3 , 20 epochs , which turns out to be far from optimal , as we now know . Reexamination of ( Crowley et al. , 2018 ) . Coincidentally , ( Crowley et al. , 2018 ) adopted a very similar finetuning LR scheme to ( Liu et al. , 2019 ) : they finetuned the pruned network with the lowest LR of the scratch-training schedule ( 8 ∗ 10−4 , close to the 10−3 in ( Liu et al. , 2019 ) ) and kept it fixed . As in the empirical study above , we reproduce the experiments of ( Crowley et al. , 2018 ) and rerun them with a larger initial LR ( 10−2 ) that is decayed during finetuning . Detailed results are deferred to the Appendix ( Tab . 10 ) due to space limits . We summarize the observation here – exactly as in the case of ( Liu et al. , 2019 ) , when the proper finetuning LR is used , pruning actually outperforms the best scratch training scheme consistently . Up to now , the results above have shown that the “ no value of inheriting weights ” argument in previous works is largely attributable to sub-optimal finetuning settings . A larger LR ( e.g. , 10−2 ) can improve the finetuning performance significantly more than a small one ( e.g. , 10−3 ) . In fact , we are not the only ones to discover this . Previous works ( Renda et al. , 2020 ; Le & Hua , 2021 ) also reported similar observations . Nevertheless , they do not link the phenomenon with the “ value of inheriting weights ” argument and do not conduct systematic empirical studies as we do here . More importantly , neither of them presented theoretical explanations for its occurrence – next , we bridge this gap . We present a faithful theoretical explanation through the lens of dynamical isometry .
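Dynamical isometry, the analytical lens adopted here, asks that the singular values of the network's input-output Jacobian concentrate around 1. For a bias-free linear MLP the Jacobian is simply the product of the weight matrices, so the mean Jacobian singular value ( JSV ) can be probed directly. The sketch below is an illustrative toy, not the paper's experiment: it assumes a 5-layer linear network with orthogonal initialization and shows that removing half the filters of one layer breaks the isometry:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_jsv(weights):
    """Mean singular value of the end-to-end Jacobian of a linear MLP."""
    J = weights[0]
    for W in weights[1:]:
        J = W @ J                 # Jacobian of a linear net = weight product
    return np.linalg.svd(J, compute_uv=False).mean()

def orthogonal(n):
    """Random orthogonal matrix via QR decomposition."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

layers = [orthogonal(32) for _ in range(5)]
iso = mean_jsv(layers)            # a product of orthogonal maps is orthogonal,
                                  # so all singular values equal 1

# Structured pruning: drop half the rows of one layer (and the matching
# columns of the next), as happens when whole filters are removed.
pruned = [W.copy() for W in layers]
pruned[2] = pruned[2][:16, :]
pruned[3] = pruned[3][:, :16]
broken = mean_jsv(pruned)         # rank drops to 16, so the mean JSV falls
                                  # well below 1: isometry is broken
```

This is the state the finetuning step must repair; the paper's argument is that a larger finetuning LR recovers a mean JSV near 1 faster, and its DIR methods restore it by construction.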
| This work takes a second look at the set of papers (Crowley et al 2018, Liu et al 2019) which claim that there is no generalization benefit offered by pruning, i.e., training a large network first and then pruning performs identically or worse than training the same pruned network architecture from scratch. This work claims that these previous works do not set learning rates during fine-tuning correctly while performing their experiments and hence these claims are invalid. The paper also offers an explanation for this phenomenon via dynamical isometery theory. | SP:a51218038a99a806af61511e3545fc7f562f5553 |
Rethinking Again the Value of Network Pruning -- A Dynamical Isometry Perspective | 1 INTRODUCTION . Pruning is a time-honored methodology to reduce parameters in a neural network without seriously compromising its performance ( Reed , 1993 ; Sze et al. , 2017 ) . The prevailing pipeline of pruning comprises three steps : 1 ) pretraining : train a dense model ; 2 ) pruning : prune the dense model based on certain rules ; 3 ) finetuning : retrain the pruned model to regain performance . Most existing research focuses on the second step , seeking the best criterion to select unimportant weights so as to incur as less performance degradation as possible . This 3-step pipeline has been practiced for more than 30 years ( Mozer & Smolensky , 1989 ; LeCun et al. , 1990 ) and is still extensively adopted in today ’ s pruning methods ( Sze et al. , 2017 ) . These said , several recent works ( Crowley et al. , 2018 ; Liu et al. , 2019 ) questioned the necessity of inheriting weights from a pretrained model because they empirically found the small model trained from scratch can match ( or sometimes outperform ) the counterpart pruned from the pre-trained large model . This acutely challenges the past wisdom as well as our common belief about pruning . As far as we know , there is no formal response to this critical conflict . A theoretical-level understanding of this problem is even more elusive . Meanwhile , the pruning community has been observing even more open questions . Specifically , ( Renda et al. , 2020 ; Le & Hua , 2021 ) found that the learning rate ( LR ) in finetuning holds a critical role in the final performance . A proper learning rate schedule ( e.g. , a larger initial LR 10−2 vs. 10−3 with step-decay schedule ) can improve the top-1 accuracy of a pruned ResNet-34 model ( He et al. , 2016 ) by more than 1 % on ImageNet ( Deng et al. , 2009 ) . This discovery calls for more attention being paid to the finetuning step when comparing different pruning methods . 
Unfortunately , they did not present more theoretical insights to explain its occurrence . This also remains an open question in the community up to date . In this paper , we will show these two open questions actually point to the same one . Specifically , we rerun the experiments of ( Crowley et al. , 2018 ; Liu et al. , 2019 ) and find simply using a larger finetuning LR ( 10−2 vs. 10−3 and decay it ) can significantly improve the final performance . Compared to the improved pruning performance , training from scratch does not compete or surpass pruning anymore ( see Tab . 1 and Tab . 2 on ImageNet ) . This observation invites many questions immediately : ( 1 ) Theoretical understanding : Why does this happen ? What is the theoretical reason behind it ? ( 2 ) Practical solution : If this is a problem , how to fix it ? Can the understanding of this problem lead us to better pruning algorithms ? This paper will present answers to all these questions . The key tool we employ to unveil the mysteries is dynamical isometry ( Saxe et al. , 2014 ) , which describes a kind of nice property in neural networks that are easy to optimize . We carefully design an explanatory experiment using a linear MLP ( multi-layer perceptron ) network to demonstrate how finetuning LR affects the final performance by affecting dynamical isometry . In brief , we observe the finetuning process can recover dynamical isometry ; a larger LR can help recover it faster ( or better ) , hence the better final performance . The proposed explanation is validated by our empirical results and resonates with many empirical observations . Furthermore , by the explanation , we learn dynamical isometry recovery is rather imperative . To achieve so , we present a very simple regularization-based method for pruning and show its effectiveness in recovering dynamical isometry on modern residual convolutional neural networks ( CNNs ) . Contributions . 
( 1 ) We empirically demonstrate the questioning about the value of inheriting weights in structured pruning in previous works is inaccurate and point out that the direct cause is improperly using a small finetuning LR . Our finding justifies the value of inheriting weights in structured pruning . ( 2 ) On top of the empirical finding , more importantly , we present a theoretical explanation through examining the dynamical isometry of networks in pruning . This explanation is empirically validated by our carefully designed control experiments . ( 3 ) In addition to the theoretical understanding , we also propose a regularization-based method for dynamical isometry recovery . Despite its brutal simplicity , it is shown effective to recover the broken dynamical isometry on modern residual convolutional neural networks . 2 RELATED WORK . Conventional pruning . Pruning aims to remove as many parameters as possible in a neural network meanwhile maintaining its performance . There are many ways to categorize pruning methods . The most popular two are grouping by pruning structure and methodology . ( 1 ) In terms of pruning structure , pruning can be specified into unstructured pruning ( Han et al. , 2015 ; 2016 ) and structured pruning ( Wen et al. , 2016 ; Li et al. , 2017 ; He et al. , 2017 ) . For the former , a single weight is the basic pruning element . Unstructured pruning can deliver a high compression ratio ; whereas , without regularization , the pruned locations usually spread randomly in the network , which is hard to exploit for acceleration . On the opposite , structured pruning introduces certain patterns in the pruned locations , which benefit subsequent acceleration while can not achieve as much compression . Choices between unstructured and structured pruning depend on specific application needs . For structured pruning , there are still many sub-groups ( Mao et al. , 2017 ) . 
In the literature , without specific mention , structured pruning means filter pruning or channel pruning . This paper focuses on structured ( filter ) pruning because the “ no value of inheriting weights ” argument is mainly discussed in this context ( Liu et al. , 2019 ) . ( 2 ) In terms of pruning methodology ( i.e. , how to select unimportant weights to prune ) , pruning falls into two paradigms in general : importance-based and penalty-based . The former prunes weights based on some established importance criteria , such as magnitude ( for unstructured pruning ) ( Han et al. , 2015 ; 2016 ) or L1-norm ( for filter pruning ) ( Li et al. , 2017 ) , saliency based on 2nd-order gradients ( e.g. , Hessian or Fisher ) ( LeCun et al. , 1990 ; Hassibi & Stork , 1993 ; Theis et al. , 2018 ; Wang et al. , 2019a ; Singh & Alistarh , 2020 ) . The latter adds a penalty term to the objective function , drives unimportant weights towards zero , then removes those with the smallest magnitude . Note , the two groups are not starkly separated . Many methods take wisdom from both sides . For example , ( Ding et al. , 2018 ; Wang et al. , 2019b ; 2021b ) select unimportant weights by magnitude ( akin to the first group ) while also employing the regularization to penalize weights ( akin to the second group ) . There is no conclusion about which paradigm is better , yet empirically , the state-of-the-art pruning methods are closer to the second paradigm , i.e. , deciding weights via training instead of some derived formulas . Although no theories have formally discussed the reason , we can take a rough guess with the knowledge from this paper : Training can recover dynamical isometry , which is beneficial to subsequent finetuning . For more comprehensive literature , we refer interested readers to several surveys : an outdated one ( Reed , 1993 ) , some recent surveys of pruning alone ( Gale et al. , 2019 ; Blalock et al. 
, 2020) or pruning as a sub-topic under the general umbrella of model compression and acceleration (Sze et al., 2017; Cheng et al., 2018a; b; Deng et al., 2020). Pruning at initialization (PaI). Recent years have seen several new pruning paradigms, the most prominent being pruning at initialization. Different from conventional pruning, which prunes a pretrained model, PaI methods prune a randomly initialized model. Existing PaI approaches mainly include (Lee et al., 2019; 2020; Wang et al., 2020; Frankle et al., 2021; Ramanujan et al., 2020) and the line of work on the lottery ticket hypothesis (Frankle & Carbin, 2019; Frankle et al., 2020). Interested readers may refer to (Wang et al., 2021a) for a comprehensive summary of PaI. This topic is relevant to our work mainly because one PaI paper (Lee et al., 2020) also examines pruning using the tool of dynamical isometry. The similarity between our paper and theirs is that both employ dynamical isometry as a tool to examine the properties of network pruning. However, our paper differs from theirs along many axes: (1) Basic setting. The most obvious difference is that we focus on pruning a pretrained model, while (Lee et al., 2020) focuses on pruning at initialization (PaI). These are two different tracks in pruning (as such, PaI methods typically do not compare with methods that prune pretrained models), and the latter was shown to consistently underperform the former (Frankle et al., 2021; Wang et al., 2021a). (2) Motivation. Despite using the same tool (mean JSV), (Lee et al., 2020) uses it to select unimportant weights to prune (i.e., as a new pruning criterion), while we use it to analyze why the finetuning LR has a significant impact on final performance. The role of the finetuning LR in pruning is not mentioned at all in their paper. (3) Proposed technical method. (Lee et al.
, 2020) focuses on unstructured pruning, while we focus on structured pruning. This further leads to a fundamental difference when designing the dynamical isometry recovery (DIR) methods: in (Lee et al., 2020), the proposed method uses iterative optimization for approximate isometry (due to the irregular sparsity); in our case, since the pruned filters can be completely removed from the network, one of our DIR methods (OrthP) has a closed-form solution and can achieve exact isometry. (4) Finally, in terms of empirical results, (Lee et al., 2020) only conducts experiments on MNIST (LeCun et al., 1998) and CIFAR (Krizhevsky, 2009), while we report extensive results on the large-scale ImageNet dataset (Deng et al., 2009). 2.1 EMPIRICAL STUDY: LARGER FINETUNING LR IS CRITICAL. As far as we know, mainly two papers question the value of inheriting weights from a pretrained model: (Crowley et al., 2018; Liu et al., 2019). Both papers draw two similar conclusions. (1) Inheriting weights from a pretrained model in pruning has no value, i.e., training the small model from scratch can match (or sometimes outperform) the counterpart pruned from a big pretrained model. (2) Given (1), what really matters in pruning may lie in the pruned architecture instead of the inherited weight values. As such, both papers propose to view pruning as a form of neural architecture search (Zoph & Le, 2017; Elsken et al., 2019). In this section, we first reexamine the empirical studies in (Crowley et al., 2018; Liu et al., 2019) to show that the “no value of inheriting weights” argument is actually inaccurate owing to improper finetuning LR schedules. Reexamination of (Liu et al., 2019). Before presenting results, here are some important comparison setting changes worth particular attention: (1) In (Liu et al.
, 2019), they compare training from scratch with six pruning methods (five structured pruning methods (Li et al., 2017; Luo et al., 2017; Liu et al., 2017; He et al., 2017; Huang & Wang, 2018) and one unstructured pruning method (Han et al., 2015)). Here, we only focus on L1-norm pruning (Li et al., 2017) on ImageNet. The main reason is that L1-norm pruning is well known to be a very basic filter pruning method; if we can show that it already outperforms training from scratch, it will be no surprise to see other, more advanced pruning methods also outperform training from scratch. In this sense, L1-norm pruning is the most representative method for our investigation. (2) In (Liu et al., 2019), they have two variants for the number of epochs in scratch training, “Scratch-E” and “Scratch-B”. For the former, different small models are trained for a fixed number of epochs; for the latter, smaller models are trained for more epochs to maintain the same computation budget (Scratch-B was shown to be better than Scratch-E in (Liu et al., 2019)). Also, they decay the LR only to 10−3, following the official PyTorch ImageNet example (https://github.com/pytorch/examples/tree/master/imagenet). Here, we simply train all the networks for the same number of epochs, but ensure the epochs are abundant (120 epochs) and decay the LR to a very small value (10−5). These two changes make sure the networks are trained to full convergence. As we will show, one primary cause possibly leading (Liu et al., 2019) to an inaccurate conclusion is precisely that the pruned networks are not fully converged (see Tab. 1). With the LR schedule changes, we rerun the experiments using the released code of (Liu et al., 2019). Results are presented in Tab. 1. In the implementation of (Liu et al., 2019), the finetuned model is outperformed by the scratch-trained one, hence their “no value of inheriting weights” argument.
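The schedule change can be sketched as a plain step-decay rule. The decay milestones below are our own illustrative assumptions; the text only fixes the total budget (120 epochs), the initial LRs (10−2 or 10−3), and the final LR (10−5):

```python
# Illustrative fixed-LR vs. step-decay schedules (milestones are hypothetical).
def fixed_lr(epoch, lr=1e-3):
    """The sub-optimal baseline: a small LR kept fixed for all epochs."""
    return lr

def step_decay_lr(epoch, base_lr=1e-2, total_epochs=120):
    """Decay by 10x at 1/3 and 2/3 of training, then once more near the end."""
    if epoch < total_epochs // 3:
        return base_lr          # 1e-2
    if epoch < 2 * total_epochs // 3:
        return base_lr / 10     # 1e-3
    if epoch < total_epochs - 10:
        return base_lr / 100    # 1e-4
    return base_lr / 1000       # 1e-5

print([step_decay_lr(e) for e in (0, 50, 90, 115)])
```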
We also reproduce their settings (the two rows of “20 epochs, initial 10−3, fixed” in “Our rerun”) to confirm their argument. However, the finetuning LR schedule “20 epochs, initial 10−3, fixed” is actually sub-optimal: the network is not fully converged. Using proper schedules (“90 epochs, initial 10−3, decay” or “90 epochs, initial 10−2, decay”), pruning outperforms training from scratch for both ResNet-34-A and ResNet-34-B. (We note the pruned models even outperform the original models. This is probably because pruning reduces network redundancy, thus curbing overfitting. This phenomenon is also widely observed in past pruning works (Han et al., 2016; Wen et al., 2016; He et al., 2017), especially under small pruning ratios as in Tab. 1.) Tab. 1 only presents two ResNet models, and their speedups are actually quite small. To see if the finetuning LR effect still holds across the full spectrum of pruning ratios and on other types of networks, we vary the pruning ratio from 0.1 to 0.95 and include experiments on VGG11-BN (Simonyan & Zisserman, 2015). Results are presented in Tab. 2. With a more proper finetuning LR scheme (column “Pruned-Finetuned-2” vs. “Pruned-Finetuned-1”), the performance can be improved significantly. A clear pattern is that the larger the pruning ratio, the larger the improvement. Now, comparing the results of “Pruned-Finetuned-2” to those of “Scratch”, we can see pruning outperforms scratch training in most cases. Exceptions appear on ResNet-34/18 under extreme pruning ratios (90% and 95%). Despite these exceptions, we believe it is fair to say inheriting weights has value, given that 17/20 experiments in Tabs. 1 and 2 show pruning is better than training from scratch, especially under pruning ratios of practical interest (i.e., non-extreme pruning ratios). Retrospectively, (Liu et al.
, 2019) concluded the opposite because they faithfully re-implemented the L1-norm pruning method exactly according to the description in the original paper (Li et al., 2017): fixed LR 10−3, 20 epochs, which turns out to be far from optimal as we now know. Reexamination of (Crowley et al., 2018). Coincidentally, (Crowley et al., 2018) adopted a finetuning LR scheme very similar to that of (Liu et al., 2019): they finetuned the pruned network with the lowest LR used during scratch training (8 × 10−4, close to the 10−3 in (Liu et al., 2019)), also kept fixed. As in the empirical study above, we reproduce the experiments of (Crowley et al., 2018) and rerun them with a larger initial LR (10−2) that is decayed during finetuning. Detailed results are deferred to the Appendix (Tab. 10) due to space limits. We summarize the observation here: exactly as in the case of (Liu et al., 2019), when a proper finetuning LR is used, pruning actually outperforms the best scratch-training scheme consistently. Up to now, the results above have shown that the “no value of inheriting weights” argument in previous works is largely attributable to sub-optimal finetuning settings. A larger LR (e.g., 10−2) can significantly improve the finetuning performance compared with a small one (e.g., 10−3). In fact, we are not the only ones to discover this. Previous works (Renda et al., 2020; Le & Hua, 2021) also reported similar observations. Nevertheless, they did not link the phenomenon to the “value of inheriting weights” argument and did not conduct systematic empirical studies as we do here. More importantly, neither of them presented a theoretical explanation of its occurrence; next, we bridge this gap. We present a faithful theoretical explanation through the lens of dynamical isometry.
| Recent works question the value of inheriting weights in structured neural network pruning, based on the empirical observation that training from scratch can match or outperform finetuning a pruned model. This paper mainly includes three components: 1) the authors reinvestigate the problem and demonstrate that this conclusion is inaccurate because of improperly small finetuning learning rates. They show that finetuning with the pruned weights actually outperforms training from scratch when larger learning rates and longer training epochs are adopted. 2) the authors explore dynamical isometry (DI) to understand how the finetuning LR affects the final performance. They show that weight pruning breaks dynamical isometry, that finetuning can recover it, and that a larger LR can recover it faster. 3) they propose to fully recover dynamical isometry in filter pruning before finetuning. | SP:a51218038a99a806af61511e3545fc7f562f5553 |
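As background for the dynamical-isometry lens discussed above, the following minimal sketch (our own construction, not the paper's code) computes the mean Jacobian singular value (JSV) of a deep linear MLP, where the input-output Jacobian is simply the product of the weight matrices. Orthogonal layers give perfect isometry, and structurally removing half the units of one layer breaks it:

```python
import numpy as np

def mean_jsv(weights):
    """Mean singular value of the input-output Jacobian of a linear MLP."""
    jac = weights[0]
    for w in weights[1:]:
        jac = w @ jac
    return np.linalg.svd(jac, compute_uv=False).mean()

rng = np.random.default_rng(0)
d = 64
# orthogonal initialization: the Jacobian is orthogonal, so every JSV is 1
ortho = [np.linalg.qr(rng.normal(size=(d, d)))[0] for _ in range(5)]
print(mean_jsv(ortho))                    # ≈ 1.0 (perfect dynamical isometry)

# structured pruning: removing half the units of one layer breaks the isometry
pruned = list(ortho)
pruned[2] = pruned[2][: d // 2]           # keep 32 of 64 output units of layer 3
pruned[3] = pruned[3][:, : d // 2]        # match the shrunken input of layer 4
print(mean_jsv(pruned))                   # ≈ 0.5: half the singular values drop to 0
```

The pruned network's Jacobian loses half of its singular directions, illustrating (in the simplest possible setting) why the broken isometry must be recovered during finetuning.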
Rethinking Again the Value of Network Pruning -- A Dynamical Isometry Perspective | 1 INTRODUCTION. Pruning is a time-honored methodology for reducing the number of parameters in a neural network without seriously compromising its performance (Reed, 1993; Sze et al., 2017). The prevailing pipeline of pruning comprises three steps: 1) pretraining: train a dense model; 2) pruning: prune the dense model based on certain rules; 3) finetuning: retrain the pruned model to regain performance. Most existing research focuses on the second step, seeking the best criterion for selecting unimportant weights so as to incur as little performance degradation as possible. This 3-step pipeline has been practiced for more than 30 years (Mozer & Smolensky, 1989; LeCun et al., 1990) and is still extensively adopted in today's pruning methods (Sze et al., 2017). That said, several recent works (Crowley et al., 2018; Liu et al., 2019) questioned the necessity of inheriting weights from a pretrained model, because they empirically found that the small model trained from scratch can match (or sometimes outperform) the counterpart pruned from the pretrained large model. This acutely challenges past wisdom as well as our common beliefs about pruning. As far as we know, there has been no formal response to this critical conflict; a theoretical understanding of the problem is even more elusive. Meanwhile, the pruning community has been observing even more open questions. Specifically, (Renda et al., 2020; Le & Hua, 2021) found that the learning rate (LR) in finetuning plays a critical role in the final performance. A proper learning rate schedule (e.g., a larger initial LR 10−2 vs. 10−3 with a step-decay schedule) can improve the top-1 accuracy of a pruned ResNet-34 model (He et al., 2016) by more than 1% on ImageNet (Deng et al., 2009). This discovery calls for more attention to the finetuning step when comparing different pruning methods.
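The 3-step pipeline can be illustrated with a deliberately tiny sketch: a single linear least-squares model trained by masked gradient descent (all specifics here, including the toy data and the 50% magnitude-pruning rule, are our own assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16) * (rng.random(16) > 0.5)     # sparse ground truth
y = X @ true_w

def train(w, mask, epochs=200, lr=0.1):
    """Masked gradient descent on least squares; pruned weights stay at zero."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(X)
        w = (w - lr * grad) * mask
    return w

w = train(np.zeros(16), np.ones(16))                      # 1) pretrain a dense model
mask = (np.abs(w) >= np.median(np.abs(w))).astype(float)  # 2) prune 50% smallest weights
w_ft = train(w * mask, mask)                              # 3) finetune, inheriting weights
print(float(np.mean((X @ w_ft - y) ** 2)))
```

Step 3 starts from the surviving pretrained values rather than a fresh random draw; whether that inheritance helps is exactly the question the paper reexamines.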
Unfortunately, they did not present theoretical insights to explain why this occurs; this also remains an open question in the community to date. In this paper, we show that these two open questions actually point to the same one. Specifically, we rerun the experiments of (Crowley et al., 2018; Liu et al., 2019) and find that simply using a larger finetuning LR (10−2 vs. 10−3, decayed over training) can significantly improve the final performance. Compared to the improved pruning performance, training from scratch no longer matches or surpasses pruning (see Tab. 1 and Tab. 2 on ImageNet). This observation immediately invites several questions: (1) Theoretical understanding: Why does this happen? What is the theoretical reason behind it? (2) Practical solution: If this is a problem, how do we fix it? Can the understanding of this problem lead us to better pruning algorithms? This paper presents answers to all these questions. The key tool we employ to unveil the mysteries is dynamical isometry (Saxe et al., 2014), which describes a desirable property of neural networks that makes them easy to optimize. We carefully design an explanatory experiment using a linear MLP (multi-layer perceptron) network to demonstrate how the finetuning LR affects the final performance by affecting dynamical isometry. In brief, we observe that the finetuning process can recover dynamical isometry, and a larger LR can help recover it faster (or better), hence the better final performance. The proposed explanation is validated by our empirical results and resonates with many empirical observations. Furthermore, from this explanation we learn that dynamical isometry recovery is imperative. To achieve it, we present a very simple regularization-based method for pruning and show its effectiveness in recovering dynamical isometry on modern residual convolutional neural networks (CNNs). Contributions.
| This paper challenges an existing argument that “inheriting the weights of the pruned network is not necessary for fine-tuning”.
This paper provides empirical evidence that the previous experiments were not carried out with proper learning rates and training epochs to ensure network convergence. The authors conjecture that fine-tuning a pruned network requires larger learning rates and longer training because its dynamical isometry is broken; as a result, large learning rates (if training for fewer epochs) or longer training is required for the recovery of dynamical isometry. The authors then propose OrthP to re-initialize the pruned weights and completely recover dynamical isometry in simple MLPs. With OrthP, the fine-tuning process is less sensitive to different learning rates and numbers of training epochs. | SP:a51218038a99a806af61511e3545fc7f562f5553 |
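The OrthP idea mentioned in the summary (re-initializing the pruned layer so that exact isometry is restored) can be illustrated with a generic closed-form construction: projecting a weight matrix onto the nearest (semi-)orthogonal matrix via its SVD, which sets every singular value to exactly 1. This is a hedged sketch of the principle; the actual method's details are not given here and may differ:

```python
import numpy as np

def orthogonalize(w):
    """Replace w by its polar factor U @ Vt: all singular values become 1."""
    u, _, vt = np.linalg.svd(w, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(0)
w = rng.normal(size=(32, 64))          # e.g. a flattened layer after filter pruning
w_orth = orthogonalize(w)
print(np.linalg.svd(w, compute_uv=False).round(2))       # spread-out spectrum
print(np.linalg.svd(w_orth, compute_uv=False).round(2))  # all ones: exact isometry
```

Because filter pruning removes whole rows/columns, the remaining matrix is dense and this closed-form projection applies directly; irregular unstructured sparsity would not admit such a solution, matching the contrast with (Lee et al., 2020) drawn in the related-work discussion.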
Neural Deep Equilibrium Solvers | A deep equilibrium (DEQ) model abandons traditional depth by solving for the fixed point of a single nonlinear layer fθ. This structure enables decoupling the internal structure of the layer (which controls representational capacity) from how the fixed point is actually computed (which impacts inference-time efficiency), usually via classic techniques such as Broyden's method or Anderson acceleration. In this paper, we show that one can exploit such decoupling and substantially enhance this fixed-point computation using a custom neural solver. Specifically, our solver uses a parameterized network both to guess an initial value for the optimization and to perform iterative updates, in a method that generalizes a learnable form of Anderson acceleration and can be trained end-to-end in an unsupervised manner. Such a solution is particularly well suited to the implicit-model setting, because inference in these models requires repeatedly solving for a fixed point of the same nonlinear layer for different inputs, a task at which our network excels. Our experiments show that these neural equilibrium solvers are fast to train (adding only 0.9-1.1% to the original DEQ's training time), require few additional parameters (1-3% of the original model size), yet lead to a 2× speedup in DEQ network inference without any degradation in accuracy across numerous domains and tasks. 1 INTRODUCTION. Recent progress on implicit networks, such as Neural ODEs (NODEs) (Chen et al., 2018b; Dupont et al., 2019; Rubanova et al., 2019; Jia & Benson, 2019; Kelly et al., 2020) and deep equilibrium (DEQ) models (Bai et al., 2019; Winston & Kolter, 2020; Kawaguchi, 2021; Bai et al., 2020; Gilton et al., 2021), has brought this novel class of networks to the forefront of deep learning research.
Instead of stacking a series of operators hierarchically, implicit models define their outputs as solutions to nonlinear dynamical systems. For example, DEQ models (which this paper focuses on) define their outputs as fixed points (a.k.a. equilibria) of a layer fθ and input x; i.e., the output z⋆ satisfies z⋆ = fθ(z⋆, x). Then, in the backward pass, a DEQ implicitly differentiates through the final fixed point z⋆ (Krantz & Parks, 2012; Bai et al., 2019; Fung et al., 2021), regardless of how the forward pass was computed in the first place. Such insulated forward and backward passes enable an equilibrium model to leverage arbitrary black-box solvers to reach the fixed points without storing intermediate activations, thus consuming constant training memory. Recent works have successfully applied the DEQ framework to high-dimensional tasks such as language modeling (Merity et al., 2017) and semantic segmentation (Cordts et al., 2016), with performance competitive with architectures like Transformers (Vaswani et al., 2017; Dai et al., 2019). However, it is also well known that these implicit models are slow, which is (arguably) their single most limiting drawback compared to traditional feedforward models (Duvenaud et al., 2020; Dupont et al., 2019; Bai et al., 2021). For example, Neural ODEs can take well over 100 forward solver iterations (i.e., evaluations of fθ) even on MNIST classification; DEQs can scale to realistic tasks, but the overhead of fixed-point solvers is magnified by the task scale, rendering the model 3-6× slower than state-of-the-art (SOTA) explicit networks (Vaswani et al., 2017; Wang et al., 2020) at inference. Can we make equilibrium models faster by taking advantage of their implicitness?
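As a concrete picture of the formulation z⋆ = fθ(z⋆, x), the toy sketch below (our own; the layer is deliberately contractive so that plain fixed-point iteration provably converges) solves a small DEQ-style equilibrium by naive iteration. Real DEQs rely on faster root-finders such as Broyden's method or Anderson acceleration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
A = rng.normal(size=(d, d))
W = 0.5 * A / np.linalg.norm(A, 2)   # spectral norm 0.5: tanh layer is a contraction
U = 0.1 * rng.normal(size=(d, d))

def f(z, x):
    """One DEQ-style 'layer': z -> tanh(W z + U x)."""
    return np.tanh(W @ z + U @ x)

x = rng.normal(size=d)
z = np.zeros(d)
for it in range(100):                # plain fixed-point iteration
    z_next = f(z, x)
    if np.linalg.norm(z_next - z) < 1e-10:
        break
    z = z_next
residual = np.linalg.norm(f(z, x) - z)
print(it, residual)                  # converges because f is a contraction in z
```

Note that the same layer f is re-solved for every new input x, which is exactly why the paper argues a solver specialized to one particular fθ can pay off.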
One benefit of the DEQ formulation is that it decouples the representational capacity ( determined by fθ ) from the forward computation ( controlled by the solver ) , which is not possible in any explicit model ( e.g. , ResNet-101 ( He et al. , 2016 ) ) . Hence , given a trained DEQ , one can trade off inference time and the accuracy of the estimated fixed point by simply reducing the number of solver iterations . This yields a speed/accuracy trade-off curve , as shown in Fig . 1 . However , this trade-off ( i.e. , movements along the Pareto curves ) can be highly risky : as we gradually increase inference speed by compromising the quality of fixed point estimates , model accuracy also degrades drastically . In this work , we show that we can shift the DEQ speed/accuracy trade-off curve by exploiting such decoupling to customize the fixed-point solver . Prior work on equilibrium models relies on classic solvers , which are manually designed and generic ( e.g. , Broyden ’ s Method ( Broyden , 1965 ) ) . We propose a tiny , learnable , and content-aware solver module that is automatically customized to a specific DEQ . Our hypersolver consists of two parts . First , we introduce a learned initializer that estimates a good starting point for the optimization . Second , we introduce a generalized , parameterized version of Anderson mixing ( Anderson , 1965 ) that learns the iterative updates as an input-dependent temporal process . Overall , the hypersolver adds only a small number of parameters . Since fθ is frozen when the hypersolver is trained , the training is very fast and does not compromise generalization . Our experiments apply this approach to diverse domains with large datasets : WikiText-103 language modeling ( Merity et al. , 2017 ) , ImageNet classification ( Deng et al. , 2009 ) , and Cityscapes segmentation with megapixel images ( Cordts et al. , 2016 ) . 
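As a reference point for the parameterized Anderson mixing described above, here is a sketch of classic (non-learned) Anderson acceleration; as this section describes it, the hypersolver would replace the least-squares mixing weights with input-dependent, network-predicted ones and supply a learned initial z rather than zeros. The toy layer and all names are illustrative assumptions:

```python
import numpy as np

def anderson(f, z0, m=5, max_iter=100, tol=1e-8):
    # Classic Anderson acceleration for z = f(z): mix the last n <= m
    # evaluations of f with weights a that minimize the mixed residual
    # norm ||G a|| subject to sum(a) = 1.
    zs, fs = [z0], [f(z0)]
    for _ in range(max_iter):
        n = min(m, len(zs))
        # Columns are the most recent n residuals g_j = f(z_j) - z_j.
        G = np.stack([fs[-j] - zs[-j] for j in range(1, n + 1)], axis=1)
        # Regularized normal equations for the constrained least squares.
        a = np.linalg.solve(G.T @ G + 1e-10 * np.eye(n), np.ones(n))
        a /= a.sum()
        z_next = sum(a[j - 1] * fs[-j] for j in range(1, n + 1))
        fz = f(z_next)
        if np.linalg.norm(fz - z_next) < tol:
            return z_next
        zs.append(z_next)
        fs.append(fz)
    return zs[-1]

# Toy contractive layer standing in for f_theta(., x).
rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((8, 8)) / np.sqrt(8)
x = rng.standard_normal(8)
f = lambda z: np.tanh(W @ z + x)
z_star = anderson(f, np.zeros(8))
residual = np.linalg.norm(f(z_star) - z_star)
```

Because the mixing weights above are recomputed from scratch for every input, a learned, content-aware replacement can amortize this work across the many fixed-point solves an equilibrium model performs.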
Our results suggest that neural deep equilibrium solvers add little overhead to training ( only taking an extra 0.9-1.1 % over the original DEQ ’ s training time ) , are extremely compact ( about 1-3 % of the DEQ ’ s model size ) , and lead to a consistent and universal 1.6-2× acceleration of inference with no compromise in accuracy . Overall , we believe this paper achieves two major objectives , both vital for the quickly growing community studying implicit models : first , we advance these large-scale implicit models to a much more practical level across architectures ( e.g. , almost as fast as Transformers ) ; and second , we formally articulate and exploit the valuable notion that implicit layers decouple representational capacity and forward computation , opening a new door to significantly advancing the agenda of deploying implicit models in practice . 2 RELATED WORK . Deep Implicit Models . Recent research on models without a prescribed computation graph or hierarchical stacking has led to a new class of deep learning models where the output is defined as the solution of nonlinear systems ( Duvenaud et al. , 2020 ; Amos & Kolter , 2017 ; Chen et al. , 2018b ; Wang et al. , 2019 ; El Ghaoui et al. , 2019 ; Bai et al. , 2019 ; 2020 ; Gould et al. , 2019 ; Gu et al. , 2020 ; Wang et al. , 2020 ) . Neural ODEs ( NODEs ) ( Chen et al. , 2018b ; Dupont et al. , 2019 ) , for example , model infinitesimal steps of a residual layer fθ by solving an initial value problem ( IVP ) ( Coddington & Levinson , 1955 ) parameterized by this layer ; i.e . ∂z/∂t = fθ ( z ( t ) , t ) , z ( 0 ) = x , t ∈ [ 0 , T ] . Deep equilibrium ( DEQ ) models ( Bai et al. , 2019 ; Winston & Kolter , 2020 ) seek to directly solve for a “ fixed-point ” representation corresponding to a ( not necessarily residual ) layer fθ and input x ; i.e . z⋆ = fθ ( z⋆ , x ) . Implicit models are appealing in part due to their analytical backward passes ( e.g. 
, adjoint method or implicit differentiation ) that only depend on the final output , which can dramatically reduce memory consumption during training . Regularizing Implicit Models . Implicit models are known to be slow during training and inference . To address this , recent works have developed certain regularization methods that encourage these models to be more stable and thus easier to solve . For NODEs , Dupont et al . ( 2019 ) augment the neural ODE hidden state ; Grathwohl et al . ( 2019 ) use spectral normalization ( Miyato et al. , 2018 ) to stabilize the NODE dynamics ; Kelly et al . ( 2020 ) regularize higher-order time derivatives of the ODE system . For DEQs , Winston & Kolter ( 2020 ) propose a parameterization of fθ that guarantees stability of DEQ models ( i.e. , unique fixed point ) . Fung et al . ( 2021 ) show that one can simplify the implicit differentiation of Lipschitz DEQs ( Revay et al. , 2020 ) to accelerate the backward pass . Bai et al . ( 2021 ) summarize DEQ stability issues and propose to address them by regularizing the Jacobian matrices of equilibrium layers . In comparison , our work focuses on the solver rather than the layer fθ , and is orthogonal and complementary to regularization methods . Improving Implicit Model Solvers . Of particular relevance to our work are recent advances in the Neural ODE literature that improve the ODE flow solver . Poli et al . ( 2020 ) introduce a Neural ODE formulation that adds a learnable residual fitting step to the original solver steps , aiming to approximate the higher-order terms of canonical ODE solvers ( e.g. , Euler ’ s method ) on each solution checkpoint along the ODE path . Another recent work ( Kidger et al. , 2021 ) focuses on improving the adjoint method by replacing the usual L2 norm with a more flexible seminorm to make the NODE backward solver faster . To the best of our knowledge , no such solver improvement has been explored in the equilibrium model context . 
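The "analytical backward pass" mentioned above can be checked numerically. Below is a minimal sketch (toy tanh layer, illustrative names, not any cited paper's model) of implicit differentiation for z⋆ = tanh ( W z⋆ + x ) : differentiating the fixed-point equation gives dz⋆/dx = ( I − D W )⁻¹ D with D = diag ( 1 − z⋆² ) , which depends only on the final z⋆ and not on how the forward solver reached it:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
W = 0.3 * rng.standard_normal((d, d)) / np.sqrt(d)  # contractive toy layer

def solve(x, tol=1e-12, max_iter=2000):
    # Forward pass: any fixed-point solver works; the backward pass
    # below never sees these iterations (the "insulation" noted above).
    z = np.zeros(d)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + x)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z_next

x = rng.standard_normal(d)
z_star = solve(x)

# Implicit differentiation: dz*/dx = (I - D W)^{-1} D, D = diag(1 - z*^2).
D = np.diag(1.0 - z_star ** 2)
J_implicit = np.linalg.solve(np.eye(d) - D @ W, D)

# Finite-difference check on the first coordinate of x.
h = 1e-5
e0 = np.zeros(d)
e0[0] = h
fd_column = (solve(x + e0) - solve(x - e0)) / (2 * h)
err = np.max(np.abs(J_implicit[:, 0] - fd_column))
```

This is why the forward and backward passes can use entirely different solvers: the gradient is a property of the equilibrium itself.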
Unlike Neural ODEs , DEQs do not use ODE solvers and do not have unique & well-defined trajectories to the solution ( even if one starts at the same initial point z [ 0 ] ) . Our work is the first to propose a neural fixed-point solver for equilibrium models . Learning to Optimize/Learn . An important line of work has explored learnable optimization methods . Li & Malik ( 2016 ; 2017 ) propose to use reinforcement learning ( guided policy search ) to learn a new generic unconstrained continuous optimization algorithm , where the training set consists of numerous randomly generated objective functions . Andrychowicz et al . ( 2016 ) introduce the “ learning to learn ” ( L2L ) framework , where a gradient update rule for the parameters is learned by an LSTM with a pre-defined horizon T of parameter update steps . However , such approaches ( Andrychowicz et al. , 2016 ; Chen et al. , 2017 ; Wichrowska et al. , 2017 ; Ravi & Larochelle , 2016 ) have had some difficulty in generalizing to larger tasks due to the need to unroll for a large T ( e.g. , 128 ( Andrychowicz et al. , 2016 ) ) . Our work is related to these prior efforts in L2L , but differs in important ways . First , the L2L framework aims to learn a learning algorithm that will be applied to multiple models and tasks , while we aim to fit the nonlinear dynamics of a specific implicit model . Second , the optimization we tackle is not in the parameter space , but in the hidden-unit space ; this means that the RNN optimizer used in L2L would not work here , because the fixed points themselves can be of variable sizes at test time ( e.g. , sequence lengths , image sizes ) . Third , while L2L methods cannot know a priori what a good “ initial guess ” of optimal parameters may be , we show that it is possible and reasonable to infer this in the hidden-unit space with implicit models . 
Concurrent to our work , Venkataraman & Amos ( 2021 ) study an RNN-based learnable fixed-point acceleration scheme specifically in the application of convex cone programming . | The paper proposes to speed up inference of Deep Equilibrium Models (DEQs) by replacing the classic fixed-point solvers (Broyden's method or Anderson acceleration) with a learned extension of AA. Their approach operates on a pre-trained DEQ, and trains a small neural network to propose an initialization and update scheme based on ground-truth fixed points. Their method is orthogonal to existing regularization approaches to speeding up DEQs. The paper has extensive experiments across large-scale tasks: language modeling, ImageNet classification, and semantic segmentation. They show Pareto improvements across these tasks compared to standard DEQs, while adding only ~1% training overhead over the original DEQ. | SP:26be4da0f288362ff02938776c22cfa090d6fd07 
Neural Deep Equilibrium Solvers | A deep equilibrium ( DEQ ) model abandons traditional depth by solving for the fixed point of a single nonlinear layer fθ . This design enables decoupling the internal structure of the layer ( which controls representational capacity ) from how the fixed point is actually computed ( which impacts inference-time efficiency ) , which is usually done via classic techniques such as Broyden ’ s method or Anderson acceleration . In this paper , we show that one can exploit such decoupling and substantially enhance this fixed point computation using a custom neural solver . Specifically , our solver uses a parameterized network to both guess an initial value of the optimization and perform iterative updates , in a method that generalizes a learnable form of Anderson acceleration and can be trained end-to-end in an unsupervised manner . Such a solution is particularly well suited to the implicit model setting , because inference in these models requires repeatedly solving for a fixed point of the same nonlinear layer for different inputs , a task at which our network excels . Our experiments show that these neural equilibrium solvers are fast to train ( only taking an extra 0.9-1.1 % over the original DEQ ’ s training time ) , require few additional parameters ( 1-3 % of the original model size ) , yet lead to a 2× speedup in DEQ network inference without any degradation in accuracy across numerous domains and tasks . 1 INTRODUCTION . Recent progress on implicit networks , such as Neural ODEs ( NODEs ) ( Chen et al. , 2018b ; Dupont et al. , 2019 ; Rubanova et al. , 2019 ; Jia & Benson , 2019 ; Kelly et al. , 2020 ) and deep equilibrium ( DEQ ) models ( Bai et al. , 2019 ; Winston & Kolter , 2020 ; Kawaguchi , 2021 ; Bai et al. , 2020 ; Gilton et al. , 2021 ) , has propelled this novel class of networks to the forefront of deep learning research . 
Instead of stacking a series of operators hierarchically , implicit models define their outputs as solutions to nonlinear dynamical systems . For example , DEQ models ( which this paper will focus on ) define their outputs as fixed points ( a.k.a . equilibria ) of a layer fθ and input x ; i.e. , output z⋆ = fθ ( z⋆ , x ) . Then , in the backward pass , a DEQ implicitly differentiates through the final fixed point z⋆ ( Krantz & Parks , 2012 ; Bai et al. , 2019 ; Fung et al. , 2021 ) , regardless of how the forward pass is computed in the first place . Such insulated forward and backward passes enable an equilibrium model to leverage arbitrary black-box solvers to reach the fixed points without storing intermediate activations , thus consuming constant training memory . Recent works have successfully applied the DEQ framework to high-dimensional tasks such as language modeling ( Merity et al. , 2017 ) and semantic segmentation ( Cordts et al. , 2016 ) , with performance competitive with architectures like Transformers ( Vaswani et al. , 2017 ; Dai et al. , 2019 ) . However , it is also well-known that these implicit models are slow , which is ( arguably ) their single most limiting drawback compared to traditional feedforward models ( Duvenaud et al. , 2020 ; Dupont et al. , 2019 ; Bai et al. , 2021 ) . For example , Neural ODEs could take well over 100 forward solver iterations ( i.e. , evaluations of fθ ) even on MNIST classification ; DEQs can scale to realistic tasks , but the overhead of fixed-point solvers is magnified by the task scales , rendering the model 3-6× slower than state-of-the-art ( SOTA ) explicit networks ( Vaswani et al. , 2017 ; Wang et al. , 2020 ) at inference . Can we make equilibrium models faster by taking advantage of their implicitness ? 
One benefit of the DEQ formulation is that it decouples the representational capacity ( determined by fθ ) from the forward computation ( controlled by the solver ) , which is not possible in any explicit model ( e.g. , ResNet-101 ( He et al. , 2016 ) ) . Hence , given a trained DEQ , one can trade off inference time and the accuracy of the estimated fixed point by simply reducing the number of solver iterations . This yields a speed/accuracy trade-off curve , as shown in Fig . 1 . However , this trade-off ( i.e. , movements along the Pareto curves ) can be highly risky : as we gradually increase inference speed by compromising the quality of fixed point estimates , model accuracy also degrades drastically . In this work , we show that we can shift the DEQ speed/accuracy trade-off curve by exploiting such decoupling to customize the fixed-point solver . Prior work on equilibrium models relies on classic solvers , which are manually designed and generic ( e.g. , Broyden ’ s Method ( Broyden , 1965 ) ) . We propose a tiny , learnable , and content-aware solver module that is automatically customized to a specific DEQ . Our hypersolver consists of two parts . First , we introduce a learned initializer that estimates a good starting point for the optimization . Second , we introduce a generalized , parameterized version of Anderson mixing ( Anderson , 1965 ) that learns the iterative updates as an input-dependent temporal process . Overall , the hypersolver adds only a small number of parameters . Since fθ is frozen when the hypersolver is trained , the training is very fast and does not compromise generalization . Our experiments apply this approach to diverse domains with large datasets : WikiText-103 language modeling ( Merity et al. , 2017 ) , ImageNet classification ( Deng et al. , 2009 ) , and Cityscapes segmentation with megapixel images ( Cordts et al. , 2016 ) . 
Our results suggest that neural deep equilibrium solvers add little overhead to training ( only taking an extra 0.9-1.1 % over the original DEQ ’ s training time ) , are extremely compact ( about 1-3 % of the DEQ ’ s model size ) , and lead to a consistent and universal 1.6-2× acceleration of inference with no compromise in accuracy . Overall , we believe this paper achieves two major objectives , both vital for the quickly growing community studying implicit models : first , we advance these large-scale implicit models to a much more practical level across architectures ( e.g. , almost as fast as Transformers ) ; and second , we formally articulate and exploit the valuable notion that implicit layers decouple representational capacity and forward computation , opening a new door to significantly advancing the agenda of deploying implicit models in practice . 2 RELATED WORK . Deep Implicit Models . Recent research on models without a prescribed computation graph or hierarchical stacking has led to a new class of deep learning models where the output is defined as the solution of nonlinear systems ( Duvenaud et al. , 2020 ; Amos & Kolter , 2017 ; Chen et al. , 2018b ; Wang et al. , 2019 ; El Ghaoui et al. , 2019 ; Bai et al. , 2019 ; 2020 ; Gould et al. , 2019 ; Gu et al. , 2020 ; Wang et al. , 2020 ) . Neural ODEs ( NODEs ) ( Chen et al. , 2018b ; Dupont et al. , 2019 ) , for example , model infinitesimal steps of a residual layer fθ by solving an initial value problem ( IVP ) ( Coddington & Levinson , 1955 ) parameterized by this layer ; i.e . ∂z/∂t = fθ ( z ( t ) , t ) , z ( 0 ) = x , t ∈ [ 0 , T ] . Deep equilibrium ( DEQ ) models ( Bai et al. , 2019 ; Winston & Kolter , 2020 ) seek to directly solve for a “ fixed-point ” representation corresponding to a ( not necessarily residual ) layer fθ and input x ; i.e . z⋆ = fθ ( z⋆ , x ) . Implicit models are appealing in part due to their analytical backward passes ( e.g. 
, adjoint method or implicit differentiation ) that only depend on the final output , which can dramatically reduce memory consumption during training . Regularizing Implicit Models . Implicit models are known to be slow during training and inference . To address this , recent works have developed certain regularization methods that encourage these models to be more stable and thus easier to solve . For NODEs , Dupont et al . ( 2019 ) augment the neural ODE hidden state ; Grathwohl et al . ( 2019 ) use spectral normalization ( Miyato et al. , 2018 ) to stabilize the NODE dynamics ; Kelly et al . ( 2020 ) regularize higher-order time derivatives of the ODE system . For DEQs , Winston & Kolter ( 2020 ) propose a parameterization of fθ that guarantees stability of DEQ models ( i.e. , unique fixed point ) . Fung et al . ( 2021 ) show that one can simplify the implicit differentiation of Lipschitz DEQs ( Revay et al. , 2020 ) to accelerate the backward pass . Bai et al . ( 2021 ) summarize DEQ stability issues and propose to address them by regularizing the Jacobian matrices of equilibrium layers . In comparison , our work focuses on the solver rather than the layer fθ , and is orthogonal and complementary to regularization methods . Improving Implicit Model Solvers . Of particular relevance to our work are recent advances in the Neural ODE literature that improve the ODE flow solver . Poli et al . ( 2020 ) introduce a Neural ODE formulation that adds a learnable residual fitting step to the original solver steps , aiming to approximate the higher-order terms of canonical ODE solvers ( e.g. , Euler ’ s method ) on each solution checkpoint along the ODE path . Another recent work ( Kidger et al. , 2021 ) focuses on improving the adjoint method by replacing the usual L2 norm with a more flexible seminorm to make the NODE backward solver faster . To the best of our knowledge , no such solver improvement has been explored in the equilibrium model context . 
Unlike Neural ODEs , DEQs do not use ODE solvers and do not have unique & well-defined trajectories to the solution ( even if one starts at the same initial point z [ 0 ] ) . Our work is the first to propose a neural fixed-point solver for equilibrium models . Learning to Optimize/Learn . An important line of work has explored learnable optimization methods . Li & Malik ( 2016 ; 2017 ) propose to use reinforcement learning ( guided policy search ) to learn a new generic unconstrained continuous optimization algorithm , where the training set consists of numerous randomly generated objective functions . Andrychowicz et al . ( 2016 ) introduce the “ learning to learn ” ( L2L ) framework , where a gradient update rule for the parameters is learned by an LSTM with a pre-defined horizon T of parameter update steps . However , such approaches ( Andrychowicz et al. , 2016 ; Chen et al. , 2017 ; Wichrowska et al. , 2017 ; Ravi & Larochelle , 2016 ) have had some difficulty in generalizing to larger tasks due to the need to unroll for a large T ( e.g. , 128 ( Andrychowicz et al. , 2016 ) ) . Our work is related to these prior efforts in L2L , but differs in important ways . First , the L2L framework aims to learn a learning algorithm that will be applied to multiple models and tasks , while we aim to fit the nonlinear dynamics of a specific implicit model . Second , the optimization we tackle is not in the parameter space , but in the hidden-unit space ; this means that the RNN optimizer used in L2L would not work here , because the fixed points themselves can be of variable sizes at test time ( e.g. , sequence lengths , image sizes ) . Third , while L2L methods cannot know a priori what a good “ initial guess ” of optimal parameters may be , we show that it is possible and reasonable to infer this in the hidden-unit space with implicit models . 
Concurrent to our work , Venkataraman & Amos ( 2021 ) study an RNN-based learnable fixed-point acceleration scheme specifically in the application of convex cone programming . | The authors introduce a neural network approach for solving the fixed-point equations arising in deep equilibrium models. This consists of a tiny network that provides an initial guess for the fixed point, as well as a small network that computes coefficients inside an algorithm inspired by Anderson iteration. The approach is intuitive and empirical. Although no theory is given, the authors demonstrate the strength of their proposed solver in large-scale experimental evaluations. Specifically, the new solver is fast to train, has a small parameter count, and appears to drastically shift the Pareto front of the inference speed/performance curve for all DEQ models. | SP:26be4da0f288362ff02938776c22cfa090d6fd07 
Neural Deep Equilibrium Solvers | A deep equilibrium ( DEQ ) model abandons traditional depth by solving for the fixed point of a single nonlinear layer fθ . This design enables decoupling the internal structure of the layer ( which controls representational capacity ) from how the fixed point is actually computed ( which impacts inference-time efficiency ) , which is usually done via classic techniques such as Broyden ’ s method or Anderson acceleration . In this paper , we show that one can exploit such decoupling and substantially enhance this fixed point computation using a custom neural solver . Specifically , our solver uses a parameterized network to both guess an initial value of the optimization and perform iterative updates , in a method that generalizes a learnable form of Anderson acceleration and can be trained end-to-end in an unsupervised manner . Such a solution is particularly well suited to the implicit model setting , because inference in these models requires repeatedly solving for a fixed point of the same nonlinear layer for different inputs , a task at which our network excels . Our experiments show that these neural equilibrium solvers are fast to train ( only taking an extra 0.9-1.1 % over the original DEQ ’ s training time ) , require few additional parameters ( 1-3 % of the original model size ) , yet lead to a 2× speedup in DEQ network inference without any degradation in accuracy across numerous domains and tasks . 1 INTRODUCTION . Recent progress on implicit networks , such as Neural ODEs ( NODEs ) ( Chen et al. , 2018b ; Dupont et al. , 2019 ; Rubanova et al. , 2019 ; Jia & Benson , 2019 ; Kelly et al. , 2020 ) and deep equilibrium ( DEQ ) models ( Bai et al. , 2019 ; Winston & Kolter , 2020 ; Kawaguchi , 2021 ; Bai et al. , 2020 ; Gilton et al. , 2021 ) , has propelled this novel class of networks to the forefront of deep learning research . 
Instead of stacking a series of operators hierarchically , implicit models define their outputs as solutions to nonlinear dynamical systems . For example , DEQ models ( which this paper will focus on ) define their outputs as fixed points ( a.k.a . equilibria ) of a layer fθ and input x ; i.e. , output z⋆ = fθ ( z⋆ , x ) . Then , in the backward pass , a DEQ implicitly differentiates through the final fixed point z⋆ ( Krantz & Parks , 2012 ; Bai et al. , 2019 ; Fung et al. , 2021 ) , regardless of how the forward pass is computed in the first place . Such insulated forward and backward passes enable an equilibrium model to leverage arbitrary black-box solvers to reach the fixed points without storing intermediate activations , thus consuming constant training memory . Recent works have successfully applied the DEQ framework to high-dimensional tasks such as language modeling ( Merity et al. , 2017 ) and semantic segmentation ( Cordts et al. , 2016 ) , with performance competitive with architectures like Transformers ( Vaswani et al. , 2017 ; Dai et al. , 2019 ) . However , it is also well-known that these implicit models are slow , which is ( arguably ) their single most limiting drawback compared to traditional feedforward models ( Duvenaud et al. , 2020 ; Dupont et al. , 2019 ; Bai et al. , 2021 ) . For example , Neural ODEs could take well over 100 forward solver iterations ( i.e. , evaluations of fθ ) even on MNIST classification ; DEQs can scale to realistic tasks , but the overhead of fixed-point solvers is magnified by the task scales , rendering the model 3-6× slower than state-of-the-art ( SOTA ) explicit networks ( Vaswani et al. , 2017 ; Wang et al. , 2020 ) at inference . Can we make equilibrium models faster by taking advantage of their implicitness ? 
One benefit of the DEQ formulation is that it decouples the representational capacity ( determined by fθ ) from the forward computation ( controlled by the solver ) , which is not possible in any explicit model ( e.g. , ResNet-101 ( He et al. , 2016 ) ) . Hence , given a trained DEQ , one can trade off inference time and the accuracy of the estimated fixed point by simply reducing the number of solver iterations . This yields a speed/accuracy trade-off curve , as shown in Fig . 1 . However , this trade-off ( i.e. , movements along the Pareto curves ) can be highly risky : as we gradually increase inference speed by compromising the quality of fixed point estimates , model accuracy also degrades drastically . In this work , we show that we can shift the DEQ speed/accuracy trade-off curve by exploiting such decoupling to customize the fixed-point solver . Prior work on equilibrium models relies on classic solvers , which are manually designed and generic ( e.g. , Broyden ’ s Method ( Broyden , 1965 ) ) . We propose a tiny , learnable , and content-aware solver module that is automatically customized to a specific DEQ . Our hypersolver consists of two parts . First , we introduce a learned initializer that estimates a good starting point for the optimization . Second , we introduce a generalized , parameterized version of Anderson mixing ( Anderson , 1965 ) that learns the iterative updates as an input-dependent temporal process . Overall , the hypersolver adds only a small number of parameters . Since fθ is frozen when the hypersolver is trained , the training is very fast and does not compromise generalization . Our experiments apply this approach to diverse domains with large datasets : WikiText-103 language modeling ( Merity et al. , 2017 ) , ImageNet classification ( Deng et al. , 2009 ) , and Cityscapes segmentation with megapixel images ( Cordts et al. , 2016 ) . 
Our results suggest that neural deep equilibrium solvers add little overhead to training ( only taking an extra 0.9-1.1 % over the original DEQ ’ s training time ) , are extremely compact ( about 1-3 % of the DEQ ’ s model size ) , and lead to a consistent and universal 1.6-2× acceleration of inference with no compromise in accuracy . Overall , we believe this paper achieves two major objectives , both vital for the quickly growing community studying implicit models : first , we advance these large-scale implicit models to a much more practical level across architectures ( e.g. , almost as fast as Transformers ) ; and second , we formally articulate and exploit the valuable notion that implicit layers decouple representational capacity and forward computation , opening a new door to significantly advancing the agenda of deploying implicit models in practice . 2 RELATED WORK . Deep Implicit Models . Recent research on models without a prescribed computation graph or hierarchical stacking has led to a new class of deep learning models where the output is defined as the solution of nonlinear systems ( Duvenaud et al. , 2020 ; Amos & Kolter , 2017 ; Chen et al. , 2018b ; Wang et al. , 2019 ; El Ghaoui et al. , 2019 ; Bai et al. , 2019 ; 2020 ; Gould et al. , 2019 ; Gu et al. , 2020 ; Wang et al. , 2020 ) . Neural ODEs ( NODEs ) ( Chen et al. , 2018b ; Dupont et al. , 2019 ) , for example , model infinitesimal steps of a residual layer fθ by solving an initial value problem ( IVP ) ( Coddington & Levinson , 1955 ) parameterized by this layer ; i.e . ∂z/∂t = fθ ( z ( t ) , t ) , z ( 0 ) = x , t ∈ [ 0 , T ] . Deep equilibrium ( DEQ ) models ( Bai et al. , 2019 ; Winston & Kolter , 2020 ) seek to directly solve for a “ fixed-point ” representation corresponding to a ( not necessarily residual ) layer fθ and input x ; i.e . z⋆ = fθ ( z⋆ , x ) . Implicit models are appealing in part due to their analytical backward passes ( e.g. 
, adjoint method or implicit differentiation ) that only depend on the final output , which can dramatically reduce memory consumption during training . Regularizing Implicit Models . Implicit models are known to be slow during training and inference . To address this , recent works have developed certain regularization methods that encourage these models to be more stable and thus easier to solve . For NODEs , Dupont et al . ( 2019 ) augment the neural ODE hidden state ; Grathwohl et al . ( 2019 ) use spectral normalization ( Miyato et al. , 2018 ) to stabilize the NODE dynamics ; Kelly et al . ( 2020 ) regularize higher-order time derivatives of the ODE system . For DEQs , Winston & Kolter ( 2020 ) propose a parameterization of fθ that guarantees stability of DEQ models ( i.e. , unique fixed point ) . Fung et al . ( 2021 ) show that one can simplify the implicit differentiation of Lipschitz DEQs ( Revay et al. , 2020 ) to accelerate the backward pass . Bai et al . ( 2021 ) summarize DEQ stability issues and propose to address them by regularizing the Jacobian matrices of equilibrium layers . In comparison , our work focuses on the solver rather than the layer fθ , and is orthogonal and complementary to regularization methods . Improving Implicit Model Solvers . Of particular relevance to our work are recent advances in the Neural ODE literature that improve the ODE flow solver . Poli et al . ( 2020 ) introduce a Neural ODE formulation that adds a learnable residual fitting step to the original solver steps , aiming to approximate the higher-order terms of canonical ODE solvers ( e.g. , Euler ’ s method ) on each solution checkpoint along the ODE path . Another recent work ( Kidger et al. , 2021 ) focuses on improving the adjoint method by replacing the usual L2 norm with a more flexible seminorm to make the NODE backward solver faster . To the best of our knowledge , no such solver improvement has been explored in the equilibrium model context . 
Unlike Neural ODEs , DEQs do not use ODE solvers and do not have unique & well-defined trajectories to the solution ( even if one starts at the same initial point z [ 0 ] ) . Our work is the first to propose a neural fixed-point solver for equilibrium models . Learning to Optimize/Learn . An important line of work has explored learnable optimization methods . Li & Malik ( 2016 ; 2017 ) propose to use reinforcement learning ( guided policy search ) to learn a new generic unconstrained continuous optimization algorithm , where the training set consists of numerous randomly generated objective functions . Andrychowicz et al . ( 2016 ) introduce the “ learning to learn ” ( L2L ) framework , where a gradient update rule for the parameters is learned by an LSTM with a pre-defined horizon T of parameter update steps . However , such approaches ( Andrychowicz et al. , 2016 ; Chen et al. , 2017 ; Wichrowska et al. , 2017 ; Ravi & Larochelle , 2016 ) have had some difficulty in generalizing to larger tasks due to the need to unroll for a large T ( e.g. , 128 ( Andrychowicz et al. , 2016 ) ) . Our work is related to these prior efforts in L2L , but differs in important ways . First , the L2L framework aims to learn a learning algorithm that will be applied to multiple models and tasks , while we aim to fit the nonlinear dynamics of a specific implicit model . Second , the optimization we tackle is not in the parameter space , but in the hidden-unit space ; this means that the RNN optimizer used in L2L would not work here , because the fixed points themselves can be of variable sizes at test time ( e.g. , sequence lengths , image sizes ) . Third , while L2L methods cannot know a priori what a good “ initial guess ” of optimal parameters may be , we show that it is possible and reasonable to infer this in the hidden-unit space with implicit models . 
Concurrent to our work , Venkataraman & Amos ( 2021 ) studies an RNN-based learnable fixed-point acceleration scheme specifically in the application of convex cone programming . | The paper presents a method called neural deep equilibrium solver to increase efficiency at the inference stage of implicit deep models by initializing the equilibrium states using a neural network . The authors start with the traditional Anderson acceleration scheme for fixed-point calculation and extend it with neural network initialization and Anderson steps to improve inference efficiency . The authors conduct comprehensive experiments to demonstrate that the speed-up in inference is significant and general , with little overhead at training time . Further experiments show that the proposed method can be incorporated into the training procedure to give faster training while retaining the speedup at inference time . | SP:26be4da0f288362ff02938776c22cfa090d6fd07 |
Scaling Fair Learning to Hundreds of Intersectional Groups | 1 INTRODUCTION . In many real-world applications , there is a significant potential for harm when the predictive properties and performance of machine learning models vary across different demographic populations . This is indeed the case for many applications including facial analysis ( Buolamwini & Gebru , 2018 ) , ad delivery ( Sweeney , 2013 ) and search engines ( Noble , 2018 ) . Recent works aim to address this by designing learning algorithms that guarantee or approximate formal notions of algorithmic fairness ( Mehrabi et al. , 2021 ) . However , there still remains a gap between theory and practice . This gap is especially pronounced in the case of intersectional fairness . In the real world , it is critical to simultaneously protect multiple attributes such as gender , skin color and age . Due to increasing disadvantage along the intersecting dimensions of protected attributes ( Crenshaw , 1989 ; Foulds et al. , 2020 ) , it is also important to protect the intersections of these attributes . However , there has been scarce work on intersectional and subgroup fairness in deep learning contexts , with a large portion of the literature still focusing on single ( binary ) protected attribute settings , such as protecting the Male/Not-Male attribute in the CelebA Dataset ( Liu et al. , 2015 ) . In this work , we focus on intersectional fairness and seek to scale deep learning bias mitigation techniques to multiple protected attributes with hundreds of intersectional groups . Approaches such as training separate classifiers for each group ( Wang et al. , 2020 ) , upweighting the losses of samples based on the group size , optimizing for the worst-case group outcome ( Sagawa et al. , 2020 ) , or upweighting errors in early epochs ( Liu et al. , 2021 ) have all been shown to be effective in the single attribute setting .
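As an illustration of one baseline listed above, upweighting sample losses by inverse group frequency can be sketched as follows; the weighting convention shown is a common illustrative choice, not the exact scheme of any cited work.

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Weight each sample by N / (num_groups * count(group)), so that every
    group contributes equal total weight to the expected loss."""
    counts = Counter(group_labels)
    n, g = len(group_labels), len(counts)
    return [n / (g * counts[label]) for label in group_labels]

# Toy example: group "a" has 3x the samples of group "b".
groups = ["a", "a", "a", "b"]
weights = inverse_frequency_weights(groups)
# Each group's total weight is equal: 3 * (4/6) = 1 * (4/2) = 2.
```

In training, these weights would multiply the per-sample losses before averaging.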
However , it is not clear whether they are effective given the intersection of multiple protected attributes , where there is a combinatorial number of protected groups and fairness for individual protected attributes does not imply intersectional fairness ( Figure 1b ) . We address the question of how to scale existing bias mitigation techniques to this important but challenging setting . Summary of Contributions : 1 . We perform a thorough empirical analysis of existing deep learning bias mitigation techniques to explore their applicability in multi-attribute fairness settings . We find that as the number of protected attributes ( k ) increases , attribute labels become necessary to mitigate biases . 2 . We conduct the first study of bias mitigation on the ImageNet People Subtree , with 196 protected groups but fewer than 10 % of training data having protected attribute labels . In this challenging setting , we show existing bias mitigation methods can reduce bias amplification ( Zhao et al. , 2017 ) by 9 % against empirical risk minimization ( ERM ) . We identify two key issues with scaling up to so many intersectional groups : model complexity and overfitting to limited attribute labels . 3 . We address these scaling challenges by proposing a novel regularization method : Knowledge Distillation of Independent Models as Regularization ( DIR ) . DIR regularizes a single student classifier with teacher classifiers trained specifically for each intersectional group to implicitly incorporate attribute label information whenever it ’ s available . In contrast to existing techniques with complexity linear in the number of protected groups , DIR has a constant inference complexity and model size when used as a stand-alone bias mitigation algorithm . We further demonstrate that DIR can regularize other bias mitigation algorithms , reducing bias amplification by an additional 15 % on ImageNet . 
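The combinatorial number of protected groups noted above is easy to make concrete; the attribute names and values below are purely illustrative.

```python
from itertools import product

# Hypothetical protected attributes and their possible values.
attributes = {
    "gender": ["male", "female", "nonbinary"],
    "skin_color": ["lighter", "darker"],
    "age": ["child", "adult", "senior"],
}

# Every combination of attribute values defines one intersectional group,
# so the group count is the product of the per-attribute cardinalities.
groups = list(product(*attributes.values()))
num_groups = len(groups)  # 3 * 2 * 3 = 18 groups from just 3 attributes
```

With a handful of attributes of modest cardinality, the group count quickly reaches the hundreds, which is the regime studied in this paper.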
We first conduct a thorough evaluation of existing bias mitigation algorithms through a series of fair learning tasks on the CelebA dataset ( Liu et al. , 2015 ) . Each round introduces a new protected attribute and evaluates whether existing methods can protect the resulting intersectional groups . Our findings demonstrate that , even with hundreds of intersectional groups , there are bias mitigation algorithms which can significantly reduce bias . However , they all require access to protected attribute labels during training . While bias mitigation methods which do not require attribute labels are occasionally effective in single-attribute settings , as suggested in prior literature ( Liu et al. , 2021 ; Shrestha et al. , 2021 ) , we show they are not effective in multi-attribute scenarios . We then consider the more challenging task of learning a fair classifier on the ImageNet People Subtree , which Yang et al . ( 2020 ) labels for 196 intersectional groups . This setting is especially challenging for two reasons . First , our previous results suggest that access to protected attribute labels during training is necessary to effectively protect intersectional groups . However , such labels are scarce on our ImageNet dataset , with less than 10 % of training datapoints having such information . This leaves bias mitigation algorithms prone to overfitting on the few available attribute labels . Second , bias mitigation algorithms often have a model size and complexity that scale linearly with the number of protected groups , which , in turn , explode combinatorially due to intersectionality . We address these challenges by proposing Knowledge Distillation of Independent Models as Regularization ( DIR ) . DIR uses group-specific teacher classifiers to train a single student classifier that , at each datapoint , mimics the output of the teacher classifier corresponding to the datapoint ’ s groundtruth group . 
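A minimal sketch of the distillation idea described above: for each datapoint, the student is pulled toward the output of the teacher trained on that datapoint's group. The KL objective, toy logits, and shapes here are illustrative assumptions; the paper's exact loss may differ.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dir_loss(student_logits, teacher_logits):
    """Mean KL(teacher || student) between per-sample teacher and student outputs."""
    s, t = softmax(student_logits), softmax(teacher_logits)
    return float(np.mean(np.sum(t * (np.log(t) - np.log(s)), axis=-1)))

rng = np.random.default_rng(0)
n, n_groups, n_classes = 6, 3, 4
group_ids = np.array([0, 1, 2, 0, 1, 2])          # each sample's intersectional group
student_logits = rng.standard_normal((n, n_classes))
# teacher_logits[g, i] = logits of the group-g teacher evaluated on sample i.
teacher_logits = rng.standard_normal((n_groups, n, n_classes))
# Select, for every sample, the teacher matching its ground-truth group.
matched_teacher = teacher_logits[group_ids, np.arange(n)]
loss = dir_loss(student_logits, matched_teacher)
```

Note that only one student is evaluated at inference time; the per-group teachers appear only inside the training objective, which is what keeps inference cost constant in the number of groups.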
DIR , whose model complexity is independent of the number of protected attributes ( k ) , is an efficient alternative to existing bias mitigation algorithms which scale linearly with k. DIR also provides a regularization scheme for other bias mitigation techniques , reducing overfitting by implicitly incorporating attribute label information into learning objectives . Using DIR , we can significantly mitigate bias towards intersectional groups on ImageNet , empirically demonstrating a 22 % reduction in bias amplification over standard empirical risk minimization . 2 RELATED WORK . Algorithmic Fairness Prior works have explored algorithmic bias in a number of real-world applications . Deep learning models , including commercial image classifiers ( Buolamwini & Gebru , 2018 ) and natural language processing ( Alvi et al. , 2018 ; Bolukbasi et al. , 2016 ; Garg et al. , 2018 ) , have been the subject of significant scrutiny . Recent literature has proposed methods for bias mitigation ( Edwards & Storkey , 2015 ; Ramaswamy et al. , 2021 ; Ryu et al. , 2018 ; Wang et al. , 2020 ; Zhao et al. , 2017 ) , approximately optimizing metrics like worst-case accuracy ( weighted on the worst-off group ) , mean accuracy ( balanced over protected groups ) , bias amplification scores ( Zhao et al. , 2017 ) , and intersectional bias scores ( Foulds et al. , 2020 ) . Most build on existing techniques in classical fairness , such as reductions approaches , adversarial models , fairness through awareness , and importance weighting ( Agarwal et al. , 2018 ; Saerens et al. , 2002 ; Dwork et al. , 2012 ; Zhang et al. , 2018 ) . Many also borrow techniques from robust learning ( Adragna et al. , 2020 ; Liu et al. , 2021 ; Sagawa et al. , 2020 ) , causal inference ( Arjovsky et al. , 2020 ; Creager et al. , 2021 ; Kusner et al. , 2017 ; Madras et al. , 2019 ) , and representation learning ( Pezeshki et al. , 2020 ) .
Intersectional Multi-Attribute Fairness One of our contributions is replicating existing results ( Wang et al. , 2020 ; Shrestha et al. , 2021 ; Liu et al. , 2021 ; Sagawa et al. , 2020 ) and extending their empirical analysis to analogous multi-attribute settings . Shrestha et al . ( 2021 ) is the most closely related work on multi-attribute fairness , but only compares existing algorithms against new sets of datasets . In contrast , we focus on the ImageNet and CelebA datasets and observe algorithms as we tune the number of protected attributes ( on a fixed dataset ) . We reach conflicting findings with Shrestha et al . ( 2021 ) . While methods that do not access protected attribute labels may appear to be effective in certain single-attribute settings , their performance deteriorates in multi-attribute settings . We also find that a reweighting baseline is competitive with methods proposed by Wang et al . ( 2020 ) . Beyond the deep learning settings considered by our paper and the aforementioned literature , Kang et al . ( 2021 ) considers a variational method for multi-attribute fairness in classical learning settings . Fairness with Partial Unknown Attributes In many settings , it may not be practical to label the protected attributes of all datapoints . The setting where only part of the dataset has attribute labels has been explored in previous works : Dai & Wang ( 2021 ) targets a specific graph neural network setting and Ho et al . ( 2020 ) provides a basic analysis of adversarial fairness . Some works have looked at the case where no labels are available ( Chen et al. , 2019 ; Hashimoto et al. , 2018 ) . Others have looked at learning proxy labels from a different task ( Kallus et al. , 2020 ; Awasthi et al. , 2021 ) . Our work focuses on the partial unknown attributes case and provides an in-depth analysis with extensive experiments and broader coverage of methodologies . 3 PRELIMINARIES .
We formalize our problem as learning a model h : X → Y , where X is the input space and Y a finite label space . We seek to learn an h that is fair with respect to an m-dimensional protected attribute vector A , where m is the number of protected attributes . Fairness datasets thus consist of tuples ( x , a , y ) with image x ∼ X , protected attribute vector a ∈ A , and single/multi-label label y ∈ Y . For example , a protected attribute vector might look like a = ( male , caucasian , teenager , not veteran ) . For the entirety of this paper , “ labeled/unlabeled ” refers to whether the protected attribute labels A are available—we always assume the input features X and class labels Y are available . Unlabeled data refers to datapoints where X and Y are available , but A is not . To be fair with respect to the protected attributes A , we want to be fair to the intersectional groups G arising from A . The number of such groups is generally combinatorial in m. An example is shown in Figure 1 ( a ) . We formalize fairness among groups by reviewing two common definitions . First , demographic parity concerns parity in Pr ( ŷ | g ) , the probability of predicting a label ŷ ∈ Y for a datapoint belonging to a group g ∈ G. This definition of fairness requires that , over some fixed data distribution , the predictions of a model h are probabilistically independent of the protected group membership . Formally , ∀g , g′ ∈ G , ŷ ∈ Y : Pr ( ŷ | g ) = Pr ( ŷ | g′ ) . This objective is generally more applicable to settings where the predicted class Y is a decision ( e.g. , issuing a loan ) . Second , the equalized odds definition refines the demographic parity constraint to enforce parity in Pr ( ŷ | g , y ) , the probability of predicting a label ŷ ∈ Y for a datapoint with true label y ∈ Y belonging to group g ∈ G. This definition enforces , for instance , equivalent accuracies for each group . Formally , ∀g , g′ ∈ G , ŷ , y ∈ Y : Pr ( ŷ | g , y ) = Pr ( ŷ | g′ , y ) .
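The two definitions above translate directly into empirical gap metrics over a finite sample; a minimal sketch with toy binary predictions (all data illustrative):

```python
import numpy as np

def demographic_parity_gap(y_hat, g):
    """Largest difference across groups in Pr(y_hat = 1 | group)."""
    rates = [y_hat[g == grp].mean() for grp in np.unique(g)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_hat, y, g):
    """Worst per-label gap, i.e., parity of Pr(y_hat = 1 | group, y)."""
    return max(demographic_parity_gap(y_hat[y == lbl], g[y == lbl])
               for lbl in np.unique(y))

# Toy binary predictions for two groups.
y_hat = np.array([1, 1, 0, 0, 1, 0, 1, 1])
y     = np.array([1, 1, 0, 0, 1, 0, 0, 1])
g     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
dp = demographic_parity_gap(y_hat, g)  # |3/4 - 2/4| = 0.25
eo = equalized_odds_gap(y_hat, y, g)   # driven by the y = 0 slice: 0.5
```

In the intersectional setting, `g` would index the combinatorially many groups G rather than a single binary attribute, and sparsely populated groups make these per-group rates noisy to estimate.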
| This paper conducts an empirical analysis of existing bias mitigation methods on two large datasets, CelebA and ImageNet, where there are multiple sensitive attributes and some protected attribute labels are unavailable. The results show that existing methods can mitigate intersectional bias at scale, but methods that do not use attribute labels generalize poorly. This paper further proposes Knowledge Distillation of Independent Models as Regularization (DIR), which can also be used to augment other bias mitigation algorithms. | SP:a19aa343577a763a6beda431383cfc70baa31e9d |
Scaling Fair Learning to Hundreds of Intersectional Groups | 1 INTRODUCTION . In many real-world applications , there is a significant potential for harm when the predictive properties and performance of machine learning models vary across different demographic populations . This is indeed the case for many applications including facial analysis ( Buolamwini & Gebru , 2018 ) , ad delivery ( Sweeney , 2013 ) and search engines ( Noble , 2018 ) . Recent works aim to address this by designing learning algorithms that guarantee or approximate formal notions of algorithmic fairness ( Mehrabi et al. , 2021 ) . However , there still remains a gap between theory and practice . This gap is especially pronounced in the case of intersectional fairness . In the real world , it is critical to simultaneously protect multiple attributes such as gender , skin color and age . Due to increasing disadvantage along the intersecting dimensions of protected attributes ( Crenshaw , 1989 ; Foulds et al. , 2020 ) , it is also important to protect the intersections of these attributes . However , there has been scarce work on intersectional and subgroup fairness in deep learning contexts , with a large portion of the literature still focusing on single ( binary ) protected attribute settings , such as protecting the Male/Not-Male attribute in the CelebA Dataset ( Liu et al. , 2015 ) . In this work , we focus on intersectional fairness and seek to scale deep learning bias mitigation techniques to multiple protected attributes with hundreds of intersectional groups . Approaches such as training separate classifiers for each group ( Wang et al. , 2020 ) , upweighting the losses of samples based on the group size , optimizing for the worst-case group outcome ( Sagawa et al. , 2020 ) , or upweighting errors in early epochs ( Liu et al. , 2021 ) have all been shown to be effective in the single attribute setting .
However , it is not clear whether they are effective given the intersection of multiple protected attributes , where there is a combinatorial number of protected groups and fairness for individual protected attributes does not imply intersectional fairness ( Figure 1b ) . We address the question of how to scale existing bias mitigation techniques to this important but challenging setting . Summary of Contributions : 1 . We perform a thorough empirical analysis of existing deep learning bias mitigation techniques to explore their applicability in multi-attribute fairness settings . We find that as the number of protected attributes ( k ) increases , attribute labels become necessary to mitigate biases . 2 . We conduct the first study of bias mitigation on the ImageNet People Subtree , with 196 protected groups but fewer than 10 % of training data having protected attribute labels . In this challenging setting , we show existing bias mitigation methods can reduce bias amplification ( Zhao et al. , 2017 ) by 9 % against empirical risk minimization ( ERM ) . We identify two key issues with scaling up to so many intersectional groups : model complexity and overfitting to limited attribute labels . 3 . We address these scaling challenges by proposing a novel regularization method : Knowledge Distillation of Independent Models as Regularization ( DIR ) . DIR regularizes a single student classifier with teacher classifiers trained specifically for each intersectional group to implicitly incorporate attribute label information whenever it ’ s available . In contrast to existing techniques with complexity linear in the number of protected groups , DIR has a constant inference complexity and model size when used as a stand-alone bias mitigation algorithm . We further demonstrate that DIR can regularize other bias mitigation algorithms , reducing bias amplification by an additional 15 % on ImageNet . 
We first conduct a thorough evaluation of existing bias mitigation algorithms through a series of fair learning tasks on the CelebA dataset ( Liu et al. , 2015 ) . Each round introduces a new protected attribute and evaluates whether existing methods can protect the resulting intersectional groups . Our findings demonstrate that , even with hundreds of intersectional groups , there are bias mitigation algorithms which can significantly reduce bias . However , they all require access to protected attribute labels during training . While bias mitigation methods which do not require attribute labels are occasionally effective in single-attribute settings , as suggested in prior literature ( Liu et al. , 2021 ; Shrestha et al. , 2021 ) , we show they are not effective in multi-attribute scenarios . We then consider the more challenging task of learning a fair classifier on the ImageNet People Subtree , which Yang et al . ( 2020 ) labels for 196 intersectional groups . This setting is especially challenging for two reasons . First , our previous results suggest that access to protected attribute labels during training is necessary to effectively protect intersectional groups . However , such labels are scarce on our ImageNet dataset , with less than 10 % of training datapoints having such information . This leaves bias mitigation algorithms prone to overfitting on the few available attribute labels . Second , bias mitigation algorithms often have a model size and complexity that scale linearly with the number of protected groups , which , in turn , explode combinatorially due to intersectionality . We address these challenges by proposing Knowledge Distillation of Independent Models as Regularization ( DIR ) . DIR uses group-specific teacher classifiers to train a single student classifier that , at each datapoint , mimics the output of the teacher classifier corresponding to the datapoint ’ s groundtruth group . 
DIR , whose model complexity is independent of the number of protected attributes ( k ) , is an efficient alternative to existing bias mitigation algorithms which scale linearly with k. DIR also provides a regularization scheme for other bias mitigation techniques , reducing overfitting by implicitly incorporating attribute label information into learning objectives . Using DIR , we can significantly mitigate bias towards intersectional groups on ImageNet , empirically demonstrating a 22 % reduction in bias amplification over standard empirical risk minimization . 2 RELATED WORK . Algorithmic Fairness Prior works have explored algorithmic bias in a number of real-world applications . Deep learning models , including commercial image classifiers ( Buolamwini & Gebru , 2018 ) and natural language processing ( Alvi et al. , 2018 ; Bolukbasi et al. , 2016 ; Garg et al. , 2018 ) , have been the subject of significant scrutiny . Recent literature has proposed methods for bias mitigation ( Edwards & Storkey , 2015 ; Ramaswamy et al. , 2021 ; Ryu et al. , 2018 ; Wang et al. , 2020 ; Zhao et al. , 2017 ) , approximately optimizing metrics like worst-case accuracy ( weighted on the worst-off group ) , mean accuracy ( balanced over protected groups ) , bias amplification scores ( Zhao et al. , 2017 ) , and intersectional bias scores ( Foulds et al. , 2020 ) . Most build on existing techniques in classical fairness , such as reductions approaches , adversarial models , fairness through awareness , and importance weighting ( Agarwal et al. , 2018 ; Saerens et al. , 2002 ; Dwork et al. , 2012 ; Zhang et al. , 2018 ) . Many also borrow techniques from robust learning ( Adragna et al. , 2020 ; Liu et al. , 2021 ; Sagawa et al. , 2020 ) , causal inference ( Arjovsky et al. , 2020 ; Creager et al. , 2021 ; Kusner et al. , 2017 ; Madras et al. , 2019 ) , and representation learning ( Pezeshki et al. , 2020 ) .
Intersectional Multi-Attribute Fairness One of our contributions is replicating existing results ( Wang et al. , 2020 ; Shrestha et al. , 2021 ; Liu et al. , 2021 ; Sagawa et al. , 2020 ) and extending their empirical analysis to analogous multi-attribute settings . Shrestha et al . ( 2021 ) is the most closely related work on multi-attribute fairness , but only compares existing algorithms against new sets of datasets . In contrast , we focus on the ImageNet and CelebA datasets and observe algorithms as we tune the number of protected attributes ( on a fixed dataset ) . We reach conflicting findings with Shrestha et al . ( 2021 ) . While methods that do not access protected attribute labels may appear to be effective in certain single-attribute settings , their performance deteriorates in multi-attribute settings . We also find that a reweighting baseline is competitive with methods proposed by Wang et al . ( 2020 ) . Beyond the deep learning settings considered by our paper and the aforementioned literature , Kang et al . ( 2021 ) considers a variational method for multi-attribute fairness in classical learning settings . Fairness with Partial Unknown Attributes In many settings , it may not be practical to label the protected attributes of all datapoints . The setting where only part of the dataset has attribute labels has been explored in previous works : Dai & Wang ( 2021 ) targets a specific graph neural network setting and Ho et al . ( 2020 ) provides a basic analysis of adversarial fairness . Some works have looked at the case where no labels are available ( Chen et al. , 2019 ; Hashimoto et al. , 2018 ) . Others have looked at learning proxy labels from a different task ( Kallus et al. , 2020 ; Awasthi et al. , 2021 ) . Our work focuses on the partial unknown attributes case and provides an in-depth analysis with extensive experiments and broader coverage of methodologies . 3 PRELIMINARIES .
We formalize our problem as learning a model h : X → Y , where X is the input space and Y a finite label space . We seek to learn an h that is fair with respect to an m-dimensional protected attribute vector A , where m is the number of protected attributes . Fairness datasets thus consist of tuples ( x , a , y ) with image x ∼ X , protected attribute vector a ∈ A , and single/multi-label label y ∈ Y . For example , a protected attribute vector might look like a = ( male , caucasian , teenager , not veteran ) . For the entirety of this paper , “ labeled/unlabeled ” refers to whether the protected attribute labels A are available—we always assume the input features X and class labels Y are available . Unlabeled data refers to datapoints where X and Y are available , but A is not . To be fair with respect to the protected attributes A , we want to be fair to the intersectional groups G arising from A . The number of such groups is generally combinatorial in m. An example is shown in Figure 1 ( a ) . We formalize fairness among groups by reviewing two common definitions . First , demographic parity concerns parity in Pr ( ŷ | g ) , the probability of predicting a label ŷ ∈ Y for a datapoint belonging to a group g ∈ G. This definition of fairness requires that , over some fixed data distribution , the predictions of a model h are probabilistically independent of the protected group membership . Formally , ∀g , g′ ∈ G , ŷ ∈ Y : Pr ( ŷ | g ) = Pr ( ŷ | g′ ) . This objective is generally more applicable to settings where the predicted class Y is a decision ( e.g. , issuing a loan ) . Second , the equalized odds definition refines the demographic parity constraint to enforce parity in Pr ( ŷ | g , y ) , the probability of predicting a label ŷ ∈ Y for a datapoint with true label y ∈ Y belonging to group g ∈ G. This definition enforces , for instance , equivalent accuracies for each group . Formally , ∀g , g′ ∈ G , ŷ , y ∈ Y : Pr ( ŷ | g , y ) = Pr ( ŷ | g′ , y ) .
| This paper studies the fairness of deep learning models in classification tasks with a large number of intersectional groups. The paper has two primary contributions: 1. The paper includes an empirical study that shows the inherent difficulty of this setting (i.e., due to the lack of labels) and the limitations of existing approaches. 2. The paper presents a new approach for training deep models in settings with many intersectional groups, called Knowledge Distillation of Independent Models as Regularization (DIR). | SP:a19aa343577a763a6beda431383cfc70baa31e9d |
Scaling Fair Learning to Hundreds of Intersectional Groups | 1 INTRODUCTION . In many real-world applications , there is a significant potential for harm when the predictive properties and performance of machine learning models vary across different demographic populations . This is indeed the case for many applications including facial analysis ( Buolamwini & Gebru , 2018 ) , ad delivery ( Sweeney , 2013 ) and search engines ( Noble , 2018 ) . Recent works aim to address this by designing learning algorithms that guarantee or approximate formal notions of algorithmic fairness ( Mehrabi et al. , 2021 ) . However , there still remains a gap between theory and practice . This gap is especially pronounced in the case of intersectional fairness . In the real-world , it is critical to simultaneously protect multiple attributes such as gender , skin color and age . Due to increasing disadvantage along the intersecting dimensions of protected attributes ( Crenshaw , 1989 ; Foulds et al. , 2020 ) , it is also important to protect the intersections of these attributes . However , there has been scarce work on intersectional and subgroup fairness in deep learning contexts , with a large portion of the literature still focusing on single ( binary ) protected attribute settings , such as protecting the Male/Not-Male attribute in the CelebA Dataset ( Liu et al. , 2015 ) . In this work , we focus on intersectional fairness and seek to scale deep learning bias mitigation techniques to multiple protected attributes with hundreds of intersectional groups . Approaches such as training separate classifiers for each group ( Wang et al. , 2020 ) , upweighting the losses of samples based on the group size , optimizing for the worst-case group outcome ( Sagawa et al. , 2020 ) , or upweighting errors in early epochs ( Liu et al. , 2021 ) have all been shown to be effective in the single attribute setting . 
However , it is not clear whether they are effective given the intersection of multiple protected attributes , where there is a combinatorial number of protected groups and fairness for individual protected attributes does not imply intersectional fairness ( Figure 1b ) . We address the question of how to scale existing bias mitigation techniques to this important but challenging setting . Summary of Contributions : 1 . We perform a thorough empirical analysis of existing deep learning bias mitigation techniques to explore their applicability in multi-attribute fairness settings . We find that as the number of protected attributes ( k ) increases , attribute labels become necessary to mitigate biases . 2 . We conduct the first study of bias mitigation on the ImageNet People Subtree , with 196 protected groups but fewer than 10 % of training data having protected attribute labels . In this challenging setting , we show existing bias mitigation methods can reduce bias amplification ( Zhao et al. , 2017 ) by 9 % against empirical risk minimization ( ERM ) . We identify two key issues with scaling up to so many intersectional groups : model complexity and overfitting to limited attribute labels . 3 . We address these scaling challenges by proposing a novel regularization method : Knowledge Distillation of Independent Models as Regularization ( DIR ) . DIR regularizes a single student classifier with teacher classifiers trained specifically for each intersectional group to implicitly incorporate attribute label information whenever it ’ s available . In contrast to existing techniques with complexity linear in the number of protected groups , DIR has a constant inference complexity and model size when used as a stand-alone bias mitigation algorithm . We further demonstrate that DIR can regularize other bias mitigation algorithms , reducing bias amplification by an additional 15 % on ImageNet . 
We first conduct a thorough evaluation of existing bias mitigation algorithms through a series of fair learning tasks on the CelebA dataset ( Liu et al. , 2015 ) . Each round introduces a new protected attribute and evaluates whether existing methods can protect the resulting intersectional groups . Our findings demonstrate that , even with hundreds of intersectional groups , there are bias mitigation algorithms which can significantly reduce bias . However , they all require access to protected attribute labels during training . While bias mitigation methods which do not require attribute labels are occasionally effective in single-attribute settings , as suggested in prior literature ( Liu et al. , 2021 ; Shrestha et al. , 2021 ) , we show they are not effective in multi-attribute scenarios . We then consider the more challenging task of learning a fair classifier on the ImageNet People Subtree , which Yang et al . ( 2020 ) labels for 196 intersectional groups . This setting is especially challenging for two reasons . First , our previous results suggest that access to protected attribute labels during training is necessary to effectively protect intersectional groups . However , such labels are scarce on our ImageNet dataset , with less than 10 % of training datapoints having such information . This leaves bias mitigation algorithms prone to overfitting on the few available attribute labels . Second , bias mitigation algorithms often have a model size and complexity that scale linearly with the number of protected groups , which , in turn , explode combinatorially due to intersectionality . We address these challenges by proposing Knowledge Distillation of Independent Models as Regularization ( DIR ) . DIR uses group-specific teacher classifiers to train a single student classifier that , at each datapoint , mimics the output of the teacher classifier corresponding to the datapoint ’ s groundtruth group . 
DIR , whose model complexity is independent of the number of protected attributes ( k ) , is an efficient alternative to existing bias mitigation algorithms which scale linearly with k. DIR also provides a regularization scheme for other bias mitigation techniques , reducing overfitting by implicitly incorporating attribute label information into learning objectives . Using DIR , we can significantly mitigate bias towards intersectional groups on ImageNet , empirically demonstrating a 22 % reduction in bias amplification over standard empirical risk minimization . 2 RELATED WORK . Algorithmic Fairness Prior works have explored algorithmic bias in a number of real-world applications . Deep learning models , including commercial image classifiers ( Buolamwini & Gebru , 2018 ) and natural langauge processing ( Alvi et al. , 2018 ; Bolukbasi et al. , 2016 ; Garg et al. , 2018 ) , have been the subject of significant scrutiny . Recent literature have proposed methods for bias mitigation ( Edwards & Storkey , 2015 ; Ramaswamy et al. , 2021 ; Ryu et al. , 2018 ; Wang et al. , 2020 ; Zhao et al. , 2017 ) , approximately optimizing metrics like worst-case accuracy ( weighted on the worst-off group ) , mean accuracy ( balanced over protected groups ) , bias amplification scores ( Zhao et al. , 2017 ) , and intersectional bias scores ( Foulds et al. , 2020 ) . Most build on existing techniques in classical fairness , such as reductions approaches , adversarial models , fairness through awareness , and importance weighting ( Agarwal et al. , 2018 ; Saerens et al. , 2002 ; Dwork et al. , 2012 ; Zhang et al. , 2018 ) . Many also borrow on techniques from robust learning ( Adragna et al. , 2020 ; Liu et al. , 2021 ; Sagawa et al. , 2020 ) , causal inference ( Arjovsky et al. , 2020 ; Creager et al. , 2021 ; Kusner et al. , 2017 ; Madras et al. , 2019 ) , and representation learning ( Pezeshki et al. , 2020 ) . 
Intersectional Multi-Attribute Fairness One of our contributions is replicating existing results ( Wang et al. , 2020 ; Shrestha et al. , 2021 ; Liu et al. , 2021 ; Sagawa et al. , 2020 ) and extending their empirical analysis to analogous multi-attribute settings . Shrestha et al . ( 2021 ) is the most closely related work on multi-attribute fairness , but only compares existing algorithms against new sets of datasets . In contrast , we focus on the ImageNet and CelebA datasets and observe algorithms as we tune the number of protected attributes ( on a fixed dataset ) . Our findings conflict with those of Shrestha et al . ( 2021 ) . While methods that do not access protected attribute labels may appear to be effective in certain single-attribute settings , their performance deteriorates in multi-attribute settings . We also find that a reweighting baseline is competitive with methods proposed by Wang et al . ( 2020 ) . Beyond the deep learning settings considered by our paper and the aforementioned literature , Kang et al . ( 2021 ) considers a variational method for multi-attribute fairness in classical learning settings . Fairness with Partial Unknown Attributes In many settings , it may not be practical to label the protected attributes of all datapoints . The setting where only part of the dataset has attribute labels has been explored in previous works : Dai & Wang ( 2021 ) targets a specific graph neural network setting and Ho et al . ( 2020 ) provides a basic analysis of adversarial fairness . Some works have looked at the case where no labels are available ( Chen et al. , 2019 ; Hashimoto et al. , 2018 ) . Others have looked at learning proxy labels from a different task ( Kallus et al. , 2020 ; Awasthi et al. , 2021 ) . Our work focuses on the partial unknown attributes case and provides an in-depth analysis with extensive experiments and broader coverage of methodologies . 3 PRELIMINARIES .
We formalize our problem as learning a model h : X → Y , where X is the input space and Y a finite label space . We seek to learn an h that is fair with respect to an m-dimensional protected attribute vector A , where m is the number of protected attributes . Fairness datasets thus consist of tuples ( x , a , y ) with image x ∈ X , protected attribute vector a ∈ A , and single- or multi-label y ∈ Y . For example , a protected attribute vector might look like a = ( male , caucasian , teenager , not veteran ) . For the entirety of this paper , “ labeled/unlabeled ” refers to whether the protected attribute labels A are available—we always assume the input features X and class labels Y are available . Unlabeled data refers to datapoints where X and Y are available , but A is not . To be fair with respect to the protected attributes A , we want to be fair to the intersectional groups G arising from A . The number of such groups is generally combinatorial in m. An example is shown in Figure 1 ( a ) . We formalize fairness among groups by reviewing two common definitions . First , demographic parity concerns parity in Pr ( ŷ | g ) , the probability of predicting a label ŷ ∈ Y for a datapoint belonging to a group g ∈ G. This definition of fairness requires that , over some fixed data distribution , the predictions of a model h are probabilistically independent of the protected group membership . Formally , ∀g , g′ ∈ G , ŷ ∈ Y : Pr ( ŷ | g ) = Pr ( ŷ | g′ ) . This objective is generally more applicable to settings where the predicted class Y is a decision ( e.g. , issuing a loan ) . Second , the equalized odds definition refines the demographic parity constraint to enforce parity in Pr ( ŷ | g , y ) , the probability of predicting a label ŷ ∈ Y , for a datapoint with true label y ∈ Y belonging to group g ∈ G. This definition enforces , for instance , equivalent accuracies for each group . Formally , ∀g , g′ ∈ G , ŷ , y ∈ Y : Pr ( ŷ | g , y ) = Pr ( ŷ | g′ , y ) .
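Both definitions can be checked empirically by measuring the largest gap in the relevant conditional probabilities across groups. The sketch below is an illustration of the definitions, not a metric from the paper; the function names and the max-over-labels aggregation are assumptions.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest spread, over predicted labels yhat, of Pr(yhat | g) across
    groups g — zero iff empirical demographic parity holds."""
    labels, group_ids = np.unique(y_pred), np.unique(groups)
    rates = np.array([[np.mean(y_pred[groups == g] == y) for y in labels]
                      for g in group_ids])          # (num_groups, num_labels)
    return float((rates.max(axis=0) - rates.min(axis=0)).max())

def equalized_odds_gap(y_pred, y_true, groups):
    """Same spread for Pr(yhat | g, y): condition on each true label y
    and take the worst case over y."""
    gaps = [demographic_parity_gap(y_pred[y_true == y], groups[y_true == y])
            for y in np.unique(y_true)]
    return float(max(gaps))
```

For instance, a classifier that predicts 1 for one group and 0 for another has a demographic parity gap of 1, while one whose prediction rates match across groups has a gap of 0.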
| This paper provides an empirical investigation of deep learning bias mitigation methods, focusing on two problems: intersectionality and missing sensitive attribute labels. The paper evaluates several bias mitigation approaches on the CelebA dataset and on ImageNet, the latter having few sensitive attribute labels. The main takeaways from their empirical analysis are that the best performing mitigation method depends on the context, and that mitigation methods which operate without using the sensitive attribute do not perform well in terms of intersectionality. The paper also proposes a new distillation-based approach to resolve the runtime complexity problem that models like Domain Independent exhibit. On ImageNet their method significantly reduces bias amplification but there are no differences in group-reweighted accuracy or intersectional bias. | SP:a19aa343577a763a6beda431383cfc70baa31e9d |
Reinforcement Learning State Estimation for High-Dimensional Nonlinear Systems | In high-dimensional nonlinear systems such as fluid flows , the design of state estimators such as Kalman filters relies on a reduced-order model ( ROM ) of the dynamics . However , ROMs are prone to large errors , which negatively affects the performance of the estimator . Here , we introduce the reinforcement learning reduced-order estimator ( RL-ROE ) , a ROM-based estimator in which the data assimilation feedback term is given by a nonlinear stochastic policy trained through reinforcement learning . The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM , while still taking advantage of the imperfect knowledge of the dynamics . We show that the trained RL-ROE is able to outperform a Kalman filter designed using the same ROM , and displays robust estimation performance with respect to different reference trajectories and initial state estimates . 1 INTRODUCTION . Active control of turbulent flows has the potential to cut down emissions across a range of industries through drag reduction in aircraft and ships or improved efficiency of heating and air-conditioning systems , among many other examples ( Brunton & Noack , 2015 ) . But real-time feedback control requires inferring the state of the system from sparse measurements using an algorithm called a state estimator , which typically relies on a model for the underlying dynamics ( Simon , 2006 ) . Among state estimators , the Kalman filter is by far the most well-known thanks to its optimality for linear systems , which has led to its widespread use in numerous applications ( Kalman , 1960 ; Zarchan & Musoff , 2015 ) . However , continuous systems such as fluid flows are governed by partial differential equations ( PDEs ) which , when discretized , yield high-dimensional and oftentimes nonlinear dynamical models with hundreds or thousands of state variables .
These high-dimensional models are too expensive to integrate with common state estimation techniques , including the Kalman filter or its numerous extensions . Thus , state estimators are instead designed based on a reduced-order model ( ROM ) of the underlying dynamics ( Barbagallo et al. , 2009 ; Rowley & Dawson , 2017 ) . A big challenge is that ROMs provide a simplified and imperfect description of the dynamics , which negatively affects the performance of the state estimator . One potential solution is to improve the accuracy of the ROM itself through the inclusion of additional closure terms ( Ahmed et al. , 2021 ) . In this paper , we leave the ROM untouched and instead propose a new design paradigm for the estimator itself , which we call a reinforcement-learning reduced-order estimator ( RL-ROE ) . The RL-ROE is constructed from the ROM in an analogous way to a Kalman filter , with the crucial difference that the linear filter gain function is replaced by a nonlinear stochastic policy trained through reinforcement learning ( RL ) . The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM , while still taking advantage of the imperfect knowledge of the dynamics . We describe how we frame the problem as a stationary Markov decision process in order to enable RL training , which is non-trivial since the RL-ROE must be able to estimate time-varying states . Finally , we show that the trained RL-ROE is able to outperform a Kalman filter designed using the same ROM , and displays robust estimation performance with respect to different reference trajectories and initial state estimates . The RL-ROE is the first application of reinforcement learning to state estimation for high-dimensional systems . Under review as a conference paper at ICLR 2022 2 PROBLEM FORMULATION . 2.1 SETUP . 
Consider the discrete-time nonlinear system given by zk+1 = f ( zk ) , ( 1a ) yk = Czk , ( 1b ) where zk ∈ Rn and yk ∈ Rp are respectively the state and measurement at time k , f : Rn → Rn is a time-invariant nonlinear map from current to next state , and C ∈ Rp×n is a linear map from state to measurement . In this study , we assume that the dynamics given in ( 1 ) are obtained from the numerical discretization of a nonlinear partial differential equation ( PDE ) , which typically requires a large number n of state dimensions . Note that we do not account for exogenous control inputs to the system , which will be studied in future extensions of the present work . 2.2 REDUCED-ORDER MODEL . Because the high dimensionality of ( 1 ) makes online prediction and control impractical , it is instead customary to formulate a reduced-order model ( ROM ) of the dynamics ( Rowley & Dawson , 2017 ) . First , one chooses a suitable linearly independent set of modes { u1 , . . . , ur } , where ui ∈ Rn , defining an r-dimensional subspace of Rn in which most of the dynamics is assumed to take place . Stacking these modes as columns of a matrix U ∈ Rn×r , one can then express zk ≈ Uxk , where the reduced-order state xk ∈ Rr represents the coordinates of zk in the subspace . Finally , one finds a ROM for the dynamics of xk , which is vastly cheaper to evolve than ( 1 ) when r ≪ n. There exist various ways to find an appropriate set of modes U and corresponding ROM for the dynamics of xk ( Taira et al. , 2017 ) . In this work , we employ the Dynamic Mode Decomposition ( DMD ) , a purely data-driven algorithm that has found wide applications in fields ranging from fluid dynamics to neuroscience ( Schmid , 2010 ; Kutz et al. , 2016 ) . Starting with a collection of snapshots Z = { z0 , . . .
, zm } collected along a trajectory of ( 1a ) , the DMD seeks a best-fit linear model of the dynamics in the form of a matrix A ∈ Rn×n such that zk+1 ≈ Azk , and computes the modes U as the r leading principal component analysis ( PCA ) modes of Z . The transformation zk ≈ Uxk and the orthogonality of U then yield a linear discrete-time ROM of the form xk+1 = Arxk + wk , ( 2a ) yk = Crxk + vk , ( 2b ) where Ar = UTAU ∈ Rr×r and Cr = CU ∈ Rp×r are the reduced-order state-transition and observation models , respectively . In order to account for the neglected PCA modes of Z as well as the unmodeled dynamics incurred by the linear approximation zk+1 ≈ Azk , we add ( unknown ) non-Gaussian process noise wk and observation noise vk . Additional details regarding the calculation of Ar and U are provided in Appendix A . 2.3 REDUCED-ORDER ESTIMATOR . This paper uses reinforcement learning ( RL ) to solve the following estimation problem : given a sequence of measurements { y0 , · · · , yk } from a reference trajectory { z0 , · · · , zk } of ( 1 ) and knowing the ROM ( 2 ) defined by Ar , Cr and U , we want to estimate the high-dimensional state zk at current time k. To this effect , we design a reduced-order estimator ( ROE ) of the form x̂k = Arx̂k−1 + ak , ( 3a ) ak ∼ πθ ( · | yk , x̂k−1 ) , ( 3b ) where x̂k is an estimate of the reduced-order state xk , and ak ∈ Rr is an action sampled from a stochastic policy πθ which depends on the current measurement yk and the previous state estimate x̂k−1 . The subscript θ denotes the set of parameters that defines the stochastic policy , whose goal is to minimize the mean square error E [ ‖zk − ẑk‖2 ] over a range of reference trajectories and initial reduced-order state estimates . Here , ẑk = Ux̂k denotes the high-dimensional state estimate reconstructed from x̂k .
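The ROM construction of Section 2.2 can be sketched end-to-end: take the r leading PCA modes of the snapshot matrix, then form Ar = UTAU for the best-fit linear map without ever building the full n × n matrix A. This is a generic projected-DMD sketch, not the authors' code; the function name and the pseudoinverse-via-SVD route are assumptions.

```python
import numpy as np

def dmd_rom(Z, C, r):
    """Build the reduced-order model (2) from snapshots of system (1).

    Z : (n, m+1) snapshot matrix [z_0, ..., z_m] along one trajectory.
    C : (p, n)   observation matrix.
    r : number of retained PCA modes.

    Returns U (n, r), Ar (r, r), Cr (p, r), with Ar = U^T A U for the
    best-fit linear map z_{k+1} ~= A z_k. The full A = Y X^+ is never
    formed; it is projected directly through the truncated SVD of X.
    """
    X, Y = Z[:, :-1], Z[:, 1:]                   # snapshot pairs (z_k, z_{k+1})
    U_full, S, Vt = np.linalg.svd(X, full_matrices=False)
    U = U_full[:, :r]                            # r leading PCA modes of X
    # Ar = U^T (Y X^+) U, using X^+ = V S^{-1} U_full^T restricted to rank r.
    Ar = U.T @ Y @ Vt[:r].T @ np.diag(1.0 / S[:r])
    Cr = C @ U                                   # reduced observation model
    return U, Ar, Cr
```

When the snapshots come exactly from a linear system of dimension n and r = n, the sketch recovers that system exactly; in the intended regime r ≪ n, the discarded modes show up as the process noise wk of ( 2a ).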
A Kalman filter is a special case of such an estimator , for which the action in ( 3b ) is given by ak = Kk ( yk − CrArx̂k−1 ) , ( 4 ) with Kk ∈ Rr×p the optimal Kalman gain . Although the Kalman filter is optimal when the state-transition and observation models are known exactly , its performance suffers in the presence of unmodeled dynamics . In our case , such model errors are unavoidable due to the ROM ( 2 ) being an inherent approximation of the high-dimensional dynamics ( 1 ) , which motivates our adoption of the more general form ( 3b ) . This form retains the dependence of ak on yk and x̂k−1 but is more flexible thanks to the nonlinearity of the stochastic policy πθ , which we train with deep RL in an offline stage . The stochasticity of πθ forces the RL algorithm to explore different actions during the training process , in order to eventually find an optimal θ∗ such that E [ ‖zk − ẑk‖2 ] is minimized for various reference trajectories and initial estimates . We call the estimator constructed and trained through this process an RL-trained ROE , or RL-ROE for short . Thus , the methodology we propose consists of two steps . In a first offline stage , a ROM of the form ( 2 ) is obtained using high-dimensional snapshots zk from a single trajectory of ( 1 ) . The RL-ROE ( 3 ) is then constructed based on this ROM , and its policy πθ is trained using high-dimensional snapshots zk from multiple reference trajectories of ( 1 ) . Finally , the trained RL-ROE is deployed online to track a reference trajectory of ( 1 ) . In the online stage , the RL-ROE only requires measurements yk from the reference trajectory , and gives an estimate ẑk = Ux̂k for the high-dimensional state . In summary , our contributions in this paper are two-fold : 1 . We propose RL-ROE , a reduced-order state estimator for high-dimensional nonlinear systems .
The RL-ROE takes the form ( 3 ) , which combines two unique features : first , the state transition model Ar is a ROM of the high-dimensional dynamics ; second , the term ak that assimilates measurements is sampled from a stochastic policy πθ trained with RL . The training procedure for πθ , which involves a non-trivial reformulation of the time-varying tracking problem as a stationary Markov decision process , is described in Section 4 . 2 . The performance of the RL-ROE is compared in Section 5 with that of KF-ROE , a Kalman filter constructed from the same ROM . The comparison is performed in the context of the Burgers equation using a range of reference trajectories and initial state estimates . 3 RELATED WORK . Previous studies have already proposed designing state estimators using policies trained through reinforcement learning . Morimoto & Doya ( 2007 ) introduced an estimator of the form x̂k = f ( x̂k−1 ) + L ( x̂k−1 ) ( yk−1 − Cx̂k−1 ) , where f ( · ) is the state-transition model of the system , and the state-dependent filter gain matrix L ( x̂k−1 ) is defined using Gaussian basis functions whose parameters are learned through a variant of vanilla policy gradient . Their reward function , however , was calculated using the measurement error instead of the state estimate error , potentially limiting the performance of the trained estimator . Hu et al . ( 2020 ) proposed an estimator of the form x̂k = f ( x̂k−1 ) + L ( xk − x̂k ) ( yk − Cf ( x̂k−1 ) ) , where L ( xk − x̂k ) is approximated by neural networks trained with a modified Soft Actor-Critic algorithm ( Haarnoja et al. , 2018 ) . Although they derived convergence properties for the estimate error , the dependence of the filter gain L ( xk − x̂k ) on the reference state xk limits its practical application .
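For concreteness, the KF-ROE baseline, i.e. estimator ( 3 ) with the Kalman action ( 4 ), amounts to a standard Kalman update in the reduced space. The sketch below is illustrative: the function name is an assumption, and Q and R are design choices, since the true ROM noise is unknown and non-Gaussian.

```python
import numpy as np

def kf_roe_step(x_prev, P_prev, y, Ar, Cr, Q, R):
    """One step of the Kalman-filter baseline (KF-ROE): the recursion
    x_k = Ar x_{k-1} + a_k with the special choice (4),
    a_k = K_k (y_k - Cr Ar x_{k-1}), K_k the optimal Kalman gain.

    Q, R are assumed process/observation noise covariances (design
    choices, since the ROM noise is unknown and non-Gaussian)."""
    # Predict with the reduced-order state-transition model.
    x_pred = Ar @ x_prev
    P_pred = Ar @ P_prev @ Ar.T + Q
    # Optimal gain and measurement update.
    S = Cr @ P_pred @ Cr.T + R
    K = P_pred @ Cr.T @ np.linalg.inv(S)
    a = K @ (y - Cr @ x_pred)                    # the action of (3b)/(4)
    x_new = x_pred + a
    P_new = (np.eye(len(x_prev)) - K @ Cr) @ P_pred
    return x_new, P_new
```

The RL-ROE keeps the same recursion but replaces the line computing a with a sample ak ∼ πθ ( · | yk , x̂k−1 ) from the trained policy.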
A major difference between these past studies and our work is that they do not construct a ROM of the dynamics and only consider low-dimensional systems with four state variables at most , in comparison with the hundreds of state dimensions that our RL-ROE can handle . Therefore , RL-ROE represents the first application of reinforcement learning to state estimation for high-dimensional systems , which makes it applicable to systems governed by PDEs such as fluid flows . | The objective of the paper is to construct an estimator for the state of a high-dimensional nonlinear dynamical system given partial observations of the state. The objective is motivated by applications in fluid mechanics or turbulent flows where the state of the system, obtained by discretizing a PDE, is high-dimensional. The proposed approach has two main steps: (1) construction of a reduced order model. In particular, the paper proposes the dynamic mode decomposition method which only requires a single trajectory of the nonlinear dynamics. (2) formulating the problem of finding the estimator as an MDP and application of RL techniques to solve it. In particular, the estimator is modeled as a dynamical system driven by a stochastic control policy that depends on the current value of the estimate and the current observation. The objective function is modeled with running cost equal to the error in estimating the state and a quadratic penalty on the control. Then, the optimal control policy is learned using a policy gradient method by sampling trajectories from the system. The proposed approach is evaluated on a benchmark example that involves the Burgers equation and compared with a Kalman filter applied to the reduced order system. | SP:484cc8865184e726701daffe431bb6f37a9c2946 |
Reinforcement Learning State Estimation for High-Dimensional Nonlinear Systems | In high-dimensional nonlinear systems such as fluid flows , the design of state estimators such as Kalman filters relies on a reduced-order model ( ROM ) of the dynamics . However , ROMs are prone to large errors , which negatively affects the performance of the estimator . Here , we introduce the reinforcement learning reduced-order estimator ( RL-ROE ) , a ROM-based estimator in which the data assimilation feedback term is given by a nonlinear stochastic policy trained through reinforcement learning . The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM , while still taking advantage of the imperfect knowledge of the dynamics . We show that the trained RL-ROE is able to outperform a Kalman filter designed using the same ROM , and displays robust estimation performance with respect to different reference trajectories and initial state estimates . 1 INTRODUCTION . Active control of turbulent flows has the potential to cut down emissions across a range of industries through drag reduction in aircraft and ships or improved efficiency of heating and air-conditioning systems , among many other examples ( Brunton & Noack , 2015 ) . But real-time feedback control requires inferring the state of the system from sparse measurements using an algorithm called a state estimator , which typically relies on a model for the underlying dynamics ( Simon , 2006 ) . Among state estimators , the Kalman filter is by far the most well-known thanks to its optimality for linear systems , which has led to its widespread use in numerous applications ( Kalman , 1960 ; Zarchan & Musoff , 2015 ) . However , continuous systems such as fluid flows are governed by partial differential equations ( PDEs ) which , when discretized , yield high-dimensional and oftentimes nonlinear dynamical models with hundreds or thousands of state variables .
These high-dimensional models are too expensive to integrate with common state estimation techniques , including the Kalman filter or its numerous extensions . Thus , state estimators are instead designed based on a reduced-order model ( ROM ) of the underlying dynamics ( Barbagallo et al. , 2009 ; Rowley & Dawson , 2017 ) . A big challenge is that ROMs provide a simplified and imperfect description of the dynamics , which negatively affects the performance of the state estimator . One potential solution is to improve the accuracy of the ROM itself through the inclusion of additional closure terms ( Ahmed et al. , 2021 ) . In this paper , we leave the ROM untouched and instead propose a new design paradigm for the estimator itself , which we call a reinforcement-learning reduced-order estimator ( RL-ROE ) . The RL-ROE is constructed from the ROM in an analogous way to a Kalman filter , with the crucial difference that the linear filter gain function is replaced by a nonlinear stochastic policy trained through reinforcement learning ( RL ) . The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM , while still taking advantage of the imperfect knowledge of the dynamics . We describe how we frame the problem as a stationary Markov decision process in order to enable RL training , which is non-trivial since the RL-ROE must be able to estimate time-varying states . Finally , we show that the trained RL-ROE is able to outperform a Kalman filter designed using the same ROM , and displays robust estimation performance with respect to different reference trajectories and initial state estimates . The RL-ROE is the first application of reinforcement learning to state estimation for high-dimensional systems . Under review as a conference paper at ICLR 2022 2 PROBLEM FORMULATION . 2.1 SETUP . 
Consider the discrete-time nonlinear system given by zk+1 = f ( zk ) , ( 1a ) yk = Czk , ( 1b ) where zk ∈ Rn and yk ∈ Rp are respectively the state and measurement at time k , f : Rn → Rn is a time-invariant nonlinear map from current to next state , and C ∈ Rp×n is a linear map from state to measurement . In this study , we assume that the dynamics given in ( 1 ) are obtained from the numerical discretization of a nonlinear partial differential equation ( PDE ) , which typically requires a large number n of state dimensions . Note that we do not account for exogenous control inputs to the system , which will be studied in future extensions of the present work . 2.2 REDUCED-ORDER MODEL . Because the high dimensionality of ( 1 ) makes online prediction and control impractical , it is instead customary to formulate a reduced-order model ( ROM ) of the dynamics ( Rowley & Dawson , 2017 ) . First , one chooses a suitable linearly independent set of modes { u1 , . . . , ur } , where ui ∈ Rn , defining an r-dimensional subspace of Rn in which most of the dynamics is assumed to take place . Stacking these modes as columns of a matrix U ∈ Rn×r , one can then express zk ≈ Uxk , where the reduced-order state xk ∈ Rr represents the coordinates of zk in the subspace . Finally , one finds a ROM for the dynamics of xk , which is vastly cheaper to evolve than ( 1 ) when r ≪ n. There exist various ways to find an appropriate set of modes U and corresponding ROM for the dynamics of xk ( Taira et al. , 2017 ) . In this work , we employ the Dynamic Mode Decomposition ( DMD ) , a purely data-driven algorithm that has found wide applications in fields ranging from fluid dynamics to neuroscience ( Schmid , 2010 ; Kutz et al. , 2016 ) . Starting with a collection of snapshots Z = { z0 , . . .
, zm } collected along a trajectory of ( 1a ) , the DMD seeks a best-fit linear model of the dynamics in the form of a matrix A ∈ Rn×n such that zk+1 ≈ Azk , and computes the modes U as the r leading principal component analysis ( PCA ) modes of Z . The transformation zk ≈ Uxk and the orthogonality of U then yield a linear discrete-time ROM of the form xk+1 = Arxk + wk , ( 2a ) yk = Crxk + vk , ( 2b ) where Ar = UTAU ∈ Rr×r and Cr = CU ∈ Rp×r are the reduced-order state-transition and observation models , respectively . In order to account for the neglected PCA modes of Z as well as the unmodeled dynamics incurred by the linear approximation zk+1 ≈ Azk , we add ( unknown ) non-Gaussian process noise wk and observation noise vk . Additional details regarding the calculation of Ar and U are provided in Appendix A . 2.3 REDUCED-ORDER ESTIMATOR . This paper uses reinforcement learning ( RL ) to solve the following estimation problem : given a sequence of measurements { y0 , · · · , yk } from a reference trajectory { z0 , · · · , zk } of ( 1 ) and knowing the ROM ( 2 ) defined by Ar , Cr and U , we want to estimate the high-dimensional state zk at current time k. To this effect , we design a reduced-order estimator ( ROE ) of the form x̂k = Arx̂k−1 + ak , ( 3a ) ak ∼ πθ ( · | yk , x̂k−1 ) , ( 3b ) where x̂k is an estimate of the reduced-order state xk , and ak ∈ Rr is an action sampled from a stochastic policy πθ which depends on the current measurement yk and the previous state estimate x̂k−1 . The subscript θ denotes the set of parameters that defines the stochastic policy , whose goal is to minimize the mean square error E [ ‖zk − ẑk‖2 ] over a range of reference trajectories and initial reduced-order state estimates . Here , ẑk = Ux̂k denotes the high-dimensional state estimate reconstructed from x̂k .
A Kalman filter is a special case of such an estimator , for which the action in ( 3b ) is given by ak = Kk ( yk − CrArx̂k−1 ) , ( 4 ) with Kk ∈ Rr×p the optimal Kalman gain . Although the Kalman filter is optimal when the state-transition and observation models are known exactly , its performance suffers in the presence of unmodeled dynamics . In our case , such model errors are unavoidable due to the ROM ( 2 ) being an inherent approximation of the high-dimensional dynamics ( 1 ) , which motivates our adoption of the more general form ( 3b ) . This form retains the dependence of ak on yk and x̂k−1 but is more flexible thanks to the nonlinearity of the stochastic policy πθ , which we train with deep RL in an offline stage . The stochasticity of πθ forces the RL algorithm to explore different actions during the training process , in order to eventually find an optimal θ∗ such that E [ ‖zk − ẑk‖2 ] is minimized for various reference trajectories and initial estimates . We call the estimator constructed and trained through this process an RL-trained ROE , or RL-ROE for short . Thus , the methodology we propose consists of two steps . In a first offline stage , a ROM of the form ( 2 ) is obtained using high-dimensional snapshots zk from a single trajectory of ( 1 ) . The RL-ROE ( 3 ) is then constructed based on this ROM , and its policy πθ is trained using high-dimensional snapshots zk from multiple reference trajectories of ( 1 ) . Finally , the trained RL-ROE is deployed online to track a reference trajectory of ( 1 ) . In the online stage , the RL-ROE only requires measurements yk from the reference trajectory , and gives an estimate ẑk = Ux̂k for the high-dimensional state . In summary , our contributions in this paper are two-fold : 1 . We propose RL-ROE , a reduced-order state estimator for high-dimensional nonlinear systems .
The RL-ROE takes the form ( 3 ) , which combines two unique features : first , the state transition model Ar is a ROM of the high-dimensional dynamics ; second , the term ak that assimilates measurements is sampled from a stochastic policy πθ trained with RL . The training procedure for πθ , which involves a non-trivial reformulation of the time-varying tracking problem as a stationary Markov decision process , is described in Section 4 . 2 . The performance of the RL-ROE is compared in Section 5 with that of KF-ROE , a Kalman filter constructed from the same ROM . The comparison is performed in the context of the Burgers equation using a range of reference trajectories and initial state estimates . 3 RELATED WORK . Previous studies have already proposed designing state estimators using policies trained through reinforcement learning . Morimoto & Doya ( 2007 ) introduced an estimator of the form x̂k = f ( x̂k−1 ) + L ( x̂k−1 ) ( yk−1 − Cx̂k−1 ) , where f ( · ) is the state-transition model of the system , and the state-dependent filter gain matrix L ( x̂k−1 ) is defined using Gaussian basis functions whose parameters are learned through a variant of vanilla policy gradient . Their reward function , however , was calculated using the measurement error instead of the state estimate error , potentially limiting the performance of the trained estimator . Hu et al . ( 2020 ) proposed an estimator of the form x̂k = f ( x̂k−1 ) + L ( xk − x̂k ) ( yk − Cf ( x̂k−1 ) ) , where L ( xk − x̂k ) is approximated by neural networks trained with a modified Soft Actor-Critic algorithm ( Haarnoja et al. , 2018 ) . Although they derived convergence properties for the estimate error , the dependence of the filter gain L ( xk − x̂k ) on the reference state xk limits its practical application .
A major difference between these past studies and our work is that they do not construct a ROM of the dynamics and only consider low-dimensional systems with four state variables at most , in comparison with the hundreds of state dimensions that our RL-ROE can handle . Therefore , RL-ROE represents the first application of reinforcement learning to state estimation for high-dimensional systems , which makes it applicable to systems governed by PDEs such as fluid flows . | This paper proposes a new state estimation method based on reinforcement learning for high-dimensional systems obtained by discretizing continuous systems originally modeled by partial differential equations (PDEs). The proposed method learns to correct the model errors caused by the reduced-order model (ROM). In the experiment, the proposed method, named RL-ROE, performs better than an ordinary Kalman filter applied to the ROM. | SP:484cc8865184e726701daffe431bb6f37a9c2946 |
Reinforcement Learning State Estimation for High-Dimensional Nonlinear Systems | In high-dimensional nonlinear systems such as fluid flows , the design of state estimators such as Kalman filters relies on a reduced-order model ( ROM ) of the dynamics . However , ROMs are prone to large errors , which negatively affects the performance of the estimator . Here , we introduce the reinforcement learning reduced-order estimator ( RL-ROE ) , a ROM-based estimator in which the data assimilation feedback term is given by a nonlinear stochastic policy trained through reinforcement learning . The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM , while still taking advantage of the imperfect knowledge of the dynamics . We show that the trained RL-ROE is able to outperform a Kalman filter designed using the same ROM , and displays robust estimation performance with respect to different reference trajectories and initial state estimates . 1 INTRODUCTION . Active control of turbulent flows has the potential to cut down emissions across a range of industries through drag reduction in aircraft and ships or improved efficiency of heating and air-conditioning systems , among many other examples ( Brunton & Noack , 2015 ) . But real-time feedback control requires inferring the state of the system from sparse measurements using an algorithm called a state estimator , which typically relies on a model for the underlying dynamics ( Simon , 2006 ) . Among state estimators , the Kalman filter is by far the most well-known thanks to its optimality for linear systems , which has led to its widespread use in numerous applications ( Kalman , 1960 ; Zarchan & Musoff , 2015 ) . However , continuous systems such as fluid flows are governed by partial differential equations ( PDEs ) which , when discretized , yield high-dimensional and oftentimes nonlinear dynamical models with hundreds or thousands of state variables .
These high-dimensional models are too expensive to integrate with common state estimation techniques , including the Kalman filter or its numerous extensions . Thus , state estimators are instead designed based on a reduced-order model ( ROM ) of the underlying dynamics ( Barbagallo et al. , 2009 ; Rowley & Dawson , 2017 ) . A big challenge is that ROMs provide a simplified and imperfect description of the dynamics , which negatively affects the performance of the state estimator . One potential solution is to improve the accuracy of the ROM itself through the inclusion of additional closure terms ( Ahmed et al. , 2021 ) . In this paper , we leave the ROM untouched and instead propose a new design paradigm for the estimator itself , which we call a reinforcement-learning reduced-order estimator ( RL-ROE ) . The RL-ROE is constructed from the ROM in an analogous way to a Kalman filter , with the crucial difference that the linear filter gain function is replaced by a nonlinear stochastic policy trained through reinforcement learning ( RL ) . The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM , while still taking advantage of the imperfect knowledge of the dynamics . We describe how we frame the problem as a stationary Markov decision process in order to enable RL training , which is non-trivial since the RL-ROE must be able to estimate time-varying states . Finally , we show that the trained RL-ROE is able to outperform a Kalman filter designed using the same ROM , and displays robust estimation performance with respect to different reference trajectories and initial state estimates . The RL-ROE is the first application of reinforcement learning to state estimation for high-dimensional systems . Under review as a conference paper at ICLR 2022 2 PROBLEM FORMULATION . 2.1 SETUP . 
Consider the discrete-time nonlinear system given by

z_{k+1} = f(z_k),  (1a)
y_k = C z_k,  (1b)

where z_k ∈ R^n and y_k ∈ R^p are respectively the state and measurement at time k, f : R^n → R^n is a time-invariant nonlinear map from current to next state, and C ∈ R^{p×n} is a linear map from state to measurement. In this study, we assume that the dynamics given in (1) are obtained from the numerical discretization of a nonlinear partial differential equation (PDE), which typically requires a large number n of state dimensions. Note that we do not account for exogenous control inputs to the system, which will be studied in future extensions of the present work. 2.2 REDUCED-ORDER MODEL . Because the high dimensionality of (1) makes online prediction and control impractical, it is instead customary to formulate a reduced-order model (ROM) of the dynamics (Rowley & Dawson, 2017). First, one chooses a suitable linearly independent set of modes {u_1, ..., u_r}, where u_i ∈ R^n, defining an r-dimensional subspace of R^n in which most of the dynamics is assumed to take place. Stacking these modes as columns of a matrix U ∈ R^{n×r}, one can then express z_k ≈ U x_k, where the reduced-order state x_k ∈ R^r represents the coordinates of z_k in the subspace. Finally, one finds a ROM for the dynamics of x_k, which is vastly cheaper to evolve than (1) when r ≪ n. There exist various ways to find an appropriate set of modes U and a corresponding ROM for the dynamics of x_k (Taira et al., 2017). In this work, we employ the Dynamic Mode Decomposition (DMD), a purely data-driven algorithm that has found wide applications in fields ranging from fluid dynamics to neuroscience (Schmid, 2010; Kutz et al., 2016). Starting with a collection of snapshots Z = {z_0, . . .
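The mode-projection step above can be sketched numerically: with orthonormal modes U (so that UᵀU = I_r), the reduced coordinates of any state lying in the mode subspace are recovered by x_k = Uᵀ z_k. A minimal sketch (the random modes and all sizes are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 5                     # full and reduced dimensions (assumed values)

# Orthonormal modes via QR, so U^T U = I_r
U, _ = np.linalg.qr(rng.standard_normal((n, r)))

x = rng.standard_normal(r)        # reduced-order coordinates x_k
z = U @ x                         # a state lying exactly in the mode subspace

x_rec = U.T @ z                   # recover coordinates using orthogonality of U
assert np.allclose(x_rec, x)
```

For a general state not contained in the subspace, the same projection Uᵀz gives the best approximation in the least-squares sense, consistent with z_k ≈ U x_k above.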
, z_m} collected along a trajectory of (1a), the DMD seeks a best-fit linear model of the dynamics in the form of a matrix A ∈ R^{n×n} such that z_{k+1} ≈ A z_k, and computes the modes U as the r leading principal component analysis (PCA) modes of Z. The transformation z_k ≈ U x_k and the orthogonality of U then yield a linear discrete-time ROM of the form

x_{k+1} = A_r x_k + w_k,  (2a)
y_k = C_r x_k + v_k,  (2b)

where A_r = U^T A U ∈ R^{r×r} and C_r = C U ∈ R^{p×r} are the reduced-order state-transition and observation models, respectively. In order to account for the neglected PCA modes of Z as well as the unmodeled dynamics incurred by the linear approximation z_{k+1} ≈ A z_k, we add (unknown) non-Gaussian process noise w_k and observation noise v_k. Additional details regarding the calculation of A_r and U are provided in Appendix A. 2.3 REDUCED-ORDER ESTIMATOR . This paper uses reinforcement learning (RL) to solve the following estimation problem: given a sequence of measurements {y_0, ..., y_k} from a reference trajectory {z_0, ..., z_k} of (1) and knowing the ROM (2) defined by A_r, C_r and U, we want to estimate the high-dimensional state z_k at current time k. To this effect, we design a reduced-order estimator (ROE) of the form

x̂_k = A_r x̂_{k−1} + a_k,  (3a)
a_k ∼ π_θ( · | y_k, x̂_{k−1}),  (3b)

where x̂_k is an estimate of the reduced-order state x_k, and a_k ∈ R^r is an action sampled from a stochastic policy π_θ which depends on the current measurement y_k and the previous state estimate x̂_{k−1}. The subscript θ denotes the set of parameters that defines the stochastic policy, whose goal is to minimize the mean square error E‖z_k − ẑ_k‖² over a range of reference trajectories and initial reduced-order state estimates. Here, ẑ_k = U x̂_k denotes the high-dimensional state estimate reconstructed from x̂_k.
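The ROM-building pipeline described above can be sketched numerically: collect snapshots, extract the r leading PCA modes via an SVD, and fit a reduced operator A_r by least squares in mode coordinates. This is a stand-in for the DMD computation (toy linear dynamics confined to an r-dimensional subspace; all names and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 50, 3, 40

# Toy dynamics whose trajectory stays in an r-dimensional subspace,
# so a rank-r ROM can represent it exactly (assumed for illustration).
U0, _ = np.linalg.qr(rng.standard_normal((n, r)))
A_small = 0.9 * np.linalg.qr(rng.standard_normal((r, r)))[0]
A_true = U0 @ A_small @ U0.T

Z = np.empty((n, m + 1))
Z[:, 0] = U0 @ rng.standard_normal(r)
for k in range(m):
    Z[:, k + 1] = A_true @ Z[:, k]

# r leading PCA modes of the snapshot matrix
U, _, _ = np.linalg.svd(Z, full_matrices=False)
U = U[:, :r]

# Reduced coordinates and least-squares fit of x_{k+1} = A_r x_k
X = U.T @ Z
A_r = np.linalg.lstsq(X[:, :-1].T, X[:, 1:].T, rcond=None)[0].T

residual = np.linalg.norm(A_r @ X[:, :-1] - X[:, 1:])
```

Because the toy trajectory lies exactly in an r-dimensional invariant subspace, the fitted A_r reproduces the reduced dynamics to machine precision; for real PDE data the neglected modes would instead appear as the process noise w_k in (2a).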
A Kalman filter is a special case of such an estimator, for which the action in (3b) is given by

a_k = K_k (y_k − C_r A_r x̂_{k−1}),  (4)

with K_k ∈ R^{r×p} the optimal Kalman gain. Although the Kalman filter is optimal when the state-transition and observation models are known exactly, its performance suffers in the presence of unmodeled dynamics. In our case, such model errors are unavoidable due to the ROM (2) being an inherent approximation of the high-dimensional dynamics (1), which motivates our adoption of the more general form (3b). This form retains the dependence of a_k on y_k and x̂_{k−1} but is more flexible thanks to the nonlinearity of the stochastic policy π_θ, which we train with deep RL in an offline stage. The stochasticity of π_θ forces the RL algorithm to explore different actions during the training process, in order to eventually find an optimal θ* such that E‖z_k − ẑ_k‖² is minimized for various reference trajectories and initial estimates. We call the estimator constructed and trained through this process an RL-trained ROE, or RL-ROE for short. Thus, the methodology we propose consists of two steps. In a first offline stage, a ROM of the form (2) is obtained using high-dimensional snapshots z_k from a single trajectory of (1). The RL-ROE (3) is then constructed based on this ROM, and its policy π_θ is trained using high-dimensional snapshots z_k from multiple reference trajectories of (1). Finally, the trained RL-ROE is deployed online to track a reference trajectory of (1). In the online stage, the RL-ROE only requires measurements y_k from the reference trajectory, and gives an estimate ẑ_k = U x̂_k for the high-dimensional state. In summary, our contributions in this paper are two-fold: 1. We propose the RL-ROE, a reduced-order state estimator for high-dimensional nonlinear systems.
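To make the recursion concrete, here is a toy version of the estimator loop (3) with the Kalman-style action (4). A fixed gain K stands in for the optimal time-varying K_k, and a stable toy A_r is assumed; the RL-ROE would instead sample a_k from the trained policy π_θ (all values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
r, p, T = 3, 2, 50

A_r = 0.8 * np.eye(r)                    # toy stable reduced dynamics (assumed)
C_r = rng.standard_normal((p, r))        # reduced observation model
K = 0.3 * np.linalg.pinv(C_r)            # crude fixed gain, stand-in for K_k

x = rng.standard_normal(r)               # true reduced state
xhat = rng.standard_normal(r)            # initial estimate (deliberately wrong)
for k in range(T):
    x = A_r @ x                          # true dynamics step
    y = C_r @ x                          # measurement
    innovation = y - C_r @ (A_r @ xhat)  # measurement minus forecast
    xhat = A_r @ xhat + K @ innovation   # ROE update with Kalman-style action

err = np.linalg.norm(x - xhat)
```

The estimate error obeys e_{k+1} = (I − K C_r) A_r e_k, which contracts here, so err becomes negligible after T steps; the point of the RL-ROE is to replace the linear map K(·) by a nonlinear policy when the ROM itself is wrong.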
The RL-ROE takes the form (3), which combines two unique features: first, the state-transition model A_r is a ROM of the high-dimensional dynamics; second, the term a_k that assimilates measurements is sampled from a stochastic policy π_θ trained with RL. The training procedure for π_θ, which involves a non-trivial reformulation of the time-varying tracking problem as a stationary Markov decision process, is described in Section 4. 2. The performance of the RL-ROE is compared in Section 5 with that of the KF-ROE, a Kalman filter constructed from the same ROM. The comparison is performed in the context of the Burgers equation using a range of reference trajectories and initial state estimates. 3 RELATED WORK . Previous studies have already proposed designing state estimators using policies trained through reinforcement learning. Morimoto & Doya (2007) introduced an estimator of the form x̂_k = f(x̂_{k−1}) + L(x̂_{k−1})(y_{k−1} − C x̂_{k−1}), where f(·) is the state-transition model of the system, and the state-dependent filter gain matrix L(x̂_{k−1}) is defined using Gaussian basis functions whose parameters are learned through a variant of vanilla policy gradient. Their reward function, however, was calculated using the measurement error instead of the state estimate error, potentially limiting the performance of the trained estimator. Hu et al. (2020) proposed an estimator of the form x̂_k = f(x̂_{k−1}) + L(x_k − x̂_k)(y_k − C f(x̂_{k−1})), where L(x_k − x̂_k) is approximated by neural networks trained with a modified Soft Actor-Critic algorithm (Haarnoja et al., 2018). Although they derived convergence properties for the estimate error, the dependence of the filter gain L(x_k − x̂_k) on the reference state x_k limits its practical application.
A major difference between these past studies and our work is that they do not construct a ROM of the dynamics and only consider low-dimensional systems with four state variables at most , in comparison with the hundred or more state dimensions that our RL-ROE can handle . Therefore , RL-ROE represents the first application of reinforcement learning to state estimation for high-dimensional systems , which makes it applicable to systems governed by PDEs such as fluid flows . | This paper proposes using RL to train a policy to output correction terms in the reduced states of reduced order models in order to perform state estimation. In particular the approach is applied to DMD and Burgers equation, but could presumably be applied to other forms of ROMs and dynamical systems. The method was shown to perform well in experiments, and was found to be more robust than baseline methods in the presence of perturbations and noise. | SP:484cc8865184e726701daffe431bb6f37a9c2946 |
Learning Curves for Gaussian Process Regression with Power-Law Priors and Targets | 1 INTRODUCTION Gaussian processes ( GPs ) provide a flexible and interpretable framework for learning and adaptive inference , and are widely used for constructing prior distributions in non-parametric Bayesian learning . From an application perspective , one crucial question is how fast do GPs learn , i.e. , how much training data is needed to achieve a certain level of generalization performance . Theoretically , this is addressed by analyzing so-called “ learning curves ” , which describe the generalization error as a function of the training set size n. The rate at which the curve approaches zero determines the difficulty of learning tasks and conveys important information about the asymptotic performance of GP learning algorithms . In this paper , we study the learning curves for Gaussian process regression . Our main result characterizes the asymptotics of the generalization error in cases where the eigenvalues of the GP kernel and the coefficients of the eigenexpansion of the target function have a power-law decay . In the remainder of this introductory section , we review related work and outline our main contributions . Gaussian processes A GP model is a probabilistic model on an infinite-dimensional parameter space ( Williams and Rasmussen , 2006 ; Orbanz and Teh , 2010 ) . In GP regression ( GPR ) , for example , this space can be the set of all continuous functions . Assumptions about the learning problem are encoded by way of a prior distribution over functions , which gets transformed into a posterior distribution given some observed data . The mean of the posterior is then used for prediction . The model uses only a finite subset of the available parameters to explain the data and this subset can grow arbitrarily large as more data are observed . 
In this sense , GPs are “ non-parametric ” and contrast with parametric models , where there is a fixed number of parameters . For regression with Gaussian noise , a major appeal of the GP formalism is that the posterior is analytically tractable . GPs are also one important part in learning with kernel machines ( Kanagawa et al. , 2018 ) and modeling using GPs has recently gained considerable traction in the neural network community . Neural networks and kernel learning From a GP viewpoint , there exists a well known correspondence between kernel methods and infinite neural networks ( NNs ) first studied by Neal ( 1996 ) . Neal showed that the outputs of a randomly initialized one-hidden layer neural network ( with appropriate scaling of the variance of the initialization distribution ) converges to a GP over functions in the limit of an infinite number of hidden units . Follow-up work extended this correspondence with analytical expressions for the kernel covariance for shallow NNs by Williams ( 1997 ) , and more recently for deep fully-connected NNs ( Lee et al. , 2018 ; de G. Matthews et al. , 2018 ) , convolutional NNs with many channels ( Novak et al. , 2019 ; Garriga-Alonso et al. , 2019 ) , and more general architectures ( Yang , 2019 ) . The correspondence enables exact Bayesian inference in the associated GP model for infinite-width NNs on regression tasks and has led to some recent breakthroughs in our understanding of overparameterized NNs ( Jacot et al. , 2018 ; Lee et al. , 2019 ; Arora et al. , 2019 ; Belkin et al. , 2018 ; Daniely et al. , 2016 ; Yang and Salman , 2019 ; Bietti and Mairal , 2019 ) . The most prominent kernels associated with infinite-width NNs are the Neural Network Gaussian Process ( NNGP ) kernel when only the last layer is trained ( Lee et al. , 2018 ; de G. Matthews et al. , 2018 ) , and the Neural Tangent Kernel ( NTK ) when the entire model is trained ( Jacot et al. , 2018 ) . 
Empirical studies have shown that inference with such infinite network kernels is competitive with standard gradient descent-based optimization for fully-connected architectures ( Lee et al. , 2020 ) . Learning curves A large-scale empirical characterization of the generalization performance of state-of-the-art deep NNs showed that the associated learning curves often follow a power law of the form n−β with the exponent β ranging between 0.07 and 0.35 depending on the data and the algorithm ( Hestness et al. , 2017 ; Spigler et al. , 2020 ) . Power-law asymptotics of learning curves have been theoretically studied in early works for the Gibbs learning algorithm ( Amari et al. , 1992 ; Amari and Murata , 1993 ; Haussler et al. , 1996 ) that showed a generalization error scaling with exponent β=0.5 , 1 or 2 under certain assumptions . More recent results from statistical learning theory characterize the shape of learning curves depending on the properties of the hypothesis class ( Bousquet et al. , 2021 ) . In the context of GPs , approximations and bounds on learning curves have been investigated in several works ( Sollich , 1999 ; Sollich and Halees , 2002 ; Sollich , 2001 ; Opper and Vivarelli , 1999 ; Opper and Malzahn , 2002 ; Williams and Vivarelli , 2000 ; Malzahn and Opper , 2001a ; b ; Seeger et al. , 2008 ; Van Der Vaart and Van Zanten , 2011 ; Le Gratiet and Garnier , 2015 ) , with recent extensions to kernel regression from a spectral bias perspective ( Bordelon et al. , 2020 ; Canatar et al. , 2021 ) . For a review on learning curves in relation to its shape and monotonicity , see Loog et al . ( 2019 ) ; Viering et al . ( 2019 ) ; Viering and Loog ( 2021 ) . A related but complementary line of work studies the convergence rates and posterior consistency properties of Bayesian non-parametric models ( Barron , 1998 ; Seeger et al. , 2008 ; Van Der Vaart and Van Zanten , 2011 ) . 
Power-law decay of the GP kernel eigenspectrum The rate of decay of the eigenvalues of the GP kernel conveys important information about its smoothness. Intuitively, if a process is "rough" with more power at high frequencies, then the eigenspectrum decays more slowly. On the other hand, kernels that define smooth processes have a fast-decaying eigenspectrum (Stein, 2012; Williams and Rasmussen, 2006). The precise eigenvalues (λ_p)_{p≥1} of the operators associated to many kernels and input distributions are not known explicitly, except for a few special cases (Williams and Rasmussen, 2006). Often, however, the asymptotic properties are known. The asymptotic rate of decay of the eigenvalues of stationary kernels for input distributions with bounded support is well understood (Widom, 1963; Ritter et al., 1995). Ronen et al. (2019) showed that for inputs distributed uniformly on a hypersphere, the eigenfunctions of the arc-cosine kernel are spherical harmonics and the eigenvalues follow a power-law decay. The spectral properties of the NTK are integral to the analysis of training convergence and generalization of NNs, and several recent works empirically justify and rely on a power-law assumption for the NTK spectrum (Bahri et al., 2021; Canatar et al., 2021; Lee et al., 2020; Nitanda and Suzuki, 2021). Velikanov and Yarotsky (2021) showed that the asymptotics of the NTK of infinitely wide shallow ReLU networks follows a power law that is determined primarily by the singularities of the kernel and has the form λ_p ∝ p^{−α} with α = 1 + 1/d, where d is the input dimension. Asymptotics of the generalization error of kernel ridge regression (KRR) There is a well-known equivalence between GPR and KRR, with the additive noise in GPR playing the role of regularization in KRR (Kanagawa et al., 2018).
Analysis of the decay rates of the excess generalization error of KRR has appeared in several works, e.g., in the noiseless case with constant regularization (Bordelon et al., 2020; Spigler et al., 2020; Jun et al., 2019), and the noisy optimally regularized case (Caponnetto and De Vito, 2007; Steinwart et al., 2009; Fischer and Steinwart, 2020) under the assumption that the kernel eigenspectrum and the eigenexpansion coefficients of the target function follow a power law. These assumptions, often called respectively the capacity and source conditions, are related to the effective dimension of the problem and the difficulty of learning the target function (Caponnetto and De Vito, 2007; Blanchard and Mücke, 2018). Cui et al. (2021) present a unifying picture of the excess error decay rates under the capacity and source conditions in terms of the interplay between noise and regularization, illustrating their results with real datasets. Contributions In this work, we characterize the asymptotics of the generalization error of GPR and KRR under the capacity and source conditions. Our main contributions are as follows: • When the eigenspectrum of the prior decays with rate α and the eigenexpansion coefficients of the target function decay with rate β, we show that with high probability over the draw of n input samples, the negative log-marginal likelihood behaves as Θ(n^{max{1/α, (1−2β)/α + 1}}) (Theorem 7) and the generalization error behaves as Θ(n^{max{1/α − 1, (1−2β)/α}}) (Theorem 9). In the special case that the model is correctly specified, i.e., the GP prior is the true one from which the target functions are actually generated, our result implies that the generalization error behaves as O(n^{1/α − 1}), recovering as a special case a result due to Sollich and Halees (2002) (vide Remark 10).
• Under similar assumptions as in the previous item, we leverage the equivalence between GPR and KRR to show that the excess generalization error of KRR behaves as Θ(n^{max{1/α − 1, (1−2β)/α}}) (Theorem 12). In the noiseless case with constant regularization, our result implies that the generalization error behaves as Θ(n^{(1−2β)/α}), recovering as a special case a result due to Bordelon et al. (2020). Specializing to the case of KRR with Gaussian design, we recover as a special case a result due to Cui et al. (2021) (vide Remark 14). For the unrealizable case, i.e., when the target function is outside the span of the eigenfunctions with positive eigenvalues, we show that the generalization error converges to a constant. • We present a few toy experiments demonstrating the theory for GPR with the arc-cosine kernel without biases (resp. with biases), which is the conjugate kernel of an infinitely wide shallow network with two inputs and one hidden layer without biases (resp. with biases) (Cho and Saul, 2009; Ronen et al., 2019). 2 BAYESIAN LEARNING AND GENERALIZATION ERROR FOR GPS In GP regression, our goal is to learn a target function f : Ω → R between an input x ∈ Ω and output y ∈ R based on training samples D_n = {(x_i, y_i)}_{i=1}^n. We consider an additive noise model y_i = f(x_i) + ε_i, where ε_i are i.i.d. N(0, σ²_true). If ρ denotes the marginal density of the inputs x_i, then the pairs (x_i, y_i) are generated according to the density q(x, y) = ρ(x) q(y|x), where q(y|x) = N(y | f(x), σ²_true). We assume that there is a prior distribution Π_0 on f which is defined as a zero-mean GP with continuous covariance function k : Ω×Ω → R, i.e., f ∼ GP(0, k). This means that for any finite set x = (x_1, ..., x_n)^T, the random vector f(x) = (f(x_1), ..., f(x_n))^T follows the multivariate normal distribution N(0, K_n) with covariance matrix K_n = (k(x_i, x_j))_{i,j=1}^n ∈ R^{n×n}.
By Bayes' rule, the posterior distribution of the target f given the training data is given by

dΠ_n(f | D_n) = (1 / Z(D_n)) ∏_{i=1}^n N(y_i | f(x_i), σ²_model) dΠ_0(f),

where Π_0 is the prior distribution, Z(D_n) = ∫ ∏_{i=1}^n N(y_i | f(x_i), σ²_model) dΠ_0(f) is the marginal likelihood or model evidence, and σ_model is the sample variance used in GPR. In practice, we do not know the exact value of σ_true and so our choice of σ_model can be different from σ_true. The GP prior and the Gaussian noise assumption allow for exact Bayesian inference, and the posterior distribution over functions is again a GP with mean and covariance function given by

m̄(x) = K_x^T (K_n + σ²_model I_n)^{−1} y,  x ∈ Ω,  (1)
k̄(x, x′) = k(x, x′) − K_x^T (K_n + σ²_model I_n)^{−1} K_{x′},  x, x′ ∈ Ω,  (2)

where K_x = (k(x_1, x), ..., k(x_n, x))^T and y = (y_1, ..., y_n)^T ∈ R^n (Williams and Rasmussen, 2006, Eqs. 2.23-24). The performance of GPR depends on how well the posterior approximates f as the number of training samples n tends to infinity. The distance of the posterior to the ground truth can be measured in various ways. We consider two such measures, namely the Bayesian generalization error (Seeger et al., 2008; Haussler and Opper, 1997; Opper and Vivarelli, 1999) and the excess mean squared error (Sollich and Halees, 2002; Le Gratiet and Garnier, 2015; Bordelon et al., 2020; Cui et al., 2021). Definition 1 (Bayesian generalization error). The Bayesian generalization error is defined as the Kullback-Leibler divergence between the true density q(y|x) and the Bayesian predictive density p_n(y | x, D_n) = ∫ p(y | f(x)) dΠ_n(f | D_n),

G(D_n) = ∫ q(x, y) log [ q(y|x) / p_n(y | x, D_n) ] dx dy.  (3)

A related quantity of interest is the stochastic complexity (SC), also known as the free energy, which is just the negative log-marginal likelihood.
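The posterior formulas (1)-(2) can be exercised directly. Below is a minimal sketch on a toy 1-D regression problem; the RBF kernel, lengthscale, and all numbers are assumptions for illustration:

```python
import numpy as np

def k_rbf(a, b, ell=0.2):
    """RBF covariance between two 1-D point sets (assumed kernel choice)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(3)
n, sigma2 = 20, 1e-2                          # training size and sigma_model^2
xtr = np.linspace(0.0, 1.0, n)
ytr = np.sin(2 * np.pi * xtr) + np.sqrt(sigma2) * rng.standard_normal(n)

Kn = k_rbf(xtr, xtr)                          # K_n = (k(x_i, x_j))
xs = np.linspace(0.0, 1.0, 100)               # test inputs
Kxs = k_rbf(xtr, xs)                          # columns are K_x for each test x

A = Kn + sigma2 * np.eye(n)
mean = Kxs.T @ np.linalg.solve(A, ytr)        # posterior mean, Eq. (1)
cov = k_rbf(xs, xs) - Kxs.T @ np.linalg.solve(A, Kxs)  # posterior cov, Eq. (2)
```

With enough training points the posterior mean tracks the underlying sine closely, and the diagonal of the posterior covariance shrinks near the data, as expected from (2).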
We shall primarily be concerned with a normalized version of the stochastic complexity, which is defined as follows:

F⁰(D_n) = −log [ Z(D_n) / ∏_{i=1}^n q(y_i | x_i) ] = −log [ ∫ ∏_{i=1}^n N(y_i | f(x_i), σ²_model) dΠ_0(f) / ∏_{i=1}^n q(y_i | x_i) ].  (4)

The generalization error (3) can be expressed in terms of the normalized SC as follows (Watanabe, 2009, Theorem 1.2):

G(D_n) = E_{(x_{n+1}, y_{n+1})} F⁰(D_{n+1}) − F⁰(D_n),  (5)

where D_{n+1} = D_n ∪ {(x_{n+1}, y_{n+1})} is obtained by augmenting D_n with a test point (x_{n+1}, y_{n+1}). If we only wish to measure the performance of the mean of the Bayesian posterior, then we can use the excess mean squared error: Definition 2 (Excess mean squared error). The excess mean squared error is defined as

M(D_n) = E_{(x_{n+1}, y_{n+1})} (m̄(x_{n+1}) − y_{n+1})² − σ²_true = E_{x_{n+1}} (m̄(x_{n+1}) − f(x_{n+1}))².  (6)

Proposition 3 (Normalized stochastic complexity for GPR). Assume that σ²_model = σ²_true = σ². The normalized SC F⁰(D_n) (4) for GPR with prior GP(0, k) is given as

F⁰(D_n) = (1/2) log det(I_n + K_n/σ²) + (1/(2σ²)) yᵀ (I_n + K_n/σ²)^{−1} y − (1/(2σ²)) (y − f(x))ᵀ (y − f(x)),  (7)

where ε = (ε_1, ..., ε_n)^T = y − f(x). The expectation of the normalized SC w.r.t. the noise is given as

E_ε F⁰(D_n) = (1/2) log det(I_n + K_n/σ²) − (1/2) Tr(I_n − (I_n + K_n/σ²)^{−1}) + (1/(2σ²)) f(x)ᵀ (I_n + K_n/σ²)^{−1} f(x).  (8)

This is a basic result and has applications in relation to model selection in GPR (Williams and Rasmussen, 2006). For completeness, we give a proof of Proposition 3 in Appendix B. Seeger et al. (2008, Theorem 1) gave an upper bound on the normalized stochastic complexity for the case when f lies in the reproducing kernel Hilbert space (RKHS) of the GP prior. It is well known, however, that sample paths of a GP almost surely fall outside the corresponding RKHS (Van Der Vaart and Van Zanten, 2011), limiting the applicability of the result.
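Because the Gaussian marginal likelihood Z(D_n) is available in closed form, the definition (4) and the closed form (7) must agree, which can be checked numerically. A small sketch (toy data and an RBF Gram matrix; all values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma2 = 15, 0.1
x = rng.standard_normal(n)
# RBF Gram matrix with a small jitter for numerical positive-definiteness
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 1e-10 * np.eye(n)

f = rng.multivariate_normal(np.zeros(n), K)      # latent values f(x)
eps = np.sqrt(sigma2) * rng.standard_normal(n)   # noise vector, y - f(x)
y = f + eps

I = np.eye(n)
M = I + K / sigma2
# closed form (7)
F0_closed = (0.5 * np.linalg.slogdet(M)[1]
             + y @ np.linalg.solve(M, y) / (2 * sigma2)
             - eps @ eps / (2 * sigma2))

# definition (4): -log Z(D_n) + sum_i log q(y_i | x_i), using the standard
# Gaussian log-marginal-likelihood expression for GP regression
logZ = (-0.5 * y @ np.linalg.solve(K + sigma2 * I, y)
        - 0.5 * np.linalg.slogdet(K + sigma2 * I)[1]
        - 0.5 * n * np.log(2 * np.pi))
logq = -eps @ eps / (2 * sigma2) - 0.5 * n * np.log(2 * np.pi * sigma2)
F0_def = -logZ + logq
```

The two quantities agree to numerical precision, since log det(I + K/σ²) = log det(K + σ²I) − n log σ² and yᵀ(K + σ²I)⁻¹y = σ⁻² yᵀ(I + K/σ²)⁻¹y.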
We next derive the asymptotics of E F 0 ( Dn ) , the expected generalization error E G ( Dn ) = E E ( xn+1 , yn+1 ) F 0 ( Dn+1 ) −E F 0 ( Dn ) , and the excess mean squared error E M ( Dn ) . 3 ASYMPTOTIC ANALYSIS OF GP REGRESSION WITH POWER-LAW PRIORS We begin by introducing some notations and assumptions . We assume that f ∈L2 ( Ω , ρ ) . By Mercer ’ s theorem ( Williams and Rasmussen , 2006 , Theorem 4.2 ) , the covariance function of the GP prior can be decomposed as k ( x1 , x2 ) = ∑∞ p=1λpφp ( x1 ) φp ( x2 ) , where ( φp ( x ) ) p≥1 are the eigenfunctions of the operator Lk : L2 ( Ω , ρ ) 7→ L2 ( Ω , ρ ) ; ( Lkf ) ( x ) = ∫ Ω k ( x , s ) f ( s ) dρ ( s ) , and ( λp ) p≥1 are the corresponding positive eigenvalues . We index the sequence of eigenvalues in decreasing order , that is λ1≥λ2≥··· > 0 . The target function f ( x ) is decomposed into the orthonormal set ( φp ( x ) ) p≥1 and its orthogonal complement { φp ( x ) : p≥1 } ⊥ as f ( x ) = ∞∑ p=1 µpφp ( x ) +µ0φ0 ( x ) ∈L2 ( Ω , ρ ) , ( 9 ) whereµ= ( µ0 , µ1 , ... , µp , ... ) T are the coefficients of the decomposition , andφ0 ( x ) satisfies ‖φ0 ( x ) ‖2 = 1 and φ0 ( x ) ∈ { φp ( x ) : p ≥ 1 } ⊥ . For given sample inputs x , let φp ( x ) = ( φp ( x1 ) , ... , φp ( xn ) ) T , Φ = ( φ0 ( x ) , φ1 ( x ) , ... , φp ( x ) , ... ) and Λ = diag { 0 , λ1 , ... , λp , ... } . Then the covariance matrixKn can be written asKn=ΦΛΦT , and the function values on the sample inputs can be written as f ( x ) =Φµ . We shall make the following assumptions in order to derive the power-law asymptotics of the normalized stochastic complexity and the generalization error of GPR : Assumption 4 ( Power law decay of eigenvalues ) . The eigenvalues ( λp ) p≥1 follow the power law Cλp −α≤λp≤Cλp−α , ∀p≥1 ( 10 ) whereCλ , Cλ and α are three positive constants which satisfy 0 < Cλ≤Cλ and α > 1 . 
As mentioned in the introduction , this assumption , called the capacity condition , is fairly standard in kernel learning and is adopted in many recent works ( Bordelon et al. , 2020 ; Canatar et al. , 2021 ; Jun et al. , 2019 ; Bietti et al. , 2021 ; Cui et al. , 2021 ) . Velikanov and Yarotsky ( 2021 ) derived the exact value of the exponent αwhen the kernel function has a homogeneous singularity on its diagonal , which is the case for instance for the arc-cosine kernel . Assumption 5 ( Power law decay of coefficients of decomposition ) . Let Cµ , Cµ > 0 and β > 1/2 be positive constants and let { pi } i≥1 be an increasing integer sequence such that supi≥1 ( pi+1−pi ) < ∞ . The coefficients ( µp ) p≥1 of the decomposition ( 9 ) of the target function follow the power law |µp|≤Cµp−β , ∀p≥1 and |µpi |≥Cµpi−β , ∀i≥1 . ( 11 ) Since f ∈L2 ( Ω , ρ ) , we have ∑∞ p=0µ 2 p < ∞ . The condition β > 1/2 in Assumption 5 ensures that the sum ∑∞ p=0µ 2 p does not diverge . When the orthonormal basis ( φp ( x ) ) p is the Fourier basis or the spherical harmonics basis , the coefficients ( µp ) p decay at least as fast as a power law so long as the target function f ( x ) satisfies certain smoothness conditions ( Bietti and Mairal , 2019 ) . Velikanov and Yarotsky ( 2021 ) gave examples of some natural classes of functions for which Assumption 5 is satisfied , such as functions that have a bounded support with smooth boundary and are smooth on the interior of this support , and derived the corresponding exponents β . Assumption 6 ( Boundedness of eigenfunctions ) . The eigenfunctions ( φp ( x ) ) p≥0 satisfy ‖φ0‖∞≤Cφ and ‖φp‖∞≤Cφpτ , p≥1 , ( 12 ) whereCφ and τ are two positive constants which satisfy τ < α−12 . The second condition in ( 12 ) appears , for example , in Valdivia ( 2018 , Hypothesis H1 ) and is less restrictive than the assumption of uniformly bounded eigenfunctions that has appeared in several other works in the GP literature , see , e.g. 
, Braun ( 2006 ) ; Chatterji et al . ( 2019 ) ; Vakili et al . ( 2021 ) . Define T1 ( Dn ) = 1 2 logdet ( In+ ΦΛΦT σ2 ) − 12Tr ( In− ( In+ ΦΛΦT σ2 ) −1 ) , ( 13 ) T2 ( Dn ) = 1 2σ2 f ( x ) T ( In+ ΦΛΦT σ2 ) −1 f ( x ) , ( 14 ) G1 ( Dn ) =E ( xn+1 , yn+1 ) ( T1 ( Dn+1 ) −T1 ( Dn ) ) , ( 15 ) G2 ( Dn ) =E ( xn+1 , yn+1 ) ( T2 ( Dn+1 ) −T2 ( Dn ) ) . ( 16 ) Using ( 8 ) and ( 5 ) , we haveE F 0 ( Dn ) =T1 ( Dn ) +T2 ( Dn ) andE G ( Dn ) =G1 ( Dn ) +G2 ( Dn ) . Intuitively , G1 corresponds to the effect of the noise on the generalization error irrespective of the target function f , whereasG2 corresponds to the ability of the model to fit the target function . As we will see next in Theorems 9 and 11 , ifα is large , then the error associated with the noise is smaller . When f is contained in the span of the eigenfunctions { φp } p≥1 , G2 decreases with increasingn , but if f contains an orthogonal component , then the error remains constant and GP regression is not able to learn the target function . 3.1 ASYMPTOTICS OF THE NORMALIZED STOCHASTIC COMPLEXITY We derive the asymptotics of the normalized SC ( 8 ) for the following two cases : µ0 = 0 and µ0 > 0 . When µ0 =0 , the target function f ( x ) lies in the span of all eigenfunctions with positive eigenvalues . Theorem 7 ( Asymptotics of the normalized SC , µ0 = 0 ) . Assume that µ0 = 0 and σ2model = σ 2 true = σ 2 = Θ ( 1 ) . Under Assumptions 4 , 5 and 6 , with probability of at least 1−n−q over sample inputs ( xi ) ni=1 , where 0 ≤ q < min { ( 2β−1 ) ( α−1−2τ ) 4α2 , α−1−2τ 2α } , the expected normalized SC ( 8 ) has the asymptotic behavior : E F 0 ( Dn ) = [ 1 2 logdet ( I+ n σ2 Λ ) − 1 2Tr ( I− ( I+ nσ2 Λ ) −1 ) + n2σ2µT ( I+ nσ2 Λ ) −1µ ] ( 1+o ( 1 ) ) =Θ ( nmax { 1 α , 1−2β α +1 } ) . ( 17 ) The complete proof of Theorem 7 is given in Appendix D.1 . We give a sketch of the proof below . 
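Before the proof sketch, the predicted rates in (17) can be illustrated numerically against the Φ-independent closed forms: with λ_p = p^{−α} and μ_p = p^{−β}, the two dominant terms should scale as n^{1/α} and n^{max{0, 1+(1−2β)/α}}. A small sketch (α = 2, β = 1, σ² = 1 assumed, spectrum truncated at P modes):

```python
import numpy as np

alpha, beta, P = 2.0, 1.0, 200_000
p = np.arange(1, P + 1, dtype=float)
lam = p ** -alpha          # power-law eigenvalues (Assumption 4)
mu2 = p ** (-2 * beta)     # squared target coefficients (Assumption 5)

def T1(n):
    """Phi-independent part of T1 with sigma^2 = 1."""
    s = n * lam
    return 0.5 * np.sum(np.log1p(s)) - 0.5 * np.sum(s / (1 + s))

def T2(n):
    """Phi-independent part of T2 with sigma^2 = 1."""
    return 0.5 * n * np.sum(mu2 / (1 + n * lam))

n1, n2 = 1_000, 10_000
slope_T1 = np.log(T1(n2) / T1(n1)) / np.log(n2 / n1)
slope_T2 = np.log(T2(n2) / T2(n1)) / np.log(n2 / n1)
# predicted exponents: 1/alpha = 0.5 and max(0, 1 + (1 - 2*beta)/alpha) = 0.5
```

Both empirical log-log slopes come out close to the predicted value 0.5 for this (α, β) pair, consistent with the Θ(n^{max{1/α, (1−2β)/α+1}}) rate in Theorem 7.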
In the sequel , we use the notationsO and Θ to denote the standard mathematical orders and the notation Õ to suppress logarithmic factors . Proof sketch of Theorem 7 . By ( 8 ) , ( 13 ) and ( 14 ) we have E F 0 ( Dn ) = T1 ( Dn ) + T2 ( Dn ) . In order to analyze the terms T1 ( Dn ) and T2 ( Dn ) , we will consider truncated versions of these quantities and bound the corresponding residual errors . Given a truncation parameter R ∈ N , let ΦR = ( φ0 ( x ) , φ1 ( x ) , ... , φR ( x ) ) ∈Rn×R be the truncated matrix of eigenfunctions evaluated at the data points , ΛR = diag ( 0 , λ1 , ... , λR ) ∈R ( R+1 ) × ( R+1 ) and µR = ( µ0 , µ1 , ... , µR ) ∈RR+1 . We define the truncated version of T1 ( Dn ) as follows : T1 , R ( Dn ) = 1 2 logdet ( In+ ΦRΛRΦ T R σ2 ) − 12Tr ( In− ( In+ ΦRΛRΦ T R σ2 ) −1 ) . ( 18 ) Similarly , define Φ > R = ( φR+1 ( x ) , φR+2 ( x ) , ... , φp ( x ) , ... ) , Λ > R = diag ( λR+1 , ... , λp , ... ) , fR ( x ) = ∑R p=1 µpφp ( x ) , fR ( x ) = ( fR ( x1 ) , ... , fR ( xn ) ) T , f > R ( x ) = f ( x ) − fR ( x ) , and f > R ( x ) = ( f > R ( x1 ) , ... , f > R ( xn ) ) T . The truncated version of T2 ( Dn ) is then defined as T2 , R ( Dn ) = 1 2σ2 fR ( x ) T ( In+ ΦRΛRΦ T R σ2 ) −1fR ( x ) T . ( 19 ) The proof consists of three steps : • Approximation step : In this step , we show that the asymptotics of T1 , R resp . T2 , R dominates that of the residuals , |T1 , R ( Dn ) −T1 ( Dn ) | resp . |T2 , R ( Dn ) −T2 ( Dn ) | ( see Lemma 32 ) . This builds upon first showing that ‖Φ > RΛ > RΦT > R‖2 =Õ ( max { nR−α , n 1 2R 1−2α 2 , R1−α } ) ( see Lemma 25 ) and then choosingR=n 1 α+κ where 0 < κ < α−1−2τ2α2 when we have ‖Φ > RΛ > RΦ T > R‖2 =o ( 1 ) . Intuitively , the choice of the truncation parameterR is governed by the fact that λR=Θ ( R−α ) =n−1+κα=o ( n−1 ) . • Decomposition step : In this step , we decompose T1 , R into a term independent of ΦR and a series involving ΦTRΦR−nIR , and likewise for T2 , R ( see Lemma 34 ) . 
This builds upon first showing using the Woodbury matrix identity ( Williams and Rasmussen , 2006 , §A.3 ) that T1 , R ( Dn ) = 1 2 logdet ( IR+ 1 σ2 ΛRΦ T RΦR ) − 12TrΦR ( σ 2IR+ΛRΦ T RΦR ) −1ΛRΦ T R , ( 20 ) T2 , R ( Dn ) = 1 2σ2µ T RΦ T RΦR ( σ 2IR+ΛRΦ T RΦR ) −1µR , ( 21 ) and then Taylor expanding the matrix inverse ( σ2IR + ΛRΦTRΦR ) −1 in ( 20 ) and ( 21 ) to show that the ΦR-independent terms in the decomposition of T1 , R and T2 , R are , respectively , 1 2 logdet ( IR+ n σ2 ΛR ) − 1 2Tr ( IR− ( IR+ nσ2 ΛR ) −1 ) , and n2σ2µTR ( IR+ nσ2 ΛR ) −1µR . • Concentration step : Finally , we use concentration inequalities to show that these ΦR-independent terms dominate the series involving ΦTRΦR−nIR ( see Lemma 35 ) when we have T1 , R ( Dn ) = ( 1 2 logdet ( IR+ n σ2 ΛR ) − 1 2Tr ( IR− ( IR+ nσ2 ΛR ) −1 ) ) ( 1+o ( 1 ) ) =Θ ( n 1α ) , T2 , R ( Dn ) = ( n 2σ2µ T R ( IR+ n σ2 ΛR ) −1µR ) ( 1+o ( 1 ) ) = { Θ ( nmax { 0 , 1−2β α +1 } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . The key idea is to consider the matrix Λ1/2R ( I+ n σ2 ΛR ) −1/2ΦTRΦR ( I+ n σ2 ΛR ) −1/2Λ 1/2 R and show that it concentrates around nΛR ( I+ nσ2 ) −1 ( see Corollary 22 ) . Note that an ordinary application of the matrix Bernstein inequality to ΦTRΦR−nIR yields ‖ΦTRΦR−nI‖2 =O ( R √ n ) , which is not sufficient for our purposes , since this would giveO ( R √ n ) =o ( n ) only when α > 2 . In contrast , our results are valid forα > 1 and cover cases of practical interest , e.g. , the NTK of infinitely wide shallow ReLU network ( Velikanov and Yarotsky , 2021 ) and the arc-cosine kernels over high-dimensional hyperspheres ( Ronen et al. , 2019 ) that have α=1+O ( 1d ) , where d is the input dimension . For µ0 > 0 , we note the following result : Theorem 8 ( Asymptotics of the normalized SC , µ0 > 0 ) . Assume µ0 > 0 and σ2model = σ 2 true = σ 2 = Θ ( 1 ) . 
Under Assumptions 4, 5 and 6, with probability at least $1 - n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0 \le q < \min\{\frac{2\beta-1}{2}, \alpha\} \cdot \min\{\frac{\alpha-1-2\tau}{2\alpha^2}, \frac{2\beta-1}{\alpha^2}\}$, the expected normalized SC (8) has the asymptotic behavior $\mathbb{E}_\varepsilon F^0(D_n) = \frac{1}{2\sigma^2}\mu_0^2 n + o(n)$.

The proof of Theorem 8 is given in Appendix D.1 and follows from showing that when $\mu_0 > 0$, $T_{2,R}(D_n) = \big(\frac{n}{2\sigma^2}\mu_R^T(I_R + \frac{n}{\sigma^2}\Lambda_R)^{-1}\mu_R\big)(1+o(1)) = \frac{1}{2\sigma^2}\mu_0^2 n + o(n)$ (see Lemma 38), which dominates $T_1(D_n)$ and the residual $|T_{2,R}(D_n) - T_2(D_n)|$.

3.2 ASYMPTOTICS OF THE BAYESIAN GENERALIZATION ERROR

In this section, we derive the asymptotics of the expected generalization error $\mathbb{E}_\varepsilon G(D_n)$ by analyzing the asymptotics of the components $G_1(D_n)$ and $G_2(D_n)$ in (15) and (16), respectively, for the following two cases: $\mu_0 = 0$ and $\mu_0 > 0$. First, we consider the case $\mu_0 = 0$.

Theorem 9 (Asymptotics of the Bayesian generalization error, $\mu_0 = 0$). Let Assumptions 4, 5, and 6 hold. Assume that $\mu_0 = 0$ and $\sigma^2_{\mathrm{model}} = \sigma^2_{\mathrm{true}} = \sigma^2 = \Theta(n^t)$, where $1 - \frac{\alpha}{1+2\tau} < t < 1$. Then with probability at least $1 - n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0 \le q < \frac{[\alpha - (1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the expectation of the Bayesian generalization error (3) w.r.t. the noise has the asymptotic behavior

$\mathbb{E}_\varepsilon G(D_n) = \frac{1+o(1)}{2\sigma^2}\Big(\mathrm{Tr}\big(I + \tfrac{n}{\sigma^2}\Lambda\big)^{-1}\Lambda - \big\|\Lambda^{1/2}\big(I + \tfrac{n}{\sigma^2}\Lambda\big)^{-1}\big\|_F^2 + \big\|\big(I + \tfrac{n}{\sigma^2}\Lambda\big)^{-1}\mu\big\|_2^2\Big) = \Theta\big(n^{\max\{\frac{(1-\alpha)(1-t)}{\alpha},\, \frac{(1-2\beta)(1-t)}{\alpha}\}}\big)$. (22)

The proof of Theorem 9 is given in Appendix D.2. Intuitively, for a given $t$, the exponent $\frac{(1-\alpha)(1-t)}{\alpha}$ in (22) captures the rate at which the model suppresses the noise, while the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ captures the rate at which the model learns the target function. A larger $\beta$ implies that the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ is smaller, so it is easier to learn the target. A larger $\alpha$ implies that the exponent $\frac{(1-\alpha)(1-t)}{\alpha}$ is smaller, so the error associated with the noise is smaller as well.
A larger $\alpha$, however, also implies that the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ is larger (recall that $\alpha > 1$ and $\beta > 1/2$ by Assumptions 4 and 5, respectively), which means that it is harder to learn the target.

Remark 10. If $f \sim \mathcal{GP}(0, k)$, then by the Karhunen-Loève expansion we have $f(x) = \sum_{p=1}^\infty \sqrt{\lambda_p}\,\omega_p \phi_p(x)$, where $(\omega_p)_{p=1}^\infty$ are i.i.d. standard Gaussian variables. We can bound $\omega_p$ almost surely as $|\omega_p| \le C\log p$, where $C = \sup_{p\ge1} |\omega_p|/\log p$ is a finite constant. Comparing with the expansion of $f(x)$ in (9), we find that $\mu_p = \sqrt{\lambda_p}\,\omega_p = O(p^{-\alpha/2}\log p) = O(p^{-\alpha/2+\epsilon})$, where $\epsilon > 0$ is arbitrarily small. Choosing $\beta = \alpha/2 - \epsilon$ in (22), we have $\mathbb{E}_\varepsilon G(D_n) = O\big(n^{\frac{1}{\alpha}-1+\frac{2\epsilon}{\alpha}}\big)$. This rate matches that of an earlier result due to Sollich and Halees (2002), where it is shown that the asymptotic learning curve (as measured by the expectation of the excess mean squared error, $\mathbb{E}_f M(D_n)$) scales as $n^{\frac{1}{\alpha}-1}$ when the model is correctly specified, i.e., $f$ is a sample from the same Gaussian process $\mathcal{GP}(0,k)$, and the eigenvalues decay as a power law for large $i$, $\lambda_i \sim i^{-\alpha}$.

For $\mu_0 > 0$, we note the following result:

Theorem 11 (Asymptotics of the Bayesian generalization error, $\mu_0 > 0$). Let Assumptions 4, 5, and 6 hold. Assume that $\mu_0 > 0$ and $\sigma^2_{\mathrm{model}} = \sigma^2_{\mathrm{true}} = \sigma^2 = \Theta(n^t)$, where $1 - \frac{\alpha}{1+2\tau} < t < 1$. Then with probability at least $1 - n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0 \le q < \frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the expectation of the Bayesian generalization error (3) w.r.t. the noise has the asymptotic behavior $\mathbb{E}_\varepsilon G(D_n) = \frac{1}{2\sigma^2}\mu_0^2 + o(1)$.

In general, if $\mu_0 > 0$, then the generalization error remains constant as $n \to \infty$. This means that if the target function contains a component in the kernel of the operator $L_k$, then GP regression is not able to learn the target function. The proof of Theorem 11 is given in Appendix D.2.
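The deterministic expression in (22) is easy to probe numerically. The sketch below evaluates it for assumed power laws $\lambda_p = p^{-\alpha}$, $\mu_p = p^{-\beta}$ with $t = 0$; the exponents, the truncation $P$, and the sample sizes are illustrative choices of ours, not values from the paper. It compares the fitted log-log slope against the predicted exponent $\max\{\frac{(1-\alpha)(1-t)}{\alpha}, \frac{(1-2\beta)(1-t)}{\alpha}\}$:

```python
import numpy as np

# Sketch: evaluate the deterministic quantity in (22) for assumed power laws
# lambda_p = p^{-alpha}, mu_p = p^{-beta} (mu_0 = 0), with sigma^2 = Theta(1), i.e. t = 0.
alpha, beta, sigma2 = 4.0, 1.5, 1.0
P = 200_000                                   # truncation of the eigen-expansion (approximation)
p = np.arange(1, P + 1, dtype=float)
lam = p ** (-alpha)
mu = p ** (-beta)

def gen_error(n):
    r = 1.0 / (1.0 + n * lam / sigma2)        # diagonal of (I + n Lambda / sigma^2)^{-1}
    term1 = np.sum(r * lam)                   # Tr (I + n Lambda/sigma^2)^{-1} Lambda
    term2 = np.sum(lam * r ** 2)              # ||Lambda^{1/2} (I + n Lambda/sigma^2)^{-1}||_F^2
    term3 = np.sum((r * mu) ** 2)             # ||(I + n Lambda/sigma^2)^{-1} mu||_2^2
    return (term1 - term2 + term3) / (2 * sigma2)

ns = np.array([2_000, 4_000, 8_000, 16_000, 32_000])
errs = np.array([gen_error(n) for n in ns])
slope = np.polyfit(np.log(ns), np.log(errs), 1)[0]
predicted = max((1 - alpha) / alpha, (1 - 2 * beta) / alpha)   # exponent in (22) at t = 0
print(f"fitted slope {slope:.3f}, predicted exponent {predicted:.3f}")
```

With these choices the target-learning exponent $(1-2\beta)/\alpha = -0.5$ dominates the noise exponent $(1-\alpha)/\alpha = -0.75$, and the fitted slope approaches the predicted value from below as $n$ grows.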
3.3 ASYMPTOTICS OF THE EXCESS MEAN SQUARED ERROR

In this section we derive the asymptotics of the excess mean squared error in Definition 2.

Theorem 12 (Asymptotics of the excess mean squared error). Let Assumptions 4, 5, and 6 hold. Assume $\sigma^2_{\mathrm{model}} = \Theta(n^t)$, where $1 - \frac{\alpha}{1+2\tau} < t < 1$. Then with probability at least $1 - n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0 \le q < \frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the excess mean squared error (6) has the asymptotic behavior

$\mathbb{E}_\varepsilon M(D_n) = (1+o(1))\Big[\frac{\sigma^2_{\mathrm{true}}}{\sigma^2_{\mathrm{model}}}\Big(\mathrm{Tr}\big(I + \tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\Lambda - \big\|\Lambda^{1/2}\big(I + \tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\big\|_F^2\Big) + \big\|\big(I + \tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\mu\big\|_2^2\Big] = \Theta\big(\max\{\sigma^2_{\mathrm{true}}\, n^{\frac{1-\alpha-t}{\alpha}},\ n^{\frac{(1-2\beta)(1-t)}{\alpha}}\}\big)$

when $\mu_0 = 0$, and $\mathbb{E}_\varepsilon M(D_n) = \mu_0^2 + o(1)$ when $\mu_0 > 0$. The proof of Theorem 12 uses similar techniques as Theorem 9 and is given in Appendix D.3.

Remark 13 (Correspondence with kernel ridge regression). The kernel ridge regression (KRR) estimator arises as the solution to the optimization problem

$\hat f = \mathrm{argmin}_{f \in \mathcal{H}_k}\ \frac{1}{n}\sum_{i=1}^n (f(x_i) - y_i)^2 + \lambda \|f\|_{\mathcal{H}_k}^2$, (23)

where the hypothesis space $\mathcal{H}_k$ is chosen to be an RKHS, and $\lambda > 0$ is a regularization parameter. The solution to (23) is unique as a function, and is given by $\hat f(x) = K_x^T (K_n + n\lambda I_n)^{-1} y$, which coincides with the posterior mean function $\bar m(x)$ of the GPR (1) if $\sigma^2_{\mathrm{model}} = n\lambda$ (Kanagawa et al., 2018, Proposition 3.6). Thus, the additive Gaussian noise in GPR plays the role of regularization in KRR. Leveraging this well-known equivalence between GPR and KRR, we observe that Theorem 12 also describes the generalization error of KRR as measured by the excess mean squared error.

Remark 14. Cui et al. (2021) derived the asymptotics of the expected excess mean squared error for different regularization strengths and different scales of noise. In particular, for KRR with Gaussian design, where $\Lambda_R^{1/2}(\phi_1(x), \dots, \phi_R(x))^T$ is assumed to follow a Gaussian distribution $\mathcal{N}(0, \Lambda_R)$, and regularization $\lambda = n^{t-1}$ where $1-\alpha \le t$, Cui et al.
(2021, Eq. 10) showed that

$\mathbb{E}_{\{x_i\}_{i=1}^n}\mathbb{E}_\varepsilon M(D_n) = O\big(\max\{\sigma^2_{\mathrm{true}}\, n^{\frac{1-\alpha-t}{\alpha}},\ n^{\frac{(1-2\beta)(1-t)}{\alpha}}\}\big)$. (24)

Let $\delta = n^{-q}$, where $0 \le q < \frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$. By Markov's inequality, this implies that with probability at least $1-\delta$, $\mathbb{E}_\varepsilon M(D_n) = O\big(\frac{1}{\delta}\max\{\sigma^2_{\mathrm{true}} n^{\frac{1-\alpha-t}{\alpha}}, n^{\frac{(1-2\beta)(1-t)}{\alpha}}\}\big) = O\big(n^q \max\{\sigma^2_{\mathrm{true}} n^{\frac{1-\alpha-t}{\alpha}}, n^{\frac{(1-2\beta)(1-t)}{\alpha}}\}\big)$. Theorem 12 improves upon this by showing that with probability at least $1-\delta$, we have the optimal bound $\mathbb{E}_\varepsilon M(D_n) = \Theta\big(\max\{\sigma^2_{\mathrm{true}} n^{\frac{1-\alpha-t}{\alpha}}, n^{\frac{(1-2\beta)(1-t)}{\alpha}}\}\big)$. Furthermore, in contrast to the approach of Cui et al. (2021), we place no requirement on the distribution of $\phi_p(x)$, so our result is more generally applicable. For example, Theorem 12 can be applied to KRR with the arc-cosine kernel, for which the Gaussian design assumption is not valid. In the noiseless setting ($\sigma_{\mathrm{true}} = 0$) with constant regularization ($t = 0$), Theorem 12 implies that the mean squared error behaves as $\Theta(n^{\frac{1-2\beta}{\alpha}})$. This recovers a result in Bordelon et al. (2020, §2.2).

4 EXPERIMENTS

We illustrate our theory on a few toy experiments. We let the input $x$ be uniformly distributed on the unit circle, i.e., $\Omega = S^1$ and $\rho = U(S^1)$. The points on $S^1$ can be represented by $x = (\cos\theta, \sin\theta)$, where $\theta \in [-\pi, \pi)$. We use the first-order arc-cosine kernel function without bias, $k^{(1)}_{\text{w/o bias}}(x_1, x_2) = \frac{1}{\pi}(\sin\psi + (\pi - \psi)\cos\psi)$, where $\psi = \arccos\langle x_1, x_2\rangle$ is the angle between $x_1$ and $x_2$. Cho and Saul (2009) showed that this kernel is the conjugate kernel of an infinitely wide shallow ReLU network with two inputs and no biases in the hidden layer. GP regression with prior $\mathcal{GP}(0, k)$ corresponds to Bayesian training of this network (Lee et al., 2018).
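As a quick sanity check of Remark 13 in this setting, the GPR posterior mean and the KRR estimator with $\lambda = \sigma^2_{\text{model}}/n$ can be compared numerically for the arc-cosine kernel. The sketch below (data sizes, target, and noise level are illustrative choices of ours) solves both linear systems and confirms they coincide:

```python
import numpy as np

def arccos_kernel(X1, X2):
    # First-order arc-cosine kernel without bias:
    # k(x1, x2) = (sin(psi) + (pi - psi) * cos(psi)) / pi, psi = angle between x1, x2.
    cos_psi = np.clip(X1 @ X2.T, -1.0, 1.0)   # clip guards against rounding outside [-1, 1]
    psi = np.arccos(cos_psi)
    return (np.sin(psi) + (np.pi - psi) * np.cos(psi)) / np.pi

rng = np.random.default_rng(0)
n = 50
theta = rng.uniform(-np.pi, np.pi, size=n)
X = np.column_stack([np.cos(theta), np.sin(theta)])    # training inputs on S^1
y = np.cos(2 * theta) + 0.1 * rng.standard_normal(n)   # noisy observations of an example target

theta_test = rng.uniform(-np.pi, np.pi, size=10)
Xt = np.column_stack([np.cos(theta_test), np.sin(theta_test)])

K, Kt = arccos_kernel(X, X), arccos_kernel(Xt, X)

sigma2_model = 0.01                                    # GPR model noise variance
gpr_mean = Kt @ np.linalg.solve(K + sigma2_model * np.eye(n), y)

lam = sigma2_model / n                                 # KRR regularization with sigma^2 = n * lambda
krr_pred = Kt @ np.linalg.solve(K + n * lam * np.eye(n), y)

max_diff = np.max(np.abs(gpr_mean - krr_pred))
print(f"max |GPR - KRR| = {max_diff:.2e}")
```

With $\sigma^2_{\text{model}} = n\lambda$ the two predictors solve the same regularized linear system, so the discrepancy is at the level of floating-point error.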
The eigenvalues and eigenfunctions of the kernel are $\lambda_1 = \frac{4}{\pi^2}$, $\lambda_2 = \lambda_3 = \frac{1}{4}$, $\lambda_{2p} = \lambda_{2p+1} = \frac{4}{\pi^2((2p-2)^2-1)^2}$ for $p \ge 2$, and $\phi_1(\theta) = 1$, $\phi_2(\theta) = \frac{\sqrt{2}}{2}\cos\theta$, $\phi_3(\theta) = \frac{\sqrt{2}}{2}\sin\theta$, $\phi_{2p}(\theta) = \frac{\sqrt{2}}{2}\cos(2p-2)\theta$, $\phi_{2p+1}(\theta) = \frac{\sqrt{2}}{2}\sin(2p-2)\theta$ for $p \ge 2$. Hence Assumption 4 is satisfied with $\alpha = 4$, and Assumption 6 is satisfied with $\|\phi_p\|_\infty \le \frac{\sqrt{2}}{2}$, $p \ge 1$, and $\tau = 0$. We consider the target functions in Table 1, which satisfy Assumption 5 with the indicated $\beta$; $\mu_0$ indicates whether the function lies in the span of the eigenfunctions of the kernel.

The training and test data are generated as follows. We independently sample training inputs $x_1, \dots, x_n$ and a test input $x_{n+1}$ from $U(S^1)$, and training outputs $y_i$, $i = 1, \dots, n$, from $\mathcal{N}(f(x_i), \sigma^2)$, where we choose $\sigma = 0.1$. The Bayesian predictive density conditioned on the test point $x_{n+1}$, $\mathcal{N}(\bar m(x_{n+1}), \bar k(x_{n+1}, x_{n+1}))$, is obtained by (1) and (2). We compute the normalized SC by (7) and the Bayesian generalization error by the Kullback-Leibler divergence between $\mathcal{N}(f(x_{n+1}), \sigma^2)$ and $\mathcal{N}(\bar m(x_{n+1}), \bar k(x_{n+1}, x_{n+1}))$. For each target we conduct GPR 20 times and report the mean and standard deviation of the normalized SC and the Bayesian generalization error in Figure 1; the results agree with the asymptotics predicted by Theorems 7 and 9. In Appendix A, we show more experiments confirming our theory for zero- and second-order arc-cosine kernels, with and without biases.

[Table 1: Target functions for $k^{(1)}_{\text{w/o bias}}$, their values of $\beta$ and $\mu_0$, and the theoretical rates for the normalized SC and the Bayesian generalization error from our theorems.]

[Figure 1: Results for $k^{(1)}_{\text{w/o bias}}$ and the target functions in Table 1. The orange curves show the linear regression fit for the experimental values (in blue) of the log Bayesian generalization error as a function of $\log n$.]

5 CONCLUSION

We described the learning curves for GPR in the case where the kernel and the target function follow a power law.
This setting is frequently encountered in kernel learning and relates to recent advances on neural networks. Our approach is based on a tight analysis of the concentration of the inner product of empirical eigenfunctions $\Phi^T\Phi$ around $nI$. This allowed us to obtain more general results under more realistic assumptions than previous works. In particular, we recovered some results on learning curves for GPR and KRR previously obtained under more restricted settings (vide Remarks 10 and 14). We showed that when $\beta \ge \alpha/2$, meaning that the target function has a compact representation in terms of the eigenfunctions of the kernel, the learning rate is as good as in the correctly specified case. In addition, our result allows us to interpret $\beta$ from a spectral bias perspective. When $1/2 < \beta \le \alpha/2$, the larger the value of $\beta$, the faster the decay of the generalization error. This implies that low-frequency functions are learned faster in terms of the number of training data points. By leveraging the equivalence between GPR and KRR, we obtained a result on the generalization error of KRR. In the infinite-width limit, training fully-connected deep NNs with gradient descent and an infinitesimally small learning rate under the least-squares loss is equivalent to solving KRR with respect to the NTK (Jacot et al., 2018; Lee et al., 2019; Domingos, 2020), which in several cases is known to have a power-law spectrum (Velikanov and Yarotsky, 2021). Hence our methods can be applied to study the generalization error of infinitely wide neural networks. In future work, it would be interesting to estimate the values of $\alpha$ and $\beta$ for the NTK and the NNGP kernel of deep fully-connected or convolutional NNs and real data distributions, and to test our theory in these cases. Similarly, it would be interesting to consider extensions to finite-width kernels.

REFERENCES

S. Amari and N. Murata. Statistical theory of learning curves under entropic loss criterion.
Neural Computation, 5(1):140–153, 1993.

S. Amari, N. Fujita, and S. Shinomoto. Four types of learning curves. Neural Computation, 4(4):605–618, 1992.

S. Arora, S. S. Du, W. Hu, Z. Li, R. R. Salakhutdinov, and R. Wang. On exact computation with an infinitely wide neural net. In Advances in Neural Information Processing Systems, volume 32, pages 8139–8148, 2019.

Y. Bahri, E. Dyer, J. Kaplan, J. Lee, and U. Sharma. Explaining neural scaling laws. arXiv preprint arXiv:2102.06701, 2021.

A. R. Barron. Information-theoretic characterization of Bayes performance and the choice of priors in parametric and nonparametric problems. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics, volume 6, pages 27–52. Oxford University Press, 1998.

M. Belkin, S. Ma, and S. Mandal. To understand deep learning we need to understand kernel learning. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 541–549, 2018.

A. Bietti and J. Mairal. On the inductive bias of neural tangent kernels. In Advances in Neural Information Processing Systems, volume 32, pages 12873–12884, 2019.

A. Bietti, L. Venturi, and J. Bruna. On the sample complexity of learning with geometric stability. arXiv preprint arXiv:2106.07148, 2021.

G. Blanchard and N. Mücke. Optimal rates for regularization of statistical inverse learning problems. Foundations of Computational Mathematics, 18(4):971–1013, 2018.

B. Bordelon, A. Canatar, and C. Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 1024–1034, 2020.

O. Bousquet, S. Hanneke, S. Moran, R. van Handel, and A. Yehudayoff. A theory of universal learning. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 532–541, 2021.

M. L. Braun.
Accurate error bounds for the eigenvalues of the kernel matrix. Journal of Machine Learning Research, 7:2303–2328, 2006.

A. Canatar, B. Bordelon, and C. Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nature Communications, 12(1):1–12, 2021.

A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.

N. Chatterji, A. Pacchiano, and P. Bartlett. Online learning with kernel losses. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 971–980, 2019.

Y. Cho and L. K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, volume 22, pages 342–350, 2009.

H. Cui, B. Loureiro, F. Krzakala, and L. Zdeborová. Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime. arXiv preprint arXiv:2105.15004, 2021.

A. Daniely, R. Frostig, and Y. Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances in Neural Information Processing Systems, volume 29, pages 2253–2261, 2016.

A. G. de G. Matthews, J. Hron, M. Rowland, R. E. Turner, and Z. Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 2018.

P. Domingos. Every model learned by gradient descent is approximately a kernel machine. arXiv preprint arXiv:2012.00152, 2020.

S. Fischer and I. Steinwart. Sobolev norm learning rates for regularized least-squares algorithms. Journal of Machine Learning Research, 21:1–38, 2020.

A. Garriga-Alonso, C. E. Rasmussen, and L. Aitchison. Deep convolutional networks as shallow Gaussian processes. In International Conference on Learning Representations, 2019.

D. Haussler and M. Opper.
Mutual information, metric entropy and cumulative relative entropy risk. The Annals of Statistics, 25(6):2451–2492, 1997.

D. Haussler, M. Kearns, H. S. Seung, and N. Tishby. Rigorous learning curve bounds from statistical mechanics. Machine Learning, 25(2-3):195–236, 1996.

J. Hestness, S. Narang, N. Ardalani, G. Diamos, H. Jun, H. Kianinejad, M. Patwary, M. Ali, Y. Yang, and Y. Zhou. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.

A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, volume 31, pages 8571–8580, 2018.

K.-S. Jun, A. Cutkosky, and F. Orabona. Kernel truncated randomized ridge regression: Optimal rates and low noise acceleration. In Advances in Neural Information Processing Systems, volume 32, pages 15358–15367, 2019.

M. Kanagawa, P. Hennig, D. Sejdinovic, and B. K. Sriperumbudur. Gaussian processes and kernel methods: A review on connections and equivalences. arXiv preprint arXiv:1807.02582, 2018.

L. Le Gratiet and J. Garnier. Asymptotic analysis of the learning curve for Gaussian process regression. Machine Learning, 98(3):407–433, 2015.

J. Lee, J. Sohl-Dickstein, J. Pennington, R. Novak, S. Schoenholz, and Y. Bahri. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018.

J. Lee, L. Xiao, S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Advances in Neural Information Processing Systems, volume 32, pages 8572–8583, 2019.

J. Lee, S. Schoenholz, J. Pennington, B. Adlam, L. Xiao, R. Novak, and J. Sohl-Dickstein. Finite versus infinite neural networks: an empirical study. In Advances in Neural Information Processing Systems, volume 33, pages 15156–15172, 2020.

M.
Loog, T. Viering, and A. Mey. Minimizers of the empirical risk and risk monotonicity. In Advances in Neural Information Processing Systems, volume 32, pages 7478–7487, 2019.

D. Malzahn and M. Opper. Learning curves for Gaussian processes regression: A framework for good approximations. In Advances in Neural Information Processing Systems, volume 13, pages 273–279, 2001a.

D. Malzahn and M. Opper. Learning curves for Gaussian processes models: Fluctuations and universality. In International Conference on Artificial Neural Networks, pages 271–276, 2001b.

R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, Berlin, Heidelberg, 1996. ISBN 0387947248.

A. Nitanda and T. Suzuki. Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime. In International Conference on Learning Representations, 2021.

R. Novak, L. Xiao, Y. Bahri, J. Lee, G. Yang, D. A. Abolafia, J. Pennington, and J. Sohl-Dickstein. Bayesian deep convolutional networks with many channels are Gaussian processes. In International Conference on Learning Representations, 2019.

M. Opper and D. Malzahn. A variational approach to learning curves. In Advances in Neural Information Processing Systems, volume 14, pages 463–469, 2002.

M. Opper and F. Vivarelli. General bounds on Bayes errors for regression with Gaussian processes. In Advances in Neural Information Processing Systems, volume 11, pages 302–308, 1999.

P. Orbanz and Y. W. Teh. Bayesian nonparametric models. In Encyclopedia of Machine Learning, pages 81–89. Springer, 2010.

K. Ritter, G. W. Wasilkowski, and H. Woźniakowski. Multivariate integration and approximation for random fields satisfying Sacks-Ylvisaker conditions. The Annals of Applied Probability, pages 518–540, 1995.

B. Ronen, D. Jacobs, Y. Kasten, and S. Kritchman. The convergence rate of neural networks for learned functions of different frequencies.
In Advances in Neural Information Processing Systems, volume 32, pages 4761–4771, 2019.

M. W. Seeger, S. M. Kakade, and D. P. Foster. Information consistency of nonparametric Gaussian process methods. IEEE Transactions on Information Theory, 54(5):2376–2382, 2008.

P. Sollich. Learning curves for Gaussian processes. In Advances in Neural Information Processing Systems, volume 11, pages 344–350, 1999.

P. Sollich. Gaussian process regression with mismatched models. In Advances in Neural Information Processing Systems, volume 13, pages 519–526, 2001.

P. Sollich and A. Halees. Learning curves for Gaussian process regression: Approximations and bounds. Neural Computation, 14(6):1393–1428, 2002.

S. Spigler, M. Geiger, and M. Wyart. Asymptotic learning curves of kernel methods: empirical data versus teacher-student paradigm. Journal of Statistical Mechanics: Theory and Experiment, 2020(12):124001, 2020.

M. L. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, 2012.

I. Steinwart, D. R. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In Conference on Learning Theory, pages 79–93, 2009.

J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.

S. Vakili, K. Khezeli, and V. Picheny. On information gain and regret bounds in Gaussian process bandits. In International Conference on Artificial Intelligence and Statistics, pages 82–90, 2021.

E. A. Valdivia. Relative concentration bounds for the spectrum of kernel matrices. arXiv preprint arXiv:1812.02108, 2018.

A. van der Vaart and H. van Zanten. Information rates of nonparametric Gaussian process methods. Journal of Machine Learning Research, 12(6), 2011.

M. Velikanov and D. Yarotsky. Universal scaling laws in the gradient descent training of neural networks. arXiv preprint arXiv:2105.00507, 2021.

T.
Viering and M. Loog. The shape of learning curves: A review. arXiv preprint arXiv:2103.10948, 2021.

T. Viering, A. Mey, and M. Loog. Open problem: Monotonicity of learning. In Conference on Learning Theory, pages 3198–3201, 2019.

S. Watanabe. Algebraic Geometry and Statistical Learning Theory. Cambridge University Press, 2009.

H. Widom. Asymptotic behavior of the eigenvalues of certain integral equations. Transactions of the American Mathematical Society, 109(2):278–295, 1963.

C. K. Williams. Computing with infinite networks. In Advances in Neural Information Processing Systems, volume 9, pages 295–301, 1997.

C. K. Williams and C. E. Rasmussen. Gaussian Processes for Machine Learning. MIT Press, 2006.

C. K. Williams and F. Vivarelli. Upper and lower bounds on the learning curve for Gaussian processes. Machine Learning, 40(1):77–102, 2000.

G. Yang. Wide feedforward or recurrent neural networks of any architecture are Gaussian processes. In Advances in Neural Information Processing Systems, volume 32, pages 9951–9960, 2019.

G. Yang and H. Salman. A fine-grained spectral perspective on neural networks. arXiv preprint arXiv:1907.10599, 2019.

APPENDIX A EXPERIMENTS FOR ARC-COSINE KERNELS OF DIFFERENT ORDERS

Consider the first-order arc-cosine kernel function with biases,

$k^{(1)}_{\text{w/ bias}}(x_1, x_2) = \frac{1}{\pi}\big(\sin\bar\psi + (\pi - \bar\psi)\cos\bar\psi\big)$, where $\bar\psi = \arccos\big(\tfrac{1}{2}(\langle x_1, x_2\rangle + 1)\big)$. (25)

Ronen et al. (2019) showed that this kernel is the conjugate kernel of an infinitely wide shallow ReLU network with two inputs and one hidden layer with biases, whose eigenvalues satisfy Assumption 4 with $\alpha = 4$. The eigenfunctions of this kernel are the same as those of the first-order arc-cosine kernel without biases, $k^{(1)}_{\text{w/o bias}}$, in Section 4. We consider the target functions in Table 3, which satisfy Assumption 5 with the indicated $\beta$; $\mu_0$ indicates whether the function lies in the span of the eigenfunctions of the kernel.
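The claimed spectral decay of the with-bias kernel (25) can be probed numerically: on a uniform grid of $S^1$, the eigenvalues of $K_n/n$ approximate the eigenvalues of the kernel's integral operator, so a log-log fit of their decay should give a slope near $-\alpha = -4$. The sketch below uses a grid size and fitting window of our own choosing, and the finite-index slope only approximates $-\alpha$:

```python
import numpy as np

# Illustrative check of the eigenvalue decay of the with-bias arc-cosine
# kernel (25) on a uniform grid of S^1. Grid size and fit window are
# choices of this sketch, not taken from the paper.
n = 1024
theta = 2 * np.pi * np.arange(n) / n
X = np.column_stack([np.cos(theta), np.sin(theta)])

cos_psi_bar = np.clip(0.5 * (X @ X.T + 1.0), -1.0, 1.0)  # clip guards rounding
psi_bar = np.arccos(cos_psi_bar)
K = (np.sin(psi_bar) + (np.pi - psi_bar) * np.cos(psi_bar)) / np.pi

# Eigenvalues of K/n approximate the integral-operator eigenvalues.
evals = np.sort(np.linalg.eigvalsh(K / n))[::-1]
idx = np.arange(10, 80)
slope = np.polyfit(np.log(idx + 1.0), np.log(evals[idx]), 1)[0]
print(f"fitted log-log slope: {slope:.2f} (power-law decay with alpha = 4 predicts about -4)")
```

The fitted slope is steeper than $-4$ at moderate indices because of lower-order corrections to the power law; it approaches $-\alpha$ only asymptotically.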
For each target we conduct GPR 20 times and report the mean and standard deviation of the normalized SC and the Bayesian generalization error in Figure 3; the results agree with the asymptotics predicted by Theorems 7 and 9. Table 2 summarizes all the different kernel functions that we consider in our experiments, with pointers to the corresponding tables and figures.

Summarizing the observations from these experiments, we see that the smoothness of the activation function (which is controlled by the order of the arc-cosine kernel) influences the decay rate $\alpha$ of the eigenvalues: in general, when the activation function is smoother, the decay rate $\alpha$ is larger. Theorem 9 then implies that smooth activation functions are more capable of suppressing noise but slower at learning the target. We also observe that networks with biases are more capable of learning functions than networks without biases. For example, the function $\cos(2\theta)$ cannot be learned by the zero-order arc-cosine kernel without biases (see Table 6 and Figure 6), but it can be learned by the zero-order arc-cosine kernel with biases (see Table 7 and Figure 7).

[Figure 3: Results for $k^{(1)}_{\text{w/ bias}}$ and the target functions in Table 3. The orange curves show the linear regression fit for the experimental values (in blue) of the log Bayesian generalization error as a function of $\log n$.]
[Figure 4: Results for $k^{(2)}_{\text{w/o bias}}$ and the target functions in Table 4.]
[Figure 5: Results for $k^{(2)}_{\text{w/ bias}}$ and the target functions in Table 5.]
[Figure 6: Results for $k^{(0)}_{\text{w/o bias}}$ and the target functions in Table 6.]
[Figure 7: Results for $k^{(0)}_{\text{w/ bias}}$ and the target functions in Table 7.]

B PROOFS RELATED TO THE MARGINAL LIKELIHOOD

Proof of Proposition 3. Let $\bar y = (\bar y_1, \dots, \bar y_n)^T$ be the outputs of the GP regression model on the training inputs $\mathbf{x}$. Under the GP prior, the prior distribution of $\bar y$ is $\mathcal{N}(0, K_n)$.
Then the evidence of the model is given as follows:

$Z_n = \int_{\mathbb{R}^n} \Big(\prod_{i=1}^n \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(\bar y_i - y_i)^2}{2\sigma^2}}\Big) \frac{1}{(2\pi)^{n/2}\det(K_n)^{1/2}}\, e^{-\frac{1}{2}\bar y^T K_n^{-1} \bar y}\, d\bar y = \frac{1}{(2\pi)^n \sigma^n \det(K_n)^{1/2}} \int_{\mathbb{R}^n} e^{-\frac{1}{2}\bar y^T (K_n^{-1} + \frac{1}{\sigma^2}I)\bar y + \frac{1}{\sigma^2}\bar y^T y - \frac{1}{2\sigma^2} y^T y}\, d\bar y$. (26)

Letting $\tilde K_n^{-1} = K_n^{-1} + \frac{1}{\sigma^2}I$ and $\mu = \frac{1}{\sigma^2}\tilde K_n y$, we have

$Z_n = \frac{1}{(2\pi)^n \sigma^n \det(K_n)^{1/2}} \int_{\mathbb{R}^n} e^{-\frac{1}{2}(\bar y - \mu)^T \tilde K_n^{-1} (\bar y - \mu) - \frac{1}{2\sigma^2} y^T y + \frac{1}{2}\mu^T \tilde K_n^{-1}\mu}\, d\bar y = \frac{\det(\tilde K_n)^{1/2}}{(2\pi)^{n/2}\sigma^n \det(K_n)^{1/2}}\, e^{-\frac{1}{2\sigma^2} y^T y + \frac{1}{2}\mu^T \tilde K_n^{-1}\mu}$. (27)

The normalized evidence is

$Z_n^0 = \frac{Z_n}{(2\pi)^{-n/2}\sigma^{-n}\, e^{-\frac{1}{2\sigma^2}(y - f(\mathbf{x}))^T (y - f(\mathbf{x}))}} = \frac{\det(\tilde K_n)^{1/2}}{\det(K_n)^{1/2}}\, e^{-\frac{1}{2\sigma^2} y^T y + \frac{1}{2}\mu^T \tilde K_n^{-1}\mu + \frac{1}{2\sigma^2}(y - f(\mathbf{x}))^T (y - f(\mathbf{x}))}$. (28)

So the normalized stochastic complexity is

$F^0(D_n) = -\log Z_n^0 = -\frac{1}{2}\log\det(\tilde K_n) + \frac{1}{2}\log\det(K_n) + \frac{1}{2\sigma^2} y^T y - \frac{1}{2}\mu^T \tilde K_n^{-1}\mu - \frac{1}{2\sigma^2}(y - f(\mathbf{x}))^T (y - f(\mathbf{x}))$
$= -\frac{1}{2}\log\det\big(K_n^{-1} + \tfrac{1}{\sigma^2}I\big)^{-1} + \frac{1}{2}\log\det(K_n) + \frac{1}{2\sigma^2} y^T y - \frac{1}{2\sigma^4} y^T \big(K_n^{-1} + \tfrac{1}{\sigma^2}I\big)^{-1} y - \frac{1}{2\sigma^2}(y - f(\mathbf{x}))^T (y - f(\mathbf{x}))$
$= \frac{1}{2}\log\det\big(I + \tfrac{K_n}{\sigma^2}\big) + \frac{1}{2\sigma^2} y^T \big(I + \tfrac{K_n}{\sigma^2}\big)^{-1} y - \frac{1}{2\sigma^2}(y - f(\mathbf{x}))^T (y - f(\mathbf{x}))$
$= \frac{1}{2}\log\det\big(I + \tfrac{K_n}{\sigma^2}\big) + \frac{1}{2\sigma^2} f(\mathbf{x})^T \big(I + \tfrac{K_n}{\sigma^2}\big)^{-1} f(\mathbf{x}) + \frac{1}{2\sigma^2}\varepsilon^T \big(I + \tfrac{K_n}{\sigma^2}\big)^{-1}\varepsilon - \frac{1}{2\sigma^2}\varepsilon^T\varepsilon + \frac{1}{\sigma^2}\varepsilon^T \big(I + \tfrac{K_n}{\sigma^2}\big)^{-1} f(\mathbf{x})$, (29)

where $\varepsilon = y - f(\mathbf{x})$ denotes the noise vector. After taking the expectation over the noise, we get

$\mathbb{E}_\varepsilon F^0(D_n) = \frac{1}{2}\log\det\big(I + \tfrac{K_n}{\sigma^2}\big) + \frac{1}{2\sigma^2} f(\mathbf{x})^T \big(I + \tfrac{K_n}{\sigma^2}\big)^{-1} f(\mathbf{x}) - \frac{1}{2}\mathrm{Tr}\Big(I - \big(I + \tfrac{K_n}{\sigma^2}\big)^{-1}\Big)$. (30)

This concludes the proof.

C HELPER LEMMAS

Lemma 15. Assume that $m \to \infty$ as $n \to \infty$, and let $a_1, a_2, s_2, s_3 > 0$ be constants. If $s_1 > 1$ and $s_2 s_3 > s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} = \Theta\big(m^{\frac{1-s_1}{s_2}}\big)$. (31)

If $s_1 > 1$ and $s_2 s_3 = s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} = \Theta(m^{-s_3}\log m)$. (32)

If $s_1 > 1$ and $s_2 s_3 < s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} = \Theta(m^{-s_3})$.
(33)

Overall, if $s_1 > 1$ and $m \to \infty$,

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} = \begin{cases} \Theta\big(m^{\max\{-s_3,\, \frac{1-s_1}{s_2}\}}\big), & s_2 s_3 \neq s_1 - 1, \\ \Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big), & s_2 s_3 = s_1 - 1. \end{cases}$ (34)

Proof of Lemma 15. First, when $s_1 > 1$ and $s_2 s_3 > s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \le \frac{a_1}{(1 + a_2 m)^{s_3}} + \int_1^{\infty} \frac{a_1 x^{-s_1}}{(1 + a_2 m x^{-s_2})^{s_3}}\, dx = \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}} \int_{1/m^{1/s_2}}^{\infty} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx = \Theta\big(m^{\frac{1-s_1}{s_2}}\big)$,

where the equality follows from the change of variables $x \mapsto x/m^{1/s_2}$. On the other hand, we have

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \ge \int_1^{R+1} \frac{a_1 x^{-s_1}}{(1 + a_2 m x^{-s_2})^{s_3}}\, dx = m^{\frac{1-s_1}{s_2}} \int_{1/m^{1/s_2}}^{(R+1)/m^{1/s_2}} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx = \Theta\big(m^{\frac{1-s_1}{s_2}}\big)$.

Second, when $s_1 > 1$ and $s_2 s_3 = s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \le \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}} \int_{1/m^{1/s_2}}^{\infty} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx \le \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}}\, O\big(\log m^{1/s_2}\big) = \Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big)$,

and, on the other hand,

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \ge \int_1^{R+1} \frac{a_1 x^{-s_1}}{(1 + a_2 m x^{-s_2})^{s_3}}\, dx = m^{\frac{1-s_1}{s_2}} \int_{1/m^{1/s_2}}^{(R+1)/m^{1/s_2}} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx = \Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big)$.

Third, when $s_1 > 1$ and $s_2 s_3 < s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \le \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}} \int_{1/m^{1/s_2}}^{\infty} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx \le \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}}\,\Theta\big(m^{-\frac{1}{s_2}(1 - s_1 + s_2 s_3)}\big) = \Theta(m^{-s_3})$,

and, for the matching lower bound,

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \ge m^{\frac{1-s_1}{s_2}} \int_{2/m^{1/s_2}}^{(R+1)/m^{1/s_2}} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx = \Theta(m^{-s_3})$.

Overall, if $s_1 > 1$,

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} = \begin{cases} \Theta\big(m^{\max\{-s_3,\, \frac{1-s_1}{s_2}\}}\big), & s_2 s_3 \neq s_1 - 1, \\ \Theta(m^{-s_3}\log m), & s_2 s_3 = s_1 - 1. \end{cases}$ (35)

Lemma 16. Assume that $R = m^{\frac{1}{s_2} + \kappa}$ for some $\kappa > 0$.
Given constants $a_1, a_2, s_2, s_3 > 0$, if $s_1 \le 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} = \tilde O\big(\max\{m^{-s_3}, R^{1-s_1}\}\big)$. (36)

Proof of Lemma 16. First, when $s_1 \le 1$ and $s_2 s_3 > s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \le \frac{a_1}{(1 + a_2 m)^{s_3}} + \int_1^{R} \frac{a_1 x^{-s_1}}{(1 + a_2 m x^{-s_2})^{s_3}}\, dx = \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}} \int_{1/m^{1/s_2}}^{R/m^{1/s_2}} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx = \frac{a_1}{(1 + a_2 m)^{s_3}} + \tilde O\Big(m^{\frac{1-s_1}{s_2}}\big(\tfrac{R}{m^{1/s_2}}\big)^{1-s_1}\Big) = \tilde O\big(\max\{m^{-s_3}, R^{1-s_1}\}\big)$.

Second, when $s_1 \le 1$ and $s_2 s_3 \le s_1 - 1$, we have that

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} \le \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}} \int_{1/m^{1/s_2}}^{R/m^{1/s_2}} \frac{a_1 x^{-s_1}}{(1 + a_2 x^{-s_2})^{s_3}}\, dx \le \frac{a_1}{(1 + a_2 m)^{s_3}} + m^{\frac{1-s_1}{s_2}}\,\tilde O\Big(m^{-\frac{1}{s_2}(1 - s_1 + s_2 s_3)} + \big(\tfrac{R}{m^{1/s_2}}\big)^{1-s_1}\Big) = \tilde O\big(\max\{m^{-s_3}, R^{1-s_1}\}\big)$.

Overall, if $s_1 \le 1$,

$\sum_{i=1}^R \frac{a_1 i^{-s_1}}{(1 + a_2 m i^{-s_2})^{s_3}} = \tilde O\big(\max\{m^{-s_3}, R^{1-s_1}\}\big)$. (37)

Lemma 17. Assume that $f \in L^2(\Omega, \rho)$. Consider the random vector $f(\mathbf{x}) = (f(x_1), \dots, f(x_n))^T$, where $x_1, \dots, x_n$ are drawn i.i.d. from $\rho$. Then with probability at least $1 - \delta_1$, we have

$\|f(\mathbf{x})\|_2^2 = \sum_{i=1}^n f^2(x_i) = \tilde O\big((\tfrac{1}{\delta_1} + 1)\, n \|f\|_2^2\big)$, where $\|f\|_2^2 = \int_{\Omega} f^2(x)\, d\rho(x)$.

Proof of Lemma 17. Given a positive number $C \ge \|f\|_2^2$, applying Markov's inequality we have $P(f^2(X) > C) \le \frac{1}{C}\|f\|_2^2$. Let $A$ be the event that $f^2(x_i) \le C$ for all sample inputs $(x_i)_{i=1}^n$. Then

$P(A) \ge 1 - n P(f^2(X) > C) \ge 1 - \frac{n\|f\|_2^2}{C}$. (38)

Define $\bar f^2(x) = \min\{f^2(x), C\}$. Then $\mathbb{E}\bar f^2(X) \le \mathbb{E} f^2(X) = \|f\|_2^2$, so $|\bar f^2(X) - \mathbb{E}\bar f^2(X)| \le \max\{C, \|f\|_2^2\} = C$. Since $0 \le \bar f^2(x) \le C$, we have

$\mathbb{E}(\bar f^4(X)) \le C\,\mathbb{E}(\bar f^2(X)) \le C\|f\|_2^2$, (39)

and hence

$\mathbb{E}|\bar f^2(X) - \mathbb{E}\bar f^2(X)|^2 \le \mathbb{E}(\bar f^4(X)) \le C\|f\|_2^2$. (40)

Applying Bernstein's inequality, we have

$P\Big(\sum_{i=1}^n \bar f^2(x_i) > t + n\,\mathbb{E}\bar f^2(X)\Big) \le \exp\Big(\frac{-t^2}{2\big(n\,\mathbb{E}|\bar f^2(X) - \mathbb{E}\bar f^2(X)|^2 + \frac{Ct}{3}\big)}\Big) \le \exp\Big(\frac{-t^2}{2\big(nC\|f\|_2^2 + \frac{Ct}{3}\big)}\Big) \le \exp\Big(\frac{-t^2}{4\max\{nC\|f\|_2^2,\ \frac{Ct}{3}\}}\Big)$.
Hence, with probability at least $1 - \delta_1/2$ we have

$\sum_{i=1}^n \bar f^2(x_i) \le \max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\, n\|f\|_2^2},\ \tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\} + n\,\mathbb{E}\bar f^2(X) \le \max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\, n\|f\|_2^2},\ \tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\} + n\|f\|_2^2$. (41)

When the event $A$ happens, $f^2(x_i) = \bar f^2(x_i)$ for all sample inputs. According to (38) and (41), with probability at least $1 - \frac{n\|f\|_2^2}{C} - \delta_1/2$, we have

$\sum_{i=1}^n f^2(x_i) = \sum_{i=1}^n \bar f^2(x_i) \le \max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\, n\|f\|_2^2},\ \tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\} + n\|f\|_2^2$.

Choosing $C = \frac{2}{\delta_1} n\|f\|_2^2$, with probability at least $1 - \delta_1$ we have

$\sum_{i=1}^n f^2(x_i) \le \max\Big\{\sqrt{\tfrac{8}{\delta_1}\log\tfrac{2}{\delta_1}\, n^2\|f\|_2^4},\ \tfrac{8}{3\delta_1}\, n\|f\|_2^2 \log\tfrac{2}{\delta_1}\Big\} + n\|f\|_2^2 = \tilde O\big((\tfrac{1}{\delta_1} + 1)\, n\|f\|_2^2\big)$.

Lemma 18. Assume that $f \in L^2(\Omega, \rho)$. Consider the random vector $f(\mathbf{x}) = (f(x_1), \dots, f(x_n))^T$, where $x_1, \dots, x_n$ are drawn i.i.d. from $\rho$. Assume that $\|f\|_\infty = \sup_{x\in\Omega} |f(x)| \le C$. With probability at least $1 - \delta_1$, we have

$\|f(\mathbf{x})\|_2^2 = \tilde O\big(\sqrt{C^2 n\|f\|_2^2} + C^2\big) + n\|f\|_2^2$, where $\|f\|_2^2 = \int_{\Omega} f^2(x)\, d\rho(x)$.

Proof of Lemma 18. We have $|f^2(X) - \mathbb{E}f^2(X)| \le \max\{C^2, \|f\|_2^2\} = C^2$. Since $0 \le f^2(x) \le C^2$, we have

$\mathbb{E}(f^4(X)) \le C^2\,\mathbb{E}(f^2(X)) \le C^2\|f\|_2^2$, (42)

so

$\mathbb{E}|f^2(X) - \mathbb{E}f^2(X)|^2 \le \mathbb{E}(f^4(X)) \le C^2\|f\|_2^2$. (43)

Applying Bernstein's inequality, we have

$P\Big(\sum_{i=1}^n f^2(x_i) > t + n\,\mathbb{E}f^2(X)\Big) \le \exp\Big(\frac{-t^2}{2\big(n\,\mathbb{E}|f^2(X) - \mathbb{E}f^2(X)|^2 + \frac{C^2 t}{3}\big)}\Big) \le \exp\Big(\frac{-t^2}{2\big(nC^2\|f\|_2^2 + \frac{C^2 t}{3}\big)}\Big) \le \exp\Big(\frac{-t^2}{4\max\{nC^2\|f\|_2^2,\ \frac{C^2 t}{3}\}}\Big)$.

Hence, with probability at least $1 - \delta_1$ we have

$\sum_{i=1}^n f^2(x_i) \le \max\Big\{\sqrt{4C^2\log\tfrac{1}{\delta_1}\, n\|f\|_2^2},\ \tfrac{4C^2}{3}\log\tfrac{1}{\delta_1}\Big\} + n\,\mathbb{E}f^2(X) \le \tilde O\big(\max\{\sqrt{C^2 n\|f\|_2^2},\ C^2\}\big) + n\|f\|_2^2 \le \tilde O\big(\sqrt{C^2 n\|f\|_2^2} + C^2\big) + n\|f\|_2^2$. (44)

For the proofs in the remainder of this section, the definitions of the relevant quantities are given in Section 3.

Corollary 19. With probability at least $1 - \delta_1$, we have $\|f_{>R}(\mathbf{x})\|_2^2 = \tilde O\big((\tfrac{1}{\delta_1} + 1)\, n R^{1-2\beta}\big)$.

Proof of Corollary 19. The $L^2$ norm of $f_{>R}$ satisfies $\|f_{>R}\|_2^2 = \sum_{p=R+1}^\infty \mu_p^2 \le \frac{C_\mu}{2\beta - 1} R^{1-2\beta}$. Applying Lemma 17 we get the result.

Corollary 20.
For any ν∈RR , with probability of at least 1−δ1 we have ‖ΦRν‖22 =Õ ( ( 1δ1 +1 ) n‖ν‖ 2 2 ) . Proof of Corollary 20 . Let g ( x ) = ∑R p=1νpφp ( x ) . Then ΦRν=g ( x ) . The L2 norm of g ( x ) is given by ‖g‖22 = ∑R p=1ν 2 p =‖ν‖22 . Applying Lemma 17 we get the result . Next we consider the quantity , ΦTRΦR−nI . The key tool that we use is the matrix Bernstein inequality that describes the upper tail of a sum of independent zero-mean random matrices . Lemma 21 . Let D = diag { d1 , ... , dR } , d1 , ... , dR > 0 and dmax = max { d1 , ... , dR } . Let M=max { ∑R p=0d 2 p‖φp‖2∞ , d2max } . Then with probability of at least 1−δ , we have ‖D ( ΦTRΦR−nI ) D‖2≤max { √ nd2maxM log R δ , M log R δ ) } . ( 45 ) Proof of Lemma 21 . Let Yj = ( φ1 ( xj ) , ... , φR ( xj ) ) T and Zj = DYj . It is easy to verify that E ( ZjZTj ) =D2 . Then the left hand side of ( 45 ) is ∑n j=1 [ ZjZ T j −E ( ZjZTj ) ] . We note that ‖ZjZTj −E ( ZjZTj ) ‖2≤max { ‖ZjZTj ‖2 , ‖E ( ZjZTj ) ‖2 } ≤max { ‖Zj‖22 , d2max } . For ‖Zj‖22 , we have ‖Zj‖22 = R∑ p=0 d2pφ 2 p ( xj ) ≤ R∑ p=0 d2p‖φp‖2∞ , ( 46 ) we have ‖ZjZTj −E ( ZjZTj ) ‖2≤max { ∑R p=0d 2 p‖φp‖2∞ , d2max } . On the other hand , E [ ( ZjZTj −E ( ZjZTj ) ) 2 ] =E [ ‖Zj‖22ZjZTj ] − ( E ( ZjZTj ) ) 2 . Since E [ ‖Zj‖22ZjZTj ] 4E [ R∑ p=0 d2p‖φp‖2∞ZjZTj ] , ( by ( 46 ) ) = R∑ p=0 d2p‖φp‖2∞E [ ZjZTj ] , we have ‖E [ ( ZjZTj −E ( ZjZTj ) ) 2 ] ‖2≤max { ∑R p=0d 2 p‖φp‖2∞‖E [ ZjZTj ] ‖2 , d4max } ≤max { ∑R p=0d 2 p‖φp‖2∞d2max , d4max } ≤d2maxmax { ∑R p=0d 2 p‖φp‖2∞ , d2max } . Using the matrix Bernstein inequality ( Tropp , 2012 , Theorem 6.1 ) , we have P ( ‖ n∑ j=1 [ ZjZ T j −E ( ZjZTj ) ] ‖2 > t ) ≤Rexp −t2 2 ( n‖E [ ( ZjZTj −E ( ZjZTj ) ) 2 ] ‖2+ tmaxj‖ZjZTj −E ( ZjZTj ) ‖2 3 ) ≤Rexp −t2 2 ( nd2maxmax { ∑R p=0d 2 p‖φp‖2∞ , d2max } + tmax { ∑R p=0d 2 p‖φp‖2∞ , d2max } 3 ) =Rexp ( −t2 O ( max { nd2maxmax { ∑R p=0d 2 p‖φp‖2∞ , d2max } , tmax { ∑R p=0d 2 p‖φp‖2∞ , d2max } } ) ) . 
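The concentration of ΦᵀᵣΦᵣ around nI that Lemma 21 quantifies can be observed empirically for an explicit uniformly bounded orthonormal system. The sketch below uses the cosine basis of L²([0,1]); the basis choice and sample sizes are illustrative assumptions, and the check is informal rather than part of the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 20_000, 5

# Orthonormal basis of L2([0,1]): phi_0 = 1, phi_p(x) = sqrt(2) cos(p*pi*x).
x = rng.uniform(0.0, 1.0, size=n)
p = np.arange(R)
Phi = np.where(p == 0, 1.0, np.sqrt(2.0) * np.cos(p * np.pi * x[:, None]))

# E[phi_p(X) phi_q(X)] = delta_pq, so Phi^T Phi / n should be close to I,
# with spectral-norm deviation of order n^{-1/2} as the matrix Bernstein
# inequality predicts.
dev = np.linalg.norm(Phi.T @ Phi / n - np.eye(R), 2)
print(f"||Phi^T Phi / n - I||_2 = {dev:.4f}")
```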
Then with probability of at least 1−δ , we have ‖ n∑ j=1 [ ZjZ T j −E ( ZjZTj ) ] ‖2 ≤max { √ nd2maxmax { ∑R p=0d 2 p‖φp‖2∞ , d2max } logRδ , max { ∑R p=0d 2 p‖φp‖2∞ , d2max } logRδ } . Corollary 22 . Suppose that the eigenvalues ( λp ) p≥1 satisfy Assumption 4 , and the eigenfunctions satisfy Assumption 6 . Assume σ2 = Θ ( nt ) where 1− α1+2τ < t < 1 . Let γ be a positive number such that 1+α+2τ− ( 1+2τ+2α ) t2α ( 1−t ) < γ≤1 . Then with probability of at least 1−δ , we have ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤O ( n 1+α+2τ− ( 1+2τ+2α ) t 2α −γ ( 1−t ) √ logRδ ) . ( 47 ) Proof of Corollary 22 . Use the same notation as in Lemma 21 . Let D = ( I + nσ2 ΛR ) −γ/2Λ γ/2 R . Then d2max ≤ σ 2γ nγ and ∑R p=0 d 2 p‖φp‖2∞ ≤ ∑R p=0 C 2 φ λγpp 2τ ( 1+ n σ2 λp ) γ = O ( ( nσ2 ) 1−γα+2τ α ) , where the first inequality follows from Assumptions 4 and 6 and the last equality from Lemma 15 . Then M=max { ∑R p=0d 2 p‖φp‖2∞ , d2max } =O ( ( nσ2 ) 1−γα+2τ α ) . Applying Lemma 21 , we have ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤ 1σ2 max { √ nσ 2γ nγ O ( ( n σ2 ) 1−γα+2τ α ) logRδ , O ( ( n σ2 ) 1−γα+2τ α ) logRδ } =O ( 1σ2 ( n σ2 ) 1−2γα+2τ 2α n 1 2 ) =O ( √ logRδ n ( 1−2γα+2τ ) ( 1−t ) 2α + 1 2−t ) =O ( √ logRδ n 1+α+2τ 2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) . ( 48 ) Corollary 23 . Suppose that the eigenvalues ( λp ) p≥1 satisfy Assumption 4 , and the eigenfunctions satisfy Assumption 6 . Let Λ̃1 , R = diag { 1 , λ1 , ... , λR } . Assume σ2 = Θ ( nt ) where t < 1 . Let γ be a positive number such that 1+2τα < γ≤1 . Then with probability of at least 1−δ , we have ‖ ( I+ nσ2 ΛR ) −γ/2Λ̃ γ/2 1 , R ( Φ T RΦR−nI ) Λ̃ γ/2 1 , R ( I+ n σ2 ΛR ) −γ/2‖2≤O ( √ logRδ n 1 2 ) . ( 49 ) Proof of Corollary 23 . Use the same notation as in Lemma 21 . Let D= ( I+ nσ2 ΛR ) −γ/2Λ̃ γ/2 1 , R .
Then d2max≤1 and ∑R p=0d 2 p‖φp‖2∞≤C2φ+ ∑R p=1C 2 φ λγpp 2τ ( 1+ n σ2 λp ) γ =C2φ+O ( n ( 1−γα+2τ ) ( 1−t ) α ) =O ( 1 ) where the first inequality follows from Assumptions 4 and 6 and the second equality from Lemma 15 . Then M=max { ∑R p=0d 2 p‖φp‖2∞ , d2max } =O ( 1 ) . Applying Lemma 21 , we have ‖ ( I+ nσ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤max { √ logRδ nO ( 1 ) , log R δ O ( 1 ) } =O ( √ logRδ n 1 2 ) . ( 50 ) Corollary 24 . Suppose that the eigenvalues ( λp ) p≥1 satisfy Assumption 4 , and the eigenfunctions satisfy Assumption 6 . Let ΦR+1 : S = ( φR+1 ( x ) , ... , φS ( x ) ) , and ΛR+1 : S = ( λR+1 , ... , λS ) . Then with probability of at least 1−δ , we have ‖Λ1/2R+1 : S ( Φ T R+1 : SΦR+1 : S−nI ) Λ 1/2 R+1 : S‖2≤O ( logS−Rδ max { n 1 2R 1−2α+2τ 2 , R1−α+2τ } ) . ( 51 ) Proof of Corollary 24 . Use the same notation as in Lemma 21 . Let D = Λ1/2R+1 : S . Then d2max≤CλR−α=O ( R−α ) and ∑S p=R+1C 2 φd 2 pp 2τ ≤ ∑S p=R+1C 2 φCλp −αp2τ =O ( R1−α+2τ ) , where the first inequality follows from Assumptions 4 and 6 . ThenM=max { ∑S p=R+1C 2 φd 2 pp 2τ , d2max } = O ( R1−α+2τ ) . Applying Lemma 21 , we have ‖ ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤max { √ logS−Rδ nO ( R −α ) O ( R1−α+2τ ) , logS−Rδ O ( R 1−α+2τ ) ) } =O ( logS−Rδ max { n 1 2R 1−2α+2τ 2 , R1−α+2τ } ) . ( 52 ) Lemma 25 . Under the assumptions of Corollary 24 , with probability of at least 1−δ , we have ‖Φ > RΛ > RΦT > R‖2 =Õ ( max { nR−α , n 1 2R 1−2α+2τ 2 , R1−α+2τ } ) . Proof of Lemma 25 . For S∈N , we have ‖Φ > SΛ > SΦT > S‖2≤ ∞∑ p=S+1 ‖Λpφp ( x ) φp ( x ) T ‖2 = ∞∑ p=S+1 λp‖φp ( x ) ‖22 ≤ ∞∑ p=S+1 λpnC 2 φ =O ( nS1−α ) . Let S=R α α−1 . Then we get ‖Φ > SΛ > SΦT > S‖2 =O ( nR−α ) . Let ΦR+1 : S= ( φR+1 ( x ) , ... , φS ( x ) ) , ΛR+1 : S= ( λR+1 , ... , λS ) . 
We then have ‖Φ > RΛ > RΦT > R‖2≤‖Φ > SΛ > SΦT > S‖2+‖ΦR+1 : SΛR+1 : SΦTR+1 : S‖2 ≤O ( nR−α ) +‖Λ1/2R+1 : SΦ T R+1 : SΦR+1 : SΛ 1/2 R+1 : S‖2 ≤O ( nR−α ) +n‖ΛR+1 : S‖2+‖Λ1/2R+1 : S ( Φ T R+1 : SΦR+1 : S−nI ) Λ 1/2 R+1 : S‖2 ≤O ( nR−α ) +O ( nR−α ) +O ( logR α α−1−R δ max { n 12R 1−2α+2τ 2 , R1−α+2τ } ) =Õ ( max { nR−α , n 12R 1−2α+2τ 2 , R1−α+2τ } ) , where in the fourth inequality we use Corollary 24 . Corollary 26 . Assume that σ2 = Θ ( 1 ) . If R=n 1α+κ where 0 < κ < α−1−2τα ( 1+2τ ) , then with probability of at least 1−δ , we have ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ‖2≤‖ Φ > RΛ > RΦ T > R σ2 ‖2 =Õ ( n −κα ) =o ( 1 ) . Proof of Corollary 26 . By Lemma 25 and the assumption R=n 1 α+κ , we have ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ‖2≤‖ Φ > RΛ > RΦ T > R σ2 ‖2 ≤Õ ( max { nR−α , n 12R 1−2α+2τ 2 , R1−α+2τ } ) =Õ ( n−κα ) . Lemma 27 . Assume that ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 < 1 where 1+2τ α < γ≤1 . We then have ( I+ 1σ2 ΛRΦ T RΦR ) −1 = ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 . Proof of Lemma 27 . First note that ‖ 1σ2 ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2‖2 < ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 < 1 . Let Λ̃ε , R = diag { ε , λ1 , ... , λR } . Since ΛR = diag { 0 , λ1 , ... , λR } , we have that when ε is sufficiently small , ‖ 1σ2 ( I + n σ2 Λ̃ε , R ) −1/2Λ̃ 1/2 ε , R ( Φ T RΦR − nI ) Λ̃ 1/2 ε , R ( I + n σ2 Λ̃ε , R ) −1/2‖2 < 1 .
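The series expansion asserted in Lemma 27 can be checked numerically on a small random instance. In the sketch below, G plays the role of ΦᵀᵣΦᵣ and the perturbation G − nI is kept small so that the series converges; the dimensions, σ², and the perturbation scale are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
R, n, sigma2 = 6, 100, 1.0

lam = np.sort(rng.uniform(0.1, 1.0, size=R))[::-1]
Lam = np.diag(lam)
E = rng.normal(size=(R, R))
E = 0.05 * (E + E.T)                  # small symmetric perturbation
G = n * np.eye(R) + E                 # stands in for Phi_R^T Phi_R

# Left-hand side of the identity in Lemma 27.
lhs = np.linalg.inv(np.eye(R) + Lam @ G / sigma2)

# Right-hand side: Neumann series around (I + (n/sigma^2) Lam)^{-1}.
D = np.linalg.inv(np.eye(R) + n * Lam / sigma2)
K = D @ Lam @ (G - n * np.eye(R)) / sigma2   # convergence needs ||K||_2 < 1
rhs = D.copy()
term = D.copy()
for j in range(1, 60):                # partial sums of sum_j (-K)^j D
    term = -K @ term
    rhs = rhs + term

err = np.linalg.norm(lhs - rhs)
print("series truncation error:", err)
```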
Since all diagonal entries of Λ̃ , R are positive , we have ( I+ 1σ2 Λ̃ , RΦ T RΦR ) −1 = ( I+ nσ2 Λ̃ , R+ 1 σ2 Λ̃ , R ( Φ T RΦR−nI ) ) −1 =Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 [ I+ 1σ2 ( I+ n σ2 Λ̃ , R ) −1/2Λ̃ 1/2 , R ( Φ T RΦR−nI ) Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 ] −1 ( I+ nσ2 Λ̃ , R ) −1/2Λ̃ −1/2 , R = ( I+ nσ2 Λ̃ , R ) −1 + ∞∑ j=1 [ ( −1 ) jΛ̃1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1/2Λ̃ 1/2 , R ( Φ T RΦR−nI ) Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 ) j ( I+ nσ2 Λ̃ , R ) −1/2Λ̃ −1/2 , R ] = ( I+ nσ2 Λ̃ , R ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1Λ̃ , R ( Φ T RΦR−nI ) ) j ( I+ nσ2 Λ̃ , R ) −1 . Letting →0 , we get ( I+ 1σ2 ΛRΦ T RΦR ) −1 = ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ nσ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 . This concludes the proof . Lemma 28 . If ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ‖2 < 1 , then we have ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 ∞∑ j=1 ( −1 ) j ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1 . ( 53 ) In particular , assume that σ2 =Θ ( 1 ) . LetR=n 1 α+κ where 0 < κ < α−1−2τα ( 1+2τ ) . Then with probability of at least 1−δ , for sufficiently large n , we have ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ‖2 < 1 and ( 53 ) holds . Proof of Lemma 28 . Define Φ > R= ( φR+1 ( x ) , φR+2 ( x ) , ... ) , Λ > R=diag ( λR+1 , λR+2 , ... ) . Then we have ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 = ( I+ ΦRΛRΦ T R σ2 + Φ > RΛ > RΦ T > R σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 = ( ( I+ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) −1 −I ) ( I+ ΦRΛRΦ T R σ2 ) −1 . By Corollary 26 , for sufficiently large n , ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ‖2 < 1 with probability of at least 1−δ . Hence ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 = ( ( I+ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) −1 −I ) ( I+ ΦRΛRΦ T R σ2 ) −1 = ∞∑ j=1 ( −1 ) j ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1 . Lemma 29 . 
Assume that µ0 =0 and σ2 =Θ ( nt ) where 1− α1+2τ < t < 1 . LetR=n ( 1α+κ ) ( 1−t ) where 0 < κ < α−1−2τ+ ( 1+2τ ) tα2 ( 1−t ) . Then when n is sufficiently large , with probability of at least 1−2δ we have ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } ) . ( 54 ) Proof of Lemma 29 . Let Λ1 : R = diag { λ1 , ... , λR } , Φ1 : R = ( φ1 ( x ) , φ1 ( x ) , ... , φR ( x ) ) and µ1 : R = ( µ1 , ... , µR ) . Since µ0 = 0 , we have ( I + 1σ2 ΦRΛRΦ T R ) −1fR ( x ) = ( I+ 1σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R. Using the Woodbury matrix identity , we have that ( I+ 1σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R= [ I−Φ1 : R ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1Λ1 : RΦT1 : R ] Φ1 : Rµ1 : R =Φ1 : Rµ1 : R−Φ1 : R ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1Λ1 : RΦT1 : RΦ1 : Rµ1 : R =σ2Φ1 : R ( σ 2I+Λ1 : RΦ T 1 : RΦ1 : R ) −1µ1 : R. ( 55 ) Let A = ( I + nσ2 Λ1 : R ) −1/2Λ 1/2 1 : R ( Φ T 1 : RΦ1 : R − nI ) Λ 1/2 1 : R ( I + n σ2 Λ1 : R ) −1/2.By Corollary 22 , with probability of at least 1−δ , we have ‖ 1σ2A‖2 = √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α . When n is sufficiently large , ‖ 1σ2A‖2 =o ( 1 ) is less than 1 because 1− α 1+2τ < t < 1 . By Lemma 27 , we have ( I+ 1σ2 Λ1 : RΦ T 1 : RΦ1 : R ) −1 = ( I+ nσ2 Λ1 : R ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 Λ1 : R ) −1Λ1 : R ( Φ T 1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1 . We then have ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 = 1 σ2 ∥∥∥∥∥∥ ( I+ nσ2 Λ1 : R ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 Λ1 : R ) −1Λ1 : R ( Φ T 1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1 µ1 : R ∥∥∥∥∥∥ 2 ≤ 1 σ2 ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2+ ∞∑ j=1 ∥∥∥ ( 1σ2 ( I+ nσ2 Λ1 : R ) −1Λ1 : R ( ΦT1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1µ1 : R∥∥∥ 2 . 
( 56 ) By Lemma 15 and Assumption 5 , assuming that supi≥1pi+1−pi=h , we have ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2≤ √√√√ R∑ p=1 C2µp −2β ( 1+nCλp−α/σ2 ) 2 =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) , ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2≥ √√√√bRh c∑ i=1 C2µi −2β ( 1+ nσ2Cλ ( hi ) −α ) 2 =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . Overall we have ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) . ( 57 ) Using the fact that ‖ 1σ2A‖2 = √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α and ‖ ( I+ nσ2 Λ1 : R ) −1Λ1 : R‖2≤n−1 , we have∥∥∥ ( 1σ2 ( I+ nσ2 Λ1 : R ) −1Λ1 : R ( ΦT1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1µ1 : R∥∥∥ 2 = ∥∥∥∥ ( I+ nσ2 Λ1 : R ) − 12 Λ 121 : R ( 1σ2A ) j ( I+ nσ2 Λ1 : R ) − 12 Λ 121 : Rµ1 : R∥∥∥∥ 2 ≤Õ ( n− 1−t 2 ) ‖ 1σ2A‖ j 2‖ ( I+ nσ2 Λ1 : R ) − 12 Λ − 12 1 : Rµ1 : R‖2 ( 58 ) By Lemma 16 and the assumptionR=n ( 1 α+κ ) ( 1−t ) , ‖ ( I+ nσ2 Λ1 : R ) − 12 Λ − 12 1 : Rµ1 : R‖2≤ √√√√ R∑ p=1 ( Cλp−α ) −1C2µp −2β ( 1+nCλp−α/σ2 ) 1 =Õ ( max { n− ( 1−t ) /2 , R1/2−β+α/2 } ) =Õ ( max { n− ( 1−t ) /2 , n ( 12 + 1−2β 2α +κ ( 1/2−β+α/2 ) ) ( 1−t ) } ) ( 59 ) We then have ∥∥∥ ( 1σ2 ( I+ nσ2 Λ1 : R ) −1Λ1 : R ( ΦT1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1µ1 : R∥∥∥ 2 =‖ 1σ2A‖ j 2Õ ( max { n− ( 1−t ) , n ( 1−2β 2α +κ ( 1/2−β+α/2 ) ) ( 1−t ) } ) ( 60 ) By ( 56 ) , ( 57 ) and ( 60 ) , we have ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) + ∞∑ j=1 ‖ 1 σ2 A‖j2Õ ( max { n− ( 1−t ) , n ( 1−t ) 1−2β 2α +κ ( 1−t ) ( 1/2−β+α/2 ) } ) =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) +Õ ( n 1−α+2τ 2α − ( 1+2τ ) t 2α ) Õ ( max { n− ( 1−t ) , n ( 1−t ) 1−2β 2α +κ ( 1−t ) ( 1/2−β+α/2 ) } ) . ( 61 ) By assumption κ < α−1−2τ+ ( 1+2τ ) tα2 ( 1−t ) , we have that κ ( 1−t ) ( 1/2−β+α/2 ) + 1−α+2τ 2α − ( 1+2τ ) t 2α < κα ( 1−t ) /2+ 1−α+2τ 2α − ( 1+2τ ) t 2α < 0 . 
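The two-sided estimate (57) for ‖(I + (n/σ²)Λ₁:ᵣ)⁻¹µ₁:ᵣ‖₂ can be observed numerically under the polynomial models µ_p ≍ p^(−β), λ_p ≍ p^(−α): doubling n should scale the norm by roughly 2^max{−1, (1−2β)/(2α)}. In the sketch below, the values α = 2, β = 1.5, σ² = 1, and the truncation level are illustrative assumptions.

```python
import numpy as np

def weighted_norm(n, alpha=2.0, beta=1.5, sigma2=1.0, P=1_000_000):
    """|| (I + (n/sigma^2) Lambda)^{-1} mu ||_2 with mu_p = p^-beta, lambda_p = p^-alpha."""
    p = np.arange(1, P + 1, dtype=float)
    mu, lam = p ** (-beta), p ** (-alpha)
    return np.sqrt(np.sum(mu ** 2 / (1.0 + n * lam / sigma2) ** 2))

alpha, beta = 2.0, 1.5
r = weighted_norm(20_000) / weighted_norm(10_000)
expect = 2.0 ** max(-1.0, (1 - 2 * beta) / (2 * alpha))
print(f"doubling n scales the norm by {r:.4f} (predicted rate ~ {expect:.4f})")
```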
Using ( 61 ) , we then get ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : R ) −1µ1 : R‖2 . ( 62 ) By Corollary 20 , with probability of at least 1−δ , we have ‖Φ1 : R ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Õ ( √ ( 1δ +1 ) n‖ ( σ 2I+Λ1 : RΦ T 1 : RΦ1 : R ) −1µ1 : R‖2 ) =Õ ( √ ( 1δ +1 ) n·n ( 1−t ) max { −1 , 1−2β2α } ) . ( 63 ) From ( 55 ) , we get ‖ ( I + 1σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R‖2 = Õ ( √ ( 1δ +1 ) n ·n ( 1−t ) max { −1 , 1−2β2α } ) . This concludes the proof . Lemma 30 . Assume that µ0 > 0 and σ2 = Θ ( nt ) where 1− α1+2τ < t < 1 . Let R = n 1 α+κ where 0 < κ < α−1−2τ+ ( 1+2τ ) tα2 . Then when n is sufficiently large , with probability of at least 1−2δ , we have ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n ) . ( 64 ) Proof of Lemma 30 . Using the Woodbury matrix identity , we have that ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) = [ I−ΦR ( σ2I+ΛRΦTRΦR ) −1ΛRΦTR ] ΦRµR =ΦRµR−ΦR ( σ2I+ΛRΦTRΦR ) −1ΛRΦTRΦRµR =σ2ΦR ( σ 2I+ΛRΦ T RΦR ) −1µR . ( 65 ) Let µR,1 = ( µ0,0 , ... ,0 ) and µR,2 = ( 0 , µ1 , ... , µR ) . Then µR=µR,1+µR,2 . Then we have ‖ ( σ2I+ΛRΦTRΦR ) −1µR‖2 =‖ ( σ2I+ΛRΦTRΦR ) −1µR,1‖2+‖ ( σ2I+ΛRΦTRΦR ) −1µR,2‖2 . ( 66 ) According to ( 62 ) in the proof of Lemma 29 , we have ‖ ( σ2I + ΛRΦTRΦR ) −1µR,2‖2 = Õ ( nmax { − ( 1−t ) , ( 1−t ) ( 1−2β ) 2α } ) . Next we estimate ‖σ2ΦR ( σ2I+ΛRΦTRΦR ) −1µR,1‖2 . Let A= ( I+ n σ2 Λ1 : R ) −γ/2Λ γ/2 1 : R ( Φ T 1 : RΦ1 : R−nI ) Λ γ/2 1 : R ( I+ n σ2 Λ1 : R ) −γ/2 where 11−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) < γ < 1 . Since 1− α 1+2τ < t < 1 , 1 1−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) < 1 so the range for γ is well-defined.By Corollary 22 , with probability of at least 1 − δ , we have ‖ 1σ2A‖2 = Õ ( √ logRδ n 1+α+2τ 2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) = o ( 1 ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 because 1− α1+2τ < t < 1 . 
By Lemma 27 , we have ( I+ 1σ2 ΛRΦ T RΦR ) −1 = ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 . We then have ‖ ( σ2I+ΛRΦTRΦR ) −1µR,1‖2 = 1 σ2 ∥∥∥∥∥∥ ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 µR,1 ∥∥∥∥∥∥ 2 ≤ 1 σ2 ‖ ( I+ nσ2 ΛR ) −1µR,1‖2+ ∞∑ j=1 ∥∥∥ ( 1σ2 ( I+ nσ2 ΛR ) −1ΛR ( ΦTRΦR−nI ) ) j ( I+ nσ2 ΛR ) −1µR,1∥∥∥ 2 . ( 67 ) By Lemma 15 , ‖ ( I+ n σ2 ΛR ) −1µR,1‖2≤ √√√√µ20+ R∑ p=1 C2µp −2β ( 1+nCλp−α/σ2 ) 2 =O ( 1 ) . ( 68 ) Let Λ̃1 , R = diag { 1 , λ1 , ... , λR } and I0 , R = ( 0 , 1 , ... , 1 ) . Then ΛR = Λ̃1 , RI0 , R . Let B = ( I + n σ2 ΛR ) −γ/2Λ̃ γ/2 1 , R ( Φ T RΦR−nI ) Λ̃ γ/2 1 , R ( I+ n σ2 ΛR ) −γ/2 . According to Corollary 23 , we have ‖B‖2 = O ( √ logRδ n 1 2 ) . Using the fact that ‖ 1σ2A‖2 =Õ ( √ logRδ n 1+α+2τ 2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) , we have∥∥∥ ( 1σ2 ( I+ nσ2 ΛR ) −1ΛR ( ΦTRΦR−nI ) ) j ( I+ nσ2 ΛR ) −1µR,1∥∥∥ 2 = 1 σ2j ∥∥∥∥ ( I+ nσ2 ΛR ) −1+γ2 Λ1−γ2R ( A ( I+ nσ2 ΛR ) −1+γΛ1−γR ) j−1B ( I+ nσ2 ΛR ) −1+γ2 µR,1∥∥∥∥ 2 ≤ 1 σ2 ( n ( −1+ γ 2 + ( −1+γ ) ( j−1 ) ) ( 1−t ) Õ ( √ logRδ n ( j−1 ) ( 1+α+2τ2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) ) √ logRδ n 1 2 ‖µR,1‖2 ≤n ( −1+ γ 2 ) ( 1−t ) + 1 2−tÕ ( n [ 1−α+2τ− ( 1+2τ ) t ] ( j−1 ) 2α ) √ logRδ ‖µR,1‖2 =Õ ( n− 1 2 + γ 2 ( 1−t ) + [ 1−α+2τ− ( 1+2τ ) t ] ( j−1 ) 2α ) . ( 69 ) Since 11−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) < γ < 1 and− 1 2 + 1 1−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) 1−t 2 < 0 , we can let γ be a little bit larger than 11−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) and make− 1 2 + γ 2 ( 1−t ) < 0 holds . By ( 67 ) , ( 68 ) , ( 69 ) , we have ‖ ( σ2I+ΛRΦTRΦR ) −1µR,1‖2 ≤O ( 1 ) + ∞∑ j=1 Õ ( n− 1 2 + γ 2 ( 1−t ) + [ 1−α+2τ− ( 1+2τ ) t ] ( j−1 ) 2α ) ≤O ( 1 ) +o ( 1 ) =O ( 1 ) . ( 70 ) According to ( 66 ) , we have ‖ ( σ2I + ΛRΦTRΦR ) −1µR‖2 = Õ ( nmax { − ( 1−t ) , ( 1−t ) ( 1−2β ) 2α } ) +O ( 1 ) = O ( 1 ) . 
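The Woodbury push-through step used in (55) and (65), namely (I + σ⁻²ΦΛΦᵀ)⁻¹Φµ = σ²Φ(σ²I + ΛΦᵀΦ)⁻¹µ, is an exact matrix identity, so it can be confirmed on arbitrary random data; a minimal sketch (all dimensions and values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, R, sigma2 = 50, 8, 0.5

Phi = rng.normal(size=(n, R))
Lam = np.diag(rng.uniform(0.1, 1.0, size=R))
mu = rng.normal(size=R)

# (I + sigma^{-2} Phi Lam Phi^T)^{-1} Phi mu ...
lhs = np.linalg.inv(np.eye(n) + Phi @ Lam @ Phi.T / sigma2) @ (Phi @ mu)
# ... equals sigma^2 Phi (sigma^2 I + Lam Phi^T Phi)^{-1} mu.
rhs = sigma2 * Phi @ np.linalg.inv(sigma2 * np.eye(R) + Lam @ Phi.T @ Phi) @ mu

print("max abs difference:", np.max(np.abs(lhs - rhs)))
```

The point of the identity in the proofs is that it trades an n×n inverse for an R×R one, which is what makes the truncated analysis tractable.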
By Corollary 20 , with probability of at least 1−δ , we have ‖ΦR ( σ2I+ΛRΦTRΦR ) −1µR‖2 =Õ ( √ ( 1δ +1 ) n‖ ( σ 2I+ΛRΦ T RΦR ) −1µR‖2 ) =Õ ( √ ( 1δ +1 ) n ) . From ( 65 ) , we get ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n ) . This concludes the proof . Lemma 31 . Assume that σ2 = Θ ( 1 ) . Let R= n 1α+κ where 0 < κ < α−1−2τα2 . Assume that µ0 = 0 . Then when n is sufficiently large , with probability of at least 1−3δ we have ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . ( 71 ) Assume that µ0 > 0 . Then when n is sufficiently large , with probability of at least 1−3δ we have ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n ) . ( 72 ) Proof of Lemma 31 . We have ( I+ ΦΛΦ T σ2 ) −1fR ( x ) = ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) + ( ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 ) fR ( x ) . ( 73 ) When µ0 =0 , by Lemma 29 , with probability of at least 1−2δ , we have ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . Since α−1−2τα2 < α−1−2τ α ( 1+2τ ) , we apply Lemma 28 and Corollary 26 and get that with probability of at least 1−δ , the second term in the right hand side of ( 73 ) is estimated as follows : ‖ ( ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 ) fR ( x ) ‖2 =‖ ∞∑ j=1 ( −1 ) j ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖2 = ∞∑ j=1 ∥∥∥ ( ( I+ ΦRΛRΦTRσ2 ) −1 Φ > RΛ > RΦT > Rσ2 ) ∥∥∥j 2 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖2 = ∞∑ j=1 Õ ( n−jκα ) Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) =o ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . Overall , from ( 73 ) , we have that with probability 1−3δ , ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . When µ0 > 0 , using the same approach and Lemma 30 , we can prove that ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 = Õ ( √ ( 1δ +1 ) n ) . This concludes the proof . D PROOF OF THE MAIN RESULTS D.1 PROOFS RELATED TO THE ASYMPTOTICS OF THE NORMALIZED STOCHASTIC COMPLEXITY Lemma 32 . 
Under Assumptions 4 , 5 and 6 , with probability of at least 1−2δ we have |T1 , R ( Dn ) −T1 ( Dn ) |=Õ ( 1 σ2 ( nR 1−α+n1/2R1−α+τ+R1−α+2τ ) ) ( 74 ) If R= n 1 α+κ where κ > 0 , we have |T1 , R ( Dn ) −T1 ( Dn ) |= o ( 1 σ2n 1 α ) . If we further assume that 0 < κ < α−1−2τα2 , µ0 =0 and σ 2 =Θ ( 1 ) , then for sufficiently large n with probability of at least 1−4δ we have |T2 , R ( Dn ) −T2 ( Dn ) |=Õ ( ( 1δ +1 ) n max { ( 1α+κ ) 1−2β 2 ,1+ 1−2β α + ( 1−2β ) κ 2 , −1−κα,1+ 1−2β α −κα } ) . ( 75 ) Proof of Lemma 32 . Define Φ > R = ( φR+1 ( x ) , φR+2 ( x ) , ... , φp ( x ) , ... ) , and Λ > R = diag ( λR+1 , ... , λp , ... ) . We then have |T1 ( Dn ) −T1 , R ( Dn ) |= ∣∣∣∣12 logdet ( I+ 1σ2 ΦΛΦT ) − 12 logdet ( I+ 1σ2 ΦRΛRΦTR ) ∣∣∣∣ + 1 2 ∣∣∣∣Tr ( I+ ΦΛΦTσ2 ) −1−Tr ( I+ ΦRΛRΦTRσ2 ) −1 ∣∣∣∣ . ( 76 ) As for the first term in the right hand side of ( 76 ) , we have∣∣∣∣12 logdet ( I+ 1σ2 ΦΛΦT ) − 12 logdet ( I+ 1σ2 ΦRΛRΦTR ) ∣∣∣∣ = ∣∣∣∣12 logdet ( ( I+ 1 σ2 ΦRΛRΦ T R ) −1 ( I+ 1 σ2 ΦRΛRΦ T R+ 1 σ2 Φ > RΛ > RΦ T > R ) ) ∣∣∣∣ = ∣∣∣∣12 logdet ( I+ 1 σ2 ( I+ 1 σ2 ΦRΛRΦ T R ) −1Φ > RΛ > RΦ T > R ) ∣∣∣∣ = 1 2 ∣∣∣∣Trlog ( I+ 1σ2 ( I+ 1σ2 ΦRΛRΦTR ) −1Φ > RΛ > RΦT > R ) ∣∣∣∣ . ( 77 ) Given a concave function h and a matrix B∈Rn×n whose eigenvalues ζ1 , ... , ζn are all positive , we have that Trh ( B ) = ∑n i=1h ( ζi ) ≤nh ( 1 n ∑n i=1ζi ) =nh ( 1 nTrB ) , ( 78 ) where we used Jensen ’ s inequality .
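Inequality (78) is easy to confirm numerically for a concrete concave h; the sketch below uses h(x) = log(1+x) and a random positive definite matrix (both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30

# Random symmetric positive definite matrix B.
M = rng.normal(size=(n, n))
B = M @ M.T + 0.1 * np.eye(n)
zeta = np.linalg.eigvalsh(B)        # positive eigenvalues of B

h = np.log1p                        # h(x) = log(1+x), concave on [0, inf)
tr_h = np.sum(h(zeta))              # Tr h(B)
bound = n * h(np.trace(B) / n)      # n * h(Tr B / n), the Jensen bound

print(f"Tr h(B) = {tr_h:.4f} <= n h(Tr B / n) = {bound:.4f}")
```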
Using h ( x ) =log ( 1+x ) in ( 78 ) , with probability 1−δ , we have∣∣ 1 2 logdet ( I+ 1 σ2 ΦΛΦ T ) − 12 logdet ( I+ 1 σ2 ΦRΛRΦ T R ) ∣∣ ≤ n2 log ( 1+ 1 nTr ( 1 σ2 ( I+ ΦRΛRΦ T R σ2 ) −1Φ > RΛ > RΦ T > R ) ) ≤ n2 log ( 1+ 1 nσ2 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖2Tr ( Φ > RΛ > RΦT > R ) ) ≤ n2 log ( 1+ 1 nσ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 ) ≤ 1 2σ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 = 12σ2 ∑∞ p=R+1λp ( C2φÕ ( √ p2τn‖φp‖22+p2τ ) +n‖φp‖22 ) =Õ ( 1σ2n ∑∞ p=R+1λp+n 1/2 ∑∞ p=R+1λpp τ+ ∑∞ p=R+1λpp 2τ ) =Õ ( 1 σ2 ( nR 1−α+n1/2R1−α+τ+R1−α+2τ ) ) =o ( 1 σ2n 1 α ) , ( 79 ) where in the second inequality we use the fact that TrAB≤‖A‖2TrB whenA andB are symmetric positive definite matrices , and in the last inequality we use Lemma 18 . As for the second term in the right hand side of ( 76 ) , letA= ( I+ ΦRΛRΦ T R σ2 ) −1/2 . Then we have 1 2 ∣∣∣Tr ( I+ ΦΛΦTσ2 ) −1−Tr ( I+ ΦRΛRΦTRσ2 ) −1∣∣∣ = 12 ∣∣∣∣TrA [ I− ( I+A ( Φ > RΛ > RΦT > Rσ2 ) A ) −1 ] A ∣∣∣∣ ≤ 12Tr [ I− ( I+A ( Φ > RΛ > RΦ T > R σ2 ) A ) −1 ] ≤ n2 ( 1− ( 1+ 1 nTrA ( Φ > RΛ > RΦ T > R σ2 ) A ) −1 ) ≤ n2 ( 1− ( 1+ 1 nTr ( Φ > RΛ > RΦ T > R σ2 ) ) −1 ) ≤ n2 ( 1− ( 1+ 1 nσ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 ) ) −1 ) ≤ 1 2σ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 =Õ ( 1 σ2 ( nR 1−α+n1/2R1−α+τ+R1−α+2τ ) ) =o ( 1 σ2n 1 α ) , where in the first inequality we use the fact that ‖A‖2 < 1 and TrABA≤‖A‖22TrB when A and B are symmetric positive definite matrices , in the second inequality we use h ( x ) =1−1/ ( 1+x ) in ( 78 ) and in the last equality we use the last few steps of ( 79 ) . This concludes the proof of the first statement . As for |T2 ( Dn ) −T2 , R ( Dn ) | , we have |T2 ( Dn ) −T2 , R ( Dn ) |= ∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣ + ∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣ . 
( 80 ) For the first term on the right-hand side of ( 80 ) , we have∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣ ≤2 ∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣+∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1f > R ( x ) ∣∣∣ ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2+‖f > R ( x ) ‖2‖ ( I+ ΦΛΦ T σ2 ) −1‖2‖f > R ( x ) ‖2 ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2+‖f > R ( x ) ‖22 . Applying Corollary 19 and Lemma 31 , with probability of at least 1−4δ , we have∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣ ≤2Õ ( √ ( 1δ +1 ) nR 1−2β ) Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) +Õ ( ( 1δ +1 ) nR 1−2β ) =2Õ ( ( 1δ +1 ) n 1+ ( 1 α+κ ) 1−2β 2 +max { −1 , 1−2β 2α } ) +Õ ( ( 1δ +1 ) n 1+ ( 1 α+κ ) ( 1−2β ) ) =2Õ ( ( 1δ +1 ) n 1+ ( 1 α+κ ) 1−2β 2 +max { −1 , 1−2β 2α } ) , where the last equality holds because ( 1α+κ ) 1−2β 2 < 1−2β 2α when κ > 0 . As for the second term on the right-hand side of ( 80 ) , according to Lemma 28 , Corollary 26 and Lemma 29 , we have∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣ = ∣∣∣∣∣∣ ∞∑ j=1 ( −1 ) jfR ( x ) T ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ∣∣∣∣∣∣ ≤ ∞∑ j=1 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖j−12 ·‖ Φ > RΛ > RΦ T > R σ2 ‖j2 ·‖ ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖22 = ∞∑ j=1 Õ ( n−jκα ) Õ ( ( 1δ +1 ) n 1+max { −2 , 1−2βα } ) =Õ ( ( 1δ +1 ) n 1+max { −2 , 1−2βα } −κα ) . ( 81 ) By ( 80 ) , we have |T2 ( Dn ) −T2 , R ( Dn ) |=Õ ( ( 1δ +1 ) n 1+ ( 1 α+κ ) 1−2β 2 +max { −1 , 1−2β 2α } ) +Õ ( ( 1δ +1 ) n 1+max { −2 , 1−2βα } −κα ) =Õ ( ( 1δ +1 ) n max { ( 1α+κ ) 1−2β 2 ,1+ 1−2β α + ( 1−2β ) κ 2 , −1−κα,1+ 1−2β α −κα } ) . This concludes the proof of the second statement . In Lemma 32 , we gave a bound for |T2 , R ( Dn ) −T2 ( Dn ) | when n 1 α < R < n 1 α+ α−1−2τ α2 . For R > n , we note the following lemma : Lemma 33 . Let R = nC and σ2 = nt . Assume that C > = 1 and C ( 1−α+ 2τ ) − t < 0 . 
Under Assumptions 4 , 5 and 6 , for sufficiently large n and with probability of at least 1−3δ we have |T2 , R ( Dn ) −T2 ( Dn ) |=Õ ( ( 1δ +1 ) 1 σ2nR max { 1/2−β,1−α+2τ } ) . ( 82 ) Proof of Lemma 33 . Define Φ > R = ( φR+1 ( x ) , φR+2 ( x ) , ... , φp ( x ) , ... ) , and Λ > R = diag ( λR+1 , ... , λp , ... ) . Then we have |T2 ( Dn ) −T2 , R ( Dn ) |= ∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ + ∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ . ( 83 ) For the first term on the right-hand side of ( 83 ) , with probability 1−3δ we have∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ ≤2 ∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣+∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1f > R ( x ) ∣∣∣∣ ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1‖2‖fR ( x ) ‖2+‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1‖2‖f > R ( x ) ‖2 ≤2‖f > R ( x ) ‖2‖fR ( x ) ‖2+‖f > R ( x ) ‖22 ≤2Õ ( √ ( 1 δ +1 ) nR1−2β ) Õ ( √ ( 1 δ +1 ) n·‖f‖2 ) +Õ ( ( 1 δ +1 ) nR1−2β ) =Õ ( ( 1 δ +1 ) nR1/2−β ) , where we used Corollary 19 and Lemma 17 for the last inequality . The assumption C ( 1− α+ 2τ ) − t < 0 means that R 1−α+2τ σ2 = o ( 1 ) . For the second term on the right-hand side of ( 83 ) , by Lemmas 28 and 25 , we have∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ = ∣∣∣∣∣∣ ∞∑ j=1 ( −1 ) jfR ( x ) T ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ∣∣∣∣∣∣ ≤ ∞∑ j=1 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖j+12 ·‖ Φ > RΛ > RΦ T > R σ2 ‖j2 ·‖fR ( x ) ‖22 = ∞∑ j=1 Õ ( 1 σ2 Rj ( 1−α+2τ ) ) Õ ( ( 1 δ +1 ) n‖f‖22 ) =Õ ( ( 1 δ +1 ) 1 σ2 nR1−α+2τ ) . ( 84 ) Using ( 83 ) , we have |T2 ( Dn ) −T2 , R ( Dn ) |=Õ ( ( 1 δ +1 ) nR1/2−β ) +Õ ( ( 1 δ +1 ) n 1 σ2 R1−α+2τ ) =Õ ( ( 1 δ +1 ) n 1 σ2 Rmax { 1/2−β,1−α+2τ } ) . Next we consider the asymptotics of T1 , R ( Dn ) and T2 , R ( Dn ) . Lemma 34 . Let A = ( I + nσ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR − nI ) Λ γ/2 R ( I + n σ2 ΛR ) −γ/2 .
Assume that ‖A‖2 < 1 where 1+2τα < γ≤1 . Then we have T2 , R ( Dn ) = n 2σ2µ T R ( I+ n σ2 ΛR ) −1µR+ 1 2 ∑∞ j=1 ( −1 ) j+1Ej , where Ej=µ T R 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ nσ2 ΛR ) −1µR . Proof of Lemma 34 . Let Λ̃ , R = diag { , λ1 , ... , λR } . Since ΛR = diag { 0 , λ1 , ... , λR } , we have that when is sufficiently small , ‖ 1σ2 ( I+ n σ2 Λ̃ , R ) −1/2Λ̃ 1/2 , R ( Φ T RΦR−nI ) Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2‖2 < 1 . Since all diagonal entries of Λ̃ , R are positive , we have 1 2σ2 µTRΦ T R ( I+ 1 σ2 ΦRΛ̃ , RΦ T R ) −1ΦRµR = 1 2σ2 µTRΦ T R [ I−ΦR ( σ2I+Λ̃ , RΦTRΦR ) −1Λ̃ , RΦTR ] ΦRµR = 1 2σ2 µTRΦ T RΦRµR− 1 2σ2 µTRΦ T RΦR ( σ 2I+Λ̃ , RΦ T RΦR ) −1Λ̃ , RΦ T RΦRµR = 1 2 µTRΦ T RΦR ( σ 2I+Λ̃ , RΦ T RΦR ) −1µR = 1 2 µTRΛ̃ −1 , RΛ̃ , RΦ T RΦR ( σ 2I+Λ̃ , RΦ T RΦR ) −1µR = 1 2 µTRΛ̃ −1 , RµR− 1 2 µTRΛ̃ −1 , R ( I+ 1 σ2 Λ̃ , RΦ T RΦR ) −1µR . ( 85 ) Using Lemma 27 , we have 1 2 µTRΛ̃ −1 , RµR− 1 2 µTRΛ̃ −1 , R ( I+ 1 σ2 Λ̃ , RΦ T RΦR ) −1µR = 1 2 µTRΛ̃ −1 , RµR− 1 2 µTRΛ̃ −1 , R ( I+ n σ2 Λ̃ , R ) −1µR + 1 2 ∞∑ j=1 ( −1 ) j+1µTRΛ̃−1 , R ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1Λ̃ , R ( Φ T RΦR−nI ) ) j ( I+ n σ2 Λ̃ , R ) −1µR = n 2σ2 µTR ( I+ n σ2 Λ̃ , R ) −1µR + 1 2 ∞∑ j=1 ( −1 ) j+1µTR 1 σ2 ( I+ n σ2 Λ̃ , R ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1Λ̃ , R ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 Λ̃ , R ) −1µR ( 86 ) Letting →0 , we get T2 , R ( Dn ) = 1 2σ2 µTRΦ T R ( I+ 1 σ2 ΦRΛRΦ T R ) −1ΦRµR = n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR + 1 2 ∞∑ j=1 [ ( −1 ) j+1µTR 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR ] This concludes the proof . Lemma 35 . Assume that σ2 = Θ ( 1 ) . LetR=n 1 α+κ where 0 < κ < α−1−2τ2α2 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ , we have T1 , R ( Dn ) = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) ) ( 1+o ( 1 ) ) =Θ ( n 1α ) . 
( 87 ) Furthermore , if we assume µ0 =0 , we have T2 , R ( Dn ) = ( n 2σ2µ T R ( I+ n σ2 ΛR ) −1µR ) ( 1+o ( 1 ) ) = { Θ ( nmax { 0,1+ 1−2β α } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . ( 88 ) Proof of Lemma 35 . Let A= ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2 , ( 89 ) where 1+α+2τ2α < γ≤1 . By Corollary 22 , with probability of at least 1−δ , we have ‖A‖2 =Õ ( n 1−2γα+α+2τ 2α ) . ( 90 ) When n is sufficiently large , ‖A‖2 is less than 1 . LetB= ( I+ nσ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2 . Then ‖B‖2 = σ 2 ( 1−γ ) n1−γ ‖A‖2 = Õ ( n 1−α+2τ 2α ) . Using the Woodbury matrix identity , we compute T1 , R ( Dn ) as follows : T1 , R ( Dn ) = 1 2 logdet ( I+ 1 σ2 ΛRΦ T RΦR ) − 12TrΦR ( σ 2I+ΛRΦ T RΦR ) −1ΛRΦ T R = 12 logdet ( I+ n σ2 ΛR ) + 1 2 logdet [ I+ 1 σ2 ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2 ] − 12Tr ( σ 2I+ΛΦTRΦR ) −1ΛΦTRΦR = 12 logdet ( I+ n σ2 ΛR ) + 1 2Trlog [ I+ 1 σ2B ] − 1 2Tr ( I−σ 2 ( σ2I+ΛΦTRΦR ) −1 ) ) = 12 logdet ( I+ n σ2 ΛR ) + 1 2Tr ∞∑ j=1 ( −1 ) j−1 j ( 1 σ2B ) j − 12Tr I− ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) ) + 12Tr ∞∑ j=1 ( −1 ) j−1 j ( 1 σ2B ) j − 12Tr ∞∑ j=1 ( −1 ) j 1σ2j ( I+ n σ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2 , ( 91 ) where in the last equality we apply Lemma 27 . Let h ( x ) = log ( 1+x ) − ( 1− 11+x ) . It is easy to verify that h ( x ) is increasing on [ 0 , +∞ ) . As for the first term on the right hand side of ( 91 ) , we have 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) = 12 R∑ p=1 ( log ( 1+ nσ2λp ) − ( 1− 1 1+ n σ2 λp ) ) = 12 R∑ p=1 h ( nσ2λp ) ≤ 1 2 R∑ p=1 h ( n σ2 Cλp −α ) ≤ 12h ( n σ2Cλ ) + 1 2 ∫ [ 1 , R ] h ( nσ2Cλx −α ) dx = 12h ( n σ2 Cλ ) + 1 2n 1/α ∫ [ 1/n1/α , R/n1/α ] h ( Cλσ2 x −α ) dx =Θ ( n1/α ) , where in the last equality we use the fact that ∫ [ 0 , +∞ ] h ( x −α ) dx < ∞ . 
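The Θ(n^(1/α)) rate just derived for (1/2)∑_p h(nλ_p/σ²) with h(x) = log(1+x) − (1 − 1/(1+x)) can be observed numerically under the polynomial decay λ_p ≍ p^(−α): doubling n should multiply the sum by roughly 2^(1/α). In the sketch below, α = 2, σ² = 1, and the truncation level are illustrative assumptions; this is a sanity check, not part of the proof.

```python
import numpy as np

def h(x):
    # h(x) = log(1+x) - (1 - 1/(1+x)), increasing on [0, inf)
    return np.log1p(x) - x / (1.0 + x)

def complexity(n, alpha=2.0, sigma2=1.0, P=2_000_000):
    """(1/2) sum_p h(n * p^-alpha / sigma^2), truncated at p = P."""
    p = np.arange(1, P + 1, dtype=float)
    return 0.5 * np.sum(h(n * p ** (-alpha) / sigma2))

alpha = 2.0
r = complexity(20_000, alpha) / complexity(10_000, alpha)
print(f"doubling n multiplies the sum by {r:.4f} (predicted ~ 2^(1/alpha) = {2 ** (1 / alpha):.4f})")
```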
On the other hand , we have 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) = 12 R∑ p=1 h ( nσ2λp ) ≥ 1 2 R∑ p=1 h ( nσ2Cλp −α ) ≥ 12 ∫ [ 1 , R+1 ] h ( nσ2Cλx −α ) dx = 12n 1/α ∫ [ 1/n1/α , ( R+1 ) /n1/α ] h ( 1σ2Cλx −α ) dx =Θ ( n1/α ) . Overall , we have 12 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) =Θ ( n1/α ) . As for the second term on the right hand side of ( 91 ) , we have∣∣∣∣∣∣Tr ∞∑ j=1 ( −1 ) j−1 j ( 1 σ2B ) j ∣∣∣∣∣∣≤R ∞∑ j=1 ‖ 1σ2B‖ j 2 =R ∞∑ j=1 1 σ2j Õ ( n j ( 1−α+2τ ) 2α ) =RÕ ( n 1−α+2τ 2α ) =Õ ( n 1 α+κ+ 1−α+2τ 2α ) . As for the third term on the right hand side of ( 91 ) , we have∣∣∣∣∣∣Tr ∞∑ j=1 ( −1 ) j 1σ2j ( I+ n σ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2 ∣∣∣∣∣∣ ≤ ∞∑ j=1 ∣∣∣Tr ( 1σ2j ( I+ nσ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2 ) ∣∣∣ ≤R ∞∑ j=1 ∥∥∥ 1σ2j ( I+ nσ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2∥∥∥ 2 ≤R ∞∑ j=1 ∥∥∥ 1σ2j ( I+ nσ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2∥∥∥ 2 ≤R ∞∑ j=1 ∥∥ 1 σ2jB j ∥∥ 2 =Õ ( n 1 α+κ+ 1−α+2τ 2α ) . Then the asymptotics of T1 , R ( Dn ) is given by T1 , R ( Dn ) = 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) +Õ ( n 1α+κ+ 1−α+2τ2α ) +Õ ( n 1α+κ+ 1−α+2τ2α ) =Θ ( n1/α ) +Õ ( n 1 α+κ+ 1−α+2τ 2α ) =Θ ( n 1 α ) , where in the last inequality we use the assumption that κ < α−1−2τ2α . Since Õ ( n 1 α+κ+ 1−α+2τ 2α ) is lower order term compared to Θ ( n 1 α ) , we further have T1 , R ( Dn ) = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) ) ( 1+o ( 1 ) ) . This concludes the proof of the first statement . Let Λ1 : R=diag { λ1 , ... , λR } , Φ1 : R= ( φ1 ( x ) , φ1 ( x ) , ... , φR ( x ) ) and µ1 : R= ( µ1 , ... , µR ) . Since µ0 =0 , we have T2 , R ( Dn ) = 12σ2µ T 1 : RΦ T 1 : R ( I+ 1 σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R. 
According to Lemma 34 , we have T2 , R ( Dn ) = n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R + 1 2 ∞∑ j=1 ( −1 ) j+1µT1 : R 1 σ2 ( I+ n σ2 Λ1 : R ) −1 ( ΦT1 : RΦ1 : R−nI ) ( 1 σ2 ( I+ n σ2 Λ1 : R ) −1Λ1 : R ( Φ T 1 : RΦ1 : R−nI ) ) j−1 = n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R + 1 2 ∞∑ j=1 [ ( −1 ) j+1 1 σ2j µT1 : R ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R A ( ( I+ n σ2 Λ1 : R ) −1+γΛ1−γ1 : R A ) j−1 ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R µ1 : R ] ( 92 ) where in the second to last equality we used the definition ofA ( 89 ) . As for the first term on the right hand side of ( 92 ) , by Lemma 15 , Assumption 4 and Assumption 5 , we have n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R≤ n 2σ2 R∑ p=1 C2µp −2β 1+ nσ2Cλp −α = { Θ ( nmax { 0,1+ 1−2β α } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . On the other hand , by Assumption 5 , assuming that supi≥1pi+1−pi=h , we have n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R≥ n 2σ2 bRh c∑ i=1 C2µp −2β i 1+ nσ2Cλp −α i ≥ n 2σ2 bRh c∑ i=1 C2µi −2β 1+ nσ2Cλ ( hi ) −α = { Θ ( nmax { 0,1+ 1−2β α } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . Overall , we have n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R=Θ ( n max { 0,1+ 1−2βα } logkn ) , where k= { 0 , α 6=2β−1 , 1 , α=2β−1 . By Lemma 16 , we have ‖ ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R µ1 : R‖ 2 2≤ R∑ p=1 C2µp −2β ( Cλp −α ) −γ ( 1+ nσ2Cλp −α ) 2−γ =Õ ( max { n−2+γ , R1−2β+αγ } ) =Õ ( nmax { −2+γ , 1−2β α +γ+κ ( 1−2β+αγ ) } ) . 
( 93 ) Using ( 90 ) , the second term on the right-hand side of ( 92 ) is bounded as follows :
(1/2) Σ_{j=1}^∞ [ (−1)^{j+1} (1/σ^{2j}) µ_{1:R}^T (I + (n/σ²)Λ_{1:R})^{−1+γ/2} Λ_{1:R}^{−γ/2} A ( (I + (n/σ²)Λ_{1:R})^{−1+γ} Λ_{1:R}^{1−γ} A )^{j−1} (I + (n/σ²)Λ_{1:R})^{−1+γ/2} Λ_{1:R}^{−γ/2} µ_{1:R} ]
≤ (1/2) Σ_{j=1}^∞ (1/σ^{2j}) ‖A‖^j (n/σ²)^{(−1+γ)(j−1)} ‖(I + (n/σ²)Λ_{1:R})^{−1+γ/2} Λ_{1:R}^{−γ/2} µ_{1:R}‖₂²
≤ (1/2) Σ_{j=1}^∞ (1/σ^{2j}) Õ( n^{j(1−2γα+α+2τ)/(2α)} ) (n/σ²)^{(−1+γ)(j−1)} Õ( n^{max{−2+γ, (1−2β)/α+γ+κ(1−2β+αγ)}} )
= Õ( n^{max{−2+γ+(1−2γα+α+2τ)/(2α), (1−2β)/α+γ+(1−2γα+α+2τ)/(2α)+κ(1−2β+αγ)}} )
= Õ( n^{max{−2+(1+α+2τ)/(2α), (1−2β)/α+(1+α+2τ)/(2α)+κ(1−2β+αγ)}} ) . ( 94 )
Since (1+α+2τ)/(2α) < (1+α+2τ)/(α+1+2τ) = 1 , we have −2+(1+α+2τ)/(2α) < 0 . Also we have
(1−2β)/α + (1+α+2τ)/(2α) + κ(1−2β+αγ) = (1−2β)/α + 1 + (1−α+2τ)/(2α) + κ(1−2β+αγ) ≤ (1−2β)/α + 1 + (1−α+2τ)/(2α) + καγ < (1−2β)/α + 1 , ( 95 )
where the last inequality holds because κ < (α−1−2τ)/(2α²) and γ ≤ 1 . Hence we have
T_{2,R}(D_n) = (n/(2σ²)) µ_{1:R}^T (I + (n/σ²)Λ_{1:R})^{−1} µ_{1:R} + Õ( n^{max{−2+(1+α+2τ)/(2α), (1−2β)/α+(1+α+2τ)/(2α)+κ(1−2β+αγ)}} )
= Θ( n^{max{0, 1+(1−2β)/α}} log^k n ) + Õ( n^{max{−2+(1+α+2τ)/(2α), (1−2β)/α+(1+α+2τ)/(2α)+κ(1−2β+αγ)}} )
= Θ( n^{max{0, 1+(1−2β)/α}} log^k n ) ,
where k = 0 if α ≠ 2β−1 and k = 1 if α = 2β−1 . Since the Õ(·) term is a lower-order term compared to Θ( n^{max{0, 1+(1−2β)/α}} log^k n ) , we further have
T_{2,R}(D_n) = ( (n/(2σ²)) µ_{1:R}^T (I + (n/σ²)Λ_{1:R})^{−1} µ_{1:R} ) (1+o(1)) = ( (n/(2σ²)) µ^T (I + (n/σ²)Λ)^{−1} µ ) (1+o(1)) .
This concludes the proof of the second statement .
Lemma 36 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−5δ , we have
T_1(D_n) = ( (1/2) logdet(I + (n/σ²)Λ_R) − (1/2) Tr( I − (I + (n/σ²)Λ_R)^{−1} ) ) (1+o(1)) = Θ( n^{1/α} ) , ( 96 )
Furthermore , let δ = n^{−q} where 0 ≤ q < min{ (2β−1)(α−1−2τ)/(4α²) , (α−1−2τ)/(2α) } . If we assume µ_0 = 0 , we have
T_2(D_n) = ( (n/(2σ²)) µ^T (I + (n/σ²)Λ)^{−1} µ ) (1+o(1)) = Θ( n^{max{0, 1+(1−2β)/α}} ) if α ≠ 2β−1 , and Θ( log n ) if α = 2β−1 . ( 97 )
Proof of Lemma 36 .
Let R = n^{1/α+κ} where 0 ≤ κ < (α−1−2τ)/(2α²) . By Lemmas 32 and 35 , with probability of at least 1−5δ we have
|T_{1,R}(D_n) − T_1(D_n)| = Õ( n^{1/α+κ(1−α)} ) , ( 98 )
and
|T_{2,R}(D_n) − T_2(D_n)| = Õ( (1/δ+1) n^{max{ (1/α+κ)(1−2β)/2 , 1+(1−2β)/α+(1−2β)κ/2 , −1−κα , 1+(1−2β)/α−κα }} ) ( 99 )
as well as
T_{1,R}(D_n) = ( (1/2) logdet(I + (n/σ²)Λ_R) − (1/2) Tr( I − (I + (n/σ²)Λ_R)^{−1} ) ) (1+o(1)) = Θ( n^{1/α} ) , ( 100 )
and
T_{2,R}(D_n) = ( (n/(2σ²)) µ^T (I + (n/σ²)Λ)^{−1} µ ) (1+o(1)) = Θ( n^{max{0, 1+(1−2β)/α}} ) if α ≠ 2β−1 , and Θ( log n ) if α = 2β−1 . ( 101 )
We then have
T_1(D_n) = T_{1,R}(D_n) + ( T_1(D_n) − T_{1,R}(D_n) ) = Θ( n^{1/α} ) + Õ( n^{1/α+κ(1−α)} ) = Θ( n^{1/α} ) .
Since Õ( n^{1/α+κ(1−α)} ) is a lower-order term compared to Θ( n^{1/α} ) , we further have
T_1(D_n) = ( (1/2) logdet(I + (n/σ²)Λ_R) − (1/2) Tr( I − (I + (n/σ²)Λ_R)^{−1} ) ) (1+o(1)) = Θ( n^{1/α} ) .
This concludes the proof of the first statement . As for T_2(D_n) , we have
T_2(D_n) = T_{2,R}(D_n) + ( T_2(D_n) − T_{2,R}(D_n) )
= Θ( n^{max{0, 1+(1−2β)/α}} log^k n ) + Õ( (1/δ+1) n^{max{ (1/α+κ)(1−2β)/2 , 1+(1−2β)/α+(1−2β)κ/2 , −1−κα , 1+(1−2β)/α−κα }} )
= Θ( n^{max{0, 1+(1−2β)/α}} log^k n ) + Õ( n^{q+max{ (1/α+κ)(1−2β)/2 , 1+(1−2β)/α+(1−2β)κ/2 , −1−κα , 1+(1−2β)/α−κα }} ) ,
where we use δ = n^{−q} , and k = 0 if α ≠ 2β−1 , k = 1 if α = 2β−1 . Since 0 ≤ κ < (α−1−2τ)/(2α²) and 0 ≤ q < min{ (2β−1)(α−1−2τ)/(4α²) , (α−1−2τ)/(2α) } , we can choose κ < (α−1−2τ)/(2α²) arbitrarily close to (α−1−2τ)/(2α²) such that 0 ≤ q < min{ (2β−1)κ/2 , κα } . Then we have (1/α+κ)(1−2β)/2 + q < 0 , −1−κα+q < 0 , (1−2β)κ/2 + q < 0 and −κα+q < 0 . So we have T_2(D_n) = Θ( n^{max{0, 1+(1−2β)/α}} log^k n ) . Since the Õ(·) term is a lower-order term compared to Θ( n^{max{0, 1+(1−2β)/α}} log^k n ) , we further have
T_2(D_n) = ( (n/(2σ²)) µ^T (I + (n/σ²)Λ)^{−1} µ ) (1+o(1)) .
This concludes the proof of the second statement .
Proof of Theorem 7 .
Using Lemma 36 and noting that 1α > 0 , with probability of at least 1−5δ̃ , we have E F 0 ( Dn ) =T1 ( Dn ) +T2 ( Dn ) = [ 1 2 logdet ( I+ n σ2 ΛR ) − 1 2 Tr ( I− ( I+ n σ2 ΛR ) −1 ) + n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR ] ( 1+o ( 1 ) ) =Θ ( nmax { 1 α , 1−2β α +1 } ) Furthermore , we have logdet ( I+ n σ2 Λ ) −logdet ( I+ n σ2 ΛR ) = ∞∑ p=R+1 log ( 1+ n σ2 λp ) ≤ n σ2 ∞∑ p=R+1 λp≤ n σ2 ∞∑ p=R+1 Cλp −α= n σ2 O ( R1−α ) = n σ2 O ( n ( 1−α ) ( 1 α+κ ) ) =o ( n 1 α ) . Then we have log det ( I + nσ2 ΛR ) = log det ( I + n σ2 Λ ) ( 1 + o ( 1 ) ) . Similarly we can prove Tr ( I− ( I+ nσ2 Λ ) −1 ) = Tr ( I− ( I+ nσ2 ΛR ) −1 ) ( 1 + o ( 1 ) ) and µT ( I + nσ2 Λ ) −1µ = µTR ( I+ n σ2 ΛR ) −1µR ( 1+o ( 1 ) ) . Letting δ=5δ̃ , we get the result . In the case of µ0 > 0 , we have the following lemma : Lemma 37 . Assume that σ2 = Θ ( 1 ) . Let R= n 1α+κ where 0 < κ < α−1−2τα2 . Assume that µ0 > 0 . Under Assumptions 4 , 5 and 6 , for sufficiently large nwith probability of at least 1−4δ we have |T2 , R ( Dn ) −T2 ( Dn ) |=Õ ( ( 1 δ +1 ) nmax { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) .. ( 102 ) Proof of Lemma 37 . As for |T2 ( Dn ) −T2 , R ( Dn ) | , we have |T2 ( Dn ) −T2 , R ( Dn ) |= ∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ + ∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ . ( 103 ) For the first term on the right-hand side of ( 103 ) , we have∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ ≤2 ∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣+∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1f > R ( x ) ∣∣∣∣ ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1fR ( x ) ‖2+‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1‖2‖f > R ( x ) ‖2 ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1fR ( x ) ‖2+‖f > R ( x ) ‖22 . 
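The truncation step above (discarding eigenvalues beyond R = n^{1/α+κ}) can be illustrated numerically. The sketch below uses assumed example values α = 2, κ = 0.1, C = σ² = 1, n = 10⁴, and checks both the linear bound log(1+x) ≤ x used above and that the tail stays below n^{1/α}.

```python
import numpy as np

# Hedged numeric illustration of the truncation bound above: with
# lambda_p = C*p^{-alpha} and R = n^{1/alpha + kappa}, the discarded tail
#   sum_{p>R} log(1 + (n/sigma^2)*lambda_p)
# is at most (n/sigma^2) * sum_{p>R} lambda_p and is o(n^{1/alpha}).
# All concrete values here are illustrative assumptions.
alpha, kappa, s2, n = 2.0, 0.1, 1.0, 10_000
R = int(n ** (1.0 / alpha + kappa))
p = np.arange(R + 1.0, 1_000_001.0)        # tail indices p > R (finite cutoff)
lam = p ** (-alpha)
tail = np.sum(np.log1p((n / s2) * lam))
linear_bound = (n / s2) * np.sum(lam)      # the log(1+x) <= x bound
```

At finite n the ratio tail / n^{1/α} is small but not tiny; it decays only at the slow polynomial rate dictated by κ.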
Applying Corollary 19 and Lemma 31 , with probability of at least 1−4δ , we have∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ ≤2Õ ( √ ( 1 δ +1 ) nR1−2β ) Õ ( √ ( 1 δ +1 ) n ) +Õ ( ( 1 δ +1 ) nR1−2β ) =2Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) +Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) ( 1−2β ) ) =2Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) . As for the second term on the right-hand side of ( 80 ) , according to Lemma 28 , Corollary 26 and Lemma 30 , we have∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ = ∣∣∣∣∣∣ ∞∑ j=1 ( −1 ) jfR ( x ) T ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ∣∣∣∣∣∣ ≤ ∞∑ j=1 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖j−12 ·‖ Φ > RΛ > RΦ T > R σ2 ‖j2 ·‖ ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖22 = ∞∑ j=1 Õ ( n−jκα ) Õ ( ( 1 δ +1 ) n ) =Õ ( ( 1 δ +1 ) n1−κα ) . ( 104 ) By ( 80 ) , we have |T2 ( Dn ) −T2 , R ( Dn ) |=Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) +Õ ( ( 1 δ +1 ) n1−κα ) =Õ ( ( 1 δ +1 ) nmax { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) . Lemma 38 . Assume that σ2 = Θ ( 1 ) . Let R= n 1α+κ where 0 < κ < min { α−1−2τ2α2 , 2β−1 α2 } . Assume that µ0 > 0 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ , we have T2 , R ( Dn ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) . ( 105 ) Proof of Lemma 38 . Let A= ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2 , ( 106 ) where 1+α+2τ2α < γ≤1 . By Corollary 22 , with probability of at least 1−δ , we have ‖A‖2 =Õ ( n 1−2γα+α+2τ 2α ) . ( 107 ) When n is sufficiently large , ‖A‖2 is less than 1 . Let µR,1 = ( µ0,0 , ... ,0 ) and µR,2 = ( 0 , µ1 , ... , µR ) . Then µR=µR,1+µR,2 . Let Λ̃1 , R=diag { 1 , λ1 , ... , λR } and I0 , R= ( 0,1 , ... ,1 ) . Then ΛR=Λ̃1 , RI0 , R . Let B = ( I + nσ2 ΛR ) −1/2Λ̃ 1/2 1 , R ( Φ T RΦR − nI ) Λ̃ 1/2 1 , R ( I + n σ2 ΛR ) −1/2 . By Corollary 23 , we have ‖B‖2 =O ( √ logRδ n 1 2 ) . 
By Lemma 34 , we have T2 , R ( Dn ) = n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR + 1 2 ∞∑ j=1 [ ( −1 ) j+1µTR 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR ] ( 108 ) As for the first term on the right hand side of ( 108 ) , by Lemma 15 , we have n 2σ2 µT ( I+ n σ2 Λ ) −1µ≤ n 2σ2 ( µ20+ R∑ p=1 C2µp −2β 1+ nσ2Cλp −α ) = n 2σ2 µ20+Õ ( n max { 0,1+ 1−2βα } ) . We defineQ1 , j , Q2 , j andQ3 , j by Q1 , j=µ T R,1 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,1 Q2 , j=µ T R,1 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,2 Q3 , j=µ T R,2 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,2 ( 109 ) The quantity Q3 , j actually shows up in the case of µ0 = 0 in the proof of Lemma 35 . By ( 92 ) , ( 94 ) and ( 95 ) , we have that | ∞∑ j=1 ( −1 ) j+1Q3 , j |= | ∞∑ j=1 ( −1 ) j+1Õ ( n ( j−1 ) ( 1−α+2τ ) 2α ) o ( nmax { 0,1+ 1−2β α } ) |=o ( nmax { 0,1+ 1−2β α } ) . ( 110 ) ForQ1 , j , we have Q1,1 = 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( I+ n σ2 ΛR ) −1+ γ2 µR,1 ≤ 1 σ2j ‖µR,1‖22‖ ( I+ n σ2 ΛR ) −1+ γ2 ‖22‖B‖2 =O ( √ log R δ n 1 2 ) , where in the last equality we use ‖B‖2 =O ( √ logRδ n 1 2 ) . For j≥2 , we have Q1 , j= 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( ( I+ n σ2 ΛR ) −1+γΛ1−γR A ) j−2 ( I+ n σ2 ΛR ) −1+γΛ1−γR B ( I+ n σ2 ΛR ) −1+ γ2 µR,1 ≤ 1 σ2j ‖µR,1‖22‖ ( I+ n σ2 ΛR ) −1+ γ2 ‖22‖B‖22‖A‖ j−2 2 ‖ ( I+ n σ2 ΛR ) −1+γΛ1−γR ‖ j−1 2 =O ( log R δ n·n ( j−2 ) ( 1−2γα+α+2τ ) 2α ·n− ( 1−γ ) ( j−1 ) ) =O ( log R δ nγ ·n ( j−2 ) ( 1−α+2τ ) 2α ) . 
Then we have
| Σ_{j=1}^∞ (−1)^{j+1} Q_{1,j} | ≤ O( √(log(R/δ)) n^{1/2} ) + Σ_{j=2}^∞ O( log(R/δ) n^γ · n^{(j−2)(1−α+2τ)/(2α)} ) = O( log(R/δ) n^γ ) . ( 111 )
For Q_{2,j} , we have
Q_{2,j} = (1/σ^{2j}) µ_{R,1}^T (I + (n/σ²)Λ_R)^{−1+γ/2} B ( (I + (n/σ²)Λ_R)^{−1+γ} Λ_R^{1−γ} A )^{j−1} (I + (n/σ²)Λ)^{−1+γ/2} Λ̃_{1,R}^{−γ/2} µ_{R,2}
≤ (1/σ^{2j}) ‖µ_{R,1}‖₂ ‖B‖₂ ‖A‖₂^{j−1} ‖(I + (n/σ²)Λ_R)^{−1+γ} Λ_R^{1−γ}‖₂^{j−1} ‖(I + (n/σ²)Λ)^{−1+γ/2} Λ̃_{1,R}^{−γ/2} µ_{R,2}‖₂
= O( √(log(R/δ)) n^{1/2} · n^{(j−1)(1−α+2τ)/(2α)} ) ‖(I + (n/σ²)Λ)^{−1+γ/2} Λ̃_{1,R}^{−γ/2} µ_{R,2}‖₂ .
Since ‖(I + (n/σ²)Λ)^{−1+γ/2} Λ̃_{1,R}^{−γ/2} µ_{R,2}‖₂ is exactly the quantity from the case µ_0 = 0 , we can use ( 93 ) in the proof of Lemma 35 and get
‖(I + (n/σ²)Λ)^{−1+γ/2} Λ̃_{1,R}^{−γ/2} µ_{R,2}‖₂² = ‖(I + (n/σ²)Λ_{1:R})^{−1+γ/2} Λ_{1:R}^{−γ/2} µ_{1:R}‖₂² = Õ( n^{max{−2+γ, (1−2β)/α+γ+κ(1−2β+αγ)}} ) = o( n^γ ) , ( 112 )
where in the last equality we use κ < (2β−1)/α² . Then we have
| Σ_{j=1}^∞ (−1)^{j+1} Q_{2,j} | ≤ Σ_{j=1}^∞ o( √(log(R/δ)) n^{(1+γ)/2} · n^{(j−1)(1−α+2τ)/(2α)} ) = o( √(log(R/δ)) n^{(1+γ)/2} ) . ( 113 )
Choosing γ = (1/2)( 1 + (1+α+2τ)/(2α) ) = (1+3α+2τ)/(4α) < 1 , we have
T_{2,R}(D_n) = (n/(2σ²)) µ_R^T (I + (n/σ²)Λ_R)^{−1} µ_R + Σ_{j=1}^∞ (−1)^{j+1} ( Q_{1,j} + Q_{2,j} + Q_{3,j} )
= (n/(2σ²)) µ_0² + Õ( n^{max{0, 1+(1−2β)/α}} ) + o( n^{max{0, 1+(1−2β)/α}} ) + O( log(R/δ) n^γ ) + o( √(log(R/δ)) n^{(1+γ)/2} )
= (n/(2σ²)) µ_0² + Õ( n^{max{ (1+γ)/2 , 1+(1−2β)/α }} )
= (n/(2σ²)) µ_0² + Õ( n^{max{ (1+7α+2τ)/(8α) , 1+(1−2β)/α }} ) .
Proof of Theorem 8 . Let R = n^{1/α+κ} where 0 < κ < min{ (α−1−2τ)/(2α²) , (2β−1)/α² } . Since 0 ≤ q < min{ (2β−1)/2 , α } · min{ (α−1−2τ)/(2α²) , (2β−1)/α² } , we can choose κ < min{ (α−1−2τ)/(2α²) , (2β−1)/α² } arbitrarily close to min{ (α−1−2τ)/(2α²) , (2β−1)/α² } such that 0 ≤ q < min{ (2β−1)κ/2 , κα } . Then we have (1/α+κ)(1−2β)/2 + q < 0 , and −κα+q < 0 .
As for T2 ( Dn ) , we have T2 ( Dn ) ≤T2 , R ( Dn ) +|T2 , R ( Dn ) −T2 ( Dn ) | = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) +Õ ( ( 1δ +1 ) n max { 1+ ( 1α+κ ) 1−2β 2 ,1−κα } ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) +Õ ( nq+max { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) = n 2σ2 µ20+o ( n ) . By Lemma 36 , we have T1 ( Dn ) = O ( n 1 α ) . Hence E F 0 ( Dn ) = T1 ( Dn ) + T2 ( Dn ) = n 2σ2µ 2 0+o ( n ) . D.2 PROOFS RELATED TO THE ASYMPTOTICS OF THE GENERALIZATION ERROR Lemma 39 . Assume σ2 = Θ ( nt ) where 1− α1+2τ < t < 1 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ over sample inputs ( xi ) ni=1 , we have G1 ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I+ nσ2 ΛR ) −1ΛR−‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F ) =Θ ( n ( 1−α ) ( 1−t ) α ) . ( 114 ) Proof of Lemma 39 . Let G1 , R ( Dn ) = E ( xn+1 , yn+1 ) ( T1 , R ( Dn+1 ) − T1 , R ( Dn ) ) , where R = nC for some constant C. By Lemma 32 , we have that |G1 ( Dn ) −G1 , R ( Dn ) |= ∣∣E ( xn+1 , yn+1 ) [ T1 ( Dn+1 ) −T1 , R ( Dn+1 ) ] − [ T1 ( Dn ) −T1 , R ( Dn ) ] ∣∣ = ∣∣E ( xn+1 , yn+1 ) O ( ( n+1 ) R1−α ) ∣∣+∣∣O ( nR1−α ) ] ∣∣ =O ( 1σ2nR 1−α ) . ( 115 ) Define ηR= ( φ0 ( xn+1 ) , φ1 ( xn+1 ) , ... , φR ( xn+1 ) ) T and Φ̃R= ( ΦTR , ηR ) T . As forG1 , R ( Dn ) , we have G1 , R ( Dn ) =E ( xn+1 , yn+1 ) ( T1 , R ( Dn+1 ) −T1 , R ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2 logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) − 1 2 Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1 ) ) − ( 1 2 logdet ( I+ ΦRΛRΦ T R σ2 ) − 1 2 Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃R T σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) − 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1 ) −Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) ) . 
( 116 ) As for the first term in the right hand side ( 116 ) , we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ΛRΦ̃ T RΦ̃R σ2 ) −logdet ( I+ ΛRΦ T RΦR σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ΛRΦ T RΦR+ηRη T R σ2 ) −logdet ( I+ ΛRΦ T RΦR σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( ( I+ ΛRΦ T RΦR σ2 ) −1 ( I+ ΛRΦ T RΦR σ2 + ΛRηRη T R σ2 ) ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ( I+ ΛRΦ T RΦR σ2 ) −1 ΛRηRη T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) log ( 1+ 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ) Let A= ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2 . ( 117 ) According to Corollary 22 , with probability of at least 1 − δ , we have ‖ 1σ2A‖2 = O ( √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α ) = o ( 1 ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 . By Lemma 27 , we have ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR =ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ( −1 ) jηTR ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ n σ2 ΛR ) −1ΛRηR =ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ( −1 ) j 1 σ2j ηTR ( I+ n σ2j ΛR ) −1/2Λ 1/2 R A j ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ηR ≤ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ‖A‖j2‖ ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ηR‖ 2 2 ≤ R∑ p=1 φ2p ( xn+1 ) Cλp −α 1+nCλp−α/σ2 + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) R∑ p=1 φ2p ( xn+1 ) Cλp −α 1+nCλp−α/σ2 ≤ R∑ p=1 Cλp −αp2τ 1+nCλp−α/σ2 + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) R∑ p=1 Cλp −αp2τ 1+nCλp−α/σ2 ≤O ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) O ( n ( 1−α ) ( 1−t ) α ) =O ( n ( 1−α ) ( 1−t ) α ) =o ( 1 ) , ( 118 ) where we use Lemma 15 in the last inequality . 
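The rank-one log-det update used above (adding one sample row η_R) is an instance of the matrix determinant lemma, and can be verified directly on small random matrices. All dimensions and distributions in the sketch below are arbitrary illustrative choices.

```python
import numpy as np

# Hedged check of the rank-one log-det step used above: appending one
# sample row eta to Phi changes log det(I + Lam Phi^T Phi / s2) by exactly
#   log(1 + eta^T (I + Lam Phi^T Phi / s2)^{-1} Lam eta / s2)
# (matrix determinant lemma). Sizes here are illustrative.
rng = np.random.default_rng(0)
R, n, s2 = 6, 40, 0.5
Lam = np.diag(np.arange(1.0, R + 1.0) ** -2.0)   # power-law eigenvalues
Phi = rng.standard_normal((n, R))
eta = rng.standard_normal(R)
Phi2 = np.vstack([Phi, eta])                     # one extra sample row

A1 = np.eye(R) + Lam @ Phi.T @ Phi / s2
A2 = np.eye(R) + Lam @ Phi2.T @ Phi2 / s2
lhs = np.linalg.slogdet(A2)[1] - np.linalg.slogdet(A1)[1]
rhs = np.log1p(eta @ np.linalg.solve(A1, Lam @ eta) / s2)
err = abs(lhs - rhs)
```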
Next we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) log ( 1+ 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ) = 1 2 ( E ( xn+1 , yn+1 ) ( 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ( 1+o ( 1 ) ) ) = 1 2σ2 ( Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR ) ( 1+o ( 1 ) ) , where in the last equality we use the fact that E ( xn+1 , yn+1 ) ηRηTR=I . By Lemma 27 , we have Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR =Tr ( I+ n σ2 ΛR ) −1ΛR+ ∞∑ j=1 ( −1 ) jTr ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ n σ2 ΛR ) −1ΛR =Tr ( I+ n σ2 ΛR ) −1ΛR+ ∞∑ j=1 ( −1 ) jTr 1 σ2 ( I+ n σ2 ΛR ) −1/2Λ 1/2 R A j ( I+ n σ2 ΛR ) −1/2Λ 1/2 R . By Lemma 15 , we have Tr ( I+ n σ2 ΛR ) −1ΛR≤ R∑ p=1 Cλp −α 1+nCλp−α/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) Tr ( I+ n σ2 ΛR ) −1ΛR≥ R∑ p=1 Cλp −α 1+nCλp−α/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) . Overall , Tr ( I+ n σ2 ΛR ) −1ΛR=Θ ( n ( 1−α ) ( 1−t ) α ) . ( 119 ) Since ‖ 1σ2A‖ j 2 =o ( 1 ) , we have that the absolute values of diagonal entries of 1 σ2jA j are at most o ( 1 ) ) . Let ( Aj ) p , p denote the ( p , p ) -th entry of the matrixAj . Then we have∣∣∣∣Tr 1σ2 ( I+ nσ2 ΛR ) −1/2Λ1/2R Aj ( I+ nσ2 ΛR ) −1/2Λ1/2R ∣∣∣∣ = ∣∣∣∣∣ R∑ p=1 λp 1 σ2j ( A j ) p , p 1+nλp/σ2 ∣∣∣∣∣≤ R∑ p=1 λp‖A‖j2 1+nλp/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 120 ) where in the last step we used ( 119 ) . According to ( 119 ) and ( 120 ) , we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2σ2 ( Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR ) ( 1+o ( 1 ) ) =Θ ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) =Θ ( n ( 1−α ) ( 1−t ) α ) +Θ ( n ( 1−α ) ( 1−t ) α ) o ( 1 ) =Θ ( n ( 1−α ) ( 1−t ) α ) = 1 2σ2 ( Tr ( I+ n σ2 ΛR ) −1ΛR ) ( 1+o ( 1 ) ) . 
( 121 ) Using the Woodbury matrix identity , the second term in the right hand side ( 116 ) is given by 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1−Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) = 1 2 ( E ( xn+1 , yn+1 ) Tr ( 1 σ2 Φ̃R ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1ΛRΦ̃ T R−Tr ( 1 σ2 ΦR ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRΦ T R ) = 1 2 ( E ( xn+1 , yn+1 ) Tr ( 1 σ2 ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1ΛRΦ̃ T RΦ̃R−Tr ( 1 σ2 ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRΦ T RΦR ) =−1 2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1−Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ) =−1 2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ T RΦR+ 1 σ2 ΛRηRη T R ) −1−Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ) = 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 1+ 1σ2 η T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηR ) , where the last equality uses the Sherman–Morrison formula . According to ( 118 ) , we get 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 1+ 1σ2 η T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηR ) = 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ( 1+o ( 1 ) ) ) = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛR ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 TrΛ 1/2 R ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1Λ 1/2 R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1Λ 1/2 R ( I+ 1 σ2 ΛRΦ T RΦR ) −1Λ 1/2 R = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1ΛR ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1 = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ 1 σ2 A ) −1 ( I+ n σ2 ΛR ) −1/2‖2F , where in the penultimate equality we use Tr ( BBT ) =‖B‖2F , ‖B‖F is the Frobenius norm ofA , and in the last equality we use the definition ofA ( 117 ) . 
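The Sherman–Morrison step used above for the trace difference can likewise be verified on small random instances. The sketch below is illustrative only: for a rank-one update A → A + Ληηᵀ/σ², the trace of the inverse drops by ηᵀA⁻¹A⁻¹Λη/σ² divided by 1 + ηᵀA⁻¹Λη/σ².

```python
import numpy as np

# Hedged check of the Sherman-Morrison trace step used above.
# With A = I + Lam Phi^T Phi / s2 and the rank-one update Lam eta eta^T / s2:
#   Tr A^{-1} - Tr (A + Lam eta eta^T / s2)^{-1}
#     = (eta^T A^{-1} A^{-1} Lam eta / s2) / (1 + eta^T A^{-1} Lam eta / s2).
# Sizes are illustrative.
rng = np.random.default_rng(1)
R, n, s2 = 5, 30, 1.0
Lam = np.diag(np.arange(1.0, R + 1.0) ** -2.0)
Phi = rng.standard_normal((n, R))
eta = rng.standard_normal(R)

A = np.eye(R) + Lam @ Phi.T @ Phi / s2
A_up = A + Lam @ np.outer(eta, eta) / s2
Ainv = np.linalg.inv(A)
lhs = np.trace(Ainv) - np.trace(np.linalg.inv(A_up))
rhs = (eta @ Ainv @ Ainv @ Lam @ eta / s2) / (1.0 + eta @ Ainv @ Lam @ eta / s2)
err = abs(lhs - rhs)
```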
Then we have 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ 1 σ2 A ) −1 ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ ∞∑ j=1 ( −1 ) j 1 σ2j Aj ) ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖2F . ( 122 ) By Lemma 15 , we have ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F ≤ √√√√ R∑ p=1 Cλp−α ( 1+nCλp−α/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) 2α ) ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F ≥ √√√√ R∑ p=1 Cλp−α ( 1+nCλp−α/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) 2α ) . Overall , we have ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F =Θ ( n ( 1−α ) ( 1−t ) 2α ) . ( 123 ) Since ‖ 1σ2A‖2 =O ( √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α ) =o ( 1 ) , we have ‖ 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖F ≤‖Λ1/2R ( I+ n σ2 ΛR ) −1/2‖F ‖ 1 σ2 A‖j2‖ ( I+ n σ2 ΛR ) −1/2‖2 =O ( n ( 1−α ) ( 1−t ) 2α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 124 ) where in the first inequality we use the fact that ‖AB‖F ≤ ‖A‖F ‖B‖2 when B is symmetric . By Lemma 15 , we have 1 σ2j ∣∣∣TrΛ1/2R ( I+ nσ2 ΛR ) −1Λ1/2R ( I+ nσ2 ΛR ) −1/2Aj ( I+ nσ2 ΛR ) −1/2∣∣∣ = ∣∣∣∣∣ R∑ p=1 λp ( ( 1 σ2A ) j ) p , p ( 1+nλp/σ2 ) 2 ∣∣∣∣∣≤ R∑ p=1 λp‖ 1σ2A‖ j 2 ( 1+nλp/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 125 ) According to ( 123 ) , ( 124 ) and ( 125 ) , we have 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1−Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛR ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ( ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F + ∞∑ j=1 ∥∥∥∥ 1σ2j Λ1/2R ( I+ nσ2 ΛR ) −1/2Aj ( I+ nσ2 ΛR ) −1/2 ∥∥∥∥2 F +2TrΛ 1/2 R ( I+ n σ2 ΛR ) −1 ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2 ) = 1+o ( 1 ) 2σ2 ( Θ ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 1 σ2j O ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) +2 ∞∑ j=1 1 σ2j Θ ( n ( 1−α ) ( 
1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) ) =Θ ( n ( 1−α ) ( 1−t ) α ) = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F . ( 126 ) Combining ( 121 ) and ( 126 ) we get that G1 , R ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I + n σ2 ΛR ) −1ΛR + ‖Λ1/2R ( I + n σ2 ΛR ) −1‖2F ) = Θ ( n ( 1−α ) ( 1−t ) α ) . From ( 115 ) we have that G1 ( Dn ) ≤ G1 , R ( Dn ) + |G1 ( Dn ) − G1 , R ( Dn ) | = Θ ( n ( 1−α ) ( 1−t ) α ) +O ( n 1σ2R 1−α ) . Choosing R = n ( 2α−1 α ( α−1 ) +1 ) ( 1−t ) we conclude the proof . Lemma 40 . Assume σ2 =Θ ( nt ) where 1− α1+2τ < t < 1 . Let S=n D. Assume that ‖ξ‖2 =1 . When n is sufficiently large , with probability of at least 1−2δ we have ‖ ( I+ 1σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2 =O ( √ ( 1δ +1 ) n·n − ( 1−t ) ) . ( 127 ) Proof of Lemma 40 . Using the Woodbury matrix identity , we have that ( ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦSΛSξ= [ I−ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSΦTS ] ΦSΛSξ =ΦSΛSξ−ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSΦTSΦSΛSξ =σ2ΦS ( σ 2I+ΛSΦ T SΦS ) −1ΛSξ . ( 128 ) Let A= ( I+ nσ2 ΛS ) −γ/2Λ γ/2 S ( Φ T SΦS−nI ) Λ γ/2 S ( I+ n σ2 ΛS ) −γ/2 , where γ > 1+α+2τ− ( 1+2τ+2α ) t2α ( 1−t ) . By Corollary 22 , with probability of at least 1−δ , we have ‖ 1σ2A‖2 =Õ ( n 1+α+2τ− ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 . By Lemma 27 , we have ( I+ 1 σ2 ΛSΦ T SΦS ) −1 = ( I+ n σ2 ΛS ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1 . Then we have ‖ ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 = 1 σ2 ∥∥∥∥∥∥ ( I+ n σ2 ΛS ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1 ΛSξ ∥∥∥∥∥∥ 2 ≤ 1 σ2 ‖ ( I+ n σ2 ΛS ) −1ΛSξ‖2+ ∞∑ j=1 ∥∥∥∥∥ ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1ΛSξ ∥∥∥∥∥ 2 . ( 129 ) For the first term in the right hand side of the last equation , we have ‖ ( I+ n σ2 ΛS ) −1ΛSξ‖2≤‖ ( I+ n σ2 ΛS ) −1ΛS‖2‖ξ‖2≤ σ2 n =O ( n−1 ) . 
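The Woodbury "push-through" identity (128) at the start of the proof of Lemma 40 is exact and can be checked numerically. In the sketch below, all sizes and the unit vector ξ are illustrative assumptions.

```python
import numpy as np

# Hedged check of the push-through identity (128):
#   (I + Phi Lam Phi^T / s2)^{-1} Phi Lam xi
#     = s2 * Phi (s2 I + Lam Phi^T Phi)^{-1} Lam xi.
# Sizes and the unit vector xi are illustrative.
rng = np.random.default_rng(2)
S, n, s2 = 6, 25, 0.7
Lam = np.diag(np.arange(1.0, S + 1.0) ** -2.0)
Phi = rng.standard_normal((n, S))
xi = rng.standard_normal(S)
xi /= np.linalg.norm(xi)

lhs = np.linalg.solve(np.eye(n) + Phi @ Lam @ Phi.T / s2, Phi @ Lam @ xi)
rhs = s2 * Phi @ np.linalg.solve(s2 * np.eye(S) + Lam @ Phi.T @ Phi, Lam @ xi)
err = float(np.max(np.abs(lhs - rhs)))
```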
( 130 ) Using the fact that ‖(1/σ²)A‖₂ = Õ( n^{(1+α+2τ−(1+2τ+2α)t)/(2α) − γ(1−t)} ) and ‖(I + (n/σ²)Λ_S)^{−1}Λ_S‖₂ ≤ σ²/n , we have
‖ ( (1/σ²)(I + (n/σ²)Λ_S)^{−1}Λ_S(Φ_S^TΦ_S − nI) )^j (I + (n/σ²)Λ_S)^{−1}Λ_Sξ ‖₂
= (1/σ^{2j}) ‖ (I + (n/σ²)Λ_S)^{−1+γ/2} Λ_S^{1−γ/2} ( A (I + (n/σ²)Λ_S)^{−1+γ} Λ_S^{1−γ} )^{j−1} A (I + (n/σ²)Λ_S)^{−1+γ/2} Λ_S^{−γ/2} Λ_Sξ ‖₂
≤ n^{(1−t)(−1+γ/2+(−1+γ)(j−1))} Õ( n^{j(1+α+2τ−(1+2τ+2α)t)/(2α) − jγ(1−t)} ) ‖(I + (n/σ²)Λ_S)^{−1+γ/2} Λ_S^{1−γ/2} ξ‖₂
= Õ( n^{−γ(1−t)/2 + (1−α+2τ−(1+2τ)t)j/(2α)} ) ‖(I + (n/σ²)Λ_S)^{−1+γ/2} Λ_S^{1−γ/2}‖₂ ‖ξ‖₂
= Õ( n^{−γ(1−t)/2 + (1−α+2τ−(1+2τ)t)j/(2α)} ) O( n^{(−1+γ/2)(1−t)} )
= Õ( n^{−(1−t) + (1−α+2τ−(1+2τ)t)j/(2α)} ) . ( 131 )
Using ( 129 ) , ( 130 ) and ( 131 ) , we have
‖(σ²I + Λ_SΦ_S^TΦ_S)^{−1}Λ_Sξ‖₂ = σ^{−2}Õ( n^{−1} ) + Σ_{j=1}^∞ Õ( n^{−1 + (1−α+2τ−(1+2τ)t)j/(2α)} ) = Õ( n^{−(1−t)} ) + Õ( n^{−1 + (1−α+2τ−(1+2τ)t)/(2α)} ) = Õ( n^{−(1−t)} ) . ( 132 )
By Corollary 20 , with probability of at least 1−δ , we have
‖Φ_S(σ²I + Λ_SΦ_S^TΦ_S)^{−1}Λ_Sξ‖₂ = Õ( √((1/δ+1)n) ‖(σ²I + Λ_SΦ_S^TΦ_S)^{−1}Λ_Sξ‖₂ ) = Õ( √((1/δ+1)n) · n^{−(1−t)} ) .
From ( 128 ) we get ‖(I + (1/σ²)Φ_SΛ_SΦ_S^T)^{−1}Φ_SΛ_Sξ‖₂ = Õ( √((1/δ+1)n) · n^{−(1−t)} ) . This concludes the proof .
Lemma 41 . Assume σ² = Θ(n^t) where 1 − α/(1+2τ) < t < 1 . Let δ = n^{−q} where 0 ≤ q < [α−(1+2τ)(1−t)](2β−1)/(4α²) . Under Assumptions 4 , 5 and 6 , assume that µ_0 = 0 . Then with probability of at least 1−6δ over sample inputs (x_i)_{i=1}^n , we have G_2(D_n) = ((1+o(1))/(2σ²)) ‖(I + (n/σ²)Λ_R)^{−1}µ_R‖₂² = Θ( n^{max{−2(1−t), (1−2β)(1−t)/α}} log^{k/2} n ) , where k = 0 if 2α ≠ 2β−1 and k = 1 if 2α = 2β−1 .
Proof of Lemma 41 . Let S = n^D . Let G_{2,S}(D_n) = E_{(x_{n+1},y_{n+1})}( T_{2,S}(D_{n+1}) − T_{2,S}(D_n) ) . By Lemma 33 , when S is large enough , with probability of at least 1−3δ we have that
|G_2(D_n) − G_{2,S}(D_n)| = Õ( (1/δ+1) n (1/σ²) S^{max{1/2−β, 1−α}} ) . ( 133 )
Let Λ_{1:S} = diag{ λ_1 , ... , λ_S } , Φ_{1:S} = ( φ_1(x) , φ_2(x) , ... , φ_S(x) ) and µ_{1:S} = ( µ_1 , ... , µ_S ) .
Since µ0 = 0 , we have T2 , S ( Dn ) = 12σ2µ T 1 : SΦ T 1 : S ( I + 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : Sµ1 : S . Define η1 : S = ( φ1 ( xn+1 ) , ... , φS ( xn+1 ) ) T and Φ̃1 : S= ( ΦT1 : S , η1 : S ) T . In the proof of Lemma 34 , we showed that T2 , S ( Dn ) = 1 2σ2 µT1 : SΦ T 1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : Sµ1 : S = 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S . We have G2 , S ( Dn ) =E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) −T2 , S ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ̃ T S Φ̃S ) −1µ1 : S ) − ( 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S ) ) =E ( xn+1 , yn+1 ) ( 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ̃ T S Φ̃S ) −1µ1 : S ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µT1 : SΛ −1 1 : S ( I+ 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 1+ 1σ2 η T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : S µ1 : S ) ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µT1 : S ( I+ 1 σ2 Φ T 1 : SΦ1 : SΛ1 : S ) −1η1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S 1+ 1σ2 η T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : S ) ) =E ( xn+1 , yn+1 ) ( 1+o ( 1 ) 2σ2 µT1 : S ( I+ 1 σ2 ΦT1 : SΦ1 : SΛ1 : S ) −1η1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S ) = 1+o ( 1 ) 2σ2 µT1 : S ( I+ 1 σ2 ΦT1 : SΦ1 : SΛ1 : S ) −1 ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S = 1+o ( 1 ) 2σ2 ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖22 , ( 134 ) where in the fourth to last equality we used the Sherman–Morrison formula , in the third inequality we used ( 118 ) , and in the last equality we used the fact that E ( xn+1 , yn+1 ) η1 : SηT1 : S=I . Let µ̂1 : R= ( µ1 , ... , µR,0 , ... ,0 ) ∈RS . 
Then we have ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2≤‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ̂1 : R‖2+‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) ‖2 , ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2≥‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ̂1 : R‖2−‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) ‖2 . ( 135 ) ChooseR=n ( 1 α+κ ) ( 1−t ) where 0 < κ < α−1−2τ+ ( 1+2τ ) t2α2 ( 1−t ) . In Lemma 29 , ( 62 ) , we showed that with probability of at least 1−δ , ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : R ) −1µ1 : R‖2 , ( 136 ) where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . The same proof holds if we replace Φ1 : R with Φ1 : S , Λ1 : R with Λ1 : S , and µ1 : R with µ̂1 : R. We have ‖ ( σ2I+Λ1 : SΦT1 : SΦ1 : S ) −1µ̂1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 . ( 137 ) Next we bound ‖ ( I + 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S − µ̂1 : R ) ‖2 . By Assumption 5 , we have that ‖µ1 : S−µ̂1 : R‖2 =O ( R 1−2β 2 ) . 
For any ξ ∈ R^S with ‖ξ‖₂ = 1 , using the Woodbury matrix identity , with probability of at least 1−2δ we have
|ξ^T (I + (1/σ²)Λ_{1:S}Φ_{1:S}^TΦ_{1:S})^{−1}(µ_{1:S} − µ̂_{1:R})|
= |ξ^T ( I − (1/σ²)Λ_{1:S}Φ_{1:S}^T (I + (1/σ²)Φ_{1:S}Λ_{1:S}Φ_{1:S}^T)^{−1}Φ_{1:S} )(µ_{1:S} − µ̂_{1:R})|
= |ξ^T(µ_{1:S} − µ̂_{1:R}) − (1/σ²) ξ^TΛ_{1:S}Φ_{1:S}^T (I + (1/σ²)Φ_{1:S}Λ_{1:S}Φ_{1:S}^T)^{−1}Φ_{1:S}(µ_{1:S} − µ̂_{1:R})|
≤ ‖ξ‖₂‖µ_{1:S} − µ̂_{1:R}‖₂ + (1/σ²) |ξ^TΛ_{1:S}Φ_{1:S}^T (I + (1/σ²)Φ_{1:S}Λ_{1:S}Φ_{1:S}^T)^{−1}Φ_{1:S}(µ_{1:S} − µ̂_{1:R})|
≤ O( R^{(1−2β)/2} ) + (1/σ²) ‖(I + (1/σ²)Φ_{1:S}Λ_{1:S}Φ_{1:S}^T)^{−1}Φ_{1:S}Λ_{1:S}ξ‖₂ ‖Φ_{1:S}(µ_{1:S} − µ̂_{1:R})‖₂
= O( R^{(1−2β)/2} ) + (1/σ²) O( √((1/δ+1)n) · n^{−(1−t)} ) O( √((1/δ+1)n) R^{(1−2β)/2} )
= O( (1/δ+1) R^{(1−2β)/2} ) ,
where in the second to last step we used Corollary 20 to show ‖Φ_{1:S}(µ_{1:S} − µ̂_{1:R})‖₂ = O( √((1/δ+1)n) R^{(1−2β)/2} ) with probability of at least 1−δ , and Lemma 40 to show that ‖(I + (1/σ²)Φ_{1:S}Λ_{1:S}Φ_{1:S}^T)^{−1}Φ_{1:S}Λ_{1:S}ξ‖₂ = O( √((1/δ+1)n) · n^{−(1−t)} ) with probability of at least 1−δ . Since R = n^{(1/α+κ)(1−t)} , we have
|ξ^T (I + (1/σ²)Λ_{1:S}Φ_{1:S}^TΦ_{1:S})^{−1}(µ_{1:S} − µ̂_{1:R})| = O( (1/δ+1) n^{(1−2β)(1−t)/(2α) + (1−2β)(1−t)κ/2} ) .
Since ξ is arbitrary , we have ‖(I + (1/σ²)Λ_{1:S}Φ_{1:S}^TΦ_{1:S})^{−1}(µ_{1:S} − µ̂_{1:R})‖₂ = O( (1/δ+1) n^{(1−2β)(1−t)/(2α) + (1−2β)(1−t)κ/2} ) . Since 0 ≤ q < [α−(1+2τ)(1−t)](2β−1)/(4α²) and 0 < κ < (α−1−2τ+(1+2τ)t)/(2α²(1−t)) , we can choose κ < (α−1−2τ+(1+2τ)t)/(2α²(1−t)) arbitrarily close to (α−1−2τ+(1+2τ)t)/(2α²(1−t)) such that 0 ≤ q < (2β−1)(1−t)κ/2 . Then we have (1−2β)(1−t)κ/2 + q < 0 .
From ( 135 ) and ( 137 ) , we have ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2 =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) +O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) +O ( ( nq+ ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) = ( 1+o ( 1 ) ) ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 = ( 1+o ( 1 ) ) ‖ ( I+ n σ2 ΛR ) −1µR‖2 . ( 138 ) Hence G2 , S ( Dn ) = 1+o ( 1 ) 2σ2 ‖ ( I + 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖22 = Θ ( n ( 1−t ) max { −2 , 1−2β α } logk/2 n ) . Then by ( 133 ) , G2 ( Dn ) =Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) +Õ ( ( 1δ +1 ) n 1 σ2S max { 1/2−β,1−α } ) . Choosing S=n ( 1+min { 2 , 2β−1 α } min { β−1/2 , α−1 } +1 ) ( 1−t ) , we get the result . Proof of Theorem 9 . From Lemmas 39 and 41 and 1α −1 > −2 , we have that with probability of at least 1−7δ̃ , E G ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I+ n σ2 ΛR ) −1ΛR−‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F +‖ ( I+ n σ2 ΛR ) −1µR‖22 ) =Θ ( n ( 1−α ) ( 1−t ) α ) +Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) =Θ ( nmax { ( 1−α ) ( 1−t ) α , ( 1−2β ) ( 1−t ) α } ) ( 139 ) where k= { 0 , 2α 6=2β−1 1 , 2α=2β−1 . Furthermore , we have Tr ( I+ n σ2 Λ ) −1Λ−Tr ( I+ n σ2 ΛR ) −1ΛR = ∞∑ p=R+1 λp 1+ nσ2λp ≤ ∞∑ p=R+1 Cλp −α 1+ nσ2Cλp −α ≤ ∞∑ p=R+1 Cλp −α= n σ2 O ( R1−α ) =O ( n ( 1−α ) ( 1−t ) ( 1 α+κ ) ) =o ( n ( 1−α ) ( 1−t ) α ) . Then we have Tr ( I+ n σ2 ΛR ) −1ΛR=Tr ( I+ n σ2 Λ ) −1Λ ( 1+o ( 1 ) ) . ( 140 ) Similarly we can prove ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F =‖Λ1/2 ( I+ n σ2 Λ ) −1‖2F ( 1+o ( 1 ) ) ( 141 ) ‖ ( I+ n σ2 ΛR ) −1µR‖22 =‖ ( I+ n σ2 Λ ) −1µ‖22 ( 1+o ( 1 ) ) ( 142 ) Letting δ=7δ̃ , the proof is complete . In the case of µ0 > 0 , we have the following lemma : Lemma 42 . Let δ = n−q where 0 ≤ q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 . Under Assumptions 4 , 5 and 6 , assume that µ0 > 0 . 
Then with probability of at least 1−6δ over sample inputs (x_i)_{i=1}^n , we have G_2(D_n) = (1/(2σ²))µ_0² + o(1) .
Proof of Lemma 42 . Let S = n^D . Let G_{2,S}(D_n) = E_{(x_{n+1},y_{n+1})}( T_{2,S}(D_{n+1}) − T_{2,S}(D_n) ) . By Lemma 33 , when S is large enough , with probability of at least 1−3δ we have that
|G_2(D_n) − G_{2,S}(D_n)| = | E_{(x_{n+1},y_{n+1})}[ T_2(D_{n+1}) − T_{2,S}(D_{n+1}) ] − [ T_2(D_n) − T_{2,S}(D_n) ] |
≤ | E_{(x_{n+1},y_{n+1})} Õ( (1/δ+1)(n+1)(1/σ²) S^{max{1/2−β, 1−α}} ) | + | Õ( (1/δ+1) n (1/σ²) S^{max{1/2−β, 1−α}} ) |
= Õ( (1/δ+1) n (1/σ²) S^{max{1/2−β, 1−α}} ) . ( 143 )
Let Λ_S = diag{ λ_1 , ... , λ_S } , Φ_S = ( φ_0(x) , φ_1(x) , ... , φ_S(x) ) and µ_S = ( µ_0 , µ_1 , ... , µ_S ) . Define η_S = ( φ_0(x_{n+1}) , φ_1(x_{n+1}) , ... , φ_S(x_{n+1}) )^T and Φ̃_S = ( Φ_S^T , η_S )^T . By the same technique as in the proof of Lemma 34 , we replace Λ_R by Λ̃_{ε,R} = diag{ ε , λ_1 , ... , λ_R } , let ε → 0 and show the counterpart of the result ( 134 ) in the proof of Lemma 41 :
G_{2,S}(D_n) = E_{(x_{n+1},y_{n+1})}( T_{2,S}(D_{n+1}) − T_{2,S}(D_n) )
= E_{(x_{n+1},y_{n+1})}( (1/(2σ²)) [ µ_S^T (I + (1/σ²)Φ_S^TΦ_SΛ_S)^{−1} η_S η_S^T (I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1} µ_S ] / [ 1 + (1/σ²)η_S^T (I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}Λ_Sη_S ] )
= E_{(x_{n+1},y_{n+1})}( ((1+o(1))/(2σ²)) µ_S^T (I + (1/σ²)Φ_S^TΦ_SΛ_S)^{−1} η_S η_S^T (I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1} µ_S )
= ((1+o(1))/(2σ²)) µ_S^T (I + (1/σ²)Φ_S^TΦ_SΛ_S)^{−1} (I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1} µ_S
= ((1+o(1))/(2σ²)) ‖(I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}µ_S‖₂² , ( 144 )
where in the fourth to last equality we used the Sherman–Morrison formula , in the third to last equality we used ( 118 ) , and in the last equality we used the fact that E_{(x_{n+1},y_{n+1})} η_S η_S^T = I . Let µ̂_R = ( µ_0 , µ_1 , ... , µ_R , 0 , ... , 0 ) ∈ R^S . Then we have
‖(I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}µ_S‖₂ ≤ ‖(I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}µ̂_R‖₂ + ‖(I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}(µ_S − µ̂_R)‖₂ ,
‖(I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}µ_S‖₂ ≥ ‖(I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}µ̂_R‖₂ − ‖(I + (1/σ²)Λ_SΦ_S^TΦ_S)^{−1}(µ_S − µ̂_R)‖₂ . ( 145 )
Choose R = n^{(1/α+κ)(1−t)} where 0 < κ < (α−1−2τ+(1+2τ)t)/(2α²(1−t)) .
In Lemma 29, (62), we showed that with probability at least $1-\delta$,
$$\big\|\big(\sigma^2 I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\mu_{1:R}\big\|_2
=\Theta\big(n^{(1-t)\max\{-1,\,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\big)
=\frac{1+o(1)}{\sigma^2}\big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\big\|_2, \quad(146)$$
where $k=0$ if $2\alpha\neq2\beta-1$ and $k=1$ if $2\alpha=2\beta-1$. The same proof holds if we replace $\Phi_{1:R}$ with $\Phi_{1:S}$, $\Lambda_{1:R}$ with $\Lambda_{1:S}$, and $\mu_{1:R}$ with $\hat\mu_{1:R}$. We have
$$\big\|\big(\sigma^2 I+\Lambda_{1:S}\Phi_{1:S}^T\Phi_{1:S}\big)^{-1}\hat\mu_{1:R}\big\|_2
=\Theta\big(n^{(1-t)\max\{-1,\,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\big)
=\frac{1+o(1)}{\sigma^2}\big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:S}\big)^{-1}\hat\mu_{1:R}\big\|_2. \quad(147)$$
So we have
$$\big\|\big(I+\tfrac{1}{\sigma^2}\Lambda_S\Phi_S^T\Phi_S\big)^{-1}\hat\mu_R\big\|_2
=\mu_0+\Theta\big(n^{(1-t)\max\{-1,\,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\big)=\mu_0+o(1). \quad(148)$$
Next we bound $\|(I+\frac{1}{\sigma^2}\Lambda_S\Phi_S^T\Phi_S)^{-1}(\mu_S-\hat\mu_R)\|_2$. By Assumption 5, we have that $\|\mu_S-\hat\mu_R\|_2=O(R^{\frac{1-2\beta}{2}})$. For any $\xi\in\mathbb{R}^{S+1}$ with $\|\xi\|_2=1$, using the Woodbury matrix identity, with probability at least $1-2\delta$ we have
$$\big|\xi^T\big(I+\tfrac{1}{\sigma^2}\Lambda_S\Phi_S^T\Phi_S\big)^{-1}(\mu_S-\hat\mu_R)\big|
=\big|\xi^T\big(I-\tfrac{1}{\sigma^2}\Lambda_S\Phi_S^T\big(I+\tfrac{1}{\sigma^2}\Phi_S\Lambda_S\Phi_S^T\big)^{-1}\Phi_S\big)(\mu_S-\hat\mu_R)\big|$$
$$=\big|\xi^T(\mu_S-\hat\mu_R)-\tfrac{1}{\sigma^2}\,\xi^T\Lambda_S\Phi_S^T\big(I+\tfrac{1}{\sigma^2}\Phi_S\Lambda_S\Phi_S^T\big)^{-1}\Phi_S(\mu_S-\hat\mu_R)\big|$$
$$\le\|\xi\|_2\|\mu_S-\hat\mu_R\|_2+\tfrac{1}{\sigma^2}\big|\xi^T\Lambda_S\Phi_S^T\big(I+\tfrac{1}{\sigma^2}\Phi_S\Lambda_S\Phi_S^T\big)^{-1}\Phi_S(\mu_S-\hat\mu_R)\big|$$
$$\le O\big(R^{\frac{1-2\beta}{2}}\big)+\tfrac{1}{\sigma^2}\big\|\big(I+\tfrac{1}{\sigma^2}\Phi_S\Lambda_S\Phi_S^T\big)^{-1}\Phi_S\Lambda_S\xi\big\|_2\,\big\|\Phi_S(\mu_S-\hat\mu_R)\big\|_2$$
$$=O\big(R^{\frac{1-2\beta}{2}}\big)+\tfrac{1}{\sigma^2}\,O\big(\sqrt{(\tfrac{1}{\delta}+1)n}\cdot n^{-(1-t)}\big)\,O\big(\sqrt{(\tfrac{1}{\delta}+1)n}\,R^{\frac{1-2\beta}{2}}\big)
=O\big((\tfrac{1}{\delta}+1)R^{\frac{1-2\beta}{2}}\big),$$
where in the second-to-last step we used Corollary 20 to show $\|\Phi_S(\mu_S-\hat\mu_R)\|_2=O(\sqrt{(\frac{1}{\delta}+1)n}\,R^{\frac{1-2\beta}{2}})$ with probability at least $1-\delta$, and Lemma 40 to show that $\|(I+\frac{1}{\sigma^2}\Phi_S\Lambda_S\Phi_S^T)^{-1}\Phi_S\Lambda_S\xi\|_2=O(\sqrt{(\frac{1}{\delta}+1)n}\cdot n^{-(1-t)})$ with probability at least $1-\delta$. Since $R=n^{(\frac{1}{\alpha}+\kappa)(1-t)}$, we have
$$\big|\xi^T\big(I+\tfrac{1}{\sigma^2}\Lambda_S\Phi_S^T\Phi_S\big)^{-1}(\mu_S-\hat\mu_R)\big|
=O\big((\tfrac{1}{\delta}+1)\,n^{\frac{(1-2\beta)(1-t)}{2\alpha}+\frac{(1-2\beta)(1-t)\kappa}{2}}\big).$$
Since $\xi$ is arbitrary, we have $\|(I+\frac{1}{\sigma^2}\Lambda_S\Phi_S^T\Phi_S)^{-1}(\mu_S-\hat\mu_R)\|_2=O\big((\frac{1}{\delta}+1)\,n^{\frac{(1-2\beta)(1-t)}{2\alpha}+\frac{(1-2\beta)(1-t)\kappa}{2}}\big)$.
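The Woodbury rewrite used above has the push-through form $(I+\frac{1}{\sigma^2}\Lambda\Phi^T\Phi)^{-1}=I-\frac{1}{\sigma^2}\Lambda\Phi^T(I+\frac{1}{\sigma^2}\Phi\Lambda\Phi^T)^{-1}\Phi$, which can be checked numerically on random instances (an illustrative sketch, not the paper's code):

```python
import numpy as np

def push_through_inverse(Phi, lam, sigma2):
    """Evaluate I - (1/s^2) Lam Phi^T (I + (1/s^2) Phi Lam Phi^T)^{-1} Phi,
    which by the Woodbury/push-through identity equals
    (I + (1/s^2) Lam Phi^T Phi)^{-1}."""
    n, S = Phi.shape
    # I_n + Phi Lam Phi^T / s^2  (ambient n x n form)
    M = np.eye(n) + (Phi * lam[None, :]) @ Phi.T / sigma2
    return np.eye(S) - (lam[:, None] * Phi.T) @ np.linalg.solve(M, Phi) / sigma2
```

The rewrite is what lets the argument trade an $(S+1)\times(S+1)$ inverse for an $n\times n$ one, to which the concentration lemmas apply.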
Since $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$ and $0<\kappa<\frac{\alpha-1-2\tau+(1+2\tau)t}{2\alpha^2(1-t)}$, we can choose $\kappa$ arbitrarily close to $\frac{\alpha-1-2\tau+(1+2\tau)t}{2\alpha^2(1-t)}$ such that $0\le q<\frac{(2\beta-1)(1-t)\kappa}{2}$. Then we have $\frac{(1-2\beta)(1-t)\kappa}{2}+q<0$. From (145) and (148), we have
$$\big\|\big(I+\tfrac{1}{\sigma^2}\Lambda_S\Phi_S^T\Phi_S\big)^{-1}\mu_S\big\|_2
=\mu_0+\Theta\big(n^{(1-t)\max\{-1,\,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\big)
+O\big((\tfrac{1}{\delta}+1)\,n^{\frac{(1-2\beta)(1-t)}{2\alpha}+\frac{(1-2\beta)(1-t)\kappa}{2}}\big)$$
$$=\mu_0+\Theta\big(n^{(1-t)\max\{-1,\,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\big)=\mu_0+o(1). \quad(149)$$
Hence $G_{2,S}(D_n)=\frac{1+o(1)}{2\sigma^2}\|(I+\frac{1}{\sigma^2}\Lambda_S\Phi_S^T\Phi_S)^{-1}\mu_S\|_2^2=\frac{1}{2\sigma^2}\mu_0^2+o(1)$. Then by (143), $G_2(D_n)=\frac{1}{2\sigma^2}\mu_0^2+o(1)+\tilde O\big((\frac{1}{\delta}+1)\frac{n}{\sigma^2}S^{\max\{1/2-\beta,\,1-\alpha\}}\big)$. Choosing $S=n^{\big(\frac{1+\min\{2,\,(2\beta-1)/\alpha\}}{\min\{\beta-1/2,\,\alpha-1\}}+1\big)(1-t)}$, we get the result.

Proof of Theorem 11. According to Lemma 42, $G_2(D_n)=\frac{1}{2\sigma^2}\mu_0^2+o(1)$. By Lemma 39, we have $G_1(D_n)=\Theta\big(n^{\frac{(1-\alpha)(1-t)}{\alpha}}\big)$. Then $\mathbb{E}_\varepsilon\,G(D_n)=G_1(D_n)+G_2(D_n)=\frac{1}{2\sigma^2}\mu_0^2+o(1)$.

D.3 PROOFS RELATED TO THE EXCESS MEAN SQUARED GENERALIZATION ERROR

Proof of Theorem 12. For $\mu_0=0$, we can show that
$$\mathbb{E}_\varepsilon\,M(D_n)=\mathbb{E}_\varepsilon\,\mathbb{E}_{x_{n+1}}\big[\bar m(x_{n+1})-f(x_{n+1})\big]^2
=\mathbb{E}_\varepsilon\,\mathbb{E}_{x_{n+1}}\big[K_{\mathbf{x}x_{n+1}}^T\big(K_n+\sigma_{\mathrm{model}}^2 I_n\big)^{-1}y-f(x_{n+1})\big]^2$$
$$=\mathbb{E}_\varepsilon\,\mathbb{E}_{x_{n+1}}\big[\eta^T\Lambda\Phi^T\big(\Phi\Lambda\Phi^T+\sigma_{\mathrm{model}}^2 I_n\big)^{-1}(\Phi\mu+\varepsilon)-\eta^T\mu\big]^2$$
$$=\mathbb{E}_\varepsilon\,\mathbb{E}_{x_{n+1}}\big[\eta^T\Lambda\Phi^T\big(\Phi\Lambda\Phi^T+\sigma_{\mathrm{model}}^2 I_n\big)^{-1}\varepsilon\big]^2
+\mathbb{E}_{x_{n+1}}\big[\eta^T\big(\Lambda\Phi^T\big(\Phi\Lambda\Phi^T+\sigma_{\mathrm{model}}^2 I_n\big)^{-1}\Phi-I\big)\mu\big]^2$$
$$=\sigma_{\mathrm{true}}^2\,\mathrm{Tr}\,\Lambda\Phi^T\big(\Phi\Lambda\Phi^T+\sigma_{\mathrm{model}}^2 I_n\big)^{-2}\Phi\Lambda
+\mu^T\big(I+\tfrac{1}{\sigma_{\mathrm{model}}^2}\Phi^T\Phi\Lambda\big)^{-1}\big(I+\tfrac{1}{\sigma_{\mathrm{model}}^2}\Lambda\Phi^T\Phi\big)^{-1}\mu$$
$$=\frac{\sigma_{\mathrm{true}}^2}{\sigma_{\mathrm{model}}^2}\Big(\mathrm{Tr}\big(I+\tfrac{\Lambda\Phi^T\Phi}{\sigma_{\mathrm{model}}^2}\big)^{-1}\Lambda
-\mathrm{Tr}\big(I+\tfrac{\Lambda\Phi^T\Phi}{\sigma_{\mathrm{model}}^2}\big)^{-2}\Lambda\Big)
+\big\|\big(I+\tfrac{1}{\sigma_{\mathrm{model}}^2}\Lambda\Phi^T\Phi\big)^{-1}\mu\big\|_2^2.$$
According to (138) from the proof of Lemma 41, the truncation procedure (133) and (142), with probability at least $1-\delta$ we have
$$\big\|\big(I+\tfrac{1}{\sigma_{\mathrm{model}}^2}\Lambda\Phi^T\Phi\big)^{-1}\mu\big\|_2^2
=\Theta\big(n^{\max\{-2(1-t),\,\frac{(1-2\beta)(1-t)}{\alpha}\}}\log^{k/2}n\big)
=(1+o(1))\big\|\big(I+\tfrac{n}{\sigma_{\mathrm{model}}^2}\Lambda\big)^{-1}\mu\big\|_2^2,$$
where $k=0$ if $2\alpha\neq2\beta-1$ and $k=1$ if $2\alpha=2\beta-1$.
According to (121) and (126) from the proof of Lemma 39, the truncation procedure (115), (140) and (141), with probability at least $1-\delta$ we have
$$\mathrm{Tr}\big(I+\tfrac{\Lambda\Phi^T\Phi}{\sigma_{\mathrm{model}}^2}\big)^{-1}\Lambda
-\mathrm{Tr}\big(I+\tfrac{\Lambda\Phi^T\Phi}{\sigma_{\mathrm{model}}^2}\big)^{-2}\Lambda
=\mathrm{Tr}\big(I+\tfrac{n}{\sigma_{\mathrm{model}}^2}\Lambda\big)^{-1}\Lambda\,(1+o(1))
-\big\|\Lambda^{1/2}\big(I+\tfrac{n}{\sigma_{\mathrm{model}}^2}\Lambda\big)^{-1}\big\|_F^2\,(1+o(1))
=\Theta\big(n^{\frac{(1-\alpha)(1-t)}{\alpha}}\big).$$
Combining the above two equations we get
$$\mathbb{E}_\varepsilon\,M(D_n)=(1+o(1))\Big(\frac{\sigma_{\mathrm{true}}^2}{\sigma_{\mathrm{model}}^2}\Big(\mathrm{Tr}\big(I+\tfrac{n}{\sigma_{\mathrm{model}}^2}\Lambda\big)^{-1}\Lambda
-\big\|\Lambda^{1/2}\big(I+\tfrac{n}{\sigma_{\mathrm{model}}^2}\Lambda\big)^{-1}\big\|_F^2\Big)
+\big\|\big(I+\tfrac{n}{\sigma_{\mathrm{model}}^2}\Lambda\big)^{-1}\mu\big\|_2^2\Big)$$
$$=\frac{\sigma_{\mathrm{true}}^2}{\sigma_{\mathrm{model}}^2}\,\Theta\big(n^{\frac{(1-\alpha)(1-t)}{\alpha}}\big)
+\Theta\big(n^{\max\{-2(1-t),\,\frac{(1-2\beta)(1-t)}{\alpha}\}}\log^{k/2}n\big)
=\sigma_{\mathrm{true}}^2\,\Theta\big(n^{\frac{1-\alpha-t}{\alpha}}\big)
+\Theta\big(n^{\max\{-2(1-t),\,\frac{(1-2\beta)(1-t)}{\alpha}\}}\log^{k/2}n\big)
=\Theta\big(\max\big\{\sigma_{\mathrm{true}}^2\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\big\}\big).$$
When $\mu_0>0$, according to (149) in the proof of Lemma 42 and the truncation procedure (133), with probability at least $1-\delta$ we have
$$\mathbb{E}_\varepsilon\,M(D_n)=\Theta\big(n^{\frac{(1-\alpha)(1-t)}{\alpha}}\big)+\mu_0^2+o(1)=\mu_0^2+o(1).$$
Learning Curves for Gaussian Process Regression with Power-Law Priors and Targets

1 INTRODUCTION

Gaussian processes (GPs) provide a flexible and interpretable framework for learning and adaptive inference, and are widely used for constructing prior distributions in non-parametric Bayesian learning. From an application perspective, one crucial question is how fast GPs learn, i.e., how much training data is needed to achieve a certain level of generalization performance. Theoretically, this is addressed by analyzing so-called "learning curves", which describe the generalization error as a function of the training set size $n$. The rate at which the curve approaches zero determines the difficulty of the learning task and conveys important information about the asymptotic performance of GP learning algorithms. In this paper, we study the learning curves for Gaussian process regression. Our main result characterizes the asymptotics of the generalization error in cases where the eigenvalues of the GP kernel and the coefficients of the eigenexpansion of the target function have a power-law decay. In the remainder of this introductory section, we review related work and outline our main contributions.

Gaussian processes A GP model is a probabilistic model on an infinite-dimensional parameter space (Williams and Rasmussen, 2006; Orbanz and Teh, 2010). In GP regression (GPR), for example, this space can be the set of all continuous functions. Assumptions about the learning problem are encoded by way of a prior distribution over functions, which gets transformed into a posterior distribution given some observed data. The mean of the posterior is then used for prediction. The model uses only a finite subset of the available parameters to explain the data, and this subset can grow arbitrarily large as more data are observed.
In this sense, GPs are "non-parametric" and contrast with parametric models, where there is a fixed number of parameters. For regression with Gaussian noise, a major appeal of the GP formalism is that the posterior is analytically tractable. GPs also play an important role in learning with kernel machines (Kanagawa et al., 2018), and modeling with GPs has recently gained considerable traction in the neural network community.

Neural networks and kernel learning From a GP viewpoint, there exists a well-known correspondence between kernel methods and infinite neural networks (NNs), first studied by Neal (1996). Neal showed that the outputs of a randomly initialized one-hidden-layer neural network (with appropriate scaling of the variance of the initialization distribution) converge to a GP over functions in the limit of an infinite number of hidden units. Follow-up work extended this correspondence with analytical expressions for the kernel covariance for shallow NNs (Williams, 1997), and more recently for deep fully-connected NNs (Lee et al., 2018; de G. Matthews et al., 2018), convolutional NNs with many channels (Novak et al., 2019; Garriga-Alonso et al., 2019), and more general architectures (Yang, 2019). The correspondence enables exact Bayesian inference in the associated GP model for infinite-width NNs on regression tasks and has led to some recent breakthroughs in our understanding of overparameterized NNs (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019; Belkin et al., 2018; Daniely et al., 2016; Yang and Salman, 2019; Bietti and Mairal, 2019). The most prominent kernels associated with infinite-width NNs are the Neural Network Gaussian Process (NNGP) kernel when only the last layer is trained (Lee et al., 2018; de G. Matthews et al., 2018), and the Neural Tangent Kernel (NTK) when the entire model is trained (Jacot et al., 2018).
Empirical studies have shown that inference with such infinite network kernels is competitive with standard gradient-descent-based optimization for fully-connected architectures (Lee et al., 2020).

Learning curves A large-scale empirical characterization of the generalization performance of state-of-the-art deep NNs showed that the associated learning curves often follow a power law of the form $n^{-\beta}$, with the exponent $\beta$ ranging between 0.07 and 0.35 depending on the data and the algorithm (Hestness et al., 2017; Spigler et al., 2020). Power-law asymptotics of learning curves were theoretically studied in early works for the Gibbs learning algorithm (Amari et al., 1992; Amari and Murata, 1993; Haussler et al., 1996), which showed a generalization error scaling with exponent $\beta=0.5$, $1$ or $2$ under certain assumptions. More recent results from statistical learning theory characterize the shape of learning curves depending on the properties of the hypothesis class (Bousquet et al., 2021). In the context of GPs, approximations and bounds on learning curves have been investigated in several works (Sollich, 1999; Sollich and Halees, 2002; Sollich, 2001; Opper and Vivarelli, 1999; Opper and Malzahn, 2002; Williams and Vivarelli, 2000; Malzahn and Opper, 2001a;b; Seeger et al., 2008; Van Der Vaart and Van Zanten, 2011; Le Gratiet and Garnier, 2015), with recent extensions to kernel regression from a spectral bias perspective (Bordelon et al., 2020; Canatar et al., 2021). For a review of learning curves in relation to their shape and monotonicity, see Loog et al. (2019); Viering et al. (2019); Viering and Loog (2021). A related but complementary line of work studies the convergence rates and posterior consistency properties of Bayesian non-parametric models (Barron, 1998; Seeger et al., 2008; Van Der Vaart and Van Zanten, 2011).
Power-law decay of the GP kernel eigenspectrum The rate of decay of the eigenvalues of the GP kernel conveys important information about its smoothness. Intuitively, if a process is "rough", with more power at high frequencies, then the eigenspectrum decays more slowly. On the other hand, kernels that define smooth processes have a fast-decaying eigenspectrum (Stein, 2012; Williams and Rasmussen, 2006). The precise eigenvalues $(\lambda_p)_{p\ge1}$ of the operators associated to many kernels and input distributions are not known explicitly, except for a few special cases (Williams and Rasmussen, 2006). Often, however, the asymptotic properties are known. The asymptotic rate of decay of the eigenvalues of stationary kernels for input distributions with bounded support is well understood (Widom, 1963; Ritter et al., 1995). Ronen et al. (2019) showed that for inputs distributed uniformly on a hypersphere, the eigenfunctions of the arc-cosine kernel are spherical harmonics and the eigenvalues follow a power-law decay. The spectral properties of the NTK are integral to the analysis of training convergence and generalization of NNs, and several recent works empirically justify and rely on a power-law assumption for the NTK spectrum (Bahri et al., 2021; Canatar et al., 2021; Lee et al., 2020; Nitanda and Suzuki, 2021). Velikanov and Yarotsky (2021) showed that the spectrum of the NTK of infinitely wide shallow ReLU networks follows a power law that is determined primarily by the singularities of the kernel and has the form $\lambda_p\propto p^{-\alpha}$ with $\alpha=1+\frac{1}{d}$, where $d$ is the input dimension.

Asymptotics of the generalization error of kernel ridge regression (KRR) There is a well-known equivalence between GPR and KRR, with the additive noise in GPR playing the role of regularization in KRR (Kanagawa et al., 2018).
Analysis of the decay rates of the excess generalization error of KRR has appeared in several works, e.g., in the noiseless case with constant regularization (Bordelon et al., 2020; Spigler et al., 2020; Jun et al., 2019), and in the noisy, optimally regularized case (Caponnetto and De Vito, 2007; Steinwart et al., 2009; Fischer and Steinwart, 2020), under the assumption that the kernel eigenspectrum and the eigenexpansion coefficients of the target function follow a power law. These assumptions, which are often called resp. the capacity and source conditions, are related to the effective dimension of the problem and the difficulty of learning the target function (Caponnetto and De Vito, 2007; Blanchard and Mücke, 2018). Cui et al. (2021) present a unifying picture of the excess error decay rates under the capacity and source conditions in terms of the interplay between noise and regularization, illustrating their results with real datasets.

Contributions In this work, we characterize the asymptotics of the generalization error of GPR and KRR under the capacity and source conditions. Our main contributions are as follows:

• When the eigenspectrum of the prior decays with rate $\alpha$ and the eigenexpansion coefficients of the target function decay with rate $\beta$, we show that with high probability over the draw of $n$ input samples, the negative log-marginal likelihood behaves as $\Theta\big(n^{\max\{\frac{1}{\alpha},\,\frac{1-2\beta}{\alpha}+1\}}\big)$ (Theorem 7) and the generalization error behaves as $\Theta\big(n^{\max\{\frac{1}{\alpha}-1,\,\frac{1-2\beta}{\alpha}\}}\big)$ (Theorem 9). In the special case that the model is correctly specified, i.e., the GP prior is the true one from which the target functions are actually generated, our result implies that the generalization error behaves as $O\big(n^{\frac{1}{\alpha}-1}\big)$, recovering as a special case a result due to Sollich and Halees (2002) (vide Remark 10).
• Under similar assumptions as in the previous item, we leverage the equivalence between GPR and KRR to show that the excess generalization error of KRR behaves as $\Theta\big(n^{\max\{\frac{1}{\alpha}-1,\,\frac{1-2\beta}{\alpha}\}}\big)$ (Theorem 12). In the noiseless case with constant regularization, our result implies that the generalization error behaves as $\Theta\big(n^{\frac{1-2\beta}{\alpha}}\big)$, recovering as a special case a result due to Bordelon et al. (2020). Specializing to the case of KRR with Gaussian design, we recover as a special case a result due to Cui et al. (2021) (vide Remark 14). For the unrealizable case, i.e., when the target function is outside the span of the eigenfunctions with positive eigenvalues, we show that the generalization error converges to a constant.

• We present a few toy experiments demonstrating the theory for GPR with the arc-cosine kernel without biases (resp. with biases), which is the conjugate kernel of an infinitely wide shallow network with two inputs and one hidden layer without biases (resp. with biases) (Cho and Saul, 2009; Ronen et al., 2019).

2 BAYESIAN LEARNING AND GENERALIZATION ERROR FOR GPS

In GP regression, our goal is to learn a target function $f:\Omega\mapsto\mathbb{R}$ between an input $x\in\Omega$ and output $y\in\mathbb{R}$ based on training samples $D_n=\{(x_i,y_i)\}_{i=1}^n$. We consider an additive noise model $y_i=f(x_i)+\varepsilon_i$, where $\varepsilon_i\overset{\text{i.i.d.}}{\sim}N(0,\sigma^2_{\mathrm{true}})$. If $\rho$ denotes the marginal density of the inputs $x_i$, then the pairs $(x_i,y_i)$ are generated according to the density $q(x,y)=\rho(x)q(y|x)$, where $q(y|x)=N(y|f(x),\sigma^2_{\mathrm{true}})$. We assume that there is a prior distribution $\Pi_0$ on $f$ which is defined as a zero-mean GP with continuous covariance function $k:\Omega\times\Omega\to\mathbb{R}$, i.e., $f\sim GP(0,k)$. This means that for any finite set $\mathbf{x}=(x_1,\dots,x_n)^T$, the random vector $f(\mathbf{x})=(f(x_1),\dots,f(x_n))^T$ follows the multivariate normal distribution $N(0,K_n)$ with covariance matrix $K_n=(k(x_i,x_j))_{i,j=1}^n\in\mathbb{R}^{n\times n}$.
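To make the data-generating process concrete, the following sketch draws a training set from the model $y_i=f(x_i)+\varepsilon_i$ (the cosine basis, the coefficient law $\mu_p=p^{-\beta}$, the truncation level, and the uniform input density are illustrative assumptions, not choices made in the paper):

```python
import numpy as np

def sample_dataset(n, beta=1.5, P=500, sigma_true=0.1, rng=None):
    """Draw (x_i, y_i) pairs from y = f(x) + eps with a power-law target.

    Illustrative choices: inputs uniform on [0, 1], cosine eigenfunctions
    phi_p(x) = sqrt(2) cos(pi p x), and target coefficients mu_p = p^{-beta}
    truncated at P terms (the source condition of Assumption 5).
    """
    rng = np.random.default_rng(rng)
    x = rng.uniform(0.0, 1.0, size=n)
    p = np.arange(1, P + 1)
    mu = p ** (-beta)                      # |mu_p| ~ p^{-beta}
    phi = np.sqrt(2.0) * np.cos(np.pi * p[None, :] * x[:, None])  # (n, P)
    f_x = phi @ mu                         # noiseless target values f(x_i)
    y = f_x + rng.normal(0.0, sigma_true, size=n)
    return x, y, f_x
```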
By Bayes' rule, the posterior distribution of the target $f$ given the training data is given by
$$d\Pi_n(f|D_n)=\frac{1}{Z(D_n)}\prod_{i=1}^n N\big(y_i|f(x_i),\sigma^2_{\mathrm{model}}\big)\,d\Pi_0(f),$$
where $\Pi_0$ is the prior distribution, $Z(D_n)=\int\prod_{i=1}^n N(y_i|f(x_i),\sigma^2_{\mathrm{model}})\,d\Pi_0(f)$ is the marginal likelihood or model evidence, and $\sigma^2_{\mathrm{model}}$ is the noise variance assumed by the model in GPR. In practice, we do not know the exact value of $\sigma_{\mathrm{true}}$, and so our choice of $\sigma_{\mathrm{model}}$ can be different from $\sigma_{\mathrm{true}}$. The GP prior and the Gaussian noise assumption allow for exact Bayesian inference, and the posterior distribution over functions is again a GP with mean and covariance function given by
$$\bar m(x)=K_{\mathbf{x}x}^T\big(K_n+\sigma^2_{\mathrm{model}}I_n\big)^{-1}y,\quad x\in\Omega, \quad(1)$$
$$\bar k(x,x')=k(x,x')-K_{\mathbf{x}x}^T\big(K_n+\sigma^2_{\mathrm{model}}I_n\big)^{-1}K_{\mathbf{x}x'},\quad x,x'\in\Omega, \quad(2)$$
where $K_{\mathbf{x}x}=(k(x_1,x),\dots,k(x_n,x))^T$ and $y=(y_1,\dots,y_n)^T\in\mathbb{R}^n$ (Williams and Rasmussen, 2006, Eqs. 2.23-24). The performance of GPR depends on how well the posterior approximates $f$ as the number of training samples $n$ tends to infinity. The distance of the posterior to the ground truth can be measured in various ways. We consider two such measures, namely the Bayesian generalization error (Seeger et al., 2008; Haussler and Opper, 1997; Opper and Vivarelli, 1999) and the excess mean squared error (Sollich and Halees, 2002; Le Gratiet and Garnier, 2015; Bordelon et al., 2020; Cui et al., 2021).

Definition 1 (Bayesian generalization error). The Bayesian generalization error is defined as the Kullback-Leibler divergence between the true density $q(y|x)$ and the Bayesian predictive density $p_n(y|x,D_n)=\int p(y|f(x))\,d\Pi_n(f|D_n)$,
$$G(D_n)=\int q(x,y)\log\frac{q(y|x)}{p_n(y|x,D_n)}\,dx\,dy. \quad(3)$$
A related quantity of interest is the stochastic complexity (SC), also known as the free energy, which is just the negative log-marginal likelihood.
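Equations (1) and (2) translate directly into code. The sketch below is a generic implementation (the squared-exponential kernel and its length scale are placeholders for illustration, not one of the kernels analyzed in the paper):

```python
import numpy as np

def gp_posterior(k, X, y, Xstar, sigma2_model):
    """GP posterior mean and covariance on test points, following Eqs. (1)-(2).

    k: callable kernel, k(A, B) -> Gram matrix of shape (len(A), len(B)).
    """
    Kn = k(X, X)                                    # (n, n) training Gram matrix
    Ks = k(X, Xstar)                                # column j is K_{x x*_j}
    A = Kn + sigma2_model * np.eye(len(X))
    alpha = np.linalg.solve(A, y)                   # (K_n + sigma^2 I)^{-1} y
    mean = Ks.T @ alpha                             # Eq. (1)
    cov = k(Xstar, Xstar) - Ks.T @ np.linalg.solve(A, Ks)  # Eq. (2)
    return mean, cov

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel; used here only as a concrete placeholder."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)
```

With a small $\sigma^2_{\mathrm{model}}$ the posterior mean nearly interpolates the training targets, as expected from Eq. (1).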
We shall primarily be concerned with a normalized version of the stochastic complexity, which is defined as follows:
$$F^0(D_n)=-\log\frac{Z(D_n)}{\prod_{i=1}^n q(y_i|x_i)}
=-\log\frac{\int\prod_{i=1}^n N(y_i|f(x_i),\sigma^2_{\mathrm{model}})\,d\Pi_0(f)}{\prod_{i=1}^n q(y_i|x_i)}. \quad(4)$$
The generalization error (3) can be expressed in terms of the normalized SC as follows (Watanabe, 2009, Theorem 1.2):
$$G(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}F^0(D_{n+1})-F^0(D_n), \quad(5)$$
where $D_{n+1}=D_n\cup\{(x_{n+1},y_{n+1})\}$ is obtained by augmenting $D_n$ with a test point $(x_{n+1},y_{n+1})$. If we only wish to measure the performance of the mean of the Bayesian posterior, then we can use the excess mean squared error:

Definition 2 (Excess mean squared error). The excess mean squared error is defined as
$$M(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}\big(\bar m(x_{n+1})-y_{n+1}\big)^2-\sigma^2_{\mathrm{true}}
=\mathbb{E}_{x_{n+1}}\big(\bar m(x_{n+1})-f(x_{n+1})\big)^2. \quad(6)$$
Proposition 3 (Normalized stochastic complexity for GPR). Assume that $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2$. The normalized SC $F^0(D_n)$ (4) for GPR with prior $GP(0,k)$ is given as
$$F^0(D_n)=\frac12\log\det\Big(I_n+\frac{K_n}{\sigma^2}\Big)+\frac{1}{2\sigma^2}y^T\Big(I_n+\frac{K_n}{\sigma^2}\Big)^{-1}y-\frac{1}{2\sigma^2}\big(y-f(\mathbf{x})\big)^T\big(y-f(\mathbf{x})\big), \quad(7)$$
where $y-f(\mathbf{x})=\varepsilon=(\varepsilon_1,\dots,\varepsilon_n)^T$. The expectation of the normalized SC w.r.t. the noise is given as
$$\mathbb{E}_\varepsilon F^0(D_n)=\frac12\log\det\Big(I_n+\frac{K_n}{\sigma^2}\Big)-\frac12\mathrm{Tr}\Big(I_n-\Big(I_n+\frac{K_n}{\sigma^2}\Big)^{-1}\Big)+\frac{1}{2\sigma^2}f(\mathbf{x})^T\Big(I_n+\frac{K_n}{\sigma^2}\Big)^{-1}f(\mathbf{x}). \quad(8)$$
This is a basic result and has applications in relation to model selection in GPR (Williams and Rasmussen, 2006). For completeness, we give a proof of Proposition 3 in Appendix B. Seeger et al. (2008, Theorem 1) gave an upper bound on the normalized stochastic complexity for the case when $f$ lies in the reproducing kernel Hilbert space (RKHS) of the GP prior. It is well known, however, that sample paths of a GP almost surely fall outside the corresponding RKHS (Van Der Vaart and Van Zanten, 2011), limiting the applicability of the result.
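Formula (7) can be cross-checked against the definition (4): for a GP prior with Gaussian noise, the model evidence is the Gaussian density $Z(D_n)=N(y\,|\,0,K_n+\sigma^2 I_n)$, so both routes should agree. A minimal numerical check (random PSD $K_n$, purely illustrative):

```python
import numpy as np

def normalized_sc(y, f_x, Kn, sigma2):
    """Normalized stochastic complexity via Eq. (7)."""
    n = len(y)
    B = np.eye(n) + Kn / sigma2
    logdet = np.linalg.slogdet(B)[1]
    quad = y @ np.linalg.solve(B, y) / (2.0 * sigma2)
    eps = y - f_x
    return 0.5 * logdet + quad - eps @ eps / (2.0 * sigma2)

def normalized_sc_direct(y, f_x, Kn, sigma2):
    """Direct form (4): -log N(y | 0, Kn + s^2 I) + log N(y | f(x), s^2 I)."""
    n = len(y)
    C = Kn + sigma2 * np.eye(n)
    log_Z = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(C)[1]
                    + y @ np.linalg.solve(C, y))
    eps = y - f_x
    log_q = -0.5 * (n * np.log(2 * np.pi * sigma2) + eps @ eps / sigma2)
    return -(log_Z - log_q)
```

Note how the $(2\pi)^{n/2}$ normalizers cancel in the ratio, which is what makes the compact form (7) possible.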
We next derive the asymptotics of $\mathbb{E}_\varepsilon F^0(D_n)$, the expected generalization error $\mathbb{E}_\varepsilon G(D_n)=\mathbb{E}_\varepsilon\mathbb{E}_{(x_{n+1},y_{n+1})}F^0(D_{n+1})-\mathbb{E}_\varepsilon F^0(D_n)$, and the excess mean squared error $\mathbb{E}_\varepsilon M(D_n)$.

3 ASYMPTOTIC ANALYSIS OF GP REGRESSION WITH POWER-LAW PRIORS

We begin by introducing some notation and assumptions. We assume that $f\in L^2(\Omega,\rho)$. By Mercer's theorem (Williams and Rasmussen, 2006, Theorem 4.2), the covariance function of the GP prior can be decomposed as $k(x_1,x_2)=\sum_{p=1}^\infty\lambda_p\varphi_p(x_1)\varphi_p(x_2)$, where $(\varphi_p(x))_{p\ge1}$ are the eigenfunctions of the operator $L_k:L^2(\Omega,\rho)\mapsto L^2(\Omega,\rho)$, $(L_kf)(x)=\int_\Omega k(x,s)f(s)\,d\rho(s)$, and $(\lambda_p)_{p\ge1}$ are the corresponding positive eigenvalues. We index the sequence of eigenvalues in decreasing order, that is, $\lambda_1\ge\lambda_2\ge\cdots>0$. The target function $f(x)$ is decomposed into the orthonormal set $(\varphi_p(x))_{p\ge1}$ and its orthogonal complement $\{\varphi_p(x):p\ge1\}^\perp$ as
$$f(x)=\sum_{p=1}^\infty\mu_p\varphi_p(x)+\mu_0\varphi_0(x)\in L^2(\Omega,\rho), \quad(9)$$
where $\mu=(\mu_0,\mu_1,\dots,\mu_p,\dots)^T$ are the coefficients of the decomposition, and $\varphi_0(x)$ satisfies $\|\varphi_0(x)\|_2=1$ and $\varphi_0(x)\in\{\varphi_p(x):p\ge1\}^\perp$. For given sample inputs $\mathbf{x}$, let $\varphi_p(\mathbf{x})=(\varphi_p(x_1),\dots,\varphi_p(x_n))^T$, $\Phi=(\varphi_0(\mathbf{x}),\varphi_1(\mathbf{x}),\dots,\varphi_p(\mathbf{x}),\dots)$ and $\Lambda=\mathrm{diag}\{0,\lambda_1,\dots,\lambda_p,\dots\}$. Then the covariance matrix $K_n$ can be written as $K_n=\Phi\Lambda\Phi^T$, and the function values on the sample inputs can be written as $f(\mathbf{x})=\Phi\mu$. We shall make the following assumptions in order to derive the power-law asymptotics of the normalized stochastic complexity and the generalization error of GPR:

Assumption 4 (Power-law decay of eigenvalues). The eigenvalues $(\lambda_p)_{p\ge1}$ follow the power law
$$\underline{C}_\lambda\,p^{-\alpha}\le\lambda_p\le\overline{C}_\lambda\,p^{-\alpha},\quad\forall p\ge1, \quad(10)$$
where $\underline{C}_\lambda$, $\overline{C}_\lambda$ and $\alpha$ are three positive constants which satisfy $0<\underline{C}_\lambda\le\overline{C}_\lambda$ and $\alpha>1$.
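The objects just introduced can be instantiated concretely. The sketch below (an illustrative construction, not the paper's experimental setup) uses the cosine basis $\varphi_p(x)=\sqrt{2}\cos(\pi p x)$ on $[0,1]$, orthonormal under the uniform input density, with $\lambda_p=p^{-\alpha}$: it builds $K_n=\Phi\Lambda\Phi^T$ from a truncated Mercer sum and recovers the decay exponent of Assumption 4 from the Nyström eigenvalues of $K_n/n$.

```python
import numpy as np

# Illustrative power-law Mercer construction (assumed for this sketch):
# lambda_p = p^{-alpha}, cosine basis on [0, 1], truncated at P terms.
n, P, alpha = 500, 400, 2.0
x = (np.arange(n) + 0.5) / n                     # evenly spread inputs
p = np.arange(1, P + 1)
lam = p ** (-alpha)
Phi = np.sqrt(2.0) * np.cos(np.pi * p[None, :] * x[:, None])   # (n, P)
K = (Phi * lam[None, :]) @ Phi.T                 # K_n = Phi Lambda Phi^T

# K_n is a valid covariance matrix: symmetric, positive semi-definite.
eigs = np.sort(np.linalg.eigvalsh(K / n))[::-1]

# The Nystrom eigenvalues of K_n / n approximate the operator eigenvalues
# lambda_p, so a log-log fit over the leading ones recovers alpha.
top = np.arange(1, 21)
alpha_hat = -np.polyfit(np.log(top), np.log(eigs[:20]), 1)[0]
```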
As mentioned in the introduction, this assumption, called the capacity condition, is fairly standard in kernel learning and is adopted in many recent works (Bordelon et al., 2020; Canatar et al., 2021; Jun et al., 2019; Bietti et al., 2021; Cui et al., 2021). Velikanov and Yarotsky (2021) derived the exact value of the exponent $\alpha$ when the kernel function has a homogeneous singularity on its diagonal, which is the case, for instance, for the arc-cosine kernel.

Assumption 5 (Power-law decay of coefficients of decomposition). Let $\underline{C}_\mu,\overline{C}_\mu>0$ and $\beta>1/2$ be positive constants and let $\{p_i\}_{i\ge1}$ be an increasing integer sequence such that $\sup_{i\ge1}(p_{i+1}-p_i)<\infty$. The coefficients $(\mu_p)_{p\ge1}$ of the decomposition (9) of the target function follow the power law
$$|\mu_p|\le\overline{C}_\mu\,p^{-\beta},\quad\forall p\ge1\qquad\text{and}\qquad|\mu_{p_i}|\ge\underline{C}_\mu\,p_i^{-\beta},\quad\forall i\ge1. \quad(11)$$
Since $f\in L^2(\Omega,\rho)$, we have $\sum_{p=0}^\infty\mu_p^2<\infty$. The condition $\beta>1/2$ in Assumption 5 ensures that the sum $\sum_{p=0}^\infty\mu_p^2$ does not diverge. When the orthonormal basis $(\varphi_p(x))_p$ is the Fourier basis or the spherical harmonics basis, the coefficients $(\mu_p)_p$ decay at least as fast as a power law so long as the target function $f(x)$ satisfies certain smoothness conditions (Bietti and Mairal, 2019). Velikanov and Yarotsky (2021) gave examples of some natural classes of functions for which Assumption 5 is satisfied, such as functions that have a bounded support with smooth boundary and are smooth on the interior of this support, and derived the corresponding exponents $\beta$.

Assumption 6 (Boundedness of eigenfunctions). The eigenfunctions $(\varphi_p(x))_{p\ge0}$ satisfy
$$\|\varphi_0\|_\infty\le C_\varphi\qquad\text{and}\qquad\|\varphi_p\|_\infty\le C_\varphi\,p^\tau,\quad p\ge1, \quad(12)$$
where $C_\varphi$ and $\tau$ are two positive constants which satisfy $\tau<\frac{\alpha-1}{2}$. The second condition in (12) appears, for example, in Valdivia (2018, Hypothesis H1) and is less restrictive than the assumption of uniformly bounded eigenfunctions that has appeared in several other works in the GP literature, see, e.g.
, Braun (2006); Chatterji et al. (2019); Vakili et al. (2021). Define
$$T_1(D_n)=\frac12\log\det\Big(I_n+\frac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)-\frac12\mathrm{Tr}\Big(I_n-\Big(I_n+\frac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}\Big), \quad(13)$$
$$T_2(D_n)=\frac{1}{2\sigma^2}f(\mathbf{x})^T\Big(I_n+\frac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(\mathbf{x}), \quad(14)$$
$$G_1(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}\big(T_1(D_{n+1})-T_1(D_n)\big), \quad(15)$$
$$G_2(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}\big(T_2(D_{n+1})-T_2(D_n)\big). \quad(16)$$
Using (8) and (5), we have $\mathbb{E}_\varepsilon F^0(D_n)=T_1(D_n)+T_2(D_n)$ and $\mathbb{E}_\varepsilon G(D_n)=G_1(D_n)+G_2(D_n)$. Intuitively, $G_1$ corresponds to the effect of the noise on the generalization error irrespective of the target function $f$, whereas $G_2$ corresponds to the ability of the model to fit the target function. As we will see next in Theorems 9 and 11, if $\alpha$ is large, then the error associated with the noise is smaller. When $f$ is contained in the span of the eigenfunctions $\{\varphi_p\}_{p\ge1}$, $G_2$ decreases with increasing $n$, but if $f$ contains an orthogonal component, then the error remains constant and GP regression is not able to learn the target function.

3.1 ASYMPTOTICS OF THE NORMALIZED STOCHASTIC COMPLEXITY

We derive the asymptotics of the normalized SC (8) for the following two cases: $\mu_0=0$ and $\mu_0>0$. When $\mu_0=0$, the target function $f(x)$ lies in the span of all eigenfunctions with positive eigenvalues.

Theorem 7 (Asymptotics of the normalized SC, $\mu_0=0$). Assume that $\mu_0=0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(1)$. Under Assumptions 4, 5 and 6, with probability at least $1-n^{-q}$ over sample inputs $(x_i)_{i=1}^n$, where $0\le q<\min\{\frac{(2\beta-1)(\alpha-1-2\tau)}{4\alpha^2},\frac{\alpha-1-2\tau}{2\alpha}\}$, the expected normalized SC (8) has the asymptotic behavior:
$$\mathbb{E}_\varepsilon F^0(D_n)=\Big[\frac12\log\det\Big(I+\frac{n}{\sigma^2}\Lambda\Big)-\frac12\mathrm{Tr}\Big(I-\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\Big)+\frac{n}{2\sigma^2}\mu^T\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big](1+o(1))=\Theta\big(n^{\max\{\frac{1}{\alpha},\,\frac{1-2\beta}{\alpha}+1\}}\big). \quad(17)$$
The complete proof of Theorem 7 is given in Appendix D.1. We give a sketch of the proof below.
In the sequel, we use the notations $O$ and $\Theta$ to denote the standard mathematical orders and the notation $\tilde O$ to suppress logarithmic factors.

Proof sketch of Theorem 7. By (8), (13) and (14) we have $\mathbb{E}_\varepsilon F^0(D_n)=T_1(D_n)+T_2(D_n)$. In order to analyze the terms $T_1(D_n)$ and $T_2(D_n)$, we will consider truncated versions of these quantities and bound the corresponding residual errors. Given a truncation parameter $R\in\mathbb{N}$, let $\Phi_R=(\varphi_0(\mathbf{x}),\varphi_1(\mathbf{x}),\dots,\varphi_R(\mathbf{x}))\in\mathbb{R}^{n\times(R+1)}$ be the truncated matrix of eigenfunctions evaluated at the data points, $\Lambda_R=\mathrm{diag}(0,\lambda_1,\dots,\lambda_R)\in\mathbb{R}^{(R+1)\times(R+1)}$ and $\mu_R=(\mu_0,\mu_1,\dots,\mu_R)^T\in\mathbb{R}^{R+1}$. We define the truncated version of $T_1(D_n)$ as follows:
$$T_{1,R}(D_n)=\frac12\log\det\Big(I_n+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)-\frac12\mathrm{Tr}\Big(I_n-\Big(I_n+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Big). \quad(18)$$
Similarly, define $\Phi_{>R}=(\varphi_{R+1}(\mathbf{x}),\varphi_{R+2}(\mathbf{x}),\dots,\varphi_p(\mathbf{x}),\dots)$, $\Lambda_{>R}=\mathrm{diag}(\lambda_{R+1},\dots,\lambda_p,\dots)$, $f_R(x)=\sum_{p=1}^R\mu_p\varphi_p(x)$, $f_R(\mathbf{x})=(f_R(x_1),\dots,f_R(x_n))^T$, $f_{>R}(x)=f(x)-f_R(x)$, and $f_{>R}(\mathbf{x})=(f_{>R}(x_1),\dots,f_{>R}(x_n))^T$. The truncated version of $T_2(D_n)$ is then defined as
$$T_{2,R}(D_n)=\frac{1}{2\sigma^2}f_R(\mathbf{x})^T\Big(I_n+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(\mathbf{x}). \quad(19)$$
The proof consists of three steps:

• Approximation step: In this step, we show that the asymptotics of $T_{1,R}$ resp. $T_{2,R}$ dominates that of the residuals, $|T_{1,R}(D_n)-T_1(D_n)|$ resp. $|T_{2,R}(D_n)-T_2(D_n)|$ (see Lemma 32). This builds upon first showing that $\|\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\|_2=\tilde O\big(\max\{nR^{-\alpha},\,n^{\frac12}R^{\frac{1-2\alpha}{2}},\,R^{1-\alpha}\}\big)$ (see Lemma 25) and then choosing $R=n^{\frac{1}{\alpha}+\kappa}$ where $0<\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$, in which case $\|\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\|_2=o(1)$. Intuitively, the choice of the truncation parameter $R$ is governed by the fact that $\lambda_R=\Theta(R^{-\alpha})=\Theta(n^{-1-\kappa\alpha})=o(n^{-1})$.

• Decomposition step: In this step, we decompose $T_{1,R}$ into a term independent of $\Phi_R$ and a series involving $\Phi_R^T\Phi_R-nI_R$, and likewise for $T_{2,R}$ (see Lemma 34).
This builds upon first showing, using the Woodbury matrix identity (Williams and Rasmussen, 2006, §A.3), that
$$T_{1,R}(D_n)=\frac12\log\det\Big(I_R+\frac{1}{\sigma^2}\Lambda_R\Phi_R^T\Phi_R\Big)-\frac12\mathrm{Tr}\,\Phi_R\big(\sigma^2I_R+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\Lambda_R\Phi_R^T, \quad(20)$$
$$T_{2,R}(D_n)=\frac{1}{2\sigma^2}\mu_R^T\Phi_R^T\Phi_R\big(\sigma^2I_R+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_R, \quad(21)$$
and then Taylor expanding the matrix inverse $(\sigma^2I_R+\Lambda_R\Phi_R^T\Phi_R)^{-1}$ in (20) and (21) to show that the $\Phi_R$-independent terms in the decomposition of $T_{1,R}$ and $T_{2,R}$ are, respectively, $\frac12\log\det(I_R+\frac{n}{\sigma^2}\Lambda_R)-\frac12\mathrm{Tr}(I_R-(I_R+\frac{n}{\sigma^2}\Lambda_R)^{-1})$ and $\frac{n}{2\sigma^2}\mu_R^T(I_R+\frac{n}{\sigma^2}\Lambda_R)^{-1}\mu_R$.

• Concentration step: Finally, we use concentration inequalities to show that these $\Phi_R$-independent terms dominate the series involving $\Phi_R^T\Phi_R-nI_R$ (see Lemma 35), in which case we have
$$T_{1,R}(D_n)=\Big(\frac12\log\det\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I_R-\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\Big)\Big)(1+o(1))=\Theta\big(n^{\frac{1}{\alpha}}\big),$$
$$T_{2,R}(D_n)=\Big(\frac{n}{2\sigma^2}\mu_R^T\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R\Big)(1+o(1))=\begin{cases}\Theta\big(n^{\max\{0,\,\frac{1-2\beta}{\alpha}+1\}}\big),&\alpha\neq2\beta-1,\\[2pt]\Theta(\log n),&\alpha=2\beta-1.\end{cases}$$
The key idea is to consider the matrix $\Lambda_R^{1/2}(I+\frac{n}{\sigma^2}\Lambda_R)^{-1/2}\Phi_R^T\Phi_R(I+\frac{n}{\sigma^2}\Lambda_R)^{-1/2}\Lambda_R^{1/2}$ and show that it concentrates around $n\Lambda_R(I+\frac{n}{\sigma^2}\Lambda_R)^{-1}$ (see Corollary 22). Note that an ordinary application of the matrix Bernstein inequality to $\Phi_R^T\Phi_R-nI_R$ yields $\|\Phi_R^T\Phi_R-nI_R\|_2=O(R\sqrt{n})$, which is not sufficient for our purposes, since this would give $O(R\sqrt{n})=o(n)$ only when $\alpha>2$. In contrast, our results are valid for $\alpha>1$ and cover cases of practical interest, e.g., the NTK of infinitely wide shallow ReLU networks (Velikanov and Yarotsky, 2021) and the arc-cosine kernels over high-dimensional hyperspheres (Ronen et al., 2019) that have $\alpha=1+O(\frac{1}{d})$, where $d$ is the input dimension.

For $\mu_0>0$, we note the following result:

Theorem 8 (Asymptotics of the normalized SC, $\mu_0>0$). Assume $\mu_0>0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(1)$.
Under Assumptions 4, 5 and 6, with probability at least $1-n^{-q}$ over sample inputs $(x_i)_{i=1}^n$, where $0\le q<\min\{\frac{2\beta-1}{2},\alpha\}\cdot\min\{\frac{\alpha-1-2\tau}{2\alpha^2},\frac{2\beta-1}{\alpha^2}\}$, the expected normalized SC (8) has the asymptotic behavior: $\mathbb{E}_\varepsilon F^0(D_n)=\frac{1}{2\sigma^2}\mu_0^2\,n+o(n)$.

The proof of Theorem 8 is given in Appendix D.1 and follows from showing that when $\mu_0>0$, $T_{2,R}(D_n)=\big(\frac{n}{2\sigma^2}\mu_R^T(I_R+\frac{n}{\sigma^2}\Lambda_R)^{-1}\mu_R\big)(1+o(1))=\frac{1}{2\sigma^2}\mu_0^2\,n+o(n)$ (see Lemma 38), which dominates $T_1(D_n)$ and the residual $|T_{2,R}(D_n)-T_2(D_n)|$.

3.2 ASYMPTOTICS OF THE BAYESIAN GENERALIZATION ERROR

In this section, we derive the asymptotics of the expected generalization error $\mathbb{E}_\varepsilon G(D_n)$ by analyzing the asymptotics of the components $G_1(D_n)$ and $G_2(D_n)$ in resp. (15) and (16) for the following two cases: $\mu_0=0$ and $\mu_0>0$. First, we consider the case $\mu_0=0$.

Theorem 9 (Asymptotics of the Bayesian generalization error, $\mu_0=0$). Let Assumptions 4, 5, and 6 hold. Assume that $\mu_0=0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(n^t)$ where $1-\frac{\alpha}{1+2\tau}<t<1$. Then with probability at least $1-n^{-q}$ over sample inputs $(x_i)_{i=1}^n$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the expectation of the Bayesian generalization error (3) w.r.t. the noise has the asymptotic behavior:
$$\mathbb{E}_\varepsilon G(D_n)=\frac{1+o(1)}{2\sigma^2}\Big(\mathrm{Tr}\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\Lambda-\Big\|\Lambda^{1/2}\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\Big\|_F^2+\Big\|\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big\|_2^2\Big)=\Theta\big(n^{\max\{\frac{(1-\alpha)(1-t)}{\alpha},\,\frac{(1-2\beta)(1-t)}{\alpha}\}}\big). \quad(22)$$
The proof of Theorem 9 is given in Appendix D.2. Intuitively, for a given $t$, the exponent $\frac{(1-\alpha)(1-t)}{\alpha}$ in (22) captures the rate at which the model suppresses the noise, while the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ captures the rate at which the model learns the target function. A larger $\beta$ implies that the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ is smaller and it is easier to learn the target. A larger $\alpha$ implies that the exponent $\frac{(1-\alpha)(1-t)}{\alpha}$ is smaller and the error associated with the noise is smaller as well.
A larger $\alpha$, however, also makes the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ larger (recall that $\alpha>1$ and $\beta>1/2$ by Assumptions 4 and 5, respectively), which means that it is harder to learn the target.

Remark 10. If $f\sim\mathcal{GP}(0,k)$, then using the Karhunen-Loève expansion we have $f(x)=\sum_{p=1}^\infty\sqrt{\lambda_p}\,\omega_p\,\phi_p(x)$, where $(\omega_p)_{p=1}^\infty$ are i.i.d. standard Gaussian variables. We can bound $\omega_p$ almost surely as $|\omega_p|\le C\log p$, where $C=\sup_{p\ge1}|\omega_p|/\log p$ is a finite constant. Comparing with the expansion of $f(x)$ in (9), we find that $\mu_p=\sqrt{\lambda_p}\,\omega_p=O(p^{-\alpha/2}\log p)=O(p^{-\alpha/2+\varepsilon})$, where $\varepsilon>0$ is arbitrarily small. Choosing $\beta=\alpha/2-\varepsilon$ in (22), we have $\mathbb{E}\,G(D_n)=O\big(n^{\frac{1}{\alpha}-1+\frac{2\varepsilon}{\alpha}}\big)$. This rate matches that of an earlier result due to Sollich and Halees (2002), where it is shown that the asymptotic learning curve (as measured by the expectation of the excess mean squared error, $\mathbb{E}_f M(D_n)$) scales as $n^{\frac{1}{\alpha}-1}$ when the model is correctly specified, i.e., $f$ is a sample from the same Gaussian process $\mathcal{GP}(0,k)$, and the eigenvalues decay as a power law, $\lambda_i\sim i^{-\alpha}$ for large $i$.

For $\mu_0>0$, we note the following result:

Theorem 11 (Asymptotics of the Bayesian generalization error, $\mu_0>0$). Let Assumptions 4, 5, and 6 hold. Assume that $\mu_0>0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(n^t)$, where $1-\frac{\alpha}{1+2\tau}<t<1$. Then with probability at least $1-n^{-q}$ over sample inputs $(x_i)_{i=1}^n$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the expectation of the Bayesian generalization error (3) w.r.t. the noise has the asymptotic behavior

$$\mathbb{E}\,G(D_n)=\frac{1}{2\sigma^2}\mu_0^2+o(1).$$

In general, if $\mu_0>0$, then the generalization error remains constant as $n\to\infty$. This means that if the target function contains a component in the kernel of the operator $L_k$, then GP regression is not able to learn the target function. The proof of Theorem 11 is given in Appendix D.2.
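As a quick numerical illustration of these rates (and of the $T_{2,R}$ rate from Section 3.1), one can evaluate the $\Phi_R$-independent closed forms on a synthetic power-law spectrum. The following sketch assumes an idealized spectrum $\lambda_p=p^{-\alpha}$, $\mu_p=p^{-\beta}$ with $\alpha=3$, $\beta=1$, $\mu_0=0$, and $\sigma^2=1$ (i.e. $t=0$); all of these choices, and the truncation at two million modes, are hypothetical:

```python
import numpy as np

# Hypothetical power-law spectrum: lambda_p = p^{-alpha}, mu_p = p^{-beta},
# mu_0 = 0, sigma^2 = 1 (t = 0); alpha = 3, beta = 1 satisfy alpha > 1, beta > 1/2.
alpha, beta, sigma2 = 3.0, 1.0, 1.0
p = np.arange(1, 2_000_000, dtype=float)
lam = p ** -alpha
mu2 = p ** (-2 * beta)

def t2(n):
    # Phi-independent term of T_2: (n / 2 sigma^2) mu^T (I + n Lambda / sigma^2)^{-1} mu
    return n / (2 * sigma2) * np.sum(mu2 / (1 + n * lam / sigma2))

def gen_err(n):
    # Closed form inside (22), up to the (1 + o(1)) factor:
    # Tr (I + n Lambda/sigma^2)^{-1} Lambda - ||Lambda^{1/2}(...)^{-1}||_F^2 + ||(...)^{-1} mu||_2^2
    d = 1 + n * lam / sigma2
    return (np.sum(lam / d) - np.sum(lam / d ** 2) + np.sum(mu2 / d ** 2)) / (2 * sigma2)

ns = np.logspace(5, 8, 6)
s_t2, _ = np.polyfit(np.log(ns), np.log([t2(n) for n in ns]), 1)
s_g, _ = np.polyfit(np.log(ns), np.log([gen_err(n) for n in ns]), 1)

assert abs(s_t2 - 2 / 3) < 0.05  # Theta(n^{max{0, (1-2 beta)/alpha + 1}}) = Theta(n^{2/3})
assert abs(s_g + 1 / 3) < 0.05   # Theta(n^{max{(1-alpha)/alpha, (1-2 beta)/alpha}}) = Theta(n^{-1/3})
```

With these parameters the target term dominates the noise term, so the fitted log-log slope of the generalization error closely tracks the exponent $\frac{1-2\beta}{\alpha}=-\tfrac13$ predicted by Theorem 9.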
3.3 ASYMPTOTICS OF THE EXCESS MEAN SQUARED ERROR

In this section we derive the asymptotics of the excess mean squared error in Definition 2.

Theorem 12 (Asymptotics of the excess mean squared error). Let Assumptions 4, 5, and 6 hold. Assume $\sigma^2_{\mathrm{model}}=\Theta(n^t)$, where $1-\frac{\alpha}{1+2\tau}<t<1$. Then with probability at least $1-n^{-q}$ over sample inputs $(x_i)_{i=1}^n$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the excess mean squared error (6) has the asymptotic behavior

$$\mathbb{E}\,M(D_n)=(1+o(1))\Big[\frac{\sigma^2_{\mathrm{true}}}{\sigma^2_{\mathrm{model}}}\Big(\operatorname{Tr}\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\Lambda-\big\|\Lambda^{1/2}\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\big\|_F^2\Big)+\big\|\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\mu\big\|_2^2\Big]=\Theta\Big(\max\big\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\big\}\Big)$$

when $\mu_0=0$, and $\mathbb{E}\,M(D_n)=\mu_0^2+o(1)$ when $\mu_0>0$. The proof of Theorem 12 uses similar techniques as Theorem 9 and is given in Appendix D.3.

Remark 13 (Correspondence with kernel ridge regression). The kernel ridge regression (KRR) estimator arises as the solution of the optimization problem

$$\hat{f}=\operatorname*{argmin}_{f\in\mathcal{H}_k}\ \frac{1}{n}\sum_{i=1}^n\big(f(x_i)-y_i\big)^2+\lambda\|f\|_{\mathcal{H}_k}^2,\qquad(23)$$

where the hypothesis space $\mathcal{H}_k$ is chosen to be an RKHS and $\lambda>0$ is a regularization parameter. The solution of (23) is unique as a function and is given by $\hat{f}(x)=K_x^T(K_n+n\lambda I_n)^{-1}y$, which coincides with the posterior mean function $\bar{m}(x)$ of the GPR (1) if $\sigma^2_{\mathrm{model}}=n\lambda$ (Kanagawa et al., 2018, Proposition 3.6). Thus, the additive Gaussian noise in GPR plays the role of regularization in KRR. Leveraging this well-known equivalence between GPR and KRR, we observe that Theorem 12 also describes the generalization error of KRR as measured by the excess mean squared error.

Remark 14. Cui et al. (2021) derived the asymptotics of the expected excess mean squared error for different regularization strengths and different scales of noise. In particular, for KRR with Gaussian design, where $\Lambda_R^{1/2}(\phi_1(x),\dots,\phi_R(x))^T$ is assumed to follow a Gaussian distribution $\mathcal{N}(0,\Lambda_R)$, and regularization $\lambda=n^{t-1}$ with $1-\alpha\le t$, Cui et al.
(2021, Eq. 10) showed that

$$\mathbb{E}_{\{x_i\}_{i=1}^n}\mathbb{E}\,M(D_n)=O\Big(\max\big\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\big\}\Big).\qquad(24)$$

Let $\delta=n^{-q}$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$. By Markov's inequality, (24) implies that with probability at least $1-\delta$,

$$\mathbb{E}\,M(D_n)=O\Big(\tfrac{1}{\delta}\max\big\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\big\}\Big)=O\Big(n^q\max\big\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\big\}\Big).$$

Theorem 12 improves upon this by showing that with probability at least $1-\delta$ we have the optimal bound $\mathbb{E}\,M(D_n)=\Theta\big(\max\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\}\big)$. Furthermore, in contrast to the approach of Cui et al. (2021), we impose no requirement on the distribution of $\phi_p(x)$, so our result is more generally applicable. For example, Theorem 12 can be applied to KRR with the arc-cosine kernel, for which the Gaussian design assumption is not valid. In the noiseless setting ($\sigma_{\mathrm{true}}=0$) with constant regularization ($t=0$), Theorem 12 implies that the mean squared error behaves as $\Theta\big(n^{\frac{1-2\beta}{\alpha}}\big)$. This recovers a result in Bordelon et al. (2020, §2.2).

4 EXPERIMENTS

We illustrate our theory in a few toy experiments. We let the input $x$ be uniformly distributed on the unit circle, i.e., $\Omega=S^1$ and $\rho=U(S^1)$. The points on $S^1$ can be represented as $x=(\cos\theta,\sin\theta)$ with $\theta\in[-\pi,\pi)$. We use the first-order arc-cosine kernel function without bias,

$$k^{(1)}_{\text{w/o bias}}(x_1,x_2)=\frac{1}{\pi}\big(\sin\psi+(\pi-\psi)\cos\psi\big),$$

where $\psi=\arccos\langle x_1,x_2\rangle$ is the angle between $x_1$ and $x_2$. Cho and Saul (2009) showed that this kernel is the conjugate kernel of an infinitely wide shallow ReLU network with two inputs and no biases in the hidden layer. GP regression with prior $\mathcal{GP}(0,k)$ corresponds to Bayesian training of this network (Lee et al., 2018).
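As a concrete sketch of this setup, the following code builds the kernel matrix of $k^{(1)}_{\text{w/o bias}}$ on random points of $S^1$, checks that it is a valid (positive semi-definite) kernel with $k(x,x)=1$, and computes the GPR posterior mean and variance; by Remark 13 the posterior mean is also the KRR solution of (23) with $\lambda=\sigma^2/n$. The sample size, noise level, and target $\cos2\theta$ are illustrative choices, not taken from the experiments below:

```python
import numpy as np

rng = np.random.default_rng(0)

def k1_no_bias(t1, t2):
    """First-order arc-cosine kernel on S^1 (Cho and Saul, 2009):
    k(x1, x2) = (sin(psi) + (pi - psi) cos(psi)) / pi,
    where psi is the angle between x1 and x2."""
    psi = np.abs(t1[:, None] - t2[None, :])
    psi = np.minimum(psi, 2 * np.pi - psi)        # geodesic angle in [0, pi]
    return (np.sin(psi) + (np.pi - psi) * np.cos(psi)) / np.pi

# Toy data in the style of Section 4 (sizes and target are illustrative).
n, sigma = 100, 0.1
theta = rng.uniform(-np.pi, np.pi, size=n)
y = np.cos(2 * theta) + sigma * rng.standard_normal(n)

K = k1_no_bias(theta, theta)
assert np.allclose(np.diag(K), 1.0)               # psi = 0 gives k(x, x) = 1
assert np.linalg.eigvalsh(K).min() > -1e-8        # valid (PSD) kernel matrix

# GPR posterior mean and variance at test points; the mean coincides with the
# KRR estimator for lambda = sigma^2 / n (Kanagawa et al., 2018, Prop. 3.6).
t_test = np.linspace(-np.pi, np.pi, 200)
Ks = k1_no_bias(t_test, theta)
A = K + sigma ** 2 * np.eye(n)
mean = Ks @ np.linalg.solve(A, y)
var = 1.0 - np.sum(Ks * np.linalg.solve(A, Ks.T).T, axis=1)

assert np.all(var > -1e-8) and np.all(var < 1 + 1e-8)   # 0 <= posterior var <= k(x, x)
assert np.max(np.abs(mean - np.cos(2 * t_test))) < 0.5  # cos(2 theta) lies in the span
```

The last assertion reflects that $\cos2\theta$ is in the span of the kernel's eigenfunctions ($\mu_0=0$), so the posterior mean tracks the target well already at moderate $n$.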
The eigenvalues and eigenfunctions of the kernel are $\lambda_1=\frac{4}{\pi^2}$, $\lambda_2=\lambda_3=\frac{1}{4}$, $\lambda_{2p}=\lambda_{2p+1}=\frac{4}{\pi^2\big((2p-2)^2-1\big)^2}$ for $p\ge2$, and $\phi_1(\theta)=1$, $\phi_2(\theta)=\sqrt{2}\cos\theta$, $\phi_3(\theta)=\sqrt{2}\sin\theta$, $\phi_{2p}(\theta)=\sqrt{2}\cos(2p-2)\theta$, $\phi_{2p+1}(\theta)=\sqrt{2}\sin(2p-2)\theta$ for $p\ge2$. Hence Assumption 4 is satisfied with $\alpha=4$, and Assumption 6 is satisfied with $\|\phi_p\|_\infty\le\sqrt{2}$ for $p\ge1$ and $\tau=0$. We consider the target functions in Table 1, which satisfy Assumption 5 with the indicated $\beta$; $\mu_0$ indicates whether the function lies in the span of the eigenfunctions of the kernel. The training and test data are generated as follows. We independently sample training inputs $x_1,\dots,x_n$ and a test input $x_{n+1}$ from $U(S^1)$, and training outputs $y_i$, $i=1,\dots,n$, from $\mathcal{N}(f(x_i),\sigma^2)$, where we choose $\sigma=0.1$. The Bayesian predictive density $\mathcal{N}\big(\bar{m}(x_{n+1}),\bar{k}(x_{n+1},x_{n+1})\big)$ conditioned on the test point $x_{n+1}$ is obtained from (1) and (2). We compute the normalized SC by (7) and the Bayesian generalization error as the Kullback-Leibler divergence between $\mathcal{N}(f(x_{n+1}),\sigma^2)$ and $\mathcal{N}\big(\bar{m}(x_{n+1}),\bar{k}(x_{n+1},x_{n+1})\big)$. For each target we run GPR 20 times and report the mean and standard deviation of the normalized SC and the Bayesian generalization error in Figure 1; the results agree with the asymptotics predicted in Theorems 7 and 9. In Appendix A, we show further experiments confirming our theory for zero- and second-order arc-cosine kernels, with and without biases.

Table 1: Target functions for $k^{(1)}_{\text{w/o bias}}$, their values of $\beta$ and $\mu_0$, and the theoretical rates for the normalized SC and the Bayesian generalization error from our theorems.

Figure 1: Results for $k^{(1)}_{\text{w/o bias}}$ and the target functions in Table 1. The orange curves show the linear regression fit to the experimental values (in blue) of the log Bayesian generalization error as a function of $\log n$.

5 CONCLUSION

We described the learning curves of GPR for the case where the kernel and the target function follow a power law.
This setting is frequently encountered in kernel learning and relates to recent advances on neural networks. Our approach is based on a tight analysis of the concentration of the inner product of empirical eigenfunctions $\Phi^T\Phi$ around $nI$. This allowed us to obtain more general results under more realistic assumptions than previous works. In particular, we recovered some results on learning curves for GPR and KRR previously obtained under more restricted settings (vide Remarks 10 and 14). We showed that when $\beta\ge\alpha/2$, meaning that the target function has a compact representation in terms of the eigenfunctions of the kernel, the learning rate is as good as in the correctly specified case. In addition, our result allows us to interpret $\beta$ from a spectral bias perspective. When $\frac12<\beta\le\frac{\alpha}{2}$, the larger the value of $\beta$, the faster the decay of the generalization error. This implies that low-frequency functions are learned faster in terms of the number of training data points. By leveraging the equivalence between GPR and KRR, we obtained a result on the generalization error of KRR. In the infinite-width limit, training fully-connected deep NNs with gradient descent and an infinitesimally small learning rate under the least-squares loss is equivalent to solving KRR with respect to the NTK (Jacot et al., 2018; Lee et al., 2019; Domingos, 2020), which in several cases is known to have a power-law spectrum (Velikanov and Yarotsky, 2021). Hence our methods can be applied to study the generalization error of infinitely wide neural networks. In future work, it would be interesting to estimate the values of $\alpha$ and $\beta$ for the NTK and the NNGP kernel of deep fully-connected or convolutional NNs and real data distributions, and to test our theory in these cases. Similarly, it would be interesting to consider extensions to finite-width kernels.

REFERENCES

S. Amari and N. Murata. Statistical theory of learning curves under entropic loss criterion.
Neural Computation , 5 ( 1 ) :140–153 , 1993 . S. Amari , N. Fujita , and S. Shinomoto . Four types of learning curves . Neural Computation , 4 ( 4 ) : 605–618 , 1992 . S. Arora , S. S. Du , W. Hu , Z. Li , R. R. Salakhutdinov , and R. Wang . On exact computation with an infinitely wide neural net . In Advances in Neural Information Processing Systems , volume 32 , pages 8139–8148 , 2019 . Y. Bahri , E. Dyer , J. Kaplan , J. Lee , and U. Sharma . Explaining neural scaling laws . arXiv preprint arXiv:2102.06701 , 2021 . A. R. Barron . Information-theoretic characterization of Bayes performance and the choice of priors in parametric and nonparametric problems . In D. A. Bernardo J. , Berger J. and S. A. , editors , Bayesian statistics , volume 6 , pages 27–52 . Oxford University Press , 1998 . M. Belkin , S. Ma , and S. Mandal . To understand deep learning we need to understand kernel learning . In Proceedings of the 35th International Conference on Machine Learning ( ICML ) , pages 541–549 , 2018 . A. Bietti and J. Mairal . On the inductive bias of neural tangent kernels . In Advances in Neural Information Processing Systems , volume 32 , pages 12873–12884 , 2019 . A. Bietti , L. Venturi , and J. Bruna . On the sample complexity of learning with geometric stability . arXiv preprint arXiv:2106.07148 , 2021 . G. Blanchard and N. Mücke . Optimal rates for regularization of statistical inverse learning problems . Foundations of Computational Mathematics , 18 ( 4 ) :971–1013 , 2018 . B. Bordelon , A. Canatar , and C. Pehlevan . Spectrum dependent learning curves in kernel regression and wide neural networks . In Proceedings of the 37th International Conference on Machine Learning ( ICML ) , pages 1024–1034 , 2020 . O. Bousquet , S. Hanneke , S. Moran , R. van Handel , and A. Yehudayoff . A theory of universal learning . In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing ( STOC ) , pages 532–541 , 2021 . M. L. Braun . 
Accurate error bounds for the eigenvalues of the kernel matrix . The Journal of Machine Learning Research , 7:2303–2328 , 2006 . A. Canatar , B. Bordelon , and C. Pehlevan . Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks . Nature communications , 12 ( 1 ) :1–12 , 2021 . A. Caponnetto and E. De Vito . Optimal rates for the regularized least-squares algorithm . Foundations of Computational Mathematics , 7 ( 3 ) :331–368 , 2007 . N. Chatterji , A. Pacchiano , and P. Bartlett . Online learning with kernel losses . In Proceedings of the 36th International Conference on Machine Learning ( ICML ) , pages 971–980 , 2019 . Y. Cho and L. K. Saul . Kernel methods for deep learning . In Advances in Neural Information Processing Systems , volume 22 , pages 342–350 , 2009 . H. Cui , B. Loureiro , F. Krzakala , and L. Zdeborová . Generalization error rates in kernel regression : The crossover from the noiseless to noisy regime . arXiv preprint arXiv:2105.15004 , 2021 . A. Daniely , R. Frostig , and Y . Singer . Toward deeper understanding of neural networks : The power of initialization and a dual view on expressivity . In Advances In Neural Information Processing Systems , volume 29 , pages 2253–2261 , 2016 . A. G. de G. Matthews , J. Hron , M. Rowland , R. E. Turner , and Z. Ghahramani . Gaussian process behaviour in wide deep neural networks . In International Conference on Learning Representations , 2018 . P. Domingos . Every model learned by gradient descent is approximately a kernel machine . arXiv preprint arXiv:2012.00152 , 2020 . S. Fischer and I. Steinwart . Sobolev norm learning rates for regularized least-squares algorithms . Journal of Machine Learning Research , 21:1–38 , 2020 . A. Garriga-Alonso , C. E. Rasmussen , and L. Aitchison . Deep convolutional networks as shallow gaussian processes . In International Conference on Learning Representations , 2019 . D. Haussler and M. Opper . 
Mutual information , metric entropy and cumulative relative entropy risk . The Annals of Statistics , 25 ( 6 ) :2451–2492 , 1997 . D. Haussler , M. Kearns , H. S. Seung , and N. Tishby . Rigorous learning curve bounds from statistical mechanics . Machine Learning , 25 ( 2-3 ) :195–236 , 1996 . J. Hestness , S. Narang , N. Ardalani , G. Diamos , H. Jun , H. Kianinejad , M. Patwary , M. Ali , Y. Yang , and Y. Zhou . Deep learning scaling is predictable , empirically . arXiv preprint arXiv:1712.00409 , 2017 . A. Jacot , F. Gabriel , and C. Hongler . Neural tangent kernel : Convergence and generalization in neural networks . In Advances in Neural Information Processing Systems , volume 31 , pages 8571–8580 , 2018 . K.-S. Jun , A. Cutkosky , and F. Orabona . Kernel truncated randomized ridge regression : Optimal rates and low noise acceleration . Advances in Neural Information Processing Systems , 32:15358–15367 , 2019 . M. Kanagawa , P. Hennig , D. Sejdinovic , and B. K. Sriperumbudur . Gaussian processes and kernel methods : A review on connections and equivalences . arXiv preprint arXiv:1807.02582 , 2018 . L. Le Gratiet and J. Garnier . Asymptotic analysis of the learning curve for Gaussian process regression . Machine Learning , 98 ( 3 ) :407–433 , 2015 . J. Lee , J. Sohl-Dickstein , J. Pennington , R. Novak , S. Schoenholz , and Y. Bahri . Deep neural networks as gaussian processes . In International Conference on Learning Representations , 2018 . J. Lee , L. Xiao , S. Schoenholz , Y. Bahri , R. Novak , J. Sohl-Dickstein , and J. Pennington . Wide neural networks of any depth evolve as linear models under gradient descent . In Advances in Neural Information Processing Systems , volume 32 , pages 8572–8583 , 2019 . J. Lee , S. Schoenholz , J. Pennington , B. Adlam , L. Xiao , R. Novak , and J. Sohl-Dickstein . Finite versus infinite neural networks : an empirical study . In Advances in Neural Information Processing Systems , volume 33 , pages 15156–15172 , 2020 . M. 
Loog , T. Viering , and A. Mey . Minimizers of the empirical risk and risk monotonicity . In Advances in Neural Information Processing Systems , volume 32 , pages 7478–7487 , 2019 . D. Malzahn and M. Opper . Learning curves for Gaussian processes regression : A framework for good approximations . In Advances in Neural Information Processing Systems , volume 13 , pages 273–279 , 2001a . D. Malzahn and M. Opper . Learning curves for Gaussian processes models : Fluctuations and universality . In International Conference on Artificial Neural Networks , pages 271–276 , 2001b . R. M. Neal . Bayesian Learning for Neural Networks . Springer-Verlag , Berlin , Heidelberg , 1996 . ISBN 0387947248 . A. Nitanda and T. Suzuki . Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime . In International Conference on Learning Representations , 2021 . R. Novak , L. Xiao , Y. Bahri , J. Lee , G. Yang , D. A. Abolafia , J. Pennington , and J. Sohl-Dickstein . Bayesian deep convolutional networks with many channels are gaussian processes . In International Conference on Learning Representations , 2019 . M. Opper and D. Malzahn . A variational approach to learning curves . In Advances in Neural Information Processing Systems , volume 14 , pages 463–469 , 2002 . M. Opper and F. Vivarelli . General bounds on Bayes errors for regression with Gaussian processes . In Advances in Neural Information Processing Systems , volume 11 , pages 302–308 , 1999 . P. Orbanz and Y. W. Teh . Bayesian nonparametric models . In Encyclopedia of Machine Learning , pages 81–89 . Springer , 2010 . K. Ritter , G. W. Wasilkowski , and H. Woźniakowski . Multivariate integration and approximation for random fields satisfying Sacks-Ylvisaker conditions . The Annals of Applied Probability , pages 518–540 , 1995 . B. Ronen , D. Jacobs , Y. Kasten , and S. Kritchman . The convergence rate of neural networks for learned functions of different frequencies . 
Advances in Neural Information Processing Systems , 32:4761–4771 , 2019 . M. W. Seeger , S. M. Kakade , and D. P. Foster . Information consistency of nonparametric Gaussian process methods . IEEE Transactions on Information Theory , 54 ( 5 ) :2376–2382 , 2008 . P. Sollich . Learning curves for Gaussian processes . In Advances in Neural Information Processing Systems , volume 11 , pages 344–350 , 1999 . P. Sollich . Gaussian process regression with mismatched models . In Advances in Neural Information Processing Systems , volume 13 , pages 519–526 , 2001 . P. Sollich and A. Halees . Learning curves for Gaussian process regression : Approximations and bounds . Neural Computation , 14 ( 6 ) :1393–1428 , 2002 . S. Spigler , M. Geiger , and M. Wyart . Asymptotic learning curves of kernel methods : empirical data versus teacher–student paradigm . Journal of Statistical Mechanics : Theory and Experiment , 2020 ( 12 ) :124001 , 2020 . M. L. Stein . Interpolation of spatial data : Some theory for kriging . Springer Science & Business Media , 2012 . I. Steinwart , D. R. Hush , C. Scovel , et al . Optimal rates for regularized least squares regression . In Conference on Learning Theory , pages 79–93 , 2009 . J . A. Tropp . User-friendly tail bounds for sums of random matrices . Foundations of computational mathematics , 12 ( 4 ) :389–434 , 2012 . S. Vakili , K. Khezeli , and V. Picheny . On information gain and regret bounds in Gaussian process bandits . In International Conference on Artificial Intelligence and Statistics , pages 82–90 , 2021 . E. A. Valdivia . Relative concentration bounds for the spectrum of kernel matrices . arXiv preprint arXiv:1812.02108 , 2018 . A . Van Der Vaart and H. Van Zanten . Information rates of nonparametric Gaussian process methods . Journal of Machine Learning Research , 12 ( 6 ) , 2011 . M. Velikanov and D. Yarotsky . Universal scaling laws in the gradient descent training of neural networks . arXiv preprint arXiv:2105.00507 , 2021 . T. 
Viering and M. Loog . The shape of learning curves : A review . arXiv preprint arXiv:2103.10948 , 2021 . T. Viering , A. Mey , and M. Loog . Open problem : Monotonicity of learning . In Conference on Learning Theory , pages 3198–3201 , 2019 . S. Watanabe . Algebraic Geometry and Statistical Learning Theory . Cambridge University Press , 2009 . H. Widom . Asymptotic behavior of the eigenvalues of certain integral equations . Transactions of the American Mathematical Society , 109 ( 2 ) :278–295 , 1963 . C. K. Williams . Computing with infinite networks . In Advances in Neural Information Processing Systems , volume 9 , pages 295–301 , 1997 . C. K. Williams and C. E. Rasmussen . Gaussian processes for machine learning . MIT press , 2006 . C. K. Williams and F. Vivarelli . Upper and lower bounds on the learning curve for Gaussian processes . Machine Learning , 40 ( 1 ) :77–102 , 2000 . G. Yang . Wide feedforward or recurrent neural networks of any architecture are gaussian processes . In Advances in Neural Information Processing Systems , volume 32 , pages 9951–9960 , 2019 . G. Yang and H. Salman . A fine-grained spectral perspective on neural networks . arXiv preprint arXiv:1907.10599 , 2019 . APPENDIX A EXPERIMENTS FOR ARC-COSINE KERNELS OF DIFFERENT ORDERS Consider the first order arc-cosine kernel function with biases , k ( 1 ) w/ bias ( x1 , x2 ) = 1 π ( sinψ̄+ ( π−ψ̄ ) cosψ̄ ) , where ψ̄=arccos ( 1 2 ( 〈x1 , x2〉+1 ) ) . ( 25 ) Ronen et al . ( 2019 ) showed that this kernel is the conjugate kernel of an infinitely wide shallow ReLU network with two inputs and one hidden layer with biases , whose eigenvalues satisfy Assumption 4 with α = 4 . The eigenfunctions of this kernel are the same as that of the first-order arc-cosine kernel without biases , k ( 1 ) w/o bias in Section 4 . We consider the target functions in Table 3 , which satisfy Assumption 5 with the indicated β , and µ0 indicates whether the function lies in the span of eigenfunctions of the kernel . 
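Like the without-bias kernel in Section 4, the with-bias kernel (25) can be sanity-checked numerically; the point set size below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def k1_with_bias(t1, t2):
    """First-order arc-cosine kernel with bias, Eq. (25):
    psi_bar = arccos((<x1, x2> + 1) / 2),
    k(x1, x2) = (sin(psi_bar) + (pi - psi_bar) cos(psi_bar)) / pi."""
    inner = np.cos(t1[:, None] - t2[None, :])              # <x1, x2> for unit vectors on S^1
    psi_bar = np.arccos(np.clip((inner + 1) / 2, -1.0, 1.0))
    return (np.sin(psi_bar) + (np.pi - psi_bar) * np.cos(psi_bar)) / np.pi

theta = rng.uniform(-np.pi, np.pi, size=80)
K = k1_with_bias(theta, theta)

assert np.allclose(K, K.T)                     # symmetric
assert np.allclose(np.diag(K), 1.0)            # x1 = x2 gives psi_bar = 0, so k(x, x) = 1
assert np.linalg.eigvalsh(K).min() > -1e-8     # positive semi-definite
```

Since this kernel is the conjugate kernel of a ReLU network with biases, its kernel matrix is positive semi-definite by construction; the assertions simply confirm this numerically.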
For each target we run GPR 20 times and report the mean and standard deviation of the normalized SC and the Bayesian generalization error in Figure 3; the results agree with the asymptotics predicted in Theorems 7 and 9. Table 2 summarizes all the kernel functions considered in our experiments, with pointers to the corresponding tables and figures. Summarizing the observations from these experiments, we see that the smoothness of the activation function (which is controlled by the order of the arc-cosine kernel) influences the decay rate $\alpha$ of the eigenvalues: in general, the smoother the activation function, the larger the decay rate $\alpha$. Theorem 9 then implies that smooth activation functions are better at suppressing noise but slower at learning the target. We also observe that networks with biases are more capable of learning functions than networks without biases. For example, the function $\cos(2\theta)$ cannot be learned by the zero-order arc-cosine kernel without biases (see Table 6 and Figure 6), but it can be learned by the zero-order arc-cosine kernel with biases (see Table 7 and Figure 7).

Figure 3: Results for $k^{(1)}_{\text{w/ bias}}$ and the target functions in Table 3. The orange curves show the linear regression fit to the experimental values (in blue) of the log Bayesian generalization error as a function of $\log n$.

Figures 4-7: Analogous results for $k^{(2)}_{\text{w/o bias}}$ (targets in Table 4), $k^{(2)}_{\text{w/ bias}}$ (Table 5), $k^{(0)}_{\text{w/o bias}}$ (Table 6), and $k^{(0)}_{\text{w/ bias}}$ (Table 7).

B PROOFS RELATED TO THE MARGINAL LIKELIHOOD

Proof of Proposition 3. Let $\bar{y}=(\bar{y}_1,\dots,\bar{y}_n)^T$ be the outputs of the GP regression model on the training inputs $x$. Under the GP prior, the distribution of $\bar{y}$ is $\mathcal{N}(0,K_n)$.
Then the evidence of the model is

$$Z_n=\int_{\mathbb{R}^n}\Big(\prod_{i=1}^n\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(\bar{y}_i-y_i)^2}{2\sigma^2}}\Big)\frac{1}{(2\pi)^{n/2}\det(K_n)^{1/2}}e^{-\frac12\bar{y}^TK_n^{-1}\bar{y}}\,d\bar{y}=\frac{1}{(2\pi)^n\sigma^n\det(K_n)^{1/2}}\int_{\mathbb{R}^n}e^{-\frac12\bar{y}^T(K_n^{-1}+\frac{1}{\sigma^2}I)\bar{y}+\frac{1}{\sigma^2}\bar{y}^Ty-\frac{1}{2\sigma^2}y^Ty}\,d\bar{y}.\qquad(26)$$

Letting $\tilde{K}_n^{-1}=K_n^{-1}+\frac{1}{\sigma^2}I$ and $\mu=\frac{1}{\sigma^2}\tilde{K}_ny$, we have

$$Z_n=\frac{1}{(2\pi)^n\sigma^n\det(K_n)^{1/2}}\int_{\mathbb{R}^n}e^{-\frac12(\bar{y}-\mu)^T\tilde{K}_n^{-1}(\bar{y}-\mu)-\frac{1}{2\sigma^2}y^Ty+\frac12\mu^T\tilde{K}_n^{-1}\mu}\,d\bar{y}=\frac{\det(\tilde{K}_n)^{1/2}}{(2\pi)^{n/2}\sigma^n\det(K_n)^{1/2}}\,e^{-\frac{1}{2\sigma^2}y^Ty+\frac12\mu^T\tilde{K}_n^{-1}\mu}.\qquad(27)$$

The normalized evidence is

$$Z_n^0=\frac{Z_n}{(2\pi)^{-n/2}\sigma^{-n}e^{-\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x))}}=\frac{\det(\tilde{K}_n)^{1/2}}{\det(K_n)^{1/2}}\,e^{-\frac{1}{2\sigma^2}y^Ty+\frac12\mu^T\tilde{K}_n^{-1}\mu+\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x))}.\qquad(28)$$

So the normalized stochastic complexity is, writing $y=f(x)+\epsilon$ with noise vector $\epsilon$,

$$F^0(D_n)=-\log Z_n^0=-\frac12\log\det(\tilde{K}_n)+\frac12\log\det(K_n)+\frac{1}{2\sigma^2}y^Ty-\frac12\mu^T\tilde{K}_n^{-1}\mu-\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x))$$
$$=-\frac12\log\det\Big(K_n^{-1}+\frac{1}{\sigma^2}I\Big)^{-1}+\frac12\log\det(K_n)+\frac{1}{2\sigma^2}y^Ty-\frac{1}{2\sigma^4}y^T\Big(K_n^{-1}+\frac{1}{\sigma^2}I\Big)^{-1}y-\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x))$$
$$=\frac12\log\det\Big(I+\frac{K_n}{\sigma^2}\Big)+\frac{1}{2\sigma^2}y^T\Big(I+\frac{K_n}{\sigma^2}\Big)^{-1}y-\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x))$$
$$=\frac12\log\det\Big(I+\frac{K_n}{\sigma^2}\Big)+\frac{1}{2\sigma^2}f(x)^T\Big(I+\frac{K_n}{\sigma^2}\Big)^{-1}f(x)+\frac{1}{2\sigma^2}\epsilon^T\Big(I+\frac{K_n}{\sigma^2}\Big)^{-1}\epsilon-\frac{1}{2\sigma^2}\epsilon^T\epsilon+\frac{1}{\sigma^2}\epsilon^T\Big(I+\frac{K_n}{\sigma^2}\Big)^{-1}f(x).\qquad(29)$$

After taking the expectation over the noise, we get

$$\mathbb{E}\,F^0(D_n)=\frac12\log\det\Big(I+\frac{K_n}{\sigma^2}\Big)+\frac{1}{2\sigma^2}f(x)^T\Big(I+\frac{K_n}{\sigma^2}\Big)^{-1}f(x)-\frac12\operatorname{Tr}\Big(I-\Big(I+\frac{K_n}{\sigma^2}\Big)^{-1}\Big).\qquad(30)$$

This concludes the proof.

C HELPER LEMMAS

Lemma 15. Assume that $m\to\infty$ as $n\to\infty$, and let $a_1,a_2,s_1,s_2>0$ be constants. If $s_1>1$ and $s_2s_3>s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2\,m\,i^{-s_2})^{s_3}}=\Theta\big(m^{\frac{1-s_1}{s_2}}\big).\qquad(31)$$

If $s_1>1$ and $s_2s_3=s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2\,m\,i^{-s_2})^{s_3}}=\Theta\big(m^{-s_3}\log m\big).\qquad(32)$$

If $s_1>1$ and $s_2s_3<s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2\,m\,i^{-s_2})^{s_3}}=\Theta\big(m^{-s_3}\big).$$
(33)

Overall, if $s_1>1$ and $m\to\infty$,

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2\,m\,i^{-s_2})^{s_3}}=\begin{cases}\Theta\big(m^{\max\{-s_3,\frac{1-s_1}{s_2}\}}\big),&s_2s_3\neq s_1-1,\\ \Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big),&s_2s_3=s_1-1.\end{cases}\qquad(34)$$

Proof of Lemma 15. First, when $s_1>1$ and $s_2s_3>s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}\le\frac{a_1}{(1+a_2m)^{s_3}}+\int_1^{\infty}\frac{a_1x^{-s_1}}{(1+a_2mx^{-s_2})^{s_3}}\,dx=\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{\infty}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du=\Theta\big(m^{\frac{1-s_1}{s_2}}\big),$$

where the second step substitutes $u=x/m^{1/s_2}$. On the other hand, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}\ge\int_1^{R+1}\frac{a_1x^{-s_1}}{(1+a_2mx^{-s_2})^{s_3}}\,dx=m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{(R+1)/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du=\Theta\big(m^{\frac{1-s_1}{s_2}}\big).$$

Second, when $s_1>1$ and $s_2s_3=s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{\infty}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\,O\big(\log m^{1/s_2}\big)=\Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big),$$

and, on the other hand,

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}\ge m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{(R+1)/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du=\Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big).$$

Third, when $s_1>1$ and $s_2s_3<s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{\infty}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\,\Theta\big(m^{-\frac{1}{s_2}(1-s_1+s_2s_3)}\big)=\Theta\big(m^{-s_3}\big),$$

and a matching lower bound of order $\Theta(m^{-s_3})$ follows in the same way from the integral over $[2/m^{1/s_2},(R+1)/m^{1/s_2}]$. Overall, if $s_1>1$,

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=\begin{cases}\Theta\big(m^{\max\{-s_3,\frac{1-s_1}{s_2}\}}\big),&s_2s_3\neq s_1-1,\\ \Theta\big(m^{-s_3}\log m\big),&s_2s_3=s_1-1.\end{cases}\qquad(35)$$

Lemma 16. Assume that $R=m^{\frac{1}{s_2}+\kappa}$ for some $\kappa>0$.
Given constants $a_1,a_2,s_1,s_2>0$, if $s_1\le1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2\,m\,i^{-s_2})^{s_3}}=\tilde{O}\big(\max\{m^{-s_3},R^{1-s_1}\}\big).\qquad(36)$$

Proof of Lemma 16. First, when $s_1\le1$ and $s_2s_3>s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}\le\frac{a_1}{(1+a_2m)^{s_3}}+\int_1^{R}\frac{a_1x^{-s_1}}{(1+a_2mx^{-s_2})^{s_3}}\,dx=\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{R/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du=\frac{a_1}{(1+a_2m)^{s_3}}+\tilde{O}\Big(m^{\frac{1-s_1}{s_2}}\big(\tfrac{R}{m^{1/s_2}}\big)^{1-s_1}\Big)=\tilde{O}\big(\max\{m^{-s_3},R^{1-s_1}\}\big).$$

Second, when $s_1\le1$ and $s_2s_3\le s_1-1$, we have

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{R/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\,\tilde{O}\Big(m^{-\frac{1}{s_2}(1-s_1+s_2s_3)}+\big(\tfrac{R}{m^{1/s_2}}\big)^{1-s_1}\Big)=\tilde{O}\big(\max\{m^{-s_3},R^{1-s_1}\}\big).$$

Overall, if $s_1\le1$,

$$\sum_{i=1}^R\frac{a_1\,i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=\tilde{O}\big(\max\{m^{-s_3},R^{1-s_1}\}\big).\qquad(37)$$

Lemma 17. Assume that $f\in L^2(\Omega,\rho)$. Consider the random vector $f(x)=(f(x_1),\dots,f(x_n))^T$, where $x_1,\dots,x_n$ are drawn i.i.d. from $\rho$. Then with probability at least $1-\delta_1$, we have

$$\|f(x)\|_2^2=\sum_{i=1}^n f^2(x_i)=\tilde{O}\Big(\big(\tfrac{1}{\delta_1}+1\big)\,n\,\|f\|_2^2\Big),$$

where $\|f\|_2^2=\int_{x\in\Omega}f^2(x)\,d\rho(x)$.

Proof of Lemma 17. Given a positive number $C\ge\|f\|_2^2$, applying Markov's inequality we have $P\big(f^2(X)>C\big)\le\frac{1}{C}\|f\|_2^2$. Let $A$ be the event that $f^2(x_i)\le C$ for all sample inputs $(x_i)_{i=1}^n$. Then

$$P(A)\ge1-n\,P\big(f^2(X)>C\big)\ge1-\frac{n}{C}\|f\|_2^2.\qquad(38)$$

Define $\bar{f}^2(x)=\min\{f^2(x),C\}$. Then $\mathbb{E}\bar{f}^2(X)\le\mathbb{E}f^2(X)=\|f\|_2^2$, so $|\bar{f}^2(X)-\mathbb{E}\bar{f}^2(X)|\le\max\{C,\|f\|_2^2\}=C$. Since $0\le\bar{f}^2(x)\le C$, we have

$$\mathbb{E}\big(\bar{f}^4(X)\big)\le C\,\mathbb{E}\big(\bar{f}^2(X)\big)\le C\|f\|_2^2,\qquad(39)$$

and therefore

$$\mathbb{E}\big|\bar{f}^2(X)-\mathbb{E}\bar{f}^2(X)\big|^2\le\mathbb{E}\big(\bar{f}^4(X)\big)\le C\|f\|_2^2.\qquad(40)$$

Applying Bernstein's inequality, we have

$$P\Big(\sum_{i=1}^n\bar{f}^2(x_i)>t+n\,\mathbb{E}\bar{f}^2(X)\Big)\le\exp\Big(-\frac{t^2}{2\big(n\,\mathbb{E}|\bar{f}^2(X)-\mathbb{E}\bar{f}^2(X)|^2+\frac{Ct}{3}\big)}\Big)\le\exp\Big(-\frac{t^2}{2\big(nC\|f\|_2^2+\frac{Ct}{3}\big)}\Big)\le\exp\Big(-\frac{t^2}{4\max\{nC\|f\|_2^2,\frac{Ct}{3}\}}\Big).$$
Hence, with probability at least $1-\delta_1/2$ we have

$$\sum_{i=1}^n\bar{f}^2(x_i)\le\max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\,n\|f\|_2^2},\ \tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\}+n\,\mathbb{E}\bar{f}^2(X)\le\max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\,n\|f\|_2^2},\ \tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\}+n\|f\|_2^2.\qquad(41)$$

When the event $A$ happens, $f^2(x_i)=\bar{f}^2(x_i)$ for all sample inputs. By (38) and (41), with probability at least $1-\frac{n}{C}\|f\|_2^2-\delta_1/2$, we have

$$\sum_{i=1}^n f^2(x_i)=\sum_{i=1}^n\bar{f}^2(x_i)\le\max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\,n\|f\|_2^2},\ \tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\}+n\|f\|_2^2.$$

Choosing $C=\frac{2}{\delta_1}n\|f\|_2^2$, with probability at least $1-\delta_1$ we have

$$\sum_{i=1}^n f^2(x_i)\le\max\Big\{\sqrt{\tfrac{8}{\delta_1}\log\tfrac{2}{\delta_1}\,n^2\|f\|_2^4},\ \tfrac{8}{3\delta_1}\,n\|f\|_2^2\log\tfrac{2}{\delta_1}\Big\}+n\|f\|_2^2=\tilde{O}\Big(\big(\tfrac{1}{\delta_1}+1\big)\,n\|f\|_2^2\Big).$$

Lemma 18. Assume that $f\in L^2(\Omega,\rho)$ and $\|f\|_\infty=\sup_{x\in\Omega}|f(x)|\le C$. Consider the random vector $f(x)=(f(x_1),\dots,f(x_n))^T$, where $x_1,\dots,x_n$ are drawn i.i.d. from $\rho$. Then with probability at least $1-\delta_1$, we have

$$\|f(x)\|_2^2=\tilde{O}\big(\sqrt{C^2n\|f\|_2^2}+C^2\big)+n\|f\|_2^2,$$

where $\|f\|_2^2=\int_{x\in\Omega}f^2(x)\,d\rho(x)$.

Proof of Lemma 18. We have $|f^2(X)-\mathbb{E}f^2(X)|\le\max\{C^2,\|f\|_2^2\}=C^2$. Since $0\le f^2(x)\le C^2$, we have

$$\mathbb{E}\big(f^4(X)\big)\le C^2\,\mathbb{E}\big(f^2(X)\big)\le C^2\|f\|_2^2,\qquad(42)$$

and therefore

$$\mathbb{E}\big|f^2(X)-\mathbb{E}f^2(X)\big|^2\le\mathbb{E}\big(f^4(X)\big)\le C^2\|f\|_2^2.\qquad(43)$$

Applying Bernstein's inequality, we have

$$P\Big(\sum_{i=1}^n f^2(x_i)>t+n\,\mathbb{E}f^2(X)\Big)\le\exp\Big(-\frac{t^2}{2\big(nC^2\|f\|_2^2+\frac{C^2t}{3}\big)}\Big)\le\exp\Big(-\frac{t^2}{4\max\{nC^2\|f\|_2^2,\frac{C^2t}{3}\}}\Big).$$

Hence, with probability at least $1-\delta_1$ we have

$$\sum_{i=1}^n f^2(x_i)\le\max\Big\{\sqrt{4C^2\log\tfrac{1}{\delta_1}\,n\|f\|_2^2},\ \tfrac{4C^2}{3}\log\tfrac{1}{\delta_1}\Big\}+n\,\mathbb{E}f^2(X)\le\tilde{O}\big(\sqrt{C^2n\|f\|_2^2}+C^2\big)+n\|f\|_2^2.\qquad(44)$$

For the proofs in the remainder of this section, the definitions of the relevant quantities are given in Section 3.

Corollary 19. With probability at least $1-\delta_1$, we have $\|f_{>R}(x)\|_2^2=\tilde{O}\big((\tfrac{1}{\delta_1}+1)\,nR^{1-2\beta}\big)$.

Proof of Corollary 19. The $L^2$ norm of $f_{>R}$ satisfies $\|f_{>R}\|_2^2=\sum_{p=R+1}^\infty\mu_p^2\le\frac{C_\mu}{2\beta-1}R^{1-2\beta}$. Applying Lemma 17 gives the result.

Corollary 20.
For any $\nu\in\mathbb{R}^R$, with probability at least $1-\delta_1$ we have $\|\Phi_R\nu\|_2^2=\tilde{O}\big((\tfrac{1}{\delta_1}+1)\,n\|\nu\|_2^2\big)$.

Proof of Corollary 20. Let $g(x)=\sum_{p=1}^R\nu_p\phi_p(x)$. Then $\Phi_R\nu=g(x)$, and the $L^2$ norm of $g$ is $\|g\|_2^2=\sum_{p=1}^R\nu_p^2=\|\nu\|_2^2$. Applying Lemma 17 gives the result.

Next we consider the quantity $\Phi_R^T\Phi_R-nI$. The key tool that we use is the matrix Bernstein inequality, which describes the upper tail of a sum of independent zero-mean random matrices.

Lemma 21. Let $D=\operatorname{diag}\{d_1,\dots,d_R\}$ with $d_1,\dots,d_R>0$, and let $d_{\max}=\max\{d_1,\dots,d_R\}$. Let $M=\max\big\{\sum_{p=1}^R d_p^2\|\phi_p\|_\infty^2,\ d_{\max}^2\big\}$. Then with probability at least $1-\delta$, we have

$$\big\|D\big(\Phi_R^T\Phi_R-nI\big)D\big\|_2\le\max\Big\{\sqrt{n\,d_{\max}^2\,M\log\tfrac{R}{\delta}},\ M\log\tfrac{R}{\delta}\Big\}.\qquad(45)$$

Proof of Lemma 21. Let $Y_j=(\phi_1(x_j),\dots,\phi_R(x_j))^T$ and $Z_j=DY_j$. It is easy to verify that $\mathbb{E}(Z_jZ_j^T)=D^2$, so the left-hand side of (45) equals $\big\|\sum_{j=1}^n\big[Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)\big]\big\|_2$. We note that

$$\big\|Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)\big\|_2\le\max\big\{\|Z_jZ_j^T\|_2,\ \|\mathbb{E}(Z_jZ_j^T)\|_2\big\}\le\max\big\{\|Z_j\|_2^2,\ d_{\max}^2\big\}.$$

Since

$$\|Z_j\|_2^2=\sum_{p=1}^R d_p^2\,\phi_p^2(x_j)\le\sum_{p=1}^R d_p^2\|\phi_p\|_\infty^2,\qquad(46)$$

we have $\|Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)\|_2\le\max\big\{\sum_{p=1}^R d_p^2\|\phi_p\|_\infty^2,\ d_{\max}^2\big\}=M$. On the other hand, $\mathbb{E}\big[(Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T))^2\big]=\mathbb{E}\big[\|Z_j\|_2^2\,Z_jZ_j^T\big]-\big(\mathbb{E}(Z_jZ_j^T)\big)^2$. Since, by (46),

$$\mathbb{E}\big[\|Z_j\|_2^2\,Z_jZ_j^T\big]\preceq\Big(\sum_{p=1}^R d_p^2\|\phi_p\|_\infty^2\Big)\,\mathbb{E}\big[Z_jZ_j^T\big],$$

we have

$$\Big\|\mathbb{E}\big[(Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T))^2\big]\Big\|_2\le\max\Big\{\sum_{p=1}^R d_p^2\|\phi_p\|_\infty^2\,d_{\max}^2,\ d_{\max}^4\Big\}\le d_{\max}^2\,M.$$

Using the matrix Bernstein inequality (Tropp, 2012, Theorem 6.1), we have

$$P\Big(\Big\|\sum_{j=1}^n\big[Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)\big]\Big\|_2>t\Big)\le R\exp\Big(\frac{-t^2}{2\big(n\,d_{\max}^2M+\frac{Mt}{3}\big)}\Big)=R\exp\Big(\frac{-t^2}{O\big(\max\{n\,d_{\max}^2M,\ Mt\}\big)}\Big).$$
Then with probability of at least 1−δ , we have ‖ n∑ j=1 [ ZjZ T j −E ( ZjZTj ) ] ‖2 ≤max { √ nd2maxmax { ∑R p=0d 2 p‖φp‖2∞ , d2max } logRδ , max { ∑R p=0d 2 p‖φp‖2∞ , d2max } logRδ } . Corollary 22 . Suppose that the eigenvalues ( λp ) p≥1 satisfy Assumption 4 , and the eigenfunctions satisfy Assumption 6 . Assume σ2 = Θ ( nt ) where 1− α1+2τ < t < 1 Let γ be a positive number such that 1+α+2τ− ( 1+2τ+2α ) t2α ( 1−t ) < γ≤1 . Then with probability of at least 1−δ , we have ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤O ( n 1+α+2τ− ( 1+2τ+2α ) t 2α −γ ( 1−t ) √ logRδ ) . ( 47 ) Proof of Corollary 22 . Use the same notation as in Lemma 21 . Let D = ( I + nσ2 ΛR ) −γ/2Λ γ/2 R . Then d2max ≤ σ 2γ nγ and ∑R p=0 d 2 p‖φp‖2∞ ≤ ∑R p=0 C 2 φ λγpp 2τ ( 1+ n σ2 λp ) γ = O ( ( nσ2 ) 1−γα+2τ α ) , where the first inequality follows from Assumptions 4 and 6 and the last equality from Lemma 15 . Then M=max { ∑R p=0d 2 p‖φp‖2∞ , d2max } =O ( ( nσ2 ) 1−γα+2τ α ) . Applying Lemma 21 , we have ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤ 1σ2 max { √ nσ 2γ nγ O ( ( n σ2 ) 1−γα+2τ α ) logRδ , O ( ( n σ2 ) 1−γα+2τ α ) logRδ } =O ( 1σ2 ( n σ2 ) 1−2γα+2τ 2α n 1 2 ) =O ( √ logRδ n ( 1−2γα+2τ ) ( 1−t ) 2α + 1 2−t ) =O ( √ logRδ n 1+α+2τ 2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) . ( 48 ) Corollary 23 . Suppose that the eigenvalues ( λp ) p≥1 satisfy Assumption 4 , and the eigenfuctions satisfy Assumption 6 . Let Λ̃1 , R = diag { 1 , λ1 , ... , λR } . Assume σ2 = Θ ( nt ) where t < 1 Let γ be a positive number such that 1+2τα < γ≤1 . Then with probability of at least 1−δ , we have ‖ ( I+ nσ2 ΛR ) −γ/2Λ̃ γ/2 1 , R ( Φ T RΦR−nI ) Λ̃ γ/2 1 , R ( I+ n σ2 ΛR ) −γ/2‖2≤O ( √ logRδ n 1 2 ) . ( 49 ) Proof of Corollary 23 . Use the same notation as in Lemma 21 . LetD= ( I+ nσ2 ΛR ) −γ/2Λ̃ γ/2 1 , R . 
Then d2max≤1 and ∑R p=0d 2 p‖φp‖2∞≤C2φ+ ∑R p=1C 2 φ λγpp 2τ ( 1+ n σ2 λp ) γ =C2φ+O ( n ( 1−γα+2τ ) ( 1−t ) α ) =O ( 1 ) where the first inequality follows from Assumptions 4 and 6 and the second equality from Lemma 15 . Then M=max { ∑R p=0d 2 p‖φp‖2∞ , d2max } =O ( 1 ) . Applying Lemma 21 , we have ‖ ( I+ nσ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤max { √ logRδ nO ( 1 ) , log R δ O ( 1 ) } =O ( √ logRδ n 1 2 ) . ( 50 ) Corollary 24 . Suppose that the eigenvalues ( λp ) p≥1 satisfy Assumption 4 , and the eigenfunctions satisfy Assumption 6 . Let ΦR+1 : S = ( φR+1 ( x ) , ... , φS ( x ) ) , and ΛR+1 : S = ( λR+1 , ... , λS ) . Then with probability of at least 1−δ , we have ‖Λ1/2R+1 : S ( Φ T R+1 : SΦR+1 : S−nI ) Λ 1/2 R+1 : S‖2≤O ( logS−Rδ max { n 1 2R 1−2α+2τ 2 , R1−α+2τ } ) . ( 51 ) Proof of Corollary 24 . Use the same notation as in Lemma 21 . Let D = Λ1/2R+1 : S . Then d2max≤CλR−α=O ( R−α ) and ∑S p=R+1C 2 φd 2 pp 2τ ≤ ∑S p=R+1C 2 φCλp −αp2τ =O ( R1−α+2τ ) , where the first inequality follows from Assumptions 4 and 6 . ThenM=max { ∑S p=R+1C 2 φd 2 pp 2τ , d2max } = O ( R1−α+2τ ) . Applying Lemma 21 , we have ‖ ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 ≤max { √ logS−Rδ nO ( R −α ) O ( R1−α+2τ ) , logS−Rδ O ( R 1−α+2τ ) ) } =O ( logS−Rδ max { n 1 2R 1−2α+2τ 2 , R1−α+2τ } ) . ( 52 ) Lemma 25 . Under the assumptions of Corollary 24 , with probability of at least 1−δ , we have ‖Φ > RΛ > RΦT > R‖2 =Õ ( max { nR−α , n 1 2R 1−2α+2τ 2 , R1−α+2τ } ) . Proof of Lemma 25 . For S∈N , we have ‖Φ > SΛ > SΦT > S‖2≤ ∞∑ p=S+1 ‖Λpφp ( x ) φp ( x ) T ‖2 = ∞∑ p=S+1 λp‖φp ( x ) ‖22 ≤ ∞∑ p=S+1 λpnC 2 φ =O ( nS1−α ) . Let S=R α α−1 . Then we get ‖Φ > SΛ > SΦT > S‖2 =O ( nR−α ) . Let ΦR+1 : S= ( φR+1 ( x ) , ... , φS ( x ) ) , ΛR+1 : S= ( λR+1 , ... , λS ) . 
We then have ‖Φ > RΛ > RΦT > R‖2≤‖Φ > SΛ > SΦT > S‖2+‖ΦR+1 : SΛR+1 : SΦTR+1 : S‖2 ≤O ( nR−α ) +‖Λ1/2R+1 : SΦ T R+1 : SΦR+1 : SΛ 1/2 R+1 : S‖2 ≤O ( nR−α ) +n‖ΛR+1 : S‖2+‖Λ1/2R+1 : S ( Φ T R+1 : SΦR+1 : S−nI ) Λ 1/2 R+1 : S‖2 ≤O ( nR−α ) +O ( nR−α ) +O ( logR α α−1−R δ max { n 12R 1−2α+2τ 2 , R1−α+2τ } ) =Õ ( max { nR−α , n 12R 1−2α+2τ 2 , R1−α+2τ } ) , where in the fourth inequality we use Corollary 24 . Corollary 26 . Assume that σ2 = Θ ( 1 ) . If R=n 1α+κ where 0 < κ < α−1−2τα ( 1+2τ ) , then with probability of at least 1−δ , we have ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ‖2≤‖ Φ > RΛ > RΦ T > R σ2 ‖2 =Õ ( n −κα ) =o ( 1 ) . Proof of Corollary 26 . By Lemma 25 and the assumptionR=n 1 α+κ , we have ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ‖2≤‖ Φ > RΛ > RΦ T > R σ2 ‖2 ≤Õ ( max { nR−α , n 12R 1−2α+2τ 2 , R1−α+2τ } ) =Õ ( n−κα ) . Lemma 27 . Assume that ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 < 1 where 1+2τ α < γ≤1 . We then have ( I+ 1σ2 ΛRΦ T RΦR ) −1 = ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 . Proof of Lemma 27 . First note that ‖ 1σ2 ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2‖2 < ‖ 1σ2 ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2‖2 < 1 . Let Λ̃ , R = diag { , λ1 , ... , λR } . Since ΛR = diag { 0 , λ1 , ... , λR } , we have that when is sufficiently small , ‖ 1σ2 ( I + n σ2 Λ̃ , R ) −1/2Λ̃ 1/2 , R ( Φ T RΦR − nI ) Λ̃ 1/2 , R ( I + n σ2 Λ̃ , R ) −1/2‖2 < 1 . 
Since all diagonal entries of Λ̃ , R are positive , we have ( I+ 1σ2 Λ̃ , RΦ T RΦR ) −1 = ( I+ nσ2 Λ̃ , R+ 1 σ2 Λ̃ , R ( Φ T RΦR−nI ) ) −1 =Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 [ I+ 1σ2 ( I+ n σ2 Λ̃ , R ) −1/2Λ̃ 1/2 , R ( Φ T RΦR−nI ) Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 ] −1 ( I+ nσ2 Λ̃ , R ) −1/2Λ̃ −1/2 , R = ( I+ nσ2 Λ̃ , R ) −1 + ∞∑ j=1 [ ( −1 ) jΛ̃1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1/2Λ̃ 1/2 , R ( Φ T RΦR−nI ) Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2 ) j ( I+ nσ2 Λ̃ , R ) −1/2Λ̃ −1/2 , R ] = ( I+ nσ2 Λ̃ , R ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1Λ̃ , R ( Φ T RΦR−nI ) ) j ( I+ nσ2 Λ̃ , R ) −1 . Letting →0 , we get ( I+ 1σ2 ΛRΦ T RΦR ) −1 = ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ nσ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 . This concludes the proof . Lemma 28 . If ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ‖2 < 1 , then we have ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 ∞∑ j=1 ( −1 ) j ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1 . ( 53 ) In particular , assume that σ2 =Θ ( 1 ) . LetR=n 1 α+κ where 0 < κ < α−1−2τα ( 1+2τ ) . Then with probability of at least 1−δ , for sufficiently large n , we have ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ‖2 < 1 and ( 53 ) holds . Proof of Lemma 28 . Define Φ > R= ( φR+1 ( x ) , φR+2 ( x ) , ... ) , Λ > R=diag ( λR+1 , λR+2 , ... ) . Then we have ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 = ( I+ ΦRΛRΦ T R σ2 + Φ > RΛ > RΦ T > R σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 = ( ( I+ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) −1 −I ) ( I+ ΦRΛRΦ T R σ2 ) −1 . By Corollary 26 , for sufficiently large n , ‖ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ‖2 < 1 with probability of at least 1−δ . Hence ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 = ( ( I+ ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) −1 −I ) ( I+ ΦRΛRΦ T R σ2 ) −1 = ∞∑ j=1 ( −1 ) j ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1 . Lemma 29 . 
Assume that µ0 =0 and σ2 =Θ ( nt ) where 1− α1+2τ < t < 1 . LetR=n ( 1α+κ ) ( 1−t ) where 0 < κ < α−1−2τ+ ( 1+2τ ) tα2 ( 1−t ) . Then when n is sufficiently large , with probability of at least 1−2δ we have ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } ) . ( 54 ) Proof of Lemma 29 . Let Λ1 : R = diag { λ1 , ... , λR } , Φ1 : R = ( φ1 ( x ) , φ1 ( x ) , ... , φR ( x ) ) and µ1 : R = ( µ1 , ... , µR ) . Since µ0 = 0 , we have ( I + 1σ2 ΦRΛRΦ T R ) −1fR ( x ) = ( I+ 1σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R. Using the Woodbury matrix identity , we have that ( I+ 1σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R= [ I−Φ1 : R ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1Λ1 : RΦT1 : R ] Φ1 : Rµ1 : R =Φ1 : Rµ1 : R−Φ1 : R ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1Λ1 : RΦT1 : RΦ1 : Rµ1 : R =σ2Φ1 : R ( σ 2I+Λ1 : RΦ T 1 : RΦ1 : R ) −1µ1 : R. ( 55 ) Let A = ( I + nσ2 Λ1 : R ) −1/2Λ 1/2 1 : R ( Φ T 1 : RΦ1 : R − nI ) Λ 1/2 1 : R ( I + n σ2 Λ1 : R ) −1/2.By Corollary 22 , with probability of at least 1−δ , we have ‖ 1σ2A‖2 = √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α . When n is sufficiently large , ‖ 1σ2A‖2 =o ( 1 ) is less than 1 because 1− α 1+2τ < t < 1 . By Lemma 27 , we have ( I+ 1σ2 Λ1 : RΦ T 1 : RΦ1 : R ) −1 = ( I+ nσ2 Λ1 : R ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 Λ1 : R ) −1Λ1 : R ( Φ T 1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1 . We then have ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 = 1 σ2 ∥∥∥∥∥∥ ( I+ nσ2 Λ1 : R ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 Λ1 : R ) −1Λ1 : R ( Φ T 1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1 µ1 : R ∥∥∥∥∥∥ 2 ≤ 1 σ2 ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2+ ∞∑ j=1 ∥∥∥ ( 1σ2 ( I+ nσ2 Λ1 : R ) −1Λ1 : R ( ΦT1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1µ1 : R∥∥∥ 2 . 
( 56 ) By Lemma 15 and Assumption 5 , assuming that supi≥1pi+1−pi=h , we have ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2≤ √√√√ R∑ p=1 C2µp −2β ( 1+nCλp−α/σ2 ) 2 =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) , ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2≥ √√√√bRh c∑ i=1 C2µi −2β ( 1+ nσ2Cλ ( hi ) −α ) 2 =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . Overall we have ‖ ( I+ nσ2 Λ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) . ( 57 ) Using the fact that ‖ 1σ2A‖2 = √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α and ‖ ( I+ nσ2 Λ1 : R ) −1Λ1 : R‖2≤n−1 , we have∥∥∥ ( 1σ2 ( I+ nσ2 Λ1 : R ) −1Λ1 : R ( ΦT1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1µ1 : R∥∥∥ 2 = ∥∥∥∥ ( I+ nσ2 Λ1 : R ) − 12 Λ 121 : R ( 1σ2A ) j ( I+ nσ2 Λ1 : R ) − 12 Λ 121 : Rµ1 : R∥∥∥∥ 2 ≤Õ ( n− 1−t 2 ) ‖ 1σ2A‖ j 2‖ ( I+ nσ2 Λ1 : R ) − 12 Λ − 12 1 : Rµ1 : R‖2 ( 58 ) By Lemma 16 and the assumptionR=n ( 1 α+κ ) ( 1−t ) , ‖ ( I+ nσ2 Λ1 : R ) − 12 Λ − 12 1 : Rµ1 : R‖2≤ √√√√ R∑ p=1 ( Cλp−α ) −1C2µp −2β ( 1+nCλp−α/σ2 ) 1 =Õ ( max { n− ( 1−t ) /2 , R1/2−β+α/2 } ) =Õ ( max { n− ( 1−t ) /2 , n ( 12 + 1−2β 2α +κ ( 1/2−β+α/2 ) ) ( 1−t ) } ) ( 59 ) We then have ∥∥∥ ( 1σ2 ( I+ nσ2 Λ1 : R ) −1Λ1 : R ( ΦT1 : RΦ1 : R−nI ) ) j ( I+ nσ2 Λ1 : R ) −1µ1 : R∥∥∥ 2 =‖ 1σ2A‖ j 2Õ ( max { n− ( 1−t ) , n ( 1−2β 2α +κ ( 1/2−β+α/2 ) ) ( 1−t ) } ) ( 60 ) By ( 56 ) , ( 57 ) and ( 60 ) , we have ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) + ∞∑ j=1 ‖ 1 σ2 A‖j2Õ ( max { n− ( 1−t ) , n ( 1−t ) 1−2β 2α +κ ( 1−t ) ( 1/2−β+α/2 ) } ) =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) +Õ ( n 1−α+2τ 2α − ( 1+2τ ) t 2α ) Õ ( max { n− ( 1−t ) , n ( 1−t ) 1−2β 2α +κ ( 1−t ) ( 1/2−β+α/2 ) } ) . ( 61 ) By assumption κ < α−1−2τ+ ( 1+2τ ) tα2 ( 1−t ) , we have that κ ( 1−t ) ( 1/2−β+α/2 ) + 1−α+2τ 2α − ( 1+2τ ) t 2α < κα ( 1−t ) /2+ 1−α+2τ 2α − ( 1+2τ ) t 2α < 0 . 
Using ( 61 ) , we then get ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : R ) −1µ1 : R‖2 . ( 62 ) By Corollary 20 , with probability of at least 1−δ , we have ‖Φ1 : R ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Õ ( √ ( 1δ +1 ) n‖ ( σ 2I+Λ1 : RΦ T 1 : RΦ1 : R ) −1µ1 : R‖2 ) =Õ ( √ ( 1δ +1 ) n·n ( 1−t ) max { −1 , 1−2β2α } ) . ( 63 ) From ( 55 ) , we get ‖ ( I + 1σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R‖2 = Õ ( √ ( 1δ +1 ) n ·n ( 1−t ) max { −1 , 1−2β2α } ) . This concludes the proof . Lemma 30 . Assume that µ0 > 0 and σ2 = Θ ( nt ) where 1− α1+2τ < t < 1 . Let R = n 1 α+κ where 0 < κ < α−1−2τ+ ( 1+2τ ) tα2 . Then when n is sufficiently large , with probability of at least 1−2δ , we have ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n ) . ( 64 ) Proof of Lemma 30 . Using the Woodbury matrix identity , we have that ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) = [ I−ΦR ( σ2I+ΛRΦTRΦR ) −1ΛRΦTR ] ΦRµR =ΦRµR−ΦR ( σ2I+ΛRΦTRΦR ) −1ΛRΦTRΦRµR =σ2ΦR ( σ 2I+ΛRΦ T RΦR ) −1µR . ( 65 ) Let µR,1 = ( µ0,0 , ... ,0 ) and µR,2 = ( 0 , µ1 , ... , µR ) . Then µR=µR,1+µR,2 . Then we have ‖ ( σ2I+ΛRΦTRΦR ) −1µR‖2 =‖ ( σ2I+ΛRΦTRΦR ) −1µR,1‖2+‖ ( σ2I+ΛRΦTRΦR ) −1µR,2‖2 . ( 66 ) According to ( 62 ) in the proof of Lemma 29 , we have ‖ ( σ2I + ΛRΦTRΦR ) −1µR,2‖2 = Õ ( nmax { − ( 1−t ) , ( 1−t ) ( 1−2β ) 2α } ) . Next we estimate ‖σ2ΦR ( σ2I+ΛRΦTRΦR ) −1µR,1‖2 . Let A= ( I+ n σ2 Λ1 : R ) −γ/2Λ γ/2 1 : R ( Φ T 1 : RΦ1 : R−nI ) Λ γ/2 1 : R ( I+ n σ2 Λ1 : R ) −γ/2 where 11−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) < γ < 1 . Since 1− α 1+2τ < t < 1 , 1 1−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) < 1 so the range for γ is well-defined.By Corollary 22 , with probability of at least 1 − δ , we have ‖ 1σ2A‖2 = Õ ( √ logRδ n 1+α+2τ 2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) = o ( 1 ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 because 1− α1+2τ < t < 1 . 
By Lemma 27 , we have ( I+ 1σ2 ΛRΦ T RΦR ) −1 = ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 . We then have ‖ ( σ2I+ΛRΦTRΦR ) −1µR,1‖2 = 1 σ2 ∥∥∥∥∥∥ ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 µR,1 ∥∥∥∥∥∥ 2 ≤ 1 σ2 ‖ ( I+ nσ2 ΛR ) −1µR,1‖2+ ∞∑ j=1 ∥∥∥ ( 1σ2 ( I+ nσ2 ΛR ) −1ΛR ( ΦTRΦR−nI ) ) j ( I+ nσ2 ΛR ) −1µR,1∥∥∥ 2 . ( 67 ) By Lemma 15 , ‖ ( I+ n σ2 ΛR ) −1µR,1‖2≤ √√√√µ20+ R∑ p=1 C2µp −2β ( 1+nCλp−α/σ2 ) 2 =O ( 1 ) . ( 68 ) Let Λ̃1 , R = diag { 1 , λ1 , ... , λR } and I0 , R = ( 0 , 1 , ... , 1 ) . Then ΛR = Λ̃1 , RI0 , R . Let B = ( I + n σ2 ΛR ) −γ/2Λ̃ γ/2 1 , R ( Φ T RΦR−nI ) Λ̃ γ/2 1 , R ( I+ n σ2 ΛR ) −γ/2 . According to Corollary 23 , we have ‖B‖2 = O ( √ logRδ n 1 2 ) . Using the fact that ‖ 1σ2A‖2 =Õ ( √ logRδ n 1+α+2τ 2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) , we have∥∥∥ ( 1σ2 ( I+ nσ2 ΛR ) −1ΛR ( ΦTRΦR−nI ) ) j ( I+ nσ2 ΛR ) −1µR,1∥∥∥ 2 = 1 σ2j ∥∥∥∥ ( I+ nσ2 ΛR ) −1+γ2 Λ1−γ2R ( A ( I+ nσ2 ΛR ) −1+γΛ1−γR ) j−1B ( I+ nσ2 ΛR ) −1+γ2 µR,1∥∥∥∥ 2 ≤ 1 σ2 ( n ( −1+ γ 2 + ( −1+γ ) ( j−1 ) ) ( 1−t ) Õ ( √ logRδ n ( j−1 ) ( 1+α+2τ2α − ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) ) √ logRδ n 1 2 ‖µR,1‖2 ≤n ( −1+ γ 2 ) ( 1−t ) + 1 2−tÕ ( n [ 1−α+2τ− ( 1+2τ ) t ] ( j−1 ) 2α ) √ logRδ ‖µR,1‖2 =Õ ( n− 1 2 + γ 2 ( 1−t ) + [ 1−α+2τ− ( 1+2τ ) t ] ( j−1 ) 2α ) . ( 69 ) Since 11−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) < γ < 1 and− 1 2 + 1 1−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) 1−t 2 < 0 , we can let γ be a little bit larger than 11−t ( 1+α+2τ 2α − ( 1+2τ+2α ) t 2α ) and make− 1 2 + γ 2 ( 1−t ) < 0 holds . By ( 67 ) , ( 68 ) , ( 69 ) , we have ‖ ( σ2I+ΛRΦTRΦR ) −1µR,1‖2 ≤O ( 1 ) + ∞∑ j=1 Õ ( n− 1 2 + γ 2 ( 1−t ) + [ 1−α+2τ− ( 1+2τ ) t ] ( j−1 ) 2α ) ≤O ( 1 ) +o ( 1 ) =O ( 1 ) . ( 70 ) According to ( 66 ) , we have ‖ ( σ2I + ΛRΦTRΦR ) −1µR‖2 = Õ ( nmax { − ( 1−t ) , ( 1−t ) ( 1−2β ) 2α } ) +O ( 1 ) = O ( 1 ) . 
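The Neumann-series expansion of Lemma 27, invoked here and in several later proofs, can be sanity-checked numerically. The sketch below is illustrative only and not part of the proof: the cosine basis, the eigenvalue decay $\lambda_p=p^{-2}$, and the sizes $n$, $R$ are arbitrary choices that put us in the regime where the expansion norm is below one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R, sigma2 = 2000, 5, 1.0

# Orthonormal cosine features: E[phi_p(X) phi_q(X)] = delta_pq for X ~ U[0,1],
# so Phi^T Phi concentrates around n*I, which is what makes the series converge.
x = rng.random(n)
p = np.arange(1, R + 1)
Phi = np.sqrt(2) * np.cos(np.pi * np.outer(x, p))   # shape (n, R)
Lam = np.diag(p ** -2.0)                            # polynomially decaying eigenvalues

I = np.eye(R)
lhs = np.linalg.inv(I + (Lam @ Phi.T @ Phi) / sigma2)

D_inv = np.linalg.inv(I + (n / sigma2) * Lam)       # (I + (n/sigma^2) Lambda)^{-1}
K = (D_inv @ Lam @ (Phi.T @ Phi - n * I)) / sigma2  # expansion term of Lemma 27
assert np.linalg.norm(K, 2) < 1                     # convergence condition

# Truncated series: sum_j (-K)^j D_inv, as in Lemma 27.
rhs, term = D_inv.copy(), D_inv.copy()
for _ in range(60):
    term = -K @ term
    rhs += term

err = np.linalg.norm(lhs - rhs, 2)
print(err)  # tiny once the geometric series has converged
```

The identity being tested is exactly $(I+\frac{1}{\sigma^2}\Lambda\Phi^T\Phi)^{-1}=\sum_{j\ge0}(-K)^j(I+\frac{n}{\sigma^2}\Lambda)^{-1}$ with $K=\frac{1}{\sigma^2}(I+\frac{n}{\sigma^2}\Lambda)^{-1}\Lambda(\Phi^T\Phi-nI)$.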
By Corollary 20 , with probability of at least 1−δ , we have ‖ΦR ( σ2I+ΛRΦTRΦR ) −1µR‖2 =Õ ( √ ( 1δ +1 ) n‖ ( σ 2I+ΛRΦ T RΦR ) −1µR‖2 ) =Õ ( √ ( 1δ +1 ) n ) . From ( 65 ) , we get ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n ) . This concludes the proof . Lemma 31 . Assume that σ2 = Θ ( 1 ) . Let R= n 1α+κ where 0 < κ < α−1−2τα2 . Assume that µ0 = 0 . Then when n is sufficiently large , with probability of at least 1−3δ we have ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . ( 71 ) Assume that µ0 > 0 . Then when n is sufficiently large , with probability of at least 1−3δ we have ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n ) . ( 72 ) Proof of Lemma 31 . We have ( I+ ΦΛΦ T σ2 ) −1fR ( x ) = ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) + ( ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 ) fR ( x ) . ( 73 ) When µ0 =0 , by Lemma 29 , with probability of at least 1−2δ , we have ‖ ( I+ 1σ2 ΦRΛRΦ T R ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . Since α−1−2τα2 < α−1−2τ α ( 1+2τ ) , we apply Lemma 28 and Corollary 26 and get that with probability of at least 1−δ , the second term in the right hand side of ( 73 ) is estimated as follows : ‖ ( ( I+ ΦΛΦ T σ2 ) −1− ( I+ ΦRΛRΦ T R σ2 ) −1 ) fR ( x ) ‖2 =‖ ∞∑ j=1 ( −1 ) j ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦT > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖2 = ∞∑ j=1 ∥∥∥ ( ( I+ ΦRΛRΦTRσ2 ) −1 Φ > RΛ > RΦT > Rσ2 ) ∥∥∥j 2 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖2 = ∞∑ j=1 Õ ( n−jκα ) Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) =o ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . Overall , from ( 73 ) , we have that with probability 1−3δ , ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 =Õ ( √ ( 1δ +1 ) n·n max { −1 , 1−2β2α } ) . When µ0 > 0 , using the same approach and Lemma 30 , we can prove that ‖ ( I+ ΦΛΦ T σ2 ) −1fR ( x ) ‖2 = Õ ( √ ( 1δ +1 ) n ) . This concludes the proof . D PROOF OF THE MAIN RESULTS D.1 PROOFS RELATED TO THE ASYMPTOTICS OF THE NORMALIZED STOCHASTIC COMPLEXITY Lemma 32 . 
Under Assumptions 4, 5 and 6, with probability of at least $1-2\delta$ we have
$$|T_{1,R}(D_n)-T_1(D_n)|=\tilde O\Big(\tfrac{1}{\sigma^2}\big(nR^{1-\alpha}+n^{1/2}R^{1-\alpha+\tau}+R^{1-\alpha+2\tau}\big)\Big). \qquad (74)$$
If $R=n^{\frac{1}{\alpha}+\kappa}$ where $\kappa>0$, we have $|T_{1,R}(D_n)-T_1(D_n)|=o\big(\tfrac{1}{\sigma^2}n^{\frac{1}{\alpha}}\big)$. If we further assume that $0<\kappa<\frac{\alpha-1-2\tau}{\alpha^2}$, $\mu_0=0$ and $\sigma^2=\Theta(1)$, then for sufficiently large $n$, with probability of at least $1-4\delta$ we have
$$|T_{2,R}(D_n)-T_2(D_n)|=\tilde O\Big(\big(\tfrac{1}{\delta}+1\big)\,n^{\max\{(\frac{1}{\alpha}+\kappa)\frac{1-2\beta}{2},\;1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\;-1-\kappa\alpha,\;1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\Big). \qquad (75)$$

Proof of Lemma 32. Define $\Phi_{>R}=(\phi_{R+1}(x),\phi_{R+2}(x),\dots)$ and $\Lambda_{>R}=\mathrm{diag}(\lambda_{R+1},\lambda_{R+2},\dots)$. We then have
$$|T_1(D_n)-T_{1,R}(D_n)|\le\Big|\tfrac12\log\det\big(I+\tfrac{1}{\sigma^2}\Phi\Lambda\Phi^T\big)-\tfrac12\log\det\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\big)\Big|+\tfrac12\Big|\mathrm{Tr}\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}-\mathrm{Tr}\big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}\Big|. \qquad (76)$$
For the first term on the right-hand side of (76), we have
$$\Big|\tfrac12\log\det\big(I+\tfrac{1}{\sigma^2}\Phi\Lambda\Phi^T\big)-\tfrac12\log\det\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\big)\Big|=\Big|\tfrac12\log\det\Big(\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\big)^{-1}\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T+\tfrac{1}{\sigma^2}\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\big)\Big)\Big|$$
$$=\Big|\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\big)^{-1}\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\Big)\Big|=\tfrac12\Big|\mathrm{Tr}\log\Big(I+\tfrac{1}{\sigma^2}\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\big)^{-1}\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\Big)\Big|. \qquad (77)$$
Given a concave function $h$ and a matrix $B\in\mathbb{R}^{n\times n}$ whose eigenvalues $\zeta_1,\dots,\zeta_n$ are all positive, we have
$$\mathrm{Tr}\,h(B)=\sum_{p=1}^n h(\zeta_p)\le n\,h\Big(\tfrac1n\sum_{p=1}^n\zeta_p\Big)=n\,h\big(\tfrac1n\mathrm{Tr}\,B\big), \qquad (78)$$
where we used Jensen's inequality.
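The trace inequality (78) is easy to check numerically. The sketch below is illustrative only: it uses an arbitrary random positive-definite $B$ and the concave choice $h(x)=\log(1+x)$, and confirms $\mathrm{Tr}\,h(B)\le n\,h(\mathrm{Tr}\,B/n)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Random symmetric positive-definite matrix, so all eigenvalues are positive.
G = rng.standard_normal((n, n))
B = G @ G.T + 0.1 * np.eye(n)
zeta = np.linalg.eigvalsh(B)

# h(x) = log(1 + x) is concave on [0, inf), so Jensen's inequality gives
# Tr h(B) = sum_p h(zeta_p) <= n * h(mean(zeta)) = n * h(Tr B / n).
tr_h_B = np.sum(np.log1p(zeta))
bound = n * np.log1p(np.trace(B) / n)
print(tr_h_B <= bound + 1e-12)  # True
```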
Using h ( x ) =log ( 1+x ) in ( 78 ) , with probability 1−δ , we have∣∣ 1 2 logdet ( I+ 1 σ2 ΦΛΦ T ) − 12 logdet ( I+ 1 σ2 ΦRΛRΦ T R ) ∣∣ ≤ n2 log ( 1+ 1 nTr ( 1 σ2 ( I+ ΦRΛRΦ T R σ2 ) −1Φ > RΛ > RΦ T > R ) ) ≤ n2 log ( 1+ 1 nσ2 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖2Tr ( Φ > RΛ > RΦT > R ) ) ≤ n2 log ( 1+ 1 nσ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 ) ≤ 1 2σ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 = 12σ2 ∑∞ p=R+1λp ( C2φÕ ( √ p2τn‖φp‖22+p2τ ) +n‖φp‖22 ) =Õ ( 1σ2n ∑∞ p=R+1λp+n 1/2 ∑∞ p=R+1λpp τ+ ∑∞ p=R+1λpp 2τ ) =Õ ( 1 σ2 ( nR 1−α+n1/2R1−α+τ+R1−α+2τ ) ) =o ( 1 σ2n 1 α ) , ( 79 ) where in the second inequality we use the fact that TrAB≤‖A‖2TrB whenA andB are symmetric positive definite matrices , and in the last inequality we use Lemma 18 . As for the second term in the right hand side of ( 76 ) , letA= ( I+ ΦRΛRΦ T R σ2 ) −1/2 . Then we have 1 2 ∣∣∣Tr ( I+ ΦΛΦTσ2 ) −1−Tr ( I+ ΦRΛRΦTRσ2 ) −1∣∣∣ = 12 ∣∣∣∣TrA [ I− ( I+A ( Φ > RΛ > RΦT > Rσ2 ) A ) −1 ] A ∣∣∣∣ ≤ 12Tr [ I− ( I+A ( Φ > RΛ > RΦ T > R σ2 ) A ) −1 ] ≤ n2 ( 1− ( 1+ 1 nTrA ( Φ > RΛ > RΦ T > R σ2 ) A ) −1 ) ≤ n2 ( 1− ( 1+ 1 nTr ( Φ > RΛ > RΦ T > R σ2 ) ) −1 ) ≤ n2 ( 1− ( 1+ 1 nσ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 ) ) −1 ) ≤ 1 2σ2 ∑∞ p=R+1λp‖φp ( x ) ‖22 =Õ ( 1 σ2 ( nR 1−α+n1/2R1−α+τ+R1−α+2τ ) ) =o ( 1 σ2n 1 α ) , where in the first inequality we use the fact that ‖A‖2 < 1 and TrABA≤‖A‖22TrB when A and B are symmetric positive definite matrices , in the second inequality we use h ( x ) =1−1/ ( 1+x ) in ( 78 ) and in the last equality we use the last few steps of ( 79 ) . This concludes the proof of the first statement . As for |T2 ( Dn ) −T2 , R ( Dn ) | , we have |T2 ( Dn ) −T2 , R ( Dn ) |= ∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣ + ∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣ . 
(80)

For the first term on the right-hand side of (80), we have
$$\Big|f(x)^T\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f(x)-f_R(x)^T\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_R(x)\Big|\le2\Big|f_{>R}(x)^T\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_R(x)\Big|+\Big|f_{>R}(x)^T\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_{>R}(x)\Big|$$
$$\le2\|f_{>R}(x)\|_2\big\|\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_R(x)\big\|_2+\|f_{>R}(x)\|_2\big\|\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}\big\|_2\|f_{>R}(x)\|_2\le2\|f_{>R}(x)\|_2\big\|\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_R(x)\big\|_2+\|f_{>R}(x)\|_2^2.$$
Applying Corollary 19 and Lemma 31, with probability of at least $1-4\delta$ we have
$$\Big|f(x)^T\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f(x)-f_R(x)^T\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_R(x)\Big|\le2\,\tilde O\Big(\sqrt{(\tfrac1\delta+1)nR^{1-2\beta}}\Big)\,\tilde O\Big(\sqrt{(\tfrac1\delta+1)\,n\cdot n^{\max\{-1,\frac{1-2\beta}{2\alpha}\}}}\Big)+\tilde O\big((\tfrac1\delta+1)nR^{1-2\beta}\big)$$
$$=2\,\tilde O\Big((\tfrac1\delta+1)\,n^{1+(\frac1\alpha+\kappa)\frac{1-2\beta}{2}+\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big)+\tilde O\Big((\tfrac1\delta+1)\,n^{1+(\frac1\alpha+\kappa)(1-2\beta)}\Big)=2\,\tilde O\Big((\tfrac1\delta+1)\,n^{1+(\frac1\alpha+\kappa)\frac{1-2\beta}{2}+\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big),$$
where the last equality holds because $(\frac1\alpha+\kappa)\frac{1-2\beta}{2}<\frac{1-2\beta}{2\alpha}$ when $\kappa>0$.

For the second term on the right-hand side of (80), according to Lemma 28, Corollary 26 and Lemma 29, we have
$$\Big|f_R(x)^T\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_R(x)-f_R(x)^T\big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}f_R(x)\Big|=\Big|\sum_{j=1}^\infty(-1)^j f_R(x)^T\Big(\big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^j\big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}f_R(x)\Big|$$
$$\le\sum_{j=1}^\infty\big\|\big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}\big\|_2^{j-1}\cdot\big\|\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\big\|_2^j\cdot\big\|\big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}f_R(x)\big\|_2^2=\sum_{j=1}^\infty\tilde O(n^{-j\kappa\alpha})\,\tilde O\big((\tfrac1\delta+1)\,n^{1+\max\{-2,\frac{1-2\beta}{\alpha}\}}\big)=\tilde O\big((\tfrac1\delta+1)\,n^{1+\max\{-2,\frac{1-2\beta}{\alpha}\}-\kappa\alpha}\big). \qquad (81)$$
By (80), we have
$$|T_2(D_n)-T_{2,R}(D_n)|=\tilde O\Big((\tfrac1\delta+1)\,n^{1+(\frac1\alpha+\kappa)\frac{1-2\beta}{2}+\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big)+\tilde O\big((\tfrac1\delta+1)\,n^{1+\max\{-2,\frac{1-2\beta}{\alpha}\}-\kappa\alpha}\big)=\tilde O\Big((\tfrac1\delta+1)\,n^{\max\{(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\;1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\;-1-\kappa\alpha,\;1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\Big).$$
This concludes the proof of the second statement.

In Lemma 32 we gave a bound for $|T_{2,R}(D_n)-T_2(D_n)|$ when $n^{\frac1\alpha}<R<n^{\frac1\alpha+\frac{\alpha-1-2\tau}{\alpha^2}}$. For $R>n$, we note the following lemma:

Lemma 33. Let $R=n^C$ and $\sigma^2=n^t$. Assume that $C\ge1$ and $C(1-\alpha+2\tau)-t<0$.
Under Assumptions 4 , 5 and 6 , for sufficiently large n and with probability of at least 1−3δ we have |T2 , R ( Dn ) −T2 ( Dn ) |=Õ ( ( 1δ +1 ) 1 σ2nR max { 1/2−β,1−α+2τ } ) . ( 82 ) Proof of Lemma 33 . Define Φ > R = ( φR+1 ( x ) , φR+2 ( x ) , ... , φp ( x ) , ... ) , and Λ > R = diag ( λR+1 , ... , λp , ... ) . Then we have |T2 ( Dn ) −T2 , R ( Dn ) |= ∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ + ∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ . ( 83 ) For the first term on the right-hand side of ( 83 ) , with probability 1−3δ we have∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ ≤2 ∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣+∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1f > R ( x ) ∣∣∣∣ ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1‖2‖fR ( x ) ‖2+‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1‖2‖f > R ( x ) ‖2 ≤2‖f > R ( x ) ‖2‖fR ( x ) ‖2+‖f > R ( x ) ‖22 ≤2Õ ( √ ( 1 δ +1 ) nR1−2β ) Õ ( √ ( 1 δ +1 ) n·‖f‖2 ) +Õ ( ( 1 δ +1 ) nR1−2β ) =Õ ( ( 1 δ +1 ) nR1/2−β ) , where we used Corollary 19 and Lemma 17 for the last inequality . The assumption C ( 1− α+ 2τ ) − t < 0 means that R 1−α+2τ σ2 = o ( 1 ) . For the second term on the right-hand side of ( 83 ) , by Lemmas 28 and 25 , we have∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ = ∣∣∣∣∣∣ ∞∑ j=1 ( −1 ) jfR ( x ) T ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ∣∣∣∣∣∣ ≤ ∞∑ j=1 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖j+12 ·‖ Φ > RΛ > RΦ T > R σ2 ‖j2 ·‖fR ( x ) ‖22 = ∞∑ j=1 Õ ( 1 σ2 Rj ( 1−α+2τ ) ) Õ ( ( 1 δ +1 ) n‖f‖22 ) =Õ ( ( 1 δ +1 ) 1 σ2 nR1−α+2τ ) . ( 84 ) Using ( 83 ) , we have |T2 ( Dn ) −T2 , R ( Dn ) |=Õ ( ( 1 δ +1 ) nR1/2−β ) +Õ ( ( 1 δ +1 ) n 1 σ2 R1−α+2τ ) =Õ ( ( 1 δ +1 ) n 1 σ2 Rmax { 1/2−β,1−α+2τ } ) . Next we consider the asympototics of T1 , R ( Dn ) and T2 , R ( Dn ) . Lemma 34 . Let A = ( I + nσ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR − nI ) Λ γ/2 R ( I + n σ2 ΛR ) −γ/2 . 
Assume that ‖A‖2 < 1 where 1+2τα < γ≤1 . Then we have T2 , R ( Dn ) = n 2σ2µ T R ( I+ n σ2 ΛR ) −1µR+ 1 2 ∑∞ j=1 ( −1 ) j+1Ej , where Ej=µ T R 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ nσ2 ΛR ) −1µR . Proof of Lemma 34 . Let Λ̃ , R = diag { , λ1 , ... , λR } . Since ΛR = diag { 0 , λ1 , ... , λR } , we have that when is sufficiently small , ‖ 1σ2 ( I+ n σ2 Λ̃ , R ) −1/2Λ̃ 1/2 , R ( Φ T RΦR−nI ) Λ̃ 1/2 , R ( I+ n σ2 Λ̃ , R ) −1/2‖2 < 1 . Since all diagonal entries of Λ̃ , R are positive , we have 1 2σ2 µTRΦ T R ( I+ 1 σ2 ΦRΛ̃ , RΦ T R ) −1ΦRµR = 1 2σ2 µTRΦ T R [ I−ΦR ( σ2I+Λ̃ , RΦTRΦR ) −1Λ̃ , RΦTR ] ΦRµR = 1 2σ2 µTRΦ T RΦRµR− 1 2σ2 µTRΦ T RΦR ( σ 2I+Λ̃ , RΦ T RΦR ) −1Λ̃ , RΦ T RΦRµR = 1 2 µTRΦ T RΦR ( σ 2I+Λ̃ , RΦ T RΦR ) −1µR = 1 2 µTRΛ̃ −1 , RΛ̃ , RΦ T RΦR ( σ 2I+Λ̃ , RΦ T RΦR ) −1µR = 1 2 µTRΛ̃ −1 , RµR− 1 2 µTRΛ̃ −1 , R ( I+ 1 σ2 Λ̃ , RΦ T RΦR ) −1µR . ( 85 ) Using Lemma 27 , we have 1 2 µTRΛ̃ −1 , RµR− 1 2 µTRΛ̃ −1 , R ( I+ 1 σ2 Λ̃ , RΦ T RΦR ) −1µR = 1 2 µTRΛ̃ −1 , RµR− 1 2 µTRΛ̃ −1 , R ( I+ n σ2 Λ̃ , R ) −1µR + 1 2 ∞∑ j=1 ( −1 ) j+1µTRΛ̃−1 , R ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1Λ̃ , R ( Φ T RΦR−nI ) ) j ( I+ n σ2 Λ̃ , R ) −1µR = n 2σ2 µTR ( I+ n σ2 Λ̃ , R ) −1µR + 1 2 ∞∑ j=1 ( −1 ) j+1µTR 1 σ2 ( I+ n σ2 Λ̃ , R ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 Λ̃ , R ) −1Λ̃ , R ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 Λ̃ , R ) −1µR ( 86 ) Letting →0 , we get T2 , R ( Dn ) = 1 2σ2 µTRΦ T R ( I+ 1 σ2 ΦRΛRΦ T R ) −1ΦRµR = n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR + 1 2 ∞∑ j=1 [ ( −1 ) j+1µTR 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR ] This concludes the proof . Lemma 35 . Assume that σ2 = Θ ( 1 ) . LetR=n 1 α+κ where 0 < κ < α−1−2τ2α2 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ , we have T1 , R ( Dn ) = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) ) ( 1+o ( 1 ) ) =Θ ( n 1α ) . 
( 87 ) Furthermore , if we assume µ0 =0 , we have T2 , R ( Dn ) = ( n 2σ2µ T R ( I+ n σ2 ΛR ) −1µR ) ( 1+o ( 1 ) ) = { Θ ( nmax { 0,1+ 1−2β α } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . ( 88 ) Proof of Lemma 35 . Let A= ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2 , ( 89 ) where 1+α+2τ2α < γ≤1 . By Corollary 22 , with probability of at least 1−δ , we have ‖A‖2 =Õ ( n 1−2γα+α+2τ 2α ) . ( 90 ) When n is sufficiently large , ‖A‖2 is less than 1 . LetB= ( I+ nσ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2 . Then ‖B‖2 = σ 2 ( 1−γ ) n1−γ ‖A‖2 = Õ ( n 1−α+2τ 2α ) . Using the Woodbury matrix identity , we compute T1 , R ( Dn ) as follows : T1 , R ( Dn ) = 1 2 logdet ( I+ 1 σ2 ΛRΦ T RΦR ) − 12TrΦR ( σ 2I+ΛRΦ T RΦR ) −1ΛRΦ T R = 12 logdet ( I+ n σ2 ΛR ) + 1 2 logdet [ I+ 1 σ2 ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2 ] − 12Tr ( σ 2I+ΛΦTRΦR ) −1ΛΦTRΦR = 12 logdet ( I+ n σ2 ΛR ) + 1 2Trlog [ I+ 1 σ2B ] − 1 2Tr ( I−σ 2 ( σ2I+ΛΦTRΦR ) −1 ) ) = 12 logdet ( I+ n σ2 ΛR ) + 1 2Tr ∞∑ j=1 ( −1 ) j−1 j ( 1 σ2B ) j − 12Tr I− ( I+ nσ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ nσ2 ΛR ) −1 = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) ) + 12Tr ∞∑ j=1 ( −1 ) j−1 j ( 1 σ2B ) j − 12Tr ∞∑ j=1 ( −1 ) j 1σ2j ( I+ n σ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2 , ( 91 ) where in the last equality we apply Lemma 27 . Let h ( x ) = log ( 1+x ) − ( 1− 11+x ) . It is easy to verify that h ( x ) is increasing on [ 0 , +∞ ) . As for the first term on the right hand side of ( 91 ) , we have 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) = 12 R∑ p=1 ( log ( 1+ nσ2λp ) − ( 1− 1 1+ n σ2 λp ) ) = 12 R∑ p=1 h ( nσ2λp ) ≤ 1 2 R∑ p=1 h ( n σ2 Cλp −α ) ≤ 12h ( n σ2Cλ ) + 1 2 ∫ [ 1 , R ] h ( nσ2Cλx −α ) dx = 12h ( n σ2 Cλ ) + 1 2n 1/α ∫ [ 1/n1/α , R/n1/α ] h ( Cλσ2 x −α ) dx =Θ ( n1/α ) , where in the last equality we use the fact that ∫ [ 0 , +∞ ] h ( x −α ) dx < ∞ . 
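The $\Theta(n^{1/\alpha})$ rate just derived comes from the sum $\frac12\sum_p h(\frac{n}{\sigma^2}\lambda_p)$ behaving like $n^{1/\alpha}\int h(x^{-\alpha})\,dx$. A quick numerical sketch (illustrative choices only: $\lambda_p=p^{-\alpha}$, $\alpha=2$, $\sigma^2=1$, and a truncated sum standing in for the infinite one) shows the $n^{1/\alpha}$ scaling:

```python
import numpy as np

def h(x):
    # h(x) = log(1 + x) - (1 - 1/(1 + x)), the function used in the proof
    return np.log1p(x) - x / (1.0 + x)

alpha = 2.0
p = np.arange(1, 2_000_000, dtype=float)  # truncation; the tail is negligible
lam = p ** -alpha

def S(n):
    # 0.5 * sum_p h(n * lambda_p), the quantity shown to be Theta(n^{1/alpha})
    return 0.5 * np.sum(h(n * lam))

ratio = S(1e5) / S(1e3)  # expected close to (1e5/1e3)^(1/alpha) = 10, up to slow corrections
print(ratio)
```

With these choices the ratio comes out near 10, consistent with $S(n)=\Theta(n^{1/2})$ for $\alpha=2$.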
On the other hand , we have 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) = 12 R∑ p=1 h ( nσ2λp ) ≥ 1 2 R∑ p=1 h ( nσ2Cλp −α ) ≥ 12 ∫ [ 1 , R+1 ] h ( nσ2Cλx −α ) dx = 12n 1/α ∫ [ 1/n1/α , ( R+1 ) /n1/α ] h ( 1σ2Cλx −α ) dx =Θ ( n1/α ) . Overall , we have 12 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) =Θ ( n1/α ) . As for the second term on the right hand side of ( 91 ) , we have∣∣∣∣∣∣Tr ∞∑ j=1 ( −1 ) j−1 j ( 1 σ2B ) j ∣∣∣∣∣∣≤R ∞∑ j=1 ‖ 1σ2B‖ j 2 =R ∞∑ j=1 1 σ2j Õ ( n j ( 1−α+2τ ) 2α ) =RÕ ( n 1−α+2τ 2α ) =Õ ( n 1 α+κ+ 1−α+2τ 2α ) . As for the third term on the right hand side of ( 91 ) , we have∣∣∣∣∣∣Tr ∞∑ j=1 ( −1 ) j 1σ2j ( I+ n σ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2 ∣∣∣∣∣∣ ≤ ∞∑ j=1 ∣∣∣Tr ( 1σ2j ( I+ nσ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2 ) ∣∣∣ ≤R ∞∑ j=1 ∥∥∥ 1σ2j ( I+ nσ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2∥∥∥ 2 ≤R ∞∑ j=1 ∥∥∥ 1σ2j ( I+ nσ2 ΛR ) −1/2Bj ( I+ nσ2 ΛR ) −1/2∥∥∥ 2 ≤R ∞∑ j=1 ∥∥ 1 σ2jB j ∥∥ 2 =Õ ( n 1 α+κ+ 1−α+2τ 2α ) . Then the asymptotics of T1 , R ( Dn ) is given by T1 , R ( Dn ) = 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) +Õ ( n 1α+κ+ 1−α+2τ2α ) +Õ ( n 1α+κ+ 1−α+2τ2α ) =Θ ( n1/α ) +Õ ( n 1 α+κ+ 1−α+2τ 2α ) =Θ ( n 1 α ) , where in the last inequality we use the assumption that κ < α−1−2τ2α . Since Õ ( n 1 α+κ+ 1−α+2τ 2α ) is lower order term compared to Θ ( n 1 α ) , we further have T1 , R ( Dn ) = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2Tr ( I− ( I+ nσ2 ΛR ) −1 ) ) ( 1+o ( 1 ) ) . This concludes the proof of the first statement . Let Λ1 : R=diag { λ1 , ... , λR } , Φ1 : R= ( φ1 ( x ) , φ1 ( x ) , ... , φR ( x ) ) and µ1 : R= ( µ1 , ... , µR ) . Since µ0 =0 , we have T2 , R ( Dn ) = 12σ2µ T 1 : RΦ T 1 : R ( I+ 1 σ2 Φ1 : RΛ1 : RΦ T 1 : R ) −1Φ1 : Rµ1 : R. 
According to Lemma 34 , we have T2 , R ( Dn ) = n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R + 1 2 ∞∑ j=1 ( −1 ) j+1µT1 : R 1 σ2 ( I+ n σ2 Λ1 : R ) −1 ( ΦT1 : RΦ1 : R−nI ) ( 1 σ2 ( I+ n σ2 Λ1 : R ) −1Λ1 : R ( Φ T 1 : RΦ1 : R−nI ) ) j−1 = n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R + 1 2 ∞∑ j=1 [ ( −1 ) j+1 1 σ2j µT1 : R ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R A ( ( I+ n σ2 Λ1 : R ) −1+γΛ1−γ1 : R A ) j−1 ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R µ1 : R ] ( 92 ) where in the second to last equality we used the definition ofA ( 89 ) . As for the first term on the right hand side of ( 92 ) , by Lemma 15 , Assumption 4 and Assumption 5 , we have n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R≤ n 2σ2 R∑ p=1 C2µp −2β 1+ nσ2Cλp −α = { Θ ( nmax { 0,1+ 1−2β α } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . On the other hand , by Assumption 5 , assuming that supi≥1pi+1−pi=h , we have n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R≥ n 2σ2 bRh c∑ i=1 C2µp −2β i 1+ nσ2Cλp −α i ≥ n 2σ2 bRh c∑ i=1 C2µi −2β 1+ nσ2Cλ ( hi ) −α = { Θ ( nmax { 0,1+ 1−2β α } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . Overall , we have n 2σ2 µT1 : R ( I+ n σ2 Λ1 : R ) −1µ1 : R=Θ ( n max { 0,1+ 1−2βα } logkn ) , where k= { 0 , α 6=2β−1 , 1 , α=2β−1 . By Lemma 16 , we have ‖ ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R µ1 : R‖ 2 2≤ R∑ p=1 C2µp −2β ( Cλp −α ) −γ ( 1+ nσ2Cλp −α ) 2−γ =Õ ( max { n−2+γ , R1−2β+αγ } ) =Õ ( nmax { −2+γ , 1−2β α +γ+κ ( 1−2β+αγ ) } ) . 
(93)

Using (90), the second term on the right-hand side of (92) is bounded as follows:
$$\frac12\sum_{j=1}^\infty\Big[(-1)^{j+1}\frac{1}{\sigma^{2j}}\mu_{1:R}^T\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}A\Big(\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1+\gamma}\Lambda_{1:R}^{1-\gamma}A\Big)^{j-1}\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}\mu_{1:R}\Big]$$
$$\le\frac12\sum_{j=1}^\infty\frac{1}{\sigma^{2j}}\|A\|_2^j\Big(\frac{n}{\sigma^2}\Big)^{(-1+\gamma)(j-1)}\big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}\mu_{1:R}\big\|_2^2$$
$$\le\frac12\sum_{j=1}^\infty\frac{1}{\sigma^{2j}}\,\tilde O\big(n^{\frac{j(1-2\gamma\alpha+\alpha+2\tau)}{2\alpha}}\big)\Big(\frac{n}{\sigma^2}\Big)^{(-1+\gamma)(j-1)}\tilde O\big(n^{\max\{-2+\gamma,\;\frac{1-2\beta}{\alpha}+\gamma+\kappa(1-2\beta+\alpha\gamma)\}}\big)$$
$$=\tilde O\big(n^{\max\{-2+\gamma+\frac{1-2\gamma\alpha+\alpha+2\tau}{2\alpha},\;\frac{1-2\beta}{\alpha}+\gamma+\frac{1-2\gamma\alpha+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\}}\big)=\tilde O\big(n^{\max\{-2+\frac{1+\alpha+2\tau}{2\alpha},\;\frac{1-2\beta}{\alpha}+\frac{1+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\}}\big). \qquad (94)$$
Since $\frac{1+\alpha+2\tau}{2\alpha}<\frac{1+\alpha+2\tau}{\alpha+1+2\tau}=1$, we have $-2+\frac{1+\alpha+2\tau}{2\alpha}<0$. Also we have
$$\frac{1-2\beta}{\alpha}+\frac{1+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)=\frac{1-2\beta}{\alpha}+1+\frac{1-\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\le\frac{1-2\beta}{\alpha}+1+\frac{1-\alpha+2\tau}{2\alpha}+\kappa\alpha\gamma<\frac{1-2\beta}{\alpha}+1, \qquad (95)$$
where the last inequality holds because $\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$ and $\gamma\le1$. Hence we have
$$T_{2,R}(D_n)=\frac{n}{2\sigma^2}\mu_{1:R}^T\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}+\tilde O\big(n^{\max\{-2+\frac{1+\alpha+2\tau}{2\alpha},\;\frac{1-2\beta}{\alpha}+\frac{1+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\}}\big)$$
$$=\Theta\big(n^{\max\{0,\,1+\frac{1-2\beta}{\alpha}\}}\log^k n\big)+\tilde O\big(n^{\max\{-2+\frac{1+\alpha+2\tau}{2\alpha},\;\frac{1-2\beta}{\alpha}+\frac{1+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\}}\big)=\Theta\big(n^{\max\{0,\,1+\frac{1-2\beta}{\alpha}\}}\log^k n\big),$$
where $k=\begin{cases}0,&\alpha\neq2\beta-1,\\1,&\alpha=2\beta-1.\end{cases}$ Since the $\tilde O$ term is of lower order than $\Theta\big(n^{\max\{0,\,1+\frac{1-2\beta}{\alpha}\}}\log^k n\big)$, we further have
$$T_{2,R}(D_n)=\Big(\frac{n}{2\sigma^2}\mu_{1:R}^T\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\Big)(1+o(1))=\Big(\frac{n}{2\sigma^2}\mu^T\big(I+\tfrac{n}{\sigma^2}\Lambda\big)^{-1}\mu\Big)(1+o(1)).$$
This concludes the proof of the second statement.

Lemma 36. Under Assumptions 4, 5 and 6, with probability of at least $1-5\delta$, we have
$$T_1(D_n)=\Big(\tfrac12\log\det\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)-\tfrac12\mathrm{Tr}\big(I-(I+\tfrac{n}{\sigma^2}\Lambda_R)^{-1}\big)\Big)(1+o(1))=\Theta\big(n^{\frac1\alpha}\big). \qquad (96)$$
Furthermore, let $\delta=n^{-q}$ where $0\le q<\min\{\frac{(2\beta-1)(\alpha-1-2\tau)}{4\alpha^2},\frac{\alpha-1-2\tau}{2\alpha}\}$. If we assume $\mu_0=0$, we have
$$T_2(D_n)=\Big(\tfrac{n}{2\sigma^2}\mu^T\big(I+\tfrac{n}{\sigma^2}\Lambda\big)^{-1}\mu\Big)(1+o(1))=\begin{cases}\Theta\big(n^{\max\{0,\,1+\frac{1-2\beta}{\alpha}\}}\big),&\alpha\neq2\beta-1,\\ \Theta(\log n),&\alpha=2\beta-1.\end{cases} \qquad (97)$$

Proof of Lemma 36.
LetR=n 1 α+κ where 0≤κ < α−1−2τ2α2 . By Lemmas 32 and 35 , with probability of at least 1−5δ we have |T1 , R ( Dn ) −T1 ( Dn ) |=Õ ( n 1 α+κ ( 1−α ) ) , ( 98 ) and |T2 , R ( Dn ) −T2 ( Dn ) |=Õ ( ( 1 δ +1 ) nmax { ( 1 α+κ ) 1−2β 2 ,1+ 1−2β α + ( 1−2β ) κ 2 , −1−κα,1+ 1−2β α −κα } ) ( 99 ) as well as T1 , R ( Dn ) = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2 Tr ( I− ( I+ n σ2 ΛR ) −1 ) ) ( 1+o ( 1 ) ) =Θ ( n 1 α ) , ( 100 ) and T2 , R ( Dn ) = ( n 2σ2 µT ( I+ n σ2 Λ ) −1µ ) ( 1+o ( 1 ) ) = { Θ ( nmax { 0,1+ 1−2β α } ) , α 6=2β−1 , Θ ( logn ) , α=2β−1 . ( 101 ) We then have T1 ( Dn ) =T1 , R ( Dn ) +T1 , R ( Dn ) −T1 ( Dn ) =Θ ( n 1 α ) +Õ ( n 1 α+κ ( 1−α ) ) =Θ ( n 1 α ) . Since Õ ( n 1 α+κ ( 1−α ) ) is lower order term compared to Θ ( n 1 α ) , we further have T1 ( Dn ) = ( 1 2 logdet ( I+ n σ2 ΛR ) − 1 2 Tr ( I− ( I+ n σ2 ΛR ) −1 ) ) ( 1+o ( 1 ) ) =Θ ( n 1 α ) This concludes the proof of the first statement . As for T2 ( Dn ) , we have T2 ( Dn ) =T2 , R ( Dn ) +T2 , R ( Dn ) −T2 ( Dn ) =Θ ( nmax { 0,1+ 1−2β α } logkn ) +Õ ( ( 1 δ +1 ) nmax { ( 1 α+κ ) 1−2β 2 ,1+ 1−2β α + ( 1−2β ) κ 2 , −1−κα,1+ 1−2β α −κα } ) =Θ ( nmax { 0,1+ 1−2β α } logkn ) +Õ ( nq+max { ( 1 α+κ ) 1−2β 2 ,1+ 1−2β α + ( 1−2β ) κ 2 , −1−κα,1+ 1−2β α −κα } ) where we use δ=n−q , k= { 0 , α 6=2β−1 , 1 , α=2β−1. . Since 0≤κ < α−1−2τ2α2 and 0≤q < min { ( 2β−1 ) ( α−1−2τ ) 4α2 , α−1−2τ 2α } , we can choose κ < α−1−2τ 2α2 and κ is arbitrarily close to α−1−2τ2α2 such that 0≤q < min { ( 2β−1 ) κ 2 , κα } . Then we have ( 1 α+κ ) 1−2β 2 +q < 0 , −1−κα+q < 0 , ( 1−2β ) κ2 +q < 0 and−κα+q < 0 . So we have T2 , R ( Dn ) =Θ ( n max { 0,1+ 1−2βα } logkn ) . Since Õ ( ( 1δ +1 ) n max { ( 1α+κ ) 1−2β 2 ,1+ 1−2β α + ( 1−2β ) κ 2 , −1−κα,1+ 1−2β α −κα } ) is lower order term compared to Θ ( nmax { 0,1+ 1−2β α } logkn ) , we further have T2 ( Dn ) = ( n 2σ2 µT ( I+ n σ2 Λ ) −1µ ) ( 1+o ( 1 ) ) This concludes the proof of the second statement . Proof of Theorem 7 . 
Using Lemma 36 and noting that 1α > 0 , with probability of at least 1−5δ̃ , we have E F 0 ( Dn ) =T1 ( Dn ) +T2 ( Dn ) = [ 1 2 logdet ( I+ n σ2 ΛR ) − 1 2 Tr ( I− ( I+ n σ2 ΛR ) −1 ) + n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR ] ( 1+o ( 1 ) ) =Θ ( nmax { 1 α , 1−2β α +1 } ) Furthermore , we have logdet ( I+ n σ2 Λ ) −logdet ( I+ n σ2 ΛR ) = ∞∑ p=R+1 log ( 1+ n σ2 λp ) ≤ n σ2 ∞∑ p=R+1 λp≤ n σ2 ∞∑ p=R+1 Cλp −α= n σ2 O ( R1−α ) = n σ2 O ( n ( 1−α ) ( 1 α+κ ) ) =o ( n 1 α ) . Then we have log det ( I + nσ2 ΛR ) = log det ( I + n σ2 Λ ) ( 1 + o ( 1 ) ) . Similarly we can prove Tr ( I− ( I+ nσ2 Λ ) −1 ) = Tr ( I− ( I+ nσ2 ΛR ) −1 ) ( 1 + o ( 1 ) ) and µT ( I + nσ2 Λ ) −1µ = µTR ( I+ n σ2 ΛR ) −1µR ( 1+o ( 1 ) ) . Letting δ=5δ̃ , we get the result . In the case of µ0 > 0 , we have the following lemma : Lemma 37 . Assume that σ2 = Θ ( 1 ) . Let R= n 1α+κ where 0 < κ < α−1−2τα2 . Assume that µ0 > 0 . Under Assumptions 4 , 5 and 6 , for sufficiently large nwith probability of at least 1−4δ we have |T2 , R ( Dn ) −T2 ( Dn ) |=Õ ( ( 1 δ +1 ) nmax { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) .. ( 102 ) Proof of Lemma 37 . As for |T2 ( Dn ) −T2 , R ( Dn ) | , we have |T2 ( Dn ) −T2 , R ( Dn ) |= ∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ + ∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ . ( 103 ) For the first term on the right-hand side of ( 103 ) , we have∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ ≤2 ∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣+∣∣∣∣f > R ( x ) T ( I+ ΦΛΦTσ2 ) −1f > R ( x ) ∣∣∣∣ ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1fR ( x ) ‖2+‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1‖2‖f > R ( x ) ‖2 ≤2‖f > R ( x ) ‖2‖ ( I+ ΦΛΦT σ2 ) −1fR ( x ) ‖2+‖f > R ( x ) ‖22 . 
Applying Corollary 19 and Lemma 31 , with probability of at least 1−4δ , we have∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ ≤2Õ ( √ ( 1 δ +1 ) nR1−2β ) Õ ( √ ( 1 δ +1 ) n ) +Õ ( ( 1 δ +1 ) nR1−2β ) =2Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) +Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) ( 1−2β ) ) =2Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) . As for the second term on the right-hand side of ( 80 ) , according to Lemma 28 , Corollary 26 and Lemma 30 , we have∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ = ∣∣∣∣∣∣ ∞∑ j=1 ( −1 ) jfR ( x ) T ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ∣∣∣∣∣∣ ≤ ∞∑ j=1 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖j−12 ·‖ Φ > RΛ > RΦ T > R σ2 ‖j2 ·‖ ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖22 = ∞∑ j=1 Õ ( n−jκα ) Õ ( ( 1 δ +1 ) n ) =Õ ( ( 1 δ +1 ) n1−κα ) . ( 104 ) By ( 80 ) , we have |T2 ( Dn ) −T2 , R ( Dn ) |=Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) +Õ ( ( 1 δ +1 ) n1−κα ) =Õ ( ( 1 δ +1 ) nmax { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) . Lemma 38 . Assume that σ2 = Θ ( 1 ) . Let R= n 1α+κ where 0 < κ < min { α−1−2τ2α2 , 2β−1 α2 } . Assume that µ0 > 0 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ , we have T2 , R ( Dn ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) . ( 105 ) Proof of Lemma 38 . Let A= ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2 , ( 106 ) where 1+α+2τ2α < γ≤1 . By Corollary 22 , with probability of at least 1−δ , we have ‖A‖2 =Õ ( n 1−2γα+α+2τ 2α ) . ( 107 ) When n is sufficiently large , ‖A‖2 is less than 1 . Let µR,1 = ( µ0,0 , ... ,0 ) and µR,2 = ( 0 , µ1 , ... , µR ) . Then µR=µR,1+µR,2 . Let Λ̃1 , R=diag { 1 , λ1 , ... , λR } and I0 , R= ( 0,1 , ... ,1 ) . Then ΛR=Λ̃1 , RI0 , R . Let B = ( I + nσ2 ΛR ) −1/2Λ̃ 1/2 1 , R ( Φ T RΦR − nI ) Λ̃ 1/2 1 , R ( I + n σ2 ΛR ) −1/2 . By Corollary 23 , we have ‖B‖2 =O ( √ logRδ n 1 2 ) . 
By Lemma 34 , we have T2 , R ( Dn ) = n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR + 1 2 ∞∑ j=1 [ ( −1 ) j+1µTR 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR ] ( 108 ) As for the first term on the right hand side of ( 108 ) , by Lemma 15 , we have n 2σ2 µT ( I+ n σ2 Λ ) −1µ≤ n 2σ2 ( µ20+ R∑ p=1 C2µp −2β 1+ nσ2Cλp −α ) = n 2σ2 µ20+Õ ( n max { 0,1+ 1−2βα } ) . We defineQ1 , j , Q2 , j andQ3 , j by Q1 , j=µ T R,1 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,1 Q2 , j=µ T R,1 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,2 Q3 , j=µ T R,2 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,2 ( 109 ) The quantity Q3 , j actually shows up in the case of µ0 = 0 in the proof of Lemma 35 . By ( 92 ) , ( 94 ) and ( 95 ) , we have that | ∞∑ j=1 ( −1 ) j+1Q3 , j |= | ∞∑ j=1 ( −1 ) j+1Õ ( n ( j−1 ) ( 1−α+2τ ) 2α ) o ( nmax { 0,1+ 1−2β α } ) |=o ( nmax { 0,1+ 1−2β α } ) . ( 110 ) ForQ1 , j , we have Q1,1 = 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( I+ n σ2 ΛR ) −1+ γ2 µR,1 ≤ 1 σ2j ‖µR,1‖22‖ ( I+ n σ2 ΛR ) −1+ γ2 ‖22‖B‖2 =O ( √ log R δ n 1 2 ) , where in the last equality we use ‖B‖2 =O ( √ logRδ n 1 2 ) . For j≥2 , we have Q1 , j= 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( ( I+ n σ2 ΛR ) −1+γΛ1−γR A ) j−2 ( I+ n σ2 ΛR ) −1+γΛ1−γR B ( I+ n σ2 ΛR ) −1+ γ2 µR,1 ≤ 1 σ2j ‖µR,1‖22‖ ( I+ n σ2 ΛR ) −1+ γ2 ‖22‖B‖22‖A‖ j−2 2 ‖ ( I+ n σ2 ΛR ) −1+γΛ1−γR ‖ j−1 2 =O ( log R δ n·n ( j−2 ) ( 1−2γα+α+2τ ) 2α ·n− ( 1−γ ) ( j−1 ) ) =O ( log R δ nγ ·n ( j−2 ) ( 1−α+2τ ) 2α ) . 
Then we have | ∞∑ j=1 ( −1 ) j+1Q1 , j |≤O ( √ log R δ n 1 2 ) + ∞∑ j=2 O ( log R δ nγ ·n ( j−2 ) ( 1−α+2τ ) 2α ) =O ( log R δ nγ ) ( 111 ) ForQ2 , j , we have Q2 , j= 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( ( I+ n σ2 ΛR ) −1+γΛ1−γR A ) j−1 ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2 ≤ 1 σ2j ‖µR,1‖2‖B‖2‖A‖j−12 ‖ ( I+ n σ2 ΛR ) −1+γΛ1−γR ‖ j−1 2 ‖ ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2‖2 =O ( √ log R δ n 1 2 ·n ( j−1 ) ( 1−α+2τ ) 2α ) ‖ ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2‖2 . Since ‖ ( I+ nσ2 Λ ) −1+ γ2 Λ̃ − γ2 1 , RµR,2‖2 is actually the case of µ0 = 0 , we can use ( 93 ) in the proof of Lemma 35 and get ‖ ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2‖ 2 2 =‖ ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R µ1 : R‖ 2 2 =Õ ( nmax { −2+γ , 1−2β α +γ+κ ( 1−2β+αγ ) } =Õ ( nmax { −2+γ , 1−2β α +γ+κ ( 1−2β+αγ ) } ) =o ( nγ ) , ( 112 ) where in the last equality we use κ < 2β−1α2 . Then we have | ∞∑ j=1 ( −1 ) j+1Q2 , j |≤ ∞∑ j=1 o ( √ log R δ n 1+γ 2 ·n ( j−1 ) ( 1−α+2τ ) 2α ) =o ( √ log R δ n 1+γ 2 ) ( 113 ) Choosing γ= 12 ( 1+ 1+α+2τ 2α ) = 1+3α+2τ 4α < 1 , we have T2 , R ( Dn ) = n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR+ ∞∑ j=1 ( −1 ) j+1 ( Q1 , j+Q2 , j+Q3 , j ) = n 2σ2 µ20+Õ ( n max { 0,1+ 1−2βα } ) +o ( nmax { 0,1+ 1−2β α } ) +O ( log R δ nγ ) +o ( √ log R δ n 1+γ 2 ) = n 2σ2 µ20+Õ ( n max { 1+γ2 ,1+ 1−2β α } ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) . Proof of Theorem 8 . Let R = n 1 α+κ where 0 < κ < min { α−1−2τ2α2 , 2β−1 α2 } . Since 0 ≤ q < min { 2β−12 , α } · min { α−1−2τ 2α2 , 2β−1 α2 } , we can choose κ < min { α−1−2τ 2α2 , 2β−1 α2 } and κ is arbitrarily close to κ < min { α−1−2τ2α2 , 2β−1 α2 } such that 0≤ q < min { ( 2β−1 ) κ 2 , κα } . Then we have ( 1α+κ ) 1−2β 2 +q < 0 , and−κα+q < 0 . 
As for T2 ( Dn ) , we have T2 ( Dn ) ≤T2 , R ( Dn ) +|T2 , R ( Dn ) −T2 ( Dn ) | = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) +Õ ( ( 1δ +1 ) n max { 1+ ( 1α+κ ) 1−2β 2 ,1−κα } ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) +Õ ( nq+max { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) = n 2σ2 µ20+o ( n ) . By Lemma 36 , we have T1 ( Dn ) = O ( n 1 α ) . Hence E F 0 ( Dn ) = T1 ( Dn ) + T2 ( Dn ) = n 2σ2µ 2 0+o ( n ) . D.2 PROOFS RELATED TO THE ASYMPTOTICS OF THE GENERALIZATION ERROR Lemma 39 . Assume σ2 = Θ ( nt ) where 1− α1+2τ < t < 1 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ over sample inputs ( xi ) ni=1 , we have G1 ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I+ nσ2 ΛR ) −1ΛR−‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F ) =Θ ( n ( 1−α ) ( 1−t ) α ) . ( 114 ) Proof of Lemma 39 . Let G1 , R ( Dn ) = E ( xn+1 , yn+1 ) ( T1 , R ( Dn+1 ) − T1 , R ( Dn ) ) , where R = nC for some constant C. By Lemma 32 , we have that |G1 ( Dn ) −G1 , R ( Dn ) |= ∣∣E ( xn+1 , yn+1 ) [ T1 ( Dn+1 ) −T1 , R ( Dn+1 ) ] − [ T1 ( Dn ) −T1 , R ( Dn ) ] ∣∣ = ∣∣E ( xn+1 , yn+1 ) O ( ( n+1 ) R1−α ) ∣∣+∣∣O ( nR1−α ) ] ∣∣ =O ( 1σ2nR 1−α ) . ( 115 ) Define ηR= ( φ0 ( xn+1 ) , φ1 ( xn+1 ) , ... , φR ( xn+1 ) ) T and Φ̃R= ( ΦTR , ηR ) T . As forG1 , R ( Dn ) , we have G1 , R ( Dn ) =E ( xn+1 , yn+1 ) ( T1 , R ( Dn+1 ) −T1 , R ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2 logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) − 1 2 Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1 ) ) − ( 1 2 logdet ( I+ ΦRΛRΦ T R σ2 ) − 1 2 Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃R T σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) − 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1 ) −Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) ) . 
( 116 ) As for the first term in the right hand side ( 116 ) , we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ΛRΦ̃ T RΦ̃R σ2 ) −logdet ( I+ ΛRΦ T RΦR σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ΛRΦ T RΦR+ηRη T R σ2 ) −logdet ( I+ ΛRΦ T RΦR σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( ( I+ ΛRΦ T RΦR σ2 ) −1 ( I+ ΛRΦ T RΦR σ2 + ΛRηRη T R σ2 ) ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ( I+ ΛRΦ T RΦR σ2 ) −1 ΛRηRη T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) log ( 1+ 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ) Let A= ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2 . ( 117 ) According to Corollary 22 , with probability of at least 1 − δ , we have ‖ 1σ2A‖2 = O ( √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α ) = o ( 1 ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 . By Lemma 27 , we have ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR =ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ( −1 ) jηTR ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ n σ2 ΛR ) −1ΛRηR =ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ( −1 ) j 1 σ2j ηTR ( I+ n σ2j ΛR ) −1/2Λ 1/2 R A j ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ηR ≤ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ‖A‖j2‖ ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ηR‖ 2 2 ≤ R∑ p=1 φ2p ( xn+1 ) Cλp −α 1+nCλp−α/σ2 + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) R∑ p=1 φ2p ( xn+1 ) Cλp −α 1+nCλp−α/σ2 ≤ R∑ p=1 Cλp −αp2τ 1+nCλp−α/σ2 + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) R∑ p=1 Cλp −αp2τ 1+nCλp−α/σ2 ≤O ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) O ( n ( 1−α ) ( 1−t ) α ) =O ( n ( 1−α ) ( 1−t ) α ) =o ( 1 ) , ( 118 ) where we use Lemma 15 in the last inequality . 
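The series manipulations above (via Lemma 27) rest on the matrix Neumann expansion (I+B)^{-1} = Σ_{j≥0} (−1)^j B^j, valid whenever ‖B‖₂ < 1, which is why the proofs repeatedly verify that ‖A/σ²‖₂ = o(1) before expanding. A minimal numerical sanity check of this expansion (illustrative only; the matrix size, seed, norm 0.3, and truncation length 80 are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
B *= 0.3 / np.linalg.norm(B, 2)  # rescale so the spectral norm is 0.3 < 1

# Truncated Neumann series: (I + B)^{-1} = sum_{j>=0} (-1)^j B^j
series = sum((-1.0) ** j * np.linalg.matrix_power(B, j) for j in range(80))
direct = np.linalg.inv(np.eye(5) + B)
err = np.linalg.norm(series - direct, 2)  # truncation error decays like 0.3^80
```

With ‖B‖₂ bounded away from 1, the geometric decay of the tail is what lets the proofs sum the j-indexed error terms into a single Õ(·) bound.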
Next we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) log ( 1+ 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ) = 1 2 ( E ( xn+1 , yn+1 ) ( 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ( 1+o ( 1 ) ) ) = 1 2σ2 ( Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR ) ( 1+o ( 1 ) ) , where in the last equality we use the fact that E ( xn+1 , yn+1 ) ηRηTR=I . By Lemma 27 , we have Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR =Tr ( I+ n σ2 ΛR ) −1ΛR+ ∞∑ j=1 ( −1 ) jTr ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ n σ2 ΛR ) −1ΛR =Tr ( I+ n σ2 ΛR ) −1ΛR+ ∞∑ j=1 ( −1 ) jTr 1 σ2 ( I+ n σ2 ΛR ) −1/2Λ 1/2 R A j ( I+ n σ2 ΛR ) −1/2Λ 1/2 R . By Lemma 15 , we have Tr ( I+ n σ2 ΛR ) −1ΛR≤ R∑ p=1 Cλp −α 1+nCλp−α/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) Tr ( I+ n σ2 ΛR ) −1ΛR≥ R∑ p=1 Cλp −α 1+nCλp−α/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) . Overall , Tr ( I+ n σ2 ΛR ) −1ΛR=Θ ( n ( 1−α ) ( 1−t ) α ) . ( 119 ) Since ‖ 1σ2A‖ j 2 =o ( 1 ) , we have that the absolute values of diagonal entries of 1 σ2jA j are at most o ( 1 ) ) . Let ( Aj ) p , p denote the ( p , p ) -th entry of the matrixAj . Then we have∣∣∣∣Tr 1σ2 ( I+ nσ2 ΛR ) −1/2Λ1/2R Aj ( I+ nσ2 ΛR ) −1/2Λ1/2R ∣∣∣∣ = ∣∣∣∣∣ R∑ p=1 λp 1 σ2j ( A j ) p , p 1+nλp/σ2 ∣∣∣∣∣≤ R∑ p=1 λp‖A‖j2 1+nλp/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 120 ) where in the last step we used ( 119 ) . According to ( 119 ) and ( 120 ) , we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2σ2 ( Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR ) ( 1+o ( 1 ) ) =Θ ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) =Θ ( n ( 1−α ) ( 1−t ) α ) +Θ ( n ( 1−α ) ( 1−t ) α ) o ( 1 ) =Θ ( n ( 1−α ) ( 1−t ) α ) = 1 2σ2 ( Tr ( I+ n σ2 ΛR ) −1ΛR ) ( 1+o ( 1 ) ) . 
( 121 ) Using the Woodbury matrix identity , the second term in the right hand side ( 116 ) is given by 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1−Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) = 1 2 ( E ( xn+1 , yn+1 ) Tr ( 1 σ2 Φ̃R ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1ΛRΦ̃ T R−Tr ( 1 σ2 ΦR ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRΦ T R ) = 1 2 ( E ( xn+1 , yn+1 ) Tr ( 1 σ2 ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1ΛRΦ̃ T RΦ̃R−Tr ( 1 σ2 ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRΦ T RΦR ) =−1 2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1−Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ) =−1 2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ T RΦR+ 1 σ2 ΛRηRη T R ) −1−Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ) = 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 1+ 1σ2 η T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηR ) , where the last equality uses the Sherman–Morrison formula . According to ( 118 ) , we get 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 1+ 1σ2 η T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηR ) = 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ( 1+o ( 1 ) ) ) = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛR ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 TrΛ 1/2 R ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1Λ 1/2 R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1Λ 1/2 R ( I+ 1 σ2 ΛRΦ T RΦR ) −1Λ 1/2 R = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1ΛR ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1 = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ 1 σ2 A ) −1 ( I+ n σ2 ΛR ) −1/2‖2F , where in the penultimate equality we use Tr ( BBT ) =‖B‖2F , ‖B‖F is the Frobenius norm ofA , and in the last equality we use the definition ofA ( 117 ) . 
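The rank-one step above uses the Sherman–Morrison formula, (M + uvᵀ)^{-1} = M^{-1} − M^{-1}uvᵀM^{-1}/(1 + vᵀM^{-1}u), applied with the rank-one perturbation ΛᵣηᵣηᵣᵀΛᵣ... induced by the new sample point. A quick numerical check of the formula itself (the dimensions, seed, and scaling are arbitrary illustrative choices chosen to keep M well conditioned and the denominator away from zero):

```python
import numpy as np

rng = np.random.default_rng(1)
M = np.eye(6) + 0.1 * rng.standard_normal((6, 6))  # well-conditioned base matrix
u = 0.1 * rng.standard_normal(6)
v = 0.1 * rng.standard_normal(6)

Minv = np.linalg.inv(M)
# Sherman-Morrison: (M + u v^T)^{-1} = M^{-1} - (M^{-1} u)(v^T M^{-1}) / (1 + v^T M^{-1} u)
updated = Minv - np.outer(Minv @ u, v @ Minv) / (1.0 + v @ Minv @ u)
err = np.linalg.norm(updated - np.linalg.inv(M + np.outer(u, v)), 2)
```

In the proof, this identity is what turns the difference of two (n vs. n+1 sample) resolvents into a single rank-one term whose trace can be taken in expectation over the new input.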
Then we have 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ 1 σ2 A ) −1 ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ ∞∑ j=1 ( −1 ) j 1 σ2j Aj ) ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖2F . ( 122 ) By Lemma 15 , we have ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F ≤ √√√√ R∑ p=1 Cλp−α ( 1+nCλp−α/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) 2α ) ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F ≥ √√√√ R∑ p=1 Cλp−α ( 1+nCλp−α/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) 2α ) . Overall , we have ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F =Θ ( n ( 1−α ) ( 1−t ) 2α ) . ( 123 ) Since ‖ 1σ2A‖2 =O ( √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α ) =o ( 1 ) , we have ‖ 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖F ≤‖Λ1/2R ( I+ n σ2 ΛR ) −1/2‖F ‖ 1 σ2 A‖j2‖ ( I+ n σ2 ΛR ) −1/2‖2 =O ( n ( 1−α ) ( 1−t ) 2α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 124 ) where in the first inequality we use the fact that ‖AB‖F ≤ ‖A‖F ‖B‖2 when B is symmetric . By Lemma 15 , we have 1 σ2j ∣∣∣TrΛ1/2R ( I+ nσ2 ΛR ) −1Λ1/2R ( I+ nσ2 ΛR ) −1/2Aj ( I+ nσ2 ΛR ) −1/2∣∣∣ = ∣∣∣∣∣ R∑ p=1 λp ( ( 1 σ2A ) j ) p , p ( 1+nλp/σ2 ) 2 ∣∣∣∣∣≤ R∑ p=1 λp‖ 1σ2A‖ j 2 ( 1+nλp/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 125 ) According to ( 123 ) , ( 124 ) and ( 125 ) , we have 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1−Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛR ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ( ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F + ∞∑ j=1 ∥∥∥∥ 1σ2j Λ1/2R ( I+ nσ2 ΛR ) −1/2Aj ( I+ nσ2 ΛR ) −1/2 ∥∥∥∥2 F +2TrΛ 1/2 R ( I+ n σ2 ΛR ) −1 ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2 ) = 1+o ( 1 ) 2σ2 ( Θ ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 1 σ2j O ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) +2 ∞∑ j=1 1 σ2j Θ ( n ( 1−α ) ( 
1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) ) =Θ ( n ( 1−α ) ( 1−t ) α ) = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F . ( 126 ) Combining ( 121 ) and ( 126 ) we get that G1 , R ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I + n σ2 ΛR ) −1ΛR + ‖Λ1/2R ( I + n σ2 ΛR ) −1‖2F ) = Θ ( n ( 1−α ) ( 1−t ) α ) . From ( 115 ) we have that G1 ( Dn ) ≤ G1 , R ( Dn ) + |G1 ( Dn ) − G1 , R ( Dn ) | = Θ ( n ( 1−α ) ( 1−t ) α ) +O ( n 1σ2R 1−α ) . Choosing R = n ( 2α−1 α ( α−1 ) +1 ) ( 1−t ) we conclude the proof . Lemma 40 . Assume σ2 =Θ ( nt ) where 1− α1+2τ < t < 1 . Let S=n D. Assume that ‖ξ‖2 =1 . When n is sufficiently large , with probability of at least 1−2δ we have ‖ ( I+ 1σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2 =O ( √ ( 1δ +1 ) n·n − ( 1−t ) ) . ( 127 ) Proof of Lemma 40 . Using the Woodbury matrix identity , we have that ( ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦSΛSξ= [ I−ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSΦTS ] ΦSΛSξ =ΦSΛSξ−ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSΦTSΦSΛSξ =σ2ΦS ( σ 2I+ΛSΦ T SΦS ) −1ΛSξ . ( 128 ) Let A= ( I+ nσ2 ΛS ) −γ/2Λ γ/2 S ( Φ T SΦS−nI ) Λ γ/2 S ( I+ n σ2 ΛS ) −γ/2 , where γ > 1+α+2τ− ( 1+2τ+2α ) t2α ( 1−t ) . By Corollary 22 , with probability of at least 1−δ , we have ‖ 1σ2A‖2 =Õ ( n 1+α+2τ− ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 . By Lemma 27 , we have ( I+ 1 σ2 ΛSΦ T SΦS ) −1 = ( I+ n σ2 ΛS ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1 . Then we have ‖ ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 = 1 σ2 ∥∥∥∥∥∥ ( I+ n σ2 ΛS ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1 ΛSξ ∥∥∥∥∥∥ 2 ≤ 1 σ2 ‖ ( I+ n σ2 ΛS ) −1ΛSξ‖2+ ∞∑ j=1 ∥∥∥∥∥ ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1ΛSξ ∥∥∥∥∥ 2 . ( 129 ) For the first term in the right hand side of the last equation , we have ‖ ( I+ n σ2 ΛS ) −1ΛSξ‖2≤‖ ( I+ n σ2 ΛS ) −1ΛS‖2‖ξ‖2≤ σ2 n =O ( n−1 ) . 
( 130 ) Using the fact that ‖ 1σ2A‖2 = Õ ( n 1+α+2τ− ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) and ‖ ( I+ nσ2 ΛS ) −1ΛS‖2 ≤ n−1 , we have∥∥∥∥∥ ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1ΛSξ ∥∥∥∥∥ 2 = 1 σ2j ∥∥∥∥ ( I+ nσ2 ΛS ) −1+ γ2 Λ1− γ2S ( A ( I+ nσ2 ΛS ) −1+γΛ1−γS ) j−1A ( I+ nσ2 ΛS ) −1+ γ2 Λ− γ2S ΛSξ ∥∥∥∥ 2 ≤n ( 1−t ) ( −1+ γ 2 + ( −1+γ ) ( j−1 ) ) Õ ( n j ( 1+α+2τ− ( 1+2τ+2α ) t ) 2α −jγ ( 1−t ) ) ‖ ( I+ n σ2 ΛS ) −1+ γ2 Λ 1− γ2 S ξ‖2 =Õ ( n− γ 2 ( 1−t ) + ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) ‖ ( I+ n σ2 ΛS ) −1+ γ2 Λ 1− γ2 S ‖2‖ξ‖2 =Õ ( n− γ 2 ( 1−t ) + ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) O ( n ( −1+γ/2 ) ( 1−t ) ) =Õ ( n− ( 1−t ) + ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) . ( 131 ) Using ( 129 ) , ( 130 ) and ( 131 ) , we have ‖ ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 =σ−2Õ ( n−1 ) + ∞∑ j=1 Õ ( n−1+ ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) =Õ ( n− ( 1−t ) ) +Õ ( n−1+ 1−α+2τ− ( 1+2τ ) t 2α ) =Õ ( n− ( 1−t ) ) . ( 132 ) By Corollary 20 , with probability of at least 1−δ , we have ‖ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 =Õ ( √ ( 1 δ +1 ) n‖ ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 ) =Õ ( √ ( 1 δ +1 ) n·n− ( 1−t ) ) . From ( 128 ) we get ‖ ( I + 1σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2 = Õ ( √ ( 1δ +1 ) n ·n − ( 1−t ) ) . This concludes the proof . Lemma 41 . Assume σ2 = Θ ( nt ) where 1 − α1+2τ < t < 1 . Let δ = n −q where 0≤q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 . Under Assumptions 4 , 5 and 6 , assume that µ0 =0 . Then with probability of at least 1−6δ over sample inputs ( xi ) ni=1 , we have G2 ( Dn ) = ( 1+o ( 1 ) ) 2σ2 ‖ ( I+ n σ2 ΛR ) −1µR‖22 = Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) , where k= { 0 , 2α≠2β−1 , 1 , 2α=2β−1 } . Proof of Lemma 41 . Let S = nD . Let G2 , S ( Dn ) = E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) − T2 , S ( Dn ) ) . By Lemma 33 , when S is large enough , with probability of at least 1−3δ we have that |G2 ( Dn ) −G2 , S ( Dn ) |=Õ ( ( 1 δ +1 ) n 1 σ2 Smax { 1/2−β,1−α } ) . ( 133 ) Let Λ1 : S = diag { λ1 , ... , λS } , Φ1 : S = ( φ1 ( x ) , φ2 ( x ) , ... , φS ( x ) ) and µ1 : S = ( µ1 , ... , µS ) .
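Identity (128) used in the proof of Lemma 40 is the push-through form of the Woodbury matrix identity: (I + σ^{-2}Φ_S Λ_S Φ_Sᵀ)^{-1} Φ_S Λ_S ξ = σ² Φ_S (σ²I + Λ_S Φ_SᵀΦ_S)^{-1} Λ_S ξ, which moves the inverse from an n×n matrix to an S×S one. A small numerical check (the dimensions n, S, the value of σ², and the random entries are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, S, sigma2 = 8, 5, 0.5
Phi = rng.standard_normal((n, S))              # feature matrix
Lam = np.diag(rng.uniform(0.1, 1.0, size=S))   # positive diagonal "eigenvalue" matrix
xi = rng.standard_normal(S)

# Left side: (I_n + Phi Lam Phi^T / sigma^2)^{-1} Phi Lam xi
lhs = np.linalg.solve(np.eye(n) + Phi @ Lam @ Phi.T / sigma2, Phi @ Lam @ xi)
# Right side: sigma^2 Phi (sigma^2 I_S + Lam Phi^T Phi)^{-1} Lam xi
rhs = sigma2 * Phi @ np.linalg.solve(sigma2 * np.eye(S) + Lam @ Phi.T @ Phi, Lam @ xi)
err = np.linalg.norm(lhs - rhs)
```

The same dimension swap is what makes the truncated S-feature analysis tractable: all subsequent norm bounds are stated for the S×S resolvent (σ²I + Λ_S Φ_SᵀΦ_S)^{-1}.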
Since µ0 = 0 , we have T2 , S ( Dn ) = 12σ2µ T 1 : SΦ T 1 : S ( I + 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : Sµ1 : S . Define η1 : S = ( φ1 ( xn+1 ) , ... , φS ( xn+1 ) ) T and Φ̃1 : S= ( ΦT1 : S , η1 : S ) T . In the proof of Lemma 34 , we showed that T2 , S ( Dn ) = 1 2σ2 µT1 : SΦ T 1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : Sµ1 : S = 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S . We have G2 , S ( Dn ) =E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) −T2 , S ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ̃ T S Φ̃S ) −1µ1 : S ) − ( 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S ) ) =E ( xn+1 , yn+1 ) ( 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ̃ T S Φ̃S ) −1µ1 : S ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µT1 : SΛ −1 1 : S ( I+ 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 1+ 1σ2 η T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : S µ1 : S ) ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µT1 : S ( I+ 1 σ2 Φ T 1 : SΦ1 : SΛ1 : S ) −1η1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S 1+ 1σ2 η T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : S ) ) =E ( xn+1 , yn+1 ) ( 1+o ( 1 ) 2σ2 µT1 : S ( I+ 1 σ2 ΦT1 : SΦ1 : SΛ1 : S ) −1η1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S ) = 1+o ( 1 ) 2σ2 µT1 : S ( I+ 1 σ2 ΦT1 : SΦ1 : SΛ1 : S ) −1 ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S = 1+o ( 1 ) 2σ2 ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖22 , ( 134 ) where in the fourth to last equality we used the Sherman–Morrison formula , in the third inequality we used ( 118 ) , and in the last equality we used the fact that E ( xn+1 , yn+1 ) η1 : SηT1 : S=I . Let µ̂1 : R= ( µ1 , ... , µR,0 , ... ,0 ) ∈RS . 
Then we have ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2≤‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ̂1 : R‖2+‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) ‖2 , ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2≥‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ̂1 : R‖2−‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) ‖2 . ( 135 ) ChooseR=n ( 1 α+κ ) ( 1−t ) where 0 < κ < α−1−2τ+ ( 1+2τ ) t2α2 ( 1−t ) . In Lemma 29 , ( 62 ) , we showed that with probability of at least 1−δ , ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : R ) −1µ1 : R‖2 , ( 136 ) where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . The same proof holds if we replace Φ1 : R with Φ1 : S , Λ1 : R with Λ1 : S , and µ1 : R with µ̂1 : R. We have ‖ ( σ2I+Λ1 : SΦT1 : SΦ1 : S ) −1µ̂1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 . ( 137 ) Next we bound ‖ ( I + 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S − µ̂1 : R ) ‖2 . By Assumption 5 , we have that ‖µ1 : S−µ̂1 : R‖2 =O ( R 1−2β 2 ) . 
For any ξ∈RS and ‖ξ‖2 =1 , using the Woodbury matrix identity , with probability of at least 1−2δ we have |ξT ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) | = |ξT ( I− 1 σ2 Λ1 : SΦ T 1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : S ) ( µ1 : S−µ̂1 : R ) | = |ξT ( µ1 : S−µ̂1 : R ) − 1 σ2 ξTΛ1 : SΦ T 1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : S ( µ1 : S−µ̂1 : R ) | ≤‖ξ‖2‖µ1 : S−µ̂1 : R‖2+ 1 σ2 |ξTΛ1 : SΦT1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : S ( µ1 : S−µ̂1 : R ) | ≤O ( R 1−2β 2 ) + 1 σ2 ‖ ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : SΛ1 : Sξ‖2‖Φ1 : S ( µ1 : S−µ̂1 : R ) ‖2 =O ( R 1−2β 2 ) + 1 σ2 O ( √ ( 1 δ +1 ) n·n− ( 1−t ) ) O ( √ ( 1 δ +1 ) nR 1−2β 2 ) =O ( ( 1 δ +1 ) R 1−2β 2 ) , where in the second to last step we used Corollary 20 to show ‖Φ1 : S ( µ1 : S − µ̂1 : R ) ‖2 = O ( √ ( 1δ +1 ) nR 1−2β 2 ) with probability of at least 1 − δ , and Lemma 40 to show that ‖ ( I + 1σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : SΛ1 : Sξ‖2 = O ( √ ( 1δ +1 ) n · n −1 ) with probability of at least 1−δ . SinceR=n ( 1α+κ ) ( 1−t ) , we have |ξT ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) |=O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . Since ξ is arbitrary , we have ‖ ( I + 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S − µ̂1 : R ) ‖2 = O ( ( 1δ + 1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . Since 0 ≤ q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 and 0 < κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) , we can choose κ < α−1−2τ+ ( 1+2τ ) t2α2 ( 1−t ) and κ is arbitrarily close to κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) such that 0≤q < ( 2β−1 ) ( 1−t ) κ2 . Then we have ( 1−2β ) ( 1−t ) κ 2 +q < 0 . 
From ( 135 ) and ( 137 ) , we have ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2 =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) +O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) +O ( ( nq+ ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) = ( 1+o ( 1 ) ) ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 = ( 1+o ( 1 ) ) ‖ ( I+ n σ2 ΛR ) −1µR‖2 . ( 138 ) Hence G2 , S ( Dn ) = 1+o ( 1 ) 2σ2 ‖ ( I + 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖22 = Θ ( n ( 1−t ) max { −2 , 1−2β α } logk/2 n ) . Then by ( 133 ) , G2 ( Dn ) =Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) +Õ ( ( 1δ +1 ) n 1 σ2S max { 1/2−β,1−α } ) . Choosing S=n ( 1+min { 2 , 2β−1 α } min { β−1/2 , α−1 } +1 ) ( 1−t ) , we get the result . Proof of Theorem 9 . From Lemmas 39 and 41 and 1α −1 > −2 , we have that with probability of at least 1−7δ̃ , E G ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I+ n σ2 ΛR ) −1ΛR−‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F +‖ ( I+ n σ2 ΛR ) −1µR‖22 ) =Θ ( n ( 1−α ) ( 1−t ) α ) +Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) =Θ ( nmax { ( 1−α ) ( 1−t ) α , ( 1−2β ) ( 1−t ) α } ) ( 139 ) where k= { 0 , 2α 6=2β−1 1 , 2α=2β−1 . Furthermore , we have Tr ( I+ n σ2 Λ ) −1Λ−Tr ( I+ n σ2 ΛR ) −1ΛR = ∞∑ p=R+1 λp 1+ nσ2λp ≤ ∞∑ p=R+1 Cλp −α 1+ nσ2Cλp −α ≤ ∞∑ p=R+1 Cλp −α= n σ2 O ( R1−α ) =O ( n ( 1−α ) ( 1−t ) ( 1 α+κ ) ) =o ( n ( 1−α ) ( 1−t ) α ) . Then we have Tr ( I+ n σ2 ΛR ) −1ΛR=Tr ( I+ n σ2 Λ ) −1Λ ( 1+o ( 1 ) ) . ( 140 ) Similarly we can prove ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F =‖Λ1/2 ( I+ n σ2 Λ ) −1‖2F ( 1+o ( 1 ) ) ( 141 ) ‖ ( I+ n σ2 ΛR ) −1µR‖22 =‖ ( I+ n σ2 Λ ) −1µ‖22 ( 1+o ( 1 ) ) ( 142 ) Letting δ=7δ̃ , the proof is complete . In the case of µ0 > 0 , we have the following lemma : Lemma 42 . Let δ = n−q where 0 ≤ q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 . Under Assumptions 4 , 5 and 6 , assume that µ0 > 0 . 
Then with probability of at least 1− 6δ over sample inputs ( xi ) ni=1 , we have G2 ( Dn ) = 1 2σ2µ 2 0+o ( 1 ) . Proof of Lemma 42 . Let S = nD . Let G2 , S ( Dn ) = E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) − T2 , S ( Dn ) ) . By Lemma 33 , when S is large enough , with probability of at least 1−3δ we have that |G2 ( Dn ) −G2 , S ( Dn ) |= ∣∣E ( xn+1 , yn+1 ) [ T2 ( Dn+1 ) −T2 , S ( Dn+1 ) ] − [ T2 ( Dn ) −T2 , S ( Dn ) ] ∣∣ = ∣∣∣∣E ( xn+1 , yn+1 ) Õ ( ( 1δ+1 ) ( n+1 ) 1σ2Smax { 1/2−β,1−α } ) ∣∣∣∣ + ∣∣∣∣Õ ( ( 1δ+1 ) n 1σ2Smax { 1/2−β,1−α } ) ∣∣∣∣ =Õ ( ( 1 δ +1 ) n 1 σ2 Smax { 1/2−β,1−α } ) . ( 143 ) Let ΛS = diag { λ1 , ... , λS } , ΦS = ( φ0 ( x ) , φ1 ( x ) , ... , φS ( x ) ) and µS = ( µ0 , µ1 , ... , µS ) . Define ηS = ( φ0 ( xn+1 ) , φ1 ( xn+1 ) , ... , φS ( xn+1 ) ) T and Φ̃S = ( ΦTS , ηS ) T . By the same technique as in the proof of Lemma 34 , we replace ΛR by Λ̃ε , R=diag { ε , λ1 , ... , λR } , let ε→0 and show the counterpart of the result ( 134 ) in the proof of Lemma 41 : G2 , S ( Dn ) =E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) −T2 , S ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µTS ( I+ 1 σ2 Φ T SΦSΛS ) −1ηSη T S ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS 1+ 1σ2 η T S ( I+ 1 σ2 ΛSΦ T SΦS ) −1ΛSηS ) ) =E ( xn+1 , yn+1 ) ( 1+o ( 1 ) 2σ2 µTS ( I+ 1 σ2 ΦTSΦSΛS ) −1ηSη T S ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS ) = 1+o ( 1 ) 2σ2 µTS ( I+ 1 σ2 ΦTSΦSΛS ) −1 ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS = 1+o ( 1 ) 2σ2 ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖22 , ( 144 ) where in the fourth to last equality we used the Sherman–Morrison formula , in the third inequality we used ( 118 ) , and in the last equality we used the fact that E ( xn+1 , yn+1 ) ηSηTS=I . Let µ̂R= ( µ0 , µ1 , ... , µR,0 , ... ,0 ) ∈RS . Then we have ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖2≤‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µ̂R‖2+‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) ‖2 , ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖2≥‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µ̂R‖2−‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) ‖2 . ( 145 ) Choose R=n ( 1 α+κ ) ( 1−t ) where 0 < κ < α−1−2τ+ ( 1+2τ ) tα2 ( 1−t ) .
In Lemma 29 , ( 62 ) , we showed that with probability of at least 1−δ , ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : R ) −1µ1 : R‖2 , ( 146 ) where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . The same proof holds if we replace Φ1 : R with Φ1 : S , Λ1 : R with Λ1 : S , and µ1 : R with µ̂1 : R. We have ‖ ( σ2I+Λ1 : SΦT1 : SΦ1 : S ) −1µ̂1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 . ( 147 ) So we have ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µ̂R‖2 =µ0+Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) =µ0+o ( 1 ) . ( 148 ) Next we bound ‖ ( I + 1σ2 ΛSΦ T SΦS ) −1 ( µS − µ̂R ) ‖2 . By Assumption 5 , we have that ‖µS − µ̂R‖2 = O ( R 1−2β 2 ) . For any ξ ∈ RS and ‖ξ‖2 = 1 , using the Woodbury matrix identity , with probability of at least 1−2δ we have |ξT ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) | = |ξT ( I− 1 σ2 ΛSΦ T S ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦS ) ( µS−µ̂R ) | = |ξT ( µS−µ̂R ) − 1 σ2 ξTΛSΦ T S ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦS ( µS−µ̂R ) | ≤‖ξ‖2‖µS−µ̂R‖2+ 1 σ2 |ξTΛSΦTS ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦS ( µS−µ̂R ) | ≤O ( R 1−2β 2 ) + 1 σ2 ‖ ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2‖ΦS ( µS−µ̂R ) ‖2 =O ( R 1−2β 2 ) + 1 σ2 O ( √ ( 1 δ +1 ) n·n− ( 1−t ) ) O ( √ ( 1 δ +1 ) nR 1−2β 2 ) =O ( ( 1 δ +1 ) R 1−2β 2 ) , where in the second to last step we used Corollary 20 to show‖ΦS ( µS−µ̂R ) ‖2 =O ( √ ( 1δ +1 ) nR 1−2β 2 ) with probability of at least 1− δ , and Lemma 40 to show that ‖ ( I + 1σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2 = O ( √ ( 1δ +1 ) n·n − ( 1−t ) ) with probability of at least 1−δ . SinceR=n ( 1α+κ ) ( 1−t ) , we have |ξT ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) |=O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . Since ξ is arbitrary , we have ‖ ( I + 1σ2 ΛSΦ T SΦS ) −1 ( µS − µ̂R ) ‖2 = O ( ( 1δ + 1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . 
Since 0 ≤ q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 and 0 < κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) , we can choose κ < α−1−2τ+ ( 1+2τ ) t2α2 ( 1−t ) and κ is arbitrarily close to κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) such that 0≤q < ( 2β−1 ) ( 1−t ) κ2 . Then we have ( 1−2β ) ( 1−t ) κ 2 +q < 0 . From ( 145 ) and ( 148 ) , we have ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖2 =µ0+Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) +O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =µ0+Θ ( n ( 1−t ) max { −1 , 1−2β2α } logk/2n ) =µ0+o ( 1 ) . ( 149 ) Hence G2 , S ( Dn ) = 1+o ( 1 ) 2σ2 ‖ ( I + 1 σ2 ΛSΦ T SΦS ) −1µS‖22 = 12σ2µ 2 0 + o ( 1 ) . Then by ( 143 ) , G2 ( Dn ) = 1 2σ2µ 2 0+o ( 1 ) +Õ ( ( 1δ +1 ) nS max { 1/2−β,1−α } ) . Choosing S=n ( 1+min { 2 , 2β−1α } min { β−1/2 , α−1 } +1 ) ( 1−t ) , we get the result . Proof of Theorem 11 . According to Lemma 42 , G2 ( Dn ) = 12σ2µ 2 0 +o ( 1 ) . By Lemma 39 , we have G1 ( Dn ) =Θ ( n ( 1−α ) ( 1−t ) α ) . Then E G ( Dn ) =G1 ( Dn ) +G2 ( Dn ) = 12σ2µ 2 0+o ( 1 ) . D.3 PROOFS RELATED TO THE EXCESS MEAN SQUARED GENERALIZATION ERROR Proof of Theorem 12 . For µ0 =0 , we can show that E M ( Dn ) =E Exn+1 [ m̄ ( xn+1 ) −f ( xn+1 ) ] 2 =E Exn+1 [ Kxn+1x ( Kn+σ2modelIn ) −1y−f ( xn+1 ) ] 2 =E Exn+1 [ ηTΛΦT [ ΦΛΦT +σ2modelIn ) −1 ( Φµ+ ) −ηTµ ] 2 =E Exn+1 [ ηTΛΦT ( ΦΛΦT +σ2modelIn ) −1 ] 2 +Exn+1 [ ηT ( ΛΦT ( ΦΛΦT +σ2modelIn ) −1Φ−I ) µ ] 2 =σ2trueTrΛΦ T ( ΦΛΦT +σ2modelIn ) −2ΦΛ +µT ( I+ 1 σ2model ΦTΦΛ ) −1 ( I+ 1 σ2model ΛΦTΦ ) −1 µ = σ2true σ2model Tr ( I+ ΛΦ TΦ σ2model ) −1Λ−Tr ( I+ ΛΦ TΦ σ2model ) −2Λ+‖ ( I+ 1 σ2model ΛΦTΦ ) −1µ‖22 . According to ( 138 ) from the proof of Lemma 41 , the truncation procedure ( 133 ) and ( 142 ) , with probability of at least 1−δ we have ‖ ( I+ 1 σ2model ΛΦTΦ ) −1µ‖22 =Θ ( n max { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) = ( 1+o ( 1 ) ) ‖ ( I+ n σ2model Λ ) −1µ‖22 , where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . 
According to (121) and (126) from the proof of Lemma 39, the truncation procedure (115), (140) and (141), with probability at least $1-\delta$ we have
\[
\begin{aligned}
\mathrm{Tr}\big(I+\tfrac{\Lambda\Phi^T\Phi}{\sigma^2_{\mathrm{model}}}\big)^{-1}\Lambda-\mathrm{Tr}\big(I+\tfrac{\Lambda\Phi^T\Phi}{\sigma^2_{\mathrm{model}}}\big)^{-2}\Lambda
&=\Big(\mathrm{Tr}\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\Lambda\Big)(1+o(1))
-\Big\|\Lambda^{1/2}\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\Big\|_F^2\,(1+o(1))\\
&=\Theta\big(n^{\frac{(1-\alpha)(1-t)}{\alpha}}\big).
\end{aligned}
\]
Combining the above two equations we get
\[
\begin{aligned}
\mathbb{E}_\varepsilon\,M(D_n)
&=(1+o(1))\Big(\frac{\sigma^2_{\mathrm{true}}}{\sigma^2_{\mathrm{model}}}\Big(\mathrm{Tr}\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\Lambda-\big\|\Lambda^{1/2}\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\big\|_F^2\Big)
+\big\|\big(I+\tfrac{n}{\sigma^2_{\mathrm{model}}}\Lambda\big)^{-1}\mu\big\|_2^2\Big)\\
&=\frac{\sigma^2_{\mathrm{true}}}{\sigma^2_{\mathrm{model}}}\,\Theta\big(n^{\frac{(1-\alpha)(1-t)}{\alpha}}\big)+\Theta\big(n^{\max\{-2(1-t),\frac{(1-2\beta)(1-t)}{\alpha}\}}\log^{k/2}n\big)\\
&=\sigma^2_{\mathrm{true}}\,\Theta\big(n^{\frac{1-\alpha-t}{\alpha}}\big)+\Theta\big(n^{\max\{-2(1-t),\frac{(1-2\beta)(1-t)}{\alpha}\}}\log^{k/2}n\big)
=\Theta\big(\max\big\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\big\}\big).
\end{aligned}
\]
When $\mu_0>0$, according to (149) in the proof of Lemma 42 and the truncation procedure (133), with probability at least $1-\delta$ we have $\mathbb{E}_\varepsilon\,M(D_n)=\Theta\big(n^{\frac{(1-\alpha)(1-t)}{\alpha}}\big)+\mu_0^2+o(1)=\mu_0^2+o(1)$.

Summary: This paper considers the problem of fitting a target function corrupted by additive i.i.d. Gaussian noise with a zero-mean Gaussian process (GP). It provides an asymptotic characterisation of the typical Bayesian and mean-squared generalisation errors under the assumption of power-law decay of the covariance eigenvalues and of the target-function coefficients when expressed in the covariance eigenbasis. The main result is to derive the exponents characterising the rate of decay of the aforementioned errors with the number of samples $n$, as a function of the spectrum and target-function decays. The key technical step is to prove the concentration of $\Phi_{R}^{\top}\Phi_{R}\in\mathbb{R}^{R\times R}$ around $nI$, where $\Phi_{R}\in\mathbb{R}^{n\times R}$ is the truncated matrix of eigenfunctions evaluated at the data points, and $R=n^{\frac{2}{1+\alpha}-\kappa}$ with $0\leq \kappa < \frac{\alpha-1}{\alpha(\alpha+1)}$.
Finally, the authors provide one numerical experiment with the arc-cosine kernel and data uniformly distributed on the unit circle.
Learning Curves for Gaussian Process Regression with Power-Law Priors and Targets

1 INTRODUCTION

Gaussian processes (GPs) provide a flexible and interpretable framework for learning and adaptive inference, and are widely used for constructing prior distributions in non-parametric Bayesian learning. From an application perspective, one crucial question is how fast GPs learn, i.e., how much training data is needed to achieve a certain level of generalization performance. Theoretically, this is addressed by analyzing so-called "learning curves", which describe the generalization error as a function of the training set size $n$. The rate at which the curve approaches zero quantifies the difficulty of the learning task and conveys important information about the asymptotic performance of GP learning algorithms. In this paper, we study the learning curves of Gaussian process regression. Our main result characterizes the asymptotics of the generalization error in cases where the eigenvalues of the GP kernel and the coefficients of the eigenexpansion of the target function have a power-law decay. In the remainder of this introductory section, we review related work and outline our main contributions.

Gaussian processes A GP model is a probabilistic model on an infinite-dimensional parameter space (Williams and Rasmussen, 2006; Orbanz and Teh, 2010). In GP regression (GPR), for example, this space can be the set of all continuous functions. Assumptions about the learning problem are encoded by way of a prior distribution over functions, which gets transformed into a posterior distribution given some observed data. The mean of the posterior is then used for prediction. The model uses only a finite subset of the available parameters to explain the data, and this subset can grow arbitrarily large as more data are observed.
In this sense, GPs are "non-parametric" and contrast with parametric models, where there is a fixed number of parameters. For regression with Gaussian noise, a major appeal of the GP formalism is that the posterior is analytically tractable. GPs are also an important component of learning with kernel machines (Kanagawa et al., 2018), and modeling with GPs has recently gained considerable traction in the neural network community.

Neural networks and kernel learning From a GP viewpoint, there exists a well known correspondence between kernel methods and infinite neural networks (NNs), first studied by Neal (1996). Neal showed that the outputs of a randomly initialized one-hidden-layer neural network (with appropriate scaling of the variance of the initialization distribution) converge to a GP over functions in the limit of an infinite number of hidden units. Follow-up work extended this correspondence with analytical expressions for the kernel covariance for shallow NNs by Williams (1997), and more recently for deep fully-connected NNs (Lee et al., 2018; de G. Matthews et al., 2018), convolutional NNs with many channels (Novak et al., 2019; Garriga-Alonso et al., 2019), and more general architectures (Yang, 2019). The correspondence enables exact Bayesian inference in the associated GP model for infinite-width NNs on regression tasks and has led to some recent breakthroughs in our understanding of overparameterized NNs (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019; Belkin et al., 2018; Daniely et al., 2016; Yang and Salman, 2019; Bietti and Mairal, 2019). The most prominent kernels associated with infinite-width NNs are the Neural Network Gaussian Process (NNGP) kernel, when only the last layer is trained (Lee et al., 2018; de G. Matthews et al., 2018), and the Neural Tangent Kernel (NTK), when the entire model is trained (Jacot et al., 2018).
Empirical studies have shown that inference with such infinite network kernels is competitive with standard gradient-descent-based optimization for fully-connected architectures (Lee et al., 2020).

Learning curves A large-scale empirical characterization of the generalization performance of state-of-the-art deep NNs showed that the associated learning curves often follow a power law of the form $n^{-\beta}$, with the exponent $\beta$ ranging between 0.07 and 0.35 depending on the data and the algorithm (Hestness et al., 2017; Spigler et al., 2020). Power-law asymptotics of learning curves were theoretically studied in early works for the Gibbs learning algorithm (Amari et al., 1992; Amari and Murata, 1993; Haussler et al., 1996), which showed a generalization error scaling with exponent $\beta=0.5$, $1$ or $2$ under certain assumptions. More recent results from statistical learning theory characterize the shape of learning curves depending on the properties of the hypothesis class (Bousquet et al., 2021). In the context of GPs, approximations and bounds on learning curves have been investigated in several works (Sollich, 1999; Sollich and Halees, 2002; Sollich, 2001; Opper and Vivarelli, 1999; Opper and Malzahn, 2002; Williams and Vivarelli, 2000; Malzahn and Opper, 2001a;b; Seeger et al., 2008; Van Der Vaart and Van Zanten, 2011; Le Gratiet and Garnier, 2015), with recent extensions to kernel regression from a spectral bias perspective (Bordelon et al., 2020; Canatar et al., 2021). For a review of learning curves in relation to their shape and monotonicity, see Loog et al. (2019); Viering et al. (2019); Viering and Loog (2021). A related but complementary line of work studies the convergence rates and posterior consistency properties of Bayesian non-parametric models (Barron, 1998; Seeger et al., 2008; Van Der Vaart and Van Zanten, 2011).
Power-law decay of the GP kernel eigenspectrum The rate of decay of the eigenvalues of the GP kernel conveys important information about its smoothness. Intuitively, if a process is "rough", with more power at high frequencies, then the eigenspectrum decays more slowly. On the other hand, kernels that define smooth processes have a fast-decaying eigenspectrum (Stein, 2012; Williams and Rasmussen, 2006). The precise eigenvalues $(\lambda_p)_{p\ge1}$ of the operators associated to many kernels and input distributions are not known explicitly, except for a few special cases (Williams and Rasmussen, 2006). Often, however, the asymptotic properties are known. The asymptotic rate of decay of the eigenvalues of stationary kernels for input distributions with bounded support is well understood (Widom, 1963; Ritter et al., 1995). Ronen et al. (2019) showed that for inputs distributed uniformly on a hypersphere, the eigenfunctions of the arc-cosine kernel are spherical harmonics and the eigenvalues follow a power-law decay. The spectral properties of the NTK are integral to the analysis of training convergence and generalization of NNs, and several recent works empirically justify and rely on a power-law assumption for the NTK spectrum (Bahri et al., 2021; Canatar et al., 2021; Lee et al., 2020; Nitanda and Suzuki, 2021). Velikanov and Yarotsky (2021) showed that the asymptotics of the NTK spectrum of infinitely wide shallow ReLU networks follows a power law that is determined primarily by the singularities of the kernel and has the form $\lambda_p\propto p^{-\alpha}$ with $\alpha=1+\frac{1}{d}$, where $d$ is the input dimension.

Asymptotics of the generalization error of kernel ridge regression (KRR) There is a well known equivalence between GPR and KRR, with the additive noise in GPR playing the role of regularization in KRR (Kanagawa et al., 2018).
Analysis of the decay rates of the excess generalization error of KRR has appeared in several works, e.g., in the noiseless case with constant regularization (Bordelon et al., 2020; Spigler et al., 2020; Jun et al., 2019), and in the noisy optimally regularized case (Caponnetto and De Vito, 2007; Steinwart et al., 2009; Fischer and Steinwart, 2020), under the assumption that the kernel eigenspectrum and the eigenexpansion coefficients of the target function follow a power law. These assumptions, often called respectively the capacity and source conditions, are related to the effective dimension of the problem and the difficulty of learning the target function (Caponnetto and De Vito, 2007; Blanchard and Mücke, 2018). Cui et al. (2021) present a unifying picture of the excess-error decay rates under the capacity and source conditions in terms of the interplay between noise and regularization, illustrating their results with real datasets.

Contributions In this work, we characterize the asymptotics of the generalization error of GPR and KRR under the capacity and source conditions. Our main contributions are as follows:

• When the eigenspectrum of the prior decays with rate $\alpha$ and the eigenexpansion coefficients of the target function decay with rate $\beta$, we show that with high probability over the draw of $n$ input samples, the negative log-marginal likelihood behaves as $\Theta\big(n^{\max\{\frac{1}{\alpha},\,\frac{1-2\beta}{\alpha}+1\}}\big)$ (Theorem 7) and the generalization error behaves as $\Theta\big(n^{\max\{\frac{1}{\alpha}-1,\,\frac{1-2\beta}{\alpha}\}}\big)$ (Theorem 9). In the special case that the model is correctly specified, i.e., the GP prior is the true one from which the target functions are actually generated, our result implies that the generalization error behaves as $O\big(n^{\frac{1}{\alpha}-1}\big)$, recovering as a special case a result due to Sollich and Halees (2002) (vide Remark 10).
• Under similar assumptions as in the previous item, we leverage the equivalence between GPR and KRR to show that the excess generalization error of KRR behaves as $\Theta\big(n^{\max\{\frac{1}{\alpha}-1,\,\frac{1-2\beta}{\alpha}\}}\big)$ (Theorem 12). In the noiseless case with constant regularization, our result implies that the generalization error behaves as $\Theta\big(n^{\frac{1-2\beta}{\alpha}}\big)$, recovering as a special case a result due to Bordelon et al. (2020). Specializing to the case of KRR with Gaussian design, we recover as a special case a result due to Cui et al. (2021) (vide Remark 14). For the unrealizable case, i.e., when the target function is outside the span of the eigenfunctions with positive eigenvalues, we show that the generalization error converges to a constant.

• We present a few toy experiments demonstrating the theory for GPR with the arc-cosine kernel without biases (resp. with biases), which is the conjugate kernel of an infinitely wide shallow network with two inputs and one hidden layer without biases (resp. with biases) (Cho and Saul, 2009; Ronen et al., 2019).

2 BAYESIAN LEARNING AND GENERALIZATION ERROR FOR GPS

In GP regression, our goal is to learn a target function $f:\Omega\to\mathbb{R}$ between an input $x\in\Omega$ and output $y\in\mathbb{R}$ based on training samples $D_n=\{(x_i,y_i)\}_{i=1}^n$. We consider an additive noise model $y_i=f(x_i)+\varepsilon_i$, where $\varepsilon_i\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\sigma^2_{\mathrm{true}})$. If $\rho$ denotes the marginal density of the inputs $x_i$, then the pairs $(x_i,y_i)$ are generated according to the density $q(x,y)=\rho(x)q(y|x)$, where $q(y|x)=\mathcal{N}(y|f(x),\sigma^2_{\mathrm{true}})$. We assume that there is a prior distribution $\Pi_0$ on $f$ which is defined as a zero-mean GP with continuous covariance function $k:\Omega\times\Omega\to\mathbb{R}$, i.e., $f\sim\mathcal{GP}(0,k)$. This means that for any finite set $x=(x_1,\dots,x_n)^T$, the random vector $f(x)=(f(x_1),\dots,f(x_n))^T$ follows the multivariate normal distribution $\mathcal{N}(0,K_n)$ with covariance matrix $K_n=(k(x_i,x_j))_{i,j=1}^n\in\mathbb{R}^{n\times n}$.
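The finite-dimensional statement above, $f(x)\sim\mathcal{N}(0,K_n)$ for any finite input set, can be simulated directly. The RBF kernel below is our illustrative choice (the paper's experiments use arc-cosine kernels instead):

```python
import numpy as np

def rbf_kernel(xs, ys, lengthscale=0.5):
    """k(x, y) = exp(-(x - y)^2 / (2 l^2)) -- an illustrative covariance function."""
    d = xs[:, None] - ys[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, size=50))
K = rbf_kernel(x, x)                         # K_n = (k(x_i, x_j))

# One draw of f(x) ~ N(0, K_n); a small jitter keeps the Cholesky factor defined.
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(x)))
f = L @ rng.standard_normal(len(x))
```

Repeating the last two lines yields independent prior draws, which is how prior samples from a GP are usually visualized.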
By Bayes' rule, the posterior distribution of the target $f$ given the training data is
\[
d\Pi_n(f|D_n)=\frac{1}{Z(D_n)}\prod_{i=1}^n\mathcal{N}(y_i|f(x_i),\sigma^2_{\mathrm{model}})\,d\Pi_0(f),
\]
where $\Pi_0$ is the prior distribution, $Z(D_n)=\int\prod_{i=1}^n\mathcal{N}(y_i|f(x_i),\sigma^2_{\mathrm{model}})\,d\Pi_0(f)$ is the marginal likelihood or model evidence, and $\sigma^2_{\mathrm{model}}$ is the noise variance used in GPR. In practice, we do not know the exact value of $\sigma^2_{\mathrm{true}}$, and so our choice of $\sigma^2_{\mathrm{model}}$ can be different from $\sigma^2_{\mathrm{true}}$. The GP prior and the Gaussian noise assumption allow for exact Bayesian inference, and the posterior distribution over functions is again a GP with mean and covariance function given by
\[
\bar m(x)=K_{xx}^T(K_n+\sigma^2_{\mathrm{model}}I_n)^{-1}y,\quad x\in\Omega,\qquad(1)
\]
\[
\bar k(x,x')=k(x,x')-K_{xx}^T(K_n+\sigma^2_{\mathrm{model}}I_n)^{-1}K_{xx'},\quad x,x'\in\Omega,\qquad(2)
\]
where $K_{xx}=(k(x_1,x),\dots,k(x_n,x))^T$ and $y=(y_1,\dots,y_n)^T\in\mathbb{R}^n$ (Williams and Rasmussen, 2006, Eqs. 2.23–24). The performance of GPR depends on how well the posterior approximates $f$ as the number of training samples $n$ tends to infinity. The distance of the posterior to the ground truth can be measured in various ways. We consider two such measures, namely the Bayesian generalization error (Seeger et al., 2008; Haussler and Opper, 1997; Opper and Vivarelli, 1999) and the excess mean squared error (Sollich and Halees, 2002; Le Gratiet and Garnier, 2015; Bordelon et al., 2020; Cui et al., 2021).

Definition 1 (Bayesian generalization error). The Bayesian generalization error is defined as the Kullback–Leibler divergence between the true density $q(y|x)$ and the Bayesian predictive density $p_n(y|x,D_n)=\int p(y|f(x))\,d\Pi_n(f|D_n)$,
\[
G(D_n)=\int q(x,y)\log\frac{q(y|x)}{p_n(y|x,D_n)}\,dx\,dy.\qquad(3)
\]
A related quantity of interest is the stochastic complexity (SC), also known as the free energy, which is just the negative log-marginal likelihood.
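Equations (1)–(2) map one-to-one onto code. The sketch below is ours, with an RBF kernel and illustrative parameter values standing in for a concrete model choice:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, sigma2_model=0.01, kernel=rbf):
    """Posterior mean, eq. (1), and covariance, eq. (2), of GP regression."""
    Kn = kernel(x_train, x_train)
    Kx = kernel(x_train, x_test)          # column j holds K_{x x_test[j]}
    A = np.linalg.solve(Kn + sigma2_model * np.eye(len(x_train)), Kx)
    mean = A.T @ y_train                  # eq. (1)
    cov = kernel(x_test, x_test) - Kx.T @ A   # eq. (2)
    return mean, cov

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 40)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(40)   # noisy target, noise std 0.1
xt = np.linspace(-1.0, 1.0, 7)
m, C = gp_posterior(x, y, xt)
```

Using `solve` on the shared linear system instead of forming the matrix inverse is the standard numerically stable way to evaluate both (1) and (2).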
We shall primarily be concerned with a normalized version of the stochastic complexity, defined as follows:
\[
F^0(D_n)=-\log\frac{Z(D_n)}{\prod_{i=1}^n q(y_i|x_i)}
=-\log\frac{\int\prod_{i=1}^n\mathcal{N}(y_i|f(x_i),\sigma^2_{\mathrm{model}})\,d\Pi_0(f)}{\prod_{i=1}^n q(y_i|x_i)}.\qquad(4)
\]
The generalization error (3) can be expressed in terms of the normalized SC as follows (Watanabe, 2009, Theorem 1.2):
\[
G(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}F^0(D_{n+1})-F^0(D_n),\qquad(5)
\]
where $D_{n+1}=D_n\cup\{(x_{n+1},y_{n+1})\}$ is obtained by augmenting $D_n$ with a test point $(x_{n+1},y_{n+1})$. If we only wish to measure the performance of the mean of the Bayesian posterior, then we can use the excess mean squared error:

Definition 2 (Excess mean squared error). The excess mean squared error is defined as
\[
M(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}(\bar m(x_{n+1})-y_{n+1})^2-\sigma^2_{\mathrm{true}}
=\mathbb{E}_{x_{n+1}}(\bar m(x_{n+1})-f(x_{n+1}))^2.\qquad(6)
\]

Proposition 3 (Normalized stochastic complexity for GPR). Assume that $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2$. The normalized SC $F^0(D_n)$ (4) for GPR with prior $\mathcal{GP}(0,k)$ is given as
\[
F^0(D_n)=\frac{1}{2}\log\det\Big(I_n+\frac{K_n}{\sigma^2}\Big)+\frac{1}{2\sigma^2}y^T\Big(I_n+\frac{K_n}{\sigma^2}\Big)^{-1}y-\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x)),\qquad(7)
\]
where $\varepsilon=(\varepsilon_1,\dots,\varepsilon_n)^T=y-f(x)$. The expectation of the normalized SC w.r.t. the noise is given as
\[
\mathbb{E}_\varepsilon F^0(D_n)=\frac{1}{2}\log\det\Big(I_n+\frac{K_n}{\sigma^2}\Big)-\frac{1}{2}\mathrm{Tr}\Big(I_n-\Big(I_n+\frac{K_n}{\sigma^2}\Big)^{-1}\Big)+\frac{1}{2\sigma^2}f(x)^T\Big(I_n+\frac{K_n}{\sigma^2}\Big)^{-1}f(x).\qquad(8)
\]
This is a basic result and has applications in relation to model selection in GPR (Williams and Rasmussen, 2006). For completeness, we give a proof of Proposition 3 in Appendix B. Seeger et al. (2008, Theorem 1) gave an upper bound on the normalized stochastic complexity for the case when $f$ lies in the reproducing kernel Hilbert space (RKHS) of the GP prior. It is well known, however, that sample paths of a GP almost surely fall outside the corresponding RKHS (Van Der Vaart and Van Zanten, 2011), limiting the applicability of that result.
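Formula (7) can be cross-checked against the definition (4): for a GP prior the evidence is $Z(D_n)=\mathcal{N}(y\,|\,0,K_n+\sigma^2 I_n)$, so both routes to $F^0(D_n)$ must agree numerically. A sketch with an arbitrary PSD matrix of our construction standing in for $K_n$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2 = 30, 0.05
B = rng.standard_normal((n, n))
Kn = B @ B.T / n                       # arbitrary PSD stand-in for the kernel matrix
f = rng.standard_normal(n)             # target values f(x)
eps = np.sqrt(sigma2) * rng.standard_normal(n)
y = f + eps

I = np.eye(n)
M = I + Kn / sigma2

# Route 1: formula (7).
F0_formula = (0.5 * np.linalg.slogdet(M)[1]
              + y @ np.linalg.solve(M, y) / (2 * sigma2)
              - eps @ eps / (2 * sigma2))

# Route 2: definition (4), with Z(D_n) = N(y | 0, K_n + sigma^2 I).
C = Kn + sigma2 * I
neg_log_Z = 0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(C)[1]
                   + y @ np.linalg.solve(C, y))
log_prod_q = -0.5 * (n * np.log(2 * np.pi * sigma2) + eps @ eps / sigma2)
F0_direct = neg_log_Z + log_prod_q
```

The $2\pi$ and $\log\sigma^2$ constants cancel between numerator and denominator of (4), which is exactly why (7) contains neither.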
We next derive the asymptotics of $\mathbb{E}_\varepsilon F^0(D_n)$, the expected generalization error $\mathbb{E}_\varepsilon G(D_n)=\mathbb{E}_\varepsilon\mathbb{E}_{(x_{n+1},y_{n+1})}F^0(D_{n+1})-\mathbb{E}_\varepsilon F^0(D_n)$, and the excess mean squared error $\mathbb{E}_\varepsilon M(D_n)$.

3 ASYMPTOTIC ANALYSIS OF GP REGRESSION WITH POWER-LAW PRIORS

We begin by introducing some notation and assumptions. We assume that $f\in L^2(\Omega,\rho)$. By Mercer's theorem (Williams and Rasmussen, 2006, Theorem 4.2), the covariance function of the GP prior can be decomposed as $k(x_1,x_2)=\sum_{p=1}^\infty\lambda_p\phi_p(x_1)\phi_p(x_2)$, where $(\phi_p(x))_{p\ge1}$ are the eigenfunctions of the operator $L_k:L^2(\Omega,\rho)\to L^2(\Omega,\rho)$, $(L_kf)(x)=\int_\Omega k(x,s)f(s)\,d\rho(s)$, and $(\lambda_p)_{p\ge1}$ are the corresponding positive eigenvalues. We index the sequence of eigenvalues in decreasing order, that is, $\lambda_1\ge\lambda_2\ge\cdots>0$. The target function $f(x)$ is decomposed into the orthonormal set $(\phi_p(x))_{p\ge1}$ and its orthogonal complement $\{\phi_p(x):p\ge1\}^\perp$ as
\[
f(x)=\sum_{p=1}^\infty\mu_p\phi_p(x)+\mu_0\phi_0(x)\in L^2(\Omega,\rho),\qquad(9)
\]
where $\mu=(\mu_0,\mu_1,\dots,\mu_p,\dots)^T$ are the coefficients of the decomposition, and $\phi_0(x)$ satisfies $\|\phi_0(x)\|_2=1$ and $\phi_0(x)\in\{\phi_p(x):p\ge1\}^\perp$. For given sample inputs $x$, let $\phi_p(x)=(\phi_p(x_1),\dots,\phi_p(x_n))^T$, $\Phi=(\phi_0(x),\phi_1(x),\dots,\phi_p(x),\dots)$ and $\Lambda=\mathrm{diag}\{0,\lambda_1,\dots,\lambda_p,\dots\}$. Then the covariance matrix $K_n$ can be written as $K_n=\Phi\Lambda\Phi^T$, and the function values at the sample inputs can be written as $f(x)=\Phi\mu$. We shall make the following assumptions in order to derive the power-law asymptotics of the normalized stochastic complexity and the generalization error of GPR:

Assumption 4 (Power-law decay of eigenvalues). The eigenvalues $(\lambda_p)_{p\ge1}$ follow the power law
\[
\underline{C}_\lambda\,p^{-\alpha}\le\lambda_p\le\overline{C}_\lambda\,p^{-\alpha},\quad\forall p\ge1,\qquad(10)
\]
where $\underline{C}_\lambda$, $\overline{C}_\lambda$ and $\alpha$ are positive constants which satisfy $0<\underline{C}_\lambda\le\overline{C}_\lambda$ and $\alpha>1$.
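Assumption 4 can be probed numerically: the eigenvalues of $K_n/n$ on a dense uniform grid approximate the Mercer eigenvalues. For the exponential (Ornstein–Uhlenbeck) kernel $k(x,y)=e^{-|x-y|/\ell}$ on $[0,1]$, the spectrum is known to decay as $p^{-2}$, so a log–log fit over the well-resolved indices should recover $\alpha\approx 2$. This check is our illustration, not an experiment from the paper:

```python
import numpy as np

n, ell = 2000, 0.5
x = (np.arange(n) + 0.5) / n                          # uniform grid on [0, 1]
K = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)    # Ornstein-Uhlenbeck kernel

lam = np.sort(np.linalg.eigvalsh(K))[::-1] / n        # approximate Mercer eigenvalues
p = np.arange(1, n + 1)

# Fit log lambda_p ~ -alpha log p over a mid-range of well-resolved indices.
lo, hi = 5, 60
alpha_hat = -np.polyfit(np.log(p[lo:hi]), np.log(lam[lo:hi]), 1)[0]
```

Only the leading eigenvalues are trustworthy in this Nyström-type approximation, which is why the fit window stays well below $n$.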
As mentioned in the introduction, this assumption, called the capacity condition, is fairly standard in kernel learning and is adopted in many recent works (Bordelon et al., 2020; Canatar et al., 2021; Jun et al., 2019; Bietti et al., 2021; Cui et al., 2021). Velikanov and Yarotsky (2021) derived the exact value of the exponent $\alpha$ when the kernel function has a homogeneous singularity on its diagonal, which is the case, for instance, for the arc-cosine kernel.

Assumption 5 (Power-law decay of coefficients of decomposition). Let $\underline{C}_\mu,\overline{C}_\mu>0$ and $\beta>1/2$ be positive constants and let $\{p_i\}_{i\ge1}$ be an increasing integer sequence such that $\sup_{i\ge1}(p_{i+1}-p_i)<\infty$. The coefficients $(\mu_p)_{p\ge1}$ of the decomposition (9) of the target function follow the power law
\[
|\mu_p|\le\overline{C}_\mu\,p^{-\beta},\;\forall p\ge1\quad\text{and}\quad|\mu_{p_i}|\ge\underline{C}_\mu\,p_i^{-\beta},\;\forall i\ge1.\qquad(11)
\]
Since $f\in L^2(\Omega,\rho)$, we have $\sum_{p=0}^\infty\mu_p^2<\infty$. The condition $\beta>1/2$ in Assumption 5 ensures that the sum $\sum_{p=0}^\infty\mu_p^2$ does not diverge. When the orthonormal basis $(\phi_p(x))_p$ is the Fourier basis or the spherical-harmonics basis, the coefficients $(\mu_p)_p$ decay at least as fast as a power law so long as the target function $f(x)$ satisfies certain smoothness conditions (Bietti and Mairal, 2019). Velikanov and Yarotsky (2021) gave examples of natural classes of functions for which Assumption 5 is satisfied, such as functions that have bounded support with smooth boundary and are smooth on the interior of this support, and derived the corresponding exponents $\beta$.

Assumption 6 (Boundedness of eigenfunctions). The eigenfunctions $(\phi_p(x))_{p\ge0}$ satisfy
\[
\|\phi_0\|_\infty\le C_\phi\quad\text{and}\quad\|\phi_p\|_\infty\le C_\phi\,p^\tau,\;p\ge1,\qquad(12)
\]
where $C_\phi$ and $\tau$ are two positive constants which satisfy $\tau<\frac{\alpha-1}{2}$. The second condition in (12) appears, for example, in Valdivia (2018, Hypothesis H1) and is less restrictive than the assumption of uniformly bounded eigenfunctions that has appeared in several other works in the GP literature, see, e.g.
, Braun (2006); Chatterji et al. (2019); Vakili et al. (2021). Define
\[
T_1(D_n)=\frac{1}{2}\log\det\Big(I_n+\frac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)-\frac{1}{2}\mathrm{Tr}\Big(I_n-\Big(I_n+\frac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}\Big),\qquad(13)
\]
\[
T_2(D_n)=\frac{1}{2\sigma^2}f(x)^T\Big(I_n+\frac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x),\qquad(14)
\]
\[
G_1(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}\big(T_1(D_{n+1})-T_1(D_n)\big),\qquad(15)
\]
\[
G_2(D_n)=\mathbb{E}_{(x_{n+1},y_{n+1})}\big(T_2(D_{n+1})-T_2(D_n)\big).\qquad(16)
\]
Using (8) and (5), we have $\mathbb{E}_\varepsilon F^0(D_n)=T_1(D_n)+T_2(D_n)$ and $\mathbb{E}_\varepsilon G(D_n)=G_1(D_n)+G_2(D_n)$. Intuitively, $G_1$ corresponds to the effect of the noise on the generalization error irrespective of the target function $f$, whereas $G_2$ corresponds to the ability of the model to fit the target function. As we will see next in Theorems 9 and 11, if $\alpha$ is large, then the error associated with the noise is smaller. When $f$ is contained in the span of the eigenfunctions $\{\phi_p\}_{p\ge1}$, $G_2$ decreases with increasing $n$, but if $f$ contains an orthogonal component, then the error remains constant and GP regression is not able to learn the target function.

3.1 ASYMPTOTICS OF THE NORMALIZED STOCHASTIC COMPLEXITY

We derive the asymptotics of the normalized SC (8) for the following two cases: $\mu_0=0$ and $\mu_0>0$. When $\mu_0=0$, the target function $f(x)$ lies in the span of all eigenfunctions with positive eigenvalues.

Theorem 7 (Asymptotics of the normalized SC, $\mu_0=0$). Assume that $\mu_0=0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(1)$. Under Assumptions 4, 5 and 6, with probability at least $1-n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0\le q<\min\{\frac{(2\beta-1)(\alpha-1-2\tau)}{4\alpha^2},\frac{\alpha-1-2\tau}{2\alpha}\}$, the expected normalized SC (8) has the asymptotic behavior
\[
\mathbb{E}_\varepsilon F^0(D_n)=\Big[\frac{1}{2}\log\det\Big(I+\frac{n}{\sigma^2}\Lambda\Big)-\frac{1}{2}\mathrm{Tr}\Big(I-\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\Big)+\frac{n}{2\sigma^2}\mu^T\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big](1+o(1))=\Theta\big(n^{\max\{\frac{1}{\alpha},\frac{1-2\beta}{\alpha}+1\}}\big).\qquad(17)
\]
The complete proof of Theorem 7 is given in Appendix D.1. We give a sketch of the proof below.
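Before the proof sketch, note that the bracketed expression in (17) is a deterministic function of $n$, so the predicted exponent can be checked by evaluating it on a synthetic power-law spectrum. Below we take $\lambda_p=p^{-\alpha}$, $\mu_p=p^{-\beta}$ with $\alpha=2$, $\beta=1.5$, $\sigma^2=1$ (toy values of ours), for which $\max\{\frac{1}{\alpha},\frac{1-2\beta}{\alpha}+1\}=\frac12$:

```python
import numpy as np

alpha, beta, sigma2 = 2.0, 1.5, 1.0
P = 200_000                                # truncation of the infinite sums
p = np.arange(1, P + 1, dtype=float)
lam = p ** -alpha                          # toy spectrum satisfying Assumption 4
mu2 = p ** (-2 * beta)                     # toy squared coefficients, Assumption 5

def F0(n):
    """Bracketed (Phi-independent) expression in (17)."""
    a = n * lam / sigma2
    return (0.5 * np.sum(np.log1p(a))
            - 0.5 * np.sum(a / (1 + a))
            + n / (2 * sigma2) * np.sum(mu2 / (1 + a)))

n1, n2 = 10_000, 1_000_000
slope = np.log(F0(n2) / F0(n1)) / np.log(n2 / n1)
predicted = max(1 / alpha, (1 - 2 * beta) / alpha + 1)   # = 1/2 here
```

The truncation at $P$ terms is harmless because all three sums are dominated by indices $p\lesssim n^{1/\alpha}\ll P$.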
In the sequel, we use the notations $O$ and $\Theta$ to denote the standard mathematical orders and the notation $\tilde O$ to suppress logarithmic factors.

Proof sketch of Theorem 7. By (8), (13) and (14) we have $\mathbb{E}_\varepsilon F^0(D_n)=T_1(D_n)+T_2(D_n)$. In order to analyze the terms $T_1(D_n)$ and $T_2(D_n)$, we consider truncated versions of these quantities and bound the corresponding residual errors. Given a truncation parameter $R\in\mathbb{N}$, let $\Phi_R=(\phi_0(x),\phi_1(x),\dots,\phi_R(x))\in\mathbb{R}^{n\times(R+1)}$ be the truncated matrix of eigenfunctions evaluated at the data points, $\Lambda_R=\mathrm{diag}(0,\lambda_1,\dots,\lambda_R)\in\mathbb{R}^{(R+1)\times(R+1)}$ and $\mu_R=(\mu_0,\mu_1,\dots,\mu_R)\in\mathbb{R}^{R+1}$. We define the truncated version of $T_1(D_n)$ as follows:
\[
T_{1,R}(D_n)=\frac{1}{2}\log\det\Big(I_n+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)-\frac{1}{2}\mathrm{Tr}\Big(I_n-\Big(I_n+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Big).\qquad(18)
\]
Similarly, define $\Phi_{>R}=(\phi_{R+1}(x),\phi_{R+2}(x),\dots)$, $\Lambda_{>R}=\mathrm{diag}(\lambda_{R+1},\lambda_{R+2},\dots)$, $f_R(x)=\sum_{p=1}^R\mu_p\phi_p(x)$, $f_R(x)=(f_R(x_1),\dots,f_R(x_n))^T$, $f_{>R}(x)=f(x)-f_R(x)$, and $f_{>R}(x)=(f_{>R}(x_1),\dots,f_{>R}(x_n))^T$. The truncated version of $T_2(D_n)$ is then defined as
\[
T_{2,R}(D_n)=\frac{1}{2\sigma^2}f_R(x)^T\Big(I_n+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x).\qquad(19)
\]
The proof consists of three steps:

• Approximation step: In this step, we show that the asymptotics of $T_{1,R}$ resp. $T_{2,R}$ dominates that of the residuals $|T_{1,R}(D_n)-T_1(D_n)|$ resp. $|T_{2,R}(D_n)-T_2(D_n)|$ (see Lemma 32). This builds upon first showing that $\|\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\|_2=\tilde O\big(\max\{nR^{-\alpha},n^{\frac{1}{2}}R^{\frac{1-2\alpha}{2}},R^{1-\alpha}\}\big)$ (see Lemma 25) and then choosing $R=n^{\frac{1}{\alpha}+\kappa}$ with $0<\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$, in which case we have $\|\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\|_2=o(1)$. Intuitively, the choice of the truncation parameter $R$ is governed by the fact that $\lambda_R=\Theta(R^{-\alpha})=\Theta(n^{-1-\kappa\alpha})=o(n^{-1})$.

• Decomposition step: In this step, we decompose $T_{1,R}$ into a term independent of $\Phi_R$ and a series involving $\Phi_R^T\Phi_R-nI_R$, and likewise for $T_{2,R}$ (see Lemma 34).
This builds upon first showing, using the Woodbury matrix identity (Williams and Rasmussen, 2006, §A.3), that
\[
T_{1,R}(D_n)=\frac{1}{2}\log\det\Big(I_R+\frac{1}{\sigma^2}\Lambda_R\Phi_R^T\Phi_R\Big)-\frac{1}{2}\mathrm{Tr}\,\Phi_R(\sigma^2I_R+\Lambda_R\Phi_R^T\Phi_R)^{-1}\Lambda_R\Phi_R^T,\qquad(20)
\]
\[
T_{2,R}(D_n)=\frac{1}{2\sigma^2}\mu_R^T\Phi_R^T\Phi_R\Big(I_R+\frac{1}{\sigma^2}\Lambda_R\Phi_R^T\Phi_R\Big)^{-1}\mu_R,\qquad(21)
\]
and then Taylor expanding the matrix inverses in (20) and (21) to show that the $\Phi_R$-independent terms in the decomposition of $T_{1,R}$ and $T_{2,R}$ are, respectively,
\[
\frac{1}{2}\log\det\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac{1}{2}\mathrm{Tr}\Big(I_R-\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\Big)
\quad\text{and}\quad
\frac{n}{2\sigma^2}\mu_R^T\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R.
\]

• Concentration step: Finally, we use concentration inequalities to show that these $\Phi_R$-independent terms dominate the series involving $\Phi_R^T\Phi_R-nI_R$ (see Lemma 35), whence
\[
T_{1,R}(D_n)=\Big(\frac{1}{2}\log\det\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac{1}{2}\mathrm{Tr}\Big(I_R-\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\Big)\Big)(1+o(1))=\Theta\big(n^{\frac{1}{\alpha}}\big),
\]
\[
T_{2,R}(D_n)=\Big(\frac{n}{2\sigma^2}\mu_R^T\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R\Big)(1+o(1))=
\begin{cases}
\Theta\big(n^{\max\{0,\frac{1-2\beta}{\alpha}+1\}}\big), & \alpha\neq2\beta-1,\\[2pt]
\Theta(\log n), & \alpha=2\beta-1.
\end{cases}
\]
The key idea is to consider the matrix $\Lambda_R^{1/2}(I+\frac{n}{\sigma^2}\Lambda_R)^{-1/2}\Phi_R^T\Phi_R(I+\frac{n}{\sigma^2}\Lambda_R)^{-1/2}\Lambda_R^{1/2}$ and show that it concentrates around $n\Lambda_R(I+\frac{n}{\sigma^2}\Lambda_R)^{-1}$ (see Corollary 22). Note that an ordinary application of the matrix Bernstein inequality to $\Phi_R^T\Phi_R-nI_R$ yields $\|\Phi_R^T\Phi_R-nI_R\|_2=O(R\sqrt{n})$, which is not sufficient for our purposes, since this would give $O(R\sqrt{n})=o(n)$ only when $\alpha>2$. In contrast, our results are valid for $\alpha>1$ and cover cases of practical interest, e.g., the NTK of infinitely wide shallow ReLU networks (Velikanov and Yarotsky, 2021) and the arc-cosine kernels over high-dimensional hyperspheres (Ronen et al., 2019), which have $\alpha=1+O(\frac{1}{d})$, where $d$ is the input dimension.

For $\mu_0>0$, we note the following result:

Theorem 8 (Asymptotics of the normalized SC, $\mu_0>0$). Assume $\mu_0>0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(1)$.
Under Assumptions 4, 5 and 6, with probability at least $1-n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0\le q<\min\{\frac{2\beta-1}{2},\alpha\}\cdot\min\{\frac{\alpha-1-2\tau}{2\alpha^2},\frac{2\beta-1}{\alpha^2}\}$, the expected normalized SC (8) has the asymptotic behavior
\[
\mathbb{E}_\varepsilon F^0(D_n)=\frac{1}{2\sigma^2}\mu_0^2\,n+o(n).
\]
The proof of Theorem 8 is given in Appendix D.1 and follows from showing that when $\mu_0>0$,
\[
T_{2,R}(D_n)=\Big(\frac{n}{2\sigma^2}\mu_R^T\Big(I_R+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R\Big)(1+o(1))=\frac{1}{2\sigma^2}\mu_0^2\,n+o(n)
\]
(see Lemma 38), which dominates $T_1(D_n)$ and the residual $|T_{2,R}(D_n)-T_2(D_n)|$.

3.2 ASYMPTOTICS OF THE BAYESIAN GENERALIZATION ERROR

In this section, we derive the asymptotics of the expected generalization error $\mathbb{E}_\varepsilon G(D_n)$ by analyzing the asymptotics of the components $G_1(D_n)$ and $G_2(D_n)$ in resp. (15) and (16) for the following two cases: $\mu_0=0$ and $\mu_0>0$. First, we consider the case $\mu_0=0$.

Theorem 9 (Asymptotics of the Bayesian generalization error, $\mu_0=0$). Let Assumptions 4, 5, and 6 hold. Assume that $\mu_0=0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(n^t)$ where $1-\frac{\alpha}{1+2\tau}<t<1$. Then with probability at least $1-n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the expectation of the Bayesian generalization error (3) w.r.t. the noise has the asymptotic behavior
\[
\mathbb{E}_\varepsilon G(D_n)=\frac{1+o(1)}{2\sigma^2}\Big(\mathrm{Tr}\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\Lambda-\Big\|\Lambda^{1/2}\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\Big\|_F^2+\Big\|\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big\|_2^2\Big)=\Theta\big(n^{\max\{\frac{(1-\alpha)(1-t)}{\alpha},\frac{(1-2\beta)(1-t)}{\alpha}\}}\big).\qquad(22)
\]
The proof of Theorem 9 is given in Appendix D.2. Intuitively, for a given $t$, the exponent $\frac{(1-\alpha)(1-t)}{\alpha}$ in (22) captures the rate at which the model suppresses the noise, while the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ captures the rate at which the model learns the target function. A larger $\beta$ implies that the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ is smaller and it is easier to learn the target. A larger $\alpha$ implies that the exponent $\frac{(1-\alpha)(1-t)}{\alpha}$ is smaller and the error associated with the noise is smaller as well.
A larger $\alpha$, however, also implies that the exponent $\frac{(1-2\beta)(1-t)}{\alpha}$ is larger (recall that $\alpha>1$ and $\beta>1/2$ by Assumptions 4 and 5, resp.), which means that it is harder to learn the target.

Remark 10. If $f\sim\mathcal{GP}(0,k)$, then using the Karhunen–Loève expansion we have $f(x)=\sum_{p=1}^\infty\sqrt{\lambda_p}\,\omega_p\phi_p(x)$, where $(\omega_p)_{p=1}^\infty$ are i.i.d. standard Gaussian variables. We can bound $\omega_p$ almost surely as $|\omega_p|\le C\log p$, where $C=\sup_{p\ge1}\frac{|\omega_p|}{\log p}$ is a finite constant. Comparing with the expansion of $f(x)$ in (9), we find that $\mu_p=\sqrt{\lambda_p}\,\omega_p=O(p^{-\alpha/2}\log p)=O(p^{-\alpha/2+\epsilon})$, where $\epsilon>0$ is arbitrarily small. Choosing $\beta=\alpha/2-\epsilon$ in (22), we have $\mathbb{E}_\varepsilon G(D_n)=O\big(n^{\frac{1}{\alpha}-1+\frac{2\epsilon}{\alpha}}\big)$. This rate matches that of an earlier result due to Sollich and Halees (2002), where it is shown that the asymptotic learning curve (as measured by the expectation of the excess mean squared error, $\mathbb{E}_f M(D_n)$) scales as $n^{\frac{1}{\alpha}-1}$ when the model is correctly specified, i.e., $f$ is a sample from the same Gaussian process $\mathcal{GP}(0,k)$, and the eigenvalues decay as a power law for large $i$, $\lambda_i\sim i^{-\alpha}$.

For $\mu_0>0$, we note the following result:

Theorem 11 (Asymptotics of the Bayesian generalization error, $\mu_0>0$). Let Assumptions 4, 5, and 6 hold. Assume that $\mu_0>0$ and $\sigma^2_{\mathrm{model}}=\sigma^2_{\mathrm{true}}=\sigma^2=\Theta(n^t)$ where $1-\frac{\alpha}{1+2\tau}<t<1$. Then with probability at least $1-n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the expectation of the Bayesian generalization error (3) w.r.t. the noise has the asymptotic behavior
\[
\mathbb{E}_\varepsilon G(D_n)=\frac{1}{2\sigma^2}\mu_0^2+o(1).
\]
In general, if $\mu_0>0$, then the generalization error remains constant as $n\to\infty$. This means that if the target function contains a component in the kernel of the operator $L_k$, then GP regression is not able to learn the target function. The proof of Theorem 11 is given in Appendix D.2.
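The deterministic equivalent appearing in Theorem 9 can be evaluated directly (here with $t=0$, i.e., constant $\sigma^2$, and toy power-law sequences of our choosing), and the fitted log–log slope matches the predicted exponent $\max\{\frac{(1-\alpha)(1-t)}{\alpha},\frac{(1-2\beta)(1-t)}{\alpha}\}$ in both a noise-dominated and a target-dominated regime:

```python
import numpy as np

def gen_err(n, alpha, beta, sigma2=1.0, P=200_000):
    """Deterministic equivalent in (22), with t = 0, truncated at P terms."""
    p = np.arange(1, P + 1, dtype=float)
    lam = p ** -alpha
    mu2 = p ** (-2 * beta)
    r = 1.0 / (1.0 + n * lam / sigma2)     # diagonal of (I + n Lambda / sigma^2)^{-1}
    return (np.sum(lam * r) - np.sum(lam * r ** 2) + np.sum(mu2 * r ** 2)) / (2 * sigma2)

def exponent(alpha, beta, n1=10_000, n2=1_000_000):
    return np.log(gen_err(n2, alpha, beta) / gen_err(n1, alpha, beta)) / np.log(n2 / n1)

e_noise = exponent(2.0, 1.5)    # noise-dominated: predicted max{-1/2, -1} = -1/2
e_target = exponent(4.0, 0.8)   # target-dominated: predicted max{-3/4, -0.15} = -0.15
```

The crossover between the two branches is exactly the trade-off discussed above: raising $\alpha$ helps the noise term but hurts the target term.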
3.3 ASYMPTOTICS OF THE EXCESS MEAN SQUARED ERROR

In this section we derive the asymptotics of the excess mean squared error in Definition 2.

Theorem 12 (Asymptotics of the excess mean squared error). Let Assumptions 4, 5, and 6 hold. Assume $\sigma^2_{\mathrm{model}}=\Theta(n^t)$ where $1-\frac{\alpha}{1+2\tau}<t<1$. Then with probability at least $1-n^{-q}$ over the sample inputs $(x_i)_{i=1}^n$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$, the excess mean squared error (6) has the asymptotics
\[
\mathbb{E}_\varepsilon M(D_n)=(1+o(1))\Big[\frac{\sigma^2_{\mathrm{true}}}{\sigma^2_{\mathrm{model}}}\Big(\mathrm{Tr}\Big(I+\frac{n}{\sigma^2_{\mathrm{model}}}\Lambda\Big)^{-1}\Lambda-\Big\|\Lambda^{1/2}\Big(I+\frac{n}{\sigma^2_{\mathrm{model}}}\Lambda\Big)^{-1}\Big\|_F^2\Big)+\Big\|\Big(I+\frac{n}{\sigma^2_{\mathrm{model}}}\Lambda\Big)^{-1}\mu\Big\|_2^2\Big]=\Theta\big(\max\big\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\big\}\big)
\]
when $\mu_0=0$, and $\mathbb{E}_\varepsilon M(D_n)=\mu_0^2+o(1)$ when $\mu_0>0$. The proof of Theorem 12 uses similar techniques as Theorem 9 and is given in Appendix D.3.

Remark 13 (Correspondence with kernel ridge regression). The kernel ridge regression (KRR) estimator arises as the solution to the optimization problem
\[
\hat f=\mathop{\mathrm{argmin}}_{f\in\mathcal{H}_k}\frac{1}{n}\sum_{i=1}^n(f(x_i)-y_i)^2+\lambda\|f\|_{\mathcal{H}_k}^2,\qquad(23)
\]
where the hypothesis space $\mathcal{H}_k$ is chosen to be an RKHS and $\lambda>0$ is a regularization parameter. The solution to (23) is unique as a function and is given by $\hat f(x)=K_{xx}^T(K_n+n\lambda I_n)^{-1}y$, which coincides with the posterior mean function $\bar m(x)$ of GPR (1) if $\sigma^2_{\mathrm{model}}=n\lambda$ (Kanagawa et al., 2018, Proposition 3.6). Thus, the additive Gaussian noise in GPR plays the role of regularization in KRR. Leveraging this well known equivalence between GPR and KRR, we observe that Theorem 12 also describes the generalization error of KRR as measured by the excess mean squared error.

Remark 14. Cui et al. (2021) derived the asymptotics of the expected excess mean squared error for different regularization strengths and different scales of noise. In particular, for KRR with Gaussian design, where $\Lambda_R^{1/2}(\phi_1(x),\dots,\phi_R(x))^T$ is assumed to follow the Gaussian distribution $\mathcal{N}(0,\Lambda_R)$, and regularization $\lambda=n^{t-1}$ where $1-\alpha\le t$, Cui et al.
(2021, Eq. 10) showed that
\[
\mathbb{E}_{\{x_i\}_{i=1}^n}\,\mathbb{E}\,M(\mathcal{D}_n)=O\Big(\max\Big\{\sigma^2_{\mathrm{true}}\,n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\Big\}\Big). \tag{24}
\]
Let $\delta=n^{-q}$, where $0\le q<\frac{[\alpha-(1+2\tau)(1-t)](2\beta-1)}{4\alpha^2}$. By Markov's inequality, this implies that with probability at least $1-\delta$,
\[
\mathbb{E}\,M(\mathcal{D}_n)=O\Big(\tfrac{1}{\delta}\max\Big\{\sigma^2_{\mathrm{true}}n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\Big\}\Big)=O\Big(n^q\max\Big\{\sigma^2_{\mathrm{true}}n^{\frac{1-\alpha-t}{\alpha}},\,n^{\frac{(1-2\beta)(1-t)}{\alpha}}\Big\}\Big).
\]
Theorem 12 improves upon this by showing that with probability at least $1-\delta$ we have the optimal bound $\mathbb{E}\,M(\mathcal{D}_n)=\Theta\big(\max\{\sigma^2_{\mathrm{true}}n^{\frac{1-\alpha-t}{\alpha}},n^{\frac{(1-2\beta)(1-t)}{\alpha}}\}\big)$. Furthermore, in contrast to the approach of Cui et al. (2021), we impose no requirement on the distribution of $\phi_p(x)$, and hence our result is more generally applicable. For example, Theorem 12 can be applied to KRR with the arc-cosine kernel, for which the Gaussian design assumption is not valid. In the noiseless setting ($\sigma_{\mathrm{true}}=0$) with constant regularization ($t=0$), Theorem 12 implies that the mean squared error behaves as $\Theta(n^{\frac{1-2\beta}{\alpha}})$. This recovers a result in Bordelon et al. (2020, §2.2).

4 EXPERIMENTS

We illustrate our theory on a few toy experiments. We let the input $x$ be uniformly distributed on the unit circle, i.e., $\Omega=\mathbb{S}^1$ and $\rho=U(\mathbb{S}^1)$. The points on $\mathbb{S}^1$ can be represented by $x=(\cos\theta,\sin\theta)$ where $\theta\in[-\pi,\pi)$. We use the first-order arc-cosine kernel function without bias,
\[
k^{(1)}_{\text{w/o bias}}(x_1,x_2)=\frac{1}{\pi}\big(\sin\psi+(\pi-\psi)\cos\psi\big),
\]
where $\psi=\arccos\langle x_1,x_2\rangle$ is the angle between $x_1$ and $x_2$. Cho and Saul (2009) showed that this kernel is the conjugate kernel of an infinitely wide shallow ReLU network with two inputs and no biases in the hidden layer. GP regression with prior $\mathcal{GP}(0,k)$ corresponds to Bayesian training of this network (Lee et al., 2018).
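For concreteness, here is a sketch of the kernel evaluation together with a Monte Carlo check of the conjugate-kernel property: under Cho and Saul's scaling, $k^{(1)}(x_1,x_2)=2\,\mathbb{E}_w[\mathrm{ReLU}(w^Tx_1)\mathrm{ReLU}(w^Tx_2)]$ for unit inputs and $w\sim\mathcal{N}(0,I)$. The test angles and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def arccos_kernel_1(x1, x2):
    # first-order arc-cosine kernel without bias (Cho & Saul, 2009)
    psi = np.arccos(np.clip(x1 @ x2, -1.0, 1.0))  # angle between unit vectors
    return (np.sin(psi) + (np.pi - psi) * np.cos(psi)) / np.pi

theta1, theta2 = 0.3, 1.7
x1 = np.array([np.cos(theta1), np.sin(theta1)])
x2 = np.array([np.cos(theta2), np.sin(theta2)])

# Monte Carlo over the hidden weights of a wide shallow ReLU network
W = rng.standard_normal((200000, 2))
mc = 2.0 * np.mean(np.maximum(W @ x1, 0) * np.maximum(W @ x2, 0))
print(arccos_kernel_1(x1, x2), mc)
```

The Monte Carlo estimate matches the closed form to within sampling error, which is the sense in which the kernel is the conjugate kernel of the infinitely wide network.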
The eigenvalues and eigenfunctions of the kernel are $\lambda_1=\frac{4}{\pi^2}$, $\lambda_2=\lambda_3=\frac{1}{4}$, $\lambda_{2p}=\lambda_{2p+1}=\frac{4}{\pi^2((2p-2)^2-1)^2}$ for $p\ge2$, and $\phi_1(\theta)=1$, $\phi_2(\theta)=\sqrt{2}\cos\theta$, $\phi_3(\theta)=\sqrt{2}\sin\theta$, $\phi_{2p}(\theta)=\sqrt{2}\cos(2p-2)\theta$, $\phi_{2p+1}(\theta)=\sqrt{2}\sin(2p-2)\theta$ for $p\ge2$. Hence Assumption 4 is satisfied with $\alpha=4$, and Assumption 6 is satisfied with $\|\phi_p\|_\infty\le\sqrt{2}$ for $p\ge1$ and $\tau=0$.

We consider the target functions in Table 1, which satisfy Assumption 5 with the indicated $\beta$; $\mu_0$ indicates whether the function lies in the span of eigenfunctions of the kernel. The training and test data are generated as follows. We independently sample training inputs $x_1,\ldots,x_n$ and a test input $x_{n+1}$ from $U(\mathbb{S}^1)$, and training outputs $y_i$, $i=1,\ldots,n$, from $\mathcal{N}(f(x_i),\sigma^2)$, where we choose $\sigma=0.1$. The Bayesian predictive density conditioned on the test point $x_{n+1}$, $\mathcal{N}(\bar m(x_{n+1}),\bar k(x_{n+1},x_{n+1}))$, is obtained from (1) and (2). We compute the normalized SC by (7), and the Bayesian generalization error as the Kullback–Leibler divergence between $\mathcal{N}(f(x_{n+1}),\sigma^2)$ and $\mathcal{N}(\bar m(x_{n+1}),\bar k(x_{n+1},x_{n+1}))$. For each target we run GPR 20 times and report the mean and standard deviation of the normalized SC and the Bayesian generalization error in Figure 1; the results agree with the asymptotics predicted in Theorems 7 and 9. In Appendix A, we show further experiments confirming our theory for zero- and second-order arc-cosine kernels, with and without biases.

Table 1: Target functions for $k^{(1)}_{\text{w/o bias}}$, their values of $\beta$ and $\mu_0$, and theoretical rates for the normalized SC and the Bayesian generalization error from our theorems.

Figure 1: Results for $k^{(1)}_{\text{w/o bias}}$ and the target functions in Table 1. The orange curves show the linear regression fit for the experimental values (in blue) of the log Bayesian generalization error as a function of $\log n$.

5 CONCLUSION

We described the learning curves of GPR in the case that the kernel and the target function follow a power law.
This setting is frequently encountered in kernel learning and relates to recent advances on neural networks. Our approach is based on a tight analysis of the concentration of the Gram matrix of empirical eigenfunctions $\Phi^T\Phi$ around $nI$. This allowed us to obtain more general results under more realistic assumptions than previous works. In particular, we recovered some results on learning curves for GPR and KRR previously obtained in more restricted settings (see Remarks 10 and 14). We showed that when $\beta\ge\alpha/2$, meaning that the target function has a compact representation in terms of the eigenfunctions of the kernel, the learning rate is as good as in the correctly specified case. In addition, our result allows us to interpret $\beta$ from a spectral-bias perspective: when $\frac12<\beta\le\frac{\alpha}{2}$, the larger the value of $\beta$, the faster the decay of the generalization error. This implies that low-frequency functions are learned faster in terms of the number of training data points. By leveraging the equivalence between GPR and KRR, we obtained a result on the generalization error of KRR. In the infinite-width limit, training fully-connected deep NNs with gradient descent and an infinitesimally small learning rate under the least-squares loss is equivalent to solving KRR with respect to the NTK (Jacot et al., 2018; Lee et al., 2019; Domingos, 2020), which in several cases is known to have a power-law spectrum (Velikanov and Yarotsky, 2021). Hence our methods can be applied to study the generalization error of infinitely wide neural networks. In future work, it would be interesting to estimate the values of $\alpha$ and $\beta$ for the NTK and the NNGP kernel of deep fully-connected or convolutional NNs and real data distributions, and to test our theory in these cases. Similarly, it would be interesting to consider extensions to finite-width kernels.

REFERENCES

S. Amari and N. Murata. Statistical theory of learning curves under entropic loss criterion.
Neural Computation, 5(1):140–153, 1993.

S. Amari, N. Fujita, and S. Shinomoto. Four types of learning curves. Neural Computation, 4(4):605–618, 1992.

S. Arora, S. S. Du, W. Hu, Z. Li, R. R. Salakhutdinov, and R. Wang. On exact computation with an infinitely wide neural net. In Advances in Neural Information Processing Systems, volume 32, pages 8139–8148, 2019.

Y. Bahri, E. Dyer, J. Kaplan, J. Lee, and U. Sharma. Explaining neural scaling laws. arXiv preprint arXiv:2102.06701, 2021.

A. R. Barron. Information-theoretic characterization of Bayes performance and the choice of priors in parametric and nonparametric problems. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics, volume 6, pages 27–52. Oxford University Press, 1998.

M. Belkin, S. Ma, and S. Mandal. To understand deep learning we need to understand kernel learning. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 541–549, 2018.

A. Bietti and J. Mairal. On the inductive bias of neural tangent kernels. In Advances in Neural Information Processing Systems, volume 32, pages 12873–12884, 2019.

A. Bietti, L. Venturi, and J. Bruna. On the sample complexity of learning with geometric stability. arXiv preprint arXiv:2106.07148, 2021.

G. Blanchard and N. Mücke. Optimal rates for regularization of statistical inverse learning problems. Foundations of Computational Mathematics, 18(4):971–1013, 2018.

B. Bordelon, A. Canatar, and C. Pehlevan. Spectrum dependent learning curves in kernel regression and wide neural networks. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 1024–1034, 2020.

O. Bousquet, S. Hanneke, S. Moran, R. van Handel, and A. Yehudayoff. A theory of universal learning. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 532–541, 2021.

M. L. Braun.
Accurate error bounds for the eigenvalues of the kernel matrix. The Journal of Machine Learning Research, 7:2303–2328, 2006.

A. Canatar, B. Bordelon, and C. Pehlevan. Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks. Nature Communications, 12(1):1–12, 2021.

A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.

N. Chatterji, A. Pacchiano, and P. Bartlett. Online learning with kernel losses. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 971–980, 2019.

Y. Cho and L. K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, volume 22, pages 342–350, 2009.

H. Cui, B. Loureiro, F. Krzakala, and L. Zdeborová. Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime. arXiv preprint arXiv:2105.15004, 2021.

A. Daniely, R. Frostig, and Y. Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances in Neural Information Processing Systems, volume 29, pages 2253–2261, 2016.

A. G. de G. Matthews, J. Hron, M. Rowland, R. E. Turner, and Z. Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 2018.

P. Domingos. Every model learned by gradient descent is approximately a kernel machine. arXiv preprint arXiv:2012.00152, 2020.

S. Fischer and I. Steinwart. Sobolev norm learning rates for regularized least-squares algorithms. Journal of Machine Learning Research, 21:1–38, 2020.

A. Garriga-Alonso, C. E. Rasmussen, and L. Aitchison. Deep convolutional networks as shallow Gaussian processes. In International Conference on Learning Representations, 2019.

D. Haussler and M. Opper.
Mutual information, metric entropy and cumulative relative entropy risk. The Annals of Statistics, 25(6):2451–2492, 1997.

D. Haussler, M. Kearns, H. S. Seung, and N. Tishby. Rigorous learning curve bounds from statistical mechanics. Machine Learning, 25(2–3):195–236, 1996.

J. Hestness, S. Narang, N. Ardalani, G. Diamos, H. Jun, H. Kianinejad, M. Patwary, M. Ali, Y. Yang, and Y. Zhou. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.

A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, volume 31, pages 8571–8580, 2018.

K.-S. Jun, A. Cutkosky, and F. Orabona. Kernel truncated randomized ridge regression: Optimal rates and low noise acceleration. In Advances in Neural Information Processing Systems, volume 32, pages 15358–15367, 2019.

M. Kanagawa, P. Hennig, D. Sejdinovic, and B. K. Sriperumbudur. Gaussian processes and kernel methods: A review on connections and equivalences. arXiv preprint arXiv:1807.02582, 2018.

L. Le Gratiet and J. Garnier. Asymptotic analysis of the learning curve for Gaussian process regression. Machine Learning, 98(3):407–433, 2015.

J. Lee, J. Sohl-Dickstein, J. Pennington, R. Novak, S. Schoenholz, and Y. Bahri. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018.

J. Lee, L. Xiao, S. Schoenholz, Y. Bahri, R. Novak, J. Sohl-Dickstein, and J. Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Advances in Neural Information Processing Systems, volume 32, pages 8572–8583, 2019.

J. Lee, S. Schoenholz, J. Pennington, B. Adlam, L. Xiao, R. Novak, and J. Sohl-Dickstein. Finite versus infinite neural networks: an empirical study. In Advances in Neural Information Processing Systems, volume 33, pages 15156–15172, 2020.

M. Loog, T. Viering, and A. Mey. Minimizers of the empirical risk and risk monotonicity. In Advances in Neural Information Processing Systems, volume 32, pages 7478–7487, 2019.

D. Malzahn and M. Opper. Learning curves for Gaussian processes regression: A framework for good approximations. In Advances in Neural Information Processing Systems, volume 13, pages 273–279, 2001a.

D. Malzahn and M. Opper. Learning curves for Gaussian processes models: Fluctuations and universality. In International Conference on Artificial Neural Networks, pages 271–276, 2001b.

R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, Berlin, Heidelberg, 1996. ISBN 0387947248.

A. Nitanda and T. Suzuki. Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime. In International Conference on Learning Representations, 2021.

R. Novak, L. Xiao, Y. Bahri, J. Lee, G. Yang, D. A. Abolafia, J. Pennington, and J. Sohl-Dickstein. Bayesian deep convolutional networks with many channels are Gaussian processes. In International Conference on Learning Representations, 2019.

M. Opper and D. Malzahn. A variational approach to learning curves. In Advances in Neural Information Processing Systems, volume 14, pages 463–469, 2002.

M. Opper and F. Vivarelli. General bounds on Bayes errors for regression with Gaussian processes. In Advances in Neural Information Processing Systems, volume 11, pages 302–308, 1999.

P. Orbanz and Y. W. Teh. Bayesian nonparametric models. In Encyclopedia of Machine Learning, pages 81–89. Springer, 2010.

K. Ritter, G. W. Wasilkowski, and H. Woźniakowski. Multivariate integration and approximation for random fields satisfying Sacks–Ylvisaker conditions. The Annals of Applied Probability, pages 518–540, 1995.

B. Ronen, D. Jacobs, Y. Kasten, and S. Kritchman. The convergence rate of neural networks for learned functions of different frequencies.
In Advances in Neural Information Processing Systems, volume 32, pages 4761–4771, 2019.

M. W. Seeger, S. M. Kakade, and D. P. Foster. Information consistency of nonparametric Gaussian process methods. IEEE Transactions on Information Theory, 54(5):2376–2382, 2008.

P. Sollich. Learning curves for Gaussian processes. In Advances in Neural Information Processing Systems, volume 11, pages 344–350, 1999.

P. Sollich. Gaussian process regression with mismatched models. In Advances in Neural Information Processing Systems, volume 13, pages 519–526, 2001.

P. Sollich and A. Halees. Learning curves for Gaussian process regression: Approximations and bounds. Neural Computation, 14(6):1393–1428, 2002.

S. Spigler, M. Geiger, and M. Wyart. Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm. Journal of Statistical Mechanics: Theory and Experiment, 2020(12):124001, 2020.

M. L. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, 2012.

I. Steinwart, D. R. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In Conference on Learning Theory, pages 79–93, 2009.

J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.

S. Vakili, K. Khezeli, and V. Picheny. On information gain and regret bounds in Gaussian process bandits. In International Conference on Artificial Intelligence and Statistics, pages 82–90, 2021.

E. A. Valdivia. Relative concentration bounds for the spectrum of kernel matrices. arXiv preprint arXiv:1812.02108, 2018.

A. van der Vaart and H. van Zanten. Information rates of nonparametric Gaussian process methods. Journal of Machine Learning Research, 12(6), 2011.

M. Velikanov and D. Yarotsky. Universal scaling laws in the gradient descent training of neural networks. arXiv preprint arXiv:2105.00507, 2021.

T. Viering and M. Loog. The shape of learning curves: A review. arXiv preprint arXiv:2103.10948, 2021.

T. Viering, A. Mey, and M. Loog. Open problem: Monotonicity of learning. In Conference on Learning Theory, pages 3198–3201, 2019.

S. Watanabe. Algebraic Geometry and Statistical Learning Theory. Cambridge University Press, 2009.

H. Widom. Asymptotic behavior of the eigenvalues of certain integral equations. Transactions of the American Mathematical Society, 109(2):278–295, 1963.

C. K. Williams. Computing with infinite networks. In Advances in Neural Information Processing Systems, volume 9, pages 295–301, 1997.

C. K. Williams and C. E. Rasmussen. Gaussian Processes for Machine Learning. MIT Press, 2006.

C. K. Williams and F. Vivarelli. Upper and lower bounds on the learning curve for Gaussian processes. Machine Learning, 40(1):77–102, 2000.

G. Yang. Wide feedforward or recurrent neural networks of any architecture are Gaussian processes. In Advances in Neural Information Processing Systems, volume 32, pages 9951–9960, 2019.

G. Yang and H. Salman. A fine-grained spectral perspective on neural networks. arXiv preprint arXiv:1907.10599, 2019.

APPENDIX

A EXPERIMENTS FOR ARC-COSINE KERNELS OF DIFFERENT ORDERS

Consider the first-order arc-cosine kernel function with biases,
\[
k^{(1)}_{\text{w/ bias}}(x_1,x_2)=\frac{1}{\pi}\big(\sin\bar\psi+(\pi-\bar\psi)\cos\bar\psi\big),\qquad\text{where }\bar\psi=\arccos\Big(\tfrac12\big(\langle x_1,x_2\rangle+1\big)\Big). \tag{25}
\]
Ronen et al. (2019) showed that this kernel is the conjugate kernel of an infinitely wide shallow ReLU network with two inputs and one hidden layer with biases, whose eigenvalues satisfy Assumption 4 with $\alpha=4$. The eigenfunctions of this kernel are the same as those of the first-order arc-cosine kernel without biases, $k^{(1)}_{\text{w/o bias}}$, in Section 4. We consider the target functions in Table 3, which satisfy Assumption 5 with the indicated $\beta$; $\mu_0$ indicates whether the function lies in the span of eigenfunctions of the kernel.
For each target we run GPR 20 times and report the mean and standard deviation of the normalized SC and the Bayesian generalization error in Figure 3; the results agree with the asymptotics predicted in Theorems 7 and 9. Table 2 summarizes all the kernel functions that we consider in our experiments, with pointers to the corresponding tables and figures. Summarizing the observations from these experiments, we see that the smoothness of the activation function (which is controlled by the order of the arc-cosine kernel) influences the decay rate $\alpha$ of the eigenvalues: in general, the smoother the activation function, the larger the decay rate $\alpha$. Theorem 9 then implies that smooth activation functions are more capable of suppressing noise but slower at learning the target. We also observe that networks with biases are more capable of learning functions than networks without biases. For example, the function $\cos(2\theta)$ cannot be learned by the zero-order arc-cosine kernel without biases (see Table 6 and Figure 6), but it can be learned by the zero-order arc-cosine kernel with biases (see Table 7 and Figure 7).

Figure 3: Results for $k^{(1)}_{\text{w/ bias}}$ and the target functions in Table 3. The orange curves show the linear regression fit for the experimental values (in blue) of the log Bayesian generalization error as a function of $\log n$.

Analogous results are shown for $k^{(2)}_{\text{w/o bias}}$ with the target functions in Table 4, $k^{(2)}_{\text{w/ bias}}$ with Table 5, $k^{(0)}_{\text{w/o bias}}$ with Table 6, and $k^{(0)}_{\text{w/ bias}}$ with Table 7.

B PROOFS RELATED TO THE MARGINAL LIKELIHOOD

Proof of Proposition 3. Let $\bar y=(\bar y_1,\ldots,\bar y_n)^T$ be the outputs of the GP regression model on the training inputs $x$. Under the GP prior, the distribution of $\bar y$ is $\mathcal{N}(0,K_n)$.
Then the evidence of the model is given as follows:
\[
Z_n=\int_{\mathbb{R}^n}\Big(\prod_{i=1}^n\frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(\bar y_i-y_i)^2}{2\sigma^2}}\Big)\frac{1}{(2\pi)^{n/2}\det(K_n)^{1/2}}e^{-\frac12\bar y^TK_n^{-1}\bar y}\,d\bar y
=\frac{1}{(2\pi)^n\sigma^n\det(K_n)^{1/2}}\int_{\mathbb{R}^n}e^{-\frac12\bar y^T(K_n^{-1}+\frac{1}{\sigma^2}I)\bar y+\frac{1}{\sigma^2}\bar y^Ty-\frac{1}{2\sigma^2}y^Ty}\,d\bar y. \tag{26}
\]
Letting $\tilde K_n^{-1}=K_n^{-1}+\frac{1}{\sigma^2}I$ and $\mu=\frac{1}{\sigma^2}\tilde K_ny$, we have
\[
Z_n=\frac{1}{(2\pi)^n\sigma^n\det(K_n)^{1/2}}\int_{\mathbb{R}^n}e^{-\frac12(\bar y-\mu)^T\tilde K_n^{-1}(\bar y-\mu)-\frac{1}{2\sigma^2}y^Ty+\frac12\mu^T\tilde K_n^{-1}\mu}\,d\bar y
=\frac{\det(\tilde K_n)^{1/2}}{(2\pi)^{n/2}\sigma^n\det(K_n)^{1/2}}\,e^{-\frac{1}{2\sigma^2}y^Ty+\frac12\mu^T\tilde K_n^{-1}\mu}. \tag{27}
\]
The normalized evidence is
\[
Z_n^0=\frac{Z_n}{(2\pi)^{-n/2}\sigma^{-n}e^{-\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x))}}
=\frac{\det(\tilde K_n)^{1/2}}{\det(K_n)^{1/2}}\,e^{-\frac{1}{2\sigma^2}y^Ty+\frac12\mu^T\tilde K_n^{-1}\mu+\frac{1}{2\sigma^2}(y-f(x))^T(y-f(x))}. \tag{28}
\]
So the normalized stochastic complexity is
\[
\begin{aligned}
F^0(\mathcal{D}_n)=-\log Z_n^0
&=-\tfrac12\log\det(\tilde K_n)+\tfrac12\log\det(K_n)+\tfrac{1}{2\sigma^2}y^Ty-\tfrac12\mu^T\tilde K_n^{-1}\mu-\tfrac{1}{2\sigma^2}(y-f(x))^T(y-f(x))\\
&=-\tfrac12\log\det\big(K_n^{-1}+\tfrac{1}{\sigma^2}I\big)^{-1}+\tfrac12\log\det(K_n)+\tfrac{1}{2\sigma^2}y^Ty-\tfrac{1}{2\sigma^4}y^T\big(K_n^{-1}+\tfrac{1}{\sigma^2}I\big)^{-1}y-\tfrac{1}{2\sigma^2}(y-f(x))^T(y-f(x))\\
&=\tfrac12\log\det\big(I+\tfrac{K_n}{\sigma^2}\big)+\tfrac{1}{2\sigma^2}y^T\big(I+\tfrac{K_n}{\sigma^2}\big)^{-1}y-\tfrac{1}{2\sigma^2}(y-f(x))^T(y-f(x))\\
&=\tfrac12\log\det\big(I+\tfrac{K_n}{\sigma^2}\big)+\tfrac{1}{2\sigma^2}f(x)^T\big(I+\tfrac{K_n}{\sigma^2}\big)^{-1}f(x)+\tfrac{1}{2\sigma^2}\epsilon^T\big(I+\tfrac{K_n}{\sigma^2}\big)^{-1}\epsilon-\tfrac{1}{2\sigma^2}\epsilon^T\epsilon+\tfrac{1}{\sigma^2}\epsilon^T\big(I+\tfrac{K_n}{\sigma^2}\big)^{-1}f(x),
\end{aligned} \tag{29}
\]
where in the last step we wrote $y=f(x)+\epsilon$ with the noise vector $\epsilon$. After taking the expectation over the noise, we get
\[
\mathbb{E}\,F^0(\mathcal{D}_n)=\tfrac12\log\det\big(I+\tfrac{K_n}{\sigma^2}\big)+\tfrac{1}{2\sigma^2}f(x)^T\big(I+\tfrac{K_n}{\sigma^2}\big)^{-1}f(x)-\tfrac12\mathrm{Tr}\Big(I-\big(I+\tfrac{K_n}{\sigma^2}\big)^{-1}\Big). \tag{30}
\]
This concludes the proof.

C HELPER LEMMAS

Lemma 15. Assume that $m\to\infty$ as $n\to\infty$, and let $a_1,a_2,s_1,s_2,s_3>0$ be constants. If $s_1>1$ and $s_2s_3>s_1-1$, we have
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=\Theta\big(m^{\frac{1-s_1}{s_2}}\big). \tag{31}
\]
If $s_1>1$ and $s_2s_3=s_1-1$, we have
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=\Theta\big(m^{-s_3}\log m\big). \tag{32}
\]
If $s_1>1$ and $s_2s_3<s_1-1$, we have
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=\Theta\big(m^{-s_3}\big).
\]
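Since $Z_n$ is simply the Gaussian marginal likelihood $\mathcal{N}(y;0,K_n+\sigma^2I)$, the closed form for $F^0(\mathcal{D}_n)$ derived above can be checked numerically. In the sketch below the kernel, target, and sizes are hypothetical choices used only for the check.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 20, 0.1

x = rng.uniform(-1, 1, n)
f = np.sin(np.pi * x)                              # target values f(x)
y = f + sigma * rng.standard_normal(n)             # noisy observations
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # hypothetical kernel matrix K_n

A = np.eye(n) + K / sigma**2

# closed form for F^0(D_n) from the proof of Proposition 3
_, logdetA = np.linalg.slogdet(A)
F0 = 0.5 * logdetA + y @ np.linalg.solve(A, y) / (2 * sigma**2) \
     - (y - f) @ (y - f) / (2 * sigma**2)

# direct route: Z_n is the marginal likelihood N(y; 0, K + sigma^2 I),
# normalized by the reference density N(y; f(x), sigma^2 I)
C = K + sigma**2 * np.eye(n)
_, logdetC = np.linalg.slogdet(C)
log_Zn = -0.5 * (n * np.log(2 * np.pi) + logdetC + y @ np.linalg.solve(C, y))
log_ref = -0.5 * n * np.log(2 * np.pi) - n * np.log(sigma) \
          - (y - f) @ (y - f) / (2 * sigma**2)
F0_direct = -(log_Zn - log_ref)

print(F0, F0_direct)
```

Both routes give the same normalized stochastic complexity, confirming the algebra leading to (29).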
(33)

Overall, if $s_1>1$ and $m\to\infty$,
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=
\begin{cases}
\Theta\big(m^{\max\{-s_3,\frac{1-s_1}{s_2}\}}\big), & s_2s_3\neq s_1-1,\\[2pt]
\Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big), & s_2s_3=s_1-1.
\end{cases} \tag{34}
\]

Proof of Lemma 15. First, when $s_1>1$ and $s_2s_3>s_1-1$, we have (substituting $u=x/m^{1/s_2}$)
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}
\le\frac{a_1}{(1+a_2m)^{s_3}}+\int_{1}^{\infty}\frac{a_1x^{-s_1}}{(1+a_2mx^{-s_2})^{s_3}}\,dx
=\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{\infty}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du
=\Theta\big(m^{\frac{1-s_1}{s_2}}\big).
\]
On the other hand,
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}
\ge\int_{1}^{R+1}\frac{a_1x^{-s_1}}{(1+a_2mx^{-s_2})^{s_3}}\,dx
=m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{(R+1)/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du
=\Theta\big(m^{\frac{1-s_1}{s_2}}\big).
\]
Second, when $s_1>1$ and $s_2s_3=s_1-1$, the same substitution yields
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}
\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{\infty}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du
\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\,O\big(\log m^{1/s_2}\big)
=\Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big),
\]
with the matching lower bound
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}
\ge m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{(R+1)/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du
=\Theta\big(m^{\frac{1-s_1}{s_2}}\log m\big).
\]
Third, when $s_1>1$ and $s_2s_3<s_1-1$, we have
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}
\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{\infty}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du
\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\,\Theta\big(m^{-\frac{1-s_1+s_2s_3}{s_2}}\big)
=\Theta\big(m^{-s_3}\big),
\]
and the matching lower bound follows from the $i=1$ term alone, since $a_1(1+a_2m)^{-s_3}=\Theta(m^{-s_3})$. Overall, if $s_1>1$,
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=
\begin{cases}
\Theta\big(m^{\max\{-s_3,\frac{1-s_1}{s_2}\}}\big), & s_2s_3\neq s_1-1,\\[2pt]
\Theta\big(m^{-s_3}\log m\big), & s_2s_3=s_1-1.
\end{cases} \tag{35}
\]

Lemma 16. Assume that $R=m^{\frac{1}{s_2}+\kappa}$ for some $\kappa>0$.
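Lemma 15 is an asymptotic statement, but the predicted exponent can be observed numerically. The sketch below estimates the log–log slope of the sum for one parameter setting in the regime $s_2s_3>s_1-1$ (the constants and truncation level are arbitrary choices); the fitted slope should be close to $(1-s_1)/s_2=-1/2$.

```python
import numpy as np

a1 = a2 = 1.0
s1, s2, s3 = 2.0, 2.0, 2.0   # regime s2*s3 > s1 - 1, predicted exponent (1-s1)/s2 = -1/2

i = np.arange(1, 2_000_001, dtype=float)  # long truncation so the tail is negligible

def S(m):
    return np.sum(a1 * i**(-s1) / (1.0 + a2 * m * i**(-s2))**s3)

m1, m2 = 1e3, 1e6
slope = (np.log(S(m2)) - np.log(S(m1))) / (np.log(m2) - np.log(m1))
print(slope)
```

The estimated slope agrees with the exponent in (31) to within a few percent at these values of $m$.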
Given constants $a_1,a_2,s_1,s_2,s_3>0$, if $s_1\le1$, we have
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=\tilde O\big(\max\{m^{-s_3},R^{1-s_1}\}\big). \tag{36}
\]

Proof of Lemma 16. First, when $s_1\le1$ and $s_2s_3>s_1-1$, we have
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}
\le\frac{a_1}{(1+a_2m)^{s_3}}+\int_{1}^{R}\frac{a_1x^{-s_1}}{(1+a_2mx^{-s_2})^{s_3}}\,dx
=\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{R/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du
=\frac{a_1}{(1+a_2m)^{s_3}}+\tilde O\Big(m^{\frac{1-s_1}{s_2}}\big(\tfrac{R}{m^{1/s_2}}\big)^{1-s_1}\Big)
=\tilde O\big(\max\{m^{-s_3},R^{1-s_1}\}\big).
\]
Second, when $s_1\le1$ and $s_2s_3\le s_1-1$, we have
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}
\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\int_{1/m^{1/s_2}}^{R/m^{1/s_2}}\frac{a_1u^{-s_1}}{(1+a_2u^{-s_2})^{s_3}}\,du
\le\frac{a_1}{(1+a_2m)^{s_3}}+m^{\frac{1-s_1}{s_2}}\,\tilde O\Big(m^{-\frac{1-s_1+s_2s_3}{s_2}}+\big(\tfrac{R}{m^{1/s_2}}\big)^{1-s_1}\Big)
=\tilde O\big(\max\{m^{-s_3},R^{1-s_1}\}\big).
\]
Overall, if $s_1\le1$,
\[
\sum_{i=1}^{R}\frac{a_1i^{-s_1}}{(1+a_2mi^{-s_2})^{s_3}}=\tilde O\big(\max\{m^{-s_3},R^{1-s_1}\}\big). \tag{37}
\]

Lemma 17. Assume that $f\in L^2(\Omega,\rho)$. Consider the random vector $f(x)=(f(x_1),\ldots,f(x_n))^T$, where $x_1,\ldots,x_n$ are drawn i.i.d. from $\rho$. Then with probability at least $1-\delta_1$, we have
\[
\|f(x)\|_2^2=\sum_{i=1}^nf^2(x_i)=\tilde O\Big(\big(\tfrac{1}{\delta_1}+1\big)n\|f\|_2^2\Big),\qquad\text{where }\|f\|_2^2=\int_{x\in\Omega}f^2(x)\,d\rho(x).
\]

Proof of Lemma 17. Given a positive number $C\ge\|f\|_2^2$, applying Markov's inequality we have $\mathbb{P}(f^2(X)>C)\le\frac{1}{C}\|f\|_2^2$. Let $A$ be the event that $f^2(x_i)\le C$ for all sample inputs $(x_i)_{i=1}^n$. Then
\[
\mathbb{P}(A)\ge1-n\,\mathbb{P}\big(f^2(X)>C\big)\ge1-\frac{n\|f\|_2^2}{C}. \tag{38}
\]
Define $\bar f^2(x)=\min\{f^2(x),C\}$. Then $\mathbb{E}\bar f^2(X)\le\mathbb{E}f^2(X)=\|f\|_2^2$, so $|\bar f^2(X)-\mathbb{E}\bar f^2(X)|\le\max\{C,\|f\|_2^2\}=C$. Since $0\le\bar f^2(x)\le C$, we have
\[
\mathbb{E}\big(\bar f^4(X)\big)\le C\,\mathbb{E}\big(\bar f^2(X)\big)\le C\|f\|_2^2, \tag{39}
\]
and hence
\[
\mathbb{E}\big|\bar f^2(X)-\mathbb{E}\bar f^2(X)\big|^2\le\mathbb{E}\big(\bar f^4(X)\big)\le C\|f\|_2^2. \tag{40}
\]
Applying Bernstein's inequality, we have
\[
\mathbb{P}\Big(\sum_{i=1}^n\bar f^2(x_i)>t+n\,\mathbb{E}\bar f^2(X)\Big)
\le\exp\Big(-\frac{t^2}{2\big(n\,\mathbb{E}|\bar f^2(X)-\mathbb{E}\bar f^2(X)|^2+\frac{Ct}{3}\big)}\Big)
\le\exp\Big(-\frac{t^2}{2\big(nC\|f\|_2^2+\frac{Ct}{3}\big)}\Big)
\le\exp\Big(-\frac{t^2}{4\max\{nC\|f\|_2^2,\frac{Ct}{3}\}}\Big).
\]
Hence, with probability at least $1-\delta_1/2$ we have
\[
\sum_{i=1}^n\bar f^2(x_i)\le\max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\,n\|f\|_2^2},\,\tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\}+n\,\mathbb{E}\bar f^2(X)
\le\max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\,n\|f\|_2^2},\,\tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\}+n\|f\|_2^2. \tag{41}
\]
When event $A$ happens, $f^2(x_i)=\bar f^2(x_i)$ for all sample inputs. According to (38) and (41), with probability at least $1-\frac{n\|f\|_2^2}{C}-\delta_1/2$, we have
\[
\sum_{i=1}^nf^2(x_i)=\sum_{i=1}^n\bar f^2(x_i)\le\max\Big\{\sqrt{4C\log\tfrac{2}{\delta_1}\,n\|f\|_2^2},\,\tfrac{4C}{3}\log\tfrac{2}{\delta_1}\Big\}+n\|f\|_2^2.
\]
Choosing $C=\frac{2}{\delta_1}n\|f\|_2^2$, with probability at least $1-\delta_1$ we have
\[
\sum_{i=1}^nf^2(x_i)\le\max\Big\{\sqrt{\tfrac{8}{\delta_1}\log\tfrac{2}{\delta_1}\,n^2\|f\|_2^4},\,\tfrac{8}{3\delta_1}n\|f\|_2^2\log\tfrac{2}{\delta_1}\Big\}+n\|f\|_2^2
=\tilde O\Big(\big(\tfrac{1}{\delta_1}+1\big)n\|f\|_2^2\Big).
\]

Lemma 18. Assume that $f\in L^2(\Omega,\rho)$, and consider the random vector $f(x)=(f(x_1),\ldots,f(x_n))^T$, where $x_1,\ldots,x_n$ are drawn i.i.d. from $\rho$. Assume that $\|f\|_\infty=\sup_{x\in\Omega}|f(x)|\le C$. Then with probability at least $1-\delta_1$, we have
\[
\|f(x)\|_2^2=\tilde O\Big(\sqrt{C^2n\|f\|_2^2}+C^2\Big)+n\|f\|_2^2,\qquad\text{where }\|f\|_2^2=\int_{x\in\Omega}f^2(x)\,d\rho(x).
\]

Proof of Lemma 18. We have $|f^2(X)-\mathbb{E}f^2(X)|\le\max\{C^2,\|f\|_2^2\}=C^2$. Since $0\le f^2(x)\le C^2$, we have
\[
\mathbb{E}\big(f^4(X)\big)\le C^2\,\mathbb{E}\big(f^2(X)\big)\le C^2\|f\|_2^2, \tag{42}
\]
and hence
\[
\mathbb{E}\big|f^2(X)-\mathbb{E}f^2(X)\big|^2\le\mathbb{E}\big(f^4(X)\big)\le C^2\|f\|_2^2. \tag{43}
\]
Applying Bernstein's inequality, we have
\[
\mathbb{P}\Big(\sum_{i=1}^nf^2(x_i)>t+n\,\mathbb{E}f^2(X)\Big)
\le\exp\Big(-\frac{t^2}{2\big(nC^2\|f\|_2^2+\frac{C^2t}{3}\big)}\Big)
\le\exp\Big(-\frac{t^2}{4\max\{nC^2\|f\|_2^2,\frac{C^2t}{3}\}}\Big).
\]
Hence, with probability at least $1-\delta_1$ we have
\[
\sum_{i=1}^nf^2(x_i)\le\max\Big\{\sqrt{4C^2\log\tfrac{1}{\delta_1}\,n\|f\|_2^2},\,\tfrac{4C^2}{3}\log\tfrac{1}{\delta_1}\Big\}+n\,\mathbb{E}f^2(X)
\le\tilde O\big(\max\{\sqrt{C^2n\|f\|_2^2},C^2\}\big)+n\|f\|_2^2
\le\tilde O\big(\sqrt{C^2n\|f\|_2^2}+C^2\big)+n\|f\|_2^2. \tag{44}
\]
For the proofs in the remainder of this section, the definitions of the relevant quantities are given in Section 3.

Corollary 19. With probability at least $1-\delta_1$, we have $\|f_{>R}(x)\|_2^2=\tilde O\big(\big(\tfrac{1}{\delta_1}+1\big)nR^{1-2\beta}\big)$.

Proof of Corollary 19. The $L^2$ norm of $f_{>R}$ satisfies $\|f_{>R}\|_2^2=\sum_{p=R+1}^\infty\mu_p^2\le\frac{C_\mu}{2\beta-1}R^{1-2\beta}$. Applying Lemma 17 we get the result.

Corollary 20.
For any $\nu\in\mathbb{R}^R$, with probability at least $1-\delta_1$ we have $\|\Phi_R\nu\|_2^2=\tilde O\big(\big(\tfrac{1}{\delta_1}+1\big)n\|\nu\|_2^2\big)$.

Proof of Corollary 20. Let $g(x)=\sum_{p=1}^R\nu_p\phi_p(x)$. Then $\Phi_R\nu=g(x)$, and the $L^2$ norm of $g$ is $\|g\|_2^2=\sum_{p=1}^R\nu_p^2=\|\nu\|_2^2$. Applying Lemma 17 we get the result.

Next we consider the quantity $\Phi_R^T\Phi_R-nI$. The key tool that we use is the matrix Bernstein inequality, which describes the upper tail of a sum of independent zero-mean random matrices.

Lemma 21. Let $D=\mathrm{diag}\{d_0,d_1,\ldots,d_R\}$ with $d_0,\ldots,d_R>0$, and let $d_{\max}=\max_pd_p$ and $M=\max\big\{\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2,\,d_{\max}^2\big\}$. Then with probability at least $1-\delta$, we have
\[
\big\|D(\Phi_R^T\Phi_R-nI)D\big\|_2\le\max\Big\{\sqrt{nd_{\max}^2M\log\tfrac{R}{\delta}},\,M\log\tfrac{R}{\delta}\Big\}. \tag{45}
\]

Proof of Lemma 21. Let $Y_j=(\phi_0(x_j),\ldots,\phi_R(x_j))^T$ and $Z_j=DY_j$. It is easy to verify that $\mathbb{E}(Z_jZ_j^T)=D^2$, so the matrix on the left-hand side of (45) equals $\sum_{j=1}^n[Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)]$. We note that
\[
\|Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)\|_2\le\max\{\|Z_jZ_j^T\|_2,\|\mathbb{E}(Z_jZ_j^T)\|_2\}\le\max\{\|Z_j\|_2^2,d_{\max}^2\}.
\]
For $\|Z_j\|_2^2$, we have
\[
\|Z_j\|_2^2=\sum_{p=0}^Rd_p^2\phi_p^2(x_j)\le\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2, \tag{46}
\]
so $\|Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)\|_2\le M$. On the other hand,
\[
\mathbb{E}\big[(Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T))^2\big]=\mathbb{E}\big[\|Z_j\|_2^2\,Z_jZ_j^T\big]-\big(\mathbb{E}(Z_jZ_j^T)\big)^2.
\]
Since, by (46),
\[
\mathbb{E}\big[\|Z_j\|_2^2\,Z_jZ_j^T\big]\preceq\Big(\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2\Big)\,\mathbb{E}\big[Z_jZ_j^T\big],
\]
we have
\[
\big\|\mathbb{E}\big[(Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T))^2\big]\big\|_2
\le\max\Big\{\Big(\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2\Big)d_{\max}^2,\,d_{\max}^4\Big\}
\le d_{\max}^2M.
\]
Using the matrix Bernstein inequality (Tropp, 2012, Theorem 6.1), we have
\[
\mathbb{P}\Big(\Big\|\sum_{j=1}^n[Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)]\Big\|_2>t\Big)
\le R\exp\Big(\frac{-t^2}{2\big(nd_{\max}^2M+\frac{tM}{3}\big)}\Big)
=R\exp\Big(\frac{-t^2}{O\big(\max\{nd_{\max}^2M,\,tM\}\big)}\Big).
\]
Then with probability at least $1-\delta$, we have
\[
\Big\|\sum_{j=1}^n[Z_jZ_j^T-\mathbb{E}(Z_jZ_j^T)]\Big\|_2
\le\max\Big\{\sqrt{nd_{\max}^2M\log\tfrac{R}{\delta}},\,M\log\tfrac{R}{\delta}\Big\}.
\]

Corollary 22. Suppose that the eigenvalues $(\lambda_p)_{p\ge1}$ satisfy Assumption 4 and the eigenfunctions satisfy Assumption 6. Assume $\sigma^2=\Theta(n^t)$ where $1-\frac{\alpha}{1+2\tau}<t<1$. Let $\gamma$ be a positive number such that $\frac{1+\alpha+2\tau-(1+2\tau+2\alpha)t}{2\alpha(1-t)}<\gamma\le1$. Then with probability at least $1-\delta$, we have
\[
\Big\|\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Lambda_R^{\gamma/2}(\Phi_R^T\Phi_R-nI)\Lambda_R^{\gamma/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Big\|_2
\le O\Big(n^{\frac{1+\alpha+2\tau-(1+2\tau+2\alpha)t}{2\alpha}-\gamma(1-t)}\sqrt{\log\tfrac{R}{\delta}}\Big). \tag{47}
\]

Proof of Corollary 22. We use the same notation as in Lemma 21. Let $D=(I+\frac{n}{\sigma^2}\Lambda_R)^{-\gamma/2}\Lambda_R^{\gamma/2}$. Then $d_{\max}^2\le\frac{\sigma^{2\gamma}}{n^\gamma}$ and
\[
\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2\le\sum_{p=0}^R\frac{C_\phi^2\lambda_p^\gamma p^{2\tau}}{(1+\frac{n}{\sigma^2}\lambda_p)^\gamma}=O\Big(\big(\tfrac{n}{\sigma^2}\big)^{\frac{1-\gamma\alpha+2\tau}{\alpha}}\Big),
\]
where the first inequality follows from Assumptions 4 and 6 and the last equality from Lemma 15. Then $M=\max\{\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2,\,d_{\max}^2\}=O\big((\tfrac{n}{\sigma^2})^{\frac{1-\gamma\alpha+2\tau}{\alpha}}\big)$. Applying Lemma 21, we have
\[
\Big\|\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Lambda_R^{\gamma/2}(\Phi_R^T\Phi_R-nI)\Lambda_R^{\gamma/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Big\|_2
\le\frac{1}{\sigma^2}\max\Big\{\sqrt{n\,\tfrac{\sigma^{2\gamma}}{n^\gamma}\,O\big(\big(\tfrac{n}{\sigma^2}\big)^{\frac{1-\gamma\alpha+2\tau}{\alpha}}\big)\log\tfrac{R}{\delta}},\;O\big(\big(\tfrac{n}{\sigma^2}\big)^{\frac{1-\gamma\alpha+2\tau}{\alpha}}\big)\log\tfrac{R}{\delta}\Big\}
=O\Big(\sqrt{\log\tfrac{R}{\delta}}\;n^{\frac{(1-2\gamma\alpha+2\tau)(1-t)}{2\alpha}+\frac12-t}\Big)
=O\Big(\sqrt{\log\tfrac{R}{\delta}}\;n^{\frac{1+\alpha+2\tau-(1+2\tau+2\alpha)t}{2\alpha}-\gamma(1-t)}\Big). \tag{48}
\]

Corollary 23. Suppose that the eigenvalues $(\lambda_p)_{p\ge1}$ satisfy Assumption 4 and the eigenfunctions satisfy Assumption 6. Let $\tilde\Lambda_{1,R}=\mathrm{diag}\{1,\lambda_1,\ldots,\lambda_R\}$. Assume $\sigma^2=\Theta(n^t)$ where $t<1$. Let $\gamma$ be a positive number such that $\frac{1+2\tau}{\alpha}<\gamma\le1$. Then with probability at least $1-\delta$, we have
\[
\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\tilde\Lambda_{1,R}^{\gamma/2}(\Phi_R^T\Phi_R-nI)\tilde\Lambda_{1,R}^{\gamma/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Big\|_2
\le O\Big(\sqrt{\log\tfrac{R}{\delta}}\;n^{\frac12}\Big). \tag{49}
\]

Proof of Corollary 23. We use the same notation as in Lemma 21. Let $D=(I+\frac{n}{\sigma^2}\Lambda_R)^{-\gamma/2}\tilde\Lambda_{1,R}^{\gamma/2}$.
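The concentration of $\Phi_R^T\Phi_R$ around $nI$ that drives Lemma 21 and its corollaries is easy to observe for the circle example of Section 4, where the eigenfunctions are the orthonormal Fourier basis under $\rho=U(\mathbb{S}^1)$. In the sketch below (sample sizes and the number of retained eigenfunctions are arbitrary choices), the operator-norm deviation of $\Phi^T\Phi/n$ from $I$ shrinks as $n$ grows, at roughly the $n^{-1/2}$ rate suggested by Corollary 23.

```python
import numpy as np

rng = np.random.default_rng(4)
R = 9  # number of eigenfunctions kept (1 constant + 4 cosine/sine pairs)

def Phi(theta):
    # orthonormal Fourier eigenfunctions on the circle under rho = U(S^1)
    cols = [np.ones_like(theta)]
    for p in range(1, (R - 1) // 2 + 1):
        cols.append(np.sqrt(2) * np.cos(p * theta))
        cols.append(np.sqrt(2) * np.sin(p * theta))
    return np.stack(cols, axis=1)

def deviation(n):
    theta = rng.uniform(-np.pi, np.pi, n)
    P = Phi(theta)
    return np.linalg.norm(P.T @ P / n - np.eye(R), 2)  # spectral norm

d_small, d_large = deviation(500), deviation(50000)
print(d_small, d_large)
```

A hundredfold increase in $n$ shrinks the deviation by roughly a factor of ten, consistent with the $\sqrt{n}$ scaling of the unnormalized bound (45).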
Then $d_{\max}^2\le1$ and
\[
\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2\le C_\phi^2+\sum_{p=1}^R\frac{C_\phi^2\lambda_p^\gamma p^{2\tau}}{(1+\frac{n}{\sigma^2}\lambda_p)^\gamma}=C_\phi^2+O\big(n^{\frac{(1-\gamma\alpha+2\tau)(1-t)}{\alpha}}\big)=O(1),
\]
where the first inequality follows from Assumptions 4 and 6 and the second equality from Lemma 15. Then $M=\max\{\sum_{p=0}^Rd_p^2\|\phi_p\|_\infty^2,\,d_{\max}^2\}=O(1)$. Applying Lemma 21, we have
\[
\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\tilde\Lambda_{1,R}^{\gamma/2}(\Phi_R^T\Phi_R-nI)\tilde\Lambda_{1,R}^{\gamma/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Big\|_2
\le\max\Big\{\sqrt{n\,O(1)\log\tfrac{R}{\delta}},\,O(1)\log\tfrac{R}{\delta}\Big\}
=O\Big(\sqrt{\log\tfrac{R}{\delta}}\;n^{\frac12}\Big). \tag{50}
\]

Corollary 24. Suppose that the eigenvalues $(\lambda_p)_{p\ge1}$ satisfy Assumption 4 and the eigenfunctions satisfy Assumption 6. Let $\Phi_{R+1:S}=(\phi_{R+1}(x),\ldots,\phi_S(x))$ and $\Lambda_{R+1:S}=\mathrm{diag}\{\lambda_{R+1},\ldots,\lambda_S\}$. Then with probability at least $1-\delta$, we have
\[
\big\|\Lambda_{R+1:S}^{1/2}(\Phi_{R+1:S}^T\Phi_{R+1:S}-nI)\Lambda_{R+1:S}^{1/2}\big\|_2
\le O\Big(\log\tfrac{S-R}{\delta}\,\max\big\{n^{\frac12}R^{\frac{1-2\alpha+2\tau}{2}},\,R^{1-\alpha+2\tau}\big\}\Big). \tag{51}
\]

Proof of Corollary 24. We use the same notation as in Lemma 21. Let $D=\Lambda_{R+1:S}^{1/2}$. Then $d_{\max}^2\le C_\lambda R^{-\alpha}=O(R^{-\alpha})$ and
\[
\sum_{p=R+1}^SC_\phi^2d_p^2p^{2\tau}\le\sum_{p=R+1}^SC_\phi^2C_\lambda p^{-\alpha}p^{2\tau}=O(R^{1-\alpha+2\tau}),
\]
where the first inequality follows from Assumptions 4 and 6. Then $M=\max\{\sum_{p=R+1}^SC_\phi^2d_p^2p^{2\tau},\,d_{\max}^2\}=O(R^{1-\alpha+2\tau})$. Applying Lemma 21, we have
\[
\big\|\Lambda_{R+1:S}^{1/2}(\Phi_{R+1:S}^T\Phi_{R+1:S}-nI)\Lambda_{R+1:S}^{1/2}\big\|_2
\le\max\Big\{\sqrt{n\,O(R^{-\alpha})\,O(R^{1-\alpha+2\tau})\log\tfrac{S-R}{\delta}},\;O(R^{1-\alpha+2\tau})\log\tfrac{S-R}{\delta}\Big\}
=O\Big(\log\tfrac{S-R}{\delta}\,\max\big\{n^{\frac12}R^{\frac{1-2\alpha+2\tau}{2}},\,R^{1-\alpha+2\tau}\big\}\Big). \tag{52}
\]

Lemma 25. Under the assumptions of Corollary 24, with probability at least $1-\delta$, we have
\[
\|\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\|_2=\tilde O\big(\max\{nR^{-\alpha},\,n^{\frac12}R^{\frac{1-2\alpha+2\tau}{2}},\,R^{1-\alpha+2\tau}\}\big).
\]

Proof of Lemma 25. For $S\in\mathbb{N}$, we have
\[
\|\Phi_{>S}\Lambda_{>S}\Phi_{>S}^T\|_2\le\sum_{p=S+1}^\infty\|\lambda_p\phi_p(x)\phi_p(x)^T\|_2=\sum_{p=S+1}^\infty\lambda_p\|\phi_p(x)\|_2^2\le\sum_{p=S+1}^\infty\lambda_p\,nC_\phi^2=O(nS^{1-\alpha}).
\]
Let $S=R^{\frac{\alpha}{\alpha-1}}$. Then we get $\|\Phi_{>S}\Lambda_{>S}\Phi_{>S}^T\|_2=O(nR^{-\alpha})$. Let $\Phi_{R+1:S}=(\phi_{R+1}(x),\ldots,\phi_S(x))$ and $\Lambda_{R+1:S}=\mathrm{diag}\{\lambda_{R+1},\ldots,\lambda_S\}$.
We then have
\[
\begin{aligned}
\|\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\|_2&\le\|\Phi_{>S}\Lambda_{>S}\Phi_{>S}^T\|_2+\|\Phi_{R+1:S}\Lambda_{R+1:S}\Phi_{R+1:S}^T\|_2\\
&\le O(nR^{-\alpha})+\big\|\Lambda_{R+1:S}^{1/2}\Phi_{R+1:S}^T\Phi_{R+1:S}\Lambda_{R+1:S}^{1/2}\big\|_2\\
&\le O(nR^{-\alpha})+n\|\Lambda_{R+1:S}\|_2+\big\|\Lambda_{R+1:S}^{1/2}\big(\Phi_{R+1:S}^T\Phi_{R+1:S}-nI\big)\Lambda_{R+1:S}^{1/2}\big\|_2\\
&\le O(nR^{-\alpha})+O(nR^{-\alpha})+O\Big(\log\tfrac{R^{\frac{\alpha}{\alpha-1}}-R}{\delta}\max\big\{n^{\frac12}R^{\frac{1-2\alpha+2\tau}{2}},\,R^{1-\alpha+2\tau}\big\}\Big)\\
&=\tilde O\big(\max\{nR^{-\alpha},\,n^{\frac12}R^{\frac{1-2\alpha+2\tau}{2}},\,R^{1-\alpha+2\tau}\}\big),
\end{aligned}
\]
where in the fourth inequality we use Corollary 24.

Corollary 26. Assume that $\sigma^2=\Theta(1)$. If $R=n^{\frac1\alpha+\kappa}$ where $0<\kappa<\frac{\alpha-1-2\tau}{\alpha(1+2\tau)}$, then with probability at least $1-\delta$, we have
\[
\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big\|_2\le\Big\|\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big\|_2=\tilde O\big(n^{-\kappa\alpha}\big)=o(1).
\]

Proof of Corollary 26. By Lemma 25 and the assumption $R=n^{\frac1\alpha+\kappa}$, we have
\[
\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big\|_2\le\Big\|\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big\|_2\le\tilde O\big(\max\{nR^{-\alpha},\,n^{\frac12}R^{\frac{1-2\alpha+2\tau}{2}},\,R^{1-\alpha+2\tau}\}\big)=\tilde O\big(n^{-\kappa\alpha}\big).
\]

Lemma 27. Assume that
\[
\Big\|\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Lambda_R^{\gamma/2}\big(\Phi_R^T\Phi_R-nI\big)\Lambda_R^{\gamma/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Big\|_2<1,
\]
where $\frac{1+2\tau}{\alpha}<\gamma\le1$. We then have
\[
\Big(I+\tfrac{1}{\sigma^2}\Lambda_R\Phi_R^T\Phi_R\Big)^{-1}=\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}.
\]

Proof of Lemma 27. First note that
\[
\Big\|\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}\Lambda_R^{1/2}\big(\Phi_R^T\Phi_R-nI\big)\Lambda_R^{1/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}\Big\|_2<\Big\|\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Lambda_R^{\gamma/2}\big(\Phi_R^T\Phi_R-nI\big)\Lambda_R^{\gamma/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-\gamma/2}\Big\|_2<1.
\]
Let $\tilde\Lambda_{\epsilon,R}=\mathrm{diag}\{\epsilon,\lambda_1,\dots,\lambda_R\}$. Since $\Lambda_R=\mathrm{diag}\{0,\lambda_1,\dots,\lambda_R\}$, we have that when $\epsilon$ is sufficiently small,
\[
\Big\|\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\tilde\Lambda_{\epsilon,R}^{1/2}\big(\Phi_R^T\Phi_R-nI\big)\tilde\Lambda_{\epsilon,R}^{1/2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\Big\|_2<1.
\]
Since all diagonal entries of $\tilde\Lambda_{\epsilon,R}$ are positive, we have
\[
\begin{aligned}
\Big(I+\tfrac{1}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\Big)^{-1}&=\Big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}+\tfrac{1}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big(\Phi_R^T\Phi_R-nI\big)\Big)^{-1}\\
&=\tilde\Lambda_{\epsilon,R}^{1/2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\Big[I+\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\tilde\Lambda_{\epsilon,R}^{1/2}\big(\Phi_R^T\Phi_R-nI\big)\tilde\Lambda_{\epsilon,R}^{1/2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\Big]^{-1}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\tilde\Lambda_{\epsilon,R}^{-1/2}\\
&=\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1}+\sum_{j=1}^\infty(-1)^j\,\tilde\Lambda_{\epsilon,R}^{1/2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\tilde\Lambda_{\epsilon,R}^{1/2}\big(\Phi_R^T\Phi_R-nI\big)\tilde\Lambda_{\epsilon,R}^{1/2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\Big)^j\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\tilde\Lambda_{\epsilon,R}^{-1/2}\\
&=\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1}\tilde\Lambda_{\epsilon,R}\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1}.
\end{aligned}
\]
Letting $\epsilon\to0$, we get
\[
\Big(I+\tfrac{1}{\sigma^2}\Lambda_R\Phi_R^T\Phi_R\Big)^{-1}=\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}.
\]
This concludes the proof.

Lemma 28. If $\big\|\big(I+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}\frac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\big\|_2<1$, then we have
\[
\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}-\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}=\sum_{j=1}^\infty(-1)^j\Big(\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^j\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}.\tag{53}
\]
In particular, assume that $\sigma^2=\Theta(1)$. Let $R=n^{\frac1\alpha+\kappa}$ where $0<\kappa<\frac{\alpha-1-2\tau}{\alpha(1+2\tau)}$. Then with probability at least $1-\delta$, for sufficiently large $n$, we have $\big\|\big(I+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}\frac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\big\|_2<1$ and (53) holds.

Proof of Lemma 28. Define $\Phi_{>R}=(\phi_{R+1}(x),\phi_{R+2}(x),\dots)$ and $\Lambda_{>R}=\mathrm{diag}(\lambda_{R+1},\lambda_{R+2},\dots)$. Then we have
\[
\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}-\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}=\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}+\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^{-1}-\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}=\bigg(\Big(I+\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^{-1}-I\bigg)\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}.
\]
By Corollary 26, for sufficiently large $n$, $\big\|\big(I+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1}\frac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\big\|_2<1$ with probability at least $1-\delta$. Hence
\[
\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}-\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}=\sum_{j=1}^\infty(-1)^j\Big(\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^j\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}.
\]

Lemma 29.
Assume that $\mu_0=0$ and $\sigma^2=\Theta(n^t)$ where $1-\frac{\alpha}{1+2\tau}<t<1$. Let $R=n^{(\frac1\alpha+\kappa)(1-t)}$ where $0<\kappa<\frac{\alpha-1-2\tau+(1+2\tau)t}{\alpha^2(1-t)}$. Then when $n$ is sufficiently large, with probability at least $1-2\delta$ we have
\[
\Big\|\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)^{-1}f_R(x)\Big\|_2=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\cdot n^{\max\{-(1-t),\frac{(1-2\beta)(1-t)}{2\alpha}\}}\Big).\tag{54}
\]

Proof of Lemma 29. Let $\Lambda_{1:R}=\mathrm{diag}\{\lambda_1,\dots,\lambda_R\}$, $\Phi_{1:R}=(\phi_1(x),\dots,\phi_R(x))$ and $\mu_{1:R}=(\mu_1,\dots,\mu_R)$. Since $\mu_0=0$, we have $(I+\frac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T)^{-1}f_R(x)=(I+\frac{1}{\sigma^2}\Phi_{1:R}\Lambda_{1:R}\Phi_{1:R}^T)^{-1}\Phi_{1:R}\mu_{1:R}$. Using the Woodbury matrix identity, we have
\[
\Big(I+\tfrac{1}{\sigma^2}\Phi_{1:R}\Lambda_{1:R}\Phi_{1:R}^T\Big)^{-1}\Phi_{1:R}\mu_{1:R}=\Big[I-\Phi_{1:R}\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\Lambda_{1:R}\Phi_{1:R}^T\Big]\Phi_{1:R}\mu_{1:R}=\Phi_{1:R}\mu_{1:R}-\Phi_{1:R}\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\mu_{1:R}=\sigma^2\Phi_{1:R}\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\mu_{1:R}.\tag{55}
\]
Let $A=(I+\frac{n}{\sigma^2}\Lambda_{1:R})^{-1/2}\Lambda_{1:R}^{1/2}(\Phi_{1:R}^T\Phi_{1:R}-nI)\Lambda_{1:R}^{1/2}(I+\frac{n}{\sigma^2}\Lambda_{1:R})^{-1/2}$. By Corollary 22, with probability at least $1-\delta$, we have $\|\frac{1}{\sigma^2}A\|_2=\tilde O\big(\sqrt{\log\frac{R}{\delta}}\,n^{\frac{1-\alpha+2\tau}{2\alpha}-\frac{(1+2\tau)t}{2\alpha}}\big)$. When $n$ is sufficiently large, $\|\frac{1}{\sigma^2}A\|_2=o(1)$ is less than 1 because $1-\frac{\alpha}{1+2\tau}<t<1$. By Lemma 27, we have
\[
\Big(I+\tfrac{1}{\sigma^2}\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\Big)^{-1}=\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\Lambda_{1:R}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}.
\]
We then have
\[
\begin{aligned}
\big\|\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\mu_{1:R}\big\|_2&=\frac{1}{\sigma^2}\Big\|\Big[\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\Lambda_{1:R}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\Big]\mu_{1:R}\Big\|_2\\
&\le\frac{1}{\sigma^2}\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\Big\|_2+\sum_{j=1}^\infty\Big\|\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\Lambda_{1:R}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Big)^j\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\Big\|_2.\tag{56}
\end{aligned}
\]
By Lemma 15 and Assumption 5, assuming that $\sup_{i\ge1}p_{i+1}-p_i=h$, we have
\[
\big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\big\|_2\le\sqrt{\sum_{p=1}^R\frac{C_\mu^2p^{-2\beta}}{(1+nC_\lambda p^{-\alpha}/\sigma^2)^2}}=\Theta\Big(n^{\max\{-(1-t),\frac{(1-2\beta)(1-t)}{2\alpha}\}}\log^{k/2}n\Big),
\]
\[
\big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\big\|_2\ge\sqrt{\sum_{i=1}^{\lfloor R/h\rfloor}\frac{C_\mu^2i^{-2\beta}}{(1+\frac{n}{\sigma^2}C_\lambda(hi)^{-\alpha})^2}}=\Theta\Big(n^{\max\{-(1-t),\frac{(1-2\beta)(1-t)}{2\alpha}\}}\log^{k/2}n\Big),
\]
where $k=\begin{cases}0,&2\alpha\neq2\beta-1,\\1,&2\alpha=2\beta-1.\end{cases}$ Overall we have
\[
\big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\big\|_2=\Theta\Big(n^{(1-t)\max\{-1,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\Big).\tag{57}
\]
Using the fact that $\|\frac{1}{\sigma^2}A\|_2=\tilde O\big(\sqrt{\log\frac{R}{\delta}}\,n^{\frac{1-\alpha+2\tau}{2\alpha}-\frac{(1+2\tau)t}{2\alpha}}\big)$ and $\|(I+\frac{n}{\sigma^2}\Lambda_{1:R})^{-1}\Lambda_{1:R}\|_2\le\sigma^2n^{-1}$, we have
\[
\Big\|\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\Lambda_{1:R}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Big)^j\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\Big\|_2=\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-\frac12}\Lambda_{1:R}^{\frac12}\Big(\tfrac{1}{\sigma^2}A\Big)^j\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-\frac12}\Lambda_{1:R}^{-\frac12}\mu_{1:R}\Big\|_2\le\tilde O\big(n^{-\frac{1-t}{2}}\big)\Big\|\tfrac{1}{\sigma^2}A\Big\|_2^j\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-\frac12}\Lambda_{1:R}^{-\frac12}\mu_{1:R}\Big\|_2.\tag{58}
\]
By Lemma 16 and the assumption $R=n^{(\frac1\alpha+\kappa)(1-t)}$,
\[
\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-\frac12}\Lambda_{1:R}^{-\frac12}\mu_{1:R}\Big\|_2\le\sqrt{\sum_{p=1}^R\frac{(C_\lambda p^{-\alpha})^{-1}C_\mu^2p^{-2\beta}}{1+nC_\lambda p^{-\alpha}/\sigma^2}}=\tilde O\big(\max\{n^{-(1-t)/2},\,R^{1/2-\beta+\alpha/2}\}\big)=\tilde O\big(\max\{n^{-(1-t)/2},\,n^{(\frac12+\frac{1-2\beta}{2\alpha}+\kappa(1/2-\beta+\alpha/2))(1-t)}\}\big).\tag{59}
\]
We then have
\[
\Big\|\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\Lambda_{1:R}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Big)^j\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\mu_{1:R}\Big\|_2=\Big\|\tfrac{1}{\sigma^2}A\Big\|_2^j\,\tilde O\big(\max\{n^{-(1-t)},\,n^{(\frac{1-2\beta}{2\alpha}+\kappa(1/2-\beta+\alpha/2))(1-t)}\}\big).\tag{60}
\]
By (56), (57) and (60), we have
\[
\begin{aligned}
\big\|\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\mu_{1:R}\big\|_2&=\Theta\Big(n^{(1-t)\max\{-1,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\Big)+\sum_{j=1}^\infty\Big\|\tfrac{1}{\sigma^2}A\Big\|_2^j\,\tilde O\big(\max\{n^{-(1-t)},\,n^{(1-t)\frac{1-2\beta}{2\alpha}+\kappa(1-t)(1/2-\beta+\alpha/2)}\}\big)\\
&=\Theta\Big(n^{(1-t)\max\{-1,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\Big)+\tilde O\big(n^{\frac{1-\alpha+2\tau}{2\alpha}-\frac{(1+2\tau)t}{2\alpha}}\big)\,\tilde O\big(\max\{n^{-(1-t)},\,n^{(1-t)\frac{1-2\beta}{2\alpha}+\kappa(1-t)(1/2-\beta+\alpha/2)}\}\big).\tag{61}
\end{aligned}
\]
By the assumption $\kappa<\frac{\alpha-1-2\tau+(1+2\tau)t}{\alpha^2(1-t)}$, we have that
\[
\kappa(1-t)\big(\tfrac12-\beta+\tfrac\alpha2\big)+\tfrac{1-\alpha+2\tau}{2\alpha}-\tfrac{(1+2\tau)t}{2\alpha}<\kappa\alpha(1-t)/2+\tfrac{1-\alpha+2\tau}{2\alpha}-\tfrac{(1+2\tau)t}{2\alpha}<0.
\]
Using (61), we then get
\[
\big\|\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\mu_{1:R}\big\|_2=\Theta\Big(n^{(1-t)\max\{-1,\frac{1-2\beta}{2\alpha}\}}\log^{k/2}n\Big)=\frac{1+o(1)}{\sigma^2}\Big\|\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}\Big\|_2.\tag{62}
\]
By Corollary 20, with probability at least $1-\delta$, we have
\[
\big\|\Phi_{1:R}\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\mu_{1:R}\big\|_2=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\,\big\|\big(\sigma^2I+\Lambda_{1:R}\Phi_{1:R}^T\Phi_{1:R}\big)^{-1}\mu_{1:R}\big\|_2\Big)=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\cdot n^{(1-t)\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big).\tag{63}
\]
From (55), we get $\big\|\big(I+\tfrac{1}{\sigma^2}\Phi_{1:R}\Lambda_{1:R}\Phi_{1:R}^T\big)^{-1}\Phi_{1:R}\mu_{1:R}\big\|_2=\tilde O\big(\sqrt{(\tfrac1\delta+1)n}\cdot n^{(1-t)\max\{-1,\frac{1-2\beta}{2\alpha}\}}\big)$. This concludes the proof.

Lemma 30. Assume that $\mu_0>0$ and $\sigma^2=\Theta(n^t)$ where $1-\frac{\alpha}{1+2\tau}<t<1$. Let $R=n^{\frac1\alpha+\kappa}$ where $0<\kappa<\frac{\alpha-1-2\tau+(1+2\tau)t}{\alpha^2}$. Then when $n$ is sufficiently large, with probability at least $1-2\delta$, we have
\[
\Big\|\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)^{-1}f_R(x)\Big\|_2=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\Big).\tag{64}
\]

Proof of Lemma 30. Using the Woodbury matrix identity, we have
\[
\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)^{-1}f_R(x)=\big[I-\Phi_R\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\Lambda_R\Phi_R^T\big]\Phi_R\mu_R=\Phi_R\mu_R-\Phi_R\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\Lambda_R\Phi_R^T\Phi_R\mu_R=\sigma^2\Phi_R\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_R.\tag{65}
\]
Let $\mu_{R,1}=(\mu_0,0,\dots,0)$ and $\mu_{R,2}=(0,\mu_1,\dots,\mu_R)$, so that $\mu_R=\mu_{R,1}+\mu_{R,2}$. Then we have
\[
\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_R\big\|_2\le\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_{R,1}\big\|_2+\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_{R,2}\big\|_2.\tag{66}
\]
According to (62) in the proof of Lemma 29, we have $\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_{R,2}\big\|_2=\tilde O\big(n^{\max\{-(1-t),\frac{(1-t)(1-2\beta)}{2\alpha}\}}\big)$. Next we estimate $\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_{R,1}\big\|_2$. Let
\[
A=\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-\gamma/2}\Lambda_{1:R}^{\gamma/2}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Lambda_{1:R}^{\gamma/2}\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-\gamma/2},
\]
where $\frac{1}{1-t}\big(\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}\big)<\gamma<1$. Since $1-\frac{\alpha}{1+2\tau}<t<1$, we have $\frac{1}{1-t}\big(\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}\big)<1$, so the range for $\gamma$ is well-defined. By Corollary 22, with probability at least $1-\delta$, we have $\|\frac{1}{\sigma^2}A\|_2=\tilde O\big(\sqrt{\log\frac{R}{\delta}}\,n^{\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}-\gamma(1-t)}\big)=o(1)$. When $n$ is sufficiently large, $\|\frac{1}{\sigma^2}A\|_2$ is less than 1 because $1-\frac{\alpha}{1+2\tau}<t<1$.
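The Woodbury step used in (55) and (65) can be sanity-checked numerically. The sketch below uses small random stand-ins for $\Phi_R$, $\Lambda_R$ and $\mu_R$ (all dimensions and numeric values are illustrative, not taken from the paper) to verify the identity $(I+\frac{1}{\sigma^2}\Phi\Lambda\Phi^T)^{-1}\Phi\mu=\sigma^2\,\Phi\,(\sigma^2I+\Lambda\Phi^T\Phi)^{-1}\mu$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 12, 5          # n sample points, R retained eigenfunctions (illustrative)
sigma2 = 0.7          # noise level sigma^2 (illustrative)

Phi = rng.standard_normal((n, R))        # stand-in for the feature matrix Phi_R
Lam = np.diag(rng.uniform(0.1, 1.0, R))  # stand-in for Lambda_R (positive eigenvalues)
mu = rng.standard_normal(R)              # stand-in for the prior mean coefficients mu_R

# Left-hand side: (I + Phi Lam Phi^T / sigma^2)^{-1} Phi mu
lhs = np.linalg.solve(np.eye(n) + Phi @ Lam @ Phi.T / sigma2, Phi @ mu)

# Right-hand side after Woodbury: sigma^2 Phi (sigma^2 I + Lam Phi^T Phi)^{-1} mu
rhs = sigma2 * Phi @ np.linalg.solve(sigma2 * np.eye(R) + Lam @ Phi.T @ Phi, mu)

assert np.allclose(lhs, rhs)
```

The identity trades an $n\times n$ inverse for an $R\times R$ one, which is what makes the finite-rank truncation arguments in these proofs tractable.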
By Lemma 27, we have
\[
\Big(I+\tfrac{1}{\sigma^2}\Lambda_R\Phi_R^T\Phi_R\Big)^{-1}=\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}.
\]
We then have
\[
\begin{aligned}
\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_{R,1}\big\|_2&=\frac{1}{\sigma^2}\Big\|\Big[\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}\Big]\mu_{R,1}\Big\|_2\\
&\le\frac{1}{\sigma^2}\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\mu_{R,1}\Big\|_2+\sum_{j=1}^\infty\Big\|\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\mu_{R,1}\Big\|_2.\tag{67}
\end{aligned}
\]
By Lemma 15,
\[
\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\mu_{R,1}\Big\|_2\le\sqrt{\mu_0^2+\sum_{p=1}^R\frac{C_\mu^2p^{-2\beta}}{(1+nC_\lambda p^{-\alpha}/\sigma^2)^2}}=O(1).\tag{68}
\]
Let $\tilde\Lambda_{1,R}=\mathrm{diag}\{1,\lambda_1,\dots,\lambda_R\}$ and $I_{0,R}=\mathrm{diag}(0,1,\dots,1)$. Then $\Lambda_R=\tilde\Lambda_{1,R}I_{0,R}$. Let $B=(I+\frac{n}{\sigma^2}\Lambda_R)^{-\gamma/2}\tilde\Lambda_{1,R}^{\gamma/2}(\Phi_R^T\Phi_R-nI)\tilde\Lambda_{1,R}^{\gamma/2}(I+\frac{n}{\sigma^2}\Lambda_R)^{-\gamma/2}$. According to Corollary 23, we have $\|B\|_2=O\big(\sqrt{\log\frac{R}{\delta}}\,n^{\frac12}\big)$. Using the fact that $\|\frac{1}{\sigma^2}A\|_2=\tilde O\big(\sqrt{\log\frac{R}{\delta}}\,n^{\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}-\gamma(1-t)}\big)$, we have
\[
\begin{aligned}
\Big\|\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\mu_{R,1}\Big\|_2&=\frac{1}{\sigma^{2j}}\Big\|\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1+\frac\gamma2}\Lambda_R^{1-\frac\gamma2}\Big(A\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1+\gamma}\Lambda_R^{1-\gamma}\Big)^{j-1}B\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1+\frac\gamma2}\mu_{R,1}\Big\|_2\\
&\le\frac{1}{\sigma^2}\,n^{(-1+\frac\gamma2+(-1+\gamma)(j-1))(1-t)}\,\tilde O\Big(\sqrt{\log\tfrac{R}{\delta}}\,n^{(j-1)(\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}-\gamma(1-t))}\Big)\sqrt{\log\tfrac{R}{\delta}}\,n^{\frac12}\,\|\mu_{R,1}\|_2\\
&\le n^{(-1+\frac\gamma2)(1-t)+\frac12-t}\,\tilde O\big(n^{\frac{[1-\alpha+2\tau-(1+2\tau)t](j-1)}{2\alpha}}\big)\sqrt{\log\tfrac{R}{\delta}}\,\|\mu_{R,1}\|_2=\tilde O\big(n^{-\frac12+\frac\gamma2(1-t)+\frac{[1-\alpha+2\tau-(1+2\tau)t](j-1)}{2\alpha}}\big).\tag{69}
\end{aligned}
\]
Since $\frac{1}{1-t}\big(\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}\big)<\gamma<1$ and $-\frac12+\frac{1}{1-t}\big(\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}\big)\frac{1-t}{2}<0$, we can let $\gamma$ be slightly larger than $\frac{1}{1-t}\big(\frac{1+\alpha+2\tau}{2\alpha}-\frac{(1+2\tau+2\alpha)t}{2\alpha}\big)$ so that $-\frac12+\frac\gamma2(1-t)<0$ holds. By (67), (68) and (69), we have
\[
\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_{R,1}\big\|_2\le O(1)+\sum_{j=1}^\infty\tilde O\big(n^{-\frac12+\frac\gamma2(1-t)+\frac{[1-\alpha+2\tau-(1+2\tau)t](j-1)}{2\alpha}}\big)\le O(1)+o(1)=O(1).\tag{70}
\]
According to (66), we have $\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_R\big\|_2=\tilde O\big(n^{\max\{-(1-t),\frac{(1-t)(1-2\beta)}{2\alpha}\}}\big)+O(1)=O(1)$.
By Corollary 20, with probability at least $1-\delta$, we have
\[
\big\|\Phi_R\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_R\big\|_2=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\,\big\|\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\mu_R\big\|_2\Big)=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\Big).
\]
From (65), we get $\big\|\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\big)^{-1}f_R(x)\big\|_2=\tilde O\big(\sqrt{(\tfrac1\delta+1)n}\big)$. This concludes the proof.

Lemma 31. Assume that $\sigma^2=\Theta(1)$. Let $R=n^{\frac1\alpha+\kappa}$ where $0<\kappa<\frac{\alpha-1-2\tau}{\alpha^2}$. Assume that $\mu_0=0$. Then when $n$ is sufficiently large, with probability at least $1-3\delta$ we have
\[
\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\cdot n^{\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big).\tag{71}
\]
Assume that $\mu_0>0$. Then when $n$ is sufficiently large, with probability at least $1-3\delta$ we have
\[
\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\Big).\tag{72}
\]

Proof of Lemma 31. We have
\[
\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)=\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)+\bigg(\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}-\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\bigg)f_R(x).\tag{73}
\]
When $\mu_0=0$, by Lemma 29, with probability at least $1-2\delta$, we have $\big\|\big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\big)^{-1}f_R(x)\big\|_2=\tilde O\big(\sqrt{(\tfrac1\delta+1)n}\cdot n^{\max\{-1,\frac{1-2\beta}{2\alpha}\}}\big)$. Since $\frac{\alpha-1-2\tau}{\alpha^2}<\frac{\alpha-1-2\tau}{\alpha(1+2\tau)}$, we can apply Lemma 28 and Corollary 26 and get that, with probability at least $1-\delta$, the second term on the right-hand side of (73) is estimated as follows:
\[
\begin{aligned}
\bigg\|\bigg(\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}-\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\bigg)f_R(x)\bigg\|_2&=\bigg\|\sum_{j=1}^\infty(-1)^j\Big(\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^j\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\bigg\|_2\\
&\le\sum_{j=1}^\infty\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big\|_2^j\,\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2\\
&=\sum_{j=1}^\infty\tilde O(n^{-j\kappa\alpha})\,\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\cdot n^{\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big)=o\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\cdot n^{\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big).
\end{aligned}
\]
Overall, from (73), we have that with probability $1-3\delta$,
\[
\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2=\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\cdot n^{\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big).
\]
When $\mu_0>0$, using the same approach and Lemma 30, we can prove that $\big\|\big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\big)^{-1}f_R(x)\big\|_2=\tilde O\big(\sqrt{(\tfrac1\delta+1)n}\big)$. This concludes the proof.

D PROOF OF THE MAIN RESULTS

D.1 PROOFS RELATED TO THE ASYMPTOTICS OF THE NORMALIZED STOCHASTIC COMPLEXITY

Lemma 32.
Under Assumptions 4, 5 and 6, with probability at least $1-2\delta$, we have
\[
|T_{1,R}(D_n)-T_1(D_n)|=\tilde O\Big(\tfrac{1}{\sigma^2}\big(nR^{1-\alpha}+n^{1/2}R^{1-\alpha+\tau}+R^{1-\alpha+2\tau}\big)\Big).\tag{74}
\]
If $R=n^{\frac1\alpha+\kappa}$ where $\kappa>0$, we have $|T_{1,R}(D_n)-T_1(D_n)|=o\big(\frac{1}{\sigma^2}n^{\frac1\alpha}\big)$. If we further assume that $0<\kappa<\frac{\alpha-1-2\tau}{\alpha^2}$, $\mu_0=0$ and $\sigma^2=\Theta(1)$, then for sufficiently large $n$, with probability at least $1-4\delta$ we have
\[
|T_{2,R}(D_n)-T_2(D_n)|=\tilde O\Big(\big(\tfrac1\delta+1\big)n^{\max\{(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\,1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\,-1-\kappa\alpha,\,1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\Big).\tag{75}
\]

Proof of Lemma 32. Define $\Phi_{>R}=(\phi_{R+1}(x),\phi_{R+2}(x),\dots)$ and $\Lambda_{>R}=\mathrm{diag}(\lambda_{R+1},\lambda_{R+2},\dots)$. We then have
\[
|T_1(D_n)-T_{1,R}(D_n)|\le\Big|\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Phi\Lambda\Phi^T\Big)-\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)\Big|+\tfrac12\Big|\mathrm{Tr}\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}-\mathrm{Tr}\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Big|.\tag{76}
\]
As for the first term on the right-hand side of (76), we have
\[
\begin{aligned}
\Big|\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Phi\Lambda\Phi^T\Big)-\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)\Big|&=\Big|\tfrac12\log\det\Big(\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)^{-1}\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T+\tfrac{1}{\sigma^2}\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\Big)\Big)\Big|\\
&=\Big|\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)^{-1}\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\Big)\Big|\\
&=\tfrac12\Big|\mathrm{Tr}\log\Big(I+\tfrac{1}{\sigma^2}\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)^{-1}\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\Big)\Big|.\tag{77}
\end{aligned}
\]
Given a concave function $h$ and a matrix $B\in\mathbb{R}^{n\times n}$ whose eigenvalues $\zeta_1,\dots,\zeta_n$ are all positive, we have
\[
\mathrm{Tr}\,h(B)=\sum_{p=1}^n h(\zeta_p)\le n\,h\Big(\tfrac1n\sum_{p=1}^n\zeta_p\Big)=n\,h\Big(\tfrac1n\mathrm{Tr}\,B\Big),\tag{78}
\]
where we used Jensen's inequality.
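The trace inequality (78) is easy to confirm numerically. The following minimal sketch (a random symmetric positive definite $B$ and $h(x)=\log(1+x)$; the matrix size is illustrative) checks $\mathrm{Tr}\,h(B)\le n\,h(\mathrm{Tr}\,B/n)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
B = M @ M.T + 0.1 * np.eye(n)    # symmetric positive definite, so all zeta_p > 0

zeta = np.linalg.eigvalsh(B)     # eigenvalues zeta_1, ..., zeta_n
h = np.log1p                     # h(x) = log(1 + x), concave on [0, inf)

tr_h = h(zeta).sum()             # Tr h(B) = sum_p h(zeta_p)
bound = n * h(np.trace(B) / n)   # n * h(Tr B / n), the Jensen bound in (78)

assert tr_h <= bound + 1e-12
```

Since $\sum_p\zeta_p=\mathrm{Tr}\,B$ exactly, the only slack in (78) comes from Jensen's inequality applied to the concave $h$.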
Using $h(x)=\log(1+x)$ in (78), with probability $1-\delta$, we have
\[
\begin{aligned}
\Big|\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Phi\Lambda\Phi^T\Big)-\tfrac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)\Big|&\le\tfrac n2\log\Big(1+\tfrac1n\mathrm{Tr}\Big(\tfrac{1}{\sigma^2}\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\Big)\Big)\\
&\le\tfrac n2\log\Big(1+\tfrac{1}{n\sigma^2}\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Big\|_2\,\mathrm{Tr}\big(\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T\big)\Big)\\
&\le\tfrac n2\log\Big(1+\tfrac{1}{n\sigma^2}\sum_{p=R+1}^\infty\lambda_p\|\phi_p(x)\|_2^2\Big)\le\tfrac{1}{2\sigma^2}\sum_{p=R+1}^\infty\lambda_p\|\phi_p(x)\|_2^2\\
&=\tfrac{1}{2\sigma^2}\sum_{p=R+1}^\infty\lambda_p\Big(C_\phi^2\,\tilde O\big(\sqrt{p^{2\tau}n\|\phi_p\|_2^2}+p^{2\tau}\big)+n\|\phi_p\|_2^2\Big)\\
&=\tilde O\Big(\tfrac{1}{\sigma^2}\Big(n\sum_{p=R+1}^\infty\lambda_p+n^{1/2}\sum_{p=R+1}^\infty\lambda_pp^\tau+\sum_{p=R+1}^\infty\lambda_pp^{2\tau}\Big)\Big)\\
&=\tilde O\Big(\tfrac{1}{\sigma^2}\big(nR^{1-\alpha}+n^{1/2}R^{1-\alpha+\tau}+R^{1-\alpha+2\tau}\big)\Big)=o\Big(\tfrac{1}{\sigma^2}n^{\frac1\alpha}\Big),\tag{79}
\end{aligned}
\]
where in the second inequality we use the fact that $\mathrm{Tr}\,AB\le\|A\|_2\,\mathrm{Tr}\,B$ when $A$ and $B$ are symmetric positive definite matrices, and in the last inequality we use Lemma 18. As for the second term on the right-hand side of (76), let $A=\big(I+\frac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\big)^{-1/2}$. Then we have
\[
\begin{aligned}
\tfrac12\Big|\mathrm{Tr}\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}-\mathrm{Tr}\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Big|&=\tfrac12\Big|\mathrm{Tr}\,A\Big[I-\Big(I+A\Big(\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)A\Big)^{-1}\Big]A\Big|\\
&\le\tfrac12\mathrm{Tr}\Big[I-\Big(I+A\Big(\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)A\Big)^{-1}\Big]\\
&\le\tfrac n2\Big(1-\Big(1+\tfrac1n\mathrm{Tr}\,A\Big(\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)A\Big)^{-1}\Big)\\
&\le\tfrac n2\Big(1-\Big(1+\tfrac1n\mathrm{Tr}\Big(\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)\Big)^{-1}\Big)\\
&\le\tfrac n2\Big(1-\Big(1+\tfrac{1}{n\sigma^2}\sum_{p=R+1}^\infty\lambda_p\|\phi_p(x)\|_2^2\Big)^{-1}\Big)\\
&\le\tfrac{1}{2\sigma^2}\sum_{p=R+1}^\infty\lambda_p\|\phi_p(x)\|_2^2=\tilde O\Big(\tfrac{1}{\sigma^2}\big(nR^{1-\alpha}+n^{1/2}R^{1-\alpha+\tau}+R^{1-\alpha+2\tau}\big)\Big)=o\Big(\tfrac{1}{\sigma^2}n^{\frac1\alpha}\Big),
\end{aligned}
\]
where in the first inequality we use the fact that $\|A\|_2\le1$ and $\mathrm{Tr}\,ABA\le\|A\|_2^2\,\mathrm{Tr}\,B$ when $A$ and $B$ are symmetric positive definite matrices, in the second inequality we use $h(x)=1-1/(1+x)$ in (78), and in the last equality we use the last few steps of (79). This concludes the proof of the first statement.

As for $|T_2(D_n)-T_{2,R}(D_n)|$, we have
\[
|T_2(D_n)-T_{2,R}(D_n)|\le\Big|f(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x)-f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|+\Big|f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)-f_R(x)^T\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|.\tag{80}
\]
For the first term on the right-hand side of (80), we have
\[
\begin{aligned}
\Big|f(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x)-f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|&\le2\Big|f_{>R}(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|+\Big|f_{>R}(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_{>R}(x)\Big|\\
&\le2\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2+\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}\Big\|_2\|f_{>R}(x)\|_2\\
&\le2\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2+\|f_{>R}(x)\|_2^2.
\end{aligned}
\]
Applying Corollary 19 and Lemma 31, with probability at least $1-4\delta$, we have
\[
\begin{aligned}
\Big|f(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x)-f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|&\le2\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)nR^{1-2\beta}}\Big)\,\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\cdot n^{\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big)+\tilde O\Big(\big(\tfrac1\delta+1\big)nR^{1-2\beta}\Big)\\
&=2\tilde O\Big(\big(\tfrac1\delta+1\big)n^{1+(\frac1\alpha+\kappa)\frac{1-2\beta}{2}+\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big)+\tilde O\Big(\big(\tfrac1\delta+1\big)n^{1+(\frac1\alpha+\kappa)(1-2\beta)}\Big)\\
&=2\tilde O\Big(\big(\tfrac1\delta+1\big)n^{1+(\frac1\alpha+\kappa)\frac{1-2\beta}{2}+\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big),
\end{aligned}
\]
where the last equality holds because $(\frac1\alpha+\kappa)\frac{1-2\beta}{2}<\frac{1-2\beta}{2\alpha}$ when $\kappa>0$. As for the second term on the right-hand side of (80), according to Lemma 28, Corollary 26 and Lemma 29, we have
\[
\begin{aligned}
\Big|f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)-f_R(x)^T\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|&=\Big|\sum_{j=1}^\infty(-1)^jf_R(x)^T\Big(\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^j\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|\\
&\le\sum_{j=1}^\infty\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Big\|_2^{j-1}\Big\|\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big\|_2^j\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2^2\\
&=\sum_{j=1}^\infty\tilde O(n^{-j\kappa\alpha})\,\tilde O\Big(\big(\tfrac1\delta+1\big)n^{1+\max\{-2,\frac{1-2\beta}{\alpha}\}}\Big)=\tilde O\Big(\big(\tfrac1\delta+1\big)n^{1+\max\{-2,\frac{1-2\beta}{\alpha}\}-\kappa\alpha}\Big).\tag{81}
\end{aligned}
\]
By (80), we have
\[
|T_2(D_n)-T_{2,R}(D_n)|=\tilde O\Big(\big(\tfrac1\delta+1\big)n^{1+(\frac1\alpha+\kappa)\frac{1-2\beta}{2}+\max\{-1,\frac{1-2\beta}{2\alpha}\}}\Big)+\tilde O\Big(\big(\tfrac1\delta+1\big)n^{1+\max\{-2,\frac{1-2\beta}{\alpha}\}-\kappa\alpha}\Big)=\tilde O\Big(\big(\tfrac1\delta+1\big)n^{\max\{(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\,1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\,-1-\kappa\alpha,\,1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\Big).
\]
This concludes the proof of the second statement.

In Lemma 32, we gave a bound for $|T_{2,R}(D_n)-T_2(D_n)|$ when $n^{\frac1\alpha}<R<n^{\frac1\alpha+\frac{\alpha-1-2\tau}{\alpha^2}}$. For $R>n$, we note the following lemma:

Lemma 33. Let $R=n^C$ and $\sigma^2=n^t$. Assume that $C\ge1$ and $C(1-\alpha+2\tau)-t<0$.
Under Assumptions 4, 5 and 6, for sufficiently large $n$ and with probability at least $1-3\delta$ we have
\[
|T_{2,R}(D_n)-T_2(D_n)|=\tilde O\Big(\big(\tfrac1\delta+1\big)\tfrac{1}{\sigma^2}\,nR^{\max\{1/2-\beta,\,1-\alpha+2\tau\}}\Big).\tag{82}
\]

Proof of Lemma 33. Define $\Phi_{>R}=(\phi_{R+1}(x),\phi_{R+2}(x),\dots)$ and $\Lambda_{>R}=\mathrm{diag}(\lambda_{R+1},\lambda_{R+2},\dots)$. Then we have
\[
|T_2(D_n)-T_{2,R}(D_n)|\le\Big|f(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x)-f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|+\Big|f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)-f_R(x)^T\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|.\tag{83}
\]
For the first term on the right-hand side of (83), with probability $1-3\delta$ we have
\[
\begin{aligned}
\Big|f(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x)-f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|&\le2\Big|f_{>R}(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|+\Big|f_{>R}(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_{>R}(x)\Big|\\
&\le2\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}\Big\|_2\|f_R(x)\|_2+\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}\Big\|_2\|f_{>R}(x)\|_2\\
&\le2\|f_{>R}(x)\|_2\|f_R(x)\|_2+\|f_{>R}(x)\|_2^2\\
&\le2\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)nR^{1-2\beta}}\Big)\,\tilde O\Big(\sqrt{\big(\tfrac1\delta+1\big)n}\,\|f\|_2\Big)+\tilde O\Big(\big(\tfrac1\delta+1\big)nR^{1-2\beta}\Big)=\tilde O\Big(\big(\tfrac1\delta+1\big)nR^{1/2-\beta}\Big),
\end{aligned}
\]
where we used Corollary 19 and Lemma 17 for the last inequality. The assumption $C(1-\alpha+2\tau)-t<0$ means that $\frac{R^{1-\alpha+2\tau}}{\sigma^2}=o(1)$. For the second term on the right-hand side of (83), by Lemmas 28 and 25, we have
\[
\begin{aligned}
\Big|f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)-f_R(x)^T\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|&=\Big|\sum_{j=1}^\infty(-1)^jf_R(x)^T\Big(\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big)^j\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|\\
&\le\sum_{j=1}^\infty\Big\|\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}\Big\|_2^{j+1}\Big\|\tfrac{\Phi_{>R}\Lambda_{>R}\Phi_{>R}^T}{\sigma^2}\Big\|_2^j\,\|f_R(x)\|_2^2\\
&=\sum_{j=1}^\infty\tilde O\Big(\Big(\tfrac{R^{1-\alpha+2\tau}}{\sigma^2}\Big)^j\Big)\,\tilde O\Big(\big(\tfrac1\delta+1\big)n\|f\|_2^2\Big)=\tilde O\Big(\big(\tfrac1\delta+1\big)\tfrac{1}{\sigma^2}nR^{1-\alpha+2\tau}\Big).\tag{84}
\end{aligned}
\]
Using (83), we have
\[
|T_2(D_n)-T_{2,R}(D_n)|=\tilde O\Big(\big(\tfrac1\delta+1\big)nR^{1/2-\beta}\Big)+\tilde O\Big(\big(\tfrac1\delta+1\big)\tfrac{n}{\sigma^2}R^{1-\alpha+2\tau}\Big)=\tilde O\Big(\big(\tfrac1\delta+1\big)\tfrac{n}{\sigma^2}R^{\max\{1/2-\beta,\,1-\alpha+2\tau\}}\Big).
\]

Next we consider the asymptotics of $T_{1,R}(D_n)$ and $T_{2,R}(D_n)$.

Lemma 34. Let $A=(I+\frac{n}{\sigma^2}\Lambda_R)^{-\gamma/2}\Lambda_R^{\gamma/2}(\Phi_R^T\Phi_R-nI)\Lambda_R^{\gamma/2}(I+\frac{n}{\sigma^2}\Lambda_R)^{-\gamma/2}$.
Assume that $\|A\|_2<1$, where $\frac{1+2\tau}{\alpha}<\gamma\le1$. Then we have
\[
T_{2,R}(D_n)=\frac{n}{2\sigma^2}\mu_R^T\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R+\frac12\sum_{j=1}^\infty(-1)^{j+1}E_j,
\]
where
\[
E_j=\mu_R^T\,\frac{1}{\sigma^2}\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\big(\Phi_R^T\Phi_R-nI\big)\Big(\frac{1}{\sigma^2}\big(I+\frac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^{j-1}\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R.
\]

Proof of Lemma 34. Let $\tilde\Lambda_{\epsilon,R}=\mathrm{diag}\{\epsilon,\lambda_1,\dots,\lambda_R\}$. Since $\Lambda_R=\mathrm{diag}\{0,\lambda_1,\dots,\lambda_R\}$, we have that when $\epsilon$ is sufficiently small,
\[
\Big\|\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\tilde\Lambda_{\epsilon,R}^{1/2}\big(\Phi_R^T\Phi_R-nI\big)\tilde\Lambda_{\epsilon,R}^{1/2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1/2}\Big\|_2<1.
\]
Since all diagonal entries of $\tilde\Lambda_{\epsilon,R}$ are positive, we have
\[
\begin{aligned}
\frac{1}{2\sigma^2}\mu_R^T\Phi_R^T\Big(I+\tfrac{1}{\sigma^2}\Phi_R\tilde\Lambda_{\epsilon,R}\Phi_R^T\Big)^{-1}\Phi_R\mu_R&=\frac{1}{2\sigma^2}\mu_R^T\Phi_R^T\Big[I-\Phi_R\big(\sigma^2I+\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\big)^{-1}\tilde\Lambda_{\epsilon,R}\Phi_R^T\Big]\Phi_R\mu_R\\
&=\frac{1}{2\sigma^2}\mu_R^T\Phi_R^T\Phi_R\mu_R-\frac{1}{2\sigma^2}\mu_R^T\Phi_R^T\Phi_R\big(\sigma^2I+\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\big)^{-1}\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\mu_R\\
&=\frac12\mu_R^T\Phi_R^T\Phi_R\big(\sigma^2I+\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\big)^{-1}\mu_R\\
&=\frac12\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\big(\sigma^2I+\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\big)^{-1}\mu_R\\
&=\frac12\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\mu_R-\frac12\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\Big(I+\tfrac{1}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\Big)^{-1}\mu_R.\tag{85}
\end{aligned}
\]
Using Lemma 27, we have
\[
\begin{aligned}
&\frac12\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\mu_R-\frac12\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\Big(I+\tfrac{1}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Phi_R^T\Phi_R\Big)^{-1}\mu_R\\
&\quad=\frac12\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\mu_R-\frac12\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\Big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Big)^{-1}\mu_R+\frac12\sum_{j=1}^\infty(-1)^{j+1}\mu_R^T\tilde\Lambda_{\epsilon,R}^{-1}\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1}\tilde\Lambda_{\epsilon,R}\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Big)^{-1}\mu_R\\
&\quad=\frac{n}{2\sigma^2}\mu_R^T\Big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Big)^{-1}\mu_R+\frac12\sum_{j=1}^\infty(-1)^{j+1}\mu_R^T\,\tfrac{1}{\sigma^2}\Big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Big)^{-1}\big(\Phi_R^T\Phi_R-nI\big)\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\big)^{-1}\tilde\Lambda_{\epsilon,R}\big(\Phi_R^T\Phi_R-nI\big)\Big)^{j-1}\Big(I+\tfrac{n}{\sigma^2}\tilde\Lambda_{\epsilon,R}\Big)^{-1}\mu_R.\tag{86}
\end{aligned}
\]
Letting $\epsilon\to0$, we get
\[
T_{2,R}(D_n)=\frac{1}{2\sigma^2}\mu_R^T\Phi_R^T\Big(I+\tfrac{1}{\sigma^2}\Phi_R\Lambda_R\Phi_R^T\Big)^{-1}\Phi_R\mu_R=\frac{n}{2\sigma^2}\mu_R^T\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R+\frac12\sum_{j=1}^\infty(-1)^{j+1}E_j.
\]
This concludes the proof.

Lemma 35. Assume that $\sigma^2=\Theta(1)$. Let $R=n^{\frac1\alpha+\kappa}$ where $0<\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$. Under Assumptions 4, 5 and 6, with probability at least $1-\delta$, we have
\[
T_{1,R}(D_n)=\Big(\frac12\log\det\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)\Big)(1+o(1))=\Theta\big(n^{\frac1\alpha}\big).\tag{87}
\]
Furthermore, if we assume $\mu_0=0$, we have
\[
T_{2,R}(D_n)=\Big(\frac{n}{2\sigma^2}\mu_R^T\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R\Big)(1+o(1))=\begin{cases}\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\big),&\alpha\neq2\beta-1,\\ \Theta(\log n),&\alpha=2\beta-1.\end{cases}\tag{88}
\]

Proof of Lemma 35. Let
\[
A=\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)^{-\gamma/2}\Lambda_R^{\gamma/2}\big(\Phi_R^T\Phi_R-nI\big)\Lambda_R^{\gamma/2}\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)^{-\gamma/2},\tag{89}
\]
where $\frac{1+\alpha+2\tau}{2\alpha}<\gamma\le1$. By Corollary 22, with probability at least $1-\delta$, we have
\[
\|A\|_2=\tilde O\big(n^{\frac{1-2\gamma\alpha+\alpha+2\tau}{2\alpha}}\big).\tag{90}
\]
When $n$ is sufficiently large, $\|A\|_2$ is less than 1. Let $B=(I+\frac{n}{\sigma^2}\Lambda_R)^{-1/2}\Lambda_R^{1/2}(\Phi_R^T\Phi_R-nI)\Lambda_R^{1/2}(I+\frac{n}{\sigma^2}\Lambda_R)^{-1/2}$. Then $\|B\|_2=\frac{\sigma^{2(1-\gamma)}}{n^{1-\gamma}}\|A\|_2=\tilde O\big(n^{\frac{1-\alpha+2\tau}{2\alpha}}\big)$. Using the Woodbury matrix identity, we compute $T_{1,R}(D_n)$ as follows:
\[
\begin{aligned}
T_{1,R}(D_n)&=\frac12\log\det\Big(I+\tfrac{1}{\sigma^2}\Lambda_R\Phi_R^T\Phi_R\Big)-\frac12\mathrm{Tr}\,\Phi_R\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\Lambda_R\Phi_R^T\\
&=\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)+\frac12\log\det\Big[I+\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}\Lambda_R^{1/2}\big(\Phi_R^T\Phi_R-nI\big)\Lambda_R^{1/2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}\Big]-\frac12\mathrm{Tr}\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\Lambda_R\Phi_R^T\Phi_R\\
&=\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)+\frac12\mathrm{Tr}\log\Big[I+\tfrac{1}{\sigma^2}B\Big]-\frac12\mathrm{Tr}\Big(I-\sigma^2\big(\sigma^2I+\Lambda_R\Phi_R^T\Phi_R\big)^{-1}\Big)\\
&=\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)+\frac12\mathrm{Tr}\sum_{j=1}^\infty\frac{(-1)^{j-1}}{j}\Big(\tfrac{1}{\sigma^2}B\Big)^j-\frac12\mathrm{Tr}\Big[I-\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}+\sum_{j=1}^\infty(-1)^j\Big(\tfrac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Lambda_R\big(\Phi_R^T\Phi_R-nI\big)\Big)^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1}\Big]\\
&=\Big(\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)\Big)+\frac12\mathrm{Tr}\sum_{j=1}^\infty\frac{(-1)^{j-1}}{j}\Big(\tfrac{1}{\sigma^2}B\Big)^j-\frac12\mathrm{Tr}\sum_{j=1}^\infty(-1)^j\frac{1}{\sigma^{2j}}\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1/2}B^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1/2},\tag{91}
\end{aligned}
\]
where in the last equality we apply Lemma 27. Let $h(x)=\log(1+x)-\big(1-\frac{1}{1+x}\big)$. It is easy to verify that $h(x)$ is increasing on $[0,+\infty)$. As for the first term on the right-hand side of (91), we have
\[
\begin{aligned}
\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)&=\frac12\sum_{p=1}^R\Big(\log\big(1+\tfrac{n}{\sigma^2}\lambda_p\big)-\Big(1-\frac{1}{1+\frac{n}{\sigma^2}\lambda_p}\Big)\Big)=\frac12\sum_{p=1}^Rh\big(\tfrac{n}{\sigma^2}\lambda_p\big)\\
&\le\frac12\sum_{p=1}^Rh\big(\tfrac{n}{\sigma^2}C_\lambda p^{-\alpha}\big)\le\frac12h\big(\tfrac{n}{\sigma^2}C_\lambda\big)+\frac12\int_{[1,R]}h\big(\tfrac{n}{\sigma^2}C_\lambda x^{-\alpha}\big)\,dx\\
&=\frac12h\big(\tfrac{n}{\sigma^2}C_\lambda\big)+\frac12n^{1/\alpha}\int_{[1/n^{1/\alpha},\,R/n^{1/\alpha}]}h\big(\tfrac{C_\lambda}{\sigma^2}x^{-\alpha}\big)\,dx=\Theta\big(n^{1/\alpha}\big),
\end{aligned}
\]
where in the last equality we use the fact that $\int_{[0,+\infty)}h(x^{-\alpha})\,dx<\infty$.
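As a numerical aside, the $n^{1/\alpha}$ scaling of $\frac12\sum_p h(\frac{n}{\sigma^2}\lambda_p)$ can be checked directly. The sketch below (an illustrative setting, not from the paper: $\lambda_p=p^{-\alpha}$ with $\alpha=2$, $\sigma^2=1$, and a large truncation $R$) compares the sum at two sample sizes; the ratio should be close to $(n_2/n_1)^{1/\alpha}=10$:

```python
import numpy as np

alpha = 2.0                                  # eigenvalue decay lambda_p = p^{-alpha} (illustrative)
h = lambda x: np.log1p(x) - x / (1.0 + x)    # h(x) = log(1+x) - (1 - 1/(1+x))

def S(n, R=10**6):
    """Compute sum_{p<=R} h(n * lambda_p / sigma^2) with sigma^2 = 1."""
    p = np.arange(1, R + 1, dtype=float)
    return h(n * p**-alpha).sum()

n1, n2 = 10**4, 10**6
ratio = S(n2) / S(n1)    # should approach (n2/n1)^{1/alpha} = 10 as n grows

assert 9.0 < ratio < 11.0
```

The tolerance absorbs the lower-order ($\log n$) boundary corrections visible in the integral comparison above.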
On the other hand, we have
\[
\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)=\frac12\sum_{p=1}^Rh\big(\tfrac{n}{\sigma^2}\lambda_p\big)\ge\frac12\sum_{p=1}^Rh\big(\tfrac{n}{\sigma^2}C_\lambda p^{-\alpha}\big)\ge\frac12\int_{[1,R+1]}h\big(\tfrac{n}{\sigma^2}C_\lambda x^{-\alpha}\big)\,dx=\frac12n^{1/\alpha}\int_{[1/n^{1/\alpha},\,(R+1)/n^{1/\alpha}]}h\big(\tfrac{C_\lambda}{\sigma^2}x^{-\alpha}\big)\,dx=\Theta\big(n^{1/\alpha}\big).
\]
Overall, we have $\frac12\log\det\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)-\frac12\mathrm{Tr}\big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\big)=\Theta\big(n^{1/\alpha}\big)$. As for the second term on the right-hand side of (91), we have
\[
\Big|\mathrm{Tr}\sum_{j=1}^\infty\frac{(-1)^{j-1}}{j}\Big(\tfrac{1}{\sigma^2}B\Big)^j\Big|\le R\sum_{j=1}^\infty\Big\|\tfrac{1}{\sigma^2}B\Big\|_2^j=R\sum_{j=1}^\infty\frac{1}{\sigma^{2j}}\tilde O\big(n^{\frac{j(1-\alpha+2\tau)}{2\alpha}}\big)=R\,\tilde O\big(n^{\frac{1-\alpha+2\tau}{2\alpha}}\big)=\tilde O\big(n^{\frac1\alpha+\kappa+\frac{1-\alpha+2\tau}{2\alpha}}\big).
\]
As for the third term on the right-hand side of (91), we have
\[
\Big|\mathrm{Tr}\sum_{j=1}^\infty(-1)^j\frac{1}{\sigma^{2j}}\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1/2}B^j\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)^{-1/2}\Big|\le\sum_{j=1}^\infty\Big|\mathrm{Tr}\Big(\frac{1}{\sigma^{2j}}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}B^j\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}\Big)\Big|\le R\sum_{j=1}^\infty\Big\|\frac{1}{\sigma^{2j}}\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}B^j\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1/2}\Big\|_2\le R\sum_{j=1}^\infty\Big\|\frac{1}{\sigma^{2j}}B^j\Big\|_2=\tilde O\big(n^{\frac1\alpha+\kappa+\frac{1-\alpha+2\tau}{2\alpha}}\big).
\]
Then the asymptotics of $T_{1,R}(D_n)$ is given by
\[
T_{1,R}(D_n)=\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)+\tilde O\big(n^{\frac1\alpha+\kappa+\frac{1-\alpha+2\tau}{2\alpha}}\big)=\Theta\big(n^{1/\alpha}\big)+\tilde O\big(n^{\frac1\alpha+\kappa+\frac{1-\alpha+2\tau}{2\alpha}}\big)=\Theta\big(n^{\frac1\alpha}\big),
\]
where in the last equality we use the fact that $\kappa<\frac{\alpha-1-2\tau}{2\alpha}$. Since $\tilde O\big(n^{\frac1\alpha+\kappa+\frac{1-\alpha+2\tau}{2\alpha}}\big)$ is a lower-order term compared to $\Theta\big(n^{\frac1\alpha}\big)$, we further have
\[
T_{1,R}(D_n)=\Big(\frac12\log\det\Big(I+\tfrac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)\Big)(1+o(1)).
\]
This concludes the proof of the first statement.

Let $\Lambda_{1:R}=\mathrm{diag}\{\lambda_1,\dots,\lambda_R\}$, $\Phi_{1:R}=(\phi_1(x),\dots,\phi_R(x))$ and $\mu_{1:R}=(\mu_1,\dots,\mu_R)$. Since $\mu_0=0$, we have
\[
T_{2,R}(D_n)=\frac{1}{2\sigma^2}\mu_{1:R}^T\Phi_{1:R}^T\Big(I+\tfrac{1}{\sigma^2}\Phi_{1:R}\Lambda_{1:R}\Phi_{1:R}^T\Big)^{-1}\Phi_{1:R}\mu_{1:R}.
\]
According to Lemma 34, we have
\[
\begin{aligned}
T_{2,R}(D_n)&=\frac{n}{2\sigma^2}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}\\
&\quad+\frac12\sum_{j=1}^\infty(-1)^{j+1}\mu_{1:R}^T\,\frac{1}{\sigma^2}\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Big(\frac{1}{\sigma^2}\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1}\Lambda_{1:R}\big(\Phi_{1:R}^T\Phi_{1:R}-nI\big)\Big)^{j-1}\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}\\
&=\frac{n}{2\sigma^2}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}\\
&\quad+\frac12\sum_{j=1}^\infty(-1)^{j+1}\frac{1}{\sigma^{2j}}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}A\Big(\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1+\gamma}\Lambda_{1:R}^{1-\gamma}A\Big)^{j-1}\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}\mu_{1:R},\tag{92}
\end{aligned}
\]
where in the last equality we used the definition of $A$ in (89). As for the first term on the right-hand side of (92), by Lemma 15, Assumption 4 and Assumption 5, we have
\[
\frac{n}{2\sigma^2}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}\le\frac{n}{2\sigma^2}\sum_{p=1}^R\frac{C_\mu^2p^{-2\beta}}{1+\frac{n}{\sigma^2}C_\lambda p^{-\alpha}}=\begin{cases}\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\big),&\alpha\neq2\beta-1,\\ \Theta(\log n),&\alpha=2\beta-1.\end{cases}
\]
On the other hand, by Assumption 5, assuming that $\sup_{i\ge1}p_{i+1}-p_i=h$, we have
\[
\frac{n}{2\sigma^2}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}\ge\frac{n}{2\sigma^2}\sum_{i=1}^{\lfloor R/h\rfloor}\frac{C_\mu^2p_i^{-2\beta}}{1+\frac{n}{\sigma^2}C_\lambda p_i^{-\alpha}}\ge\frac{n}{2\sigma^2}\sum_{i=1}^{\lfloor R/h\rfloor}\frac{C_\mu^2i^{-2\beta}}{1+\frac{n}{\sigma^2}C_\lambda(hi)^{-\alpha}}=\begin{cases}\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\big),&\alpha\neq2\beta-1,\\ \Theta(\log n),&\alpha=2\beta-1.\end{cases}
\]
Overall, we have
\[
\frac{n}{2\sigma^2}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}=\Theta\Big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\log^kn\Big),\quad\text{where }k=\begin{cases}0,&\alpha\neq2\beta-1,\\1,&\alpha=2\beta-1.\end{cases}
\]
By Lemma 16, we have
\[
\Big\|\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}\mu_{1:R}\Big\|_2^2\le\sum_{p=1}^R\frac{C_\mu^2p^{-2\beta}(C_\lambda p^{-\alpha})^{-\gamma}}{(1+\frac{n}{\sigma^2}C_\lambda p^{-\alpha})^{2-\gamma}}=\tilde O\big(\max\{n^{-2+\gamma},\,R^{1-2\beta+\alpha\gamma}\}\big)=\tilde O\big(n^{\max\{-2+\gamma,\,\frac{1-2\beta}{\alpha}+\gamma+\kappa(1-2\beta+\alpha\gamma)\}}\big).\tag{93}
\]
Using (90), the second term on the right-hand side of (92) is bounded as follows:
\[
\begin{aligned}
&\frac12\sum_{j=1}^\infty\Big|\frac{1}{\sigma^{2j}}\mu_{1:R}^T\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}A\Big(\big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\big)^{-1+\gamma}\Lambda_{1:R}^{1-\gamma}A\Big)^{j-1}\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}\mu_{1:R}\Big|\\
&\qquad\le\frac12\sum_{j=1}^\infty\frac{1}{\sigma^{2j}}\|A\|_2^j\Big(\frac{n}{\sigma^2}\Big)^{(-1+\gamma)(j-1)}\Big\|\Big(I+\tfrac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1+\gamma/2}\Lambda_{1:R}^{-\gamma/2}\mu_{1:R}\Big\|_2^2\\
&\qquad\le\frac12\sum_{j=1}^\infty\frac{1}{\sigma^{2j}}\tilde O\big(n^{\frac{j(1-2\gamma\alpha+\alpha+2\tau)}{2\alpha}}\big)\Big(\frac{n}{\sigma^2}\Big)^{(-1+\gamma)(j-1)}\tilde O\big(n^{\max\{-2+\gamma,\,\frac{1-2\beta}{\alpha}+\gamma+\kappa(1-2\beta+\alpha\gamma)\}}\big)\\
&\qquad=\tilde O\big(n^{\max\{-2+\gamma+\frac{1-2\gamma\alpha+\alpha+2\tau}{2\alpha},\,\frac{1-2\beta}{\alpha}+\gamma+\frac{1-2\gamma\alpha+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\}}\big)=\tilde O\big(n^{\max\{-2+\frac{1+\alpha+2\tau}{2\alpha},\,\frac{1-2\beta}{\alpha}+\frac{1+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\}}\big).\tag{94}
\end{aligned}
\]
Since $\frac{1+\alpha+2\tau}{2\alpha}<\frac{1+\alpha+2\tau}{\alpha+1+2\tau}=1$, we have $-2+\frac{1+\alpha+2\tau}{2\alpha}<0$. Also we have
\[
\frac{1-2\beta}{\alpha}+\frac{1+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)=\frac{1-2\beta}{\alpha}+1+\frac{1-\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\le\frac{1-2\beta}{\alpha}+1+\frac{1-\alpha+2\tau}{2\alpha}+\kappa\alpha\gamma<\frac{1-2\beta}{\alpha}+1,\tag{95}
\]
where the last inequality holds because $\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$ and $\gamma\le1$. Hence we have
\[
T_{2,R}(D_n)=\frac{n}{2\sigma^2}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}+\tilde O\big(n^{\max\{-2+\frac{1+\alpha+2\tau}{2\alpha},\,\frac{1-2\beta}{\alpha}+\frac{1+\alpha+2\tau}{2\alpha}+\kappa(1-2\beta+\alpha\gamma)\}}\big)=\Theta\Big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\log^kn\Big),
\]
where $k=\begin{cases}0,&\alpha\neq2\beta-1,\\1,&\alpha=2\beta-1.\end{cases}$ Since the $\tilde O(\cdot)$ term is of lower order compared to $\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\log^kn\big)$, we further have
\[
T_{2,R}(D_n)=\Big(\frac{n}{2\sigma^2}\mu_{1:R}^T\Big(I+\frac{n}{\sigma^2}\Lambda_{1:R}\Big)^{-1}\mu_{1:R}\Big)(1+o(1))=\Big(\frac{n}{2\sigma^2}\mu^T\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big)(1+o(1)).
\]
This concludes the proof of the second statement.

Lemma 36. Under Assumptions 4, 5 and 6, with probability at least $1-5\delta$, we have
\[
T_1(D_n)=\Big(\frac12\log\det\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)\Big)(1+o(1))=\Theta\big(n^{\frac1\alpha}\big).\tag{96}
\]
Furthermore, let $\delta=n^{-q}$ where $0\le q<\min\{\frac{(2\beta-1)(\alpha-1-2\tau)}{4\alpha^2},\frac{\alpha-1-2\tau}{2\alpha}\}$. If we assume $\mu_0=0$, we have
\[
T_2(D_n)=\Big(\frac{n}{2\sigma^2}\mu^T\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big)(1+o(1))=\begin{cases}\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\big),&\alpha\neq2\beta-1,\\ \Theta(\log n),&\alpha=2\beta-1.\end{cases}\tag{97}
\]

Proof of Lemma 36.
Let $R=n^{\frac1\alpha+\kappa}$ where $0\le\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$. By Lemmas 32 and 35, with probability at least $1-5\delta$ we have
\[
|T_{1,R}(D_n)-T_1(D_n)|=\tilde O\big(n^{\frac1\alpha+\kappa(1-\alpha)}\big),\tag{98}
\]
and
\[
|T_{2,R}(D_n)-T_2(D_n)|=\tilde O\Big(\big(\tfrac1\delta+1\big)n^{\max\{(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\,1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\,-1-\kappa\alpha,\,1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\Big),\tag{99}
\]
as well as
\[
T_{1,R}(D_n)=\Big(\frac12\log\det\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)\Big)(1+o(1))=\Theta\big(n^{\frac1\alpha}\big),\tag{100}
\]
and
\[
T_{2,R}(D_n)=\Big(\frac{n}{2\sigma^2}\mu^T\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big)(1+o(1))=\begin{cases}\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\big),&\alpha\neq2\beta-1,\\ \Theta(\log n),&\alpha=2\beta-1.\end{cases}\tag{101}
\]
We then have
\[
T_1(D_n)=T_{1,R}(D_n)+\big(T_1(D_n)-T_{1,R}(D_n)\big)=\Theta\big(n^{\frac1\alpha}\big)+\tilde O\big(n^{\frac1\alpha+\kappa(1-\alpha)}\big)=\Theta\big(n^{\frac1\alpha}\big).
\]
Since $\tilde O\big(n^{\frac1\alpha+\kappa(1-\alpha)}\big)$ is a lower-order term compared to $\Theta\big(n^{\frac1\alpha}\big)$, we further have
\[
T_1(D_n)=\Big(\frac12\log\det\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)\Big)(1+o(1))=\Theta\big(n^{\frac1\alpha}\big).
\]
This concludes the proof of the first statement.

As for $T_2(D_n)$, we have
\[
\begin{aligned}
T_2(D_n)&=T_{2,R}(D_n)+\big(T_2(D_n)-T_{2,R}(D_n)\big)\\
&=\Theta\Big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\log^kn\Big)+\tilde O\Big(\big(\tfrac1\delta+1\big)n^{\max\{(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\,1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\,-1-\kappa\alpha,\,1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\Big)\\
&=\Theta\Big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\log^kn\Big)+\tilde O\Big(n^{q+\max\{(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\,1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\,-1-\kappa\alpha,\,1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\Big),
\end{aligned}
\]
where we use $\delta=n^{-q}$ and $k=\begin{cases}0,&\alpha\neq2\beta-1,\\1,&\alpha=2\beta-1.\end{cases}$ Since $0\le\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$ and $0\le q<\min\{\frac{(2\beta-1)(\alpha-1-2\tau)}{4\alpha^2},\frac{\alpha-1-2\tau}{2\alpha}\}$, we can choose $\kappa<\frac{\alpha-1-2\tau}{2\alpha^2}$ with $\kappa$ arbitrarily close to $\frac{\alpha-1-2\tau}{2\alpha^2}$ such that $0\le q<\min\{\frac{(2\beta-1)\kappa}{2},\kappa\alpha\}$. Then we have $(\frac1\alpha+\kappa)\frac{1-2\beta}{2}+q<0$, $-1-\kappa\alpha+q<0$, $\frac{(1-2\beta)\kappa}{2}+q<0$ and $-\kappa\alpha+q<0$. So we have $T_2(D_n)=\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\log^kn\big)$. Since $\tilde O\big((\frac1\delta+1)n^{\max\{(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\,1+\frac{1-2\beta}{\alpha}+\frac{(1-2\beta)\kappa}{2},\,-1-\kappa\alpha,\,1+\frac{1-2\beta}{\alpha}-\kappa\alpha\}}\big)$ is a lower-order term compared to $\Theta\big(n^{\max\{0,1+\frac{1-2\beta}{\alpha}\}}\log^kn\big)$, we further have
\[
T_2(D_n)=\Big(\frac{n}{2\sigma^2}\mu^T\Big(I+\frac{n}{\sigma^2}\Lambda\Big)^{-1}\mu\Big)(1+o(1)).
\]
This concludes the proof of the second statement.

Proof of Theorem 7.
Using Lemma 36 and noting that $\frac1\alpha>0$, with probability at least $1-5\tilde\delta$, we have
\[
E_{F_0}(D_n)=T_1(D_n)+T_2(D_n)=\Big[\frac12\log\det\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)-\frac12\mathrm{Tr}\Big(I-\big(I+\tfrac{n}{\sigma^2}\Lambda_R\big)^{-1}\Big)+\frac{n}{2\sigma^2}\mu_R^T\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)^{-1}\mu_R\Big](1+o(1))=\Theta\big(n^{\max\{\frac1\alpha,\,\frac{1-2\beta}{\alpha}+1\}}\big).
\]
Furthermore, we have
\[
\log\det\Big(I+\frac{n}{\sigma^2}\Lambda\Big)-\log\det\Big(I+\frac{n}{\sigma^2}\Lambda_R\Big)=\sum_{p=R+1}^\infty\log\Big(1+\frac{n}{\sigma^2}\lambda_p\Big)\le\frac{n}{\sigma^2}\sum_{p=R+1}^\infty\lambda_p\le\frac{n}{\sigma^2}\sum_{p=R+1}^\infty C_\lambda p^{-\alpha}=\frac{n}{\sigma^2}O\big(R^{1-\alpha}\big)=\frac{n}{\sigma^2}O\big(n^{(1-\alpha)(\frac1\alpha+\kappa)}\big)=o\big(n^{\frac1\alpha}\big).
\]
Then we have $\log\det\big(I+\frac{n}{\sigma^2}\Lambda_R\big)=\log\det\big(I+\frac{n}{\sigma^2}\Lambda\big)(1+o(1))$. Similarly we can prove $\mathrm{Tr}\big(I-\big(I+\frac{n}{\sigma^2}\Lambda\big)^{-1}\big)=\mathrm{Tr}\big(I-\big(I+\frac{n}{\sigma^2}\Lambda_R\big)^{-1}\big)(1+o(1))$ and $\mu^T\big(I+\frac{n}{\sigma^2}\Lambda\big)^{-1}\mu=\mu_R^T\big(I+\frac{n}{\sigma^2}\Lambda_R\big)^{-1}\mu_R(1+o(1))$. Letting $\delta=5\tilde\delta$, we get the result.

In the case of $\mu_0>0$, we have the following lemma:

Lemma 37. Assume that $\sigma^2=\Theta(1)$. Let $R=n^{\frac1\alpha+\kappa}$ where $0<\kappa<\frac{\alpha-1-2\tau}{\alpha^2}$. Assume that $\mu_0>0$. Under Assumptions 4, 5 and 6, for sufficiently large $n$, with probability at least $1-4\delta$ we have
\[
|T_{2,R}(D_n)-T_2(D_n)|=\tilde O\Big(\big(\tfrac1\delta+1\big)n^{\max\{1+(\frac1\alpha+\kappa)\frac{1-2\beta}{2},\,1-\kappa\alpha\}}\Big).\tag{102}
\]

Proof of Lemma 37. As for $|T_2(D_n)-T_{2,R}(D_n)|$, we have
\[
|T_2(D_n)-T_{2,R}(D_n)|\le\Big|f(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x)-f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|+\Big|f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)-f_R(x)^T\Big(I+\tfrac{\Phi_R\Lambda_R\Phi_R^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|.\tag{103}
\]
For the first term on the right-hand side of (103), we have
\[
\begin{aligned}
\Big|f(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f(x)-f_R(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|&\le2\Big|f_{>R}(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big|+\Big|f_{>R}(x)^T\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_{>R}(x)\Big|\\
&\le2\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2+\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}\Big\|_2\|f_{>R}(x)\|_2\\
&\le2\|f_{>R}(x)\|_2\Big\|\Big(I+\tfrac{\Phi\Lambda\Phi^T}{\sigma^2}\Big)^{-1}f_R(x)\Big\|_2+\|f_{>R}(x)\|_2^2.
\end{aligned}
\]
Applying Corollary 19 and Lemma 31 , with probability of at least 1−4δ , we have∣∣∣∣f ( x ) T ( I+ ΦΛΦTσ2 ) −1f ( x ) −fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) ∣∣∣∣ ≤2Õ ( √ ( 1 δ +1 ) nR1−2β ) Õ ( √ ( 1 δ +1 ) n ) +Õ ( ( 1 δ +1 ) nR1−2β ) =2Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) +Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) ( 1−2β ) ) =2Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) . As for the second term on the right-hand side of ( 80 ) , according to Lemma 28 , Corollary 26 and Lemma 30 , we have∣∣∣∣fR ( x ) T ( I+ ΦΛΦTσ2 ) −1fR ( x ) −fR ( x ) T ( I+ ΦRΛRΦTRσ2 ) −1fR ( x ) ∣∣∣∣ = ∣∣∣∣∣∣ ∞∑ j=1 ( −1 ) jfR ( x ) T ( ( I+ ΦRΛRΦ T R σ2 ) −1 Φ > RΛ > RΦ T > R σ2 ) j ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ∣∣∣∣∣∣ ≤ ∞∑ j=1 ‖ ( I+ ΦRΛRΦ T R σ2 ) −1‖j−12 ·‖ Φ > RΛ > RΦ T > R σ2 ‖j2 ·‖ ( I+ ΦRΛRΦ T R σ2 ) −1fR ( x ) ‖22 = ∞∑ j=1 Õ ( n−jκα ) Õ ( ( 1 δ +1 ) n ) =Õ ( ( 1 δ +1 ) n1−κα ) . ( 104 ) By ( 80 ) , we have |T2 ( Dn ) −T2 , R ( Dn ) |=Õ ( ( 1 δ +1 ) n1+ ( 1 α+κ ) 1−2β 2 ) +Õ ( ( 1 δ +1 ) n1−κα ) =Õ ( ( 1 δ +1 ) nmax { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) . Lemma 38 . Assume that σ2 = Θ ( 1 ) . Let R= n 1α+κ where 0 < κ < min { α−1−2τ2α2 , 2β−1 α2 } . Assume that µ0 > 0 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ , we have T2 , R ( Dn ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) . ( 105 ) Proof of Lemma 38 . Let A= ( I+ n σ2 ΛR ) −γ/2Λ γ/2 R ( Φ T RΦR−nI ) Λ γ/2 R ( I+ n σ2 ΛR ) −γ/2 , ( 106 ) where 1+α+2τ2α < γ≤1 . By Corollary 22 , with probability of at least 1−δ , we have ‖A‖2 =Õ ( n 1−2γα+α+2τ 2α ) . ( 107 ) When n is sufficiently large , ‖A‖2 is less than 1 . Let µR,1 = ( µ0,0 , ... ,0 ) and µR,2 = ( 0 , µ1 , ... , µR ) . Then µR=µR,1+µR,2 . Let Λ̃1 , R=diag { 1 , λ1 , ... , λR } and I0 , R= ( 0,1 , ... ,1 ) . Then ΛR=Λ̃1 , RI0 , R . Let B = ( I + nσ2 ΛR ) −1/2Λ̃ 1/2 1 , R ( Φ T RΦR − nI ) Λ̃ 1/2 1 , R ( I + n σ2 ΛR ) −1/2 . By Corollary 23 , we have ‖B‖2 =O ( √ logRδ n 1 2 ) . 
By Lemma 34 , we have T2 , R ( Dn ) = n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR + 1 2 ∞∑ j=1 [ ( −1 ) j+1µTR 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR ] ( 108 ) As for the first term on the right hand side of ( 108 ) , by Lemma 15 , we have n 2σ2 µT ( I+ n σ2 Λ ) −1µ≤ n 2σ2 ( µ20+ R∑ p=1 C2µp −2β 1+ nσ2Cλp −α ) = n 2σ2 µ20+Õ ( n max { 0,1+ 1−2βα } ) . We defineQ1 , j , Q2 , j andQ3 , j by Q1 , j=µ T R,1 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,1 Q2 , j=µ T R,1 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,2 Q3 , j=µ T R,2 1 σ2 ( I+ n σ2 ΛR ) −1 ( ΦTRΦR−nI ) ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j−1 ( I+ n σ2 ΛR ) −1µR,2 ( 109 ) The quantity Q3 , j actually shows up in the case of µ0 = 0 in the proof of Lemma 35 . By ( 92 ) , ( 94 ) and ( 95 ) , we have that | ∞∑ j=1 ( −1 ) j+1Q3 , j |= | ∞∑ j=1 ( −1 ) j+1Õ ( n ( j−1 ) ( 1−α+2τ ) 2α ) o ( nmax { 0,1+ 1−2β α } ) |=o ( nmax { 0,1+ 1−2β α } ) . ( 110 ) ForQ1 , j , we have Q1,1 = 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( I+ n σ2 ΛR ) −1+ γ2 µR,1 ≤ 1 σ2j ‖µR,1‖22‖ ( I+ n σ2 ΛR ) −1+ γ2 ‖22‖B‖2 =O ( √ log R δ n 1 2 ) , where in the last equality we use ‖B‖2 =O ( √ logRδ n 1 2 ) . For j≥2 , we have Q1 , j= 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( ( I+ n σ2 ΛR ) −1+γΛ1−γR A ) j−2 ( I+ n σ2 ΛR ) −1+γΛ1−γR B ( I+ n σ2 ΛR ) −1+ γ2 µR,1 ≤ 1 σ2j ‖µR,1‖22‖ ( I+ n σ2 ΛR ) −1+ γ2 ‖22‖B‖22‖A‖ j−2 2 ‖ ( I+ n σ2 ΛR ) −1+γΛ1−γR ‖ j−1 2 =O ( log R δ n·n ( j−2 ) ( 1−2γα+α+2τ ) 2α ·n− ( 1−γ ) ( j−1 ) ) =O ( log R δ nγ ·n ( j−2 ) ( 1−α+2τ ) 2α ) . 
Then we have | ∞∑ j=1 ( −1 ) j+1Q1 , j |≤O ( √ log R δ n 1 2 ) + ∞∑ j=2 O ( log R δ nγ ·n ( j−2 ) ( 1−α+2τ ) 2α ) =O ( log R δ nγ ) ( 111 ) ForQ2 , j , we have Q2 , j= 1 σ2j µTR,1 ( I+ n σ2 ΛR ) −1+ γ2B ( ( I+ n σ2 ΛR ) −1+γΛ1−γR A ) j−1 ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2 ≤ 1 σ2j ‖µR,1‖2‖B‖2‖A‖j−12 ‖ ( I+ n σ2 ΛR ) −1+γΛ1−γR ‖ j−1 2 ‖ ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2‖2 =O ( √ log R δ n 1 2 ·n ( j−1 ) ( 1−α+2τ ) 2α ) ‖ ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2‖2 . Since ‖ ( I+ nσ2 Λ ) −1+ γ2 Λ̃ − γ2 1 , RµR,2‖2 is actually the case of µ0 = 0 , we can use ( 93 ) in the proof of Lemma 35 and get ‖ ( I+ n σ2 Λ ) −1+ γ 2 Λ̃ − γ2 1 , RµR,2‖ 2 2 =‖ ( I+ n σ2 Λ1 : R ) −1+γ/2Λ −γ/2 1 : R µ1 : R‖ 2 2 =Õ ( nmax { −2+γ , 1−2β α +γ+κ ( 1−2β+αγ ) } =Õ ( nmax { −2+γ , 1−2β α +γ+κ ( 1−2β+αγ ) } ) =o ( nγ ) , ( 112 ) where in the last equality we use κ < 2β−1α2 . Then we have | ∞∑ j=1 ( −1 ) j+1Q2 , j |≤ ∞∑ j=1 o ( √ log R δ n 1+γ 2 ·n ( j−1 ) ( 1−α+2τ ) 2α ) =o ( √ log R δ n 1+γ 2 ) ( 113 ) Choosing γ= 12 ( 1+ 1+α+2τ 2α ) = 1+3α+2τ 4α < 1 , we have T2 , R ( Dn ) = n 2σ2 µTR ( I+ n σ2 ΛR ) −1µR+ ∞∑ j=1 ( −1 ) j+1 ( Q1 , j+Q2 , j+Q3 , j ) = n 2σ2 µ20+Õ ( n max { 0,1+ 1−2βα } ) +o ( nmax { 0,1+ 1−2β α } ) +O ( log R δ nγ ) +o ( √ log R δ n 1+γ 2 ) = n 2σ2 µ20+Õ ( n max { 1+γ2 ,1+ 1−2β α } ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) . Proof of Theorem 8 . Let R = n 1 α+κ where 0 < κ < min { α−1−2τ2α2 , 2β−1 α2 } . Since 0 ≤ q < min { 2β−12 , α } · min { α−1−2τ 2α2 , 2β−1 α2 } , we can choose κ < min { α−1−2τ 2α2 , 2β−1 α2 } and κ is arbitrarily close to κ < min { α−1−2τ2α2 , 2β−1 α2 } such that 0≤ q < min { ( 2β−1 ) κ 2 , κα } . Then we have ( 1α+κ ) 1−2β 2 +q < 0 , and−κα+q < 0 . 
As for T2 ( Dn ) , we have T2 ( Dn ) ≤T2 , R ( Dn ) +|T2 , R ( Dn ) −T2 ( Dn ) | = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) +Õ ( ( 1δ +1 ) n max { 1+ ( 1α+κ ) 1−2β 2 ,1−κα } ) = n 2σ2 µ20+Õ ( n max { 1+7α+2τ8α ,1+ 1−2β α } ) +Õ ( nq+max { 1+ ( 1 α+κ ) 1−2β 2 ,1−κα } ) = n 2σ2 µ20+o ( n ) . By Lemma 36 , we have T1 ( Dn ) = O ( n 1 α ) . Hence E F 0 ( Dn ) = T1 ( Dn ) + T2 ( Dn ) = n 2σ2µ 2 0+o ( n ) . D.2 PROOFS RELATED TO THE ASYMPTOTICS OF THE GENERALIZATION ERROR Lemma 39 . Assume σ2 = Θ ( nt ) where 1− α1+2τ < t < 1 . Under Assumptions 4 , 5 and 6 , with probability of at least 1−δ over sample inputs ( xi ) ni=1 , we have G1 ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I+ nσ2 ΛR ) −1ΛR−‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F ) =Θ ( n ( 1−α ) ( 1−t ) α ) . ( 114 ) Proof of Lemma 39 . Let G1 , R ( Dn ) = E ( xn+1 , yn+1 ) ( T1 , R ( Dn+1 ) − T1 , R ( Dn ) ) , where R = nC for some constant C. By Lemma 32 , we have that |G1 ( Dn ) −G1 , R ( Dn ) |= ∣∣E ( xn+1 , yn+1 ) [ T1 ( Dn+1 ) −T1 , R ( Dn+1 ) ] − [ T1 ( Dn ) −T1 , R ( Dn ) ] ∣∣ = ∣∣E ( xn+1 , yn+1 ) O ( ( n+1 ) R1−α ) ∣∣+∣∣O ( nR1−α ) ] ∣∣ =O ( 1σ2nR 1−α ) . ( 115 ) Define ηR= ( φ0 ( xn+1 ) , φ1 ( xn+1 ) , ... , φR ( xn+1 ) ) T and Φ̃R= ( ΦTR , ηR ) T . As forG1 , R ( Dn ) , we have G1 , R ( Dn ) =E ( xn+1 , yn+1 ) ( T1 , R ( Dn+1 ) −T1 , R ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2 logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) − 1 2 Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1 ) ) − ( 1 2 logdet ( I+ ΦRΛRΦ T R σ2 ) − 1 2 Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃R T σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) − 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1 ) −Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) ) . 
( 116 ) As for the first term in the right hand side ( 116 ) , we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ΛRΦ̃ T RΦ̃R σ2 ) −logdet ( I+ ΛRΦ T RΦR σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ΛRΦ T RΦR+ηRη T R σ2 ) −logdet ( I+ ΛRΦ T RΦR σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( ( I+ ΛRΦ T RΦR σ2 ) −1 ( I+ ΛRΦ T RΦR σ2 + ΛRηRη T R σ2 ) ) ) = 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ ( I+ ΛRΦ T RΦR σ2 ) −1 ΛRηRη T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) log ( 1+ 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ) Let A= ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ( Φ T RΦR−nI ) Λ 1/2 R ( I+ n σ2 ΛR ) −1/2 . ( 117 ) According to Corollary 22 , with probability of at least 1 − δ , we have ‖ 1σ2A‖2 = O ( √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α ) = o ( 1 ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 . By Lemma 27 , we have ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR =ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ( −1 ) jηTR ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ n σ2 ΛR ) −1ΛRηR =ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ( −1 ) j 1 σ2j ηTR ( I+ n σ2j ΛR ) −1/2Λ 1/2 R A j ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ηR ≤ηTR ( I+ n σ2 ΛR ) −1ΛRηR+ ∞∑ j=1 ‖A‖j2‖ ( I+ n σ2 ΛR ) −1/2Λ 1/2 R ηR‖ 2 2 ≤ R∑ p=1 φ2p ( xn+1 ) Cλp −α 1+nCλp−α/σ2 + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) R∑ p=1 φ2p ( xn+1 ) Cλp −α 1+nCλp−α/σ2 ≤ R∑ p=1 Cλp −αp2τ 1+nCλp−α/σ2 + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) R∑ p=1 Cλp −αp2τ 1+nCλp−α/σ2 ≤O ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 ‖ 1 σ2 A‖j2 ( logR ) j/2 ) O ( n ( 1−α ) ( 1−t ) α ) =O ( n ( 1−α ) ( 1−t ) α ) =o ( 1 ) , ( 118 ) where we use Lemma 15 in the last inequality . 
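The Θ ( n ( 1−α ) ( 1−t ) α ) rate from Lemma 15 used in the bound above can be sanity-checked numerically. The sketch below is illustrative only (it is not code from the paper): it assumes eigenvalues λp = p^(−α) with Cλ = 1, and fixes α = 2, t = 0 (so σ² = 1), in which case the predicted rate is n^(−1/2) and quadrupling n should roughly halve the sum.

```python
# Numerical sanity check (illustrative constants, not from the paper):
# with lambda_p = p^(-alpha), the truncated trace-like sum
#   S(n) = sum_p lambda_p / (1 + n * lambda_p / sigma^2)
# should scale like n^((1 - alpha)(1 - t) / alpha).
# For alpha = 2, t = 0 the exponent is -1/2, so S(4n)/S(n) ~ 4^(-1/2) = 0.5.

def trace_sum(n, alpha=2.0, sigma2=1.0, p_max=200_000):
    total = 0.0
    for p in range(1, p_max + 1):
        lam = p ** (-alpha)
        total += lam / (1.0 + n * lam / sigma2)
    return total

s1 = trace_sum(10_000)
s2 = trace_sum(40_000)
ratio = s2 / s1  # predicted to be close to 0.5
print(ratio)
```

The truncation at p_max = 200,000 is safe here because the neglected tail of Σ p^(−2) is orders of magnitude smaller than the sum itself.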
Next we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2 ( E ( xn+1 , yn+1 ) log ( 1+ 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ) = 1 2 ( E ( xn+1 , yn+1 ) ( 1 σ2 ηTR ( I+ ΛRΦ T RΦR σ2 ) −1ΛRηR ) ( 1+o ( 1 ) ) ) = 1 2σ2 ( Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR ) ( 1+o ( 1 ) ) , where in the last equality we use the fact that E ( xn+1 , yn+1 ) ηRηTR=I . By Lemma 27 , we have Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR =Tr ( I+ n σ2 ΛR ) −1ΛR+ ∞∑ j=1 ( −1 ) jTr ( 1 σ2 ( I+ n σ2 ΛR ) −1ΛR ( Φ T RΦR−nI ) ) j ( I+ n σ2 ΛR ) −1ΛR =Tr ( I+ n σ2 ΛR ) −1ΛR+ ∞∑ j=1 ( −1 ) jTr 1 σ2 ( I+ n σ2 ΛR ) −1/2Λ 1/2 R A j ( I+ n σ2 ΛR ) −1/2Λ 1/2 R . By Lemma 15 , we have Tr ( I+ n σ2 ΛR ) −1ΛR≤ R∑ p=1 Cλp −α 1+nCλp−α/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) Tr ( I+ n σ2 ΛR ) −1ΛR≥ R∑ p=1 Cλp −α 1+nCλp−α/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) . Overall , Tr ( I+ n σ2 ΛR ) −1ΛR=Θ ( n ( 1−α ) ( 1−t ) α ) . ( 119 ) Since ‖ 1σ2A‖ j 2 =o ( 1 ) , we have that the absolute values of diagonal entries of 1 σ2jA j are at most o ( 1 ) ) . Let ( Aj ) p , p denote the ( p , p ) -th entry of the matrixAj . Then we have∣∣∣∣Tr 1σ2 ( I+ nσ2 ΛR ) −1/2Λ1/2R Aj ( I+ nσ2 ΛR ) −1/2Λ1/2R ∣∣∣∣ = ∣∣∣∣∣ R∑ p=1 λp 1 σ2j ( A j ) p , p 1+nλp/σ2 ∣∣∣∣∣≤ R∑ p=1 λp‖A‖j2 1+nλp/σ2 =Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 120 ) where in the last step we used ( 119 ) . According to ( 119 ) and ( 120 ) , we have 1 2 ( E ( xn+1 , yn+1 ) logdet ( I+ Φ̃RΛRΦ̃ T R σ2 ) −logdet ( I+ ΦRΛRΦ T R σ2 ) ) = 1 2σ2 ( Tr ( I+ ΛRΦ T RΦR σ2 ) −1ΛR ) ( 1+o ( 1 ) ) =Θ ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) =Θ ( n ( 1−α ) ( 1−t ) α ) +Θ ( n ( 1−α ) ( 1−t ) α ) o ( 1 ) =Θ ( n ( 1−α ) ( 1−t ) α ) = 1 2σ2 ( Tr ( I+ n σ2 ΛR ) −1ΛR ) ( 1+o ( 1 ) ) . 
( 121 ) Using the Woodbury matrix identity , the second term in the right hand side ( 116 ) is given by 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1−Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) = 1 2 ( E ( xn+1 , yn+1 ) Tr ( 1 σ2 Φ̃R ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1ΛRΦ̃ T R−Tr ( 1 σ2 ΦR ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRΦ T R ) = 1 2 ( E ( xn+1 , yn+1 ) Tr ( 1 σ2 ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1ΛRΦ̃ T RΦ̃R−Tr ( 1 σ2 ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRΦ T RΦR ) =−1 2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ̃ T RΦ̃R ) −1−Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ) =−1 2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ T RΦR+ 1 σ2 ΛRηRη T R ) −1−Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ) = 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 1+ 1σ2 η T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηR ) , where the last equality uses the Sherman–Morrison formula . According to ( 118 ) , we get 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 1+ 1σ2 η T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηR ) = 1 2σ2 ( E ( xn+1 , yn+1 ) Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛRηRη T R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 ( 1+o ( 1 ) ) ) = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛR ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 TrΛ 1/2 R ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1Λ 1/2 R ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1Λ 1/2 R ( I+ 1 σ2 ΛRΦ T RΦR ) −1Λ 1/2 R = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1ΛR ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1 = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ 1 σ2 Λ 1/2 R Φ T RΦRΛ 1/2 R ) −1‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ 1 σ2 A ) −1 ( I+ n σ2 ΛR ) −1/2‖2F , where in the penultimate equality we use Tr ( BBT ) =‖B‖2F , ‖B‖F is the Frobenius norm ofA , and in the last equality we use the definition ofA ( 117 ) . 
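The Sherman–Morrison step invoked in the derivation above can be verified numerically on a small generic example (this is a standalone illustration of the rank-one update identity, not code from the paper):

```python
# Sherman-Morrison identity on a 2x2 example:
#   (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)

def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 3.0]]
u, v = [1.0, 2.0], [3.0, 1.0]

# Left-hand side: invert the rank-one update directly.
lhs = inv2([[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)])

# Right-hand side: apply the Sherman-Morrison formula.
Ainv = inv2(A)
Au = [sum(Ainv[i][k] * u[k] for k in range(2)) for i in range(2)]  # A^{-1} u
vA = [sum(v[k] * Ainv[k][j] for k in range(2)) for j in range(2)]  # v^T A^{-1}
denom = 1.0 + sum(v[k] * Au[k] for k in range(2))
rhs = [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(2)]
       for i in range(2)]

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)  # at machine precision
```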
Then we have 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ 1 σ2 A ) −1 ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1/2 ( I+ ∞∑ j=1 ( −1 ) j 1 σ2j Aj ) ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖2F . ( 122 ) By Lemma 15 , we have ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F ≤ √√√√ R∑ p=1 Cλp−α ( 1+nCλp−α/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) 2α ) ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F ≥ √√√√ R∑ p=1 Cλp−α ( 1+nCλp−α/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) 2α ) . Overall , we have ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖F =Θ ( n ( 1−α ) ( 1−t ) 2α ) . ( 123 ) Since ‖ 1σ2A‖2 =O ( √ logRδ n 1−α+2τ 2α − ( 1+2τ ) t 2α ) =o ( 1 ) , we have ‖ 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖F ≤‖Λ1/2R ( I+ n σ2 ΛR ) −1/2‖F ‖ 1 σ2 A‖j2‖ ( I+ n σ2 ΛR ) −1/2‖2 =O ( n ( 1−α ) ( 1−t ) 2α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 124 ) where in the first inequality we use the fact that ‖AB‖F ≤ ‖A‖F ‖B‖2 when B is symmetric . By Lemma 15 , we have 1 σ2j ∣∣∣TrΛ1/2R ( I+ nσ2 ΛR ) −1Λ1/2R ( I+ nσ2 ΛR ) −1/2Aj ( I+ nσ2 ΛR ) −1/2∣∣∣ = ∣∣∣∣∣ R∑ p=1 λp ( ( 1 σ2A ) j ) p , p ( 1+nλp/σ2 ) 2 ∣∣∣∣∣≤ R∑ p=1 λp‖ 1σ2A‖ j 2 ( 1+nλp/σ2 ) 2 =Θ ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) , ( 125 ) According to ( 123 ) , ( 124 ) and ( 125 ) , we have 1 2 ( E ( xn+1 , yn+1 ) Tr ( I− ( I+ Φ̃RΛRΦ̃ T R σ2 ) −1−Tr ( I− ( I+ ΦRΛRΦ T R σ2 ) −1 ) = 1+o ( 1 ) 2σ2 Tr ( I+ 1 σ2 ΛRΦ T RΦR ) −1ΛR ( I+ 1 σ2 ΛRΦ T RΦR ) −1 = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1+ ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2‖2F = 1+o ( 1 ) 2σ2 ( ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F + ∞∑ j=1 ∥∥∥∥ 1σ2j Λ1/2R ( I+ nσ2 ΛR ) −1/2Aj ( I+ nσ2 ΛR ) −1/2 ∥∥∥∥2 F +2TrΛ 1/2 R ( I+ n σ2 ΛR ) −1 ∞∑ j=1 ( −1 ) j 1 σ2j Λ 1/2 R ( I+ n σ2 ΛR ) −1/2Aj ( I+ n σ2 ΛR ) −1/2 ) = 1+o ( 1 ) 2σ2 ( Θ ( n ( 1−α ) ( 1−t ) α ) + ∞∑ j=1 1 σ2j O ( n ( 1−α ) ( 1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) +2 ∞∑ j=1 1 σ2j Θ ( n ( 1−α ) ( 
1−t ) α ) Õ ( n j ( 1−α+2τ− ( 1+2τ ) t ) 2α ( logR ) j/2 ) ) =Θ ( n ( 1−α ) ( 1−t ) α ) = 1+o ( 1 ) 2σ2 ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F . ( 126 ) Combining ( 121 ) and ( 126 ) we get that G1 , R ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I + n σ2 ΛR ) −1ΛR + ‖Λ1/2R ( I + n σ2 ΛR ) −1‖2F ) = Θ ( n ( 1−α ) ( 1−t ) α ) . From ( 115 ) we have that G1 ( Dn ) ≤ G1 , R ( Dn ) + |G1 ( Dn ) − G1 , R ( Dn ) | = Θ ( n ( 1−α ) ( 1−t ) α ) +O ( n 1σ2R 1−α ) . Choosing R = n ( 2α−1 α ( α−1 ) +1 ) ( 1−t ) we conclude the proof . Lemma 40 . Assume σ2 =Θ ( nt ) where 1− α1+2τ < t < 1 . Let S=n D. Assume that ‖ξ‖2 =1 . When n is sufficiently large , with probability of at least 1−2δ we have ‖ ( I+ 1σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2 =O ( √ ( 1δ +1 ) n·n − ( 1−t ) ) . ( 127 ) Proof of Lemma 40 . Using the Woodbury matrix identity , we have that ( ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦSΛSξ= [ I−ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSΦTS ] ΦSΛSξ =ΦSΛSξ−ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSΦTSΦSΛSξ =σ2ΦS ( σ 2I+ΛSΦ T SΦS ) −1ΛSξ . ( 128 ) Let A= ( I+ nσ2 ΛS ) −γ/2Λ γ/2 S ( Φ T SΦS−nI ) Λ γ/2 S ( I+ n σ2 ΛS ) −γ/2 , where γ > 1+α+2τ− ( 1+2τ+2α ) t2α ( 1−t ) . By Corollary 22 , with probability of at least 1−δ , we have ‖ 1σ2A‖2 =Õ ( n 1+α+2τ− ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) . When n is sufficiently large , ‖ 1σ2A‖2 is less than 1 . By Lemma 27 , we have ( I+ 1 σ2 ΛSΦ T SΦS ) −1 = ( I+ n σ2 ΛS ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1 . Then we have ‖ ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 = 1 σ2 ∥∥∥∥∥∥ ( I+ n σ2 ΛS ) −1+ ∞∑ j=1 ( −1 ) j ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1 ΛSξ ∥∥∥∥∥∥ 2 ≤ 1 σ2 ‖ ( I+ n σ2 ΛS ) −1ΛSξ‖2+ ∞∑ j=1 ∥∥∥∥∥ ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1ΛSξ ∥∥∥∥∥ 2 . ( 129 ) For the first term in the right hand side of the last equation , we have ‖ ( I+ n σ2 ΛS ) −1ΛSξ‖2≤‖ ( I+ n σ2 ΛS ) −1ΛS‖2‖ξ‖2≤ σ2 n =O ( n−1 ) . 
( 130 ) Using the fact that ‖ 1σ2A‖2 = Õ ( n 1+α+2τ− ( 1+2τ+2α ) t 2α −γ ( 1−t ) ) and ‖ ( I+ nσ2 ΛS ) −1ΛS‖2 ≤ n−1 , we have∥∥∥∥∥ ( 1 σ2 ( I+ n σ2 ΛS ) −1ΛS ( Φ T SΦS−nI ) ) j ( I+ n σ2 ΛS ) −1ΛSξ ∥∥∥∥∥ 2 = 1 σ2j ∥∥∥∥ ( I+ nσ2 ΛS ) −1+ γ2 Λ1− γ2S ( A ( I+ nσ2 ΛS ) −1+γΛ1−γS ) j−1A ( I+ nσ2 ΛS ) −1+ γ2 Λ− γ2S ΛSξ ∥∥∥∥ 2 ≤n ( 1−t ) ( −1+ γ 2 + ( −1+γ ) ( j−1 ) ) Õ ( n j ( 1+α+2τ− ( 1+2τ+2α ) t ) 2α −jγ ( 1−t ) ) ‖ ( I+ n σ2 ΛS ) −1+ γ2 Λ 1− γ2 S ξ‖2 =Õ ( n− γ 2 ( 1−t ) + ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) ‖ ( I+ n σ2 ΛS ) −1+ γ2 Λ 1− γ2 S ‖2‖ξ‖2 =Õ ( n− γ 2 ( 1−t ) + ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) O ( n ( −1+γ/2 ) ( 1−t ) ) =Õ ( n− ( 1−t ) + ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) . ( 131 ) Using ( 129 ) , ( 130 ) and ( 131 ) , we have ‖ ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 =σ−2Õ ( n−1 ) + ∞∑ j=1 Õ ( n−1+ ( 1−α+2τ− ( 1+2τ ) t ) j 2α ) =Õ ( n− ( 1−t ) ) +Õ ( n−1+ 1−α+2τ− ( 1+2τ ) t 2α ) =Õ ( n− ( 1−t ) ) . ( 132 ) By Corollary 20 , with probability of at least 1−δ , we have ‖ΦS ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 =Õ ( √ ( 1 δ +1 ) n‖ ( σ2I+ΛSΦTSΦS ) −1ΛSξ‖2 ) =Õ ( √ ( 1 δ +1 ) n·n− ( 1−t ) ) . From ( 128 ) we get ‖ ( I + 1σ2 ΦSΛSΦ T S ) −1fS ( x ) ‖2 = Õ ( √ ( 1δ +1 ) n ·n − ( 1−t ) ) . This concludes the proof . Lemma 41 . Assume σ2 = Θ ( nt ) where 1 − α1+2τ < t < 1 . Let δ = n −q where 0≤q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 . Under Assumptions 4 , 5 and 6 , assume that µ0 =0 . Then with probability of at least 1−6δ over sample inputs ( xi ) ni=1 , we have G2 ( Dn ) = ( 1+o ( 1 ) ) 2σ2 ‖ ( I+ n σ2 ΛR ) −1µR‖22 = Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) , where k= { 0 , 2α ≠ 2β−1 , 1 , 2α=2β−1 . Proof of Lemma 41 . Let S = nD . Let G2 , S ( Dn ) = E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) − T2 , S ( Dn ) ) . By Lemma 33 , when S is large enough , with probability of at least 1−3δ we have that |G2 ( Dn ) −G2 , S ( Dn ) |=Õ ( ( 1 δ +1 ) n 1 σ2 Smax { 1/2−β,1−α } ) . ( 133 ) Let Λ1 : S = diag { λ1 , ... , λS } , Φ1 : S = ( φ1 ( x ) , φ2 ( x ) , ... , φS ( x ) ) and µ1 : S = ( µ1 , ... , µS ) .
Since µ0 = 0 , we have T2 , S ( Dn ) = 12σ2µ T 1 : SΦ T 1 : S ( I + 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : Sµ1 : S . Define η1 : S = ( φ1 ( xn+1 ) , ... , φS ( xn+1 ) ) T and Φ̃1 : S= ( ΦT1 : S , η1 : S ) T . In the proof of Lemma 34 , we showed that T2 , S ( Dn ) = 1 2σ2 µT1 : SΦ T 1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : Sµ1 : S = 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S . We have G2 , S ( Dn ) =E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) −T2 , S ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ̃ T S Φ̃S ) −1µ1 : S ) − ( 1 2 µT1 : SΛ −1 1 : Sµ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S ) ) =E ( xn+1 , yn+1 ) ( 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S− 1 2 µT1 : SΛ −1 1 : S ( I+ 1 σ2 Λ1 : SΦ̃ T S Φ̃S ) −1µ1 : S ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µT1 : SΛ −1 1 : S ( I+ 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 1+ 1σ2 η T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : S µ1 : S ) ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µT1 : S ( I+ 1 σ2 Φ T 1 : SΦ1 : SΛ1 : S ) −1η1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S 1+ 1σ2 η T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1Λ1 : Sη1 : S ) ) =E ( xn+1 , yn+1 ) ( 1+o ( 1 ) 2σ2 µT1 : S ( I+ 1 σ2 ΦT1 : SΦ1 : SΛ1 : S ) −1η1 : Sη T 1 : S ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S ) = 1+o ( 1 ) 2σ2 µT1 : S ( I+ 1 σ2 ΦT1 : SΦ1 : SΛ1 : S ) −1 ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S = 1+o ( 1 ) 2σ2 ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖22 , ( 134 ) where in the fourth to last equality we used the Sherman–Morrison formula , in the third inequality we used ( 118 ) , and in the last equality we used the fact that E ( xn+1 , yn+1 ) η1 : SηT1 : S=I . Let µ̂1 : R= ( µ1 , ... , µR,0 , ... ,0 ) ∈RS . 
Then we have ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2≤‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ̂1 : R‖2+‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) ‖2 , ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2≥‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ̂1 : R‖2−‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) ‖2 . ( 135 ) ChooseR=n ( 1 α+κ ) ( 1−t ) where 0 < κ < α−1−2τ+ ( 1+2τ ) t2α2 ( 1−t ) . In Lemma 29 , ( 62 ) , we showed that with probability of at least 1−δ , ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : R ) −1µ1 : R‖2 , ( 136 ) where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . The same proof holds if we replace Φ1 : R with Φ1 : S , Λ1 : R with Λ1 : S , and µ1 : R with µ̂1 : R. We have ‖ ( σ2I+Λ1 : SΦT1 : SΦ1 : S ) −1µ̂1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 . ( 137 ) Next we bound ‖ ( I + 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S − µ̂1 : R ) ‖2 . By Assumption 5 , we have that ‖µ1 : S−µ̂1 : R‖2 =O ( R 1−2β 2 ) . 
For any ξ∈RS and ‖ξ‖2 =1 , using the Woodbury matrix identity , with probability of at least 1−2δ we have |ξT ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) | = |ξT ( I− 1 σ2 Λ1 : SΦ T 1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : S ) ( µ1 : S−µ̂1 : R ) | = |ξT ( µ1 : S−µ̂1 : R ) − 1 σ2 ξTΛ1 : SΦ T 1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : S ( µ1 : S−µ̂1 : R ) | ≤‖ξ‖2‖µ1 : S−µ̂1 : R‖2+ 1 σ2 |ξTΛ1 : SΦT1 : S ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : S ( µ1 : S−µ̂1 : R ) | ≤O ( R 1−2β 2 ) + 1 σ2 ‖ ( I+ 1 σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : SΛ1 : Sξ‖2‖Φ1 : S ( µ1 : S−µ̂1 : R ) ‖2 =O ( R 1−2β 2 ) + 1 σ2 O ( √ ( 1 δ +1 ) n·n− ( 1−t ) ) O ( √ ( 1 δ +1 ) nR 1−2β 2 ) =O ( ( 1 δ +1 ) R 1−2β 2 ) , where in the second to last step we used Corollary 20 to show ‖Φ1 : S ( µ1 : S − µ̂1 : R ) ‖2 = O ( √ ( 1δ +1 ) nR 1−2β 2 ) with probability of at least 1 − δ , and Lemma 40 to show that ‖ ( I + 1σ2 Φ1 : SΛ1 : SΦ T 1 : S ) −1Φ1 : SΛ1 : Sξ‖2 = O ( √ ( 1δ +1 ) n · n −1 ) with probability of at least 1−δ . SinceR=n ( 1α+κ ) ( 1−t ) , we have |ξT ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S−µ̂1 : R ) |=O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . Since ξ is arbitrary , we have ‖ ( I + 1σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1 ( µ1 : S − µ̂1 : R ) ‖2 = O ( ( 1δ + 1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . Since 0 ≤ q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 and 0 < κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) , we can choose κ < α−1−2τ+ ( 1+2τ ) t2α2 ( 1−t ) and κ is arbitrarily close to κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) such that 0≤q < ( 2β−1 ) ( 1−t ) κ2 . Then we have ( 1−2β ) ( 1−t ) κ 2 +q < 0 . 
From ( 135 ) and ( 137 ) , we have ‖ ( I+ 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖2 =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) +O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) +O ( ( nq+ ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =Θ ( nmax { − ( 1−t ) , ( 1−2β ) ( 1−t ) 2α } logk/2n ) = ( 1+o ( 1 ) ) ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 = ( 1+o ( 1 ) ) ‖ ( I+ n σ2 ΛR ) −1µR‖2 . ( 138 ) Hence G2 , S ( Dn ) = 1+o ( 1 ) 2σ2 ‖ ( I + 1 σ2 Λ1 : SΦ T 1 : SΦ1 : S ) −1µ1 : S‖22 = Θ ( n ( 1−t ) max { −2 , 1−2β α } logk/2 n ) . Then by ( 133 ) , G2 ( Dn ) =Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) +Õ ( ( 1δ +1 ) n 1 σ2S max { 1/2−β,1−α } ) . Choosing S=n ( 1+min { 2 , 2β−1 α } min { β−1/2 , α−1 } +1 ) ( 1−t ) , we get the result . Proof of Theorem 9 . From Lemmas 39 and 41 and 1α −1 > −2 , we have that with probability of at least 1−7δ̃ , E G ( Dn ) = 1+o ( 1 ) 2σ2 ( Tr ( I+ n σ2 ΛR ) −1ΛR−‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F +‖ ( I+ n σ2 ΛR ) −1µR‖22 ) =Θ ( n ( 1−α ) ( 1−t ) α ) +Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) =Θ ( nmax { ( 1−α ) ( 1−t ) α , ( 1−2β ) ( 1−t ) α } ) ( 139 ) where k= { 0 , 2α 6=2β−1 1 , 2α=2β−1 . Furthermore , we have Tr ( I+ n σ2 Λ ) −1Λ−Tr ( I+ n σ2 ΛR ) −1ΛR = ∞∑ p=R+1 λp 1+ nσ2λp ≤ ∞∑ p=R+1 Cλp −α 1+ nσ2Cλp −α ≤ ∞∑ p=R+1 Cλp −α= n σ2 O ( R1−α ) =O ( n ( 1−α ) ( 1−t ) ( 1 α+κ ) ) =o ( n ( 1−α ) ( 1−t ) α ) . Then we have Tr ( I+ n σ2 ΛR ) −1ΛR=Tr ( I+ n σ2 Λ ) −1Λ ( 1+o ( 1 ) ) . ( 140 ) Similarly we can prove ‖Λ1/2R ( I+ n σ2 ΛR ) −1‖2F =‖Λ1/2 ( I+ n σ2 Λ ) −1‖2F ( 1+o ( 1 ) ) ( 141 ) ‖ ( I+ n σ2 ΛR ) −1µR‖22 =‖ ( I+ n σ2 Λ ) −1µ‖22 ( 1+o ( 1 ) ) ( 142 ) Letting δ=7δ̃ , the proof is complete . In the case of µ0 > 0 , we have the following lemma : Lemma 42 . Let δ = n−q where 0 ≤ q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 . Under Assumptions 4 , 5 and 6 , assume that µ0 > 0 . 
Then with probability of at least 1− 6δ over sample inputs ( xi ) ni=1 , we have G2 ( Dn ) = 1 2σ2µ 2 0+o ( 1 ) . Proof of Lemma 42 . Let S = nD . Let G2 , S ( Dn ) = E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) − T2 , S ( Dn ) ) . By Lemma 33 , when S is large enough , with probability of at least 1−3δ we have that |G2 ( Dn ) −G2 , S ( Dn ) |= ∣∣E ( xn+1 , yn+1 ) [ T2 ( Dn+1 ) −T2 , S ( Dn+1 ) ] − [ T2 ( Dn ) −T2 , S ( Dn ) ] ∣∣ = ∣∣∣∣E ( xn+1 , yn+1 ) Õ ( ( 1δ+1 ) ( n+1 ) 1σ2Smax { 1/2−β,1−α } ) ∣∣∣∣ + ∣∣∣∣Õ ( ( 1δ+1 ) n 1σ2Smax { 1/2−β,1−α } ) ∣∣∣∣ =Õ ( ( 1 δ +1 ) n 1 σ2 Smax { 1/2−β,1−α } ) . ( 143 ) Let ΛS = diag { λ1 , ... , λS } , ΦS = ( φ1 ( x ) , φ2 ( x ) , ... , φS ( x ) ) and µS = ( µ1 , ... , µS ) . Define ηS = ( φ0 ( xn+1 ) , φ1 ( xn+1 ) , ... , φS ( xn+1 ) ) T and Φ̃S = ( ΦTS , ηS ) T . By the same technique as in the proof of Lemma 34 , we replace ΛR by Λ̃ε , R=diag { ε , λ1 , ... , λR } , let ε→0 and show the counterpart of the result ( 134 ) in the proof of Lemma 41 : G2 , S ( Dn ) =E ( xn+1 , yn+1 ) ( T2 , S ( Dn+1 ) −T2 , S ( Dn ) ) =E ( xn+1 , yn+1 ) ( 1 2σ2 µTS ( I+ 1 σ2 Φ T SΦSΛS ) −1ηSη T S ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS 1+ 1σ2 η T S ( I+ 1 σ2 ΛSΦ T SΦS ) −1ΛSηS ) ) =E ( xn+1 , yn+1 ) ( 1+o ( 1 ) 2σ2 µTS ( I+ 1 σ2 ΦTSΦSΛS ) −1ηSη T S ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS ) = 1+o ( 1 ) 2σ2 µTS ( I+ 1 σ2 ΦTSΦSΛS ) −1 ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS = 1+o ( 1 ) 2σ2 ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖22 , ( 144 ) where in the fourth to last equality we used the Sherman–Morrison formula , in the third inequality we used ( 118 ) , and in the last equality we used the fact that E ( xn+1 , yn+1 ) η1 : SηT1 : S=I . Let µ̂R= ( µ0 , µ1 , ... , µR,0 , ... ,0 ) ∈RS . Then we have ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖2≤‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µ̂R‖2+‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) ‖2 , ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖2≥‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µ̂R‖2−‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) ‖2 . ( 145 ) ChooseR=n ( 1 α+κ ) ( 1−t ) where 0 < κ < α−1−2τ+ ( 1+2τ ) tα2 ( 1−t ) .
In Lemma 29 , ( 62 ) , we showed that with probability of at least 1−δ , ‖ ( σ2I+Λ1 : RΦT1 : RΦ1 : R ) −1µ1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : R ) −1µ1 : R‖2 , ( 146 ) where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . The same proof holds if we replace Φ1 : R with Φ1 : S , Λ1 : R with Λ1 : S , and µ1 : R with µ̂1 : R. We have ‖ ( σ2I+Λ1 : SΦT1 : SΦ1 : S ) −1µ̂1 : R‖2 =Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) = 1+o ( 1 ) σ2 ‖ ( I+ n σ2 Λ1 : S ) −1µ̂1 : R‖2 . ( 147 ) So we have ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µ̂R‖2 =µ0+Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) =µ0+o ( 1 ) . ( 148 ) Next we bound ‖ ( I + 1σ2 ΛSΦ T SΦS ) −1 ( µS − µ̂R ) ‖2 . By Assumption 5 , we have that ‖µS − µ̂R‖2 = O ( R 1−2β 2 ) . For any ξ ∈ RS and ‖ξ‖2 = 1 , using the Woodbury matrix identity , with probability of at least 1−2δ we have |ξT ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) | = |ξT ( I− 1 σ2 ΛSΦ T S ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦS ) ( µS−µ̂R ) | = |ξT ( µS−µ̂R ) − 1 σ2 ξTΛSΦ T S ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦS ( µS−µ̂R ) | ≤‖ξ‖2‖µS−µ̂R‖2+ 1 σ2 |ξTΛSΦTS ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦS ( µS−µ̂R ) | ≤O ( R 1−2β 2 ) + 1 σ2 ‖ ( I+ 1 σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2‖ΦS ( µS−µ̂R ) ‖2 =O ( R 1−2β 2 ) + 1 σ2 O ( √ ( 1 δ +1 ) n·n− ( 1−t ) ) O ( √ ( 1 δ +1 ) nR 1−2β 2 ) =O ( ( 1 δ +1 ) R 1−2β 2 ) , where in the second to last step we used Corollary 20 to show‖ΦS ( µS−µ̂R ) ‖2 =O ( √ ( 1δ +1 ) nR 1−2β 2 ) with probability of at least 1− δ , and Lemma 40 to show that ‖ ( I + 1σ2 ΦSΛSΦ T S ) −1ΦSΛSξ‖2 = O ( √ ( 1δ +1 ) n·n − ( 1−t ) ) with probability of at least 1−δ . SinceR=n ( 1α+κ ) ( 1−t ) , we have |ξT ( I+ 1 σ2 ΛSΦ T SΦS ) −1 ( µS−µ̂R ) |=O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . Since ξ is arbitrary , we have ‖ ( I + 1σ2 ΛSΦ T SΦS ) −1 ( µS − µ̂R ) ‖2 = O ( ( 1δ + 1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) . 
Since 0 ≤ q < [ α− ( 1+2τ ) ( 1−t ) ] ( 2β−1 ) 4α2 and 0 < κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) , we can choose κ < α−1−2τ+ ( 1+2τ ) t2α2 ( 1−t ) and κ is arbitrarily close to κ < α−1−2τ+ ( 1+2τ ) t 2α2 ( 1−t ) such that 0≤q < ( 2β−1 ) ( 1−t ) κ2 . Then we have ( 1−2β ) ( 1−t ) κ 2 +q < 0 . From ( 145 ) and ( 148 ) , we have ‖ ( I+ 1 σ2 ΛSΦ T SΦS ) −1µS‖2 =µ0+Θ ( n ( 1−t ) max { −1 , 1−2β 2α } logk/2n ) +O ( ( 1 δ +1 ) n ( 1−2β ) ( 1−t ) 2α + ( 1−2β ) ( 1−t ) κ 2 ) =µ0+Θ ( n ( 1−t ) max { −1 , 1−2β2α } logk/2n ) =µ0+o ( 1 ) . ( 149 ) Hence G2 , S ( Dn ) = 1+o ( 1 ) 2σ2 ‖ ( I + 1 σ2 ΛSΦ T SΦS ) −1µS‖22 = 12σ2µ 2 0 + o ( 1 ) . Then by ( 143 ) , G2 ( Dn ) = 1 2σ2µ 2 0+o ( 1 ) +Õ ( ( 1δ +1 ) nS max { 1/2−β,1−α } ) . Choosing S=n ( 1+min { 2 , 2β−1α } min { β−1/2 , α−1 } +1 ) ( 1−t ) , we get the result . Proof of Theorem 11 . According to Lemma 42 , G2 ( Dn ) = 12σ2µ 2 0 +o ( 1 ) . By Lemma 39 , we have G1 ( Dn ) =Θ ( n ( 1−α ) ( 1−t ) α ) . Then E G ( Dn ) =G1 ( Dn ) +G2 ( Dn ) = 12σ2µ 2 0+o ( 1 ) . D.3 PROOFS RELATED TO THE EXCESS MEAN SQUARED GENERALIZATION ERROR Proof of Theorem 12 . For µ0 =0 , we can show that E M ( Dn ) =E Exn+1 [ m̄ ( xn+1 ) −f ( xn+1 ) ] 2 =E Exn+1 [ Kxn+1x ( Kn+σ2modelIn ) −1y−f ( xn+1 ) ] 2 =E Exn+1 [ ηTΛΦT [ ΦΛΦT +σ2modelIn ) −1 ( Φµ+ ) −ηTµ ] 2 =E Exn+1 [ ηTΛΦT ( ΦΛΦT +σ2modelIn ) −1 ] 2 +Exn+1 [ ηT ( ΛΦT ( ΦΛΦT +σ2modelIn ) −1Φ−I ) µ ] 2 =σ2trueTrΛΦ T ( ΦΛΦT +σ2modelIn ) −2ΦΛ +µT ( I+ 1 σ2model ΦTΦΛ ) −1 ( I+ 1 σ2model ΛΦTΦ ) −1 µ = σ2true σ2model Tr ( I+ ΛΦ TΦ σ2model ) −1Λ−Tr ( I+ ΛΦ TΦ σ2model ) −2Λ+‖ ( I+ 1 σ2model ΛΦTΦ ) −1µ‖22 . According to ( 138 ) from the proof of Lemma 41 , the truncation procedure ( 133 ) and ( 142 ) , with probability of at least 1−δ we have ‖ ( I+ 1 σ2model ΛΦTΦ ) −1µ‖22 =Θ ( n max { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) = ( 1+o ( 1 ) ) ‖ ( I+ n σ2model Λ ) −1µ‖22 , where k= { 0 , 2α 6=2β−1 , 1 , 2α=2β−1. . 
According to ( 121 ) and ( 126 ) from the proof of Lemma 39 , the truncation procedure ( 115 ) , ( 140 ) and ( 141 ) , with probability of at least 1−δ we have Tr ( I+ ΛΦ TΦ σ2model ) −1Λ−Tr ( I+ ΛΦ TΦ σ2model ) −2Λ = ( Tr ( I+ n σ2model Λ ) −1Λ ) ( 1+o ( 1 ) ) −‖Λ1/2 ( I+ n σ2model Λ ) −1‖2F ( 1+o ( 1 ) ) =Θ ( n ( 1−α ) ( 1−t ) α ) . Combining the above two equations we get E M ( Dn ) = ( 1+o ( 1 ) ) ( σ2true σ2model ( Tr ( I+ n σ2model Λ ) −1Λ−‖Λ1/2 ( I+ n σ2model Λ ) −1‖2F ) +‖ ( I+ n σ2model Λ ) −1µ‖22 ) = σ2true σ2model Θ ( n ( 1−α ) ( 1−t ) α ) +Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) =σ2trueΘ ( n 1−α−t α ) +Θ ( nmax { −2 ( 1−t ) , ( 1−2β ) ( 1−t ) α } logk/2n ) =Θ ( max { σ2truen 1−α−t α , n ( 1−2β ) ( 1−t ) α } ) . When µ0 > 0 , according to ( 149 ) in the proof of Lemma 42 and the truncation procedure ( 133 ) , with probability of at least 1−δ we have E M ( Dn ) =Θ ( n ( 1−α ) ( 1−t ) α ) +µ20+o ( 1 ) =µ20+o ( 1 ) . | The paper considers asymptotic properties of Gaussian process models where the eigenvalues from Mercer's theorem exhibit polynomial decay. Unlike the earlier analysis of Sollich, the authors do not assume that the model is correctly specified. The authors provide a short experiment to illustrate the theory. | SP:c3dc3845317218c326c306f54a6a67edca6c041e |
What Would the Expert $do(\cdot)$?: Causal Imitation Learning | 1 INTRODUCTION . Much of the theory of imitation learning ( IL ) indicates that with enough demonstrations , we should be able to accurately recover the expert ’ s policy . When we apply IL algorithms in practice however , we sometimes see them produce manifestly incorrect estimates of the expert ’ s policy ( Muller et al. , 2006 ; Codevilla et al. , 2019 ; de Haan et al. , 2019 ; Bansal et al. , 2018 ; Kuefler et al. , 2017 ) . One possible reason for this phenomenon is that empirically , we only have access to noisy recordings of what the expert did . This critical detail has been thus far neglected by most prior theoretical work in imitation learning . We focus in this paper on how best to learn from two kinds of noisy data : • Exogenous noise : When we observe expert actions corrupted by a persistent noise ( e.g . a faulty joystick that persistently perturbs actions before they are executed in the game ) . • Endogenous noise : When we do not observe the full state an expert used to pick an action ( e.g . the learner not knowing there ’ s an enemy behind a door ) . The net effect of either kind of persistent noise ( more formally , an unobserved confounder ) is to introduce temporal correlations in the recorded actions that do not have their true cause in the recorded state . For example , consider recordings of an expert driver slowing down at a stop sign . If all we present the learner with as state input is whether the expert was slowing down at the last timestep , they will likely learn to simply repeat the expert ’ s past action . Thus , once the car begins to slow down , it continues to slow down , regardless of whether there is a stop sign present . At a more abstract level , these sorts of inertia problems can result from temporal correlations between pairs of actions ( e.g . the effect of the stop sign ) being reflected in the state ( e.g . 
the past action variable ) , leading to spurious correlations between state and action that the learner might unfortunately latch onto ( e.g . repeating the past action ) . What should we hope to learn then in these confounded settings ? Given we do not have access to the unobserved confounder , a reasonable choice is to ensure that we match the behavior of an expert that has access to the same information we do . That is , if we could query the expert for an action with only the information we have available , we should strive to produce an action that matches this queried action . While applying an interactive imitation learning algorithm ( Ross et al. , 2011 ) would allow us to collect a dataset uncorrupted by confounding , a queryable expert is not a realistic assumption for many domains . We therefore focus on approaches for the off-policy setting . We base our algorithms on a technique from econometrics for dealing with confounding in recorded data known as instrumental variable regression ( IVR ) ( Angrist et al. , 1996 ) . The high-level idea of IVR is to leverage an instrument , a source of random variation independent of the confounder , to deconfound inputs to a learning procedure via conditioning on the instrument . In dynamical systems , history can act as this source of variation , as it is unaffected by future confounding ( Hefny et al. , 2015 ) . Our key insight is that we can leverage past states as instruments to break the spurious correlation between states and actions caused by an unobserved confounder . Our work provides the following contributions : 1 . We formalize confounding in imitation learning . We provide a structural causal model that captures inertia effects that result from temporally correlated exogenous or endogenous noise . We also derive a test to detect whether this sort of confounding is present in a dataset . 2 . We present a unified derivation of modern instrumental variable regression techniques . 
We show how two recent extensions of the classical IVR technique share a common structure . We also extend the theoretical analysis of previous work by deriving accuracy bounds . 3 . We provide two novel algorithms to deal with confounding in imitation learning . We derive two novel IVR-based algorithms : • DoubIL is a generative modeling approach that can utilize access to a simulator for reduced sample complexity . • ResiduIL is a simulator-free , game-theoretic approach . We derive performance bounds for policies produced by these algorithms under exogenous noise . We empirically investigate the effect of the persistence of the confounder on this bound . We also compare the performance of these approaches to behavioral cloning under endogenous noise . 2 RELATED WORK . Imitation Learning . Broadly speaking , imitation learning approaches can be grouped into three classes : offline , online , and interactive . Our work is most similar to offline imitation learning algorithms ( e.g . Behavioral Cloning ( Pomerleau , 1989 ) , ValueDice ( Kostrikov et al. , 2019 ) , AdVIL ( Swamy et al. , 2021 ) ) that operate purely on collected data . Unlike previous work however , we consider the effect of unobserved confounding . Our work shares the goal of interactive imitation learning algorithms ( e.g . DAgger ( Ross et al. , 2011 ) , AggreVaTe ( Ross & Bagnell , 2014 ) ) , in that we seek to match the output of a query to the expert . However , we focus on matching the output of a query on recorded data , rather than on learner rollouts , as is standard for interactive approaches . This is because the confounder decouples the actions recorded in the data and queried actions . Zhang et al . ( 2020 ) consider imitation learning through the lens of causal inference but focus on the one-step setting , while we consider multiple timesteps . Kumor et al . 
( 2021 ) contemporaneously consider the multi-step setting and come to similar conclusions as us about the challenges of endogenous noise . They derive a necessary and sufficient structural condition for successful imitation learning , while we focus on practical algorithms with performance guarantees for a particular graphical model under exogenous noise . In contrast to imitation learning methods that seek to match moments of the expert ’ s behavior ( Swamy et al. , 2021 ) , we focus only on matching average expert actions . We leave matching arbitrary moments to future work . Inertia Effects in Imitation Learning . Several authors have empirically observed a latching effect in policies trained via imitation learning ( Muller et al. , 2006 ; Codevilla et al. , 2019 ; de Haan et al. , 2019 ; Bansal et al. , 2018 ; Kuefler et al. , 2017 ) , where learned policies tend to inappropriately repeat the same action . We seek to provide a plausible explanation and correction for the phenomenon reported in these works , and list several examples in Table 1 . We note that when attempting to explain inertia effects , de Haan et al . ( 2019 ) propose causal confounding as the root cause of the error . However , as previously pointed out by Spencer et al . ( 2021 ) , there is no actual confound in the theoretical or empirical examples in the work of de Haan et al . ( 2019 ) . This is because the learner observes all of the variables the expert was using to make decisions . Instrumental Variable Regression . The classical approach to instrumental variable regression ( Wright , 1928 ) is a two-stage least squares procedure ( e.g . in Angrist et al . ( 1996 ) ’ s textbook ) . We focus on the more general nonlinear setting and instead base our approaches on the more recent DEEPIV ( Hartford et al. , 2017 ) and AGMM ( Dikkala et al. , 2020 ) . We present extensions to the work in these papers , including a unified derivation of both methods and error analysis for DEEPIV . 
3 A BRIEF REVIEW OF INSTRUMENTS IN CAUSAL MODELING . We begin by discussing the concept of an instrument before deriving our algorithmic approaches in a simplified , non-sequential setting . Let X , Y , and Z be random variables on ( potentially infinite ) sample spaces X , Y , and Z . Assume that X , Y , and Z have the causal , rather than statistical , dependency structure in Fig . 2 . Given a dataset of ( x , y , z ) tuples , we are interested in determining the causal relationship between X and Y , E [ Y |do ( x ) ] , where do ( · ) is the interventional operator of Pearl et al . ( 2016 ) . Intuitively , E [ Y |do ( x ) ] is the expected value of Y when we intervene and set X = x , rather than observe such an X . In the SCM to the right , h ( x ) = E [ Y |do ( x ) ] . Because of the presence of an unobserved confounder , U , that affects both X and Y , standard regression ( e.g . Ordinary Least Squares or OLS ) generically produces inconsistent estimates . Coarsely , this occurs because OLS will over-estimate the influence of the parts of X that are affected by the confounder . If we only have observational data and are unable to perform randomized control trials , a canonical technique to recover h is IVR ( Winship & Morgan , 1999 ) . Formally , an instrument Z must satisfy three structural conditions : 1 . Unconfounded Instrument : Z ⊥⊥ U – i.e . independent randomization from confounder . 2 . Exclusion : Z ⊥⊥ Y |X , U – i.e . no extraneous paths . 3 . Relevance : Z ⊥̸⊥ X – i.e . conditioning has an effect . Z satisfies these three conditions in the SCM of Fig . 2 . 1 Without loss of generality , we assume that E [ U ] = 0 . This allows us to concisely derive a set of conditional moment restrictions ( CMR ) : 0 = E [ U ] = E [ U |z ] = E [ Y − h ( X ) |z ] ( 1 ) ⇒ ∀z ∈ Z , E [ Y |z ] = E [ h ( X ) |z ] . 
( 2 ) In words , these constraints are saying that a necessary condition for recovery of h ( x ) is that for all values of the instrument , the actual and predicted expected values of Y |Z are equal . We further assume that noise U enters additively to Y ,2 and write out the following equations : X = g ( Z , U , V ) , Y = h ( X ) + U . ( 3 ) We now derive an appropriate loss function for finding an ĥ that approximately satisfies the CMR . If we only have finite samples and can therefore only estimate conditional expectations up to some tolerance , it is natural to relax the CMR to $\min_{\hat{h} \in \mathcal{H} , \delta} \frac{1}{2} \mathbb{E}_z [ \delta_z^2 ]$ s.t . $| \mathbb{E} [ Y - \hat{h} ( X ) | z ] | \le \delta_z , \ \delta_z \ge 0 , \ \forall z \in \mathcal{Z}$ , ( 4 ) where the $\delta_z$ are slack variables . Then , the Lagrangian ( with the natural $P ( z )$ -weighted inner product that captures how often we expect each z to occur ) is $L ( \hat{h} , \delta , \lambda ) = \sum_{z \in \mathcal{Z}} P ( z ) \lambda_z ( \mathbb{E} [ Y - \hat{h} ( X ) | z ] - \delta_z ) + P ( z ) \frac{1}{2} \delta_z^2$ , ( 5 ) where λ is the vector of Lagrange multipliers . By the stationarity component of the KKT conditions , $\nabla_{\delta_z} L ( \hat{h} , \delta , \lambda ) = - P ( z ) \lambda_z + P ( z ) \delta_z = 0$ , ( 6 ) implying that $\delta_z = \lambda_z$ . Plugging this back into the Lagrangian , we can simplify our function to $L ( \hat{h} , \lambda ) = \sum_{z \in \mathcal{Z}} P ( z ) \lambda_z \mathbb{E} [ Y - \hat{h} ( X ) | z ] - P ( z ) \frac{1}{2} \lambda_z^2$ . ( 7 ) We refer to ( 7 ) as the Regularized Lagrangian or ReLa for short . Now , solving for the optimal Lagrange multipliers via stationarity , we arrive at $\nabla_{\lambda_z} L ( \hat{h} , \lambda ) = P ( z ) \mathbb{E} [ Y - \hat{h} ( X ) | z ] - P ( z ) \lambda_z = 0$ , ( 8 ) which implies the optimal $\lambda_z$ is equal to $\mathbb{E} [ Y - \hat{h} ( X ) | z ]$ . Plugging this back into ( 7 ) recovers the loss function , $L ( \hat{h} ) = \sum_{z \in \mathcal{Z}} P ( z ) \, \mathbb{E} [ Y - \hat{h} ( X ) | z ]^2 = \mathrm{PRMSE}^2 ( \hat{h} )$ . ( 9 ) This expression is the square of the Projected Root Mean Squared Error ( PRMSE ) of Chen & Pouzo ( 2012 ) . To recap , by minimizing Eq . 9 , we are attempting to find an ĥ that approximately satisfies the CMR . Minimizing PRMSE is a necessary condition for recovering $\mathbb{E} [ Y | do ( X ) ]$ . 
For it to be a sufficient condition , one needs the natural identifiability assumptions – we refer interested readers to Chen & Pouzo ( 2012 ) for a more thorough discussion . | The authors proposed an SCM to model a latent confounder in the problem of imitation learning. In particular, two cases of exogenous and endogenous noise are considered. For the instrumental variable, they studied the effect of error in estimating $P(X|z)=g(z)$ on the Projected Root Mean Squared Error (PRMSE), where $Z$ is the instrumental variable. They also proposed a game-theoretic approach to estimate $h(x)= \mathbb{E}[Y|do(X)=x]$. Moreover, they proposed two algorithms to deal with latent confounders in imitation learning. Experimental results showed the proposed algorithms can perform better than behavioral cloning. | SP:a88ed0bcf467ba30ac13795d6767298d8cdedb2e |
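The IVR review in Section 3 of the paper text above can be illustrated on a toy linear SCM. The sketch below is not the paper's DoubIL or ResiduIL algorithm; it is a minimal classical two-stage least squares (2SLS) estimate, assuming a linear causal effect h(x) = θx, an additive confounder U, and an instrument Z satisfying the three stated conditions. OLS on (x, y) is biased because Cov(X, U) ≠ 0, while regressing Y on the instrument-predicted part of X recovers θ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
theta = 2.0  # true causal effect: h(x) = theta * x

# Linear SCM: Z is the instrument, U the unobserved confounder, V exogenous noise.
z = rng.normal(size=n)
u = rng.normal(size=n)
v = rng.normal(size=n)
x = z + u + 0.3 * v   # X = g(Z, U, V)
y = theta * x + u     # Y = h(X) + U, with U entering additively

# OLS is inconsistent: the confounder inflates the estimated slope.
ols = np.cov(x, y)[0, 1] / np.var(x)

# 2SLS: first regress X on Z, then regress Y on the fitted values of X.
x_hat = (np.cov(z, x)[0, 1] / np.var(z)) * z
tsls = np.cov(x_hat, y)[0, 1] / np.var(x_hat)

print(round(ols, 2), round(tsls, 2))  # tsls should be close to theta; ols is biased upward
```

This is the procedure the related-work section attributes to Wright (1928); the paper's own contribution replaces the linear first and second stages with the nonlinear DEEPIV- and AGMM-style estimators and uses past states as the instrument.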
What Would the Expert $do(\cdot)$?: Causal Imitation Learning | 1 INTRODUCTION . Much of the theory of imitation learning ( IL ) indicates that with enough demonstrations , we should be able to accurately recover the expert ’ s policy . When we apply IL algorithms in practice however , we sometimes see them produce manifestly incorrect estimates of the expert ’ s policy ( Muller et al. , 2006 ; Codevilla et al. , 2019 ; de Haan et al. , 2019 ; Bansal et al. , 2018 ; Kuefler et al. , 2017 ) . One possible reason for this phenomenon is that empirically , we only have access to noisy recordings of what the expert did . This critical detail has been thus far neglected by most prior theoretical work in imitation learning . We focus in this paper on how best to learn from two kinds of noisy data : • Exogenous noise : When we observe expert actions corrupted by a persistent noise ( e.g . a faulty joystick that persistently perturbs actions before they are executed in the game ) . • Endogenous noise : When we do not observe the full state an expert used to pick an action ( e.g . the learner not knowing there ’ s an enemy behind a door ) . The net effect of either kind of persistent noise ( more formally , an unobserved confounder ) is to introduce temporal correlations in the recorded actions that do not have their true cause in the recorded state . For example , consider recordings of an expert driver slowing down at a stop sign . If all we present the learner with as state input is whether the expert was slowing down at the last timestep , they will likely learn to simply repeat the expert ’ s past action . Thus , once the car begins to slow down , it continues to slow down , regardless of whether there is a stop sign present . At a more abstract level , these sorts of inertia problems can result from temporal correlations between pairs of actions ( e.g . the effect of the stop sign ) being reflected in the state ( e.g . 
the past action variable ) , leading to spurious correlations between state and action that the learner might unfortunately latch onto ( e.g . repeating the past action ) . What should we hope to learn then in these confounded settings ? Given we do not have access to the unobserved confounder , a reasonable choice is to ensure that we match the behavior of an expert that has access to the same information we do . That is , if we could query the expert for an action with only the information we have available , we should strive to produce an action that matches this queried action . While applying an interactive imitation learning algorithm ( Ross et al. , 2011 ) would allow us to collect a dataset uncorrupted by confounding , a queryable expert is not a realistic assumption for many domains . We therefore focus on approaches for the off-policy setting . We base our algorithms on a technique from econometrics for dealing with confounding in recorded data known as instrumental variable regression ( IVR ) ( Angrist et al. , 1996 ) . The high-level idea of IVR is to leverage an instrument , a source of random variation independent of the confounder , to deconfound inputs to a learning procedure via conditioning on the instrument . In dynamical systems , history can act as this source of variation , as it is unaffected by future confounding ( Hefny et al. , 2015 ) . Our key insight is that we can leverage past states as instruments to break the spurious correlation between states and actions caused by an unobserved confounder . Our work provides the following contributions : 1 . We formalize confounding in imitation learning . We provide a structural causal model that captures inertia effects that result from temporally correlated exogenous or endogenous noise . We also derive a test to detect whether this sort of confounding is present in a dataset . 2 . We present a unified derivation of modern instrumental variable regression techniques . 
We show how two recent extensions of the classical IVR technique share a common structure . We also extend the theoretical analysis of previous work by deriving accuracy bounds . 3 . We provide two novel algorithms to deal with confounding in imitation learning . We derive two novel IVR-based algorithms : • DoubIL is a generative modeling approach that can utilize access to a simulator for reduced sample complexity . • ResiduIL is a simulator-free , game-theoretic approach . We derive performance bounds for policies produced by these algorithms under exogenous noise . We empirically investigate the effect of the persistence of the confounder on this bound . We also compare the performance of these approaches to behavioral cloning under endogenous noise . 2 RELATED WORK . Imitation Learning . Broadly speaking , imitation learning approaches can be grouped into three classes : offline , online , and interactive . Our work is most similar to offline imitation learning algorithms ( e.g . Behavioral Cloning ( Pomerleau , 1989 ) , ValueDice ( Kostrikov et al. , 2019 ) , AdVIL ( Swamy et al. , 2021 ) ) that operate purely on collected data . Unlike previous work however , we consider the effect of unobserved confounding . Our work shares the goal of interactive imitation learning algorithms ( e.g . DAgger ( Ross et al. , 2011 ) , AggreVaTe ( Ross & Bagnell , 2014 ) ) , in that we seek to match the output of a query to the expert . However , we focus on matching the output of a query on recorded data , rather than on learner rollouts , as is standard for interactive approaches . This is because the confounder decouples the actions recorded in the data and queried actions . Zhang et al . ( 2020 ) consider imitation learning through the lens of causal inference but focus on the one-step setting , while we consider multiple timesteps . Kumor et al . 
( 2021 ) contemporaneously consider the multi-step setting and come to similar conclusions as us about the challenges of endogenous noise . They derive a necessary and sufficient structural condition for successful imitation learning , while we focus on practical algorithms with performance guarantees for a particular graphical model under exogenous noise . In contrast to imitation learning methods that seek to match moments of the expert ’ s behavior ( Swamy et al. , 2021 ) , we focus only on matching average expert actions . We leave matching arbitrary moments to future work . Inertia Effects in Imitation Learning . Several authors have empirically observed a latching effect in policies trained via imitation learning ( Muller et al. , 2006 ; Codevilla et al. , 2019 ; de Haan et al. , 2019 ; Bansal et al. , 2018 ; Kuefler et al. , 2017 ) , where learned policies tend to inappropriately repeat the same action . We seek to provide a plausible explanation and correction for the phenomenon reported in these works , and list several examples in Table 1 . We note that when attempting to explain inertia effects , de Haan et al . ( 2019 ) propose causal confounding as the root cause of the error . However , as previously pointed out by Spencer et al . ( 2021 ) , there is no actual confound in the theoretical or empirical examples in the work of de Haan et al . ( 2019 ) . This is because the learner observes all of the variables the expert was using to make decisions . Instrumental Variable Regression . The classical approach to instrumental variable regression ( Wright , 1928 ) is a two-stage least squares procedure ( e.g . in Angrist et al . ( 1996 ) ’ s textbook ) . We focus on the more general nonlinear setting and instead base our approaches on the more recent DEEPIV ( Hartford et al. , 2017 ) and AGMM ( Dikkala et al. , 2020 ) . We present extensions to the work in these papers , including a unified derivation of both methods and error analysis for DEEPIV . 
3 A BRIEF REVIEW OF INSTRUMENTS IN CAUSAL MODELING . We begin by discussing the concept of an instrument before deriving our algorithmic approaches in a simplified , non-sequential setting . Let X , Y , and Z be random variables on ( potentially infinite ) sample spaces X , Y , and Z . Assume that X , Y , and Z have the causal , rather than statistical , dependency structure in Fig . 2 . Given a dataset of ( x , y , z ) tuples , we are interested in determining the causal relationship between X and Y , E [ Y |do ( x ) ] , where do ( · ) is the interventional operator of Pearl et al . ( 2016 ) . Intuitively , E [ Y |do ( x ) ] is the expected value of Y when we intervene and set X = x , rather than observe such an X . In the SCM to the right , h ( x ) = E [ Y |do ( x ) ] . Because of the presence of an unobserved confounder , U , that affects both X and Y , standard regression ( e.g . Ordinary Least Squares or OLS ) generically produces inconsistent estimates . Coarsely , this occurs because OLS will over-estimate the influence of the parts of X that are affected by the confounder . If we only have observational data and are unable to perform randomized control trials , a canonical technique to recover h is IVR ( Winship & Morgan , 1999 ) . Formally , an instrument Z must satisfy three structural conditions : 1 . Unconfounded Instrument : Z ⊥⊥ U – i.e . independent randomization from confounder . 2 . Exclusion : Z ⊥⊥ Y |X , U – i.e . no extraneous paths . 3 . Relevance : Z ⊥̸⊥ X – i.e . conditioning has an effect . Z satisfies these three conditions in the SCM of Fig . 2 . 1 Without loss of generality , we assume that E [ U ] = 0 . This allows us to concisely derive a set of conditional moment restrictions ( CMR ) : 0 = E [ U ] = E [ U |z ] = E [ Y − h ( X ) |z ] ( 1 ) ⇒ ∀z ∈ Z , E [ Y |z ] = E [ h ( X ) |z ] . 
( 2 ) In words , these constraints are saying that a necessary condition for recovery of h ( x ) is that for all values of the instrument , the actual and predicted expected values of Y |Z are equal . We further assume that noise U enters additively to Y ,2 and write out the following equations : X = g ( Z , U , V ) , Y = h ( X ) + U . ( 3 ) We now derive an appropriate loss function for finding an ĥ that approximately satisfies the CMR . If we only have finite samples and can therefore only estimate conditional expectations up to some tolerance , it is natural to relax the CMR to $\min_{\hat{h} \in \mathcal{H} , \delta} \frac{1}{2} \mathbb{E}_z [ \delta_z^2 ]$ s.t . $| \mathbb{E} [ Y - \hat{h} ( X ) | z ] | \le \delta_z , \ \delta_z \ge 0 , \ \forall z \in \mathcal{Z}$ , ( 4 ) where the $\delta_z$ are slack variables . Then , the Lagrangian ( with the natural $P ( z )$ -weighted inner product that captures how often we expect each z to occur ) is $L ( \hat{h} , \delta , \lambda ) = \sum_{z \in \mathcal{Z}} P ( z ) \lambda_z ( \mathbb{E} [ Y - \hat{h} ( X ) | z ] - \delta_z ) + P ( z ) \frac{1}{2} \delta_z^2$ , ( 5 ) where λ is the vector of Lagrange multipliers . By the stationarity component of the KKT conditions , $\nabla_{\delta_z} L ( \hat{h} , \delta , \lambda ) = - P ( z ) \lambda_z + P ( z ) \delta_z = 0$ , ( 6 ) implying that $\delta_z = \lambda_z$ . Plugging this back into the Lagrangian , we can simplify our function to $L ( \hat{h} , \lambda ) = \sum_{z \in \mathcal{Z}} P ( z ) \lambda_z \mathbb{E} [ Y - \hat{h} ( X ) | z ] - P ( z ) \frac{1}{2} \lambda_z^2$ . ( 7 ) We refer to ( 7 ) as the Regularized Lagrangian or ReLa for short . Now , solving for the optimal Lagrange multipliers via stationarity , we arrive at $\nabla_{\lambda_z} L ( \hat{h} , \lambda ) = P ( z ) \mathbb{E} [ Y - \hat{h} ( X ) | z ] - P ( z ) \lambda_z = 0$ , ( 8 ) which implies the optimal $\lambda_z$ is equal to $\mathbb{E} [ Y - \hat{h} ( X ) | z ]$ . Plugging this back into ( 7 ) recovers the loss function , $L ( \hat{h} ) = \sum_{z \in \mathcal{Z}} P ( z ) \, \mathbb{E} [ Y - \hat{h} ( X ) | z ]^2 = \mathrm{PRMSE}^2 ( \hat{h} )$ . ( 9 ) This expression is the square of the Projected Root Mean Squared Error ( PRMSE ) of Chen & Pouzo ( 2012 ) . To recap , by minimizing Eq . 9 , we are attempting to find an ĥ that approximately satisfies the CMR . Minimizing PRMSE is a necessary condition for recovering $\mathbb{E} [ Y | do ( X ) ]$ . 
For it to be a sufficient condition , one needs the natural identifiability assumptions – we refer interested readers to Chen & Pouzo ( 2012 ) for a more thorough discussion . | This paper attempts to study imitation learning from a causal inference perspective. More specifically, let S be an observed state, A be an action, and Z be an instrumental variable. The authors propose that one should perform causal imitation learning by learning a policy function that induces the same interventional distribution P(A|do(Z)). | SP:a88ed0bcf467ba30ac13795d6767298d8cdedb2e |
What Would the Expert $do(\cdot)$?: Causal Imitation Learning | 1 INTRODUCTION . Much of the theory of imitation learning ( IL ) indicates that with enough demonstrations , we should be able to accurately recover the expert ’ s policy . When we apply IL algorithms in practice however , we sometimes see them produce manifestly incorrect estimates of the expert ’ s policy ( Muller et al. , 2006 ; Codevilla et al. , 2019 ; de Haan et al. , 2019 ; Bansal et al. , 2018 ; Kuefler et al. , 2017 ) . One possible reason for this phenomenon is that empirically , we only have access to noisy recordings of what the expert did . This critical detail has been thus far neglected by most prior theoretical work in imitation learning . We focus in this paper on how best to learn from two kinds of noisy data : • Exogenous noise : When we observe expert actions corrupted by a persistent noise ( e.g . a faulty joystick that persistently perturbs actions before they are executed in the game ) . • Endogenous noise : When we do not observe the full state an expert used to pick an action ( e.g . the learner not knowing there ’ s an enemy behind a door ) . The net effect of either kind of persistent noise ( more formally , an unobserved confounder ) is to introduce temporal correlations in the recorded actions that do not have their true cause in the recorded state . For example , consider recordings of an expert driver slowing down at a stop sign . If all we present the learner with as state input is whether the expert was slowing down at the last timestep , they will likely learn to simply repeat the expert ’ s past action . Thus , once the car begins to slow down , it continues to slow down , regardless of whether there is a stop sign present . At a more abstract level , these sorts of inertia problems can result from temporal correlations between pairs of actions ( e.g . the effect of the stop sign ) being reflected in the state ( e.g . 
the past action variable ) , leading to spurious correlations between state and action that the learner might unfortunately latch onto ( e.g . repeating the past action ) . What should we hope to learn then in these confounded settings ? Given we do not have access to the unobserved confounder , a reasonable choice is to ensure that we match the behavior of an expert that has access to the same information we do . That is , if we could query the expert for an action with only the information we have available , we should strive to produce an action that matches this queried action . While applying an interactive imitation learning algorithm ( Ross et al. , 2011 ) would allow us to collect a dataset uncorrupted by confounding , a queryable expert is not a realistic assumption for many domains . We therefore focus on approaches for the off-policy setting . We base our algorithms on a technique from econometrics for dealing with confounding in recorded data known as instrumental variable regression ( IVR ) ( Angrist et al. , 1996 ) . The high-level idea of IVR is to leverage an instrument , a source of random variation independent of the confounder , to deconfound inputs to a learning procedure via conditioning on the instrument . In dynamical systems , history can act as this source of variation , as it is unaffected by future confounding ( Hefny et al. , 2015 ) . Our key insight is that we can leverage past states as instruments to break the spurious correlation between states and actions caused by an unobserved confounder . Our work provides the following contributions : 1 . We formalize confounding in imitation learning . We provide a structural causal model that captures inertia effects that result from temporally correlated exogenous or endogenous noise . We also derive a test to detect whether this sort of confounding is present in a dataset . 2 . We present a unified derivation of modern instrumental variable regression techniques . 
We show how two recent extensions of the classical IVR technique share a common structure . We also extend the theoretical analysis of previous work by deriving accuracy bounds . 3 . We provide two novel algorithms to deal with confounding in imitation learning . We derive two novel IVR-based algorithms : • DoubIL is a generative modeling approach that can utilize access to a simulator for reduced sample complexity . • ResiduIL is a simulator-free , game-theoretic approach . We derive performance bounds for policies produced by these algorithms under exogenous noise . We empirically investigate the effect of the persistence of the confounder on this bound . We also compare the performance of these approaches to behavioral cloning under endogenous noise . 2 RELATED WORK . Imitation Learning . Broadly speaking , imitation learning approaches can be grouped into three classes : offline , online , and interactive . Our work is most similar to offline imitation learning algorithms ( e.g . Behavioral Cloning ( Pomerleau , 1989 ) , ValueDice ( Kostrikov et al. , 2019 ) , AdVIL ( Swamy et al. , 2021 ) ) that operate purely on collected data . Unlike previous work however , we consider the effect of unobserved confounding . Our work shares the goal of interactive imitation learning algorithms ( e.g . DAgger ( Ross et al. , 2011 ) , AggreVaTe ( Ross & Bagnell , 2014 ) ) , in that we seek to match the output of a query to the expert . However , we focus on matching the output of a query on recorded data , rather than on learner rollouts , as is standard for interactive approaches . This is because the confounder decouples the actions recorded in the data and queried actions . Zhang et al . ( 2020 ) consider imitation learning through the lens of causal inference but focus on the one-step setting , while we consider multiple timesteps . Kumor et al . 
( 2021 ) contemporaneously consider the multi-step setting and come to similar conclusions as us about the challenges of endogenous noise . They derive a necessary and sufficient structural condition for successful imitation learning , while we focus on practical algorithms with performance guarantees for a particular graphical model under exogenous noise . In contrast to imitation learning methods that seek to match moments of the expert ’ s behavior ( Swamy et al. , 2021 ) , we focus only on matching average expert actions . We leave matching arbitrary moments to future work . Inertia Effects in Imitation Learning . Several authors have empirically observed a latching effect in policies trained via imitation learning ( Muller et al. , 2006 ; Codevilla et al. , 2019 ; de Haan et al. , 2019 ; Bansal et al. , 2018 ; Kuefler et al. , 2017 ) , where learned policies tend to inappropriately repeat the same action . We seek to provide a plausible explanation and correction for the phenomenon reported in these works , and list several examples in Table 1 . We note that when attempting to explain inertia effects , de Haan et al . ( 2019 ) propose causal confounding as the root cause of the error . However , as previously pointed out by Spencer et al . ( 2021 ) , there is no actual confound in the theoretical or empirical examples in the work of de Haan et al . ( 2019 ) . This is because the learner observes all of the variables the expert was using to make decisions . Instrumental Variable Regression . The classical approach to instrumental variable regression ( Wright , 1928 ) is a two-stage least squares procedure ( e.g . in Angrist et al . ( 1996 ) ’ s textbook ) . We focus on the more general nonlinear setting and instead base our approaches on the more recent DEEPIV ( Hartford et al. , 2017 ) and AGMM ( Dikkala et al. , 2020 ) . We present extensions to the work in these papers , including a unified derivation of both methods and error analysis for DEEPIV . 
3 A BRIEF REVIEW OF INSTRUMENTS IN CAUSAL MODELING . We begin by discussing the concept of an instrument before deriving our algorithmic approaches in a simplified , non-sequential setting . Let X , Y , and Z be random variables on ( potentially infinite ) sample spaces X , Y , and Z . Assume that X , Y , and Z have the causal , rather than statistical , dependency structure in Fig . 2 . Given a dataset of ( x , y , z ) tuples , we are interested in determining the causal relationship between X and Y , E [ Y |do ( x ) ] , where do ( · ) is the interventional operator of Pearl et al . ( 2016 ) . Intuitively , E [ Y |do ( x ) ] is the expected value of Y when we intervene and set X = x , rather than observe such an X . In the SCM to the right , h ( x ) = E [ Y |do ( x ) ] . Because of the presence of an unobserved confounder , U , that affects both X and Y , standard regression ( e.g . Ordinary Least Squares or OLS ) generically produces inconsistent estimates . Coarsely , this occurs because OLS will over-estimate the influence of the parts of X that are affected by the confounder . If we only have observational data and are unable to perform randomized control trials , a canonical technique to recover h is IVR ( Winship & Morgan , 1999 ) . Formally , an instrument Z must satisfy three structural conditions : 1 . Unconfounded Instrument : Z ⊥⊥ U – i.e . independent randomization from confounder . 2 . Exclusion : Z ⊥⊥ Y |X , U – i.e . no extraneous paths . 3 . Relevance : Z ⊥̸⊥ X – i.e . conditioning has an effect . Z satisfies these three conditions in the SCM of Fig . 2 . 1 Without loss of generality , we assume that E [ U ] = 0 . This allows us to concisely derive a set of conditional moment restrictions ( CMR ) : 0 = E [ U ] = E [ U |z ] = E [ Y − h ( X ) |z ] ( 1 ) ⇒ ∀z ∈ Z , E [ Y |z ] = E [ h ( X ) |z ] . 
(2) In words, these constraints say that a necessary condition for recovery of h(x) is that, for all values of the instrument, the actual and predicted expected values of Y | Z are equal. We further assume that the noise U enters additively into Y, and write out the following equations: X = g(Z, U, V), Y = h(X) + U. (3) We now derive an appropriate loss function for finding an ĥ that approximately satisfies the CMR. If we only have finite samples and can therefore only estimate conditional expectations up to some tolerance, it is natural to relax the CMR to min_{ĥ∈H, δ} (1/2) E_z[δ_z²] s.t. |E[Y − ĥ(X) | z]| ≤ δ_z, δ_z ≥ 0, ∀z ∈ Z, (4) where the δ_z are slack variables. Then, the Lagrangian (with the natural P(z)-weighted inner product that captures how often we expect each z to occur) is L(ĥ, δ, λ) = Σ_{z∈Z} P(z) λ_z (E[Y − ĥ(X) | z] − δ_z) + P(z) (1/2) δ_z², (5) where λ is the vector of Lagrange multipliers. By the stationarity component of the KKT conditions, ∇_{δ_z} L(ĥ, δ, λ) = −P(z) λ_z + P(z) δ_z = 0, (6) implying that δ_z = λ_z. Plugging this back into the Lagrangian, we can simplify our objective to L(ĥ, λ) = Σ_{z∈Z} P(z) λ_z E[Y − ĥ(X) | z] − P(z) (1/2) λ_z². (7) We refer to (7) as the Regularized Lagrangian, or ReLa for short. Now, solving for the optimal Lagrange multipliers via stationarity, we arrive at ∇_{λ_z} L(ĥ, λ) = P(z) E[Y − ĥ(X) | z] − P(z) λ_z = 0, (8) which implies the optimal λ_z is equal to E[Y − ĥ(X) | z]. Plugging this back into (7) recovers the loss function L(ĥ) = Σ_{z∈Z} P(z) E[Y − ĥ(X) | z]² = PRMSE²(ĥ). (9) This expression is the square of the Projected Root Mean Squared Error (PRMSE) of Chen & Pouzo (2012). To recap, by minimizing Eq. 9, we are attempting to find an ĥ that approximately satisfies the CMR. Minimizing PRMSE is a necessary condition for recovering E[Y | do(X)].
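For a discrete instrument, the squared PRMSE in (9) can be estimated directly by grouping samples by z. The sketch below uses hypothetical synthetic data and a plug-in estimate of the conditional expectations; it illustrates the objective itself, not the estimators discussed in the paper:

```python
import numpy as np

def prmse_squared(h_hat, x, y, z):
    """Plug-in estimate of sum_z P(z) * E[y - h_hat(x) | z]^2 for discrete z."""
    residual = y - h_hat(x)
    total = 0.0
    for value in np.unique(z):
        mask = (z == value)
        total += mask.mean() * residual[mask].mean() ** 2   # P(z) * E[res | z]^2
    return total

rng = np.random.default_rng(1)
n = 50_000
z = rng.integers(0, 5, size=n)          # discrete instrument
u = rng.normal(size=n)                  # unobserved confounder
x = z + u + rng.normal(size=n)
y = 3.0 * x + u                         # true structural function h(x) = 3x

loss_true = prmse_squared(lambda v: 3.0 * v, x, y, z)   # residual is pure noise u
loss_wrong = prmse_squared(lambda v: 2.0 * v, x, y, z)  # residual depends on z
print(loss_true, loss_wrong)
```

The true structural function drives the per-z residual means toward zero, while a wrong ĥ leaves a z-dependent residual and a large PRMSE².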
For it to be a sufficient condition, one needs the natural identifiability assumptions; we refer interested readers to Chen & Pouzo (2012) for a more thorough discussion. | This paper proposes two causal imitation learning algorithms to address the confounding issue that causes spurious correlations. The method relies on instrumental variable regression to correct for confounding effects. The main idea is to use the state variable from the last time point as an instrumental variable for the state-action variables at the current time point in a Markov decision process. | SP:a88ed0bcf467ba30ac13795d6767298d8cdedb2e |
Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN | 1 INTRODUCTION. Conditional image generation is the task of generating images based on some attributes. The idea of the conditional GAN (cGAN) was proposed by Mirza & Osindero (2014). The authors modified the classic GAN architecture by adding the attribute as a parameter to the input of the generator to generate the corresponding image. They also added attributes to the discriminator input to better distinguish real data. Since then, many other methods have been developed. ACGAN (Odena et al. (2017)) has an auxiliary classifier to guide the generator to synthesize well-classifiable images. ProjGAN (Miyato & Koyama (2018)) improves the approach proposed in ACGAN by utilizing the inner product of an embedded image and the corresponding attribute embeddings. ContraGAN (Kang & Park (2020)) utilizes a contrastive 2C loss with multiple positive and negative pairs to update the generator. While these methods have shown impressive results in conditional image generation, they are known to be difficult to train, to lack image diversity within the same input attribute, and to not always allow meaningful data exploration in the latent space. We propose a new stochastic contrastive conditional generative adversarial network (InfoSCC-GAN) with an explorable latent space. As a baseline generator, we use the EigenGAN (He et al. (2021)) generator with interpretable and controllable input dimensions, yet trained in an unsupervised way. EigenGAN ensures that different layers of a generative CNN, controlled by noise vectors, hold different semantics of the synthesized images. EigenGAN is able to mine interpretable and controllable dimensions in an unsupervised way from different generator layers by embedding one linear subspace with an orthogonal basis into each generator layer.
These layer-wise subspaces automatically discover a set of "eigen-dimensions" at each layer corresponding to a set of semantic attributes or interpretable variations, via generative adversarial training to learn a target distribution. By traversing the coefficient of a specific "eigen-dimension", the generator can generate samples with continuous changes corresponding to a specific semantic attribute. The "core" latent space of EigenGAN is sampled from a Gaussian distribution. In contrast, we use the latent space of the contrastive encoder as the "core" latent space for the generator. By using the contrastive encoder, we have the ability to discover "inner" attributes of the dataset by clustering the latent space. The "inner" attributes are useful for datasets without external annotations, unbalanced datasets, and datasets with subclasses. By using the encoder, we also have the opportunity to compare the latent spaces of real and generated images. At the same time, the classifier ensures the correspondence between the attributes of the training data and the conditionally generated ones. Also, by training the encoder and the classifier independently from the generator, we reduce the overall training complexity of the system. This avoids training the encoder and classifier on unrealistic synthetic data, as would happen when training them jointly with the generator and discriminator. The information-theoretical interpretation of the proposed model is provided in Section 2. The experiments and ablation studies are provided in Section 4. We summarize our contributions as follows: • We propose a novel Stochastic Contrastive Conditional Generative Adversarial Network (InfoSCC-GAN) for stochastic conditional image generation with a controllable and interpretable latent space. It is based on an EigenGAN generator, an independent contrastive encoder, and an independent attribute classifier.
• We introduce a novel classification regularization technique, which is based on updating the model with a classification loss every n-th iteration and updating the generator using the adversarial and classification losses separately. • We propose a novel method for attribute selection, based on clustering the embeddings computed using the pre-trained contrastive encoder. • We provide an information-theoretic interpretation of the proposed system. • We perform an ablation study to determine the contribution of each part of the model to the overall performance. 2 INFORMATION-THEORETICAL FORMULATION. 2.1 THE TRAINING OF THE ENCODER (STAGE 1). The encoder training is schematically shown in Figure 1, stage 1. It is based on the maximization problem: φ̂ε = argmax_{φε} I_{φε}(X; E), (1) where I_{φε}(X; E) = E_{p(x,ε)}[log(q_{φε}(ε|x) / q_{φε}(ε))], q_{φε}(ε|x) denotes the encoder, and q_{φε}(ε) the marginal latent space distribution. In the framework of contrastive learning, (1) is maximized based on the InfoNCE framework (van den Oord et al. (2018)). In the practical implementation, one can use approaches similar to SimCLR (Chen et al. (2020)), where the inner product between positive pairs created from augmented views originating from the same image is maximized and the inner product between negative pairs originating from different images is minimized. Alternatively, one can use other approaches to learn the representation ε, such as BYOL (Grill et al. (2020)), Barlow Twins (Zbontar et al. (2021)), etc., without loss of generality of the proposed approach. It should be pointed out that the encoder is trained independently from the decoder in the scope of the considered setup. 2.2 THE TRAINING OF THE CLASS ATTRIBUTE CLASSIFIER (STAGE 2). The class attribute classifier training is schematically shown in Figure 1, stage 2.
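The stage-1 contrastive objective above, in its SimCLR/NT-Xent instantiation of InfoNCE, can be sketched in a few lines. Batch size, embedding dimension, and temperature here are arbitrary illustrative choices, not the authors' settings:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for a batch of paired view embeddings z1[i] <-> z2[i]."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # cosine-similarity space
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    # positive for row i is the other augmented view of the same image
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
# Two "augmented views": small perturbations of the same underlying embedding.
loss_aligned = nt_xent(base + 0.01 * rng.normal(size=base.shape),
                       base + 0.01 * rng.normal(size=base.shape))
loss_random = nt_xent(rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(loss_aligned, loss_random)
```

Embeddings whose paired views agree receive a lower loss than unrelated pairs, which is exactly the pressure that pushes the encoder toward a high-mutual-information representation.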
The training of the class attribute classifier is based on the maximization problem: θ̂y = argmax_{θy} I_{φ*ε,θy}(Y; E), (2) where I_{φ*ε,θy}(Y; E) = H(Y) − H_{φ*ε,θy}(Y|E), with H(Y) = −E_{py(y)} log py(y) and the conditional entropy defined as H_{φ*ε,θy}(Y|E) = −E_{px(x)}[E_{q_{φ*ε}(ε|x)}[log p_{θy}(y|ε)]]. Since H(Y) is independent of the parameters of the encoder and classifier, (2) reduces to the minimization: θ̂y = argmin_{θy} H_{φ*ε,θy}(Y|E), (3) which, under a categorical conditional distribution p_{θy}(y|ε), can be expressed as the categorical cross-entropy Ly(y, ŷ). 2.3 THE TRAINING OF THE DECODER, I.E., THE MAPPER AND GENERATOR (STAGE 3). The training of the decoder is shown in Figure 1, stage 3. The decoder is first trained to maximize the mutual information between the class attributes ỹ predicted from the generated images and the true class attributes y: (θ̂x, ψ̂) = argmax_{θx,ψ} I_{ψ,θx,φ*ε,θ*y}(Y; E), (4) where I_{ψ,θx,φ*ε,θ*y}(Y; E) = H(Y) − H_{ψ,θx,φ*ε,θ*y}(Y|E), with H(Y) = −E_{py(y)} log py(y) and the conditional entropy defined as H_{ψ,θx,φ*ε,θ*y}(Y|E) = −E_{py(y)}[E_{pz(z)}[E_{rψ(ε|y,z)}[E_{p_{θx}(x|ε)}[E_{q_{φ*ε}(ε|x)}[log p_{θ*y}(y|ε)]]]]], where p_{θ*y}(y|ε) corresponds to the classifier and q_{φ*ε}(ε|x) denotes the pre-trained encoder. Since H(Y) is independent of the parameters of the mapper and generator, (4) reduces to the minimization: (θ̂x, ψ̂) = argmin_{θx,ψ} H_{ψ,θx,φ*ε,θ*y}(Y|E), (5) which, under the categorical conditional distribution p_{θy}(y|ε), can be expressed as the categorical cross-entropy Ly(y, ỹ).
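The cross-entropy reduction in (3) is concrete: stage 2 amounts to fitting a linear softmax head on frozen encoder embeddings. A self-contained sketch with synthetic stand-in embeddings (the cluster structure and training hyperparameters are illustrative assumptions, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 600, 32, 3                        # samples, embedding dim, classes

# Synthetic "frozen encoder" embeddings: one Gaussian cluster per class.
y = rng.integers(0, k, size=n)
centers = rng.normal(size=(k, d)) * 3.0
eps = centers[y] + rng.normal(size=(n, d))  # stand-in for q(eps | x)

W = np.zeros((d, k))                        # one-layer linear classifier
for _ in range(200):                        # minimize categorical cross-entropy
    logits = eps @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = eps.T @ (p - np.eye(k)[y]) / n   # gradient of the cross-entropy
    W -= 0.5 * grad

accuracy = ((eps @ W).argmax(axis=1) == y).mean()
print(accuracy)
```

Because the encoder is frozen, the classifier is cheap to train and never sees unrealistic synthetic images, which is the motivation given in Section 3.2.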
Finally, the decoder should produce samples that follow the distribution of the training data px(x), which corresponds to the maximization of mutual information: (θ̂x, ψ̂) = argmax_{θx,ψ} I_{ψ,θx}(X; E), (6) where I_{ψ,θx}(X; E) = E_{py(y)}[E_{pz(z)}[E_{rψ(ε|y,z)}[E_{p_{θx}(x|ε)}[log(p_{θx}(x|ε) / px(x))]]]] = E_{py(y)}[E_{pz(z)}[E_{rψ(ε|y,z)}[D_KL(p_{θx}(x|E = ε) || p_{θx}(x))]]] − D_KL(px(x) || p_{θx}(x)), where p_{θx}(x) denotes the distribution of the generated samples x̃. Since D_KL(p_{θx}(x|E = ε) || p_{θx}(x)) ≥ 0, the maximization of the above mutual information reduces to the minimization problem: (θ̂x, ψ̂) = argmin_{θx,ψ} D_KL(px(x) || p_{θx}(x)). (7) The corresponding discriminator is denoted as D_{xx̃}(x). At the same time, one can also envision a discriminator conditioned on the attribute class y, D_{xx̃}(x|y), implemented as a set of discriminators, one for each subset of generated and original samples defined by y. 3 IMPLEMENTATION DETAILS. Dataset. We test the proposed method on the AFHQ (Choi et al. (2020)) and CelebA (Liu et al. (2015)) datasets. The AFHQ dataset contains 16130 images belonging to 3 classes: cats, dogs, and wild animals; the CelebA dataset contains 202599 face images with 40 binary attributes. We use the AFHQ and CelebA datasets for visual inspection of the results, and the AFHQ dataset for the ablation studies. 3.1 ENCODER. The proposed encoder is designed to produce an interpretable latent space, and it can be used for: (i) internal latent exploration, (ii) a feature metric like the VGG loss (Ledig et al. (2017)), (iii) feature extraction for the classification of the generated samples with "external" and "internal" labels. We have selected the SimCLR unsupervised encoder since it has shown state-of-the-art performance in unsupervised learning on diverse datasets.
By training the encoder in an unsupervised way, it learns the inner data distribution, which is then used to compare real and generated data. For both the AFHQ and CelebA datasets we use ResNet50 (He et al. (2016)) as the base model. We pretrain the SimCLR model for each dataset using the contrastive NT-Xent loss (Sohn (2016)). We apply the same augmentations as in the original SimCLR paper. The 2D t-SNE of the extracted features for the AFHQ dataset is shown in Figure 2. The 2D t-SNEs of the extracted features for the CelebA dataset for selected attributes are shown in Appendix A. 3.2 CLASSIFIER. The initial idea of using a pre-trained classifier to regularize the generative model is based on the need to generate images that belong to a specific class. Training the classifier jointly with the generator and discriminator requires more time and is inefficient, since in the early iterations the generator network produces poor images that are not similar to the real ones. While it is possible to use L2 or other distance-based metrics to regularize the generator by comparing embeddings of real and generated images computed using the encoder, this requires predefined pairs; since our goal is to develop a generative model, we use the pre-trained classifier to regularize the generator instead. We use a one-layer linear classifier for classification. As input, we use features extracted using the pre-trained encoder. When training on the AFHQ dataset we use the cross-entropy loss, since each image has a single attribute; when training on the CelebA dataset, we use the binary cross-entropy loss, since each image has multiple attributes. | This paper proposes a GAN for conditional generation. The authors combine an unsupervised contrastive encoder, a stochastic EigenGAN generator, and a classifier.
InfoSCC-GAN can perform image generation conditioned on external attributes by maximizing mutual information between input data and class attributes. In addition, clustering on the output space of the pre-trained encoder allows us to generate images conditioned on inner attributes. | SP:8dedb0eefbf343dfe104acc7d051becfab069fb1 |
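The classification regularization schedule from the contributions, i.e., applying the classification loss to the generator every n-th iteration and separately from the adversarial update, can be summarized as a training loop skeleton. All callables here are hypothetical stand-ins for the actual update steps:

```python
# Sketch of the alternating update schedule: adversarial updates every step,
# a separate classification-based generator update every n-th step.
def train(discriminator_step, generator_step_adv, generator_step_cls,
          num_iterations, n=5):
    classifier_updates = []
    for it in range(1, num_iterations + 1):
        discriminator_step(it)
        generator_step_adv(it)            # adversarial loss only
        if it % n == 0:
            generator_step_cls(it)        # classification loss, separate update
            classifier_updates.append(it)
    return classifier_updates             # iterations with a classifier update

updates = train(lambda it: None, lambda it: None, lambda it: None,
                num_iterations=20, n=5)
print(updates)  # [5, 10, 15, 20]
```

Keeping the two generator updates separate means the adversarial and classification gradients are never mixed into a single weighted sum, which is the design choice the contribution list emphasizes.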
Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN | 1 INTRODUCTION. Conditional image generation is the task of generating images based on some attributes. The idea of the conditional GAN (cGAN) was proposed by Mirza & Osindero (2014). The authors modified the classic GAN architecture by adding the attribute as a parameter to the input of the generator to generate the corresponding image. They also added attributes to the discriminator input to better distinguish real data. Since then, many other methods have been developed. ACGAN (Odena et al. (2017)) has an auxiliary classifier to guide the generator to synthesize well-classifiable images. ProjGAN (Miyato & Koyama (2018)) improves the approach proposed in ACGAN by utilizing the inner product of an embedded image and the corresponding attribute embeddings. ContraGAN (Kang & Park (2020)) utilizes a contrastive 2C loss with multiple positive and negative pairs to update the generator. While these methods have shown impressive results in conditional image generation, they are known to be difficult to train, to lack image diversity within the same input attribute, and to not always allow meaningful data exploration in the latent space. We propose a new stochastic contrastive conditional generative adversarial network (InfoSCC-GAN) with an explorable latent space. As a baseline generator, we use the EigenGAN (He et al. (2021)) generator with interpretable and controllable input dimensions, yet trained in an unsupervised way. EigenGAN ensures that different layers of a generative CNN, controlled by noise vectors, hold different semantics of the synthesized images. EigenGAN is able to mine interpretable and controllable dimensions in an unsupervised way from different generator layers by embedding one linear subspace with an orthogonal basis into each generator layer.
These layer-wise subspaces automatically discover a set of "eigen-dimensions" at each layer corresponding to a set of semantic attributes or interpretable variations, via generative adversarial training to learn a target distribution. By traversing the coefficient of a specific "eigen-dimension", the generator can generate samples with continuous changes corresponding to a specific semantic attribute. The "core" latent space of EigenGAN is sampled from a Gaussian distribution. In contrast, we use the latent space of the contrastive encoder as the "core" latent space for the generator. By using the contrastive encoder, we have the ability to discover "inner" attributes of the dataset by clustering the latent space. The "inner" attributes are useful for datasets without external annotations, unbalanced datasets, and datasets with subclasses. By using the encoder, we also have the opportunity to compare the latent spaces of real and generated images. At the same time, the classifier ensures the correspondence between the attributes of the training data and the conditionally generated ones. Also, by training the encoder and the classifier independently from the generator, we reduce the overall training complexity of the system. This avoids training the encoder and classifier on unrealistic synthetic data, as would happen when training them jointly with the generator and discriminator. The information-theoretical interpretation of the proposed model is provided in Section 2. The experiments and ablation studies are provided in Section 4. We summarize our contributions as follows: • We propose a novel Stochastic Contrastive Conditional Generative Adversarial Network (InfoSCC-GAN) for stochastic conditional image generation with a controllable and interpretable latent space. It is based on an EigenGAN generator, an independent contrastive encoder, and an independent attribute classifier.
• We introduce a novel classification regularization technique, which is based on updating the model with a classification loss every n-th iteration and updating the generator using the adversarial and classification losses separately. • We propose a novel method for attribute selection, based on clustering the embeddings computed using the pre-trained contrastive encoder. • We provide an information-theoretic interpretation of the proposed system. • We perform an ablation study to determine the contribution of each part of the model to the overall performance. 2 INFORMATION-THEORETICAL FORMULATION. 2.1 THE TRAINING OF THE ENCODER (STAGE 1). The encoder training is schematically shown in Figure 1, stage 1. It is based on the maximization problem: φ̂ε = argmax_{φε} I_{φε}(X; E), (1) where I_{φε}(X; E) = E_{p(x,ε)}[log(q_{φε}(ε|x) / q_{φε}(ε))], q_{φε}(ε|x) denotes the encoder, and q_{φε}(ε) the marginal latent space distribution. In the framework of contrastive learning, (1) is maximized based on the InfoNCE framework (van den Oord et al. (2018)). In the practical implementation, one can use approaches similar to SimCLR (Chen et al. (2020)), where the inner product between positive pairs created from augmented views originating from the same image is maximized and the inner product between negative pairs originating from different images is minimized. Alternatively, one can use other approaches to learn the representation ε, such as BYOL (Grill et al. (2020)), Barlow Twins (Zbontar et al. (2021)), etc., without loss of generality of the proposed approach. It should be pointed out that the encoder is trained independently from the decoder in the scope of the considered setup. 2.2 THE TRAINING OF THE CLASS ATTRIBUTE CLASSIFIER (STAGE 2). The class attribute classifier training is schematically shown in Figure 1, stage 2.
The training of the class attribute classifier is based on the maximization problem: θ̂y = argmax_{θy} I_{φ*ε,θy}(Y; E), (2) where I_{φ*ε,θy}(Y; E) = H(Y) − H_{φ*ε,θy}(Y|E), with H(Y) = −E_{py(y)} log py(y) and the conditional entropy defined as H_{φ*ε,θy}(Y|E) = −E_{px(x)}[E_{q_{φ*ε}(ε|x)}[log p_{θy}(y|ε)]]. Since H(Y) is independent of the parameters of the encoder and classifier, (2) reduces to the minimization: θ̂y = argmin_{θy} H_{φ*ε,θy}(Y|E), (3) which, under a categorical conditional distribution p_{θy}(y|ε), can be expressed as the categorical cross-entropy Ly(y, ŷ). 2.3 THE TRAINING OF THE DECODER, I.E., THE MAPPER AND GENERATOR (STAGE 3). The training of the decoder is shown in Figure 1, stage 3. The decoder is first trained to maximize the mutual information between the class attributes ỹ predicted from the generated images and the true class attributes y: (θ̂x, ψ̂) = argmax_{θx,ψ} I_{ψ,θx,φ*ε,θ*y}(Y; E), (4) where I_{ψ,θx,φ*ε,θ*y}(Y; E) = H(Y) − H_{ψ,θx,φ*ε,θ*y}(Y|E), with H(Y) = −E_{py(y)} log py(y) and the conditional entropy defined as H_{ψ,θx,φ*ε,θ*y}(Y|E) = −E_{py(y)}[E_{pz(z)}[E_{rψ(ε|y,z)}[E_{p_{θx}(x|ε)}[E_{q_{φ*ε}(ε|x)}[log p_{θ*y}(y|ε)]]]]], where p_{θ*y}(y|ε) corresponds to the classifier and q_{φ*ε}(ε|x) denotes the pre-trained encoder. Since H(Y) is independent of the parameters of the mapper and generator, (4) reduces to the minimization: (θ̂x, ψ̂) = argmin_{θx,ψ} H_{ψ,θx,φ*ε,θ*y}(Y|E), (5) which, under the categorical conditional distribution p_{θy}(y|ε), can be expressed as the categorical cross-entropy Ly(y, ỹ).
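Both (2) and (4) maximize a quantity of the form H(Y) − H(Y|·), and the conditional-entropy term is estimated through a cross-entropy, which upper-bounds it (so the plug-in mutual information is a lower bound). A small numeric sketch of this estimate from classifier posteriors (synthetic probabilities, natural logarithm):

```python
import numpy as np

def mutual_information(p_y_given_eps, y):
    """I(Y;E) ~ H(Y) - cross-entropy(Y|E); a lower bound on the true I(Y;E)."""
    p_y = np.bincount(y) / len(y)
    h_y = -(p_y * np.log(p_y)).sum()                                   # H(Y)
    h_y_given_e = -np.log(p_y_given_eps[np.arange(len(y)), y]).mean()  # CE term
    return h_y - h_y_given_e

y = np.array([0, 0, 1, 1])
confident = np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]])
uniform = np.full((4, 2), 0.5)

mi_confident = mutual_information(confident, y)   # close to H(Y) = log 2
mi_uniform = mutual_information(uniform, y)       # exactly 0
print(mi_confident, mi_uniform)
```

A classifier whose posteriors concentrate on the correct labels drives the estimate toward H(Y), which is precisely what maximizing (2) and (4) rewards.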
Finally, the decoder should produce samples that follow the distribution of the training data px(x), which corresponds to the maximization of mutual information: (θ̂x, ψ̂) = argmax_{θx,ψ} I_{ψ,θx}(X; E), (6) where I_{ψ,θx}(X; E) = E_{py(y)}[E_{pz(z)}[E_{rψ(ε|y,z)}[E_{p_{θx}(x|ε)}[log(p_{θx}(x|ε) / px(x))]]]] = E_{py(y)}[E_{pz(z)}[E_{rψ(ε|y,z)}[D_KL(p_{θx}(x|E = ε) || p_{θx}(x))]]] − D_KL(px(x) || p_{θx}(x)), where p_{θx}(x) denotes the distribution of the generated samples x̃. Since D_KL(p_{θx}(x|E = ε) || p_{θx}(x)) ≥ 0, the maximization of the above mutual information reduces to the minimization problem: (θ̂x, ψ̂) = argmin_{θx,ψ} D_KL(px(x) || p_{θx}(x)). (7) The corresponding discriminator is denoted as D_{xx̃}(x). At the same time, one can also envision a discriminator conditioned on the attribute class y, D_{xx̃}(x|y), implemented as a set of discriminators, one for each subset of generated and original samples defined by y. 3 IMPLEMENTATION DETAILS. Dataset. We test the proposed method on the AFHQ (Choi et al. (2020)) and CelebA (Liu et al. (2015)) datasets. The AFHQ dataset contains 16130 images belonging to 3 classes: cats, dogs, and wild animals; the CelebA dataset contains 202599 face images with 40 binary attributes. We use the AFHQ and CelebA datasets for visual inspection of the results, and the AFHQ dataset for the ablation studies. 3.1 ENCODER. The proposed encoder is designed to produce an interpretable latent space, and it can be used for: (i) internal latent exploration, (ii) a feature metric like the VGG loss (Ledig et al. (2017)), (iii) feature extraction for the classification of the generated samples with "external" and "internal" labels. We have selected the SimCLR unsupervised encoder since it has shown state-of-the-art performance in unsupervised learning on diverse datasets.
By training the encoder in an unsupervised way, it learns the inner data distribution, which is then used to compare real and generated data. For both the AFHQ and CelebA datasets we use ResNet50 (He et al. (2016)) as the base model. We pretrain the SimCLR model for each dataset using the contrastive NT-Xent loss (Sohn (2016)). We apply the same augmentations as in the original SimCLR paper. The 2D t-SNE of the extracted features for the AFHQ dataset is shown in Figure 2. The 2D t-SNEs of the extracted features for the CelebA dataset for selected attributes are shown in Appendix A. 3.2 CLASSIFIER. The initial idea of using a pre-trained classifier to regularize the generative model is based on the need to generate images that belong to a specific class. Training the classifier jointly with the generator and discriminator requires more time and is inefficient, since in the early iterations the generator network produces poor images that are not similar to the real ones. While it is possible to use L2 or other distance-based metrics to regularize the generator by comparing embeddings of real and generated images computed using the encoder, this requires predefined pairs; since our goal is to develop a generative model, we use the pre-trained classifier to regularize the generator instead. We use a one-layer linear classifier for classification. As input, we use features extracted using the pre-trained encoder. When training on the AFHQ dataset we use the cross-entropy loss, since each image has a single attribute; when training on the CelebA dataset, we use the binary cross-entropy loss, since each image has multiple attributes. | This paper proposes a stochastic contrastive conditional GAN, which consists of an encoder, an attribute classifier, and a stochastic EigenGAN generator. The encoder learns to extract features into a latent space based on contrastive learning.
It can provide internal attributes of the dataset for training. The classifier is trained to guide the generator to generate images with the corresponding attributes. The EigenGAN generator enables stochastic image generation. Experiments on the AFHQ and CelebA datasets show the effectiveness of the proposed method. | SP:8dedb0eefbf343dfe104acc7d051becfab069fb1 |
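The "inner" attribute discovery, clustering the pre-trained encoder's embeddings, can be sketched with a tiny k-means on synthetic stand-in features. The use of plain k-means with farthest-point initialization is an illustrative assumption, not necessarily the paper's exact clustering setup:

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Plain Lloyd's k-means with deterministic farthest-point initialization."""
    centers = [features[0]]
    for _ in range(k - 1):                        # farthest-point seeding
        d = np.min([((features - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(features[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):                        # Lloyd iterations
        dist = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# Synthetic stand-in for encoder embeddings: three well-separated clusters.
feats = np.concatenate([rng.normal(loc=c, size=(100, 8)) for c in (-6.0, 0.0, 6.0)])
labels = kmeans(feats, k=3)                       # cluster ids act as "inner" attributes
print([int(labels[i]) for i in (0, 100, 200)])
```

The resulting cluster ids play the role of attribute labels for datasets without external annotations, as described in the introduction.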
Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN | 1 INTRODUCTION. Conditional image generation is the task of generating images based on some attributes. The idea of the conditional GAN (cGAN) was proposed by Mirza & Osindero (2014). The authors modified the classic GAN architecture by adding the attribute as a parameter to the input of the generator to generate the corresponding image. They also added attributes to the discriminator input to better distinguish real data. Since then, many other methods have been developed. ACGAN (Odena et al. (2017)) has an auxiliary classifier to guide the generator to synthesize well-classifiable images. ProjGAN (Miyato & Koyama (2018)) improves the approach proposed in ACGAN by utilizing the inner product of an embedded image and the corresponding attribute embeddings. ContraGAN (Kang & Park (2020)) utilizes a contrastive 2C loss with multiple positive and negative pairs to update the generator. While these methods have shown impressive results in conditional image generation, they are known to be difficult to train, to lack image diversity within the same input attribute, and to not always allow meaningful data exploration in the latent space. We propose a new stochastic contrastive conditional generative adversarial network (InfoSCC-GAN) with an explorable latent space. As a baseline generator, we use the EigenGAN (He et al. (2021)) generator with interpretable and controllable input dimensions, yet trained in an unsupervised way. EigenGAN ensures that different layers of a generative CNN, controlled by noise vectors, hold different semantics of the synthesized images. EigenGAN is able to mine interpretable and controllable dimensions in an unsupervised way from different generator layers by embedding one linear subspace with an orthogonal basis into each generator layer.
These layer-wise subspaces automatically discover a set of "eigen-dimensions" at each layer corresponding to a set of semantic attributes or interpretable variations, via generative adversarial training to learn a target distribution. By traversing the coefficient of a specific "eigen-dimension", the generator can generate samples with continuous changes corresponding to a specific semantic attribute. The "core" latent space of EigenGAN is sampled from a Gaussian distribution. In contrast, we use the latent space of the contrastive encoder as the "core" latent space for the generator. By using the contrastive encoder, we have the ability to discover "inner" attributes of the dataset by clustering the latent space. The "inner" attributes are useful for datasets without external annotations, unbalanced datasets, and datasets with subclasses. By using the encoder, we also have the opportunity to compare the latent spaces of real and generated images. At the same time, the classifier ensures the correspondence between the attributes of the training data and the conditionally generated ones. Also, by training the encoder and the classifier independently from the generator, we reduce the overall training complexity of the system. This avoids training the encoder and classifier on unrealistic synthetic data, as would happen when training them jointly with the generator and discriminator. The information-theoretical interpretation of the proposed model is provided in Section 2. The experiments and ablation studies are provided in Section 4. We summarize our contributions as follows: • We propose a novel Stochastic Contrastive Conditional Generative Adversarial Network (InfoSCC-GAN) for stochastic conditional image generation with a controllable and interpretable latent space. It is based on an EigenGAN generator, an independent contrastive encoder, and an independent attribute classifier.
• We introduce a novel classification regularization technique, which is based on updating the model with a classification loss every n-th iteration and updating the generator using the adversarial and classification losses separately. • We propose a novel method for attribute selection, based on clustering the embeddings computed using the pre-trained contrastive encoder. • We provide an information-theoretic interpretation of the proposed system. • We perform an ablation study to determine the contribution of each part of the model to the overall performance. 2 INFORMATION-THEORETICAL FORMULATION. 2.1 THE TRAINING OF THE ENCODER (STAGE 1). The encoder training is schematically shown in Figure 1, stage 1. It is based on the maximization problem: φ̂ε = argmax_{φε} I_{φε}(X; E), (1) where I_{φε}(X; E) = E_{p(x,ε)}[log(q_{φε}(ε|x) / q_{φε}(ε))], q_{φε}(ε|x) denotes the encoder, and q_{φε}(ε) the marginal latent space distribution. In the framework of contrastive learning, (1) is maximized based on the InfoNCE framework (van den Oord et al. (2018)). In the practical implementation, one can use approaches similar to SimCLR (Chen et al. (2020)), where the inner product between positive pairs created from augmented views originating from the same image is maximized and the inner product between negative pairs originating from different images is minimized. Alternatively, one can use other approaches to learn the representation ε, such as BYOL (Grill et al. (2020)), Barlow Twins (Zbontar et al. (2021)), etc., without loss of generality of the proposed approach. It should be pointed out that the encoder is trained independently from the decoder in the scope of the considered setup. 2.2 THE TRAINING OF THE CLASS ATTRIBUTE CLASSIFIER (STAGE 2). The class attribute classifier training is schematically shown in Figure 1, stage 2.
The training of the class attribute classifier is based on the maximization problem: θ̂y = argmax_{θy} I_{φ*ε,θy}(Y; E), (2) where I_{φ*ε,θy}(Y; E) = H(Y) − H_{φ*ε,θy}(Y|E), with H(Y) = −E_{py(y)} log py(y) and the conditional entropy defined as H_{φ*ε,θy}(Y|E) = −E_{px(x)}[E_{q_{φ*ε}(ε|x)}[log p_{θy}(y|ε)]]. Since H(Y) is independent of the parameters of the encoder and classifier, (2) reduces to the minimization: θ̂y = argmin_{θy} H_{φ*ε,θy}(Y|E), (3) which, under a categorical conditional distribution p_{θy}(y|ε), can be expressed as the categorical cross-entropy Ly(y, ŷ). 2.3 THE TRAINING OF THE DECODER, I.E., THE MAPPER AND GENERATOR (STAGE 3). The training of the decoder is shown in Figure 1, stage 3. The decoder is first trained to maximize the mutual information between the class attributes ỹ predicted from the generated images and the true class attributes y: (θ̂x, ψ̂) = argmax_{θx,ψ} I_{ψ,θx,φ*ε,θ*y}(Y; E), (4) where I_{ψ,θx,φ*ε,θ*y}(Y; E) = H(Y) − H_{ψ,θx,φ*ε,θ*y}(Y|E), with H(Y) = −E_{py(y)} log py(y) and the conditional entropy defined as H_{ψ,θx,φ*ε,θ*y}(Y|E) = −E_{py(y)}[E_{pz(z)}[E_{rψ(ε|y,z)}[E_{p_{θx}(x|ε)}[E_{q_{φ*ε}(ε|x)}[log p_{θ*y}(y|ε)]]]]], where p_{θ*y}(y|ε) corresponds to the classifier and q_{φ*ε}(ε|x) denotes the pre-trained encoder. Since H(Y) is independent of the parameters of the mapper and generator, (4) reduces to the minimization: (θ̂x, ψ̂) = argmin_{θx,ψ} H_{ψ,θx,φ*ε,θ*y}(Y|E), (5) which, under the categorical conditional distribution p_{θy}(y|ε), can be expressed as the categorical cross-entropy Ly(y, ỹ).
Finally , the decoder should produce samples that follow the distribution of training data px ( x ) , which corresponds to the maximization of mutual information : ( θ̂x , ψ̂ ) = argmax θx , ψ Iψ , θx ( X ; E ) , ( 6 ) where Iψ , θx ( X ; E ) = Epx ( x ) [ Epy ( y ) [ Epz ( z ) [ Erψ ( ε|y , z ) [ Epθx ( x|ε ) [ log ( pθx ( x|ε ) / px ( x ) ) ] ] ] ] ] = Epy ( y ) [ Epz ( z ) [ Erψ ( ε|y , z ) [ DKL ( pθx ( x|E = ε ) ||pθx ( x ) ) ] ] ] − DKL ( px ( x ) ||pθx ( x ) ) , where pθx ( x ) denotes the distribution of generated samples x̃ . Since DKL ( pθx ( x|E = ε ) ||pθx ( x ) ) ≥ 0 , the maximization of the above mutual information reduces to the minimization problem : ( θ̂x , ψ̂ ) = argmin θx , ψ DKL ( px ( x ) ||pθx ( x ) ) . ( 7 ) The minimization of ( 7 ) is implemented adversarially , and the corresponding discriminator is denoted as Dxx̃ ( x ) . At the same time , one can also envision the discriminator Dxx̃ ( x | y ) conditioned on the attribute class y , which is implemented as a set of discriminators for each subset of generated and original samples defined by y . 3 IMPLEMENTATION DETAILS . Dataset We test the proposed method on the AFHQ ( Choi et al . ( 2020 ) ) and CelebA ( Liu et al . ( 2015 ) ) datasets . The AFHQ dataset contains 16130 images belonging to 3 classes : cats , dogs , and wild animals ; the CelebA dataset contains 202599 face images with 40 binary attributes . We use the AFHQ and CelebA datasets for visual result inspection and the AFHQ dataset for the ablation studies . 3.1 ENCODER . The proposed encoder is designed to produce an interpretable latent space , and it can be used for : ( i ) internal latent exploration , ( ii ) a feature metric like the VGG-loss ( Ledig et al . ( 2017 ) ) , ( iii ) feature extraction for the classification of the generated samples with "external" and "internal" labels . We have selected the SimCLR unsupervised encoder since it has shown state-of-the-art performance in unsupervised learning on diverse datasets . 
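Returning to the objective in ( 7 ) above: the reduction hinges on the KL divergence being non-negative and zero exactly when the generated distribution matches the data distribution. A tiny discrete numpy check (our own illustration, with made-up toy distributions):

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence D_KL(p || q), the quantity minimized in (7)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

px = np.array([0.5, 0.3, 0.2])        # data distribution p_x(x)
p_theta = np.array([0.4, 0.4, 0.2])   # imperfect generated distribution
```

`kl(px, px)` is exactly zero, while any mismatch gives a strictly positive value; in the paper this minimization is carried out adversarially with the discriminator rather than in closed form.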
By training the encoder in an unsupervised way , it learns the inner data distribution , which is then used to compare real and generated data . For both the AFHQ and CelebA datasets we use ResNet50 ( He et al . ( 2016 ) ) as a base model . We pretrain the SimCLR model for each dataset using the contrastive NT-Xent loss ( Sohn ( 2016 ) ) . We apply the same augmentations as in the original SimCLR paper . The 2D t-SNE of the extracted features for the AFHQ dataset is shown in Figure 2 . The 2D t-SNEs of the extracted features for the CelebA dataset for selected attributes are shown in Appendix A . 3.2 CLASSIFIER . The initial idea of using a pre-trained classifier to regularize the generative model is based on the need to generate images that belong to a specific class . Training the classifier jointly with the generator and discriminator requires more time and is inefficient , since in the early iterations the generator network produces poor images that are not similar to the real ones . While it is possible to use L2 or other distance-based metrics to regularize the generator by comparing embeddings between real and generated images computed using the encoder , this requires having predefined pairs ; since our goal is to develop a generative model , we use the pre-trained classifier to regularize the generator . We use a one-layer linear classifier for classification . As an input , we use features extracted using the pre-trained encoder . When training on the AFHQ dataset we use the cross-entropy loss , since each image has one attribute ; when training on the CelebA dataset , we use the binary cross-entropy loss , since each image has multiple attributes . | The authors aimed to improve EigenGAN for conditional image generation. The conditional information can be obtained via explicit supervision or clustering. The whole idea is not really novel. 
As already appeared in InfoGAN (2016), a classifier was introduced to facilitate conditional image generation. Here, the authors suggested using a pretrained classifier which uses a SimCLR-based encoder as the backbone. As the regularization term, the classification loss is active every $n$th iteration. However, the paper offers no obvious evidence for the superiority of such a design choice. | SP:8dedb0eefbf343dfe104acc7d051becfab069fb1 
FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning | 1 INTRODUCTION . Federated learning ( FL ; McMahan et al. , 2017 ) has been proposed as an efficient collaborative learning strategy along with the advance and spread of mobile and IoT devices . FL allows leveraging local computing resources of edge devices and locally stored private data without data sharing for privacy . FL typically consists of the following steps : ( 1 ) clients download a globally shared model from a central server , ( 2 ) the clients locally update each model using their own private data without accessing the others ’ data , ( 3 ) the clients upload their local models back to the server , and ( 4 ) the server consolidates the updated models and repeats these steps until the global model converges . FL has the key properties ( McMahan et al. , 2017 ) that differentiate it from distributed learning : • Heterogeneous data . Data is decentralized and non-IID as well as unbalanced in its amount due to different characteristics of clients ; thus , local data does not represent the population distribution . • Heterogeneous systems . Clients consist of heterogeneous setups of hardware and infrastructure ; hence , those connections are not guaranteed to be online , fast , or cheap . Besides , massive client participation is expected through different communication paths , causing communication burdens . These FL properties introduce challenges in the convergence stability with heterogeneous data and communication overheads . To improve the stability and reduce the communication rounds , the prior works in FL have proposed modified loss functions or model aggregation methods ( Li et al. , 2020 ; Karimireddy et al. , 2020 ; Acar et al. , 2021 ; Yu et al. , 2020 ; Reddi et al. , 2021 ) . 
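The four FL steps above can be sketched as a single round of FedAvg-style aggregation. This is a minimal numpy illustration of the server-side consolidation in step ( 4 ); the function name, toy model vectors, and client sizes are our own assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side consolidation step (4): size-weighted average of client models."""
    total = sum(client_sizes)
    return sum(w * (s / total) for w, s in zip(client_weights, client_sizes))

# Step (1): clients download the shared global model.
w_global = np.zeros(3)
# Steps (2)-(3): clients update locally (stand-in deltas) and upload.
clients = [w_global + np.array([1.0, 0.0, 0.0]),
           w_global + np.array([0.0, 2.0, 0.0])]
sizes = [10, 30]   # unbalanced, non-IID local data amounts
# Step (4): the server consolidates; the loop then repeats until convergence.
w_new = fedavg(clients, sizes)
```

With sizes 10 and 30, the first client contributes weight 0.25 and the second 0.75, so the consolidated model is [0.25, 1.5, 0.0].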
However , the amount of transferred data is still large for edge devices with bandwidth constraints or countries having low-quality communication infrastructure.1 ( 1The gap between the fastest and the slowest communication speed across countries is significant ; approximately 63 times different ( Speedtest ) . ) A large amount of transferred data introduces an energy consumption issue on edge devices because wireless communication is significantly more power-intensive than computation ( Yadav & Yadav , 2016 ; Yan et al. , 2019 ) . In this work , we propose a communication-efficient re-parameterization for FL , FedPara , which reduces the number of bits transferred per round . FedPara directly re-parameterizes each fully-connected ( FC ) and convolutional layer of the model to have a small and factorized form while preserving the model ’ s capacity . Our key idea is to combine the Hadamard product with low-rank parameterization as W = ( X1Y > 1 ) ⊙ ( X2Y > 2 ) ∈ Rm×n , called the low-rank Hadamard product . When rank ( X1Y > 1 ) = rank ( X2Y > 2 ) = r , then rank ( W ) ≤ r2 . This outstanding property facilitates spanning a full-rank matrix with much fewer parameters than the typical m× n matrix . It significantly reduces the communication burdens during training . At the inference phase , we pre-compose and maintain W , which boils down to its original structure ; thus , FedPara does not alter computational complexity at inference time . Compared to the aforementioned prior works that tackle reducing the required communication rounds for convergence , our FedPara is an orthogonal approach in that FedPara does not change the optimization part but re-defines each layer ’ s internal structure . We demonstrate the effectiveness of FedPara with various network architectures , including VGG , ResNet , and LSTM , on standard classification benchmark datasets for both IID and non-IID settings . 
The accuracy of our parameterization outperforms that of the traditional low-rank parameterization baseline given the same number of parameters . Besides , FedPara has comparable accuracy to the original counterpart models and , at times , even outperforms them as the number of parameters increases . We also combine FedPara with other FL algorithms to improve communication efficiency further . We extend FedPara to the personalized FL application , named pFedPara , which separates the roles of each sub-matrix into global and local inner matrices . The global and local inner matrices learn the globally shared common knowledge and client-specific knowledge , respectively . We devise three scenarios according to the amount and heterogeneity of local data using subsets of FEMNIST and MNIST . We demonstrate performance improvement and robustness of pFedPara against competing algorithms . We summarize our main contributions as follows : • We propose FedPara , a low-rank Hadamard product parameterization for communication-efficient FL . Unlike traditional low-rank parameterization , we show that FedPara can span a full-rank matrix and tensor with reduced parameters . We also show that FedPara requires an up to ten times lower total communication cost than the original model to achieve target accuracy . At times , FedPara even outperforms the original model when the ranks are adjusted . • Our FedPara takes a novel approach ; thereby , it can be combined with other FL methods to get mutual benefits , which further increase accuracy and communication efficiency . • We propose pFedPara , a personalization application of FedPara , which splits the layer weights into global and local parameters . pFedPara shows more robust results in challenging regimes than competing methods . 2 METHOD . In this section , we first provide an overview of three popular low-rank parameterizations in Section 2.1 and present our parameterization , FedPara , with its algorithmic properties in Section 2.2 . 
Then , we extend FedPara to the personalized FL application , pFedPara , in Section 2.3 . Notations . We denote the Hadamard product as ⊙ , the Kronecker product as ⊗ , the n-mode tensor product as ×n , and the i-th unfolding of the tensor T ( i ) ∈ Rki× ∏ j≠i kj given a tensor T ∈ Rk1×···×kn . 2.1 OVERVIEW OF LOW-RANK PARAMETERIZATION . The low-rank decomposition in neural networks has been typically applied to pre-trained models for compression ( Phan et al. , 2020 ) , whereby the number of parameters is reduced while minimizing the loss of encoded information . Given a learned parameter matrix W ∈ Rm×n , it is formulated as finding the best rank-r approximation , as argmin W̃ ||W − W̃||F such that W̃ = XY > , where X ∈ Rm×r , Y ∈ Rn×r , and r ≪ min ( m , n ) . It reduces the number of parameters from O ( mn ) to O ( ( m+ n ) r ) , and its closed-form optimal solution can be found by SVD . This matrix decomposition is applicable to the FC layers and the reshaped kernels of the convolution layers . However , the natural shape of a convolution kernel is a fourth-order tensor ; thus , the low-rank tensor decomposition , such as Tucker and CP decomposition ( Lebedev et al. , 2015 ; Phan et al. , 2020 ) , can be a more suitable approach . Given a learned high-order tensor T ∈ Rk1×···×kn , Tucker decomposition multiplies a kernel tensor K ∈ Rr1×···×rn with matrices Xi ∈ Rki×ri , where ri = rank ( T̃ ( i ) ) , as T̃ = K ×1 X1 ×2 · · · ×n Xn , and CP decomposition is the summation of rank-1 tensors , as T̃ = ∑i=r i=1 x 1 i ⊗ x2i ⊗ · · · ⊗ xni , where x j i ∈ Rkj . Likewise , it also reduces the number of model parameters . We refer to these rank-constrained structure methods simply as conventional low-rank constraints or low-rank parameterization methods . In the FL context , where the parameters are frequently transferred between clients and the server during training , the reduced parameters lead to communication cost reduction , which is the main focus of this work . 
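The closed-form best rank-r approximation mentioned above can be sketched with a truncated SVD (the Eckart-Young result). The sizes and names below are our own toy example, not from the paper.

```python
import numpy as np

def best_rank_r(W, r):
    """Closed-form best rank-r approximation of W via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 6))      # a learned dense weight, for illustration
W_lr = best_rank_r(W, 2)         # factored storage: 2 * (8 + 6) = 28 numbers vs 48
err2 = np.linalg.norm(W - W_lr)
err1 = np.linalg.norm(W - best_rank_r(W, 1))
```

Increasing r can only shrink the approximation error, which is the accuracy-vs-parameters trade-off that motivates FedPara's higher-rank construction.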
The post-decomposition approaches ( Lebedev et al. , 2015 ; Phan et al. , 2020 ) using SVD , Tucker , and CP decompositions do not reduce the communication costs because those are applied to the original parameterization after finishing training . That is , the original large-size parameters are transferred during training in FL , and the number of parameters is reduced after finishing training . We take a different notion from the low-rank parameterizations . In the FL scenario , we train a model from scratch with low-rank constraints , but specifically with low-rank Hadamard product re-parameterization . We re-parameterize each learnable layer , including FC and convolutional layers , and train the surrogate model by FL . Different from the existing low-rank method in FL ( Konečnỳ et al. , 2016 ) , our parameterization can achieve comparable accuracy to the original counterpart . 2.2 FEDPARA : A COMMUNICATION-EFFICIENT PARAMETERIZATION . As mentioned , the conventional low-rank parameterization has limited expressiveness due to its low-rank constraint . To overcome this while maintaining fewer parameters , we present our new low-rank Hadamard product parameterization , called FedPara , which has the following favorable property : Proposition 1 Let X1 ∈ Rm×r1 , X2 ∈ Rm×r2 , Y1 ∈ Rn×r1 , Y2 ∈ Rn×r2 , r1 , r2 ≤ min ( m , n ) and the constructed matrix be W : = ( X1Y > 1 ) ⊙ ( X2Y > 2 ) . Then , rank ( W ) ≤ r1r2 . All proofs , including that of Proposition 1 , can be found in the supplementary material . Proposition 1 implies that , unlike the low-rank parameterization , a higher-rank matrix can be constructed using the Hadamard product of two inner low-rank matrices , W1 and W2 ( Refer to Figure 1 ) . If we choose the inner ranks r1 and r2 such that r1r2 ≥ min ( m , n ) , the constructed matrix does not have a low-rank restriction and is able to span a full-rank matrix with a high chance ( See Figure 6 ) ; i.e. , FedPara can achieve full rank with a minimal number of parameters . 
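A minimal numpy sketch of the construction in Proposition 1. The matrix sizes and Gaussian initialization are illustrative assumptions, but the rank behavior matches the claim: each low-rank factor has rank r, while their Hadamard product can exceed r and, once r * r >= min(m, n), is typically full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 16, 16, 4                    # r * r = 16 >= min(m, n), so W may be full rank
X1, X2 = rng.normal(size=(m, r)), rng.normal(size=(m, r))
Y1, Y2 = rng.normal(size=(n, r)), rng.normal(size=(n, r))

low_rank = X1 @ Y1.T                   # conventional low-rank factor: rank r
W = (X1 @ Y1.T) * (X2 @ Y2.T)          # FedPara: Hadamard product of two factors

rank_low = np.linalg.matrix_rank(low_rank)
rank_w = np.linalg.matrix_rank(W)
```

With random factors the product's rank exceeds that of either factor while staying bounded by r1 * r2, illustrating why the same parameter budget spans a much richer set of matrices than a single low-rank factorization.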
In addition , we can control the number of parameters by changing the inner ranks r1 and r2 , but we have the following useful property to set the hyper-parameters so that a maximal rank is achieved with a minimal number of parameters . Proposition 2 Given R ∈ N , r1 = r2 = R is the unique optimal choice of the following criteria , argmin r1 , r2∈N ( r1 + r2 ) ( m+ n ) s.t . r1r2 ≥ R2 , ( 1 ) and its optimal value is 2R ( m+ n ) . Equation 1 states the criteria that minimize the number of weight parameters used in our parameterization under the target rank constraint R2 on the constructed matrix . Proposition 2 provides an efficient way to set the hyper-parameters . It implies that , if we set r1=r2=R and R2 ≥ min ( m , n ) , FedPara is highly likely to have no low-rank restriction2 even with much fewer parameters than a naïve weight , i.e. , 2R ( m+ n ) ≪ mn . Moreover , given the same number of parameters , rank ( W ) of FedPara is higher than that of the naïve low-rank parameterization by a square factor , as shown in Figure 1 and Table 1 . To extend Proposition 1 to the convolutional layers , we can simply reshape the fourth-order tensor kernel to a matrix as RO×I×K1×K2 → RO× ( IK1K2 ) in a naïve way , where O , I , K1 , and K2 are the output channels , the input channels , and the kernel sizes , respectively . That is , our parameterization spans convolution filters with a few basis filters of size I ×K1 ×K2 . However , we can derive a more efficient parameterization of convolutional layers without reshaping as follows : Proposition 3 Let T1 , T2 ∈ RR×R×k3×k4 , X1 , X2 ∈ Rk1×R , Y1 , Y2 ∈ Rk2×R , R ≤ min ( k1 , k2 ) and the convolution kernel be W : = ( T1×1 X1×2 Y1 ) ⊙ ( T2×1 X2×2 Y2 ) . Then , the rank of the kernel satisfies rank ( W ( 1 ) ) = rank ( W ( 2 ) ) ≤ R2 . Proposition 3 is the extension of Proposition 1 but can be applied to the convolutional layer without reshaping . 
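To make the parameter counting behind Proposition 2 concrete, a small arithmetic check (the layer sizes are our own example): with r1 = r2 = R the parameterization uses 2R(m + n) weights, far fewer than the mn weights of a naïve dense layer once R is small relative to m and n.

```python
m, n = 1024, 512
R = 23                         # smallest R with R * R >= min(m, n) = 512
assert R * R >= min(m, n)      # maximal-rank condition: no low-rank restriction

naive = m * n                  # original dense layer: m * n weights
fedpara = 2 * R * (m + n)      # (r1 + r2)(m + n) with the optimal r1 = r2 = R
low_rank = R * (m + n)         # conventional rank-R factorization, for comparison
```

Here FedPara stores 70656 weights against 524288 for the dense layer; the conventional rank-R factorization uses half as many weights as FedPara but its constructed rank is capped at R rather than R squared.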
In the convolutional layer case of Table 1 , given the specific tensor size , Proposition 3 requires 3.8 times fewer parameters than Proposition 1 . Hence , we use Proposition 3 for the convolutional layer since the tensor method is more effective for common convolutional models . Optionally , we employ non-linearity and the Jacobian correction regularization , of which details can be found in the supplementary material . These techniques improve the accuracy and convergence stability but are not essential . Depending on the resources of devices , these techniques can be omitted . 2.3 PFEDPARA : PERSONALIZED FL APPLICATION In practice , data are heterogeneous and personal due to different characteristics of each client , such as usage times and habits . FedPer ( Arivazhagan et al. , 2019 ) has been proposed to tackle this scenario by distinguishing global and local layers in the model . Clients only transfer global layers ( the top layer ) and keep local ones ( the bottom layers ) on each device . The global layers learn jointly to extract general features , while the local layers are biased towards each user . With our FedPara , we propose a personalization application , pFedPara , in which the Hadamard product is used as a bridge between the global inner weight W1 and the local inner weight W2 . Each layer of the personalized model is constructed by W = W1 ⊙ ( W2 + 1 ) , where W1 is transferred to the server while W2 is kept in a local device during training . This induces W1 to learn globally shared knowledge implicitly and to act as a switch of the term ( W2 + 1 ) . Conceptually , we can interpret this by rewriting W = W1 ⊙ W2 + W1 = Wper . + Wglo . , where Wper . = W1 ⊙ W2 and Wglo . = W1 . ( 2Its corollary and empirical evidence can be found in the supplementary material . Under Proposition 2 , R2 ≥ min ( m , n ) is a necessary and sufficient condition for achieving a maximal rank . ) 
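The additive interpretation of pFedPara can be checked numerically: W1 ⊙ (W2 + 1) equals W1 ⊙ W2 + W1, i.e., a personalizing residue plus the global weight. For brevity this sketch treats W1 and W2 as dense matrices with made-up shapes; in FedPara they are themselves low-rank products.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(6, 4))   # global inner weight, transferred to the server
W2 = rng.normal(size=(6, 4))   # local inner weight, kept on the device

W = W1 * (W2 + 1.0)            # personalized layer weight W = W1 ⊙ (W2 + 1)
W_per = W1 * W2                # personalizing residue W_per = W1 ⊙ W2
W_glo = W1                     # global weight W_glo = W1
```

Since only W1 is uploaded, this halves the transferred parameters per round relative to FedPara at the same rank, as stated in the text.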
The construction of the final personalized parameter W in pFedPara can be viewed as an additive model of the global weight Wglo . and the personalizing residue Wper .. pFedPara transfers only a half of the parameters compared to FedPara under the same rank condition ; hence , the communication efficiency is increased further . Intuitively , FedPer and pFedPara are distinguished by their respective split directions , as illustrated in Figure 2 . We summarize our algorithms in the supplementary material . Although we only illustrate feed-forward network cases for convenience , it can be extended to general cases . | This paper introduces FedPara, a low-rank based method for achieving communication efficient federated learning. On the client-side, low rank factorized models are trained. Different from the traditional approaches to finding low rank factorized networks, FedPara introduces a novel low rank factorization strategy, which attains a higher maximal rank for the weight matrices. The paper also identifies that FedPara can be used to enhance personalized federated learning (by introducing a variant of FedPara, i.e., pFedPara). Both theoretical analysis and experimental results are provided to show the effectiveness of FedPara. | SP:05371cbf2ccd7358e78fb9aebdedbb29a324058e |
FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning | 1 INTRODUCTION . Federated learning ( FL ; McMahan et al. , 2017 ) has been proposed as an efficient collaborative learning strategy along with the advance and spread of mobile and IoT devices . FL allows leveraging local computing resources of edge devices and locally stored private data without data sharing for privacy . FL typically consists of the following steps : ( 1 ) clients download a globally shared model from a central server , ( 2 ) the clients locally update each model using their own private data without accessing the others ’ data , ( 3 ) the clients upload their local models back to the server , and ( 4 ) the server consolidates the updated models and repeats these steps until the global model converges . FL has the key properties ( McMahan et al. , 2017 ) that differentiate it from distributed learning : • Heterogeneous data . Data is decentralized and non-IID as well as unbalanced in its amount due to different characteristics of clients ; thus , local data does not represent the population distribution . • Heterogeneous systems . Clients consist of heterogeneous setups of hardware and infrastructure ; hence , those connections are not guaranteed to be online , fast , or cheap . Besides , massive client participation is expected through different communication paths , causing communication burdens . These FL properties introduce challenges in the convergence stability with heterogeneous data and communication overheads . To improve the stability and reduce the communication rounds , the prior works in FL have proposed modified loss functions or model aggregation methods ( Li et al. , 2020 ; Karimireddy et al. , 2020 ; Acar et al. , 2021 ; Yu et al. , 2020 ; Reddi et al. , 2021 ) . 
However , the amount of transferred data is still large for edge devices with bandwidth constraints or countries having low-quality communication infrastructure.1 ( 1The gap between the fastest and the slowest communication speed across countries is significant ; approximately 63 times different ( Speedtest ) . ) A large amount of transferred data introduces an energy consumption issue on edge devices because wireless communication is significantly more power-intensive than computation ( Yadav & Yadav , 2016 ; Yan et al. , 2019 ) . In this work , we propose a communication-efficient re-parameterization for FL , FedPara , which reduces the number of bits transferred per round . FedPara directly re-parameterizes each fully-connected ( FC ) and convolutional layer of the model to have a small and factorized form while preserving the model ’ s capacity . Our key idea is to combine the Hadamard product with low-rank parameterization as W = ( X1Y > 1 ) ⊙ ( X2Y > 2 ) ∈ Rm×n , called the low-rank Hadamard product . When rank ( X1Y > 1 ) = rank ( X2Y > 2 ) = r , then rank ( W ) ≤ r2 . This outstanding property facilitates spanning a full-rank matrix with much fewer parameters than the typical m× n matrix . It significantly reduces the communication burdens during training . At the inference phase , we pre-compose and maintain W , which boils down to its original structure ; thus , FedPara does not alter computational complexity at inference time . Compared to the aforementioned prior works that tackle reducing the required communication rounds for convergence , our FedPara is an orthogonal approach in that FedPara does not change the optimization part but re-defines each layer ’ s internal structure . We demonstrate the effectiveness of FedPara with various network architectures , including VGG , ResNet , and LSTM , on standard classification benchmark datasets for both IID and non-IID settings . 
The accuracy of our parameterization outperforms that of the traditional low-rank parameterization baseline given the same number of parameters . Besides , FedPara has comparable accuracy to the original counterpart models and , at times , even outperforms them as the number of parameters increases . We also combine FedPara with other FL algorithms to improve communication efficiency further . We extend FedPara to the personalized FL application , named pFedPara , which separates the roles of each sub-matrix into global and local inner matrices . The global and local inner matrices learn the globally shared common knowledge and client-specific knowledge , respectively . We devise three scenarios according to the amount and heterogeneity of local data using subsets of FEMNIST and MNIST . We demonstrate performance improvement and robustness of pFedPara against competing algorithms . We summarize our main contributions as follows : • We propose FedPara , a low-rank Hadamard product parameterization for communication-efficient FL . Unlike traditional low-rank parameterization , we show that FedPara can span a full-rank matrix and tensor with reduced parameters . We also show that FedPara requires an up to ten times lower total communication cost than the original model to achieve target accuracy . At times , FedPara even outperforms the original model when the ranks are adjusted . • Our FedPara takes a novel approach ; thereby , it can be combined with other FL methods to get mutual benefits , which further increase accuracy and communication efficiency . • We propose pFedPara , a personalization application of FedPara , which splits the layer weights into global and local parameters . pFedPara shows more robust results in challenging regimes than competing methods . 2 METHOD . In this section , we first provide an overview of three popular low-rank parameterizations in Section 2.1 and present our parameterization , FedPara , with its algorithmic properties in Section 2.2 . 
Then , we extend FedPara to the personalized FL application , pFedPara , in Section 2.3 . Notations . We denote the Hadamard product as ⊙ , the Kronecker product as ⊗ , the n-mode tensor product as ×n , and the i-th unfolding of the tensor T ( i ) ∈ Rki× ∏ j≠i kj given a tensor T ∈ Rk1×···×kn . 2.1 OVERVIEW OF LOW-RANK PARAMETERIZATION . The low-rank decomposition in neural networks has been typically applied to pre-trained models for compression ( Phan et al. , 2020 ) , whereby the number of parameters is reduced while minimizing the loss of encoded information . Given a learned parameter matrix W ∈ Rm×n , it is formulated as finding the best rank-r approximation , as argmin W̃ ||W − W̃||F such that W̃ = XY > , where X ∈ Rm×r , Y ∈ Rn×r , and r ≪ min ( m , n ) . It reduces the number of parameters from O ( mn ) to O ( ( m+ n ) r ) , and its closed-form optimal solution can be found by SVD . This matrix decomposition is applicable to the FC layers and the reshaped kernels of the convolution layers . However , the natural shape of a convolution kernel is a fourth-order tensor ; thus , the low-rank tensor decomposition , such as Tucker and CP decomposition ( Lebedev et al. , 2015 ; Phan et al. , 2020 ) , can be a more suitable approach . Given a learned high-order tensor T ∈ Rk1×···×kn , Tucker decomposition multiplies a kernel tensor K ∈ Rr1×···×rn with matrices Xi ∈ Rki×ri , where ri = rank ( T̃ ( i ) ) , as T̃ = K ×1 X1 ×2 · · · ×n Xn , and CP decomposition is the summation of rank-1 tensors , as T̃ = ∑i=r i=1 x 1 i ⊗ x2i ⊗ · · · ⊗ xni , where x j i ∈ Rkj . Likewise , it also reduces the number of model parameters . We refer to these rank-constrained structure methods simply as conventional low-rank constraints or low-rank parameterization methods . In the FL context , where the parameters are frequently transferred between clients and the server during training , the reduced parameters lead to communication cost reduction , which is the main focus of this work . 
The post-decomposition approaches ( Lebedev et al. , 2015 ; Phan et al. , 2020 ) using SVD , Tucker , and CP decompositions do not reduce the communication costs because those are applied to the original parameterization after finishing training . That is , the original large-size parameters are transferred during training in FL , and the number of parameters is reduced after finishing training . We take a different notion from the low-rank parameterizations . In the FL scenario , we train a model from scratch with low-rank constraints , but specifically with low-rank Hadamard product re-parameterization . We re-parameterize each learnable layer , including FC and convolutional layers , and train the surrogate model by FL . Different from the existing low-rank method in FL ( Konečnỳ et al. , 2016 ) , our parameterization can achieve comparable accuracy to the original counterpart . 2.2 FEDPARA : A COMMUNICATION-EFFICIENT PARAMETERIZATION . As mentioned , the conventional low-rank parameterization has limited expressiveness due to its low-rank constraint . To overcome this while maintaining fewer parameters , we present our new low-rank Hadamard product parameterization , called FedPara , which has the following favorable property : Proposition 1 Let X1 ∈ Rm×r1 , X2 ∈ Rm×r2 , Y1 ∈ Rn×r1 , Y2 ∈ Rn×r2 , r1 , r2 ≤ min ( m , n ) and the constructed matrix be W : = ( X1Y > 1 ) ⊙ ( X2Y > 2 ) . Then , rank ( W ) ≤ r1r2 . All proofs , including that of Proposition 1 , can be found in the supplementary material . Proposition 1 implies that , unlike the low-rank parameterization , a higher-rank matrix can be constructed using the Hadamard product of two inner low-rank matrices , W1 and W2 ( Refer to Figure 1 ) . If we choose the inner ranks r1 and r2 such that r1r2 ≥ min ( m , n ) , the constructed matrix does not have a low-rank restriction and is able to span a full-rank matrix with a high chance ( See Figure 6 ) ; i.e. , FedPara can achieve full rank with a minimal number of parameters . 
In addition , we can control the number of parameters by changing the inner ranks r1 and r2 , but we have the following useful property to set the hyper-parameters so that a maximal rank is achieved with a minimal number of parameters . Proposition 2 Given R ∈ N , r1 = r2 = R is the unique optimal choice of the following criteria , argmin r1 , r2∈N ( r1 + r2 ) ( m+ n ) s.t . r1r2 ≥ R2 , ( 1 ) and its optimal value is 2R ( m+ n ) . Equation 1 states the criteria that minimize the number of weight parameters used in our parameterization under the target rank constraint R2 on the constructed matrix . Proposition 2 provides an efficient way to set the hyper-parameters . It implies that , if we set r1=r2=R and R2 ≥ min ( m , n ) , FedPara is highly likely to have no low-rank restriction2 even with much fewer parameters than a naïve weight , i.e. , 2R ( m+ n ) ≪ mn . Moreover , given the same number of parameters , rank ( W ) of FedPara is higher than that of the naïve low-rank parameterization by a square factor , as shown in Figure 1 and Table 1 . To extend Proposition 1 to the convolutional layers , we can simply reshape the fourth-order tensor kernel to a matrix as RO×I×K1×K2 → RO× ( IK1K2 ) in a naïve way , where O , I , K1 , and K2 are the output channels , the input channels , and the kernel sizes , respectively . That is , our parameterization spans convolution filters with a few basis filters of size I ×K1 ×K2 . However , we can derive a more efficient parameterization of convolutional layers without reshaping as follows : Proposition 3 Let T1 , T2 ∈ RR×R×k3×k4 , X1 , X2 ∈ Rk1×R , Y1 , Y2 ∈ Rk2×R , R ≤ min ( k1 , k2 ) and the convolution kernel be W : = ( T1×1 X1×2 Y1 ) ⊙ ( T2×1 X2×2 Y2 ) . Then , the rank of the kernel satisfies rank ( W ( 1 ) ) = rank ( W ( 2 ) ) ≤ R2 . Proposition 3 is the extension of Proposition 1 but can be applied to the convolutional layer without reshaping . 
In the convolutional layer case of Table 1 , given the specific tensor size , Proposition 3 requires 3.8 times fewer parameters than Proposition 1 . Hence , we use Proposition 3 for the convolutional layer since the tensor method is more effective for common convolutional models . Optionally , we employ non-linearity and the Jacobian correction regularization , of which details can be found in the supplementary material . These techniques improve the accuracy and convergence stability but are not essential . Depending on the resources of devices , these techniques can be omitted . 2.3 PFEDPARA : PERSONALIZED FL APPLICATION In practice , data are heterogeneous and personal due to different characteristics of each client , such as usage times and habits . FedPer ( Arivazhagan et al. , 2019 ) has been proposed to tackle this scenario by distinguishing global and local layers in the model . Clients only transfer global layers ( the top layer ) and keep local ones ( the bottom layers ) on each device . The global layers learn jointly to extract general features , while the local layers are biased towards each user . With our FedPara , we propose a personalization application , pFedPara , in which the Hadamard product is used as a bridge between the global inner weight W1 and the local inner weight W2 . Each layer of the personalized model is constructed by W = W1 ⊙ ( W2 + 1 ) , where W1 is transferred to the server while W2 is kept in a local device during training . This induces W1 to learn globally shared knowledge implicitly and to act as a switch of the term ( W2 + 1 ) . Conceptually , we can interpret this by rewriting W = W1 ⊙ W2 + W1 = Wper . + Wglo . , where Wper . = W1 ⊙ W2 and Wglo . = W1 . ( 2Its corollary and empirical evidence can be found in the supplementary material . Under Proposition 2 , R2 ≥ min ( m , n ) is a necessary and sufficient condition for achieving a maximal rank . ) 
The construction of the final personalized parameter W in pFedPara can be viewed as an additive model of the global weight Wglo. and the personalizing residue Wper.. pFedPara transfers only half of the parameters of FedPara under the same rank condition; hence, the communication efficiency is increased further. Intuitively, FedPer and pFedPara are distinguished by their respective split directions, as illustrated in Figure 2. We summarize our algorithms in the supplementary material. Although we only illustrate the feed-forward network case for convenience, the approach can be extended to general cases. | The paper proposes a communication-efficient parameterization methodology for federated learning tasks through the Hadamard product of low-rank weights. Such parameterization has better model expressiveness in terms of minimal parameters to achieve a maximal rank, compared with conventional low-rank approaches. Using such parameterization, the work designs the corresponding communication-efficient federated learning framework which only uploads a low-rank part of the parameters of each layer to the server. The work presents empirical studies of the proposed algorithm and validates its effectiveness in terms of accuracy vs. communication costs. | SP:05371cbf2ccd7358e78fb9aebdedbb29a324058e |
FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning | 1 INTRODUCTION. Federated learning (FL; McMahan et al., 2017) has been proposed as an efficient collaborative learning strategy along with the advance and spread of mobile and IoT devices. FL allows leveraging the local computing resources of edge devices and locally stored private data, without data sharing, for privacy. FL typically consists of the following steps: (1) clients download a globally shared model from a central server, (2) the clients locally update each model using their own private data without accessing the others' data, (3) the clients upload their local models back to the server, and (4) the server consolidates the updated models and repeats these steps until the global model converges. FL has key properties (McMahan et al., 2017) that differentiate it from distributed learning: • Heterogeneous data. Data is decentralized and non-IID as well as unbalanced in its amount due to the different characteristics of clients; thus, local data does not represent the population distribution. • Heterogeneous systems. Clients consist of heterogeneous hardware and infrastructure setups; hence, their connections are not guaranteed to be online, fast, or cheap. Besides, massive client participation is expected through different communication paths, causing communication burdens. These FL properties introduce challenges in convergence stability with heterogeneous data and in communication overheads. To improve the stability and reduce the number of communication rounds, prior works in FL have proposed modified loss functions or model aggregation methods (Li et al., 2020; Karimireddy et al., 2020; Acar et al., 2021; Yu et al., 2020; Reddi et al., 2021).
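The four-step loop above can be sketched in a few lines of numpy. This is a generic FedAvg-style toy, not the paper's setup: the least-squares model, the learning rate, and uniform averaging are all illustrative choices.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # Step 2: one local gradient step on a client's private least-squares data.
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_weights, client_data):
    # Steps 1-4: broadcast the global model, update locally, upload, average.
    local_models = [local_update(global_weights.copy(), d) for d in client_data]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = [(X, X @ w_true) for X in (rng.normal(size=(32, 2)) for _ in range(4))]

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
print(np.round(w, 3))  # approaches w_true
```

In a real FL system, step 3 is where the communication cost arises: every round transfers the full parameter vector, which is exactly what FedPara later shrinks.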
However, the amount of transferred data is still large for edge devices with bandwidth constraints or in countries with low-quality communication infrastructure.1 A large amount of transferred data also raises an energy consumption issue on edge devices, because wireless communication is significantly more power-intensive than computation (Yadav & Yadav, 2016; Yan et al., 2019). In this work, we propose a communication-efficient re-parameterization for FL, FedPara, which reduces the number of bits transferred per round. FedPara directly re-parameterizes each fully-connected (FC) and convolutional layer of the model to have a small and factorized form while preserving the model's capacity. Our key idea is to combine the Hadamard product with low-rank parameterization as W = (X1Y1ᵀ) ⊙ (X2Y2ᵀ) ∈ R^{m×n}, called the low-rank Hadamard product. When rank(X1Y1ᵀ) = rank(X2Y2ᵀ) = r, then rank(W) ≤ r². This outstanding property facilitates spanning a full-rank matrix with far fewer parameters than the typical m×n matrix. It significantly reduces the communication burden during training. At the inference phase, we pre-compose and keep W, which boils down to its original structure; thus, FedPara does not alter the computational complexity at inference time. Compared to the aforementioned prior works that tackle reducing the number of communication rounds required for convergence, our FedPara is an orthogonal approach in that it does not change the optimization part but re-defines each layer's internal structure. We demonstrate the effectiveness of FedPara with various network architectures, including VGG, ResNet, and LSTM, on standard classification benchmark datasets in both IID and non-IID settings. 1The gap between the fastest and the slowest communication speed across countries is significant; approximately a 63-fold difference (Speedtest).
Given the same number of parameters, the accuracy of our parameterization outperforms that of the traditional low-rank parameterization baseline. Moreover, FedPara achieves accuracy comparable to the original counterpart models and at times even outperforms them as the number of parameters increases. We also combine FedPara with other FL algorithms to improve communication efficiency further. We extend FedPara to the personalized FL application, named pFedPara, which separates the roles of the sub-matrices into global and local inner matrices. The global and local inner matrices learn globally shared common knowledge and client-specific knowledge, respectively. We devise three scenarios according to the amount and heterogeneity of local data using subsets of FEMNIST and MNIST, and demonstrate the performance improvement and robustness of pFedPara against competing algorithms. We summarize our main contributions as follows: • We propose FedPara, a low-rank Hadamard product parameterization for communication-efficient FL. Unlike traditional low-rank parameterization, we show that FedPara can span a full-rank matrix and tensor with reduced parameters. We also show that FedPara requires up to ten times lower total communication cost than the original model to achieve a target accuracy, and at times even outperforms the original model by adjusting ranks. • FedPara takes a novel approach; thereby, it can be combined with other FL methods for mutual benefits, which further increase accuracy and communication efficiency. • We propose pFedPara, a personalization application of FedPara, which splits the layer weights into global and local parameters. pFedPara shows more robust results in challenging regimes than competing methods.
Then, we extend FedPara to the personalized FL application, pFedPara, in Section 2.3. Notations. We denote the Hadamard product as ⊙, the Kronecker product as ⊗, the n-mode tensor product as ×n, and the i-th unfolding of a tensor T ∈ R^{k1×···×kn} as T(i) ∈ R^{ki×∏_{j≠i} kj}. 2.1 OVERVIEW OF LOW-RANK PARAMETERIZATION. Low-rank decomposition in neural networks has typically been applied to pre-trained models for compression (Phan et al., 2020), whereby the number of parameters is reduced while minimizing the loss of encoded information. Given a learned parameter matrix W ∈ R^{m×n}, it is formulated as finding the best rank-r approximation, argmin_{W̃} ||W − W̃||_F such that W̃ = XYᵀ, where X ∈ R^{m×r}, Y ∈ R^{n×r}, and r ≪ min(m, n). It reduces the number of parameters from O(mn) to O((m+n)r), and its closed-form optimal solution can be found by SVD. This matrix decomposition is applicable to the FC layers and to the reshaped kernels of the convolution layers. However, the natural shape of a convolution kernel is a fourth-order tensor; thus, low-rank tensor decompositions, such as Tucker and CP decomposition (Lebedev et al., 2015; Phan et al., 2020), can be a more suitable approach. Given a learned high-order tensor T ∈ R^{k1×···×kn}, Tucker decomposition multiplies a kernel tensor K ∈ R^{r1×···×rn} with matrices Xi ∈ R^{ki×ri}, where ri = rank(T̃(i)), as T̃ = K ×1 X1 ×2 ··· ×n Xn, and CP decomposition is a summation of rank-1 tensors, T̃ = Σ_{i=1}^{r} x_i^1 ⊗ x_i^2 ⊗ ··· ⊗ x_i^n, where x_i^j ∈ R^{kj}. Likewise, it also reduces the number of model parameters. We refer to these rank-constrained structure methods simply as conventional low-rank constraints or low-rank parameterization methods. In the FL context, where the parameters are frequently transferred between the clients and the server during training, the reduced parameters lead to a communication cost reduction, which is the main focus of this work.
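The best rank-r approximation described above has a closed-form solution via the SVD; a small numpy sketch (the matrix and rank sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 48, 8
W = rng.normal(size=(m, n))              # stands in for a learned weight matrix

# Truncated SVD gives the optimal rank-r factors W_tilde = X @ Y.T.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
X = U[:, :r] * s[:r]                     # X in R^{m x r}
Y = Vt[:r].T                             # Y in R^{n x r}
W_tilde = X @ Y.T

print("rank:", np.linalg.matrix_rank(W_tilde))   # r
print("params:", m * n, "->", (m + n) * r)       # O(mn) -> O((m+n)r)
```

The same factorization is what gets transferred in a low-rank FL scheme: X and Y instead of the full W.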
The post-decomposition approaches (Lebedev et al., 2015; Phan et al., 2020) using SVD, Tucker, and CP decompositions do not reduce the communication costs, because they are applied to the original parameterization after training is finished. That is, the original large-size parameters are transferred during FL training, and the number of parameters is reduced only after training. We take a different notion from these low-rank parameterizations. In the FL scenario, we train a model from scratch with low-rank constraints, but specifically with a low-rank Hadamard product re-parameterization. We re-parameterize each learnable layer, including FC and convolutional layers, and train the surrogate model by FL. Different from the existing low-rank method in FL (Konečnỳ et al., 2016), our parameterization can achieve accuracy comparable to the original counterpart. 2.2 FEDPARA: A COMMUNICATION-EFFICIENT PARAMETERIZATION. As mentioned, the conventional low-rank parameterization has limited expressiveness due to its low-rank constraint. To overcome this while maintaining fewer parameters, we present our new low-rank Hadamard product parameterization, called FedPara, which has the following favorable property: Proposition 1 Let X1 ∈ R^{m×r1}, X2 ∈ R^{m×r2}, Y1 ∈ R^{n×r1}, Y2 ∈ R^{n×r2}, r1, r2 ≤ min(m, n), and let the constructed matrix be W := (X1Y1ᵀ) ⊙ (X2Y2ᵀ). Then, rank(W) ≤ r1·r2. All proofs, including that of Proposition 1, can be found in the supplementary material. Proposition 1 implies that, unlike the low-rank parameterization, a higher-rank matrix can be constructed using the Hadamard product of two inner low-rank matrices, W1 and W2 (refer to Figure 1). If we choose the inner ranks r1 and r2 such that r1·r2 ≥ min(m, n), the constructed matrix does not have a low-rank restriction and is able to span a full-rank matrix with high probability (see Figure 6); i.e., FedPara attains full rank with a minimal number of parameters.
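Proposition 1 is easy to check numerically: with r1 = r2 = r and r·r ≥ min(m, n), the Hadamard product of two rank-r factors generically reaches full rank. A sketch (the sizes are illustrative; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 16, 4                      # r * r = 16 >= min(m, n)

X1, X2 = rng.normal(size=(m, r)), rng.normal(size=(m, r))
Y1, Y2 = rng.normal(size=(n, r)), rng.normal(size=(n, r))

low_rank = X1 @ Y1.T                     # conventional low-rank factor: rank r
W = low_rank * (X2 @ Y2.T)               # low-rank Hadamard product: rank <= r*r

print(np.linalg.matrix_rank(low_rank))   # 4
print(np.linalg.matrix_rank(W))          # 16: full rank for this size
print("params:", 2 * r * (m + n), "vs naive:", m * n)
```

With the same 2r(m + n) parameter budget, the plain low-rank factor is stuck at rank 2r, while the Hadamard construction can reach rank r², the square-factor advantage mentioned in the text.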
In addition, we can control the number of parameters by changing the inner ranks r1 and r2, and the following property tells us how to set these hyper-parameters so as to obtain a maximal rank with a minimal number of parameters. Proposition 2 Given R ∈ ℕ, r1 = r2 = R is the unique optimal choice of the following criterion, argmin_{r1,r2 ∈ ℕ} (r1 + r2)(m + n) s.t. r1·r2 ≥ R², (1) and its optimal value is 2R(m + n). Equation 1 expresses the criterion that minimizes the number of weight parameters used in our parameterization under the target rank constraint R² on the constructed matrix. Proposition 2 thus provides an efficient way to set the hyper-parameters. It implies that, if we set r1 = r2 = R with R² ≥ min(m, n), FedPara is highly likely to have no low-rank restriction2 even with far fewer parameters than a naïve weight, i.e., 2R(m + n) ≪ mn. Moreover, given the same number of parameters, rank(W) of FedPara is higher than that of the naïve low-rank parameterization by a square factor, as shown in Figure 1 and Table 1. To extend Proposition 1 to convolutional layers, a naïve way is to simply reshape the fourth-order kernel tensor into a matrix, R^{O×I×K1×K2} → R^{O×(IK1K2)}, where O, I, K1, and K2 are the output channels, the input channels, and the two kernel sizes, respectively. That is, our parameterization spans convolution filters with a few basis filters of size I × K1 × K2. However, we can derive a more efficient parameterization of convolutional layers without reshaping, as follows: Proposition 3 Let T1, T2 ∈ R^{R×R×k3×k4}, X1, X2 ∈ R^{k1×R}, Y1, Y2 ∈ R^{k2×R}, R ≤ min(k1, k2), and let the convolution kernel be W := (T1 ×1 X1 ×2 Y1) ⊙ (T2 ×1 X2 ×2 Y2). Then, the rank of the kernel satisfies rank(W(1)) = rank(W(2)) ≤ R². Proposition 3 is the extension of Proposition 1 that can be applied to the convolutional layer without reshaping.
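Proposition 2's claim, that r1 = r2 = R uniquely minimizes the parameter count (r1 + r2)(m + n) subject to r1·r2 ≥ R², can be sanity-checked by brute force (the layer size and rank here are illustrative):

```python
def best_inner_ranks(R, m, n, search=64):
    # Enumerate feasible (r1, r2) pairs and minimize the parameter count.
    feasible = [(r1, r2) for r1 in range(1, search) for r2 in range(1, search)
                if r1 * r2 >= R * R]
    return min(feasible, key=lambda p: (p[0] + p[1]) * (m + n))

R, m, n = 8, 256, 128
r1, r2 = best_inner_ranks(R, m, n)
print(r1, r2)                    # 8 8, i.e., r1 = r2 = R
print((r1 + r2) * (m + n))       # 2R(m + n) = 6144, vs m*n = 32768 naive
```

The balanced choice follows from the AM-GM inequality: among all pairs with product at least R², the sum r1 + r2 is smallest when both equal R.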
In the convolutional layer case of Table 1, given the specific tensor size, Proposition 3 requires 3.8 times fewer parameters than Proposition 1. Hence, we use Proposition 3 for convolutional layers, since the tensor method is more effective for common convolutional models. Optionally, we employ a non-linearity and the Jacobian correction regularization, the details of which can be found in the supplementary material. These techniques improve accuracy and convergence stability but are not essential; depending on the resources of the devices, they can be omitted. 2.3 PFEDPARA: PERSONALIZED FL APPLICATION In practice, data are heterogeneous and personal due to the different characteristics of each client, such as usage times and habits. FedPer (Arivazhagan et al., 2019) has been proposed to tackle this scenario by distinguishing global and local layers in the model. Clients only transfer the global layers (the bottom layers) and keep the local ones (the top layer) on each device. The global layers learn jointly to extract general features, while the local layers are biased towards each user. With our FedPara, we propose a personalization application, pFedPara, in which the Hadamard product is used as a bridge between the global inner weight W1 and the local inner weight W2. Each layer of the personalized model is constructed as W = W1 ⊙ (W2 + 1), where W1 is transferred to the server while W2 is kept on the local device during training. This induces W1 to learn globally shared knowledge implicitly, while the term (W2 + 1) acts as a switch. Conceptually, we 2Its corollary and empirical evidence can be found in the supplementary material. Under Proposition 2, R² ≥ min(m, n) is a necessary and sufficient condition for achieving a maximal rank. can interpret this by rewriting W = W1 ⊙ W2 + W1 = Wper. + Wglo., where Wper. = W1 ⊙ W2 and Wglo. = W1.
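The additive interpretation above is a one-line identity; the sketch below checks it with dense stand-ins for the inner weights (in FedPara each Wi would itself be a low-rank product Xi Yiᵀ; the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, R = 12, 10, 4

W1 = rng.normal(size=(m, R)) @ rng.normal(size=(n, R)).T  # global: uploaded to server
W2 = rng.normal(size=(m, R)) @ rng.normal(size=(n, R)).T  # local: stays on device

W = W1 * (W2 + 1)     # personalized layer weight, W1 ⊙ (W2 + 1)
W_glo = W1            # globally shared knowledge
W_per = W1 * W2       # personalizing residue

assert np.allclose(W, W_glo + W_per)
print(W.shape)        # (12, 10)
```

The identity makes the "switch" reading concrete: wherever W2 is near zero, W falls back to the shared weight W1; elsewhere W2 modulates it per client.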
The construction of the final personalized parameter W in pFedPara can be viewed as an additive model of the global weight Wglo. and the personalizing residue Wper.. pFedPara transfers only half of the parameters of FedPara under the same rank condition; hence, the communication efficiency is increased further. Intuitively, FedPer and pFedPara are distinguished by their respective split directions, as illustrated in Figure 2. We summarize our algorithms in the supplementary material. Although we only illustrate the feed-forward network case for convenience, the approach can be extended to general cases. | The paper introduces FedPara, which is a low-rank parametrization method for neural networks aiming to reduce the total number of communication bits while preserving the model accuracy in the federated learning scenario. Besides FedPara, which is designed for federated learning applications, the paper also generalizes FedPara into pFedPara, which is designed for personalized federated learning. The novelty of this paper includes using the Hadamard product in the low-rank approximation. Compared with directly using low-rank approximation, using the Hadamard product can increase the expressiveness of the parametrization. The paper also includes empirical results to show the effectiveness of FedPara/pFedPara in the federated learning setting. | SP:05371cbf2ccd7358e78fb9aebdedbb29a324058e |
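Putting the parameter counts from this row together, the per-layer upload cost per round compares as follows (a back-of-the-envelope sketch; the layer size and rank are illustrative choices, not figures from the paper):

```python
def upload_params(m, n, R):
    # Per-round upload for one m-by-n layer under rank hyper-parameter R.
    return {
        "original": m * n,            # full weight matrix
        "FedPara":  2 * R * (m + n),  # both inner factors X1, Y1, X2, Y2
        "pFedPara": R * (m + n),      # only the global factor W1 = X1 @ Y1.T
    }

costs = upload_params(m=512, n=512, R=16)
print(costs)  # {'original': 262144, 'FedPara': 32768, 'pFedPara': 16384}
```

The halving from FedPara to pFedPara reflects keeping W2 on-device; only W1 ever crosses the network.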
BoolNet: Streamlining Binary Neural Networks Using Binary Feature Maps | 1 INTRODUCTION. The recent success of Deep Neural Networks (DNNs) is the jewel in the crown of the modern wave of AI. However, their large size and high number of operations cause current DNNs to rely heavily on high-performance computing hardware, such as GPUs and TPUs. Training sophisticated DNN models also results in excessive energy consumption and CO2 emission; e.g., training OpenAI's GPT-3 (Brown et al., 2020) causes as much CO2 emission as 43 cars do over their lifetimes (Patterson et al., 2021). Moreover, their computational cost strongly limits their applicability on resource-constrained devices such as mobile phones, IoT devices, and embedded devices. Various works aim to solve this challenge by reducing memory footprints and accelerating inference. We can roughly categorize these works into the following directions: network pruning (Han et al., 2015a; b), knowledge distillation (Crowley et al., 2018; Polino et al., 2018), compact networks (Howard et al., 2017; 2019; Sandler et al., 2018; Ma et al., 2018b; Tan et al., 2019), and low-bit quantization (Courbariaux et al., 2015; Rastegari et al., 2016; Zhou et al., 2016; Hubara et al., 2016). The latter includes an extreme case, Binary Neural Networks (BNNs), first introduced by Courbariaux et al. (2016), which use only 1 bit for weights and activations. As shown in the literature (Rastegari et al., 2016), BNNs can achieve 32× memory compression and up to 58× speedup on CPU, since the conventional arithmetic operations can be replaced by bit-wise xnor and bitcount operations. However, BNNs suffer from accuracy degradation compared to their 32-bit counterparts. For instance, XNOR-Net leaves an 18 % accuracy gap to ResNet-18 on ImageNet classification (Rastegari et al., 2016).
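The xnor/bitcount replacement mentioned above rests on a standard identity: for ±1 vectors packed as bits, the dot product equals 2·popcount(xnor(a, b)) − n. A sketch (the pure-Python bit packing is our own illustration, not an optimized kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
a = rng.choice([-1, 1], size=n)
b = rng.choice([-1, 1], size=n)

# Pack {-1, +1} entries as bits {0, 1} into Python integers.
pack = lambda v: int("".join("1" if x == 1 else "0" for x in v), 2)
A, B = pack(a), pack(b)

# xnor (restricted to n bits) + popcount replaces the multiply-accumulate:
# matches minus mismatches = 2 * matches - n.
xnor = ~(A ^ B) & ((1 << n) - 1)
dot_binary = 2 * bin(xnor).count("1") - n

print(dot_binary, int(a @ b))  # identical values
```

On real hardware, one xnor plus one popcount instruction processes 64 multiply-accumulates at once, which is where the reported speedups come from.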
Therefore, recent efforts (analyzed in more detail in Section 2) mainly focus on narrowing the accuracy gap, including specific architecture design (Liu et al., 2018; Bethge et al., 2019; 2020; Liu et al., 2020b), real-valued weight and activation approximation (Lin et al., 2017a; Zhuang et al., 2019), specific training recipes (Martinez et al., 2020), a dedicated optimizer (Helwegen et al., 2019), leveraging neural architecture search (Bulat et al., 2020; Zhao et al., 2020), and dynamic networks (Bulat et al., 2021). In existing work, efficiency analysis usually only considers theoretical instruction counts. However, memory usage, inference efficiency, and energy consumption, which are essential to practical applications, have received little attention. Furthermore, Fromm et al. (2020) and Bannink et al. (2021) point out that the theoretical complexity is often inconsistent with the actual performance in practice, and that measurable performance gains on existing BNN models are hard to achieve, as the 32-bit components in BNNs (such as BatchNorm, scaling, and 32-bit branches) become bottlenecks. Using 32-bit information flow (e.g., the 32-bit identity connections and 32-bit downsampling layers with which almost all recent BNNs are equipped; see Figure 1a) and multiplication/division operations (in BatchNorm, scaling, average pooling, etc.) significantly increases the memory usage and power consumption of BNNs and is thus unfriendly to hardware accelerators. For these reasons, even though BNNs have achieved MobileNet-level accuracy with a similar theoretical number of OPs (Bethge et al., 2020; Martinez et al., 2020), they still cannot be used as conveniently as compact networks (Howard et al., 2017; 2019; Sandler et al., 2018). In this paper, we extensively study the trade-off between BNN accuracy and hardware efficiency.
We propose a novel BNN architecture, BoolNet, which replaces the most commonly used 32-bit components (see Section 3). First, BoolNet only uses binary feature maps in the network (see Figure 1b). Second, during inference, we fuse the BN layer into the Sign function through a lossless transformation, thereby effectively removing the Mult-Adds brought by BN. Other changes include removing components that require additional 32-bit multiplication/division operations: (1) PReLU, (2) average pooling, and (3) binary downsampling convolutions. We then propose a Multi-slice strategy to help alleviate the loss of representational capacity incurred by binarizing the feature maps and removing 32-bit components. We show the effectiveness of our proposed methods and the increased energy efficiency of BoolNet with experiments on the ImageNet dataset (Deng et al., 2009). The results show the key benefit of BoolNet: a reasonable accuracy coupled with higher energy efficiency than state-of-the-art BNNs (see Figure 1c for a brief summary and Section 4 for more details). The energy data is obtained through a hardware accelerator simulation (see Section 4.4 for details). We summarize our main contributions as follows: • The first work studying the effects of the 32-bit layers often used in previous works on BNNs. • A novel BNN architecture, BoolNet, with minimal 32-bit components for higher efficiency. • A Multi-slice strategy to alleviate the accuracy loss incurred by using 1-bit feature maps. • State-of-the-art performance on the trade-off between accuracy and energy consumption, with 2.9× lower power consumption than Bi-RealNet (Liu et al., 2018) and 6.6 % higher accuracy. 2 RELATED WORK. In recent years, Efficient Deep Learning has become a research field that attracts much attention. Technical directions such as compact network design (Howard et al., 2017; 2019; Sandler et al., 2018; Zhang et al., 2018; Ma et al.
, 2018b), knowledge distillation (Crowley et al., 2018; Polino et al., 2018), network pruning (Han et al., 2015a; b; Li et al., 2017; He et al., 2017), and low-bit quantization (Courbariaux et al., 2015; Rastegari et al., 2016; Liu et al., 2018; 2020b; Bethge et al., 2020) have been proposed for model compression and acceleration. Efficient models have evolved from the earliest handcrafted designs to the current use of neural architecture search to find the best basic block and overall network structure (Tan et al., 2019; Howard et al., 2019; Tan & Le, 2019; Radosavovic et al., 2020). The criterion for efficiency evaluation has also changed from instruction and parameter counts to more precise measurements of actual memory usage and operating efficiency on the target hardware (Cai et al., 2019; 2018). Binary Neural Networks were first introduced by Courbariaux et al. (2016), and their initial attempt was only evaluated on small datasets such as MNIST (LeCun & Cortes, 2010), CIFAR10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). The follow-up XNOR-Net (Rastegari et al., 2016) proposes channel-wise scaling factors for approximating the real-valued parameters, which achieves 51.2 % top-1 accuracy on ImageNet. However, there is an 18 % gap compared with its 32-bit counterpart, ResNet-18. Therefore, recent efforts mainly focused on narrowing the accuracy gap. WRPN by Mishra et al. (2018) shows that expanding the channel width of binary convolutions can obtain better performance. ABC-Net by Lin et al. (2017a), GroupNet by Zhuang et al. (2019), and Zhu et al. (2019) use a set of k binary convolutions (referred to as binary bases), instead of a single binary convolution, to approximate a 32-bit convolution. This sort of method achieves higher accuracy but increases the required memory and number of operations of each convolution by a factor of k. Bi-RealNet by Liu et al.
(2018) proposes using real-valued (32-bit) shortcuts to maintain a 32-bit information flow, which effectively improves the accuracy. This design strategy became a standard for later work, e.g., Bethge et al. (2019; 2020); Liu et al. (2020b). Martinez et al. (2020) propose using a real-valued attention mechanism and well-tuned training recipes to further boost the accuracy. Thanks to their special architecture designs, the recent MeliusNet (Bethge et al., 2020) and ReActNet (Liu et al., 2020b) achieve MobileNet-level accuracy with a similar number of theoretical operations. Other attempts, such as leveraging neural architecture search (Bulat et al., 2020; Zhao et al., 2020) and dynamic networks (Bulat et al., 2021), show that these methods, successful on regular real-valued networks, are also effective for BNNs. Another method, by Shen et al. (2019), uses neural architecture search to dynamically increase the number of channels for more accurate BNNs. Often, with improved accuracy, 32-bit components are also used more frequently, such as PReLU and BatchNorm after each binary convolution (Liu et al., 2020b), a real-valued attention module (Martinez et al., 2020), scaling factors, etc. Apart from some works that include and optimize real-time measurements on mobile devices, such as Bannink et al. (2021) and Umuroglu et al. (2017), efficiency analysis in the literature often only considers the theoretical operation count. However, memory usage and actual energy consumption have received very little attention so far. 3 BOOLNET. In this section, we first revisit the latest BNNs and recap how they enhance accuracy by adding more 32-bit components (Section 3.1). Afterwards, we propose to replace the most commonly used 32-bit components in current BNN designs and instead use a fully binary information flow in the network (Section 3.2).
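The lossless BN-into-Sign fusion mentioned in the contributions above works because Sign(γ(x − μ)/σ + β) collapses to a single threshold comparison on x. The sketch below illustrates the idea with arbitrary parameter values; it is a simplification of the general principle, not the paper's exact transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                 # pre-BN activations of one channel

gamma, beta, mu, var, eps = 1.7, -0.3, 0.4, 2.0, 1e-5
sigma = np.sqrt(var + eps)

# Unfused: BatchNorm followed by the sign function (mult-adds per element).
bn_then_sign = np.sign(gamma * (x - mu) / sigma + beta)

# Fused: one threshold test per element (the comparison flips if gamma < 0).
theta = mu - beta * sigma / gamma
fused = np.where(x >= theta, 1.0, -1.0) * np.sign(gamma)

print(np.array_equal(bn_then_sign, fused))
```

Since the output is binary anyway, the multiply and add of BN never need to be materialized at inference time; only the per-channel threshold θ is stored.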
However, abandoning the 32-bit information flow results in a serious degradation of the representative capacity of the network. Thus, we also present our strategies to restore the representative capacity (in Section 3.3). The focus on boolean operations and binary feature maps leads to the name of our network: BoolNet. 3.1 IMPROVING ACCURACY WITH ADDITIONAL 32-BIT COMPONENTS. Recent works on BNNs have made promising progress in narrowing the gap to their 32-bit counterparts. The key intention is to enhance the representative capacity by fully exploiting additional 32-bit components. However, such additional 32-bit components significantly reduce hardware efficiency (as shown by Fromm et al. (2020) and further discussed in Section 4.4). The following list summarizes the 32-bit components commonly used in the latest BNNs: (1) The channel-wise scaling factor was first proposed by Rastegari et al. (2016) for approximating the 32-bit parameters. It increases the value range of activations and weights. (2) Bi-RealNet (Liu et al., 2018) proposes to use a 32-bit shortcut enclosing each binary convolution. The key advantage is that the network can maintain an almost completely 32-bit information flow (cf. Figure 2a). (3) XNOR-Net (Rastegari et al., 2016) uses 32-bit 1×1 downsampling convolutions, which are also used by many subsequent methods (Liu et al., 2018; Martinez et al., 2020; Bethge et al., 2020). Bethge et al. (2019) show that this simple strategy achieves about 3.6 % Top-1 accuracy gains on ImageNet for a binary ResNet-18 model. (4) Martinez et al. (2020) and Bulat et al. (2020; 2021) show that the PReLU activation effectively improves the accuracy of BNNs. ReActNet (Liu et al., 2020b) constructs the RPReLU activation function and uses it before every sign function. (5) Martinez et al.
(2020) reuse the 32-bit activation after BN in their Real-to-Binary Net with a squeeze-and-excitation (SE) attention mechanism. This module can adaptively re-scale the outputs of each binary convolution but requires additional 32-bit operations. Although these techniques can effectively improve accuracy, they increase the number of 32-bit values and floating-point operations, making them not particularly efficient on hardware accelerators. They are closer to mixed-precision neural networks than to the highly efficient binary neural networks one might expect. | This paper proposes methods to further reduce the energy consumption of binary neural networks by removing or replacing 32-bit components (e.g., skip connections) in SOTA BNNs. More specifically, the proposed architecture (1) reduces the precision of skip connections and activation functions, (2) transforms the BatchNorm layer into a simple sign function, and (3) employs a multi-slice strategy to alleviate the loss of representational capacity incurred by binarizing the feature maps and shortcut connections. The results show that the new model achieves a 4.7x energy reduction with some accuracy degradation compared to SOTA architectures. | SP:ab69e1f4ea5129f8fd3d715aff6dd9cbdaedb49b |
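The channel-wise scaling factor from item (1) in the row above can be sketched in a few lines: α is the mean absolute weight per output channel, the closed-form least-squares scale for sign(W) (the kernel shape here is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3, 3, 3))          # (out_channels, in_channels, kH, kW)

B = np.sign(W)                             # binary weights
alpha = np.abs(W).mean(axis=(1, 2, 3))     # channel-wise scaling factors
W_approx = alpha[:, None, None, None] * B  # XNOR-Net-style approximation

err_scaled = np.linalg.norm(W - W_approx)
err_plain = np.linalg.norm(W - B)
print(err_scaled < err_plain)              # scaling tightens the approximation
```

This accuracy gain is exactly the trade-off BoolNet scrutinizes: the 32-bit multiply by α per channel is one of the components that breaks a purely boolean datapath.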
BoolNet: Streamlining Binary Neural Networks Using Binary Feature Maps | 1 INTRODUCTION . The recent success of Deep Neural Networks ( DNNs ) is like the jewel in the crown of modern AI waves . However , the large size and the high number of operations cause the current DNNs to heavily rely on high-performance computing hardware , such as GPU and TPU . Training sophisticated DNN models also results in excessive energy consumption and CO2 emission , e.g. , training the OpenAI ’ s GPT-3 by Brown et al . ( 2020 ) causes as much CO2 emissions as 43 cars during their lifetime ( Patterson et al. , 2021 ) . Moreover , their computational expensiveness strongly limits their applicability on resource-constrained devices such as mobile phones , IoT devices , and embedded devices . Various works aim to solve this challenge by reducing memory footprints and accelerating inference . We can roughly categorize these works into the following directions : network pruning ( Han et al. , 2015a ; b ) , knowledge distillation ( Crowley et al. , 2018 ; Polino et al. , 2018 ) , compact networks ( Howard et al. , 2017 ; 2019 ; Sandler et al. , 2018 ; Ma et al. , 2018b ; Tan et al. , 2019 ) , and low-bit quantization ( Courbariaux et al. , 2015 ; Rastegari et al. , 2016 ; Zhou et al. , 2016 ; Hubara et al. , 2016 ) . From the latter , there is an extreme case , Binary Neural Networks ( BNNs ) , first introduced by Courbariaux et al . ( 2016 ) , that uses only 1 bit for weight and activation . As shown in the literature ( Rastegari et al. , 2016 ) , BNNs can achieve 32× memory compression and up to 58× speedup on CPU , since the conventional arithmetic operations can be replaced by bit-wise xnor and bitcount operations . However , BNNs suffer from accuracy degradation compared to their 32-bit counterparts . For instance , XNOR-Net leaves an 18 % accuracy gap to ResNet-18 on ImageNet classification ( Rastegari et al. , 2016 ) . 
Therefore , recent efforts ( analyzed in more detail in Section 2 ) mainly focus on narrowing the accuracy gap , including specific architecture design ( Liu et al. , 2018 ; Bethge et al. , 2019 ; 2020 ; Liu et al. , 2020b ) , real-valued weight and activation approximation ( Lin et al. , 2017a ; Zhuang et al. , 2019 ) , specific training recipes ( Martinez et al. , 2020 ) , a dedicated optimizer ( Helwegen et al. , 2019 ) , leveraging neural architecture search ( Bulat et al. , 2020 ; Zhao et al. , 2020 ) and dynamic networks ( Bulat et al. , 2021 ) . In the existing work , efficiency analysis usually only considers the theoretical instruction counts . However , memory usage , inference efficiency and energy consumption , which are essential to practical applications , have received little attention . Furthermore , Fromm et al . ( 2020 ) ; Bannink et al . ( 2021 ) point out that the theoretical complexity is often inconsistent with the actual performance in practice and measurable performance gains on existing BNN models are hard to achieve as the 32-bit components in BNNs ( such as BatchNorm , scaling , and 32-bit branches ) become bottlenecks . Using 32-bit information flow ( e.g. , 32-bit identity connections , 32-bit downsampling layers are equipped by almost all latest BNNs , see Figure 1a ) , and multiplication/division operations ( in BatchNorm , scaling , average pooling etc . ) significantly increase the memory usage and power consumption of BNNs and are thus unfriendly to hardware accelerators . For these reasons , even if BNNs have achieved MobileNet-level accuracy with a similar theoretical number of OPs ( Bethge et al. , 2020 ; Martinez et al. , 2020 ) , they still can not be used as conveniently as compact networks ( Howard et al. , 2017 ; 2019 ; Sandler et al. , 2018 ) . In this paper , we extensively study the trade-off between BNN ’ s accuracy and hardware efficiency . 
We propose a novel BNN architecture, BoolNet, which replaces the most commonly used 32-bit components (see Section 3). First, BoolNet uses only binary feature maps in the network (see Figure 1b). Second, during inference, we fuse the BN layer into the Sign function through a lossless transformation, thereby effectively removing the Mult-Adds introduced by BN. Other changes include removing components that require additional 32-bit multiplication/division operations: (1) PReLU, (2) average pooling, and (3) binary downsampling convolutions. We then propose a Multi-slice strategy to help alleviate the loss of representational capacity incurred by binarizing the feature maps and removing 32-bit components. We show the effectiveness of our proposed methods and the increased energy efficiency of BoolNet with experiments on the ImageNet dataset (Deng et al., 2009). The results show the key benefit of BoolNet: reasonable accuracy coupled with higher energy efficiency than state-of-the-art BNNs (see Figure 1c for a brief summary and Section 4 for more details). The energy data is obtained through a hardware accelerator simulation (see Section 4.4 for details). We summarize our main contributions as follows: • The first work to study the effects of the 32-bit layers commonly used in previous BNNs. • A novel BNN architecture, BoolNet, with minimal 32-bit components for higher efficiency. • A Multi-slice strategy to alleviate the accuracy loss incurred by using 1-bit feature maps. • State-of-the-art performance on the trade-off between accuracy and energy consumption, with 2.9× lower power consumption than Bi-RealNet (Liu et al., 2018) and 6.6% higher accuracy. 2 RELATED WORK. In recent years, Efficient Deep Learning has become a research field attracting much attention. Technical directions such as compact network design (Howard et al., 2017; 2019; Sandler et al., 2018; Zhang et al., 2018; Ma et al.
, 2018b), knowledge distillation (Crowley et al., 2018; Polino et al., 2018), network pruning (Han et al., 2015a;b; Li et al., 2017; He et al., 2017), and low-bit quantization (Courbariaux et al., 2015; Rastegari et al., 2016; Liu et al., 2018; 2020b; Bethge et al., 2020) have been proposed for model compression and acceleration. Efficient models have evolved from the earliest handcrafted designs to the current use of neural architecture search to find the best basic block and overall network structure (Tan et al., 2019; Howard et al., 2019; Tan & Le, 2019; Radosavovic et al., 2020). The criterion for efficiency evaluation has also shifted from instruction and parameter counts to more precise measurements of actual memory and operating efficiency on the target hardware (Cai et al., 2019; 2018). Binary Neural Networks were first introduced by Courbariaux et al. (2016), and their initial attempt was only evaluated on small datasets such as MNIST (LeCun & Cortes, 2010), CIFAR10 (Krizhevsky et al., 2009), and SVHN (Netzer et al., 2011). The follow-up XNOR-Net (Rastegari et al., 2016) proposes channel-wise scaling factors for approximating the real-valued parameters, which achieves 51.2% top-1 accuracy on ImageNet. However, there is an 18% gap compared with its 32-bit counterpart, ResNet-18. Therefore, recent efforts have mainly focused on narrowing the accuracy gap. WRPN by Mishra et al. (2018) shows that expanding the channel width of binary convolutions can obtain better performance. ABC-Net by Lin et al. (2017a), GroupNet by Zhuang et al. (2019), and Zhu et al. (2019) use a set of k binary convolutions (referred to as binary bases), instead of a single binary convolution, to approximate a 32-bit convolution. This sort of method achieves higher accuracy but increases the required memory and number of operations of each convolution by the factor k. Bi-RealNet by Liu et al.
(2018) proposes using real-valued (32-bit) shortcuts to maintain a 32-bit information flow, which effectively improves accuracy. This design strategy became a standard for later work, e.g., Bethge et al. (2019; 2020); Liu et al. (2020b). Martinez et al. (2020) propose using a real-valued attention mechanism and well-tuned training recipes to boost accuracy further. Thanks to their special architecture designs, the recent MeliusNet (Bethge et al., 2020) and ReActNet (Liu et al., 2020b) achieve MobileNet-level accuracy with a similar number of theoretical operations. Other attempts, such as leveraging neural architecture search (Bulat et al., 2020; Zhao et al., 2020) and dynamic networks (Bulat et al., 2021), show that these methods, successful on regular real-valued networks, are also effective for BNNs. Another method by Shen et al. (2019) uses neural architecture search to dynamically increase the number of channels for more accurate BNNs. Often, with improved accuracy, 32-bit components are used more frequently as well, such as PReLU and BatchNorm after each binary convolution (Liu et al., 2020b), a real-valued attention module (Martinez et al., 2020), scaling factors, etc. Apart from some works that include and optimize real-time measurements on mobile devices, such as Bannink et al. (2021); Umuroglu et al. (2017), efficiency analysis in the literature often considers only the theoretical number of operations. However, memory usage and actual energy consumption have received very little attention so far. 3 BOOLNET. In this section, we first revisit the latest BNNs and recap how they enhance accuracy by adding more 32-bit components (Section 3.1). Afterwards, we propose to replace the most commonly used 32-bit components of current BNN designs and instead use a fully binary information flow in the network (Section 3.2).
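BoolNet's BN-into-Sign fusion mentioned in the introduction rests on a simple algebraic identity; the sketch below is our own illustration of that identity, not the paper's code. Folding BN's affine transform into the sign turns batch norm plus sign into a single comparison against a pre-computed per-channel threshold, with no multiplications at inference time.

```python
import math

def bn_then_sign(x, mean, var, gamma, beta, eps=1e-5):
    """Reference path: batch norm followed by the sign function."""
    y = gamma * (x - mean) / math.sqrt(var + eps) + beta
    return 1 if y >= 0 else -1

def fused_sign(x, mean, var, gamma, beta, eps=1e-5):
    """BN folded into sign: compare x against a fixed threshold theta.

    gamma*(x - mean)/std + beta >= 0  <=>  x >= mean - beta*std/gamma,
    with the inequality direction flipping when gamma < 0.
    """
    std = math.sqrt(var + eps)
    theta = mean - beta * std / gamma
    if gamma >= 0:
        return 1 if x >= theta else -1
    return 1 if x <= theta else -1

# The two paths agree for positive and negative gamma.
for x in [-2.0, -0.3, 0.0, 0.7, 3.1]:
    assert bn_then_sign(x, 0.4, 2.0, -1.3, 0.5) == fused_sign(x, 0.4, 2.0, -1.3, 0.5)
    assert bn_then_sign(x, 0.4, 2.0, 1.3, 0.5) == fused_sign(x, 0.4, 2.0, 1.3, 0.5)
```

Since mean, var, gamma, and beta are fixed after training, theta can be pre-computed once per channel, which is the "lossless transformation" flavor of fusion described above.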
However, abandoning the 32-bit information flow results in a serious degradation of the representational capacity of the network. Thus, we also present strategies to restore the representational capacity (Section 3.3). The focus on boolean operations and binary feature maps leads to the name of our network: BoolNet. 3.1 IMPROVING ACCURACY WITH ADDITIONAL 32-BIT COMPONENTS. Recent works on BNNs have made promising progress in narrowing the gap to their 32-bit counterparts. The key idea is to enhance the representational capacity by fully exploiting additional 32-bit components. However, such additional 32-bit components significantly reduce hardware efficiency (as shown by Fromm et al. (2020) and further discussed in Section 4.4). The following list summarizes the 32-bit components commonly used in the latest BNNs: (1) The channel-wise scaling factor was first proposed by Rastegari et al. (2016) for approximating the 32-bit parameters. It increases the value range of activations and weights. (2) Bi-RealNet (Liu et al., 2018) proposes a 32-bit shortcut around each binary convolution. The key advantage is that the network can maintain an almost completely 32-bit information flow (cf. Figure 2a). (3) XNOR-Net (Rastegari et al., 2016) uses 32-bit 1×1 downsampling convolutions, which are also used by many subsequent methods (Liu et al., 2018; Martinez et al., 2020; Bethge et al., 2020). Bethge et al. (2019) show that this simple strategy can achieve about 3.6% Top-1 accuracy gains on ImageNet based on a binary ResNet-18 model. (4) Martinez et al. (2020); Bulat et al. (2020; 2021) show that the PReLU activation effectively improves the accuracy of BNNs. ReActNet (Liu et al., 2020b) constructs the RPReLU activation function and uses it before every sign function. (5) Martinez et al.
(2020) reuse the 32-bit activation after BN in their Real-to-Binary Net with a squeeze-and-excitation (SE) attention mechanism. This module can adaptively re-scale the outputs of each binary convolution but needs additional 32-bit operations. Although these techniques can effectively improve accuracy, they increase the number of 32-bit values and floating-point operations, making them not particularly efficient on hardware accelerators. They are closer to mixed-precision neural networks than to the highly efficient binary neural networks one might expect. | The paper proposes two novel BNNs (BaseNet and BoolNet) where most of the parameters are represented in binary format, which is the major difference compared to previous works, where activations in layers are represented as 32-bit floats. Moreover, the proposed networks remove BN and most ReLU activations and replace them with the sign function. To increase the knowledge capacity, the authors use Multi-slice Binary Convolution and Local Adaptive Shifting approaches. The inference performance of BaseNet and BoolNet is evaluated and compared with the inference performance of ResNet-34 using the 32-bit floating-point format. | SP:ab69e1f4ea5129f8fd3d715aff6dd9cbdaedb49b
BoolNet: Streamlining Binary Neural Networks Using Binary Feature Maps | This paper aims to eliminate the 32-bit features of BNNs as much as possible. The paper claims that existing BNNs embed 32-bit features, which can improve the accuracy of BNNs but inevitably lead to overhead during inference. In the proposed network, full-precision batch-norm layers are replaced with 'shifted-sign' layers, full-precision scaling factors are eliminated, and multi-slice binary convolutions and 1-bit shortcuts are used. The network achieves 63% accuracy on ImageNet with various techniques and knowledge distillation. | SP:ab69e1f4ea5129f8fd3d715aff6dd9cbdaedb49b
Heterologous Normalization | 1 INTRODUCTION. Deep neural networks have achieved great success in many areas. Batch Normalization (BN) has become a standard technique for training modern deep networks. BN normalizes the features by the mean and the standard deviation (std) computed from a batch of samples. The coordination between examples helps the learning process. The random selection of examples in the minibatch introduces sampling noise, providing a regularization effect. However, BN's effectiveness diminishes when the batch size becomes smaller, since the noise becomes too large and the batch statistics estimates become inaccurate. That hinders BN's usage in scenarios with limited samples or memory. For example, federated learning, a hot topic in machine learning, aims to train a model across multiple decentralized edge devices or servers to address privacy and security issues. The heterogeneous environments require a training algorithm that is robust to large or small batch sizes. To address the small-batch-size problem, some methods avoid normalizing along the batch dimension. For an NCHW format feature map, let N refer to the batch dimension, C to the channel dimension, and H and W to the spatial height and width dimensions. Layer Normalization (LN) (Ba et al., 2016) computes the mean and the std along the (C, H, W) dimensions. Instance Normalization (IN) (Ulyanov et al., 2016) computes the mean and the std along the (H, W) dimensions. Group Normalization (GN) (Wu & He, 2018) is an intermediate state between Layer Normalization and Instance Normalization. It uses a group of channels within the sample itself to compute the mean and the std. Avoiding normalization along the batch dimension gives up the advantages of BN, resulting in poor performance in many cases. To take advantage of different approaches, Switchable Normalization (SN) (Luo et al., 2018a) combines different normalizations.
It uses learnable parameters to combine BN, IN, and LN linearly. SN introduces additional statistics and computational complexity. Besides, the mean and the std of those methods need to be calculated at inference time. This slows down inference compared to BN, since BN's mean and std are pre-computed from the training data by a moving average. Although different normalizations compute the statistics from different pixel sets (a pixel refers to an element in the feature map), a specific normalization computes the mean and the std from the same pixel set. Thus existing methods can be viewed as homologous normalization. In this paper, we propose Heterologous Normalization (HN), which computes the mean and the std from different pixel sets to take advantage of different normalization methods. Although HN is a general method that can use different strategies to compute the mean and the std, we find that this combination strategy works well in most situations: calculating the mean along the (N, H, W) dimensions, as in BN, while calculating the std along the (N, C, H, W) dimensions. On the one hand, it maintains the advantage of batch normalization when the batch size is large. On the other hand, it enlarges the number of pixels from which the std is calculated, thus alleviating the problem caused by small batch sizes. At inference time, HN's mean and std are pre-computed from the training set by a moving average, just as in BN, preserving inference efficiency. We evaluate HN on various datasets: CIFAR-10, CIFAR-100, CalTech256, and ImageNet. Experiments show that the heterologous combination is effective in most situations. HN surpasses or matches the performance of BN, IN, LN, GN, and SN, with large or small batch sizes, on various datasets. Moreover, HN can be combined with SN to improve performance further.
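The combination strategy just described (a BN-style per-channel mean, but one std shared across all of N, C, H, W) can be sketched in a few lines of NumPy. This is our reading of the description above, not the authors' implementation; in particular, we compute the std of the raw activations over the whole tensor.

```python
import numpy as np

np.random.seed(0)

def heterologous_norm(x, eps=1e-5):
    """HN sketch for NCHW input: per-channel mean over (N, H, W), as in BN,
    but a single std computed over all of (N, C, H, W)."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)  # shape (1, C, 1, 1)
    std = x.std()                                  # one scalar for all pixels
    return (x - mean) / (std + eps)

x = 3.0 * np.random.randn(4, 8, 5, 5) + 1.0
y = heterologous_norm(x)
# Per-channel means are centered as in BN, but every channel shares one scale.
```

Because the std pools every pixel in the tensor, its estimate stays stable even when N is tiny, which matches the motivation given above.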
By analyzing the evolution of the statistics over the course of training, we find that the noise at small batch sizes is mainly caused by fluctuation of the std rather than of the mean. Enlarging the number of pixels from which the std is calculated can successfully alleviate this fluctuation. That explains why HN works well with small batch sizes. We summarize our key contributions as follows: 1) We show that it is unnecessary to estimate normalization statistics from the same pixel set and propose a general Heterologous Normalization that calculates normalization statistics from different pixel sets. 2) We find a specific heterologous method that surpasses or achieves comparable performance to existing homologous methods, with large or small batch sizes, on various datasets. 3) We find that the noise at small batch sizes is mainly caused by fluctuation of the std rather than of the mean. We should strike a balance between generalization and stability by controlling the number of pixels used to calculate the statistics. 2 RELATED WORK. Batch Normalization (BN) (Ioffe & Szegedy, 2015) is effective at accelerating and improving the training of deep neural networks by reducing internal covariate shift. It performs the normalization for each training minibatch along the (N, H, W) dimensions in the case of an NCHW format feature map. Since BN uses the statistics of minibatch examples, its effect depends on the minibatch size. To mitigate this problem, Normalization Propagation (Arpit et al., 2016) uses a data-independent parametric estimate of the mean and standard deviation instead of explicitly calculating them from data. Batch Renormalization (Ioffe, 2017) introduces two extra parameters to correct the fact that the minibatch statistics differ from the population ones. It needs to train the model for a certain number of iterations with Batch Normalization alone, without the correction, and then ramps up the amount of allowed correction.
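The std-fluctuation finding above can be checked with a quick simulation of our own (not the paper's experiment): statistics estimated from few values scatter widely across draws, and the scatter shrinks as the pixel set grows.

```python
import random
import statistics

random.seed(0)

def estimate_spread(sample_size, trials=2000):
    """How much per-batch mean and std estimates of N(0, 1) data vary
    across batches of a given size (std-dev of the estimates themselves)."""
    means, stds = [], []
    for _ in range(trials):
        xs = [random.gauss(0.0, 1.0) for _ in range(sample_size)]
        means.append(statistics.fmean(xs))
        stds.append(statistics.pstdev(xs))
    return statistics.pstdev(means), statistics.pstdev(stds)

small_mean_spread, small_std_spread = estimate_spread(8)    # tiny pixel set
large_mean_spread, large_std_spread = estimate_spread(512)  # many more pixels

# Both estimates stabilize with more pixels, which is what enlarging
# the std's pixel set buys at small batch sizes.
assert small_std_spread > large_std_spread
assert small_mean_spread > large_mean_spread
```

The simulation only shows that pooling more values stabilizes the estimates; the paper's specific claim, that the std fluctuation is the dominant source of harm in BN at small batch sizes, comes from its training-time analysis.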
There is a family of methods that avoid normalizing along the batch dimension. Layer Normalization (LN) (Ba et al., 2016) computes the mean and standard deviation along the (C, H, W) dimensions. Instance Normalization (IN) (Ulyanov et al., 2016) computes the mean and standard deviation along the (H, W) dimensions. When the batch size is 1, Batch Normalization is equivalent to Instance Normalization. Group Normalization (GN) (Wu & He, 2018) is an intermediate state between Layer Normalization and Instance Normalization: it uses a group of channels to compute the mean and the std, while Layer Normalization uses all channels and Instance Normalization uses one channel. To take advantage of different approaches, Switchable Normalization (SN) (Luo et al., 2018a), Exemplar Normalization (Zhang et al., 2020), and Batch-Instance Normalization (BIN) (Nam & Kim, 2018) try to combine different normalizations. Switchable and Exemplar Normalization use learnable parameters to combine Batch, Instance, and Layer Normalization. BIN uses a learnable gate parameter to combine Batch and Instance Normalization. Weight normalization (Salimans & Kingma, 2016) normalizes the filter weights instead of the activations by re-parameterizing the incoming weight vector. Cosine normalization (Luo et al., 2017) normalizes both the filter weights and the activations by using cosine similarity instead of the dot product in neural networks. Some researchers use other statistics instead of the mean and the standard deviation in normalization. Instead of the standard L2 batch normalization, Hoffer et al. (2018) use normalization in L1 and L∞ spaces and show that it can improve numerical stability in low-precision implementations as well as provide computational and memory benefits. Generalized batch normalization (Yuan et al.
, 2019) investigates a variety of alternative deviation measures for scaling and alternative mean measures for centering. Virtual batch normalization (Salimans et al., 2016) and spectral normalization (Miyato et al., 2018) focus on normalization in generative adversarial networks. Self-Normalizing Networks (Klambauer et al., 2017) focus on standard feed-forward (fully-connected) neural networks. Recurrent batch normalization (Cooijmans et al., 2016) modifies batch normalization for use in recurrent networks. Kalman normalization (Wang et al., 2018) estimates the mean and standard deviation of a certain layer by considering the distributions of all its preceding layers, mimicking the merits of the Kalman filtering process. EvalNorm (Singh & Shrivastava, 2019) estimates corrected normalization statistics to use for batch normalization during evaluation. Ren et al. (2016) provide a unifying view of the different normalization approaches. Santurkar et al. (2018), Luo et al. (2018b), and Bjorck et al. (2018) try to explain how batch normalization works. Summers & Dinneen (2020) propose several useful techniques to improve batch normalization. 3 HETEROLOGOUS NORMALIZATION. We first describe some notation that will be used below. For an NCHW format feature map, let U denote the universal pixel set in the same feature layer, U_N the set of pixels belonging to the same example, U_C the set of pixels belonging to the same channel, and U_G the set of pixels belonging to the same group of channels. A family of normalizations can be formalized as: x̂_i = (x_i − μ_{S_i}) / σ_{S'_i} (1), y_i = γ x̂_i + β (2), where x_i is the input and y_i is the output of the normalization. μ is the mean and σ is the standard deviation (std). S_i is the pixel set from which the mean is computed, and S'_i is the pixel set from which the std is computed. γ and β are learned parameters for the affine transformation.
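The formalization above can be instantiated directly: choosing the pixel sets as different reduction axes of an NCHW tensor recovers BN, LN, IN, and GN. The NumPy sketch below is our own illustration (without the learned γ and β) of the homologous case, where the mean and std share one pixel set.

```python
import numpy as np

np.random.seed(0)

def normalize(x, axes, eps=1e-5):
    """Homologous normalization: mean and std computed over the same axes."""
    mean = x.mean(axis=axes, keepdims=True)
    std = x.std(axis=axes, keepdims=True)
    return (x - mean) / (std + eps)

x = np.random.randn(2, 6, 4, 4)       # NCHW feature map
bn_out = normalize(x, (0, 2, 3))      # BN: statistics over N, H, W
ln_out = normalize(x, (1, 2, 3))      # LN: statistics over C, H, W
in_out = normalize(x, (2, 3))         # IN: statistics over H, W

# GN: split C into g groups and normalize each group over (C//g, H, W).
g = 3
gn_out = normalize(x.reshape(2, g, 6 // g, 4, 4), (2, 3, 4)).reshape(x.shape)
```

Heterologous normalization then amounts to passing different axis tuples to the mean and std reductions instead of a single shared one.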
As shown in Table 1, different normalization methods estimate statistics from different pixel sets. BN computes the mean and the std along the (N, H, W) dimensions. The random noise brought by batch statistics estimation is beneficial to generalization. However, the batch statistics estimation brings too much noise when the batch size is small. IN, LN, and GN avoid estimating statistics along the batch dimension. Each example estimates statistics within its own pixel sets, ignoring the information of other examples. Since there are not enough pixels to estimate statistics for BN when the batch size is small, a straightforward method is to extend the pixel set used for statistics calculation. For example, we can use the universal pixel set U or the pixels from a group of channels U_G to calculate statistics. We use Extended Normalization (EN) to refer to that method. In the following, the default number of groups in EN is 1 (the universal set U) when not otherwise mentioned, and we write EN-Gn to denote EN with n groups of channels. Figure 1 shows a visualization of the different normalizations. Although different normalizations compute the statistics from different pixel sets, a specific normalization computes the mean and the std from the same pixel set. Thus existing methods can be viewed as homologous normalization, which has S_i = S'_i (3). In this paper, we propose Heterologous Normalization (HN), which computes the mean and the std from different pixel sets to take advantage of different normalization methods.
That is to say , in HN we have S_i ≠ S'_i ( 4 ) . Although HN is a general method that can use different strategies to compute the mean and the std , we find that this configuration works well in most situations : S_i = U_C , S'_i = U_G ( 5 ) . Specifically , the mean is calculated along the ( N , H , W ) dimensions in the same way as BN , while the std is computed along the ( N , C , H , W ) dimensions in the same way as EN . On the one hand , it maintains BN 's advantage when the batch size is large . On the other hand , it enlarges the number of pixels from which the std is calculated , thus alleviating the problem caused by a small batch size . Switchable Normalization ( SN ) ( Luo et al. , 2018a ) also tries to take advantage of different normalizations by mixing them together . Its mean and std are both linear combinations of the batch , instance , and layer ones . The pixel sets from which the mean and the std are calculated are the same . Different from SN , HN calculates the mean and the std from heterogeneous pixel sets . HN requires fewer statistics and has lower computational complexity than SN . Besides , the mean and the std of HN are pre-computed from the training data by a moving average , the same as in BN . There is no need to compute the mean and the std at inference time . Moreover , since the mean and the std are pre-computed and fixed at inference time , the normalization can further be fused into the convolution operation . That is very helpful for speeding up inference , especially on mobile or embedded devices . For IN , LN , GN , and SN , the mean and the std need to be calculated at inference time . Finally , HN and SN are not mutually exclusive . We can combine HN and SN . For example , we can use the mean of BN and a linear combination of the stds from different normalizations . | The paper introduces Heterologous Normalization (HN), an alternative to normalization techniques in neural networks such as BN, LN, etc.
The key insight is that the optimal mean and standard deviation statistics may be derived from different pixel sets. Based on this observation, the proposed HN calculates the mean in BN's style, while following EN's way of stabilizing the variance. Experiments show HN achieves comparable or slightly better performance than BN at normal batch sizes, while for very small batch sizes (e.g. 4), HN outperforms BN by a large margin. | SP:1f68cd6ad079da702a239da2d2cea3ddaea55ff9
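The default HN configuration described above (BN-style mean over (N, H, W), EN-style std over all of (N, C, H, W)) can be sketched as follows; this is our illustrative reading of the text, not the authors' code.

```python
import numpy as np

def hn_default(x, eps=1e-5):
    """Default HN sketch: S_i = U_C (per-channel mean over N, H, W as in BN),
    S'_i = U with one group (a single std over all N, C, H, W as in EN)."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)   # BN-style per-channel mean
    sigma = np.sqrt(x.var() + eps)               # pools N*C*H*W pixels: stable for tiny batches
    return (x - mu) / sigma

x = np.random.randn(4, 8, 7, 7)  # batch size 4: BN's std estimate would be noisy here
y = hn_default(x)
```

The std pools every pixel in the layer, so its sample size does not shrink with the batch size, which is the text's explanation for HN's small-batch behavior.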
Heterologous Normalization | 1 INTRODUCTION . Deep neural networks have achieved great success in many areas . Batch Normalization ( BN ) has become a standard technique for training modern deep networks . BN normalizes the features by the mean and the standard deviation ( std ) computed from a batch of samples . The coordination between examples helps the learning process . The random selection of examples in the minibatch brings sampling noise , providing a regularization effect . However , its effectiveness diminishes when the batch size becomes smaller , since there is too much noise and the batch statistics estimation becomes inaccurate . That hinders BN 's usage in scenarios lacking samples or memory . For example , federated learning , a hot topic in machine learning , aims to train a model across multiple decentralized edge devices or servers to address privacy and security issues . The heterogeneous environments require a training algorithm that is robust to large or small batch sizes . To address the small batch size problem , some methods try to avoid normalizing along the batch dimension . In the case of an NCHW-format feature map , let N refer to the batch dimension , C to the channel dimension , and H and W to the spatial height and width dimensions . Layer Normalization ( LN ) ( Ba et al. , 2016 ) computes the mean and the std along the ( C , H , W ) dimensions . Instance Normalization ( IN ) ( Ulyanov et al. , 2016 ) computes the mean and the std along the ( H , W ) dimensions . Group Normalization ( GN ) ( Wu & He , 2018 ) is an intermediate state between Layer Normalization and Instance Normalization . It uses a group of channels within the sample itself to compute the mean and the std . Avoiding normalizing along the batch dimension gives up the advantages of BN , resulting in poor performance in many cases . To take advantage of different approaches , Switchable Normalization ( SN ) ( Luo et al. , 2018a ) combines different normalizations .
It uses learnable parameters to combine BN , IN , and LN linearly . SN introduces more statistics and greater computational complexity . Besides , the mean and the std of those methods need to be calculated at inference time . They slow down inference compared to BN , since BN 's mean and std are pre-computed from the training data by a moving average . Although different normalizations compute the statistics from different pixel sets ( a pixel refers to an element in the feature map ) , a specific normalization computes the mean and the std from the same pixel set . Thus existing methods can be viewed as homologous normalization . In this paper , we propose Heterologous Normalization ( HN ) , which computes the mean and the std from different pixel sets to take advantage of different normalization methods . Although HN is a general method that can use different strategies to compute the mean and the std , we find that this combination strategy works well in most situations : calculating the mean along the ( N , H , W ) dimensions in the same way as BN , while calculating the std along the ( N , C , H , W ) dimensions . On the one hand , it maintains the advantage of batch normalization when the batch size is large . On the other hand , it enlarges the number of pixels from which the std is calculated , thus alleviating the problem caused by a small batch size . At inference time , HN 's mean and std are pre-computed from the training set by a moving average , the same as in BN , keeping inference efficient . We evaluate HN on various datasets : CIFAR-10 , CIFAR-100 , CalTech256 , and ImageNet . Experiments show that the heterologous combination is effective in most situations . HN surpasses or achieves performance comparable to BN , IN , LN , GN , and SN , with large or small batch sizes on various datasets . Moreover , HN can be combined with SN to further improve performance .
By analyzing the evolution of the statistics over the course of training , we find that the noise at small batch sizes is mainly caused by fluctuation of the std rather than of the mean . Enlarging the number of pixels from which the std is calculated can successfully alleviate the fluctuation . That explains why HN works well with small batch sizes . We summarize our key contributions as follows : 1 ) We show that it is unnecessary to estimate normalization statistics from the same pixel set and propose a general Heterologous Normalization that calculates normalization statistics from different pixel sets . 2 ) We find a specific heterologous method that surpasses or achieves performance comparable to existing homologous methods , with large or small batch sizes on various datasets . 3 ) We find that the noise at small batch sizes is mainly caused by fluctuation of the std rather than of the mean . An equilibrium between generalization and stability should be struck by controlling the number of pixels used to calculate the statistics . 2 RELATED WORK . Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) is effective at accelerating and improving the training of deep neural networks by reducing internal covariate shift . It performs the normalization for each training minibatch along the ( N , H , W ) dimensions in the case of NCHW-format features . Since BN uses the statistics of minibatch examples , its effect is dependent on the minibatch size . To mitigate this problem , Normalization Propagation ( Arpit et al. , 2016 ) uses a data-independent parametric estimate of the mean and standard deviation instead of explicitly calculating them from data . Batch Renormalization ( Ioffe , 2017 ) introduces two extra parameters to correct for the fact that the minibatch statistics differ from the population ones . It needs to train the model for a certain number of iterations with Batch Normalization alone , without the correction , and then ramps up the amount of allowed correction .
There is a family of methods that avoid normalizing along the batch dimension . Layer Normalization ( LN ) ( Ba et al. , 2016 ) computes the mean and standard deviation along the ( C , H , W ) dimensions . Instance Normalization ( IN ) ( Ulyanov et al. , 2016 ) computes the mean and standard deviation along the ( H , W ) dimensions . When the batch size is 1 , Batch Normalization is equivalent to Instance Normalization . Group Normalization ( GN ) ( Wu & He , 2018 ) is an intermediate state between Layer Normalization and Instance Normalization . It uses a group of channels to compute the mean and the std , while Layer Normalization uses all channels and Instance Normalization uses one channel . To take advantage of different approaches , Switchable Normalization ( SN ) ( Luo et al. , 2018a ) , Exemplar Normalization ( Zhang et al. , 2020 ) , and Batch-Instance Normalization ( BIN ) ( Nam & Kim , 2018 ) try to combine different normalizations . Switchable and Exemplar Normalization use learnable parameters to combine Batch , Instance , and Layer Normalization . BIN uses a learnable gate parameter to combine Batch and Instance Normalization . Weight normalization ( Salimans & Kingma , 2016 ) normalizes the filter weights instead of the activations by re-parameterizing the incoming weight vector . Cosine normalization ( Luo et al. , 2017 ) normalizes both the filter weights and the activations by using cosine similarity instead of the dot product in neural networks . Some researchers try to use other statistics instead of the mean and the standard deviation in normalization . Instead of the standard L2 batch normalization , ( Hoffer et al. , 2018 ) uses normalization in L1 and L∞ spaces and shows that it can improve numerical stability in low-precision implementations as well as provide computational and memory benefits . Generalized batch normalization ( Yuan et al.
, 2019 ) investigates a variety of alternative deviation measures for scaling and alternative mean measures for centering . Virtual batch normalization ( Salimans et al. , 2016 ) and spectral normalization ( Miyato et al. , 2018 ) focus on normalization in generative adversarial networks . Self-Normalizing Networks ( Klambauer et al. , 2017 ) focus on standard feed-forward neural networks ( fully-connected networks ) . Recurrent batch normalization ( Cooijmans et al. , 2016 ) modifies batch normalization for use in recurrent networks . Kalman normalization ( Wang et al. , 2018 ) estimates the mean and standard deviation of a certain layer by considering the distributions of all its preceding layers , mimicking the merits of the Kalman filtering process . EvalNorm ( Singh & Shrivastava , 2019 ) estimates corrected normalization statistics to use for batch normalization during evaluation . ( Ren et al. , 2016 ) provides a unifying view of the different normalization approaches . ( Santurkar et al. , 2018 ) , ( Luo et al. , 2018b ) and ( Bjorck et al. , 2018 ) try to explain how batch normalization works . Summers & Dinneen ( 2020 ) propose several useful techniques to improve batch normalization . 3 HETEROLOGOUS NORMALIZATION . We first describe some notation that will be used later . For an NCHW-format feature map , let U denote the universal pixel set in the same feature layer , U_N the set of pixels that belong to the same example , U_C the set of pixels that belong to the same channel , and U_G the set of pixels that belong to the same group of channels . A family of normalizations can be formalized as : x̂_i = ( x_i − µ_{S_i} ) / σ_{S'_i} ( 1 ) and y_i = γ x̂_i + β ( 2 ) , where x_i is the input and y_i is the output of the normalization , µ is the mean , σ is the standard deviation ( std ) , S_i is the pixel set from which the mean is computed , S'_i is the pixel set from which the std is computed , and γ and β are learned parameters for the affine transformation .
As shown in Table 1 , different normalization methods estimate statistics from different pixel sets . BN computes the mean and the std along the ( N , H , W ) dimensions . The random noise brought by batch statistics estimation is beneficial to generalization . However , the batch statistics estimation brings too much noise when the batch size is small . IN , LN , and GN try to avoid estimating statistics along the batch dimension . Each example estimates statistics within its own pixel sets , ignoring the information of other examples . Since there are not enough pixels to estimate statistics for BN when the batch size is small , a straightforward method is to extend the pixel set for the statistics calculation . For example , we can use the universal pixel set U or the pixels from a group of channels U_G to calculate statistics . We use Extended Normalization ( EN ) to refer to that method . In the following , the default number of groups in EN is 1 ( the universal set U ) when not otherwise mentioned , and we use EN_Gn to denote EN with n groups of channels . Figure 1 shows the visualization of the different normalizations . Although different normalizations compute the statistics from different pixel sets , a specific normalization computes the mean and the std from the same pixel set . Thus existing methods can be viewed as homologous normalization , which has S_i = S'_i ( 3 ) . In this paper , we propose Heterologous Normalization ( HN ) , which computes the mean and the std from different pixel sets to take advantage of different normalization methods .
That is to say , in HN we have S_i ≠ S'_i ( 4 ) . Although HN is a general method that can use different strategies to compute the mean and the std , we find that this configuration works well in most situations : S_i = U_C , S'_i = U_G ( 5 ) . Specifically , the mean is calculated along the ( N , H , W ) dimensions in the same way as BN , while the std is computed along the ( N , C , H , W ) dimensions in the same way as EN . On the one hand , it maintains BN 's advantage when the batch size is large . On the other hand , it enlarges the number of pixels from which the std is calculated , thus alleviating the problem caused by a small batch size . Switchable Normalization ( SN ) ( Luo et al. , 2018a ) also tries to take advantage of different normalizations by mixing them together . Its mean and std are both linear combinations of the batch , instance , and layer ones . The pixel sets from which the mean and the std are calculated are the same . Different from SN , HN calculates the mean and the std from heterogeneous pixel sets . HN requires fewer statistics and has lower computational complexity than SN . Besides , the mean and the std of HN are pre-computed from the training data by a moving average , the same as in BN . There is no need to compute the mean and the std at inference time . Moreover , since the mean and the std are pre-computed and fixed at inference time , the normalization can further be fused into the convolution operation . That is very helpful for speeding up inference , especially on mobile or embedded devices . For IN , LN , GN , and SN , the mean and the std need to be calculated at inference time . Finally , HN and SN are not mutually exclusive . We can combine HN and SN . For example , we can use the mean of BN and a linear combination of the stds from different normalizations . | This paper presents a heterologous normalization method that estimates the mean and the variance from different pixel sets for training deep networks.
It is claimed in the paper that this kind of mixed statistics for normalization provides better and more stable performance when learning a deep model. Several classification experiments on the CIFAR-10, CIFAR-100, Caltech-256, and ImageNet datasets are shown to support the claim. The paper also presents an analysis illustrating the fluctuation in the statistics during training. | SP:1f68cd6ad079da702a239da2d2cea3ddaea55ff9
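The text notes that a normalization with fixed, pre-computed statistics can be fused into the preceding convolution at inference time. The folding algebra is sketched below (our own illustration; the demo uses a 1×1 convolution via `einsum`, and all names are assumptions):

```python
import numpy as np

def fuse_norm_into_conv(W, b, mu, sigma, gamma, beta):
    """Fold y = gamma * (conv(x) - mu) / sigma + beta into the conv itself.

    W: (C_out, C_in, kH, kW) conv weights, b: (C_out,) bias;
    mu, sigma, gamma, beta: per-output-channel fixed statistics/affine params.
    """
    scale = gamma / sigma                       # per output channel
    W_fused = W * scale[:, None, None, None]    # scale every filter
    b_fused = (b - mu) * scale + beta           # shift absorbed into the bias
    return W_fused, b_fused

# demo with a 1x1 convolution: conv-then-normalize == fused conv
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4, 4))
W, b = rng.normal(size=(5, 3, 1, 1)), rng.normal(size=5)
mu, sigma = rng.normal(size=5), rng.uniform(1.0, 2.0, size=5)
gamma, beta = rng.normal(size=5), rng.normal(size=5)
conv = lambda x, W, b: np.einsum('oi,nihw->nohw', W[:, :, 0, 0], x) + b[None, :, None, None]
Wf, bf = fuse_norm_into_conv(W, b, mu, sigma, gamma, beta)
y_fused = conv(x, Wf, bf)
```

Because HN's mean and std are fixed moving averages at inference time, this folding applies to it just as it does to BN.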
Heterologous Normalization | 1 INTRODUCTION . Deep neural networks have achieved great success in many areas . Batch Normalization ( BN ) has become a standard technique for training modern deep networks . BN normalizes the features by the mean and the standard deviation ( std ) computed from a batch of samples . The coordination between examples helps the learning process . The random selection of examples in the minibatch brings sampling noise , providing a regularization effect . However , its effectiveness diminishes when the batch size becomes smaller , since there is too much noise and the batch statistics estimation becomes inaccurate . That hinders BN 's usage in scenarios lacking samples or memory . For example , federated learning , a hot topic in machine learning , aims to train a model across multiple decentralized edge devices or servers to address privacy and security issues . The heterogeneous environments require a training algorithm that is robust to large or small batch sizes . To address the small batch size problem , some methods try to avoid normalizing along the batch dimension . In the case of an NCHW-format feature map , let N refer to the batch dimension , C to the channel dimension , and H and W to the spatial height and width dimensions . Layer Normalization ( LN ) ( Ba et al. , 2016 ) computes the mean and the std along the ( C , H , W ) dimensions . Instance Normalization ( IN ) ( Ulyanov et al. , 2016 ) computes the mean and the std along the ( H , W ) dimensions . Group Normalization ( GN ) ( Wu & He , 2018 ) is an intermediate state between Layer Normalization and Instance Normalization . It uses a group of channels within the sample itself to compute the mean and the std . Avoiding normalizing along the batch dimension gives up the advantages of BN , resulting in poor performance in many cases . To take advantage of different approaches , Switchable Normalization ( SN ) ( Luo et al. , 2018a ) combines different normalizations .
It uses learnable parameters to combine BN , IN , and LN linearly . SN introduces more statistics and greater computational complexity . Besides , the mean and the std of those methods need to be calculated at inference time . They slow down inference compared to BN , since BN 's mean and std are pre-computed from the training data by a moving average . Although different normalizations compute the statistics from different pixel sets ( a pixel refers to an element in the feature map ) , a specific normalization computes the mean and the std from the same pixel set . Thus existing methods can be viewed as homologous normalization . In this paper , we propose Heterologous Normalization ( HN ) , which computes the mean and the std from different pixel sets to take advantage of different normalization methods . Although HN is a general method that can use different strategies to compute the mean and the std , we find that this combination strategy works well in most situations : calculating the mean along the ( N , H , W ) dimensions in the same way as BN , while calculating the std along the ( N , C , H , W ) dimensions . On the one hand , it maintains the advantage of batch normalization when the batch size is large . On the other hand , it enlarges the number of pixels from which the std is calculated , thus alleviating the problem caused by a small batch size . At inference time , HN 's mean and std are pre-computed from the training set by a moving average , the same as in BN , keeping inference efficient . We evaluate HN on various datasets : CIFAR-10 , CIFAR-100 , CalTech256 , and ImageNet . Experiments show that the heterologous combination is effective in most situations . HN surpasses or achieves performance comparable to BN , IN , LN , GN , and SN , with large or small batch sizes on various datasets . Moreover , HN can be combined with SN to further improve performance .
By analyzing the evolution of the statistics over the course of training , we find that the noise at small batch sizes is mainly caused by fluctuation of the std rather than of the mean . Enlarging the number of pixels from which the std is calculated can successfully alleviate the fluctuation . That explains why HN works well with small batch sizes . We summarize our key contributions as follows : 1 ) We show that it is unnecessary to estimate normalization statistics from the same pixel set and propose a general Heterologous Normalization that calculates normalization statistics from different pixel sets . 2 ) We find a specific heterologous method that surpasses or achieves performance comparable to existing homologous methods , with large or small batch sizes on various datasets . 3 ) We find that the noise at small batch sizes is mainly caused by fluctuation of the std rather than of the mean . An equilibrium between generalization and stability should be struck by controlling the number of pixels used to calculate the statistics . 2 RELATED WORK . Batch Normalization ( BN ) ( Ioffe & Szegedy , 2015 ) is effective at accelerating and improving the training of deep neural networks by reducing internal covariate shift . It performs the normalization for each training minibatch along the ( N , H , W ) dimensions in the case of NCHW-format features . Since BN uses the statistics of minibatch examples , its effect is dependent on the minibatch size . To mitigate this problem , Normalization Propagation ( Arpit et al. , 2016 ) uses a data-independent parametric estimate of the mean and standard deviation instead of explicitly calculating them from data . Batch Renormalization ( Ioffe , 2017 ) introduces two extra parameters to correct for the fact that the minibatch statistics differ from the population ones . It needs to train the model for a certain number of iterations with Batch Normalization alone , without the correction , and then ramps up the amount of allowed correction .
There is a family of methods that avoid normalizing along the batch dimension . Layer Normalization ( LN ) ( Ba et al. , 2016 ) computes the mean and standard deviation along the ( C , H , W ) dimensions . Instance Normalization ( IN ) ( Ulyanov et al. , 2016 ) computes the mean and standard deviation along the ( H , W ) dimensions . When the batch size is 1 , Batch Normalization is equivalent to Instance Normalization . Group Normalization ( GN ) ( Wu & He , 2018 ) is an intermediate state between Layer Normalization and Instance Normalization . It uses a group of channels to compute the mean and the std , while Layer Normalization uses all channels and Instance Normalization uses one channel . To take advantage of different approaches , Switchable Normalization ( SN ) ( Luo et al. , 2018a ) , Exemplar Normalization ( Zhang et al. , 2020 ) , and Batch-Instance Normalization ( BIN ) ( Nam & Kim , 2018 ) try to combine different normalizations . Switchable and Exemplar Normalization use learnable parameters to combine Batch , Instance , and Layer Normalization . BIN uses a learnable gate parameter to combine Batch and Instance Normalization . Weight normalization ( Salimans & Kingma , 2016 ) normalizes the filter weights instead of the activations by re-parameterizing the incoming weight vector . Cosine normalization ( Luo et al. , 2017 ) normalizes both the filter weights and the activations by using cosine similarity instead of the dot product in neural networks . Some researchers try to use other statistics instead of the mean and the standard deviation in normalization . Instead of the standard L2 batch normalization , ( Hoffer et al. , 2018 ) uses normalization in L1 and L∞ spaces and shows that it can improve numerical stability in low-precision implementations as well as provide computational and memory benefits . Generalized batch normalization ( Yuan et al.
, 2019 ) investigates a variety of alternative deviation measures for scaling and alternative mean measures for centering . Virtual batch normalization ( Salimans et al. , 2016 ) and spectral normalization ( Miyato et al. , 2018 ) focus on normalization in generative adversarial networks . Self-Normalizing Networks ( Klambauer et al. , 2017 ) focus on standard feed-forward neural networks ( fully-connected networks ) . Recurrent batch normalization ( Cooijmans et al. , 2016 ) modifies batch normalization for use in recurrent networks . Kalman normalization ( Wang et al. , 2018 ) estimates the mean and standard deviation of a certain layer by considering the distributions of all its preceding layers , mimicking the merits of the Kalman filtering process . EvalNorm ( Singh & Shrivastava , 2019 ) estimates corrected normalization statistics to use for batch normalization during evaluation . ( Ren et al. , 2016 ) provides a unifying view of the different normalization approaches . ( Santurkar et al. , 2018 ) , ( Luo et al. , 2018b ) and ( Bjorck et al. , 2018 ) try to explain how batch normalization works . Summers & Dinneen ( 2020 ) propose several useful techniques to improve batch normalization . 3 HETEROLOGOUS NORMALIZATION . We first describe some notation that will be used later . For an NCHW-format feature map , let U denote the universal pixel set in the same feature layer , U_N the set of pixels that belong to the same example , U_C the set of pixels that belong to the same channel , and U_G the set of pixels that belong to the same group of channels . A family of normalizations can be formalized as : x̂_i = ( x_i − µ_{S_i} ) / σ_{S'_i} ( 1 ) and y_i = γ x̂_i + β ( 2 ) , where x_i is the input and y_i is the output of the normalization , µ is the mean , σ is the standard deviation ( std ) , S_i is the pixel set from which the mean is computed , S'_i is the pixel set from which the std is computed , and γ and β are learned parameters for the affine transformation .
As shown in Table 1 , different normalization methods estimate statistics from different pixel sets . BN computes the mean and the std along the ( N , H , W ) dimensions . The random noise brought by batch statistics estimation is beneficial to generalization . However , the batch statistics estimation brings too much noise when the batch size is small . IN , LN , and GN try to avoid estimating statistics along the batch dimension . Each example estimates statistics within its own pixel sets , ignoring the information of other examples . Since there are not enough pixels to estimate statistics for BN when the batch size is small , a straightforward method is to extend the pixel set for the statistics calculation . For example , we can use the universal pixel set U or the pixels from a group of channels U_G to calculate statistics . We use Extended Normalization ( EN ) to refer to that method . In the following , the default number of groups in EN is 1 ( the universal set U ) when not otherwise mentioned , and we use EN_Gn to denote EN with n groups of channels . Figure 1 shows the visualization of the different normalizations . Although different normalizations compute the statistics from different pixel sets , a specific normalization computes the mean and the std from the same pixel set . Thus existing methods can be viewed as homologous normalization , which has S_i = S'_i ( 3 ) . In this paper , we propose Heterologous Normalization ( HN ) , which computes the mean and the std from different pixel sets to take advantage of different normalization methods .
That is to say , in HN we have S_i ≠ S'_i ( 4 ) . Although HN is a general method that can use different strategies to compute the mean and the std , we find that this configuration works well in most situations : S_i = U_C , S'_i = U_G ( 5 ) . Specifically , the mean is calculated along the ( N , H , W ) dimensions in the same way as BN , while the std is computed along the ( N , C , H , W ) dimensions in the same way as EN . On the one hand , it maintains BN 's advantage when the batch size is large . On the other hand , it enlarges the number of pixels from which the std is calculated , thus alleviating the problem caused by a small batch size . Switchable Normalization ( SN ) ( Luo et al. , 2018a ) also tries to take advantage of different normalizations by mixing them together . Its mean and std are both linear combinations of the batch , instance , and layer ones . The pixel sets from which the mean and the std are calculated are the same . Different from SN , HN calculates the mean and the std from heterogeneous pixel sets . HN requires fewer statistics and has lower computational complexity than SN . Besides , the mean and the std of HN are pre-computed from the training data by a moving average , the same as in BN . There is no need to compute the mean and the std at inference time . Moreover , since the mean and the std are pre-computed and fixed at inference time , the normalization can further be fused into the convolution operation . That is very helpful for speeding up inference , especially on mobile or embedded devices . For IN , LN , GN , and SN , the mean and the std need to be calculated at inference time . Finally , HN and SN are not mutually exclusive . We can combine HN and SN . For example , we can use the mean of BN and a linear combination of the stds from different normalizations . | This paper proposes a new technique called Heterologous Normalization to help train modern deep networks. The main method is easy to follow.
In contrast to conventional homologous normalization methods (e.g. BN, IN), HN computes the normalization's mean and standard deviation from different pixel sets. The experiments show HN is slightly better than BN and GN in some cases. | SP:1f68cd6ad079da702a239da2d2cea3ddaea55ff9
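One reading of the suggested HN+SN combination (BN's mean together with a learned mixture of stds) is sketched below. The interface is our own assumption; in practice the mixture weights would be learnable (e.g. softmax-normalized) parameters rather than fixed inputs.

```python
import numpy as np

def hn_sn_mix(x, w, eps=1e-5):
    """BN-style mean, with the std a convex combination of the batch,
    instance, and layer stds, as the text's HN+SN example suggests.

    w: nonnegative mixture weights of shape (3,) summing to 1 (assumed
    to come from softmax-normalized learnable parameters).
    """
    mu = x.mean(axis=(0, 2, 3), keepdims=True)                  # BN mean
    s_bn = np.sqrt(x.var(axis=(0, 2, 3), keepdims=True) + eps)  # batch std
    s_in = np.sqrt(x.var(axis=(2, 3), keepdims=True) + eps)     # instance std
    s_ln = np.sqrt(x.var(axis=(1, 2, 3), keepdims=True) + eps)  # layer std
    sigma = w[0] * s_bn + w[1] * s_in + w[2] * s_ln             # broadcasts to (N, C, 1, 1)
    return (x - mu) / sigma
```

With `w = [1, 0, 0]` this reduces to plain BN-style normalization; the instance and layer terms reintroduce per-sample statistics, so unlike plain HN this variant would again need per-sample computation at inference time.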
Towards Empirical Sandwich Bounds on the Rate-Distortion Function | 1 INTRODUCTION . From storing astronomical images captured by the Hubble telescope , to delivering familiar faces and voices over video chats , data compression , i.e. , communicating the “ same ” information but with fewer bits , is commonplace and indispensable to our digital life , and even arguably lies at the heart of intelligence ( Mahoney , 2009 ) . While for lossless compression , there exist practical algorithms that can compress any discrete data arbitrarily close to the information theory limit ( Ziv & Lempel , 1977 ; Witten et al. , 1987 ) , no such universal algorithm has been found for lossy data compression ( Berger & Gibson , 1998 ) , and significant research efforts have been dedicated to lossy compression algorithms for various data . Recently , deep learning has shown promise for learning lossy compressors from raw data examples , with continually improving compression performance often matching or exceeding traditionally engineered methods ( Minnen et al. , 2018 ; Agustsson et al. , 2020 ; Yang et al. , 2020a ) . However , there are fundamental limits to the performance of any lossy compression algorithm , due to the inevitable trade-off between rate , the average number of bits needed to represent the data , and the distortion incurred by lossy representations . This trade-off is formally described by the rate-distortion ( R-D ) function , for a given source ( i.e. , the data distribution of interest ; referred to as such in information theory ) and distortion metric . The R-D function characterizes the best theoretically achievable rate-distortion performance by any compression algorithm , which can be seen as a lossy-compression counterpart and generalization of Shannon entropy for lossless compression .
Despite its fundamental importance , the R-D function is generally unknown analytically , and establishing it for general data sources , especially real world data , is a difficult problem ( Gibson , 2017 ) . The default method for computing R-D functions , the Blahut-Arimoto algorithm ( Blahut , 1972 ; Arimoto , 1972 ) , only works for discrete data with a known probability mass function and has a complexity exponential in the data dimensionality . Applying it to an unknown data source requires discretization ( if it is continuous ) and estimating the source probabilities by a histogram , both of which introduce errors and are computationally infeasible beyond a couple of dimensions . Previous work characterizing the R-D function of images and videos ( Hayes et al. , 1970 ; Gibson , 2017 ) all assumed a statistical model of the source , making the results dependent on the modeling assumptions . In this work , we make progress on this old problem in information theory using tools from machine learning , and introduce new algorithms for upper and lower bounding the R-D function of a general ( i.e . discrete , continuous , or neither ) , unknown memoryless source . More specifically , 1 . Our upper bound draws from the deep generative modeling toolbox , and is closely related to a class of β-VAEs in learned data compression ( Ballé et al. , 2017 ) ; we clarify how these models optimize a model-independent upper bound on the source rate-distortion function . 2 . We theoretically derive a general lower bound on the R-D function that can in principle be optimized by gradient ascent . Facing the difficulty of this problem , which involves global optimization , we restrict ourselves to squared error distortion and obtain an approximate algorithm . 3 . We experimentally show that our upper bound can recover the R-D function of Gaussian sources , and obtain non-trivial sandwich bounds on high-dimensional data with low intrinsic dimension .
In particular, our sandwich bounds on 128×128 GAN-generated images shed light on the effectiveness of learned compression approaches (Ballé et al., 2021). 4. Although we currently cannot verify its tightness, our estimated R-D upper bound on natural images indicates possible room for improving the image compression performance of state-of-the-art methods by one dB in PSNR, at various bitrates. We begin by reviewing the prerequisite rate-distortion theory in Section 2, then describe our upper and lower bound algorithms in Section 3 and Section 4, respectively. We discuss related work in Section 5, report experimental results in Section 6, and conclude in Section 7. 2 BACKGROUND. Rate-distortion (R-D) theory deals with the fundamental trade-off between the average number of bits per sample (rate) used to represent a data source X and the distortion incurred by the lossy representation X̂. It asks the following question about the limit of lossy compression: for a given data source and a distortion metric (a.k.a. a fidelity criterion), what is the minimum number of bits (per sample) needed to represent the source at a tolerable level of distortion, regardless of the computational complexity of the compression procedure? The answer is given by the rate-distortion function R(D). To introduce it, let the source and its reproduction take values in the sets X and X̂, conventionally called the source and reproduction alphabets, respectively. We define the data source formally by a random variable X ∈ X following a (usually unknown) distribution P_X, and assume a distortion metric ρ : X × X̂ → [0, ∞) has been given, such as the squared error ρ(x, x̂) = ‖x − x̂‖².
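As a concrete reference point (the Gaussian case is later used to validate the upper bound), the R-D function of a scalar Gaussian source N(0, σ²) under this squared-error distortion has the classic closed form R(D) = max(0, ½ ln(σ²/D)) nats. A minimal sketch:

```python
import numpy as np

def gaussian_rd(sigma2: float, D: float) -> float:
    """R(D) of a scalar Gaussian source N(0, sigma2) under squared error, in nats.

    Classic closed form: R(D) = max(0, 0.5 * log(sigma2 / D)). The rate hits
    zero once the allowed distortion exceeds the source variance, since we can
    then simply reproduce the mean.
    """
    if D <= 0:
        raise ValueError("distortion threshold D must be positive")
    return max(0.0, 0.5 * np.log(sigma2 / D))

# Sweep a few distortion levels to trace out the (D, R(D)) curve.
curve = [(D, gaussian_rd(1.0, D)) for D in (0.1, 0.25, 0.5, 1.0)]
```

Any estimated upper bound should lie on or above this curve for a Gaussian source, which makes it a useful sanity check.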
The rate-distortion function is then defined by the following constrained optimization problem: R(D) = inf_{Q_{X̂|X} : E[ρ(X, X̂)] ≤ D} I(X; X̂), (1) where we consider all random transforms Q_{X̂|X} whose expected distortion is within the given threshold D ≥ 0, and minimize the mutual information between the source X and its reproduction X̂.¹ Shannon's lossy source coding theorems (Shannon, 1948; 1959) gave operational significance to the above mathematical definition of R(D), as the minimum achievable rate with which any lossy compression algorithm can code i.i.d. data samples at a distortion level within D. The R-D function thus gives the tightest lower bound on the rate-distortion performance of any lossy compression algorithm, and can inform the design and analysis of such algorithms. If the operational distortion-rate performance of an algorithm lies high above the source R(D)-curve (D, R(D)), then further performance improvement may be expected; otherwise, its performance is already close to theoretically optimal, and we may focus our attention on other aspects of the algorithm. As the R-D function does not have an analytical form in general, we propose to estimate it from data samples, making the standard assumption that various expectations w.r.t. the true data distribution P_X exist and can be approximated by sample averages. For a discrete source, this assumption automatically holds, and the R-D function also provides a lower bound on the Shannon entropy of the discrete data. 3 UPPER BOUND ALGORITHM. For a known discrete source, the Blahut-Arimoto (BA) algorithm converges to R(D) from above via fixed-point equations. For a general unknown source, it is not clear how this can be done. We propose to solve the underlying variational problem approximately by gradient descent. (¹Both the expected distortion and mutual information terms are defined w.r.t. the joint distribution P_X Q_{X̂|X}. We formally describe the general setting of the paper, including the technical definitions, in Appendix A.1.) In exchange for generality and scalability, we lose the optimality guarantee of the BA algorithm, arriving only at a stochastic upper bound on R(D) in general. By R-D theory (Cover & Thomas, 2006), every (distortion, rate) pair lying above the R(D)-curve is in principle realizable by a (possibly expensive) compression algorithm; an upper bound on R(D) therefore reveals what kind of (improved) R-D performance is theoretically possible, without suggesting how it can be practically achieved. Variational Formulation. We adopt the same unconstrained variational objective as the Blahut-Arimoto (BA) algorithm (Blahut, 1972; Arimoto, 1972), in its most general form: L(Q_{X̂|X}, Q_X̂, λ) := E_{x∼P_X}[KL(Q_{X̂|X=x} ‖ Q_X̂)] + λ E_{P_X Q_{X̂|X}}[ρ(X, X̂)], (2) where Q_X̂ is an arbitrary probability measure on X̂ and KL(·‖·) denotes the Kullback-Leibler (KL) divergence. This objective can be seen as a Lagrangian relaxation of the constrained problem defining R(D), in which the first (rate) term is a variational upper bound on the mutual information I(X; X̂), and the second (distortion) term enforces the distortion tolerance constraint in Eq. 1. For each fixed λ > 0, a global minimizer of L yields a point (R, D) on the R(D) curve (Csiszár, 1974), where R and D are simply the two terms of L evaluated at the optimal (Q_{X̂|X}, Q_X̂). Repeating this minimization for various λ then traces out the R(D) curve. Based on this connection, the BA algorithm carries out the minimization by coordinate descent on L via fixed-point equations, each time setting Q_{X̂|X} to be optimal w.r.t.
Q_X̂ and vice versa; the sequence of alternating distributions can be shown to converge and yield a point on the R(D) curve (Csiszár, 1974). Unfortunately, the BA algorithm only applies when X and X̂ are finite and the source distribution P_X is known (or estimated from data samples) in the form of a vector of probabilities P_X(x) for every state x ∈ X. Moreover, the algorithm requires storage and running time exponential in the data dimensionality, since it operates on exhaustive tabular representations of P_X, ρ, Q_{X̂|X}, and Q_X̂. The algorithm therefore quickly becomes infeasible on data with more than a couple of dimensions, not to mention high-dimensional data such as natural images. The fixed-point equations of the BA algorithm are known in general settings (Rezaei et al., 2006), but when X or X̂ is infinite (such as in the continuous case), it is not clear how to carry them out or how to exhaustively represent the measures Q_X̂ and Q_{X̂|X=x} for each x ∈ X. Proposed Method. To avoid these difficulties, we propose to apply (stochastic) gradient descent on L w.r.t. flexibly parameterized variational distributions Q_{X̂|X} and Q_X̂. The distributions can be members of any variational family, as long as Q_{X̂|X=x} is absolutely continuous w.r.t. Q_X̂ for (P_X-almost) all x ∈ X; this technical condition ensures their KL divergence is well defined. It is easily satisfied when X̂ is discrete, by requiring the support of Q_{X̂|X=x} to be contained in that of Q_X̂ for all x ∈ X, and in the continuous case, by representing both measures in terms of probability density functions (e.g., normalizing flows (Kobyzev et al., 2021)). In this work we represent Q_X̂ and Q_{X̂|X} by parametric distributions, and predict the parameters of each Q_{X̂|X=x} by an encoder neural network φ(x), as in amortized inference (Kingma & Welling, 2014). Given i.i.d.
X samples, we optimize the parameters of the variational distributions by SGD on L; at convergence, the estimates of the rate and distortion terms of L yield a point that in expectation lies on an R-D upper bound R_U(D). The variational objective L (Eq. 2) closely resembles the negative ELBO (NELBO) objective of a β-VAE (Higgins et al., 2017), if we regard the reproduction alphabet X̂ as the "latent space". The connection is immediate when X̂ is continuous and a squared error ρ specifies the density of a Gaussian likelihood p(x|x̂) ∝ exp(−‖x − x̂‖²). However, unlike in data compression, where X̂ is determined by the application (and often equal to X for a full-reference distortion), the latent space in a (β-)VAE typically has a lower dimension than X; a decoder network is then used to parameterize a likelihood model in the data space. To capture this setup, we introduce a new, arbitrary latent space Z on which we define variational distributions Q_{Z|X}, Q_Z, and a (possibly stochastic) decoder function ω : Z → X̂. This results in the extended objective (with L being the special case of an identity ω): J(Q_{Z|X}, Q_Z, ω, λ) := E_{x∼P_X}[KL(Q_{Z|X=x} ‖ Q_Z)] + λ E_{P_X Q_{Z|X}}[ρ(X, ω(Z))]. (3) How does this relate to the original rate-distortion problem? We note that the same results from rate-distortion theory apply once we identify a new distortion function ρ_ω(x, z) := ρ(x, ω(z)) and treat Z as the reproduction alphabet. Then for each fixed decoder, we may define an ω-dependent rate-distortion function, R_ω(D) := inf_{Q_{Z|X} : E[ρ_ω(X, Z)] ≤ D} I(X; Z). The minimum of J w.r.t. the variational distributions produces a point on the R_ω(D) curve. Moreover, as a consequence of the data processing inequality I(X; Z) ≥ I(X; ω(Z)), we can prove that R_ω(D) ≥ R(D) for any ω (Theorem A.3).
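To make the objective concrete, here is a minimal numpy sketch of a Monte Carlo estimate of the two terms of Eq. 3, assuming (hypothetically) a diagonal-Gaussian Q_{Z|X}, a standard-normal prior Q_Z, and squared-error ρ; the paper itself allows arbitrary variational families and decoders:

```python
import numpy as np

rng = np.random.default_rng(0)

def j_terms(x, enc_mean, enc_std, decoder):
    """Monte Carlo estimate of the rate and distortion terms of Eq. 3.

    Assumes Q_{Z|X=x} = N(enc_mean(x), diag(enc_std(x)^2)) and Q_Z = N(0, I),
    so the KL (rate) term is available in closed form; the distortion term is
    estimated with one reparameterized sample of Z per data point.
    """
    mu, sigma = enc_mean(x), enc_std(x)
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
    rate = 0.5 * np.sum(mu**2 + sigma**2 - 1.0 - 2.0 * np.log(sigma), axis=1)
    z = mu + sigma * rng.standard_normal(mu.shape)    # reparameterization trick
    distortion = np.sum((x - decoder(z)) ** 2, axis=1)  # squared-error rho
    return rate.mean(), distortion.mean()

# Toy check with a (hypothetical) identity encoder/decoder and fixed noise scale.
x = rng.standard_normal((2000, 2))
rate, dist = j_terms(x, lambda v: v, lambda v: np.full_like(v, 0.1), lambda z: z)
lam = 1.0
loss = rate + lam * dist  # the Lagrangian J at slope lam
```

In practice `enc_mean`, `enc_std`, and `decoder` would be neural networks, and `loss` would be minimized by SGD; at convergence, the pair (dist, rate) lies (in expectation) on an upper bound of the source R-D function.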
Moreover, the inequality is tight for a bijective ω, offering some theoretical support for the use of sub-pixel instead of upsampled convolutions in the decoders of image compression autoencoders (Theis et al., 2017; Cheng et al., 2020). We can now minimize the NELBO objective (Eq. 3) w.r.t. the parameters of (Q_{Z|X}, Q_Z, ω), similar to training a β-VAE, knowing that we are optimizing an upper bound on the information R-D function of the data source. This can be seen as a generalization of the lossless case (with a countable X), where minimizing the NELBO minimizes an upper bound on the Shannon entropy of the source (Frey & Hinton, 1997), the limit of lossless compression. The tightness of our bound depends on the choice of variational distributions. The freedom to define them over any suitable latent space Z can simplify the modeling task (for which there are many tools (Salakhutdinov, 2015; Kobyzev et al., 2021)); e.g., we can work with densities on a continuous Z, even if X̂ is high-dimensional and discrete. We can also treat Z as the concatenation of sub-vectors [Z_1, Z_2, ..., Z_L], and parameterize Q_Z in terms of simpler component distributions, Q_Z = ∏_{l=1}^{L} Q_{Z_l|Z_{<l}} (similarly for Q_{Z|X}). We exploit these properties in our experiments on images. | This paper aims to provide stronger upper and lower bounds for the RD function of arbitrary sources. The authors handle unknown sources by requiring only i.i.d. samples. Specifically, the authors derive an upper bound to the RD function using a $\beta$-VAE-like generative model, which has some similarities to the Blahut-Arimoto (BA) algorithm with two distinctions: 1) they do not restrict the source to be low-dimensional discrete data, and 2) they use gradient descent to fit the parameters of the variational distributions $Q_{\hat{X}|X}$ and $Q_{\hat{X}}$. They also derive a lower bound to the RD function using the dual form of the RD function.
Finally, the authors provide experimental results to show that the proposed upper bound is, in fact, exact for random Gaussian samples. For more complex data such as banana-shaped sources and their high-dimensional projections, they show that the proposed upper and lower bounds are tight. Furthermore, the proposed upper bounds on the RD function of natural images indicate that state-of-the-art image compression methods can still be improved by at least one dB in PSNR. | SP:2193ff0a8a22eb79b52df0d0472a58012bb064e9
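For reference, the Blahut-Arimoto baseline discussed above can be sketched for a small discrete source. This is a standard textbook version with a fixed-λ parameterization, intended only to illustrate why its exhaustive tabular representations blow up with dimension:

```python
import numpy as np

def blahut_arimoto(p_x, dist, lam, n_iter=500):
    """Blahut-Arimoto fixed-point iterations for a discrete source.

    p_x  : (n,) source pmf;  dist : (n, m) table of rho(x, x_hat);
    lam  : Lagrange multiplier (negative slope of the R-D curve).
    Returns (rate_in_nats, expected_distortion), one point on R(D).
    Note the exhaustive tables: for d-dimensional data with k states per
    dimension, n = k**d, which is why BA is infeasible beyond a few dims.
    """
    n, m = dist.shape
    q = np.full(m, 1.0 / m)              # marginal Q_{X_hat}, initialized uniform
    cond = np.full((n, m), 1.0 / m)
    for _ in range(n_iter):
        # Optimal conditional for fixed marginal: Q(x_hat|x) ∝ q(x_hat) e^{-lam*rho}
        cond = q[None, :] * np.exp(-lam * dist)
        cond /= cond.sum(axis=1, keepdims=True)
        # Optimal marginal for fixed conditional
        q = p_x @ cond
    rate = np.sum(p_x[:, None] * cond * np.log(cond / q[None, :]))
    distortion = np.sum(p_x[:, None] * cond * dist)
    return rate, distortion

# Binary symmetric source with Hamming distortion: known R(D) = ln 2 - H_b(D).
p = np.array([0.5, 0.5])
rho = 1.0 - np.eye(2)
R, D = blahut_arimoto(p, rho, lam=2.0)
```

Each choice of `lam` yields one (distortion, rate) point; sweeping it traces the curve, exactly as the λ-sweep does in the variational approach.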
Towards Empirical Sandwich Bounds on the Rate-Distortion Function | The paper proposes to use ML to establish lower and upper bounds on rate distortion for general sources, thus going beyond the Blahut-Arimoto algorithm (which assumes discrete sources with known PMF, and has complexity exponential in the dimension of the data). Some theoretical results are provided, extending prior results. Some numerical results are provided. | SP:2193ff0a8a22eb79b52df0d0472a58012bb064e9
Towards Empirical Sandwich Bounds on the Rate-Distortion Function | 1 INTRODUCTION . From storing astronomical images captured by the Hubble telescope , to delivering familiar faces and voices over video chats , data compression , i.e. , communicating the “ same ” information but with less bits , is commonplace and indispensable to our digital life , and even arguably lies at the heart of intelligence ( Mahoney , 2009 ) . While for lossless compression , there exist practical algorithms that can compress any discrete data arbitrarily close to the information theory limit ( Ziv & Lempel , 1977 ; Witten et al. , 1987 ) , no such universal algorithm has been found for lossy data compression ( Berger & Gibson , 1998 ) , and significant research efforts have dedicated to lossy compression algorithms for various data . Recently , deep learning has shown promise for learning lossy compressors from raw data examples , with continually improving compression performance often matching or exceeding traditionally engineered methods ( Minnen et al. , 2018 ; Agustsson et al. , 2020 ; Yang et al. , 2020a ) . However , there are fundamental limits to the performance of any lossy compression algorithm , due to the inevitable trade-off between rate , the average number of bits needed to represent the data , and the distortion incurred by lossy representations . This trade-off is formally described by the rate-distortion ( R-D ) function , for a given source ( i.e. , the data distribution of interest ; referred to as such in information theory ) and distortion metric . The R-D function characterizes the best theoretically achievable rate-distortion performance by any compression algorithm , which can be seen as a lossy-compression counterpart and generalization of Shannon entropy for lossless compression . 
Despite its fundamental importance , the R-D function is generally unknown analytically , and establishing it for general data sources , especially real world data , is a difficult problem ( Gibson , 2017 ) . The default method for computing R-D functions , the Blahut-Arimoto algorithm ( Blahut , 1972 ; Arimoto , 1972 ) , only works for discrete data with a known probability mass function and has a complexity exponential in the data dimensionality . Applying it to an unknown data source requires discretization ( if it is continuous ) and estimating the source probabilities by a histogram , both of which introduce errors and are computationally infeasible beyond a couple of dimensions . Previous work characterizing the R-D function of images and videos ( Hayes et al. , 1970 ; Gibson , 2017 ) all assumed a statistical model of the source , making the results dependent on the modeling assumptions . In this work , we make progress on this old problem in information theory using tools from machine learning , and introduce new algorithms for upper and lower bounding the R-D function of a general ( i.e. , discrete , continuous , or neither ) , unknown memoryless source . More specifically , 1 . Our upper bound draws from the deep generative modeling toolbox , and is closely related to a class of β-VAEs in learned data compression ( Ballé et al. , 2017 ) ; we clarify how these models optimize a model-independent upper bound on the source rate-distortion function . 2 . We theoretically derive a general lower bound on the R-D function that can in principle be optimized by gradient ascent . Facing the difficulty of the problem involving global optimization , we restrict to a squared error distortion and obtain an approximate algorithm . 3 . We experimentally show that our upper bound can recover the R-D function of Gaussian sources , and obtain non-trivial sandwich bounds on high-dimension data with low intrinsic dimension . 
In particular , our sandwich bounds on 128×128 GAN-generated images shed light on the effectiveness of learned compression approaches ( Ballé et al. , 2021 ) . 4 . Although we currently can not verify its tightness , our estimated R-D upper bound on natural images indicates possible room for improving the image compression performance of state-of-the-art methods by one dB in PSNR , at various bitrates . We begin by reviewing the prerequisite rate-distortion theory in Section 2 , then describe our upper and lower bound algorithms in Section 3 and Section 4 , respectively . We discuss related work in Section 5 , report experimental results in Section 6 , and conclude in Section 7 . 2 BACKGROUND . Rate-distortion ( R-D ) theory deals with the fundamental trade-off between the average number of bits per sample ( rate ) used to represent a data source X and the distortion incurred by the lossy representation X̂ . It asks the following question about the limit of lossy compression : for a given data source and a distortion metric ( a.k.a. , a fidelity criterion ) , what is the minimum number of bits ( per sample ) needed to represent the source at a tolerable level of distortion , regardless of the computation complexity of the compression procedure ? The answer is given by the rate-distortion function R ( D ) . To introduce it , let the source and its reproduction take values in the sets X and X̂ , conventionally called the source and reproduction alphabets , respectively . We define the data source formally by a random variable X ∈ X following a ( usually unknown ) distribution PX , and assume a distortion metric ρ : X × X̂ → [ 0 , ∞ ) has been given , such as the squared error ρ ( x , x̂ ) = ‖x − x̂‖2 . 
The rate-distortion function is then defined by the following constrained optimization problem , R ( D ) = inf QX̂|X : E [ ρ ( X , X̂ ) ] ≤ D I ( X ; X̂ ) , ( 1 ) where we consider all random transforms QX̂|X whose expected distortion is within the given threshold D ≥ 0 , and minimize the mutual information between the source X and its reproduced representation X̂ 1 . ( Footnote 1 : Both the expected distortion and mutual information terms are defined w.r.t . the joint distribution PXQX̂|X . We formally describe the general setting of the paper , including the technical definitions , in Appendix A.1 . ) Shannon ’ s lossy source coding theorems ( Shannon , 1948 ; 1959 ) gave operational significance to the above mathematical definition of R ( D ) , as the minimum achievable rate with which any lossy compression algorithm can code i.i.d . data samples at a distortion level within D. The R-D function thus gives the tightest lower bound on the rate-distortion performance of any lossy compression algorithm , and can inform the design and analysis of such algorithms . If the operational distortion-rate performance of an algorithm lies high above the source R ( D ) -curve ( D , R ( D ) ) , then further performance improvement may be expected ; otherwise , its performance is already close to theoretically optimal , and we may focus our attention on other aspects of the algorithm . As the R-D function does not have an analytical form in general , we propose to estimate it from data samples , making the standard assumption that various expectations w.r.t . the true data distribution PX exist and can be approximated by sample averages . For a discrete source , this assumption automatically holds , and the R-D function also provides a lower bound on the Shannon entropy of discrete data . 3 UPPER BOUND ALGORITHM . For a known discrete source , the Blahut-Arimoto ( BA ) algorithm converges to R ( D ) from above by fixed-point equations . For a general unknown source , it is not clear how this can be done . We propose to solve the underlying variational problem approximately by gradient descent . In exchange for generality and scalability , we lose the optimality guarantee of the BA algorithm , only arriving at a stochastic upper bound of R ( D ) in general . By R-D theory ( Cover & Thomas , 2006 ) , every ( distortion , rate ) pair lying above the R ( D ) -curve is in principle realizable by a ( possibly expensive ) compression algorithm ; an upper bound on R ( D ) , therefore , reveals what kind of ( improved ) R-D performance is theoretically possible , without suggesting how it can be practically achieved . Variational Formulation . We adopt the same unconstrained variational objective as the Blahut-Arimoto ( BA ) algorithm ( Blahut , 1972 ; Arimoto , 1972 ) , in its most general form , L ( QX̂|X , QX̂ , λ ) : = Ex∼PX [ KL ( QX̂|X=x ‖ QX̂ ) ] + λ EPXQX̂|X [ ρ ( X , X̂ ) ] , ( 2 ) where QX̂ is an arbitrary probability measure on X̂ and KL ( ·‖· ) denotes the Kullback-Leibler ( KL ) divergence . This objective can be seen as a Lagrangian relaxation of the constrained problem defining R ( D ) , where the first ( rate ) term is a variational upper bound on the mutual information I ( X ; X̂ ) , and the second ( distortion ) term enforces the distortion tolerance constraint in Eq . 1 . For each fixed λ > 0 , a global minimizer of L yields a point ( R , D ) on the R ( D ) curve ( Csiszar , 1974 ) , where R and D are simply the two terms of L evaluated at the optimal ( QX̂|X , QX̂ ) . Repeating this minimization for various λ then traces out the R ( D ) curve . Based on this connection , the BA algorithm carries out the minimization by coordinate descent on L via fixed-point equations , each time setting QX̂|X to be optimal w.r.t .
QX̂ and vice versa ; the sequence of alternating distributions can be shown to converge and yield a point on the R ( D ) curve ( Csiszar , 1974 ) . Unfortunately , the BA algorithm only applies when X and X̂ are finite , and the source distribution PX known ( or estimated from data samples ) in the form of a vector of probabilities PX ( x ) of every state x ∈ X . Moreover , the algorithm requires storage and running time exponential in the data dimensionality , since it operates on exhaustive tabular representations of PX , ρ , QX̂|X , and QX̂ . The algorithm therefore quickly becomes infeasible on data with more than a couple of dimensions , not to mention high-dimension data such as natural images . The fixed-point equations of the BA algorithm are known in general settings ( Rezaei et al. , 2006 ) , but when X or X̂ is infinite ( such as in the continuous case ) , it is not clear how to carry them out or exhaustively represent the measures QX̂ and QX̂|X=x for each x ∈ X . Proposed Method . To avoid these difficulties , we propose to apply ( stochastic ) gradient descent on L w.r.t . flexibly parameterized variational distributions QX̂|X and QX̂ . The distributions can be members from any variational family , as long as QX̂|X=x is absolutely continuous w.r.t . QX̂ for ( PX -almost ) all x ∈ X . This technical condition ensures their KL divergence is well defined . This is easily satisfied , when X̂ is discrete , by requiring the support of QX̂|X=x to be contained in that of QX̂ for all x ∈ X ; and in the continuous case , by representing both measures in terms of probability density functions ( e.g. , normalizing flows ( Kobyzev et al. , 2021 ) ) . In this work we represent QX̂ and QX̂|X by parametric distributions , and predict the parameters of each QX̂|X=x by an encoder neural network φ ( x ) as in amortized inference ( Kingma & Welling , 2014 ) . Given i.i.d . 
X samples , we optimize the parameters of the variational distributions by SGD on L ; at convergence , the estimates of the rate and distortion terms of L yield a point that in expectation lies on an R-D upper bound RU ( D ) . The variational objective L ( Eq . 2 ) closely resembles the negative ELBO ( NELBO ) objective of a β-VAE ( Higgins et al. , 2017 ) , if we regard the reproduction alphabet X̂ as the “ latent space ” . The connection is immediate when X̂ is continuous and a squared error ρ specifies the density of a Gaussian likelihood p ( x|x̂ ) ∝ exp ( −‖x − x̂‖2 ) . However , unlike in data compression , where X̂ is determined by the application ( and often equal to X for a full-reference distortion ) , the latent space in a ( β- ) VAE typically has a lower dimension than X ; a decoder network is then used to parameterize a likelihood model in the data space . To capture this setup , we introduce a new , arbitrary latent space Z on which we define variational distributions QZ|X , QZ , and a ( possibly stochastic ) decoder function ω : Z → X̂ . This results in an extended objective ( with L being the special case of an identity ω ) , J ( QZ|X , QZ , ω , λ ) : = Ex∼PX [ KL ( QZ|X=x ‖ QZ ) ] + λ EPXQZ|X [ ρ ( X , ω ( Z ) ) ] . ( 3 ) How does this relate to the original rate-distortion problem ? We note that the same results from rate-distortion theory apply , once we identify a new distortion function ρω ( x , z ) : = ρ ( x , ω ( z ) ) and treat Z as the reproduction alphabet . Then for each fixed decoder , we may define an ω-dependent rate-distortion function , Rω ( D ) : = inf QZ|X : E [ ρω ( X , Z ) ] ≤ D I ( X ; Z ) . The minimum of J w.r.t . the variational distributions produces a point on the Rω ( D ) curve . Moreover , as a consequence of the data processing inequality I ( X ; Z ) ≥ I ( X ; ω ( Z ) ) , we can prove that Rω ( D ) ≥ R ( D ) for any ω ( Theorem A.3 ) .
Moreover , the inequality is tight for a bijective ω , offering some theoretical support for the use of sub-pixel instead of upsampled convolutions in the decoder of image compression autoencoders ( Theis et al. , 2017 ; Cheng et al. , 2020 ) . We can now minimize the NELBO objective Eq . 3 w.r.t . parameters of ( QZ|X , QZ , ω ) similar to training a β-VAE , knowing that we are optimizing an upper bound on the information R-D function of the data source . This can be seen as a generalization of the lossless case ( with a countable X ) , where minimizing the NELBO minimizes an upper bound on the Shannon entropy of the source ( Frey & Hinton , 1997 ) , the limit of lossless compression . The tightness of our bound depends on the choice of variational distributions . The freedom to define them over any suitable latent space Z can simplify the modeling task ( for which there are many tools ( Salakhutdinov , 2015 ; Kobyzev et al. , 2021 ) ) ; e.g. , we can work with densities on a continuous Z , even if X̂ is high-dimensional and discrete . We can also treat Z as the concatenation of sub-vectors [ Z1 , Z2 , ... , ZL ] , and parameterize QZ in terms of simpler component distributions QZ = ∏ L l=1 QZl|Z < l ( similarly for QZ|X ) . We exploit these properties in our experiments on images . | This paper considers the problem of estimating the rate-distortion function R(D) for arbitrary sources with unknown distribution. An upper bound is established using a variational distribution and is estimated using an iterative coordinate descent algorithm inspired by the Blahut-Arimoto algorithm. A lower bound is established using a parameterization of the dual form of the rate-distortion function by Csiszar. These are then evaluated using neural network encoder/decoders on Gaussian and Banana sources, and tested on natural images, suggesting that there is room for improved compression rates achieved by current state-of-the-art image compression algorithms.
| SP:2193ff0a8a22eb79b52df0d0472a58012bb064e9 |
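The Blahut-Arimoto fixed-point iteration discussed in the text can be sketched for a small known discrete source as follows. This is a minimal illustration under our own naming and defaults, not the paper's code; it computes one (rate, distortion) point per Lagrange multiplier λ by alternating the two fixed-point updates.

```python
import numpy as np

def blahut_arimoto(p_x, dist, lam, n_iter=500):
    """One point on the rate-distortion curve of a known discrete source.

    p_x  : (n,) source pmf over the source alphabet
    dist : (n, m) distortion matrix rho(x, x_hat)
    lam  : Lagrange multiplier trading rate against distortion (in nats)
    Returns (rate_in_bits, expected_distortion).
    """
    m = dist.shape[1]
    q = np.full(m, 1.0 / m)                 # marginal over reproductions, Q_Xhat
    for _ in range(n_iter):
        # step 1: optimal conditional Q_{Xhat|X} for the current marginal
        w = q * np.exp(-lam * dist)         # (n, m), broadcasts over rows
        cond = w / w.sum(axis=1, keepdims=True)
        # step 2: optimal marginal for the current conditional
        q = p_x @ cond
    joint = p_x[:, None] * cond
    D = float((joint * dist).sum())                 # expected distortion
    R = float((joint * np.log2(cond / q)).sum())    # mutual information I(X; Xhat)
    return R, D
```

For a fair binary source with Hamming distortion, the returned pair should fall on the analytic curve R(D) = 1 − H_b(D), which gives a quick correctness check.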
Automatic Termination for Hyperparameter Optimization | 1 INTRODUCTION . While the performance of machine learning algorithms crucially depends on their hyperparameters , setting them correctly is typically a tedious and expensive task . Hyperparameter optimization ( HPO ) emerged as a new sub-field in machine learning that tries to automatically determine how to configure a machine learning algorithm . One of the most successful strategies for HPO is Bayesian optimization ( BO ; Močkus , 1975 ; Chen et al. , 2018 ; Snoek et al. , 2012 ; Melis et al. , 2018 ) , which iteratively trains a probabilistic model on the evaluations of the tuned algorithm to select the most promising next candidate point that trades off exploration and exploitation . In practice , the quality of the final solution found by BO heavily depends on a user-defined budget , such as wall-clock time or the number of iterations , which needs to be defined in advance . If this budget is too small , BO might return hyperparameters that result in poor predictive performance . If the budget is too large , compute resources will be wasted and , in some cases , it may result in overfitting as we will show in our experiments . Automatically stopping BO is a rather under-explored topic in the literature . A simple baseline is to stop if BO has not found a better solution than the current best incumbent for some successive iterations , which is in the same vein as early stopping for neural network training . Another approach is to track the probability of improvement ( Lorenz et al. , 2016 ) or expected improvement ( Nguyen et al. , 2017 ) , and stop the optimization process once it falls below a given threshold . However , determining this threshold may in practice be less intuitive than setting the number of iterations or the wall-clock time . Instead of stopping BO completely , McLeod et al . ( 2018 ) propose to switch to local optimization when the global regret is smaller than a pre-defined target .
This condition can also be used to terminate BO early , but it comes with additional complexity such as identifying a ( convex ) region for local optimization and again a predefined budget . In this work , we propose a simple and interpretable automatic termination criterion for BO . In our criterion , we construct a high-probability confidence bound on the regret ( i.e. , the difference between our current solution and the global optimum ) exploiting the probabilistic model of the objective . Thus , users are now asked to specify a desired tolerance that defines how accurate the final solution should be compared to the global optimum . In addition , we propose to determine the threshold via a cross-validation estimate of the generalization error . This choice takes into account the irreducible discrepancy between the actual objective and the target function optimized via BO , namely the difference between the performance on new data ( i.e. , the population risk ) and the validation error . Our extensive empirical evaluation on a variety of HPO and NAS benchmarks suggests that our method is more robust and effective in maintaining the final solution quality than common baselines ( Lorenz et al. , 2016 ; Nguyen et al. , 2017 ) . We also surface overfitting effects in HPO on both small and large datasets , arguably an overlooked problem in the literature , and demonstrate that our termination criterion helps to mitigate it . 2 BACKGROUND . Bayesian optimization ( BO ) refers to methods of optimizing a black-box objective f : Γ → R in an iterative manner . At every step t , a learner selects an input γt ∈ Γ and observes a noisy output yt = f ( γt ) + εt , where εt is typically assumed to be i.i.d . ( sub ) -Gaussian noise with some variance ( proxy ) σ2ε . The decision of the next input to evaluate depends on a probabilistic model , used to approximate the objective f , and an acquisition function , which determines the decision rule .
A popular choice for the probabilistic model is a Gaussian process ( GP ) : f ∼ GP ( µ , κ ) ( Rasmussen & Williams , 2006 ) , specified by some mean function µ : Γ → R and some kernel κ : Γ × Γ → R. As observations y1 : t = [ y1 , . . . , yt ] > for the selected inputs Gt = { γ1 , . . . , γt } are being collected , they are used to update the posterior belief of the model , defined by the posterior mean µt ( γ ) and variance σ2t ( γ ) as : µt ( γ ) = κt ( γ ) > ( Kt + σ2ε I ) −1 y1 : t , ( 1 ) σ2t ( γ ) = κ ( γ , γ ) − κt ( γ ) > ( Kt + σ2ε I ) −1 κt ( γ ) , ( 2 ) where ( Kt ) i , j = κ ( γi , γj ) and κt ( γ ) > = [ κ ( γ1 , γ ) , . . . , κ ( γt , γ ) ] . The next input to query is determined by an acquisition function that aims to trade off exploration and exploitation . Common choices include probability of improvement ( Kushner , 1963 ) , entropy search ( Hennig & Schuler , 2012 ) , and the GP upper-confidence bound ( Srinivas et al. , 2010 ) , to name a few . The convergence of BO can be quantified via the ( simple ) regret , i.e. , the sub-optimality in function value : rt : = f ( γ∗t ) − f ( γ∗ ) , ( 3 ) where γ∗ is the global optimizer of f and γ∗t = arg minγ∈Gt f ( γ ) . Specifying an adequate tolerance that defines how small the regret should be to terminate BO is of high importance , as it determines both the quality and the cost of the solution . However , this criterion can not be directly evaluated in practice , as the input γ∗ and the optimum f ( γ∗ ) are not known . Hyperparameter optimization ( HPO ) is a widely considered application for BO . Consider a supervised learning setting training a machine learning model ( e.g. , a neural network ) M on some feature-response data points D = { ( xi , yi ) } , i = 1 , . . . , n , sampled i.i.d . from some unknown data distribution P . The model is obtained by running a training algorithm ( e.g. , optimizing the weights of the neural network via SGD ) on D , and the model returned also depends on hyperparameters γ ( e.g.
, learning rates used , batch size , etc. ) . We use the notation Mγ ( x ; D ) to refer to the prediction that the model produced by M makes for an input x , when trained with hyperparameters γ on data D. Given some loss function ℓ ( · , · ) , the population risk of the model on unseen data points is given by the expected loss EP [ ℓ ( y , Mγ ( x , D ) ) ] . The main objective of HPO is to identify hyperparameters γ such that the resulting model minimizes the population risk : f ( γ ) = EP [ ℓ ( y , Mγ ( x , D ) ) ] , γ∗ = arg minγ∈Γ f ( γ ) . ( 4 ) In practice , however , the population risk can not be evaluated since P is unknown . Thus , typically , it is estimated on a separate finite validation set DV drawn from the same distribution P . As a result , practical HPO focuses on minimizing the empirical estimate f̂ ( γ ) of the expected loss f ( γ ) , leading to a ( possibly different ) optimizer γ∗D : f̂ ( γ ) = ( 1 / |DV | ) ∑ ( xi , yi ) ∈DV ℓ ( yi , Mγ ( xi , D ) ) , γ∗D = arg minγ∈Γ f̂ ( γ ) . ( 5 ) At its core , BO-based HPO evaluates the noisy empirical estimate f̂ ( γt ) for promising hyperparameters γt for some finite number of iterations , and the final performance after termination heavily depends on that number . Alternatively , one can also terminate BO when sufficiently close to the global optimum , i.e. , using an analogue of the simple regret rt for the validation loss f̂ ( γ ) , with f̂ ( γ∗t ) = minγ∈Gt f̂ ( γ ) : r̂t : = f̂ ( γ∗t ) − f̂ ( γ∗D ) . ( 6 ) Inconsistency in the optimization objective . Importantly , the true HPO objective f ( γ ) in Eq . ( 4 ) and the empirical surrogate f̂ ( γ ) in Eq . ( 5 ) used for tuning by BO generally do not coincide . Therefore , existing BO approaches may yield sub-optimal solutions to the population risk minimization , even if they succeed in globally optimizing f̂ ( γ ) . This issue , however , is typically neglected in practical HPO .
In contrast , we propose a termination condition for BO motivated by the discrepancy in the objectives . 3 TERMINATION CRITERION FOR HYPERPARAMETER OPTIMIZATION . This section firstly motivates why early termination of HPO can be beneficial and then addresses the following two questions : ( 1 ) How to estimate the unknown simple regret , and ( 2 ) What threshold of the simple regret can be used to stop HPO . 3.1 MOTIVATION FOR THE TERMINATION CRITERION . We start by analysing the true discrepancy of interest : between the population risk f ( γ∗t ) at some input γ∗t and the true optimum f ( γ∗ ) . We then observe that this discrepancy is the sum of the statistical error of the empirical BO objective f̂ ( γ ) and the sub-optimality of the BO candidates ( encoded in the simple regret r̂t ) . The key insight of the following proposition is that iteratively reducing r̂t to 0 may not bring any benefits if the statistical error dominates . Proposition 1 . Consider the expected loss f and its estimator f̂ defined in Eqs . ( 4 ) and ( 5 ) , respectively , and assume the statistical error of the estimator is bounded as ||f̂ − f ||∞ ≤ st for some st ≥ 0 . Let γ∗ and γ∗D be their optimizers : γ∗ = arg minγ∈Γ f ( γ ) and γ∗D = arg minγ∈Γ f̂ ( γ ) . Let γ∗t be some candidate solution to minγ∈Γ f̂ ( γ ) with sub-optimality in function value r̂t : = f̂ ( γ∗t ) − f̂ ( γ∗D ) . Then the gap in generalization performance f ( γ∗t ) − f ( γ∗ ) can be bounded as follows : f ( γ∗t ) − f ( γ∗ ) ≤ [ f ( γ∗t ) − f̂ ( γ∗t ) ] + [ f̂ ( γ∗t ) − f̂ ( γ∗D ) ] + [ f̂ ( γ∗D ) − f̂ ( γ∗ ) ] + [ f̂ ( γ∗ ) − f ( γ∗ ) ] ≤ st + r̂t + 0 + st = 2 st + r̂t , where the first bracketed term is at most st , the second equals r̂t , the third is at most 0 , and the fourth is at most st . Moreover , without further restrictions on f , f̂ , γ∗t and γ∗ , the upper bound is tight .
Proof : While the second relation holds by the definition of r̂t , the others can be proved as follows : f ( γ∗t ) − f̂ ( γ∗t ) ≤ |f ( γ∗t ) − f̂ ( γ∗t ) | ≤ maxγ∈Γ |f ( γ ) − f̂ ( γ ) | = ||f̂ − f ||∞ ≤ st , and γ∗D = arg minγ∈Γ f̂ ( γ ) −→ ∀γ ∈ Γ : f̂ ( γ∗D ) − f̂ ( γ ) ≤ 0 −→ f̂ ( γ∗D ) − f̂ ( γ∗ ) ≤ 0 . The proposition provides the discrepancy bound in terms of the statistical error st and the simple regret r̂t . This naturally suggests terminating HPO at a candidate γ∗t for which the simple regret r̂t is of the same magnitude as the statistical error st ( as further reduction in r̂t may not notably improve the true objective ) . However , neither of the quantities st and r̂t is known . Below , we propose a termination criterion that relies on estimates of both quantities . Firstly , we show how to use confidence bounds on f̂ ( γ ) to obtain high-probability upper bounds on the simple regret r̂t ( Srinivas et al. , 2010 ; Ha et al. , 2019 ) . Secondly , we estimate the statistical error st in the case of cross-validation ( Stone , 1974 ; Geisser , 1975 ) , where the model performance is defined as an average over several training-validation runs . To this end , we rely on the statistical characteristics ( i.e. , variance or bias ) of such a cross-validation-based estimator , which are theoretically studied by Nadeau & Bengio ( 2003 ) and Bayle et al . ( 2020 ) . When cross-validation is not used , one could define an intuitive threshold in advance due to the use of an interpretable upper bound on the simple regret . | The paper proposes a new approach for automatic termination of hyperparameter optimization based on Bayesian Optimization (BO). The idea is to construct a high-probability confidence bound on the regret and then determine when to terminate the BO process. Empirical experiments are conducted on various real-world HPO and NAS benchmarks to show the efficacy of the proposed approach. | SP:19b4a9974571084ae3fb7702fdc6171045c66a60 |
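The GP posterior update in Eqs . ( 1 ) - ( 2 ) can be written in a few lines of NumPy. This is a hedged sketch: the RBF kernel, length-scale, noise level, and function names are our illustrative choices, not from the paper.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel kappa(a, b) between row-stacked inputs."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(gammas, y, query, noise_var=1e-8, kernel=rbf):
    """Posterior mean and variance at `query`, following Eqs. (1)-(2):
    mu_t(g)  = k_t(g)^T (K_t + sigma^2 I)^{-1} y_{1:t}
    var_t(g) = kappa(g, g) - k_t(g)^T (K_t + sigma^2 I)^{-1} k_t(g)."""
    K = kernel(gammas, gammas) + noise_var * np.eye(len(gammas))
    k_star = kernel(gammas, query)                  # shape (t, q)
    mu = k_star.T @ np.linalg.solve(K, y)
    var = kernel(query, query).diagonal() - np.sum(
        k_star * np.linalg.solve(K, k_star), axis=0)
    return mu, var
```

With near-zero noise the posterior interpolates the observed evaluations (mean equal to the targets, variance near zero), while far from the data the variance reverts to the prior, which is exactly the behavior acquisition functions exploit.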
Automatic Termination for Hyperparameter Optimization | 1 INTRODUCTION . While the performance of machine learning algorithms crucially depends on their hyperparameters , setting them correctly is typically a tedious and expensive task . Hyperparameter optimization ( HPO ) emerged as a new sub-field in machine learning that tries to automatically determine how to configure a machine learning algorithm . One of the most successful strategies for HPO is Bayesian optimization ( BO ; Močkus , 1975 ; Chen et al. , 2018 ; Snoek et al. , 2012 ; Melis et al. , 2018 ) , which iteratively trains a probabilistic model on the evaluations of the tuned algorithm to select the most promising next candidate point that trades off exploration and exploitation . In practice , the quality of the final solution found by BO heavily depends on a user-defined budget , such as wall-clock time or the number of iterations , which needs to be defined in advance . If this budget is too small , BO might return hyperparameters that result in poor predictive performance . If the budget is too large , compute resources will be wasted and , in some cases , it may result in overfitting as we will show in our experiments . Automatically stopping BO is a rather under-explored topic in the literature . A simple baseline is to stop if BO has not found a better solution than the current best incumbent for some successive iterations , which is in the same vein as early stopping for neural network training . Another approach is to track the probability of improvement ( Lorenz et al. , 2016 ) or expected improvement ( Nguyen et al. , 2017 ) , and stop the optimization process once it falls below a given threshold . However , determining this threshold may in practice be less intuitive than setting the number of iterations or the wall-clock time . Instead of stopping BO completely , McLeod et al . ( 2018 ) propose to switch to local optimization when the global regret is smaller than a pre-defined target .
This condition can also be used to terminate BO early , but it comes with additional complexity such as identifying a ( convex ) region for local optimization and again a predefined budget . In this work , we propose a simple and interpretable automatic termination criterion for BO . In our criterion , we construct a high-probability confidence bound on the regret ( i.e. , the difference between our current solution and the global optimum ) exploiting the probabilistic model of the objective . Thus , users are now asked to specify a desired tolerance that defines how accurate the final solution should be compared to the global optimum . In addition , we propose to determine the threshold via a cross-validation estimate of the generalization error . This choice takes into account the irreducible discrepancy between the actual objective and the target function optimized via BO , namely the difference between the performance on new data ( i.e. , the population risk ) and the validation error . Our extensive empirical evaluation on a variety of HPO and NAS benchmarks suggests that our method is more robust and effective in maintaining the final solution quality than common baselines ( Lorenz et al. , 2016 ; Nguyen et al. , 2017 ) . We also surface overfitting effects in HPO on both small and large datasets , arguably an overlooked problem in the literature , and demonstrate that our termination criterion helps to mitigate it . 2 BACKGROUND . Bayesian optimization ( BO ) refers to methods of optimizing a black-box objective f : Γ → R in an iterative manner . At every step t , a learner selects an input γt ∈ Γ and observes a noisy output yt = f ( γt ) + εt , where εt is typically assumed to be i.i.d . ( sub ) -Gaussian noise with some variance ( proxy ) σ2ε . The decision of the next input to evaluate depends on a probabilistic model , used to approximate the objective f , and an acquisition function , which determines the decision rule .
A popular choice for the probabilistic model is a Gaussian process ( GP ) : f ∼ GP ( µ , κ ) ( Rasmussen & Williams , 2006 ) , specified by some mean function µ : Γ → R and some kernel κ : Γ × Γ → R. As observations y1 : t = [ y1 , . . . , yt ] > for the selected inputs Gt = { γ1 , . . . , γt } are being collected , they are used to update the posterior belief of the model , defined by the posterior mean µt ( γ ) and variance σ2t ( γ ) as : µt ( γ ) = κt ( γ ) > ( Kt + σ2ε I ) −1 y1 : t , ( 1 ) σ2t ( γ ) = κ ( γ , γ ) − κt ( γ ) > ( Kt + σ2ε I ) −1 κt ( γ ) , ( 2 ) where ( Kt ) i , j = κ ( γi , γj ) and κt ( γ ) > = [ κ ( γ1 , γ ) , . . . , κ ( γt , γ ) ] . The next input to query is determined by an acquisition function that aims to trade off exploration and exploitation . Common choices include probability of improvement ( Kushner , 1963 ) , entropy search ( Hennig & Schuler , 2012 ) , and the GP upper-confidence bound ( Srinivas et al. , 2010 ) , to name a few . The convergence of BO can be quantified via the ( simple ) regret , i.e. , the sub-optimality in function value : rt : = f ( γ∗t ) − f ( γ∗ ) , ( 3 ) where γ∗ is the global optimizer of f and γ∗t = arg minγ∈Gt f ( γ ) . Specifying an adequate tolerance that defines how small the regret should be to terminate BO is of high importance , as it determines both the quality and the cost of the solution . However , this criterion can not be directly evaluated in practice , as the input γ∗ and the optimum f ( γ∗ ) are not known . Hyperparameter optimization ( HPO ) is a widely considered application for BO . Consider a supervised learning setting training a machine learning model ( e.g. , a neural network ) M on some feature-response data points D = { ( xi , yi ) } , i = 1 , . . . , n , sampled i.i.d . from some unknown data distribution P . The model is obtained by running a training algorithm ( e.g. , optimizing the weights of the neural network via SGD ) on D , and the model returned also depends on hyperparameters γ ( e.g.
, learning rates used , batch size , etc. ) . We use the notation Mγ ( x ; D ) to refer to the prediction that the model produced by M makes for an input x , when trained with hyperparameters γ on data D. Given some loss function ℓ ( · , · ) , the population risk of the model on unseen data points is given by the expected loss EP [ ℓ ( y , Mγ ( x , D ) ) ] . The main objective of HPO is to identify hyperparameters γ such that the resulting model minimizes the population risk : f ( γ ) = EP [ ℓ ( y , Mγ ( x , D ) ) ] , γ∗ = arg minγ∈Γ f ( γ ) . ( 4 ) In practice , however , the population risk can not be evaluated since P is unknown . Thus , typically , it is estimated on a separate finite validation set DV drawn from the same distribution P . As a result , practical HPO focuses on minimizing the empirical estimate f̂ ( γ ) of the expected loss f ( γ ) , leading to a ( possibly different ) optimizer γ∗D : f̂ ( γ ) = ( 1 / |DV | ) ∑ ( xi , yi ) ∈DV ℓ ( yi , Mγ ( xi , D ) ) , γ∗D = arg minγ∈Γ f̂ ( γ ) . ( 5 ) At its core , BO-based HPO evaluates the noisy empirical estimate f̂ ( γt ) for promising hyperparameters γt for some finite number of iterations , and the final performance after termination heavily depends on that number . Alternatively , one can also terminate BO when sufficiently close to the global optimum , i.e. , using an analogue of the simple regret rt for the validation loss f̂ ( γ ) , with f̂ ( γ∗t ) = minγ∈Gt f̂ ( γ ) : r̂t : = f̂ ( γ∗t ) − f̂ ( γ∗D ) . ( 6 ) Inconsistency in the optimization objective . Importantly , the true HPO objective f ( γ ) in Eq . ( 4 ) and the empirical surrogate f̂ ( γ ) in Eq . ( 5 ) used for tuning by BO generally do not coincide . Therefore , existing BO approaches may yield sub-optimal solutions to the population risk minimization , even if they succeed in globally optimizing f̂ ( γ ) . This issue , however , is typically neglected in practical HPO .
In contrast , we propose a termination condition for BO motivated by the discrepancy in the objectives . 3 TERMINATION CRITERION FOR HYPERPARAMETER OPTIMIZATION . This section firstly motivates why early termination of HPO can be beneficial and then addresses the following two questions : ( 1 ) How to estimate the unknown simple regret , and ( 2 ) What threshold of the simple regret can be used to stop HPO . 3.1 MOTIVATION FOR THE TERMINATION CRITERION . We start by analysing the true discrepancy of interest : between the population risk f ( γ∗t ) at some input γ∗t and the true optimum f ( γ∗ ) . We then observe that this discrepancy is the sum of the statistical error of the empirical BO objective f̂ ( γ ) and the sub-optimality of the BO candidates ( encoded in the simple regret r̂t ) . The key insight of the following proposition is that iteratively reducing r̂t to 0 may not bring any benefits if the statistical error dominates . Proposition 1 . Consider the expected loss f and its estimator f̂ defined in Eqs . ( 4 ) and ( 5 ) , respectively , and assume the statistical error of the estimator is bounded as ||f̂ − f ||∞ ≤ st for some st ≥ 0 . Let γ∗ and γ∗D be their optimizers : γ∗ = arg minγ∈Γ f ( γ ) and γ∗D = arg minγ∈Γ f̂ ( γ ) . Let γ∗t be some candidate solution to minγ∈Γ f̂ ( γ ) with sub-optimality in function value r̂t : = f̂ ( γ∗t ) − f̂ ( γ∗D ) . Then the gap in generalization performance f ( γ∗t ) − f ( γ∗ ) can be bounded as follows : f ( γ∗t ) − f ( γ∗ ) ≤ [ f ( γ∗t ) − f̂ ( γ∗t ) ] + [ f̂ ( γ∗t ) − f̂ ( γ∗D ) ] + [ f̂ ( γ∗D ) − f̂ ( γ∗ ) ] + [ f̂ ( γ∗ ) − f ( γ∗ ) ] ≤ st + r̂t + 0 + st = 2 st + r̂t , where the first bracketed term is at most st , the second equals r̂t , the third is at most 0 , and the fourth is at most st . Moreover , without further restrictions on f , f̂ , γ∗t and γ∗ , the upper bound is tight .
Proof : While the second relation holds by the definition of r̂t , the others can be proved as follows : f ( γ∗t ) − f̂ ( γ∗t ) ≤ |f ( γ∗t ) − f̂ ( γ∗t ) | ≤ maxγ∈Γ |f ( γ ) − f̂ ( γ ) | = ||f̂ − f ||∞ ≤ st , and γ∗D = arg minγ∈Γ f̂ ( γ ) −→ ∀γ ∈ Γ : f̂ ( γ∗D ) − f̂ ( γ ) ≤ 0 −→ f̂ ( γ∗D ) − f̂ ( γ∗ ) ≤ 0 . The proposition provides the discrepancy bound in terms of the statistical error st and the simple regret r̂t . This naturally suggests terminating HPO at a candidate γ∗t for which the simple regret r̂t is of the same magnitude as the statistical error st ( as further reduction in r̂t may not notably improve the true objective ) . However , neither of the quantities st and r̂t is known . Below , we propose a termination criterion that relies on estimates of both quantities . Firstly , we show how to use confidence bounds on f̂ ( γ ) to obtain high-probability upper bounds on the simple regret r̂t ( Srinivas et al. , 2010 ; Ha et al. , 2019 ) . Secondly , we estimate the statistical error st in the case of cross-validation ( Stone , 1974 ; Geisser , 1975 ) , where the model performance is defined as an average over several training-validation runs . To this end , we rely on the statistical characteristics ( i.e. , variance or bias ) of such a cross-validation-based estimator , which are theoretically studied by Nadeau & Bengio ( 2003 ) and Bayle et al . ( 2020 ) . When cross-validation is not used , one could define an intuitive threshold in advance due to the use of an interpretable upper bound on the simple regret . | This paper proposes an automatic termination criterion for Bayesian optimization (BO) by using the upper bound of the simple regret. Various experiments are conducted to demonstrate that with the utilization of the termination criterion, computation and energy consumption can be reduced. The major contribution of this paper lies in the two propositions.
Proposition 1 discusses the relationship between statistical error and optimization error; the authors claim that, due to the irreducible statistical error, it is appropriate to reduce the optimization error $\epsilon_{BO}$ to the same magnitude as the statistical error. Since the statistical error $\epsilon_{st}$ is unknown, the authors adopt an existing cross-validation method to estimate it. Then in Proposition 2, the termination criterion is detailed: BO stops once its regret bound is less than the estimated standard deviation of the statistical error. | SP:19b4a9974571084ae3fb7702fdc6171045c66a60 |
Automatic Termination for Hyperparameter Optimization | 1 INTRODUCTION . While the performance of machine learning algorithms crucially depends on their hyperparameters, setting them correctly is typically a tedious and expensive task. Hyperparameter optimization (HPO) emerged as a new sub-field of machine learning that tries to automatically determine how to configure a machine learning algorithm. One of the most successful strategies for HPO is Bayesian optimization (BO; Močkus, 1975; Chen et al., 2018; Snoek et al., 2012; Melis et al., 2018), which iteratively trains a probabilistic model on the evaluations of the tuned algorithm to select the most promising next candidate point, trading off exploration and exploitation. In practice, the quality of the final solution found by BO heavily depends on a user-defined budget, such as wall-clock time or the number of iterations, which needs to be defined in advance. If this budget is too small, BO might return hyperparameters that result in poor predictive performance. If the budget is too large, compute resources will be wasted and, in some cases, it may result in overfitting, as we will show in our experiments. Automatically stopping BO is a rather under-explored topic in the literature. A simple baseline is to stop if BO has not found a better solution than the current best incumbent for some number of successive iterations, which is in the same vein as early stopping for neural network training. Another approach is to track the probability of improvement (Lorenz et al., 2016) or the expected improvement (Nguyen et al., 2017), and stop the optimization process once it falls below a given threshold. However, determining this threshold may in practice be less intuitive than setting the number of iterations or the wall-clock time. Instead of stopping BO completely, McLeod et al. (2018) propose to switch to local optimization when the global regret is smaller than a pre-defined target.
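The "no improvement for some successive iterations" baseline mentioned above is easy to state precisely. The sketch below is our own illustration (the function name and the `min_delta` slack are hypothetical), not code from any of the cited works.

```python
def no_improvement_stop(history, patience=10, min_delta=0.0):
    """Baseline stopping rule: return True once the incumbent (running best
    validation loss) has failed to improve by more than min_delta for
    `patience` consecutive evaluations."""
    best, since = float("inf"), 0
    for v in history:
        if v < best - min_delta:
            best, since = v, 0  # new incumbent found, reset the counter
        else:
            since += 1
        if since >= patience:
            return True
    return False
```

As the paper notes, tuning `patience` is no easier than tuning the BO budget itself, which motivates a more interpretable criterion.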
This condition can also be used to terminate BO early, but it comes with additional complexity, such as identifying a (convex) region for local optimization, and again a predefined budget. In this work, we propose a simple and interpretable automatic termination criterion for BO. In our criterion, we construct a high-probability confidence bound on the regret (i.e., the difference of our current solution to the global optimum) by exploiting the probabilistic model of the objective. Thus, users are now asked to specify a desired tolerance that defines how accurate the final solution should be relative to the global optimum. In addition, we propose to determine the threshold via a cross-validation estimate of the generalization error. This choice takes into account the irreducible discrepancy between the actual objective and the target function optimized via BO, namely the difference between the performance on new data (i.e., the population risk) and the validation error. Our extensive empirical evaluation on a variety of HPO and NAS benchmarks suggests that our method is more robust and effective in maintaining the final solution quality than common baselines (Lorenz et al., 2016; Nguyen et al., 2017). We also surface overfitting effects in HPO on both small and large datasets, arguably an overlooked problem in the literature, and demonstrate that our termination criterion helps to mitigate it. 2 BACKGROUND . Bayesian optimization (BO) refers to methods of optimizing a black-box objective f : Γ → R in an iterative manner. At every step t, a learner selects an input γt ∈ Γ and observes a noisy output yt = f(γt) + εt, where εt is typically assumed to be i.i.d. (sub-)Gaussian noise with some variance (proxy) σ²ε. The decision of the next input to evaluate depends on a probabilistic model, used to approximate the objective f, and an acquisition function, which determines the decision rule.
A popular choice for the probabilistic model is a Gaussian process (GP): f ∼ GP(µ, κ) (Rasmussen & Williams, 2006), specified by some mean function µ : Γ → R and some kernel κ : Γ × Γ → R. As observations y1:t = [y1, …, yt]ᵀ for the selected inputs Gt = {γ1, …, γt} are being collected, they are used to update the posterior belief of the model, defined by the posterior mean µt(γ) and variance σ²t(γ) as: µt(γ) = κt(γ)ᵀ (Kt + σ²ε I)⁻¹ y1:t (1), σ²t(γ) = κ(γ, γ) − κt(γ)ᵀ (Kt + σ²ε I)⁻¹ κt(γ) (2), where (Kt)i,j = κ(γi, γj) and κt(γ) = [κ(γ1, γ), …, κ(γt, γ)]ᵀ. The next input to query is determined by an acquisition function that aims to trade off exploration and exploitation. Common choices include probability of improvement (Kushner, 1963), entropy search (Hennig & Schuler, 2012), and GP upper-confidence bound (Srinivas et al., 2010), to name a few. The convergence of BO can be quantified via the (simple) regret, i.e., the sub-optimality in function value: rt := f(γ∗t) − f(γ∗) (3), where γ∗ is the global optimizer of f and γ∗t = arg minγ∈Gt f(γ). Specifying an adequate tolerance that defines how small the regret should be to terminate BO is of high importance, as it determines both the quality and the cost of the solution. However, this criterion cannot be directly evaluated in practice, as the input γ∗ and the optimum f(γ∗) are not known. Hyperparameter optimization (HPO) is a widely considered application for BO. Consider a supervised learning setting: a machine learning model (e.g., a neural network) M is trained on some feature-response data points D = {(xi, yi)}ni=1 sampled i.i.d. from some unknown data distribution P. The model is obtained by running a training algorithm (e.g., optimizing the weights of the neural network via SGD) on D, and the model returned also depends on hyperparameters γ (e.g.
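Equations (1)-(2) are plain linear algebra once a kernel is fixed. The following sketch is our own (an RBF kernel is an illustrative choice, inputs are 1-D for brevity, and all names are hypothetical); it computes the posterior mean and variance on a grid of candidate hyperparameters.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel k(a, b) on 1-D inputs (illustrative choice)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(gammas, ys, grid, noise_var=1e-2):
    """Posterior mean and variance of Eqs. (1)-(2) on a grid of candidates."""
    K = rbf(gammas, gammas)                      # (K_t)_{ij} = k(gamma_i, gamma_j)
    Kinv = np.linalg.inv(K + noise_var * np.eye(len(gammas)))
    k_star = rbf(gammas, grid)                   # k_t(gamma) for each grid point
    mu = k_star.T @ Kinv @ ys                    # Eq. (1)
    var = 1.0 - np.sum(k_star * (Kinv @ k_star), axis=0)  # Eq. (2); k(g, g) = 1 for RBF
    return mu, var
```

Near observed inputs the posterior mean tracks the observations and the variance shrinks toward the noise level, which is exactly what the confidence bounds of the termination criterion exploit.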
, learning rates used, batch size, etc.). We use the notation Mγ(x; D) to refer to the prediction that the model produced by M makes for an input x, when trained with hyperparameters γ on data D. Given some loss function ℓ(·, ·), the population risk of the model on unseen data points is given by the expected loss EP[ℓ(y, Mγ(x; D))]. The main objective of HPO is to identify hyperparameters γ such that the resulting model minimizes the population risk: f(γ) = EP[ℓ(y, Mγ(x; D))], γ∗ = arg minγ∈Γ f(γ). (4) In practice, however, the population risk cannot be evaluated since P is unknown. Thus, it is typically estimated on a separate finite validation set DV drawn from the same distribution P. As a result, practical HPO focuses on minimizing the empirical estimate f̂(γ) of the expected loss f(γ), leading to the (possibly different) optimizer γ∗D: f̂(γ) = (1/|DV|) Σ(xi,yi)∈DV ℓ(yi, Mγ(xi; D)), γ∗D = arg minγ∈Γ f̂(γ). (5) At its core, BO-based HPO evaluates the noisy empirical estimate f̂(γt) at promising hyperparameters γt for some finite number of iterations, and the final performance after termination heavily depends on that number. Alternatively, one can also terminate BO when sufficiently close to the global optimum, i.e., using an analogue of the simple regret rt for the validation loss f̂(γ), with f̂(γ∗t) = minγ∈Gt f̂(γ): r̂t := f̂(γ∗t) − f̂(γ∗D). (6) Inconsistency in the optimization objective. Importantly, the true HPO objective f(γ) in Eq. (4) and the empirical surrogate f̂(γ) in Eq. (5) used for tuning by BO generally do not coincide. Therefore, existing BO approaches may yield sub-optimal solutions to population risk minimization, even if they succeed in globally optimizing f̂(γ). This issue, however, is typically neglected in practical HPO.
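As a concrete stand-in for Eq. (5), the sketch below (ours; ridge regression replaces the arbitrary training algorithm M, and the function names are hypothetical) computes the validation estimate f̂(γ) for a regularization hyperparameter γ, and the incumbent value f̂(γ∗t) over the candidates evaluated so far.

```python
import numpy as np

def f_hat(gamma, train, val):
    """Empirical HPO objective of Eq. (5): validation MSE of ridge
    regression trained with regularization strength gamma (a toy
    stand-in for an arbitrary training algorithm M)."""
    X, y = train
    w = np.linalg.solve(X.T @ X + gamma * np.eye(X.shape[1]), X.T @ y)
    Xv, yv = val
    return np.mean((Xv @ w - yv) ** 2)

def incumbent(evaluated, train, val):
    """f_hat(gamma*_t): the best validation loss among evaluated candidates."""
    return min(f_hat(g, train, val) for g in evaluated)
```

Even a perfect minimizer of `f_hat` can be sub-optimal for the population risk f, which is the "inconsistency" the termination criterion accounts for.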
In contrast, we propose a termination condition for BO motivated by the discrepancy in the objectives. 3 TERMINATION CRITERION FOR HYPERPARAMETER OPTIMIZATION . This section first motivates why early termination of HPO can be beneficial and then addresses the following two questions: (1) how to estimate the unknown simple regret, and (2) what threshold on the simple regret can be used to stop HPO. 3.1 MOTIVATION FOR THE TERMINATION CRITERION . We start by analysing the true discrepancy of interest: between the population risk f(γ∗t) at some input γ∗t and the true optimum f(γ∗). We then observe that this discrepancy is the sum of the statistical error of the empirical BO objective f̂(γ) and the sub-optimality of the BO candidates (encoded in the simple regret r̂t). The key insight of the following proposition is that iteratively reducing r̂t to 0 may not bring any benefit if the statistical error dominates. Proposition 1. Consider the expected loss f and its estimator f̂ defined in Eqs. (4) and (5), respectively, and assume the statistical error of the estimator is bounded as ‖f̂ − f‖∞ ≤ st for some st ≥ 0. Let γ∗ and γ∗D be their optimizers: γ∗ = arg minγ∈Γ f(γ) and γ∗D = arg minγ∈Γ f̂(γ). Let γ∗t be some candidate solution to minγ∈Γ f̂(γ) with sub-optimality in function value r̂t := f̂(γ∗t) − f̂(γ∗D). Then the gap in generalization performance f(γ∗t) − f(γ∗) can be bounded as follows: f(γ∗t) − f(γ∗) ≤ [f(γ∗t) − f̂(γ∗t)] + [f̂(γ∗t) − f̂(γ∗D)] + [f̂(γ∗D) − f̂(γ∗)] + [f̂(γ∗) − f(γ∗)] ≤ st + r̂t + 0 + st = 2 st + r̂t, where the four bracketed terms are bounded by st, r̂t, 0, and st, respectively. Moreover, without further restrictions on f, f̂, γ∗t and γ∗, the upper bound is tight.
Proof: While the second inequality is due to the definition of γ∗t, the others can be proved as follows: f(γ∗t) − f̂(γ∗t) ≤ |f(γ∗t) − f̂(γ∗t)| ≤ maxγ∈Γ |f(γ) − f̂(γ)| = ‖f̂ − f‖∞ ≤ st, and γ∗D = arg minγ∈Γ f̂(γ) implies f̂(γ∗D) − f̂(γ) ≤ 0 for all γ ∈ Γ, in particular f̂(γ∗D) − f̂(γ∗) ≤ 0. The proposition bounds the discrepancy in terms of the statistical error st and the simple regret r̂t. This naturally suggests terminating HPO at a candidate γ∗t for which the simple regret r̂t is of the same magnitude as the statistical error st (as further reduction of r̂t may not notably improve the true objective). However, neither of the quantities st and r̂t is known. Below, we propose a termination criterion that relies on estimates of both quantities. Firstly, we show how to use confidence bounds on f̂(γ) to obtain high-probability upper bounds on the simple regret r̂t (Srinivas et al., 2010; Ha et al., 2019). Secondly, we estimate the statistical error st in the case of cross-validation (Stone, 1974; Geisser, 1975), where the model performance is defined as an average over several training-validation runs. To this end, we rely on the statistical characteristics (i.e., variance or bias) of such cross-validation-based estimators, which are theoretically studied by Nadeau & Bengio (2003) and Bayle et al. (2020). When cross-validation is not used, the interpretable upper bound on the simple regret still allows the user to define an intuitive threshold in advance. | This paper studies the problem of pre-specifying the optimal termination criterion for Bayesian optimization. Different from prior work that tracks the value of the acquisition function, this paper proposes an automatic termination criterion for BO. In particular, they construct a high-probability confidence bound on the regret, and then the users can specify a desired tolerance that shows how accurate the final solution should be compared to the global optimum.
They estimate the threshold via a cross-validation estimate of the generalization error. Empirically, they design two evaluation metrics, relative test error change (RYC) and relative time change (RTC), and compare against comprehensive prior work. The results demonstrate the effectiveness of the proposed approach. | SP:19b4a9974571084ae3fb7702fdc6171045c66a60 |
A Novel Convergence Analysis for the Stochastic Proximal Point Algorithm | 1 INTRODUCTION . It has been widely accepted that when training large-scale machine learning models, the training algorithm should act in a sample-by-sample manner in order to reduce computational and memory overhead: the data set may be so large that calculating the full gradient information is too costly. Moreover, most machine learning problems do not have to be solved to very high accuracy, since the ultimate goal is not to fit the training data but rather to generalize well, so that performance is decent on unseen data. Most existing stochastic algorithms are based upon the stochastic gradient descent (SGD) framework (Bottou et al., 2018). SGD is extremely easy to implement and provides asymptotic convergence, although the convergence rate is generally slow (and subject to careful choice of step sizes). Various approaches have been proposed to accelerate plain vanilla SGD. Reducing the variance of the stochastic gradient and introducing adaptive learning schemes are two main lines of research. SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014) (and their follow-up works such as (Defazio, 2016)) focus on reducing the variance of stochastic gradient descent, at the cost of increased time or memory complexity (on the order of the entire data set). On the other hand, AdaGrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014) introduce adaptive learning schemes and effectively keep the algorithm fully stochastic and light-weight. Besides these practical improvements, theoretical progress has been made on quantifying the best possible convergence rate using first-order information (Lei et al., 2017; Allen-Zhu, 2017; 2018a; b). 1.1 STOCHASTIC PROXIMAL POINT ALGORITHM ( SPPA ) .
In this paper, we explore a different type of stochastic algorithm called the stochastic proximal point algorithm (SPPA), also known as stochastic proximal iterations (Ryu & Boyd, 2014) or the incremental proximal point method (Bertsekas, 2011a; b). Consider the following optimization problem with the objective function in the form of a finite sum of component functions: minimize over w ∈ R^d the function F(w) = (1/n) Σ_{i=1}^n f_i(w). (1) SPPA takes the following simple form: Algorithm 1 Stochastic proximal point algorithm (SPPA) 1: repeat 2: randomly draw i_t uniformly from {1, …, n} 3: w_{t+1} ← arg min_w λ_t f_{i_t}(w) + (1/2)‖w − w_t‖² = Prox_{λ_t f_{i_t}}(w_t) 4: until convergence. Line 3 of Algorithm 1 calculates the proximal operator of the function λ_t f_{i_t} evaluated at w_t, denoted as Prox_{λ_t f_{i_t}}(w_t). This is the stochastic version of the proximal point algorithm, which dates back to Rockafellar (1976). SPPA has an abstract per-iteration update rule and it acquires more information from the problem than solely the first-order derivatives, which makes it not as universally applicable as SGD. Yet, thanks to the additional information acquired, it is possible to obtain faster and more robust convergence guarantees. While some 'accelerated' versions of SGD demand additional time or space overhead to go over the entire data set, SPPA does not have any overhead and is suitable for a completely online setting: it performs well even if the data samples are not revisited again. To the best of our knowledge, the convergence behavior of SPPA started to be studied only recently (Bertsekas, 2011a; Ryu & Boyd, 2014; Bianchi, 2016; Pătraşcu, 2020; Toulis et al., 2021). Somewhat surprisingly, the convergence analysis of SPPA draws little resemblance to its deterministic counterpart, the proximal point algorithm. This is unlike the case for SGD, for which the convergence analysis follows almost line-by-line that of the subgradient method.
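For least squares, Line 3 of Algorithm 1 has a closed form, which makes the abstract update concrete. The sketch below is our own illustration (the function name and data are hypothetical): for f_i(w) = ½(x_iᵀw − y_i)², setting the gradient of the prox subproblem to zero and solving gives an SGD-like step whose residual is damped by 1/(1 + λ_t‖x_i‖²).

```python
import numpy as np

def sppa_least_squares(X, y, lam=1.0, epochs=5, seed=0):
    """SPPA (Algorithm 1) for f_i(w) = 0.5*(x_i^T w - y_i)^2.

    The prox step solves w = w_t - lam*(x_i^T w - y_i)*x_i for w, which
    yields the closed form below: an SGD-like update with the residual
    shrunk by 1/(1 + lam*||x_i||^2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.integers(0, n, size=n):     # draw i_t uniformly
            xi, yi = X[i], y[i]
            resid = (xi @ w - yi) / (1.0 + lam * xi @ xi)
            w = w - lam * resid * xi             # Prox_{lam f_i}(w_t)
    return w
```

The automatic damping is why SPPA tolerates large step sizes: as lam grows the update approaches an exact projection onto the hyperplane x_iᵀw = y_i rather than diverging.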
Moreover, existing analyses of SPPA show no improvement in terms of convergence rate, which seems counter-intuitive given the nature of the updates. Most authors also accept the premise that the proximal operator is sometimes difficult to evaluate, and thus propose variations of the plain vanilla version to handle more complicated problem structures (Wang & Bertsekas, 2013; Duchi & Ruan, 2018; Asi & Duchi, 2019; Davis & Drusvyatskiy, 2019). 1.2 CONTRIBUTIONS . The main contribution of this paper is to provide a completely novel convergence analysis of SPPA for general convex problems. This contribution, together with the efficient implementation strategies discussed in the appendix, yields strong practical results on almost all of the classical empirical risk minimization (ERM) problems in large-scale machine learning. While there exist some convergence results for SPPA on convex problems, the novel analysis provided in this work requires minimal assumptions (nothing but the convexity of the loss functions). The new convergence analysis also shows great resemblance to that of its deterministic counterpart, which is not the case for other works. In the appendix we discuss how to efficiently compute the abstract per-iteration update rule of SPPA with complexity comparable to SGD-type methods. An interesting observation is that in a lot of ERM examples the resulting update has a similar form to SGD, but with a smartly chosen step size derived from the proximal update. For a class of smooth risks such as the logistic loss, we reformulate the optimization problem so that we get close to a closed-form solution via bisection. We also briefly discuss how to modify the algorithm to handle nonsmooth regularization terms such as the ℓ1 norm, using the stochastic Douglas-Rachford splitting approach.
Finally, we apply SPPA with efficient implementations to a large variety of classification and regression problems. Numerical experiments are conducted not only for linear classification and regression models (in the appendix), but also for nonconvex deep neural network problems. Although the convergence analysis provided in this paper does not cover nonconvex cases, empirical results suggest that it is still worth treating SPPA as an effective alternative for deep learning. 2 CONVERGENCE ANALYSIS . In this section we provide a convergence analysis of SPPA for the general convex problem in equation (1), to a global minimum in expectation. In recent years there has been some work tackling the same problem, e.g., Bertsekas (2011a); Pătraşcu (2020); Toulis et al. (2021). In this paper, however, we provide a new convergence analysis that is much easier to understand while requiring nearly no assumptions other than convexity. For this reason we believe the theoretical contribution is significant, as it broadens the applicability of SPPA. There is a well-known resemblance between proximal methods and gradient descent, assuming the loss function is differentiable: while a full gradient descent step takes the form w_{t+1} = w_t − λ_t ∇F(w_t), the definition of a full proximal step guarantees that λ_t ∇F(w_{t+1}) = w_t − w_{t+1}, meaning that w_{t+1} = w_t − λ_t ∇F(w_{t+1}). Therefore, one might expect that a well-established convergence analysis of SGD for nonconvex problems, for example (Bottou et al., 2018, §4.3), can be seamlessly applied to SPPA. However, in the stochastic setting the situation is a little more complicated. Consider E[∇f_{i_t}(w_t) | w_t], where the expectation is taken over the sampling procedure conditioned on w_t; for SGD we typically require ∇f_{i_t}(w_t) to be an unbiased estimator of ∇F(w_t), which is easy to satisfy if i_t is uniformly sampled from {1, …, n} given w_t.
For SPPA, one then needs to consider E[∇f_{i_t}(w_{t+1}) | w_t], again over the sampling procedure conditioned on w_t. This is in fact difficult to quantify, because the update w_{t+1} depends on the sample that is drawn from the data set. The equation E[∇f_{i_t}(w_{t+1}) | w_t] = ∇F(w_{t+1}) does not make sense because w_{t+1} on the right-hand side is still random conditioned on w_t. It is for this reason that existing analyses of SPPA differ drastically from its deterministic counterpart PPA (Bertsekas, 2011a; Ryu & Boyd, 2014; Bianchi, 2016). What we can show, however, is that the distribution of i_t is still uniform conditioned on w_{t+1} instead of w_t, as formalized in the following lemma. Lemma 1. At every iteration of SPPA (Algorithm 1), we have the conditional probability p(i_t | w_{t+1}) = 1/n. The proof of Lemma 1 is relegated to the supplementary material. What Lemma 1 suggests is that given the current iterate w_{t+1}, without knowing its predecessors w_t and beyond, every component function f_i is equally likely to have been picked for the proximal update that leads to w_{t+1}. In other words, conditioned on the current iterate without knowing the past, we do not gain additional information about which component function f_i is more likely to have been selected. As it turns out, Lemma 1 significantly simplifies the following convergence analysis while showing resemblance to that of deterministic PPA. What is more, the resulting analysis requires no assumptions other than convexity: typical assumptions such as Lipschitz smoothness or bounded variance are not required. Remark. Lemma 1 holds for t = 0 if the initialization w_0 is random, and the distribution from which it is drawn has sample space equal to the domain of the objective function in equation (1). In practice w_0 is usually drawn from a distribution that is independent of the cost function, say N(0, I).
In this case, for a given value of w_1, there is no chance of eliminating possible w_0's, because any corresponding w_0 is in the sample space of the initialization distribution. 2.1 GENERIC CONVEX CASE WITHOUT STRONG CONVEXITY . Under Lemma 1, we have the following proposition, which serves as the stepping stone for our main convergence results. Proposition 1. Suppose each f_1, …, f_n is convex; then at iteration t of SPPA (Algorithm 1) we have 2λ_t (F(w_{t+1}) − F(w★)) ≤ E[‖w_t − w★‖² | w_{t+1}] − ‖w_{t+1} − w★‖², (2) where w★ denotes an optimal solution of equation (1). Proof. We start with the equation ‖w_t − w★‖² = ‖w_t − w_{t+1} + w_{t+1} − w★‖² = ‖w_{t+1} − w★‖² + 2(w_t − w_{t+1})ᵀ(w_{t+1} − w★) + ‖w_t − w_{t+1}‖². (3) According to the definition w_{t+1} = arg min_w λ_t f_{i_t}(w) + (1/2)‖w − w_t‖², we know that w_t − w_{t+1} is a subgradient of λ_t f_{i_t} at w_{t+1}. Therefore λ_t f_{i_t}(w_{t+1}) + (w_t − w_{t+1})ᵀ(w − w_{t+1}) ≤ λ_t f_{i_t}(w) for any w. Let w = w★ and substitute it into equation (3); we have ‖w_t − w★‖² ≥ ‖w_{t+1} − w★‖² + 2λ_t (f_{i_t}(w_{t+1}) − f_{i_t}(w★)) + ‖w_t − w_{t+1}‖² ≥ ‖w_{t+1} − w★‖² + 2λ_t (f_{i_t}(w_{t+1}) − f_{i_t}(w★)). Finally, taking conditional expectations on both sides given w_{t+1} (but not w_t nor i_t): E[‖w_t − w★‖² | w_{t+1}] ≥ ‖w_{t+1} − w★‖² + 2λ_t E[f_{i_t}(w_{t+1}) − f_{i_t}(w★) | w_{t+1}]. According to Lemma 1, the conditional distribution of i_t is uniform, thus E[f_{i_t}(w_{t+1}) − f_{i_t}(w★) | w_{t+1}] = F(w_{t+1}) − F(w★), and we obtain equation (2). With the help of Proposition 1, the rest of the results follow straightforwardly. Theorem 1. If all f_1, …, f_n are convex, and an initialization w_0 is picked from a distribution with sample space equal to the domain of F, then the sequence {w_t} generated by SPPA (Algorithm 1) satisfies lim_{T→∞} inf_{t≤T} E[F(w_t)] − F(w★) ≤ E‖w_0 − w★‖² / (2 Σ_{t=1}^∞ λ_t), (4) where w★ denotes an optimal solution of equation (1). The term E‖w_0 − w★‖² is a constant for any approach to initializing w_0. Proof.
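The key step of the proof, the subgradient inequality instantiated at w = w★, holds deterministically for every single draw, before any expectation is taken, and can be checked numerically. The sketch below is our own (least-squares components so the prox step is closed-form; all names are hypothetical).

```python
import numpy as np

def prox_quadratic(w, lam, xi, yi):
    """Closed-form prox of lam * 0.5*(xi^T w - yi)^2 (Line 3 of Algorithm 1)."""
    resid = (xi @ w - yi) / (1.0 + lam * xi @ xi)
    return w - lam * resid * xi

def check_descent_inequality(X, y, ref, lam=0.5, steps=200, seed=0):
    """Verify, per sample, the deterministic inequality from the proof of
    Proposition 1 (before expectations are taken), for a reference point ref:
      ||w_t - ref||^2 >= ||w_{t+1} - ref||^2
                         + 2*lam*(f_i(w_{t+1}) - f_i(ref))
                         + ||w_t - w_{t+1}||^2 .
    """
    rng = np.random.default_rng(seed)
    f = lambda w, i: 0.5 * (X[i] @ w - y[i]) ** 2
    w = rng.standard_normal(X.shape[1])
    ok = True
    for _ in range(steps):
        i = rng.integers(0, X.shape[0])
        w_next = prox_quadratic(w, lam, X[i], y[i])
        lhs = np.sum((w - ref) ** 2)
        rhs = (np.sum((w_next - ref) ** 2)
               + 2 * lam * (f(w_next, i) - f(ref, i))
               + np.sum((w - w_next) ** 2))
        ok &= lhs >= rhs - 1e-9
        w = w_next
    return bool(ok)
```

Because the subgradient inequality holds for any w, the check passes for an arbitrary reference point, not only for the minimizer w★; the expectation in equation (2) enters only when the uniformity of i_t given w_{t+1} (Lemma 1) is invoked.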
Taking total expectation over equation (2), we have 2λ_t (E[F(w_{t+1})] − F(w★)) ≤ E[‖w_t − w★‖²] − E[‖w_{t+1} − w★‖²]. Summing over all t = 1, 2, …, T, we have 2 Σ_{t=1}^T λ_t (E[F(w_{t+1})] − F(w★)) ≤ E‖w_0 − w★‖² − E‖w_T − w★‖² ≤ E‖w_0 − w★‖². On the left-hand side we have (by definition) inf E[F(w_t)] ≤ E[F(w_τ)] for any τ. Applying that, dividing both sides by 2 Σ_t λ_t, and letting T → ∞, we get equation (4). Remark. The proofs here take conditional expectations backwards, i.e., conditioned on w_{t+1} and averaged over the previous iteration. This may be a little counter-intuitive. However, there is nothing wrong mathematically: expectation is a linear operator, so E[X + Y] = E[X] + E[Y] under all circumstances; and if an inequality f(x) ≤ g(x) holds for all values of x in its sample space, then E[f(x)] ≤ E[g(x)] for any distribution over x, since the expectation is just a nonnegative weighting on both sides. Theorem 1 states a generic result on the convergence of SPPA. The left-hand side of equation (4) is, by definition, nonnegative; the right-hand side, however, goes to zero if the infinite sum Σ_t λ_t → ∞. This implies that the infimum of E[F(w_t)] − F(w★) goes to zero for appropriately chosen step sizes. Somewhat surprisingly, the only assumption we made was that the functions are convex. The celebrated SGD, on the other hand, requires at least two more assumptions: Lipschitz smoothness and that the stochastic gradients have bounded variance. What is more, the flexible choice of λ_t means that we can make the convergence rate arbitrarily fast, by allowing λ_t to be increasing (rather than decreasing as in most gradient-based methods). Of course, this may lead to large variance of the sequence, which we may want to avoid in practice. Here we provide the convergence rate for two commonly used step size rules. Corollary 1. If all f_1, …
, f_n are convex, then the sequence {w_t} generated by SPPA (Algorithm 1) with a constant step size λ_t = λ satisfies inf_{t≤T} E[F(w_t)] − F(w★) ≤ ‖w_0 − w★‖² / (2λT), (5) where w★ denotes an optimal solution of equation (1). The proof is straightforward by substituting λ_t = λ in equation (4). This shows that a constant step size (regardless of its value) gives an O(1/T) sublinear convergence rate. Corollary 2. If all f_1, …, f_n are convex, then the sequence {w_t} generated by SPPA (Algorithm 1) with the increasing step size rule λ_t = tλ satisfies inf_{t≤T} E[F(w_t)] − F(w★) ≤ ‖w_0 − w★‖² / (λT(T+1)), (6) where w★ denotes an optimal solution of equation (1). The proof follows from the arithmetic series 1 + 2 + ⋯ + T = T(T+1)/2. This shows that the increasing step size rule λ_t = tλ gives an O(1/T²) sublinear convergence rate, the same as Nesterov's optimal gradient algorithm (2013). While the expected convergence looks excellent, we notice that in practice the performance also depends on the variance of the sequence generated by SPPA, which could be large if the expected convergence rate is too fast. We can even obtain a linear convergence rate by letting λ_t increase exponentially, but the variance would be so large that it makes little practical sense. As we will see in the experiment section, a constant step size rule still gives the best performance in most cases. | # === Update === I have read the other reviews as well as the response by the authors. I'd like to thank the authors for engaging in a discussion around Lemma 1. It is unfortunate that the result does not appear to hold even in a restricted setting. I will maintain my score. I wish the authors the best in their future work. # ====== This submission investigates the convergence rate of the stochastic proximal point algorithm (SPPA) for convex, finite-sum functions.
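Corollaries 1 and 2 can be exercised on a toy finite sum. The sketch below is ours (least-squares components so the prox step is closed-form; function name and data are hypothetical); it reports the best objective gap inf_{t≤T} F(w_t) − F(w★) reached under an arbitrary step schedule λ_t.

```python
import numpy as np

def run_sppa(X, y, schedule, T=500, seed=0):
    """Best objective gap inf_{t<=T} F(w_t) - F(w*) of SPPA on the
    least-squares finite sum of equation (1), with step sizes
    lam_t = schedule(t)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
    F = lambda w: 0.5 * np.mean((X @ w - y) ** 2)
    w, best = np.zeros(d), np.inf
    for t in range(1, T + 1):
        i = rng.integers(0, n)
        lam = schedule(t)
        resid = (X[i] @ w - y[i]) / (1.0 + lam * X[i] @ X[i])
        w = w - lam * resid * X[i]               # closed-form prox step
        best = min(best, F(w) - F(w_star))
    return best
```

For instance, `run_sppa(X, y, lambda t: 1.0)` uses the constant rule of Corollary 1 and `run_sppa(X, y, lambda t: 0.1 * t)` the increasing rule of Corollary 2; both drive the gap down on a consistent system, in line with the bounds, while the variance caveat above concerns how erratic the iterates are along the way.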
The authors derive new convergence rates for SPPA under the assumption that each function in the finite sum is convex (or strongly convex). Key to these derivations is a lemma characterizing the probabilities of sampling a sub-function from a time-reversible Markov chain. In addition to these results, the authors derive efficient implementations of SPPA for a variety of loss functions, generally by reducing the problem to one-dimensional root-finding. The paper concludes with experiments comparing SPPA to SGD, Adam, and other baseline stochastic optimization algorithms for (i) convex optimization and (ii) training CNNs and ResNets on MNIST and CIFAR-10, respectively. The experiments show that SPPA converges faster than stochastic gradient methods and has a similar computational cost. | SP:1a7abc172681588b8eb887216830d814a89c8cf4 |
A Novel Convergence Analysis for the Stochastic Proximal Point Algorithm | 1 INTRODUCTION . It has been widely accepted that when training large-scale machine learning models, the training algorithm should act in a sample-by-sample manner in order to reduce computational and memory overhead: the data set may be so large that calculating the full gradient information is too costly. Moreover, most machine learning problems do not have to be solved to very high accuracy, since the ultimate goal is not to fit the training data but rather to generalize well, so that performance is decent on unseen data. Most existing stochastic algorithms are based upon the stochastic gradient descent (SGD) framework (Bottou et al., 2018). SGD is extremely easy to implement and provides asymptotic convergence, although the convergence rate is generally slow (and subject to careful choice of step sizes). Various approaches have been proposed to accelerate plain vanilla SGD. Reducing the variance of the stochastic gradient and introducing adaptive learning schemes are two main lines of research. SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014) (and their follow-up works such as (Defazio, 2016)) focus on reducing the variance of stochastic gradient descent, at the cost of increased time or memory complexity (on the order of the entire data set). On the other hand, AdaGrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014) introduce adaptive learning schemes and effectively keep the algorithm fully stochastic and light-weight. Besides these practical improvements, theoretical progress has been made on quantifying the best possible convergence rate using first-order information (Lei et al., 2017; Allen-Zhu, 2017; 2018a; b). 1.1 STOCHASTIC PROXIMAL POINT ALGORITHM ( SPPA ) .
In this paper, we explore a different type of stochastic algorithm called the stochastic proximal point algorithm (SPPA), also known as stochastic proximal iterations (Ryu & Boyd, 2014) or the incremental proximal point method (Bertsekas, 2011a; b). Consider the following optimization problem with the objective function in the form of a finite sum of component functions: minimize over w ∈ R^d the function F(w) = (1/n) Σ_{i=1}^n f_i(w). (1) SPPA takes the following simple form: Algorithm 1 Stochastic proximal point algorithm (SPPA) 1: repeat 2: randomly draw i_t uniformly from {1, …, n} 3: w_{t+1} ← arg min_w λ_t f_{i_t}(w) + (1/2)‖w − w_t‖² = Prox_{λ_t f_{i_t}}(w_t) 4: until convergence. Line 3 of Algorithm 1 calculates the proximal operator of the function λ_t f_{i_t} evaluated at w_t, denoted as Prox_{λ_t f_{i_t}}(w_t). This is the stochastic version of the proximal point algorithm, which dates back to Rockafellar (1976). SPPA has an abstract per-iteration update rule and it acquires more information from the problem than solely the first-order derivatives, which makes it not as universally applicable as SGD. Yet, thanks to the additional information acquired, it is possible to obtain faster and more robust convergence guarantees. While some 'accelerated' versions of SGD demand additional time or space overhead to go over the entire data set, SPPA does not have any overhead and is suitable for a completely online setting: it performs well even if the data samples are not revisited again. To the best of our knowledge, the convergence behavior of SPPA started to be studied only recently (Bertsekas, 2011a; Ryu & Boyd, 2014; Bianchi, 2016; Pătraşcu, 2020; Toulis et al., 2021). Somewhat surprisingly, the convergence analysis of SPPA draws little resemblance to its deterministic counterpart, the proximal point algorithm. This is unlike the case for SGD, for which the convergence analysis follows almost line-by-line that of the subgradient method.
Moreover, existing analyses of SPPA show no improvement in terms of convergence rate, which seems counter-intuitive given the nature of the updates. Most authors also accept the premise that the proximal operator is sometimes difficult to evaluate, and thus propose variations of the plain vanilla version to handle more complicated problem structures (Wang & Bertsekas, 2013; Duchi & Ruan, 2018; Asi & Duchi, 2019; Davis & Drusvyatskiy, 2019). 1.2 CONTRIBUTIONS . The main contribution of this paper is a completely novel convergence analysis of SPPA for general convex problems. This contribution, together with the efficient implementation strategies discussed in the appendix, yields strong practical results on almost all of the classical empirical risk minimization (ERM) problems in large-scale machine learning. While some convergence results for SPPA on convex problems exist, the analysis provided in this work requires minimal assumptions (nothing but convexity of the loss functions). The new convergence analysis also closely resembles that of its deterministic counterpart, which is not the case in other works. In the appendix we discuss how to efficiently compute the abstract per-iteration update rule of SPPA with complexity comparable to SGD-type methods. An interesting observation is that in many ERM examples the resulting update has a similar form to SGD, but with a smartly chosen step size derived from the proximal update. For a class of smooth risks such as the logistic loss, we reformulate the optimization problem so that a near-closed-form solution can be obtained via bisection. We also briefly discuss how to modify the algorithm to handle nonsmooth regularization terms such as the ℓ1 norm, using a stochastic Douglas-Rachford splitting approach.
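The bisection idea mentioned above can be sketched as follows (our own reconstruction in our own notation, not necessarily the paper's exact reformulation). For the logistic loss f(w) = log(1 + exp(−y x⊤w)) with label y ∈ {−1, +1}, the prox minimizer has the form w_{t+1} = w_t + α x, where the scalar α solves a one-dimensional monotone equation that bisection handles easily.

```python
import math
import numpy as np

def prox_logistic(w, x, y, lam, tol=1e-12):
    """Prox step for f(w) = log(1 + exp(-y * x^T w)), y in {-1, +1}.

    The optimality condition  w_next = w + lam*y*sigmoid(-y*x^T w_next)*x
    means w_next = w + alpha*x, with alpha the unique root of the
    monotonically increasing scalar function
      h(alpha) = alpha - lam*y*sigmoid(-y*(x^T w + alpha*||x||^2)),
    which lies between 0 and lam*y.  Solve for alpha by bisection.
    """
    c, q = x @ w, x @ x
    sigmoid = lambda u: 1.0 / (1.0 + math.exp(-u))
    h = lambda a: a - lam * y * sigmoid(-y * (c + a * q))
    lo, hi = (0.0, lam) if y > 0 else (-lam, 0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:      # h is increasing, so the root is below mid
            hi = mid
        else:
            lo = mid
    alpha = 0.5 * (lo + hi)
    return w + alpha * x

w = np.zeros(3)
x, y, lam = np.array([1.0, 2.0, -1.0]), 1.0, 0.5
w_next = prox_logistic(w, x, y, lam)
```

Since the root search is over a single scalar in a bounded interval, the per-iteration cost stays comparable to an SGD step.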
Finally, we apply SPPA with efficient implementations to a large variety of classification and regression problems. Numerical experiments are conducted not only for linear classification and regression models (in the appendix), but also for nonconvex deep neural network problems. Although the convergence analysis provided in this paper does not cover nonconvex cases, empirical results suggest that SPPA is still worth treating as an effective alternative for deep learning. 2 CONVERGENCE ANALYSIS . In this section we provide a convergence analysis of SPPA for general convex loss functions in equation 1 to a global minimum in expectation. In recent years there has been some work tackling the same problem, e.g., Bertsekas (2011a); Pătraşcu (2020); Toulis et al. (2021). In this paper, however, we provide a new convergence analysis that is much easier to understand while requiring essentially no assumptions other than convexity. For this reason we believe the theoretical contribution is significant, as it broadens the applicability of SPPA. There is a well-known resemblance between proximal methods and gradient descent, assuming the loss function is differentiable: while a full gradient descent step takes the form w_{t+1} = w_t − λ_t ∇F(w_t), the definition of a full proximal step guarantees that λ_t ∇F(w_{t+1}) = w_t − w_{t+1}, meaning that w_{t+1} = w_t − λ_t ∇F(w_{t+1}). Therefore, one might expect that a well-established convergence analysis of SGD for nonconvex problems, for example (Bottou et al., 2018, §4.3), can be seamlessly applied to SPPA. However, in the stochastic setting the situation is a little more complicated. Consider E[∇f_{i_t}(w_t) | w_t], where the expectation is taken over the sampling procedure conditioned on w_t: for SGD we typically require ∇f_{i_t}(w_t) to be an unbiased estimator of ∇F(w_t), which is easy to satisfy if i_t is uniformly sampled from {1, ..., n} given w_t.
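The unbiasedness requirement for SGD can be made concrete with a minimal numeric check (our own sketch, in cleaned-up notation where f_i(w) = ½(x_i⊤w − y_i)² and F is their average): averaging the per-sample gradients over a uniform draw of the index recovers the full gradient exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # 100 samples, 5 features
y = rng.normal(size=100)
w = rng.normal(size=5)

# Per-sample gradients of f_i(w) = 0.5*(x_i^T w - y_i)^2, one row per i.
per_sample_grads = (X @ w - y)[:, None] * X          # shape (100, 5)

# Under uniform sampling, E[grad f_i(w) | w] equals the gradient of F.
full_grad = X.T @ (X @ w - y) / len(y)
assert np.allclose(per_sample_grads.mean(axis=0), full_grad)
```

The difficulty described next for SPPA is exactly that no such simple identity is available once the gradient is evaluated at w_{t+1}, which itself depends on the drawn index.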
For SPPA one instead needs to consider E[∇f_{i_t}(w_{t+1}) | w_t], again over the sampling procedure conditioned on w_t. This is in fact difficult to quantify because the update w_{t+1} depends on the sample drawn from the data set. The equation E[∇f_{i_t}(w_{t+1}) | w_t] = ∇F(w_{t+1}) does not make sense, because w_{t+1} on the right-hand side is still random conditioned on w_t. It is for this reason that existing analyses of SPPA differ drastically from those of its deterministic counterpart PPA (Bertsekas, 2011a; Ryu & Boyd, 2014; Bianchi, 2016). What we can show, however, is that the distribution of i_t is still uniform conditioned on w_{t+1} instead of w_t, as formalized in the following lemma. Lemma 1 . At every iteration of SPPA (Algorithm 1), the conditional probability satisfies p(i_t | w_{t+1}) = 1/n. The proof of Lemma 1 is relegated to the supplementary material. What Lemma 1 says is that given the current iterate w_{t+1}, without knowing its predecessors w_t and beyond, every component function f_i is equally likely to have been picked for the proximal update that led to w_{t+1}. In other words, conditioned on the current iterate without knowing the past, we gain no additional information about which component function f_i is more likely to have been selected. As it turns out, Lemma 1 significantly simplifies the following convergence analysis while preserving the resemblance to that of deterministic PPA. What is more, the resulting analysis requires no assumptions other than convexity: typical assumptions such as Lipschitz smoothness or bounded variance are not required. Remark . Lemma 1 holds for t = 0 if the initialization w_0 is random and the distribution from which it is drawn has the same sample space as the domain of the objective function in equation 1. In practice w_0 is usually drawn from a distribution that is independent of the cost function, say N(0, I).
In this case, for a given value of w_1, there is no way to rule out any possible w_0, because every corresponding w_0 lies in the sample space of the initialization distribution. 2.1 GENERIC CONVEX CASE WITHOUT STRONG CONVEXITY . Under Lemma 1, we have the following proposition, which serves as the stepping stone for our main convergence results. Proposition 1 . Suppose each of f_1, ..., f_n is convex. Then at iteration t of SPPA (Algorithm 1) we have

2λ_t (F(w_{t+1}) − F(w*)) ≤ E[‖w_t − w*‖² | w_{t+1}] − ‖w_{t+1} − w*‖²,  (2)

where w* denotes an optimal solution of equation 1. Proof . We start with the identity

‖w_t − w*‖² = ‖w_t − w_{t+1} + w_{t+1} − w*‖² = ‖w_{t+1} − w*‖² + 2(w_t − w_{t+1})⊤(w_{t+1} − w*) + ‖w_t − w_{t+1}‖².  (3)

By the definition w_{t+1} = argmin_w λ_t f_{i_t}(w) + (1/2)‖w − w_t‖², we know that w_t − w_{t+1} is a subgradient of λ_t f_{i_t} at w_{t+1}. Therefore λ_t f_{i_t}(w_{t+1}) + (w_t − w_{t+1})⊤(w − w_{t+1}) ≤ λ_t f_{i_t}(w) for any w. Letting w = w* and substituting into equation 3, we have

‖w_t − w*‖² ≥ ‖w_{t+1} − w*‖² + 2λ_t (f_{i_t}(w_{t+1}) − f_{i_t}(w*)) + ‖w_t − w_{t+1}‖² ≥ ‖w_{t+1} − w*‖² + 2λ_t (f_{i_t}(w_{t+1}) − f_{i_t}(w*)).

Finally, taking conditional expectations on both sides given w_{t+1} (but not w_t nor i_t),

E[‖w_t − w*‖² | w_{t+1}] ≥ ‖w_{t+1} − w*‖² + 2λ_t E[f_{i_t}(w_{t+1}) − f_{i_t}(w*) | w_{t+1}].

By Lemma 1 the conditional distribution of i_t is uniform, thus E[f_{i_t}(w_{t+1}) − f_{i_t}(w*) | w_{t+1}] = F(w_{t+1}) − F(w*), and we obtain equation 2. With the help of Proposition 1, the remaining results follow straightforwardly. Theorem 1 . If all f_1, ..., f_n are convex, and the initialization w_0 is drawn from a distribution whose sample space equals the domain of F, then the sequence {w_t} generated by SPPA (Algorithm 1) satisfies

lim_{T→∞} inf_{t≤T} E[F(w_t)] − F(w*) ≤ E‖w_0 − w*‖² / (2 Σ_{t=1}^∞ λ_t),  (4)

where w* denotes an optimal solution of equation 1. The term E‖w_0 − w*‖² is a constant for any way of initializing w_0. Proof .
Taking total expectation of equation 2, we have 2λ_t (E[F(w_{t+1})] − F(w*)) ≤ E[‖w_t − w*‖²] − E[‖w_{t+1} − w*‖²]. Summing over t = 1, 2, ..., T, we have

2 Σ_{t=1}^T λ_t (E[F(w_{t+1})] − F(w*)) ≤ E‖w_0 − w*‖² − E‖w_T − w*‖² ≤ E‖w_0 − w*‖².

On the left-hand side we have (by definition) inf_{t≤T} E[F(w_t)] ≤ E[F(w_τ)] for any τ ≤ T. Applying this, dividing both sides by 2 Σ_t λ_t, and letting T → ∞, we obtain equation 4. Remark . The proofs here take conditional expectations backwards, i.e., conditioned on w_{t+1}, averaging over the previous iterate. This may be a little counter-intuitive, but there is nothing wrong mathematically: expectation is a linear operator, so E[X + Y] = E[X] + E[Y] under all circumstances; and if an inequality f(x) ≤ g(x) holds for all values of x in its sample space, then E[f(x)] ≤ E[g(x)] for any distribution over x, since the expectation is just a nonnegatively weighted sum on both sides. Theorem 1 states a generic result on the convergence of SPPA. The left-hand side of equation 4 is, by definition, nonnegative; the right-hand side goes to zero if the infinite sum Σ_t λ_t → ∞. This implies that the infimum of E[F(w_t)] − F(w*) goes to zero for appropriately chosen step sizes. Somewhat surprisingly, the only assumption we made is that the functions are convex. The celebrated SGD analysis, on the other hand, requires at least two more assumptions: Lipschitz smoothness and bounded variance of the stochastic gradients. What is more, the flexible choice of λ_t means that we can make the convergence rate arbitrarily fast by allowing λ_t to increase (rather than decrease, as in most gradient-based methods). Of course this may lead to large variance of the sequence, which we may want to avoid in practice. Here we provide convergence rates for two commonly used step size rules. Corollary 1 . If all f_1, ...
, f_n are convex, then the sequence {w_t} generated by SPPA (Algorithm 1) with a constant step size λ_t = λ satisfies

inf_{t≤T} E[F(w_t)] − F(w*) ≤ ‖w_0 − w*‖² / (2λT),  (5)

where w* denotes an optimal solution of equation 1. The proof is straightforward: substitute λ_t = λ in equation 4. This shows that a constant step size (regardless of its value) gives an O(1/T) sublinear convergence rate. Corollary 2 . If all f_1, ..., f_n are convex, then the sequence {w_t} generated by SPPA (Algorithm 1) with the increasing step size rule λ_t = tλ satisfies

inf_{t≤T} E[F(w_t)] − F(w*) ≤ ‖w_0 − w*‖² / (λT(T+1)),  (6)

where w* denotes an optimal solution of equation 1. The proof follows from the arithmetic series 1 + 2 + ... + T = T(T+1)/2. This shows that the increasing step size rule λ_t = tλ gives an O(1/T²) sublinear convergence rate, the same as Nesterov's optimal gradient method (Nesterov, 2013). While the expected convergence looks excellent, we note that in practice performance also depends on the variance of the sequence generated by SPPA, which can be large if the expected convergence rate is too fast. We could even obtain a linear convergence rate by letting λ_t increase exponentially, but the variance would be so large that it makes little practical sense. As we will see in the experiment section, a constant step size rule still gives the best performance in most cases. | This paper provides a novel, elegant proof of convergence for the stochastic proximal point algorithm under the assumption of convexity. It proceeds by surprisingly conditioning on future iterates, reversing the usual convergence proofs. Experiments show that SPPA is competitive, including in nonconvex settings such as training neural networks for image classification. | SP:1a7abc172681588b8eb887216830d814a89c8cf4
A Novel Convergence Analysis for the Stochastic Proximal Point Algorithm | This paper shows the arbitrary convergence rate of the stochastic proximal point algorithm (SPPA) under the convexity assumption on the objective function. The authors use a novel approach to analyze SPPA. Numerical results show the efficiency of SPPA on some realistic data sets. | SP:1a7abc172681588b8eb887216830d814a89c8cf4
Visual Representation Learning Does Not Generalize Strongly Within the Same Domain | 1 INTRODUCTION . Humans excel at learning underlying physical mechanisms or inner workings of a system from observations (Funke et al., 2021; Barrett et al., 2018; Santoro et al., 2017; Villalobos et al., 2020; Spelke, 1990), which helps them generalize quickly to new situations and learn efficiently from little data (Battaglia et al., 2013; Dehaene, 2020; Lake et al., 2017; Téglás et al., 2011). In contrast, machine learning systems typically require large amounts of curated data and still mostly fail to generalize to out-of-distribution (OOD) scenarios (Schölkopf et al., 2021; Hendrycks & Dietterich, 2019; Karahan et al., 2016; Michaelis et al., 2019; Roy et al., 2018; Azulay & Weiss, 2019; Barbu et al., 2019). It has been hypothesized that this failure of machine learning systems is due to shortcut learning (Kilbertus et al., 2018; Ilyas et al., 2019; Geirhos et al., 2020; Schölkopf et al., 2021). In essence, machines seemingly learn to solve the tasks they have been trained on using auxiliary and spurious statistical relationships in the data, rather than true mechanistic relationships. Pragmatically, models relying on statistical relationships tend to fail if tested outside their training distribution, while models relying on (approximately) the true underlying mechanisms tend to generalize well to novel scenarios (Barrett et al., 2018; Funke et al., 2021; Wu et al., 2019; Zhang et al., 2018; Parascandolo et al., 2018; Schölkopf et al., 2021; Locatello et al., 2020a;b). To learn effective statistical relationships, the training data needs to cover most combinations of factors of variation (like shape, size, color, viewpoint, etc.). Unfortunately, the number of combinations scales exponentially with the number of factors.
In contrast, learning the underlying mechanisms behind the factors of variation should greatly reduce the need for training data and scale more gently with the number of factors (Schölkopf et al., 2021; Peters et al., 2017; Besserve et al., 2021). Benchmark: Our goal is to quantify how well machine learning models already learn the mechanisms underlying a data generative process. To this end, we consider four image data sets where each image is described by a small number of independently controllable factors of variation such as scale, color, or size. We split the training and test data such that models that learned the underlying mechanisms should generalize to the test data. More precisely, we propose several systematic out-of-distribution (OOD) test splits: composition (e.g., train = small hearts, large squares → test = small squares, large hearts), interpolation (e.g., small hearts, large hearts → medium hearts), and extrapolation (e.g., small hearts, medium hearts → large hearts). While the factors of variation are independently controllable (e.g., there may exist large and small hearts), the observations may exhibit spurious statistical dependencies (e.g., observed hearts are typically small, but size may not be predictive at test time). Based on this setup, we benchmark 17 representation learning approaches and study their inductive biases. The considered approaches stem from the un-/weakly supervised disentanglement, supervised learning, and transfer learning literatures. Results: Our benchmark results indicate that the tested models mostly struggle to learn the underlying mechanisms regardless of supervision signal and architecture. As soon as a factor of variation is outside the training distribution, models consistently tend to predict a value in the previously observed range.
On the other hand, these models can be fairly modular in the sense that predictions of in-distribution factors remain accurate, which is in part against common criticisms of deep neural networks (Greff et al., 2020; Csordás et al., 2021; Marcus, 2018; Lake & Baroni, 2018). New Dataset: Previous datasets with independently controllable factors such as dSprites, Shapes3D, and MPI3D (Matthey et al., 2017; Kim & Mnih, 2018; Gondal et al., 2019) stem from highly structured environments; for these datasets, common factors of variation are scaling, rotation, and simple geometrical shapes. We introduce a dataset derived from celebrity faces, named CelebGlow, with factors of variation such as smiling, age, and hair color. It also contains all possible factor combinations. It is based on latent traversals of a pretrained Glow network provided by Kingma & Dhariwal (2018) and the CelebA-HQ dataset (Liu et al., 2015). We hope that this benchmark can guide future efforts to find machine learning models capable of understanding the true underlying mechanisms in the data. To this end, all data sets and evaluation scripts are released alongside a leaderboard on GitHub.¹ 2 PROBLEM SETTING . Figure 1: Assumed graphical model connecting the factors of variation y = (y1, ..., yn) to observations x = g(y). The selection variable s ∈ {tr, te} leads to different train and test splits ps(y), thereby inducing correlation between the FoVs. Assume that we render each observation or image x ∈ R^d using a "computer graphics model" which takes as input a set of independently controllable factors of variation (FoVs) y ∈ R^n, such as size or color. More formally, we assume a generative process of the form x = g(y), where g : R^n → R^d is an injective and smooth function.
In the standard independently and identically distributed ( IID ) setting , we would generate the training and test data in the same way , i.e. , we would draw y from the same prior distribution p ( y ) and then generate the corresponding images x according to g ( · ) . Instead , we here consider an OOD setting where the prior distribution ptr ( y ) during training is different from the prior distribution pte ( y ) during testing . In fact , in all settings of our benchmark , the training and test distributions are completely disjoint , meaning that each point can only have non-zero probability mass in either ptr ( y ) or pte ( y ) . Crucially , however , the function g which maps between FoVs and observations is shared between training and testing , which is why we refer to it as an invariant mechanism . As shown in the causal graphical model in Fig . 1 , the factors of variation y are independently controllable to begin with , but the binary split variable s introduces spurious correlations between the FoVs that are different at training and test time as a result of selection bias ( Storkey , 2009 ; Bareinboim & Pearl , 2012 ) . In particular , we consider Random , Composition , Interpolation , and Extrapolation splits as illustrated in Fig . 2 . We refer to §4.2 for details on the implementation of these splits . The task for our machine learning models f is to estimate the factors of variation y that generated the sample x on both the training and test data . In other words , we want ( ideally ) that f = g−1 . The main challenge is that , during training , we only observe data from ptr but wish to generalize to pte . Hence , the learned function f should not only invert g locally on the training domain supp ( ptr ( y ) ) ⊆ Rn but ideally globally . In practice , let Dte = { ( yk , xk ) } be the test data with yk drawn from pte ( y ) and let f : Rd → Rn be the model .
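A minimal numeric sketch of this setup, assuming a toy injective mechanism g and a hand-picked selection rule (both hypothetical, not from the benchmark): the factors are independent on the full grid, but the train/test selection makes them correlated at training time while g stays shared:

```python
import numpy as np

def g(y):
    # Toy invariant mechanism g : R^2 -> R^3 (injective and smooth).
    y1, y2 = y
    return (y1, y2, y1 * y2)

# Independently controllable factors on a full factorial grid.
grid = [(y1, y2) for y1 in (0.0, 0.5, 1.0) for y2 in (0.0, 0.5, 1.0)]

# The selection variable s carves disjoint train/test supports out of the
# grid, inducing a spurious correlation between y1 and y2 at training time
# even though the factors are independent on the full grid.
train = [y for y in grid if y[0] <= y[1]]
test = [y for y in grid if y[0] > y[1]]

x_train = [g(y) for y in train]  # the same mechanism g renders both splits

corr_full = np.corrcoef([y[0] for y in grid], [y[1] for y in grid])[0, 1]
corr_train = np.corrcoef([y[0] for y in train], [y[1] for y in train])[0, 1]
```

Here `corr_full` is zero while `corr_train` is positive, which is exactly the selection-bias effect attributed to s in Fig. 1.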
Now , the goal is to design and optimize the model f on the training set Dtr such that it achieves a minimal R-squared distance between yk and f ( xk ) on the test set Dte . ( 1 : https://github.com/bethgelab/InDomainGeneralizationBenchmark ) During training , models are allowed to sample the data from all non-zero probability regions supp ( ptr ( y ) ) in whatever way is optimal for their learning algorithms . This general formulation covers different scenarios and learning methods that could prove valuable for learning independent mechanisms . For example , supervised methods will sample an IID data set Dtr = { ( yk , xk ) } with yk ∼ ptr ( y ) , while self-supervised methods might sample a data set of unlabeled image pairs Dtr = { ( xk , x̃k ) } . We aim to understand what inductive biases help on these OOD settings and how to best leverage the training data to learn representations that generalize . 3 INDUCTIVE BIASES FOR GENERALIZATION IN VISUAL REPRESENTATION LEARNING . We now explore different types of assumptions , or inductive biases , on the representational format ( §3.1 ) , architecture ( §3.2 ) , and dataset ( §3.3 ) which have been proposed and used in the past to facilitate generalization . Inductive inference and the generalization of empirical findings are a fundamental problem of science that has a long-standing history in many disciplines . Notable examples include Occam ’ s razor , Solomonoff ’ s inductive inference ( Solomonoff , 1964 ) , Kolmogorov complexity ( Kolmogorov , 1998 ) , the bias-variance tradeoff ( Kohavi et al. , 1996 ; Von Luxburg & Schölkopf , 2011 ) , and the no free lunch theorem ( Wolpert , 1996 ; Wolpert & Macready , 1997 ) . In the context of statistical learning , Vapnik and Chervonenkis ( Vapnik & Chervonenkis , 1982 ; Vapnik , 1995 ) showed that generalizing from a sample to its population ( i.e. , IID generalization ) requires restricting the capacity of the class of candidate functions—a type of inductive bias .
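The per-factor R-squared evaluation can be sketched as follows; this is a generic R² computation and may differ in details (e.g., normalization or aggregation) from the benchmark's exact scoring script:

```python
import numpy as np

def r2_per_factor(y_true, y_pred):
    """R^2 of predicted factors, computed separately per factor dimension.

    y_true, y_pred: arrays of shape (num_samples, num_factors). An R^2 of 1
    means perfect recovery of that factor; 0 matches the mean predictor.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_true - y_pred) ** 2).sum(axis=0)
    ss_tot = ((y_true - y_true.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot
```

Computing the score per factor is what makes it possible to observe the modularity result reported above: in-distribution factors can score high while out-of-distribution factors score low on the same test set.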
Since shifts between train and test distributions violate the IID assumption , however , statistical learning theory does not directly apply to our types of OOD generalization . OOD generalization across different ( e.g. , observational and experimental ) conditions also bears connections to causal inference ( Pearl , 2009 ; Peters et al. , 2017 ; Hernán & Robins , 2020 ) . Typically , a causal graph encodes assumptions about the relation between different distributions and is used to decide how to “ transport ” a learned model ( Pearl & Bareinboim , 2011 ; Pearl et al. , 2014 ; Bareinboim & Pearl , 2016 ; von Kügelgen et al. , 2019 ) . Other approaches aim to learn a model which leads to invariant prediction across multiple environments ( Schölkopf et al. , 2012 ; Peters et al. , 2016 ; Heinze-Deml et al. , 2018 ; Rojas-Carulla et al. , 2018 ; Arjovsky et al. , 2019 ; Lu et al. , 2021 ) . However , these methods either consider a small number of causally meaningful variables in combination with domain knowledge , or assume access to data from multiple environments . In our setting , on the other hand , we aim to learn from higher-dimensional observations and to generalize from a single training set to a different test environment . Our work focuses on OOD generalization in the context of visual representation learning , where deep learning has excelled over traditional learning approaches ( Krizhevsky et al. , 2012 ; LeCun et al. , 2015 ; Schmidhuber , 2015 ; Goodfellow et al. , 2016 ) . In the following , we therefore concentrate on inductive biases specific to deep neural networks ( Goyal & Bengio , 2020 ) on visual data . For details regarding specific objective functions , architectures , and training , we refer to the supplement . | The paper tests 17 unsupervised, weakly supervised, and fully supervised representation learning approaches to infer the generative factors of variation across three simple datasets in well-controlled conditions.
In addition, the authors introduce the CelebGlow dataset, which is more complex. The generalization abilities are characterized as composition, interpolation, and extrapolation. The conclusions drawn from the experimental results are interesting and suggest that most networks fail to generalize. | SP:5fefb833a05111c601ed2cad72f356b708c0ec42 |
Visual Representation Learning Does Not Generalize Strongly Within the Same Domain | 1 INTRODUCTION . Humans excel at learning underlying physical mechanisms or inner workings of a system from observations ( Funke et al. , 2021 ; Barrett et al. , 2018 ; Santoro et al. , 2017 ; Villalobos et al. , 2020 ; Spelke , 1990 ) , which helps them generalize quickly to new situations and to learn efficiently from little data ( Battaglia et al. , 2013 ; Dehaene , 2020 ; Lake et al. , 2017 ; Téglás et al. , 2011 ) . In contrast , machine learning systems typically require large amounts of curated data and still mostly fail to generalize to out-of-distribution ( OOD ) scenarios ( Schölkopf et al. , 2021 ; Hendrycks & Dietterich , 2019 ; Karahan et al. , 2016 ; Michaelis et al. , 2019 ; Roy et al. , 2018 ; Azulay & Weiss , 2019 ; Barbu et al. , 2019 ) . It has been hypothesized that this failure of machine learning systems is due to shortcut learning ( Kilbertus et al. , 2018 ; Ilyas et al. , 2019 ; Geirhos et al. , 2020 ; Schölkopf et al. , 2021 ) . In essence , machines seemingly learn to solve the tasks they have been trained on using auxiliary and spurious statistical relationships in the data , rather than true mechanistic relationships . Pragmatically , models relying on statistical relationships tend to fail if tested outside their training distribution , while models relying on ( approximately ) the true underlying mechanisms tend to generalize well to novel scenarios ( Barrett et al. , 2018 ; Funke et al. , 2021 ; Wu et al. , 2019 ; Zhang et al. , 2018 ; Parascandolo et al. , 2018 ; Schölkopf et al. , 2021 ; Locatello et al. , 2020a ; b ) . To learn effective statistical relationships , the training data needs to cover most combinations of factors of variation ( like shape , size , color , viewpoint , etc. ) . Unfortunately , the number of combinations scales exponentially with the number of factors .
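The combinatorial blow-up is easy to make concrete; the factor counts below are illustrative only:

```python
def num_combinations(values_per_factor, num_factors):
    """Number of distinct factor combinations in a full factorial grid."""
    return values_per_factor ** num_factors

# With 10 values per factor, exhaustive coverage explodes with each added
# factor, while the number of per-factor mechanisms only grows linearly:
# 2 factors -> 100 combinations, 6 factors -> 1,000,000 combinations.
```

Each additional factor multiplies the required coverage by another factor of ten in this example, which is the motivation for learning the mechanisms instead of memorizing combinations.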
| This paper presents an empirical study of generalization in visual representation learning.
The paper compares in-distribution (ID) generalization to out-of-distribution (OOD) generalization of three types -- interpolation, extrapolation, and composition. Datasets are constructed with several factors of variation, where the training split exhibits some factor combinations and the test split exhibits others, to test the ID and OOD settings. A variety of representation learning models with different kinds of inductive bias are tested. The main findings are: 1) all models perform better ID than OOD, indicating that all methods fail to find the true underlying generative mechanisms that created the data; 2) when some factors are ID and others are OOD, the ID factors are modeled well even when the OOD factors are modeled poorly, demonstrating a kind of modularity between the learning of different factors. The paper will be accompanied by a benchmark for others to improve on this task. | SP:5fefb833a05111c601ed2cad72f356b708c0ec42 |
Visual Representation Learning Does Not Generalize Strongly Within the Same Domain | The paper introduces a new dataset: CelebGLOW which is a controllable environment generation dataset, which can be used in the same form as significantly less complex datasets, such as sprites.
The paper evaluates 3 key inductive biases using the dataset: "representational format" (using images in this paper), neural network architectural variants (MLPs/CNNs/transformers, etc.), and the ability to perform transfer learning. The paper evaluates the models on 4 modes of generalization: interpolation, extrapolation, compositional learning, and "random", and draws fascinating conclusions through extensive experimentation. | SP:5fefb833a05111c601ed2cad72f356b708c0ec42 |
Asynchronous Multi-Agent Actor-Critic with Macro-Actions | 1 INTRODUCTION . In recent years , multi-agent policy gradient methods using the actor-critic framework have achieved impressive success in solving a variety of cooperative and competitive domains ( Lowe et al. , 2017 ; Foerster et al. , 2018 ; Du et al. , 2019 ; Iqbal & Sha , 2019 ; Vinyals et al. , 2019 ; Li et al. , 2019 ; Wang et al. , 2020 ; Yang et al. , 2020 ; Zhou et al. , 2020 ; Baker et al. , 2020 ; Su et al. , 2021 ; Wang et al. , 2021 ; Du et al. , 2021 ) . However , as these methods assume synchronized primitive-action execution over agents , they struggle to solve tasks that involve long-term reasoning and asynchronous behavior , such as real-world multi-robot applications ( e.g. , search and rescue ( Queralta et al. , 2020 ) , package delivery ( Choudhury et al. , 2021 ) and warehouse service ( Xiao et al. , 2020 ) ) . The Macro-Action Decentralized Partially Observable Markov Decision Process ( MacDec-POMDP ) ( Amato et al. , 2014 ; 2019 ) provides a general formalism for multi-agent asynchronous collaborative decision-making under uncertainty . Macro-actions represent temporally extended actions that have ( potentially ) different durations . This introduces asynchronous high-level decision-making over agents , as agents can start and terminate macro-actions at different timesteps . Such asynchronicity actually makes multi-agent reinforcement learning ( MARL ) more challenging because it is difficult to determine what information to use and when to update agents ’ policies from either the decentralized or centralized perspective . Despite several efforts made recently to enable agents to learn asynchronous hierarchical policies such as extending DQN ( Mnih et al. , 2015 ) to learn macro-action-value functions ( Xiao et al. , 2019 ) , transferring MacDec-POMDPs to event-driven processes with continuous timing ( Menda et al.
, 2019), and adapting a single-agent option-critic framework (Bacon et al., 2017) to multi-agent domains to learn all components (e.g., low-level policy, high-level abstraction, high-level policy) from scratch (Chakravorty et al., 2019), none of them provides a principled way to optimize macro-action-based policies via asynchronous policy gradients in order to solve general multi-agent problems with asynchronous decision-making. In this paper, we propose a group of macro-action-based multi-agent actor-critic methods that generalize the current primitive-action-based multi-agent actor-critic methods to multi-agent problems with macro-actions, while also allowing asynchronous policy optimization. First, we formulate a macro-action-based independent actor-critic (Mac-IAC) method. Although independent learning suffers from a theoretical curse of environmental non-stationarity, it allows fully online learning and may still work well in certain domains. Second, we introduce a macro-action-based centralized actor-critic (Mac-CAC) method for the case where full communication is available during execution. We also formulate a centralized training for decentralized execution (CTDE) paradigm (Kraemer & Banerjee, 2016; Oliehoek et al., 2008) variant of our method. CTDE has gained popularity since such methods can learn better decentralized policies by using centralized information during training. Current primitive-action-based multi-agent actor-critic methods typically use a centralized critic to optimize each decentralized actor. However, the asynchronous joint macro-action execution seen from the centralized perspective, with completion times that vary across agents, can look very different from each agent's decentralized perspective.
To this end, we first present a Naive Independent Actor with Centralized Critic (Naive IACC) method that naively uses a joint macro-action-value function as the critic for each actor's policy gradient estimation, and then propose an Independent Actor with Individual Centralized Critic (Mac-IAICC) method that addresses the above challenge. We evaluate our proposed methods on diverse macro-action-based multi-agent problems: a benchmark Box Pushing domain (Xiao et al., 2019), a variant of the Overcooked domain (Wu et al., 2021) and a larger warehouse service domain (Xiao et al., 2019). Experimental results show that our methods are able to learn high-quality solutions while primitive-action-based methods cannot, and show the strength of Mac-IAICC for learning decentralized policies over Naive IACC and Mac-IAC. To our knowledge, this is the first general formalization of macro-action-based multi-agent actor-critic frameworks covering the three state-of-the-art multi-agent training paradigms. 2 BACKGROUND. This section first introduces the formal definitions of the Dec-POMDP and the MacDec-POMDP, and then reviews single-agent and multi-agent actor-critic policy gradient methods with primitive actions. We also provide an overview of value-based MARL methods with macro-actions. 2.1 DEC-POMDPS AND MACDEC-POMDPS. The decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek & Amato, 2016) is a general framework for modeling fully cooperative multi-agent tasks in which agents make decisions in a decentralized way based on only local information. Formally, a Dec-POMDP is defined by a tuple $\langle I, S, A, \Omega, T, O, R, H, \gamma \rangle$, where $I$ is a set of agents; $S$ is the environmental state space; $A = \times_{i \in I} A_i$ is the joint primitive-action space over each agent's primitive-action set $A_i$; and $\Omega = \times_{i \in I} \Omega_i$ is the joint primitive-observation space over each agent's primitive-observation set $\Omega_i$.
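The joint primitive-action space $A = \times_{i \in I} A_i$ above is just a Cartesian product over per-agent action sets, which can be sketched in a few lines of Python (the two-agent action sets below are hypothetical illustration data, not from the paper):

```python
from itertools import product

# Per-agent primitive-action sets A_1 and A_2 (hypothetical example).
agent_action_sets = [["stay", "push"], ["stay", "turn", "push"]]

# Joint action space A = A_1 x A_2: every combination of individual actions.
joint_actions = list(product(*agent_action_sets))
```

The size of the joint space is the product of the individual set sizes (here 2 x 3 = 6), which is why centralized approaches over joint actions scale poorly with the number of agents.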
At every timestep, under a state $s$, agents synchronously execute a joint primitive-action $\vec{a} = \times_{i \in I} a_i$, each individually selected by an agent using a policy $\pi_i : H^A_i \times A_i \to [0, 1]$, a mapping from local primitive observation-action history $H^A_i$ to primitive-actions. The environment then transitions to a new state $s'$ according to a state transition function $T(s, \vec{a}, s') = P(s' \mid s, \vec{a})$. Agents receive a global reward $r(s, \vec{a})$ issued by a reward function $R : S \times A \to \mathbb{R}$, and obtain a joint primitive-observation $\vec{o} = \times_{i \in I} o_i$ drawn from an observation function $O(\vec{o}, \vec{a}, s') = P(\vec{o} \mid \vec{a}, s')$ in state $s'$. The objective is to find a joint policy $\vec{\pi} = \times_i \pi_i$ that optimizes the expected sum of discounted rewards from an initial state, $V^{\vec{\pi}}(s^{(0)}) = \mathbb{E}\big[\sum_{t=0}^{H-1} \gamma^t r(s^{(t)}, \vec{a}^{(t)}) \mid s^{(0)}, \vec{\pi}\big]$, where $\gamma \in [0, 1]$ is a discount factor and $H$ is the number of (primitive) timesteps until the problem terminates (the horizon). The macro-action decentralized partially observable Markov decision process (MacDec-POMDP) (Amato et al., 2014; 2019) incorporates the option framework (Sutton et al., 1999) into the Dec-POMDP by defining each agent's macro-action as a tuple $m_i = \langle I_{m_i}, \pi_{m_i}, \beta_{m_i} \rangle$, where the initiation set $I_{m_i} \subset H^M_i$ defines how to initiate a macro-action based on macro-observation-action history $H^M_i$ at the high level; $\pi_{m_i} : H^A_i \times A_i \to [0, 1]$ is the low-level policy for the execution of a macro-action; and a stochastic termination function $\beta_{m_i} : H^A_i \to [0, 1]$ determines how to terminate a macro-action based on primitive-observation-action history $H^A_i$ at the low level.
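The finite-horizon objective $V^{\vec{\pi}}(s^{(0)})$ can be made concrete for a single sampled trajectory, where the expectation collapses to a plain discounted accumulation of the shared team reward. A minimal sketch (the function name and reward sequence are hypothetical, not the paper's code):

```python
# Sketch of one sampled realization of the Dec-POMDP value:
# V = sum_{t=0}^{H-1} gamma^t * r_t for a shared team reward sequence.

def discounted_return(rewards, gamma):
    """Return sum_{t=0}^{H-1} gamma^t * r_t for a finite-horizon trajectory."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Hypothetical shared team rewards over a horizon of H = 4 primitive timesteps.
team_rewards = [0.0, 0.0, 1.0, 10.0]
value = discounted_return(team_rewards, gamma=0.95)
```

The same computation underlies both the Dec-POMDP and MacDec-POMDP objectives; in the macro-action case the reward is still accumulated at every primitive timestep, even though decisions are made only at macro-action boundaries.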
A MacDec-POMDP is thus formally defined by a tuple $\langle I, S, A, M, \Omega, \zeta, T, O, Z, R, H, \gamma \rangle$, where $I, S, A, \Omega, T, O, R, H$ and $\gamma$ keep the same definitions as in the Dec-POMDP; $M = \times_{i \in I} M_i$ is the joint macro-action space over each agent's macro-action space $M_i$; $\zeta = \times_{i \in I} \zeta_i$ is the joint macro-observation space over each agent's macro-observation space $\zeta_i$; and $Z = \{Z_i\}_{i \in I}$ is a set of macro-observation likelihood models. During execution, each agent independently selects a macro-action $m_i$ using a high-level policy $\Psi_i : H^M_i \times M_i \to [0, 1]$, a mapping from macro-observation-action history to macro-actions, and captures a macro-observation $z_i \in \zeta_i$ according to the macro-observation probability function $Z_i(z_i, m_i, s') = P(z_i \mid m_i, s')$ when the macro-action terminates in a state $s'$. The objective of solving MacDec-POMDPs with finite horizon is to find a joint high-level policy $\vec{\Psi} = \times_{i \in I} \Psi_i$ that maximizes the value, $V^{\vec{\Psi}}(s^{(0)}) = \mathbb{E}\big[\sum_{t=0}^{H-1} \gamma^t r(s^{(t)}, \vec{a}^{(t)}) \mid s^{(0)}, \vec{\pi}, \vec{\Psi}\big]$. 2.2 SINGLE-AGENT ACTOR-CRITIC. In single-agent reinforcement learning, the policy gradient theorem (Sutton et al., 2000) formulates a principled way to optimize a parameterized policy $\pi_\theta$ via gradient ascent on the policy's performance, defined as $J(\theta) = \mathbb{E}_{\pi_\theta}\big[\sum_{t=0}^{\infty} \gamma^t r(s^{(t)}, a^{(t)})\big]$. In POMDPs, the gradient w.r.t. the parameters of an observation-action history-based policy $\pi_\theta(a \mid h)$ is expressed as: $\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\big[\nabla_\theta \log \pi_\theta(a \mid h)\, Q^{\pi_\theta}(h, a)\big]$ (1), where $h$ is often maintained by having an RNN in the policy network (Hausknecht & Stone, 2015). The actor-critic framework (Konda & Tsitsiklis, 2000) learns an on-policy action-value function $Q^{\pi_\theta}_\phi(h, a)$ (the critic) via temporal-difference (TD) learning (Sutton, 1988) to approximate the action-value for the policy (actor) updates.
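For a concrete instance of the score-function estimator in Eq. (1), consider a softmax policy over discrete actions: the gradient of $\log \pi(a)$ with respect to the logits has the one-hot-minus-probabilities form, and a single-sample policy-gradient term scales that score by $Q(h, a)$. A minimal sketch (all function names and numbers are illustrative assumptions, not the paper's implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def grad_log_pi(logits, action):
    """Gradient of log pi(action) w.r.t. softmax logits: onehot(action) - pi."""
    probs = softmax(logits)
    return [(1.0 if k == action else 0.0) - p for k, p in enumerate(probs)]

def pg_term(logits, action, q_value):
    """One-sample estimate of grad log pi(a|h) * Q(h, a) from Eq. (1)."""
    return [g * q_value for g in grad_log_pi(logits, action)]
```

Averaging `pg_term` over sampled trajectories yields an unbiased estimate of the gradient in Eq. (1); in practice the logits come from an RNN over the observation-action history $h$.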
Variance reduction is commonly achieved by training a history-value function $V^{\pi_\theta}_w(h)$ and using it as a baseline (Weaver & Tao, 2001), as well as bootstrapping to estimate the action-value. Accordingly, the actor-critic policy gradient can be written as: $\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\big[\nabla_\theta \log \pi_\theta(a \mid h)\,(r + \gamma V^{\pi_\theta}_w(h') - V^{\pi_\theta}_w(h))\big]$ (2), where $r$ is the immediate reward received by the agent at the corresponding timestep. 2.3 INDEPENDENT ACTOR-CRITIC. The single-agent actor-critic algorithm can be adapted to multi-agent problems in a simple way: each agent independently learns its own actor and critic while treating other agents as part of the world (Foerster et al., 2018). We consider a variance-reduction version of independent actor-critic (IAC) with the policy gradient: $\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\vec{\pi}_{\vec{\theta}}}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\,(r + \gamma V^{\pi_{\theta_i}}_{w_i}(h'_i) - V^{\pi_{\theta_i}}_{w_i}(h_i))\big]$ (3), where $r$ is a reward shared over agents at every timestep. Due to other agents' policy updating and exploring, from any agent's local perspective the environment appears non-stationary, which can lead to unstable learning of the critic without convergence guarantees (Lowe et al., 2017). This instability often prevents IAC from learning high-quality cooperative policies. 2.4 INDEPENDENT ACTOR WITH CENTRALIZED CRITIC. To address the above difficulties of independent learning approaches, centralized training for decentralized execution (CTDE) provides agents with access to global information during offline training while allowing agents to rely on only local information during online decentralized execution. Typically, the key idea of exploiting CTDE with actor-critic is to train a joint action-value function, $Q^{\vec{\pi}_{\vec{\theta}}}_\phi(x, \vec{a})$, as the centralized critic and use it to compute gradients w.r.t. the parameters of each decentralized policy (Foerster et al., 2018; Lowe et al.
, 2017), which can be formulated as: $\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\vec{\pi}_{\vec{\theta}}}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\, Q^{\vec{\pi}_{\vec{\theta}}}_\phi(x, \vec{a})\big]$ (4), where $x$ represents the available centralized information (e.g., the joint observation, the joint observation-action history, or the true state). Although the centralized critic in Eq. 4 can facilitate the update of decentralized policies in the direction that optimizes global collaborative performance, it also introduces extra variance over other agents' actions (Lyu et al., 2021; Wang et al., 2021). Therefore, we consider the version of independent actor with centralized critic (IACC) with a general variance reduction trick (Foerster et al., 2018; Su et al., 2021), the policy gradient of which is: $\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\vec{\pi}_{\vec{\theta}}}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\,(r + \gamma V^{\vec{\pi}_{\vec{\theta}}}_w(x') - V^{\vec{\pi}_{\vec{\theta}}}_w(x))\big]$ (5) | The paper tackles the problem of learning asynchronous multi-agent policies with macro-actions. The authors present a set of asynchronous multi-agent actor-critic methods to solve the problem, which allow agents to `directly' optimize policies that are asynchronous and macro-action based. They apply the framework to 3 standard multi-agent learning paradigms: decentralized learning, centralized learning, and centralized training for decentralized execution. They also empirically show the utility of the methods over the standard individual actor-critic method and centralized actor-critic method on 3 multi-agent cooperative tasks: Box Pushing, Overcooked and Warehouse. | SP:fdcb67feb73b7ae98789711b179fe873b32ae44e |
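The contrast between the independent gradient of Eq. (3) and the centralized-critic gradient of Eq. (5) can be sketched numerically: both scale the agent's score function by a TD advantage $r + \gamma V(\cdot') - V(\cdot)$, but IAC computes it from the agent's local history $h_i$, while IACC computes one shared advantage from centralized information $x$. A minimal sketch under stated assumptions (softmax policies, tabular values, hypothetical numbers; not the paper's implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def score(logits, action):
    """grad_z log pi(action) for a softmax policy: onehot(action) - pi."""
    probs = softmax(logits)
    return [(1.0 if k == action else 0.0) - p for k, p in enumerate(probs)]

def td_advantage(r, v_next, v_now, gamma):
    """Shared building block of Eqs. (2), (3) and (5)."""
    return r + gamma * v_next - v_now

def iac_gradient(logits_i, a_i, r, v_hi, v_hi_next, gamma):
    """Eq. (3): advantage computed from agent i's own local history values."""
    adv = td_advantage(r, v_hi_next, v_hi, gamma)
    return [g * adv for g in score(logits_i, a_i)]

def iacc_gradients(agent_logits, joint_action, r, v_x, v_x_next, gamma):
    """Eq. (5): one centralized advantage shared by every agent's gradient."""
    adv = td_advantage(r, v_x_next, v_x, gamma)
    return [[g * adv for g in score(logits, a)]
            for logits, a in zip(agent_logits, joint_action)]
```

The design point the paper builds on is visible here: in `iacc_gradients` each agent differentiates only its own policy, yet all agents are pushed by the same centrally computed advantage, which is what the macro-action extensions must reconcile with asynchronous termination times.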
Asynchronous Multi-Agent Actor-Critic with Macro-Actions | (paper text identical to the row above) | This paper considers MacDec-POMDPs, which require agents to be capable of acting asynchronously without waiting for other agents to terminate. As there is no multi-agent policy gradient method with macro-actions for MacDec-POMDPs, this paper fills that gap: it integrates macro-action values into multi-agent policy gradients and proposes (i) a macro-action-based independent actor-critic (Mac-IAC) method, (ii) a macro-action-based centralized actor-critic (Mac-CAC) method, and (iii) a Naive Independent Actor with Centralized Critic (Naive IACC) as well as an Independent Actor with Individual Centralized Critic (Mac-IAICC) via CTDE. Experimental results show that the proposed methods outperform vanilla baselines. | SP:fdcb67feb73b7ae98789711b179fe873b32ae44e |
Asynchronous Multi-Agent Actor-Critic with Macro-Actions | 1 INTRODUCTION . In recent years , multi-agent policy gradient methods using the actor-critic framework have achieved impressive success in solving a variety of cooperative and competitive domains ( Lowe et al. , 2017 ; Foerster et al. , 2018 ; Du et al. , 2019 ; Iqbal & Sha , 2019 ; Vinyals et al. , 2019 ; Li et al. , 2019 ; Wang et al. , 2020 ; Yang et al. , 2020 ; Zhou et al. , 2020 ; Baker et al. , 2020 ; Su et al. , 2021 ; Wang et al. , 2021 ; Du et al. , 2021 ) . However , as these methods assume synchronized primitive-action execution over agents , they struggle to solve tasks that involve long-term reasoning and asynchronous behavior , such as real-world multi-robot applications ( e.g. , search and rescue ( Queralta et al. , 2020 ) , package delivery ( Choudhury et al. , 2021 ) and warehouse service ( Xiao et al. , 2020 ) ) . The Macro-Action Decentralized Partially Observable Markov Decision Process ( MacDecPOMDP ) ( Amato et al. , 2014 ; 2019 ) provides a general formalism for multi-agent asynchronous collaborative decision-making under uncertainty . Macro-actions represent temporally extended actions that have ( potentially ) different durations . This introduces asynchronous high-level decisionmaking over agents , as agents can start and terminate macro-actions at different timesteps . Such asynchronicity actually makes multi-agent reinforcement learning ( MARL ) more challenging because it is difficult to determine what information to use and when to update agents ’ policies from either the decentralized or centralized perspective . Despite several efforts made recently to enable agents to learn asynchronous hierarchical policies such as extending DQN ( Mnih et al. , 2015 ) to learn macro-action-value functions ( Xiao et al. , 2019 ) , transferring MacDec-POMDPs to event-driven processes with continuous timing ( Menda et al. 
, 2019 ) , and adapting a single-agent option-critic framework ( Bacon et al. , 2017 ) to multi-agent domains to learn all components ( e.g . low-level policy , high-level abstraction , high-level policy ) from scratch ( Chakravorty et al. , 2019 ) , none of them provides a principled way for optimizing macroaction-based policies via asynchronous policy gradients to solve general multi-agent problems with asynchronous decision-making . In this paper , we propose a group of macro-action-based multi-agent actor-critic methods to generalize the current primitive-action-based multi-agent actor-critic methods to multi-agent problems with macro-actions as well as allowing asynchronous policy optimization . First , we formulate a macroaction-based independent actor-critic ( Mac-IAC ) method . Although independent learning suffers from a theoretical curse of environmental non-stationarity , it allows fully online learning and may still work well in certain domains . Second , we introduce a macro-action-based centralized actorcritic ( Mac-CAC ) method , for the case where full communication is available during execution . We also formulate a centralized training for decentralized execution ( CTDE ) paradigm ( Kraemer & Banerjee , 2016 ; Oliehoek et al. , 2008 ) variant of our method . CTDE has gained popularity since such methods can learn better decentralized policies by using centralized information during training . Current primitive-action-based multi-agent actor-critic methods typically use a centralized critic to optimize each decentralized actor . However , the asynchronous joint macro-action execution from the centralized perspective could be very different with the completion time being very different from each agent ’ s decentralized perspective . 
To this end , we first present a Naive Independent Actor with Centralized Critic ( Naive IACC ) method that naively uses a joint macro-action-value function as the critic for each actor ’ s policy gradient estimation ; and then propose an Independent Actor with Individual Centralized Critic ( Mac-IAICC ) method addressing the above challenge . We evaluate our proposed methods on diverse macro-action-based multi-agent problems : a benchmark Box Pushing domain ( Xiao et al. , 2019 ) , a variant of the Overcooked domain ( Wu et al. , 2021 ) and a larger warehouse service domain ( Xiao et al. , 2019 ) . Experimental results show that our methods are able to learn high-quality solutions while primitive-action-based methods can not , and show the strength of Mac-IAICC for learning decentralized policies over Naive IAICC and Mac-IAC . To our knowledge , this is the first general formalization of macro-action-based multi-agent actor-critic frameworks considering the three state-of-the-art multi-agent training paradigms . 2 BACKGROUND . This section first introduces the formal definitions of the Dec-POMDP and the MacDec-POMDP , and then reviews single-agent and multi-agent actor-critic policy gradient methods with primitiveactions . We also provide an overview of value-based MARL methods with macro-actions . 2.1 DEC-POMDPS AND MACDEC-POMDPS . The decentralized partially observable Markov decision processes ( Dec-POMDP ) ( Oliehoek & Amato , 2016 ) is a general framework to model fully cooperative multi-agent tasks , where agents make decisions in a decentralized way based on only local information . Formally , a Dec-POMDP is defined by a tuple 〈I , S , A , Ω , T , O , R , H , γ〉 , where I is a set of agents ; S is the environmental state space ; A = ×i∈IAi is the joint primitive-action space over each agent ’ s primitive-action set Ai ; Ω = ×i∈IΩi is the joint primitive-observation space over each agent ’ s primitive-observation set Ωi . 
At every timestep , under a state s , agents synchronously execute a joint primitive-action~a = ×i∈Iai , each individually selected by an agent using a policy πi : HAi × Ai → [ 0 , 1 ] , a mapping from local primitive observation-action historyHAi to primitive-actions . The environment then transits to a new state s′ according to a state transition function T ( s , ~a , s′ ) = P ( s′ | s , ~a ) . Agents receive a global reward r ( s , ~a ) issued by a reward function R : S×A→ R , and obtain a joint primitive-observation ~o = ×i∈Ioi drawn from an observation function O ( ~o , ~a , s′ ) = P ( ~o | ~a , s′ ) in state s′ . The objective is to find a joint policy ~π = ×iπi such that the expected sum of discounted rewards from an initial state , V ~π ( s ( 0 ) ) = E [ ∑H−1 t=0 γ tr ( s ( t ) , ~a ( t ) ) | s ( 0 ) , ~π ] , gets optimized , where γ ∈ [ 0 , 1 ] is a discount factor , and H is the number of ( primitive ) timesteps until the problem terminates ( the horizon ) . The macro-action decentralized partially observable Markov decision process ( MacDecPOMDP ) ( Amato et al. , 2014 ; 2019 ) incorporates the option framework ( Sutton et al. , 1999 ) into the Dec-POMDP by defining each agent ’ s macro-action as a tuple mi = 〈Imi , πmi , βmi〉 , where the initiation set Imi ⊂ HMi defines how to initiate a macro-action based on macro-observationaction history HMi at the high-level ; πmi : H A i × Ai → [ 0 , 1 ] is the low-level policy for the execution of a macro-action ; and a stochastic termination function βmi : H A i → [ 0 , 1 ] determines how to terminate a macro-action based on primitive-observation-action history HAi at the low-level . 
A MacDec-POMDP is thus formally defined by a tuple 〈I , S , A , M , Ω , ζ , T , O , Z , R , H , γ〉 , where I , S , A , Ω , T , O , R , H and γ remain the same definitions as in the Dec-POMDP ; M = ×i∈IMi is the joint macro-action space over each agent ’ s macro-action space Mi ; ζ = ×i∈Iζi is the joint macroobservation space over each agent ’ s macro-observation space ζi ; and Z = { Zi } i∈I is a set of macroobservation likelihood models . During execution , each agent independently selects a macro-action mi using a high-level policy Ψi : HMi ×Mi → [ 0 , 1 ] , a mapping from macro-observation-action his- tory to macro-actions , and captures a macro-observation zi ∈ ζi according to the macro-observation probability function Zi ( zi , mi , s′ ) = P ( zi | mi , s′ ) when the macro-action terminates in a state s′ . The objective of solving MacDec-POMDPs with finite horizon is to find a joint high-level policy ~Ψ = ×i∈IΨi that maximizes the value , V ~Ψ ( s ( 0 ) ) = E [ ∑H−1 t=0 γ tr ( s ( t ) , ~a ( t ) ) | s ( 0 ) , ~π , ~Ψ ] . 2.2 SINGLE-AGENT ACTOR-CRITIC . In single-agent reinforcement learning , the policy gradient theorem ( Sutton et al. , 2000 ) formulates a principled way to optimize a parameterized policy πθ via gradient ascent on the policy ’ s performance defined as J ( θ ) = Eπθ [ ∑∞ t=0 γ tr ( s ( t ) , a ( t ) ) ] . In POMDPs , the gradient w.r.t . parameters of a observation-action history-based policy πθ ( a | h ) is expressed as : ∇θJ ( θ ) = Eπθ [ ∇θ log πθ ( a | h ) Qπθ ( h , a ) ] ( 1 ) where , h is often maintained by having a RNN in the policy network ( Hausknecht & Stone , 2015 ) . The actor-critic framework ( Konda & Tsitsiklis , 2000 ) learns an on-policy action-value function Qπθφ ( h , a ) ( critic ) via temporal-difference ( TD ) learning ( Sutton , 1988 ) to approximate the actionvalue for the policy ( actor ) updates . 
Variance reduction is commonly achieved by training a history-value function $V^{\pi_\theta}_w(h)$ and using it as a baseline (Weaver & Tao, 2001), as well as bootstrapping to estimate the action-value. Accordingly, the actor-critic policy gradient can be written as:

$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\big[\nabla_\theta \log \pi_\theta(a \mid h)\big(r + \gamma V^{\pi_\theta}_w(h') - V^{\pi_\theta}_w(h)\big)\big]$ (2)

where $r$ is the immediate reward received by the agent at the corresponding timestep.

2.3 INDEPENDENT ACTOR-CRITIC. The single-agent actor-critic algorithm can be adapted to multi-agent problems in a simple way: each agent independently learns its own actor and critic while treating other agents as part of the world (Foerster et al., 2018). We consider a variance-reduced version of independent actor-critic (IAC) with the following policy gradient:

$\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\vec{\pi}_{\vec{\theta}}}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\big(r + \gamma V^{\pi_{\theta_i}}_{w_i}(h'_i) - V^{\pi_{\theta_i}}_{w_i}(h_i)\big)\big]$ (3)

where $r$ is a reward shared by all agents at every timestep. Because other agents keep updating and exploring their policies, the environment appears non-stationary from any agent's local perspective, which can lead to unstable learning of the critic without convergence guarantees (Lowe et al., 2017). This instability often prevents IAC from learning high-quality cooperative policies.

2.4 INDEPENDENT ACTOR WITH CENTRALIZED CRITIC. To address the above difficulties of independent learning approaches, centralized training for decentralized execution (CTDE) provides agents with access to global information during offline training while allowing agents to rely only on local information during online decentralized execution. Typically, the key idea of exploiting CTDE with actor-critic methods is to train a joint action-value function $Q^{\vec{\pi}_{\vec{\theta}}}_\phi(x, \vec{a})$ as the centralized critic and use it to compute gradients w.r.t. the parameters of each decentralized policy (Foerster et al., 2018; Lowe et al.
, 2017), which can be formulated as:

$\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\vec{\pi}_{\vec{\theta}}}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\, Q^{\vec{\pi}_{\vec{\theta}}}_\phi(x, \vec{a})\big]$ (4)

where $x$ represents the available centralized information (e.g., the joint observation, joint observation-action history, or the true state). Although the centralized critic in Eq. 4 can facilitate updating the decentralized policies in a direction that optimizes global collaborative performance, it also introduces extra variance over other agents' actions (Lyu et al., 2021; Wang et al., 2021). Therefore, we consider a version of independent actor with centralized critic (IACC) with a general variance-reduction trick (Foerster et al., 2018; Su et al., 2021), whose policy gradient is:

$\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\vec{\pi}_{\vec{\theta}}}\big[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i \mid h_i)\big(r + \gamma V^{\vec{\pi}_{\vec{\theta}}}_w(x') - V^{\vec{\pi}_{\vec{\theta}}}_w(x)\big)\big]$ (5) | The paper presents policy-gradient-based methods for learning with macro-actions in environments where multiple agents initiate and terminate actions at different timesteps. The paper identifies a suitable framework (MacDec-POMDPs) for these tasks and develops a series of actor-critic mechanisms that allow agents to coordinate asynchronous macro-actions. Four main macro-action algorithms are proposed:
* Mac-IAC - applies a variance-reduced version of actor-critic directly to the multi-agent domain
* Mac-CAC - treats all agents as a single global agent
* Mac-IACC - described as naive, uses a centralised critic but derives independent policy gradients for each agent
* Mac-IAICC - again uses a centralised-critic-like mechanism, but one where the termination times of the individual agents correct the traces used to train this central critic. Again this derives independent policy gradients for each agent.
The experimental section presents three models: box pushing, overcooked and warehouse.
From my reading of the paper, these models provide predefined options to all agents (so all agents have access to all options), where some options are temporally extended and some are single-step. Rewards are defined that are broadcast to all agents, and in the case of the macro-action algorithms are used to learn the coordination policy (high level) but not the low-level options. Two non-macro-action methods are evaluated against Mac-IAC and Mac-CAC on Box-pushing and Overcooked. This finds that macro-action methods are better than the primitive-action methods (less interesting - see notes), and that there are situations in which Mac-IAC is better, but more typically Mac-CAC outperforms Mac-IAC when there is a significant need for coordination. The next series of experiments compares the four macro-action methods and shows (most pertinently) that the Mac-IAICC method achieves better coordination than Mac-IAC, nearing the performance of Mac-CAC even in the most coordination-intensive conditions. Naive Mac-IACC improves upon Mac-IAC in terms of coordination but suffers in coordination-intensive tasks. This supports the authors' argument that the variance in the shared critic for Naive Mac-IACC is damaging and that the proposed individual centralised critic approach for Mac-IAICC significantly improves upon this. | SP:fdcb67feb73b7ae98789711b179fe873b32ae44e |
Deep Probability Estimation | 1 INTRODUCTION. We consider the problem of building models that answer questions such as: Will it rain? Will a patient survive? Will a car collide with another vehicle? Due to the inherently uncertain nature of these real-world phenomena, this requires performing probability estimation, i.e. estimating the probability of each possible outcome of the phenomenon of interest. Models for probability prediction must be trained on observed outcomes (e.g. whether it rained, a patient died, or a collision occurred), because the ground-truth probabilities are unknown. The problem is therefore analogous to binary classification, with the important difference that the objective is to estimate probabilities rather than to predict specific outcomes. In probability estimation, two identical inputs (e.g. histopathology images from cancer patients) can potentially result in two different outcomes (death vs. survival). In contrast, in classification the class label is usually completely determined by the data (a picture either shows a cat or it does not). The goal of this work is to investigate probability estimation from high-dimensional data using deep neural networks. Deep networks trained for classification often generate probabilities, which quantify the uncertainty of the estimate (i.e. how likely the network is to classify correctly). This quantification has been observed to be inaccurate, and several methods have been developed to improve it (Platt, 1999; Guo et al., 2017; Szegedy et al., 2016; Zhang et al., 2020; Thulasidasan et al., 2020; Mukhoti et al., 2020; Thagaard et al., 2020), including Bayesian neural networks (Gal & Ghahramani, 2016; Wang et al., 2016; Shekhovtsov & Flach, 2019; Postels et al., 2019). However, these works restrict their attention almost exclusively to classification datasets (e.g. CIFAR10/100 (Krizhevsky, 2009) or ImageNet (Deng et al.
, 2009)) where the label is not uncertain, and therefore the uncertainty is completely tied to the model: it quantifies the confidence of the model in its own prediction, not the probability of an event of interest. Probability estimation from high-dimensional data is crucial in medical prognostics (Wulczyn et al., 2020), weather prediction (Agrawal et al., 2019), and autonomous driving (Kim et al., 2019). In order to advance deep-learning methodology for probability estimation, it is crucial to build appropriate benchmark datasets. Here we build a synthetic dataset and gather three real-world datasets, which we use to systematically evaluate existing methodology. In addition, we propose a novel approach, which outperforms current state-of-the-art methods. Our contributions are the following:

• We introduce a new synthetic dataset for probability estimation, in which members of a population may have a certain disease connected to age. The task is to predict the probability that a person contracts the disease from a picture of their face. The data are generated based on the UTKFaces dataset (Zhang et al., 2017a), which contains age information. The dataset contains multiple versions of the synthetic labels, generated according to different distributions designed to mimic real-world probability-prediction datasets. The dataset serves two objectives. First, it allows us to evaluate existing methodology. Second, it enables us to evaluate different metrics in a controlled scenario where we have access to ground-truth probabilities.

• We have used publicly available data to build probability-estimation benchmark datasets for three real-world applications: (1) precipitation forecasting from radar images, (2) prediction of cancer-patient survival from histopathology images, and (3) prediction of vehicle collisions from dashcam videos.
We use these datasets to systematically evaluate existing approaches, which have previously been tested mainly on classification datasets.

• We propose Calibrated Probability Estimation (CaPE), a novel technique which modifies the training process so that output probabilities are consistent with empirical probabilities computed from the data. CaPE outperforms existing approaches on most metrics on synthetic and real-world data.

2 PROBLEM FORMULATION: PROBABILITY ESTIMATION. The goal of probability estimation is to evaluate the likelihood of a certain event of interest based on observed data. The available training data consist of $n$ examples $x_i$, $1 \le i \le n$, each associated with a corresponding outcome $y_i$. In our applications of interest, the input data are high-dimensional: each $x_i$ corresponds to an image or a video. The corresponding label $y_i$ is either 0 or 1, depending on whether or not the event in question occurred. For example, in the cancer-survival application $x_i$ is a histopathology image of a patient, and $y_i$ equals 1 if the patient survived for 5 years after $x_i$ was collected. The data have inherent uncertainty: $y_i$, the patient's survival, does not depend deterministically on the histopathology image (due e.g. to comorbidities and other health factors). Instead, we assume that $y_i$ equals 1 with a certain probability $p_i$ associated with $x_i$, as illustrated in Figure 1, because the input data provide key information about the patient's survival chances. At inference, a probability-estimation model aims to generate an estimate $\hat{p}$ of the underlying probability $p$ associated with a new input data point $x$ (e.g. the probability of survival for over 5 years for new patients, based on their histopathology data). To summarize, this is not a classification problem, because the labels are not completely predictable.
Instead, the goal is to predict the probability of the outcome, which is critical in choosing a course of treatment for the patient.

3 EVALUATION METRICS. Probability estimation shares similar target labels and network outputs with binary classification. However, classification accuracy is not an appropriate metric for evaluating probability-estimation models, due to the inherent uncertainty of the outcomes. This is illustrated by the example in Figure 2a, where a perfect probability estimate would result in a classification accuracy of just 75%.¹

Metrics when ground-truth probabilities are available. For synthetic datasets, we have access to the ground-truth probability labels and can use them to evaluate performance. Two reasonable metrics are the mean squared error ($\ell_2$ distance) $\mathrm{MSE}_p$ and the Kullback-Leibler divergence $\mathrm{KL}_p$ between the estimated and ground-truth probabilities:

$\mathrm{MSE}_p = \frac{1}{N}\sum_{i=1}^{N}(\hat{p}_i - p_i)^2, \qquad \mathrm{KL}_p = \frac{1}{N}\sum_{i=1}^{N}\Big(\hat{p}_i \log\frac{\hat{p}_i}{p_i} + (1-\hat{p}_i)\log\frac{1-\hat{p}_i}{1-p_i}\Big).$ (1)

Here $N$ is the number of data points, and $p_i$, $\hat{p}_i$ are the ground-truth and predicted probabilities, respectively.

Calibration metrics. In real-world data, ground-truth probabilities are not available. In order to evaluate the probabilities estimated by a model, we need to compare them to the observed probabilities. To this end, we aggregate the examples for which the model output equals a certain value (e.g. 0.5), and verify what fraction of them have outcomes equal to 1. If the fraction is close to the model output, the model is said to be well calibrated.

Definition 3.1. A model $f$ is well calibrated if $P(y = 1 \mid f(x) \in I(q)) = q$, $\forall\, 0 \le q \le 1$, (2) where $y$ is the observed outcome, $f(x)$ is the probability predicted by model $f$ for input $x$, and $I(q)$ is a small interval around $q$.

Model calibration can be evaluated using the expected calibration error (ECE) (Guo et al., 2017) (note, however, that the definition of Guo et al.
(2017) is specific to classification). Given a probability-estimation model $f$ and a dataset of input data $x_i$ with associated outcomes $y_i$, $1 \le i \le N$, we partition the examples into $B$ bins $I_1, I_2, \cdots, I_B$, according to the probabilities assigned to the examples by the model. Let $Q_1, \ldots, Q_{B-1}$ be the $B$-quantiles of the set $\{f(x_1), \ldots, f(x_N)\}$; we have $I_b := [Q_{b-1}, Q_b] \cap \{f(x_i)\}_{i=1}^{N}$ (setting $Q_0 = 0$). For each bin, we compute the mean empirical and predicted probabilities,

$p^{(b)}_{\mathrm{emp}} = \mathbb{E}(y \mid f(x) \in I_b) = \frac{1}{|I_b|}\sum_{i \in \mathrm{Index}(I_b)} y_i,$ (3)

$q^{(b)} = \frac{1}{|I_b|}\sum_{i \in \mathrm{Index}(I_b)} f(x_i),$ (4)

where $\mathrm{Index}(I_b) = \{i \mid f(x_i) \in I_b\}$. The pairs $(q^{(b)}, p^{(b)}_{\mathrm{emp}})$ can be plotted as a reliability diagram, shown in the second row of Figure 4 and in Figure 6. ECE is then defined as

$\mathrm{ECE} = \frac{1}{B}\sum_{b=1}^{B}\big|p^{(b)}_{\mathrm{emp}} - q^{(b)}\big|.$ (5)

Other metrics for calibration include the maximum calibration error (MCE), defined as $\mathrm{MCE} = \max_{b=1,\ldots,B}\big|p^{(b)}_{\mathrm{emp}} - q^{(b)}\big|$, and the Kolmogorov-Smirnov error (KS-error) (Gupta et al., 2021), a metric based on the cumulative distribution function, which is described in more detail in Appendix B.

Brier score. Crucially, a model without any discriminative power can be perfectly calibrated (see Figure 2b). The Brier score is a metric designed to evaluate both calibration and discriminative ability. It is the mean squared error between the predicted probabilities and the observed outcomes:

$\mathrm{Brier} = \frac{1}{N}\sum_{i=1}^{N}(\hat{p}_i - y_i)^2.$

¹ A perfect model (in terms of probability estimation) assigns 0.25 to the blue class and 0.75 to the red class. To maximize classification accuracy, we predict 1 when the model outputs 0.75 (red examples) and 0 when it outputs 0.25 (blue examples). However, 25% of red examples have an outcome of 0, and 25% of blue examples have an outcome of 1. As a result, the model would only have 75% accuracy.
(6)

This score can be decomposed into two terms associated with calibration and discrimination ability, as shown in Appendix C. Using the synthetic data in Section 6.1, where the ground-truth probabilities are known, we show that the Brier score is indeed a reliable proxy for the gold-standard MSE metric based on ground-truth probabilities, $\mathrm{MSE}_p$, in contrast to calibration metrics such as ECE, MCE or KS-error, and to classification metrics such as AUC (see Figure 3 and Appendix D).

4 PROPOSED METHOD: CALIBRATED PROBABILITY ESTIMATION (CAPE). Prediction models based on deep learning are typically trained by minimizing the cross entropy between the model output and the training labels (Goodfellow et al., 2016). This cost function is a proper scoring rule, which means that it evaluates probability estimates in a consistent manner and is therefore guaranteed to be well calibrated in an infinite-data regime (Buja et al., 2005), as illustrated by Figure 4 (first column). Unfortunately, in practice prediction models are trained on finite data. This is crucial in the case of deep neural networks, because these models are highly overparametrized and therefore prone to overfitting (Goodfellow et al., 2016). In classification, networks have been shown to be capable of fitting arbitrary random labels (Zhang et al., 2017a). In probability estimation, we observe that neural networks indeed eventually overfit the observed outcomes completely. Moreover, the estimated probabilities collapse to 0 or 1 (Figure 4, second column), a phenomenon that has also been reported in classification (Mukhoti et al., 2020). However, calibration is preserved during the first stages of training (Figure 4, third column). This is reminiscent of the early-learning phenomenon observed for classification from partially corrupted labels (Yao et al., 2020; Xia et al.
, 2020), where neural networks learn from the correct labels before eventually overfitting the false ones (Liu et al., 2020). Here, we propose to exploit the training dynamics of cross-entropy minimization through a method that we name Calibrated Probability Estimation (CaPE). Our starting point is a model obtained via early stopping on the cross-entropy loss using validation data. CaPE is designed to further improve the discrimination ability of the model, while ensuring that it remains well calibrated. This is achieved by alternately minimizing the following two loss functions.

Discrimination loss: the cross entropy between the model output and the observed binary outcomes,

$L_D = -\sum_{i=1}^{N}\big[y_i \log(f(x_i)) + (1 - y_i)\log(1 - f(x_i))\big].$

Calibration loss: the cross entropy between the output probability of the model and the empirical probability of the outcomes conditioned on the model output,

$L_C = -\sum_{i=1}^{N}\big[p^{i}_{\mathrm{emp}} \log(f(x_i)) + (1 - p^{i}_{\mathrm{emp}})\log(1 - f(x_i))\big],$

where $p^{i}_{\mathrm{emp}}$ is an estimate of the conditional probability $P[y = 1 \mid f(x) \in I(f(x_i))]$, and $I(f(x_i))$ is a small interval centered at $f(x_i)$. As explained in Section 3, if $f(x_i)$ is close to this value, then the model is well calibrated. We consider two approaches for estimating $p^{i}_{\mathrm{emp}}$: (1) CaPE (bin), where we divide the training set into bins, select the bin $b_i$ containing $f(x_i)$, and set $p^{i}_{\mathrm{emp}} = p^{(b_i)}_{\mathrm{emp}}$ as in equation 3; (2) CaPE (kernel), where $p^{i}_{\mathrm{emp}}$ is estimated through a moving average with a kernel function (see Appendix E for more details). Both methods can be computed efficiently by sorting the predictions $\hat{p}_i$. The calibration loss requires a reasonable estimate of the empirical probabilities $p^{i}_{\mathrm{emp}}$, which can be obtained from the model after early learning. Therefore, using the calibration loss from the beginning is counterproductive, as demonstrated in Section J.

Algorithm 1: Pseudocode for CaPE
Require: f
▷ early-stopped model
Require: m ▷ frequency of training with L_C
Require: {x_i, y_i}_{i=1}^N ▷ training set
Require: K(p, q) := exp[-(p - q)^2 / σ^2] ▷ Gaussian kernel
for t = 1 to num_epochs do
    if t mod m = 0 then
        p̂_i ← f(x_i), ∀i
        Update p_emp^i, ∀i, with BIN or KERNEL
        L ← L_C ▷ compute calibration loss
    else
        L ← L_D ▷ compute discrimination loss
    end if
    f ← backprop with L ▷ train with loss
end for

function BIN(B) ▷ B: number of bins
    I_1, ..., I_B ← partition by quantiles of {p̂_j}_{j=1}^N
    Find b such that p̂_i ∈ I_b
    Index(I_b) ← {j | p̂_j ∈ I_b} ▷ get indices in bin b
    p_emp^i ← (1/|I_b|) Σ_{j ∈ Index(I_b)} y_j ▷ empirical mean of bin b
end function

function KERNEL(r, K) ▷ r: window size; K: kernel
    N_r(i) ← r nearest neighbors of p̂_i (in output-probability space)
    Z ← Σ_{p̂_j ∈ N_r(i)} K(p̂_i, p̂_j) ▷ normalization factor
    p_emp^i ← Σ_{p̂_j ∈ N_r(i)} K(p̂_i, p̂_j) y_j / Z ▷ kernel smoothing
end function

(Note: the "compute calibration loss" / "compute discrimination loss" comments were swapped in the original listing; L_C is the calibration loss and L_D the discrimination loss.) CaPE is summarized in Algorithm 1. Figures 4 and 5 show that incorporating the calibration-loss minimization step indeed preserves calibration as training proceeds (this is not necessarily expected, because CaPE minimizes a calibration loss on the training data), and prevents the model from overfitting the observed outputs. This is also beneficial for the discriminative ability of the model, because it enables the model to further reduce the cross-entropy loss without overfitting, as shown in Figure 5. The experiments with synthetic and real-world data reported in Section 6 suggest that this approach results in accurate probability estimates across a variety of realistic scenarios. | This work tackles the problem of probability estimation. Current machine learning models do not fully reflect the uncertainty of the outcome but only of the model. The authors propose a loss that enforces both calibration and discrimination.
Additionally, they present a semi-synthetic dataset for further study of this problem: from an image dataset of faces with associated age, they estimate the risk of developing a given disease. | SP:73446db7aae072a0eb1fc2e847d27c4e9419c775 |
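As a quick illustration of the calibration metrics in the paper's Section 3, the sketch below computes ECE with equal-mass (quantile) bins, as in Eq. 5, and the Brier score, as in Eq. 6. This is an illustrative numpy reimplementation, not the authors' code, and it glosses over tie-handling in the quantile binning.

```python
import numpy as np

def ece_quantile(p_hat, y, num_bins=10):
    """ECE with equal-mass bins: average |empirical frequency - mean prediction|
    over bins formed by splitting the sorted predictions into equal parts."""
    order = np.argsort(p_hat)
    bins_p = np.array_split(p_hat[order], num_bins)
    bins_y = np.array_split(y[order], num_bins)
    errs = [abs(by.mean() - bp.mean()) for bp, by in zip(bins_p, bins_y)]
    return float(np.mean(errs))

def brier(p_hat, y):
    """Brier score: mean squared error between predictions and binary outcomes."""
    return float(np.mean((p_hat - y) ** 2))

# A perfectly calibrated toy predictor: within each bin, the empirical
# outcome frequency matches the mean predicted probability.
p = np.array([0.0, 0.0, 0.5, 0.5, 1.0, 1.0])
y = np.array([0,   0,   0,   1,   1,   1])
print(ece_quantile(p, y, num_bins=3))  # 0.0
print(brier(p, y))                     # 1/12, approx. 0.0833
```

Note that the Brier score is nonzero even for this perfectly calibrated predictor, reflecting the irreducible outcome uncertainty at p = 0.5; ECE alone would not distinguish it from a more discriminative model.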
Deep Probability Estimation | 1 INTRODUCTION . We consider the problem of building models that answer questions such as : Will it rain ? Will a patient survive ? Will a car collide with another vehicle ? Due to the inherently-uncertain nature of these real-world phenomena , this requires performing probability estimation , i.e . estimating the probability of each possible outcome of the phenomenon of interest . Models for probability prediction must be trained on observed outcomes ( e.g . whether it rained , a patient died , or a collision occurred ) , because the ground-truth probabilities are unknown . The problem is therefore analogous to binary classification , with the important difference that the objective is to estimate probabilities rather than predicting specific outcomes . In probability estimation , two identical inputs ( e.g . histopathology images from cancer patients ) can potentially result in two different outcomes ( death vs. survival ) . In contrast , in classification the class label is usually completely determined by the data ( a picture either shows a cat or it does not ) . The goal of this work is to investigate probability estimation from high-dimensional data using deep neural networks . Deep networks trained for classification often generate probabilities , which quantify the uncertainty of the estimate ( i.e . how likely the network is to classify correctly ) . This quantification has been observed to be inaccurate , and several methods have been developed to improve it ( Platt , 1999 ; Guo et al. , 2017 ; Szegedy et al. , 2016 ; Zhang et al. , 2020 ; Thulasidasan et al. , 2020 ; Mukhoti et al. , 2020 ; Thagaard et al. , 2020 ) , including Bayesian neural networks ( Gal & Ghahramani , 2016 ; Wang et al. , 2016 ; Shekhovtsov & Flach , 2019 ; Postels et al. , 2019 ) . However , these works restrict their attention almost exclusively to classification in datasets ( e.g . CIFAR10/100 Krizhevsky ( 2009 ) , or ImageNet ( Deng et al. 
, 2009 ) ) where the label is not uncertain , and therefore the uncertainty is completely tied to the model : it quantifies the confidence of the model in its own prediction , not the probability of an event of interest . Probability estimation from high-dimensional data is crucial in medical prognostics ( Wulczyn et al. , 2020 ) , weather prediction ( Agrawal et al. , 2019 ) , and autonomous driving ( Kim et al. , 2019 ) . In order to advance deep-learning methodology for probability estimation it is crucial to build appropriate benchmark datasets . Here we build a synthetic dataset and gather three real-world datasets , which we use to systematically evaluate existing methodology . In addition , we propose a novel approach , which outperforms current state-of-the-art methods . Our contributions are the following : • We introduce a new synthetic dataset for probability estimation where a population of people may have a certain disease connected to age . The task is to predict the probability that they contract the disease from a picture of their face . The data are generated based on the UTKFaces dataset ( Zhang et al. , 2017a ) , which contains age information . The dataset contains multiple versions of the synthetic labels , which are generated according to different distributions designed to mimic realworld probability-prediction datasets . The dataset serves two objectives . First , it allows us to evaluate existing methodology . Second , it enables us to evaluate different metrics in a controlled scenario where we have access to ground-truth probabilities . • We have used publicly available data to build probability-estimation benchmark datasets for three real-world applications : ( 1 ) precipitation forecasting from radar images , ( 2 ) prediction of cancerpatient survival from histopathology images , and ( 3 ) prediction of vehicle collisions from dashcam videos . 
We use these datasets to systematically evaluate existing approaches , which have been previously tested mainly on classification datasets . • We propose Calibrated Probability Estimation ( CaPE ) , a novel technique which modifies the training process so that output probabilities are consistent with empirical probabilities computed from the data . CaPE outperforms existing approaches on most metrics on synthetic and real-world data . 2 PROBLEM FORMULATION : PROBABILITY ESTIMATION . The goal of probability estimation is to evaluate the likelihood of a certain event of interest , based on observed data . The available training data consists of n examples xi , 1 ≤ i ≤ n , each associated with a corresponding outcome yi . In our applications of interest , the input data are high dimensional : each xi corresponds to an image or a video . The corresponding label yi is either 0 or 1 depending on whether or not the event in question occurred . For example , in the cancer-survival application xi is a histopathology image of a patient , and yi equals 1 if the patient survived for 5 years after xi was collected . The data have inherent uncertainty : yi , the patient ’ s survival , does not depend deterministically on the histopathology image ( due e.g . to comorbidities and other health factors ) . Instead , we assume that yi equals 1 with a certain probability pi associated with xi , , as illustrated in Figure 1 , because the input data provides key information about the patient ’ s survival chances . At inference , a probability-estimation model aims to generate an estimate p̂ of the underlying probability p , associated with a new input data point x ( e.g . the probability of survival for over 5 years for new patients based on their histopathology data ) . To summarize , this is not a classification problem , because the labels are not completely predictable . 
Instead , the goal is to predict the probability of the outcome , which is critical in choosing a course of treatment for the patient . 3 EVALUATION METRICS . Probability estimation shares similar target labels and network outputs with binary classification . However , classification accuracy is not an appropriate metric for evaluating probability-estimation models due to the inherent uncertainty of the outcomes . This is illustrated by the example in Figure 2a where a perfect probability estimate would result in a classification accuracy of just 75 % .1 Metrics when ground-truth probabilities are available . For synthetic datasets , we have access to the ground truth probability labels and can use them to evaluate performance . Two reasonable metrics are the mean squared error or ` 2 distance MSEp , and the Kullback–Leibler divergence KLp between the estimated and ground-truth probabilities : MSEp = 1 N N∑ i=1 ( p̂i − pi ) 2 , and KLp = 1 N N∑ i=1 ( p̂i log ( p̂i pi ) + ( 1− p̂i ) log ( 1− p̂i 1− pi ) ) . ( 1 ) N is the number of data , and pi , p̂i are the ground-truth and predicted probabilities respectively . Calibration metrics . In real-world data , ground-truth probabilities are not available . In order to evaluate the probabilities estimated by a model , we need to compare them to the observed probabilities . To this end , we aggregate the examples for which the model output equals a certain value ( e.g . 0.5 ) , and verify what fraction of them have outcomes equal to 1 . If the fraction is close to the model output , then the model is said to be well calibrated . Definition 3.1 . A model f is well calibrated if P ( y = 1 | f ( x ) ∈ I ( q ) ) = q , ∀ 0 ≤ q ≤ 1 , ( 2 ) where y is the observed outcome , f ( x ) is the probability predicted by model f for input x , and I ( q ) is a small interval around q . Model calibration can be evaluated using the expected calibration error ( ECE ) ( Guo et al. , 2017 ) ( note however that the definition Guo et al . 
( 2017 ) is specific to classification ) . Given a probabilityestimation model f and a dataset of input data xi and associated outcomes yi , 1 ≤ i ≤ N , we partition the examples into B bins , I1I2 , · · · , IB , according to the probabilities assigned to the examples by the model . Let Q1 , . . . , QB−1 the B-quantiles of the set { f ( x1 ) , . . . , f ( xN ) } , we have Ib : = [ Qb−1 , Qb ] ∩ { f ( xi ) } Ni=1 ( setting Q0 = 0 ) . For each bin , we compute the mean predicted and empirical probabilities , p ( b ) emp = E ( y | f ( x ) ∈ Ib ) = 1 |Ib| ∑ i∈Index ( Ib ) yi , ( 3 ) q ( b ) = 1 |Ib| ∑ i∈Index ( Ib ) f ( xi ) , ( 4 ) 1A perfect model ( in terms of probability estimation ) , assigns 0.25 to the blue class and 0.75 to the red class . To maximize classification accuracy , we predict when the model outputs 0.75 ( red examples ) and 0 when it outputs 0.25 ( blue examples ) . However , 25 % of red examples have an outcome of 0 , and 25 % of blue examples have an outcome of 1 . As a result , the model would only have 75 % accuracy . where Index ( Ib ) = { i | f ( xi ) ∈ Ib } . The pairs ( q ( b ) , p ( b ) emp ) can be plotted as a reliability diagram , shown in the second row of Figure 4 and in Figure 6 . ECE is then defined as ECE = 1 B B∑ b=1 ∣∣∣p ( b ) emp − q ( b ) ∣∣∣ . ( 5 ) Other metrics for calibration include the maximum calibration error ( MCE ) defined as MCE = max b=1 , ... , B ∣∣∣p ( b ) emp − q ( b ) ∣∣∣ , and the Kolmogorov-Smirnov error ( KS-error ) ( Gupta et al. , 2021 ) , a metric based on the cumulative distribution function , which is described in more detail in Appendix B. Brier score . Crucially , a model without any discriminative power can be perfectly calibrated ( see Figure 2b ) . The Brier score is a metric designed to evaluate both calibration and discriminative ability . It is the mean squared error between the predicted probability and the observed outcomes : Brier = 1 N N∑ i=1 ( p̂i − yi ) 2 . 
( 6 ) This score can be decomposed into two terms associated to calibration and discrimination ability , as shown in Appendix C. Using the synthetic data in Section 6.1 , where the ground-truth probabilities are known , we show that Brier score is indeed a reliable proxy for gold-standard MSE metric based on ground-truth probabilities MSEp , in contrast to calibration metrics such as ECE , MCE or KSerror , and to classification metrics such as AUC ( see Figure 3 and Appendix D ) . 4 PROPOSED METHOD : CALIBRATED PROBABILITY ESTIMATION ( CAPE ) . Prediction models based on deep learning are typically trained by minimizing the cross entropy between the model output and the training labels ( Goodfellow et al. , 2016 ) . This cost function is a proper scoring rule , which means that it evaluates probability estimates in a consistent manner and is therefore guaranteed to be well calibrated in an infinite-data regime ( Buja et al. , 2005 ) , as illustrated by Figure 4 ( first column ) . Unfortunately , in practice prediction models are trained on finite data . This is crucial in the case of deep neural networks , because these models are highly overparametrized and therefore prone to overfitting ( Goodfellow et al. , 2016 ) . In classification , networks have been shown to be capable of fitting arbitrary random labels ( Zhang et al. , 2017a ) . In probability estimation , we observe that neural networks indeed eventually overfit the observed outcomes completely . Moreover , the estimated probabilities collapse to 0 or 1 ( Figure 4 , second column ) , a phenomenon that has also been reported in classification ( Mukhoti et al. , 2020 ) . However , calibration is preserved during the first stages of training ( Figure 4 , third column ) . This is reminiscent of the early-learning phenomenon observed for classification from partially corrupted labels ( Yao et al. , 2020 ; Xia et al. 
, 2020 ) , where neural networks learn from the correct labels before eventually overfitting the false ones ( Liu et al. , 2020 ) . Here , we propose to exploit the training dynamics of cross-entropy minimization through a method that we name Calibrated Probability Estimation ( CaPE ) . Our starting point is a model obtained via early stopping using validation data on the cross-entropy loss . CaPE is designed to further improve the discrimination ability of the model , while ensuring that it remains well calibrated . This is achieved by alternatively minimizing the following two loss functions : Discrimination loss : Cross entropy between the model output and the observed binary outcomes , LD = − N∑ i=1 [ yi log ( f ( xi ) ) + ( 1− yi ) log ( 1− f ( xi ) ) ] . Calibration loss : Cross entropy between the output probability of the model and the empirical probability of the outcomes conditioned on the model output : LC = − N∑ i=1 [ piemp log ( f ( xi ) ) + ( 1− piemp ) log ( 1− f ( xi ) ) ] , where piemp is an estimate of the conditional probability P [ y = 1|f ( x ) ∈ I ( f ( xi ) ) ] where I ( f ( xi ) ) is a small interval centered at f ( xi ) . As explained in Section 3 if f ( xi ) is close to this value , then the model is well calibrated . We consider two approaches for estimating piemp . ( 1 ) CaPE ( bin ) where we divide the training set into bins , select the bin bi containing f ( xi ) and set piemp = p ( bi ) emp in equation 3 . ( 2 ) CaPE ( kernel ) where piemp is estimated through a moving average with a kernel function ( see Appendix E for more details ) . Both methods are efficiently computed by sorting the predictions p̂i . The calibration loss requires a reasonable estimation of the empirical probabilities p ( i ) emp , which can be obtained from the model after early learning . Therefore using the calibration loss from the beginning is counterproductive , as demonstrated in Section J. Algorithm 1 Pseudocode for CaPE Require : f . 
early stopped model
Require: m ▷ frequency of training with L_C
Require: {(x_i, y_i)}_{i=1}^N ▷ training set
Require: K(p, q) := exp[ −(p − q)^2 / σ^2 ] ▷ Gaussian kernel
for t = 1 to num_epochs do
    if t mod m = 0 then
        p̂_i ← f(x_i), ∀i
        update p_emp^i, ∀i, with BIN or KERNEL
        L ← L_C ▷ compute calibration loss
    else
        L ← L_D ▷ compute discrimination loss
    end if
    f ← backprop with L ▷ train with loss
end for
function BIN(B) ▷ B: number of bins
    I_1, · · · , I_B ← partitions by quantile of {p̂_j}_{j=1}^N
    find b such that p̂_i ∈ I_b
    Index(I_b) ← { j | p̂_j ∈ I_b } ▷ get indices in bin b
    p_emp^i ← (1/|I_b|) Σ_{j ∈ Index(I_b)} y_j ▷ empirical mean of bin b
end function
function KERNEL(r, K) ▷ r: window size; K: kernel
    N_r(i) ← r nearest neighbours of p̂_i (in output probability space)
    Z ← Σ_{p̂_j ∈ N_r(i)} K(p̂_i, p̂_j) ▷ normalization factor
    p_emp^i ← Σ_{p̂_j ∈ N_r(i)} K(p̂_i, p̂_j) y_j / Z ▷ kernel smoothing
end function
CaPE is summarized in Algorithm 1. Figures 4 and 5 show that incorporating the calibration-loss minimization step indeed preserves calibration as training proceeds (this is not necessarily expected, because CaPE minimizes the calibration loss on the training data), and prevents the model from overfitting the observed outcomes. This is also beneficial for the discriminative ability of the model, because it enables it to further reduce the cross-entropy loss without overfitting, as shown in Figure 5. The experiments with synthetic and real-world data reported in Section 6 suggest that this approach results in accurate probability estimates across a variety of realistic scenarios. | The authors propose a method to ensure calibration of probability outputs during training. They do so by adding an explicit penalty to push the output probability values towards an empirical estimate of the actual probability for the current inferred probability. They compare the method to a number of baseline approaches, showing the benefits on both synthetic and real-world data sets.
| SP:73446db7aae072a0eb1fc2e847d27c4e9419c775 |
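As a concrete reference for the metrics discussed in the row above, here is a minimal NumPy sketch of quantile-binned ECE (Eq. 5) and the Brier score (Eq. 6). The function names, the default number of bins, and the tie-handling at bin edges are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def quantile_bins(probs, B):
    # Assign each prediction to one of B equal-mass (quantile) bins.
    edges = np.quantile(probs, np.linspace(0.0, 1.0, B + 1))
    edges[0], edges[-1] = 0.0, 1.0
    return np.clip(np.searchsorted(edges, probs, side="right") - 1, 0, B - 1)

def ece(probs, outcomes, B=10):
    # Expected calibration error: mean |empirical - predicted| over non-empty bins.
    bins = quantile_bins(probs, B)
    errs = []
    for b in range(B):
        mask = bins == b
        if mask.any():
            errs.append(abs(outcomes[mask].mean() - probs[mask].mean()))
    return float(np.mean(errs))

def brier(probs, outcomes):
    # Mean squared error between predicted probabilities and binary outcomes.
    return float(np.mean((probs - outcomes) ** 2))
```

Quantile bins (rather than equal-width bins) are used here to match the paper's definition of the partition via B-quantiles of the model outputs.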
Deep Probability Estimation | 1 INTRODUCTION . We consider the problem of building models that answer questions such as : Will it rain ? Will a patient survive ? Will a car collide with another vehicle ? Due to the inherently-uncertain nature of these real-world phenomena , this requires performing probability estimation , i.e . estimating the probability of each possible outcome of the phenomenon of interest . Models for probability prediction must be trained on observed outcomes ( e.g . whether it rained , a patient died , or a collision occurred ) , because the ground-truth probabilities are unknown . The problem is therefore analogous to binary classification , with the important difference that the objective is to estimate probabilities rather than predicting specific outcomes . In probability estimation , two identical inputs ( e.g . histopathology images from cancer patients ) can potentially result in two different outcomes ( death vs. survival ) . In contrast , in classification the class label is usually completely determined by the data ( a picture either shows a cat or it does not ) . The goal of this work is to investigate probability estimation from high-dimensional data using deep neural networks . Deep networks trained for classification often generate probabilities , which quantify the uncertainty of the estimate ( i.e . how likely the network is to classify correctly ) . This quantification has been observed to be inaccurate , and several methods have been developed to improve it ( Platt , 1999 ; Guo et al. , 2017 ; Szegedy et al. , 2016 ; Zhang et al. , 2020 ; Thulasidasan et al. , 2020 ; Mukhoti et al. , 2020 ; Thagaard et al. , 2020 ) , including Bayesian neural networks ( Gal & Ghahramani , 2016 ; Wang et al. , 2016 ; Shekhovtsov & Flach , 2019 ; Postels et al. , 2019 ) . However , these works restrict their attention almost exclusively to classification in datasets ( e.g . CIFAR10/100 Krizhevsky ( 2009 ) , or ImageNet ( Deng et al. 
, 2009 ) ) where the label is not uncertain , and therefore the uncertainty is completely tied to the model : it quantifies the confidence of the model in its own prediction , not the probability of an event of interest . Probability estimation from high-dimensional data is crucial in medical prognostics ( Wulczyn et al. , 2020 ) , weather prediction ( Agrawal et al. , 2019 ) , and autonomous driving ( Kim et al. , 2019 ) . In order to advance deep-learning methodology for probability estimation it is crucial to build appropriate benchmark datasets . Here we build a synthetic dataset and gather three real-world datasets , which we use to systematically evaluate existing methodology . In addition , we propose a novel approach , which outperforms current state-of-the-art methods . Our contributions are the following : • We introduce a new synthetic dataset for probability estimation where a population of people may have a certain disease connected to age . The task is to predict the probability that they contract the disease from a picture of their face . The data are generated based on the UTKFaces dataset ( Zhang et al. , 2017a ) , which contains age information . The dataset contains multiple versions of the synthetic labels , which are generated according to different distributions designed to mimic realworld probability-prediction datasets . The dataset serves two objectives . First , it allows us to evaluate existing methodology . Second , it enables us to evaluate different metrics in a controlled scenario where we have access to ground-truth probabilities . • We have used publicly available data to build probability-estimation benchmark datasets for three real-world applications : ( 1 ) precipitation forecasting from radar images , ( 2 ) prediction of cancerpatient survival from histopathology images , and ( 3 ) prediction of vehicle collisions from dashcam videos . 
We use these datasets to systematically evaluate existing approaches , which have been previously tested mainly on classification datasets . • We propose Calibrated Probability Estimation ( CaPE ) , a novel technique which modifies the training process so that output probabilities are consistent with empirical probabilities computed from the data . CaPE outperforms existing approaches on most metrics on synthetic and real-world data . 2 PROBLEM FORMULATION : PROBABILITY ESTIMATION . The goal of probability estimation is to evaluate the likelihood of a certain event of interest , based on observed data . The available training data consists of n examples xi , 1 ≤ i ≤ n , each associated with a corresponding outcome yi . In our applications of interest , the input data are high dimensional : each xi corresponds to an image or a video . The corresponding label yi is either 0 or 1 depending on whether or not the event in question occurred . For example , in the cancer-survival application xi is a histopathology image of a patient , and yi equals 1 if the patient survived for 5 years after xi was collected . The data have inherent uncertainty : yi , the patient ’ s survival , does not depend deterministically on the histopathology image ( due e.g . to comorbidities and other health factors ) . Instead , we assume that yi equals 1 with a certain probability pi associated with xi , , as illustrated in Figure 1 , because the input data provides key information about the patient ’ s survival chances . At inference , a probability-estimation model aims to generate an estimate p̂ of the underlying probability p , associated with a new input data point x ( e.g . the probability of survival for over 5 years for new patients based on their histopathology data ) . To summarize , this is not a classification problem , because the labels are not completely predictable . 
Instead , the goal is to predict the probability of the outcome , which is critical in choosing a course of treatment for the patient . 3 EVALUATION METRICS . Probability estimation shares similar target labels and network outputs with binary classification . However , classification accuracy is not an appropriate metric for evaluating probability-estimation models due to the inherent uncertainty of the outcomes . This is illustrated by the example in Figure 2a where a perfect probability estimate would result in a classification accuracy of just 75 % .1 Metrics when ground-truth probabilities are available . For synthetic datasets , we have access to the ground truth probability labels and can use them to evaluate performance . Two reasonable metrics are the mean squared error or ` 2 distance MSEp , and the Kullback–Leibler divergence KLp between the estimated and ground-truth probabilities : MSEp = 1 N N∑ i=1 ( p̂i − pi ) 2 , and KLp = 1 N N∑ i=1 ( p̂i log ( p̂i pi ) + ( 1− p̂i ) log ( 1− p̂i 1− pi ) ) . ( 1 ) N is the number of data , and pi , p̂i are the ground-truth and predicted probabilities respectively . Calibration metrics . In real-world data , ground-truth probabilities are not available . In order to evaluate the probabilities estimated by a model , we need to compare them to the observed probabilities . To this end , we aggregate the examples for which the model output equals a certain value ( e.g . 0.5 ) , and verify what fraction of them have outcomes equal to 1 . If the fraction is close to the model output , then the model is said to be well calibrated . Definition 3.1 . A model f is well calibrated if P ( y = 1 | f ( x ) ∈ I ( q ) ) = q , ∀ 0 ≤ q ≤ 1 , ( 2 ) where y is the observed outcome , f ( x ) is the probability predicted by model f for input x , and I ( q ) is a small interval around q . Model calibration can be evaluated using the expected calibration error ( ECE ) ( Guo et al. , 2017 ) ( note however that the definition Guo et al . 
( 2017 ) is specific to classification ) . Given a probability-estimation model f and a dataset of input data x_i and associated outcomes y_i, 1 ≤ i ≤ N, we partition the examples into B bins I_1, I_2, · · · , I_B according to the probabilities assigned to the examples by the model. Let Q_1, . . . , Q_{B−1} be the B-quantiles of the set { f(x_1), . . . , f(x_N) }; then I_b := [Q_{b−1}, Q_b] ∩ { f(x_i) }_{i=1}^N (setting Q_0 = 0). For each bin, we compute the mean empirical and predicted probabilities,
p_emp^{(b)} = E( y | f(x) ∈ I_b ) = (1/|I_b|) Σ_{i ∈ Index(I_b)} y_i , ( 3 )
q^{(b)} = (1/|I_b|) Σ_{i ∈ Index(I_b)} f(x_i) , ( 4 )
where Index(I_b) = { i | f(x_i) ∈ I_b }. The pairs ( q^{(b)}, p_emp^{(b)} ) can be plotted as a reliability diagram, shown in the second row of Figure 4 and in Figure 6. ECE is then defined as
ECE = (1/B) Σ_{b=1}^{B} | p_emp^{(b)} − q^{(b)} | . ( 5 )
Other metrics for calibration include the maximum calibration error (MCE), defined as MCE = max_{b=1,...,B} | p_emp^{(b)} − q^{(b)} |, and the Kolmogorov-Smirnov error (KS-error) (Gupta et al., 2021), a metric based on the cumulative distribution function, which is described in more detail in Appendix B. Brier score: Crucially, a model without any discriminative power can be perfectly calibrated (see Figure 2b). The Brier score is a metric designed to evaluate both calibration and discriminative ability. It is the mean squared error between the predicted probabilities and the observed outcomes:
Brier = (1/N) Σ_{i=1}^{N} ( p̂_i − y_i )^2 .
Footnote 1: A perfect model (in terms of probability estimation) assigns 0.25 to the blue class and 0.75 to the red class. To maximize classification accuracy, we predict 1 when the model outputs 0.75 (red examples) and 0 when it outputs 0.25 (blue examples). However, 25% of red examples have an outcome of 0, and 25% of blue examples have an outcome of 1. As a result, the model would only have 75% accuracy.
( 6 ) This score can be decomposed into two terms associated to calibration and discrimination ability , as shown in Appendix C. Using the synthetic data in Section 6.1 , where the ground-truth probabilities are known , we show that Brier score is indeed a reliable proxy for gold-standard MSE metric based on ground-truth probabilities MSEp , in contrast to calibration metrics such as ECE , MCE or KSerror , and to classification metrics such as AUC ( see Figure 3 and Appendix D ) . 4 PROPOSED METHOD : CALIBRATED PROBABILITY ESTIMATION ( CAPE ) . Prediction models based on deep learning are typically trained by minimizing the cross entropy between the model output and the training labels ( Goodfellow et al. , 2016 ) . This cost function is a proper scoring rule , which means that it evaluates probability estimates in a consistent manner and is therefore guaranteed to be well calibrated in an infinite-data regime ( Buja et al. , 2005 ) , as illustrated by Figure 4 ( first column ) . Unfortunately , in practice prediction models are trained on finite data . This is crucial in the case of deep neural networks , because these models are highly overparametrized and therefore prone to overfitting ( Goodfellow et al. , 2016 ) . In classification , networks have been shown to be capable of fitting arbitrary random labels ( Zhang et al. , 2017a ) . In probability estimation , we observe that neural networks indeed eventually overfit the observed outcomes completely . Moreover , the estimated probabilities collapse to 0 or 1 ( Figure 4 , second column ) , a phenomenon that has also been reported in classification ( Mukhoti et al. , 2020 ) . However , calibration is preserved during the first stages of training ( Figure 4 , third column ) . This is reminiscent of the early-learning phenomenon observed for classification from partially corrupted labels ( Yao et al. , 2020 ; Xia et al. 
, 2020 ) , where neural networks learn from the correct labels before eventually overfitting the false ones ( Liu et al. , 2020 ) . Here , we propose to exploit the training dynamics of cross-entropy minimization through a method that we name Calibrated Probability Estimation ( CaPE ) . Our starting point is a model obtained via early stopping using validation data on the cross-entropy loss . CaPE is designed to further improve the discrimination ability of the model , while ensuring that it remains well calibrated . This is achieved by alternatively minimizing the following two loss functions : Discrimination loss : Cross entropy between the model output and the observed binary outcomes , LD = − N∑ i=1 [ yi log ( f ( xi ) ) + ( 1− yi ) log ( 1− f ( xi ) ) ] . Calibration loss : Cross entropy between the output probability of the model and the empirical probability of the outcomes conditioned on the model output : LC = − N∑ i=1 [ piemp log ( f ( xi ) ) + ( 1− piemp ) log ( 1− f ( xi ) ) ] , where piemp is an estimate of the conditional probability P [ y = 1|f ( x ) ∈ I ( f ( xi ) ) ] where I ( f ( xi ) ) is a small interval centered at f ( xi ) . As explained in Section 3 if f ( xi ) is close to this value , then the model is well calibrated . We consider two approaches for estimating piemp . ( 1 ) CaPE ( bin ) where we divide the training set into bins , select the bin bi containing f ( xi ) and set piemp = p ( bi ) emp in equation 3 . ( 2 ) CaPE ( kernel ) where piemp is estimated through a moving average with a kernel function ( see Appendix E for more details ) . Both methods are efficiently computed by sorting the predictions p̂i . The calibration loss requires a reasonable estimation of the empirical probabilities p ( i ) emp , which can be obtained from the model after early learning . Therefore using the calibration loss from the beginning is counterproductive , as demonstrated in Section J. Algorithm 1 Pseudocode for CaPE Require : f . 
early stopped model
Require: m ▷ frequency of training with L_C
Require: {(x_i, y_i)}_{i=1}^N ▷ training set
Require: K(p, q) := exp[ −(p − q)^2 / σ^2 ] ▷ Gaussian kernel
for t = 1 to num_epochs do
    if t mod m = 0 then
        p̂_i ← f(x_i), ∀i
        update p_emp^i, ∀i, with BIN or KERNEL
        L ← L_C ▷ compute calibration loss
    else
        L ← L_D ▷ compute discrimination loss
    end if
    f ← backprop with L ▷ train with loss
end for
function BIN(B) ▷ B: number of bins
    I_1, · · · , I_B ← partitions by quantile of {p̂_j}_{j=1}^N
    find b such that p̂_i ∈ I_b
    Index(I_b) ← { j | p̂_j ∈ I_b } ▷ get indices in bin b
    p_emp^i ← (1/|I_b|) Σ_{j ∈ Index(I_b)} y_j ▷ empirical mean of bin b
end function
function KERNEL(r, K) ▷ r: window size; K: kernel
    N_r(i) ← r nearest neighbours of p̂_i (in output probability space)
    Z ← Σ_{p̂_j ∈ N_r(i)} K(p̂_i, p̂_j) ▷ normalization factor
    p_emp^i ← Σ_{p̂_j ∈ N_r(i)} K(p̂_i, p̂_j) y_j / Z ▷ kernel smoothing
end function
CaPE is summarized in Algorithm 1. Figures 4 and 5 show that incorporating the calibration-loss minimization step indeed preserves calibration as training proceeds (this is not necessarily expected, because CaPE minimizes the calibration loss on the training data), and prevents the model from overfitting the observed outcomes. This is also beneficial for the discriminative ability of the model, because it enables it to further reduce the cross-entropy loss without overfitting, as shown in Figure 5. The experiments with synthetic and real-world data reported in Section 6 suggest that this approach results in accurate probability estimates across a variety of realistic scenarios. | The paper studies the problem of uncertainty quantification in deep neural nets. It introduces a concept called "probability estimation" and an uncertainty calibration method called "CaPE" based on this new concept. It studies uncertainty calibration on a few new datasets, such as histopathology cancer diagnostics. | SP:73446db7aae072a0eb1fc2e847d27c4e9419c775 |
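The alternating scheme of CaPE's Algorithm 1 (described in the row above) can be sketched with a one-parameter logistic model standing in for the deep network. The bin-based estimate of p_emp follows the quantile-binning idea of Eq. (3); the logistic "network", learning rate, epoch count, and bin count are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_emp_bins(p_hat, y, B=5):
    # Per-example empirical probability: mean outcome of the quantile bin
    # that contains the example's current prediction (CaPE "bin" variant).
    order = np.argsort(p_hat)
    p_emp = np.empty_like(p_hat)
    for chunk in np.array_split(order, B):
        p_emp[chunk] = y[chunk].mean()
    return p_emp

def cape_train(x, y, epochs=200, m=2, lr=0.5, B=5):
    # Minimal sketch of CaPE's alternation on a 1-D logistic model f(x) = sigmoid(w*x + b).
    w, b = 0.0, 0.0
    for t in range(1, epochs + 1):
        p_hat = sigmoid(w * x + b)
        # Every m-th step train against the empirical-probability targets
        # (calibration loss L_C); otherwise against the labels (L_D).
        target = p_emp_bins(p_hat, y, B) if t % m == 0 else y
        # Gradient of cross-entropy with (soft) targets w.r.t. (w, b).
        g = p_hat - target
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b
```

In the real method the starting point is an early-stopped network, so the first p_emp estimates are already meaningful; here the toy model is simply trained from scratch to show the control flow.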
Bootstrapping Semantic Segmentation with Regional Contrast | 1 INTRODUCTION . Semantic segmentation is an essential part of applications such as scene understanding and autonomous driving , whose goal is to assign a semantic label to each pixel in an image . Significant progress has been achieved by use of large datasets with high quality human annotations . However , labelling images with pixel-level accuracy is time consuming and expensive ; for example , labelling a single image in CityScapes can take more than 90 minutes ( Cordts et al. , 2016 ) . When deploying semantic segmentation models in practical applications where only limited labelled data are available , high quality ground-truth annotation is a significant bottleneck . To reduce the need for labelled data , there is a recent surge of interest in leveraging unlabelled data for semi-supervised learning . Previous methods include improving segmentation models via adversarial learning ( Hung et al. , 2019 ; Mittal et al. , 2019 ) and self-training ( Zou et al. , 2019 ; 2018 ; Zhu et al. , 2020 ) . Others focus on designing advanced data augmentation strategies to generate pseudo image-annotation pairs from unlabelled images ( Olsson et al. , 2021 ; French et al. , 2020 ) . In both semi-supervised and supervised learning , a segmentation model often predicts smooth label maps , because neighbouring pixels are usually of the same class , and rarer high-frequency regions are typically only found in object boundaries . This learning bias produces blurry contours and regularly mis-labels rare objects . After carefully examining the label predictions , we further observe that wrongly labelled pixels are typically confused with very few other classes ; e.g . a pixel labelled as rider has a much higher chance of being wrongly classified as person , compared to train or bus . 
By understanding this class structure , learning can be actively focused on the challenging pixels to improve overall segmentation quality . Here we propose ReCo , a contrastive learning framework designed at a regional level . Specifically , ReCo is a new loss function which helps semantic segmentation not only to learn from local context ( neighbouring pixels ) , but also from global semantic class relationships across the entire dataset . ReCo performs contrastive learning on a pixel-level dense representation , as visualised in Fig . 1 . For each semantic class in a mini-batch , ReCo samples a set of pixel-level representations ( queries ) , and encourages them to be close to the class mean representation ( positive keys ) , and simultaneously pushes them away from representations sampled from other classes ( negative keys ) . For pixel-level contrastive learning with high-resolution images , it is impractical to sample all pixels . In ReCo , we actively sample a sparse set of queries and keys , consisting of less than 5 % of all available pixels . We sample negative keys from a learned distribution based on the relative distance between the mean representation of each negative key and the query class . This distribution can be interpreted as a pairwise semantic class relationship , dynamically updated during training . We sample queries for those having a low prediction confidence . Active sampling helps ReCo to rapidly focus on the most confusing pixels for each semantic class , and requires minimal additional memory . ReCo enables a high-accuracy segmentation model to be trained with very few human annotations . 
We evaluate ReCo in a semi-supervised setting , with two different modes : i ) Partial Dataset Full Labels — a sparse subset of training images , where each image has full ground-truth labels , and the remaining images are unlabelled ; ii ) Partial Labels Full Dataset — all images have some labels , but covering only a sparse subset of pixels within each image . In both settings , we show that ReCo can consistently improve performance across all methods and datasets . 2 RELATED WORK . Semantic Segmentation One recent direction is in designing more effective deep convolutional neural networks . Fully convolutional networks ( FCNs ) ( Long et al. , 2015 ) are the foundation of modern segmentation network design . They were later improved with dilated/atrous convolutions with larger receptive fields , capturing more long range information ( Chen et al. , 2017 ; 2018 ) . Alternative approaches include encoder-decoder architectures ( Ronneberger et al. , 2015 ; Kirillov et al. , 2019 ) , sometimes using skip connections ( Ronneberger et al. , 2015 ) to refine filtered details . A parallel direction is to improve optimisation strategies , by designing loss functions that better respect class imbalance ( Lin et al. , 2017 ) or using rendering strategy to refine uncertain pixels from high-frequency regions improving the label quality ( Kirillov et al. , 2020 ) . ReCo is built upon this line of research , to improve segmentation by providing additional supervision on hard pixels . Semi-supervised Classification and Segmentation The goal of semi-supervised learning is to improve model performance by taking advantage of a large amount of unlabelled data during training . Here consistency regularisation and entropy minimisation are two common strategies . The intuition is that the network ’ s output should be invariant to data perturbation and geometric transformation . 
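The consistency-regularisation strategy mentioned above can be sketched for a binary output: confident predictions on one view of an unlabelled image serve as pseudo-labels for a perturbed view. This is a generic sketch of the idea, not any cited method's code; the confidence threshold and function signature are assumptions.

```python
import numpy as np

def consistency_loss(p_weak, p_strong, threshold=0.95):
    # Predictions on a weakly augmented view supervise a strongly augmented
    # view; only pixels predicted with high confidence contribute.
    conf = np.maximum(p_weak, 1.0 - p_weak)   # binary-class confidence
    pseudo = (p_weak > 0.5).astype(float)     # hard pseudo-labels
    mask = conf >= threshold
    if not mask.any():
        return 0.0
    eps = 1e-12
    ce = -(pseudo * np.log(p_strong + eps) + (1 - pseudo) * np.log(1 - p_strong + eps))
    return float(ce[mask].mean())
```

Masking out low-confidence pixels is what makes the loss a consistency regulariser rather than plain self-training on noisy predictions.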
Based on these strategies, many semi-supervised methods have been developed for image classification (Sohn et al., 2020; Tarvainen & Valpola, 2017; Berthelot et al., 2019; Kuo et al., 2020). However, for segmentation, generating effective pseudo-labels and designing suitable data augmentations are non-trivial. Some solutions improved the quality of pseudo-labelling, using adversarial learning (Hung et al., 2019; Mittal et al., 2019) or enforcing consistency across differently augmented images (French et al., 2020; Olsson et al., 2021). In this work, we show that we can improve the performance of current semi-supervised segmentation methods by jointly training with a suitable auxiliary task. Contrastive Learning: Contrastive learning learns a similarity function to bring views of the same data closer in representation space, whilst pushing views of different data apart. Most recent contrastive frameworks learn similarity scores based on global representations of the views, parameterising data with a single vector (He et al., 2020; Chen et al., 2020; Khosla et al., 2020). Dense representations, on the other hand, rely on pixel-level representations and naturally provide additional supervision, capturing fine-grained pixel correspondence. Contrastive pre-training based on dense representations has recently been explored, and shows better performance in dense prediction tasks such as object detection and keypoint detection (Wang et al., 2021b; O. Pinheiro et al., 2020). Contrastive Learning for Semantic Segmentation: Contrastive learning has recently been studied to improve semantic segmentation, with a number of different design strategies. Zhang et al. (2021) and Zhao et al. (2021) both perform contrastive learning via pre-training, based on generated auxiliary labels and ground-truth labels respectively, but at the cost of huge memory consumption.
In contrast, ours performs contrastive learning whilst requiring much less memory, via active sampling. In concurrent work, (Wang et al., 2021a; Alonso et al., 2021) also perform contrastive learning with active sampling. However, whilst both these methods are applied to a stored feature bank, ours focuses on sampling features on-the-fly. Active sampling in Alonso et al. (2021) is further based on learnable, class-specific attention modules, whilst ours only samples features based on relation graphs and prediction confidence, without introducing any additional computation overhead, which results in a simpler and much more memory-efficient implementation. 3 RECO – REGIONAL CONTRAST. 3.1 PIXEL-LEVEL CONTRASTIVE LEARNING. Let (X, Y) be a training dataset with training images x ∈ X and their corresponding C-class pixel-level segmentation labels y ∈ Y, where y can either be provided in the original dataset, or generated automatically as pseudo-labels. A segmentation network f is then optimised to learn a mapping f_θ : X → Y, parameterised by network parameters θ. This segmentation network f can be decomposed into two parts: an encoder network φ : X → Z, and a decoder classification head ψ_c : Z → Y. To perform pixel-level contrastive learning, we additionally attach a decoder representation head ψ_r on top of the encoder network φ, parallel to the classification head, mapping the encoded feature into a higher m-dimensional dense representation with the same spatial resolution as the input image: ψ_r : Z → R, R ∈ R^m. This representation head is only applied during training, to guide the classifier using the ReCo loss as an auxiliary task, and is removed during inference. A pixel-level contrastive loss is a function which encourages queries r_q to be similar to the positive key r_k^+, and dissimilar to the negative keys r_k^−. All queries and keys are sampled from the decoder representation head: r_q, r_k^+, r_k^− ∈ R.
In ReCo, we use a pixel-level contrastive loss across all available semantic classes in each mini-batch, with the distance between keys and queries measured by their normalised dot product. The general formulation of the ReCo loss L_reco is then defined as:
L_reco = Σ_{c∈C} Σ_{r_q ∼ R_q^c} −log [ exp(r_q · r_k^{c,+} / τ) / ( exp(r_q · r_k^{c,+} / τ) + Σ_{r_k^− ∼ R_k^c} exp(r_q · r_k^− / τ) ) ] , ( 1 )
for which C is the set containing all available classes in the current mini-batch, τ is the temperature controlling the softness of the distribution, R_q^c represents a query set containing all representations whose labels belong to class c, R_k^c represents a negative key set containing all representations whose labels do not belong to class c, and r_k^{c,+} represents the positive key, which is the mean representation of class c. Suppose P is a set containing all pixel coordinates with the same resolution as R; these queries and keys are then defined as:
R_q^c = ⋃_{[u,v]∈P} 1( y[u,v] = c ) r[u,v] , R_k^c = ⋃_{[u,v]∈P} 1( y[u,v] ≠ c ) r[u,v] , r_k^{c,+} = (1/|R_q^c|) Σ_{r_q∈R_q^c} r_q . ( 2 )
3.2 ACTIVE HARD SAMPLING ON QUERIES AND KEYS. Contrastive learning on all pixels in high-resolution images would be computationally expensive. Here, we introduce active hard sampling strategies to optimise only a sparse set of queries and keys. Active Key Sampling: When classifying a pixel, a semantic network might be uncertain only over a very small number of candidates among all available classes. The uncertainty over these candidates typically comes from a close spatial (e.g. rider and bicycle) or semantic (e.g. horse and cow) relationship. To reduce this uncertainty, we propose to sample negative keys non-uniformly, based on the relative distance between each negative key class and the query class. This involves building a pair-wise class relationship graph G, with G ∈ R^{|C|×|C|}, computed and dynamically updated for each mini-batch.
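Restricted to a single class c, the ReCo loss of Eq. (1) is an InfoNCE-style computation over sampled pixel representations. The sketch below assumes L2-normalised representations and an illustrative temperature; it is not the authors' implementation.

```python
import numpy as np

def reco_loss(queries, pos_key, neg_keys, tau=0.5):
    # Pixel-level contrastive loss for one class c (Eq. 1): each query is
    # pulled towards the class-mean positive key and pushed away from the
    # sampled negative keys.
    pos = queries @ pos_key / tau                  # (Q,) similarity to positive
    neg = queries @ neg_keys.T / tau               # (Q, K) similarities to negatives
    logits = np.concatenate([pos[:, None], neg], axis=1)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_softmax[:, 0].mean())        # positive key is column 0
```

Summing this quantity over all classes present in the mini-batch, with the positive key taken as the mean query representation of the class (Eq. 2), recovers the full L_reco.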
This pair-wise relationship is measured by the normalised dot product between the mean representations of a pair of classes, and is defined as:
G[p, q] = r_k^{p,+} · r_k^{q,+} , ∀ p, q ∈ C, p ≠ q . ( 3 )
We further apply a SoftMax to normalise these pair-wise relationships over all negative classes j for each query class c, which produces a probability distribution: exp(G[c, i]) / Σ_{j∈C, j≠c} exp(G[c, j]). We sample negative keys for each class i based on this distribution, to learn the corresponding query class c. This procedure allocates more samples to hard, confusing classes chosen specifically for each query class, helping the segmentation network to learn a more accurate decision boundary. Active Query Sampling: Due to the natural class imbalance in semantic segmentation, it is easy to over-fit on common classes, such as the road and building classes in the CityScapes dataset, or the background class in the Pascal VOC dataset. These common classes contribute the majority of pixel space in training images, so randomly sampling queries would under-sample rare classes and provide minimal supervision to them. Therefore, we instead sample hard queries: those whose corresponding pixel prediction confidence is below a defined threshold. Accordingly, ReCo's loss would then guide the segmentation network by providing appropriate supervision on these less certain pixels. The easy and hard queries are defined as follows, and visualised in Fig. 2:
R_q^{c,easy} = ⋃_{r_q∈R_q^c} 1( ŷ_q > δ_s ) r_q , R_q^{c,hard} = ⋃_{r_q∈R_q^c} 1( ŷ_q ≤ δ_s ) r_q , ( 4 )
where ŷ_q is the predicted confidence of label c after the SoftMax operation, corresponding to the same pixel location as r_q, and δ_s is the user-defined confidence threshold. | This paper proposes ReCo, a regional contrastive learning method for semi-supervised semantic segmentation. The query and key pixel sampling methods are proposed for efficient learning.
The proposed method showed state-of-the-art level performances in various settings and various datasets. | SP:0f708eb86de2495c91df25d36a9dec0cb03c8c62 |
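The two active sampling rules in the row above can be sketched directly from Eqs. (3) and (4): a softmax over pairwise class-mean similarities gives the negative-key sampling distribution, and a confidence threshold selects hard queries. Function names and the default threshold value are illustrative assumptions.

```python
import numpy as np

def negative_class_distribution(class_means, c):
    # Pair-wise relation graph G (Eq. 3) restricted to query class c,
    # softmax-normalised over the other classes: classes confusable with c
    # receive proportionally more negative-key samples.
    sims = class_means @ class_means[c]   # dot products with class c's mean
    sims = np.delete(sims, c)             # drop the query class itself
    e = np.exp(sims - sims.max())
    return e / e.sum()

def hard_query_mask(confidence, delta_s=0.97):
    # Hard queries (Eq. 4): pixels whose predicted confidence for their
    # own class is at most the threshold delta_s.
    return confidence <= delta_s
```

Sampling negatives from this distribution and queries from the hard mask keeps the contrastive term focused on the confusing class pairs and uncertain pixels, using only a sparse subset of the dense representation map.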